Recently Scott Soames wrote two books on the history of philosophy from 1900 to 1970. Richard Rorty’s review of these books in the LRB has attracted quite a bit of attention among philosophers. A reply by Soames has been printed, but apparently it was cut down quite a bit for space reasons. So a full version of Soames’s reply (warning: PDF) has been put on the web. I expected I’d be rather sympathetic to Soames’s side of this debate, but actually I thought Rorty got in some surprisingly good points, the most central of which were about my primary area of research, vagueness.
Rorty starts his review with what Soames calls an “amusing fable”.
‘I had hoped my department would hire somebody in the history of philosophy,’ my friend lamented, ‘but my colleagues decided that we needed somebody who was contributing to the literature on vagueness.’
‘The literature on what?’ I asked.
‘Dick,’ he replied, exasperated, ‘you’re really out of it. You don’t realise: vagueness is huge.’
I don’t know whether that is true or not. I don’t know anyone who got hired to work on vagueness as such (except at St Andrews where they have a vagueness research project) but it could be true. Both Rorty and Soames agree that in the (near) future vagueness will be a hot topic. And at first it looks like Rorty thinks this is a bad thing. Here’s his initial characterisation of the issue.
To see what philosophy may look like in the future, consider the problem that gave rise to the huge literature on vagueness: the paradox of the heap. Soames formulates it as follows: ‘If one has something that is not a heap of sand, and one adds a single grain of sand to it, the result is still not a heap of sand . . . if n grains of sand are not sufficient to make a heap then n+1 grains aren’t either.’ So it seems that ‘no matter how many grains of sand may be gathered together, they are not sufficient to make a heap of sand.’
Some philosophers, such as Crispin Wright, respond to this paradox in the spirit of Wittgenstein. They argue that (as Soames puts it) ‘the rules governing ordinary vague predicates simply do not allow for sharp and precise lines dividing objects to which the predicates apply from objects of any other sort.’ Others, such as Timothy Williamson, hold (again in Soames’s words) that ‘vague predicates are in fact perfectly precise – in the sense that there are sharp and precise lines dividing objects to which they truly apply from objects to which they truly do not – but it is impossible for us ever to know where these lines are.’
An educational administrator (a dean in the US, a pro-vice-chancellor in Britain), asked to ratify the appointment of someone who has produced a brilliant new theory of heaps, might be tempted to ask whether this sort of thing is really philosophy.
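For readers who like the reasoning spelled out, Soames’s formulation can be regimented as a simple induction (this gloss is mine, not Soames’s or Rorty’s):

```latex
% The sorites as an induction; H(n) abbreviates "n grains make a heap".
% This regimentation is my gloss, not Soames's own formalisation.
\begin{align*}
  &\neg H(1) &&\text{(one grain is not a heap)}\\
  &\forall n\,\bigl(\neg H(n)\rightarrow\neg H(n+1)\bigr) &&\text{(the tolerance premise)}\\
  \therefore\ &\forall n\,\neg H(n) &&\text{(by induction)}
\end{align*}
```

The theories of vagueness discussed below differ over what to say about the tolerance premise: whether it is false but unknowably so, or whether the rules of the language simply fail to settle it.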
Soames seems to take this to be a sign that Rorty has stopped corresponding with philosophical reality.
Richard Rorty (LRB, 20 January) begins his provocative discussion of my two volumes on analytic philosophy with an amusing fable about a department chair facing the daunting task of convincing the Dean to hire a philosopher working on vagueness. Not to worry. As Rorty himself ends up implicitly acknowledging, the imagined job candidate is not striving to become an expert on the composition of heaps, with the idea of solving practical problems perplexing civil engineers, but of illuminating the rules of our logic and language, and their application to the world. This enterprise is one of several in which analytic philosophers are forging ahead by replacing Rorty’s metaphorical question — Are the sentences we use to describe the world maps of an independent reality? — with more specific, nonmetaphorical questions on which real progress can be made. Rorty’s failure to register this is, I believe, connected to a broader failure of perspective arising from his disengagement from analytic philosophy in the last 25 years.
And if that’s all Rorty had said, that would have been a perfectly good reply. But Rorty has a more nuanced take on the vagueness debates than you might think from reading the above passages.
The controversy between realists, who think that the notion of truth as correspondence to reality can be saved, and pragmatists, who regard it as hopeless, seems to me much more fruitful than the question of whether ‘Water is H2O’ is a necessary truth. The debate about the utility of the ‘map’ metaphor has been going on for a long time now, and shows no signs of abating. It seethes beneath the surface of discussions of many seemingly unrelated questions. One such question is the nature of vague predicates. Timothy Williamson ends his much discussed book Vagueness with arguments against the ‘nominalist’ suggestion that ‘properties, relations and states of affairs are mere projections onto the world of our forms of speech,’ and concludes that ‘our contact with the world is as direct in vague thought as it is in any thought.’ Crispin Wright takes up the topic of vagueness not because he cares deeply about how many grains it takes to make a heap but because doing so helps him formulate a view about the extent to which mastering a language can be treated as a matter of obedience to semantical rules – rules about how to line words up with things. It is an underlying concern with the question of whether and how language gets in touch with the world that has made vagueness a hot topic. Perceived relevance to such larger questions enables philosophers who specialise in heaps to shrug off the suggestion that they are trivialising a discipline that once had considerable cultural importance (and, in some countries, still does).
It seems to me that there is much in that passage that is correct, and which someone who hadn’t been following the relatively recent literature on vagueness (at least the post-1994 literature, when Vagueness was published) relatively closely couldn’t have picked up on. It’s not true that everyone who writes on vagueness thinks that it is relevant to debates about the correspondence metaphor (I’m not sure that any of Cornell’s many vagueness-people think that it is relevant, for example) but it is in part how Williamson and Wright view their disagreement. Williamson’s theory of vagueness is an attempt to show how realism can be defended even on its toughest ground. If he’s right then, as Rorty realises, the rout is on. And looking at Williamson’s larger body of work, not to mention his comments in his vagueness work, this isn’t some unintended side-effect of someone working in a fundamentally different field; it’s one reason why Williamson is interested in vagueness.
So I think Rorty’s right on this point. One of the areas that is, as Soames acknowledges, a cutting-edge area of philosophy is an area where (contemporary versions of) debates about realism are being fought out. And, at least in Britain, those debates are informed by the successes and failures of realist and anti-realist proposals of days gone by. Score one for the claim that the correspondence debates (or the realism debates as I think of them) are central to philosophy.
Rorty’s main point here is to challenge Soames to find similar areas of philosophy where the debates he thinks are central, in particular debates about the epistemological significance of common sense and the distinctions between various modal and quasi-modal concepts, are just as important. Soames doesn’t even set himself that challenge in the book. I’m not convinced that it is a challenge that can’t be met. The contemporary debates about property dualism, for example, wouldn’t be possible in their current form without Kripke’s work. And they are important parts of philosophy. So I’m not sure Rorty ultimately wins this debate, but I do think he’s making points and that Soames should be responding to them.
There is a philosophical tradition, most prominently associated with Quine, that includes among its core commitments the following two claims: that our best scientific theory is our best guide to what exists, and that we are committed to the existence of whatever entities that theory quantifies over.
So it would be a little troubling if best scientific theory started quantifying over imaginary friends. But some say that’s what will happen. The Quineans will have to find some way to paraphrase away the imaginary friends without paraphrasing away the benefits, should the benefits be genuine.
To understand political power correctly and derive it from its proper source, we must consider what state all men are naturally in. It is a state in which men are perfectly free to order their actions, and dispose of their possessions and themselves, in any way they like, without asking anyone else’s permission - all this subject only to limits set by the law of nature. It is also a state of equality, in which no-one has more power and authority than anyone else; because it is simply obvious that creatures of the same species and status, all born to all the same advantages of nature and to the use of the same abilities, should also be equal ·in other ways·, with no-one being subjected to or subordinate to anyone else, unless ·God·, the lord and master of them all, were to declare clearly and explicitly his wish that some one person be raised above the others and given an undoubted right to dominion and sovereignty.
The latest of Jonathan Bennett’s renderings of the classics of early modern philosophy into modern English is now out on the web: the Second Treatise of Government. In my experience it is a work that students find especially opaque in the original, much as I love the archaic language. (Sceptics might be interested to read Bennett’s rationale for his project.)
I'm rather proud of a piece I've written about a new anthology of essays, Wittgenstein Reads Weininger, for Notre Dame Philosophical Reviews. (A nice online journal that just does short reviews. They just underwent a redesign. Now smoothly searchable.) I think I did a pretty solid job of covering this modest quadrant of scholarly specialization - this suburb of Wittgenstein's Vienna, if you will; while also providing some clear views of the city; and some sense of the strange bird who roosts and rules there - this fierce Austrian double-eagle, gripping Frege and Russell in one sharp beak! Schopenhauer, Kraus ... and Otto Weininger in the other! Who understands how such an ornithologico-philosophical thing could be? (As Wittgenstein once paraphrased Plato to one of his over-awed followers: 'I study not these things - e.g. logic - but myself, to learn whether I am a Typhon-like monster, or a simpler sort of creature.') And so I managed to turn a book review into a modestly original short essay. The editor very kindly let me ramble on twice as long as I was supposed to. But it's still quite short. [UPDATE: I think the word I was reaching for was 'long'.] The Kraus quote I stuck on at the end is one of my favorites.
I’ve been looking at some data from the Philosophical Gourmet Report, a well-known and widely-used reputational survey of philosophers in the U.S., Canada, the U.K. and Australasia. The survey asks philosophers to rate the overall reputations of graduate programs as well as their strength in various subfields. The ratings are endogenous, in the sense that the philosophers who produce them are members of the departments that are being rated. This gives the survey some interesting relational properties and allows for an analysis of the social structure of reputation in the field. I’ve written a working paper that analyses the data from this perspective. It’s still in pretty rough shape: there’s not much in the way of theory or a framing argument yet, and it’s way short on citations to the relevant literature. (So don’t be too harsh on it.)
I’m not sure whether philosophers will like the paper. On the one hand, they tend to think of themselves as sensible individuals guided by common-sense and rational argument. This makes them resist thinking of themselves in sociological terms, subject to the influences of context, social relations and role constraints. On the other hand, in my experience they have an insatiable capacity for gossip. Within the limits of the data, the paper addresses three questions:
The answers are, in short, “It depends on the field”, “Yes, but only sometimes, and then only for high-status specialists”, and “A great deal.” Some quick findings:
Over the fold are two visualizations of the field: the first is a blockmodel describing the relational structure of prestige amongst U.S. philosophy departments. The second is a segment plot showing the profile of departments across a range of different subfields. I think they’re both pretty cool, so read on.
A Blockmodel of Prestige
First, the blockmodel. You can get the full-size version of this graphic, or a higher-resolution PDF version, or the captioned version from the paper. This picture is a department by department matrix. Each colored cell represents the average “vote” by a department in the row for a department in the column. Departments are sorted in the same order in the rows and columns, according to an algorithm that groups them by how similar their voting patterns are. The row and column numbers DO NOT correspond to the PGR rankings. (That is, Department 10 in the figure is not the 10th-ranked department.) Purple and blue cells represent high rankings; green cells represent middling rankings; brown and yellow cells represent low rankings. (The captioned version provides a reference scale.) The main diagonal is blank because departments are not allowed to vote for themselves. In a high-consensus field, we’d expect each column to be the same color all the way down: that is, everyone agrees on how good a particular department is. In a low-consensus field, we’d expect more heterogeneity, with disagreement on the quality of particular programs. The data suggest that — at least according to the respondents to the PGR — philosophy is a very high-consensus discipline.
To help the interpretation, we can further group departments into “blocks” based on their similarity: members of the same blocks will stand in similar structural relations to other departments. In this case, I’ve generated a model with 5 blocks. Blocks are set off by thicker lines that project out into the margin. Block 1 is made up of just the first four departments, so the first four rows and columns show Block 1’s assessment of itself, for example. The four Block 1 departments enjoy the highest prestige and the greatest degree of consensus about their quality. Looking down the first four columns lets you see what everyone else thinks of Block 1 — almost everyone agrees they’re the best, as you can see by the almost unbroken strip of purple and dark blue. Looking across the first four rows lets you see what Block 1 thinks of everyone else. Thus, focusing on the intersections of the graph created by the thicker horizontal lines lets you see how different blocks relate to one another (and themselves). For instance, the bottom right corner of the figure shows what the lowest-status block, Block 5, thinks of itself, so to speak. It turns out that it agrees with everyone else’s assessment of its relatively low quality. In fact, as I show in the paper, Block 5 thinks a little better of Block 1 than Block 1 thinks of itself, and thinks a little worse of itself than Block 1 thinks of it. In other words, the lowest-prestige block is slightly more committed to the hierarchy than the highest-prestige block. Although the mean scores awarded to blocks vary across blocks, there is complete agreement on the rank-ordering of blocks. So, for example, there’s no dissenting group that thinks itself better than everyone else believes.
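To make the two ideas concrete, here is a minimal sketch in Python with invented numbers (not the actual PGR data): consensus about a department shows up as low spread in its column of the vote matrix, and blocks are found by grouping together rows with similar voting profiles.

```python
# Toy sketch of the two ingredients of the blockmodel, on a hypothetical
# 4x4 vote matrix (NOT the PGR data): votes[i][j] is row-department i's
# rating of column-department j; None marks the blank diagonal, since
# departments cannot vote for themselves.
from statistics import pstdev

votes = [
    [None, 4.5, 3.0, 1.5],
    [4.8, None, 3.1, 1.4],
    [4.7, 4.4, None, 1.6],
    [4.9, 4.6, 2.0, None],
]

def column_consensus(votes, j):
    """Spread of the ratings department j receives (lower = more consensus)."""
    col = [row[j] for row in votes if row[j] is not None]
    return pstdev(col)

def row_distance(a, b):
    """Euclidean distance between two departments' voting profiles,
    ignoring positions where either department has no vote."""
    pairs = [(x, y) for x, y in zip(a, b) if x is not None and y is not None]
    return sum((x - y) ** 2 for x, y in pairs) ** 0.5

# Department 0 enjoys more consensus than department 2 in this toy matrix,
# and departments 0 and 1 vote more alike than departments 0 and 3 do;
# a blockmodel algorithm would group rows by exactly this kind of distance.
```

This is only the intuition; the paper's actual blockmodel uses a proper partitioning algorithm over the full matrix rather than pairwise distances alone.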
Departmental Strength in Specialist Areas
This is a segment plot. Again, you can get a larger version of it, or a much nicer PDF version. For each department, the wedges of the plot represent the department’s reputation rank in a particular subfield. The bigger the wedge, the better the reputation rank. A department that was ranked first in all areas would look like the key at the bottom. To simplify the presentation I’ve grouped metaphysics, philosophy of mind and philosophy of language into a single group, “MML.” (This has a substantive justification, as strength in these areas is highly correlated.) The distribution of segments gives a nice picture of each department’s profile: what it’s known for. I’ve also ordered the subfields clockwise from the left, roughly in order of their contribution to the overall reputation of a department. You’ll notice that Princeton and Oxford are the only departments in the Top 15 or so to have a roughly symmetric “fan-like” structure, indicating strength in a wide variety of areas. By contrast, NYU is very strong in MML, Ethics and History, but not Ancient or Continental. Rutgers’ profile looks like a chambered nautilus: it’s very strong in MML and Epistemology, and gets progressively weaker as you move around the half-circle. Yet NYU and Rutgers outscore Oxford and Princeton in terms of overall reputation. This is because — as the paper shows — not all areas contribute equally to the status of a department. Strength in MML is more important than strength in, say, Ancient philosophy or (especially) Continental philosophy. The segment plot makes all the specialties have the same weight, but in reality this isn’t the case — so Oxford doesn’t capitalize on its strength in Ancient philosophy, for example. Michigan is an unusual case in that it ranks very highly despite lacking a strong reputation for MML and Epistemology.
Conversely, strength in MML will only get you so far: MIT and the ANU excel in these areas, but probably won’t go any higher in the ratings without diversifying.
The plot shows some other features of departments and the field in general, too. Harvard’s relative weakness in metaphysics and mind is clear, for instance, as is Chicago’s strength in continental philosophy. As one moves down the rankings, the size of the wedges declines, of course, and departments with distinctive niches appear: the LSE is strong in the philosophy of science, Penn in modern history, Wisconsin in Science.
The data have a number of limitations, of course. For one thing, not all departments are present in the survey, and in most cases only one or two representatives of those departments were sampled. But it’s still a rich dataset. The draft paper has a fuller discussion of all this, together with a few other neat visualizations of the structure of the field. Comments are welcome, of course.
Update 2: Following a query from Tom Hurka, I discovered a small error in the segment plot. The original “History” measure mistakenly included Ancient history, which wasn’t my intention. That’s been fixed now, and the History measure is a department’s mean score in 17th Century, 18th Century and Kant/German Idealism.
There’s also a second question about interpreting the plots, especially if you’re looking closely at the profile of a particular department. The size of the wedges is not determined directly by the scores departments get in each area. First, the scores are rescaled to values between 0 and 1 so that the plot wedges can be drawn. This rescaling can affect how departments appear. Imagine a department that scores the same in two subfields. If one subfield has a wider range of scores than the other, a gap may open up between the department’s relative positions in the two subfields when this scaling takes place. This happened in some cases in the original segment plot. The range of observed scores for ethics is wider than for modern history, for example, so the relative position of some departments will differ between the two subfields even though their raw scores are the same in both. Originally I just left it at that, but to simplify things I’ve redone the segment diagram so that the size of the wedges is directly proportional to a department’s rank in that subfield. Bear in mind that ranks are calculated after scores are rounded to one decimal place, so ranks will often be tied.
Visually representing multivariate data is tricky, and this is an example of where a segment plot might be slightly confusing and the choice between using the raw means or the ranks isn’t entirely clear-cut. The ranks bring out the relative ordering of departments but don’t convey how, for some fields, a few departments might be head and shoulders above the rest of the field. On the other hand, the ranks are easier to interpret than a “relative reputation” measure.
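As a toy illustration of the difference (with invented scores, not the real PGR numbers), here is how min-max rescaling and rounded ranks can come apart:

```python
# Toy sketch of the two ways wedge sizes could be computed (invented
# scores, not the PGR data): min-max rescaling of raw means to [0, 1]
# versus ranks computed after rounding to one decimal place, which is
# why tied ranks are common.

def rescale(scores):
    """Min-max rescale a list of subfield scores to the [0, 1] interval."""
    lo, hi = min(scores), max(scores)
    return [(s - lo) / (hi - lo) for s in scores]

def rounded_ranks(scores):
    """Rank scores (1 = best) after rounding to one decimal place;
    equal rounded scores share a rank."""
    rounded = [round(s, 1) for s in scores]
    distinct = sorted(set(rounded), reverse=True)
    return [distinct.index(r) + 1 for r in rounded]

ethics = [4.8, 3.1, 3.0, 1.2]    # a subfield with a wide range of scores
history = [3.6, 3.1, 3.0, 2.9]   # a subfield with a narrow range

# The second department scores 3.1 in both subfields, but after min-max
# rescaling it sits much higher in ethics than in history, because the
# observed ranges differ; its rank (2nd) is the same in both.
```

Under the rescaling, identical raw scores produce different wedge sizes across subfields; under the ranks, they do not, which is the simplification the redone plot adopts.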
The latest Prospect has a nice piece on Durkheim by Michael Prowse, arguing that we should take him seriously as a critic of free-market capitalism. I was, however, struck by this paragraph concerning Durkheim’s views on the advantages of marriage for men:
Durkheim used the example of marriage to illustrate the problem of anomie or inadequate social regulation. You might think that men would be happiest if able to pursue their sexual desires without restraint. But it is not so, Durkheim argued: all the evidence (including relative suicide rates) suggests that men do better when marriage closes their horizons. As bachelors they can chase every woman they find attractive but they are rarely contented because the potential objects of desires are so numerous. Nor do they enjoy any security because they may lose the woman they are currently involved with. By contrast, Durkheim argued, the married man is generally happier: he must now restrict himself to one woman (at least most of the time) but there is a quid pro quo. The marriage rules require the woman to give herself to him: hence his one permitted object of desire is guaranteed. Marriage thus promotes the long-term happiness of men (Durkheim was less certain that it helped women) because it imposes a sometimes irksome constraint on their passions.
No comment from me, except that it reminded me of a dialogue between Gabrielle and her boy-gardener lover during a recent episode of Desperate Housewives. It went something like this:
He: So why did you marry Carlos?
She: Because he promised to give me everything I desired.
He: And did he?
She: Yes.
He: So why aren’t you happy?
She: It turns out I desired the wrong things.
Cue Aristotle stage left?
Next time I teach Grice on conversational implicature and the Cooperative Principle, I think I'll use this sentence as an example of how not to be maximally relevant:
In Trier, Germany, birthplace of Karl Marx, the prosecutor’s office has been investigating the claim of a woman that babies were being cut up and eaten in Satanist rituals.
Link via Jonathan Goodwin, who reliably bursts with timely and topical quotations. Such as this:
Philosophical works among [the Solipsists] are more or less of this sort: “Does the scarab roll dung into a ball paradigmatically?” “If a mouse urinates in the sea, is there a risk of shipwreck?” “Are mathematical points receptacles for spirits?” “Is a belch an exhalation of the soul?” “Does the barking of a dog make the moon spotted?” and many other arguments of this kind, which are stated and discussed with equal contentiousness. Their Theological works are: “Whether navigation can be established in imaginary space.” “Whether the intelligence known as Burach has the power to digest iron.” “Whether the souls of the Gods have color.” “Whether the excretions of Demons are protective to humans in the eighth degree.” “Whether drums covered with the hide of an ass delight the intellect.”
Discuss. In strict accordance with Grice's Cooperative Principle. That is, "make your conversational contribution [concerning the protective puissance of demon excretions, etc.] such as is required, at the stage at which it occurs, by the accepted purpose or direction of the talk exchange in which you are engaged."
One of the striking things about the tsunami coverage here in Melbourne has been how much of it has focussed on religion. The recent op-eds in The Age have been full of people arguing about how, or whether, religious views can accommodate tragedies such as we’ve seen in south Asia. Since I’ll be teaching the Problem of Evil as part of philosophy 101 this spring (using God, Freedom and Evil as the primary text), I’ve been following these discussions with some interest. I was surprised to find that one of the responses I had always dismissed as absurd has a little more bite to it when I actually tried thinking about it.
It’s worth noting that there is only a theological problem here for a special kind of theist. Believers in Greek-style polytheism don’t have a problem. Nor do believers whose God is morally pretty good, but not altogether perfect. Maybe a God like that well-intentioned colleague who is sometimes a bit forgetful. And there isn’t a problem for those who don’t believe in an omnipotent God, as apparently some prominent theists do not. But if your God is all-powerful and all-loving, there’s a prima facie problem.
One could try, as Bishop Phillip Jensen apparently did, to say that it’s all part of God’s warning. But there are plenty of problems with that. Voltaire’s criticism of a similar move after Lisbon (there are plenty of worse sinners in Paris) seems on the money. And, earth to God: next time you want to send a message, try skywriting. It’s cheap, especially for you, it’s visible, and if there isn’t a plane involved, everyone will notice. During the cricket this week Peter Roebuck was having some fun gently mocking this line, saying something like “Hmmm, so we’re meant to think God was sitting around and decided, I know, what we need now is a giant tidal wave that kills a couple of hundred thousand people. I think they’ll have to do better than that.” It could have been a fun discussion, but the other commentator seemed a little nervous to be talking about anything more controversial than Yousuf Youhana’s field placings, so it got cut off. Roebuck was correct, though.
I think there’s a relatively straightforward solution to the Problem of Evil, the modal realism solution due to Donald Turner and Hud Hudson. I also think that the no best world solution is pretty good. To be sure both of these solutions have metaphysical oddities about them, but I think they are both perfectly fine solutions to the theological problems. So I’m mostly interested in mapping out the logical space here rather than trying to work out if there’s a knock-down argument against theism, which I’m pretty sure there isn’t.
But what I most wanted to write about was the response by Rowan Williams, Archbishop of Canterbury.
The extraordinary fact is that belief has survived such tests again and again not because it comforts or explains but because believers cannot deny what has been shown or given to them. They have learned to see the world and life in the world as a freely given gift; they have learned to be open to a calling or invitation from outside their own resources, a calling to accept God’s mercy for themselves and make it real for others; they have learned that there is some reality to which they can only relate in amazement and silence. These convictions are terribly assaulted by all those other facts of human experience that seem to point to a completely arbitrary world, but people still feel bound to them, not for comfort or ease, but because they have imposed themselves on the shape of a life and the habits of a heart.
In the past I always thought this was just a cop-out, akin to refuting Berkeley by kicking a stone, or refuting the sceptic by holding up one’s hands. Then I realised, I support refuting Berkeley by kicking a stone, or refuting the sceptic by holding up one’s hands, so maybe I better look into this more closely!
Put in terms we analytic philosophers would be more comfortable with, the argument might go as follows. It’s a familiar fact that when faced with a valid argument, one always has a choice between believing the conclusion and rejecting one or more of the premises. It’s also a familiar point that given how fragile philosophical reasoning can be, if we have a choice between accepting a philosophical claim and accepting some claim that we’ve received from a more secure evidentiary source, e.g. perception, common sense, gossip, reading tea leaves etc, the right thing to do is reject the philosophical claim. And that’s the right thing to do even if we don’t know why the philosophical claim is false. The upshot of this is that given a tricky philosophical argument for a claim that conflicts with something we know from a secure evidentiary source, and all kidding aside perception is basically a secure source, we should reject the philosophical argument.
Williams, if I’ve read him correctly, is arguing that believers can simply perceive God’s existence. Now this is not much use as an argument for God’s existence, since it is pretty blatantly question-begging, but there’s no such thing as begging-the-question when offering defences of your own view against alleged refutations. So I’m inclined to grant, or at least assume for the sake of argument, that (some) believers do have perceptual knowledge of God’s existence. Does this defeat the problem of evil?
I think not, as a close inspection of the parallels with Moorean common sense arguments shows. (I’m indebted over the next little bit to various conversations with Bill Lycan.) Moore wanted to defend common sense against philosophical attacks, such as McTaggart’s argument for the unreality of time, or the sceptic’s claim that he could not know of the existence of an external world. The trick was to (a) show that the philosopher’s conclusion entailed the opposite of some common sense claim (in McTaggart’s case this was “That I had breakfast today before I had lunch”) and (b) argue that the common sense claim was more plausible than some of the philosophical claims (in McTaggart’s case again “Temporal modes such as pastness and futurity are monadic properties of events.”). Both steps are going to be problematic for someone trying to offer a novel response to the Problem of Evil along Williams’s lines.
I won’t stress too much the problems with (b) here, but the rough idea is that the premises of the atheist’s Problem of Evil argument are hardly technical philosophical ideas. If they were, the problem wouldn’t get into the newspapers with quite the frequency it does. If anything, it’s the premises here that are common sense, though good philosophers (e.g. Plantinga) have noted that it is hard to get a rigorous statement of them. So the atheist doesn’t look like McTaggart or the sceptic to start.
The bigger problem is that the atheist’s conclusion here is nowhere near as radical as the conclusions Moore rejected, and they need not lead to the rejection of anything genuinely perceptual or common sensical. Remember the theological views I said at the top weren’t threatened by the tragedy, some of which were clearly theistic views. It’s consistent with the perceptions of God, at least as Archbishop Williams describes them, that they are perceptions of a less than all-loving, or a less than all-powerful, God. Indeed, it’s hard to imagine how they could be perceptions of such a God, since these don’t seem to be observational properties.
Here’s an analogy to try and back that up. Imagine a debate between a common sense person Con, and a scientist Sci. Sci tries to argue that given what we know about subatomic physics, and how subatomic things tend to be widely separated, there are no solid objects. If Con responds by kicking a stone and saying “Look, that’s solid”, he’s given a perfectly sound defence of his view that there are solid objects, because he can see and feel (and hence perceptually know) that there are solid objects. Now imagine Sci does not argue with Con, but with Con’s radically common sensical cousin Rad, who thinks there are perfectly solid objects, where a perfect solid has material at every point in its interior. Sci points out that the assumption of perfect solidity is inconsistent with scientific theories. Rad responds by kicking a stone and saying “Look, that’s perfectly solid.” This response fails for multiple reasons. First, of course, it isn’t perfectly solid. Second, even if (per impossibile) it was perfectly solid, this isn’t the kind of thing we could detect by simple observation, so Rad couldn’t know that it was perfectly solid.
I think the theist who responds to the Problem of Evil by appeal to their perception (or innate feeling or whatever) of God is in Rad’s position, not Con’s. It is arguable, and nothing in the Problem of Evil tells against it, that someone could perceive God’s existence. But they couldn’t simply perceive these superlative properties of God, because these are not available to simple inspection. (They might believe them for all sorts of other reasons, as we believe that stones are not perfectly solid for reasons that go beyond simple perception.) In general we can’t simply perceive superlative properties - we can see that someone is tall, but have to infer from all sorts of facts that he is the tallest man in Britain. Hence the believer can’t just see that the conclusion of the Problem of Evil is false. But saying they can is not as bizarre, nor as non-responsive, as I always thought, so I’m rather glad Archbishop Williams wrote this piece. And I’m very glad that Australia is the kind of place where these kinds of debates can take place in the public sphere, with something akin to arguments rather than name-calling being offered on both sides.
I'm preparing to teach Nietzsche and am rereading Genealogy of Morals. Here's a bit from §7 of the first essay.
One will have divined already how easily the priestly mode of valuation can branch off from the knightly-aristocratic and then develop into its opposite; this is particularly likely when the priestly caste and the warrior caste are in jealous opposition to one another and are unwilling to come to terms. The knightly-aristocratic value judgments presupposed a powerful physicality, a flourishing, abundant, even overflowing health, together with that which serves to preserve it: war, adventure, hunting, dancing, war games, and in general all that involves vigorous, free, joyful activity. The priestly-noble mode of valuation presupposes, as we have seen, other things: it is disadvantageous for it when it comes to war! As is well known, the priests are the most evil enemies - but why? Because they are the most impotent. It is because of their impotence that in them hatred grows to monstrous and uncanny proportions, to the most spiritual and poisonous kind of hatred. The truly great haters in world history have always been priests; likewise the most ingenious haters: other kinds of spirit hardly come into consideration when compared with the spirit of priestly vengefulness. Human history would be altogether too stupid a thing without the spirit that the impotent have introduced into it.
A couple things struck me about this old familiar passage this time around. (But you tell me.)
First, how incredibly, um, stock the story line is. OK, you've got some sort of blond beast. Let's call him The Lion King. And he's got a vigorous Lion Prince of a son. Much vigorous pouncing and rough-housing in Act I. In Act II, enter some evil, scheming weakling advisor - who's got evil, scheming weakling written all over him. There's sure to be a scene in which the lovable prince bowls over this priestly figure, quite by accident, to general jollity and acclaim. The priest barely manages to feign genial, avuncular indulgence long enough to escape to his lair, where he howls and seethes with a kind of rage that would simply never occur to the others.
Somehow the big blond beasts - lovable dopes - fail to see it. As Terry Pratchett writes somewhere - Interesting Times, Sourcery? - never trust a Grand Vizier. Ah, Amazon 'search inside':
After a while a tall, saturnine figure appeared from behind the pavilion. He had the look of someone who could think his way through a corkscrew without bending and a certain something about the eyes which would have made the average rabid rodent tiptoe away, discouraged.
That man, you would have said, has got Grand Vizier written all over him. No one can tell him anything about defrauding widows and imprisoning young men in alleged jewel caves. When it comes to dirty work he probably wrote the book or, more probably, stole it from someone else. (Sourcery)
And, even more appropriately:
Once you were in the hands of a Grand Vizier, you were dead. Grand Viziers were always scheming megalomaniacs. It was probably in the job description: "Are you a devious, plotting, unreliable madman? Ah, good, then you can be my most trusted minister." (Interesting Times)
All Nietzsche is saying is what every script doctor at Disney knows by instinct. You root for the Lion King, but the plot doesn't really thicken until you get some Grand Vizier twirling his moustache and laughing mirthlessly. You need both, aesthetically. They complement each other dramatically. To this you add the stock Shakespeare line about 'all the world's a stage', i.e. what's true about art is true of life itself.
OK, in Nietzsche there are two important twists in the corkscrew of cyclical spiritual development. The Lion King is not good and the Grand Vizier is not evil. More precisely, among aristocratic equals these beasts behave themselves well enough, playing and generally taking joy in strength. But:
Once they go outside, where the strange, the stranger is found, they are not much better than uncaged beasts of prey. There they savor a freedom from all social constraints, they compensate themselves in the wilderness for the tension engendered by protracted confinement and enclosure within the peace of society, they go back to the innocent conscience of the beast of prey, as triumphant monsters who perhaps emerge from a disgusting procession of murder, arson, rape, and torture, exhilarated and undisturbed of soul, as if it were no more than a students' prank, convinced they have provided the poets with a lot more material for song and praise.
In short: they drive their enemies before them and hear the lamentations of their women. So it is not exactly surprising that those who get driven eventually come to resent this arrangement.
Now this does complicate the script. Instead of the Lion King as a figure of conventional justice and harmony in the jungle ... well, let's just say a Tarantinoesque scene in which young Simba learns what it means to be king by slaughtering a bunch of innocent animals joyfully ... would not be Disney. Circle of Life indeed. Also, if it were explained that the Grand Vizier/Scar character resents the king because his whole family was cruelly slaughtered; if it were revealed that the king appointed him because he didn't even remember slaughtering the family ... that would hardly be Disney. But it would be Nietzsche. ("To be incapable of taking one's enemies, one's accidents, even one's misdeeds seriously for very long - that is the sign of strong, full natures in whom there is an excess of the power to form, to mold, to recuperate, and to forget.")
Yes, I suppose it would make the movie more ... interesting. Oh, and the Grand Vizier wins.
OK, so that's how I'm going to explain Nietzsche to my undergraduates, unless I can think of a simpler analogy than a Tarantinoesque ressentiment revenge rewrite of The Lion King. (Suggestions?) But here's my thought for the night.
I've always taken these sort of cartoonish tales of blond barbarians clashing with priests as evocative myths. Myths in the Platonic sense, if you like; but standing Plato on his head. These savage little scenes are the basic platform on which Nietzsche proceeds to sketch, much more subtly than the above passages suggest, certain dynamics of psychological struggle, of Will to Power, that are so important to his account. (Psychological struggle doesn't say it all, but that's a start. Morality as an adaptive trait of the priests, which eventually calls forth in response a higher and more complex Blond Beast, this time a vastly more spiritually subtle artist-type. So it goes. The Circle of Life. Eternal Recurrence, even. Amor Fati. Hakuna Matata.)
But this time I was struck by the inconsistency of my view with the note at the end of the essay.
I take the opportunity provided by this treatise to express publicly and formally a desire I have previously voiced only in occasional conversation with scholars; namely, that some philosophical faculty might advance historical studies of morality through a series of prize-essays - perhaps this present book will serve to provide a powerful impetus in this direction. In case this idea should be implemented, I suggest the following question: it deserves the attention of philologists and historians as well as that of professional philosophers:
"What light does linguistics, and especially the study of etymology, throw on the history of the evolution of moral concepts?"
On the other hand, it is equally necessary to engage the interest of physiologists and doctors in these problems (of the value of existing evaluations); it may be left to academic philosophers to act as advocates and mediators in this matter too, after they have on the whole succeeded in the past in transforming the originally so reserved and mistrustful relations between philosophy, physiology, and medicine into the most amicable and fruitful exchange.
The slightly antic tone is classic Nietzsche, but the proposal is evidently serious. I guess what I'm wondering tonight - it's really a simple question, but I now don't know what I think the answer is: does Nietzsche think his fable about the blond beasts vs. the priests is in some sense literally, historically true; as opposed to being a sort of evocative myth? (Yes, of course, it does depend on whether he believes in truth. And, if so, in what sense. I'm not a bleating epistemological lambkin here, you know, Nietzsche-wise.) What do you think?
Not only is it MLA Season, it’s also time for the meetings of the American Philosophical Association’s Eastern Division. The APA meetings are scheduled at this time of the year because, as is well known, philosophers hate Christmas — even if a good number of its senior wranglers do their best to look like Santa. So here I am in Boston. This year I even have a professional excuse to be here, because I’m doing some work on the relationship between specialization and status amongst philosophy departments.
Unlike most academic associations, the APA doesn’t have a proper national meeting, just regional ones. But the Eastern APA is the biggest, partly because there’s a high concentration of philosophers on the East Coast,1 but mostly because the job market happens at it. As at the MLA, philosophy departments interview their shortlist of 10 to 15 candidates at the meetings, with a view to whittling them down to three or four for campus visits. Personally, I don’t believe this stage adds any useful information to the recruitment process, unless you are interested in whether a candidate can sit comfortably in a cramped hotel suite.
I nearly got an interview at the APA myself a few years ago, when I accidentally sat at the wrong table in an empty conference room, put my feet up and started reading some book or other. After about half an hour some people started filing in to the room, but I wasn’t paying attention. Then two guys (one with a Santa beard-in-training) sat down at my table. “Mr Robertson? We’re from East Jesus State University,”2 said one of them, “Shall we begin?” I should have said yes, but of course instead I was a coward and mumbled something about not being Mr Robertson. Pity: I’ve become quite good at bluffing my way amongst philosophers, and I might have gotten a fly-out.
1 Every single Mets fan, for instance.
2 Not its real name.
I was looking at Peter King’s website, especially his book One Hundred Philosophers and I thought this passage on David Lewis was delightful.
Lewis’ philosophical interests were broad, as evidenced by the contents of the five volumes of his collected papers published so far: ethics, politics, metaphysics, epistemology, philosophical logic, language - he wrote on a vast range of subjects, from holes to worlds, from Anselm to Mill, from the mind to time travel. In everything he wrote he was rigorous, committed, and clear, but perhaps the most distinctive thing about him was his attitude to other philosophers, and especially to criticism: one can scarcely find a book or paper attacking Lewis’ views that doesn’t contain an acknowledgement to him for his help. What mattered to him - what he loved - were the ideas, the arguments, the philosophy, not winning or being right. He was the ideal, the model philosopher; he’s also (and this is a very different matter) widely regarded as being the best philosopher of his generation - perhaps of the twentieth century. (Emphasis added.)
The model philosopher indeed.
The postgraduate colloquium at La Trobe have archived their radio show on the web. I’ve listened to bits of the “democracy” programme and to this Pom’s ears, the participants begin by exuding a certain antipodean charm and thereby remind me of a certain Monty Python sketch … but the discussion gets serious pretty quickly. It continues to be marked by a certain Australian robustness, however, as when one participant utters the words:
“… if only those stupid arseholes out there would vote the right way, and take the right decision … yet we can’t disenfranchise any of them …”
Other programmes have a bit too much of a po-mo ring about them for my taste, but others will disagree.
There’s been a lot of hubbub, both here and elsewhere in the blogworld, about the Becker-Posner blog. But if it’s intellectual firepower in a group blog you’re after, you should be reading Left2Right. Here’s its mission statement, which should be good for setting off a round of debates.
In the aftermath of the 2004 Presidential election, many of us have come to believe that the Left must learn how to speak more effectively to ears attuned to the Right. How can we better express our values? Can we learn from conservative critiques of those values? Are there conservative values that we should be more forthright about sharing? “Left2Right” will be a discussion of these and related questions.
Although we have chosen the subtitle “How can the Left get through to the Right?”, our view is that the way to get through to people is to listen to them and be willing to learn from them. Many of us identify ourselves with the Left, but others are moderates or independents. What we share is an interest in exploring how American political discourse can get beyond the usual talking points.
The contributors so far include Elizabeth Anderson, Kwame Appiah, Josh Cohen, Stephen Darwall, Gerald Dworkin, David Estlund, Don Herzog, Jeff McMahan, Seana Shiffrin, and David Velleman. Wowsa. And many other names you may have heard of, from Peter Railton to Richard Rorty, are listed as being part of the team. This should be worth following.
It’s a commonly heard complaint that philosophy and philosophers are too divorced from the real world and practical considerations. I always thought this kind of concern was overblown, but nevertheless I’m glad to see philosophy brought into contact with the real world in new and interesting ways. As in this Friday’s Philosophy and Wine conference at the University of London. The philosophers who are speaking are quite distinguished - Roger Scruton, Kent Bach and Barry Smith - and there is a wine tasting as part of the conference, so it looks like it should be a lot of fun. Any readers in London with a spare Friday and an interest in, er, philosophy of wine should pop along.
Keith Burgess-Jackson responds to Chris' post. "What's interesting (and ironic) is that nobody at the site engaged my argument. In the insular world of liberalism, argumentation is unnecessary. One mocks conservatives; one doesn't engage their arguments." OK, obviously the dogs voting thing wasn't the man's argument, so it was very unfair for Chris to seize on that. The argument goes like this: "Some disappointed pundits have said that this [voter rejection of gay marriage] reflects bigotry. No. It reflects intelligence. The other day, Pat Caddell said that homosexual “marriage” isn’t a conservative/liberal issue. It’s an intelligence/stupidity issue. I agree. I have said in this blog many times that the very idea of homosexual marriage is incoherent, which is why I put the word “marriage” in quotation marks."
So the argument is: supporters of gay marriage are stupid? Or: some guy says homosexual marriage is incoherent? (How could some guy be wrong, after all? Makes no sense.)
Let's take the first. Supporters of gay marriage are stupid. What sort of argument is that? (And why would any supporter of gay marriage think that it was OK to short-circuit the argument process by mocking arguments like this?) Let's consider this philosophical authority on 'how to argue': all arguments are either deductive or inductive. Well, the statement in question is neither, since both deductive and inductive arguments require at least one premise and a conclusion, I should think. Here we have, well, just a statement. Reading on we make a little more progress:
All argumentation, to be effective, must be ad hominem in nature. The term “ad hominem” has two very different uses in philosophy. They must not be confused. You have probably heard of the ad hominem fallacy. (A fallacy is an argument that is psychologically attractive but logically infirm; it seems like a good argument, but isn’t.) This fallacy consists in dismissing someone’s argument on the ground that he or she is a bad person (a Marxist, for example, or a goddamned Democrat). This is clearly fallacious, for bad people can make good arguments and good people bad arguments. One cannot transfer goodness or badness from arguers to arguments any more than one can transfer goodness or badness from politicians to policies. Even Hitler was capable of making, and probably did make, a sound argument.
The other use of the term “ad hominem” has nothing to do with fallacies. Indeed, it describes a respectable mode of argumentation. According to the British philosopher John Locke (1632-1704), “A third way [to persuade] is, to press a Man with Consequences drawn from his own Principles, or Concessions. This is already known under the Name of Argumentum ad Hominem” (An Essay Concerning Human Understanding, book IV, chap. XVII, sec. 21). Let us unpack this. People (most of them, anyway) have principles. Principles have implications. If I can show you that your principle commits you to belief B, then I force you to either embrace B or abandon your principle. Yes, this is coercive. All argumentation is coercive. It is the imposition of a choice by one person on another. In the example given, I tell you that you cannot have both your principle and your belief that non-B (or your nonbelief in B, if you are merely agnostic about it). You can’t both have your cake and eat it.
To summarize, the ad hominem fallacy is an attack on a person. It is disreputable and disrespectful. Don’t do it. The argumentum ad hominem is an appeal to (i.e., an argument directed to) a person (rather than to the world at large). It is reputable, respectful, and respectable. Do not confuse the two.
OK, there are good ad hominem arguments and bad ad hominem arguments. The bad consist of calling people bad (or goddamn Democrats.) It seems to me plausible that 'supporters of gay marriage are stupid' - if that is your whole argument - qualifies as a bad ad hominem argument. (My argument to this conclusion requires the additional premise that being stupid is commonly regarded as bad.) A good ad hominem argument will sting an interlocutor with his or her own principles. For example, if someone wanted to argue, say, that gay marriage was bad (or incoherent), a good ad hominem counter-argument might be built upon that person's own commitment to offering at least some deductive or inductive considerations in favor of that conclusion (were there evidence that the person in question was committed to arguments being either deductive or inductive, e.g. not purely abusive or some silliness about voting dogs.)
In all seriousness, in tut-tutting Chris & co. for failing to address his argument, I think Keith Burgess-Jackson is failing to notice the use of an ab hominem - or 'just walk away' - style of argument. (Closely related but distinct from peri hominem argument, often employed by Kierkegaard to get around Hegel.) By making jokes about how dogs might actually be capable of voting, about puppies shooting people, Chris and his commenters were doing the dialectically rigorous thing. Until such time as a person shows willing to engage in argument, either deductive or inductive, 'just walk away' is the proper argumentative approach. A little judicious mockery never hurt either. [UPDATE: Belle informs me that, since it is obviously the ablative, it should be ab homine argument.]
In that spirit, I offer the following counter-argument to the 'argument' against gay marriage contained in Keith Burgess-Jackson's original post.
Today a student walked into my office and said, 'So who is this Don Knotts guy?'
In an attempt to buy time, I did my best Obi Wan. 'Don Knotts? Now that's a name I haven't heard in a long time.'
This student, a Singaporean film buff - really, he knows everything; just brimming with trivia and facts - was extremely disturbed to discover that there was this famous American actor, who had made many movies, all of which are quite unknown to my student. He had visited IMDb and was maddened by the length of the list of titles. I stammered out some wisdom about how, yes, The Ghost and Mr. Chicken was regarded as his finest work after leaving The Andy Griffith Show. Movies from my youth: Hot Lead and Cold Feet. The Apple Dumpling Gang. I explained that I hadn't seen earlier works like No Time For Sergeants, so couldn't comment authoritatively. 'But why are you asking these questions, my son?'
It turned out he'd seen this, which I hadn't seen. It's incredibly funny. Really well done.
All by way of saying: if you've got an argument, out with it. You can't really expect people to respond to all this stuff about voting dogs. Which reminds me of a scene from The Shaggy D.A. Also of an old Bill Cosby sketch, oddly enough. Which just goes to show that I am brimming with ammunition for perfectly sound ab hominem arguments. Best then to offer deductive or inductive arguments. In short, are there any considerations that make the conclusion 'gay marriage is bad (or incoherent)' probably or necessarily true?
[File this post with my old one about ad hominid arguments, i.e. bad evolutionary psychology arguments that just make up a bunch of stuff about how it was 'back in caveman days.']
Via Butterflies & Wheels I came across the following ludicrous and offensive argument against gay marriage from Keith Burgess-Jackson, the self-styled AnalPhilosopher:
I have said in this blog many times that the very idea of homosexual marriage is incoherent, which is why I put the word “marriage” in quotation marks. I do the same for dog “voting.” If we took our dogs to the polls and got them to push levers with their paws, they would not be voting. They would be going through the motions of voting. It would be a charade. Voting is not made for dogs. They lack the capacity to participate in the institution. The same is true of homosexuals and marriage.
Richard Chappell at Philosophy etc says nearly all that needs to be said about Burgess-Jackson’s “argument”, so I wouldn’t even have bothered mentioning it if I hadn’t been in conversation on Tuesday with the LSE’s Christian List, whose article “Democracy in Animal Groups: A Political Science Perspective” is forthcoming in Trends in Ecology and Evolution. List draws on Condorcet’s jury theorem (previously discussed on CT here) to shed more light on research by Conradt and Roper in their paper “Group decision-making in animals”, from Nature 421 (155–8) in 2003. Conradt and Roper have this to say about animal voting:
Many authors have assumed despotism without testing, because the feasibility of democracy, which requires the ability to vote and to count votes, is not immediately obvious in non-humans. However, empirical examples of ‘voting’ behaviours include the use of specific body postures, ritualized movements, and specific vocalizations, whereas ‘counting of votes’ includes adding-up to a majority of cast votes, integration of voting signals until an intensity threshold is reached, and averaging over all votes. Thus, democracy may exist in a range of taxa and does not require advanced cognitive capacity.
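For readers who haven't met it, Condorcet's jury theorem is easy to state and to check numerically: if each voter is independently right with probability p > 1/2, the probability that a simple majority is right grows toward 1 as the group gets bigger. Here's a minimal sketch (my own illustration, not code from List's paper; the function name and numbers are just for the example):

```python
from math import comb

def majority_correct(n, p):
    """Probability that a simple majority of n independent voters,
    each correct with probability p, reaches the correct decision.
    (n is assumed odd, so there are no ties.)"""
    m = n // 2 + 1  # votes needed for a majority
    return sum(comb(n, k) * p**k * (1 - p)**(n - k) for k in range(m, n + 1))

# Condorcet's result: with p = 0.6, larger groups do better.
for n in (1, 11, 101):
    print(n, round(majority_correct(n, 0.6), 3))
```

With p below 1/2 the effect reverses (the crowd is almost surely wrong), which is why the theorem turns on voter competence rather than on any advanced cognitive capacity, exactly the point at issue for animal 'voting'.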
[Tiresome, humourless and literal-minded quasi-Wittgensteinian comments, putting inverted commas around “voting” etc. are hereby pre-emptively banned from the comments thread.]
Following on from last week’s case, which was concerned with the ontological argument, this week’s nutter in Laurie’s Inbox gives us the complete and comprehensive solution to consciousness and morality, two perennial favorites.
The Essay (Forward this to all!)
Emphasis in the original, naturally. Onward:
[Name Redacted to Protect the Innocent]
This universe is filled with atoms, unified into clusters or systems. They make up all things of matter from rocks to humans. All things of matter are either unconscious (a.k.a dead/ non living), semi- conscious (partially conscious), or fully conscious. Intelligence is the ability to be conscious. If one conceives, then one also sees an amount of right and wrong. One always does what one truly conceives is right. When one does not conceive what is right, then one is blind and may do wrong. Right and wrong will always exist as long as life exists. …
The condition of being unconscious is having no consciousness of all the worlds components, including right and wrong, at any given point in time (not being aware of anything) … The condition of semi-consciousness is defined as not fully conceiving the world at any point in time. … The condition of being fully conscious is being conscious of everything in existence at any point in time, past, present, and future (a.k.a all knowing). … The more conscious one is, the more likely one would be to make righteous decisions. The less conscious one is, the more likely one would be to make wrongful decisions. The humans living on Earth, and all other living beings on Earth are not fully conscious. The following are not rules. They are statements that are facts (not mere philosophies) that my semi-conscious mind has conceived. …
You can’t make this stuff up. Apart from the semi-consciousness, I like the insistence on facts. Reminds me of the work of another philosopher: “I am Vroomfondel, and that is not a demand, that is a solid fact! What we demand is solid facts!”
Brian Leiter’s Philosophical Gourmet report is now out in its latest version.
[UPDATE: I hadn’t noticed that Kieran gets a credit for statistical advice on the front page!]
The left and right hemiblogospheres are presently linked - if at all - by a corpus callosum of profound mutual contempt. Countless linky axons of aggravation transmit negative affect side to side. I won’t bother demonstrating this obvious fact with links, though I discuss it a bit here.
And so, in the interest of entente - or at least to preclude the need for split-brain surgery to prevent the equivalent of an interwebs-wide grand mal epileptic seizure, as the storm moves left to right and back - I propose … a contest! Awards! For outstanding and meritorious achievements in the field of contempt. I think we will call these awards … The Contumelys! (I imagine sort of a golden turd-looking thing on the head of a human figure, on a pedestal.) I haven’t really worked out all the details because I haven’t worked out any of them. There must be awards for Left and Right. And I think, though I can’t imagine how I could enforce this, that lefties should nominate righties and vice versa. I’m certainly not prepared to be judge, jury nor executioner. Except for executioner. I’m more than prepared to delete comments to this thread mercilessly. Because mostly your contempt isn’t worth much to me. Unless you find some way to profit yourself, or others, by it.
UPDATE: Apparently there are problems with comments not showing, even after waiting, even after multiple attempts to post. Sorry about that. What can I say? Be aware there’s a problem and try again later. Maybe it’s temporary. Kieran?
As Nietzsche writes [I’ve changed one term to suit the age]:
The fool interrupts. - The writer of this blog post is no misanthrope; today one pays too dearly for hatred of man. If one would hate the way man was hated formerly, Timonically, wholly, without exception, with a full heart, with the whole love of hatred, then one would have to renounce contempt. And how much fine joy, how much patience, how much graciousness even do we owe precisely to our contempt! Moreover, it makes us the "elect of God": refined contempt is our taste and privilege, our art, our virtue perhaps, as we are the most modern of moderns. (Gay Science, §379)
(Don’t worry, right-wingers, you won’t get gay science cooties, let alone gay science married, if you read that all the way through before noticing where it was from.)
Why is there no verb, ‘to x something’, meaning ‘to have and exhibit contempt for something’?
But why draw attention to this semantic space, however vast and central to our emotional lives? Why dig down for bitter roots, which sprout as hardy volunteers soon enough?
Not to raise a crop of Timons, gentle reader. Rather, it seems to me we should not (as the servant puts it) ‘walk, like contempt, alone’. For one thing, if fifty truly exemplary specimens from both sides could be exhibited for all to judge, at least one line of complaint would be reduced to shamed silence: aggrieved noise to the effect that it is down to the terrible, awful, unprovoked incivility and hostility of the other side.
What I am looking for, you see, are the artistic, witty, insightful, thoughtful, well-turned, educational expressions of contempt. The contempt should be, at least in part, reality-based. Elegant and brainy if you please. The sort of thing that might raise a grudging smile if you were behind a veil of ignorance, watching the mud wrestling yet unable to tell which wrestler - is you. In short, we seek the better angels of our contemptuous natures. (Nothing from Adam Yoshida, obviously. Or those slack-jawed morons Kieran quotes below. Please stay well within the bounds of human decency.) A few gem-like paragraphs maximum, perhaps with a link to the original. I’m hoping that the effect of concentrated, high quality volleys, back and forth, will produce a higher - hence deflating - sense of the absurdity of the emotional exercise.
Hence, contemptuous catharsis.
Probably I’ll just provoke a vicious undignified mess. In which case I’ll close the thread and pretend it never happened.
But I hope I don’t have to, for I hope no one will have to write of me, as Melville did of Ahab (chapter 34):
He blogged in the world, as the last of the Grisly Bears lived in settled Missouri. And as when Spring and Summer had departed, that wild Logan of the woods, burying himself in the hollow of a tree, lived out the winter there, sucking his own paws; so, in his inclement, howling old age, Holbo’s soul, shut up in the caved trunk of his body, there fed upon the sullen paws of its gloom!
I confess a partisan agenda as well, before I declare the games open. Russell Arben Fox has an interesting post up about the hazards of left condescension to religious folk. Namely, it ticks them off, causing them to vote Republican, doing nobody any good. "The left doesn’t have to flirt with theocracy … it just needs to show some respect." I think there is something right about this analysis. But there is also something wrong with it. Anyway, I find myself semi-agreeing with Jim Henley, on the other side. (Oh, I see Kieran got there first. Well I won’t make a link. Just scroll down, you lazy bastards.) Henley rather strongly maintains this ‘moral values’ dog don’t hunt. To think so is "naive and even condescending." Damn. This is bad, if true, because then Democrats are condescending if they do, condescending if they don’t. But how can that possibly be right? Pardon, but it is false that there is anything inherently condescending about being a Democrat. Quite the contrary. We’ll get back to that. Henley writes:
Conservative, values-minded Christians aren’t looking for validation. They’re looking for specific policy outcomes that their strongly-held beliefs entail - among them, the prohibition of abortion and the marginalization and if possible elimination of homosexuality. They are not empty urns waiting to be filled with liberal policies dissolved in honeyed words about faith.
Mmmm, liberal policies dissolved in honeyed words. Oh, wait, just me sucking my paws, dreaming of spring. This brings me back to my initial skeptical reaction to Russell’s post.
It seems to me there is an asymmetry between left and right in terms of sensitivities and respect, the latter being a two-way street in any well-designed status traffic system. The simplest way to see the asymmetry is to turn Russell’s formula around and offer it as advice for the right (imagining that the percentages had been just a little different). You don’t have to flirt with gays, just show them some respect. But, of course, this isn’t a proposal of a diplomatic means to an end, as Russell’s proposal is for lefties, who have nothing against religion. (Or if they do, that has nothing to do with being a lefty.) For the right, offering simple respect is abject surrender, since the whole point is NOT to show respect.
This struck me while I was reading prof. Bainbridge’s post and TCS essay (as per this post over at our other blog).
1) Bainbridge is sensitive to, and upset by, perceived liberal airs of superiority, leading to disrespect for his Middle American, conservative, religious dignity, sensibilities, values and virtues. (So he indulges in what I have come to call ‘the narcissism of small diffidences’. His antennae are exquisitely attuned to slights. Or maybe he’s just ticked off this week, a real sore winner. I can’t honestly say.)
2) Bainbridge feels no compunction about disrespecting liberals - lobbing insult for insult at Eric Alterman, which is fair enough. But beyond that, he not only exhibits airs of moral superiority himself (who doesn’t think their own morality is superior? You would change it otherwise); he actually assumes his own moral superiority over liberals as a premise in his argument to the conclusion that the conservative majority should legislate to reflect "their morals and values" because these are the true moral values.
As Randy Barnett notes, in a post Bainbridge singles out for criticism: "Assuming morality is an objective matter, majority opinion does not make something immoral." Not without the additional premise that this majority - conservatives - is inherently and certainly morally superior to this minority - liberals (and libertarians).
Again, everyone is a little elite of one, in that we all assume our own morality is superior (in some sense). But we don’t all go and insert a claim to personal, inherent moral superiority as a premise in our arguments. From the fact that I am better, what I say must be true? If some liberal argues like that, you can ask him to cut it out. For Bainbridge, this argument form is indispensable.
In short, dammit, it’s conservatives who can’t help but be condescending. Bainbridge rails against elitism, but it is his position that is not just elitist but inherently elitist. He rails against revolting elites, while participating in a revolt of the elites. At least Russell Kirk - whom Bainbridge quotes as a cudgel against libertarians - is unapologetic on behalf of the elitism he shares with Bainbridge. "Civilized society requires orders and classes," etc. Well, there you go.
Back to the Contumelys. I have this naive notion that despair at the spectacle of absolute ludicrous quantities of mutual contempt on all sides breeds tolerance and mutual respect - due to laughter at the absurdity of it, or weariness at the endlessness of it. Like in Doctor Marvin Monroe’s Family Therapy Center.
And tolerance is good. It erodes the bad, causing Democrats to win in the long run. As Burke writes, "nothing aggravates tyranny so much as contumely." Let’s purge it clean out of our systems.
So what were the clever, wise, insightfully contemptuous things both sides said about the other, all electoral season long?
I’m really trying to be even-handed here. It’s been a bad week.
Richard Heck has put together what should become one of the coolest philosophy sites on the internet - a searchable database of online papers.
There isn’t much up there yet because individuals with papers have to register and deposit their own papers. (Which if you’re a philosopher with online papers you should do right now.) But this will in time be a phenomenal resource for philosophers and people wanting an introduction to philosophy, and we’ll all be very grateful to Richard for putting together such a wonderful site.
Brian Leiter points out that the London Review of Books has recently published a characteristically clever and funny piece by Jerry Fodor in review of a critical work about the writings of Saul Kripke, Kripke: Names, Necessity and Identity, by Christopher Hughes.
True, Chris has already linked to that LRB article, but I’ve my own meanderings to add rather late in the day. They’re below the fold.
Leiter’s post includes a delightful email to-and-fro with Fodor concerning the possibility of distinguishing analytical philosophy from any other kind, so I’d recommend (sigh) reading the whole thing if that’s the kind of stuff that lights your fire.
Anyway. The rather twisted heuristic I’d formed by the time I left philosophy post-doctorally was that above a certain level of brilliance, analytical philosophers just weren’t worth reading by someone like me since I’d be unable to work out whether they were genuinely visionary, utterly potty, or just writing on a level that I couldn’t comprehend.
Kripke was different in that, although his cleverness is famously up there at something like the world-historical level, I found his Wittgenstein book to be incredibly useful as a key to figuring out what the hell was going on in the Investigations. That certain Wittgenstein groupies threw their toys out of their respective prams in response to Kripke’s book, yea even unto walking out of his lectures, I leave as a phenomenon to be explained by historians. I found Colin McGinn’s book very handy later on, but Kripke got me started.
So far as Naming and Necessity goes, I think I probably agree with Fodor’s line that the modal intuitions that Kripke relies upon are at base intuitions about the proper application of concepts, and hence that you can’t use Kripke to dodge Quine. But hey, this isn’t a bit of philosophy I ever claimed to know very much about, and I don’t doubt that Brian would be able to say many wise things contradicting that view if he chose to.
Still, I’m going to seek out the Hughes book, since I believe it would do my brain some good to read a book about Kripke which Fodor believes contains ‘nothing a competent graduate student won’t be able to cope with’. Once upon a time, I was one of those, and I’d like to find out how much neural degeneration has gone on in the interim.
Wish me luck.
Why does no-one read analytical philosophy (except for analytical philosophers) and what was the revolution wrought by Saul Kripke? Jerry Fodor explains, over at the LRB.
While everyone else was watching the debate, I was rewriting my lecture on Descartes’ “Second Meditation”. Since you can’t understand it without knowing a bit about Descartes’ physics, I always say a bit about that. My favorite discussion of the subject appears not in any secondary source, however, but in John Barth’s novel, The Sot-Weed Factor:
“Tell me, Eben: how is’t, d’you think, that the planets are moved in their courses?”
“Why,” said Ebenezer, “’tis that the cosmos is filled with little particles moving in vortices, each of which centers on a star; and ’tis the subtle push and pull of these particles in our solar vortex that slides the planets along their orbs - is’t not?”
“So saith Descartes,” Burlingame smiled. “And d’you haply recall what is the nature of light?”
“If I have’t right,” replied Ebenezer, “’tis an aspect of the vortices - of the press of inward and outward forces in ‘em. The celestial fire is sent through space from the vortices by this pressure, which imparts a transitional motion to little light globules - ”
“Which Renatus kindly hatched for that occasion,” Burlingame interrupted. “And what’s more he allows his globules both a rectilinear and a rotatory motion. If only the first occurs when the globules smite our retinae, we see white light; if both, we see color. And if this were not magical enough - mirabile dictu! - when the rotatory motion surpasseth the rectilinear, we see blue; when the reverse, we see red; and when the twain are equal, we see yellow. What fantastical drivel!”
“You mean ‘tis not the truth? I must say, Henry, it sounds reasonable to me. In sooth, there is a seed of poetry in it; it hath an elegance.”
“Aye, it hath every virtue and but one small defect, which is, that the universe doth not operate in that wise.”
I haven’t seen a transcript. So far the spin is confusing. I am seeing red and blue. Edwards won. Cheney won. Rotatory surpasseth rectilinear and vice versa. I do hope Edwards got in a little something in the ‘unfortunately, the universe doesn’t work that way’ vein, domestic and foreign policy-wise.
Exercises left to the interested reader: 1) Translate the debate into proper 17th Century English. 2) Compose a rigorously Cartesian calculation of the ‘who won the debate’ wrap-up, with pundits as planets and blogs as little globules. 3) Rewrite The Sot-Weed Factor as a Robert Ludlum Novel.
John Holbo’s post on apocalyptic Christianity and its political implications raises a couple of questions I’ve been wondering about for a while.
The first one relates to my memories of the late 1960s, when most people of my acquaintance gave at least some credence to the belief that there would be a revolution of some kind, sometime soon. At about the same time, I encountered the Revelations-based eschatology of people like Hal Lindsey. Thirty years later, there’s been no revolution, and I don’t know of anyone who seriously expects one. As I recollect, belief in the possibility of a revolution had pretty much disappeared by 1980.
Revelations-based prophecies have similarly failed time after time, but they seem to be more popular than ever. What is it about apocalyptic Christianity as a belief system that protects it from empirical refutation? I assume there’s heaps of research on this kind of thing, but I hope to get readers to point me to the good stuff.
The second point is that, as can be seen from Lindsey’s site, he and other apocalyptic Christians have strong political views, which could broadly be summarised as favouring a vigorous military response to Antichrist (variously identified with the Soviet Union, the UN and so on). How does this work? Do they think that another six armoured divisions could turn the tide at Armageddon? If so, wouldn’t this prevent the arrival of the Millennium and the Day of Judgement?1
And how does all this affect believers in rapture? Do they install automatic watering systems for their gardens and arrange for unsaved neighbours to feed the cat? Or do they just pay into their IRAs as if they expect the world to last forever?
1 There’s a genre of horror movies (The Omen, The Final Conflict and so on) that takes pretty much this premise.
We have a pretty clear division of labour as regards paradoxes here at CT. Brian, Daniel and occasionally Chris set them up, while I, along with the commenters, try to knock them down. Following Chris’ discussion of the paradox of rational voting, I found myself wondering about the sorites paradox1. Once I got the thing into a form where I felt confident about the truth-value of the premises, I came to the conclusion that the argument fails, at least in my language, at n = 2.
Here’s my general version of the paradox
1. I would never call a single item “a heap”
2. If I would never call n items “a heap”, I would never call n+1 items “a heap”
Hence, by repeated induction, I would never call n items “a heap” for any integer n
Introspection tells me that I would under at least some circumstances, call three items “a heap”, but that2 I would never call two items “a heap”. And if I replace “heap” by “crowd”, I don’t have to rely on introspection. Premise 2 is proverbially false in this case, for n = 2.
So, the paradox comes down to the fact that, in ordinary English usage, three items are sometimes a heap and sometimes not.
1 The problem is that, as the Florida 2000 election showed, no single voter is ever decisive. But if Gore had received, say, 5,000 more votes, he would almost certainly have taken office.
2 There are some formal contexts, for example in computing and game theory, where a heap might have two elements, or one, or zero.
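The repeated-induction argument above can be sketched mechanically. Here is a minimal Python sketch (the function name and setup are my own illustration, not anything from the post): iterating premise 2 from the base case infects every n with "not a heap", unless the inductive step is refused somewhere, e.g. at n = 2 for "crowd", where it is proverbially false.

```python
# Sketch of the sorites induction. Premise 1: one item is never "a heap".
# Premise 2: if n items are never "a heap", neither are n + 1 items.
# Iterating premise 2 spreads "not a heap" to every n -- unless the
# premise fails at some n, which blocks the chain there.

def sorites_chain(base_not_heap, premise2_holds, up_to):
    """Return the set of n (1..up_to) that the induction forces to be 'not a heap'.

    base_not_heap: largest n stipulated non-heap at the outset (here 1).
    premise2_holds: function n -> bool, whether the step
                    "not-heap(n) implies not-heap(n+1)" is granted at n.
    """
    not_heap = set(range(1, base_not_heap + 1))
    n = base_not_heap
    while n < up_to and premise2_holds(n):
        n += 1
        not_heap.add(n)
    return not_heap

# Unrestricted premise 2: every n up to the bound comes out "not a heap".
paradox = sorites_chain(1, lambda n: True, 100)

# The move in the post: premise 2 fails at n = 2 ("two's company,
# three's a crowd"), so the chain stops at 2 and three items may
# sometimes be a heap (or a crowd) after all.
blocked = sorites_chain(1, lambda n: n != 2, 100)
```

Running this, `paradox` contains all of 1 through 100, while `blocked` contains only 1 and 2, which is just the point of the post: once the truth-value of premise 2 is pinned down, the argument fails at the first n where the premise is false.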
That’s Latin for ‘do your own research, pal!’
This is, I suppose, a follow-up to my previous post on media bias. Some commenters think I didn’t get my liberal game face on sufficiently. I thought the red honk nose was pretty scary, on a man my age. Family man. Anyhoo. The point of connection is another passage from Aldous Huxley’s The Devils of Loudun.
Students of Descartes’ “First Meditation” might want to check out this under-appreciated, thinly novelized, allegedly accurate historical account of a curious series of events in Loudun in the early 1600s. A number of nuns decided they were possessed by devils and a luckless guy, name of Grandier (jerk, couldn’t have happened to a nicer), was behind it. (I gather he is more sympathetic in the Ken Russell version.) And that was how his troubles began. (I’ve blogged this before, but not here.)
During the spring and summer of 1634 the main purpose of the exorcisms was not the deliverance of the nuns but the indictment of Grandier. The aim was to prove, out of the mouth of Satan himself, that the parson [Grandier] was a magician, and had bewitched the nuns. But Satan is, by definition, the Father of Lies, and his evidence is therefore worthless. To this argument Laubardemont [who wants to get Grandier], his exorcists and the Bishop of Poitiers replied by affirming that, when duly constrained by a priest of the Roman Church, devils are bound to tell the truth. In other words, anything to which a hysterical nun was ready, at the instigation of her exorcist, to affirm on oath, was for all practical purposes a divine revelation. For inquisitors, this doctrine was a real convenience. But it had one grave defect; it was manifestly unorthodox. In the year 1610 a committee of learned theologians had discussed the admissibility of diabolic evidence and issued the following authoritative decision. “We, the undersigned Doctors of the Faculty of Paris, touching certain questions which have been proposed to us, are of the opinion that one must never admit the accusation of demons, still less must one exploit exorcisms for the purpose of discovering a man’s faults or for determining if he is a magician; and we are further of the opinion that, even if the said exorcisms should have been applied in the presence of the Holy Sacrament, with the devil forced to swear an oath (which is a ceremony of which we do not at all approve), one must not for all that give any credit to his words, the devil being always a liar and the Father of Lies.” Furthermore, the devil is man’s sworn enemy, and is therefore ready to endure all the torments of exorcism for the sake of doing harm to a single soul. If the devil’s evidence were admitted, the most virtuous people would be in the greatest danger; for it is precisely against these that Satan rages most violently. “Wherefore St. Thomas (Book 22, Question 9, Article 22) maintains with the authority of St. Chrysostom: DAEMONI, ETIAM VERA DICENTI, NON EST CREDENDUM. (The devil must not be believed, even when he tells the truth.)”
My point is not that sound files, video clips, transcripts saved by Media Matters are inherently no more conducive to veridicality in reportage than an hysterical nun in the hands of an Inquisition exorcist. Nay, I have a different agenda. Concerns about media bias go back a long way, amigo.
By the by, I think maybe we won’t touch bottom regarding all this ‘but that just happened in the false Iraq’ stuff until someone actually goes and does the necessary; explicitly founds a formal tux, tie and tails gnostic Philip K. Dick-style full-on pro-Bush religion about how the reason why there is still trouble, even though it was mission accomplished in 2003, is that it’s really still 2003, but the Evil Liberal God of our world wants you to believe that time has passed. (Link via boingboing.) Thank goodness the Good God is sending us pink laserbeams of information via the interweb.
No, seriously. If you are arguing about media bias, there is no point but to be generous and Rawlsian about it. Otherwise you’re just preaching to the choir.
I’ve been holding off posting about memogate out of respect for the worm’s rotational speed. Where we stop, nobody knows. Here I go.
First, a bit of prophecy. No, prophecy’s a fool’s game. On to philosophy (ba-dum, crash, thanks for coming out tonight.)
Some months ago Colby Cosh quoted an amusing law: “A public figure is often condemned for an action that is taken unfairly out of context but nevertheless reflects, in a compelling and encapsulated manner, an underlying truth about that person.” Dan Rather is a perfect case in point. No, not for memogate. For that he is to burn, and rightly so. I’m thinking about the other thing. Today in Slate:
And don’t forget the 1986 “What’s the frequency, Kenneth?” attack, in which Rather was accosted by street toughs on Park Avenue in New York. You can hardly blame Rather for that one, but Boyer [author of Who Killed CBS?] notes that such things rarely seem to happen to Tom Brokaw and Peter Jennings. It’s as if Rather attracts half the madness in the universe, and the other half comes out of his mouth.
Dan “Courage” Rather: his long and public career littered with not inconsiderable evidence he is a bit nuts (if not, as our author entertainingly exaggerates, “barking mad”); yet our author cannot refrain from hanging it all - praeteritio - from the catchy hook of the ONE mad thing that happened to him NOT as a result of his own madness.
A confirming instance, then. The law, however, would be wiser if it read ‘perceived underlying truth’. It governs a common species of characterological confirmation bias, facetiously glossed as microcosmic-macrocosmic insight (‘all the madness of the universe in a single anchor’), sprinkled with grated scapegoat and a dollop of poetic justice as fairness. (Unreasonable people deserve to be unreasonably condemned.) Garnish with reserved preconceptions. Serve. Feeds many. In fact, feeds back. (You have a view of someone’s character. You seize on the slenderest confirming ‘evidence’ as dispositive proof. This ‘evidence’ becomes an emblem, intensifying the initial view. So you are more inclined to seize more ‘evidence’. So it goes.)
I trust no one thinks reasoning in this bats-are-bugs way is respectable. In the realm of politics, in particular, potentially decent folks get seriously tarred with this sort of horsefeathers. (It isn’t just that it’s not what you say about the issues, it’s what the issues say about you; sadly, it’s what the non-issues say about you.)
Memogate is indeed a case in point, although not with respect to Rather. The victim here may turn out to be Kerry. Here is Hugh Hewitt on the memo Meta-Meme: The Forgery is John Kerry and John Kerry is the Forgery:
John Kerry ought not to have been hurt by the Rathergate scandal. [Hewitt ought to be made captain of Bush’s Praeteritiorian Guard for this brilliant disavowal-before-the-avowal.] There is thus far no firm evidence that any Kerry operative arranged for the forgeries to surface or helped persuade Rather et al to run with the doctored docs. [And with this bait of truth we may catch the carp of falsehood. Since the docs are inauthentic, Rather was fooled, Rather is liberal, the media is liberal, the media is fooled, the media is biased, liberals are inauthentic, Kerry is liberal, Kerry is inauthentic, Kerry is the same as the memos, the forged memos are an emblem of a deep truth about Kerry’s character. QED. BFA (by free association).] But the scandal has indeed hurt him, badly, and not just as some talking heads suggested by denying him even the possibility of traction with the electorate at a moment when his campaign is foundering.
The forged docs have hurt Rather and CBS because he and it ought to have seen through them. They shouldn’t have been fooled because the docs were so obviously inauthentic.
Which brings us to John Kerry, the candidate who is defined by his inauthenticity.
Kerry is one of the most liberal members of the United States Senate - the most liberal member if you believe The National Journal - but Kerry has tried to portray himself as a centrist.
(And never mind the big bug scourge of the skies ‘most liberal’ charge.)
It is harder to get the media (a.k.a. the Mainstream Media, a.k.a. Old Media) off the memogate hook. This is a delicate point, but I think however the flames blow, it’s not the partisan heat but the stupidity.
It is truly very surprising that expert - even amateur - document authentication at CBS was functionally absent. (In the early days of memogate, there was much intelligent and vigorous amateur apologetics on behalf of the memos by outsiders. But CBS itself was a no-show.)
To what extent is it reasonable to generalize from this case - Rather’s case - to the mainstream media as a whole? One-off flukey breakdown or systemic defect? You’ve got to ask yourself, like any sane person: what’s the frequency, Kenneth?
Instapundit links approvingly to the Slate article quoted above, which argues Rather is stark raving mad; also that his uniquely privileged position allowed him to continue in this lively vein for decades where others would have been fired. Whether this is true or not, Reynolds is happy to buy it; but it hardly seems a model for mainstream/old media as a whole. How is Rather the exception that proves the rule, rather than the exception that just is the exception? I realize he’s powerful, but there are other people in the media. How does the way he is prove the way the others are?
I much enjoyed the recent, lively roundtable discussion of ‘moral relativism’ - initiated by Volokh, taken up by Yglesias, Weatherson, Drum, and others. Let us consider ‘liberal media bias’. As with ‘moral relativism’, the very meaning of the charge - never mind its truth - is elusive. This makes proof by means of anecdotal evidence doubly dubious. The non-question-begging form of the question is ‘what is media bias?’ Liberals say there is conservative bias. Everyone grants that there are, at least potentially, non-partisan forms of bias, e.g. bias in favor of sensationalism, bias in favor of stories that can be told as exciting narratives. But what does it really mean to say ‘the media is biased in favor of x’, where x is some ethico-political perspective, value or party? The charge may look simple but is almost certainly compound. This consideration gets short shrift even in rather lengthy discussions of the subject I have read. Let’s just google up a set of possible definitions for ‘bias’. WordNet gets top hits and will do:
1) a partiality that prevents objective consideration of an issue or situation
2) a line or cut across a fabric that is not at right angles to a side of the fabric
3) influence in an unfair way; “you are biasing my choice by telling me yours”
4) cause to be biased
So bias may be: an intellectual failing (procedural irrationality), a moral failing (procedural injustice), diagonality or obliquity (a.k.a. somehow not playing it straight for tactical reasons), or being the cause in others of any or all of the aforementioned.
I think the charge of ‘media bias’ is usually an unspecified mix of all four. (If you doubt that fabric has much to do with alleged media fabrications, think of Polonius’ little disquisition on push-polling: “And thus do we of wisdom and of reach/ With windlasses and with assays of bias/ By indirections find directions out.” Well, not quite push-polling, but it’s Rovean in its deceitfulness.)
Four related senses might not seem too intoxicating a semantic mix. But then again it might be better to decide once and for all whether what you are saying is that everyone in the mainstream media is insane, or evil, or insane and evil, or neither insane and/or evil yet systemically the cause of insanity and/or evil in others, or just sneaky. Or what.
You might think it is self-evident journalistic duty to weigh evidence fairly; ethics and objectivity then cannot fail to line up. Well, you wouldn’t be wrong, but it’s hard to say what ‘fairly’ means in a way that will forestall dispute. Worse, people do not bother to charge media bias unless they mean to indict ‘the system’. But to allege general media bias would seem to require fairly comprehensive evidence concerning the whole media ecology. Above all, we also need a normative background of ‘media fairness as justice’, to coin a phrase. The idea is surely supposed to be that partisan journalists can (or ideally should) be brought to see, not that their specific partisan allegiance is mistaken, but that their over-aggressive deployments of it on some or all occasions should be voluntarily restrained in the service of higher principles of fairness (or something). Maybe we should work out what general agreements about distribution of informational goods would be ideally agreeable to reasonable parties (say, from behind a veil of ignorance preventing all parties from knowing what parties they belong to, and what media organs they control.)
We needn’t get so infernally Rawlsian right off the bat, it will be objected. (It would annoy conservatives to have to become John Rawls in order to defeat liberalism, which is why I am enjoying writing this post.) Just a Free Market of Ideas, not a Welfare State of Ideas, please. The truth will out. Some Hayekian order of things.
This is a possible view. But I don’t think it’s what most complainers about media bias are asking for. Let me suggest why. Suppose it were suggested that a Free Market of Ideas is guaranteed sufficiently by free speech guarantees. Well, obviously this isn’t enough for most conservative complainers about media bias, who know they’ve already got the First Amendment. (I’m setting aside screwballs who think Kerry wants to take away Bibles.) What conservatives think they should get is equal representation on the evening news and the editorial page. (Never mind for the moment whether they are unreasonable not to notice they’ve already got it.) But why should Republicans be represented, let alone equally? Isn’t a demand to have one’s voice not just not muzzled but broadcast an unconscionable prejudgment of the operation of the invisible hand of the free market? If it turns out that 89% of journalists are Democrats (I think I saw that figure, which was not accompanied by any statistic about how many CEOs of media corporations are Republicans) - if most journalists are Democrats, why should this be regarded as proof of Democratic bias, rather than proof that truth itself is biased against Republicans, since the market rejects them? Obviously the response will run: this market isn’t fair. Which may be fair enough. But there’s that word again, ‘fair’. If a market should be not just free but ‘fair’, what sense of justice helps us understand what ‘fair’ means? Economically, we understand that a free market is supposed to be optimal for the production of wealth and acceptable for its distribution. But what is a free market in ideas - a Media Market of Ideas - supposed to optimize: truth? The trouble with saying the market of ideas is supposed to optimize truth, and then complaining about partisanship as an obstacle, is that you must decide whether one partisan perspective is ‘truer’ than the other. If yes, then partisanship is actually what the market should aim at, not avoid.
If not, then partisanship is not obviously a hindrance to truth. (Think about it. No really, this isn’t sophistry.)
It is plausible that what people really think should be optimized is not the production of truths but the expression of belief (and not because this mix conduces to truth but as an end in itself). A range of beliefs. A suitable, representative range of beliefs. The thought is that everyone has a right not just to get the truth from the media but to have their ‘truth’ (i.e. belief) heard by others. The media ought to ‘sound like’ America. But unless this thought is accompanied by a relativistic belief that all beliefs are equally valid, we are now heading away from a free market, not towards it. The idea that everyone is entitled to equal representation in the media is akin to the economic proposition that everyone should be guaranteed about the same amount of money, which is not notably Hayekian. The sense of ‘fairness’ in play here is at least Rawlsian, if not positively communistic. It’s doxastic affirmative action. You make sure no one is left behind, is silenced, unheard, excluded. Proper distribution of the media pie is more important, maybe, than ‘growing’ the pie.
A third possibility, aside from maximizing truth or starting a doxastic welfare state, is to suggest that media fairness truly does entail cultivating some lofty class of Watcher-like observers who indeed do nothing that could possibly express partisanship, let alone influence politics to the advantage of one side or the other. This strikes me as not just unrealistic but pretty clearly incoherent, but someone might try to make out how it is possible.
A fourth possibility is that media fairness just means coming up with the least uncivilized forum you can in which people who can’t agree to agree can disagree without hurting each other. Media as scream therapy: let everyone get their doxastic aggressions out with respect to everyone else, and - to make this possible - make the fight maximally open to all whose blood is angried up by the news. On this model the point is more to relieve stress than to arrive at truth (which may be deemed counter-intuitive, though it is certainly true that people enjoy politics the way they enjoy sports, as entertainment for spectators, and as good mental exercise for amateur participants). This is media-as-those-padded-bats - you know, the ones in Dr. Marvin Monroe’s family therapy center? Fairness just means: effective aggression therapy. (Any takers for this theory of media fairness?)
A fifth possibility is that major media figures, like Rather, should be regarded as political appointees - like Supreme Court justices (especially in light of average retirement age). But there are awkwardnesses whose revelation I leave to the reader’s imagination.
A sixth possibility is that somehow it is mystically or pragmatically necessary to have a balance of partisanship in the media - just as there needs to be vigorous prosecution and defense in a trial. No one person can think both like a prosecutor and a defense attorney, and no one person can be left and right, but you need both, so the tasks need to be split, with a jury to vote. Why this sort of institutionally-induced schizophrenia is a precondition for civic mental health is obscure but perhaps there is something to it.
Looking over my notes I have fourteen points to go. That will never do. Quickly now, one of the big problems with wading into media bias with vague notions is that there are too many things that could potentially answer to ‘bias’. This creates what we might call a ‘grievance hazard’ (analogous to a moral hazard). The narcissism of small diffidences, giving people the pleasure of feeling abused by ‘the system’.
1) Just to get it out of the way, there is what we may call the null sense of ‘bias’. Charges of liberal media bias are often insincere. In What Liberal Media? Eric Alterman quotes former Republican chair Rich Bond about how crying ‘liberal bias’ is often just a matter of ‘working the ref’ in the hopes he’ll cut extra slack next time. (So charging ‘media bias’ is sometimes actually an indulgence in it, if ‘bias’ is understood to cover ‘attempts to make others biased’.) Alterman also quotes Grover Norquist: “The conservative press is self-consciously conservative and self-consciously part of the team. The liberal press is much larger, but at the same time it sees itself as the establishment press. So it’s conflicted. Sometimes it thinks it needs to be critical of both sides.” Norquist hereby acknowledges the existence of what Yglesias aptly terms the ‘hack gap’, which considerably hobbles the left. Well, I’m obviously a liberal talking here. But conservatives can at least agree that there is such a thing as working the ref. Probably they think liberals do it too.
2) Related to the null sense of ‘media bias’ is an analytic or redundant sense. Let’s use ‘liberal bias’ as our example. Liberal media = liberal media bias. Since liberalism is known (to all decent God-fearing folk) to be just plain wrong and not in any sense an even minimally valid point of view, any liberal reporter or reportage or media organ must be biased. Any story - even a true story - that would tend to incline viewers towards liberalism is biased (since a thing that causes bias is itself, by definition, biased.) Any flagrantly wrong result, systematically arrived at through multiple trials, must be the result of some systemic defect - a bias, if you will - in the mechanism.
Making ‘liberal media bias’ into a pleonastic way of saying ‘liberal media’ is obviously not terribly interesting, unless we want to change the subject and debate the permissibility of profound intolerance for alternative points of view in a democratic system. (Is it immoral to think the other side is intolerably evil and wrong?) Arguing about media bias, in a spirit of justice, is only interesting if you are trying to articulate an overarching framework of fairness that it is reasonable to expect all parties to grant is a fair framework. The assumption that liberalism is wrong will not be granted by liberals, ergo this proposal is a non-starter. (And of course the same is true in reverse about conservatism.) Nevertheless this redundancy sense of bias is important, I think, because it is the source of many frivolous accusations of bias, which are not so much insincere (as per 1) but too fanatically sincere for their own good.
I think senses 1) and 2) generate a lot of noise that one should strive to hear past. They explain how a lot of people come to feel aggrieved, i.e. by inhabiting their Academy Award-winning poor-poor-pitiful-me performances with a bit too much Method Acting sincerity, or else by mistaking their own intolerance of other points of view for other points of view’s intolerance of them. (I think this page is pretty clearly 95% noise, due to 1 and 2. And this is one of the more prominent conservative media watchdogs (I think). It is a very bad sign that liberal media bias is explained as partly due to the fact that, since “journalism school graduates have not been properly educated about the importance of telling the truth, there is a constant influx of new journalists who start out on the wrong foot.” Assuming that liberals know they are lying - since everyone knows liberalism is false, but not everyone knows lying is wrong - is paranoid.)
3) There’s the ‘I am the center of the universe’ sense of bias. This is a slight broadening of 2). Degree of media bias is a straightforward function of degree of divergence from my own views, which are (naturally) sober and considered and unbiased. There is a sense in which we all inevitably feel this way, even though it’s silly. This is actually an interesting psychological datum. It has to do with partisan alignment (values generally) being, at bottom, a rather murky business - a temperament, but how does one arrive at it? A temperament is not an intellectual conclusion or even a choice. It is tempting, then, to flirt with the notion that all partisanship just IS a sort of bias, i.e. a systemic tendency to arrive at some set of irrational results. (It’s not rational because at bottom it has no transparent explanation.) Of course this lesson is never applied home. And it isn’t really clear that it is usefully applied abroad. It trivializes the notion of bias by universalizing it (with one minute but egotistically important exception.)
Obviously this is useless as a basis for formulating any shared sense of media fairness. (“I’m the center of the universe.” “No, I’m the center of the universe.” “I’m the center of the universe and so’s my wife!”) But here again I think we probably have the source of much useless noise, as per 1) and 2).
4) Alternatively you could more democratically declare that bias is a function of degree of deviation from some inherently moderate mean or average of all the positions, weighted for popularity. I think there is a sense in which this is what people are probably saying they are looking for from their news anchors when they say they should be ‘unbiased’ (even though if they got it they’d probably be bored). But it doesn’t make a lot of sense. It turns any strong but unpopular or extreme view into a bias, by definition. But people who have unusual views are often far more reflective about them than people who don’t. Turning the point around, this approach makes being bland and wishy-washy and unreflective (just going with the flow) into being fair, by definition. There might be an argument for insisting that news anchors be bland, in this average-of-all-positions way, as a sort of crude proxy of Watcher-like lofty uninvolvement. But it wouldn’t make much sense to go on and call this ‘fair and balanced’. It’s more a diplomatic solution than an intellectual one. So it shouldn’t be argued for by arguing against ‘bias’.
5) What 3) and 4) try to do is turn (almost) any sort of licit partisanship into bias, by definition. This isn’t really satisfactory, so you might try making ‘bias’ a function of illicit partisanship. You say you are ‘fair and balanced’, or you say your media organ presents both sides, but it really pushes one side. This amounts to working the ref (as per 1) but from the inside, as it were. (It is also a ‘not playing it straight’ kind of case. You present yourself as one thing, when in fact you are something else.) There is a bit of oddity here in that it does not necessarily follow that the person pretending to be what they are not - e.g. neutral - is actually biased. They might be highly reflective about their views and aware of the arguments on both sides. Nevertheless they try to get other people to accept the views they think are right in sneaky ways. Let me subdivide these cases further:
5a) You pretend to be utterly objective and non-partisan, in a Watcher-like way, but really you have some (possibly very slight) partisan bent, and you perfectly well know it.

5b) You say you are utterly objective and non-partisan, in a Watcher-like way, but really you have some (possibly very slight) partisan bent, only you don’t know it. (You are always unconsciously picking loaded terms to describe disputes - describing those you like as activists, those you don’t as special interests; some folks are freedom fighters, others are terrorists, so forth. Your colors show.)
5c) Underreported partisanship. You frankly admit to being partisan, but you intentionally fail to disclose the degree of your partisanship. (You say you are a moderate Republican when actually you are way out on the right wingtip, say.) This can seem less dishonest (biased) than a) or b), because at least you aren’t pretending to be neutral. So it’s only one lie, not two. But if the degree of disconnect between your self-positioning and your actual position is extreme it is arguable that you are more biased (in the sense of sneaky and causing bias in others) than a) or b).
There is a temptation (especially on the right) to assume that no opinion journalism can be accused of media bias (talk radio is defended this way) because it doesn’t pretend to be what it is not, i.e. neutral (whereas supposedly network news does cultivate this lofty air). But this doesn’t obviously make sense since there are things besides ‘neutrality’ that you can pretend to. The potential for deceit (causing bias in others) is huge once you have decided you can express whatever opinion you please, however irrational, false or misleading.
5d) Like 5c but a case of severe false-consciousness, rather than conscious concealment. It is quite likely that, due to 1)-3), many people who are not moderates, in any clear sense, sincerely believe that they are. This is likely to lead to them being biased in a number of obvious ways.
6) Rhetorical bias. A nice quote:
Spoken by a good actor – and every great preacher, every successful advocate and politician is, among other things, a consummate actor – words can exercise an almost magical power over their hearers. Because of the essential irrationality of this power, even the best-intentioned of public speakers probably do more harm than good. When an orator, by the mere magic of words and a golden voice, persuades his audience of the rightness of a bad cause, we are very properly shocked. We ought to feel the same dismay whenever we find the same irrelevant tricks being used to persuade people of the rightness of a good cause. The belief engendered may be desirable, but the grounds for it are intrinsically wrong, and those who use the devices of oratory for instilling even right beliefs are guilty of pandering to the least creditable elements in human nature. By exercising their disastrous gift of the gab, they deepen the quasi-hypnotic trance in which most human beings live and from which it is the aim and purpose of all true philosophy, all genuinely spiritual religion to deliver them. – Aldous Huxley, The Devils of Loudun
By the logic of this admirably high-minded passage, any media person, process or product that avails itself of the tools of rhetoric, for good or ill, is biased: intellectually and morally discreditable, and the cause of bias in others. The trouble is that this is obviously too high a standard for practical purposes. The fact that Huxley writes almost magically well almost makes us overlook the fact that what he proposes makes no sense. We are all completely guilty of using some rhetoric, so there’s no point trying to maintain our own innocence if this is the standard. Here again we have a sort of grievance hazard. I think many accusations of bias do boil down to a feeling of indignation at the other side’s rhetoric, and a feeling that one’s own is not only decorative but functional.
Still, it does seem that one has to distinguish between degrees. There are sound arguments, rhetorically armored and streamlined for maximum penetration of thick enemy skulls. And there is just plain nonsense all the way down. And a lot in between. The former pole is less clearly ‘biased’ than the latter.
And there is no question that the liveliness of the general issue of ‘media bias’ depends on this sense - shared by Huxley - that we are all half-hypnotized and, dammit, something ought to be done to mitigate our sorry state.
Rhetoric needs something like ‘just war’ theory. Some set of rules that acknowledges that violence, deceit and nastiness are inevitable, but perhaps some of the worst excesses can be legislated away for the greater good of all. (Advocating rhetoric-free argumentation is like advocating pacifism. Noble but sadly not fit for this world of ours, unless you are incredibly lucky enough to be dealing with some university-educated British folks in a moment of great self-doubt - like Gandhi.)
I don’t think many people who condemn their opponents’ use of rhetoric as ‘bias’ have a suitable ‘just war’ theory of argument, as it were.
I guess I’m just sayin that, between the rock of not really having a good ecological theory of the media as a whole, and the hard place of not having really worked out exactly what sort of bias is bothering them, I suspect certain bloggy media critics could slow down before trying to leverage Dan Rather’s pain into a world-historical moment.
I have several more points but that’s enough. I’m going to watch “Hellboy”. Do you think Tom Waits ever wonders whether, if he went to the gym, he could look like Ron Perlman?
Matt Yglesias and Kevin Drum have been discussing various ethical buzzwords that have been flying around recently, all starting from this post of Eugene Volokh’s. I don’t have enough expertise to helpfully say very much here, but I thought I’d try adding some small points.
First, it isn’t relativist in any interesting sense to say that whether a particular action is right or wrong is relative to the circumstances in which it happens. Of course whether a particular act of getting into a car and driving off with it is wrong depends on, among other things, whether it is my car. If you want you can say that makes the morality of the action relative to ownership relations or the like, but I don’t think that’s a very helpful way of talking. Neither Matt nor Kevin nor Eugene is talking this way, though Eugene feels compelled to argue that it isn’t helpful.
Actually what Eugene is considering in the relevant passage (the stuff on exceptions to principles) is moral particularism. Like every term in philosophy, that one covers a family of cases, but at the heart of them is the view that moral principles are not very important to morality, and what is more important is the application of good moral judgement. The most extreme version is that there are no exceptionless moral principles. As Eugene rightly says, this way of thinking doesn’t line up easily with either moral relativist views or with any side of the political spectrum. (I guess it would be hard to be a relativist particularist because the moral standards of cultures seem to often be defined by principles, which seems to create an important place for principles. But this is just to back up the point Eugene is driving at, that particularism is not a form of relativism.)
Matt suggests that moral relativism is a form of moral non-cognitivism. If the two links I’ve given there are reasonable summaries of how the terms are usually used, I think that’s not right. (Matt’s taken more meta-ethics courses recently than I have, so I’m really not the expert here though.) I think one can consistently be a non-cognitivist, even an old fashioned expressivist, and say that when someone from another culture Boos things we Hooray or Hoorays things we Boo they are doing something wrong. That looks like absolutism, and so non-relativism, to me.
It’s complicated because both anti-relativism and anti-non-cognitivism (i.e. cognitivism) are often called realism. But that just shows how confusing realism can be.
Finally, I think there’s an historical error in Matt’s post, but I’m away from my books so I can’t check this for sure.
You’d be hard-pressed to find anyone drawing controversial normative conclusions from meta-ethical premises and, indeed, I think most philosophers would call foul on anyone who did.
If I remember right, or if this website is correct, R. M. Hare explicitly derived his preference consequentialism from his prescriptivism, and Peter Singer has endorsed this line of argument. Matt’s right about the general point that cognitivism/non-cognitivism debates usually cross-cut normative debates, but there are some prominent examples of reasoning from one to the other in the literature.
Imagine that one day, a big bloke with wings taps you on the shoulder. It’s OK, he says, Brian sent me. To offer you this potential wager, on behalf of God, who has more or less given up on the human race except as a subject for philosophy conundrums.
In the envelope in my left hand, he says, I have a number, called X. At some point in the recent past, X was drawn by God from a uniform distribution over the real numbers from 0 to 1 inclusive. You can have a look at it if you like.
In my right hand, he says, I have a mobile telephone which will allow me to receive a message from God with another number, Y, which will also be drawn by God from a uniform distribution on the line 0 to 1 inclusive.
The wager is this: if you accept the wager, and X and Y are equal, then every human being currently alive on the planet Earth will be horribly tortured for the next ninety million trillion years and then killed. If you accept the wager but X and Y are not equal, then a small, relatively undeserving child somewhere will be given a lollipop.
So, do you take the wager or not?
“Go on”, says the angel. “Look at it as a problem of utility maximisation. Just look at the utility associated with each possible outcome, multiplied by the probability of that outcome.
“In fact, it’s quite easy. Define p as the probability that Y equals X. In the favourable outcome (which occurs with probability 1-p), a child gets a lollipop, which increases the sum of utilities by a small amount. In the unfavourable outcome (occurring with probability p), the sum of utilities is reduced by a massive amount. So what is p?
“Well, what’s the probability that a continuous random variable Y will be equal to a particular value X?
“It’s zero.
“Therefore, if you look at the calculation, accepting the bet gives a zero chance of a very horrible outcome, and therefore a (1-0) certain small increment to utility, so you should take it.”
“Hurray”, you say, aware that angels have in the past not been so forthcoming in explaining the mathematics. But you are still nagged by doubts; wouldn’t the laugh be rather terribly on you if Y turned out to equal X after all?
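The angel’s bookkeeping is easy enough to put in code. Here is a minimal sketch, with two loudly flagged assumptions: double-precision floats stand in for God’s real numbers (a float collision is tiny-but-nonzero, unlike the true continuous case), and the utility figures `U_LOLLIPOP` and `U_TORTURE` are invented for illustration.

```python
import random

def simulate_wager(trials=1_000_000, seed=0):
    """Estimate P(X == Y) for two independent uniform draws on [0, 1].

    Floats are only a stand-in for the angel's real numbers: a float
    collision has probability about 2**-53 per trial, tiny but not 0.
    """
    rng = random.Random(seed)
    collisions = sum(rng.random() == rng.random() for _ in range(trials))
    return collisions / trials

p_hat = simulate_wager()

# The angel's expected-utility arithmetic, with invented utility numbers.
U_LOLLIPOP = 1          # small gain: one child, one lollipop
U_TORTURE = -10**30     # massive loss: everyone tortured for aeons
expected_utility = (1 - p_hat) * U_LOLLIPOP + p_hat * U_TORTURE
# p_hat comes out 0.0 in any run you are at all likely to see, so the
# angel's sum says: take the bet.
```

Of course the simulation only shows that collisions are vanishingly rare; for genuine reals the event Y = X has probability zero yet remains possible, which is exactly the gap between ‘probability zero’ and ‘impossible’ that the wager trades on.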
I’ve been researching an article about the morality of human cloning, prompted almost entirely by Brian’s article co-written with Sarah McGrath, which is a defense of the permissibility of cloning. Prior to reading McGrath and Weatherson’s paper (or more precisely, Brian’s multiple postings here when they were writing it) I had no real intuitions about the issue, but their defense prompted me to think about what might count as good reasons to prohibit cloning. I share their dismay at the weakness of the arguments most commonly presented.
But this post isn’t directly about why cloning might be prohibited.
In the course of their defense of cloning Weatherson and McGrath appeal to an interest parents have for which they make no argument: the interest in rearing children who are their biological descendants. They need, I think, to assume this interest because without it there would be no claim to assistance in reproduction except when the supply of potential adoptees had been exhausted (which, in our world, is not the case). Potential adoptees have an extremely strong interest in finding a family home, because it is better to be brought up in families than in other kinds of institutions. I have looked, rather lackadaisically, for other pro, or more-or-less-pro, cloners, and those I have found — John Robertson, Mary Warnock, and Buchanan et al. — all make this assumption, and do so without any argument.
Now, in the context of the actual debate about cloning, it is fine to make this assumption, because the anti-cloners, also, tend to share it, although it plays no role in defending their opposition to cloning (or almost no role). But is it true?
Well, obviously it is true, right? Then why does it get so little defense? I can see immediately that lots of people care a great deal about having biological offspring, and raising them. And I can also see that there might be both efficiency considerations and child-centered reasons for wanting children, other-things being equal, to be raised by their biological parents. And I am not, I promise, contemplating the idea that we should redistribute children away from their biological parents to those best suited to rearing them or anything like that. But I am having a hard time figuring out why parents have an interest in rearing children who are biologically related to them sufficiently strong that it would support, for example, a policy that would enable people to do that even at the cost that some significant number of children (potential adoptees) will be reared in orphanages rather than in family homes.
I’ve talked to lots of people about this, and what they tend to give me is just intuitions, not actual arguments. The pro-cloners insist, rightly as far as I can see, that rearing a child biologically related to oneself is a different activity (or project) than rearing a non-biologically related child. Sure it is different. But it doesn’t seem to me that it is incommensurable, nor that it is more valuable, or even that it contributes more value to the person who is doing the rearing. And much of the activity is the same (when described at what seems to me the same level of abstraction): providing for the needs, including the developmental needs, of a child, and participating in a loving relationship with that child. There doesn’t, to me, seem to be enough of a difference to play the role that the pro-cloners want it to.
Can anyone either direct me to literature that makes the case for there being a morally weighty parental interest in rearing a biologically descended child, or just give me a good argument to that effect?
I was pleased to see this paragraph from Matthew Yglesias.
As a journalist, I keenly feel the pain of the generalist. I find myself in Mead’s shoes all the time — needing to somehow touch on a range of material that I am perfectly aware I don’t understand nearly as well as those people who’ve spent years focusing in on it narrowly. I like to think that having studied philosophy as an undergraduate is a reasonably good preparation for such a task. Obviously, I never wind up writing an article about meta-ethics or the way structurally similar issues about reductionism pop up in diverse areas (insofar as I know a lot about anything, it’s these things), but what philosophy fundamentally teaches you about (especially as an undergraduate when you don’t really have the time to master any particular sub-area) is how to spot an unsound argument, irrespective of the topic of discussion. That’s a useful and generally applicable thing. And I think we’ll see it pop up again and again in this discussion.
I like to think that some of the specific things I teach in undergraduate classes have relevance to what my students go on to do, but ultimately I’d be happy if most of the students picked up just the kind of skills Matt is talking about. One of the side effects of philosophy being so abstract and disconnected from everyday considerations is that to do well at it, you have to be good at reasoning about unfamiliar topics. And in the modern economy that’s a very valuable skill.
Brian Leiter reports that Sidney Morgenbesser has died at the age of 92 [Sorry, that’s what NPR said, the right age is 82]. NPR have an audio tribute with Arthur Danto. I’ll post links to obituaries as they appear. There was a rash of Morgenbesser anecdotes posted a while back, the best place to start is probably with this post at Normblog and follow the links back. My favourite:
Question:”Why is there something rather than nothing?”
Morgenbesser: “Even if there were nothing you’d still be complaining!”
Obits: New York Times, New York Sun, Columbia News.
Brian Leiter passes on the sad news that John Passmore has died. Here is The Australian’s obituary. If any others appear I’ll try to update this post with links to them.
I’m in the middle of reading Andrew Crumey’s rather intriguing novel Mr Mee at the moment. One minor point of interest is that this may be the first work of fiction to contain a description of the Monty Hall problem (see Brian’s post below) in the form of a letter, supposedly written in 1759 from a Jean-Bernard Rosier to the Encyclopedist d’Alembert:
Sir, you may know that many years ago one of our countrymen was taken prisoner in a remote and barren region of Asia noted only for the savagery of its inhabitants. The man’s captors, uncertain what to do with him, chose to settle the issue by means of a ring hidden beneath one of three wooden cups. If the prisoner could correctly guess which cup hid the gold band, he would be thrown out to face the dubious tenderness of the wolves; otherwise he was to be killed on the spot. By placing bets on the outcome, his cruel hosts could enjoy some brief diversion from the harsh austerity of their nomadic and brutal existence.
The leader of the tribe, having hidden his own ring, commanded that the unfortunate prisoner be brought forward to make his awful choice. After considerable hesitation, and perhaps a silent prayer, the wretch placed his trembling hand upon the middle cup. Bets were placed; then the leader, still wishing to prolong the painful moment of uncertainty which so delighted his audience, lifted the rightmost cup, beneath which no ring was found. The captive gave a gasp of hope, and amidst rising laughter from the crowd, the leader now reached for the left, saying that before turning it over he would allow his prisoner a final opportunity to change his choice. Imagine yourself to be in that poor man’s position, Monsieur D’Alembert, and tell me, what would you now do?
Via Justin Leiber, here’s a playable version of the Monty Hall Problem. It’s simultaneously a lesson in decision theory and in the perils of small sample sizes - my first two plays I lost the car by switching.
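For anyone who distrusts a two-game sample, the game is easy to simulate. A quick sketch (the function and variable names are my own, not from the linked page):

```python
import random

def play(switch, rng):
    """One round of Monty Hall; returns True if the player wins the car."""
    doors = [0, 1, 2]
    car = rng.choice(doors)
    pick = rng.choice(doors)
    # Monty opens a door that hides no car and isn't the player's pick.
    monty = rng.choice([d for d in doors if d != pick and d != car])
    if switch:
        # Switch to the one remaining closed door.
        pick = next(d for d in doors if d != pick and d != monty)
    return pick == car

rng = random.Random(42)
for n in (2, 10_000):
    wins = sum(play(switch=True, rng=rng) for _ in range(n))
    print(f"switching over {n} games: won {wins}")
# Two games can easily show zero wins for the switcher; ten thousand
# settle near the true 2/3 win rate.
```

The moral about sample size falls out directly: a switcher can easily go 0-for-2, but over many plays switching wins about two-thirds of the time while staying wins about a third.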
New from MIT Press comes Causation and Counterfactuals, an anthology edited by John Collins, Ned Hall and L.A. Paul. At the Pacific APA meetings, the latter was recently identified, much to her disgust, as “Kieran Healy from Crooked Timber’s wife.” Causation and Counterfactuals presents the best recent work on the counterfactual analysis of causation, which helps us understand the metaphysical underpinnings of sentences like “If you don’t buy it you’ll be sorry,” “If I hadn’t blogged so much my own book would be finished by now,” and “If everyone on CT posted a shameless plug simultaneously, who’d be responsible?” The book is also perhaps the only place to read the full, gripping saga of Billy and Suzy, a tale of passion, overdetermination, war, double prevention and appalling violence.
Orin Kerr writes: “The English language needs a word for when advocates on both sides of an ongoing debate switch rhetorical positions, and yet they insist on decrying the inconsistency of their opponents while overlooking their own inconsistency.” If Prof. Kerr will settle for a phrase, let me suggest ‘poetic justice as fairness’. I know it will never catch on among the non-Rawls joke getting set, but it’s the best I can do. (Actually what I am talking about is a slightly more generic version of what Kerr is talking about.) ‘Poetic justice as fairness’ denotes a vendetta-based, rather than abstract reason-based approach to argument. Dialectic as feud; the Hatfields and the McCoys do thesis and antithesis, with stupidity as synthesis. The rule is: if you think your opponent committed a fallacy in the recent past, you are allowed to commit a fallacy. And no one can remember when it started, but the other side started it. It is difficult to break the tragic cycle of intellectual violence once it starts.
Timothy Burke has a post up at Cliopatria about why he doesn’t like Michael Moore, which is in this general vein:
What I find equally grating is the defense of Moore’s work as “fighting dirty” because the other side is doing so. I agree that many of the critics of Fahrenheit are astonishing hypocrites, applying standards that they systematically exempt their own favored pundits and politicians from, but the proposition that one has to play by those degraded rules to win the game repels me. If it’s true, then God help us all.
UPDATE: From comments received, it is clear my post appears even more naive than, in fact, it may be. I appear to be marvelling that these beings you call ‘humans’ sometimes employ rhetoric. Actually, I’m just giving a name to a peculiar slip. 1) You perceive that the enemy has employed a fallacy or other illicit rhetorical technique. 2) You denounce this as such. 3) You employ the very same trick against the enemy when the wheel turns and the opportunity arises. 4) You do so with a sense not just that it is fair to fight fire with fire but that somehow the bad argument has become mysteriously good, due to the fact that there is poetic justice in deploying it. (Admittedly, this isn’t what Burke is talking about, so my rather narrow point about argumentative psychology was muddled more than helped by the inclusion of the quote.)
2nd UPDATE: It occurs to me that the Rawls connection was probably not clear either. So I’ll just tuck a few further meditations discreetly under the fold.
Why should poetic justice feel like argumentative fairness? Because it is right on the line between two senses of ‘just’: 1) x is just if x makes some sort of absolute moral sense; 2) x is just if x has been contractually agreed upon in advance by relevant parties. Rawls’ strategy, by means of his ‘original position’, is sort of to have it both ways. It’s sort of a social contract, and sort of an abstract argument. (This is violent oversimplification.)
Poetic justice as fairness is a matter of holding your enemies to positions they have ‘agreed’ to beforehand, even if those positions don’t make sense. So it feels fair in sense 2.
Let me provide a concrete example (which bears some relation to a spat I had with Tom Smith of the Right Coast some months ago, but I won’t pin the following tale on Tom.) Suppose you are a conservative who is bothered by the fact that conservatives are numerically under-represented in humanities departments. (Let’s grant under-representation, in some absolute sense, for the sake of argument.) Let’s suppose further that you are annoyed by what seems to you a flabby rhetoric of ‘diversity’ on behalf of affirmative action programs of which you disapprove. (Never mind for now that there are less intellectually disheveled ways of affirming affirmative action than incoherent hand-waving about diversity.) You think your lefty enemies are, by their own stated principles of ‘diversity’, committed to affirmative action for conservatives. This result would, of course, horrify the lefties. You therefore have, plausibly, the basis for a kind of reductio ad absurdum on the argument from diversity to affirmative action. (Yes, I know only foolish lefties would ever allow themselves to be pinned in this obvious way. Never mind about that.) Now: should you, as a conservative, actually affirm the absurd consequence of the reductio? Should you ask for affirmative action for conservatives? By your intellectual lights this would be wrong, because you are philosophically dead set against affirmative action. So no, you shouldn’t ask for it. Nevertheless your conservative mind may feel that it is not only tactically tempting, and a poetically just petard hoist, but truly intellectually fair, i.e. not hypocritical, to ask for it. Why? Because, in a sense, your opponents have ‘contracted’ to it by means of all the prior diversity talk. You feel they struck a deal in favor of this stuff. And fair is fair, when it comes to contracts.
More specifically, no one can plausibly complain that ‘x is unfair!’ if the following situation obtains: everyone who is unhappy with x has agreed beforehand to principles according to which x is fair; and no one who has not agreed to such principles in advance is complaining.
So if you would like the results - namely, more conservatives - and your opponents have implicitly granted that more conservatives would be fair in principle, then it’s fair to advocate affirmative action for conservatives. So you may end up in the absurd position of advocating affirmative action (which you don’t believe is right) on the basis of an argument (which you regard as nonsense) all the while feeling that this procedure is intellectually on the up and up.
I think people think this way quite a bit, actually. Especially in the sorts of cases Kerr discusses - namely, cases in which parties in and out of power flip-flop on any number of questions without feeling the least bit hypocritical. I think it’s because everyone feels that the other side has sort of ‘contracted’ to admit certain things as fair. But, of course, it doesn’t actually make sense to say that someone has ‘contracted’ to make a bad argument into a good one. That’s just not the sort of thing that can be established by contract.
I’ve been spending the afternoon alternating between writing a syllabus for a decision theory course and websurfing. So naturally I’ve been drawn to web sites about decision theory and game theory. And I was struck by this question David Shoemaker raises - are games played in the classroom covered by rules on human experimentation?
As David notes, some of the games that are most useful for teaching purposes require that we mislead the students, or at least that we don’t get their permission before starting the game. And we, as professors, do learn something from how they respond. Fortunately we’re careful as philosophers to avoid things like experimental design, so we don’t get much useful information from the game, but it can look a little like an unlicensed human experiment.
I hope not because the game David describes looks fun to me. Except I don’t think he should back down from having it count for grades. It’s only 10% after all - I think having 10% of the grade ride on how well you can do at a simple game is perfectly reasonable. It’s just a kind of in-class test I think. Maybe given how simple the game is it should only be 5%, but I don’t think it’s wrong to have it count.
Some philosophers, your humble narrator occasionally included, get irritated when people, especially intro ethics students, focus on what we take to be irrelevant details of what are meant to be serious, if somewhat improbably grisly, examples. But really we’re not upset about the lack of philosophical sophistication our students show, just about how stylelessly they complain. If all our intro ethics students were like Fafnir and Giblets I can’t imagine we’d ever be so irritated.
Brian Leiter posts on ‘philosopher’s tics’. Very true, very true. I just happen to have buried something similar in a long recent post you probably didn’t read, and just as well. So here it is. It’s from Imre Kertész’ novel - but I fear it’s his autobiography - Kaddish For a Child Not Born. The narrator is at some sort of forest retreat for writers and thinkers, trying to avoid the writers and thinkers. Alas, he is not successful. “The philosopher was nearing me in a pondering mood; I could see it in the slightly inclined pose of his head, on which his rascally visored cap perched; he approached like a humorous highwayman with a few drinks down his gullet, pondering whether to knock me down or content himself with the loot.”
Brian has already critiqued Christopher Peacocke’s argument that a belief in our capacity to accurately represent the external world is justifiable a priori by appeal to the mechanism of natural selection. Accurate representations of the world are selected for, so (Brian summarizes) “we probably get basic things right most of the time.”
Brian’s the philosopher, so he’s better able than me to spot the big problems in the argument. (He was selected by graduate school for this.) An additional one strikes me. Elsewhere in the world of arguments from natural selection we find arguments that practices like religion or a belief in God are also fitness-enhancing for a whole bunch of reasons and thus likely to be selected for. But the people who make these arguments do so to explain why religious beliefs are useful fictions, not to show that they therefore accurately represent facts about the world. So while Peacocke’s argument seems plausible as long as we restrict ourselves to the contemplation of tables, contemplation of the varieties of religious experience seems to cause him some problems. Of course, you can say that while accurate representations of the world are selected for in the case of the perception of tables, inaccurate representations are selected for in the case of perception of divine entities. But then “basic things” and “most of the time” start to do an awful lot of work in the argument, distinguishing what we get right from what we get wrong starts to look much harder, and the seemingly elegant a priori bridge effected between reality and representation by means of natural selection seems shaky. That’s the problem with arguments from adaptation. They’re a bit too adaptable.
I’ve been reading Christopher Peacocke’s The Realm of Reason, and I was rather struck by one of the moves in it. Unless I’ve really badly misinterpreted what he says in Chapter 3, he thinks you can come to justifiably believe in, and perhaps even know the truth of, theories of natural selection by looking really hard at a kitchen table and reflecting on what you’re doing.
First a little background. Peacocke is a rationalist and a foundationalist. He thinks that whenever you have a justified belief, its justification can be grounded in a chain of rationally acceptable transitions which bottom out in either conscious states or a priori knowable truths. Moreover, the transitions have to be (a) justifiable a priori and (b) have their justified status explained in part in virtue of the contents of the states involved in the transition. There are many problems for this position, the salient one being how do we ever get to know that the external world is anything like how it looks, given such meagre stuff to work with.
Descartes famously appealed to God at just this point. We can tell a priori (via the ontological argument) that a benevolent God exists, and He would not let us be massively deceived. Hence we are a priori justified in going from “That looks flat” to “That is flat”.
In keeping with the spirit of the age, Peacocke replaces God with Darwin. We can justify the transition because we’re probably evolved by natural selection and that means (probably) that we were selected for having accurate representations at least of certain fundamental things. Now as it stands this all seems pretty reasonable, but remember Peacocke is a rationalist. The justification has to work a priori. The mere fact that we’re evolved won’t do it. This is where the tables come in.
The table I’m looking at now has straight edges and round corners. Or at least so it looks to me. That is, I represent straightness when I look at its edges, and roundness when I look at its corners. What explains how I could be the kind of creature that has the capacity for spatial representation? This is a surprising (as Peacocke says, Complex) fact and it cries out for explanation. Some say God’s handiwork explains my representational capacities. Others say a mad scientist. But the simplest explanation, says Peacocke, is that I’m the product of a process of natural selection. Unlike the other explanations, this one does not need to appeal to representational capacities in order to explain representation. So a priori we can tell that the best explanation of representational capacities is natural selection. And since accurate representations are selected for, we probably get basic things right most of the time.
This is pretty ingenious, but there are at least three things you could say against the argument.
First, it isn’t clear in just what sense natural selection is a particularly simple explanation of the existence of representation. The number of complex interactions needed to generate a selective process is rather staggering compared to what you need to mirror nature.
Second, inferences to the best explanation should consider all possible explanations, not just the ones that are current in the philosophical literature. And I don’t see why natural selection does best among all possible explanations at explaining the existence of representations. For instance, the hypothesis of a young earth created by a miracle with creatures with representative capacity seems to avoid some of the messy details of evolutionary theory.
Third, no practicing biologist would seriously consider arguing for natural selection on the basis of careful reflections about tables. If they did, there would be a lot more to complaints that natural selection isn’t better supported than creationist fables and hence doesn’t deserve pride of place in schools. Maybe we shouldn’t be completely deferential to scientists, but this kind of consideration seems to have some force to me.
I have quite a bit of sympathy for Peacocke’s overall rationalist program, but this part of it (and it’s a big part) really needs repair and/or replacement.
Matthew Yglesias on John Rawls :
A Theory of Justice is a brilliant work in many ways, but it’s also — quite obviously — wrong in a number of ways and employs a variety of arguments that are pretty dubious. Any undergraduate can see this, and dozens — if not hundreds — do so every semester. Now it seems to me that a slightly more scrupulous philosopher might have looked at the manuscript and said to himself, “this is a very interesting argument I’m putting together here, but it doesn’t quite work. Better keep on revising.” But instead Rawls put his thought-provoking work out there in the press, attracting decades worth of criticisms, counter-criticisms, suggestions for improvement, and so forth, thus becoming the major figure in postwar political philosophy.
Someone who, all accounts agree, was a deeply serious thinker who cared most of all about getting it right (“scrupulous”), is thus dismissed by a blogger as a careless promoter of his own reputation. Contrast John Rawls on reading the history of philosophy:
I always took for granted that the writers we were studying were much smarter than I was. If they were not, why was I wasting my time and the students’ time by studying them? If I saw a mistake in their arguments, I supposed those writers saw it too and must have dealt with it. But where? I looked for their way out, not mine. Sometimes their way out was historical: in their day the question need not be raised, or wouldn’t arise and so couldn’t then be fruitfully discussed. Or there was a part of the text I had overlooked, or had not read. I assumed there were never plain mistakes, not ones that mattered anyway. (Lectures on the History of Philosophy, p. xvi)
Since my own copy of the first edition of A Theory of Justice is peppered with silly undergraduate marginal sneers, I shouldn’t be too hard on Yglesias. What of Brad DeLong, though, who responds approvingly to Yglesias’s comments by suggesting that David Hume’s Of the Original Contract constitutes an avant la lettre refutation of Rawls? DeLong reveals nothing but his own catastrophic misunderstanding (as a number of his commenters point out).
Following Chris’s post about topics in philosophy that provoke worries about angels and pinheads, I was going to pitch in with a comment setting out my own pet hates, but realised I was veering off-topic when I began to whine not about the problems themselves but about the values of the discipline itself.
Some declarations up front:
But what I also found, at graduate level anyway, were tremendous numbers of people, admittedly much cleverer than I, discussing what looked much more like shmanswers than answers, and being prepared to face down obvious objections by appealing to other shmanswers.
The book that made the most impression on me in my graduate education was Thomas Nagel’s ‘The View from Nowhere’, a key distinction of which was between sceptical, reductionist and heroic views:
Skeptical theories take the contents of our ordinary or scientific beliefs about the world to go beyond their grounds in ways that make them impossible to defend against doubt. There are ways we might be wrong that we can’t rule out. Once we notice this unclosable gap we cannot, except with conscious irrationality, maintain our confidence in those beliefs.
Reductive theories grow out of skeptical arguments. Assuming that we do know certain things, and acknowledging that we could not know them if the gap between content and grounds were as great as the skeptic thinks it is, the reductionist reinterprets the content of our beliefs about the world so that they claim less. He may interpret them as claims about possible experience or the possible ultimate convergence of experience among rational beings, or as efforts to reduce tension and surprise or to increase order in the system of mental states of the knower, or he may even take some of them, in a Kantian vein, to describe the limits of all possible experience: an inside view of the bars of our mental cage. In any case on a reductive view our beliefs are not about the world as it is in itself - if indeed that means anything. They are about the world as it appears to us…
Heroic theories acknowledge the great gap between the grounds of our beliefs about the world and the contents of those beliefs under a realist interpretation, and they try to leap across the gap without narrowing it. The chasm below is littered with epistemological corpses.
It should also be said that Nagel has a wonderful footnote:
A fourth reaction is to turn one’s back on the abyss and announce that one is now on the other side. This was done by G. E. Moore.
I found a lot of reductionism, in Nagel’s sense, about the place and I didn’t like it much. It felt as if those tempted by it didn’t really feel the force of the problems at all, and had stumbled upon philosophy as an outlet for their cleverness, an alternative to solving the Times Crossword over breakfast. Much worse, though, there seemed to be stacks of writing that seemed to be inspired by Moore’s attitude concerning the necessity of leaping.
In the terms of the Philosophical Lexicon, too much Outsmarting went on in the subject. It also seemed to me that it ought not to be regarded as a good dialectical move, in response to an objection to a theory, to reply ‘but what’s your alternative?’. Not having a position, because all the extant views really obviously won’t do, should be way more acceptable than it seemed to be back when I did this stuff.
Of course, a discipline can’t have too many supersmart people, and philosophy may well be just too damn hard for the likes of me. Still, I worried, and worry, that there are too many professional philosophers who appear to be more interested in showing how superlatively clever they are than in addressing the permanent problems of the subject.
Ring any bells for anyone?
Brian Leiter has a couple of interesting posts reflecting on the state of analytical philosophy, and also links to Dan Dennett’s The Higher-Order Truths of Chmess , which I hadn’t read before. Dennett cites Donald Hebb’s dictum “If it isn’t worth doing, it isn’t worth doing well,” and remarks
Each of us can readily think of an ongoing controversy in philosophy whose participants would be out of work if Hebb’s dictum were ruthlessly applied.
I confess to succumbing to a feeling of utter despair whenever I have to listen to people talking twaddle about twater on twin-earth, so that would be my candidate even though I have dear colleagues who care passionately about the topic. But the twaddlers themselves would, no doubt, want to consign some of my pet interests to the bin. Commenters are invited to nominate the disputes that drive them crazy, and those who care about the tw-topic are invited to explain to the rest of us why we should think it matters.
Post intentionally left empty.
Well, not quite.1 But as I was perusing Font Requirements for Next Generation Air Traffic Management (a pdf document I happened upon after googling for something quite different) I came upon several pages bearing the words:
Page intentionally left blank.
Which, of course, it wasn’t. There must be many examples of such self-defeating performatives.
1 On “quite” see below .
I am sick and tired of hearing about that ticking nuclear bomb in Manhattan. You know the one. Why? Because, if you let me put my thumb on the utilitarian scales, I can get you to agree that you have an affirmative moral duty to torture a three-year-old child to death.
I will utilize my mighty powers of stipulation, thusly: the earth is invaded by a race of super-intelligent, but malevolent beings. They subscribe to a xenocidal religion under which they have ravaged the galaxy, exterminating all life when they find it. In the last million years or so, however, they’ve had some sort of reformation, and are now content with a single sacrifice. For occult alien reasons involving astrology, you alone can satisfactorily perform this sacrifice. So, you are given a choice: you can torture one child to death, or the aliens will exterminate all life on earth, over a painful period of time, and wrap the whole thing up by nudging the earth from its orbit into a death spiral terminating in the heart of the sun. Because of your unique religious status, even if you choose not to perform the sacrifice, you will still be forced to kill children, around the clock, in awful ways, for the rest of your artificially extended life. The aliens will keep enough humans alive to serve this terrible purpose, and they will turn a mind-controlling ray on you, under the influence of which your body will commit these acts as your rebellious consciousness looks on in horror. If you agree to perform the sacrifice, by contrast, the earth will be spared, and we will get lots of alien technology which we can use to solve all problems of illness and material want for all humankind. It’s up to you.
Now, does anyone think you shouldn’t torture that one child to death, under the circumstances? No. Does anyone think this scenario helps cast even the feeblest single photon of illumination onto the moral question of whether it is ever appropriate to torture children to death? No.
The ticking nuclear bomb scenario is more plausible, of course. We capture some Al Qaeda guy, and though we don’t torture him, as we don’t know about the bomb, he folds like a cheap suit anyway, destroying his life’s ambition, by telling us that there is a nuclear bomb set to go off in Manhattan, but that he doesn’t know where it is. Then Bruce Willis and the FBI rappel into Osama Bin Laden’s secret hideout, and arrest him, and he’s all “you didn’t read me my rights”, and this one straight-laced FBI agent starts to Mirandize him, but then Bruce Willis is all: “you have the right…to get your ass kicked!”, and he goes buck wild on Osama, and he totally caves and tells them where the bomb is and what the disarm code is. So then, Bruce Willis is racing through the streets of New York, and maybe some funny things happen like a hot dog vendor gets in his way, and he has to drive up on the sidewalk. I was thinking he could maybe be in a taxi with a driver who has a humorous subcontinental accent, but that’s optional. And then Bruce Willis gets to the bomb, and it has a big red digital readout that’s counting down under one minute, but first Bruce Willis has to fight this one super-strong Al Qaeda guy who knows Islamic martial arts, and at the start of the fight Bruce Willis is totally getting schooled, and blood is coming out of his nose and stuff, but at the absolute last second he hits the guy with a tire iron, and then he enters the code right as the digital display ticks down to 0. We’ll all wipe our collective foreheads and say “phew” when that happens, I can tell you!
Now, you may object to the aliens in my example above, but of course you can just replace them with a genocidal tyrant and his henchmen, and the whole world with your entire ethnic group, and mind-control rays with hideous torture under which you will beg for death but it will be denied. See? All tidy. So, basically what I’m saying is, shut the fuck up about that bomb.
Thanks to Tyler Cowen, over at Volokh , I came across Jason Brennan’s list of movies with philosophical themes . It’s a good list , though a bit lacking in non-American content. Possible additions? There’s already been some blogospheric discussion of The Man Who Shot Liberty Valance and Christine Korsgaard’s claim that it illustrates Kant on revolutions (scroll down comments). Strictly Ballroom arguably deals with freedom, existentialism, and revolution. Rashomon is about the epistemology of testimony. Dr Strangelove covers the ethics of war and peace and some issues in game theory (remember the doomsday machine?). Suggestions?
UPDATE: I see Matthew Yglesias is also discussing this.
As Kieran noted yesterday I’ve been gallivanting around the world (most recently into St Andrews) so I haven’t had time to promote the latest round of philosophy blogs. Actually there have been two big group blogs launched since the Arizona blog Kieran linked to. I was going to try and make a systematic list, but that’s hard work away from one’s home computer, so I’ll just link to David Chalmers’s very good list of philosophy blogs instead.
Unlike CT, most of these blogs are geographically based. The contributors to group blogs are usually from the same time zone, and frequently from the same zip code. I prefer CT’s cosmopolitan flavour, but that isn’t looking like becoming the dominant form of blogging. That’s a pity, because the real attraction of the medium, to me anyway, is that it helps overcome the tyrannies of distance. Hopefully active comment boards and crosslinks can do that even if the blogs themselves are spatially centralised.
And further into the envelope madness …
Although I started out on the side of John and Bill Carone in believing that there was something funny about the two-envelope problem, I’ve always been suspicious of claims that the class of infinity paradoxes (even Zeno’s Paradox) can really be tamed by asserting that they disappear if you know how to take limits properly. With that in mind, I mercilessly torture some of Greg Chaitin’s work to create a version of the two-envelope paradox in which I don’t think there are any limit arguments to make use of. Once more into Socratic dialogue ….
ANGEL: So here we bloody well are again, eh?
DANIEL: Yes, yes, I know the drill. Two envelopes, purgatory, hell, etc, blah blah blah. Have you quite finished flipping your coin?
ANGEL: No coin flipping this time. Instead, a little maths test.
DANIEL : Maths test?
ANGEL: Yes. In this first envelope, you will see written down an exponential Diophantine equation. Set to work solving it.
DANIEL: How, may I ask?
ANGEL: Any way you want to. Here’s a textbook on the LISP computer programming language, which ought to help.
DANIEL: And what’s my incentive?
ANGEL: Basically, the faster you solve it, the less time you spend in Hell. For every day you spend working on this problem, you spend a day in Hell.
DANIEL: Strikes me that I could be quite some time …
ANGEL: Clever boy. But if you’re worrying about the fact that the equation might have no solution, then don’t. After you’ve been working a week, then I’ll offer you another deal. At any point after that, I’ll offer you the option of stopping work on that equation and starting work on this equation in this other envelope. And I’ll even offer you an incentive to switch; you’ll spend the time in Hell that you spend solving the second equation, minus a day.
DANIEL: Seems good, but …
ANGEL: And to make it yet sweeter, I hereby guarantee you that at least one of the two equations has a finite solution.
DANIEL: I can’t help noticing that a fair number of my mates have taken similar bets with you, and none of them have ended up in Heaven so far …
ANGEL: Well, think it over … there’s a week to go before you need to make any decisions …
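For readers who haven’t met the original puzzle, the naive expected-value reasoning behind the classic two-envelope paradox can be sketched in a few lines. This is an illustrative sketch only, with made-up numbers: the dialogue above replaces money with days of Hell, and its whole point is that the usual limit-taking repairs don’t obviously apply there.

```python
def naive_switch_value(x):
    """Classic two-envelope reasoning: if my envelope holds x, the other
    supposedly holds 2x or x/2 with equal probability, so the 'expected
    value' of switching is 0.5*(2x) + 0.5*(x/2) = 1.25x > x."""
    return 0.5 * (2 * x) + 0.5 * (x / 2)

# Whatever x is, the naive calculation says switch - and after switching,
# the same calculation says switch back. That symmetry is the paradox.
print(naive_switch_value(100))  # 125.0
```

The standard diagnosis is that the 50/50 assumption can’t hold for every value of x under any legitimate prior, which is where the limit arguments the post is suspicious of come in.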
Jonah Goldberg wishes liberals were more interested in ideas, specifically the history of their own ideas. He wishes they were less ‘intellectually deracinated’; more like conservatives:
Read conservative publications or attend conservative conferences and there will almost always be at least some mention of our intellectual forefathers and often a spirited debate about them. The same goes for Libertarians, at least that branch which can be called a part or partner of the conservative movement.
By contrast:
Ask a liberal about his tradition and he will talk about deeds and efforts to remedy injustice, not ideas. This is in keeping with the legacy of William James’ preference for action over thought, though I doubt most liberals know or care that this is so (while I can think of no conservative who wouldn’t be jazzed to be told his idea was “Hayekian” or “Burkean”). This is a huge tactical advantage for liberals in political battles because they can disown old ideas in ways we cannot.
That was a month ago. Since then, Mark “the Decembrist” Schmitt has taken these anti-liberal allegations as the occasion for what promises to be an interesting series at TAP. Matthew Yglesias and Kevin Drum have been trying to help, politely but firmly. And Jonah gets letters, he gets letters. From liberals. Here he sits, scraping the bottom of the email barrel.
Liberals condemn themselves to repeating their mistakes by not knowing their history - even, it turns out, when it’s a month old. It is a sign of the arrogance of liberals that they brag - as so many have done in their emails - that they don’t “need to know what to believe” or to know history or to have a philosophy or to, in effect, know their homework. They simply know what’s right … It is also a sign of the triumph of two strains in liberal intellectual history converging: pragmatism and intellectual radicalism (by which I mean critical legal studies and the like). Both schools of thought reject the notion that “dogma” and “tradition” are useful sources of knowledge or morality, respectively.
Thus:
Their biggest problem is they don’t have a philosophy. This causes a lack of organization. This causes a lack of popular ideas. This is why the Democratic Party defines itself in such reactionary terms - blocking Republicans, creating lockboxes, yelling “stop” and “no” a la Al Gore and so on.
Four theses to be considered:
1) Liberals don’t know their own intellectual history.
2) Liberals don’t have a philosophy.
3) Liberalism is an arrogant, intellectually flabby, feeling-based pragmatism crossed with a strain of intellectual radicalism.
4) Liberalism is strangely reactionary.
And the correct responses are:
1) Confused; maybe a grain of truth. A mistake worth making for educational purposes.
2) Weirdly false.
3) A ‘bats are bugs’ moment for the record books.
4) Oddly, a mix of 3 and 1. (Wouldn’t have thought that was possible.)
And the explanations for these correct responses:
1) Liberals don’t know their own intellectual history. Jacob Levy makes the necessary points in response much better than I would have (and just in the nick of time, before I made a botch of it.) Basically, there isn’t any natural presumption that students of Rawls, say, should be up on their Croly. Yes, two forms of ‘liberalism’, but surprisingly genealogically distinct. Lots of quite different things get called ‘liberalism’.
I would more strongly emphasize one point Jacob makes. Goldberg knows, I am sure, that ‘liberal’ is one of those terms with so many senses it’s a wonder anyone does anything with this tool except cut themselves on it. Goldberg slices himself something fierce. He uses ‘liberal’ to denote everyone to the left of the Republican party. This is ‘libruhl’, in the pejorative sense, much beloved of right-wing talk radio, not remotely analytically useful. For example, ‘critical legal studies’ - which Goldberg touches on, by way of allegedly getting in touch with one tributary of the liberal stream - is not any sort of liberalism. Critical legal studies has its intellectual roots in all that post-structural, post-modern, post-Marxist continental stuff. (See this randomly googled up page, for example. I have no idea whether it is great shakes, but a long list of the influences on critical legal studies mentions not a single liberal figure or source, which tells you the thing is maybe not paradigmatically liberal.)
If you simply can’t bear not to lump everyone to the left of the Republican party together, at least don’t be surprised when all these folks who don’t have a lot in common haven’t heard of each other. Why should they?
2) Liberals don’t have a philosophy. Wow. Knew the term was slippery. Never actually seen it leap out of anyone’s hand like that and just cut the throat. Clean. Thing of beauty, if you like that sort of thing.
The implied claim that J.S. Mill, Isaiah Berlin, John Rawls, Ronald Dworkin, so forth, don’t have ‘philosophies’ is bizarre and false. So 2) is just a weird, weird thing to say, and not even the fact that Goldberg repeatedly says he is overgeneralizing to make a point explains it, because what’s the point of saying something so bizarre and false? I guess he is thinking something like: leftism is fairly intellectually bankrupt. Well, we’ll file that under thesis 3) and get to it in a moment. Maybe he is saying that the Democratic party is not all fired up with ideas at the moment. They are reacting to conservatives. Which is sort of true, because the conservatives are a bunch of radical hotheads, so someone needs to be doing some conserving around the place. Liberals are the Burkeans of the welfare state, ironically. We’ll file this under thesis 4) and get back to it.
Yglesias interprets Goldberg as offended by the intellectually unpretentious, roll-up-the-sleeves small-p pragmatical practicality of contemporary liberalism. It isn’t bold and grand and exciting enough. Some of what Goldberg says seems to fit with this. I remember Henry posted about this strain of contemporary conservatism some time back - sort of antsy and excitable and easily bored and in need of spectacle and stimulation. Burke the man was supposedly that way. But Burkean philosophy is notably not in favor of such things. Well, I dunno whether this is Goldberg’s thing.
The point could be that there is no significant liberal overlapping philosophical consensus (as a Rawlsian might say.) I don’t really see that there is a terrible failing in this area - I mean, more than usual; folks always disagree. There is significant consensus, from the tip-tops of the ivory towers on down, that the modern, liberal-democratic welfare state is a good thing and that we don’t need a fundamentally different form of government. That’s liberal consensus.
I think Goldberg is here again seriously hampered by his tendency to call too many things ‘liberal’. Since a lot of the thinkers he thinks are ‘liberals’ aren’t, it is no surprise that there is no liberal consensus that includes them.
But moving along, the thing that is truly weird about saying ‘liberals don’t have a philosophy’ is that it calls forth the contrary thought ‘so conservatives have a philosophy,’ which is - in my private but highly considered opinion - one of the very last thoughts a sane and prudent conservative thinker should want to spur in his or her audience. There is of course such a thing as conservative political philosophy. Would not dream of denying it. Nevertheless, philosophically speaking, contemporary conservatism is a doctrinal dog’s breakfast and a very poor advertisement for conservatism as a general outlook or temperament. (I exempt libertarianism, which is highly philosophical and admirably coherent, from this complaint.)
Contemporary conservatism looks a lot less bad a couple levels up from anything you might call ‘philosophy’. And so, young man, if you wish to make a respectable mark as a conservative mind, by all means pursue policy wonkery and conceal the odd tangle that is your ideological root system in a forest of mid-level factual detail.
This will, of course, be regarded as grave slander by conservatives. Well, I’ll just link my posts on the subject. You decide. I have argued the case at inadvisable length here, taking Goldberg’s colleague, David Frum, to task when he tried to get philosophical. Then I sort of tied up the frayed ends here. (The latter link contains a handy link to a PDF version of the first, really really long post.)
I make a short, sharp point here. And got some good comments. Basically, being in favor of ‘go, go!’ dynamist capitalist creative destruction while standing athwart the train of history, yelling ‘stop’ … is silly. But this is the circle you’ve got to square.
Then there is this point that conservatives, like Goldberg, who see conservatism as a temperament are left without a justification for being conservative, because temperaments are not properly reason-giving.
Not that my posts are enough coffin nails, I admit. And not that philosophical conservatism is hopeless. Not that there isn’t something wise and essential about the conservative temperament. I follow Mill and Trilling in saying so. But I do think Trilling was right when he wrote, in 1949:
In the United States at this time liberalism is not only the dominant but even the sole intellectual tradition. For it is the plain fact that nowadays there are no conservative or reactionary ideas in general circulation. This does not mean, of course, that there is no impulse to conservatism or to reaction. Such impulses are certainly very strong, perhaps even stronger than most of us know. But the conservative impulse and the reactionary impulse do not, with some isolated and some ecclesiastical exceptions, express themselves in ideas but only in irritable mental gestures which seek to resemble ideas.
Yes, the National Review started up right after Trilling wrote that. And Russell Kirk. Yeah, so maybe Trilling’s snark stopped being true for a time. But it’s true enough today. Because Jonah Goldberg is not going to be standing athwart any trains of history, yelling ‘stop’ anytime soon. (He hates it when Democrats yell ‘stop’. He’s not about to start.) And he is not any orthodox Kirkian, and I don’t see that he’s a coherently unorthodox sort either.
Let’s move on to the next point, which will allow me to develop some of these others.
3) Liberalism is an arrogant, intellectually flabby, feeling-based pragmatism crossed with a strain of intellectual radicalism.
Okay. Now the going gets strange. Despite the fact that I’m a liberal, I remember - it was a whole year ago - how Goldberg, in his wisdom, dismissed the strict need to pursue consistency in arguments about ideas. Let’s start with that comparatively recent event in intellectual history.
Here’s how it went. Radley Balko caught Goldberg out on an inconsistency. Goldberg admitted it but slipped the snare neatly: “I’m sure my position will force me into uncomfortable arguments sometimes, including alas inconsistent ones. But as I’ve written before consistency is often a red-herring.” Guess that showed Balko.
You can scroll up that page and read all the bits where Goldberg elaborated and qualified his position over the next day or so. Basically, he settled on the view that it’s OK to be inconsistent, so long as you think that - on some level - you are right. No inconsistency can be true, of course, but this “doesn’t mean conservative inconsistency makes us wrong, it just means we have to defend our inconsistency better.” Not resolve it, please note. Defend it. Even though it can’t be true. It’s a Russell Kirk thing, you see: “affection for the proliferating variety and mystery of human existence.”
But since you can’t just go blurting ‘I have an affection for the proliferating variety and mystery of human existence’ at the tail end of all your self-contradictions - eventually it would cause breathing problems, which would only compound the thinking problems - you need to get yourself some intellectual forefathers. Goldberg is right, I think, that conservatives are quicker with tags like ‘Burkean’ and ‘Hayekian’ than liberals are with corresponding tags. But I don’t think this is indicative of conservative intellectual rigor. The tags have legitimate intellectual employments, to be sure, but they are also suspiciously handy crutches for logically weak legs. If you are going to be asserting logical falsehoods a lot, as Goldberg freely admits he will be, you need to be able to make it sound more high-toned, like so: ‘The Burkean wisdom of P … blah, blah, blah … The compelling Hayekian insight that -P.’ The way to defend contradictions ‘better’ is to be well versed in suitable material for constructing arguments from authority. Of course, if your interlocutor has even a short-term memory for intellectual history, you may be caught out in the fallacious attempt. But it’s worth a try. [Not that Burke and Hayek are exactly opposites. I’m just following up on Goldberg’s own examples. Burke and Hayek disagree about a thing or two, so it is perfectly possible for B to say P and H to say -P.]
The alternative would be to try and figure out which of the contradictory ideas was false. But this is not an option available to conservatives. Goldberg grouses that it is only liberals who enjoy this ‘tactical advantage’ of being able to ‘disown old ideas in ways we [conservatives] cannot’. (But if you think about it - and I do recommend the practice - the ability to disown old ideas, i.e. admit past errors, is actually a prerequisite for basic intellectual hygiene. I’m sure Goldberg would admit as much in a different mood.)
And this arrogant liberal trick of admitting we are fallible, illicit as it may seem, is nothing compared to the trick employed by conservatives: being able to disown any ideas whatsoever, without giving up on them, in ways we liberals cannot. For in the kingdom of ideas, you are what you imply. But conservatives, with their Goldberg-granted license to self-contradict, are free to stipulate away inconvenient implications by the Kirkian power of ‘the mystery of human existence’. But this amounts to stipulating away the ideas themselves, which cannot actually be separated from their implications. Irritable gestures, looks like to me.
What will Goldberg say to this? First, he will protest that he is not a knee-jerk irrationalist. He thinks consistency is important, but he has a rich appreciation of how life is complicated and general principles often step on each other’s toes. So inconsistency is OK. In short, he is a pragmatist. (The other possible things he might be, leading to so much flagrant self-contradiction, are: idiot, mystic, madman, liar and hypocrite. So I think I’m being quite charitable.)
But if the abiding virtue of conservatism is, at bottom, pragmatism, how can pragmatism also be the abiding vice of liberalism, as Goldberg claims? Hmm, yes?
Come to think of it, it’s a little hard to believe that “affection for the proliferating variety and mystery of human existence” is anything but a feeling you feel when you feel a feeling that feels deeply right, but you can’t quite say why it isn’t a contradiction. So we should add: a feelings-based pragmatism is the abiding virtue of conservatism, according to Goldberg.
And although it is possible to devise flabbier forms of pragmatism, Goldberg’s version is pretty flabby. Any form of pragmatism that affirms contradictions, rather than attempting to resolve them rationally, is flabby in my opinion.
And Goldberg often describes himself as an elitist, and I think claiming to be elite on the basis of nothing better than flabby feelings is pretty arrogant.
I infer that the following is a fair sketch of Goldberg’s projected conservative philosophy: an arrogant, intellectually flabby, feeling-based pragmatism crossed with a strain of intellectual radicalism. In short, he is a liberal. (Except that this isn’t a very good description of liberalism, which isn’t essentially a species of pragmatism at all.)
I haven’t gotten to the bit about Goldberg’s intellectual radicalism.
4) Liberalism is strangely reactionary.
This is really the start of another post, rather than a proper conclusion for this one. Anyway, whenever anyone starts accusing liberals of being reactionaries, I am reminded of a passage from Frum’s book, Dead Right.
He talks about how in the halcyon Reagan years “we thought about policy and elections so hard that we seldom stopped to think about philosophy … we learned to limit our own speculations to what the balance of political forces at that particular moment declared feasible; we wrote articles as if they were memoranda to the president, banning the not immediately practical from our discourse.” There is a paradox in this notion of impractical, speculative conservatism, if I’m not mistaken.
But at some point the flip was indeed made. Liberals are more conservative than conservatives, these days. Liberals are Burkeans of the welfare state. Whereas the Burkeans have all turned Jacobins, wild-eyed radicals. Hence Goldberg’s frustration at liberal reactionaries, always standing athwart the train of history, shouting ‘stop’. I quote again:
The Democratic Party defines itself in such reactionary terms - blocking Republicans, creating lockboxes, yelling “stop” and “no” a la Al Gore and so on.
As a conservative, he can’t abide such counter-revolutionary obstructionism.
I do realize that Goldberg himself is aware of at least some of these ironies. Whole thing is very confusing.
John complains that the version of the two-envelope paradox I give is not theologically accurate. I was trying to come up with a more theologically accurate one, but I couldn’t really. Still, the following is intended to be a little closer to theological reality.
BRIAN: Where am I?
ANGEL: Purgatory.
BRIAN: Ah, that makes sense. Hang on, does that mean I get to go to heaven one day?
ANGEL: Yep, eventually.
BRIAN: WooHoo! So how long’s the wait then? And can you turn up the heat a little, it’s kinda chilly here?
ANGEL: Well, that’s a hard question. No.
BRIAN: Why is it hard? Hasn’t The Big Guy worked out how long my penance will be?
ANGEL: Well, it turns out decisiveness isn’t one of the divine attributes. So He hasn’t yet decided.
BRIAN: That makes sense. If Bushie is our paradigm of decisiveness, it isn’t obviously a virtue.
ANGEL: Right. So he’s going to toss a coin a few times to figure out how long you stay here. If it lands heads the first time, you stay here 2 days, and we’re done. If it lands heads the second time, you stay here 4 days, and again we’re done. If it lands heads the third time, you stay here 8 days, and again we’re done. You get the picture.
BRIAN: And if it never lands heads?
ANGEL: Then you stay here forever.
BRIAN: Hmmm, I didn’t realise I’d been that bad.
ANGEL: It was a judgment call whether you went up or down, so don’t complain too much.
BRIAN: OK then. How long will this coin tossing take?
ANGEL: It’s already done.
BRIAN: So it landed heads at least once then!
ANGEL: I wouldn’t infer that too quickly. The first toss takes a ½ second, the second toss a ¼ second, the third 1/8 of a second etc, so infinitely many tosses don’t take that long.
BRIAN: I don’t like the sound of this, but what’s the verdict?
ANGEL: It’s in this envelope. Go on, open it up … WAIT! I almost forgot. Before you do that, I’m meant to offer you a deal. We can tear up that envelope, and we’ll rerun the coin flips, and this time I’ll take a day off whatever the sentence is.
BRIAN: So it would be 1 day here, or 3 days, or 7 days, or 15 days, etc.
ANGEL: You got it.
BRIAN: That sounds like a great deal. Is there a catch?
ST PETER: Funny you should ask. If you open that envelope and see it says 2 days, would you prefer to keep the envelope or take the deal?
BRIAN: Since the deal has an expected infinite sentence, I guess I’d keep the envelope.
ST PETER: And what if it says 4 days?
BRIAN: I guess, well, I guess I’d keep the envelope again.
ST PETER: And 8 days?
BRIAN: Keep the envelope.
ST PETER: See a pattern here?
BRIAN: Yeah, but the envelope was constructed by the same mechanism as the angel is using, but without the discount. How can it be better to keep the envelope?
Some quick commentary on the case.
I’ve deliberately left out of St Peter’s argument what happens if I open the first envelope and see an infinite stay in purgatory. That’s a messy case, but there’s a few things to note.
First, it isn’t obvious I should prefer the deal rather than be indifferent between the deal and keeping the envelope even in this case.
Second, it has probability zero.
Third, we can avoid even this case if we are prepared to allow something like a discontinuity. Change the case so ‘tails forever’ has the same effect as heads on the first flip, i.e. 2 days in purgatory on the first run, 1 day in purgatory on the second run. Now we have a strict conglomerability argument for keeping the original envelope. I don’t really understand the concern about discontinuous sequences, but I didn’t include this in the original case because of those concerns.
There’s no ‘infinite swapping’ outcome here like in the original two-envelope case. But I think it’s very odd that keeping the envelope, which from a neutral perspective appears to be dominated by taking the deal, can be argued for by just the same kind of reasoning as in the two-envelope case.
Moreover, if we change the case so the angel doesn’t go on to do the flipping, but has a second envelope from God, we can then give a parallel argument that we really should take the deal.
Nothing in this case relies on there being an equal distribution over [10, M), which doesn’t make any sense to me. But that was never essential to the two envelope case, as John Broome pointed out in his 1995 Analysis paper. Really the two envelope paradox is just a variant on the St Petersburg paradox, as illustrated here. And that paradox works with countably additive probability distributions.
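For concreteness, here is a quick sketch (in Python; the function names are my own invention, not anything from the original post) of the coin-flipping mechanism the angel describes, together with a helper that makes the St Petersburg connection explicit: each flip contributes almost exactly one expected day, so the expected sentence diverges, with or without the angel’s one-day discount.

```python
import random

def purgatory_sentence(discount=0):
    """Flip a fair coin until it lands heads; heads on the k-th flip
    means a sentence of 2**k days, less any discount on offer."""
    k = 1
    while random.random() < 0.5:  # tails: flip again
        k += 1
    return 2 ** k - discount

def partial_expectation(n, discount=0):
    """Expected sentence counting only the first n flips: the k-th flip
    contributes (1/2**k) * (2**k - discount) days, i.e. roughly one day."""
    return sum((0.5 ** k) * (2 ** k - discount) for k in range(1, n + 1))
```

So `partial_expectation(n)` is exactly n days for the original envelope, and only fractionally less for the discounted deal: both expectations are infinite, which is why naive expected-value comparisons misfire here, just as in St Petersburg.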
Via Juan at Philosophy617 (who doesn’t think much of the proffered solutions, and probably won’t like this one) I came back to this version of the two-envelope problem put forward by Brian, a bit before I joined CT.
In this case, once you observe that Brian’s angel is giving you faulty theology, it’s easy to show that you should reject his1 mathematics, and his offer. At the end of the problem the angel says, “It’s purgatory, take all the time you want.” But the whole point of Purgatory is that it’s finite - you purge off your sins one at a time until they’re all paid off. Since we now have a finite problem, the solution is straightforward.
Recall that there are two envelopes, with numbers x and 2x representing remission of time in Purgatory, and that x is greater than 10. If your total time in Purgatory is M, we can assume that a just God is not going to give you more remission than that, so 2x is less than M, and x is less than M/2.
The trick in the problem is the apparent symmetry between the envelopes. If you pick one envelope, getting y, switching envelopes gives you y/2 or 2y with equal probability, which seems like a good bet. So it looks as though the angel can apply a Hell pump to you, with repeated offers to switch, paying a day in Hell each time.
The trick in the angel’s offer is that it’s not true, for any given y, that switching gives you even chances of y/2 or 2y. Suppose, for example, you draw y greater than M/2. Then it’s certain that you’ve got the 2x envelope and that switching would be bad. Conversely, if you draw, say, 15 days, it’s obvious that you’ve got the x envelope and that switching would be good. Unfortunately, you can’t peek and then decide whether to switch. If you could, the angel’s offer would probably be a good one. Since you can’t, and given any fixed distribution for x over the range [10, M/2], it’s easy to check that the expected gain from switching is zero.
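A Monte Carlo sketch of that last claim (the particular distribution for x here - uniform on [10, M/2] - is my choice for illustration; the argument only needs some fixed distribution): put x and 2x in the envelopes, pick one blindly, and tally the gain from switching.

```python
import random

def expected_gain_from_switching(trials=200_000, M=100, seed=0):
    """Average gain from blindly switching between envelopes holding x and
    2x, with x drawn from a fixed (here uniform) distribution on [10, M/2]."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(trials):
        x = rng.uniform(10, M / 2)
        pair = [x, 2 * x]
        rng.shuffle(pair)        # pick an envelope at random
        mine, other = pair
        total += other - mine    # gain (or loss) from switching
    return total / trials
```

The average hovers around zero, as the symmetry argument predicts: conditional on drawing a large y you probably hold the 2x envelope, and that exactly offsets the apparent gain from switching elsewhere in the range.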
It’s easy to extend the argument to allow for the case of a Bayesian soul, with a prior distribution that will be updated once the envelope is opened (of course, it’s too late to do anything by then). You can also allow for some kinds of non-Bayesians, but not too many, since the angel’s argument implicitly relies on the sure-thing principle.
It’s also possible, in at least some cases, to refute the angel’s argument even when time in Purgatory may be infinite. All that’s really needed is a given probability distribution for remission time x with a finite mean.
1 I didn’t think angels were gendered, but the example uses male pronouns, and I’ll follow suit.
I’ve just noticed Julian Baggini’s piece about hypothetical questions over at Butterflies and Wheels. Baggini observes that politicians often bat away questions they don’t want to answer by observing that the point is hypothetical. This is a disgraceful move by politicians, but its televisual ubiquity means that many people now seem to believe that hypothetical questions are, by their very nature, illegitimate. And bad though this belief is among the general public, it now seems to be spreading among philosophy undergraduates, who don’t seem to appreciate that their subject would be impossible without such questions. I first noticed this phenomenon a few years ago, when sitting in on a lecture by my then-colleague Patrick Greenough. Patrick was running through some Gettier problems and had reached a familiar example involving a dog cunningly disguised as a sheep in a field (a real sheep being just out of sight behind a fold in the land). When Patrick asked whether the observer of the dog knows there is a sheep in the field, a hand went up in the audience: “Excuse me, isn’t that a hypothetical question?” Doh!
I shouldn’t, but what the heck.
Steven Den Beste has a long post in which he articulates his view that:
we are actually engaged in a three-way war. It’s something I’ve spent a lot of time thinking about, and I’ve come to the conclusion that the most important consequence of it is simply the recognition and acknowledgement that it is a three-way struggle.
So, the most important consequence of ‘it’ - i.e. the war - is ‘recognition’ or ‘acknowledgement’ of the nature of that very war? Hegel himself might blush at such a lofty, self-reflexive conceit: a war primarily to know what the war itself is about. (If so, why not just not hold the thing and be satisfied with the answer: nothing.)
Many things have happened which are inexplicable under the assumption that there are only two sides.
So Den Beste deduces this must be a three-way war, a priori, by helping himself to the major premise: anything inexplicable on the assumption that it has two sides must have three sides. (How else?) Plato himself, author of the Theaetetus, might blush. Even the Medium Lobster, transcendent being that he is, might balk at playing a ‘prove everyone is a triangle for free’ card.
But can Den Beste really be meaning to assert these strange, speculative things? Oh, who the hell can say? On the other hand, it dovetails neatly with what comes next. He explains how it all flows from the abstract, metaphysical nature of the three combatants. “There is a significant degree to which each of the three should be thought of as ‘hive-minds’.” Or as one hive-mind. Each. One hive-mind. So you get three. Attacking each other with their minds. Anyway, we have here an abstract war of Ideas. Ergo, by harmonic mental attunement, one may apprehend the agonically teleological operations of Mind behind the scenes. That is, the war.
So what will we see, according to Den Beste, if we claw our way out of the cave, into the Sun? First, Islamism - radical, jihadist, militant Islam.
The other two sides are derived from Western philosophical roots. For them I’ve had to invent my own names: “p-idealism” and “empiricism”.
P-idealism:
One world view is known as teleology, which refers to a basic assumption that there’s a fundamental elegance of design to the universe, a deep sense in which things are related so that outcomes are intellectually and esthetically pleasing. When things happen, it’s not just the result of localized cause-and-effect; there’s also a “final cause”, a deeper meaning and source of it. And because of that, it all relates; everything is of a piece, and it’s all part of an elegant overall pattern.…One of the ways in which this all ultimately manifested was in the basic philosophy of idealism, which posits that the mind is the essential and central force in the universe. . .
And that was why you could figure it all out: if you could somehow attune yourself to that higher order of existence, you’d automatically know it all. And those who had come closer to achieving such enlightenment were therefore more wise than anyone else, and should be able to wield power over the others.
Empiricism:
It started with the question, “What is the universe like?” and came up with the answer, “I dunno; let’s go look and see.” It posits that there actually is an objective universe, and doesn’t automatically assume that it has any kind of underlying purpose. If such a thing is present, it will become clear in due course, and in the mean time let’s all look around to see what kind of place we’re living in.
Notice the peculiar ‘It’. To what does ‘it’ refer? Ideas doing it for themselves? Very Hegelian.
At any rate, the hive-mind known as ‘p-Idealism’ is emanating (in semi-Plotinian fashion) mostly as Europe. The Ding an sich known as ‘empiricism’ manifests itself, empirically, as the United States (roughly).
The basic competition at all levels between the rising force of empiricism and the existing entrenched p-idealism has a long and bloody history. Empiricism dominated the US and still does, but p-idealism has spent most of the last hundred years trying to challenge that, as yet unsuccessfully. In Europe, it’s been more complicated with a long and strong competition between empiricism and p-idealism for control, and the balance of power changing constantly, but since the end of WWII p-idealism has largely come to dominate in western Europe.
Den Beste is, of course, firmly and staunchly on the side of empiricism. Not a p-idealist bone in his body. Nope. No flights of speculative fancy for him. Wouldn’t talk about a messy, empirical thing like it was an abstract dialectic. This isn’t one of those fruity Hegel passages where you can’t tell whether you are hearing about people, or premises, or a war, or an abstract argument. Or what. Nothing like that.
OK, I’m laying it on thick. But as someone with a weakness for speculative philosophy - from Parmenides on - it bothers me when folks think they can dance ad nauseam to these airy tunes without having to pay the empirical piper. For them to say they are the piper is sillier still.
Seriously, from Hegel’s History: grand, unbroken march of World-Spirit - to den Beste’s War: three-way cage-match of hive-minds - to Calvin’s last-minute report on bats: bats, the big bug scourge of the skies - the finest, truest response is and shall remain that of the tiger, Hobbes: your report only contains one fact, and you made it up.
Hey, it worked for Parmenides. Sort of.
I guess what I’m really saying is it’s about time that whole ‘cheese-eating surrender monkeys’ meme - which was funny for about three days - gets laid totally and finally and utterly to rest. It doesn’t need, at this late stage in its career, to morph into a giant sequel to the Phenomenology of Spirit. It doesn’t need to be an a priori philosophy of anti-Europe grousing that pretends its motto is ‘I dunno, let’s look and see.’
If John Kerry loses a single red-blooded, empirical American vote on the grounds that he fraternizes with p-idealizing surrender hive-minds - I swear somebody is going to deserve a kick in the noumenon.
Still working my way through Robert Skidelsky’s John Maynard Keynes. Frank Ramsey appears only in passing, but the book manages to suggest what a terrible loss it was when Ramsey died, just short of his 27th birthday. His contributions to mathematics, philosophy and economics bring to mind Tom Lehrer’s line, “It’s sobering to reflect that by the time Mozart was my age, he’d already been dead for two years.”
There’s no telling what he’d have done, had he lived. But it seems to me that, sociologically, he would have had a decisive and positive effect on the philosophical community. Although right at the center of Cambridge intellectual life, a member of the apostles, and the translator of the Tractatus, Ramsey never showed any sign of falling under the spell of Wittgenstein. He thought the Tractatus was terribly important, of course, and that Wittgenstein was worth taking a lot of trouble to understand. But he seems to have been immune to the hold Wittgenstein tended to have over other philosophers. Ramsey seems to have been, along with Sraffa, one of the very few people at Cambridge who felt able to tackle Wittgenstein head on and whom Wittgenstein respected. But where Sraffa was withdrawn and a bit solitary, Ramsey was outgoing, likeable and in the thick of things. His character was in sharp contrast also to Wittgenstein, who — when he wasn’t directing monologues at people — was rude and insensitive to an amazing degree. I think it would have done twentieth century philosophy a power of good to have someone like Ramsey around the Cambridge colleges as a counterweight to Wittgenstein, both because he had a mind of the same magnitude but quite different cast, and because he provided an appealing alternative model of what genius can be. It might have saved a lot of people a lot of trouble.
I’ve been asked by my administration for my estimation of the strongest philosophy departments in the UK (in research terms). I’m not a big fan of league tables, but, rather than leave things to my private whim, I thought I’d take a look at at least two peer-review-based assessments out there: the last RAE and the Leiter reports. Leiter has a ranking of UK departments, but to get one for the RAE you need to make some choices. My crude method was to take the raw score (5*, 5 or 4) and multiply it by the number of staff submitted (counting 5* as 6). This gave me the following ranking table (below the fold):
| Rank | RAE | Rank | Leiter |
|------|-----|------|--------|
| 1 | Oxford | 1 | Oxford |
| 2 | Cambridge | 2 | Cambridge |
| 3 | Kings | 3 | St Andrews |
| 4 | Leeds | 4 | Birkbeck |
| 5 | St Andrews | | Kings |
| 6 | UCL | | Reading |
| 7 | Warwick | 7 | LSE |
| 8 | Bristol | | UCL |
| 9 | Edinburgh | 9 | Edinburgh |
| 10 | Birkbeck | 10 | Sheffield |
| 11 | LSE | 11 | Leeds |
| 12 | Sheffield | 12 | Glasgow |
| 13 | Sussex | 13 | Stirling |
| 14 | Stirling | | York |
| 15 | Durham | 15 | Nottingham |
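For what it’s worth, the crude scoring method I described amounts to something like the following (the grades and staff numbers below are made up purely for illustration, not real RAE submissions):

```python
# Crude RAE score: grade points (5* counted as 6, 5 as 5, 4 as 4)
# multiplied by the number of staff submitted.
GRADE_POINTS = {"5*": 6, "5": 5, "4": 4}

def rae_score(grade, staff_submitted):
    return GRADE_POINTS[grade] * staff_submitted

# Hypothetical departments, invented for the example:
departments = {"Alpha": ("5*", 20), "Beta": ("5", 30), "Gamma": ("4", 25)}
ranking = sorted(departments, key=lambda d: rae_score(*departments[d]),
                 reverse=True)
```

Note the obvious crudeness: a large department submitting at grade 5 can outscore a smaller one with a 5*, which is exactly the sort of judgment call the method forces.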
Interesting points:
(1) There’s a lot of consensus between Leiter and the RAE at the top of the table.
(2) Four departments get into the top 15 according to the RAE but not Leiter: Warwick, Bristol [my own department - full disclosure!], Sussex and Durham.
(3) Four departments make Leiter but not RAE: Reading, York, Nottingham and Glasgow.
I’m sure that we at Bristol have become stronger since the RAE, especially in philosophy of science given the moves of Alexander Bird from Edinburgh and Samir Okasha from York (and I hope this gets reflected in Leiter next time!) but on the whole, the large measure of agreement between the two lists ought to boost confidence in Leiter. There are some anomalies, though, and I do think it odd that a department that only scored a 4 at RAE (Glasgow) should make the Leiter top 15. Nottingham is clearly up-and-coming with some strong new appointments (though will this be affected by that University’s “grey-listing” by the Association of University Teachers?). I also believe that Sheffield, which comes 12th on the RAE and 10th on Leiter and has a particularly strong graduate programme, should come higher up both tables and that Leiter’s high ranking of Reading is closer to the truth than their non-appearance in the RAE top-15. Naturally, these opinions are just personal ones, and can be taken with as much salt as you like!
I haven’t watched South Park in years, but when I did I tended to agree with the conclusion of this article that it’s too preachy for its own good. Still, the article’s title gives me an idea or two. South Park and Philosophy could, I think, be better than most of the Randomly Chosen Segment of Pop Culture and Philosophy books that are coming out. Perhaps there is still potential for life in the genre. Apart from South Park, what could be next?
Baseball and Philosophy has been done already, so maybe it’s time for NFL and Philosophy, or WWE and Philosophy, or, one that raises genuine ethical concerns, Joe Millionaire and Philosophy. OK, those are jokes, but I think Real World and Philosophy could be spectacular. And if someone didn’t know what it really was, you could list the book title on the CV without arousing suspicions. Brilliant! (That last sentence, by the way, will be the title of my entry in Guinness and Philosophy.)
I had an idea the other week for a book where every chapter was kinda like a paper for a volume like that, ranging from the somewhat serious (e.g. 24 and Philosophy) to the complete joke (e.g. Teletubbies and Philosophy).
I couldn’t work out the marketing plan for the book though. One thought was that each chapter could be co-written with a different author, a la The 6ths, but I didn’t really see how that would help the marketing. It would be fun to write all those chapters though, particularly if I chose the co-authors correctly.
Another was to basically make it a 101 textbook, with the underlying aim being to cover all the bases for a 101 course, and use the pop culture to draw in the masses. It might work, but it could date fairly quickly. All I need is for it to catch fire on the textbook market one year though and I’d be sorta rich. My reputation for serious philosophy would take such a hit that I’d probably never get offered another academic gig, but since I just landed a 40-year, multi-million dollar contract maybe that isn’t a concern.
One hears it said from time to time that it’s irrational to perform inductive inferences based on a single data point. Now this is sometimes irrational. For example, from the fact that Al Gore got the most votes in the last Presidential Election it would be foolish to infer that he’ll get the most votes in the next Presidential Election. But it isn’t always irrational. And this matters to some philosophical debates, and perhaps to some practical debates too.
Here’s my proof that it isn’t always irrational. Imagine on Thursday night I go and see a new movie that you’re going to go see Friday night. Friday lunchtime I tell you how the movie ended. How should you react? Most people will complain that I’ve spoiled the movie because you now know how it will end. But if induction on a single case is always bad, this is impossible. All you have is testimonial evidence of how the movie ended on a single occasion, namely Thursday night. You need to make an inferential leap to make a conclusion about how it will end Friday night. (It certainly isn’t a deductive inference because some movies have multiple endings.) That inferential leap will be induction on a single case, and will be perfectly reasonable.
That’s more or less my complete argument that induction on a single case can be perfectly rational. There is an obvious objection, though. It might be argued that this isn’t really induction on a single case, because it’s really underwritten by a many-case induction based on the number of previous movies that have ended the same way at multiple screenings. While that’s obviously true, it isn’t clear how much it undermines the original example.
There are two points we could go on to debate here. First is the question of whether the inference from how the movie ended on Thursday to how it will end on Friday (the movie inference) is really an instance of induction on a single case. That looks like a relatively stale terminological debate, and I couldn’t be bothered hashing it out here. Second is the question of whether there is any way to distinguish the movie inference from what are usually taken to be bad instances of induction on a single case. This one has to be debated case by case, but I suspect the answer is in general no, unless there are independent reasons to dislike the particular bad instance of inductive reasoning.
Here’s a couple of illustrations of what I mean, one practical, the other theoretical.
Consider the policy “Don’t start reading a blog if the first thing you read there is false.” Some might consider any application of that to be a bad instance of a single case induction - from one bad claim infer that other things the blog says are not worth reading. But just like the movie inference can be backed up by a meta-induction, this one can arguably be backed up by a claim validated by a meta-induction: that blogs which say something false the first time you open them are not worthwhile reading in the future. That claim might well be false, and I’m not taking sides here on whether it is true or not, but as long as the person who holds the policy believes the claim, their reasoning is no worse than the person who makes the movie inference. (Quick credit: I saw this policy defended somewhere a while ago, but Google was no help in finding where. That was more or less what inspired this post, so I probably should have linked to it.)
Let’s take a more famous case. Why should I believe that other people have sensations? One famous answer, defended by Bertrand Russell, is that I can reason by analogy. I’m like other people in ever so many ways, so I’m probably like them in respect of sensations. And I know (somehow!) that I have sensations, so I know they do too.
Some people have objected that this is just induction on a single case. (E.g. Michael Rea makes that objection here.) But it looks to me a lot like the movie inference, for it too is backed by a meta-induction. In the past, when I have tried to infer from the fact my body is a certain way to the conclusion that others are the same way, I’ve met with reasonable success. Not 100% success, but good enough for inductive purposes. Of course other people are like me in external respects - they often have two eyes, one mouth, two legs etc. But they are also like me in internal respects, at least as far as I can tell. Consider, if you aren’t too squeamish to do this, how similar the various kinds of fluids that come out of various parts of other people’s bodies are to fluids that come out of matching parts of one’s own body. X-Ray technology reveals that we are alike in even more ways than we could have previously told ‘on the inside’. So the argument ‘my brain states generate or constitute or correlate with phenomenal sensations, so other brains generate or constitute or correlate with phenomenal sensations’ is an instance of a schema that delivers mainly reliable instances. Just like the movie inference. And that inference can produce knowledge. So I think we can come to know about the existence of other minds with sensations on the basis of a single case, namely our own. If you don’t believe me, perhaps you don’t need to worry as much about movie spoilers as you thought you did!
The second of BBC Radio 3’s philosophers and places series aired last night, with a broadcast on Nietzsche and Basel (which you can listen to on the web here). Not as good as the previous week on Rousseau (or so I thought) but still interesting. I hadn’t appreciated what a fearsome teaching routine poor Nietzsche had to undergo, 7am lectures six days a week plus teaching Greek at the local grammar school! Roger Scruton featured prominently on the programme, immediately after Radio 5 had been discussing his advocacy of squirrel-eating. (One text message suggested that feeding Scruton to the squirrels would be a better idea.)
I linked to this at John & Belle, but let me share it here - and advertise it a bit more strenuously: philosophy action figures!
I like Plato (with divided line® accessory). “Enemies progress from imagining to believing to knowing they’re in trouble!” And Gottlob “Ain’t afraid a-ya” Frege “with both Morning Star® and Evening Star® accessories (only one accessory included).” Spinoza’s good too. “The order and connection of his fists is the same as the order and connection of his enemies’ pain!” Ouch! That’ll take the everlasting joy out of life!
Reminds me of my own good old Philosophical Abecedarium. Please feel free to leave your poetical contributions in the comments box. (I’ve got two K’s - Kant and Kierkegaard - so I could use more.)
And speaking of all sorts of mind-body problems, here’s your philosophical puzzle for the day: can ‘carnal knowledge’ be adequately defined as ‘justified, true carnal belief’? Answer either as Dan Savage or Edmund Gettier.
Does anyone know if there’s a free electronic copy of Moore’s Principia Ethica online anywhere? It should be out of copyright, so there’d be no legal reason it wouldn’t be posted, but maybe no one thought it important enough to convert to electronic form. I wanted to cut and paste some long sections because I got interested in the role of necessity and apriority in Moore’s meta-ethical views, and it would be more convenient to (a) not have to transcribe things and (b) be able to refer readers immediately to the passages I’m talking about.
In British universities and, I suspect, elsewhere, medical ethics has been one of the big growth areas in philosophy (well, quasi-philosophy, anyway). It seems, in fact, that the expansion has been so fast that universities are struggling to find qualified lecturers. How else to explain that a scientist who tried to poison his wife’s gin-and-tonics with atropine and tried to cover his tracks by spiking products at the local supermarket has been taken on by the University of Manchester to lecture in philosophy and medical ethics? Do as I say, not as I do? (Hat tip Mick Hartley )
I had thought that the idea that the sunk costs fallacy is really a fallacy was as close a thing as there was to a consensus amongst philosophers. But now I see that Tom Kelly has a paper forthcoming (in Noûs) saying that honouring sunk costs can be rational. Like a few other people in the New England area, whenever I think of the sunk costs fallacy I think of the Red Sox continuing to play Tony Clark long after he showed he wouldn’t justify his $5,000,000 salary, so I’m not exactly positively disposed to the fallacy. But Tom’s paper makes several interesting points, even if it doesn’t do anything to redeem the Sox management circa 2002.
Strictly speaking, Tom isn’t so much concerned to defend the rationality of honouring sunk costs as the possible rationality of actions that are usually so described. He quibbles a bit in various places throughout the paper about whether the experimental cases of the sunk costs fallacy are as clear as they’re usually taken to be, but let’s assume for now these are instances of the fallacy.
The main idea (roughly stated) is that since the value of an action is partially determined by what happens in the future (just like the value of an organism) our current actions can be sometimes justified by the redemptive value they confer on past actions. It’s an interesting idea, though I’m not sure how much it should matter in practice. For one thing, I think we need a more comprehensive theory than Tom offers here about which past actions are worth honouring. (I imagine Tom has such a theory but space constraints kept it out of the Noûs paper.) It clearly isn’t worth redeeming an off-season waiver claim by running Tony Clark out there every day when it’s really unlikely he’ll hit the ball out of the infield, let alone out of the park. Tom often refers to the kind of actions that are worthy of redemption as ‘sacrifices’, and I wonder if there’s more to be said about what makes those actions redemption worthy.
Tom also notes that honouring sunk costs, or at least being perceived to do so, can have game-theoretic advantages in certain situations. I’m less impressed by this as an argument for the rationality of such actions. (And Tom doesn’t lean on it particularly.) In some games of Chicken, the best thing to do is to unbolt the steering wheel and throw it out the window. The situations where it is best to honour sunk costs remind me of those games of Chicken. When it works, it’s a neat stunt, but it doesn’t take much for circumstances to change and then it becomes a really really bad strategy.
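The commitment logic behind the steering-wheel trick can be sketched with a toy payoff matrix. (The numbers here are my own illustrative assumptions, not anything from Tom’s paper; they just have the standard Chicken structure, with a mutual crash being far worse than backing down.)

```python
# Toy game of Chicken. Payoffs are illustrative assumptions:
# each entry is (row player's payoff, column player's payoff).
PAYOFFS = {
    ("swerve", "swerve"): (0, 0),
    ("swerve", "straight"): (-1, 1),
    ("straight", "swerve"): (1, -1),
    ("straight", "straight"): (-10, -10),  # the crash
}

def opponents_best_reply(my_committed_move):
    """If I visibly commit to a move (say, by throwing the steering
    wheel out the window), a rational opponent picks whichever
    response maximises their own payoff."""
    return max(("swerve", "straight"),
               key=lambda their: PAYOFFS[(my_committed_move, their)][1])

# Committing to 'straight' forces the opponent to swerve,
# handing the committed player the best payoff in the game.
print(opponents_best_reply("straight"))  # -> swerve
```

But, as above, the trick only pays while the payoffs cooperate: change the opponent’s incentives a little (or face an opponent who has also thrown out their wheel) and the committed player ends up in the crash outcome.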
Obviously there’s a lot more to the paper than I can indicate in a blog post, so if you’re interested in these topics, er, read the whole thing.
My friend Rob Reich has just told me the very sad news that Susan Moller Okin died last week. Her book, Justice, Gender and the Family, had a major effect on political theory, and helped produce the turn to the intimate that has happened in the last decade or so: an agenda-setting achievement. I have been meaning for some time to blog about one of her arguments, but today is obviously not the day for that. I met her only once myself, but was impressed on that meeting by how the quality of the work I have admired for so long was matched by the quality of the personality I met — something one does not always find. An obituary will appear in tomorrow’s edition of the Stanford Report. (UPDATE: the full Stanford Report obituary is now online here.) Here is the press release:
Susan Moller Okin died of unknown causes last week at the age of 57. Okin was Marta Sutton Weeks Professor of Ethics in Society and professor of Political Science at Stanford University. At the time of her death she was on leave with a fellowship at the Radcliffe Institute for Advanced Study in Cambridge, Massachusetts.
Okin was the director of the Program in Ethics in Society at Stanford University from 1993-1996, and she held fellowships at the Center for Advanced Study in the Behavioral Sciences in Palo Alto and at the Rockefeller Foundation. She was also the recipient of the Bing Fellowship for excellence and innovation in undergraduate teaching, and the Allen Cox Medal for faculty excellence in fostering undergraduate research at Stanford University.
Okin was a leading voice in political theory whose work centers on justice and the absence or exclusion of women from past and contemporary political thought. She was the author of three books, Is Multiculturalism Bad for Women? (Princeton University Press, 1999); Justice, Gender, and the Family (Basic Books, 1989), for which she was the co-recipient of the American Political Science Association’s Victoria Schuck Award for the best book on women in politics; and Women in Western Political Thought (Princeton University Press, 1979).
She was a courageous voice, both in person and in her work. She spoke out against injustices wherever she saw them, often saying publicly what other people were only thinking privately. Her scholarship reflected her sense that political theory must reach out to public concerns, both in the United States and abroad. She was a feminist, with concerns about women at the heart of her work. In Justice, Gender, and the Family, Okin wrote, “My proposals, centered on the family but also on the workplace and other social institutions that currently reinforce the gender structure, will suggest some ways in which we might make our way toward a society much less structured by gender, and in which any remaining, freely chosen division of labor by sex would not result in injustice. In such a society, in all the spheres of our lives, from the most public to the most personal, we would strive to live in accordance with truly humanist principles of justice.”
She was at work on three projects at the time of her death. She was extending and expanding some recent articles on gender, economic development policies, and human rights. She was writing about evolutionary biology from a feminist perspective. And she was collecting and updating several papers on multiculturalism and women for a new volume.
Okin was born in New Zealand, and educated at the University of Auckland, and at Oxford and Harvard Universities. She is survived by her former husband, Robert Okin, and two children, Justin Okin and Laura Okin, and two sisters, Janice May and Catherine Pitt.
The charitable organization closest to Susan Moller Okin’s heart was the Global Fund for Women. A summary statement by the Global Fund describes its mission:
Your gift to the Global Fund makes it possible for grassroots women’s organizations to defend and promote the rights of women and girls all over the globe. Your support elects women to political office in rural Mongolia, … educates and empowers women who are victims of domestic violence in Chile, and provides free education to girls in Kenya…. Over the past sixteen years the Global Fund for Women has awarded more than $30 million to seed, strengthen, and link over 2,400 women’s groups in 161 countries and territories. These grants continue to sustain women’s efforts to improve and implement education programs for girls, stop violence against women, achieve economic independence, strengthen women’s political participation, and gain access to information technology.
Just prior to her death, Susan took a trip to India in January 2004, organized by the Fund. She wrote while on that trip: “My view of Mumbai’s and Delhi’s slums has been transformed from seeing them (from the outside) as totally destitute and sordid places where no one could possibly lead a decent or hopeful life, to seeing them as poor but vibrant communities, where with well-directed help from the outside, many people can improve their living conditions and hope for a better life for their children. I am inspired to give more $$ to the Global Fund [and] to help it whatever ways I can.”
A memorial fund honoring Susan Moller Okin has been set up at the Global Fund for Women. Please designate your tax-deductible gift to this fund.
The Global Fund for Women
1375 Sutter Street, Suite 400
San Francisco, CA 94109
Phone: 415-202-7640.
Fax: 415-202-8604.
Web: www.globalfundforwomen.org
Email: gfw@globalfundforwomen.org
The post below, which arose out of some discussion in my philosophy seminar last week, is a fair bit less topical than most posts on CT, but since it touches on some topics in philosophy of science and economics some people here might find it interesting. Plus I get to bash Milton Friedman a bit, but not for the reasons you might expect.
In my seminar class last week we were reading over Milton Friedman’s The Methodology of Positive Economics and I was surprised by a couple of things. First, I agreed with much more of Friedman’s view than I had remembered from last time I’d looked at it. Second, I thought there was a rather large problem with one section of the paper that I didn’t remember from before, and that I don’t think has received much attention in the subsequent literature.1
Friedman was writing (in 1953) in response to the first stirrings of experimental economics, and the results that seemed to show people are not ideal maximisers. The actual experimental data involved wasn’t the most compelling, but I think with 50 years more data we can be fairly confident that there are systematic divergences between actual human behaviour and the behaviour of people typical of economic models. The experimentalists urged that we should throw out the existing models and build models based on the actual behaviour of people.
Friedman’s position was that this was too hasty. He argued that it was OK for models to be built on false premises, provided that the actual predictions of the model, in the intended area of application, are verified by experience. Hence he thought the impact of these experimental results was less than the experimenters claimed. When I first heard this position I thought it was absurd. How could we have a science based on false assumptions? This now strikes me as entirely the wrong attitude. Friedman’s overall position is broadly correct, provided certain facts turn out the right way. But he’s wrong that this means we can largely ignore the experimental results, as I’ll argue.
Why do I think Friedman is basically correct? Because read aright, he can be seen as one more theorist arguing for the importance of idealisations in science. And I think those theorists are basically on the right track. On this point, and on several points in what follows, I’ve been heavily influenced by Michael Strevens, and some of the justifications for Friedman below will use Strevens’s terminology.2
Often what we want a scientific theory to do is to predict roughly where a certain value will fall, or explain why it fell roughly there. In those cases, we don’t want the theory to include every possible influence on the value. Some of these, although they are relevant to the value taking the exact value it did, are irrelevant to it taking roughly that value. In those cases, we can build a better theory, or explanation, or model, by leaving out such factors.
Here’s a concrete illustration of this (that Strevens uses). The standard explanation for Boyle’s Law - that for a constant quantity of gas at constant temperature, pressure times volume is roughly constant - is a model in which, among other things, gas molecules never collide. Now this is clearly an inaccurate model, since gas molecules collide all the time, but for this purpose, the model works, which tells us that collisions are not that relevant to the value of pressure times volume, and in particular to that value being roughly constant. Since this model is considered a good model, despite having the false feature that gas molecules do not collide, it seems in general we should be allowed to use inaccurate models as long as they work. That’s one of Friedman’s theses, and it’s worth highlighting.
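The collision-free model’s prediction is just the ideal gas law, PV = nRT, and a minimal numerical sketch (the values for quantity and temperature are arbitrary) shows the pressure-volume product staying put as the volume varies:

```python
# Ideal-gas model: molecular collisions are idealised away,
# and the model predicts PV = nRT.
R = 8.314  # molar gas constant, J/(mol*K)

def pressure(n_moles, temp_kelvin, volume_m3):
    """Pressure predicted by the ideal gas law."""
    return n_moles * R * temp_kelvin / volume_m3

# A fixed quantity of gas (1 mol) at a fixed temperature (300 K):
# the product P*V comes out the same whatever the volume.
for volume in (0.01, 0.02, 0.05):
    p = pressure(1.0, 300.0, volume)
    print(round(p * volume, 1))  # n*R*T = 2494.2 each time
```

The point of the sketch is just Boyle’s Law itself: the model that ignores collisions entirely still gets the roughly-constant product right, which is what licenses the idealisation for this purpose.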
Let’s note two more related things about the gas case. First, there’s no way to tell whether the size of the idealisation, removing all collisions from the model, is large or small by just looking at how many collisions there are. By any plausible measure, there are lots of collisions but it makes no difference to the pressure-volume product.
Second, whether an idealisation is large or small is relative to what you are trying to model. (I got this point from Michael Strevens as well.) If you’re trying to model the speed at which a gas will spread from an open container, you better include collisions in the model, because collisions make a big difference to how fast the gas spreads. Friedman makes the same point by noting that air pressure makes a big difference to how fast a feather falls, and a very small difference to how fast a baseball falls from low altitude. Let’s note this as an extra point.
All that I think is basically right, though it’s best to bracket issues about whether the idealisations really are small in the intended case. Let’s assume for now that there are lots of nice models that idealise away from non-maximising behaviour, and these models ‘work’ - they deliver surprising but well-confirmed predictions about economic phenomena. If so, the idealisations should be acceptable I think. The idealised models are very nice arguments that the existence of these departures from ‘perfect’ maximising behaviour is irrelevant to the phenomena being modelled.
It’s at this point that I think Friedman goes wrong. Friedman says that at this stage we have some prima facie evidence that other models using the same kinds of idealisations are also going to be correct. And this strikes me as entirely wrong. It’s wrong because it’s inconsistent with the view of the models as idealisations rather than as accurate descriptions of reality.
Note that the structure of argument Friedman is trying to use here is not always absurd. If evidence E supports hypothesis H, and the best model for hypothesis H includes assumption A as a positive claim about the world, then E is indirect evidence for A, and hence for other consequences of A. That’s what Friedman wants. He says that the success of hypotheses in other areas of economics provides indirect support for the hypothesis that there is less racial and religious discrimination when there is a more competitive labour market. I think the idea is that the other hypotheses show that people are, approximately, maximisers, so when trying to explain the distribution of discrimination we can assume they are approximately maximisers.
But it should now be clear that doesn’t make sense. Remember the very same idealisation can be a serious distortion in one context, and an acceptable approximation in another. Without independent evidence, the fact that we can idealise away from non-maximising behaviour in one context is no reason at all to think we can do so when discussing, say, discrimination. If we take Friedman to be endorsing the claim that it’s OK to idealise away from irrelevant factors, then at this point he’s trying to defend the following argument.
The fact that people aren’t perfect maximisers is irrelevant to (say) the probability that various options will be exercised.
Therefore, the fact that people aren’t perfect maximisers is irrelevant to (say) how much discrimination there is in various job markets.
And this doesn’t even look like a good argument.
The real methodological consequence of Friedman’s instrumentalism is that idealised models can be good ways to generate predictions about the economy, but every single prediction must be tested anew, because these models have little or no evidential value on their own. This conclusion might well be true, but I don’t think it’s one Friedman would want to endorse. But I think it’s what follows inevitably from his methodological views, at least on their most charitable interpretation.
1 Life’s too short to read all the commentaries on Friedman’s paper, so this last claim is not especially well backed up.
2 Some of the views I’m relying on are not published, but most of the details can be gleaned from the closing pages of this paper of Michael’s.
Tom Smith is playing Socrates to my sophistical Polus, if I make no mistake:
Polus: What? May I not speak at what length I please?
Socrates: It would indeed be hard on you, my good friend, if, on coming to Athens, the one spot in Greece where there is the utmost freedom of speech, you alone should be denied it. But look at my side. Would it not be hard on me also, if I may not go away and refuse to listen, when you speak at length and will not answer the question? (Gorgias, 461e)
But then I cannot for the life of me think why Smith does not simply refuse to listen. Perhaps he hereby sets a cunning, socratic riddle for me to solve. He feels I have not answered the question.
Another bite at the ‘conservatives in academe’ apple it is, then. Yes, it has been nibbled by everyone, right down to the core. (Especially liked Harry’s post.) But the core is interesting.
In his second post (in response to my response to his response to my original) Smith writes:
I am struck by the parallels between the sorts of justifications those on the left come up with for the plight of conservatives, with the arguments one used to hear about blacks and Jews. When I see what looks to me as somebody posing as reasonable by saying he doesn’t personally believe conservatives are stupid, it reminds me of someone at an all-white club holding forth oh-so-liberally that he does not think blacks are actually stupid. If I was overly sensitive in getting that impression from the post I reacted to at the examined life blog, I’m sorry. But I am sensitive about it. I am just one of many conservative libertarian sorts who have been excluded from consideration for academic jobs I was well qualified for because of my politics. For example, more than fifteen years ago, the distinguished legal scholar Charles Allen Wright of the University of Texas called my note editor at Yale, Penny Rostow (niece of the distinguished late former dean of Yale Law School) to ask about me and whether I was, as rumored, a conservative. As Penny told me, she “didn’t feel she could deny it.” Professor Wright said that was unfortunate, as in that event my candidacy could go no further. It was just that simple, and I got the feeling Wright genuinely regretted this, but that was just the way it was. I think this is just like their finding out that some otherwise qualified candidate was a Jew, although he could “pass,” back in the bad old days at White & Case or one of the other white-shoe Wall Street firms, where they worried about “the cut of a man’s jib.” (Don’t get that reference? A jib is the foremost sail on a yacht, shaped roughly like a nose. Its cut is its shape. Not liking the shape of a person’s nose is slang for saying you don’t like him because he’s a Jew. Nice, huh? And the answers to this discrimination have names like Skadden, Arps and Paul, Weiss, which rank well above White & Case in every recent survey of law firms I have seen.)
Seriously, folks. I don’t mean to pick on poor Smith so inordinately. As Empson writes: “Your majesty, my name is Smith/ the Lordliest name to conjure with”. Ain’t it the truth, in an abstract sort of thought-experimenty way? That lordly sense is the one we want here. (What I’m saying is the transcendental Smith is noble, even if the empirical one is sort of a pain.) Fact is: Smith’s report of a very nasty experience of discrimination has a suitably representative quality. Let’s let it represent. There are abstract issues of fairness and tolerance I wish to investigate, in an abstract and fair and tolerant way.
I am of the settled opinion that people ought not to do these things to conservatives, just because they are conservative. Yet - if Smith will forgive me for pinning the butterfly of his plight to the board - it is a bit hard to articulate why it is wrong. (Sorry to sound callous, in sight of such pathetic fluttering, but without some principle to cover the case, we are going nowhere fast. And, although Smith professes to see simplicity, I do not.)
First - let’s get something cleared away here - there is a genuine puzzle about intellectual consistency. There is a class of lefty academics who seem committed, by a lot of vague things they think (or at least feel), to jump to the aid of all persecuted victims of discrimination. Probably with proposals for affirmative action. Except they hate Republicans. So the likes of Smith have them squirming on the hook. On the other hand, I am just not sure Smith himself - and Horowitz; I could go on and on - is entitled. Life is full of unfairness such as Smith has suffered. People are near total bastards. If Smith is genuinely committed to righting all wrongs of this size and shape, consistency dictates he ought to move significantly to his left and start fighting for social justice on about ten fronts. Also, in a recent post he complains about the tendency of the soft left to focus on “feelings, sensitivities, nicenesses, fairnesses and all that other stuff that makes me feel like I’m trapped in a cheap restaurant where the food smells bad and they’ve used too much air freshener.” Not to get all ‘he who smelt it dealt it’ on the man, but sometimes the fresh scent you whiff may be your own. If ‘feelings’ and ‘sensitivities’ are beneath argumentative dignity, you can’t in good conscience answer arguments with a lot of talk about your sensitivities and feelings. On the other hand, as a bit of a soft lefty, I’m sort of supposed to care about such soft things. Fair enough.
At the end of the day, poetic justice is not justice, though it’s sort of catchy. That is, at some point you cut out with the petard hoists already and try to sort out what you think is right. So let’s.
One thing that’s striking about Smith’s story is how flagrant his dismissal apparently was. I am always genuinely surprised by stories like this, but I don’t disbelieve them. I know from experience that they are at least sometimes true. Read the front page horror story today over at Horowitz’ FrontPage Magazine. Then browse here for more Tales of Tenured Terror From the Crypt. I don’t know that it’s all true, of course. In fact, I doubt it. (I think once you encourage people to share these sorts of stories, they probably get a bit embellished in the telling.) But some of it is true. And the amazing thing is: you could fix almost all of it just by stipulating that instructors must treat their students in a respectful, professional, dignified, non-psychotic manner. Seriously. Yet even if all nasty political theatrics were halted - even if those few who go in for it would forego happy hours of spewing personal bile at their helpless Republican student-victims - I don’t think the likes of Smith would be satisfied. Nor should they be, perhaps. Suppose, instead of just ‘conservative? don’t let the door hit your ass,’ it were sugar-coated in faux-respect: ‘well, we just felt that your work wasn’t quite up to our high standards, because there are - frankly - so many devastating arguments on the other side, to which you obviously have no adequate response. Thank you, come again!’ That’s worse, isn’t it? So, although in a sense you could solve these problems by insisting that no one be dismissed or dissed outright for being conservative, lefties could achieve the same results by more high-toned means. So let us counsel lefty profs to behave themselves while admitting we haven’t gotten to the nub of the complaint.
What could be wrong with lefty profs maintaining their iron fist of hegemonic dominance (let’s grant they enjoy it) so long as it wore a velvet glove of civility and superficial respect for conservatives? (You see what I am saying.)
Smith suggests an analogy with persecuted Jews. Problem is: conservatives do not constitute a religion, a race, an ethnic group. Of course, ‘political beliefs’ is often added to lists of this sort. And one might venture that conservatives constitute a cultural group - the whole blue state/red state thing. We think that American universities should ‘look like America’, in some probably not very clearly conceived yet heartfelt sense; and America is rather conservative. But this is troublesome. The political science department can hardly ignore political beliefs in making hiring decisions. (Might as well ignore the beliefs of the physicists about physics.) More generally, the conservatism that bangs at the gates of the Ivory Tower, demanding admission, is an intellectual position, or cluster of them. Even if the academy ought to be democratic in some sense, in another sense - intellectually - it had better be aristocratic. The whole point is to discriminate: separate the good ideas from the bad, all that. ‘It is, at present, the consensus in a number of fields that conservative ideas are beyond the frozen limit. Thank you! Come again!’ Of course, this is unsatisfactory, for reasons whose production I leave as an exercise to the interested reader.
At the end of the day, lefties cannot plausibly posture as Platonists who have left the cave and apprehended cognitive truth beyond the ken of their conservative, troglodyte brethren. It’s just that lefties and conservatives don’t agree. Fundamental differences in moral temperament. Hilary Putnam describes the situation well, pondering his fraught relationship to Nozick (from this Mark Kalderon chapter; thanks, Brian):
But what of the fundamentals on which one cannot agree? It would be dishonest to pretend that one thinks that there are no better and worse views here. I don’t think that it is just a matter of taste whether one thinks that the obligation of the community to treat its members with compassion takes precedence over property rights; nor does my co-disputant. Each of us regards the other as lacking, at this level, a certain kind of sensitivity and perception. To be perfectly honest, there is in each of us something akin to contempt, not for the other’s mind – for we each have the highest regard for each other’s minds – nor for the other as a person –, for I have more respect for my colleague’s honesty, integrity, kindness, etc., than I do for that of many people who agree with my ‘liberal’ political views – but for a certain complex of emotions and judgments in the other
And now the lefty asks, all innocence: ‘If you can’t hate someone because he is contemptible, what can you hate him for?’
And the conservative says: ‘You contemptible little swine! I deserved that job!’
Returning to Smith’s case: why was it wrong for professor Wright to discriminate against Smith on the basis of Smith’s conservative temperament? I personally take it to be wrong. Smith takes it to be wrong. But obviously professor Wright saw things differently. What was his error? Do we think that the university ought to look like America temperamentally? On what meta-view is this an inherently desirable - nay, mandated - goal? Answer: J. S. Mill’s view that we all ought to read Bentham and Coleridge, because there’s a sort of permanent, eternally-poised balance of wisdom between the progressive and the conservative mind.
This is not a problem for me because I sort of think that’s right, most days. (I like Mill and Trilling. I surely do.) Certainly I don’t feel blanket contempt for conservatives - though most of them vex me sorely, and I think they are going through a very bad patch, except for the libertarians. (And Russell Arben Fox, and his five friends, who are stand-up guys.) I need conservatives around to use as punching bags. I think there ought to be more conservatives in lots of corners of academia because they would generally brighten the place up. So there is a perfectly straightforward utilitarian justification for hiring them. No arguments about their violated rights are necessary. They get in on the merits, if everything goes smoothly.
But suppose I really didn’t see the utilitarian benefits of conservatives always chiming in with their contemptible judgments. Why should I feel that fairness requires me to give them some professorships? A lot of standard arguments - Kantian ones about respecting autonomy, for example - don’t cut ice from here on out. I can think I have no business paternalistically preventing someone from holding wrongheaded views. That does not mean I am obscurely obliged to give anyone an endowed chair in applied wrongheadedness. (Kantianism - and liberalism generally - mandates respect in a weak sense; it takes more than that to get hired as a professor.)
Commitment to fallibilism? Get serious. Lefties and righties who tolerate each other do not do so because they seriously worry that some advanced discovery in ethical theory might show the other side to be right tomorrow.
The fact that you should only judge people on their intellectual capacities for intellectual jobs? But no one can seriously doubt that temperament, of the sort we are discussing, fundamentally affects - maybe dictates - the intellectual outcome in humanistic endeavors. We can hardly care about intellectual endeavors without caring about their outcomes. So why not care about temperament?
Getting back to Mill: why does he think it’s important to have a healthy ecological balance of conservative/lefty - or conservative/progressive, as he more sensibly styles this opposition? He has sort of a method-acting theory of the whole business, at least in part. The human mind - or maybe: humanity as a whole - has an innately schizophrenic temperament; and if the philosophical spirit departs from this schizophrenia, spiralling off into undue consistency, it must lose touch with one or the other of its spiritual tap-roots; then philosophy loses touch with life and mind itself, which will always be the same.
Now the Platonist (not to mention Mill in his progressive moods) will not like this at all. The point of thinking about right and wrong is not to engage in endless conservative-progressive isometrics in the cave, by way of ending up exactly where we damn began. The point is not to wallow in our schizophrenic inadequacy and inability to rise above ourselves. The point is to purge wrong elements from our spirits - conservative impulses, say - and emerge into the glorious light of progressive enlightenment. (There is an alternative, conservative version of this program, obviously.)
Yet there is a beautiful cynicism to the thought that, as natural-born spelunkers, we do best to stay where we were put. I quote Terry Pratchett’s Small Gods. The speaker is, of course, a philosopher:
‘Life in this world,’ he said, ‘is, as it were, a sojourn in a cave. What can we know of reality? For all we see of the true nature of existence is, shall we say, no more than bewildering and amusing shadows cast upon the inner wall of a cave by the unseen blinding light of absolute truth, from which we may or may not deduce some glimmer of veracity, and we troglodyte seekers of wisdom can only lift our voices to the unseen and say, humbly, ‘Go on, do Deformed Rabbit … it’s my favourite.’
It is possible to view the request to balance conservative-progressive temperaments as a request to ‘do Deformed Rabbit’. It’s ugly, it makes no sense - it’s all shadow-play; it just has to be - but it’s our favorite. So let’s have it.
The thing that makes this less absurd is that it touches on a more positive conception of value. I quote from Sherwood Anderson’s Winesburg, Ohio (why? because Lionel Trilling likes it). This is from a vignette, “The Book of the Grotesque”, in which a rather grotesque old man writes a book about grotesque people:
In the beginning when the world was young there were a great many thoughts but no such thing as a truth. Man made the truths himself and each truth was a composite of a great many vague thoughts. All about in the world were the truths and they were all beautiful. The old man had listed hundreds of the truths in his book. I will not try to tell you of all of them. There was the truth of virginity and the truth of passion, the truth of wealth and of poverty, of thrift and of profligacy, of carelessness and abandon. Hundreds and hundreds were the truths and they were all beautiful. And then the people came along. Each as he appeared snatched up one of the truths and some who were quite strong snatched up a dozen of them. It was the truths that made the people grotesques. The old man had quite an elaborate theory concerning the matter. It was his notion that the moment one of the people took one of the truths to himself, called it his truth, and tried to live his life by it, he became a grotesque and the truth he embraced became a falsehood.
Quite frankly, I don’t see why you should give professorships to people with ideologies you find vaguely contemptible unless you are, in some sense, trying to keep hold of your primordial non-grotesqueness. Or unless it’s the only way to keep everyone from just killing each other.
And yet I don’t believe many people believe in this sort of thing. I mean: a primordial soup of logically incompatible values - conservatism, progressivism, so forth. David Horowitz, for example. He sells T-shirts about how you can’t be educated if you’ve just heard half the story. But I don’t think he actually believes there are two stories, worth telling. He just wants to be the one to wrest that last copy of Eric Hobsbawm - or whoever - from the cold, dead hand of the last tenured weird-beard, in the last pinnacle of the last Ivory Tower. And repopulate it all from the ground up with right-thinking folk. Perhaps I’m being ungenerous. But I just don’t think many conservatives - or lefties, though I hope we are a little wiser - believe in ‘two sides of the story’ as an end in itself. Two sides is a tactical bother that is difficult to overcome, and perhaps therefore worth ratifying into semi-permanence by means of disagreeable treaties. That is all. Don’t know where Smith stands on all this.
This is all related, in subtle ways I can’t tell you about because I don’t yet know what they are, to certain ideas in this interesting paper Brian linked to just yesterday.
This is maybe it for me at Crooked Timber. My week is up. It’s been fun. (I’ll probably be back again if they’ll have me. Maybe Belle and I can be like Green Lantern and Supergirl. You know. Heroes who are only in a few episodes of the Super Friends. But you were always glad when it turned out to be one of those episodes with heroes you didn’t usually get to see. Made for more variety.)
UPDATE: Reading through comments, just thought I would mention one way in which the post has been misread. People are taking me to be in favor of discrimination against conservatives when, in fact, I am quite staunchly against. I think the reason for the confusion is that I say at a couple points that ‘it’s not clear why it’s wrong’. I see how this can cause confusion. It sounds like I’m saying: ‘hey, it looks OK.’ Actually, what I meant was: here is a thing that seems very wrong, but it is surprisingly hard to articulate a principle explaining WHY it is wrong. Looking for that, you sort of experimentally poke at the thing agnostically. Which makes you look like a bastard contemplating discrimination against conservatives. But, no, you are just an inquiring mind. Really. I honestly wish there were more conservatives in the humanities. It would be a better place all around.
This is what I need more of - theoretical justifications for not reading things.
Neil Levy, Open-Mindedness and the Duty to Gather Evidence; Or, Reflections Upon Not Reading the Volokh Conspiracy (PDF)
At times Neil comes perilously close to endorsing Kripke’s paradox. Assume p is something I know. So any evidence against p is evidence for something false. Evidence for something false is misleading evidence. It’s bad to attend to misleading evidence. So I shouldn’t attend to evidence against p. So more generally I should ignore evidence that tells against things I know.
But Neil’s main point is more subtle than that. It’s that it can be a bad idea to approach a topic as an expert when in fact you’re not one. And that seems like good advice, even if you really should be reading the Volokh Conspiracy (for instance).
I recently read Nietzsche’s The Genealogy of Morality with a group of colleagues. To the extent to which I understood the book (and despite the book’s brevity I’m feeling somewhat sympathetic to those snakes who have to sit around whilst they digest a large mammal), my comprehension was greatly assisted by Brian Leiter’s excellent Nietzsche on Morality. Reading the reviews and commentary on Mel Gibson’s Passion, I was immediately reminded of a passage from the second essay, where Nietzsche is writing about the genesis of guilt from the sense of indebtedness (at first to ancestors) and remarks on the further excruciating twist that Christianity brings: on the pretext of having their debts forgiven, believers are put in a position of psychological indebtedness from which they can never recover (He sent his only son, and we killed Him):
…. we confront the paradoxical and horrifying expedient with which a martyred humanity found temporary relief, that stroke of genius of Christianity—God’s sacrifice of himself for the guilt of human beings, God paying himself back with himself, God as the only one who can redeem man from what for human beings has become impossible to redeem—the creditor sacrifices himself for the debtor, out of love (can people believe that?), out of love for his debtor! (sec. 21)
I haven’t seen Gibson’s film yet (since it doesn’t open in the UK for another month) but it is clear from the reviews that it is precisely this aspect of the Christian story that Gibson accentuates through his relentless focus on the torture and suffering of Jesus. (And see the email of the day on Andrew Sullivan for evidence that some believers are taking the movie in exactly this way.)
Contrast this with, say, Pasolini’s treatment of the story in his The Gospel According to St. Matthew, where another aspect of the Christian message is emphasised: that we all belong to a common humanity, that each person has moral worth and should be recognised as such, and that compassion is an appropriate attitude to the suffering of our fellow humans (a vision powerfully expressed, also, in Joan Osborne’s song “One of Us”). Nietzsche doesn’t like this aspect of Christianity either, of course, but for me at least, it is the most attractive feature of the religion. Not just attractive, of course, but morally and politically important and influential: the basic equality of humans posited by both Locke and Kant is strongly rooted in this Christian tradition (which poses an unresolved problem, I think, for those of us who want to hang onto that moral idea whilst rejecting religion - cf. Jeremy Waldron’s recent God, Locke and Equality).
One of the reasons I can’t bring myself to share the antipathy to religion that is expressed by someone like our esteemed regular commenter Ophelia Benson, is that, at its best, religion succeeds in a symbolic articulation of universal moral concern that secular morality finds it hard to match up to (motivationally, I mean). Secular morality is a thin gruel compared to the notion that, as children of God, we are to think of ourselves as brothers and sisters. It sounds as if Gibson’s film is a reminder not of religion at its best, but at its very worst: cruel and sadistic and aiming to provoke a mixture of guilt, worthlessness and rage in believers. I’m keeping an open mind about whether the film is specifically anti-semitic, but it sounds very much as if the film draws on and inflames the very reactive attitudes that have inspired much religious violence and persecution (not to speak of personal unhappiness) in the past.
Following the whole Max Cleland, Ann Coulter, Mark Steyn controversy the other day, I was struck by the fact that the defenders of the smearers thought it a sufficient reply to their critics to say that what was said was literally true. (Whether it was literally true is, of course, another matter.) For once, it seems to me, philosophy can be of some use in showing that such a reply is inadequate.
Speech act theory is a pretty unsexy branch of philosophy of language these days (though elsewhere people like Habermas keep it above the visibility threshold, and there have been some daft attempts to deploy it in defence of the idea that pornography silences women). Indeed I’m not even sure that students get taught the basic distinctions on phil lang courses (which tend to be post-Davidsonian in content). But when it comes to thinking about what is going on in political discourse, it isn’t half helpful.
In his book How to Do Things With Words, J. L. Austin famously distinguishes among three aspects of an act of communication by a person:
(1) Locutionary content — the literal meaning of what is said or written.
(2) Illocution — what they are trying to do in speaking or writing. So they may be warning, threatening, insulting, smearing, praising or whatever.
(3) Perlocutionary effect — what they manage to achieve chez the hearer or reader. So they may be trying to threaten me but, if I just burst out laughing then the perlocutionary effect is somewhat different to the one they intended by their illocutionary act.
Now it is plain, I think, that most human discourse, and especially most op-ed comment, doesn’t take the form of simply informing the reader of the literal meaning of a series of sentences. Indeed, its principal goal is, to put things in rather 18th-century terms, the inflammation of the passions. The purpose of Coulter and Steyn in writing the sentences they wrote wasn’t to convey an accurate picture of Cleland’s military and political career (a task which would have taken many, no doubt tedious, volumes). It was rather to demean and belittle him in the eyes of their readers and to neutralize him as a critic of the US Republican Party and the Bush administration. To appeal to the literal truth of the few sentences they wrote is as disingenuous here as Marc Antony saying that Brutus was an honourable man, and is no defence at all to the charge that they were engaged in a foul smear. At least, it is nowhere close to being sufficient to rebut that charge, partly because of the inevitably selective nature of the “facts” they chose to recount.
As luck would have it, the perlocutionary effects of Coulter’s and Steyn’s acts of smearing have been to provoke ridicule and to lower Steyn’s reputation yet further in the eyes of his readers (an effect not possible in Coulter’s case for rather obvious reasons).
The APA (American Philosophical Association) is looking for stories about how valuable philosophical training has been to people other than professional, full-time philosophers.
The following mail just went out over the APA mailing list.
Dear Colleague: I write in behalf of the American Philosophical Association to ask one specific question.
We who teach philosophy know that we are unlike most of our students, very few of whom pursue our subject professionally. We also know that philosophy can have a profound and lasting influence on students who follow other paths and who later view their undergraduate study in philosophy to have been powerfully important to their intellectual and professional development. Once in a while we learn of this through public comments made by distinguished people in fields as diverse as finance, the arts, or medicine. More often it is private, anecdotal information: an alumnus writes, out of the blue, or a chance meeting prompts an expression of gratitude, or sometimes even an unexpected bequest confirms that philosophy mattered more than we then knew to someone who remembered it to the end.
The APA would like to compile more systematic information about how philosophy has affected and is appreciated by such former students. But finding them is hard. We ask each of you to think about former students who have gone on to success or even distinction in some other domain, and just tell us who they are and, if you know, where we might find them. We’ll then ask whether they are willing to tell us what their previous study of philosophy means to them now. We might also use some of these names later in a fund drive we are now planning.
Thus, the question: who are they? Please reply, if possible, within two or three weeks. Your responses collectively can be of great help to the APA and to our discipline.
Thank you.
Regards,
Michael Kelly
Executive Director
American Philosophical Association
This seems like a good project, and if you can help out, either by suggesting former students or as a philosophy graduate who wants to say just how good philosophical training is, please contact Michael.
UPDATE: Further to the post yesterday, Brock Sides (via Tyler Cowen) points to this list of famous philosophy grads. I never knew Bill Clinton was a philosophy major. That explains a lot, I think - we know we don’t know what the meaning of ‘is’ is either.
This will mostly be of interest to philosophers and fellow travellers. The APA Pacific Division conference program is now online. This is worth noting for a couple of reasons. First, the conference is absolutely packed with good papers. Every session has, IMNSHO, multiple papers that are worth travelling to see. If you are undecided about whether to go to the conference, seeing the program should tip the balance. Second, there is a mini-conference on global justice running during and after the APA, organised by (among others) our own Harry Brighouse. This will be of interest to many CT readers I think. Since this does not entirely overlap the APA, those interested in it should make sure their travel plans allow them to attend. I imagine many attendees will have already booked their travel to the conference, but for those that have not, it is worth checking to see whether you want to stay around for the mini-conference after the main show.
The “Centre for Research in Modern European Philosophy” is organizing a conference with the nonsense title of NOISETHEORYNOISE#1 although NONSENSETHEORYNONSENSE might be more appropriate. The “theme” of the conference is described thus:
Noise is an unprecedented harbinger of aesthetic radicality: no-one yet knows what it is or what it means. This non-significance is its strength rather than its weakness. Noise is ‘non-music’ not because it negates music but because it affirms a previously unimaginable continuum of sonic intensities in which music becomes incorporated as a mere material.
And further elaborations include:
Where a ‘new aestheticism’ might present itself as a resistance to pragmatic instrumentality, postmodern academicism continues to adopt theory as ballast: works are mere pretexts for ostentatious displays of theoretical chic. But in what way could noise change the conditions of theoretical possibility, not to say intelligibility or even sensibility?
In what way indeed? Explanations on a postcard please …. (or in comments).
I was thinking of leaving my little rant about Colin McGinn somewhere where other Timberites might not get any blame for it, but since Chris mentioned it, I figure it’s worth reposting here. McGinn is a relatively famous British philosopher, now at Rutgers, who in the 1980s produced some influential material on the mind-body problem, although his more recent work has not attracted as much attention. For various reasons (including his meteoric rise through the profession, the accessibility of his theories, his wide ranging interests, and his willingness to produce harsh verdicts on other philosophers) he became fairly well-known in broader intellectual circles. And now he’s written an autobiography. This led to an interview in the Times of London. (Note this is now subscriber-only, but I’ve put most of the text on my site.) The most notable passage is:
“I won’t talk to my colleagues about philosophy. It is too boring to me,” he says. But why?
“They are too stupid.”
He can’t say that! “No, they don’t get it. And I don’t want to have an hour’s conversation about it.”
But they have read the same texts?
“Oh, yes. This is where I get much more intolerant. I know exactly what they are going to say. They ought to know what I am going to say, but apparently they don’t.
“It is a fault. But I am not as bad as Bernard Williams. He apparently was horrible to people. He could not tolerate people being less clever than him. He was quicker than anybody else, and if they were not as quick as him, he would show his disdain for them.”
It’s worth noting that in most people’s view Rutgers has some of the smartest philosophers currently active, and in McGinn’s area of work (philosophy of mind) it is probably the leading department in the world. It is also worth noting that the memorial service for Bernard Williams at Oxford was a few days after this piece was published in the Times, although possibly McGinn would not have known that when he gave the interview.
Elsewhere he claims to be a vegetarian who happens to eat meat, which opens up whole new ethical possibilities. Could one be a charitable man who just happens to have not made any donations for a decade or so?
One reason for highlighting all this of course is that it’s very amusing, and blogs are built for this kind of light comedy. Another is that I feel like sticking up for my “stupid” friends. But the other thing that quite annoys me is the worry that people will read this and have McGinn as their model of a modern analytic philosopher. There are any number of people who could be worth interviewing in a major newspaper who would generate a more positive, and more accurate, impression of the state of the profession. (Dave Chalmers, call your agent!)
It’s worth mentioning again that CT has no communal policy, so everything I post is the responsibility of me and me alone.
Brian has a post on his other blog which I think ought to get wider circulation: it is a discussion of and reproduction of a Times interview/profile of cuddly, charming, self-effacing philosopher Colin McGinn.
Norman Geras tells a couple of Sidney Morgenbesser anecdotes, but (at least IMHO) omits the best one, where Morgenbesser was asked his opinion of pragmatism:
“It’s all very well in theory but it doesn’t work in practice.”
Here’s Wolf Blitzer’s current poll question:
Do you think any of the Democratic candidates for president can beat George W. Bush?
I honestly don’t know what this means, so I figure I’d throw it over to the LazyWeb. It seems to me that if I answer ‘Yes’, I’m implying that I believe that any of the Democratic candidates for president can beat George W. Bush. And that’s false since I know Sharpton and Kucinich can’t. (At least if we ignore distant possible worlds they can’t.) But if I answer ‘No’ I’m implying that I don’t believe that any of the Democratic candidates for president can beat George W. Bush. And that’s false since I know Dean, Clark, Kerry etc can all handily whip Bush.
The problem is that ‘any’ behaves differently in positive and negative environments. Maybe this is just a presupposition failure, as in “Have you stopped voting Republican?” but I don’t remember seeing it discussed before.
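The two readings can be mimicked with Python’s `all` and `any` (the truth-values below are invented for illustration, not predictions):

```python
# Toy model of the ambiguity: does EVERY candidate have a chance, or does
# SOME candidate? Invented truth-values purely for illustration.
can_beat_bush = {"Dean": True, "Clark": True, "Kerry": True, "Sharpton": False}

# Reading 1 (the one a 'Yes' answer seems to assert): 'any' read universally.
universal_reading = all(can_beat_bush.values())

# Reading 2 (the one a 'No' answer seems to deny): 'any' read existentially.
existential_reading = any(can_beat_bush.values())

print(universal_reading, existential_reading)
```

With these toy values the two readings come apart: the universal reading is false and the existential reading is true, which is exactly the bind the poll question creates.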
I have an inexplicable fondness for college ‘football’, but I’m worried about what will happen to the economy Sunday if this NY Times report is correct.
If the [LSU] Tigers win and claim the Bowl Championship Series title, Saban will be paid one dollar more than the highest-paid college coach in the nation, according to an incentive clause in his contract.
Since Saban is a college coach, it seems he must be paid a dollar more than he is paid. Which can only happen if a dollar is worthless, which I imagine would be rather disastrous for well-established economic relations.
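A throwaway sketch of the structural problem, with invented salary figures: treat the clause as a function from current salaries to Saban’s new salary, and notice that iterating it never settles down.

```python
# Invented figures for illustration; the point is structural, not financial.
other_coach_salaries = [2_300_000, 2_100_000, 1_900_000]

def clause(saban_salary):
    """What the contract pays Saban, given the salaries of all college
    coaches - himself included, on the literal reading."""
    return max(other_coach_salaries + [saban_salary]) + 1

salary = max(other_coach_salaries)  # initial guess
for _ in range(5):
    salary = clause(salary)  # each application adds a dollar

print(salary)  # 2300005 - and it would keep growing forever; no finite fixed point
```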
There are a few potential ways out of this problem.
First, one might argue that the above reasoning depends on Saban having a finite salary. If his salary is infinite, then one could argue that very loosely speaking he is paid a dollar more than he is paid. But believing this probably depends on confusing cardinals with ordinals, and in any case having infinite amounts of money sloshing through the system really can’t be good for inflation. So let’s not take that option seriously.
Second, Saban could be fired immediately so that the initial premise, that he is not a college coach, is broken. This would be a rather ungrateful reaction to the guy who just won you a (share of the) national championship, but it might be in the best interests of the world economy.
Third, LSU might get beaten. I quite like LSU though, largely because they host my favourite ethics conference, so I don’t want this outcome if it can be avoided.
Obviously this is all meant to be something of a joke, because one presumes that the quantifier domain in Saban’s contract is meant to only include other coaches. Even on that interpretation, as soon as one other coach gets the same clause we could be in trouble. And given how hard it’s been for Nebraska to lure a big name coach, I would not be surprised if they do offer such a clause to prospective candidates.
Thanks to Invisible Adjunct for the original link.
Remembering the Eisenhower parody below had me leafing through the Macdonald anthology and looking at some of my other favourites (and then googling to see if they are on the web anywhere). Pride of place goes to Paul Jennings’s Report on Resistentialism, which begins thus:
It is the peculiar genius of the French to express their philosophical thought in aphorisms, sayings hard and tight as diamonds, each one the crystal centre of a whole constellation of ideas. Thus, the entire scheme of seventeenth century intellectual rationalism may be said to branch out from that single, pregnant saying of Descartes, ‘Cogito ergo sum’ - ‘I think, therefore I am.’ Resistentialism, the philosophy which has swept present-day France, runs true to this aphoristic form. Go into any of the little cafés or horlogeries on Paris’s Left Bank (make sure the Seine is flowing away from you, otherwise you’ll be on the Right Bank, where no one is ever seen) and sooner or later you will hear someone say, ‘Les choses sont contre nous.’ ‘Things are against us.’ This is the nearest English translation I can find for the basic concept of Resistentialism, the grim but enthralling philosophy now identified with bespectacled, betrousered, two-eyed Pierre-Marie Ventre.
Read the whole thing.
I’m in the odd position that my favourite ethical theory is one I regard as having been decisively refuted. The theory is a form of consequentialism that I used to think avoided all the problems with traditional forms of consequentialism. I now think it avoids all but one or two of those problems, but those are enough. Still, whenever I feel like letting out my inner amateur ethicist, I keep being drawn back to this theory.
It’s a form of consequentialism, so in general it says the better actions are those that make for better worlds. (I fudge the question of whether we should maximise actual goodness in the world, or expected goodness according to our actual beliefs, or expected goodness according to rational beliefs given our evidence. I lean towards the last, but it’s a tricky question.) What’s distinctive is how we say which worlds are better: w1 is better than w2 iff behind the veil of ignorance we’d prefer to be in w1 to w2.
What I like about the theory is that it avoids so many of the standard counterexamples to consequentialism. We would prefer to live in a world where a doctor doesn’t kill a patient to harvest her organs, even if that means we’re at risk of being one of the people who are not saved. Or I think we would prefer that, I could be wrong. But I think our intuition that the doctor’s action is wrong is only as strong as our preference for not being in that world.
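One crude way to operationalize the comparison is to treat the veil as assigning you a uniformly random position in each world and compare average welfare. This is only a sketch: the theory itself doesn’t require expected-welfare maximization (veil preferences could be risk-averse), and the welfare numbers are invented purely for illustration.

```python
# Crude operationalization of 'w1 is better than w2 iff we'd prefer w1
# behind the veil of ignorance', using average welfare over positions.
# All numbers below are invented for illustration.

def veil_value(world):
    """Expected welfare of landing in a uniformly random position in `world`."""
    return sum(world) / len(world)

# Transplant case, five positions. In w1 the doctor refrains; one patient dies.
w1 = [10, 10, 10, 10, 0]
# In w2 the doctor harvests organs; four are saved, but everyone fears doctors.
w2 = [6, 6, 6, 6, 6]

better = "w1" if veil_value(w1) > veil_value(w2) else "w2"
print(better)
```

On these made-up numbers the refraining world wins, matching the intuition in the text; with different numbers (or a different veil preference than simple averaging) the verdict could flip, which is where the interesting philosophy lives.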
We even get something like agent-centred obligations out of the theory. Behind the veil of ignorance, I think I’d prefer to be in a world where parents love their children (and vice versa) and pay special attention to their needs, rather than in a world where everyone is a Benthamite maximiser. This implies it is morally permissible (perhaps even obligatory) to pay special attention to one’s nearest and dearest. And we get that conclusion without having to make some bold claims, as Frank Jackson does in his paper on the ‘nearest and dearest objection’, about the moral efficiency of everyone looking after their own friends and family. (Jackson’s paper is in Ethics 1991.)
So in practice, we might make the following judgment. Imagine that two children, a and b, are at (very mild) risk of drowning, and their parents A and B are standing on the shore. I think there’s something to be said for a world where A goes and rescues her child a, and B rescues her child b, at least if other things are entirely equal. (I assume that A and B didn’t make some prior arrangement to look after each other’s children, because the prior obligation might affect who they should rescue.)
But what if other things are not equal? (I owe this question to Jamie Dreier.) Imagine there are 100 parents on the beach, and 100 children to be rescued. If everyone goes for their own child, 98 will be rescued. If everyone goes for the child most in danger, 99 will be rescued. Could the value of paying special attention to your own loved ones make up for the disvalue of having one more drown? The tricky thing, as Jamie pointed out, is that we might ideally want the following situation: everyone is disposed to give preference to their own children, but they act against their underlying dispositions in this case so the extra child gets rescued. From behind the veil of ignorance, after all, we’d be really impressed by the possibility that we would be the drowned child, or one of her parents.
It’s not clear this is a counterexample to the theory. It might be that the right thing is for every parent to rescue the nearest child, and that this is what we would choose behind the veil of ignorance. But it does make the theory look less like one with agent-centric obligations than I thought it was.
This leads to a tricky taxonomic question. Is the theory I’ve sketched one in which there are only neutral values (in Parfit’s sense) or relative values? Is it, that is, a form of ‘Big-C Consequentialism’? Of course in one sense there are relative values, because what is right is relative to what people would choose from behind the veil of ignorance, and different people might reasonably differ on that. But within a community with common interests, do we still have relative values or neutral values? This probably just reflects my ignorance, but I’m not really sure. On the one hand we have a neutrally stated principle that applies to everyone. On the other, we get the outcome that it is perfectly acceptable (perhaps even obligatory) to pay special attention to your friends and family because they are your friends and family. So I’m not sure whether this is an existence proof that Big-C Consequentialist theories can allow this kind of favouritism, or a proof that we don’t really have a Big-C Consequentialist theory at all.
(By the way, the reasons I gave up the theory are set out in this paper that I wrote with Andy Egan. The paper looks like a bit of a joke at first, but it makes a moderately serious point. Roughly, the point is that although the form of consequentialism set out here is not vulnerable to just the form of ‘moral saints’ objection that seems devastating to Benthamite utilitarianism, there’s still a moral saints objection, and that is a problem.)
In a couple of recent posts, Matt Yglesias has raised the question of how consequentialists should handle “other-regarding” preferences. He gives two examples. The first is about the possible execution of Saddam Hussein
My own take on the punishment issue leads to a somewhat paradoxical result. … If Iraqis would feel better with him executed, then go for it…
I like to think of this as a wise and sophisticated point of view, but the trouble is that my preferences depend on other people’s preferences. As long as not very many people agree with me, that’s fine, but if some huge portion of the world were to decide I was right, then you’d wind up with an unfortunate self-reference paradox. Sadly, consequentialist attitudes tend to have these kind of results and I think that if I were smarter I would dedicate my life to resolving the problems.
The second is about the preferences of people who are repulsed by overtly gay behavior. Matt concludes that their preferences must be counted, although they should be argued against.
This is an issue of considerable practical interest to resource and environmental economists, because of the popularity of stated preference methods for evaluating public goods such as environmental preservation. I find these methods problematic and one big problem is the treatment of other-regarding preferences.
This is why I have an article on the topic in the American Journal of Agricultural Economics (PDF and algebra alert). Not, I imagine, the kind of journal that philosophers like Matt read with any regularity.
In this paper, I show that the kind of disinterested (the jargon term is ‘nonpaternalistic’) altruism considered in the Iraq example (I want whatever the Iraqis want) should not be counted in a consequentialist evaluation. The argument proceeds by comparing what you’d get by looking at individual preference statements with those from M members of a mutually altruistic household or community. You’d hope that, in any operational procedure, these would be the same, especially when each member of any given household wants the same thing.
This is true of a voting procedure, for example. We get the same outcome if we allow each member of each household to vote individually, or if we let them collectively cast M votes (remember that in this simple example they all vote the same way).
By contrast, if you try to implement a consequentialist assessment taking account of the mutual altruism of the household, you end up in a complete mess. In the case of perfect altruism, each individual gets counted M times over, once for themselves and once for every other family member. Things get even worse when some groups have negative altruism towards others (I want them to get whatever they don’t want).
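To make the double-counting worry concrete, here’s a toy numerical sketch (my own construction, not from the paper) of evaluating a public good under perfect mutual altruism. Summing each member’s altruistic utility counts every underlying benefit M times, so larger households get an inflated weight, and the ranking of outcomes can flip relative to a straight sum of benefits.

```python
def naive_welfare(household_benefits):
    # Each member's utility = own benefit + benefits of every other member
    # (perfect mutual altruism). Summing these across the household counts
    # each underlying benefit M times, where M is the household size.
    total = 0
    for _ in household_benefits:
        total += sum(household_benefits)  # own benefit + everyone else's
    return total


def direct_welfare(household_benefits):
    # Count each person's benefit exactly once.
    return sum(household_benefits)


# Two households valuing a public good: a single person who benefits by 4,
# and a family of three who each benefit by 1.
single = [4]
family = [1, 1, 1]

print(direct_welfare(single), direct_welfare(family))  # 4 vs 3
print(naive_welfare(single), naive_welfare(family))    # 4 vs 9
```

On the direct count the single person’s benefit (4) outweighs the family’s (3); on the naive altruistic count the family’s total balloons to 9 purely because there are three of them, which is the mess described below.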
So Matt’s Iraqi example can be resolved reasonably easily by saying that we should evaluate the consequences for Iraqis, on the standard assumption that they know their own preferences, and that Matt’s altruistic preferences should be disregarded. Some difficulties arise when we ask whether Kuwaitis, Iranians etc should have a say also, but I don’t think they are insuperable.
The second problem is much trickier, and can’t be resolved at the level of abstraction we are currently using, simply because we don’t have a self-evident criterion for classing things as self-regarding or other-regarding. Consider, alongside Matt’s example of overtly gay behavior, the cases of public nudity and smoking in enclosed public places. I guess that most people reading this would want to permit the first, limit the second and prohibit the third, but that obviously would not have been the case fifty years ago.
As Matt mentions, Mill tried to bluster his way past this one at the beginning of On Liberty, but he also showed the correct approach with the rest of his discussion. Rather than seeking a first-principles argument that other-regarding preferences should not count, Mill gives consequentialist arguments to suggest that we are all better off if society defines a sphere of self-regarding actions and lets individuals choose for themselves within this sphere. This is true even if, in some short term sense, aggregate utility would be increased by imposing conformity with social norms. Hence, once we have decided that marriage is (largely) within the private sphere* and that homosexuals and heterosexuals should have equal rights, we should disregard, for policy purposes, the preferences of people who are uncomfortable about this. [You can strengthen this case with rule-utilitarianism if you want to, but I don’t think it’s necessary]
*Of course, there’s a huge feminist debate about this, but I don’t think it’s crucially relevant to the point I’m making.
Reference: ‘Individual, household and community willingness to pay for public goods’, AJAE 1998, 80(1), 58-63.
The London Times now syndicates Randy Cohen’s The Ethicist columns from the NYT Magazine. I was appalled to read today’s muddled effort:
IN MY CAR, the back seats by the doors have lap belts and shoulder harnesses, but the middle seat has only a lap belt. My two children, aged three and seven, ride in the car, and occasionally we pick up another child. Ethically, who should sit in the middle, less safe, seat — one of my children or the friend?
You should put your own kids in the shoulder belts, if their size and the law allow (and, if they’re very young, in child safety seats in the back). While all children have a claim on your compassion and concern, your primary responsibility is to your own: particular relationships entail particular ethical obligations.
I confess that I never thought of anything beyond which kid would fit best, and separating the ones most likely to fight if seated next to one another. But Cohen’s reasoning here is entirely wrongheaded. Sure, there are times when it is right to put your own children first (such as reading bedtime stories), but when you are in loco parentis for other people’s children the duty, if anything, when it comes to avoiding real harms, is to take special care of theirs. And beyond that, duties of justice quite generally don’t permit us to favour those close to us over strangers (there isn’t a stronger duty to repay a debt to a close relation than to a distant one or to a non-relative).
This will be of very little interest to non-philosophers, but we probably get enough philosophers through here to make it worth posting. I did a break-down of the 438 jobs advertised in Jobs for Philosophers this fall in order to get some picture of what demand was like for job candidates with different specialisations. The results aren't too surprising, but there might be some interesting stuff here, especially for PhD students going on the job market in upcoming years.
First, the data, then some explanation.
With Areas Distributed | Tot | Phil | TT | Top 50 |
Science | 33.3 | 28.1 | 20.4 | 6.7 |
Language | 14.7 | 13.1 | 8.3 | 3.0 |
Mind | 19.9 | 17.4 | 13.0 | 3.7 |
Epistemology | 22.1 | 19.9 | 14.8 | 4.0 |
Metaphysics | 16.6 | 15.1 | 10.9 | 2.7 |
Logic | 11.8 | 10.3 | 6.3 | 1.8 |
Theoretical Ethics | 43.1 | 39.6 | 32.6 | 8.6 |
Legal Philosophy | 21.6 | 12.9 | 9.3 | 1.7 |
Applied Ethics | 60.1 | 39.3 | 24.0 | 2.1 |
Aesthetics | 11.5 | 10.0 | 8.2 | 1.1 |
Political | 23.9 | 20.2 | 15.3 | 4.3 |
Ancient | 29.2 | 27.7 | 22.7 | 3.6 |
Early Modern | 32.7 | 31.2 | 25.1 | 5.8 |
Other History | 28.9 | 23.4 | 20.4 | 2.9 |
Continental | 19.0 | 17.5 | 14.3 | 0.9 |
Asian, African-American | 26.4 | 21.8 | 16.6 | 1.9 |
Other | 21.2 | 17.7 | 11.0 | 1.9 |
With Every Area Counted | Tot | Phil | TT | Top 50 |
Science | 171 | 139 | 104 | 32 |
Language | 148 | 122 | 89 | 28 |
Mind | 156 | 129 | 96 | 28 |
Epistemology | 166 | 138 | 105 | 30 |
Metaphysics | 152 | 126 | 94 | 27 |
Logic | 136 | 110 | 78 | 20 |
Theoretical Ethics | 189 | 159 | 122 | 34 |
Legal Philosophy | 149 | 114 | 83 | 22 |
Applied Ethics | 194 | 144 | 104 | 21 |
Aesthetics | 132 | 106 | 79 | 20 |
Political | 163 | 133 | 100 | 31 |
Ancient | 158 | 132 | 101 | 23 |
Early Modern | 169 | 143 | 108 | 29 |
Other History | 161 | 131 | 101 | 24 |
Continental | 135 | 109 | 80 | 17 |
Asian, African-American | 141 | 112 | 81 | 18 |
Other | 137 | 109 | 77 | 18 |
With Areas Distributed | Tot | Phil | TT | Top 50 |
Core | 105.2 | 93.7 | 66.1 | 20.1 |
Ethics | 155.5 | 118.0 | 86.9 | 17.1 |
History | 99.3 | 88.8 | 72.8 | 13.6 |
Other | 76.0 | 64.5 | 47.3 | 5.8 |
With Every Area Counted | Tot | Phil | TT | Top 50 |
Core | 216 | 183 | 138 | 45 |
Ethics | 271 | 212 | 160 | 39 |
History | 208 | 178 | 141 | 34 |
Other | 173 | 142 | 105 | 19 |
In the 'distributed' tables, I counted a job as being 1/nth of a job in each area listed as being open for it. So an applied ethics/ancient/epistemology job would count 1/3 for each of those three areas. Most importantly, each of the 117 open jobs counted as 1/17th of a job in each of the 17 areas. This is not obviously appropriate - an open job is more valuable for a candidate in theoretical ethics or early modern or mind than it is for a candidate in aesthetics or Asian philosophy or (to some extent) philosophy of language. But it was the best I could do. In those tables I also counted open rank jobs as being 1/2 a tenure-track job.
In the 'every area counted' tables I didn't use any such fractional analysis. An applied ethics/ancient/epistemology job would count as 1 full job in each of those three areas.
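The two counting methods can be sketched in a few lines of code (the job records here are invented for illustration, not taken from the actual data):

```python
from collections import defaultdict

# Toy data: each job lists the areas it is open for.
jobs = [
    {"areas": ["applied ethics", "ancient", "epistemology"]},
    {"areas": ["epistemology"]},
]


def distributed_counts(jobs):
    # Each job contributes 1/n of a job to each of its n listed areas.
    counts = defaultdict(float)
    for job in jobs:
        share = 1 / len(job["areas"])
        for area in job["areas"]:
            counts[area] += share
    return dict(counts)


def every_area_counts(jobs):
    # Each job counts as one full job in every listed area.
    counts = defaultdict(int)
    for job in jobs:
        for area in job["areas"]:
            counts[area] += 1
    return dict(counts)


print(distributed_counts(jobs))
# epistemology gets 1/3 + 1, applied ethics and ancient get 1/3 each
print(every_area_counts(jobs))
# epistemology gets 2, applied ethics and ancient get 1 each
```

The distributed method keeps the grand total equal to the number of jobs (2 here), while the every-area method sums to the number of job-area pairs (4 here), which is why the second set of tables has larger numbers throughout.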
Most of the categories in the top two tables are self-explanatory, but a note on the two 'other' areas. 'Other history' mostly ended up meaning medieval, but also included a few 19th and 20th century positions. 'Other' included, inter alia, philosophy of religion and feminist philosophy. I was more than a little embarrassed by the stereotypes I was living up to in throwing those into a generic 'other' category, but not embarrassed enough to go back and recode everything - which by the end became a bit of a task because of how bad a coder I am.
For the summary categories at the end, 'Core' is Science + Language + Mind + Logic + Epistemology + Metaphysics, 'Ethics' is Theoretical and Applied Ethics, Legal, Political and Aesthetics, 'History' is Ancient + Early Modern + Other History, and 'Other' is everything else. ('Ethics' really is value theory, broadly construed.)
The first column counts all jobs in Jobs for Philosophers. The second column restricts attention to jobs in philosophy departments. The third to tenure-track jobs in philosophy departments, and the fourth to tenure-track jobs in top 50 philosophy departments. (Top 50 here means in the top 50 in the Leiter Report or, for schools outside the US, listed as being equivalent to a top 50 department or, for schools without a PhD program, of the standard of the departments previously listed.)
There are a few obvious trends. The ratio of Core to Other jobs inside the Top 50 and outside it is noteworthy. The 5.8 Top 50 jobs in 'Other' is actually quite misleading, because that's just a consequence of the fact that there are 17 open jobs in the top 50. If we assume those are really core/ethics/history jobs, the number of other jobs falls to 1 or 2. I was a little surprised by the low number of applied ethics jobs in the Top 50.
There are also a few things I didn't really expect. I don't know if it's a one-year trend, but Science is way ahead of other core areas, especially when the distributions are done. Partially this is because there are very few jobs in just metaphysics, while there are quite a few jobs in just science. The low numbers for metaphysics and logic should be a little worrying to students working (or thinking of working) in those areas. Any such candidate should, at the very least, try to go on the market with a very solid competency in a related area (especially science, epistemology or mind), and ideally with a second AOS.
Also, I hadn't expected there to be so many jobs in each area. The 117 open jobs are obviously pushing up the numbers here, but it seems most candidates could, if their placement offices were so inclined, apply for upwards of 150 jobs. In my (admittedly limited) experience most candidates apply for 40 to 70 jobs, so actually people are passing up quite a few jobs for which they could, technically, apply.
Last time we visited Weber State University it was to note the existence of a forthcoming volume on The Undead and Philosophy. Now comes a more worthy venture: Bob Dylan and Philosophy. Suggested paper topics include: What It’s Like to be a Rolling Stone; Dylan’s solution to the Toxin Puzzle – Don’t Think Twice It’s Alright; and The Philosophical Significance of Wiggle Wiggle.
Can I claim first dibs on Harry Potter and Philosophy, or has that already been taken?
Here’s the full call for papers.
Bob Dylan and Philosophy
Peter Vernezze & Carl Porter, Editors
Abstracts are sought for a collection of philosophical essays on Bob Dylan. The editors are currently in discussion with Open Court Press (the publisher of The Simpsons and Philosophy, The Matrix and Philosophy, and the forthcoming The Sopranos and Philosophy, etc.) regarding the inclusion of this collection in their “Popular Culture and Philosophy” series. We are seeking abstracts, but anyone who has already written an unpublished paper on this topic may submit it in its entirety. Potential contributors may want to examine other volumes in the Open Court series.
The book plans to focus on all aspects of the work and life of Bob Dylan. Although we expect the study of his lyrics will form the core of the work, we also plan on including analysis from his work in other mediums such as film and poetry, as well as an examination of his role as a public figure, etc. Some obvious areas for papers include the history of philosophy, ethics, social and political philosophy, philosophy of religion, aesthetics, and philosophy of language. But papers in any area of philosophy will be considered.
Please feel free to forward this to anyone writing within a philosophic discipline who might be interested in contributing.
Contributor Guidelines:
1. Abstract of paper (100-750 words)
2. Resume/CV for each author/co-author of the paper
3. Initial submission may be by mail or email
4. Submission deadline: January 15th, 2004
Mail:
Peter Vernezze
Department of Political Science and Philosophy
Weber State University
1203 University Circle
Ogden, UT 84408-1203
Thanks to Neil Levy for the link.
Let me second Chris's recommendation of John Holbo's posts (one two) on bad writing. Despite their brilliance, I don't want to take up the thankless task John offers me. In part that's because at this time of year I have quite enough thankless tasks on my plate. And in part it's for an amusing theoretical reason.
The task John sets, if a labour of Hercules can be called a task, is to count the errors in a particularly error-ridden passage. The problem is that the number of errors a passage contains is not obviously determinate. For example, assume the passage contained the following argument.
All philosophers are positivists
So, all philosophers are bad people
At first glance it looks easy to say how many errors that contains. Two. False premise and invalid reasoning, right? But that's not obviously charitable. It's often wrong to regard arguments that are invalid on their face as thereby defective, for they may be enthymemes. The question then becomes what the hidden premises might be. Perhaps the following two premises are intended to be the hidden premises.
Being a positivist is a mark of bad character
Anyone who has a bad character in one respect is a bad person
Now the argument is valid, so one of the errors is gone. But now the argument contains three errors, not two, for all three premises are false. But maybe that's uncharitable, for the hidden premise might instead have been
Being a positivist is a mark of bad character and anyone who has a bad character in one respect is a bad person
And now we're back to two mistakes. But even that might be excessive, because maybe the hidden premise was intended to be
Either some philosophers are not positivists, or all philosophers are bad people
And now the argument is valid and only has one false premise. So it only contains one mistake. So heaven knows how many mistakes it really had.
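The bookkeeping above can be made explicit with a small sketch (my own construction): score one error per false premise, plus one if the reasoning is invalid, and the total varies with which enthymeme reconstruction we choose.

```python
def error_count(premise_truth_values, is_valid):
    # One error per false premise, plus one if the reasoning is invalid.
    return sum(1 for p in premise_truth_values if not p) + (0 if is_valid else 1)


# Four reconstructions of "All philosophers are positivists; so, all
# philosophers are bad people". Note that the disjunctive hidden premise
# ("either some philosophers are not positivists, or all philosophers are
# bad people") comes out true, since its first disjunct is true.
reconstructions = {
    "as stated":                  ([False], False),
    "two hidden premises":        ([False, False, False], True),
    "conjoined hidden premise":   ([False, False], True),
    "disjunctive hidden premise": ([False, True], True),
}

for name, (premises, valid) in reconstructions.items():
    print(name, error_count(premises, valid))
# as stated: 2; two hidden premises: 3; conjoined: 2; disjunctive: 1
```

Same passage, and anywhere from one to three errors depending on the reconstruction, which is all the indeterminacy the argument needs.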
Now for the special holiday touch. By a rather tendentious interpretation of Quine's "No entity without identity" dogma, and the fact that mistakes in arguments do not have determinate identity conditions, I conclude there are no mistakes in arguments. And if there are no mistakes in arguments, there are no mistaken arguments.
If I was going for the post-Thanksgiving Day snark award I'd say this was the best bit of news blogger X had received all decade. But any award Brian Leiter can't win is an award I don't want.
Returning to John's post, I think he's at one point a little too charitable towards the bad writers. In general, I think he's a little too accepting of the idea that difficult ideas will require difficult writing. I don't think this is true. Indeed, I don't really see much reason to believe it. To take an extreme case, some of the ideas involved in Gödel's incompleteness theorems are as difficult to grasp as anything in any branch of philosophy. But that doesn't mean writing about them has to be difficult - indeed the discussion of the theorems in Gödel, Escher, Bach, while by necessity somewhat incomplete, is splendidly clear. Now to be sure Hofstadter had a slightly easier task here than some because he was writing an exposition of Gödel's ideas (among other things) rather than writing the ideas out anew. But there is, I think, little reason why a first presentation must be more difficult than a later exposition.
(Maybe I've got the wrong idea of 'difficult' here, and difficult ideas are meant to be revolutionary in the political rather than the Kuhnian sense. Different example then. Whatever its faults, you can't attack The Communist Manifesto for difficult writing.)
Finally, a little anecdote about Judith Butler, who is something of a target in these debates. (I do hope this isn't meant to be confidential - in any case it isn't meant to reflect badly on anyone.) Butler, famously, is remarkably clear in person despite being so opaque in print. This is one reason why many people outside Theory have a higher opinion of Butler than her fellow-travellers. Anyway, since she is so clear in person, it seems she could easily be clear in print. All she'd have to do is talk for a while and release the transcript. (Isn't that how most of Chomsky's books for the last 25 years have been written?)
So Butler was asked recently why she didn't just write like she talks, and she replied, reasonably enough, that very few people in any field do just that. Everyone, or at least almost everyone, has mannerisms they adopt in print. (She just has more of them than everyone else combined.) Which got me thinking, who does write like they talk? I seem to recall reading that Moore's writing sounded a lot like he sounded in person.
Obviously no one writes just like they talk in ordinary settings. But I think some people do write a lot like they talk when, say, asking questions at colloquia. (Not coincidentally, these people tend to be among my preferred philosophical writers.) The examples that spring most immediately to mind are Frank Jackson and Ted Sider, but I'm sure there are plenty more.
John Holbo has a quite brilliant extended post about the whole Bad Writing debate (and I’m not just saying that because of the nice things he writes about CT). In a follow-up post John has more to say about how the targets of his ire get analytical philosophy wrong: they say that it is “bobbing along in the wake of logical positivism”. This is important, because people who do “theory” in the humanities often operate with a completely false idea of what philosophers think and do - a false idea that functions for them as a lazy self-defence mechanism and comfort blanket.
In his paper “Dolly: The Age of Biological Control”, Ian Wilmut suggests one interesting use for cloning technology. In that paper Wilmut basically opposes what we normally think of as reproductive cloning. (In a recent paper with Glenn McGee he has slightly softened his attitude.) But he thinks the following procedure, which as far as I can tell would be illegal under current anti-cloning legislation, would be entirely appropriate if provably safe. I agree with Wilmut, and I think there’s a very strong argument for amending the legislation to ensure this procedure is permissible.
But there is one way nuclear transfer technology might be used in procreation that I do find attractive. This is its use to replace the mitochondrial DNA in an egg. Mitochondria are the small bodies in each of our cells which supply energy. They contain DNA, which is subject to error (mutation) leading to diseases in just the same way that chromosomal mutation may cause disease. However, in the case of mitochondria we inherit those only from our mothers. A woman suffering from mitochondrial disease knows that her children will inherit the same condition. In principle, there is no reason why the embryo nucleus could not be removed from the defective egg and be placed in a recipient egg cell, itself enucleated. The recipient egg would be provided by a woman known not to have similar damage to her mitochondria, with her full informed consent. The resulting child would be exactly as it would have developed, except that it would not suffer the disease associated with mitochondria. The catch with this is that it would be possible to make multiple copies of the embryo—you might start with a thirty-cell embryo and end up with six fertilized eggs. Done thoughtfully, however, this method of nuclear transfer could provide a way to treat currently untreatable mitochondrially carried diseases.
If you think 30-cell embryos are worthy of legal defence, then I guess you shouldn’t like this idea. But that is very much a minority position, and I can’t see why anyone else should oppose Wilmut’s proposal, provided it is safe.
One of the central issues in the cloning thread has been whether infertile couples should adopt rather than use new technologies like cloning. So far I’ve been content to run with the line that even if it would be socially advantageous for the couple to adopt rather than clone, they should have a legal right to clone, because they should have the legal right to have children from their own genetic stock. But perhaps I was too quick to accept the virtues of adoption. Stephen Coleman, in Should Liberals Ban Reproductive Cloning? argues that adoption may have flaws of its own.
The problem unique to adoption is that these cases involve an existing child, and in most cases, existing parents. In the words of Barbara Katz Rothman “For every pair of welcoming arms, there is a pair of empty arms. For every baby taken in, there is a baby given up”. The vast majority of mothers do not relinquish children for adoption because they want to, but rather because they are forced to through poverty. They are not unwilling to care for the child, they are simply unable. This is especially the case with international adoption. Virtually all the children adopted internationally come from economically or politically oppressed areas. Probably only the orphans from these areas can really be classed as “unwanted”. Even within the USA, one study found that 69% of parents giving children up for adoption cited external pressures, including financial constraints, as the primary reason for surrender. Given these problems, adoption hardly looks the glowing alternative to reproductive technology…
I’m not sure this is conclusive. Even if adoption is a faulty system it may still be right for couples to participate in it while it is, hopefully, reformed. (In general I suppose I think too much is made of sins of complicity.) But there’s some reason here to think adoption is not the perfect solution for the infertile couple some have suggested. Coleman’s article contains more reasons, as well as snappy responses to many of the prominent anti-cloning arguments.
No argument this time, just a serious question. If cloning is to be banned, that presumably means there will be criminal penalties for creating clones. Who, exactly, should be vulnerable for those penalties? If a couple X and Y decide they want a cloned baby (say with Y’s DNA inserted into one of X’s eggs), and Dr. Z assists with this so clone baby A is born of X, who should be punished for this act of illegal cloning? X? Y? Z? A? (Well, presumably not A.) Any others?
I think many people who want to ban cloning have in mind punishing Z, but I can’t tell from most discussions just exactly what their position comes to. The Weldon-Stupak bill that passed the House doesn’t distinguish, and seems to leave at least X and Z liable, and possibly Y as well. The British Human Reproductive Cloning Act only says Z would be liable, but in debate in the House of Lords Lord Hunt said that the birth mother may also be liable “under the general rules of criminal law if she is an active and knowing participant.” And obviously neither Bill settles the moral question of who should be liable, assuming, as I do not, that there should be a ban. (I think in this case punishing X but not Y seems rather absurd, but maybe Y would also be liable as an active and knowing participant.)
In part I want to figure this out so I can get the opposition picture right. As people have noted, I’ve been fairly cavalier in my representation of opposing views in earlier postings. (Normally I’m fairly easy-going about these things, but I’m actually a little embarrassed about some of the mistakes in the last post, even by blogging standards it was pretty bad in places, so I want to get things better in the future.) And in part it’s so I can have a go at dramatising the difference between pro- and anti-cloning forces. The image of Feds storming into the maternity ward to arrest X and Y, as repugnant an image as I can imagine in this whole situation, seems to make vivid some of the libertarian concerns I have with the anti-cloning movement. But if X and Y would be left to raise baby A, and only Z, the handmaiden, was taken off to jail, then obviously we can’t use just that image.
(I know that even if X and Y were thought to be criminally liable here there might be a humanitarian reason for not arresting them at the, like, ‘scene of the crime’. But it might be thought better to arrest them before A forms any emotional attachments, so maybe the Feds would choose just this moment to storm in.)
One of the neat things about the cloning debate is that it's one of very few places where you'll hear Christian conservatives saying that sex is good. Normally one hears that sex is at best a mortal sin and at worst the cause of all that's wrong with modern society. But give us a chance to make babies any other way, and all of a sudden it's sweetness and light. I mean, which of the following two kinds of activities looks to you like a 'repugnant' way to originate life?
If you picked option 2, then you too can be Leon Kass's friend. More seriously, I wonder how much my own support for cloning comes from somewhat different feelings of repugnance to Kass's.
This isn't to say, as might be hinted, that I find option 1 particularly repugnant. If I were a good Thomist I could quite imagine that I would think that. Maybe I would think something like the following:
Cloning gives us the chance for the goodness of life without the badness of sex, so it looks like a Godsend. Sad to say, some people think Godsends are only announced by people in white gowns, not people in white coats, so they don't recognise what a miracle this is. Imagine all the people, living lives unstained by sex.

[Update, 4 hours later: It's been pointed out to me that as a statement of Thomistic doctrine this is about as mistaken as it is logically possible to be. There's a lesson in this - never take history lessons from me. My apologies for the screw-up.]
Maybe it's the worry a second virgin birth would undermine the importance of the first one?
Returning to the subject at hand, I think it's very natural to be completely opposed to restrictions on reproductive rights. Here's a quote from Gregory Pence's Who's Afraid of Human Cloning? (I borrowed the point that Christian attitudes to sex are in a little tension here from Pence's book, though he didn't put it quite that way.)
There are people in medical genetics and medicine with much stronger views than the one expressed here, people who have all their professional lives seen the terrible results of genetic disease. For example, respected genetic researcher Marjery Shaw once suggested that deliberately giving birth to a child with the gene for Huntington's disease should be a criminal offence. [Footnote: Shaw's suggestion is in "Conditional Prospective Rights of the Fetus", Journal of Legal Medicine 63 (1984) 99.]
My initial reaction to Shaw's suggestion is that it is simply abhorrent. Criminalising conception and birth is not something we should be in the business of, even if we can quite properly make judgments about the morality of different acts of conception and birth. Now this isn't much of an argument, which just goes to show we all have to rest on a moral intuition somewhere.
Pence's book by the way is reasonably good, but it's a bit long for what is really covered (despite being only 170 pages) and he doesn't address some of the arguments that have arisen in the comments threads here. (I don't know whether this is because (a) he missed those arguments in the literature, or (b) those arguments weren't in circulation when he wrote his book in 1998 but are now, or (c) the arguments are new to these threads. I suspect not (a), but I don't know about the (b)/(c) split.)
There is one very worthwhile point running through Pence's book. He stresses that as well as the risks that are raised by cloning, there are many other risks that are diminished. For example, he notes that we can be confident the cloned child will not have a genetic disease that causes early death. So he thinks we can reach a stage where cloning is (as far as we will know) no more dangerous than traditional breeding. He thinks this is the standard that should be reached before cloning is permissible. (I've been defending a somewhat weaker standard here, and I might write a later post on the differences between our views.)
Still, it would be nice to have a response to the more recent arguments. For future reference, here's a list of the interesting arguments that have arisen in the previous threads, as well as my responses to them. (Actually I should say 'our' response, because many of these are from the paper I'm co-writing with Sarah McGrath.)
There's still a pile of anti-cloning papers on my reading stack, but I'm not being tempted to move far from my original position that cloning should be legally available, though I have been convinced there are several reasons to heavily regulate it.
I’ve received lots of useful feedback on my earlier cloning post, and on at least one point, the risks involved in cloning, it’s clear I need to revise and expand my remarks. But first another little defence of cloning that popped into my mind.
Many people worry about the possible psychological consequences of cloning. Of course we can’t know what these will be until we try, but it’s certainly worth trying to figure them out before going ahead. In one respect, in one (fairly significant) situation, I think the psychological effects will be quite positive.
Consider a couple who cannot have children because the man is infertile. Their only way of having a child is to use a sperm bank. I think this is morally acceptable, but in most cases it has one cost: the child will not know who her genetic father is. So she does not know her full lineage. Now while that’s not the worst harm ever, I think it is something that could be bad, and for some people it might cause a notable amount of psychological pain.
(This is definitely not meant to be a universal truth. Many adopted children have no interest in finding out who their biological parents are. But we know that many do, so many people value knowing their genetic lineage.)
If the child is cloned from her mother, she will be in a position to know her full lineage (at least for the first generation or two). I think a cloned child may prefer that state of affairs to being the (biological) child of a stranger. Some children may be indifferent between the two, but I’m not sure that many children would prefer being the biological child of a stranger to being the clone of their mother.
(As an aside, here’s another possible benefit of cloning - one that I don’t think is really beneficial but which may appeal to some. In the case I described, it would be possible for the child to be a clone of the father. In that case there will be a sense (and only a sense) in which the child is the product of both parents, since it will grow to human size in the mother’s womb, as well as being the father’s clone. If the fact that reproduction involves both parents is meant to be important, this is a way of sorta kinda allowing for that. I don’t particularly approve of the division of reproductive labour here, so I wouldn’t think this approach is particularly worthwhile, but I can see how some might. If you think Aristotle should have been right, and the form should be contributed by the father and the matter by the mother, you should love this approach. I don’t, so my defence of cloning rests on separate grounds.)
Back to the main point. Here’s an argument I considered about the risks involved in cloning.
1. In all probability, the cloned child will be better off existing than not existing, even if it suffers various physical ailments as a result of being a clone.
2. If it is better off existing than not existing, then the harms it suffers are no reason to not produce it.
C. The harms that likely go along with being a clone do not provide a reason against producing a clone.
This would be a fairly powerful argument if it worked, because it would mean that even if we were fairly confident that a cloned human would be defective in various ways, as long as it was not so badly off that it was better off not existing, it would be acceptable to produce it.
One worry about the argument is that one of the concepts involved, being better off existing than not existing, might be nonsensical. It certainly pushes our understanding of ‘better off’ about as far as it can reasonably be pushed. I don’t have any argument here, and I recognise that on some theories of value this might not make a lot of sense, but I think we can understand this concept. (I’m possibly going to be convinced that this appearance of understanding is chimerical. Perhaps that’s for cloning post 3.)
The bigger problem with the argument is that premise 2 is pretty clearly false. Here are two cases showing that it is false.
The Bridesmaid Dress (due to Dan Brock)
A woman knows that if she conceives this month, the child she conceives will suffer some severe ailments, but not so severe that it would be better off not existing. But if she puts off conceiving, she will not fit into her bridesmaid dress at a wedding scheduled for nine and a half months’ time. So she goes ahead and conceives.
The Barometer (due to Gerald Dworkin)
A couple knows that if they conceive while the barometric pressure is below a certain threshold, the child they conceive will likely suffer a similar severe ailment to the one in the previous case. But they don’t bother to check the pressure before conceiving, and the pressure is too low and the child does suffer the ailment.
In each case the child so conceived is better off existing than not existing, but the harms it suffers are sufficient reason to not bring it into existence. (Some might think the child cannot be harmed by something that makes it, all things considered, better off - namely being conceived. I’ve been convinced by Liz Harman’s arguments that this is the wrong way to think about harm. Unfortunately Liz’s arguments on this point are not available online. When they are I’ll try to link to them because I think they’re helpful in understanding cases like Brock’s and Dworkin’s, which I think are very relevant to the cloning debate.)
So do these cases show that cloning should be banned? No, because there are a lot of distinctions to draw, and the overall effect of the distinctions is to weaken the argument against cloning. But the matters here are very delicate.
The first distinction is between the immoral and the illegal. I agree with the usual judgements that in Brock’s and Dworkin’s cases the agents act immorally. I’m not so sure they act illegally. Would it be proper to have criminal sanctions against the agents in these cases? My tentative opinion is no. Whatever the morality of reproduction, I’m tentatively an absolutist about a legal right to reproduction.
The second distinction is between conceiving and helping others to conceive. This is relevant to the cloning debate, because part of what we care about is the role of the medical practitioners in these cases. If a doctor helped the woman in Brock’s case to conceive, when she could have refrained from helping until the danger of the child suffering the ailment had passed, she acts immorally. (Doesn’t she? I could be wrong here, but it seems she does.) And it might be appropriate to have legal, or at least professional, sanctions in such a case. So while the moral/legal distinction weakens the case for a ban, it does not have as much bite when applied not to parents but to their ‘assistants’, especially if those assistants have professional obligations not to harm others.
The third distinction, and the crucial one, is between cases like Brock’s and Dworkin’s and cases where any child those agents have has a risk of such an ailment. I think this makes quite a difference to the case. In this case, where any child a woman or a couple ever have has a serious risk of major suffering, consider the following four questions.
My answers are: Tentatively no, Definitely no, Tentatively no, Slightly stronger no. (I know a blogger should have firmer opinions, but I think these are hard questions.)
Now I think when we are thinking about legalising cloning, on the proviso that its use is restricted to those couples who otherwise could not have children, the last question is the salient one. And since I think there should be no law against such assistance, I think there should be no law against cloning. (At least for this reason.) But note I’ve effectively conceded that there should at least be restrictions on cloning, until we lose our grounds for believing that it is a very risky process for the child involved.
At this point a concern several people raised becomes pressing. What counts as “otherwise could not have children”? There are (at least) four possibilities.
They haven’t put it this way, but several people have in effect suggested that adoption is an alternative to cloning. That becomes important here, because I think it’s important to the evaluation of the Brock/Dworkin style examples that there be no alternative to having a child in the risky way. And it isn’t just a matter of a technical disagreement, because if we agree that cloning be restricted to those who could not otherwise have children, and that means 4, then we are in effect ruling out cloning, because for the foreseeable future there will be a steady supply of adoptable children.
At risk of sounding like a wimp, I’m going to stop here for now rather than argue about which of these 4 is the contextually appropriate way to understand ‘could not otherwise have children’. I think the right answer is 2 or 3 (probably 2) but if there’s a good argument for 4, that would be a better argument against cloning than I’d previously considered. Maybe I’ll say more about this in later posts.
For a little project I’m working on I have to write something on cloning, and in particular debates about whether reproductive cloning should be legalised. It isn’t really my area of expertise, so I don’t want to form sweeping judgments too quickly. But at first glance at the literature all of the arguments for banning reproductive cloning look absolutely awful. (With perhaps one exception, which is merely an unsound argument rather than an awful one.) If anyone knows of any good arguments, I’d be rather happy to see them.
One qualification at the start. Like Chris I take it as a given that the default position with respect to any activity is that it should be lawfully permitted. There is no need for an argument in favour of permitting any activity. There is always a need for an argument in favour of banning an activity. So there’s no need to argue in favour of reproductive cloning.
Having said that, I think there’s probably quite a good argument in its favour. It provides, in principle, a chance for some people who are currently incapable of reproduction to reproduce. Since having and raising children is such an important part of what makes life valuable for so many people, even a slim chance of making this possible for even a small segment of the population is a Very Good Thing. So ceteris paribus, reproductive cloning should be permissible.
What are the arguments against? As I said, this is based on a scan of the literature, not a survey, so it might be incomplete, but here’s what I’ve found so far.
Cloning is unnatural
But lots of things that are unnatural, in the sense that they would be impossible without technological innovation, are currently regarded as unproblematically acceptable. The most commonly cited example is IVF, but on the most obvious definition of natural, caesarean sections are unnatural too. They are especially unnatural if they are designed for the mother’s survival. Nobody, I hope, wants to ban them. For that matter, having children with someone who grew up more than 100 miles from where you did is unnatural too in the sense that it would be impossible without technological assistance. Again, I trust we agree that shouldn’t be banned. So this cannot be an argument on its own for banning cloning.
Cloning is abhorrent
As a general rule, what other people find abhorrent should play no role in deciding whether you or I can do it. There’s good reason for that rule. In the good ol’ days many people found mixed race marriages abhorrent, and so banned them. Some people still find them abhorrent, but luckily they no longer stop other people from marrying. If you listen to Christian radio for any length of time you’ll find that lots of people find sex outside marriage abhorrent. I find professional boxing abhorrent, not to mention the Home Shopping Network. But none of these things should be banned, at least not for that reason. (Perhaps boxing should be banned for other reasons, which we’ll get to.) The general point is that under liberalism we shouldn’t let these kinds of feelings influence what is legally acceptable behaviour.
Clones will lack dignity because they are in some sense ‘identical’ to their parents
Ugh. The clone is clearly not identical to its parent. When it is born it weighs less than one stone, and its parent weighs more than one stone. By Leibniz’s Law, that implies the two are not identical. The premises in that argument are clearly, determinately, fully, beyond a shadow of a doubt true, so the conclusion is clearly, determinately, fully, beyond a shadow of a doubt true. This one is just lousy metaphysics leading to bad law.
Clones will lack dignity because they share their genes with someone else
The hidden premise here is that sharing genes reduces dignity. But this implies that identical twins have less dignity than everyone else. Some days watching Mark Waugh play cricket I thought “You know, he does have less dignity than everyone out there”. Then I realised that was just jealousy at not being able to play cricket like Mark Waugh. The position that identical twins are in some way lacking in essential human dignity doesn’t pass the laugh test, but it (or at least a premise that entails it) seems to be very influential in some quarters.
Cloning is risky, and potentially harmful
There’s two arguments here. The first is the potential harm to the adult participants. But that’s not an argument for banning cloning, as much as for making sure that adult participants are fully informed of the risks. Once that happens, it would be an unjustified violation of autonomy to prevent them going ahead.
The other issue is the potential harm to the child. Given the medical problems that plagued Dolly, these might be non-trivial. This is more serious, since the child obviously is not in a position to provide informed consent. But the child is hardly in a position to complain, since without the cloning she would not exist. That last step is a little dubious, and actually the arguments here may have some bite. In particular there may, in the short term, be an argument for restricting reproductive cloning to those who could not reproduce any other way. (There are, or at least have been, similar restrictions on IVF.) Roughly the point is that sometimes you don’t want to compare what happens to the (currently non-existent) child to what that child would have been like without cloning, but to what a child in its place may have been like without cloning. But if we restrict cloning to the otherwise incapable of childbearing, there is no such child to put in its place. (This is the argument that may not be absolutely awful, since there is a bit of philosophical work to be done in blocking it. Perhaps for that reason, it doesn’t seem to be that widely stressed in the literature, especially compared to the arguments that really are awful.)
Cloning diminishes bio-diversity
If everyone cloned, the gene pool would lose some of its characteristic diversity and luster. But I take it this is a very remote risk. Even if we allow cloning for everyone, non-cloning reproduction involves having sex, and casual observation suggests that many, perhaps most, people prefer ceteris paribus courses of action that involve having sex to those that don’t. (The last premise is slightly less certain than 0=0, but probably more certain than the premises in Descartes’ cogito.) So I think there will still be plenty of diversity to go around even with cloning.
Cloning is against God’s will
I don’t know - I think if He didn’t want clones he wouldn’t have invented scientists. Slightly less frivolously, we’re meant to be fighting wars with people who base legal codes on religious documents, not imitating them. Somewhat more seriously, when someone proposes banning the consumption of shellfish, I’ll take seriously their “God’s will” arguments about other things. But right now we have better evidence that God doesn’t want you to eat shellfish than that He doesn’t approve of reproductive cloning. So I think it’s very hard to motivate a religiously based ban on cloning but not shellfish eating. (Could one argue that perhaps shellfish eating is more important to human values than reproduction, so we are justified overriding God’s wishes on that point? I somehow doubt it.)
I’ve probably missed some argument, and I know I’ve skimmed by some of the points here, but as far as I can tell the moral evidence is firmly in favour of legalising reproductive cloning. Indeed, the ban itself strikes me as profoundly immoral, a potentially serious violation of autonomy. If I’ve missed something really important here though, I’d be happy to hear about it.
I’ve just happened upon a piece in the Guardian on the difficulties of televising philosophy. It is full of interesting anecdotes about the attempts that have been made.
The director took him to Richard Rogers’ Lloyd’s Building in London and filmed him going up and down the escalators while he expatiated about Plato. When I met Rorty recently, I asked why they shot him there. “I have no idea,” he said. “It had nothing to do with what I was talking about so far as I could tell.”
If Rorty warranted a hi-tech setting, something more mundane was appropriate for one of the objects of his admiration:
When there was a film about Derrida recently … there was a good deal of footage of him listening to the radio while he made toast.
Jeffries rightly states that the two best TV treatments of philosophy in recent years have been Bryan Magee’s The Great Philosophers and Men of Ideas. And he’s also right in saying that two men in chairs talking are not going to appeal to network controllers these days. There have been more dramatic treatments that have worked, though. Most notably Derek Jarman’s film about Wittgenstein, which featured an improbable dwarf space alien. On the other hand, don’t get me started about Alain de Botton’s appalling Consolations of Philosophy series.
Jeffries comments that
there is often an inverse relationship between the importance of a philosopher’s thought and the life he or she led.
Indeed. Freddie Ayer’s thought was of almost no importance whatsoever, but you could make a great film about him (assuming the stories he told about himself were true). I especially like the one from Ben Rogers’s biography of him where he persuades Mike Tyson not to force himself on Naomi Campbell.
TYSON: Do you know who the F*** I am. I’m the heavyweight champion of the world.
AYER: And I am the former Wykeham Professor of Logic. We are both pre-eminent in our field. I suggest we talk about this like rational men.
Who knew such a thing existed? And who would have guessed that if it did exist, it would exist in Belgium?
Synopsis: The Philosophy of Cricket encompasses a series of reflections upon the nature of cricket, its forms of practice, its history and its influence in shaping the human form physically, emotionally and morally. A recurring theme throughout is the interplay between the matter (what the game is) and spirit of cricket (ideals concerning how one plays the game). What are these ideals and how do they impinge upon cricket’s conditions of existence? Furthermore, is cricket’s ratio essendi exhausted by a set of prescriptive laws or does it encompass a broader ethos, a body of conventions and connotations, a history and tradition that bind the game to realities beyond its constitutive boundaries?
I think it was Louise Vigeant from whom I heard about this collection. If so, thanks Louise! If it was someone else, apologies and thanks. (If I was a real journo-blogger I’d have been taking notes at lunch so I wouldn’t have to make these disjunctive acknowledgments.) Here’s the full call for papers.
Submissions criteria
Contributions are accepted from a broad range of philosophical disciplines discussing issues relevant to the game of cricket. Possible themes include, but are certainly not limited to, the aesthetics of cricket; ethics in cricket; cricket and the nature of man; cricket and education; cricket and culture, etc. Topics related to broader philosophical themes, such as the phenomenon of sport in general, may also be accepted provided they are predominantly illustrated with examples from cricket. All submissions must be of a philosophical nature, meet high standards of rigour and display an obvious command of the language and subject matter.
Papers should be between 5000 and 8000 words in length, though longer papers of exceptional quality and focus may also be accepted. No papers should exceed 10000 words in length.
All submissions must be written in (British) English and should follow the MLA standards for footnotes, citations and bibliographical references.
Deadlines
Abstracts are to be received by 27 February 2004. The final deadline for submissions is 30 April 2004.
Contact
Contributions for review may be sent in electronic form to the editor:
Institute of Philosophy
Kardinaal Mercierplein 2
B-3000 Leuven
Belgium
+32 16 326356
+32 16 326311 (f)
UPDATE: Normblog suggests some topics for the collection below. Anyone who wants to write on them should send me their efforts, with appropriate credits to Norm. I think consequential vs deontological approaches to walking might be fun to work out. I think I can will the universalised rule “All batters should walk iff they are playing against Australia or Victoria”, which probably messes up the deontological solution.
The July issue of the Journal of Philosophy has a paper by Frank Arntzenius built around a few puzzles about rationality, probability and belief, roughly in the tradition of the ones Chris and Brian posted recently and which attracted so much commentary. One of Arntzenius’s puzzles concerns a glitch in how a Bayesian agent ought to update her degrees of belief in x under a particular kind of uncertainty. But never mind about that. The example is about waiting for a reprieve on Death Row and is set up in the following way:
You are to be executed tomorrow. You have made a last minute appeal to President George W. Bush for clemency. Since Dick Cheney is in the hospital and cannot be consulted, George W. will decide by flipping a coin.
Cheap, but funny. It suggests a topic: the philosophical importance of U.S. Presidents. Bill Clinton is finding his way mainly into examples in the philosophy of language (“It depends on what the meaning of ‘is’ is”). Also possibly ethics courses. Bush Sr doesn’t seem to have left much in the way of a philosophical legacy. Dubya is a binomial estimator.
Richard Wollheim has died. There’s an obituary in the Guardian from Arthur Danto, and Chris Brooke has a relevant excerpt from Jerry Cohen’s “Future of a Disillusion”. Norman Geras has a post on Wollheim’s paradox of democracy . I have pleasant memories of Richard Wollheim from my time at UCL where I went to read for the M.Phil in philosophy in 1981. He chaired the research seminars there and I remember him mainly as a benign presence who asked penetrating clarificatory questions in a very plummy voice. A sad loss.
From Martin Schönfeld’s entry in the Stanford Encyclopaedia of Philosophy on Kant’s philosophical development:
Modern thought begins with Kant. The appearance of the Critique of Pure Reason in 1781 marks the start of modern philosophy, and Kant’s ideas have helped to shape global civilization. Today his texts are read on all continents. Although Kant is in the same league as Confucius or Aristotle…
I’ve got some relatives who have spent time in Antarctica, but I’ve never heard them talk about the Kant scholars down there. More seriously, there’s more than a few Descartes, Locke, Leibniz, Hume, Berkeley, Reid and Rousseau scholars who might dispute that modern philosophy begins with CPR, and a few million Americans who would probably dispute that there was no modern thought before Kant.
Chris’s post generated such an interesting comments thread that I feel I have to hop on the bandwagon. The following is a theologically revised version of a puzzle that’s been doing the rounds the past decade or so.
You, a rational agent, are in Purgatory, for good it seems. Things could be worse - you’ve heard horror stories about hell, but they could be better - you hear great things about heaven. These two possibilities seem to be equally valuable, in opposite directions. You’d be indifferent between your current state of affairs and a gamble with a 50% chance of a day in heaven and a 50% chance of a day in hell. (Purgatory is a lot like earth, so this kind of gambling is highly encouraged.)
One day an angel appears with a nice offer. God will give you some time in heaven for good behaviour. But He decided to play a little game to figure out how much time you’ll get. He wrote down two numbers, x and 2x, on slips of paper, and dropped them into identical envelopes. You will get one of the envelopes, it’s your choice which, and the slip will be good for the number of days in heaven that is written on it. The angel doesn’t know which envelope is which, and he doesn’t know what x is, except that it’s over 10. (They don’t send angels down for smaller missions than that.)
So you pick an envelope, and are about to see how long you’ll get in heaven when…
The angel makes you an offer. If you’ll do a day in Hell to cover for a friend of his that got caught up in a little scandal, he’ll give you the other envelope. He argues that this must be a good deal, as follows. Let y be the number written on your slip. The other envelope has either 2y or y/2 written on it. Each is equally likely, so your expected number of days in heaven if you switch envelopes is 0.5 * 2y + 0.5 * y/2 = 1.25y. So the expected gain from swapping is 1.25y - y = y/4. Since we know y > 10, y/4 > 2.5, so this is more valuable to you than a day in Hell.
The reasoning starts to sound attractive, until you worry about what other offers the angel has in mind if you accept. So you ask for some time to think about it. “It’s purgatory,” says the angel, “take all the time you want.”
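For readers who like to see the arithmetic played out: the angel’s expected-value argument can be checked against a quick simulation. This is just an illustrative sketch (the function name, the choice of x = 100 and the trial count are mine, not part of the puzzle): once we fix the actual pair of amounts in the envelopes, always switching does exactly as well as always keeping, despite the seemingly compelling 1.25y calculation.

```python
import random

# Sketch: simulate the two-envelope setup for a fixed (unknown to the agent) x.
# The envelopes contain x and 2x days in heaven; the agent picks one at random.
# We compare the average payoff of always keeping with that of always switching.

def simulate(x, trials=100_000, seed=0):
    rng = random.Random(seed)
    keep_total = switch_total = 0
    for _ in range(trials):
        envelopes = [x, 2 * x]      # the two slips God wrote
        pick = rng.randrange(2)     # your blind choice of envelope
        keep_total += envelopes[pick]
        switch_total += envelopes[1 - pick]
    return keep_total / trials, switch_total / trials

keep_avg, switch_avg = simulate(x=100)
# Both averages come out near 1.5 * x: switching gains nothing on average,
# contrary to the angel's promised expected gain of y/4.
```

The flaw the simulation exposes is in treating “the other envelope has 2y or y/2, each equally likely” as legitimate conditional probabilities once y is fixed; averaged over which envelope you actually hold, keep and switch have the same expectation.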
Here’s a nice puzzle, which I was told about over dinner last night. I’m not sure who devised it, though there’s a reference in a paper by Roy Sorensen:
You are in hell and facing an eternity of torment, but the devil offers you a way out, which you can take once and only once at any time from now on. Today, if you ask him to, the devil will toss a fair coin once and if it comes up heads you are free (but if tails then you face eternal torment with no possibility of reprieve). You don’t have to play today, though, because tomorrow the devil will make the deal slightly more favourable to you (and you know this): he’ll toss the coin twice but just one head will free you. The day after, the offer will improve further: 3 tosses with just one head needed. And so on (4 tosses, 5 tosses, … 1000 tosses …) for the rest of time if needed. So, given that the devil will give you better odds on every day after this one, but that you want to escape from hell some time, when should you accept his offer?
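The odds in the devil’s offer are easy to tabulate, and doing so brings out the sting in the puzzle. A small sketch (the function name and sample days are mine): the chance of escape if you play on day n, with n tosses and one head sufficing, is 1 − (1/2)^n, which climbs quickly toward 1 but never reaches it.

```python
# Sketch: the devil's offer on day n is n tosses of a fair coin, with a single
# head sufficing for freedom. You escape unless all n tosses come up tails,
# so the probability of escape on day n is 1 - (1/2)**n.

def escape_probability(n):
    return 1 - 0.5 ** n

for day in [1, 2, 3, 10, 20]:
    print(day, escape_probability(day))
# Day 1 gives 0.5, day 2 gives 0.75, day 3 gives 0.875, and by day 10 the
# chance already exceeds 0.999 -- yet every day you wait is a guaranteed
# extra day of torment, and waiting forever means never escaping at all.
```

So any finite stopping day looks dominated by the next one, while the limiting policy of never playing is the worst outcome of all; that tension is the puzzle.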
This is really Brian’s department, but a report that the world’s oldest person has passed away at the age of 116 leads me to ask whether it is, in fact, analytically possible for the world’s oldest person to die.
This in turn reminds me of a story that the late Dick Jeffrey once told me. While sitting on a bus in London in the early ’70s, he overheard two pensioners complaining about the newfangled decimal currency. They both agreed that change and progress were good things. But they thought it would have been better if instead of rushing to introduce the new money right away the Government had waited until all the old people had died.
Following up on yesterday’s great “spiritual plane debate,” I see via Atrios and Carl Zimmer that Gregg Easterbrook may subscribe to the theory of Intelligent Design. Best known in the version presented by William Paley, this is the view that, as Easterbrook puts it, “organic biology [sic] is so phenomenally complex that it is illogical to assume that life created itself. There must have been some force providing guidance.”
One is tempted to reply that anyone who believes in Intelligent Design clearly has never given birth or had impacted wisdom teeth removed. The ID crowd have thought of this problem. Easterbrook again: “Unlike creationism, intelligent-design theory acknowledges that the universe is immensely old and that all living things are descended from earlier forms.” Of course, if life just needed an initial push from God (if it couldn’t have “created itself”) and then descended with modification, we are left wondering what has happened to the “Intelligent Design” part of the theory — it’s not doing any work anymore. These guys love to have things both ways. Sadly, if you want the nice stuff you need to put up with the nasty bits, too.
Funnily enough, in his column Easterbrook says that ID is “now being argued out in the nation’s top universities.” Would these be the same “top universities” that we learned yesterday were places where people are “laughed out of the room” for their “irrational religious sensibilities”? Of course not. Though they have the same names as real universities, these “top schools” exist only in Gregg Easterbrook’s imagination, to be called forth at a moment’s notice to illustrate some point about the relationship between science and religion whose truth has already been determined in advance by what Carl Zimmer calls Easterbrook’s “own personal neat-o-meter.”
Debate about religion is alive and well at Universities — but intelligent design theory isn’t in the ascendancy in Biology departments and theists are not laughed out of rooms in philosophy departments. Come to think of it, I’d like to meet the last person who tried to laugh Alvin Plantinga, or either Bob or Marilyn Adams out of a room, but they’re probably in therapy somewhere.
Fresh from that thing about them greedy, violence-lovin’ Jews (for which he paid a big price), Gregg Easterbrook posts something about God. We all know that bloggers say posts from people they like are “characteristically insightful.” Here we have Gregg Easterbrook being atypically sophomoric. Again.
Cosmologists talk rather casually of alternate dimensions during the Big Bang or of the “many worlds” hypothesis in which there are billions of parallel universes, perhaps an infinite number, occupying an infinity of different dimensions. … Speculation about other dimensions is interesting, but there isn’t the slightest evidence—not a scintilla, as lawyers say—that other dimensions are genuine. Nor is it clear what, exactly, other dimensions could be like on a physical basis. The whole idea of other dimensions is mushy, to say the least.
Hang on, did you just say the legal department will be arbitrating this issue? And where is this line of thought going, anyway?
But the article left out the really interesting part, which is what the question of other dimensions says about the spiritual debate.
By “the spiritual debate” I believe Gregg means “the question of the existence of the omniscient, omnipotent, benevolent God of Christian theology.” Or perhaps he just means the question of the existence of an ineffable immaterial something-or-other that would automatically give our lives meaning and distract our attention from the cold grave that awaits us all. I’m not sure.
At Yale, Princeton, Stanford, and other top schools, researchers discuss ten unobservable dimensions, or an infinite number of imperceptible universes, without batting an eye.
Cosmologists and astrophysicists are indeed known for their ability to come up with quite striking hypotheses of this sort. Though in this case they may have been anticipated by philosophers.
No one considers discussion of other dimensions to be peculiar. Ten unobservable dimensions, an infinite number of invisible parallel universes—hey, why not?
Well, lots of people have considered them very peculiar indeed. But never mind. And then there is the whole thorny issue of the arguments one might offer to support such ideas and the degree to which they help explain facts about the world as we know them and hence make their pecularity bearable, or a bullet worth biting as philosophers just love to say.
Yet if at Yale, Princeton, Stanford, or top schools, you proposed that there exists just one unobservable dimension—the plane of the spirit—and that it is real despite our inability to sense it directly, you’d be laughed out of the room. Or conversation would grind to a halt to avoid offending your irrational religious superstitions.
Now, Gregg probably thinks he has just given his readers an example that supports his argument. See, there goes the God guy, laughed out of the freakin’ room. Shocking. There’s the embarrassed silence after God guy has spoken up, as everyone waits to get back to talking about multiverses and invisible dimensions for which not a shred of evidence exists because that is science. Unbelievable. At the best schools in the country, too. I swear I saw this in a movie once.
Sadly, what has in fact happened is that Gregg just made something up out of thin air in order to shore up his earlier assertion. Discussions about the “plane of the spirit” (again, I think Gregg is thinking about something much more specific than that phrase suggests) take place all the time at all the top schools. When they take place in the dorm room at 3:20am after a game of beer pong, they sound a lot like Easterbrook does in this post.
To modern thought, one extra spiritual dimension is a preposterous idea, while the notion that there are incredible numbers of extra physical dimensions gives no pause.
Not a single goddamn pause! Dude, that is just so frickin’ ironic. I mean, it’s like, the same thing, man. The very same thing! And they just don’t see it. Write that down. Write that down now. I gotta tell Professor whatsherface about this tomorrow, if I make it to class.
Propelled by the high elasticity of our invented example, our vague generalization accelerates towards the sweeping conclusion which must inevitably follow:
Yet which idea sounds more implausible—one unseen dimension or billions of them?
QED! Slam dunk! Either you are all modernist scientistic fools whose theories are built on sand, or my God exists! Or both! Muahahaha!
Actually, taken together, this post and the one about the Jews show the problems with naive falsificationism as a philosophy of science. I believe, based on previous observation, that Gregg Easterbrook is a smart guy. But here we have two empirical cases that clearly falsify this belief. But do I abandon it? No. Instead, I start coming up with auxiliary hypotheses to protect the main one: Gregg is having a bad week. Gregg is drunk. (But the item was posted at 9:58am!) OK, OK, Gregg hasn’t had any coffee. Whatever it takes. But I’m wondering how much more evidence needs to accumulate before the paradigm shift happens.
Sometimes, when I’m reading or listening to a paper which excites me with its novelty and brilliance, perhaps because it contains some really elegant move, a mental image comes into my head of Steve McManaman running with the ball, circa 1996. Colin McGinn, writing in the latest Prospect about how he became a philosopher, would see the parallel:
The metaphor that best captures my experience with both philosophy and sport is soaring: pole vaulting, gymnastics and windsurfing clearly demonstrate it, but the intellectual highwire act involved in full-throttle philosophical thinking gives me a similar sensation - as if I have taken flight, leaving gravity behind. It is almost like sloughing off mortality. (Plato indeed thought that acquiring abstract knowledge is a return to the prenatal state of the immortal soul.) There is also an impressiveness to these physical and mental skills that appeals to me - they evoke the “wow” reflex. Showing off is an integral part of their exercise; but as I said earlier, I don’t have any objection to showing off. In any case, there is not, for me, the discontinuity between sports and intellectual activities that is often assumed. It is not that you must either be a nerd or a jock; you can be both. It has never surprised me that the ancient Greeks combined a reverence for the mind with a love of sports: both involve an appreciation of the beauties of technique skilfully applied. And both place a high premium on getting it right - exactly right.
The British Philosophical Association, which aspires to be a professional body representing all academic philosophers in the UK, has its inaugural conference today. Onora O’Neill (Cambridge) and Robert Audi (Notre Dame - from the American Philosophical Association) are the keynote speakers. I’ll be there.
Interesting knockabout stuff from two people who’ve decided to take it up a notch in terms of Great Weblog Comments Battles and duke it out in public on Daniel Drezner’s site with $100 at stake. The battle is over the subject “Did Bush Say That Iraq Was An Imminent Threat Or Not?”.
As far as I can tell, the case for the defence is that Bush specifically said that Iraq wasn’t an imminent threat, but that it was about to become an imminent threat and he didn’t propose to wait until it became imminent.
In other words, Bush does appear to be committed to the claim “Event I’ is imminent”, where I’ is defined as “the event of event I becoming imminent” and I is defined as “Iraq being a threat”. Which means to me that this particular line of argument turns on the question of whether “imminent” is a transitive predicate, or in other words, if something will imminently become imminent, does that mean that it’s imminent now?
My guess is that “imminent” is a short-transitive predicate; it’s transitive so long as the chain of “imminents” isn’t too long. Short-transitivity is a somewhat controversial logical property, however, albeit one which would be fantastically useful for economists in making axiomatic theories of revealed preference if it could be put on a rigorous footing. I’ll leave the matter to our resident expert on the subject, Mr Weatherson.
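Daniel’s “short-transitive” suggestion can be given a toy rendering. Here is a minimal sketch of my own (the operator, the bound K, and the function name are all invented for illustration, not anything from the post): treat “imminent” as an operator that can be nested, and stipulate that a chain of “imminently becoming imminent” collapses to plain “imminent” only when the chain is short enough.

```python
# Toy model of a "short-transitive" operator (illustrative only).
# An event sits at "depth d" if it is wrapped in d nested
# "is about to become ..." operators. Stipulate that the chain
# collapses to plain "imminent" only when d <= K.

K = 2  # maximum chain length that still collapses (pure stipulation)

def collapses_to_imminent(depth: int, k: int = K) -> bool:
    """Does a depth-`depth` chain entail plain 'imminent'?"""
    return depth <= k

# On Daniel's reading, Bush claims I' = "I becomes imminent" is
# imminent, putting I at depth 1 -- short enough to collapse.
assert collapses_to_imminent(1)      # so I counts as imminent
assert not collapses_to_imminent(5)  # but a long chain does not
```

The interest of the property, as Daniel notes, is exactly that the bound K has no principled home: any rigorous choice of K looks arbitrary, which is why short-transitivity remains controversial.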
The website of the American Philosophical Association is a quiet affair as a rule, but its section on Calls for Papers turns up the odd gem:
Philosophy and The Onion. Now soliciting proposals for projected philosophical anthology on any aspect of The Onion, America’s leading satirical newspaper.
This is followed by,
The Undead and Philosophy … Abstracts are sought for a collection of philosophical essays on the theme of the undead.
A link to further information helpfully explains that “We define ‘the undead’ as that class of corporeal beings who at some point were living creatures, have died, and have come back such that they are not presently ‘at rest.’ This would include supernatural beings such as zombies, vampires, mummies, and other reanimated corpses.”
The two projects might be combined: Once the Undead and Philosophy is published, someone from The Onion can whip up something pretty quickly.
To be fair, it’s clear there’s plenty of philosophical mileage in the undead. There are tricky definitional problems — Do vampires count as “reanimated corpses”, for example? — which raise the question of whether “the undead” admit of a reductive analysis. My friend Dave Chalmers is a noted authority on zombies, though the zombies he has in mind are rather different from common or garden Hollywood Zombies. Philosophical zombies are roughly defined by the fact that they, unlike Dave, can’t have anything in mind, ever.
Apart from metaphysics and philosophy of mind, the organizers are also interested in bioethics, “cultural theory and globalization studies”, and existentialism. Some of these practically write themselves. Bioethics, for instance. The basic line is that it’s OK to turn someone into a Zombie, as long as you get them to sign an informed consent form beforehand. I can see the press coverage now: “The new Zombie technology will force us to ‘rethink our attitudes towards life and death,’ Dr Arthur Caplan of the University of Pennsylvania said today. ‘Socially acceptable levels of body odor are going to be up for debate, too,’ he continued. Meanwhile, University of Chicago Bioethicist Leon Kass said that Zombies — especially cloned Zombies — are ‘repugnant to human dignity’ for reasons which are at once intuitively obvious to all and difficult to articulate clearly.”
So get writing. The deadline for submissions is December 15, 2003. In the first place this should of course be an undeadline, and anyone from the Southwestern United States could tell you the date should be set at November 2nd. On the other hand, the call for papers comes from Weber State University, which though not a noted center of eldritch activity is located in Ogden, Utah, where it’s probably pretty dead most of the time.
I just stumbled across the webpage for The Monads. When they were compresent with us as such, the Monads were constituted by three WWU undergraduates, two of whom are now UMass graduate students, Kris McDaniel and Justin Klocksiem. I was just complimenting Kris’s philosophical abilities the other day and I forgot entirely to mention his musical accomplishments. Bad omission! If you like philosophical musical humour, you should download some of the songs they have posted. I particularly liked Meinongian Babe, which is the kind of song you might have heard on the Magnetic Fields’ 69 Love Songs had Stephin Merritt been a philosophy major. (Note that’s an 8MB download, so if everyone downloads it we’ll probably crash the UMass server.)
Richard Rorty has an article in today’s Boston Globe arguing that Davidson showed that “reality can’t be an illusion.” (Note: that quote is from the subhead not from Rorty.) Since it’s Rorty it’s little surprise that I don’t believe a word of it (sadly I don’t have time to write a long enough post to convincingly say why) but it’s a much better philosophical article than you’ll normally see in an American newspaper. (Thanks to the APA News service for the link.)
Some sort of mad puritanism seems to be afflicting parts of the blogosphere. Oliver Kamm (in comments to Harry Hatchet), then Natalie Solent and Stephen Pollard, have been dogmatically asserting that government should limit itself to the provision of public goods, the assurance of basic rights and to treating citizens justly (though they disagree on what that means). Compassion, according to them, is a virtue (if it is a virtue) that should be exercised by individuals in a private capacity and not by government. But that just looks far too austere.
If “government” here is taken to mean all public officials acting in their public capacity (as it should be), then I’m pretty sure most people, including, I hope, the writers I just mentioned, believe that justice should be tempered by mercy. Now the relationship between mercy and compassion is a little bit obscure. Certainly, there can be instances of mercy where the merciful party isn’t acting from the motive of compassion. So when Caesar pardons the condemned in deference to the crowd’s wishes, it isn’t necessary that anyone is feeling compassionate. But I take it that, when judges, or tax inspectors, or social security officials use their discretion to exact lesser penalties than they might in the light of the human situation of the person in their power, compassion is often the relevant motive. Indeed, a person completely lacking in compassion for others would be a very bad candidate for any position of authority, within the state or elsewhere, because they would lack the capacity to judge when it would be right to act mercifully or would try to emulate that capacity in a clunky external kind of way by copying the behaviour of those who do have the disposition to be compassionate. I guess these writers are misled because they rightly reject the idea that a kind of gooey sentimentality could be the basis for social or welfare policy. But basing such policy on justice doesn’t exclude, and I think requires, a space for the virtue of compassion.
(There’s much more to be said, of a somewhat involved kind, in this area, about the relationship between compassion and the motive to justice, between compassion and the requirement of civility among citizens, and about compassion and positive duties of aid.)
Juan Non-Volokh said that Joe Lieberman said something false on the weekend:
For example, Lieberman stated that the Bush Administration’s “Clear Skies” proposal to reform the Clean Air Act “actually would increase pollution” … He’s wrong … and should know better as a member of the Senate Environment Committee.

First, the proposed “Clear Skies” legislation will reduce utility emissions of NOx and SOx by around 70 percent. As I have noted before, the worst that can be said of “Clear Skies” is that it will reduce utility emissions marginally less than they might be reduced under current law – I say “might” because current projections presume that the current regulatory process will stay on schedule, and this is unlikely. Either way, this is not a policy that “actually would increase pollution.”
My first thought was that there’s a meaning for ‘increase’ that Lieberman could be using here. On second thoughts, I’m not so sure, but the semantic question is pretty interesting I think, at least if you’re a semi-professional semanticist.
I’d have thought that the meaning of ‘increase’ would be in the ballpark of these two concepts, which I define stipulatively.
X increasest Q if X causes Q to be larger than it previously was. (The t subscript is to indicate that this is a temporal concept - what matters is that Q grows over time as a consequence of X.)

X increasesc Q if X causes Q to be larger than it otherwise would have been. (The c subscript is to indicate that this is a counterfactual concept - what matters is that Q is larger than it is in some relevant counterfactual situation.)
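The two stipulated concepts just compare Q against different baselines: its earlier value, or its value in the relevant counterfactual. A minimal sketch of my own (the function names and the Merlin numbers are invented for illustration):

```python
def increases_t(q_before: float, q_after: float) -> bool:
    """Temporal: Q ends up larger than it previously was."""
    return q_after > q_before

def increases_c(q_actual: float, q_counterfactual: float) -> bool:
    """Counterfactual: Q ends up larger than it otherwise would have been."""
    return q_actual > q_counterfactual

# Merlin case, with made-up numbers: the pile starts at 100 rocks;
# with the spell it ends at 97; without the spell it would have ended at 96.
assert not increases_t(100, 97)  # the pile shrank over time
assert increases_c(97, 96)       # but it's larger than it otherwise would be
```

The sketch makes vivid why the two concepts can come apart: the temporal reading looks only at the actual timeline, while the counterfactual reading looks sideways at an alternative one.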
What is the relation between the meaning of ‘increase’ and my concepts increaset and increasec? If it means the disjunction of the two, or it is ambiguous between the two, then Lieberman is (arguably) off the hook, since Juan is conceding (at least for the sake of the argument) that the Clear Skies legislation increasesc pollution. Unfortunately, it isn’t obvious that either of these claims holds. Here are a couple of cases to test intuitions.
Merlin
We have here a pile of rocks. The pile is a little unstable, and some rocks are rolling down it, and if left untouched they will soon roll off the pile. Merlin casts a spell that halts one of the rocks, and moves it to a stable point. While this spell is being cast, some other rocks roll off the pile. Did Merlin increase the size of the rock pile? My intuition is that he did not, even though he did increasec it.
RHAWP
The Red-Haired Australian Welfare Plan (RHAWP) calls for all red-haired Australians to be given a one-time payment of $100,000. Let’s imagine (contrary to fact) that we are in a scenario where the deficit is falling fast enough that even with the RHAWP in place, it will still fall next year. In that circumstance, would the RHAWP increase the deficit? It would increasec the deficit, but not increaset it. In this circumstance I’m a little more conflicted - I’m certainly more inclined to say that the RHAWP will increase the deficit than that Merlin increased the size of the rockpile. That’s good news for Lieberman, since arguably the Clear Skies legislation will have the same effect on pollution as the RHAWP has on the deficit.
Here’s my best guess as to what is going on. ‘Increase’ really does mean increaset, and Juan is right that what Lieberman said is false. But we are happy to use it of people who exacerbate, that is increasec, something we perceive to be a problem. To test this prediction, imagine what would happen if we thought the rock pile was getting in the way of something we want to do. For example, assume it covers up a mine shaft down which a young child is trapped. I think in that case I’d be more prepared to say that Merlin increased the rock pile. And I think (very tentatively) the best analysis of the case is that it’s a case where one can appropriately say something not true because it’s a useful way of communicating something that is true.
One final question. Juan says that
A “lie” is a deliberately false statement, typically made with an intent to deceive. Not just any false statement, or bit of spin, will do. Intentional deception is key.
Hmmm, is intent ‘typical’ or ‘key’? Anyway, we might wonder whether the following situation constitutes lying. X utters S (deliberately), S means that p, X knows p is false, but X thinks S means q, and X believes q is true. I’d say that’s not a lie. And in Lieberman’s defence, that might be what happened here. Before thinking through the cases, I thought ‘increase’ might mean increasec. I now think that’s wrong, but I think it’s a mistake a competent speaker could make. (I don’t think I was incompetent before I thought about the Merlin case.) Juan makes a point of not accusing Lieberman of lying, and I think that was correct given these considerations.
Suppose there are two possible states of the world, S1 and S2, and we don’t know which of the two states the world is in. An event E occurs which is consistent with the world being in either S1 or S2, but is more likely in S1 than it is in S2. We should surely say that, given E, the world is more likely to be in S1 than in S2, and that to that extent E (though consistent with both possible states) is evidence for the world’s being in S1.
Such evidence isn’t, of course, conclusive. After all, by hypothesis, E is consistent with both possible states. But evidence doesn’t need to be conclusive evidence to count as evidence.
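The point being made here is the standard likelihood reasoning of Bayesian confirmation: if E is more probable given S1 than given S2, conditioning on E raises the probability of S1 without making it certain. A quick numerical check, with illustrative numbers of my own choosing:

```python
def posterior_s1(prior_s1: float, p_e_given_s1: float, p_e_given_s2: float) -> float:
    """Bayes' rule for two exhaustive hypotheses S1 and S2."""
    prior_s2 = 1 - prior_s1
    p_e = p_e_given_s1 * prior_s1 + p_e_given_s2 * prior_s2
    return p_e_given_s1 * prior_s1 / p_e

# E is consistent with both states, but likelier under S1 (0.6 vs 0.3).
post = posterior_s1(prior_s1=0.5, p_e_given_s1=0.6, p_e_given_s2=0.3)
assert post > 0.5   # E raised our confidence in S1 ...
assert post < 1.0   # ... without settling the matter
```

With these numbers the posterior comes out at two thirds: the evidence moves us, but only by a degree, which is exactly the sense in which inconclusive evidence is still evidence.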
That sensible view of what evidence is doesn’t appear to be shared by new enviroblogger Professor Philip Stott, whom I welcome to the blogosphere in the traditional way - by arguing with him.
Stott writes:
I have just been contemplating two recent polls on ‘global warming’. They are fascinating and unexpectedly consistent, though quite different questions. First, my ever-vigilent younger daughter has just let me have the position of a current poll on Discovery Europe (choose UK option):
Does global warming concern you?
No: 48%
A little: 12%
Very much: 39%
[As at this posting]
Secondly, here are the results of a poll held by The Scientific Alliance (see my Links):
Do you believe that this summer’s exceptionally hot temperatures were evidence of climate change?
Yes: 40%
No: 48%
May be: 12%
These are surprisingly mature results - public opinion might be changing.
I take it (by implication) that Stott believes that the 48 per cent who rejected the view that the exceptionally hot temperatures were evidence of climate change were giving the right answer. But surely they weren’t. No-one but a fool would think that Europe’s hot summer was conclusive evidence for climate change. But, as noted above, evidence can be evidence without being conclusive. Indeed it can be evidence even if it just raises our confidence in a proposition by a tiny degree. Europe’s summer should have made believers in climate change a bit more confident in their beliefs and should have made sceptics a bit less sceptical. Typically, that’s what evidence does.
is the title of a not bad article in The Age today on time travel. They give too much credence to branching universe hypotheses for my tastes, but there’s some fun quotes from some leading thinkers, and a relatively straightforward description of Paul Davies’s time machine plan.
Roger Scruton has a piece on Donald Davidson’s importance in the latest Spectator .
UPDATE: Scruton’s piece is sort of OK, but I wouldn’t have bothered linking had I come across this fascinating interview of Davidson by Ernest LePore first (via Brian Leiter’s site).
Musing further on whether technological development has helped or hindered thinking, and especially philosophical thinking, it occurs to me that the ideas of which I’m (rightly or wrongly) most proud have generally started not when I’ve been trying to do philosophy, but when I’ve been daydreaming about it whilst doing something else: travelling on a train, riding a bicycle, swimming or whatever. Purely mechanical and repetitive activities can be good for this too, though it is for good reason that there are a whole range of philosophical stories in which philosophers let cooking pots boil over, poison people or run them down whilst in the middle of their reveries.
Then there’s the business of writing, of trying to turn ideas into publishable prose. I’ve adopted two strategies for getting this done - both of which work very well, but eventually seem to run their course.
Strategy A is the Anthony Burgess method. I read an obituary of Burgess which revealed that he would write 1000 words every day before retiring to the nearest bar to sip a martini. I’m sorry to say that I skipped the martini part, but, for a long time managed the routine of 1000 words. Many of those words, certainly most of those words were garbage and got thrown away, but gradually, like whisking mayonnaise, publishable material started to emerge. Indeed my best ever paper (IMHO) came from following this writing strategy.
Strategy B I think of as the “football method”. Whilst I can spend whole mornings (and afternoons) getting absolutely nothing done, everyone who watches football (soccer) knows just how much can happen even in a few minutes of extra time. (I spend far too much of my life watching football matches.) I’ve found that 45 minutes of intense writing activity, followed by a 15-minute break (half-time) followed by a further 45 minutes, is also very productive (repeat as required).
The two methods are similar in that they allow you to get a lot done (cumulatively) in a little time, though one is like piecework and the other is like payment by the hour. Given I know their effectiveness, I think the complaint academics make that they don’t have enough time for writing and research is probably misconceived. Time, strictly speaking, isn’t the issue. What is a problem — as I know from the fact that I’m having to manage a department for the second time in my life — is that it is far too easy for other matters to colonise your head. To work effectively you need to be able to do a combination of concentration and daydreaming (self-hypnosis is good here!) but that isn’t possible if your thoughts are full of finances, staffing problems and achieving the next government target.
Larry Solum adds his thoughts to the philosophical immortality discussion. He has lots of interesting things to say, and some extra thoughts on which legal theorists will survive, but I feel a bit sceptical about this:
The twentieth century was the first time in human history that literally tens of thousands of very smart people worked on philosophical problems for most of their waking hours—with all of the advantages of modern technology (try writing a really big book with a quill pen or traveling four hundred miles by horse to consult a library). In the twentieth century, there was a lot of low hanging philosophical fruit. Much of it was plucked. History will remember.
Does technology really help? Sure, there’s been some philosophical progress but I’m not convinced it has much to do with the availability of typewriters, computers and motor vehicles. Philosophy is a funny business, sort of stuck half way between scientific research and creative writing or music. To the extent to which it is like scientific research then the good thoughts are dissociable from the person having them. But we can also think of a style of writing and thinking as being characteristic of a creative individual and not easily pulled apart from them. Some philosophers are closer to one pole than the other. It is at least arguable that music and literature did a lot better with the horse and the quill pen than they have in the electronic age. Maybe in 100 years we’ll think philosophy did too. Technology might help, but it might just get in the way.
Picking up on some remarks of mine, Brian Leiter is playing the “which contemporary philosophers will still be read in 100 years” game - which can be quite fun. My money is on Rawls and Parfit. Not that they are necessarily the best, but other contenders who have written less monumental works will have their thoughts incorporated into philosophical discourse in a way that floats free of the original form those thoughts were couched in.
Oxford philosopher Kathy Wilkes, probably best known for her book Physicalism, has died. The London Times has an obituary. (I’ve now started to add Donald Davidson obituaries to this post.)
Donald Davidson, one of the foremost philosophers of mind and language of recent decades, died yesterday in Berkeley. Davidson was the author of many papers that defined the terms of subsequent debate, such as “Actions, Reasons and Causes” and “How is Weakness of the Will Possible?” The last couple of years have seen a succession of philosophical giants die (Lewis, Rawls, Nozick, Williams) and it is sad to see Davidson joining their number. An account of his life and importance can be found at the Stanford Encyclopedia of Philosophy. I’ll add links to obituaries to this post as they become available. (News via Brian Leiter’s site). Obituaries: New York Times, UC Berkeley News, Guardian, The Times, Daily Californian, Independent.
A staple of intro philosophy courses is the ethics of runaway trolleys. There’s probably an interesting sociological study as to why this is so, but rather than delve into that I thought I’d share a new-sounding version of the trolley problem due to Carolina Sartorio posted on Philosophy from the (617).
For those who’ve missed this line of study before, the puzzles arise from reflecting on cases like the following.
Basic Trolley Case: There is a runaway trolley car careening down some railway tracks towards two tunnels. If it stays on the track it is on, it will kill five people working in the eastern tunnel. If you pull a switch it will move onto a side track, go through the western tunnel and kill the person working in it. Assume you can’t do anything to stop the car or change its path except pull this switch. You pull the switch, saving the five and leading to the one’s death.
Fatman: Same runaway car, but without the switch. This time, you’re standing on a cliff looking over the track the car is about to careen down. If unchecked it will kill the five people in the tunnel. Fortunately, there is a very fat man standing beside you - fat enough that if somehow he were to fall onto the track his sheer mass would stop the trolley. You give him a little shove, he falls off the cliff onto the train tracks, is killed by the trolley (if not the fall), and the five are saved.
A significant number of people think your action in the first case is permissible, perhaps even mandatory, but the action in the second case is impermissible. Providing a reason for the different attitudes towards the two actions is not entirely trivial, which I guess is one of the reasons these cases are staples of intro philosophy courses.
Here’s Carolina’s case.
Fatman*: There are five people tied up to a track. One of them is a fat man, and somehow I can shove him into the path of the train (although I can’t save him!) to stop the train before it kills the other four.
There are two puzzles here. The first, in which Carolina is interested, is whether the action is permissible in this case. The second, which seems more fun to me, is whether we can tell anything like a plausible story in which the facts are as Carolina stipulates. I’m moderately dubious that this can be done, but I should never underestimate the powers of storytellers.
I wonder if the intuitions about the cases will differ depending on just how the details are set out. Just for fun, here’s a variant that I think is like the Basic Trolley Case.
Evil Demon: An evil demon has set up a runaway steam engine to travel through five tunnels, killing the people tied to the tracks in each tunnel. You can’t stop the train (it’s got a really powerful engine) but you can change the order in which the train goes through various tunnels. You notice that the man tied to the tracks in tunnel #5 (i.e. the tunnel the train is scheduled to go through fifth) is really fat. It looks probable the train will derail when it hits him. (It’s a pretty resilient train, so it can run over the supermodels in the other four tunnels without being derailed, but it can’t handle the fat man. Probably.) So you reroute the train to go through his tunnel first, it hits him, kills him and is derailed, saving the four.
Is this action permissible? Is it mandatory? Is it a sign of completely awful character that one even thinks about these puzzles? Of worse character to write about them? I don’t know - I just do time travel.
I’m teaching a freshman seminar on time travel at Brown this year, so I’ve been watching a lot of time travel movies as ‘preparation’. I always knew that many time travel movies don’t make a lot of sense on a bit of reflection. What surprised me on recent re-watchings was that some seemed unintelligible even on relatively generous assumptions.
Philosophers normally break time-travel stories into two categories: those that do make sense within a ‘one-dimensional’ view of time and those that don’t.
The ones that make sense on a ‘one-dimensional’ view never have it the case that at a particular time something both is and isn’t the case. They don’t require that the direction of causation always goes from past to future, that would stop them from being time travel stories after all, but they require that there be a single complete and coherent story that can be told of the history of the world. Some philosophers are known to reserve the label ‘consistent’ for these stories, but that’s probably a bit harsh.[1]
Some stories keep to this constraint, even when they are under a lot of pressure to break it. The first Terminator does, the second Terminator might (though it’s normally interpreted as violating it), and both 12 Monkeys and its inspiration La Jetée display quite a bit of ingenuity in telling an involved time-travel story that has a coherent one-dimensional history.
But obviously this kind of constraint is not a universal norm among time-travel stories. For example, the whole point of the Back to the Future movies is that what time-travellers do can change the course of future history. (If you need, or even want, a refresher on what happens in the movie, one is available here, though be warned that site launches a very annoying MIDI file unless your browser is configured to block that kind of thing.)
In Back to the Future in 1985 the first time around George works for Biff, and the second time around, after Marty has changed the past, Biff works for George. So this is a violation of the one-dimensionality principle. I had always assumed that the movie could be made sense of on a ‘branching time’ model. Indeed in the second movie that’s exactly the kind of model they say they are using.
The idea is that the history we are familiar with is only one branch of the tree of time. This isn’t a wholly unknown picture. I’ve been told that Aristotle believed something similar, and (if you believe everything you read on the web) a few quantum mechanics specialists also hold a similar view. (Personally I think it’s about as plausible as the world-rests-on-a-giant-turtle theory, but the history of philosophers making speculations about physics is not great, so I’ll be a little restrained here.) On this picture the other branches exist, and the only thing that’s special about our branch is that we’re in it. Before a branch point it isn’t determined which branch we will end up on. The full story of the world includes a whole array of things totally unlike anything we know - our history is the story of a particular climb up the tree of time, a climb that could have turned out very very differently to how it actually did.
It should be easy to fit Back to the Future style time travel into this picture. When Marty goes back into 1955 it isn’t pre-determined whether he will stay in the branch from whence he came. And he changes his world enough that he more or less has to move into another branch - ultimately a branch in which his parents are much more successful than they actually are. (Or were. Or something. Ordinary tense words don’t handle this kind of situation very well, as Douglas Adams pointed out somewhere.)
So far so good. Now obviously one part of the movie isn’t compatible with this picture. If Marty is safely and soundly in his new branch, there’s no reason to think he will ‘fade away’ if in that branch his parents don’t meet and marry and conceive etc. He’s there and that’s all there is to it. So a major plot line of the movie becomes a little incomprehensible. But apart from that, I thought it was going to be possible to make sense of it all.
What surprised me on re-watching the movie [2] was that even granting them a branching time universe, and ignoring the lack of reason for Marty to ‘fade away’, the story in the movie still didn’t make sense. Here’s why. In the new branch that Marty moves onto, his parents meet, he is conceived, born and grows up in a successful family, rather unlike the family he remembers growing up in. Marty also travels forward in time in that branch from 1955 to 1985. The Marty that got to new 1985 by time travel is around at the end of the movie - we see his surprise at how different new 1985 is. But the Marty that was born, raised etc is not. On the branching time model, there should be two Martys around now, but the movie only gives us one.
Maybe the movie could make sense on an even stranger metaphysics than regular branching time. What we need is a metaphysics with not only branching time, but also some cross-branch relations that determine who (in one branch) is the same person as whom (in a different branch). And we need those relations to have enough causal force that when a person is in a branch they shouldn’t be in, or are too often in a branch they shouldn’t be that often in, the relations somehow make the world fix things. But even this doesn’t explain why new 1985 Marty should not remember growing up in a successful household. It’s really all a mess, even granting a really wild metaphysical picture. What amazes me is how it seems to work under its own logic while one is watching it. Some enterprising grad student should work out just what that logic is - they could probably justify anything whatsoever using it.
[1] There are several interesting aesthetic questions related to this distinction. For instance, is it a vice in a time-travel story that it does not make sense on a one-dimensional view of time? I used to think the answer was yes, then I decided that was much too snobby. But after my recent bout of time-travel movie-watching, I’m drifting back to my former position. At the very least, it’s a virtue of those stories that do keep to one-dimensional time, just because one-dimensional time-travel stories are so pretty when done well. The plot devices in the last two Harry Potter stories may have been fairly awful, but the time-travel story at the end of The Prisoner of Azkaban is rather good for just this reason. That story gets bonus degree-of-difficulty points for having the characters interact with themselves (admittedly at a distance) in a more-or-less psychologically plausible way.
I think that stories that violate this constraint all too frequently rely on our assumption that causality always moves forward in movie (or book) time. I’d be surprised if someone could tell a decent time-travel story in a movie where the order of scenes didn’t match up with what happened in real time or in any character’s personal time. (Think Pulp Fiction meets Back to the Future.) I imagine that the result would be incomprehensible. I’ve seen some people argue that the final scenes of Tim Burton’s Planet of the Apes should be understood this way, but since those scenes are incomprehensible, that doesn’t really hurt my point. On the other hand, I imagine that with some ingenuity one could chop up a good ‘one-dimensional’ movie like 12 Monkeys into all kinds of rearranged scenes and have it still be tolerably coherent.
[2] Well, not the only thing. As has been noted here previously, the 80s were a really strange time. The ‘fashions’ are … well, the less said the better. But the thing I’d totally blacked out was that in the movie they try and make Marty look cool by having him play in a Huey Lewis cover band. It’s hard to comprehend what they were thinking. I was rather shocked to hear a Huey Lewis song on a ‘classic rock’ station in Seattle, but the idea that at one time associating with his music was a way to impress pretty 17-year-olds is just wild.
On the other hand, I shouldn’t play up the fact that I remember much of this time at all. Many of the students in my course won’t have been born when Back to the Future was released. Hopefully that means they won’t ask too many hard questions about why the plot doesn’t seem incomprehensible on first viewing.
I imagine most readers have seen Edward Adelson’s checkershadow illusion, because it’s done the rounds of a few blogs. If you haven’t seen it, it’s worth looking at, because it’s really quite remarkable.
I always find these things fascinating because of what they tell us about the boundary between perception and inference. So I was interested to see how many similar illusions he has posted, many of them as part of the very good HTML version of Lightness Perception and Lightness Illusions.
I had always thought that the illumination illusions were all trading in some way or other on either simultaneous contrast or perceived three-dimensionality. But it turns out that this can’t be the whole story, at least if simultaneous contrast is construed fairly narrowly.
In the snake illusion two of the squares that appear to be different shades are (a) the same shade and (b) surrounded by areas that are also the same shade. If there’s a contrast effect, it’s from areas that surround the areas that surround the squares. Either there are many more contrast effects than seemed a priori plausible (not that contrast effects were given much weight back when philosophers tried to do psychology a priori) or something other than contrast effects is at work here. (The best Adelson can do with it is to suggest that straight lines are better than curved lines at marking off contrast areas, which seems to describe the phenomenon more than explain it.)
I’m no expert in this area, so I don’t know whether someone has explained what’s going on with the snake in the 7 or so years since it appeared, but it seems very mysterious to me.
If you’ve ever put in hard time trying to make sense of the writings of Richard Rorty, you’ll probably get some harmless giggles out of this deliciously silly poem that Norman Geras has managed to acquire, Bob Woodward-style, from a poet who wishes to remain anonymous. Here’s a taste:
Richie Rorty, Richie Rorty,
Naught he hadn’t read, it seems.
Heidegger and Nietzsche brought he,
Both, to feature in his schemes,
Next to others not so warty:
Caught he Dickens, Proust and Yeats,
Kundera and Orwell. Sought he
To cavort with them as mates.
Since I’m at it, I recall that the Philosophical Lexicon provided us with this useful definition:
a rortiori, adj. For even more obscure and fashionable Continental reasons.
I’ve blogged before on Junius about retired British philosopher Ted Honderich and his lamentable book After the Terror. It seems that Honderich is now involved in a fierce spat with his German publishers Suhrkamp Verlag who have withdrawn the book after charges that it is anti-semitic were levelled by Micha Brumlik (Director of the Fritz Bauer Institute, Study and Documentation Centre for the History of the Holocaust and Its Effects). Jurgen Habermas, who originally recommended the book to Suhrkamp, now agonises about and seeks to contextualise his recommendation. Honderich in turn, angrily rejects the charge of anti-semitism and calls for Brumlik to be dismissed from his post by the Johann Wolfgang Goethe University, Frankfurt am Main.
For what it’s worth, Brumlik’s charge of anti-semitism is, in my view, technically unwarranted. I doubt that Honderich bears any animosity towards Jews as such. But Brumlik is correct to state that Honderich “seeks to justify the murder of Jewish civilians in Israel.” In Honderich’s recent essay “Terrorism for Humanity” he gives a list of propositions including “Suicide bombings by the Palestinians are right.” He says of his list: “These are some particular moral propositions that many people, probably a majority of humans who are half-informed or better, now at least find it difficult to deny.”
There’s probably some possible world where I’m moved by freedom of speech considerations to the thought that Suhrkamp shouldn’t have withdrawn Honderich’s book (though it hardly amounts to censorship, since they’ve relinquished the rights and he can presumably disseminate it himself). But I can’t summon up any indignation on behalf of someone with his odious views who also calls for his critics to be sacked from their academic posts.
(Honderich’s site has links to the text of Brumlik’s letter, Habermas’s thoughts, Honderich’s replies and “Terrorism for Humanity”.)
“Philosophy Talk”, a new public radio show hosted by two esteemed Stanford philosophers, John Perry and Ken Taylor, pilots tomorrow on KALW. The show is at 1pm Pacific Time (that’s 4pm in New York, 9pm in London and 6am in Melbourne, if I’ve done my sums correctly) and if the technology is working should be available in live streaming. The show tomorrow is on lying, with Tamar Shapiro (also from Stanford philosophy) and Paul Ekman, the world’s foremost authority on emotions and facial expressions, among the guests. It should be fun, and it should certainly be better than what passes for ‘talk’ radio in this country. If you want more info about the show, this puff piece from the Stanford Reporter gives John Perry a lot of space to set out what he wants to do with the show.
Does anyone know who was John Rawls’s PhD dissertation advisor? This question came up in discussion around here (a propos of nothing much at all) and no one knew, but I imagine at least one reader, if not a fellow Timberite, will know.
The Sydney Morning Herald recently ran a long profile on the Hungarian-Australian philosopher George Molnar. Australian philosophers can be a weird lot sometimes, but Molnar stands out quite a bit even by our standards. I met him a few times at conferences after he returned to philosophy, but I never knew how many things he’d done outside philosophy. Somehow I don’t think a life in the academy with some blogging on the side will lead to quite the same kind of newspaper reports about me any time down the track.
A puff for one of my other collaborative projects: Imprints. The latest issue is now out and contains much of interest. The online content this time is an interview with Michael Walzer which ranges over many issues: the wars in Iraq and Afghanistan, the morality of humanitarian intervention, Israel and Palestine, anti-Semitism, memories of Rawls and Nozick, the permissibility of torture, blocked exchanges and commodification, the narcissism of Ralph Nader, and much more. Read the whole thing - it is both enlightening and provocative.
Just musing on the whole facts and principles issue, I was reminded of a text which Jeremy Waldron brought up on the very first occasion I heard the Cohen thesis discussed. It isn’t really relevant to the whole fact-insensitive principle stuff at all, but it is a reminder of the kind of “facts” our great precursors helped themselves to! Normally when people are arguing for design in nature, they go for things like the structure of the eye, but Kant had other “evidence” in mind in this wonderful passage from Perpetual Peace:
It is in itself wonderful that moss can still grow in the cold wastes around the Arctic Ocean; the reindeer can scrape it out from beneath the snow, and can thus serve itself as nourishment or as a draft animal for the Ostiaks or Samoyeds. Similarly, the sandy salt deserts contain the camel, which seems as if it had been created for travelling over them in order that they might not be left unutilised. But evidence of design in nature emerges even more clearly when we realise that the shores of the Arctic Ocean are inhabited not only by fur-bearing animals, but also by seals, walrusses and whales, whose flesh provides food and whose fat provides warmth for the native inhabitants. Nature’s care also arouses admiration, however, by carrying driftwood to these treeless regions without anyone knowing exactly where it comes from. For if they did not have this material, the natives would not be able to construct either boats or weapons, or dwellings in which to live. (Kant: Political Writings, ed. Reiss, p. 110)
I’ve spent this morning puzzling through Jerry Cohen’s “Facts and Principles” from Philosophy and Public Affairs (31:3 Summer 2003). It is, as I and others have intimated already, an important article and I can’t be confident that I’ve “got” it yet. I do think, though, that I can say that his thesis is not quite the threat to naturalism that I took it to be, unless it is coupled with some further commitments (although, as it happens, those dangerous further commitments are ones I accept). The basic argument Cohen puts forward is a really simple one, claiming that where people seek to ground their moral commitments on principles, some of those principles must hold independently of the way the world happens to be (“the facts”).
Cohen argues for three premises:
P1 “…whenever a fact _F_ confers support on a principle _P_, there is an explanation why _F_ supports _P_, that is, an explanation of how _F_ represents a reason to endorse _P_” (p. 217)
P2 “…the explanation whose existence is affirmed by the first premise invokes or implies a more ultimate principle, commitment to which would survive denial of _F_, a more ultimate principle that explains why _F_ supports _P_….” (217-8)
P3 Where the grounding principle explaining why _F_ supports _P_ is itself fact-dependent, and further interrogation reveals yet deeper grounding principles, that iterative sequence of asking for supporting principles will not go on forever but will (pretty soon) come to rest on a grounding principle that is not itself fact-dependent.
Cohen claims that “it follows from the stated premises that every fact-sensitive principle reflects a fact-insensitive principle: that is true both within the structure of the principled beliefs of a given person …. and …. within the structure of the objective truth about principles.” (218)
Much of Cohen’s paper consists of clarification of his view and a careful attempt to set out what he is not saying. So, for example, he says his thesis is neutral with respect to the main metaethical disputes (among realists, quasi-realists, emotivists etc etc etc), that it is not the same as Achilles and the tortoise, that it is not a view about how people actually come to acquire the principles that they hold (a process that will require engagement with the facts) and so on. Cohen’s view is, most basically, about the logical structure of people’s moral beliefs. It also presupposes that moral reasoning and judgement consists (at least in part) of the application of general principles to circumstances in the light of the facts, so Cohen’s view is of limited interest to moral particularists who believe that moral judgement isn’t about the application of such principles.
The principal target of Cohen’s article is Rawlsian constructivism. This is because Rawls believes that the way the world is (the facts) enter into the construction of the fundamental principles of justice (via, for instance, the general facts made available to the parties in Rawls’s original position). Cohen believes that Rawls is not altogether consistent here, in any case, since the design of Rawls’s constructivist procedure rests on general claims (that persons are to be considered as free and equal) that are either themselves fact-independent or rest on further principles that are. So, for Cohen at least, Rawls’s putatively fundamental principles of justice aren’t fundamental at all, but merely derivative or regulatory principles that actually derive from deeper fact-independent principles.
Is Cohen’s argument damaging to ethical naturalism? That of course is going to depend on what we mean by ethical naturalism and, as we’ve seen over the past few days, there’s plenty of room for debate and/or confusion about that. Cohen’s argument has nothing to say about what ethical principles amount to, and if all such principles amount to is the expression of attitudes then there’s not going to be a problem for most naturalistic views. On the other hand, if we have a commitment to moral objectivity, then it looks like we are committed to there being objective truths in ethics that are (logically) completely independent of the way the world happens to be (microphysically or otherwise). Whether that is a threat or not probably depends on the sort of objectivity we sign up for. If moral principles are a priori (and so on a par with, say, truths of logic) they may not be. But, to be honest, I’m not confident in what I think about this.
In his post yesterday, Brian suggested that Cohen’s view might be correct but trivial. I think that it is probably a mistake to express the point thus. If we are, as Cohen thinks, committed to some ground-level, fact-independent moral principles then those principles are likely to be quite substantive (e.g., of the order of the fact-independent principle “all beings with characteristics X have the right to equal concern and respect”).
Is Cohen’s argument damaging to Rawlsian constructivism? If Cohen is correct, Rawlsians might reasonably, though concessively, reply. They might argue that it is true that if we look at what the logical structure of people’s ethical beliefs ought to be, then fact-independent principles are at the bottom. It isn’t the case then, that what justifies and constitutes our most fundamental commitments is that they derive from a constructivist procedure. But (1), epistemically, such a procedure is the best method for getting at what those commitments are and (2) given “the facts”, the regulatory principles which we are practically most interested in are best seen as the product of a constructivist apparatus. Too concessive? I think most Rawlsians could live with it.
A long and winding post responding to some issues about morality and naturalism.
Purpose
I still agree with Lawrence Solum that we can make a start on getting from natural facts to ethical facts by looking at purposes. Matt Evans responds by saying that naturalists deny nature has a purpose. True enough, but not much follows from that.
A little analogy. Consider a crowd in Times Square on a typical work day. The crowd as a whole has no discernible purpose whatsoever. It isn’t like New Year’s Eve, where the group’s purpose is intelligible; there just isn’t a group purpose there. But the individuals in the group can have purposes. One might be looking for food, another for theatre tickets, and another (I’m told) for where all the porn shops have gone. It might be disheartening to think of all of nature as an aimless Times Square crowd writ large, but even if we do, that doesn’t entail that none of the constituent parts have purposes.
There’s another reason naturalists should take purpose seriously. Scientists, at least as far as I can tell, take purpose seriously. In many diverse areas, functional explanation is a core part of the toolkit. And talk of function is just talk of purpose in a (minimally) different guise. Crude example: saying the function of the heart is to pump blood is barely different to saying its purpose is to pump blood. And that’s the kind of thing scientists will readily endorse. (They’ll even sometimes say the function of a part of the body is to transmit information, involving themselves in philosophical mysteries concerning function and concerning content within one little claim. No matter, science still works.) Since naturalists take science seriously, indeed take science to be largely constitutive of what is ‘natural’, naturalists can also take purpose seriously.
Factual-Moral Arguments
Matt Evans posts the following challenge:
Here’s the deal: I invite all Brights to email me the moral premises they accept not by blind faith, but because they are founded in nature. Please trace the moral premise to natural facts. Though submissions will be accepted in any format, syllogisms are especially appreciated. Show your work.
Well, none of the following examples are going to convince anyone who wants to remain unconvinced, but here’s three.
Flurg (due to Gideon Rosen)
First a definition: to flurg is to do something one ought not do in the presence of small children.
Now the argument.
Jack is in the presence of small children.
Therefore, Jack ought not to flurg.
That’s valid, as far as I can see, and for appropriate values for ‘Jack’ it is even sound. The premise certainly looks like an ‘is’ statement and the conclusion like an ‘ought’ statement. So we’ve got it - an ought from an is! Or, in Matt’s terms, an argument from a natural premise to a moral conclusion.
Moral Realism
Here’s an even quicker argument that you can get from is to ought.
Torturing babies is wrong.
Therefore, torturing babies is wrong.
The conclusion is definitely a moral claim. But what about the premise? Well, I’m a moral realist, so I think it’s a truth about the real (i.e. natural) world. I can see how some may disagree, but without a good definition of natural, and a good argument that the premise here is unnatural, I’m inclined to think this is a counterexample to the no ought from an is principle. Still, to keep things interesting, I’ll not rely on this from here on.
What’s an Argument?
I think part of what’s lying behind Matt’s scepticism is a faulty conception of what an argument is. He thinks a good argument must be formally valid. It must be valid in virtue of its syntactic form, just like what up-to-date logic classes teach, and indeed as out-of-date logic classes (like those Aristotle taught) teach. But a valid argument is just one where it’s impossible for the premises to be true and the conclusion false. And it’s impossible that humans be just as they are and it be morally permissible to torture infant children. Proof by contradiction: assume it is possible, then we should be able to coherently describe the situation. But as soon as we try it should be apparent that we’ve misdescribed it. (Seriously, try to describe a situation in which humans are just as they are in all natural respects and it’s morally acceptable to torture their young. You’ll soon get the impression that, whatever you try to stipulate, you haven’t told a story where it’s morally acceptable to torture children.) So such a situation is impossible. So the argument from premises about human nature to the conclusion that you shouldn’t torture children is valid.
The Use of Concepts in Arguments
Stuart Buck criticised this argument of mine that facts about human qualities and relationships entail moral facts as follows:
He’s smuggling in a moral premise, namely the principle that it is wrong to torture (for one’s own amusement) creatures who are capable of feeling pleasure and pain, who have hopes and plans and fears and regrets, who are capable of great learning, and so forth. Only with that principle in place does it make sense to say that you can derive a moral conclusion from the set of facts Weatherson lays out.
I don’t think that’s right. I think that the only sense in which we need moral premises to get from human statements (as I’ll call them from now on) to moral statements is a sense in which those premises are fully acceptable to the naturalist. But to show that, or even to make a case for it, will require a couple of examples.
What I really want to defend is the following - we can be in a position to know moral facts given just human facts and possession of moral concepts. Stuart and Matt say the naturalist needs ‘faith’ to get moral conclusions, I say they just need moral concepts.
To see why this is not cheating, let’s think about an area where it is intuitively clear that you can get from lower-level facts to higher-level facts. For instance, an argument from the final scores of the only two competitors in a sporting contest to a conclusion about who won could be valid. If we can have a natural->moral argument that’s as good as the scores->victor argument, the naturalist is doing well.
So let’s say I know that X scored 264 and Y scored 279. Can I conclude who won? Well, not unless I know what it is to win the game in question. That is, not unless I possess the concept VICTORY for the salient game. If the game is cricket, then Y won, because the higher score wins. If the game is golf, then X won, because the lower score wins. If I have that concept I can make the inference, if not I can’t.
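The point that the very same scores licence opposite conclusions depending on the salient game concept can be put as a toy sketch (my own construction, purely illustrative - the function and its two-game lookup table are not from the post):

```python
# Toy illustration: inferring the victor from final scores requires
# possessing the 'victory' concept for the salient game.

def winner(score_x: int, score_y: int, game: str) -> str:
    """Return 'X' or 'Y' given the final scores and the game in play."""
    # The 'concept' is encoded here: in cricket the higher score wins,
    # in (stroke-play) golf the lower score wins.
    higher_wins = {"cricket": True, "golf": False}[game]
    if score_x == score_y:
        return "tie"
    x_wins = (score_x > score_y) if higher_wins else (score_x < score_y)
    return "X" if x_wins else "Y"

# Same facts (264 vs 279), different concept, different conclusion:
print(winner(264, 279, "cricket"))  # Y
print(winner(264, 279, "golf"))     # X
```

Without the `game` argument - that is, without the concept - the scores alone settle nothing; with it, the inference is immediate.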
Might this be a good analogy for human-moral inferences? Perhaps, but perhaps not, because we can say exactly what the salient concept - in this case victory - amounts to. What’s distinctive about moral concepts is we can’t say that. Let’s try a different example that’s on all fours with the moral case then.
Here’s the career stats for two prominent baseball players, Mario Mendoza and Nomar Garciaparra. I think that anyone who has all the relevant concepts, and knows the natural facts given in those stats pages, is in a position to infer that Garciaparra is a better hitter than Mendoza. Note here that having the concept doesn’t mean knowing what it is to be a better hitter. I don’t know to any precise degree what it is to be a better hitter than someone else. It’s possible to have the concept and not know whether Garciaparra is a better hitter than, say, Lefty O’Doul. But I know that Garciaparra is better than Mendoza was, and anyone who looks at those numbers and disagrees probably doesn’t have the concept ‘better hitter’.
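The shape of this inference - descriptive stats in, normative verdict out, mediated only by a tacitly held concept - can also be sketched. The numbers and the dominance rule below are my own illustrative assumptions, not the players' actual career lines or a real sabermetric measure:

```python
# Illustrative sketch: given descriptive stats and a (partial) grasp of
# the 'better hitter' concept, a normative comparison falls out.
# The rule used here is deliberately weak: it only delivers a verdict
# when one player dominates the other in every listed category,
# mirroring the point that one can have the concept without being able
# to settle every close comparison (e.g. Garciaparra vs O'Doul).

def clearly_better_hitter(a: dict, b: dict) -> bool:
    """True iff a beats b in every category (higher is better in each)."""
    return all(a[k] > b[k] for k in a)

garciaparra = {"avg": 0.313, "obp": 0.361, "slg": 0.521}  # hypothetical
mendoza     = {"avg": 0.215, "obp": 0.245, "slg": 0.262}  # hypothetical

print(clearly_better_hitter(garciaparra, mendoza))  # True
print(clearly_better_hitter(mendoza, garciaparra))  # False
```

Dominance in every category settles the Garciaparra/Mendoza case; a closer pair would return `False` both ways, which is the coded analogue of having the concept but not knowing the answer.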
So that’s a case where just some facts, and a concept, suffice for a normative conclusion. In this case it isn’t a very deep conclusion, one about who is a better hitter, not who is a better person. But I don’t think depth makes for a disanalogy between the cases. What was meant to be powerful about Hume’s principle was that it blocked you from getting from the descriptive to the normative. But that isn’t a block - as shown in the baseball case.
Cohen’s Argument
As Chris noted earlier, the argument of Gerry Cohen that Lawrence Solum discusses should apply generally if it applies in ethics. But it looks to me that unless we interpret Cohen as arguing for a very weak principle, then the baseball example refutes his conclusion. Just looking at the numbers, I can conclude Garciaparra is a better hitter than Mendoza. Do I need a principle here? Well, if I had a principle like “If X has better numbers than Y in the following statistical categories (list here all the ways in which Garciaparra has better numbers than Mendoza) then X is a better hitter than Y” that would certainly help. (Although as Dsquared noted in the comments on Chris’s post, if someone really wanted to resemble a tortoise they could complain that I still needed a further principle to underpin the validity of the new argument.) But if I don’t have that principle the inference is blocked in a much more trivial sense. If I don’t have that principle I really don’t have the concept ‘better hitter’, so I can’t conclude anything about who is a better hitter.
I think there’s a very weak sense in which Cohen’s right - we certainly need principles of some kind to get from factual premises to moral conclusions. But that’s just because we need to (at least tacitly) accept some principles in order to have moral concepts, and (for completely trivial reasons) we need to have moral concepts to make inferences from factual premises to moral conclusions. This is trivial because it isn’t different from the claim that we need furniture concepts to make inferences from premises about the distribution of wood in my office to conclusions about where the tables and chairs are in my office. But no one really thinks that we can’t in any interesting sense make inferences from where the wood is to where the furniture is. If the relation between the descriptive and the normative is like the relation between the wood and the furniture, it’s hard to see a threat to naturalism, or to Rawlsian constructivism.
In a comment to one of Brian’s earlier posts on ethical naturalism, I mentioned that Jerry Cohen’s argument that ethics must (ultimately) depend on fact-insensitive principles seemed to me to threaten the naturalist position (at least as Brian had formulated it). Larry Solum - who started this whole conversation - now has an extensive discussion of Cohen’s view (scroll down) as expressed in the latest Philosophy and Public Affairs. Larry thinks that even if Cohen is right, an Aristotelian naturalism might survive. I’m not sure what to think about that yet. One thing worth noticing about Cohen’s view is that even though most of the discussion is about ethics, it applies to normative principles quite generally. This being so, it ought to apply to such principles in other domains (including epistemology and the theory of rational action), and if it threatens naturalism in ethics it also threatens naturalistic programmes in those areas.
Chris Bertram pointed me to the Chronicle piece which Brian discusses below on the difference between US and UK Philosophy. One passage that struck me was the following
There are two broad models of how such engagement might best be achieved: what I call the participatory and the contributory. In the participatory model, academics engage in real-world problems by becoming members of the institutions that are directly involved with those problems. In the contributory model, academics remain in academe, but issue documents, books, and papers that are supposed to contribute to public life
Baggini claims that the ‘participatory’ model is in the ascendance in Britain, whereas in the US philosophers just write wise things as pieces of advice. This is very misleading, although I can see why it might look that way. One reason that it looks that way is that whereas philosophers can enter the British upper house as a result of patronage, the US upper house is reserved for multi-millionaires who are willing to devote their fortunes and lives to running for office — not a mechanism likely to suit academic philosophers, even successful ones.
But if, as Baggini suggests, sitting on government bodies counts as participation, US philosophers are at it all the time. Norman Daniels and Dan Brock were on the famous Hillary commission on health care reform; my former colleagues Dan Wikler and Allen Buchanan both served time in the Federal Department of Health (and Wikler has spent a good deal of time at the WHO). Weirdly enough, the aesthetician Myles Brand is now head of the NCAA. There are more numerous examples at less exalted levels of government on advisory bodies. And, more shamefully, Bill Bennett (of gambling fame) and Irwin Silber (candidate for Governor of Massachusetts, and fervent supporter of the Contras) are both trained and formerly mediocre academic philosophers. I suspect it is ignorance of the situation in the US that drives the distinction.
But there is another fact that makes US philosophers look less participatory than the Brits. The kind of political outlook found in the mainstream of analytical philosophy is left-of-center, and so far off the American political map that it is hard to engage. Baggini cites the famous ‘Philosopher’s Brief’ on assisted suicide as an example of the contributory model. But in other times, and certainly in other countries, Ronald Dworkin would have been a likely Supreme Court candidate, and hence a participant — he has been ineligible because his political outlook is completely out of the mainstream of politics. By contrast those very same views are thought of as part and parcel of political debate in the UK, so that people with them have potential access to political processes.
Via Scott Martens, I saw that the Chronicle of Higher Education has published an article on the differences between philosophy in Britain and North America.
One of the big differences is the attitude towards interdisciplinary work. Scott highlighted the following quote, which was pretty striking.
In Britain, there is more skepticism about the value of interdisciplinary work, notes Tim Crane, the country’s leading philosopher of mind. “A lot of what counts as interdisciplinary work in philosophy of mind,” he says, “is actually philosophical speculation backed up with certain, probably out-of-date, Scientific American-style summaries of research in psychology or neuroscience, which tend to support the philosophical preconceptions of the authors.”
That’s not exactly what I’d call “noting” the existence of skepticism.
It’s also pretty misleading in a way. Most of the best recent work I’ve seen in philosophy of physics and philosophy of biology seems to rest on a pretty thorough understanding of the relevant fields. And my impression is that this generalises fairly widely (though maybe not to philosophy of mind). Sometimes philosophers of X will end up having more productive conversations with other specialists in X than with other philosophers, because they rely so heavily on specialised knowledge of their field. Even if Tim’s right about philosophy of mind, it isn’t a reasonable characterisation of interdisciplinary work in general.
The article ended with a long discussion about the role of academics in public life, again noting differences between Britain and America.
There are two broad models of how such engagement might best be achieved: what I call the participatory and the contributory. In the participatory model, academics engage in real-world problems by becoming members of the institutions that are directly involved with those problems. In the contributory model, academics remain in academe, but issue documents, books, and papers that are supposed to contribute to public life.
In Britain the participatory model is dominant, while in America it’s the contributory model. Americans write articles for Philosophy and Public Affairs, British philosophers get involved with political groups and influence things directly.
So that got me thinking, is spouting off ideas on a blog a contributory or participatory mode of engagement? Brian’s hypothesis: if it has a comments board it’s participatory, if not it’s contributory.
Quite by coincidence, an ad for the book on which this article was (I think) based arrived in my mailbox today. So I (think I) can fill in one of the gaps in the article. The author, Julian Baggini, says
I’ve recently taken such a picture, interviewing, with a colleague, 16 of the best British-based philosophers in the generation that will soon lead their discipline.
That made me think, well who are these 16 greats? The ad for the book says it contains interviews with
Simon Blackburn, Helena Cronin, Don Cupitt, Richard Dawkins, Michael Dummett, Stuart Hampshire, John Harris, Ted Honderich, Mary Midgley, Ray Monk, Hilary Putnam, Jonathan Rée, Janet Radcliffe Richards, Roger Scruton, John Searle, Peter Singer, Alan Sokal, Russell Stannard, Richard Swinburne, Peter Vardy, Edward O Wilson, Mary Warnock.
This is odd since (a) there are 22 names there, (b) not all of them are British or philosophers, (c) there’s hardly a person on that list under 60, and (d) Tim Crane is not on the list, when the impression from the earlier quote was that he was one of the interviewees. (And I’d find it hard to believe that he wouldn’t be one of the “16 of the best British-based philosophers in the generation that will soon lead their discipline.”) Maybe the super 16 are interviewed as well as the 22 luminaries listed above. Does anyone have the book in question so we can find out?
By the way, if you head over to Scott’s site, you should check out his post explaining his carefully considered and well-earned aversion to Noam Chomsky.
Larry Solum has a typically insightful post responding to Matt Evans’s criticism of Richard Dawkins for proposing a naturalistic ethics. I think Larry’s criticisms are spot on, but for my money much too tentative.
Matt relies on a fairly crude no-ought-from-an-is principle. It’s notoriously difficult to get a statement of that principle that isn’t vulnerable to immediate and obvious objections. I won’t go through them all here, partially because I’d rather discuss the ethics than the logic of this point, and partially because Gillian Russell has a nice summary of the objections in her paper In Defense of Hume’s Law. As the title suggests, she thinks some version of the no-ought-from-an-is principle can be salvaged, but it’s clear that it won’t be easy.
But let’s set aside the technical concerns about the premises. The real problem is the conclusion that Matt reaches. Here are the important passages.
[T]here are no ethics in naturalism. Naturalism is an acceptance of what is, and ethics is the domain of what should be. There is no way to bridge the is/ought gap without referencing an extra-natural source. … Other atheists and agnostics take naturalism seriously; they believe there are no moral absolutes, there are no ethics. … To these people it isn’t wrong to kill Jews for being Jewish, it’s just that some people think it’s wrong to do so. Though I spoke with many atheists and agnostics in college and law school, I never found one who adopted this view. … Dawkins was wrong when he said his ethics are based on naturalism. His religion, like all others, ultimately rests on non-rational faith.
The core point seems to be that without some non-natural entity there are no ethics. Let’s spell out some conclusions of this position, because really it is just about the most absurd thing anyone could possibly say.
Say we know the following facts about the world. It contains creatures who are capable of feeling pleasure and pain, who have hopes and plans and fears and regrets, who are capable of great learning, and creating works of great beauty, who often love their children and parents and occasionally love each other, and who have emotional attachments to those people who they love so they are affected by the pleasures, pains, successes, failures etc of those they love. Now a naturalist could easily come to know all these things about the world.
The person who thinks naturalism can’t ground ethics thinks that we could know all those things about the world and still think it’s a wide open empirical question whether it is morally wrong to torture one of those creatures for one’s own amusement, or to kill all of these creatures to relieve a minor headache one has, and so on. Now I can imagine that some people really do think this is an open question, but only because some people are psychopaths. I really don’t think that anyone around here seriously thinks that in such a position we have to do extra work to find out whether it’s right or wrong to torture these creatures for fun. We already know enough to know full well that it isn’t. (Which is not to say that anyone who says that there’s no naturalistic ethics is psychopathic, but rather that they just aren’t being careful enough about following through the consequences of their own position.)
In case this isn’t entirely obvious (and frankly I can hardly think of a more secure premise in ethics, but just in case) try the following thought experiment. Imagine we find out tomorrow that all theistic theories are just wrong. (Everyone makes mistakes.) There’s really nothing around here but us baryons. Would anyone, I mean anyone, think that suddenly we had no ethical obligations whatsoever? That it was now OK to torture babies for fun? To put the point in Bayesian terms, anyone whose confidence in any extra-natural hypothesis is as high as their confidence in the proposition that it’s wrong to torture babies for fun has a very odd worldview.
There’s an analogy here with an argument Jerry Fodor makes in one of his attacks on teleosemantics. On some teleosemantic theories, it would turn out that if Darwin was completely wrong and we really were all created by a divine being, none of our words or mental states would mean anything. But maybe Darwin was completely wrong and we really were all created by a divine being. It’s not very likely, but everyone makes mistakes. It isn’t at all plausible that if Darwin messed up then none of our words or thoughts mean anything at all. So evolutionary theories of content can’t be necessarily true.
The analogy here cuts reasonably deep. Since evolutionary theories are true, they may affect the precise nature of our semantic theories. In particular it’s probable that the evolutionary facts affect the boundaries of biological terms. And similarly if an extra-natural theory of some kind is true, it’s possible that it will affect the precise nature of our moral theories. (But note it’s not trivial to see how theistic theories could have such an effect, a fact I’ll come back to presently.)
Since naturalism and ethical obligation are clearly compossible, any argument that they aren’t must be mistaken. Still, it’s fun to run through (over?) some of the simpler arguments that they are not.
For example, some may argue that you can’t have ethics in a natural world because it’s impossible to see how to logically derive ethical claims from microphysical descriptions of reality. But this is probably just a fact about our inability to engage in complex reasoning. After all, it’s more or less impossible to see how to derive economic facts from microphysical descriptions of reality, but still I’m pretty sure that statements like “Inflation was higher in Britain in the 1970s than the 1990s” are true, and are true solely in virtue of the arrangement of microphysical particles around the world. Just how the connection between microphysics and economics is maintained is a mystery, but there must be one. Facts about inflation are not magical facts that need to be ‘added in’ to the physical world, even if it isn’t clear exactly where they are to be found in it.
Similarly, some will argue that naturalism and ethics are incompatible because it isn’t entirely clear in virtue of exactly what natural feature of the world it is wrong to torture babies for fun. Maybe, as Larry suggests, an explanation in terms of what makes for natural flourishing will be the start of a solution. But you know, this question stays hard even if we add in extra-natural entities. The arguments in the Euthyphro against the claim that something is good in virtue of being loved by the god(s) still look pretty good. The problem of saying what makes an action right or wrong is a hard problem for everyone. To conclude from that that the naturalists can’t answer it is a bad mistake.
Slight disclaimer: I know there are arguments more serious than a blog entry for the views I’ve been ridiculing here. A decent response to them all would also take longer than a blog entry. But essentially my response is going to be the same kind of Moorean response. (Kind of because I’m being a Moorean about epistemology to draw very anti-Moorean conclusions in ethics.) I’m more confident that torturing babies for fun is wrong, even if there is no super-natural force in the world, than I am in the premises of any complex philosophical argument, so if you have a complex philosophical argument for the conclusion that whether torturing babies for fun is wrong depends on the presence of super-natural entities, I’m just going to reject one or more of your premises. Given enough time, I’ll usually be able to figure out which premise I want to reject.
More serious disclaimer: This is my second pro-Bright post in two weeks, which is quite disturbing since I really don’t want to stand up for them particularly. But that doesn’t mean I’m going to endorse bad anti-Bright arguments.
Final disclaimer: This has been edited slightly to moderate somewhat some of the ranting, and edited to fix a typo.
Over the past few weeks, many analytic philosophers — including my wife and several of her colleagues — have received a free copy of a book called The Elements of Mentality: The foundations of psychology and philosophy by David Hume. Not, you understand, the David Hume who wrote A Treatise of Human Nature, An Enquiry Concerning Human Understanding and other well-known books. He has been dead for some time. This David Hume, as is discreetly noted on the inside back flap, is a pseudonym. Why pick “David Hume” out of all possible noms de plume? I suppose it can’t hurt to have your book shelved along with ones written by the most influential English-speaking philosopher in the past three hundred years.
The book is one of those Fundamental Theory of It All books. The letter accompanying the freebie copies says, in part:
The Elements of Mentality is a momentous undertaking, offering a unique articulation of the foundation or first principles of psychology. It does so by identifying the elemental mental experiences … and by describing “mentation” — the cyclical organization of those elemental experiences. The author contends that the resulting elemental model of mentality provides a basis for the analysis of any psychological phenomena. Showing how the structure of all knowledge emerges from this model, the author concludes that the model is also the foundation of philosophy.
Whoa. Pretty heady stuff. A quick skim of the book reveals the author as the kind of person who Lays Out His View as opposed to providing anything much in the way of argument. The exposition gets underway immediately, the tone is confident throughout, and there is a refreshingly complete absence of footnotes, endnotes, bibliographical references, or passing mentions of any philosophers living or dead, with the exception of (the real) David Hume, who gets a quote before the table of contents, and Rousseau, who gets a line in the Conclusion. This style is not unknown amongst some great philosophers, but is not by itself sufficient to make a book a great work of philosophy.
“David Hume” — a Google search suggests that Sid Barnett is the author’s real name — argues that there are five elements of mentality, viz, external sensory experiences, internal bodily experiences, emotional experiences, intellectual experiences and experiences of the will. These are the irreducible, fundamental building blocks of mental life. “There are no other types of experiences, no intermediate categories, no hybrids.” This doesn’t sound very plausible. (How to classify experiences such as this?) These five elements combine in a particular unvarying cyclic order to form a “unit of mentation.” The goal here is an analytic description of the flow of consciousness, I think. From here, for reasons which remain unclear to me, the author believes he can get to “a grand philosophy … of the structure of all knowledge.” He attempts this in the short Part Two of the book, which seems to be the most confused bit of the whole thing. It consists mainly of a string of definitions and little argument. Then he moves quickly on to the “Mind/Matter Problem” where further confusion ensues. Even a non-philosopher like me can see problems with, for example, his analysis of how to distinguish causation from mere correlation:
Probable causation … is suggested by a history of consecutive occurrences of C[ause] and E[ffect] about T[ime], provided that the occurrences of C are indeterminate in time and varied in circumstances. It is the varied and indeterminate occurrences of C that suggest the consecutive occurrences of E are not mere coincidence…
Alas, the co-occurrence of Cs and Es about Ts is what produces correlation in the first place, so it’s not much use citing this condition as the way to discriminate between it and causation. Later, Barnett notices that causation is closely related to constitution:
For example one might say that the properties of H2O molecules cause the properties of water. Water is composed from H2O molecules; therefore the properties of water are the properties of H2O molecules.
This is false as well. Water can have many properties (volume, color, wetness) that H2O molecules do not have. Here Barnett seems to be reaching for some concepts — like constitution or supervenience — that are out there in the literature. But relying on them might necessitate actually citing someone, which would spoil the sui generis feel of the enterprise.
I got a bit sick of the book at this point. It’s all very reminiscent of the A.M. Monius affair a few years ago. This was the (slightly less pretentious) pseudonym of a New Jersey businessman with a hankering for metaphysics. His “Institute” offered well-known metaphysicians a lot of money to write “serious” (by which was explicitly meant “favorable”) reviews of his longish paper “Coming to Understanding.” A bunch of them took the bait, with varying degrees of publicly acknowledged guilt about prostituting their critical faculties. David Hume appears to want to attract the attention of philosophers in a similar way. I wonder whether this sort of thing is becoming a trend? And why is Sid Barnett using a pseudonym in the first place? Is he under a cloud for some other reason? Or is using the name “David Hume” just a cheap way to attract undeserved attention?
The letter sent out with the book says the publisher would “value any brief comments … about the thesis advanced in the book and your analysis of the course adoption possibilities for it.” I doubt it’ll get adopted in any Philosophy courses, partly because he’s not offering any hard cash but mostly because the book doesn’t seem to be any good at all. Perhaps someone teaching a seminar on “Boundary Maintenance on the Fringes of Academia” would be interested. But that’s just what a career academic like myself would say, “David Hume” might reply. Go read the Prologue or look at the Table of Contents for yourself, and see what you think.
Over at Crescat Sententia, Will Baude has been defending subjectivism about morality. Will doesn’t defend the traditional positivist view that "Murder is wrong" means (roughly) "Boo for murder!", but rather that it means "I disapprove of murder". Freespace’s Timothy Sandefur responds to Will with several moral and legal arguments. This seems to me to be a mistake. Will’s making a metaphysical and semantic claim, and the right responses will be based on metaphysics or semantics. Fortunately, there are plenty of the latter kind of argument.
Argument from indirect speech reports
Consider the following little discourse.
Mr Bigot: Homosexuality is wrong.
Brian: Mr Bigot said that homosexuality is wrong.
My statement seems obviously right - I just reported what Mr Bigot said. But it’s hard to see how this could be true on a subjectivist theory. Normally, when I say S said that p, what I say is true just in case the proposition expressed by p is true when I say it. For instance, if I say You said I am an idiot, that’s true just in case you said that Brian is an idiot, not just if you self-ascribe idiocy. Generally, indexicals in speech reports get their meaning from the reporter’s setting, not from the speaker’s setting. The subjectivist thinks that ‘wrong’ is just another indexical. So my report should mean "Mr Bigot said that I disapprove of homosexuality." But he said no such thing.
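The mechanics of this argument can be made vivid with a toy model. This is purely my own illustration (the function names, the dictionary-as-context representation, and the paraphrase of the subjectivist clause are all assumptions for the sketch, not any standard formalism): treat ‘wrong’ as an indexical whose content is fixed by the speaker of the context of utterance, and then compare the content of the original utterance with the content the report delivers.

```python
# Toy Kaplan-style picture: an indexical gets its content from the context
# of utterance. On the subjectivist view under discussion, "wrong" is such
# an indexical, meaning roughly "disapproved of by <the context's speaker>".
# All names here are illustrative, not a standard semantic framework.

def indexical_content(predicate, context):
    """Return the content of a predicate relative to a context."""
    if predicate == "wrong":
        # Subjectivist clause: 'wrong' = 'disapproved of by the speaker'.
        return f"disapproved of by {context['speaker']}"
    return predicate

def interpret(subject, predicate, context):
    """Content of '<subject> is <predicate>' as uttered in a context."""
    return f"{subject} is {indexical_content(predicate, context)}"

# Mr Bigot's original utterance: the context's speaker is Mr Bigot.
original = interpret("homosexuality", "wrong", {"speaker": "Mr Bigot"})

# Brian's report "Mr Bigot said that homosexuality is wrong": indexicals
# inside speech reports take their content from the REPORTER's context.
reported = interpret("homosexuality", "wrong", {"speaker": "Brian"})

print(original)  # homosexuality is disapproved of by Mr Bigot
print(reported)  # homosexuality is disapproved of by Brian
```

On this toy semantics the report context hands the indexical to the reporter, so the report attributes to Mr Bigot the claim that Brian disapproves of homosexuality, which is exactly what he did not say.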
Argument from direct speech report (due to Ernest Lepore and Herman Cappelen)
If "Homosexuality is wrong" just means "I disapprove of homosexuality" then I should be able to say the following thing in response to Mr Bigot.
When Mr Bigot said, "Homosexuality is wrong" he spoke truly, even though homosexuality is not wrong.
After all, Mr Bigot does disapprove of homosexuality, so by subjectivist lights the first clause is true. And homosexuality is not wrong, so the second clause is true. But then the sentence is true, even though native speakers would naturally take it to be a contradiction.
Argument from deleted material (due to Jason Stanley)
Here’s a surprising fact about indexicals in English. In the following, Jill’s statement can’t mean that she lives in the Ritz.
Jack [pointing at the Savoy]: I live there
Jill [pointing at the Ritz]: So do I
This is odd, because you might have thought that when Jill says ‘So do I’, she’d be picking up on the words Jack used, and effectively saying "I live there". And since she says that by pointing at the Ritz, you might then think she meant she lives in the Ritz. It’s certainly conceivable that there could be a language in which that was how statements like Jill’s are interpreted. But English is not such a language. (Neither, I think, is any other natural language.) But consider how moral talk works.
Brian: Murder is wrong.
Mr Bigot: So is homosexuality.
The subjectivist wants to say that by that statement Mr Bigot is saying that he disapproves of homosexuality. But if moral terms are indexicals like the subjectivist says they are, then Mr Bigot’s statement should mean that Brian disapproves of homosexuality, just like Jill’s statement means that she lives where Jack is pointing.
What does this mean for ethics?
One response to all these arguments would be that ethical terms are just very special kinds of indexicals. For instance, one might try and argue that somehow speakers don’t realise, even tacitly, that they are using different rules for embedded ethical terms as they do for other indexicals. But sui generis theories are as a rule pretty bad, and I think they should be avoided like the plague.
There are a few other responses that are consistent with subjectivist intuitions. One could go with Ayer’s position that "Murder is wrong" means "Boo for murder!". (Though it’s necessary to be careful here - this view of Ayer’s is inconsistent with his views on the nature of truth.) One could adopt a position of one of the modern day non-cognitivists, such as Blackburn or Gibbard. (I’m not enough of an expert to accurately present their views here. If you’re really committed, try their books here or here.) And, finally, one could think that the data shows that our native moral concept is an objective concept. But you might think for independent reasons that there are no such things as objective moral concepts. And then you’ll be an error theorist about morality. As Blackburn (I think) puts it, your theory of evil will be like the atheist’s theory of sin. This would be a fairly radical position, but I think it’s better than a theory that involves changing the meaning of our moral terms.
UPDATE: There’s a bad mistake in the above text. Where I said
Normally, when I say S said that p, what I say is true just in case the proposition expressed by p is true when I say it.
I should have said
Normally, when I say S said that p, what I say is true just in case S said something which expresses the proposition expressed by p when I say it.
That is what I meant all along, but somehow it’s not at all what I typed. (Trust me!) Thanks to Michael Kremer in the comments for picking up the mistake, and apologies for getting it wrong the first time.
There was an article a couple days back in the Chronicle of Higher Education called “What People Just Don’t Understand About Academic Fields.” (Unfortunately, I can’t link to it because apparently you have to be a subscriber—but it doesn’t really matter for this post.) The article included a few paragraphs from a handful of professors in different fields each talking about what most people don’t seem to understand about what they do or why they do it. None of the entries struck me as all that interesting, but they did remind me of an essay by Isaiah Berlin which has been bothering me for a while. The essay is called “Philosophy and Government Repression” (1954) and was printed in The Sense of Reality. In trying to correct what he thinks is a common “misunderstanding of what philosophy is and what it can do,” Berlin argues that second- and third-rate philosophers are essentially worthless, except as obstacles to be overcome by truly great thinkers.
Here’s the passage, sort of cobbled together, that really gets me:
third-rate historians, fourth-rate chemists, even fifth-rate artists, painters, composers, architects, may be of some value; for all these subjects have their own techniques and operate at their own proper level, which may be low, but remains a level. But there is no such thing as third-rate or fourth-rate rebellion, there is no such thing as a trivial effort to cause a major upheaval. That is why the third- and fourth-rate philosophers, who are really engaged in applying techniques of their predecessors who are dead and gone, as if they were practicing a science, as if they were being chemists or engineers, are not so much unsuccessful or unimportant, or unnecessary or superfluous, as positively obstructive . . . In philosophy alone the plodding, competent, solid workers who cling to accepted methods, and half-consciously seek to preserve familiar landmarks, and work within a system of inherited concepts and categories, are a positive obstruction and a menace—the most formidable of all obstacles to progress. The world, one likes to think, has been created for some given purpose and everything in it plays some necessary part. If you ask what necessary part the second- and third- and fourth-rate philosophers, and those, even, below that line, can have been created to play, perhaps the answer is that, if they did not exist, the possibility of those great creative rebellions which mark the stages of human thought would never have occurred.
The story goes that Berlin gave up philosophy and became an intellectual historian because he thought he wasn’t good enough to make a serious contribution to philosophy. (Whether the story is right or not, I leave to the biographers out there.) I read the passage above, more by chance than anything else, when I first arrived at Oxford to study political theory. And I have to say, at the time, I found it rather depressing. I mean, how would you know if you had a revolutionary idea? And, anyway, who sets out on a career in philosophy, or a related subfield like political theory, thinking: I’ll just be one of those third-rate (or worse) hacks whose only real purpose in life is to make things so bad that some true genius will come along to fix them? I suppose if you sit around with A.J. Ayer, J.L. Austin, and Stuart Hampshire, trying to keep up gets a little depressing after a while. But I think Berlin turned that bit of intellectual anxiety into a real downer. If your expectations are that high—if the only point of doing philosophy is to revolutionize the field—then maybe it would make sense to jump ship early on.
There are probably lots of reasons to complain about Berlin’s romantic image of philosophy. But for my part, I think his image is neither helpful nor accurate. It’s not helpful because it encourages beginners to give up before they ever really get going—and perhaps Berlin was guilty of this himself. It isn’t accurate because it denigrates intellectual virtues that are actually very important in great philosophy. Patience, persistence, and plodding—not to mention a cool name—may sometimes be necessary, even if they aren’t sufficient. No wonder Berlin never commented (or did he?) on A Theory of Justice. Must not have been rebellious enough for his tastes. Some first-rate philosophical plodding, indeed!
Tyler Cowen has a couple of posts suggesting that there is a serious libertarian argument against initiatives like the US government ‘do-not-call’ list for telemarketers. His argument is that government shouldn’t be in the business of restraining people’s spontaneity.
(warning: lengthy argument follows)
To quote the core of Cowen’s argument:
Take those people who have put themselves on the list. Do they really not want to be called? Maybe they are afraid that they really like being called. That they will buy things. That they will be impulsive. Arguably those people have a rational controlling self, and an impulsive buying self, to borrow some language from Thomas Schelling. Why should we assume that the rational controlling self is the only one who counts (do you really want a life devoid of spontaneity?)? Why should our government be in the business of altering this balance in one direction or the other? Isn’t the market a better mechanism for balancing the interests of the conflicting selves? How many of you out there will be consistent? How about a government list for people who do not want to be allowed into casinos? Do not want to be allowed to buy cigarettes at the local 7-11? Do not want to be allowed to order dessert?
Cowen seems to have gotten a lot of email from people who argue either (a) that telemarketers are evil (which is self-evidently true, but beside the point), or (b) that they themselves never buy from telemarketers. But this doesn’t address Cowen’s two main claims. First, he suggests that the government shouldn’t favour our propensity for self-control over our propensity for spontaneity. Second, he states that the market likely provides a better mechanism for balancing spontaneity and rationality than the government. Even if he’s advancing these arguments half in jest, they’re worth thinking about, as they involve some tricky questions for political theorists, philosophers, economists, and others who pontificate on such matters.
Turning to the first point. For starters, no libertarian I, but it seems to me that when Cowen (correctly) argues that people aren’t consistent in their preferences, he’s jumping up and down on some very thin ice for libertarians. Ideas about individual autonomy, and how it’s best expressed through free choice in certain political and economic contexts, usually rest on the implicit claim that there is an individual there, who knows more or less consistently what she wants. If you start positing different ‘selves’ within the individual, with different ideas of what they would like and how to get it, you’re coming dangerously close to saying that people don’t really know what they want. And this, in effect, is what Cowen is arguing. If we want to “balance” rationality and spontaneity, then we want to limit the circumstances under which we can make rational long term choices that constrain us, and prevent us from behaving spontaneously in the future. In short, some kinds of choice should (sometimes) not be open to individuals, even if those choices are likely to harm no-one but the individual herself (and, even then, these choices will only ‘harm’ one aspect of the individual, her spontaneous self as opposed to her rational, controlling self). This seems to me to be a rather tricky argument for a libertarian to make, and to sustain. In fact, it’s the reverse image of some of the arguments made against libertarians - for example that addictive drugs should be illegal, because once we start shooting up, we may not be able to stop. Anti-libertarian arguments of this sort appeal to our long term self-interest as opposed to our short term, ‘spontaneous’ interest in getting high. Cowen’s argument does the reverse, suggesting that our ability to make long term choices should be limited lest it constrain our spontaneity. 
But, as should be apparent, the two arguments aren’t that far off each other - they both state that we should ‘limit’ one form of choice, in order to facilitate the other. And I suspect that they’re both, in the end, antithetical to libertarianism.
Second, let’s look at the claim that governments provide a worse means of balancing spontaneity and long-term interests than markets do. There seem to me to be two claims here: one implicit and one explicit. The first doesn’t hold, as far as I can see, and Cowen doesn’t actually provide any evidence in support of the second.
The first, more or less implicit, claim is that the do-not-call list is problematic because it’s the government that is organizing it. This seems to me to be a non-starter. The government isn’t constraining choice here, it’s enabling it. More precisely, it is offering a new choice to consumers which they previously didn’t have - of telling telemarketers not to call them. If the government is “altering the balance,” it is doing so by opening up choices rather than shutting them down - i.e. it isn’t restricting the kinds of liberties that libertarians get het up about. It’s not coercing consumers to sign up. The only people who are being coerced are the telemarketers, who are being coerced only to respect the right of others to choose not to be called by them. To put it another way: would libertarians find the scheme objectionable if it was being run entirely by private actors? Say, for example, if the Direct Marketing Association had put together a really workable do-not-call list (rather than the half-arsed effort that it had). I suspect that libertarians would see this as laudable evidence of market forces at work. But the effects on individual consumer choice would be precisely the same.
The second claim that Cowen makes is that markets are a better way of balancing our controlling selves and our spontaneous selves than governments. He doesn’t adduce any real arguments or evidence for thinking that this is likely to be so, and I suspect that he’d have trouble in finding them. 1 In order to evaluate the respective merits of different means to balancing, you’d really have to have some valid and convincing metric for “deciding” the appropriate balance between the different claims of long term enlightened self interest, and short term spontaneity. And damn me if I know of any way of doing this in an intellectually defensible way. I suspect that Cowen’s claim, if you look at it closely, boils down to something like the following. “Markets are more likely than not to favour spontaneity over long term rationality. By and large, I prefer free scope to be given to spontaneity, rather than careful long term planning, when the two come into conflict. Therefore I, and people like me, should prefer markets over government.” Which is all very well and good, but isn’t going to convince people with dissimilar preferences.
Now this is a rather lengthy response to a throwaway argument, but I think there are some interesting issues buried in here. How well do libertarian claims about social order work, if you assume that people are subject to certain kinds of inconsistency? My suspicion, as articulated above, is that they don’t work well at all. How do libertarians deal with individual forms of choice that are deliberately meant to foreclose future choices that the individual might make? Surely, some libertarian, somewhere, has dealt with this set of problems. The only person I know who has done serious work on this is erstwhile analytic Marxist, current day unclassifiable leftie, Jon Elster. Two of his books, Ulysses and the Sirens and Ulysses Unbound, show that these problems are endemic to many important forms of choice.
1 Broader efficiency claims for markets rest, of course, on assumptions about the consistency of preferences, which Cowen has junked at the beginning of his post.
Update: Ogged has further criticisms.
Addendum: Reading over Cowen’s post again, it strikes me that precisely the argument that he’s making over the do-not-call list can be made with regard to the sale of pension plans on the market. Pension providers, by giving us the choice of signing up to schemes where we put away a chunk of our disposable income every month, are altering the balance between our rational controlling selves and our spontaneous selves. As already noted, the actual nature of the provider (government in Cowen’s example; a private firm in mine) is a red herring - the important bit for the argument is how their provision of something affects individual choice.
Addendum II - David Glenn emails to point to this very interesting paper by Cass Sunstein and Richard Thaler, which starts from similar arguments about limitations in human rationality and consistency, to argue on behalf of a “libertarian paternalism.” Good, thought-provoking stuff.
Given that we have at least two or three contributors who hold down paid jobs as philosophers of one kind or another, and it’s a Friday afternoon, I thought I’d take the opportunity to ask a question that’s been on my mind for a while.
Why is it that no moral or political philosophy of which I am aware has a satisfactory explanation for the fact that snitches, grasses and tattle-tales are almost universally reviled? In most other areas of moral philosophy, it is generally considered unsatisfactory, at the least, to have what is known as an “error theory”: a set of principles which commits you to the belief that the majority of the population are wrong in some of their strongly held beliefs. But in the case of snitches, grasses and squealers, most of the moral philosophies I’ve ever heard of seem to be more or less entirely committed to an error theory. Why?
In the case of utilitarian and other consequentialist theories, it’s just a specific case of something that I’ve always found problematic about utilitarianism: the general fact that because you’re using a consequentialist schema for judging the value of actions, it cannot be a further good or bad thing about action N that N is an act of snitching. Like acts of incest, individual snitchings can be good, bad or indifferent according to a utilitarian theory. And that’s something which has always seemed unconvincing to me.
In the case of most non-utilitarian political moralities, however, if they have a policy on the subject at all, it’s usually to require you to grass up your mates in certain situations. I am not aware of any religious or non-religious morality taken seriously by moral philosophers which has a rule against grassing. Even quite anti-authoritarian libertarian types usually have some sort of “rule of law” concept which appears to require one to turn stool-pigeon in some circumstances, and which certainly doesn’t condemn you for doing it if you should wake up one day and choose to. Which is surprising, really, because in the real world, from the playground to the Mafia, “no snitching” is about as serious a rule as you can get, often much more important than strictures on murder or even marital infidelity.
I’ll put forward three reasons which have occurred to me; none of them is particularly convincing.
First, moral philosophers are in general quite authoritarian personalities, in Adorno’s sense. They’ve chosen a life which is an extension of school and thus they model themselves on teachers, for whom a pro-snitching morality is the norm.
Second, snitching is a more socially embedded concept than most of the things moral philosophers deal with. Most moral philosophy examples involve only two people (with “the masses” sometimes lurking offstage as an onlooker who might be adversely influenced). From the grammar of the words involved, however, you need at least three people to have an act of snitching (A tattles on B to C), and C has to be in a hierarchical relationship with B as a minimal condition (otherwise it’s gossip, not snitching). It doesn’t really lend itself to abstract reasoning.
Third, outside some parts of the anarchist tradition, moral philosophers generally assume a world in which rules are rules for a reason, rather than for no reason (or insufficient reason). In general the only discussion of snitching in moral philosophy textbooks is in “ticking timebomb” type cases or other examples where it’s fairly clear cut that (whether snitching is the right or wrong thing to do) one is avoiding or punishing a significant harm. Cases of just telling tales on someone for some trivial wrongdoing like speeding or parking offences don’t compute well, because philosophers have a tendency to apply transitivity too far: to assume that if it is wrong to do X, then one should be punished for doing X, and therefore it is good to ensure that X-acts are punished.
As I say, none of the above particularly convinces me. Any thoughts?
This is really Brian’s territory (and Laurie’s even more so, but she’s in Australia so I am forced to ontologize without the help of a professional), but Eugene Volokh has been posting about gay marriage and he quotes this argument from one of his readers:
I happen to be 40 years old, happen to be an economist, and happen to be fertile, but I AM a man. I am not a human who happens to be a man. Being male is fundamental to who I am in a deeper way than any of these other characteristics.
The reader goes on to talk about fertility and infertility in hetero- and homosexual couples, and Eugene disagrees with him. But this first paragraph struck me as odd. I can understand how being male is more fundamental to his identity than being 40 or an economist, but he also seems to say that it’s the essential thing about him. He’s a man first, “not a human who happens to be a man.” Can he really mean this? What if he had to choose one property or the other? Would he really prefer to be a male non-human than a non-male human? Say, a fine, strapping male canary rather than a woman? Maybe I’m misreading his view. Or maybe my he-canary vs woman preference ranking is not widely shared.
Andy Egan at Philosophy from the 617 responds to some of the debate Henry’s Harry Potter post produced, and in doing so brings up an interesting point about how we judge fiction. Lots of people say that in fiction, especially visual fiction but also in written works, the author should show the audience what happens, not tell them what happens. But what exactly does this rule mean?
One first-pass thought is that it’s something like this: when you’ve got a choice between making something true in the fiction directly (by, e.g., writing down a sentence that expresses the proposition you want to make true), or making it true by saying a bunch of lower level stuff that entails it (or demands that we imagine it, or whatever), you should always do the second. In other words, it’s always better to force the higher-level facts by explicitly fixing the lower-level facts. But that’s pretty clearly crazy - it calls for books written entirely in the vocabulary of microphysics, which would be incredibly long and boring and incomprehensible and awful.
Andy goes on to argue that there’s some privileged intermediate level of detail that’s appropriate to write at - in particular, the “level of description that our imaginative faculties operate at”. I’m not entirely sure what that is. Are concepts like CHAIR - things I can easily imagine, but which I always imagine as having more detail than just being chairs - at the “level of description that our imaginative faculties operate at”? I don’t know, but perhaps we can work out an appropriate privileged level.
(In these cases I always think the class of monomorphemically lexicalised maximally precise concepts can play a crucial role. But that’s probably just because I (a) read too much Fodor and (b) use too much jargon. So ignore that suggestion.)
But I think the better response is just to accept the allegedly absurd conclusion that Andy offers. The “show, don’t tell” rule does imply that everything should be written in microphysics. But it isn’t the only rule authors should follow. Among other salient rules, there’s the “keep it simple, stupid” rule, not to mention the rule against repetitiveness. If we wanted to find one rule for writing it would look something like “Show rather than tell as much as possible, consistent with keeping the work accessible to the average reader and not being boring and …” Or, in slightly fewer words, “Show rather than tell as long as ceteris are close enough to paribus”. But once you’ve said that you may as well go back to the simple “Show, don’t tell” rule, remembering that if you forget the other rules it will have fairly silly consequences.
À Gauche
Jeremy Alder
Amaravati
Anggarrgoon
Audhumlan Conspiracy
H.E. Baber
Philip Blosser
Paul Broderick
Matt Brown
Diana Buccafurni
Brandon Butler
Keith Burgess-Jackson
Certain Doubts
David Chalmers
Noam Chomsky
The Conservative Philosopher
Desert Landscapes
Denis Dutton
David Efird
Karl Elliott
David Estlund
Experimental Philosophy
Fake Barn County
Kai von Fintel
Russell Arben Fox
Garden of Forking Paths
Roger Gathman
Michael Green
Scott Hagaman
Helen Habermann
David Hildebrand
John Holbo
Christopher Grau
Jonathan Ichikawa
Tom Irish
Michelle Jenkins
Adam Kotsko
Barry Lam
Language Hat
Language Log
Christian Lee
Brian Leiter
Stephen Lenhart
Clayton Littlejohn
Roderick T. Long
Joshua Macy
Mad Grad
Jonathan Martin
Matthew McGrattan
Marc Moffett
Geoffrey Nunberg
Orange Philosophy
Philosophy Carnival
Philosophy, et cetera
Philosophy of Art
Douglas Portmore
Philosophy from the 617 (moribund)
Jeremy Pierce
Punishment Theory
Geoff Pynn
Timothy Quigley (moribund?)
Conor Roddy
Sappho's Breathing
Anders Schoubye
Wolfgang Schwartz
Scribo
Michael Sevel
Tom Stoneham (moribund)
Adam Swenson
Peter Suber
Eddie Thomas
Joe Ulatowski
Bruce Umbaugh
What is the name ...
Matt Weiner
Will Wilkinson
Jessica Wilson
Young Hegelian
Richard Zach
Psychology
Donyell Coleman
Deborah Frisch
Milt Rosenberg
Tom Stafford
Law
Ann Althouse
Stephen Bainbridge
Jack Balkin
Douglass A. Berman
Francesca Bignami
BlunkettWatch
Jack Bogdanski
Paul L. Caron
Conglomerate
Jeff Cooper
Disability Law
Displacement of Concepts
Wayne Eastman
Eric Fink
Victor Fleischer (on hiatus)
Peter Friedman
Michael Froomkin
Bernard Hibbitts
Walter Hutchens
InstaPundit
Andis Kaulins
Lawmeme
Edward Lee
Karl-Friedrich Lenz
Larry Lessig
Mirror of Justice
Eric Muller
Nathan Oman
Opinio Juris
John Palfrey
Ken Parish
Punishment Theory
Larry Ribstein
The Right Coast
D. Gordon Smith
Lawrence Solum
Peter Tillers
Transatlantic Assembly
Lawrence Velvel
David Wagner
Kim Weatherall
Yale Constitution Society
Tun Yin
History
Blogenspiel
Timothy Burke
Rebunk
Naomi Chana
Chapati Mystery
Cliopatria
Juan Cole
Cranky Professor
Greg Daly
James Davila
Sherman Dorn
Michael Drout
Frog in a Well
Frogs and Ravens
Early Modern Notes
Evan Garcia
George Mason History bloggers
Ghost in the Machine
Rebecca Goetz
Invisible Adjunct (inactive)
Jason Kuznicki
Konrad Mitchell Lawson
Danny Loss
Liberty and Power
Ether MacAllum Stewart
Pam Mack
Heather Mathews
James Meadway
Medieval Studies
H.D. Miller
Caleb McDaniel
Marc Mulholland
Received Ideas
Renaissance Weblog
Nathaniel Robinson
Jacob Remes (moribund?)
Christopher Sheil
Red Ted
Time Travelling Is Easy
Brian Ulrich
Shana Worthen
Computers/media/communication
Lauren Andreacchi (moribund)
Eric Behrens
Joseph Bosco
Danah Boyd
David Brake
Collin Brooke
Maximilian Dornseif (moribund)
Jeff Erickson
Ed Felten
Lance Fortnow
Louise Ferguson
Anne Galloway
Jason Gallo
Josh Greenberg
Alex Halavais
Sariel Har-Peled
Tracy Kennedy
Tim Lambert
Liz Lawley
Michael O'Foghlu
Jose Luis Orihuela (moribund)
Alex Pang
Sebastian Paquet
Fernando Pereira
Pink Bunny of Battle
Ranting Professors
Jay Rosen
Ken Rufo
Douglas Rushkoff
Vika Safrin
Rob Schaap (Blogorrhoea)
Frank Schaap
Robert A. Stewart
Suresh Venkatasubramanian
Ray Trygstad
Jill Walker
Phil Windley
Siva Vaidahyanathan
Anthropology
Kerim Friedman
Alex Golub
Martijn de Koning
Nicholas Packwood
Geography
Stentor Danielson
Benjamin Heumann
Scott Whitlock
Education
Edward Bilodeau
Jenny D.
Richard Kahn
Progressive Teachers
Kelvin Thompson (defunct?)
Mark Byron
Business administration
Michael Watkins (moribund)
Literature, language, culture
Mike Arnzen
Brandon Barr
Michael Berube
The Blogora
Colin Brayton
John Bruce
Miriam Burstein
Chris Cagle
Jean Chu
Hans Coppens
Tyler Curtain
Cultural Revolution
Terry Dean
Joseph Duemer
Flaschenpost
Kathleen Fitzpatrick
Jonathan Goodwin
Rachael Groner
Alison Hale
Household Opera
Dennis Jerz
Jason Jones
Miriam Jones
Matthew Kirschenbaum
Steven Krause
Lilliputian Lilith
Catherine Liu
John Lovas
Gerald Lucas
Making Contact
Barry Mauer
Erin O'Connor
Print Culture
Clancy Ratcliff
Matthias Rip
A.G. Rud
Amardeep Singh
Steve Shaviro
Thanks ... Zombie
Vera Tobin
Chuck Tryon
University Diaries
Classics
Michael Hendry
David Meadows
Religion
AKM Adam
Ryan Overbey
Telford Work (moribund)
Library Science
Norma Bruce
Music
Kyle Gann
ionarts
Tim Rutherford-Johnson
Greg Sandow
Scott Spiegelberg
Biology/Medicine
Pradeep Atluri
Bloviator
Anthony Cox
Susan Ferrari (moribund)
Amy Greenwood
La Di Da
John M. Lynch
Charles Murtaugh (moribund)
Paul Z. Myers
Respectful of Otters
Josh Rosenau
Universal Acid
Amity Wilczek (moribund)
Theodore Wong (moribund)
Physics/Applied Physics
Trish Amuntrud
Sean Carroll
Jacques Distler
Stephen Hsu
Irascible Professor
Andrew Jaffe
Michael Nielsen
Chad Orzel
String Coffee Table
Math/Statistics
Dead Parrots
Andrew Gelman
Christopher Genovese
Moment, Linger on
Jason Rosenhouse
Vlorbik
Peter Woit
Complex Systems
Petter Holme
Luis Rocha
Cosma Shalizi
Bill Tozier
Chemistry
"Keneth Miles"
Engineering
Zack Amjal
Chris Hall
University Administration
Frank Admissions (moribund?)
Architecture/Urban development
City Comforts (urban planning)
Unfolio
Panchromatica
Earth Sciences
Our Take
Who Knows?
Bitch Ph.D.
Just Tenured
Playing School
Professor Goose
This Academic Life
Other sources of information
Arts and Letters Daily
Boston Review
Imprints
Political Theory Daily Review
Science and Technology Daily Review