The discussion threads on naturalism have been lots of fun, but I’m going to have to leave them behind to head off to my favourite little philosophical conference. Unless the thread lasts another week (an eternity in blogtime!) it will be done before I return. So I thought I’d close with a point of agreement.
Matt Evans discusses the “impossibility of criticizing anyone else’s Founding Moral Principle, no matter how outrageous it might seem to me.” This is a little strong – it’s possible to criticise anything. (Just listen to me for a while if you don’t believe this.) What isn’t possible is to make criticisms that the other party will find moving. There’s something to this. It’s why moral arguments are usually coherence arguments, I think. But I don’t think there’s much distinctive about morality here. I think something similar is true in every area of human study. A quick (and very compressed) tour through 20th century philosophy of science might explain why I think that.
The 20th century opens with Duhem pointing out that data never conclusively refutes a theory. One can always find enough auxiliary hypotheses to save the theory. There's then a hard question about when theory change in response to data is appropriate. For two generations, one of the central questions in philosophy of science was when theory change is rational. The eventual conclusion was that if you're looking at something as broad as all of science, you can't say much more than "theory change is rational when smart scientists are moved to change their theory by the data." If you want to know anything more specific, you have to find out what smart scientists are like. And hence we see the (quite productive) movement in late 20th century philosophy of science of philosophers learning more about particular sciences in order to say something much more specific.
The problem here is that if someone wants to hold onto a core proposition (as Lakatos put it) it’s going to be very hard to conclusively show that they are wrong. Theories change not because their core claims are refuted beyond all doubt, but because in the judgment of enough scientists, it looks preferable to adopt an alternative theory.
(Some people, myself included, think that this model generalises pretty well to all of philosophy. Philosophical argumentation can show you the costs of rival theories, but ultimately it's an act of judgment, not argumentation, to determine which theory is cheapest.)
So I think Matt’s right – someone whose judgment is so bad that their fundamental theories are completely wrong-headed cannot be talked out of this by argumentation. But this isn’t just about morality. Someone who really wants to hold on to a Ptolemaic theory of astronomy can find enough reasons to preserve their theory in the face of any countervailing data. (Remember all those epicycles!)
This is why I’ve been a little surprised by some of the responses to my claims that there are valid arguments from descriptive claims to normative claims. One common thread in the responses is that these arguments wouldn’t move a determined opponent of the conclusion. True enough. But some arguments from the data to astronomical conclusions wouldn’t move a determined opponent either, and that doesn’t show that our astronomical beliefs rest on faith, or cannot be incorporated into a naturalist picture, or don’t follow from perfectly ordinary descriptive claims.
None of this should be taken as a claim that moral argument is never effective. We can have quite productive disagreements with people who share enough core premises with us. But if we try comparing views with radical opponents, we'll find that many disagreements rest on quite different judgments about how costly various moves are. Those disputes tend not to be so productive. And this, very roughly, is why I much prefer moral disputes with people with similar political views, where we'll often be trying to spell out the consequences of our shared premises, to disputes with radical opponents, where we'll be in the (rather unpleasant) game of trying to convince the other guy that we've misvalued something. Sometimes this can be worthwhile (sometimes, after all, we do misvalue things, and maybe this can be revealed somehow) but often it is less than productive.