Mao Mao

by Henry Farrell on March 17, 2005

David Horowitz “meets” the Cultural Revolution, with Billmon presiding. Via Michael Froomkin.

Eugene Volokh jumps the shark

by Henry Farrell on March 17, 2005

I was writing a post about Eugene Volokh’s defence of the “deliberate infliction of pain,” “slow throttling,” and “cruel vengeance” when I saw that Chris had beaten me to the punch. I find the argument that the justice system should be used as a means to inflict cruelty, in order to satisfy victims’ – and society’s – desire for vengeance, rather appalling. It’s a return to the idea that the animating ideal of justice should be vengeance and public display rather than the correction and dissuasion of wrongdoing. Which is not to say that the modern idea of justice doesn’t have its own, more abstract cruelties, as Michel Foucault and Michael Ignatieff have pointed out – but the claim that the justice system sometimes needs to inflict pain for the purpose of inflicting pain is something we should have gotten rid of a couple of centuries ago.

At least Eugene is being honest here. I don’t think it’s unreasonable to suspect that most of the nonsensical defences of torture that we see, invoking “ticking bombs” and the like, are so many insincere public justifications of an underlying desire to torture the terrorists not to get information, but because they’re terrorists (and if a few innocents get caught up in the system, well, you can’t make an omelette &c &c). But the fact that Eugene’s defence is sincere doesn’t mean that it’s not repugnant to a set of minimal liberal commitments shared by many leftists, classical liberals, Burkean conservatives and others.

Eugene Volokh writes:

bq. “Something the Iranian government and I agree on”: I particularly like the involvement of the victims’ relatives in the killing of the monster; I think that if he’d killed one of my relatives, I would have wanted to play a role in killing him. Also, though for many instances I would prefer less painful forms of execution, I am especially pleased that the killing — and, yes, I am happy to call it a killing, a perfectly proper term for a perfectly proper act — was a slow throttling, and was preceded by a flogging. The one thing that troubles me (besides the fact that the murderer could only be killed once) is that the accomplice was sentenced to only 15 years in prison, but perhaps there’s a good explanation.

And there’s more …

bq. I should mention that such a punishment would probably violate the Cruel and Unusual Punishment Clause. I’m not an expert on the history of the clause, but my point is that the punishment is proper because it’s cruel (i.e., because it involves the deliberate infliction of pain as part of the punishment), so it may well be unconstitutional. I would therefore endorse amending the Cruel and Unusual Punishment Clause to expressly exclude punishment for some sorts of mass murders.

Those, like me, who are startled and upset to read Volokh writing like this, might want to visit the website of the National Coalition to Abolish the Death Penalty or David Elliot’s Abolish the Death Penalty blog.

Social network systems

by Chris Bertram on March 17, 2005

This post is in Eszter territory, and probably just reflects ignorance on my part, but I’d be grateful for information from those in the know anyway. Following one of Eszter’s posts recently, I signed up to Movielens and have been dutifully entering my ratings in various spare moments. Like Amazon, Movielens tells me that, based on the movies I like, I should check out various other ones. Presumably, the program checks the database to see which movies I haven’t seen are highly rated by other people who like the same films that I liked (ditto Amazon for books, DVDs etc).
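The kind of matching described here can be sketched as a toy user-based collaborative filter. Everything below is made up for illustration – the ratings, the user names, and the crude similarity measure – and is not a claim about how Movielens or Amazon actually compute their recommendations:

```python
from collections import defaultdict

# Toy ratings data (hypothetical): ratings[user][movie] = score from 1 to 5.
ratings = {
    "alice": {"A": 5, "B": 4, "C": 1},
    "bob":   {"A": 4, "B": 5},
    "carol": {"A": 1, "C": 5},
}

def similarity(u, v):
    """Count of movies both users rated highly (>= 4) -- a crude stand-in
    for the correlation measures real recommender systems use."""
    shared = set(ratings[u]) & set(ratings[v])
    return sum(1 for m in shared if ratings[u][m] >= 4 and ratings[v][m] >= 4)

def recommend(user):
    """Score each movie the user hasn't seen by summing other users'
    ratings of it, weighted by how similar those users are to `user`."""
    scores = defaultdict(float)
    for other in ratings:
        if other == user:
            continue
        w = similarity(user, other)
        for movie, score in ratings[other].items():
            if movie not in ratings[user]:
                scores[movie] += w * score
    return sorted(scores, key=scores.get, reverse=True)

print(recommend("bob"))  # -> ['C'], via bob's similarity to alice
```

The point of the sketch is just the shape of the inference: “people who liked what you liked also liked X, so try X.”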

Now here’s my problem. When we all come to such systems “cold” (as it were), the links between our choices provide genuinely informative data. But once we start acting on the recommendations, even chance correlations can get magnified. So, for example, suppose we have three movies A, B and C. Perhaps if we showed these films to a randomly chosen audience there wouldn’t be any reason to suppose that people who like A prefer B to C or vice versa. But if the first N people to use the system happen to like both A and B, then the program will spew out a recommendation to subsequent A or B lovers to follow up their viewing with B or A. And those people in turn, having viewed the recommended movie, will feed their approval back into the system and thereby strengthen the association. Poor old movie C, excluded by chance from this self-reinforcing loop, will not get recommended nearly so often.
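The loop described above can be illustrated with a toy simulation – a hypothetical setup, not a model of any real system. Three equally likable movies; a few early users who, by chance, like A and B together; and later users who watch whatever is recommended and feed their approval back in:

```python
import random

random.seed(0)

movies = ["A", "B", "C"]
# co_likes[x][y]: how many users so far have liked both x and y.
co_likes = {x: {y: 0 for y in movies if y != x} for x in movies}

def recommend(liked):
    """Recommend the movie most often co-liked with `liked`; fall back
    to a random other movie when there is no data yet."""
    counts = co_likes[liked]
    if max(counts.values()) == 0:
        return random.choice(list(counts))
    return max(counts, key=counts.get)

# Chance seed: the first three users happen to like A and B together.
for _ in range(3):
    co_likes["A"]["B"] += 1
    co_likes["B"]["A"] += 1

# Later users each like one random movie, watch the recommendation,
# and (the movies being equally likable) like that one too.
for _ in range(1000):
    first = random.choice(movies)
    rec = recommend(first)
    co_likes[first][rec] += 1
    co_likes[rec][first] += 1

print(co_likes["A"]["B"], co_likes["A"]["C"], co_likes["B"]["C"])
```

After the run, the A–B association dwarfs anything involving C, even though the three movies were equally likable by construction: every A-lover is steered to B and vice versa, so C never gets the exposure that would let it catch up.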

I guess the people who design these systems must have considered these effects and how to counteract them. Any answers?