Here’s an interesting (or at least provocative) new piece of psychological research (link may need academic subscription) with findings concerning the moral framework generally favoured by economists:
bq. In this paper, we question the close identification of utilitarian responses with optimal moral judgment by demonstrating that the endorsement of utilitarian solutions to a set of commonly-used moral dilemmas correlates with a set of psychological traits that can be characterized as emotionally callous and manipulative—traits that most would perceive as not only psychologically unhealthy, but also morally undesirable.
“The mismeasure of morals: Antisocial personality traits predict utilitarian responses to moral dilemmas”, by Daniel M. Bartels and David A. Pizarro, Cognition 121 (2011) 154–161.
Jim Demintia 08.22.11 at 2:11 pm
See also the works of Charles Dickens.
tomslee 08.22.11 at 2:20 pm
Ungated version
tomslee 08.22.11 at 2:25 pm
we question the close identification of utilitarian responses with optimal moral judgment by demonstrating that the endorsement of utilitarian solutions to a set of commonly-used moral dilemmas correlates with a set of psychological traits that can be characterized as emotionally callous and manipulative
Isn’t that the definition of ad hominem?
daelm 08.22.11 at 2:27 pm
Well, at least they’ve stopped calling them autistic.
Sandwichman 08.22.11 at 2:39 pm
Economist as hipster: http://ecologicalheadstand.blogspot.com/2011/08/economist-as-hipster.html
Steve LaBonne 08.22.11 at 2:39 pm
No, certainly not. They’re stating an empirically testable correlation for which they claim to have evidence. The ad hominem is fallacious only when it purports to be a purely deductive argument (X is a y therefore X’s beliefs are wrong.)
Chris Bertram 08.22.11 at 2:43 pm
Nicely put Steve. Too many people seem to think that “X is a y _therefore_ we have reason to disbelieve what X says” is fallacious. Which of course, it isn’t.
Salient 08.22.11 at 2:51 pm
They’re stating an empirically testable correlation for which they claim to have evidence.
Setting aside the Latin (and firmly filing it under ‘no that’s NOT what begging the question means’ type disputes about the denotative integrity of prescriptively-misused cliche phrases), you may consider it fairer to observe that ‘can be characterized as’ is a very muscular phrase, in the sense that it ‘can be capable of’ doing quite a lot of work. (Which isn’t to say that either of us should feel inclined to dispute or scrutinize their conclusions, but is to say that you and I can be characterized as inherently non-objective evaluators…)
Steve LaBonne 08.22.11 at 2:56 pm
Well, I only scanned the paper quickly (I should actually be working but it’s a slow day…) but I felt they were fairly careful about how they presented their findings. If you read the paper, the personality traits to which they’re referring in that quote would I think be quite widely agreed to be “emotionally callous and manipulative”, so I don’t see that this use of “can be characterized” is out of line at all.
qb 08.22.11 at 3:13 pm
I’m not saying you’re being obtusely pedantic about ad hominems. I’m saying you’re saying the kinds of things you would say if you were being obtusely pedantic about ad hominems.
roac 08.22.11 at 3:22 pm
Did I not once read about a study of econ grad students, before and after, that showed a sharp decline in some measure of empathy and/or altruism?
czrpb 08.22.11 at 3:23 pm
I would consider myself a confirming data point; I am often frustrated by people. Sorry.
Barry 08.22.11 at 3:24 pm
qb, was this comment addressed to anybody in particular?
qb 08.22.11 at 3:27 pm
Ahem.
“In this paper, we question the close identification of Kantian responses with optimal moral judgment by demonstrating that the endorsement of Kantian solutions to a set of commonly-used moral dilemmas correlates with a set of psychological traits that can be characterized as uncaring and dogmatic—traits that most would perceive as not only psychologically unhealthy, but also morally undesirable.”
“In this paper, we question the close identification of virtue-ethical responses with optimal moral judgment by demonstrating that the endorsement of virtue-ethical solutions to a set of commonly-used moral dilemmas correlates with a set of psychological traits that can be characterized as simpering and unprincipled—traits that most would perceive as not only psychologically unhealthy, but also morally undesirable.”
Ad infinitum. Ah, the trappings of Science.
Tom 08.22.11 at 3:53 pm
qb +1
Also, if “the endorsement of utilitarian solutions to a set of commonly-used moral dilemmas correlates with a set of psychological traits that can be characterized as emotionally callous and manipulative—traits that most would perceive as not only psychologically unhealthy, but also morally undesirable”, should we toss the utilitarian solutions or revise our judgments about the undesirability of those traits?
Steve LaBonne 08.22.11 at 3:53 pm
qb doesn’t appear to understand the difference between making that kind of statement off the top of one’s head, and making it on the basis of empirical evidence (personality inventories). Now, if one wants to question the quality of that evidence, that’s a different story; have at it. But by itself #14 is just silly.
zamfir 08.22.11 at 4:10 pm
Reading their introduction, their main beef seems to be with Cass Sunstein’s claim that utilitarian moral principles are true morality, and that other moral rules are heuristic and fallible shortcuts to achieving utilitarian goals.
CJColucci 08.22.11 at 4:36 pm
Wouldn’t it be likely that, in many cases, using any kind of explicit moral reasoning would give an observer reason to think there’s something radically wrong with you? For example, I’m driving down an otherwise deserted and unremarkable road and you point out a baby sitting in my lane. I have plenty of time to avoid the baby, or even to stop and move the baby out of the road. Or I could just run the baby over. If I started thinking aloud about whether to run the baby over or whether to swerve, or to stop and place the baby somewhere safe, aren’t you likely to think I’m a psychopath no matter what form of moral reasoning I used, and even if I came to the correct decision?
Meredith 08.22.11 at 4:37 pm
Jim Demintia,
Charles Dickens used the English language and is readable, so he must not be a Very Serious Thinker or Insightful Analyst.
But to be fair to academic philosophers today: I believe that the trend among them is to write well and clearly, avoiding dense abstraction-speak as much as possible.
MCollins 08.22.11 at 4:54 pm
This is not an anti-utilitarian argument, but a Sidgwickian one.
actio 08.22.11 at 5:00 pm
More provocative than interesting.
The authors are in places properly hedged, like here:
“We should note that our results do not speak to whether utilitarianism (or deontology) is the correct normative ethical theory, as the characteristics of a theory’s proponents cannot determine its normative status. In addition, favoring a utilitarian or deontological solution to a sacrificial moral dilemma does not necessarily indicate that a participant endorses (or understands) utilitarianism or deontology as a full-blown ethical theory—just because an individual responds like a utilitarian would is not sufficient evidence that she is a utilitarian.”
But in several places the paper is just a mess of confused normative claims. It will be interesting to read the replies, which will surely come.
Barry Freed 08.22.11 at 5:05 pm
@CJColucci Yes, I had a great philosophy prof as an undergrad (Peter Manchester at SUNY SB) who used a similar example to make the point that moral systems that work by rules (Confucian, etc) were not inferior to Western views of moral reasoning and may well be superior. The example he used was if you’re in the check-out line at the grocery store and the person in front of you drops a $20 bill without noticing it, is it more moral to reason through the correct thing to do (even if one arrives at the correct solution) or just pick it up and hand it back without a second thought.
qb 08.22.11 at 5:30 pm
Steve, do you mean to tell me you doubt that the endorsement of Kantian views correlates with traits that could be characterized as uncaring and dogmatic? Or that the endorsement of virtue-ethical views correlates with traits that could be characterized as unprincipled? I’m trying to decide if you’re unfamiliar with Kant and virtue ethics, or if you’re unfamiliar with the kinds of ad hominem attacks every undergraduate in the world comes up with when they first encounter those theories. I suppose I shouldn’t rule out both. I like social science, but it cannot plausibly be denied that much of it could be characterized as empirical arguments for self-evident conclusions. But if you want the empirical arguments, go do the study yourself. See you in a few years, once you’ve managed to lug that methodology up the mountainside. The view from up here is great.
Steve LaBonne 08.22.11 at 5:39 pm
I mean to say that whatever I may think, there’s still a difference between merely asserting such a proposition and at least attempting to provide evidence for it. Sometimes things that “nobody doubts” turn out to be false. (In fact this paper claims to be casting doubt on what it claims is just such a widely held presupposition, that utilitarianism is in some sense an optimal moral strategy.) So I think your mockery was misplaced.
But as a natural scientist I’ll only go so far in defending social science. ;) I am not claiming that the paper is good (in any case I only hastily skimmed it), only that the enterprise it embodies is not obviously foolish.
qb 08.22.11 at 5:44 pm
Fair enough!
geo 08.22.11 at 6:04 pm
I like social science
How very odd.
tomslee 08.22.11 at 6:08 pm
Steve and Chris at #6 and #7.
I parse the sentence as “X tends to be a not-very-nice y and X believes b. Therefore b is not optimally moral.” Which is closer to “X is a y therefore X’s beliefs are wrong” than to “X is a y therefore we have reason to disbelieve what X says”. But I’m an ex-natural scientist too, so my sentence-parsing may not be that great.
Also, what qb says.
Chris Bertram 08.22.11 at 6:23 pm
“I parse the sentence” … you parse _which_ sentence? (I was simply approving of Steve’s general point re so-called _ad hominem_ .)
So
bq. X is a [person with a proven record of lying] therefore we have reason to disbelieve what X says.
Is OK, and not “ad hominem” at all.
qb 08.22.11 at 6:47 pm
I wonder how they’re operationalizing “optimal moral judgment.” It sure looks like what they mean by that is judgment that correlates with traits that most would perceive as psychologically healthy and morally desirable. That pretty much rules out every mainstream ethical theory ever… which shouldn’t be surprising, since one of the main motivations for doing ethical theory is the thought that our pre-theoretic moral intuitions–most people’s intuitions–are deeply flawed.
John Quiggin 08.22.11 at 6:49 pm
I’ve also had time only for a quick look at the study, but it rings all sorts of alarm bells. It’s the fat man and the trolley problem yet again, this time with the trappings of statistics.
I’d summarise the findings as “We presented subjects with toy examples of moral problems rigged to show that utilitarianism commits you to actions that intuitively appear immoral. We find that those who choose the utilitarian option anyway have unappealing personality traits”.
John Quiggin 08.22.11 at 7:00 pm
Separately, I would say that utilitarianism isn’t very appealing as a basis for individual ethics, simply because it’s impossibly demanding. If those doing the study wanted to do it right, they would have asked something simple like “Would you sacrifice your child’s life to save two people you’ve never met on the other side of the planet?” The utilitarian answer is trivial, but anyone who gave it would be way off any kind of normal psychological scale.
Utilitarianism only makes sense as a basis for public and legal decisions, and here the case is much stronger (though by no means indisputable). Governments have to solve trolley problems all the time, and the distinctions drawn in the different variations of the problem don’t work as advertised. For example, should a government refrain from building a railway because some people will inevitably fall on to the tracks? Is there a difference in the case where railways are privatised and governments permit, but don’t cause, new tracks to be built?
MikeM 08.22.11 at 7:05 pm
My favorite quote on utilitarianism comes from Gigerenzer et al, Simple Heuristics…: One philosopher was struggling to decide whether to stay at Columbia University or to accept an offer from a rival university. The other advised him: “Just maximize your expected utility–you always write about doing this.” Exasperated, the first philosopher responded: “Come on, this is serious.”
Salient 08.22.11 at 7:13 pm
that intuitively appear immoral
I have a hobbyhorse around here labeled incredulity constrains intuition that hasn’t been ridden in a while, with a sticker slapped on it that adds … but mostly along humane contours!
People who are willing to pretend to be credulous about a live human body acting ludicrously in the mechanical role of trolley-impediment apparently demonstrate a parallel willingness to disvalue human life the way a scheming psychopath would, almost as if that life constituted… a merely mechanical construction in a pretend game!
I’m growing increasingly fond of trolley problems but only as a sandbox for nuclear-war-brinksmanship type games, wherein the only winning move is not to play.
Matthias 08.22.11 at 7:48 pm
…and…?
CJColucci 08.22.11 at 7:48 pm
By the way, I haven’t seen anything in a long time about the problem of the Utility Monster. Has the problem been solved, or has it merely gone out of fashion?
geo 08.22.11 at 8:05 pm
JQ: Would you sacrifice your child’s life to save two people you’ve never met on the other side of the planet?
Why is utilitarianism thought to require valuing all lives exactly equally? Suppose this question were rephrased as “Would you sacrifice your child’s life to save the lives of two very ill 90-year-old people you’ve never met on the other side of the planet?” Then no one would suppose this is a conundrum for utilitarians. Likewise if the question were “Would you sacrifice your child’s life to save Henry Kissinger’s and Dick Cheney’s lives, or the lives of the board of the Council on Foreign Relations, or [substitute your own bete noire]?” Again, no conundrum, because even a quick calculation of the consequences for human happiness of your child — any child — growing up and leading a normal life versus Kissinger and Cheney dragging out their miserable existences would lead most utilitarians to dispatch K and C without a qualm.
The point is: utilitarianism is simply consequentialism, ie, a rough-and-ready (or, where appropriate, painstaking) reckoning of best outcomes, by measures that go all the way down in one’s moral imagination, and can be transmitted culturally (that’s what great literature is about, among other things) but cannot be demonstrated deductively and hence make no metaphysical claims.
Alison P 08.22.11 at 8:26 pm
I’ve said before that it seems to me that the thought-experiments of academic ethics are designed to symbolically punish the vulnerable, emotional parts of the academics themselves, embodied and externalised as soft corporeal victims, who must (via convoluted scenarios) suffer for the greater rational good. And I would say not so much that the people who construct these scenarios are unfeeling, but that they affect lack of feeling: they pose as cold, but perhaps they are a bit warmer inside than they would like to admit.
The famous SF short story ‘The Cold Equations’ performs a similar emotional function. It is not that appreciative readers are longing to put a girl in an airlock in real life, but they want to be the sort of person who would.
Jeff R. 08.22.11 at 9:09 pm
Honestly, the scenarios that run parallel to personal desires (“Would you murder an innocent stranger to save the lives of two friends? Would you consider that a morally correct action?”) do at least as much work against a utilitarian view as the ones that run against them.
And I’m willing to consider at least the possibility that a failure to accept these types of scenarios as reductio arguments against Utilitarianism correlates strongly with psychopathy…
qb 08.22.11 at 9:09 pm
30: That summary sounds about right. It seems like these kinds of misguided studies are coming out every two or three months these days.
31: I always found the “utilitarianism is a standard of rightness, not a decision procedure” line convincing on the overdemandingness objection.
35: Ditto that. I can’t even remember what the standard reply is, if there is one. I guess the easiest thing to say, which isn’t very satisfying, is that the scale of monsterhood you’d need doesn’t describe anything that exists in the real world. But empirical solutions to theoretical problems are so blah.
37: Also blah are attempts to change the subject from morality to psychotherapy. I mean, I see what you’re saying, and you might even be right, but nothing relevant follows from it, and it kind of comes off as another non-ad-hominem ad hominem. Speaking for myself, I’m a psychopathic utilitarian who tries to come off as a big softie trying to cover for his sentimentality by coming off as a psychopathic utilitarian.
BertCT 08.22.11 at 9:23 pm
Is that really from an actual paper, or did someone just jumble up their Self-Important Words refrigerator magnet set?
bianca steele 08.22.11 at 10:10 pm
A surprising number of the thought experiments given involve dilemmas imposed on one person by an order given by an authority figure of another culture.
stubydoo 08.22.11 at 10:38 pm
Even if there does turn out to be something to this, it’s still only an insight into psychology. As a critique of utilitarianism per se it’s not even wrong.
It seems to me that more effort is expended on trying to expose issues with utilitarianism than with other moral bases (where are the trolley problems for deontologists?). If I’m right about this, I’d call it a pretty encouraging development for utilitarianism.
campfiregirls 08.22.11 at 10:50 pm
The best part of this paper was taking the quiz at the end, just like in Cosmo magazines. It seems that the moral answers are obvious though:
For even if the end is the same for a single man and for a state, that of the state seems at all events something greater and more complete whether to attain or to preserve; though it is worth while to attain the end merely for one man, it is finer and more godlike to attain it for a nation or for city-states.
or even better: http://www.imdb.com/title/tt0084726/quotes?qt0454854
In conclusion, utilitarian people are cold like Spock and totally come across as psycho. They hardly ever give out hugs because they are just too busy saving people in ridiculous circumstances.
But back to appendix A. Can someone catch me up on why, in formulating moral dilemmas, the question always has “you and five others.” Even when the question puts you in a group up front, we get, “if the enemy finds your group, all six of you will die” or some such. I’m sure there’s a good reason for this and that all of you know it while I don’t.
Yarrow 08.22.11 at 11:39 pm
John Quiggin @ 30: It’s the fat man and the trolley problem yet again, this time with the trappings of statistics.
That’s what Bartels and Pizarro are arguing against. Two of the papers they cite as bad examples use the trolley problem, and one of those explicitly identifies deviations from utilitarianism as errors; the third waffles about whether utilitarianism is the right basis for moral judgments, then (to my mind) uses it anyway. (The three papers are: Baron and Ritov 2009 (google [“Protected values and omission bias as deontological judgments”]), Greene et al. 2009 (google [“Pushing moral buttons: The interaction between personal force and intention in moral judgment”]), and Sunstein 2005 (google [cass sunstein Moral heuristics]). No stats in the Sunstein, but the other two are loaded with them.)
@ 31: Utilitarianism only makes sense as a basis for public and legal decisions
Baron and Ritov do talk about public decisions as well as trolley problems, but I’m not sure they are any more realistic: “If the government does nothing, 20 fish species will become extinct due to naturally changing water levels. Building a dam would save 20 species of fish above the dam but cause 6 species to become extinct downstream.” I think most folks who have a strong interest in preventing species extinction would find that scenario rather, er, fishy.
Naturally occurring changes in water level are going to kill 20 species in a non-geological time frame? And building a dam will save them. Also, war saves lives. (Disagreement is only “omission bias”, doncha know.)
Neil 08.23.11 at 1:04 am
Koenigs et al. Nature 2007.
Nothing to see here. Move along. The train passed through the station 4 fucking years ago.
tomslee 08.23.11 at 1:43 am
Well that’s a crass fucking comment.
John Quiggin 08.23.11 at 1:48 am
@35 The Utility Monster is just a mistake, arising from a misunderstanding of what utilitarianism means (admittedly, one shared by some utilitarians). There is no utility meter that would allow someone to verifiably claim that they are a utility monster and therefore deserve more goods than anyone else. As geo says above, utilitarianism is just consequentialism with a couple of constraints:
(i) everyone counts equally (which reinforces the a priori arguments against utility monsters)
(ii) only consequences that affect people count directly (I’ve put this clumsily, but I’m saying, for example that, say, “preserving the rule of law” can’t count as a valued consequence in itself, only as it produces better consequences for people).
Marcus Pivato 08.23.11 at 3:36 am
I don’t want to be totally pedantic, JQ, but I think that (i) and (ii) pick out a fairly large class of social welfare functions (or moral philosophies), many of which are pretty far from utilitarianism. In particular (ii) allows for moral philosophies which promote personal goals other than personal utility —e.g. they seek to maximize personal “freedom” (whatever that is) or Sen’s “functionings and capabilities”, etc. We could strengthen (ii) to
(ii’) Only consequences which affect people’s personal welfare count directly.
This would then give us an ethical framework called welfarism (assuming, of course, we have some precise definition of “personal welfare” —intuitively it means something like “well-being” or “happiness” or “preference satisfaction” or “achievement of personal life goals”, etc., but entire books have been written on exactly this question, as I’m sure you know better than anyone).
But even “welfarism” doesn’t pin down utilitarianism, since it also includes such diverse social welfare orders as the Nash (maximize the product of utilities) and the “egalitarian” social welfare order (maximize the utility of the least fortunate, sometimes called the “Rawlsian” SWO, although this is a misnomer since Rawls himself was not a welfarist).
To go from welfarism to utilitarianism, one must invoke some additional assumption about the way in which the welfare gains/losses of one person should be “traded off” against the welfare gains/losses of another person.
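For concreteness, the three welfarist social welfare orders mentioned here can be written over a profile of individual utilities u_1, …, u_n (a minimal sketch; the notation is illustrative, mine rather than the comment’s):

$$W_{\text{utilitarian}} = \sum_{i=1}^{n} u_i, \qquad W_{\text{Nash}} = \prod_{i=1}^{n} u_i, \qquad W_{\text{egalitarian}} = \min_{1 \le i \le n} u_i.$$

All three treat everyone symmetrically and count only personal welfare, but they trade one person’s gains against another’s losses in very different ways, which is exactly the further assumption needed to single out utilitarianism.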
However, I think your general point is that most of the criticisms which are made against utilitarianism (e.g. it commits us to morally “distasteful” solutions to trolley problems, etc.) are equally applicable to almost any other welfarist ethical system —and indeed, almost any consequentialist ethical system which provides systematic rules for how to trade off between the conflicting interests of different people. I agree with this statement.
(As an even more pedantic aside, the “egalitarian/Rawlsian” social welfare order is not vulnerable to Nozick’s “utility monster” criticism. But the “least fortunate person” plays a dictatorial role which is almost identical to that of a utility monster, and this, for me, is a very damaging criticism of this version of egalitarianism.)
Meredith 08.23.11 at 4:02 am
I guess I’m a bit mystified by any notion of ethics as an endeavor aimed overwhelmingly at determining the rights and wrongs of discrete situations (save these fish or those fish, run over baby or rescue it). (I don’t think contemporary philosophers (formal sense) usually think about ethics in these narrow terms, btw. I’m sure a Plato or a Kant didn’t.) Ethics engages larger questions about the nature and goals of a human life and of human lives, about what a good life would be, about the kind of habits/practices and society that might best nurture the good life (whether “society” is capable of such nurturing), and so on. Only in the context of such larger questions can rights and wrongs of discrete situations be weighed. Economics, like political science (and yes, psychology), should be intimately engaging these same questions but instead seems to take for granted, as settled, one version of utilitarianism or another. (Even though, except maybe for accepting a very general notion of “pleasure” as a good by definition, contemporary philosophers of ethics don’t give much thought to utilitarianism as a serious ethical system — at least, that’s my impression.)
Marcus Pivato 08.23.11 at 4:18 am
Getting back to the topic at hand, I think a lot of people misunderstand the significance of these trolley problems. Anyone who has studied modern physics knows that, when your intuitions violently disagree with the predictions of a theory in some thought experiment, this does not mean the theory is wrong. It could also mean that your intuitions are wrong.
Every ethical theory gives the “wrong” (i.e. counterintuitive, distasteful, “repugnant”, etc.) answers for some thought experiments. And our raw moral intuitions themselves often appear to give contradictory answers in different situations. This means we may need to rethink at least some of our raw moral intuitions. If we think about the evolutionary origins of moral intuitions, this is hardly surprising; they are rough-and-ready heuristics which evolved to keep a small tribe of hominid hunter-gatherers functioning harmoniously most of the time, so that enough of its members could survive and reproduce. Why should we expect them to provide a logically consistent moral framework? And if they don’t, then it is unreasonable to insist that a formal ethical theory (e.g. utilitarianism) must be 100% compliant with these intuitions.
CJCollucci and Barry Freed above suggested that anyone who consciously and deliberately works through some sort of consequentialist moral calculus before making an obvious ethical decision (e.g. saving a baby from a speeding car) is “psychopathic”. I would certainly say such a person is psychologically abnormal. It would also be highly abnormal if someone explicitly solved a system of equations to compute the trajectory of a baseball before catching it. No one does that —you just reach out and try to catch the ball (and maybe fail). Likewise, it would be psychologically abnormal if someone explicitly solved some multidimensional utility maximization problem every time they went shopping at the supermarket.
But I wouldn’t call such people “psychopathic”. If someone was actually able to do these computations in real time, I would actually call them “superhuman”. But of course, ordinary human beings are totally incapable of performing such computations in real time, and lack most of the necessary information anyways, so anyone trying to solve ethical dilemmas or catch baseballs through brute computation would actually be totally dysfunctional. Paralyzed with indecision. Ineffectual. Pathetic.
But “psychopathic” means (roughly) a person without any desire to behave morally in the first place. Someone who tries (with heroic futility) to work out the ethical calculus in real time is the opposite of psychopathic. In fact, they may, in some sense, be more “moral” than someone who just reacts instinctively, without even thinking about the consequences of their actions.
In ordinary social situations, we have no choice but to operate on the basis of our ethical intuitions, instincts, heuristics, or social conventions, and hope that they produce a morally satisfactory outcome most of the time. But governments are not people, and they do not operate on human timescales. Governments (usually) have the luxury of long periods of time for their decisions, and command informational and computational resources vastly huger than any individual. Therefore, governments really can (and should) attempt to work through some sort of consequentialist moral calculus and try to identify the policy which yields the best long-term outcome. Of course, even for a government, this ideal is not perfectly attainable: reality is complex and unpredictable, and the available information is incomplete and ambiguous. But a government which decides the fate of millions of people on the basis of folksy ethical intuitions and simple heuristics is generally a bad government.
geo 08.23.11 at 4:51 am
Marcus P.: CJCollucci and Barry Freed above suggested that anyone who consciously and deliberately works through some sort of consequentialist moral calculus before making an obvious ethical decision (e.g. saving a baby from a speeding car) is “psychopathic”. … In ordinary social situations, we have no choice but to operate on the basis of our ethical intuitions, instincts, heuristics, or social conventions, and hope that they produce a morally satisfactory outcome most of the time.
J.S. Mill, Utilitarianism, ch. 2:
“Again, defenders of utility often find themselves called upon to reply to such objections as this- that there is not time, previous to action, for calculating and weighing the effects of any line of conduct on the general happiness. This is exactly as if any one were to say that it is impossible to guide our conduct by Christianity, because there is not time, on every occasion on which anything has to be done, to read through the Old and New Testaments. The answer to the objection is, that there has been ample time, namely, the whole past duration of the human species. During all that time, mankind have been learning by experience the tendencies of actions; on which experience all the prudence, as well as all the morality of life, are dependent. People talk as if the commencement of this course of experience had hitherto been put off, and as if, at the moment when some man feels tempted to meddle with the property or life of another, he had to begin considering for the first time whether murder and theft are injurious to human happiness. Even then I do not think that he would find the question very puzzling; but, at all events, the matter is now done to his hand.
It is truly a whimsical supposition that, if mankind were agreed in considering utility to be the test of morality, they would remain without any agreement as to what is useful, and would take no measures for having their notions on the subject taught to the young, and enforced by law and opinion. There is no difficulty in proving any ethical standard whatever to work ill, if we suppose universal idiocy to be conjoined with it; but on any hypothesis short of that, mankind must by this time have acquired positive beliefs as to the effects of some actions on their happiness; and the beliefs which have thus come down are the rules of morality for the multitude, and for the philosopher until he has succeeded in finding better. That philosophers might easily do this, even now, on many subjects; that the received code of ethics is by no means of divine right; and that mankind have still much to learn as to the effects of actions on the general happiness, I admit, or rather, earnestly maintain. The corollaries from the principle of utility, like the precepts of every practical art, admit of indefinite improvement, and, in a progressive state of the human mind, their improvement is perpetually going on.
But to consider the rules of morality as improvable, is one thing; to pass over the intermediate generalisations entirely, and endeavour to test each individual action directly by the first principle, is another. It is a strange notion that the acknowledgment of a first principle is inconsistent with the admission of secondary ones. To inform a traveller respecting the place of his ultimate destination, is not to forbid the use of landmarks and direction-posts on the way. The proposition that happiness is the end and aim of morality, does not mean that no road ought to be laid down to that goal, or that persons going thither should not be advised to take one direction rather than another. Men really ought to leave off talking a kind of nonsense on this subject, which they would neither talk nor listen to on other matters of practical concernment. Nobody argues that the art of navigation is not founded on astronomy, because sailors cannot wait to calculate the Nautical Almanack. Being rational creatures, they go to sea with it ready calculated; and all rational creatures go out upon the sea of life with their minds made up on the common questions of right and wrong, as well as on many of the far more difficult questions of wise and foolish. And this, as long as foresight is a human quality, it is to be presumed they will continue to do. Whatever we adopt as the fundamental principle of morality, we require subordinate principles to apply it by; the impossibility of doing without them, being common to all systems, can afford no argument against any one in particular; but gravely to argue as if no such secondary principles could be had, and as if mankind had remained till now, and always must remain, without drawing any general conclusions from the experience of human life, is as high a pitch, I think, as absurdity has ever reached in philosophical controversy.”
zamfir 08.23.11 at 5:15 am
Marcus says: But “psychopathic” means (roughly) a person without any desire to behave morally in the first place. Someone who tries (with heroic futility) to work out the ethical calculus in real time is the opposite of psychopathic. In fact, they may, in some sense, be more “moral” than someone who just reacts instinctively, without even thinking about the consequences of their actions.
The authors make this point explicitly in the discussion section. Their main conclusion is that trolley problem tests are not capable of distinguishing between calculators who value lives high, and psychopaths who don’t care much in the first place. Making such tests dubious as measures of some sort of moral capability.
Chris Bertram 08.23.11 at 6:02 am
Given the quality of discussion in this thread, I kind of wish I hadn’t drawn attention to this paper. Lots of comment here along the lines (a) the study is inconclusive (b) it’s just not trooo! More interesting, I think, would be the question of what conclusion we ought to draw if it were true: options (a) reject U? (b) revise our view of apparently bad dispositions (c) something else …..
Various people in the thread seem happy making assertions to the effect that consequentialism is a more rational or “scientific” moral system than its rivals, that the moral capacities we have are just a matter of what was functional for our ancestors etc etc. None of which are uncontested within philosophy. (Relatedly, I think Selim Berker’s “The Normative Insignificance of Neuroscience”, PPA 37:4 is worth a look.)
John Q seems happy to say that the utility monster objection is a mistake, but then seems to identify this “mistake” as having to do with the lack of a “utility meter” giving verifiable readings. If we’re testing our fundamental normative principle in a thought experiment, though, such a defect in our epistemic apparatus doesn’t seem relevant. If it is relevant though, it probably sinks a good deal more than the utility monster, and specifically threatens any theory that depends on interpersonal comparisons of welfare …. I’m not sure that you want to go back there, John.
Do governments (in peacetime) “face trolley problems all the time”? They certainly take decisions involving statistically foreseeable deaths all the time, but that isn’t quite the same, is it?
Chris Bertram 08.23.11 at 6:07 am
Incidentally, I find the faith that some commenters have in the capacity of governments to work out, shape, and pursue the long-term welfare-optimal policy rather touching (or possibly, rather alarming).
campfiregirls 08.23.11 at 6:39 am
I like the justifications of the popularity and expedience of whatever morality, and I appreciate that such things are a uniquely human condition and experience, but the whole paper seems mathematically mundane. Of course economists would agree with it. More value is better than less value. If you value human life, more is better than less (perhaps). That is, unless you live in my country and you think darker people are worth less than lighter people, or feminine people are worth less than more masculine ones. In such a case you would prefer to have less of those types, and immigrants, lest your overall human value decrease. (I like that the “moral” dilemmas take into account the idea of the foreign or perhaps enemy of the state as @41 bianca notes). Pawns are worth less than the other pieces even though they live on the same world. And yes, a psychopath would sacrifice as many pawns as a normal person if he wanted to win the game.
But I don’t care about any of that.
Just give me a link to a non-paywalled paper explaining why “one out of six” is important in morality equations (or surveys).
I’ll buy the assertion that someone who maximizes impersonal value is more utilitarian than one who maximizes personal value, and therefore said person has a chance of being predicted more accurately.
Hannes Nykänen 08.23.11 at 7:02 am
I have to admit that I simply cannot take utilitarianism seriously. Intellectually speaking it is ridiculous and morally speaking a disaster. That it has been discussed so much is simply a matter of collective pressure: it is socially impossible to say, in a “serious” debate, that defending U is simply a sign of lack of moral sense. In discussions on music, claims that testify to a lack of musical ear are ignored – which is why people who lack musical ear do not dare to make any claims about music. Not so in moral philosophy. Here the outrageous claims are criticised, creating the impression that moral understanding is rational – or then irrational – while it is neither.
In short: utilitarianism is a moral philosophy for the morally insensitive and a calculus for the mathematically ungifted.
Jeremy Bowman 08.23.11 at 8:40 am
It’s about time everyone woke up to the fact that people who disagree morally don’t just think their opponents are mistaken, they think they’re downright immoral. It is thus quite understandable that non-consequentialists would see consequentialists as psychopaths.
Speaking as a consequentialist, I see non-consequentialists as narcissists.
They’re narcissistic, because for a non-consequentialist moral deliberation is a matter of asking: “Am I acting with a good will?” “Is my soul unblemished?” — In other words, “Do I look OK?”
It’s hardly surprising that these moralistic attitudes guide philosophically illiterate psychologists’ “studies”, but philosophers should be working on a higher plane.
bad Jim 08.23.11 at 8:45 am
As an American, I can unreservedly proclaim that we need more trolleys. Why debate when you can experiment?
Niall McAuley 08.23.11 at 10:06 am
I never heard of the utility monster before. It sounds like some versions of God.
A quick Google says this is not an original thought.
Fall in queue 08.23.11 at 10:18 am
@John Quiggin:
I think that most people would be willing to grant that something like your utilitarian version of (ii) is at least a morally salient property (welfare, or some such), although there would certainly be disagreement over whether it’s the only one, or the most important one.
And I think many (most?) people would also want to say that, in some sense, everyone’s welfare is of equal value. What seems most controversial in utilitarianism, in my view, is the idea of aggregation: the idea that there is some number N of mild headaches that is the moral equivalent of the most excruciating pain.
Note that if you combine this with strict welfarism, you seem in danger of having to admit that there is a number N of mild headaches that is the moral equivalent of the most horrific torture; but my point here is that aggregation by itself seems bizarre, even abstracting from strict welfarism.
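To make the aggregation worry concrete (a sketch with my own symbols, not the commenter’s): if a mild headache counts as a disutility of ε > 0 and the worst torture as a finite disutility T, then simple summation gives

$$N\varepsilon > T \quad\Longleftrightarrow\quad N > T/\varepsilon,$$

so there is always some finite number N of mild headaches whose prevention outweighs preventing the torture. Rejecting that conclusion means denying that the two harms are commensurable on one scale, or giving up purely additive aggregation.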
Jeremy Bowman 08.23.11 at 10:44 am
@Fall in queue:
“What seems most controversial in utilitarianism, in my view, is the idea of aggregation: the idea that there is some number N of mild headaches that is the moral equivalent of the most excruciating pain.”
I agree, but that only applies to hedonistic utilitarianism. Once we escape from the “Cartesian Theater” and look at agents’ preferences/choices in action instead of experiences, aggregation is only possible where preferences are commensurable.
Utilitarianism will have another lease of life when it can leave hedonism behind, with its silly hedonic calculus and the inevitable “sacrifice of the one for the many”. Utilitarianism should embrace the insights of the later Wittgenstein and pragmatist thinkers such as Daniel Dennett and Donald Davidson.
Chris Bertram 08.23.11 at 11:00 am
Jeremy Bowman: Utilitarians long since made the move from experiences, to desires, to preferences, to rationally-informed preferences, etc etc …. The trouble is that each redefinition of utility or welfare escapes from one set of objections at the cost of falling foul of others. There just may not be a unitary conception of well-being that will do all the things people want it to.
John Quiggin 08.23.11 at 11:50 am
On the utility monster, it’s not that we lack the technical ability to construct a meter, it’s that a utility monster is a nonsense concept just like a “justice monster” or a “rights monster”.
John Quiggin 08.23.11 at 11:52 am
@Fall in queue: If I suffered from chronic headaches and was offered a cure that involved a few seconds of excruciating pain, I’d take it. How about you?
Jeremy Bowman 08.23.11 at 11:52 am
@Chris Bertram:
“Utilitarians long since made the move from experiences, to desires, to preferences, to rationally-informed preferences, etc etc …”
They claim to have made the move, but they haven’t really made it at all. Almost everyone still habitually thinks of mental states such as desire as experiences — something whose strength is a matter of “vividness” (as opposed to something that guides choice in action — hence examples of “extreme pain” or “great enjoyment” rather than actual avoidance). As Wittgenstein realised, what is needed to escape that habit is “therapy” — quite a long course of therapy!
Jeremy Bowman 08.23.11 at 11:55 am
We are only able to recognize that utilitarian-style “metrics” are mistaken because they conflict with what we know about actual choices in action. What else could we appeal to to defend our claim they are mistaken?
Yarrow 08.23.11 at 12:19 pm
Chris Bertram @ 53: More interesting, I think, would be the question of what conclusion we ought to draw if [the study] were true: options (a) reject U? (b) revise our view of apparently bad dispositions (c) something else …..
It seems to me that something like these “mistakes” about trolley problems or killing 6 fish species to save 20 would be predicted by a rule consequentialism based in part on the idea that our judgment is very likely to be biased in our own favor.
If it will be for the best that we all follow certain rules, even when our (known to be fallible) judgment says that in a particular case the outcome will not be for the best, then people ought to follow the rules even in hypothetical cases built on the assumption that we have absolutely certain knowledge of the consequences. That is, if we have the moral duty to disbelieve our own certain beliefs when those beliefs conflict with our rules, then it will require training to suspend that duty in purely hypothetical situations. Some philosophers are trained that way; but in real life, how many would actually push someone off a bridge, no matter how firmly they believe it will save five lives?
Andrew F. 08.23.11 at 12:50 pm
To test for psychopathy, the authors use an “adapted” 30-question test from a yet to be published source. They find a significant numerical correlation between higher scores on this adapted test and utilitarian answers to their dilemmas.
The problem is that the differences in psychopathy scores being captured in the regression are not necessarily meaningful from a psychopathology perspective. In fact they’re likely to be not meaningful.
The authors begin by claiming that the endorsement of utilitarian solutions to a set of commonly-used moral dilemmas correlates with a set of psychological traits that can be characterized as emotionally callous and manipulative—traits that most would perceive as not only psychologically unhealthy, but also morally undesirable.
But they end by noting that their results do not establish that any of the participants actually exhibit pathologies. In other words, their results do not actually establish that any of the subjects is in fact psychologically unhealthy.
Sometimes “Machiavellian” behavior, or behavior that has a psychopathic quality to it, can be healthy and compassionate: the spouse who skillfully bolsters his partner’s confidence before a trying event; the parent who deceptively exhibits an air of calm certainty to reassure a scared child about to undergo a surgical procedure; and so forth. But someone who believes that doing either of those things is good will score higher in Machiavellianism, since they disagree with the test statement “honesty is always the best policy” along with “there is never any excuse for lying to someone.”
In fact, for any given personality disorder, some traits exhibited by sufferers of that disorder will be quite healthy to possess. The question is one of “too much” or “in the wrong combination.” Self-regard for example can be healthy – and yet we might find that those with healthy self-esteem scored higher on tests for narcissism than those with unhealthy low self-confidence.
The real question on tests like these is whether the subject falls outside of a given range – they’re not meant to be used as establishing a precise continuum of psychopathic or machiavellian tendencies. Claiming that utilitarians score “higher” on tests for psychopathy is of no more use than claiming that confident individuals score “higher” on tests for narcissism. The claim is probably true, but it tells us nothing about the desirability of confidence – or, in the authors’ case, about the desirability of utilitarianism.
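A toy simulation makes Andrew F.’s statistical point concrete (my own sketch with hypothetical numbers, assuming numpy and scipy; the trait scale and cutoff are invented, not the paper’s):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n = 200
CLINICAL_CUTOFF = 75  # hypothetical threshold on a 0-100 trait scale

# Hypothetical Machiavellianism-style scores, all well inside the
# "non-clinical" range (nobody exceeds 70 after clipping).
trait = rng.normal(loc=45, scale=8, size=n).clip(0, 70)

# Utilitarian-response count on 10 sacrificial dilemmas, weakly driven
# by the same trait plus noise.
utilitarian = (0.08 * trait + rng.normal(0, 1.2, size=n)).round().clip(0, 10)

r, p = stats.pearsonr(trait, utilitarian)
print(f"correlation r = {r:.2f}, p = {p:.2g}")   # reliably positive
print(f"max trait score = {trait.max():.1f}, cutoff = {CLINICAL_CUTOFF}")
```

The regression finds a real correlation, yet no one in this made-up sample is anywhere near the clinical range, which is the gap between “scores higher on a psychopathy inventory” and “is psychologically unhealthy”.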
Chris Bertram 08.23.11 at 1:00 pm
@JohnQ
_a utility monster is a nonsense concept just like a “justice monster” or a “rights monster”._
I’m not finding your “just like” particularly enlightening here John. My paradigm notion of a “nonsense concept” is things like “colourless green idea” or “square triangle”, but I can’t see that “utility monster” is like them at all. Utilitarianism is a maximizing doctrine, it is indifferent to distribution as such. Given that, the idea of a being (or a person) with properties such that, for each additional unit of resource, the maximizing choice is always to give them that unit, is not self-contradictory or incoherent. Ditto, and relatedly, the case where transferring from A to B always results in a greater gain to B than loss to A, so A’s interests just get swallowed by B’s. At least, that’s so on the face of things, but I’m very open to rebuttals.
CJColucci 08.23.11 at 1:31 pm
I never heard of the utility monster before. It sounds like some versions of God.
Let’s say (obviously we have no way of verifying anything like this in the real world, but this is a thought experiment) that Adolph Hitler got ENORMOUS pleasure out of killing Jews. Let’s say that his capacity for such pleasure was so superhumanly great that it outweighed the human-level pain felt by the dying Jews and their friends, relatives, and even the world at large. This hypothetical Hitler would be a utility monster, and if we accept as true for purposes of discussion that hypothetical Hitler’s pleasure at killing Jews was so great that it really did outweigh the pain caused to others by killing all those Jews, what does utilitarianism have to say about this?
bianca steele 08.23.11 at 1:50 pm
“Utility Monster”: Ah, that explains Jeremy Draws a Monster (Jeremy draws a monster and, basically, it asks him for all sorts of consumer goods before he decides to play with the neighborhood kids instead, it’s really quite a good book). I may make a collection of utilitarianism-related picture books (sorry for the self promotion).
I assume I’m not the only one to notice that under intercorrelations between traits tested for, “machiavellian” correlates relatively highly with “socially desirable.”
Cranky Observer 08.23.11 at 3:14 pm
CJColucci @1:31,
There is no need to risk invocation of Godwin’s law. Right up to the last sentence of the third paragraph (“according to the doctrine of utilitarianism”) which is presumably debatable, the Wikipedia definition of utility monster describes perfectly the behavior of those Prof. DeLong describes as the Princes of Wall Street. I’m confused as to why there is even a debate over whether this phenomenon exists.
Cranky
geo 08.23.11 at 4:43 pm
Chris B: Utilitarianism is a maximizing doctrine, it is indifferent to distribution as such. … There just may not be a unitary conception of well-being that will do all the things people want it to.
Again, Mill in Utilitarianism had an answer to this:
It is quite compatible with the principle of utility to recognise the fact, that some kinds of pleasure are more desirable and more valuable than others. It would be absurd that while, in estimating all other things, quality is considered as well as quantity, the estimation of pleasures should be supposed to depend on quantity alone.
If I am asked, what I mean by difference of quality in pleasures, or what makes one pleasure more valuable than another, merely as a pleasure, except its being greater in amount, there is but one possible answer. Of two pleasures, if there be one to which all or almost all who have experience of both give a decided preference, irrespective of any feeling of moral obligation to prefer it, that is the more desirable pleasure. If one of the two is, by those who are competently acquainted with both, placed so far above the other that they prefer it, even though knowing it to be attended with a greater amount of discontent, and would not resign it for any quantity of the other pleasure which their nature is capable of, we are justified in ascribing to the preferred enjoyment a superiority in quality, so far outweighing quantity as to render it, in comparison, of small account.
In admitting “quality” of pleasures, Mill is abandoning “a unitary conception of well-being.” By vesting authority in judgment rather than measurement, Mill is acknowledging that utilitarianism cannot do “all the things people want it to” — if one of those things is to deduce morality from self-evident first principles. He is asserting, in advance of Rawls and Rorty, the priority of democracy to philosophy.
To those who still feel the attraction of reasoning metaphysically about morals, Mill had yet another answer in Utilitarianism:
I might go much further, and say that to all those a priori moralists who deem it necessary to argue at all, utilitarian arguments are indispensable. It is not my present purpose to criticise these thinkers; but I cannot help referring, for illustration, to a systematic treatise by one of the most illustrious of them, the Metaphysics of Ethics, by Kant. This remarkable man, whose system of thought will long remain one of the landmarks in the history of philosophical speculation, does, in the treatise in question, lay down a universal first principle as the origin and ground of moral obligation; it is this: “So act, that the rule on which thou actest would admit of being adopted as a law by all rational beings.” But when he begins to deduce from this precept any of the actual duties of morality, he fails, almost grotesquely, to show that there would be any contradiction, any logical (not to say physical) impossibility, in the adoption by all rational beings of the most outrageously immoral rules of conduct. All he shows is that the consequences of their universal adoption would be such as no one would choose to incur.
geo 08.23.11 at 4:43 pm
Sorry, the paragraph beginning “If I am asked … ” is also from Mill and should have been in italics.
piglet 08.23.11 at 4:53 pm
Chris Bertram 08.23.11 at 5:29 pm
Sorry geo, but you’re confused. The higher/lower pleasure distinction is orthogonal to the distinctions among experiential and desire or preference-based conceptions of utility.
geo 08.23.11 at 5:31 pm
Chris: I guess I am confused. Could you explain?
Jeremy Bowman 08.23.11 at 6:11 pm
Almost everyone recognizes that it would be unjust to sell someone into slavery so that a large number of other people could enjoy watching gladiatorial combat. But almost no one thinks it unjust to tax a rich person so that a large number of other people might avoid abject poverty.
How do we recognize these things? Whence the near-universal agreement on the difference?
We just imagine how absolutely determined we would be to avoid getting sold into slavery, versus how relatively un-determined we would be to attend a gladiator show, even if we wanted to.
By contrast, we imagine how relatively un-determined we would be to avoid paying taxes if we were rich, even if we didn’t want to pay tax, versus how absolutely determined we would be to avoid abject poverty.
It’s no use appealing to abstract rights here, because we still have to make decisions about which rights to observe, and which rights trump which other rights. To make those decisions, we appeal to the choices we would make in action if we could. In other words, we appeal to how determined we would be to achieve this or avoid that, just as the preference utilitarian does above.
As soon as utilitarians stop appealing to pleasures and pains (i.e. to vividness of experience) and instead appeal to the choices agents would make if they could, there is much closer agreement between their concept of justice and that of people who appeal to abstract rights.
actio 08.23.11 at 6:17 pm
Like Chris #69, I don’t understand JohnQ’s dismissal of the utility monster thought experiment as nonsense. Utilitarianism is the branch of consequentialism that aims to maximize the sum total welfare in the world. Any version of utilitarianism includes a theory of welfare. The dominant versions of the theory accept that intra and inter personal welfare comparisons make sense in principle. Given all those things, the idea of the utility monster is obviously coherent. As such it has relevance as a thought experiment against the theory and tempt a move to directly distributive sensitive versions of consequentialism.
Henri Vieuxtemps 08.23.11 at 6:30 pm
This is all very confusing. I think what we call “moral intuition” did start, centuries ago, as a (sort of) utilitarian calculation, when primitive tribes needed some simple rules to keep order, to make the tribe bigger and stronger, and to slaughter competing tribes. The rules were propagated through indoctrination, generation after generation, and now we have this intuition. If you want your children to feel that it’s natural to throw fat guys off bridges, you better start telling it to them early, and make sure it’s in children books, school curriculum, and on MTV.
Chris Bertram 08.23.11 at 6:39 pm
geo: Mill is trying, in his late Victorian way, to revise the understanding of utility so as to admit quality as well as quantity of pleasure. The distinction I’m alluding to concerns whether we understand utility as some form of experience (could be high, could be low) or as, say, the satisfaction of the desires that an agent has, or in terms of some preference-satisfaction idea (including revealed preference, so need have no reference to the subjective feelings of the agent at all).
Slex 08.23.11 at 8:00 pm
Utility is a necessary requirement for every ethical theory. In a universe of only stones and rocks no ethical theory can be applied. You need to have sentient beings for that. All deontological theories, ultimately, come down to utility, too.
There are reductio ad absurdum hypothetical examples which show the unacceptability of utilitarianism, but such examples can be given also in the opposite direction.
And these examples do not even need to be so far-fetched.
Provided that stealing is always wrong, why not just shoot someone who has stolen a can of beer, instead of having him sentenced to a month in jail? Because we know that the punishment wouldn't fit the crime. And this knowledge is invariably the result of interpersonal comparison of utility.
dsquared 08.23.11 at 8:24 pm
On the utility monster, it's not that we lack the technical ability to construct a meter, it's that a utility monster is a nonsense concept just like a “justice monster” or a “rights monster”.
Oh absolutely nonononono. Maybe in Nozick’s original version, but Derek Parfit in Reasons & Persons demonstrates that for seemingly plausible assumptions, there is an actually existing Utility Monster, and it’s called “Future Generations”. One of the most important constraints on the equation in the Stern Report is that you have to parameterise it so as not to create Utility Monsters.
C. Trzcinka 08.23.11 at 8:27 pm
Did anyone actually read the article? It's more a critique of psychologists who think that any deviation from utilitarianism is a moral error than of utilitarianism itself. But either way it is a “genetic fallacy”, where you argue that “bad people believe X, therefore X is bad”. The findings themselves are not really news. The study concludes that those who are callous and manipulative are more willing to sacrifice one innocent life to save five people, which is hardly surprising. They should have shown the multivariate results, which might be a bit more interesting. Psychologists have also found that moral philosophers are among the least tolerant of those with other moral views – as much of the discussion above demonstrates.
geo 08.23.11 at 9:20 pm
seemingly plausible assumptions
This is what I came up with from a brief Google search on Parfit's Utility Monster. It seems to be a quote from Parfit:
For any possible population of at least ten billion people, all with a very high quality of life, there must be some much larger imaginable population whose existence, if other things are equal, would be better, even though its members have lives that are barely worth living
Perfectly plausible. Of course, now that we know animals can feel pleasure and pain, it’s not even necessary that the “members” of the “larger imaginable population” be human. A gazillion vibrantly happy earthworms or bacteria would be a more desirable state of affairs than ten billion happy people.
Isn’t this a reductio ad absurdum less of utilitarianism than of the ideal of formalizing moral argument? By “formalizing,” I mean “doing without appeals to the imagination.” Given drastically different, apparently irreconcilable intuitions, aspirations, tastes, ways of life, etc., the only way forward is to increase the parties’ experiences in common. And the most efficient way to communicate experience is through the products of the imagination, where experience is at its most intense and concentrated.
Chris Bertram 08.23.11 at 9:26 pm
dsquared @83, that’s a brilliant point.
John Quiggin 08.23.11 at 9:32 pm
@Chris and dsquared – it is a brilliant point. Of course, it’s a real problem and not just for utilitarians.
Slex 08.23.11 at 10:55 pm
Am I the only one who doesn't get the comparison between the Utility monster and the current–future generations trade-off? From what I understand, the Utility monster is capable of experiencing utility which always exceeds and compensates for the disutilities experienced by others. If we try to put it in the context of intergenerational trade-offs, this will mean that the current generation, let us call it generation “t”, will have to make a lot of sacrifices in its standard of living in order to save resources for future generations (presumably so many generations that they become the Utility monster). But the same will apply also to generation t+1, because presumably there will be many generations after them. Then the same will apply to generation t+2, t+3 and so on.
So far it looks like everything is in line with the Utility monster hypothesis, because we have sacrifices made by current generations in favour of the many future generations. But in fact it is not a good comparison, because in the original example we have an existing Utility monster which consumes the increases in utility. In the intergenerational trade-off example we have only the sacrifices part, not the part with the increase in the experienced utility of the Utility monster, because the Utility monster is never actualized; it exists only as a potentiality.
Marcus Pivato 08.23.11 at 11:20 pm
@Dsquared: You are right that the “tyranny of future generations” is a major ethical problem. But I don’t recall Parfit discussing it in R&P. If I recall correctly, the closest thing to “future generations” in R&P is a special case of a much more general paradox he discusses, namely the Repugnant Conclusion.
As I'm sure most CT readers already know, the Repugnant Conclusion asks you to choose between the survival of two possible future societies. One society is relatively small, but relatively happy and affluent (say, 5 billion people whose quality of life is roughly comparable to that of the middle class in Western Europe). In the other society, everyone has a life of grinding misery, and finds continued survival just barely preferable to oblivion. But the other society is astronomically large. Say, 100 billion people. Or 10 trillion. Parfit's observation is that, if we make “astronomically large” large enough, then utilitarianism (or anything vaguely similar) will choose the survival of the astronomically large population of miserable people over the small population of happy people.
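A back-of-the-envelope version of the aggregation being relied on here (an illustrative sketch only; the welfare numbers are assumptions, not Parfit's):

# Toy total-utility comparison for the Repugnant Conclusion (welfare units are arbitrary).
happy_world = 5_000_000_000 * 100.0       # 5 billion lives at a high level of welfare
vast_world = 100_000_000_000_000 * 0.01   # 100 trillion lives barely worth living
print(vast_world > happy_world)           # True: simple aggregation prefers the vast, miserable world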
Parfit plays with several variations of this experiment. In one variation, the astronomically large group is spread through time, rather than through space. In other words, you must choose between two futures: in one future, there is a single generation of 5 billion happy people, who live a normal lifespan and are then instantaneously and painlessly snuffed out (say, by a supernova explosion). In the other future, there are a million generations, each with 5 billion miserable people. I think perhaps this is the version of the “future generations” paradox to which DSquared refers.
Note, however, that the Repugnant Conclusion is not a version of Nozick’s Utility monster. In fact, it is almost exactly the opposite sort of problem. In Nozick’s Utility Monster paradox, we must sacrifice the happiness (or even the lives) of a very large number of people to increase the still greater happiness of one utility monster (or, generalizing, a small number of utility monsters). In Parfit’s Repugnant Conclusion, on the other hand, we find ourselves compelled to sacrifice the interests of a small number of relatively happy people for the sake of a much larger number of unhappy people.
Indeed, suppose you give the “wrong” answer to the Repugnant Conclusion, and insist that we should choose the world of 5 billion happy people over the world with 100 quadrillion unhappy people. Then your 5 billion happy people are playing a role dangerously close to that of Nozick's Utility Monster. Indeed, it seems as though any attempt to modify utilitarianism to avoid the Utility Monster makes it more susceptible to the Repugnant Conclusion, and any attempt to steer clear of the Repugnant Conclusion makes a modified utilitarianism more susceptible to the Utility Monster.
Of course, the Tyranny of Future Generations is a much more general problem than this particular version of the Repugnant Conclusion. The basic problem is that the interests of 10 000 generations of future people plausibly “outweigh” the needs of the present generation, according to any reasonable weighting system.
Also, as John Quiggin observes, the Repugnant Conclusion (and the Tyranny of Future Generations) are not just problems for utilitarians. They are problems for pretty much any ethical theory where the needs of a very large group of people can outweigh the needs of a much smaller group of people. That is: pretty much any ethical theory worth the name. And unlike the various Trolley Problems, in the Repugnant Conclusion, the choice is not one between action (pull the lever, push the fat man) and inaction (do nothing). In the Repugnant Conclusion, you must choose to either sacrifice the small group for the large group, or sacrifice the large group for the small group. You don't get to pretend you can maintain deontologically “clean hands” by just looking the other way and doing nothing.
Fall in queue 08.23.11 at 11:55 pm
JQ @ 63:
I think you misunderstand my objection (not your fault: I left something important out), but I think your reply highlights something useful.
The issue I have in mind doesn't have to do with what one would choose for oneself, but with whether a state of affairs that contains N headaches, possibly spread out across different individuals, is morally equivalent to a state of affairs that contains a single individual in excruciating pain. Any version of utilitarianism, hedonistic or otherwise, must allow for at least a partial ordering of states of affairs in this way, and so for the aggregation of utilities across individuals. My question is whether such aggregation makes sense.
Now, John's response seems to highlight something important: there is a special case in which aggregation seems to make sense, namely, when we are talking about a single individual throughout.
This is not utilitarian aggregation, which is supposed to be blind to questions of identity. But it might be that in certain contexts, most obviously public policy, utilitarianism makes sense because in thinking about them it makes sense to adopt a sort of collective point of view.
E.g., the administrators of a public health-care system are entrusted by all of us with a given amount of resources, and are bound by some sort of contract to use them in a certain way. What way is that?
Well, perhaps in thinking about that it might make sense for us to want to adopt a collective point of view (so a point of view blind to questions of identity), and so endorse a form of aggregation, analogous to the one that we use in the intra-personal case. As a result, we might want to say that our implicit instructions to our trustees are “maximize QALYS”, or something along these lines.
But notice what is going on here. The obligation the trustees are under derives from the fact that they are our trustees, and so bound by a promise; it does not derive from the intrinsic value of a state of affairs that is judged better by the calculus we have implicitly adopted.
So I would say the opposite of what qb said earlier: utilitarianism might be okay as a decision procedure (in certain circumscribed contexts), but not as a theory of right action.
John Quiggin 08.24.11 at 12:02 am
FiQ that’s exactly my view. Utilitarianism makes sense as the basis for social decision procedures, not as a theory of right actions for individuals.
Tony Lynch 08.24.11 at 12:43 am
It seems to me that both the initial article and CT have missed the point here. The point is not to engage in the usual debates about utilitarianism, but to place utilitarianism into relationship with economics.
The relationship that matters here isn't whether or not economists treat “economic agents” as utility maximizers or whatever. It is, rather, how economists often tend to use utilitarian considerations to LEGITIMATE the putative motives (self-regarding, not other-regarding) of those economic agents. Thus: it is OK – indeed GOOD – to be a self-regarding economic agent because then most are better off than they would be otherwise.
Now the trouble: this legitimation undermines itself. For if it is really true that utilitarian morality says to economic agents – be self-regarding, not other-regarding (for when one is the latter then one is (i) not acting to maximise general utility, and so (ii) wasting time, or (iii) being positively malicious) – then that other-regardingness which is necessary even to concern oneself with the supposedly general utility gains of economic activity has disappeared. It is motivationally unavailable. Thus “greed is good” collapses into simply greed…
The utilitarian legitimation works on the basis of an other-regarding concern it tells agents to eliminate from their motivational set…
Brett Bellmore 08.24.11 at 1:11 am
This thread is so silly it’s drawn me out from lurking.
The problem with the utility meter isn’t that we’ve exhausted the world’s supply of love sensitive copper, from which the windings must be made, manufacturing love charms. It’s that there’s no such thing as a utility meter, because “utility” can’t be objectively quantified.
The problem with doing utilitarian calculations isn't that you can't complete them in real time because they're an NP-complete problem. Or even that we haven't yet settled whether utility is a scalar or a vector quantity. It's that the very idea of doing said math is nonsensical.
The problem, in short, is that utilitarianism is a perfectly satisfactory metaphor, which can never actually settle any moral controversy, because it’s only a metaphor, and nothing more.
Jeremy Bowman 08.24.11 at 7:23 am
@Brett Bellmore:
The fact that utility can’t be “objectively quantified” is actually the saving grace of utilitarianism. It entails that there is in effect a “lexical ordering” of different kinds of preference. For example, a normal person’s preference to avoid getting sold into slavery is “on a different scale” from, and therefore incommensurable with, the preference to attend a gladiator show. The former always trumps the latter. So there is no number of people, however large, whose preferences to attend a gladiator show outweigh the preference of a single person not to be sold into slavery.
That purely factual insight about human decision-making informs the thought that people have a “right” not to be sold into slavery, but do not have a “right” to attend gladiator shows. As a preference utilitarian, I submit we cut out the middleman of rights-talk and just appeal directly to human decision-making.
Of course, we can still compare numbers of agents when the same kind of preference is at stake. If either one person will die or twenty will die, it is less bad if only the one person dies.
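A minimal sketch of how such a lexical ordering could be made explicit (an illustrative addition; Bowman gives no formalisation, so the tier scheme and comparison rule here are assumptions):

# Tier 0 = vital interests (e.g. not being enslaved); tier 1 = amusements (e.g. gladiator shows).
# Bundles are compared tier by tier, so no count of satisfied lower-tier preferences
# ever outweighs a single higher-tier one.
def better(a, b):
    """a, b: dicts mapping preference tier -> number of preferences satisfied."""
    for tier in sorted(set(a) | set(b)):      # most important tier first
        if a.get(tier, 0) != b.get(tier, 0):
            return a.get(tier, 0) > b.get(tier, 0)
    return False

# One person spared slavery beats any number of satisfied spectators.
print(better({0: 1, 1: 0}, {0: 0, 1: 10**9}))  # True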
Slex 08.24.11 at 7:57 am
@Brett Bellmore
We can't objectively quantify utility, but human beings are good detectors in some situations. Likewise, we don't have an objective criterion for when a few grains become a pile, or when a group of people becomes a crowd, but we know one when we see it.
Problems of that type are not exclusive to utilitarianism. Deontology would require moral knowledge from agents, and in this respect it is understandable that we distinguish between the actions of a child and an adult. Obviously, a man in his thirties is an adult and a seven-year-old boy is a child, but somewhere in between there is an area of ambiguity. And we just put an arbitrary boundary at 18 years or 21 years for practical, that is utilitarian, reasons.
Going back to utility comparison, it would be wrong to punish someone who has stolen a can of beer with 30 years in prison, because it would be a disproportionately severe punishment. As for whether he should spend a month in jail, or 10 days, or perhaps just get probation, our utilmeter breaks and can't give a definite answer.
Chris Bertram 08.24.11 at 8:32 am
@FiQ, @JohnQ
That seems both superficially plausible and quite wrong to me for two reasons.
1. The “trusteeship” analogy doesn’t fit at all with utilitarianism’s universalism. What the analogy supports is, at best, the idea that those running a state should employ some version of CBA with respect to the effect of policy decisions on the well-being of _citizens_. But genuine utilitarianism gives no weight to people being citizens or not. (There are some qualifying indirect-U things you could say here, about distributed responsibility among states for the interest of people, but utilitarianism is never going to support policies that focus exclusively on the well-being of, say, citizens of the United States.)
2. If something like prioritarianism is an important part of the story, then U is going to give more weight to the interests of the wealthy than is right (though much less weight than they get at the moment).
Chris Brooke 08.24.11 at 8:46 am
CB: Mill is trying, in his late Victorian way…
Utilitarianism first appeared in 1861, which isn’t late Victorian at all–only just over a third of the way through her reign. Late Victorian utilitarianism is Mathematical Psychics or the later editions of The Methods of Ethics–by which time I’m not sure anyone much wanted to hold on to Mill’s higher/lower distinction: some of Mill’s earliest critics (e.g. Grote in the 1860s, Jevons in the 1870s) made the point that to insist on the distinction was to abandon the distinctive terrain of utilitarianism altogether.
Chris Brooke 08.24.11 at 9:34 am
Actually, since I’ve been inflicting this stuff on myself recently, I don’t see why I shouldn’t inflict it on you lot. Here’s a bit of Jevons on Mill in a review essay in one of the journals from, I think, 1879:
*** The verdict which Mill takes in favour of his high-quality pleasures is entirely that of a packed jury. It is on a par with the verdict which would be given by vegetarians in favour of a vegetable diet. No doubt, those who call themselves vegetarians would almost unanimously say that it is the best and highest diet; but then, all those who have tried such diet and found it impracticable have disappeared from the jury, together with all those whose common sense, or scientific knowledge, or weak state of health, or other circumstances, have prevented them from attempting the experiment. By the same method of decision, we might all be required to get up at five o'clock in the morning and do four hours of head-work before breakfast, because the few hard-headed and hard-bodied individuals who do this sort of thing are unanimously of opinion that it is a healthy and profitable way of beginning the day. ***
John Quiggin 08.24.11 at 9:38 am
“By the same method of decision, we might all be required to get up at five o'clock in the morning and do four hours of head-work before breakfast, because the few hard-headed and hard-bodied individuals who do this sort of thing are unanimously of opinion that it is a healthy and profitable way of beginning the day.”
Don't all right-thinking people agree on this?
Chris Brooke 08.24.11 at 9:46 am
Be careful about taking lifestyle advice from Jevons: he died at 46 after foolishly going swimming in a chilly English Channel off Hastings.
dsquared 08.24.11 at 10:13 am
If we try to put it in the context of intergenerational trade-offs, this will mean that the current generation, let us call it generation “t”, will have to make a lot of sacrifices in its standard of living in order to save resources for future generations
Yes, this was (iirc) Partha Dasgupta’s critique of the Stern Report.
FiQ – yes that’s a good point – the Utility Monster thought experiment carries an implication of inequality as well as the problems which exist with respect to the level of sacrifice that can be reasonably required. But the sting in that thought experiment comes from the fact that in factory-gate, unmodified versions of utilitarianism, there isn’t a “maximum level of sacrifice that can be reasonably required from any one individual”.
Alex 08.24.11 at 10:53 am
Maybe in Nozick's original version, but Derek Parfit in Reasons & Persons demonstrates that for seemingly plausible assumptions, there is an actually existing Utility Monster, and it's called “Future Generations”
See also: “In the long run we are all dead”. This applies very much to economics.
John Quiggin 08.24.11 at 11:02 am
@DD This was Dasgupta’s critique, but his numbers relied on the spurious assumption that the risk-free rate is 4 per cent. Where the numbers are right, as in poor countries experiencing a sudden take-off into very rapid growth, we often see the kind of high savings rate that would be expected on the basis of (mostly intra-household) utilitarian calculations about later-born generations.
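To make the sensitivity to the discount rate concrete (an illustrative sketch using simple exponential discounting of a constant benefit stream; the rates are commonly cited round figures, not exact numbers from either Stern or Dasgupta):

# Present value of a benefit of 1 unit per year over 200 years.
def present_value(rate, years=200):
    return sum(1.0 / (1.0 + rate) ** t for t in range(1, years + 1))

print(round(present_value(0.014)))  # ~67 with a Stern-style low rate: distant cohorts weigh heavily
print(round(present_value(0.04)))   # ~25 with a 4 per cent rate: most weight falls on the near future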
John Quiggin 08.24.11 at 11:06 am
I’ll repeat my standard point that “future generations” is a misconceived idea. The centenarians of 2110 are alive today, and by the time they are in middle-age, they will have contemporaries who might reasonably (given a bit of progress in life-extension) be making plans for the 23rd century.
The relevant distinctions are between earlier-born and later-born cohorts who are (at least for part of their lives) contemporaries. The basis for discounting future utility lies not in a generational distinction but in the fundamental uncertainty involved in any calculation about it.
Slex 08.24.11 at 12:14 pm
@ dsquared
So, obviously I got the argument right. But then my criticism of it is that the same logic could be applied to generation “t+1” and so on. At some point in time, in order to have a Utility monster, we need to have a generation “t+j”, which will be the first generation to start consuming the utility sacrificed by previous generations and will mark the appearance of the Utility monster. The problem is that what applies to “t+1” would also apply to “t+j”, provided that there is an infinite number of generations. The monster will never come into existence, and the net utility will never be positive.
And if there is a limited number of generations and people, then we don't have a Utility monster. We are back to ordinary (if I may say so) utilitarian calculations among a fixed number of people.
The idea behind the Utility monster is that net utility is always positive, because no matter what new suffering there is, the pleasure felt by the monster will outweigh it. This cannot happen in a fixed population. There is, in this case, a limit to the suffering which can be outweighed by the pleasure of the others.
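A toy rendering of this worry (an illustrative sketch under the stylised assumption that every generation sacrifices a fixed amount of utility and the compensating gain is always deferred to later generations):

# If every generation pays now and the pay-off is always pushed further out,
# the running total of realised utility never turns positive.
def realised_net_utility(generations, sacrifice=1.0):
    total = 0.0
    for _ in range(generations):
        total -= sacrifice   # each generation's sacrifice is booked immediately...
    return total             # ...while the promised gain is never booked at all

print(realised_net_utility(10))    # -10.0
print(realised_net_utility(1000))  # -1000.0: the "monster" never arrives to cash in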
Slex 08.24.11 at 12:42 pm
I also agree with JQ at 104. Another criticism of future generations as the Utility monster in the context of global warming is that, if we don't take measures against climate change now, future generations will have to deal with it at much higher cost than we would. So the comparison doesn't hold at all.
James Wimberley 08.24.11 at 12:48 pm
61: “…and pragmatist thinkers such as Daniel Dennett and Donald Davidson.”
As a lexical rule, this leaves out CT’s very own dsquared. And why Dickens when we have George Eliot’s Daniel Deronda? We expect great things from this young man.
Jeremy Bowman 08.24.11 at 1:51 pm
Future generations must count for less in our deliberations, because we have less confidence in how our actions/decisions will affect them. The more distant they are, the less we should take them into account.
In general, when we make rational judgements about courses of action, we have to take account of both the desirability of our goal and the likelihood of achieving it. In ideal actuarial terms, the “expectation” of a course of action is the arithmetical product of the desirability of the goal and the likelihood of achieving it. (These are “ideal” terms, because numerical values of “desirability” and “likelihood” are impossible.)
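For example (a toy calculation attaching purely illustrative numbers to the “desirability” and “likelihood” that, as just noted, cannot really be quantified):

# Expectation = desirability * likelihood.
near_term_benefit = 10 * 0.9      # modest benefit to contemporaries, near-certain: 9.0
distant_benefit = 1000 * 0.005    # huge benefit to remote generations, highly uncertain: 5.0
print(near_term_benefit > distant_benefit)  # True: uncertainty discounts the distant future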
Avoiding catastrophe for future generations may be very desirable, but we cannot act rationally if we have practically no idea of how our actions will affect them.
We must also try to avoid common confusion between merely potential persons and actual future persons.
Chris Bertram 08.24.11 at 2:31 pm
@dsquared Re the attribution of the future-generations utility-monster to Parfit, I think a more plausible person to credit may be John Rawls, who makes the point exactly wrt the classical principle of utility at sec. 44 of A Theory of Justice. (mind you, I think he’s channeling others, since he cites Sen and a bunch of others in a nearby footnote).
geo 08.24.11 at 5:02 pm
Chris Brooke @ 98: What vitiates Jevons’s criticism is the word “required.” Of course we vegetarians do believe (know, actually) that it’s “the best and highest diet”; but what’s this about “packing the jury” or “requiring” others to eat our way? They’re quite welcome to try to argue us out of vegetarianism (they’ve been trying since Jevons’s time, with smugly fatuous references to “common sense” and “science”) and to eat themselves into cardiac and colonic catastrophe. How does Mill’s argument entail legislating high culture or vegetarianism or anything else without a democratic mandate — without inviting everyone onto the jury and patiently persuading them? Mill was (as William James pointed out) a pragmatist avant la lettre and believed as firmly (though implicitly) as Rorty or Rawls in the priority of democracy to philosophy.
Chris Brooke 08.24.11 at 5:18 pm
I think that’s right, that in practical terms Mill did want a world in which the population of competent judges was steadily growing, moving towards the kind of democracy you describe. But on the level of theory, I’m not sure I see that. The phrase he uses in Utilitarianism is that “from this verdict of the only competent judges, I apprehend there can be no appeal”. He could have presented the verdict of existing competent judges as a kind of hypothetical verdict, one to be ratified by–or appealed to–a future, more enlightened demos. But he didn’t.
I suppose I read Mill through the lens he provides in the Autobiography, where he says of his younger (pre-1848, say) self that “In short, I was a democrat, but not the least of a Socialist”, but that, over time, he and Harriet Taylor “were now much less democrats than I had been”, and with views “would class us decidedly under the general designation of Socialist.” I’m not sure (because largely ignorant of Rorty) where that shift might fit into the idea of the priority of democracy to philosophy.
Billikin 08.24.11 at 5:38 pm
@ campfiregirls:
“Can someone catch me up on why, in formulating moral dilemmas, the question always has “you and five others.” Even when the question puts you in a group up front, we get, “if the enemy finds your group, all six of you will die” or some such. I'm sure there's a good reason for this and that all of you know it while I don't.”
In “Remembering”, Bartlett found that, in the case of the reproduction of drawings from memory, often it was trivial details that were faithfully reproduced (if they were reproduced at all). Perhaps the 5 to 1 ratio is one of those trivial details in this case. It certainly seems like these dilemmas are variations of a single dilemma.
BTW, the fact that they are all so similar makes me wonder if this research points to anything of significance. Psychopathy covers a wide range of behavioral choices. Scott Peterson lied to his mother for no apparent reason.
geo 08.24.11 at 6:34 pm
Chris Brooke: Your concern about “no appeal from the verdict of competent judges” is plausible, but I don’t think the context (the passage is in chapter 2 of Utilitarianism, for those interested) offers any real reason for anxiety, absent any other reason to suspect Mill’s democratic and libertarian commitments. Is the sentence you quote from his Autobiography a reason to suspect them? I’d say no, but perhaps I’m partial. Here’s the context (Autobiography, ch. 7); others can judge:
In those days I had seen little further than the old school of political economists into the possibilities of fundamental improvement in social arrangements. Private property, as now understood, and inheritance, appeared to me, as to them, the dernier mot of legislation: and I looked no further than to mitigating the inequalities consequent on these institutions, by getting rid of primogeniture and entails. The notion that it was possible to go further than this in removing the injustice — for injustice it is, whether admitting of a complete remedy or not — involved in the fact that some are born to riches and the vast majority to poverty, I then reckoned chimerical, and only hoped that by universal education, leading to voluntary restraint on population, the portion of the poor might be made more tolerable. In short, I was a democrat, but not the least of a Socialist. We were now much less democrats than I had been, because so long as education continues to be so wretchedly imperfect, we dreaded the ignorance and especially the selfishness and brutality of the mass: but our ideal of ultimate improvement went far beyond Democracy, and would class us decidedly under the general designation of Socialists. While we repudiated with the greatest energy that tyranny of society over the individual which most Socialistic systems are supposed to involve, we yet looked forward to a time when society will no longer be divided into the idle and the industrious; when the rule that they who do not work shall not eat, will be applied not to paupers only, but impartially to all; when the division of the produce of labour, instead of depending, as in so great a degree it now does, on the accident of birth, will be made by concert on an acknowledged principle of justice; and when it will no longer either be, or be thought to be, impossible for human beings to exert themselves strenuously in procuring benefits which are not to be exclusively their own, but to be shared with the society they belong to. The social problem of the future we considered to be, how to unite the greatest individual liberty of action, with a common ownership in the raw material of the globe, and an equal participation of all in the benefits of combined labour.
Lemuel Pitkin 08.24.11 at 6:48 pm
I’m astonished at the hostility this study is attracting here, especially from people like the sagacious geo. To me, the results seem perfectly natural.
Morality as we experience it in every day life is not consequentialism. It’s deontological and strongly dependent on relationships. You have particular duties to children, parents, colleagues, neighbors, fellow citizens, etc., that can’t simply be derived from what states of the world lead to greater wellbeing (though of course you should think about that too.) Morality is about rules, not just outcomes.
Same with law. From the American Progressives to the early Soviet legal theorists to the law-and-economics folks, there have been repeated efforts to subsume positive law under some form of consequentialism, but it never works. That's the difference with administrative questions, where it's perfectly appropriate to look only at outcomes. (Of course what is a matter for law, and what for administration, is always going to be up for negotiation.) It's sometimes necessary for a state to see like a state. But not for judges, and not for you or me.
Why this should be the case is an interesting question. But it's perfectly obvious that what we regard as moral actions in everyday life have a large element of precisely *not* acting instrumentally, but of doing what is right because it is right. It doesn't surprise me at all that people who don't grok that come across as sociopaths.
geo 08.24.11 at 7:31 pm
Lemuel: Thanks for the kind word, but actually I haven’t looked at the study or said anything about it. My little crack way back @26 (“How very odd”) was just a harmless (I thought) bit of anti-social-science know-nothingism. For what it’s worth, I do think (without benefit of social science) that most economists are, if not quite sociopaths, then at any rate … well … very odd.
stubydoo 08.24.11 at 10:42 pm
I feel as if this critiquing of the “utility monster” angle is actually somewhat beside the point, as I find myself wondering how many of the folks here are ready to join me in biting that bullet.
Because I’m a sociopath.
Fall in queue 08.25.11 at 12:55 am
Chris Bertram @96:
I agree with you that any implied contract by which citizens bind their “trustees” likely should not, on a deeper analysis, be thought of as purely utilitarian. I’m just trying to give a story (roughly following Taurek, IIRC) of where utilitarian intuitions might come from, on the assumption that interpersonal aggregation does not in general make sense.
Mike Otsuka 08.25.11 at 3:06 am
Chris is right that Rawls says, in A Theory of Justice, that classical utilitarianism “may direct us to demand heavy sacrifices of the poorer generations for the sake of greater advantages for later ones that are far better off” (p. 287, original ed).
But additional credit to Parfit for deploying a future generations case involving large numbers of people (though one that is actually less analogous to Nozick’s utility monster than Rawls’s above counterexample to utilitarianism) in response to the claim that Nozick’s utility monster is a bad objection to utilitarianism because such a monster is ‘deeply impossible’.
After raising and discussing the objection that Nozick’s utility monster is an impossibility, Parfit writes:
“Return now to my imagined Z [of the repugnant conclusion]. This imagined population is another Utility Monster. The difference is that the greater sum of happiness comes from a vast increase, not in the quality of one person’s life, but in the number of lives lived. And my Utility Monster is neither deeply impossible, nor something that we cannot imagine. We can imagine what it would be for someone’s life to be barely worth living. And we can imagine what it would be for there to be many people with such lives. In order to imagine Z, we merely have to imagine that there would be very many. This we can do. So the example cannot be questioned as one that we can hardly understand.”
I think both the Rawlsian counterexample to utilitarianism and Parfit’s use of this counterexample in reply to those who object to the impossibility of a utility monster are brilliant points.
Mike Otsuka 08.25.11 at 3:09 am
… what I meant to say in my last sentence was:
I think both the Rawlsian counterexample to utilitarianism and Parfit’s use of a similar future generations counterexample in reply to those who object to the impossibility of a utility monster are brilliant points.
Slex 08.25.11 at 6:01 am
@ Mike Otsuka
But we have shown reasons why Parfit's future generations example is not so brilliant, if not outright flawed. See posts from 104 to 106.
Mike Otsuka 08.25.11 at 6:59 am
@ Slex
I read and noted those posts but was neither convinced by them nor moved to comment on them.
Chris Bertram 08.25.11 at 7:04 am
Slex: I’m not at all sure about that …..
The point about present-day people always being enjoined by classical utilitarianism to save rather than consume doesn't depend on there being some determinate FG who get to play the utility monster role.
The point about imposing higher costs on FGs to cope with global warming … well, I'm not completely sure of my ground here, but the way you express it seems to reinforce this argument against classical utilitarianism, since you are requiring still greater sacrifice in the present for the benefit of future people (despite the fact that future people might – on one set of assumptions about continuous growth that I wouldn't endorse myself – be better placed to bear those costs).
I agree with JQ’s remarks about uncertainty being the only defensible basis for discounting and about the indistinctness of generations. But (a) I’m not sure that the FG arguments here actually depend on determinacy and distinctness in any fundamental way and (b) uncertainty is just dsquared’s point about Stern: i.e. it can sink the FG utility monster, but only with the number set high enough.
Slex 08.25.11 at 12:55 pm
The whole point of the Utility monster argument is to show the untenability of the utilitarian position. It is not a critique of utilitarianism per se, trying to show its inconsistency as a doctrine; rather, the goal is to elicit an emotional reaction in the defender of utilitarianism. Nothing stands in the way of admitting the acceptability of the Utility monster in some versions of utilitarianism, except the desire not to feel and look like a monster yourself.
But the utilitarian has a cop-out. We need ethics to guide us in this universe. To reject an ethical theory based on its inability to deal in a proper way with non-existent situations would be equivalent to rejecting Newtonian physics as a valid scientific theory because it will not explain a universe with no gravitation. You can't judge the merits of utilitarianism (or deontology) based on things that can't happen.
If we go back to Nozick’s original argument – here’s the excerpt:
http://www.animal-rights-library.com/texts-m/nozick01.htm
he uses the Utility monster example in the context of humans eating other animals (there are also wider issues at stake, but I will ignore them here). The monster eats people and in the process it always gains more utility than the people lose. The net utility is above zero: Um − Up > 0. The current–future generations trade-off has everything to do with actual future generations benefiting from previous sacrifices. Without them it will be −Ut − Ut+1 − Ut+2 and so on: net utility is always below zero and going even further below zero. It's like running towards the horizon and speeding up in the hope of reaching it sooner. There is not even a trade-off, let alone a Utility monster. It is a clear-cut case for the utilitarian.
But let's assume that this is not the case and that the above deliberations are wrong. How well does the Utility monster compare to the challenges facing our civilization?
First, there are reasons to think that global warming could become self-reinforcing. A positive feedback loop is at least a possibility. If that is the case, our generation should take measures now and lose utility, because future generations will not be able to stop the warming process through a comparable, or even bigger, loss of utility. That is, if we start taking measures now, they will at least pay off in the future. Chances are that if the same measures are taken by later generations, they will not pay off at all and there will only be loss of utility.
Second, how big should the sacrifices be? Here it should be noted that, unlike in the Utility monster example, sacrifices by the current generation do not automatically lead to an increase in the utility of future generations. Some sacrifices will lead to benefits, some will make no difference at all, and others will be plainly detrimental.
And that is where the analogy breaks. Let us say that the Utility monster enjoys torturing people. First, it slaps the man in the face. The man feels pain, but the monster feels pleasure that outweighs it. Then the monster runs electricity through the man’s body. The man feels even more pain, but the monster feels even more pleasure. You get the picture.
Now, let us come back to global warming. If the current generation tries to limit greenhouse-gas emissions, it will lose some of its standard of living. This could slow global warming and buy more time to develop better, environmentally friendly technologies. Let us say that the current generation makes an even bigger sacrifice by abandoning oil as a fuel and switching to solar energy, etc. This will slow the warming even more and give a boost to the development of eco-friendly technologies, because the sheer scale of their use will lead to better feedback on what works and what doesn't. Future generations will obviously benefit from this more (compared to the benefits from the previous sacrifice – whether the utility they gain will outweigh the utility lost by the current generation is another matter; I will assume it does). What next? We could return to the pre-industrial mode of production. This would obviously incur very high losses for our generation, but the benefits to future generations would be very dubious: not only because it would seriously slow the dissemination of knowledge, innovation and technological development, but also because it is doubtful whether the pre-industrial organization of economic life would reduce damage, environmental pollution and the exhaustion of resources when applied at the current population level. If we take all this into consideration, most probably what we had done would lead to the postponement of the problem by a few generations, not its solution. We would be in the position of a long-jump athlete who decides to run not 20 metres before the jump, but 200, thinking that this way the jump will be easier.
Note also that the Utility monster, even if it were a possibility, would not be a problem for some versions of utilitarianism. The one to which I subscribe (I am not a strict utilitarian, actually) does not measure suffering against pleasure, but suffering against suffering, and pleasure against pleasure. So, while inflicting suffering could be justified to avoid more suffering, inflicting suffering to bring pleasure to someone else is not. Of course, there are challenging dilemmas for this type of utilitarianism, but this Utility monster is not one of them.
Every ethical theory I have seen is either incomplete or contradictory. Utilitarianism has its deficiencies, but it is better than deontology. In the real world (not just some imaginary one) duties often come into conflict, and we have to resolve this either by not acting (if that is an option at all) or by doing a cost-benefit analysis (i.e. applying utilitarian thinking). If lying is wrong, and murder is wrong, would you lie to prevent a murder? This is not an exception; this is an example of a categorical failing of deontological ethics.
bianca steele 08.25.11 at 1:19 pm
I find what Slex says intuitively persuasive. However, the paper cited offers a definition of deontology that does not have the deficiencies Slex notes.
Slex 08.25.11 at 2:05 pm
I couldn't find such a definition of deontology in the paper, but I have only skimmed it. I suppose it would be something like deontology with prima facie duties, or the principle of permissible harm. In my opinion this is an incorporation of utilitarian principles into deontology, just as (though to a lesser extent) rule utilitarianism draws near to deontology (more in effect than in motivation).
bianca steele 08.25.11 at 2:26 pm
bianca steele 08.25.11 at 2:40 pm
It’s possible, on second thought, that I’ve misunderstood what the authors mean by “constraint,” in which case I’ll also have to revise my understanding of why they mention that utilitarianism relies on “one simple rule.”
Slex 08.25.11 at 3:33 pm
This I had seen, but I couldn't infer from the paragraph (and I still don't) an explanation of how a situation of conflicting moral duties is resolved (that is why I thought it was elsewhere). As for the meaning of “constraint” in this case, I take it as a “normative constraint” of “should and shouldn't do”. Basically, they are right about utilitarianism, but there are several brands of it, which can differ significantly in some respects on what the right thing to do is.
bianca steele 08.25.11 at 3:50 pm
Well, I haven’t read far enough to figure out whether their argument or their methodology is going to depend heavily on the idea that relying on “one simple rule” is simplistic and thus related to psychopathy, hence undesirable. Or to know whether there is any upshot to the answer to that question. Answering those questions may require an intimacy with psychological theory that I don’t have. It may also be the case that their paper looks entirely different when you factor in a more sophisticated understanding of utilitarianism.
But my point was that the kind of deontology they defend, clearly, isn’t the kind that says you have to tell the truth to the Nazis about where Anne Frank is hiding. They could be defending an idea that morality is a kind of knowledge and involves a lot of separate principles that aren’t derivable one from another.
Andrew F. 08.26.11 at 11:29 am
Slex @123: Nothing stands in the way of admitting the acceptability of the Utility monster in some versions of utilitarianism, but the desire not to feel and look like a monster yourself.
But the utilitarian has a cop out. We need ethics to guide us in this universe. To reject an ethical theory, based on its inability to deal in a proper way with non-existant situations would be the equivalent to reject Newtonian physics as a valid scientific theory, because it will not explain a universe with no gravitation. You can’t judge the merits of utilitarianism (or deontology) based on things that can’t happen.
Okay. Newtonian physics would fail in a universe without gravity because that universe lacks one of Newtonian physics’s key assumptions.
So apply that same reasoning to the Utility Monster. “Utilitarian ethics would not work in a universe with a Utility Monster because that universe lacks one of utilitarianism’s key assumptions.”
But that missing key assumption seems to have something to do with distributive justice, with what each person deserves, and in any case with something that goes beyond the sheer aggregation of utility.
The Utility Monster, in highlighting the assumption which makes utilitarian accounts in THIS universe plausible, shows us what utilitarianism is missing. More than that, insofar as utilitarianism rejects grounds of ethical action other than sheer utility, the Utility Monster thought experiment shows us that utilitarianism can only be intuitively plausible if we smuggle in an unstated assumption that directly contradicts the breadth of utilitarianism’s central rule.
Getting back to the paper for a moment, the more I think about it, the more I realize how abusive it is of the psychological tests it utilizes (see what I did there?). However, as JQ and others above have pointed out, it does point us to something interesting about the nature of ethical leadership of a group, even if anticipated by some thinkers quite some time ago.
First, there are obvious connections with Machiavelli's (among others') theory of politics. And the distinctions we're led to draw between moral duties in different roles – as a friend, as a President, as a business-owner, and so forth – can be constructively suggestive for a variety of ethical theories.
Second, the article reminds me of Emerson's comment in Self-Reliance that “[t]hy love afar is spite at home.” I'm ripping the quote from context, but it states nicely one of the implicit suggestions of the article: that utilitarians may express what they consider sound moral reasoning, but that they deploy this reasoning from a lack of empathy and a surfeit of detached manipulation which would impair their actual relationships with others.
But, quite frankly, that implicit suggestion of the article is very poorly supported. The tests they deploy don’t necessarily capture psychopathy or lack of empathy – they may simply capture the understanding that the rules we learn as children (don’t tell a lie) cannot be used mindlessly.
If it were true, it would hold troubling implications for the personal cost of leadership and also for the importance of constraining leaders in the exercise of their power, but it tells us little about the soundness of utilitarian reasoning.
Slex 08.27.11 at 8:14 am
@ Andrew F.
So apply that same reasoning to the Utility Monster. “Utilitarian ethics would not work in a universe with a Utility Monster because that universe lacks one of utilitarianism's key assumptions.”
My point is rather that utilitarianism does not have to deal with utility monsters because we live in a universe where the ability to feel pleasure is limited (at least that is what our experience tells us). The Utility monster is not subject to such a constraint, as far as I understand it.
Mike Otsuka 08.28.11 at 11:50 am
Slex @ 131
In the Parfit passage that I quote @ 118, he responds to what sounds like the point you’re making. Here’s what Parfit says in the sentences leading up to the ones I quoted above:
“…this Monster’s quality of life must be millions of times as high as that of anyone we know. Can we imagine this? Think of the life of the luckiest person that you know, and ask what a life would have to be like in order to be a million times as much worth living. The qualitative gap between such a life and ours, at its best, must resemble the gap between ours, at its best, and the life of those creatures who are barely conscious—such as, if they are conscious, Plato’s ‘contented oysters’. It seems a fair reply that we cannot imagine, even in the dimmest way, the life of this Utility Monster. And this casts doubt on the force of the example.” (Reasons and Persons, p. 389.)
What he goes on to say in what I quoted @ 118 is an answer to this objection.
Andrew F. 08.28.11 at 1:15 pm
Slex @131:
But that misses the point. The contrast between the unsatisfactory outcome of utilitarianism applied to a universe with a Utility Monster, and the more satisfactory outcome of utilitarianism as applied to this universe, illuminates a key underlying assumption of utilitarianism.
That key assumption, illuminated by its absence in the Utility Monster’s universe and its presence in this universe, contradicts the basic principle of classic utilitarianism. That is, it turns out that sheer aggregation of utility insufficiently accounts for our sense of what actions are ethical – and it turns out that distribution of utility also plays a role.
Alex 08.28.11 at 3:44 pm
Not having any philosophical education except CT threads, I think there’s an interesting embedded assumption in the Utility Monster.
This is: no diminishing returns to scale for utility. If the UM’s “quality of life must be millions of times as high as that of anyone we know”, wouldn’t it be plausible for the marginal value of one more rat-orgasm unit piled on top to be very low indeed? For the UM to work, the reasoning seems to be that additional utility is a flat-rated percentage. 1% more utility for a creature vastly happier than us is a greater absolute value than 1% more utility for each one of five billion philosophising monkeys.
But why should delta u be a flat-rated percentage? In marginal economics you usually work with one unit more or less – one ratgasm. So the UM has to get the equivalent of (1 ratgasm * world population)+1, ex hypothesi. It’s trivially obvious that just giving an individual 5 billion ratgasms + 1 is moar than giving every human being 5 billion ratgasms, but what’s doing the work there isn’t the distribution, it’s the one extra unit. (This is basically Belle’s Pony in action.)
To make it work, you’ve got to assume that the UM has a vastly greater preference for utility itself. This I find unsatisfying as an argument, and circular. If you give the UM one more unit of utility, it gets more utility from it than the rest of us? It’s more efficient in being happy? And what’s the output metric in this efficiency calculation? Utility. (A special, internal measure would surely get slashed by Occam’s razor.) I think we have a logical flaw here.
So let’s drop utility as an intermediate metric. Say we have some concrete policy prescription that would hurt everyone by minus x ratgasms each, but the UM is so weird that just transferring this to it would make it impossibly, incredibly happy. Perhaps it is a psychopath, motivated by delight in others’ pain*. This implies an enormous sensitivity to (whatever) in the UM. Is that plausible? Wouldn’t diminishing returns set in?
* of course, JS Mill dealt with this, didn’t he?
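A toy numerical rendering of the diminishing-returns point (an illustrative sketch; the logarithmic utility function is an assumption, not anything Alex specifies):

import math

# With a concave (log) utility function, the marginal value of one more unit shrinks
# as the stock grows, so a creature already vastly better endowed than everyone else
# gains next to nothing from one extra unit.
def marginal_utility(stock):
    return math.log(stock + 1) - math.log(stock)

print(marginal_utility(10))      # ~0.095 for an ordinary creature
print(marginal_utility(10**12))  # ~1e-12 for the would-be Utility Monster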
Jeremy Bowman 08.28.11 at 5:58 pm
I do wish you’d all stop assuming that ‘utilitarianism’ means hedonistic utilitarianism. We utilitarians are not all pre-later-Wittgenstein, pre-American Pragmatist, old-fashioned Cartesian nitwits, you know!
The idea of a “utility monster” poses no threat whatsoever to non-hedonistic utilitarianism.
Slex 08.28.11 at 6:29 pm
@ Mike Otsuka
From the quotes by Parfit in 118 and 132, I get the following:
In 132 he seems to counter an implicit argument that no such amount of positive utility can exist that can compensate for a certain level of suffering. People claim this, because they can’t even imagine this, being constrained by their own ability to feel pleasure. But just because people can’t imagine it is possible, it doesn’t mean it isn’t.
In 118 he replies to the objection that UM is impossible. According to him it is both possible and something that we can imagine. I have never challenged the logical possibility of the UM. It might or might not be logically possible. I just think that the intergenerational trade-off example fails and I’ve tried to explain why. I also think that the existence of the UM is very unlikely, if not impossible, in our universe – at least the universe as we know it.
Slex 08.28.11 at 6:46 pm
@ Andrew F.
I guess that we will have to agree to disagree. In my opinion, ethical theories should be judged on their relevance in our world.
As for the distribution of utility, you probably know that there are versions of utilitarianism that incorporate it, e.g. prioritarianism. I am not even sure that it is a problem for classical utilitarianism, given appropriate valuations of utilities and the fact that the ability to feel pleasure is limited.
Brett Bellmore 08.28.11 at 8:17 pm
“But that misses the point. The contrast between the unsatisfactory outcome of utilitarianism applied to a universe with a Utility Monster, and the more satisfactory outcome of utilitarianism as applied to this universe, illuminates a key underlying assumption of utilitarianism.”
Speaking of missing the point. This is a contrast between imagining applying utilitarianism in one universe, and imagining applying it in another. Imagining, because that’s all you can ever do with utilitarianism: Imagine implementing it.
I think that’s why utilitarians are such fervent advocates of their own system: Because all you can do is imagine applying it, you’re free to imagine it working out fabulously. While deontologists of various stripes can actually try to implement their theories, and run into the pesky details.
Now, when you finally do invent a utility meter, and can objectively assign one number of “utils” to an orgasm, and a different number to eating a well made cheeseburger when you’re really hungry, we might actually profit from discussing the math, and whether rule utilitarianism is more computationally efficient, and so forth. But until then, could we try to remember it’s all just a game of “let’s pretend”?
And when the rubber hits the road, we all have to be deontologists in the end. Even those of us who like to pretend we aren’t.
geo 08.28.11 at 10:48 pm
we all have to be deontologists in the end
Ha! There’s one in the eye for you, Chris Bertram! Vindicated by Brett Bellmore — can you ever live down the shame?
qb 08.29.11 at 1:30 am
I think that’s why utilitarians are such fervent advocates of their own system…
As opposed to being advocates of yours? A touch of projection going on here.
Peter T 08.29.11 at 1:47 am
Herder pointed to an interesting implication of the aggregative kind of utilitarianism – it is inherently competitive. Perhaps this is what makes it so attractive in a capitalist world, and less attractive as we move beyond capitalism?
Slex 08.29.11 at 8:20 am
@ Brett Bellmore
“And when the rubber hits the road, we all have to be deontologists in the end. Even those of us who like to pretend we aren’t.”
It's the opposite. When deontologists are faced with conflicting duties, they dump one on the basis of utility.
Henri Vieuxtemps 08.29.11 at 9:01 am
When deontologists are faced with conflicting duties, they dump one on the basis of utility.
Yes, some certainly do:
http://en.wikipedia.org/wiki/Robert_D._Kaplan
Which, incidentally, makes them sound a bit psychopathic.
Brett Bellmore 08.29.11 at 9:01 am
Even were that true, Slex, (And it’s more of a fairy tale utilitarians tell themselves, like that “no atheists in foxholes” whopper the theists have.) that still wouldn’t be utilitarianism, which requires the pretense of measurement and calculation. Nobody does actual utilitarianism, because it’s flatly impossible. Impossible due to the utter and complete lack of any objective measure of “utility”.
Objective interpersonal comparisons? You can’t even do objective intrapersonal comparisons of utility, without resorting to observed preferences, due to that lack.
That’s why, as I said a while back, utilitarianism isn’t so much an ethical theory as it is a metaphor which some people make the mistake of taking far too seriously. Utilitarian calculus? There isn’t even utilitarian counting, you’d need numbers for that.
Some people may engage in something utterly informal, for which utilitarianism is a metaphor, but that’s as close as it will ever get, and doesn’t get anywhere near the stuff that’s fun to talk about, like “utility monsters”.
Slex 08.29.11 at 12:13 pm
@ Brett Bellmore
"Nobody does actual utilitarianism, because it's flatly impossible. Impossible due to the utter and complete lack of any objective measure of "utility"."
On the contrary. The criminal justice system is full of utility comparisons. As I have written in post 95, if utility didn't matter, and we couldn't compare it, it wouldn't matter whether the punishment for small thefts were a month in prison or 10 years in prison.
An example of a very utilitarian government action can be found here:
http://www.time.com/time/magazine/article/0,9171,978031,00.html
Henri Vieuxtemps 08.29.11 at 1:19 pm
As I have written in post 95, if utility didn't matter, and we couldn't compare it, it wouldn't matter whether the punishment for small thefts were a month in prison or 10 years in prison.
Could you explain this, please? If I don't care about utility, how does that prevent me from realizing that 10 years in prison for a small theft is disproportionate and barbaric?
It seems that the opposite may be true: if small thefts were to become a serious problem, utilitarians might feel like tightening the screws and, perhaps, punishing them with 10 years in prison. Something like Giuliani and the broken windows theory.
Matthias 08.29.11 at 3:09 pm
Both caring about utility exclusively and caring about proportionality require being able to do intersubjective utility comparisons, even though they're incompatible. (No human actually cares about utility exclusively – or, for that matter, not at all – so this is as hypothetical as the trolley problem itself.)
qb 08.29.11 at 3:24 pm
Yeah, the thought that we can’t make reasonable estimates about interpersonal utility comparisons is as ridiculous as the thought that the theoretical viability of utilitarianism turns on how precise our estimates can be in practice. The first is disingenuous; the second is just misguided.
Slex 08.29.11 at 3:58 pm
@ Henri Vieuxtemps
The point of the example was to show that people employ utility calculation very often (contrary to the claim by Brett Bellmore that no one does actual utilitarianism). Of course, if you didn't care about utility, it wouldn't matter. Obviously, it matters for institutions, and as institutions usually reflect the opinion of the people, it matters for them, too. In opposition to the view that we are all covert deontologists, I think that we are all utilitarians in disguise, because we all compare utilities in one way or another. In my opinion, even sociopaths roughly estimate the utility/disutility that others experience; it is just that they don't care about anyone but themselves.
Regarding the broken windows theory, as far as I know, it is about taking prompt action against minor crimes, misdemeanors and their artifacts, not about imposing harsh penalties for them. There is no utilitarian justification for imposing severe penalties for small crimes, because it will create incentives for potential offenders to commit more serious crimes.
Slex 08.29.11 at 4:08 pm
@ Henri Vieuxtemps
If I don’t care about utility, how does that prevent me from realizing that 10 years in prison for a small theft is disproportionate and barbaric?
On a second reading, you seem to imply that you can realize that 10 years in prison for a small theft is disproportionate without caring about utility. I have no idea how you can do it, so if it is not a problem, you could try and explain.
Or, better yet:
Do you think that a man who starts a bar fight resulting in a broken nose should get the same punishment as a serial rapist? Why, or why not?
Henri Vieuxtemps 08.29.11 at 4:43 pm
Slex, in the society where I grew up serial rape is considered a much more serious crime than a bar fight. I’m sure there is a utilitarian component in it, but I see it as a result of social evolution. I imagine there are societies where serial rape isn’t a crime at all, depending on the circumstances. As an obvious example: if it’s a married couple.
Slex 08.29.11 at 5:03 pm
Henri, societies aside, I was asking about your personal position. Because I can’t think of a plausible justification for different punishments, which does not involve the comparison of utility.
Henri Vieuxtemps 08.29.11 at 5:34 pm
I’m probably completely confused about all this, because I don’t see your example as a utilitarian calculation at all. Beating someone up involves physical pain and injuries, while rape (assuming it’s merely a forced copulation) is a violation of personal dignity, which is a social construct. If, in a particular society, it doesn’t exist, then there is no harm whatsoever.
bianca steele 08.29.11 at 5:49 pm
I agree with HV to the extent that I agree this discussion is weird. I suppose there are very few societies where bar fights are criminalized to a greater degree than, say, public drunkenness. Starting a fight in Symphony Hall might be a different story, and I also suppose a history of bar fights wouldn't actually help when you're trying to make partner.
Slex 08.29.11 at 6:48 pm
@ Henri
I'll concede for now that you really don't see it as a utilitarian calculation, but it still isn't clear on what grounds you think that the punishment for multiple rapes should be higher than for a broken nose in a bar fight (if you think it should be higher at all).
@ bianca steele
Bar fights per se may not be criminalized at all. But their consequences, if severe enough, are. Do you think that the punishment for a broken nose should be different from the punishment for multiple rapes? And if yes, on what grounds? I think the question is pretty simple and straightforward. I don't know why both of you find it confusing.
Chris Bertram 08.29.11 at 6:51 pm
_Vindicated by Brett Bellmore_
I think when Brett enters a thread we're probably at some Godwin-like "thread is over" moment. Observing the discussion from a distance now, I find myself mildly amused by the way some people think they have to be consequentialists just because _deontology is crazy_, and some people think they have to be deontologists because, well, _consequentialism is crazy_. My guess is that the true account of morality exhibits some consequentialist and some deontological features without being reducible to either.
Henri Vieuxtemps 08.29.11 at 7:23 pm
Consequentialism is fine, just not necessarily the notion for individuals to be guided by; not under ordinary circumstances, at least. We are trained to act automatically, without calculating the balance.
Salient 08.29.11 at 7:51 pm
1. There's utility as a very generic and very slippery synonym for 'greater good,' which enables statements like "I can't think of a plausible justification for different punishments, which does not involve the comparison of utility" (Slex – before attending to the comparative aspect, can you even think of a plausible justification for punishment period? If you can't, doesn't that suggest you may have generalized your definition of utility beyond anything your interlocutors would recognize as specifically meaningful?)
You might assign the name ‘crime’ to actions which obstruct or inhibit a specific subset of economic interaction and exchange which you wish to encourage, and assign detainment on a completely experimental basis, beginning with an arbitrary fixed term and resetting the term length periodically: term length up if you haven’t kept the behavior below a satisfactory rate of occurrence, term length down whenever you have reduced the behavior far below a satisfactory rate of occurrence [the thought being that detained individuals have a more limited capacity to participate in the encouraged systems, so over-detainment is avoided in order to improve participation rates].
2. There's utility, the precise quantification of goodness along the positive real axis, which forms a convenient ray to beat scarecrow strawmen over the head with (Brett Bellmore – I'm not familiar with any utilitarians who insist on precise quantification, I'm not familiar with any contractarians who insist on the terms of the contract being available in writing, etc, etc).
3. There's utility, the estimation of derived productivity/convenience obtained by one subpopulation offset by imposed constraint/inconvenience on a different (possibly overlapping) subpopulation, that enters naturally into evaluations of collective action or governance (which seems swept up in John Quiggin's and Chris Bertram's statements).
I think ‘utility [defn.3]’ as actually applied in policy evaluation discounts derived personal enjoyment almost entirely, and prioritizes the second-order effects of enabling people to be productive (active participants in acknowledged economic and social structures). utility ~= engagement.
In this case, there’s no utility monster concern: if there’s a tradeoff in which lower productivity in the general population is offset by transcendent productivity of The Utility Monster Thing, it’s hard to see why that’s a bad thing. It’s only when utility is accidentally interpreted to measure ‘happiness’ (hedonistic utilitarianism) that dislocation of utility becomes a problem.
(Why would any system attempt to maximize happiness? I can’t understand why we would want to try to make people happy. People who aren’t suffering are pretty good at finding or inventing ways to become happy from time to time, people generally demonstrate a revealed preference for lots of other states of being over a ‘happy’ state of being, people are instinctively resistant to attempts to make them happy, and there’s nothing intuitively more obviously horrifying and morally despicable than saying you want to make as many people as possible happy — especially if you’re placing suffering along the happiness-unhappiness spectrum and attempting to improve the average. If this isn’t immediately obvious, substituting ‘doped up into pleasant delirium’ for ‘happy’ makes it so. I thought Huxley Brave-New-Worlded all this to death already. If you’re insistent that utility or greater good should have anything to do with one’s endorphin state, why not shoot for minimizing persistent trauma?)
Salient 08.29.11 at 7:53 pm
We are trained to act automatically, without calculating the balance.
We’re also trained/taught to not impulsively break stuff that would probably be hard to fix (where ‘stuff’ includes windowpanes, noses, and relationships). That’s a crude form of calculating some kind of balance, isn’t it?
Henri Vieuxtemps 08.29.11 at 8:15 pm
The point is that we’re trained, indoctrinated. It is a part of our nature. We don’t calculate: ‘if I break this SOB’s nose it will make a whole bunch of people happy, and that just may compensate for the pain he will experience’. No, we just know: ‘unless something really extraordinary is going on, don’t hit a person in the face. Even if it’s something really extraordinary, and it seems like a good idea, think twice before doing it’. That’s a rule. No question, it’s evolved over generations because of the utilitarian advantage, by trial and error, but here and now for us it’s just the rule.
bianca steele 08.29.11 at 8:37 pm
Henri,
I don’t really agree with you (not to make a federal issue out of it–and furthermore this is really really really OT, sorry to pursue it), but I would actually be interested in knowing what you think of when you hear the word “indoctrinated.”
Henri Vieuxtemps 08.29.11 at 8:54 pm
I don't know, maybe it's the wrong word. You're a product of your social environment; this is where you get the rules, attitudes, ambitions, etc; you are formed as an individual. What do you call this process?
Slex 08.29.11 at 9:12 pm
@ Salient
As far as I understand, you have a problem with the word "punishment" I used. According to me the purpose of punishment is first and foremost to deter. The acceptable period should be anything between zero days and any period which does not cause suffering in excess of the suffering it is intended to prevent. If capital punishment has approximately the same deterrent effect as 20 years in prison, then 20 years should be our first option. There are other considerations, but these are the most important ones.
The part about my interlocutors – I didn’t quite get it, nor what it has to do with the meaning of utility.
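(A minimal sketch of the decision rule stated above – among candidate punishments, pick the least severe one whose estimated deterrent effect is approximately the best available. The labels, severity scores and deterrence figures below are invented purely for illustration, not measured quantities.)
candidates = [
    # (label, severity score, estimated deterrence on a 0-1 scale) -- invented numbers
    ("1 month prison", 1, 0.40),
    ("1 year prison", 5, 0.70),
    ("20 years prison", 60, 0.90),
    ("capital punishment", 100, 0.91),
]
TOLERANCE = 0.02  # how close to the best counts as "approximately the same deterrent effect"
best = max(d for _, _, d in candidates)
acceptable = [c for c in candidates if best - c[2] <= TOLERANCE]
print(min(acceptable, key=lambda c: c[1])[0])  # -> "20 years prison"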
bianca steele 08.29.11 at 9:20 pm
Henri,
I think “indoctrinate” has sinister overtones, though not quite as sinister as “brainwash.” It definitely suggests something different from childhood acculturation into the community ethos. For what you say you are talking about, I would use socialize, acculturate, maybe assimilate, and maybe there are less clinical sounding words but I can’t think of them offhand. If you were using “indoctrinate” when you meant, simply, something along the lines of “learn the ropes,” I think I would end up being quite confused for some time until I figured it out.
Salient 08.29.11 at 9:29 pm
The part about my interlocutors – I didn’t quite get it, nor what it has to do with the meaning of utility.
What is the meaning of utility?
If you can’t imagine a justification for punishment/determent/whatever that doesn’t rely on utility, you’re facing one of two problems: either your definition of ‘utility’ is so absurdly broad that it sweeps up all possible justifications for detaining someone, or your imagination is impoverished. I think I’m being reasonable in assuming the former, right?
Salient 08.29.11 at 9:48 pm
here and now for us it’s just the rule.
I think you're right, and there's a lot of talking-past-each-other happening. Suppose you^1^ sit down one evening after realizing you're not satisfied with various aspects of your life and your interactions with people and such, and you determine somehow that some of the intuitive rules you've been using are not appropriate. The 'somehow' might be random meandering thoughts, or you might try to apply a systematic approach to organize your thoughts about how to determine which rules are appropriate. It's during that reflective process that someone might try to incorporate a utilitarian or contractarian view of what 'appropriate' should mean.
Another situation. Suppose you feel uncomfortable with what your government is doing, and you feel you might have some small ability to influence it — perhaps just by complaining and being a nuisance. For personal reasons you really really want to ‘get it right’ and have a readied, organized way to discuss what is inappropriate about what the government is doing, and what would be more appropriate. You want to be persuasive, and you feel that to be persuasive, you need to be able to present a somewhat complete and coherent vision of what governments ought to do. In this case you might try to think through from first principles: what rules are appropriate for governments to enforce? That might lead you to desire a first-principles framework to build from.
Now you might say, most people don’t do that. Sure, but most people don’t do what a narrow swath of unusually highly self-reflective people tend to do. Saying that such a systematic approach is unusual doesn’t knock it.
Most folks who are advocates of utilitarian or contractarian models seem to be advocating their model specifically to highly reflective people who want to 'get it right' and who want compelling arguments that the set of first principles from such-and-such model supports a useful framework for reflective thought – perhaps even arguments that such-and-such model supports a more useful framework than the two well-known alternative models. I don't see very many formal arguments that everyone ought to be that reflective, just a bit of occasional exasperated emoting to that effect!
^1^From the second sentence on, 'you' = a person in general; I'm not trying to make a personal statement.
Slex 08.29.11 at 9:56 pm
@ Salient
I can’t imagine a plausible justification for the differences in the punishments (in the cases of the broken nose and serial rape) that does not involve utility comparisons. And no one has come up with one so far in the thread.
Salient 08.29.11 at 10:01 pm
According to me the purpose of punishment is first and foremost to deter.
That actually has very little to do with my statement, which was 'utility: you keep using that word. I do not think it means what you think it means.' But still, while we're on it – the purpose of punishment itself shouldn't be to deter, I don't think. A purpose of proposing and promising punishment might be to deter, but actually carrying out punishment as a form of deterrence is pretty widely acknowledged to be cruel and perverse nowadays – we don't put crucifixes up on the side of the road, we don't cut thieves' hands off, we don't brand or mutilate convicted criminals, we don't put people in stocks in the public square to be humiliated and tomatoed by passersby. Those methods of punishment are designed to deter by "setting an example" of people who have committed the crime.
Nowadays we show more respect for people — even criminals and potential criminals-to-be! — so we don’t use punishment itself as a form of deterrent.
In other words, saying "I am going to credibly threaten to punish you with prison so you don't do this thing you shouldn't do" is … at least coherent, though on a thread where it's not off-topic I'd have some negative things to say about it … but saying "I am going to send you to prison to help make sure others don't do what you did" makes no sense unless you very publicly display the suffering of the person being sent to prison and then try to maximize their suffering subject only to some constraint, which is fairly uncontroversially barbaric (but potentially justifiable under some utilitarian systems!)
Consider: the suffering of one criminal is such a tiny thing if it prevents the suffering of a thousand potential criminals-to-be. So on grounds of utility, you could justify extreme mutilation for any important offense: the more extreme and spectacular the punishment, the more likely you are to deter would-be offenders!
It's not like anybody consults the jail-sentence tables before breaking and entering. If you want to use punishment to deter people from committing burglary, nothing short of mutilation-with-spectacle will be effective. Nobody avoids breaking-and-entering because the specific punishment is 'not worth it' to them, and people who break-and-enter almost always still would have even if you had quadrupled the sentence. To be a bit provocative about it: any system of punishment which eschews spectacular mutilation or publicly displayed humiliation and suffering cannot be justified on grounds of utility. Potential lawbreakers just don't pay attention to the specific consequences otherwise, so the relative deterrent effect of ten years prison vs five years prison is nearly zero.
Salient 08.29.11 at 10:09 pm
And no one has come up with one so far in the thread.
Well, in a way, that's a point against you. Suppose I say, "I can't envision any justification for taxes that doesn't rely on hippogryff." You would be inclined to reply: "Well, uh, one of two things is going on – either your definition of that word hippogryff is so broad that it's not a useful word, or your ability to envision things is really weirdly skewed. Probably the former."
It would make sense if you said, “here are three justifications for punishment that are not based on utility, X, Y, Z, and I think all of them are horrible” — then we could at least get down to specifics. As it is, it sounds like you’re meaning ‘utility’ to be something as general as ‘whatever we think is good for society overall’ which is too broad to be useful. And if you mean something more specific, then specifying it carefully should help you come up with examples on your own of non-utilitarian justifications for punishment (not to say I’m unwilling to help, as I think this is interesting enough as it goes, but it’s impossible for any of us to help if your definition of utility is unspecified and seems too broad for us to nail down).
Slex 08.29.11 at 10:33 pm
@ Salient
Given that everybody here seems to be arguing from a non-utilitarian point of view, how come I am the one who should provide the non-utilitarian argument?!
And if you say "I can't envision any justification for taxes that doesn't rely on hippogryff", I will probably say, for example: "I don't know what a hippogryff is, but in my view, taxes are justified, because they allow for the redistribution of resources in a way that maximizes utility". Then, I don't know what you'll say. Probably "No, your position is wrong" or "Oh, but utility is just like hippogryff!".
Likewise, you or anyone else could have simply come up with your own position on why the punishments should be different (the non-utilitarian versions), instead of explaining at length why there is no point in coming up with such a position in the first place. Maybe my view of utility will turn out to be too encompassing, or maybe more than one imagination is impoverished.
Slex 08.29.11 at 10:53 pm
@ Salient
Regarding #168
You seem to be conflating punishment and corporal punishment. Imprisonment is certainly a form of punishment, though not corporal. I personally have nothing against corporal punishment, as long as it is optional for the offender (i.e. he can choose between 1 year in prison or 50 lashes) but I don’t think it should be done in public.
What works and what doesn't work well in terms of deterrence is an empirical matter. But potential lawbreakers do pay attention to the consequences – to what extent depends on the type of crime. Expected punishment would deter more in planned crimes than in crimes of passion. The probability of being caught is just as important, if not more so, than the punishment itself. There are also other factors, biological and socio-cultural, that provide the framework within which the choice whether to commit a crime is made.
Slex 08.29.11 at 11:05 pm
To say that punishment does not matter is simply not true. At some point the marginal deterrence effect becomes zero for most crimes, so there is no point in imposing ever heavier punishments. It is also questionable whether the current prison system is the best way to punish, but punishment does matter.
Tim Wilkinson 08.29.11 at 11:59 pm
The paper isn’t really about utilitarianism. The examples are, in the first instance, about doing/allowing, double effect, novus actus, etc., and by concerning themselves solely with generic deaths, they exclude any distinctive role for ‘utility’.
Slex 08.30.11 at 12:05 am
@ Tim Wilkinson
There are a lot of things we disagree about with other people in this thread, but I don't think anyone will question that we have strayed from the OP (and the paper).
Henri Vieuxtemps 08.30.11 at 5:59 am
Bianca, I think it's one of those things where we 'instill values' in our children, while they 'indoctrinate'. So, I'll say: it's all indoctrination, whether you raise them to be good Anabaptists, or good Satan worshipers, or good secular humanists and freethinkers. Same shit, different doctrines (or 'memes', if you wish).
Andrew F. 08.30.11 at 10:08 am
Slex @137: In my opinion, ethical theories should be judged on their relevance in our world.
Well of course, but this is the very question at issue!
The point of the Utility Monster is not that (i) the Utility Monster has a significant chance of occurring in our universe, and (ii) classical utilitarianism would lead us astray should we encounter one, so (iii) we had better not use classical utilitarianism lest we be led astray should we encounter a Utility Monster.
If that were the point, then you and I would be in full agreement that the problem posed by the Utility Monster is silly.
Instead the point is that (i) classical utilitarianism leads us astray when confronted with the possibility of a Utility Monster, and (ii) the best explanation of WHY classical utilitarianism leads us astray also undermines classical utilitarianism.
We want an ethical theory to not only produce outcomes in our likely universe that we find ethically sound, but also to explain why those outcomes are ethically sound. When a theory fails to explain why its produced outcome for a logically possible situation seems ethically unsound, this is a mark against it, since it means that – for some sets of facts – the theory does not accord with deeply held moral intuitions.
And when the best explanation for why that theory fails is also an explanation that denies a key premise of that theory, then we’ve found a potentially damning argument.
Brett Bellmore 08.30.11 at 10:12 am
What you’re doing here is conflating “utilitarianism” with “caring about other people”. While it is indeed quite common to care about other people, it is exceedingly rare to, for instance, take seriously the idea that you should live on instant ramen so as to maximize your charity to the worst off in the world. Or, in dating a beautiful woman, wonder if there was somebody else out there who’d enjoy her company more. Or seriously contemplate covertly killing unloved orphans so as to increase the supply of transplant organs without causing too much grief to families.
These latter, conspicuously not-engaged-in practices are consequences of the structure of utilitarianism, which is universal, in that it implies duties towards people you've never met that are equal to those you might have to your acquaintances, and reliant on formal, pseudo-objective, math-like interpersonal comparisons. "Three hamburgers – a stubbed toe = an orgasm" type reasoning.
Which nobody engages in, because they don’t have the numbers to do the math on.
So, no, I stand by my position:
Brett Bellmore 08.30.11 at 10:19 am
Nobody engages in utilitarianism, because you don't have the numbers. It's a moderately good metaphor for some actual practices, but like most metaphors, you're better off not confusing it for the actual thing.
Alex 08.30.11 at 10:24 am
172: Mark Kleiman has published some interesting research on this – there doesn’t seem to be much if any relationship between the severity of punishment and the rate of a lot of crimes, but there is a strong relationship between the crime rate and the detection rate. I.e. criminals don’t think “how many years will I get?” but “will I get away with it?”
There’s quite a lot of uncertainty about what will happen in the courts, but it’s certain as can be that getting arrested is no fun and involves lots of pointless hanging about waiting for hearings and perhaps also unofficial police brutality. If you’re actually living by crime, of course, it means you’re off the streets and essentially unemployed for a while whatever happens. Also, what sentence you get is not completely out of your control, especially in countries where plea-bargaining is practiced.
So it’s quite rational to worry more about the chances of getting caught than about the sentence you’re likely to get.
Tim Wilkinson 08.30.11 at 11:19 am
And what do non-‘criminals’ think? They are also deterred who stay in line.
Convicted criminals (NB where the conviction is valid) are those for whom both deterrence and the attempt to get away with it have failed – selecting from this group is likely to skew things.
Though I would guess that this would only strengthen the thesis that sentence severity is largely irrelevant – since the majority of the law-abiding are probably those for whom a criminal record, name in paper, getting publicly nicked, losing job etc are enough to deter, given the highish payoff of refraining from crime that arises from being pretty comfortably off within the law.
Difficult to get people to acknowledge this, though, I should think, since it’s probably quite well-internalised and inchoate.
Alex 08.30.11 at 11:40 am
And what do non-’criminals’ think? They are also deterred who stay in line.
Less important in practice than in theory – something like 90% of crime is committed by the top few % of the criminality distribution.
Tim Wilkinson 08.30.11 at 11:40 am
What you're doing here is conflating "utilitarianism" with "caring about other people".
Brett Bellmore and the authors of the paper between them provide a distributed two-step.
The authors conflate “utilitarianism” with “not subscribing to a certain kind of deontological fundamentalism based on a certain rigid and simplistic ‘clean hands’ approach to some or all of doing/allowing, double effect, novus actus etc (or, weirdly, actually believing that the numbers are literally irrelevant, so that there is nothing to choose between 5 generic deaths and 1)”. Which is not to say that their empirical findings have much relevance to that.
BB conflates “utilitarianism” with “Bentham’s felicific calculus explicitly and rigorously applied”.
Of course in the examples used in the original paper, the whole point (of their original application) is that one does have the numbers, and nothing else ethically relevant to go on in distinguishing the outcomes.
Slex 08.30.11 at 12:51 pm
@ Andre F.
As I have written before, a classical utilitarian has no reason to reject the Utility Monster, other than on an emotional basis. If he/she is logically consistent, he/she should accept it as the morally superior position.
The point is not that what you say here is not true:
When a theory fails to explain why its produced outcome for a logically possible situation seems ethically unsound, this is a mark against it, since it means that – for some sets of facts – the theory does not accord with deeply held moral intuitions.
It's rather that what you've written is valid not only for classical utilitarianism, but also for deontology. Would you sacrifice the life of an innocent person to save the rest of humanity? No, if you are a deontologist. Repugnant conclusions that go against our moral intuitions are not the specific domain of a single ethical theory. And I personally prefer an ethical theory with problems in a logically possible but practically impossible world to an ethical theory which leads to conclusions that go against my intuition in the world as we know it. To put it shortly, utilitarianism is imperfect, but it is better than the alternatives.
Besides, as has already been mentioned, the Utility Monster is not a problem for several versions of utilitarianism.
Slex 08.30.11 at 1:11 pm
@ Alex
It depends on the type of crime. For example, if stealing huge amounts of money carried only a year in prison when caught, you would see a lot of people trying it. But with the current system of punishments, the number of Bernard Madoff wannabes is much lower.
Crimes of passion are another story. But even then you have no reason to significantly lower the sentences on epistemological grounds. Some people may commit a premeditated murder disguised as an act of passion, so we are not justified in having a very large margin between the sentences they get (e.g. 1 year versus 25 years).
Beyond a certain level, every punishment becomes ineffective, but this does not mean that punishment does not matter. There are many reasons for this – potential criminals overestimating their chances of getting away, hyperbolic discounting, etc. But try to imagine what it would be like without punishments at all.
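(A minimal sketch of the overestimation-and-discounting point, with entirely made-up numbers; the perceived_cost function is an illustrative assumption, not anything from the thread. It shows why, beyond a point, adding years to a sentence adds little perceived cost, while the chance of being caught feeds through directly.)
def perceived_cost(p_caught, sentence_years, k=1.0):
    # Hyperbolic discounting: a year of prison suffered t years from now is weighted 1/(1 + k*t).
    return p_caught * sum(1.0 / (1 + k * t) for t in range(1, sentence_years + 1))

print(perceived_cost(0.1, 5))   # ~0.15
print(perceived_cost(0.1, 10))  # ~0.20  (doubling the sentence adds little)
print(perceived_cost(0.2, 5))   # ~0.29  (doubling the detection rate doubles the cost)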
Slex 08.30.11 at 1:24 pm
On a second thought, I am willing to reconsider some of what I’ve written about the crimes of passion.
Henri Vieuxtemps 08.30.11 at 1:50 pm
…weirdly, actually believing that the numbers are literally irrelevant, so that there is nothing to choose between 5 generic deaths and 1…
Well, I'll defend Brett, and I'll up the ante: not only do you not have any numbers to compare, but, even when it all seems completely clear and obvious (as in your example), you, in fact, don't have the slightest idea of what the result is going to be.
You kill one to save many, you torture a terrorist to get life-saving information – and in a specific, isolated case it may all make perfect sense, but this doesn't happen in isolation. Norms will change, killing and torturing a few individuals for the benefit of many will become acceptable and even commendable, and soon enough you will, perhaps, find yourself in a society with the NKVD defending The People against their mortal enemies, or something.
Could you calculate all that when you decided to kill one to save five? I don’t think so.
Tim Wilkinson 08.30.11 at 4:11 pm
Or – just for balance – to stand by as five are killed so as to avoid being personally implicated in killing one (to make things clearer, stipulate that the one is also among the five, perhaps?)
But any reasonably sophisticated approach to utilitarianism as an organising principle (Mill, Hare, Feldman spring to mind) would of course take into account the many conclusive reasons not to apply ticking time bomb fantasies to the real world. And epistemic limitations will also apply to discerning the nature of acts for deontological purposes – thou shalt not impose risk of death?
I suppose the point I had in mind was that the examples themselves were designed as abstract debating points stipulated to depict closed systems with various unrealistic assumptions such as perfect foreknowledge, and that the original paper didn’t specify any such presuppositions. Nor did it try to ascertain whether respondents took the examples in that spirit, or whether they very understandably introduced (consciously or not) the kind of wider considerations that you mention.
In the paper, the respondents were asked to imagine themselves in the proposed situation, so not only would this tend to encourage the introduction of real-world (consequentialist) factors you mention, but in particular would tend to require people to consider whether they personally would have the stomach to directly kill.
Programmatic remarks on the wider issue:
Actual moral practice and intuition tends to combine, in varying proportions according to circumstances and framing, the interrelated categories:
1. default simple absolute rules based on a characterisation of ‘the act itself’, as Jonathan Bennett would put it, that incorporates certain ways of thinking about personal moral responsibility – the verb
2. the cultivation and instantiation of personal virtues – the subject
3. a direct pragmatic assessment of actual predicted consequences – the object
Consequentialism as a programme is then the approach which holds that if one wants to explain and possibly revise these practices and intuitions, then consequences in the broadest sense must in the end be the arbiter – or, as one might say, consequences are the only element that can subsume the others, or that can never be eliminated without making the whole enterprise senseless.
Salient 08.30.11 at 4:14 pm
Provided that everybody here seems to be arguing from a non-utilitarian point of view, how come I am the one who should provide the non-utilitarian argument?!
I throw my hands in the air. You’re the one saying that literally every conceivable justification is utilitarian. No matter what we say, you’ll grin and respond, “but that’s utilitarian!!!!!!1!!” If you don’t follow this up with a very carefully stated definition of utility that would at least offer us the possibility of proposing something you can’t smirk at and call utilitarian after-the-fact, why should we attend to your statements rather than ignore them?
At some point the marginal deterrence effect becomes zero for most crimes
This just isn’t true. The marginal deterrence effect increases the more you make a punishment brutal and spectacular and over-the-top. The only point at which the marginal deterrence effect drops off and becomes zero is when the absolute deterrence effect has become 100%.
And again, nobody consults the criminal code before deciding to commit a crime, even a carefully planned crime. Not even stuff like embezzlement. Nobody does it. Nobody’s behavior would change if you swapped every ten-year sentence for every one year sentence and vice versa. Nobody. The main difference you’d see is in recidivism — people likely to recommit a crime getting out earlier or later and recommitting the crime shortly after their release. But that has nothing to do with alleged deterrent effects of punishment. Punishment versus no consequences whatsoever might offer some generic deterrent, but a specific punishment offers very little potential for deterrent effect above the generic unless it makes the person’s blood run cold to think of it.
Tim Wilkinson 08.30.11 at 4:16 pm
the many conclusive reasons not to apply ticking time bomb fantasies to the real world
or rather not to misapply them. One way this gets done, besides failing to reintroduce the epistemic and other limitations that are stipulated away by 'thought experiment' type debating points, is by reading off state policy and law directly from personal ethics.
Salient 08.30.11 at 4:21 pm
Though I would guess that this would only strengthen the thesis that sentence severity is largely irrelevant – since the majority of the law-abiding are probably those for whom a criminal record, name in paper, getting publicly nicked, losing job etc are enough to deter, given the highish payoff of refraining from crime that arises from being pretty comfortably off within the law.
Exactly. There doesn’t exist a person in the universe who says, “I was going to commit this crime, but then I learned the possible jail time is four years instead of two, so I decided not to.” Substitute any numbers except zero for ‘four’ and ‘two’ there and it’s still an accurate statement. There do exist plenty of people in the universe who might say, “I was going to commit this crime, but then I learned that they will punish me by [insert something horrifying and brutal here, like forcing the person to watch as the authorities murder their family].” For punishment or threat of punishment to have a more powerful deterrent effect than the generic consequence of {criminal record + name in paper + publicly nicked + losing job + generic prison sentence}, it needs to scare the crap out of people.
Brett Bellmore 08.30.11 at 4:56 pm
“Would you sacrifice the life of an innocent person to save the rest of humanity? No, if you are a deontologist.”
No, if you are sane. How often do situations that require killing one innocent person to save the rest of humanity come up? Once a month, and twice on leap years? No, they're so wildly improbable that, should you perceive yourself to be in one, the only rational conclusion to draw is that you've suffered some kind of psychotic breakdown, and that you should curl up into a ball while waiting for the men in white coats to snap you out of it.
Sure, it's a real pity if that Presidential candidate really WAS the antichrist, or your dog really is a disguised time traveler warning you of the coming apocalypse. But that's just a risk any sane, responsible person would have to take.
Slex 08.30.11 at 6:07 pm
@ Salient
If I argue about morality with a theist who claims "I can't imagine morality without God", I will not wave my hand in the air. I will tell him about Euthyphro, probably. Then he might say that god is good and good is god by definition (at which point we might agree to disagree, or I may challenge his views about god and go on debating) or maybe, just maybe, he will have his "aha" moment.
You are accusing me of having a vague position when you haven't stated yours at all! You do have a position, don't you? You've had it even before this debate, I suppose? Intuitively or thoughtfully you probably had some explanation why a pickpocket and a rapist should not be subjected to the same punishment? Is it so difficult to just write it here? Maybe I will have my "aha" moment. Maybe we will be involved in an argument about utility. Maybe we will agree to disagree. What's the big deal?
Do you realize that your argumentation tactics can be applied to practically anything? “I can’t imagine the provision of pure public goods by anyone but the government” – “Oh, but you have no imagination!” -“OK, then, tell me how private companies will provide pure public goods” – “I won’t because you will blur the distinction between private and public”. You get the idea, right?
Slex 08.30.11 at 6:32 pm
Most people don't read the penal code but they have roughly an idea about what punishment corresponds to what type of crime, because the law is, more or less, a reflection of the values of the societies they live in. A lot of people regularly involved in criminal activity, however, are aware even of the details. The proposition that punishment does not matter is plain and simple wrong. Of course, it is subject to the preferences of the socio-cultural environment, which is the main determinant of crime. Most people won't violate the norms of the society they live in anyway, but especially for those that are willing to do so, punishment matters. If the penalty is small enough, it will be considered a cost worth taking, if outweighed by the expected profit. This is especially true for financial crimes.
The marginal deterrent effect of a punishment drops to zero after the punishment becomes severe enough. It is practically impossible to conceptualize in advance the difference between very severe punishment and very very severe punishment on a meaningful level, let alone modify your behaviour based on it. The marginal deterrent effect will not be zero when 100% deterrence is achieved, it will be zero when an additional unit of punishment fails to make changes in the deterrence rate.
Slex 08.30.11 at 6:35 pm
… and this can and most probably will happen way before the 100% deterrence rate is achieved.
bianca steele 08.30.11 at 7:22 pm
Henri,
Okay, now I think I understand what you were saying. So if we didn’t have memes in our brains, would we have any idea of norms? If we would, is there some way we can rediscover those norms after we’ve been indoctrinated? Or do we just have to make our choice, one set of memes or the other, “as a conservative, I think government is a stern father” or “as a progressive, I think government is a nurturant parent”?
Salient 08.30.11 at 7:47 pm
The proposition that punishment does not matter is plain and simple wrong.
The proposition that fairly wide variation in term sentence would not matter (in terms of deterrence) is defensible.
Intuitively or thoughtfully you probably had some explanation why a pickpocket and a rapist should not be subjected to the same punishment?
A rapist should be removed from society, because they attempted to derive satisfaction from inflicting suffering on another person. (Any attempt to interfere with that removal or to reintroduce the person to society could be interpreted as a grave violation, if you really wanted to make this hard-line.) We as human beings have an obvious stake in ensuring the state (or any organizational entity to which we willingly defer a monopoly on violence) takes on the responsibility for regulating ways in which we might attempt to induce suffering in one another.
For pickpockets I have little such intuition, and certainly no strong feeling that they ‘should’ be punished at all — it seems like a pickpocket is likely to have inconvenienced quite a lot of people beyond even the pickpocketed person, and a natural consequence of that should be to require remediation from that person — perhaps in the form of donation of services to the state, perhaps in the form of a financial forfeiture (but to whom?); my intuition certainly doesn’t lead me to any specifics there.
But the whole notion of a pickpocket as sufficiently disruptive to the state to justify incarceration requires a rather specific conception of what the state is supposed to regulate, which is exactly where {utilitarianism, contractarianism, etc} begin to diverge.
Most people don’t read the penal code but they have roughly an idea about what punishment corresponds to what type of crime
No they really really don’t, except in the very crudest cases, or except for folks who watch CSI or Law & Order type shows frequently. (And it’s only because of shows like that and their peculiar foci, that the average American knows categories like Murder 1 vs Murder 2 vs manslaughter exist, and knows that people can ‘plea bargain’ and can offer to substitute a confession for a crime they did not commit for prosecution of a crime they might have committed, etc.)
A lot of people regularly involved in criminal activity, however, are aware even of the details
(and by definition of regularly involved, are not deterred)
Most people won’t violate the norms of the society they live in anyway, but especially for those that are willing to do so, punishment matters.
I think the whole point of a utilitarian or contractarian approach to what-is-appropriate-punishment is to move away from this communitarian social-norms conception.
It is practically impossible to conceptualize in advance the difference between very severe punishment and very very severe punishment on a meaningful level, let alone modify your behaviour based on it
This is just not coherent to me. The difference between “we will kill you” and “we will torture you for one year by isolating every person you love and forcing you to witness us invent novel forms of suffering to inflict upon them for ten hours each day” is so much more obvious than the difference between “prison for four months” and “prison for four years” that I can’t even begin to understand this claim. Someone who is willing to endure their own death as a consequence of their actions, may very well abandon their planned actions if they are informed that some greater suffering than death will be inflicted. This just isn’t that hard.
The marginal deterrent effect will not be zero when 100% deterrence is achieved, it will be zero when an additional unit of punishment fails to make changes in the deterrence rate.
Name a crime. Any crime. Find a nonzero quantity of people who are willing to endure death as a consequence of committing that crime. I promise you that any capable theorist can devise an “additional unit of punishment” that will induce those individuals to revise their course of action and avoid the crime.
The Fool 08.30.11 at 8:03 pm
Allow me to summarize the state of the debate on this thread so far: Slex wins.
FYI: I am currently working on a paper noting certain correlations between rule-worshipping deontologists and the tendency, even beyond childhood, to believe in fictions like witches, unicorns, and Casper the Friendly Ghost.
Shorter Brett Bellmore: “But he’s the friendliest ghost you know!”
Salient 08.30.11 at 8:14 pm
Allow me to summarize the state of the debate on this thread so far: Slex wins.
Which is fine as far as it goes (I for one am bewildered about what we're even debating and thus don't stand a chance), but in all the details it's Slex's win for the communitarian-intuitionist column, not the utilitarian column.
Henri Vieuxtemps 08.30.11 at 10:03 pm
So if we didn’t have memes in our brains, would we have any idea of norms?
Of course not. Someone who grew up alone in a dark basement would not have any idea of anything at all, except for the most basic physiological functions.
Yes, once you’ve been indoctrinated, formed as an individual, it’s very difficult to change. You could have an epiphany and become a different person, but that’s rare.
bianca steele 08.30.11 at 10:43 pm
Henri,
Maybe I'm missing something, but it seems a point against your theory that it logically implies every movement's best bet for new recruits is wolfboys.
Slex 08.30.11 at 11:37 pm
@ Salient
It wasn’t that hard. I basically do agree with you. A rapist should get a heavier punishment than a pickpocket, because the rapist causes more harm (the one causes suffering, the other inconvenience). We don’t have a way to objectively verify it, but to think otherwise would be contrary to our experience. We do make utility comparisons (and I don’t think the notion of dis/utility in this case is too vague, abstract, or encompassing).
As for the severity of punishment, whether someone will be tortured for a year and killed, or tortured for two years and killed, really wouldn't make a difference as far as deterrence is concerned. Just as we don't perceive the difference between 10 million and 20 million – we know the latter is twice the former, but both fall into the category "an awful lot". Excluding, probably, the likes of NBA players and Hollywood stars, if you ask the average Joe on the street what he will do for 10 million or for 20 million, you will see that it won't make much (if any) difference.
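(A minimal numerical sketch of the 10-million-versus-20-million point, assuming, purely for illustration, that perceived magnitude grows roughly with the logarithm of the amount, Weber–Fechner style; the log scale is an assumption introduced here, not anything claimed above.)
import math
for amount in (10_000, 10_000_000, 20_000_000):
    print(amount, round(math.log10(amount), 2))
# 10000       4.0
# 10000000    7.0
# 20000000    7.3  <- only 0.3 "perceived units" above 10 million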
Now, I can't think of an exact example to meet your challenge, but this at least comes close: the prospect of Hell. It offers a theoretically infinite punishment. You can't devise an additional unit of it. And yet, history shows that it has not achieved 100% deterrence. Over the years many true believers have committed horrendous crimes. Some didn't look like crimes in their own eyes, e.g. the Inquisition. Some were probably rationalized away (God will punish, but not with Hell). But at least some of them have really pondered the possibility of going to hell, both before and after the offence. This is especially true of crimes of passion and sex-related offences.
There is another reason why an overly harsh penal code may backfire and fail to deter. Punishment is important, but not the only determinant of crime. You can levy more severe punishments and expect at least a small increase in deterrence, provided that nothing else changes. But what you propose (just for the sake of the argument, not that you really endorse it) – public torture and mutilation – could, we may suppose, habituate the senses and change the overall attitude towards violence in society. We would have something like hedonic adaptation, just in the opposite direction – people would become accustomed to scenes of violence. If you take as examples countries such as Uganda and Sudan, you will see what I mean.
This brings me to my next and last point: that the regulation of behaviour in society as a whole is best done by norms and values. Norms and values can take different forms, however, and the utilitarian position (and I think the deontological position) should be to popularize the norms corresponding to the respective ethical view, up to the point that they get internalized by the largest possible number of people. There will always be deviants, however, and that is where punishment has a role to play.
Henri Vieuxtemps 08.31.11 at 1:41 am
Bianca, not really a theory; just a description of some banal facts of life.
What are some other possibilities: that we’re born with social norms already programmed in us? That seems rather unlikely, doesn’t it? That adults change their personalities like gloves? Nah, I don’t think this is supported by observations.
Andrew F. 08.31.11 at 2:55 am
Slex @183: As I have written before, a classical utilitarian has no reason to reject the Utility monster, other than on emotional basis. If he/she is logically consistent, he should accept it as the morally superior position.
It’s no more an “emotional basis” than are the very intuitions that furnish the impetus for utilitarianism in the first place. We reason ethically, in part, by considering different possible situations and trying to account, in some satisfactory way, for our different intuitions, strong and weak, regarding the ethical course of action in those situations.
Certainly, one can cling to a chosen ethical position and ignore any contradictory ethical intuitions, regardless of their strength. But this is the path of the absolutist and the ideologue, that of the libertarian who refuses to countenance taxation for the sake of destroying a humanity-ending meteor. I don’t think such a position is persuasive to most – nor do I think it particularly a wise position to adopt.
Regarding the rest of your comment, I agree that other moral theories suffer from the same failings, but that to me is simply a good reason to reject them all as complete accounts of ethical questions. I’m a pragmatist and a humanist, and I view utilitarianism as quite useful – once one rejects some of the absolutist tenets of its classical form.
Yarrow 08.31.11 at 3:13 am
The Fool @ 197: I am currently working on a paper noting certain correlations between rule-worshipping deontologists and the tendency, even beyond childhood, to believe in fictions like witches, unicorns, and Casper the Friendly Ghost.
Hey! I’m a witch, as are my co-religionists. Why do people keep disbelieving in us? I’m attracted to rule consequentialism myself; some of us may be deontologists pure and simple. Not “rule-worshipping deontologists”, mind you. We’re mostly anarchists.
Salient 08.31.11 at 1:47 pm
A rapist should get a heavier punishment than a pickpocket, because the rapist causes more harm (the one causes suffering, the other inconvenience).
I’m not sure ‘more harm’ ever entered into it, on my end. In coming up with the assertion I didn’t make a comparative analysis. It’s definitely true that I carefully and completely discounted all notions of aggregate utility. This is what I meant about me offering a very non-utilitarian deontological proposal and it getting the “well that’s utilitarian!” response. So far as I can tell neither you nor I applied an assessment of utility when making that evaluation of appropriate punishment, and saying that under the right circumstances there exists a possible way for a utilitarian to come to the same conclusion… is falling prey to what I was complaining about.
We do make utility comparisons (and I don’t think the notion of dis/utility in this case is too vague, abstract, or encompassing).
But neither of us made a utility comparison. Suppose hundreds of thousands of people derive 'utility'^1^ from the attacker remaining free and unhindered (perhaps for reasons unrelated to the crimes, e.g. because the rapist is an acclaimed and beloved film director). The attacker should still be removed from society, by the way of reasoning that I proposed. That directly contradicts a utilitarian approach (I think my proposed reasoning is fairly deontological, maybe somewhat intuitionist, but very opposite-of-utilitarian).
There is another reason why an overly harsh penal code may backfire and fail to deter.
My assertion would be ‘for any penal code to successfully deter crime in a differentiated way, it would have to be overly harsh’ (where in a differentiated way means varying intensity of punishment depending on the crime).
I agree that assigning some sort of consequence versus none at all has a powerful deterrent effect (e.g. losing one’s job and receiving ex-con status is pretty huge no matter if you get 10 years in prison or 10 days), I just don’t agree that varying certain aspects of that consequence in a non-brutal way will have a measurable effect on deterrence. Specifically I don’t think varying the length of a prison sentence will have a measurable effect on deterrence.
I acknowledge that a longer prison sentence will mean it literally takes longer for a recidivist to get back out and be able to commit their crime again. But that’s not a question of deterrence, it’s a question of threat-removal.
Hell… offers a theoretically infinite punishment.
Your intuitive feeling that this isn’t a terribly good example matches mine. We have no evidence that the place exists, and there’s definitely no evidence that a state (or other enforcing entity) can credibly assert its intention to assign eternal Hell to people.
If you take as examples countries such as Uganda and Sudan, you will see what I mean.
Well, yes. That (and the rest of what you said) is part of why I feel punishment as deterrence is horrible and wrong. My two-step is this: [1] to be an effective deterrent, punishment has to be brutal, [2] that brutality is repellent to me, therefore [3] I should not endorse any attempt to employ punishment as a deterrent. I endorse restricting a person’s access to society by force only in those circumstances where I feel confident asserting that person has attempted to derive satisfaction from inducing suffering in another human being (I am stating this in absolutist terms for the sake of simplicity).
I’ve recovered enough from some blood toxicity problems to reread my own comments here and elsewhere slightly more clearheadedly, and have discovered I’m not at all happy with them (wondering why on earth I was being so silly and dramatic) so I’d best bow out of the discussion.
^1^I put ‘utility’ in quote marks here to indicate that we can be fairly flexible about what ‘utility’ means and the statement’s still true.
Henri: What are some other possibilities: that we’re born with social norms already programmed in us? That seems rather unlikely, doesn’t it?
Depends on what you mean by ‘social norms.’ I’d say it’s plausible our brains are more receptive to some types of indoctrinated beliefs and less receptive to others, and that that receptivity could be physiologically pre-programmed.
That adults change their personalities like gloves? Nah, I don’t think this is supported by observations.
Well,
Slex 08.31.11 at 2:51 pm
Salient, you are missing the point of the Hell example. It does not matter whether it really exists. The point is that some people think it does, and still it does not deter them from committing crimes.
Regarding the pickpocket and the rapist, you can’t avoid a utility comparison, and even though I am running out of words to explain it, I will make one last attempt to be more precise. In my previous post, when I said the rapist caused more harm than the pickpocket, I intentionally used your own words – the former causes suffering, the latter merely inconvenience. It’s just that you don’t weigh the utility of the perpetrator against the disutility of the victim (to determine whether to allow the act), but rather the disutilities caused by two different perpetrators in two different kinds of offences. Even if you decide to incarcerate the rapist for a long time in order to incapacitate him, rather than to deter other potential rapists by making an example of him, that decision still involves an estimate of (dis)utility. There are people who enjoy mocking people with disabilities; we find that reprehensible and apply milder sanctions, but we don’t lock them behind bars for 20 years.
This example doesn’t try to justify utilitarianism as a whole; it tries to counter the claim that we can’t make interpersonal utility comparisons, and it points to the fact that virtually no one is utility-agnostic.
Salient 08.31.11 at 3:20 pm
The point is that some people think it does, and still it does not deter them from committing crimes.
I dispute [1] that assignment to Hell is superlative suffering (I’d take an eternity of postlife Hell for a productive predeath life any day, and even if you disallow that kind of preference, surely people will generally prefer to endure Hell themselves rather than cause loved ones to be forced to endure it), [2] that people hold the exact combination of beliefs that would lead to your conclusion, and [3] other stuff.
[2]: for Hell to be a superlative deterrent, someone would have to believe: I am guaranteed to avoid Hell so long as I avoid this specific crime. I am guaranteed to receive Hell if I commit this specific crime. Forgiveness of all transgressions up to the present moment is guaranteed, forgiveness of all future transgressions other than the specific criminal act in question is guaranteed, and forgiveness of the criminal act in question is guaranteed to be impossible. Does anyone, criminal or no, really believe all of this?
If Hell is superlative suffering, it must include the suffering of remorse, and the feeling that not only was the act a bad choice on balance, but also there was absolutely no benefit or enjoyment derived from the act in any conceivable fashion. (If you thought otherwise, you would be able to derive some satisfaction from the bit that was good, which means you’re not suffering as much as possible.) How can someone imagine suddenly feeling that categorically remorseful about something? It’s imagining becoming a different person, which implies it’s a different person suffering Hell.
the claim that we can’t make interpersonal utility comparisons
You’re not being utilitarian. You’re just using ‘utility’ as a synonym for benefit or improved experience, and ‘disutility’ as a synonym for loss or bad experience. That’s why I keep trying to tell you that you’re not a utilitarian (or at least, you’re not taking a utilitarian approach to the stuff we’re talking about). “This act hurt somebody worse than that other act, so it should be punished more severely” is, I repeat emphatically, not utilitarian. Its conclusion might coincide with the conclusion of a utilitarian assessment, but only coincidentally.
Slex 08.31.11 at 8:03 pm
Salient,
on [1] you’ll see that torturing loved ones wouldn’t really work for sociopaths.
The hell example was intended to show that no punishment can ever achieve 100% deterrence (not that it is the ultimate punishment and nothing harsher can be designed, though I assumed that to be the case), hence the marginal deterrent effect reaches zero before that point. However, the issue is not whether someone can devise a more severe punishment, but whether the additional punishment will deter more. If the threat of killing John Doe’s wife will not prevent him from doing something, neither will the threat to the life of his sister. You can find examples in history, e.g. totalitarian governments, where punishment is inflicted on relatives of the perpetrator too.
You can’t achieve 100% deterrence for crimes of passion even if punishment is very severe, and very likely this will hold even if it is certain. For premeditated crimes you can achieve 100% deterrence if the punishment is certain, even if it is not very severe. But when you combine severity with uncertainty of detection, you will fail to achieve full deterrence.
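A back-of-the-envelope way to see this (a sketch only, in Python; the function name, units and figures are invented by me, not taken from any study): a calculating offender is deterred only when probability-of-detection times severity outweighs the gain from the crime, so a low detection probability can swamp even a very harsh sentence, while near-certain detection deters even with a modest one.

def deterred(gain, p_detect, severity):
    # Toy model, invented units: a calculating offender refrains only if
    # the expected punishment outweighs the expected gain from the crime.
    return p_detect * severity > gain

print(deterred(gain=10, p_detect=1.0, severity=15))    # certain but modest punishment -> True (deterred)
print(deterred(gain=10, p_detect=0.05, severity=100))  # harsh but unlikely punishment -> False (not deterred)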
On [2], you can just do a search on the internet to find out whether there are people who think they might go to hell for something they have done. There are plenty. Probably very few of them, if any at all, think it is surely going to happen, but they at least consider it possible, and considered it possible before their misdeed.
I am not sure what you imply with your next paragraph, but I see remorse, enjoyment of the crime, and the punishment of a different person as irrelevant to the question of when we reach zero marginal deterrent effect. If a punishment is to deter, the choice must look bad before the crime, not after it. Remorse, guilt and any other subsequent feelings matter only to the extent that they are anticipated in advance (which they might or might not be) and influence the decision whether to offend, and in any case they enter the calculation (if I may say so) independently of the punishment.
Regarding utility – yes, you could define it in that way and many utilitarians will agree. Not just that, but many who criticize utilitarianism would accept it too.
“This act hurt somebody worse than that other act, so it should be punished more severely” is, I repeat emphatically, not utilitarian.
Not necessarily, but the first part of the sentence is a necessary part of utilitarianism. What you say is not utility is exactly what many critics of utilitarianism claim can’t be compared.
Substance McGravitas 08.31.11 at 8:05 pm
Where has this been achieved?
Salient 08.31.11 at 8:16 pm
The hell example intended to show that no punishment can ever achieve 100% deterrence
Well I don’t disagree with that. I don’t think differentiated punishment can achieve differentiated deterrence, unless the punishment is unacceptably severe. I thought that’s what we were disagreeing about.
you can just do a search on the internet to find out if there are people who think they might go to hell for something they have done.
That thought would have a negative deterrent effect, an anti-deterrent effect. If you figure you’re going to hell anyway over something in the past, why bother to avoid a criminal act in the near future?
Not necessarily, but the first part of the sentence is a necessary part of utilitarianism.
Not really.
Salient 08.31.11 at 8:32 pm
Regarding utility – yes, you could define it in that way
There are only so many times I can try to point out how absurd this is. No. It’s a useless definition of utility. This is why you’re unable to understand how people could devise a non-utilitarian justification for… well, anything. That’s what I’ve been trying to point out to you from the start. Utility means something much more specific than ‘good stuff’ and a utilitarian approach is something much narrower than ‘try to maximize goodness.’
If ‘utility’ is ‘good stuff’ and any model which attempts to bring about good stuff is utilitarian, well, that’s just not a meaningful way to use the words. And I can’t see that you’ve analyzed anything in utilitarian terms in any narrower sense. In the examples you’ve put forth, you’re not even attempting to aggregate over an entire population. A pretty basic tenet of utilitarianism is that you have to aggregate utility over an appropriate population. (There are some carefully constructed disaggregated forms of utilitarian computation, but they’re fairly technical and you’ve made no reference to them.)
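To make the distinction concrete, here is a minimal sketch in Python (all names and numbers are invented for illustration; this is nobody’s actual moral arithmetic): a utilitarian assessment sums (dis)utility over everyone affected and picks the option with the larger total, whereas the comparison above only ranks the harms done to victims and never aggregates over a population.

def aggregate_utility(effects):
    # Sum utility changes over everyone affected by one option.
    return sum(effects.values())

# Invented numbers: utility changes under two options.
punish = {"victim": +5, "offender": -8, "public": +4}
dont_punish = {"victim": -5, "offender": +8, "public": -1}

# A utilitarian picks whichever option has the larger aggregate total.
options = {"punish": punish, "dont_punish": dont_punish}
utilitarian_choice = max(options, key=lambda name: aggregate_utility(options[name]))

# The comparison in this thread only ranks harms to victims; no aggregation.
harm_to_victim = {"rape": 9, "pickpocketing": 2}
punished_more_severely = max(harm_to_victim, key=harm_to_victim.get)

print(utilitarian_choice, punished_more_severely)

(With these particular made-up numbers the aggregate even favours not punishing, which is exactly the sense in which the victim-harm ranking and a utilitarian calculation can come apart.)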
Pretty much everything you are saying is somewhere on the intuitionist-deontological-communitarian spectrum (and to be honest, in my eyes you’re being a completely reasonable intuitionist), but you’re calling it utilitarian, for reasons I can’t fathom. Given that, I don’t know how to move forward. It would help if you could point to your source(s) for utilitarian theory; maybe I just need to flip through the book you’re using, and heck, maybe I’m wildly off-base.
Salient 08.31.11 at 8:34 pm
Sorry, I don’t know why I’m following up like that, it’s not like I’m being useful or interesting and I just end up sounding much pissier than I intend to.
Slex 08.31.11 at 9:07 pm
@ Substance McGravitas
Where has this been achieved?
It hasn’t, because the condition for certainty does not hold. I am theorizing.
Substance McGravitas 08.31.11 at 9:16 pm
It’s hard, though, to theorize about 100% deterrence of the premeditated crime of suicide bombing. Crimes of passion and premeditated crimes overlap somewhat.
Slex 08.31.11 at 10:53 pm
@ Salient
Hell will have an anti-deterrent effect for all crimes after the first one. Supposedly, it will decrease the incidence of committing crime for the first time.
Me: The hell example intended to show that no punishment can ever achieve 100% deterrence
Salient: Well I don’t disagree with that. I don’t think differentiated punishment can achieve differentiated deterrence, unless the punishment is unacceptably severe. I thought that’s what we were disagreeing about.
I got that impression from 188:
The only point at which the marginal deterrence effect drops off and becomes zero is when the absolute deterrence effect has become 100%.
If we can’t have 100% deterrence, this means that the marginal deterrent effect becomes zero before that. My views on the current condition of the criminal justice system in Western countries are more or less summed up in the introduction here:
http://onlinelibrary.wiley.com/doi/10.1111/j.1745-9133.2010.00680.x/full
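To put that point in toy numbers (a sketch only; the saturation level and the formula are invented, not drawn from the paper linked above): if the deterrence rate rises with severity but saturates below 100%, the marginal gain from each extra unit of severity shrinks toward zero even though absolute deterrence never reaches 100%.

def deterrence_rate(severity):
    # Invented toy curve: rises with severity but can never exceed 80%.
    return 0.80 * (1 - 0.5 ** severity)

for s in range(1, 8):
    marginal = deterrence_rate(s) - deterrence_rate(s - 1)
    print(s, round(deterrence_rate(s), 3), round(marginal, 3))
# Absolute deterrence creeps toward 0.8 while the marginal effect of extra
# severity falls toward zero: severity past a point buys essentially nothing,
# yet deterrence never hits 100%.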
I do think that differentiated punishment can achieve differentiated deterrence, though to what extent depends on the type of crime. I will allow myself to quote from this paper (p. 190), which reviews the empirical literature and which confirms what I think:
http://tuprints.ulb.tu-darmstadt.de/1054/
Another example is the type of offense (table 3.43): while results for tax evasion, drunk driving and fraud – and property crime in general – are more compatible with the deterrence hypothesis, those for homicide or assault are not. Results for the probability of punishment are also more in favor of the deterrence hypothesis than those based on the severity of punishment.
In my opinion short-term imprisonment should be avoided when possible, because the deterrent effect on would-be offenders could be outweighed by the demoralizing effect of the prison environment. Inmates who are there for petty crimes can be brutalized by their interactions with inmates serving longer sentences for violent crimes, or can build networks with other criminals that open up new opportunities to benefit from criminal activity after prison. I don’t know at what point exactly, but I think that for most crimes the maximum deterrent effect is achieved somewhere between 15 and 25 years, and beyond that it just doesn’t matter for deterrence, only for incapacitation.
Empirical tests of the deterrent effects of punishments are difficult, because crime is the result of many factors, and even if we change the laws significantly, it will still take time for awareness of them to change. Maybe the best way to examine the matter is to see whether the behavior of juveniles changes as they reach adulthood, because that changes the relevant factor while keeping the others constant. But the evidence as a whole is mixed, even though the most famous study, by Steven Levitt, finds significant deterrent effects both for property and violent crimes.
Turning to utility, I don’t know what you mean by it, but the example of the utility monster involves negative and positive experience, as do trolley problems and many other examples both criticizing and defending utilitarianism. My example was not devised to aggregate total utility in society, but to counter a single criticism of utilitarianism – that it fails because we can’t compare utility. You can’t aggregate if you can’t calculate, and you can’t compare if you can’t calculate. And yet we have people who fight taxation and income redistribution as a way to maximize total utility on the grounds that we can’t compare utility between persons, but who don’t question the criminal justice system, which is also based on interpersonal comparisons of utility.
I haven’t read any books on the topic, so I can’t guide you in this respect, just articles on the internet, Wikipedia, SEoP.