In my previous post on utilitarianism, I started with two crucial observations.
First, utilitarianism is a political philosophy, dealing with the question of how the resources in a community should be distributed. It’s not a system of individual ethics.
Second, (this shouldn’t be necessary to state, but it is), there is no such thing as utility. It’s a theoretical construct which can be used to compare different allocations of resources, not a number in people’s heads that can be measured and added up.
Failure to accept these points is at the heart of the kind of ‘longtermism’ advocated by William MacAskill and, earlier, of Parfit’s repugnant conclusion. The claim here is that the objective of utilitarians should be to maximise total utility, including that of people who are brought into existence as a result of our decisions. In particular, that means it is desirable to bring children into existence who will have a miserable life, provided that no one else is made worse off, and the life is not so bad that the children in question regret being born.
As well as being intuitively unappealing, this idea makes no sense in the two main contexts in which it is relevant: families deciding how many children to have, and polities deciding whether to promote pro-natalist policies.[1]
The members of a family, and of a polity, have to allocate resources among themselves. Utilitarianism says that the welfare of each member should be given equal weight. In deciding whether to bring an additional child into existence, it’s necessary to compare two situations:
(i) the child is born, and has an equal weight with everyone else; or
(ii) the child isn’t born, and all the current members of the group are weighted equally.
It’s nonsensical in case (ii) to add in some extra weight to the hypothetical child who doesn’t exist. And it’s clear, to me at any rate, that if everyone in case (ii) is better off than everyone in case (i), the correct utilitarian decision is to go with (ii).
This leads to the conclusion that the social order we want is one where average utility is maximized (remembering that utility is a way of comparing allocations, not a real thing).
Another way to reach this conclusion is from behind a Harsanyi/Rawls veil of ignorance where we choose a social order of which we will be a member, without knowing where we will be situated. There’s no way to make this work if we are also supposed to consider the infinite set of possible people who won’t come into existence at all.
The counterarguments I’ve seen don’t impress me. Many of them start with some version of the utility monster, an individual who can have massively more utility than anyone else. But, as I showed in my last post, utilitarianism as a political philosophy doesn’t work that way. Reductions in the utility of a trillionaire are outweighed by small improvements for a hundred other people, or significant improvements for ten.
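As a toy illustration of this point (made-up numbers, using the logarithmic utility function from my previous post): a transfer from a trillionaire to a hundred people of modest means raises total utility, because the trillionaire’s marginal utility is vanishingly small.

```python
import math

def utility(income):
    """Social value of income, taken (as in the previous post) to be log-10."""
    return math.log10(income)

# A trillionaire and a hundred people on $10,000 each (illustrative numbers).
trillionaire = 1_000_000_000_000
others = [10_000] * 100
transfer = 1_000_000  # total transferred, i.e. $10,000 per recipient

before = utility(trillionaire) + sum(utility(x) for x in others)
after = (utility(trillionaire - transfer)
         + sum(utility(x + transfer / 100) for x in others))

# The trillionaire's utility falls by roughly 0.0000004, while each
# recipient's rises by about 0.3, so total utility rises by about 30.
print(after - before)
```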
Parfit’s arguments, as quoted in the Stanford Encyclopedia of Philosophy, rest on appeals to intuition derived from situations that can’t possibly exist in an actual polity.
For instance, the principle implies that “for any population consisting of very good lives there is a better population consisting of just one person leading a life at a slightly higher level of well-being”.
This is both untrue and analytically faulty.
Untrue because humans are social animals, and human societies require a minimum number of people to deliver anything beyond bare existence. Solitary individuals (castaways, for example) don’t live well, so the supposed “better population” can’t exist. But if it were possible, we wouldn’t need polities or utilitarianism, any more than bears or skunks do.
It’s analytically faulty because the point isn’t to compare different populations in the abstract but for families and polities to make choices about population. Starting from an existing population, it’s entirely possible (and is now the case) that people might choose below-replacement fertility so that they and their children can have better lives.
So, we could easily see the population of the world gradually decline from billions to hundreds of millions, with steadily rising living standards. But below some point (Charlie Stross estimates a lower bound of 100 million) it would become impossible to sustain a modern civilisation. So, at this point (many generations away) it might be necessary to encourage people to have more kids.
Until then, the choice can be put as one between
(i) Letting families make their own choices, leading to a world with a shrinking population living better lives; or
(ii) Adopting pro-natalist policies[2] to deliver a growing population, living worse lives.
Parfit called (ii) the repugnant conclusion, and he was right to do so.
[1] Migration raises a whole new set of issues about who counts. My position is essentially cosmopolitan (everyone counts, wherever they live), but this needs a whole new post, or maybe a book.
[2] To get fertility rates above replacement under current conditions, such policies would have to be very intrusive.
MisterMr 07.30.23 at 6:31 am
Only partially relevant, but in what sense can we call a conclusion “repugnant”, and how can a thought experiment like the utility monster “prove” utilitarianism wrong?
It seems to me that both are based on moral intuitions that are themselves utilitarian (e.g. the utility monster goes against the general assumption of declining utility, and therefore creates a paradoxical situation), but then this “proves” utilitarianism wrong while implying that utilitarianism is right.
The correct answer would be to prove utilitarianism wrong from a different point of view, like virtue ethics or religious ethics.
But if virtue ethics or religious ethics go against utilitarianism, this would mean that following a certain virtue or religious precept makes people on the whole worse off, at which point most people would likely take the side of utilitarianism.
Therefore, people don’t contrast utilitarianism with other forms of ethics, but search instead for weird situations where the utility calculus, applied in certain ways, goes against our intuition of where utility is in reality (e.g. the utility monster, the violinist).
But it seems to me that this proves utilitarianism overall right, and only certain ways to calculate utility wrong.
Some other variants, such as the reverse repugnant conclusion (that we should adopt across-the-board pro-natalist policies, as many religious people think), might in fact be based on non-utilitarian logic, but they are rare.
GG 07.30.23 at 7:22 am
JQ –
Thanks for the prompt follow-up!
Respectfully, I believe your definition of “utilitarianism” is in need of significant elaboration. Utilitarianism, as historically understood, is a system of personal ethics. Bentham thought that utility could be objectively quantified. MacAskill and Parfit seem to be right in the middle of mainstream utilitarian thought as it’s typically understood.
Based on your previous post, and your statements above, you’re clearly defining “utilitarianism” differently, which is fine and dandy. But it’s not obvious, exactly, how you’re defining it, nor why your particular definition should carry moral or policy weight. A particular weakness is that you’ve anchored your particular definition with the concept of diminishing marginal utility of income. As far as I’m aware, diminishing marginal utility only holds true within the context of a single person. Marginal utilities between people are incommensurable, which suggests that you can’t just add them up as you did in the previous post.
Murali 07.30.23 at 7:43 am
Utilitarianism is not only a political philosophy. Especially in its act-centered iterations, it is, or purports to be, a fully general moral theory. Consider act utilitarianism: it is not primarily about how to distribute resources, though it can be about that.
https://plato.stanford.edu/entries/consequentialism/
Utility, as such, may just be a way of assigning value to consequences and hence not be a thing in people’s heads which can be measured and added up. However, pleasure is in fact a thing in people’s and non-persons’ heads that can be measured (by introspection, surveys, brain scans) and added up. And utilitarians do believe that you should maximise aggregate pleasure.
You may think that only existent persons’ well-being should matter, but that’s not allowed by utilitarianism. Utilitarianism requires you to promote pleasure. In that case, the point is to compare different possible states of the world in the abstract, see which one contains higher amounts of total net pleasure over pain. So, yes, utilitarianism does lead to the repugnant conclusion.
Murali 07.30.23 at 7:45 am
To put things differently, your own view may be more attractive, and utilitarianism-adjacent, but it’s not utilitarianism.
Chris Bertram 07.30.23 at 8:24 am
Sorry for not challenging this in the comments to the earlier post, John, but I have grave doubts about your first definitional claim “utilitarianism is a political philosophy, dealing with the question of how the resources in a community should be distributed.”
First, and least important for my purposes here, there is a lot more to utilitarianism than a claim about the distribution of resources: Bentham’s interest in opposing the cruel penal policies of his day (for example) was not primarily a worry about their inefficiency but about their cruelty.
Second, and most important, “community” is implicitly bounded. I realise that that’s what you want here because you want not to count those future individuals, but it is hardly a feature of genuine utilitarianism that it only counts the well-being of members of a community and discounts that of non-members. Rather, all are to count, including distant others, and a policy that promotes the well-being of Americans at, say, the expense of Bangladeshis, is to be rejected. I’m sure you agree, hence what you say about cosmopolitanism, but in that case you probably need to drop “community”.
(Oh, and to drive home both points: animals are clearly within the circle of utilitarian concern, but that has nothing to do either with distributing resources or with membership of a community.)
John Q 07.30.23 at 10:08 am
There’s no way to resolve definitional claims about what utilitarianism is, so I probably shouldn’t have made them. But all of the early utilitarians favored population limitation, AFAICT. Sidgwick, who was a critic of utilitarianism, seems to have been the first to draw something approaching the repugnant conclusion.
Matt 07.30.23 at 10:41 am
Sidgwick, who was a critic of utilitarianism
This is…an unusual thing to say, since Sidgwick is often considered one of the greatest utilitarians (even if he noted lots of difficulties with the view!). (From the Foreword to The Methods of Ethics: “In the utilitarian tradition Henry Sidgwick (1838–1900) has an important place. His fundamental work, The Methods of Ethics…is the clearest and most accessible formulation of what we may call ‘the classical utilitarian doctrine.’ … What makes The Methods of Ethics so important is that Sidgwick is more aware than other classical authors of the many difficulties this doctrine faces, and he attempts to deal with them in a consistent way while never departing from the strict doctrine, as, for example, did J.S. Mill. Sidgwick’s book, therefore, is the most philosophically profound of the strictly classical works and it may be said to bring to a close that period of the tradition.”)
I think there’s something to be said about taking utilitarianism to be most useful as a (partial) political philosophy. But doing so by stipulation won’t, of course, solve the real philosophical issues, and it’s just very odd to read the stipulations back into authors who clearly didn’t accept them, or to read people like Sidgwick out of the tradition.
John Q 07.30.23 at 11:08 am
I’m going to pass on the definitional issue, and stick to my substantive claim, which is that, considered as a political philosophy, average utilitarianism makes sense and total utilitarianism doesn’t.
Chris, while I used “allocation of resources” as a shorthand, I meant it to cover a broad range of things that polities/states do, including penal policies.
And to be clear, I don’t want to exclude future individuals who will actually exist under given choices, only to exclude hypothetical individuals who won’t
Reaso 07.30.23 at 1:15 pm
Average? Why average and not median (or perhaps an even lower percentile)? Why do we think it is OK to weight the better off as highly as the afflicted?
Reason 07.30.23 at 1:17 pm
Sorry about the typos – note to self – on a site without editing use a laptop not a phone.
rick shapiro 07.30.23 at 1:25 pm
The definitional emptiness of utilitarianism is a consequence of the infinite regress implicit in any attempt at constructing a deontological basis for morality, either as a religious mythology or (as by Kant or by the utilitarians) as a secular principle. The latter runs into trouble because it is an example of what philosophers call the naturalistic fallacy.
Sashas 07.30.23 at 2:48 pm
@op I’m not convinced by either of your two initial claims, for the record. I don’t think that’s devastating to your argument, which seems to me to hold together without relying on either. I am curious what you’re reacting to when you claim “there is no such thing as utility”, since theoretical constructs are things and I could have sworn that’s precisely what utility is.
My sense is that the larger problem with longtermist and utility monster arguments lies in the insane amount of “trust/knowledge at a distance” they require in order to work. I, like you, prefer the cosmopolitan position, but that does not mean I think me giving one dollar to a charity organization on its promise that the dollar will go farther in Africa is a good idea. When we’re talking about pro-natalist policies, we can go on until we’re blue in the face about how they would lead to greater total utility[1], but that doesn’t answer the question of whether we can trust the people administering such policies to realize that greater total utility when we agree to put the policies in place. (Specifically, I don’t think I’ve ever seen a non-eugenicist pro-natalist policy, and the claim that greater total utility will result does not survive contact with “eugenicists will be in charge”…)
One more thought: I’m not convinced that “living person” is always a positive contribution to total utility. One extreme example is a child with a birth defect that will kill them guaranteed within days. If this defect is detected before birth, I believe most would recommend an abortion. Alternatively you could hold on and give the child a week of life. Another extreme example might be a hypothetical like in the Matrix where a person “lives” as a battery in a bio-electricity mill. If I could choose to add an extra person to that mill, would this be a positive utility from that person? I suspect not.
[1] Total vs average utility hasn’t mattered much to me since I started thinking about event horizons of knowledge, trust, and action. I agree that utility is valuable everywhere, but there’s a rather more finite space where I can influence things and so calculating total utility vs average utility over that space gets you to roughly the same place.
@MisterMr (1) These approaches to utilitarianism claim that if (A) we adopt this framework, then (B) it inevitably follows that (C) bad thing. Therefore we should not adopt the framework. The logical approach is sound, but it can be attacked at any of the three points while the critics only want to attack (A). A common response is, as you noted, to attack (B) instead and point out that the repugnant conclusion does not derive from utilitarianism but only from a particular utility calculation.
@Murali (3) Please take it from the actual utilitarians in this thread (e.g. me) that while pleasure is often a convenient shorthand for utility, they are not the same thing. (I’m about to argue that utilitarianism is a big tent with diverse ideas, but I think this position is pretty universal now.) The “only existing people matter” position is also very much “allowed” under utilitarianism, but not every utilitarian will agree on where to draw boundaries, if any. It’s a big tent and we don’t always agree with each other. If you want to focus on the beliefs of specific historical utilitarians you are of course welcome to do so, but the term is live and the living crowd is quite diverse.
CHETAN R MURTHY 07.30.23 at 4:49 pm
I don’t want to derail the subject, but …. I’ve always wondered why, when it comes to “good guys” discussing moral philosophy, utilitarianism is almost assumed to be the only way forward. Whatever happened to Kantianism?
John Q 07.30.23 at 7:24 pm
@Reason Rank-weighted averages are fine, going as far as the common interpretation of Rawls that puts all the weight on the bottom end. In fact, my own work on choice under uncertainty is all about this. But I didn’t want to raise additional complications.
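A toy sketch of what I mean by rank-weighted averages (made-up utilities, and the weights are just one arbitrary choice among many): the family runs from the plain average at one end to the Rawlsian maximin at the other.

```python
def rank_weighted_average(utils, weights):
    """Average utility with weights applied by rank, worst-off first."""
    ranked = sorted(utils)  # worst-off gets weights[0], and so on
    return sum(w * u for w, u in zip(weights, ranked)) / sum(weights)

utils = [0.5, 1.0, 1.5]

plain = rank_weighted_average(utils, [1, 1, 1])         # ordinary average: 1.0
bottom_heavy = rank_weighted_average(utils, [3, 2, 1])  # favours the worst off
maximin = rank_weighted_average(utils, [1, 0, 0])       # all weight on the bottom: 0.5
```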
TM 07.31.23 at 9:57 am
GG 2, how can you assert that “utility can be objectively quantified” but at the same time claim that “Marginal utilities between people are incommensurable”?
Chris Stephens 07.31.23 at 6:28 pm
Thanks for these interesting posts. Is there an agreed upon response in the economics literature on the problem of interpersonal utility comparisons? I was under the impression that this was a big conceptual problem for utilitarianism, a problem that doesn’t depend on whether you have certain intuitions about moral cases (see, e.g., Dan Hausman “The Impossibility of Interpersonal Utility Comparisons” from Mind 1995). I don’t specialize in ethics or political philosophy, so there may be good responses to this worry. But if not, this also seems to be a big problem for Longtermism.
John Q 07.31.23 at 7:32 pm
Chris, my opening observations are meant, in part, as a response to this problem. I’m not interested in “utility” as a thing to be maximized, but as a functional representation of the marginal social value of additional resources received by different people.
John Q 07.31.23 at 7:36 pm
Chetan. As mentioned by Tim Sommers in comments on my previous post, Kantianism is more about ethics for individuals, utilitarianism about ethics for institutions (such as states).
For example, the categorical imperative is interesting and problematic for individual ethics. But at the level of public policy, it becomes something like “treat equals equally”, which is fairly uncontroversial
Alex SL 07.31.23 at 10:22 pm
It seems there are some people who are deeply disturbed by the idea of human population ever shrinking in the slightest, and they do not reason so much as rationalise backwards from that conclusion. Just a few days ago I made the mistake of engaging with one of them on Twitter, and the gist of it was that (a) there are no limits, ever, even if we become e.g. 15 billion people, because Ehrlich and Malthus were wrong in the past, (b) nature will not suffer in any way either because “intensification” can solve that, (c) people only have few children because policies are making it hard to have more, and (d) we need more children to care for the elderly and keep the economy running.
It occurred to me that (a) a part of The Boy Who Cried Wolf that people keep forgetting is that even if the boy in that tortured analogy was making it up at the beginning, the wolf was actually still out there and came in the end, (b) the extinction crisis is already happening, (c) not my expertise, but I really doubt that given how reliably birth rates implode the moment women are given a real choice, and (d) oh, look, a Ponzi scheme.
I have no opinion on whether utilitarianism is originally a personal or a political philosophy. The kind of person who is an outspoken utilitarian today is, however, generally using it as a political philosophy.
The key problem with their reasoning to me still seems to be that any unborn, future people are purely hypothetical, and it is simply absurd to even have them in the equation, even if one could theoretically quantify utility. It is one thing to say, you should leave the world about as intact as you found it, so that the next generation of whatever size that we can already see growing up will have a chance to have an okay life. It is quite another to say that the next generation needs to be large rather than small, so you get on with the birthing despite not wanting more than one child, chop chop.
GG 08.01.23 at 5:31 am
TM@15 –
The first bit is just what Bentham thought; I don’t necessarily find it plausible.
Sashas 08.01.23 at 6:42 am
@Alex SL (19) Speaking for myself… Every Malthusian argument I’m aware of so far has turned out to be incorrect. That’s where I’ll usually stop, because what I would follow with is this: Every Malthusian argument I’m aware of so far has been made in bad faith with some flavor of eugenicist/genocidal goal lightly camouflaged underneath. Naturally, this is not very productive to point out even though as far as I can tell it is true.
I believe arguments that proceed from considering population in general have a really bad track record of being promoted by awful people for awful reasons. I alluded to this in my previous comment w/r/t pro-natalist policies, but I think it’s true for all of them. I don’t believe they deserve the benefit of the doubt. (The pro-natalists, the Malthusians, and I would apply the same heuristic to any others I came across.)
I’m curious about this. Are you thinking of specific public thinkers? I’m not necessarily clued into public philosophy in any systematic way, but I think at least some of the people I listen to have hinted at utilitarianism in both contexts. It’s all been hints in passing though. (To be fair, outside of CT I think I mostly follow leftists and antifascists and people in both groups have more important things to be doing right now than convincing people to try utilitarianism.)
John Q 08.01.23 at 8:35 am
Thanks for some interesting discussion.
It’s striking that my preliminary claim that utilitarianism is a political philosophy rather than a system of individual ethics has attracted a lot of attention, mostly critical, while no-one AFAICT has defended total utilitarianism (the orthodox view among philosophers, according to SEP) against my arguments for average utilitarianism (allowing for distributional weighting etc).
I’ve ducked this issue so far, but I’ll come back to it now. AFAICT, Peter Singer is the only significant philosopher who currently defends utilitarianism as a system of individual ethics. His version has the demanding implication that you should place equal weight on the wellbeing of all living things. But even if you restrict it to humans, the claim that we should act so as to promote the welfare of the eight billion people on earth, with no particular concern for ourselves, seems too demanding to be taken seriously.
Against this, there is a vast literature with Rawls and Harsanyi as obvious starting points, criticising and defending utilitarianism as a political philosophy. Bob Goodin is the contemporary writer who seems clearest and most significant to me.
Am I missing something here? If (as asserted strongly above) utilitarianism is a theory of individual ethics, who are its proponents and which critics should I be taking seriously (please, no trolley problems or violinists).
Chris Bertram 08.01.23 at 9:18 am
I think it fair to say that very few people promote maximizing utilitarianism as a position in ethics, for a variety of reasons, but one of those reasons is a scepticism about a measure of utility or well-being. But many philosophers are consequentialists who believe that the right thing to do is to act so as to bring about the best consequences. I am myself sceptical about the possibility of measuring consequences, sorting everything into better or worse states of affairs etc. John Broome is someone who is a consequentialist in this sense, I think: see his Weighing Goods and Ethics out of Economics.
Charlie W 08.01.23 at 9:19 am
The most recent thing I’ve read on this is Williams (‘Ethics and the Limits of Philosophy’) so would tend to recommend that. Williams is focussed on metaethics: what motivates or justifies such and such normative scheme. Williams is sceptical that consequentialist arguments can provide a metaethical underpinning (he is sceptical that anything can). Still, this is how consequentialist arguments got started, historically. It’s one thing to assert that such and such a scheme is best, and it’s another to ground it.
I’d suggest considering that consequentialism is essentially forward looking. If you are Bentham, you are appealing to outcomes to show that a given existing law lacks a sound ethical base: i.e. an outcome that is deemed unlawful just because it offends a person’s taste may nonetheless turn out to be a good outcome (because of happiness or similar) and therefore the law should be repealed. And then, if you follow Bentham, you will welcome further such good outcomes. You are forward looking. So then the line of questioning shifts. What is the population that will experience good outcomes? Where are its boundaries? Who are (or will be) its members?
Am doubtful that it’s enough to just rule out by fiat an unborn population, or the communities of primates, or artificial persons, etc. What sets the population?
engels 08.01.23 at 9:35 am
On another thread I think it was said that utilitarianism was a “political philosophy” and Kantian deontology an individual ethos. In terms of this contrast I think it’s easy to find everyday examples of people reasoning in ways that seem more utilitarian than Kantian (though far from perfectly adhering to either). Kant’s ethics are also pretty demanding, btw (famously, if an axe murderer shows up at your door asking where your children are hiding…)
engels 08.01.23 at 9:50 am
And I don’t think utilitarianism is a shoo-in at the state level, eg the broadly utilitarian arguments for launching the war in Iraq (and dismissing international law), which I didn’t find at all appealing.
Alex SL 08.01.23 at 10:20 am
Sashas,
Oh dear.
The question whether Malthus was ultimately correct can be answered quite simply: do you believe that there is enough freshwater, fertile soil, energy supply, etc, to be had on this planet for infinity humans to lead a comfortable life? If you answer no, then Malthus was correct in principle, and we are only speculating about the exact number (that he clearly underestimated). If you answer yes – the planet can feed, say, 600 quadrillion humans – then we have left any basis for a rational conversation.
Regarding racism, that is the usual argument. Some racist also says that there are too many people (at least of a certain kind), so anybody who points out that the planet cannot feed infinity people or even just that a ‘mere’ eight billion people are already driving other species to extinction left right and centre is lending support to racists. I always wonder why this argument is only made in this specific context, because the same logic can be applied nearly anywhere.
Say you would like working class salaries to be higher, or perhaps to raise the minimum wage, to reduce inequality. You would find, if you were interested in such things, that historically the labour movement often argued that salaries should be high enough that a man could support his family by himself, so that the wife could stay in the kitchen, and you would still find some religious fundies arguing so today. That means everybody who argues today that working class salaries should be raised is lending support to sexists, right? You can say anything like that: the poor have to stay poor forever, because completely different people from you think only men should have salaried jobs.
I do not know who you consider a public thinker and who not, but I was referring to the kind of longtermist or cryptobro who has lately featured strongly in the media.
Matt 08.01.23 at 11:11 am
AFAICT, Peter Singer is the only significant philosopher who currently defends utilitarianism as a system of individual ethics.
Oh, no, no, no. I mean, I suppose you can quibble about who counts as “significant”, but this is just completely wrong. Here’s one pretty prominent example:
https://www.colorado.edu/philosophy/people/alastair-norcross
But, you can find lots more. There are many spins on how things work and what to do, but it’s just false that there are not many significant (for a plausible account of “significant”) philosophers who defend utilitarianism as a system of individual ethics.
MisterMr 08.01.23 at 12:02 pm
@Reason 9 and JQ 14
I reread this thread, and it seems to me that no, it is either sum or average; this is because the way “utility” is calculated already takes into account the fact that the ones who have less are suffering more / the ones who have more are enjoying less.
For example, take the two distributions:
A: $10, $10, $10
B: $9, $10, $11
If we take utility to be log10 of the dollar value, as in the OP, we have these three situations:
Sum:
A = 1 + 1 + 1 = 3
B = 0.954 + 1 + 1.041 ≈ 2.995
Average:
A = (1 + 1 + 1)/3 = 1
B = (0.954 + 1 + 1.041)/3 ≈ 0.998
Median:
A = 1
B = 1
It also doesn’t make sense to calculate the lower or upper quintiles or such, because the declining marginal effect is already in the log10 function; if it seems not to be discounting the “rich” enough, then the idea is to change the function, not to add other hurdles later, as “utility” as a concept is supposed to already take into account the falling marginal returns of stuff.
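Redoing the arithmetic above exactly (a quick check, nothing more):

```python
import math
from statistics import median

A = [10, 10, 10]
B = [9, 10, 11]

def utilities(incomes):
    """Log-10 utility of each income, as in the OP."""
    return [math.log10(x) for x in incomes]

total_A, total_B = sum(utilities(A)), sum(utilities(B))    # 3.0 vs ~2.996
avg_A, avg_B = total_A / 3, total_B / 3                    # 1.0 vs ~0.999
med_A, med_B = median(utilities(A)), median(utilities(B))  # 1.0 vs 1.0

# Sum and average both rank A above B; the median can't tell them apart.
```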
Tim Sommers 08.01.23 at 3:26 pm
I think the point to emphasize (that you make here) is that there should be a version of utilitarianism which explicitly confines itself to maximizing the welfare of actually existing persons at a time. This is plausible because of the repugnant conclusion, the hopeless epistemological requirements of longtermism, and many other reasons. I wonder if before Parfit most utilitarians even thought that the idea was to maximize total welfare over the entire life of the human race.
As far as concern for the future, I think Rawls’ solution is perfectly fine. Given that behind the veil of ignorance we don’t know which generation we will be born into we would add a principle of just savings (for future generations) to the other principles of justice.
Chris Stephens 08.01.23 at 6:52 pm
This is a bit off topic from your OP, but since you asked:
Prominent folks who work on utilitarianism as an ethical theory (or at least consequentialism) would include:
Alastair Norcross, “The Scalar Approach to Utilitarianism” (2006)
Peter Railton, “Alienation, Consequentialism and the Demands of Morality” (1984)
Shelly Kagan’s book The Limits of Morality
David Brink’s book on moral realism, which has an extensive chapter defending objective utilitarianism
And more recently, a good open access book on utilitarianism begins with the claim that it’s a moral theory. See Chappell et al.’s book https://www.utilitarianism.net/introduction-to-utilitarianism/
Most of these attempt to address the “demandingness” objection that you raise. Many of them are responding to Bernard Williams’ book Ethics and the Limits of Philosophy.
So, in addition to the whole Rawls–Harsanyi debate in political philosophy, there’s another debate about utilitarianism as a moral theory, with lots of different kinds of detractors (some defending Kantian approaches, some virtue ethics, and some trying to reconcile all these…)
engels 08.01.23 at 7:04 pm
a version of utilitarianism which explicitly confines itself to maximizing the welfare of actually existing persons
That would seem to give a rather narcissistic answer to the question of whether to have children (do it iff. you’ll** get a kick out of it)…
** strictly speaking anyone now living
LFC 08.02.23 at 3:03 am
Toward the beginning of A Theory of Justice (the first edition and presumably the second edition as well), Rawls writes that his theory, unlike the classical utilitarianism of e.g. Bentham and Sidgwick, “does not interpret the right as maximizing the good.” The “question of attaining the greatest net balance of satisfaction never arises in justice as fairness….” (30) He also says that classical utilitarianism “applies to society the principle of choice for one man [sic],” i.e., maximize utility, thereby leaving open, or not foreclosing, the at least theoretical possibility that individual rights could be overridden if that resulted in a greater net balance of satisfaction. By contrast, justice as fairness is aligned with and seeks to “account for” the (allegedly, at any rate) “common sense” conviction or belief that every person “has an inviolability founded on justice or, as some say, on natural right, which even the welfare of every one else cannot override.” (28)
At this general level there seems to be a clear contrast between Rawls and what he calls classical utilitarianism; however, as has already been suggested, it may be that at the level of more concrete judgments the contrast becomes less sharp. Nonetheless, it’s fairly clear that, at least in ToJ, Rawls wanted to align himself with what he said was Kant’s emphasis on “the priority of right” (p.31 n.16) as opposed to classical utilitarianism’s emphasis on maximizing “the good.”
Charlie W 08.02.23 at 5:00 am
One other thing: if (with whatever intent, metaethical or otherwise) you are mounting a utility argument that aims to show that a smaller population would enjoy higher utility, how will you show that birth control is different from just killing people? This has nothing to do with abortion. If it’s good to have a smaller population, why not go there via any practically achievable route (as long as it’s not utility-lowering)?
TM 08.02.23 at 7:19 am
Sashas 21: “I believe arguments that proceed from considering population in general have a really bad track record of being promoted by awful people for awful reasons.”
There is some truth to this and the result has been that the left has mostly taken the position of not “considering population in general”, for fear of getting into thorny controversies and being applauded by the wrong people, due to the fact that high fertility rates are nowadays mostly found in Africa and parts of Asia and South America.
The left has mostly taken the liberal position of considering family size a matter of individual choice and promoting access to contraception and abortion, but never openly promoting either pro- or antinatalist policies. That position is fine as far as it goes, but the fact remains that “population in general” is a crucial factor of human impact on this planet – not the only one of course, as the “awful people” have often pretended, but a crucial factor nonetheless – and ignoring it won’t be possible forever.
MisterMr 08.02.23 at 12:31 pm
So I never knew of this "repugnant conclusion" story, but thanks to this thread I read an explanation of it on Wikipedia, and I must say that I don't think the "repugnant conclusion" holds, and that the correct way to make "utility calculations" is the total sum, not the average.
According to the Wikipedia article, and in particular the "criticism and responses" section, the idea is this:
Suppose a world where there are 10 people, each of whom has a nice life and owns a Ferrari. But in the same world (it is implicit that the amount of resources is fixed), there could be 20 people, each with a nice life but a very cheap car. Utilitarian calculation multiplies the lower personal happiness by the higher number of people, and gets a higher total happiness. Repugnant conclusion!
In the "criticism" section it is noted that there is nothing particularly repugnant in this, and in fact the higher-total-people situation might be considered better. To which Parfit replies:
“It follows that this revised intuition must hold in subsequent iterations of the original steps. For example, the next iteration would add even more people to B+, and then take the average of the total happiness, resulting in C?. If these steps are repeated over and over, the eventual result will be Z, a massive population with the minimum level of average happiness; this would be a population in which every member is leading a life barely worth living. Parfit claims that it is Z that is the repugnant conclusion.” (I’m citing Wikipedia not literally Parfit’s words).
I don't really understand how this continuous iteration is supposed to work. Presumably, for a very large population, there is a point where people literally resort to cannibalism to stay alive due to the scarcity of resources, but that is a situation where evidently total happiness is very low.
Before that point there has to be a point where an increase in population to N causes a decrease in average happiness/utility such that the total happiness of N is less than the total happiness of N-1, while the total happiness of N-1 is still higher than that of N-2; N-1 is therefore the "optimal" population level.
The determination of N depends on: (a) the total amount of resources, (b) the way an increase in population impacts the use of resources (a technical problem), and (c) the supposed utility function.
It seems to me that there is an inconsistency here, because "utility", by the way it is defined, more or less means "happiness"; it is not really a resource, but when we think about it we tend to treat it as one.
For example, suppose a situation where A gets utility 1 and B gets utility 19; most people would think that this is a worse distribution than A=10 and B=10; but this is wrong, because we are already speaking of utility, not of resources, so if the total is 20 in both cases then they are tied.
Our intuition is wrong because we think of utility as a resource to be distributed, but it isn't. What actually happens is that B is (presumably) using an amount of resources that is way more than A's (if we use the log10 example, B uses 10 000 000 000 000 000 000 units where A gets 10), so that if we distributed the resources equally, both A and B would get roughly utility 18.7, for a total utility of over 37 (so way better).
The problem is that we can't postulate an entity (utility) that is more or less the same thing as total happiness, and then say that when total happiness is higher the people are living lives "barely worth living"; this is a contradiction, because if that were true then total happiness would be lower.
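The resources-versus-utility point can be sketched in a few lines of Python. This is purely illustrative: the log10 utility function and the resource totals are the comment's assumed example numbers, nothing more.

```python
import math

def utility(resources):
    # Illustrative assumption: utility is log10 of resources consumed,
    # so utility 19 corresponds to 10**19 resource units.
    return math.log10(resources)

# Unequal allocation: B holds almost everything, A very little.
total_resources = 10**19 + 10
unequal = [utility(10), utility(10**19)]    # A gets 1, B gets 19; sum = 20

# Equal split of the same resource total.
equal = [utility(total_resources / 2)] * 2  # roughly 18.7 each; sum about 37.4

print(sum(unequal), sum(equal))
```

On this (assumed) logarithmic schedule, equalizing the resources nearly doubles total utility, which is the comment's point: the intuition that 10/10 beats 1/19 is really an intuition about resources, not about utility itself.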
MisterMr 08.02.23 at 12:33 pm
TL-DR of my previous comment:
IMHO the "repugnant conclusion" fails because it somehow assumes that we are redistributing utility, whereas we are actually redistributing resources, so that there is necessarily a point X where we have an optimal population (and not that the optimal population tends to infinity, as the "conclusion" seems to assume).
Tim Sommers 08.02.23 at 7:41 pm
a version of utilitarianism which explicitly confines itself to maximizing the welfare of actually existing persons
That would seem to give a rather narcissistic answer to the question of whether to have children (do it iff. you’ll** get a kick out of it)…
I believe you are incorrect.
(1) The context of the quote is the assumption that this version of utilitarianism is explicitly institutional and not a guide to personal ethics (like whether or not you should have kids).
(2) Even if we drop that, the next problem is that even on this view as it stands you are supposed to take into account the welfare of all actually existing people. It is an abuse of language to call such a demanding altruistic ethical standard "narcissistic." It's like calling someone who endorses a utilitarianism that excludes all nonhuman animals narcissistic. Either view may be right or wrong, may demonstrate speciesism or unfairly discriminate against the nonexistent, but neither is narcissistic.
(3) It’s also perfectly consistent with this view for you to base your own decision on whatever your moral beliefs are – even if your comprehensive moral view is standard utilitarianism with no generation stipulation or any stipulation that applies only institutionally.
(4.) Further, it’s consistent with the view as stated, for you to want to have children because you can provide for them well, expect them to be very happy, and you believe that adds value to the world without endorsing the additional claim that part of why it is a good is that it will be good for these hypothetical, but nonexistent, people as well. Again, this may be incorrect, but it’s not plausibly described as “narcissistic”.
I apologize for the length. As Twain said, if I had more time, it would be shorter.
engels 08.02.23 at 8:21 pm
MrMr I don’t understand your argument but I think you must believe that if a large happy population is enlarged by a given factor (k), average utility must decrease by more than k. But I don’t know why you’d think that. (It is true that because utility isn’t directly redistributable it wouldn’t go down by exactly k.)
I didn’t envisage cannibalism but eating bugs amidst endless resource wars and famines…
Alex SL 08.02.23 at 10:15 pm
Coincidentally, eight hours ago, the UN Environment Program tweeted “Today is #EarthOverShootDay! As of Wednesday, humankind has already used up all the resources the Earth can regenerate in 2023. This is a stark reminder to take action #ForNature & stop this deadly trend.”
But of course, according to cornucopians on my side of the political spectrum, this is all due to twenty billionaires, and if we only take away their yachts and private jets, we can grow the world population forever. (Just to clarify, although I think this idea is innumerate, I am still fully on board with billionaires being taxed and expropriated out of existence – no single person should be allowed that kind of power over other humans.)
TM,
Yes, that is the situation, but there are two aspects that I don’t understand. First, on current evidence the problem is solvable through education/secularisation, social safety, and access to contraceptives. Within a generation after doing that, families seem to fairly reliably opt for slightly under two children on average. That is therefore all a left-winger who recognises that our number is a factor in our collective ecological footprint has to advocate.
Second, what “it is the rich and the global north who cause all the trouble” actually means is “you all who aspire to a middle class life style have to stay poor, forever, so that we can use the resources you would like to use to accommodate a few billion additional people, but don’t worry, they will also live in misery”. And that doesn’t seem like a very left wing frame of mind to me.
TM 08.03.23 at 8:16 am
engels: “I think you must believe that if a large happy population is enlarged by a given factor (k), average utility must decrease by more than k.”
MisterMr isn't making that claim. To the contrary, it's Parfit who is claiming, afaict without giving any reason for it, that average utility will never decrease faster than population size increases; in other words, he's assuming entirely arbitrarily that the "total happiness" function must be monotonically increasing with population size. This assumption makes no sense (surely there is a point at which two starving people have less happiness than one well-fed person) and makes it hard to understand why anybody would take such pseudo-mathematical reasoning seriously at all.
Alex 40: You make a good point. There are good reasons to believe that societies (especially poor ones) with high fertility rates would be far better off with lower fertility. But we cannot, as low-fertility Europeans, purport to decide for others. The problem with this kind of population discourse (best typified by Ehrlich's "population bomb") has always been that it's a white people's discourse about other people. The best we can do is oppose natalism in our own society and fight for reproductive rights everywhere.
Fake Dave 08.03.23 at 9:04 am
The OP might be stipulating things that are controversial in big-tent utilitarianism, but I do think that focusing on tangible resource distribution does tend to defang most forms of long-termism. We can't give actual resources to hypothetical persons without in some way rendering the resources and their effects hypothetical as well, and my take is that this breaks the inherent logic of utilitarianism. The epistemic/information problems of utilitarianism are old news, and so are most of the solutions at this point, and it's all way above my pay grade in any case, but I do smell a rat when it comes to "implications" of a philosophy that seem to let people off the hook for adhering to its basic tenets.
In the case of natalism, for instance, we should question how resources are to be distributed to (or perhaps preserved for) people who do not yet exist, and who gets to hold onto those resources in the meantime. In a simplistic model, you might simply give parents a subsidy per child that outweighs the costs of having children, but aside from all the arguments about whether more kids means happier families, there's the more basic problem that we don't know how many children people will have until they've had them.
If we preclude various totalitarian population control schemes (and we really should, because they're evil and mostly don't work anyway), what we're left with is either subsidizing the welfare of people who might have kids or supporting kids who actually exist. The utilitarian argument would seem to favor the children/parents who we know need support now over the potential parents who merely may or may not need support. If the merely potential parents decide it's better to keep the subsidy and skip the kids (or more kids) part, then the support becomes surplus and may simply be hoarded or squandered (which is the classic problem with the breadwinner wage). That's not utilitarian, any more than the old argument that rich people aren't hoarding, they're creating a legacy for their heirs, or the supposedly modern approach of the practical "altruists" who say it's OK to be extravagantly wealthy for the foreseeable future because they'll definitely put it to the best possible use after that.
Don’t get me wrong, the anti-long termist argument I just made can be easily twisted into a rejection of conservation, thrift, and planning of all sorts. Those things are absolutely essential for the wellbeing of current and future generations. I also think it’s fair to assume there will be future generations and that we should want the best for them. The problem I have with thinking in the “long term” isn’t quite that we’re all dead in it so much as that we won’t be the ones who are alive. There needs to be some sort of time horizon on consequentialism or it becomes a strange sort of historical determinism.
Psychohistory worked for Asimov because he was writing fiction (and it's only good fiction because it didn't work too well). In the real world (or adjacent realities), it's not particularly easy to predict the consequences of our actions even if we're just relying on our fickle future selves to see them through. Outsourcing the ends that will justify our means to people who don't even exist yet is at best an absurdity. At worst, it's a stifling moral burden we have no right to impose, akin to telling our kids that we did terrible things for them and they had better be worth it. That dumping of responsibility doesn't seem to have nearly as much utility for them as for us.
MisterMr 08.03.23 at 10:44 am
@Engels 39
Rethinking about it, I’ll break my reasoning in two steps, one mathematical and the other about semantics.
The mathematical part: take for example a situation where, for a given set of resources, 1 person gets utility 19, 2 persons would get utility 18 each (so 36 total), 3 persons would get 17 each, etc.; in this situation, the optimal number of people is 10 (10×10=100 total utility), whereas both 11 persons (11×9=99) and 9 persons (9×11=99) would be suboptimal.
It is not that the reduction in average utility always dominates the increase in people; it is that there is a certain optimal point after which the fall in average utility dominates the increase in people.
Now this is a random simplified utility function that I made up on the spot, but I say that this is true for all plausible utility functions, because at some point, as we increase the number of people, we reach a starvation level where arguably utility [happiness] becomes negative, so evidently the maximum total happiness will happen before total happiness turns negative. This is based on the idea that happiness can be negative, which leads to:
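The made-up schedule above (each of n people gets utility 20 minus n) can be checked mechanically; this sketch only reproduces the comment's own invented numbers, not anything in Parfit:

```python
def average_utility(n):
    # The comment's made-up schedule: each of n people gets utility 20 - n.
    return 20 - n

def total_utility(n):
    return n * average_utility(n)

# Total utility rises, peaks, then falls as population grows.
best = max(range(1, 20), key=total_utility)
print(best, total_utility(best))            # 10 people, total 100
print(total_utility(9), total_utility(11))  # both 99: suboptimal on either side
```

The point of the sketch is just that on any schedule where average utility eventually goes negative, total utility has an interior maximum rather than growing without bound.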
The semantic part: Parfit speaks of a "barely positive" utility function, and I think that by this he means something like "starvation level". But as utility more or less represents happiness, it seems to me that "barely positive" still means "a life worth living". Now this is a problem, because there really isn't a clear semantic meaning of a barely positive utility, so intuitions about what it means might vary a lot. But as utility more or less means happiness, it seems to me that, if we are to stay coherent with the logic that utility represents a uniform measure of happiness, then even by a barely positive utility we have to mean a situation where enough people are happy enough with their lives that it is not a good idea to have fewer people (because this is literally the meaning of higher total utility).
It seems to me that Parfit is implicitly, and perhaps unconsciously, switching "minimum positive utility" with subsistence level, perhaps because he is unconsciously confusing distributing resources with distributing utility.
But this second, semantic part of the argument is more confusing because of the difficulty of defining a "barely positive" utility: barely relative to what?
Charlie W 08.03.23 at 12:47 pm
It is I guess an institutional utilitarianism that the OP is driving at. Perhaps that institution is meant to be guided not only by utility but also other ethical ideas taken as given and not stated here (i.e. rights).
But then, why engage with the repugnant outcome or the utility monster at all? Those are cases that some writers have put forward to show the difficulty of relying only on utility as your ethical criterion. This is another characteristic of utilitarianism: its purism / radical minimalism. Instead, why not just say: 'my political proposal is for an eventually smaller population, one that enjoys high living standards, arrived at by humane means'? I'm sure you could get a constituency for it without much philosophical argument.
I haven’t read any of the ‘longtermism’ people but wonder if their texts should be addressed at all: sounds a bit like internet machismo, philosophising as benchpressing, etc.
Charlie W 08.03.23 at 2:04 pm
Sorry, I should have added to the above, and to spell it out: I see no real need to try to counter Parfit’s argument to do with ‘population ethics’ (and suspect it would in any case take quite a bit more than what’s in this post & thread). There are some open avenues outlined in the SEP entry.
In the world as we find it today, population size very much looks like a severe threat. I would generally agree with what people up-thread are saying about it, and would not agree that fixing problems to do with the foundations of ethics is required before we can take action.
Tim Sommers 08.03.23 at 4:18 pm
None of the math is relevant.
Parfit never claims, nor needs to claim,
“that average utility will never decrease faster than population size increases.”
His claim is that where there is population that is more than minimally happy there are conceivable worlds in which a larger number of people are less happy, but the sum of their happiness is still greater.
I don't see how that could possibly be controversial. You can attack the relevance of the r-conclusion, you could deny that the sum is what matters, you could make any number of moves. But Parfit makes no controversial assumptions or disputed empirical claims about populations or population growth or population trends. Like many philosophical counter-examples, there need be only one conceivable case, no matter how unlikely, to make its point. It's even reasonable to say, as a policy maker and not a philosopher, that you think such cases are rare enough to completely ignore. But philosophers can't say that.
Alex SL 08.03.23 at 10:38 pm
TM,
“The best we can do is oppose natalism in our own society and fight for reproductive rights everywhere.”
That’s what I said…? Well, reproductive rights plus education and social safety.
Dave W. 08.04.23 at 2:20 am
TM@15: That follows if people’s individual preferences obey the four axioms of the Von Neumann–Morgenstern utility theorem (there is reason to believe that actual people’s preferences don’t necessarily do so). Given two people who are both Von Neumann-Morgenstern utility maximizers, you could experimentally compute a numerical utility function for each of them over a set of possible outcomes by assigning arbitrary values to two such outcomes – say 0 to the less preferred outcome and 100 to the more preferred outcome, and then asking them questions about their preferences between various lotteries of those two outcomes and a third outcome whose utility needs to be assigned. The resulting utility values will predict their individual choices between other lotteries involving the same outcomes, but the utility values will not be comparable between the two individuals.
So for example, if A is the outcome “You are left totally broke” and B is the outcome “You win a billion dollars” and we assign 0 to A and 100 to B, I would need a very high probability p to choose a lottery of A with probability 1-p and B with probability p over getting my current circumstances for certain. There are certainly a lot of things I could do with a billion dollars that I would potentially enjoy, but as someone who is reasonably close to retirement age, they wouldn’t be worth the risk of winding up broke for the rest of my life with essentially no chance to recover. So maybe I would require p = .999 to take such a gamble, which would imply that my utility for my current circumstances is 99.9 on that 0 to 100 scale. Someone else who desperately wanted money to pay for an uninsured procedure to save the life of a loved one might be willing to take that gamble with a much lower value of p, say p=.6, which would imply that their utility for their current circumstances was 60. But that wouldn’t imply that you could compare our two utility values, or add them together to get anything meaningful. They are just arbitrarily scaled values that predict the choices of the specific individuals involved, again on the assumption that those individuals’ preferences follow the four axioms of the theorem.
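The calibration procedure described can be written out as a small sketch; the 0-100 scale and the two probabilities are just the example's assumed numbers:

```python
def calibrated_utility(p, u_low=0.0, u_high=100.0):
    # Given an indifference probability p between the sure outcome and the
    # lottery (probability 1-p of the worst outcome, p of the best), the
    # lottery's expected utility pins down the sure outcome's utility.
    return (1 - p) * u_low + p * u_high

# The two respondents in the example, each on their own 0-100 scale:
print(calibrated_utility(0.999))  # about 99.9 for the risk-averse respondent
print(calibrated_utility(0.6))    # about 60 for the desperate respondent
```

As the comment stresses, the two numbers live on separately normalized scales, so adding or comparing them across the two people has no meaning.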
MisterMr 08.04.23 at 6:56 am
@Tim Sommers 46
“His claim is that where there is population that is more than minimally happy there are conceivable worlds in which a larger number of people are less happy, but the sum of their happiness is still greater.”
Yes, but assuming that these conceivable worlds have the same amount of resources (otherwise the comparison makes no sense), this claim is wrong, unless you postulate a utility function that leads to an infinite population, which is not a plausible utility function in a world with a finite set of resources.
If you don't like the math-ish jargon, let's put it this way: the sentence "where there is a population that is more than minimally happy there are conceivable worlds in which a larger number of people are less happy, but the sum of their happiness is still greater" implies that population could reach infinity. This is clearly false in a world with finite resources, and irrelevant in a world with infinite resources, so the claim makes no sense.
TM 08.04.23 at 8:31 am
Tim 46: “His claim is that where there is population that is more than minimally happy there are conceivable worlds in which a larger number of people are less happy, but the sum of their happiness is still greater.”
Perhaps I'm getting this wrong. If Parfit claims only that this outcome is not impossible, then I would agree: it's not impossible. But if he claims that it's generally the case, or at least the most likely outcome, I think such a claim is baseless and arbitrary.
More to the point. At least this much seems to be uncontroversial, that utilitarianism judges actions or policies by their consequences. Given a planet with limited space, limited resources, and limited capacity to assimilate pollutants, any plausible consequentialist analysis that takes the interests of future generations into account must come to the conclusion that population stabilization is more likely than population growth to be conducive to the long term happiness of our species.
engels 08.04.23 at 4:58 pm
None of the math is relevant. (TS)
This is correct, but I was just trying to understand the argument. Also, there's a long tradition on parts of CT of treating philosophy as an inferior form of empirical prediction and accusing anyone who disagrees of "trolleyology".
Insofar as I do understand the argument, I don't think it works, because it's trying to infer something about the shape of a graph and the location of the maximum from the fact that it crosses the x-axis. A straightforward criterion for a life barely worth living is not reasonably regretting that one was born.
On your earlier reply, I take the point that your suggestion was only intended for policy decisions, but I think similar problems arise there. E.g. I was just reading the conclusion of Woodward's Plan of Attack, which has Bush reflecting on the long term consequences of the Iraq invasion:
Charlie W 08.05.23 at 8:07 am
“any plausible consequentialist analysis that takes the interests of future generations into account must come to the conclusion that population stabilization is more likely than population growth to be conducive to the long term happiness of our species”
The utilitarian rulers may (and perhaps secretly on our behalf: a twist I hadn’t appreciated earlier) conclude that the greatest sum of utility is to be found in a stable human population of, say, twenty billion, the largest that the earth will support before conditions get really bad and utility absolutely plummets for individuals. Nothing in (classical?) utilitarianism prevents this. Biodiversity, habitat preservation, generous living space, travel, all of that is jettisoned. Conditions are just good enough.
Charlie W 08.05.23 at 8:31 am
I admit that I was inspired enough by this post and the thread to go back and properly read the opening pages of ‘Reasons & Persons’, and also the chapter on the repugnant conclusion to see if I could get a better handle on how ‘longtermism’ may have gotten started. Evidently the people connected with that are strongly influenced by Parfit.
I have to say that I’m to a degree entertained but also absolutely horrified that actual policy proposals are falling out of a scholarly book from the 1980s. It’s just one book, though also one written with great ambition and huge confidence. It contains 154 arguments (they are numbered). The author is absolutely convinced that he is right. The longtermism people, too, seem utterly convinced of their own ideas. And yet the foundation of Parfit’s utilitarianism is something like:
(1) We should do what we have most reason to do;
(2) Our reasons are given by facts (but are not identified with facts);
(3) The notion of ‘a reason’ is primitive and unanalysable (Parfit suggests that we can understand reasons by actually thinking them: in his example, we always have a reason to want to escape torture);
(4) What we have most reason to do is best informed by some consequentialist theory C, improved as far as we can improve it. Ideally, C would contain a solution to the repugnant conclusion (and other related problems). We improve it by thinking it through.
Parfit rejects moral scepticism, subjectivism; he is described as a non-natural cognitivist, something like that (the terms in use here seem to be a bit fluid). And he convinces himself and the longtermists. But can you really hang the real futures of eight billion people (choose a number here) on this construction? It seems so incredibly shaky, to me at any rate.
engels 08.05.23 at 9:29 am
Perhaps I’m getting this wrong. If Parfit claims only that this is outcome is not impossible, then I would agree, it’s not impossible. But if he claims that it’s generally the case, or at least the most likely outcome, I think such a claim is baseless and arbitrary.
You are: he doesn’t.
Yes, but assuming that these concivable worlds have the same amount of resources (otherwise the comparison makes no sense) this claim is wrong, unless you postulate an utility function that leads to an infinite population, which is not a plausible utility function in a world with a finite set of resources.
The universe doesn’t have finite resources. Comparing worlds with different resource endowments is intelligible because it’s intelligible to ask a question like “would a world without sugar have been better or worse than ours”.
MisterMr 08.06.23 at 2:14 pm
@engels 54
Ok, but Parfit is speaking of an ethical theory.
How is the claim "a world with a different endowment of resources would be better or worse than this" relevant from an ethical point of view?
It seems to me that the idea that we can "conceive" a world with an N+1 population implies that we can conceive it as a different outcome of the same world; otherwise we could also conceive a world with an N+2 population and a higher average utility (since we are not constrained by the limited resources) that would be better than both, and solve the problem that way.
engels 08.06.23 at 3:23 pm
Parfit's not claiming that the "repugnant" world is the best conceivable world, just that it's better than the associated world with high average utility. Imho ethics is about what people should do, and it's relevant to that to be able to say, for any two fully specified outcomes (worlds), which is better, even if it's not necessarily something we can bring about now or ever. E.g. we can debate whether it would have been better if Columbus hadn't got to America. It is possible to reason about counterfactual situations like this imo, although it's not usually very reliable.
TM 08.07.23 at 7:16 am
engels 54: “it’s intelligible to ask a question like “would a world without sugar have been better or worse than ours”.”
Sugar is an essential component of the metabolism of most or all organisms in our world, including our own species. There is no world with humans but without sugar.
MisterMr 08.07.23 at 8:35 am
Ok, but if we argue about the hypothetical in which Columbus just sank in the middle of the Atlantic, we are still arguing about two outcomes of the same "world", in my use of the term, whereas it seems to me Parfit says something like: "we can conceive of a world where the Americas didn't exist; Columbus either actually reached India or died before; but since there weren't Native Americans to enslave, this didn't happen; higher average utility; which of the two worlds is better?" That IMHO doesn't make sense, because he would be comparing two really different "worlds", with different rules, not two different outcomes of the same world with the same rules.
engels 08.07.23 at 11:30 pm
57 Ok then, sucrose, to be pedantic.
58 I think the first part is right but I still don’t understand what you’re objecting to. Different moral rules? Utilitarians believe utilitarianism applies universally.
I think my examples may have just confused things. The point of the exercise is to investigate how utilitarianism ranks different situations considered schematically. It does so by reasoning from premises that seem self-evident. It's not interested in the details or how they came about. In a similar way, it's possible to establish whether a 24-mile-high tower or a 36-km-high tower would be taller without getting into arguments about tensile strength or astronaut recruitment.
MisterMr 08.08.23 at 5:56 pm
59
If we could really calculate “utility” then yes, but even if utilitarians postulate a cardinal utility number, we can only guesstimate utility, so in practice we just rank two different outcomes ordinally, using our guesstimate and generally a good dose of moral intuition.
So Parfit builds his case for a repugnant conclusion on one assumption that can only be true if we assume pretty desperate situations, and on the idea that you can always imagine another situation with a lower level of average utility (this as an answer to some philosophers who said that the conclusion isn’t repugnant at all).
This is a very flimsy basis on which to call for a moral intuition that somehow “breaks” utilitarianism, and in my opinion it works because the intuition we get is that of the starvation level (which would happen with fixed resources).
So in practice I think that the “repugnance” of the repugnant conclusion holds because the intuitions it evokes are actually based on an implicit idea of fixed resources.
Or to put it as an example:
Suppose a world A where there are 9 people with 11 happiness, and a world B with 10 people with 10 happiness. Straight sum utilitarianism says that B is better, 100 to 99.
Is this a repugnant conclusion? It seems to me not really. The conclusion is “repugnant” only if you think of a world C with 1001 people each with 0.1 utility, and even then only because it evokes an image of starvation based on the implicit idea of fixed resources.
If we don’t assume fixed resources, there is no reason to think that the 1001 people at 0.1 are starving, nor that a billion-odd people at 0.0000001 are starving; this intuition only comes because the most immediate picture is the one with limited resources.
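The arithmetic in MisterMr’s example can be sketched directly. This is a minimal illustration, not anything from Parfit: the world tuples and helper functions are my own labels for the populations described above.

```python
# Compare the worlds in MisterMr's example under "straight sum"
# (total) utilitarianism and under average utilitarianism.
# Each world is (population, utility per person); everyone in a
# given world has the same utility, as in the example.

def total_utility(world):
    """Sum of utility over the whole population."""
    people, utility_each = world
    return people * utility_each

def average_utility(world):
    """Per-person utility (uniform within each world here)."""
    people, utility_each = world
    return utility_each

worlds = {
    "A": (9, 11),      # 9 people at 11 happiness -> total 99
    "B": (10, 10),     # 10 people at 10 happiness -> total 100
    "C": (1001, 0.1),  # 1001 people at 0.1 utility -> total 100.1
}

# Total utilitarianism ranks C > B > A (100.1 > 100 > 99),
# while average utilitarianism ranks A > B > C (11 > 10 > 0.1).
for name, world in worlds.items():
    print(name, total_utility(world), average_utility(world))
```

The point of the sketch is just that the two criteria reverse the ranking: the sum keeps rising as population grows even while per-person utility collapses, which is what drives the repugnant conclusion.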
engels 08.09.23 at 11:56 am
“The repugnant conclusion isn’t really repugnant: it only seems that way because we are unconsciously influenced by facts about resource limitations that shouldn’t be part of the thought experiment” is a good argument imho and one I think a lot of utilitarians might agree with, although I’m not sure I do.
Fergus 08.10.23 at 6:44 am
Lots of people have pointed out that “utilitarianism is not a theory of ethics for individuals” is not an accurate characterisation of what many utilitarians/consequentialists believe, but I also want to emphasise that this is not just an unimportant question of definitions. Pretty much all JQ’s points rely on this starting point, and can’t be granted by someone who doesn’t agree:
– The dismissal of utility monster arguments against average utilitarianism relies on arguments about the diminishing utility of income, which can only be relevant in the context of certain types of (realist/non-ideal) political theory
– The argument against assigning weight to a hypothetical additional person flows entirely from the view that the question is about resource distribution, which may be granted if we agree we’re only discussing political theory, but completely begs the question if we haven’t already limited ourselves to political resource distribution
MisterMr 08.12.23 at 9:08 am
@Fergus
I personally see myself as a utilitarian and I don’t think that utilitarianism is just a theory of distribution, but then under a generic definition of utilitarianism there might be people who see it as a form of personal ethics, others who see it as a theory of distribution, others who see it as a metaethical claim, etc.; we are just speaking of different uses of the term.
It would be different if we spoke of, say, specifically J.S. Mill’s form of utilitarianism, but this is not what we are doing.