Philosophy goes through self-conscious, periodic bouts of historical forgetting.* These are moments when philosophical revolutionaries castigate the reading of books and the scholastic jargon found in them, and invite us to think for ourselves and start anew with a new method, new techniques, or new ways of formulating questions (and so on). When successful, what follows tends to be beautiful, audacious conceptual and even material world-building (in which old material is sometimes quietly recycled or reinterpreted). Hobbes, Descartes, Bentham, Frege, and Carnap are paradigmatic exemplars of the phenomenon (which has something in common, of course, with religious reformations and scientific revolutions). There is a clear utility in not looking back.
What's unusual about utilitarianism is not that it is a nearly continuous intellectual tradition more than two centuries old. By the standards of a field that starts its clock with the pre-Socratics, that is not yet a very old tradition. What's unusual is that it has become so cavalier about curating and reflecting on its own tradition. In one sense that's totally understandable from within the tradition: the present just is the baseline from which we act, design institutions, or govern society (etc.). Spending time on the past just is opportunity cost forgone or, worse, a sunk-cost fallacy. Worrying about path dependencies and endowment effects keeps one from taking the decisive path forward.
Of course, the previous paragraph is too crude: some active utilitarians write with sensitivity and care about the less savory parts of the tradition's past (I think of Bart Schultz); and Julia Driver has certainly shown that moral complexity can be fruitfully discussed within the tradition. Yet most of what's produced about the past from within the tradition is either very narrowly focused or the product of people who don't really develop the tradition forward. (This claim becomes more evident if you look at the practice of Kantianism, Humeanism, and the variety of virtue ethics out there. Even within formal philosophy there is better curating of its own past.) Nothing has matched Halévy's The Growth of Philosophic Radicalism in richness and insight in the century since it appeared (and philosophically it is by no means without problems). Obviously, too much looking back stalls progress (one may think that this is what happened, say, within phenomenology or Austrian economics at one point or another), but never checking the rearview mirror also generates certain known risks.
In fact, there is a clear recruiting advantage to this march-forward stance. The barriers to entry are very low, the fundamental principles are well developed and clear enough, and one can start making progress within utilitarianism, or applying it to sets of problems, rather quickly. And while that's a gross simplification for areas where there has been lots of existing utilitarian activity, it remains true in the large. If you are earnest and want to improve the world, utilitarianism gives you ready-made tools to think about doing so! And what tools: elegant, systematizing, and action-guiding! That they risk flattening things is a virtue, not a bug. And because the baseline is given, a side effect has been that working within existing institutions is seen as a more efficient and more promising way to contribute to improving the world (say, by making lots of money trading in cryptocurrency) than tackling some of the opaque, entrenched structural injustices through the political processes that got us here.
That's to say, within utilitarianism there is a curious, organic forgetting built into the way it's practiced, especially by the leading lights who shape it as an intellectual movement within philosophy (and economics, of course) and as a social movement. And this is remarkable because utilitarianism, for all its nobility and good effects, has been involved in significant moral and political disasters: not just, say, coercive negative eugenics and (while Bentham rejected this) imperialism (based on the civilizational-superiority commitments in Mill and others), but a whole range of bread-and-butter social debacles that are the effect of once popular economics or well-meaning government policy gone awry. But insofar as autopsies are done by insiders, they never entertain the possibility that it is something about the character of utilitarian thought, when applied outside the study, that may be the cause of the trouble (it's always misguided practitioners, the circulation of false beliefs, the wrong sort of utilitarianism, etc.).
In my view there is no serious study within the utilitarian mainstream that takes its own inductive risk seriously and (this is the key part) has figured out how to make that risk endogenous to the practice. This is actually peculiar because tracking inductive risk just is tracking consequences and (if you wish) utils. It is especially odd because there is, within utilitarianism, a continuous return to the question, which in a way is crying out for an inductive risk analysis, of how much lying to and deception of the public is permissible, or, to be precise, "the possibility of esoteric morality." (I also find it odd that this literature is part of the public, credit economy rather than an oral tradition. But what do I know about esotericism?)
For example, the best effort I am familiar with at doing something like what I propose within philosophy (within economics I warmly recommend work by David M. Levy, Sandra Peart, and Thomas C. Leonard), from someone who at least gives utilitarianism all the benefit of the doubt, is Allen Buchanan's (2007) "Institutions, Beliefs and Ethics: Eugenics as a Case Study," which ends up arguing that "ethics must incorporate social moral epistemology, the systematic comparative evaluation of the effectiveness and efficiency of social institutions in producing, transmitting and sustaining the beliefs upon which our moral motivation, judgment and reasoning depend." This paper has received fewer citations than some of my papers on long-dead figures, so it is fair to say it has not generated a major discussion fifteen years on. UPDATE: It has received little scholarly attention.** The most serious response I found (by De Volder, whom I admire greatly) suggests, basically, that 'this time is different' (because liberal eugenics does not involve coercion, a claim that is not scrutinized). Maybe I have missed the utilitarian who has taken it seriously, but if it's out there it has not generated a real debate or uptake.
Notice that Buchanan simply denies the autonomy of ethics. But even if one accepts that diagnosis (and it helps guard against, say, expert overconfidence and misplaced trust in other experts), this does not exhaust the inductive risk. And while in some philosophical theories downstream consequences are shrugged off, within utilitarianism today long-distance, downstream consequences are the opiate of several projects (including, of course, longtermism). In wider historical context, this unwillingness to take seriously the inductive risk of philosophy, in a project that aims to reshape the world, is also odd because the father figure of philosophy, Socrates, was executed in part on account of the perceived negative effect he had on his students.
Don't get me wrong: utilitarianism is a beautiful, systematic theory, a lovely tool to help navigate acting in the world in a consistent and transparent manner. When used prudently it's a good way to keep track of one's assumptions and the relationship between means and ends. But like all tools it has limitations. And my claim is that the tradition refuses to do systematic post-mortems on when the tool is implicated in moral and political debacles. Yes, somewhat ironically, the effective altruism community (in which there is plenty to admire) has tried to address this in terms of, I think, project failure. But that falls short of a willingness to learn when utilitarianism is likely to make one a danger to innocent others.
This present rant (feel free to check out my more judicious scholarship and blogging about this topic) is obviously triggered by the failure of FTX and the subsequent public comments by William MacAskill, who quite naturally seems angry that he was lied to (although how sincere he is, in light of the possibility of esoteric morality, I leave to others), and correctly outraged on behalf of the victims that the benefactor of his movement probably committed significant fraud and theft. And if you do not believe me that massive historical forgetting is at play here: at one point in his Twitter thread MacAskill writes, "I know that others from inside and outside of the community have worried about the misuse of EA ideas in ways that could cause harm. I used to think these worries, though worth taking seriously, seemed speculative and unlikely." As if such harms had not already occurred in the past!
I close with one final criticism of MacAskill (who in some ways is a victim of the stories utilitarians tell themselves). It concerns his own role in the FTX debacle: the story has been told in many media outlets (in the context of his book launch) of how he encouraged Bankman-Fried to embrace "earning to give" over a by-now mythical lunch in Cambridge. About that, not a word in his thread. (To be clear, I am actually a fan of effective altruism, albeit a critic of longtermism.)
By framing the problem as Mr. Bankman-Fried's "integrity" and not the underlying tool, MacAskill will undoubtedly manage to learn no serious lesson at all. I am not implicating utilitarianism in the apparent Ponzi scheme. But Bankman-Fried's own description back in April of what he was up to should have set off alarm bells among those who associated with him; commentators noticed it bore a clear resemblance to a Ponzi scheme.+ (By CrookedTimber standards I am a friend of markets.) Of course, and I say this especially to my friends who are utilitarians: I have not discussed a problem that exists only within utilitarianism; philosophy as a professional discipline always assumes its own clean hands, or finds ways to sanitize the existing dirt.
* This post was originally published at D&I and has been republished with minor modifications.
+ It's possible that in the end it wasn't a Ponzi scheme, just simple theft or gambling with other people's money.
** Update, 18 November 2022: An earlier version of this post linked to and quoted from footnote 13 in J. Anomaly, "Defending Eugenics," Monash Bioethics Review 35, 24–35 (2018), <https://doi.org/10.1007/s40592-018-0081-2>. Earlier today, Professor Anomaly contacted me and suggested that I misrepresented his footnote. While even after reading his letter and returning to his paper and Buchanan's I feel that my original reading can be defended on textual grounds, I do believe now that my interpretation of the footnote does no justice to what Professor Anomaly intended to say. And while in different contexts I deny that an author's intentions settle such matters, I do think they can help one diagnose an honest misunderstanding or less than felicitous phrasing. Since my use of Professor Anomaly's footnote was merely illustrative, and does not change my argument, I have removed the passage.
{ 72 comments }
John Quiggin 11.13.22 at 3:47 am
I don’t know anything about MacAskill or anything much about Bankman-Fried, so I may be way off the mark in the comments that follow.
I've been denouncing Bitcoin for almost as long as it has existed, and have run across one group of philosophers who are keen defenders, calling themselves Resistance Money (https://www.resistance.money/). They don't look like utilitarians to me, but I'm not an expert.
As the name of the Resistance Money group implies, the suggestion that cryptocurrency is “working within existing institutions” is problematic. To the extent that participants in crypto are more than simple crooks or Ponzi speculators, they appear to believe that it will radically change existing institutions.
I'm mystified by recent developments around EA. As I encountered it a few years ago, it was a relatively straightforward application of consequentialism. The idea is that, in giving aid to poor countries, people in rich countries should choose the approaches and organizations that do the most good, not those that make the donors feel good about themselves. I'm happy to defend this against the opposite view, which I take (perhaps incorrectly) to be the conclusion derived from virtue ethics. But now there is all sorts of crazy stuff around the idea of "longtermism", as well as this entanglement with crypto. My diagnosis is that EA fans have the same problems as dogmatic libertarians: essentially that of teenagers who have just discovered they are smarter than their high school teacher, and conclude that they can push logic to whatever conclusion they reach, however absurd.
Eric Schliesser 11.13.22 at 4:37 am
Hi John,
1. I am friends with one of these, who is definitely no utilitarian. I can’t speak for all of them.
2. Yes, there are definitely folk (like the people from Resistance Money) who think crypto is a way to get rid of or revolutionize some existing institutions. My post was not about them.
3. EA has a non-trivial presence in professional philosophy. And longtermism has foundations in non-trivial bits of professional philosophy (some of it very good and prestigious), including work by Parfit, Bostrom, and others. The relationship between EA, longtermism, and crypto has been documented by the mainstream press, but a really nice bit of reportage can be found here (in the context of (!!) Sequoia's investment): https://webcache.googleusercontent.com/search?q=cache:pizI33lYOGAJ:https://www.sequoiacap.com/article/sam-bankman-fried-spotlight/&cd=1&hl=en&ct=clnk&gl=us
John Quiggin 11.13.22 at 6:58 am
I’ve read a few things by Bostrom, which just strike me as the philosophical equivalent of math proofs that 1 = 0. Something must be wrong, but it’s not worth my while to work out exactly what.
As regards Parfit, the “repugnant conclusion” reads to me like a reductio on Sidgwick, who seems to have been the first person to argue for a notion of total utility. Bentham, the Mills, and (AFAICT) all the early utilitarians were Malthusians, which implies an average utility view. Longtermism, again AFAICT, goes the other way, picking up the repugnant conclusion and running with it.
Chris Bertram 11.13.22 at 8:28 am
@JohnQ You can only get from their Malthusianism to a rejection of total utility on the assumption that the starving millions have lives worth living, if only barely. If each of them has a life that would be better not lived, then Malthusianism could be consistent with total utility.
Matt 11.13.22 at 9:30 am
The link to the paper by Schultz doesn't work (though w/ a few steps you can figure out what it is), but I also want to mention his co-edited volume Utilitarianism and Empire.
(There’s some interesting discussion in his massive biography of Sidgwick, too, though less focused.) But it’s interesting to me that one of the best historians of utilitarianism, JM Schneewind, isn’t even a utilitarian. Is that so for many other movements? (For example, I found most of the historical chapters in the Cambridge Companion to Utilitarianism to be underwhelming.)
On the total vs average utilitarian question, both versions lead to pretty odd conclusions. My view is that this suggests giving up utilitarianism, but then, the ability of utilitarians to outsmart their opponents by embracing a reductio is legendary. (I also think that it's not clear that either Bentham or Mill had thought through the difference between total and average utilitarianism clearly, and that it was only with Sidgwick that this was done. Bentham, for example, regularly talks about "happiness" being the "sum of pleasures experienced", which seems to indicate a total view, but at other times says things that suggest an average view. My impression is that he (and Mill) just hadn't worked this out rigorously. And, for the sort of reform projects that they [maybe especially Bentham] were interested in, it wasn't that important.)
As for Bankman-Fried, while I don't doubt that there was some illegal fraud at the end, I suspect that this will be one more case where the real scandal was what was legal.
John Quiggin 11.13.22 at 10:15 am
Most of the time in social policy, we take the population in question as given, so there is no distinction between maximizing the sum of a quantity and maximizing the average.
Matt 11.13.22 at 10:19 am
Most of the time in social policy, we take the population in question as given, so there is no distinction between maximizing the sum of a quantity and maximizing the average.
And that’s probably a pretty reasonable thing to do in many, but obviously not all, social policy discussions. It doesn’t tell you what to do in cases where you can’t take the population as given, because the policy choices will change the population in obvious ways, though.
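To put that in symbols (a minimal sketch; the notation $u_i$ for individual utilities and $n$ for population size is mine, not anything from the literature): for a fixed population of size $n$, the action $a$ that maximizes the sum also maximizes the average, since

$$\arg\max_a \sum_{i=1}^{n} u_i(a) \;=\; \arg\max_a \frac{1}{n}\sum_{i=1}^{n} u_i(a),$$

because dividing by a constant $n$ doesn't change which option comes out on top. The two criteria can only come apart when the choice of $a$ changes $n$ itself.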
SusanC 11.13.22 at 10:22 am
Many of the Effective Altruists seem to have become believers in the AI Apocalypse (Yudkowsky, etc.).
If we can call Marxism an apocalyptic Christian heresy, then much the same could be said of the AI apocalypse; there is reason to be very skeptical of it.
Old Marxian apocalyptic: dictatorship of the proletariat/communism as the replacement for the second coming of Christ and the kingdom of God on Earth.
New Marxian apocalyptic: capitalism inevitably leads to the extinction of human beings as a species. Variant 1: global warming. Variant 2: AI apocalypse. Economically like variant 1, but with an actually Satanic being getting created by capitalism.
SusanC 11.13.22 at 10:26 am
David Chapman made a joke about the “Nick Bostrom cinematic universe”; the AI apocalypse has something of the flavour of a comic book plot.
John Quiggin 11.13.22 at 10:28 am
You can do a pretty good version of the repugnant conclusion for Rawls. Given any society where some people have a tolerable life (and no one is so badly off as to be better off dead), the chooser behind the veil of ignorance would rather be alive in that society than not exist at all. So, if we are to choose a social arrangement, we would want to maximise the chance of existing.
engels 11.13.22 at 10:33 am
one of the best historians of utilitarianism, JM Schneewind, isn’t even a utilitarian. Is that so for many other movements?
Eric Hobsbawm, capitalism
engels 11.13.22 at 10:50 am
if we are to choose a social arrangement, we would want to maximise the chance of existing
Assuming the population of choosers is less than or equal to the population of the society, all choosers are guaranteed existence, I’d have thought. In Rawls’s experiment it is the members of the society themselves who are doing the choosing (and the size of the population doesn’t seem to be a principle of justice).
John Quiggin 11.13.22 at 10:55 am
Matt @7 Agreed. So, a charitable reading of Bentham (etc) would be that he used the term “sum” when thinking about policies for a given population, but “average” when the population size was in question.
And a genuine question: what is the odd conclusion derived from average utilitarianism? I looked at the SEP and the objection seemed to be that it didn't maximize total utility, which seems circular.
engels 11.13.22 at 12:26 pm
I thought the basic idea of long-termism wasn’t that population growth (or even duration) was necessarily good but that it should be given much more attention in present decision-making than it has been. Ie it is similar to Paul Segal’s argument in his post (which I agree with, despite disagreeing about much else) about giving due weight to the interests of everyone in the world when evaluating social arrangements, but with a temporal rather than a spatial emphasis.
Matt 11.13.22 at 12:32 pm
John at 12: Here's Tim Mulgan in the Cambridge Companion to Utilitarianism: "…the average view faces problems of its own. Many are the variations of the 'hermit problem'. Suppose everyone in the cosmos is extremely happy. We create a new person on a distant uninhabited planet. His life, while very good, is slightly below the cosmic average. Under the average view, we have made things worse; and whether we ought to have created the hermit in the first place depends on the happiness of people in distant corners of the cosmos, with whom our hermit will never interact. Both claims seem implausible."
On its face that's a pretty science-fictiony example, but then again, so are most standard versions of the repugnant conclusion. And it's not too hard to imagine cases where the average utilitarian view suggests that we ought to either do away with, or never bring about, "low performers", even if they are living basically good lives, given that this would raise the average. (The historical connection between utilitarianism and eugenics may be of interest here.) There are ways to try to get around this, but they don't seem to me to be obviously more plausible than the tools open to the total utility view.
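To put illustrative numbers on Mulgan's case (the figures are mine, chosen only to make the arithmetic visible): suppose that before the hermit

$$n = 100, \qquad \bar{u} = 10, \qquad U_{\mathrm{total}} = 1000,$$

and then we create him at $u = 9$:

$$U_{\mathrm{total}} = 1009 \ (\text{up}), \qquad \bar{u} = 1009/101 \approx 9.99 \ (\text{down}).$$

The total view counts his creation as an improvement; the average view counts it as a worsening, for no reason other than where the cosmic average happened to sit.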
I don't have time to address it now (a cop-out, but I'm behind in preparing for teaching tomorrow…) but I don't think the case made in 10 is a very plausible reading of what's being done in the original position. Maybe more later.
engels in 11 – thanks. That is an interesting possible example.
engels 11.13.22 at 12:47 pm
what is the odd conclusion derived from average utilitarianism?
One odd conclusion: when deciding whether to have a child you should rate their expected utility against the average utility of the rest of the population. If everyone else is miserable, go for it; if they’re happy, maybe hold off.
LFC 11.13.22 at 2:03 pm
one of the best historians of utilitarianism…isn’t even a utilitarian. Is that so for many other movements?
Frank Manuel on Saint-Simon and related strands of the “utopian” tradition.
MisterMr 11.13.22 at 2:45 pm
@OP
“And this is remarkable because utilitarianism for all its nobility and good effects has been involved in significant moral and political disasters involving not just, say, coercive negative eugenics and – while Bentham rejected this — imperialism (based on civilizational superiority commitments in Mill and others), but a whole range of bread and butter social debacles that are the effect of once popular economics or well-meaning government policy gone awry.”
And how do you know that eugenics, imperialism, and assorted government policies caused “disasters”?
You know this because you approximately gauge them to have reduced general happiness by a lot.
Hence, utilitarianism wins again!
SusanC 11.13.22 at 4:55 pm
Antinatalism is a consequence of several other philosophical positions, so it isn't quite a reductio ad absurdum of utilitarianism to argue that it implies antinatalism.
E.g. the argument that everyone ought to become a Buddhist monk in order to end the cycle of samsara kind of gets you to much the same place.
John Quiggin 11.13.22 at 6:53 pm
Matt @8 I've always found thought experiments unconvincing, and this one is no exception, but at least it's possible to see where the error is fairly quickly. Utilitarianism, as advocated by Mill, Bentham, etc., is a political philosophy, not a claim about cosmic good. What happens on a distant planet can't matter to us unless we communicate. What happens when two societies with vastly different resources and technology come into contact is a problem for utilitarians (e.g. JS Mill on India), but it's not as if other ethical/political theories have an easy answer to this.
We’ve had an interesting discussion of thought experiments here, such as this one https://crookedtimber.org/2013/12/03/a-non-violent-unfunny-trolley-problem
Engels. I’ll do the “out-Smart” move. Most people, in deciding whether to have more children, take into account whether those children are likely to have a happy life, and make that judgement with reference to the society they live in. The median conclusion is that they should stop at two. Taking the welfare of the world as a whole into account, a bit less than two looks better.
Clarifying: I don’t believe in utils, so I’m really defending consequentialism, rather than utilitarianism in the strict sense.
Sebastian H 11.13.22 at 7:04 pm
EA proponents often suggest that one of the best ways to help is to earn as much as you can and then donate almost all of it. I wouldn’t think that extends to ‘defraud as much as you can and then donate almost all of it’ but I’m also not completely sure it doesn’t.
Peter Dorman 11.13.22 at 8:39 pm
I'm not sure how responsive this will be to the OP, but the issue of utilitarianism as such hasn't come up for a long while on CT, so I'll get these two points off my chest.
Let’s assume for the sake of discussion that values, things whose amounts or intensities are inputs into a utilitarian calculation, are incommensurable. Rather than having a single pile resulting from one choice and comparing it to a corresponding pile for another choice, we have multiple piles for each with multiple comparisons and no clear algorithm for condensing them into a single comparison. In that case, utilitarianism has a soft boundary with other traditions that either prioritize particular types of values or propose criteria for summarizing the piles of outcomes. A virtue approach, for instance and as I understand it, relatively devalues all outcomes compared to a particular input (a more or less complex intentionality) into choice, but insofar as some outputs are markers of this input it has consequences for combining across incommensurates.
The longtermism thing simply baffles me. Starting from the present there is a cone of uncertainty extending into the future. We know, don’t we, that incorporating this uncertainty is like a form of discounting, so we should apply a discount rate to potential future outcomes. And it is reasonable to suppose that this discounting can more than offset the increased weight distant future outcomes might otherwise possess in current decisions. Is there an EA rebuttal to this point that I’ve missed?
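To make that worry concrete (a sketch with my own symbols, not anything official from the EA literature): write the present expected value of an outcome at horizon $t$ as

$$EV_t = p_t \cdot V_t,$$

where $p_t$ is the probability that our present action still makes its intended difference at $t$, and $V_t$ is the value then at stake. If $p_t$ decays roughly exponentially with the horizon, as compounding uncertainty suggests, while $V_t$ grows more slowly, then $EV_t \to 0$ and the far future drops out of current decisions. The longtermist reply would have to be that for extinction-scale outcomes $V_t$ is so enormous that $p_t \cdot V_t$ remains decisive anyway.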
Peter Dorman 11.13.22 at 10:46 pm
While I’m at it, two further thoughts on EA:
EA is driven by quantitative analyses of measures or investments to improve the well-being of the worst off, but it is subject to the same concerns that have been raised against RCTs in applied development. (a) Can we generalize from a set of study cases to the wide range of cases that are different in meaningful ways? (b) Are measures that can be evaluated by quantitative analysis often inferior to those that can't, like wide-scope institutional change? (Surely yes.) (c) Given that quantitative analysis, particularly if it employs RCTs, can be costly, is there a relevant funding bias? (Again, surely yes.)
There is a particular offshoot of EA that resurrects a sort of Gospel of Wealth: for those with the talent, the best option is to make as much money as possible and then give it away according to the criteria of EA. It’s actually a bit more radical than the old GofW, which took getting rich for granted and took matters from there. From a utilitarian perspective, the key question is, what are the costs of wealth accumulation that ought to be set against the benefits of its disbursement? I suppose, if personal wealth accretion is identical to its social increment, the costs are minimal, but in the real world this is unlikely. Much personal wealth accrues from transfers, such as the exploitation of nature (and people who depend on it) and workers, and those are substantial costs.
But also important is the claim that tax avoidance (a big deal with crypto) is justified by EA motives. This too is a sort of transfer and carries with it the cost of less public funding for the things the public funds. Actually, there are two effects on public finance, the direct effect of avoiding one’s own tax payments and the secondary effect of degrading the tax collection system as a whole. When I read pronouncements by EA-ish billionaires about how good it is they avoided paying taxes that others pay, it sounds both arrogant (I know so much better how to spend money) and obtuse (failure to see the consequences obvious to others).
nastywoman 11.13.22 at 10:50 pm
and I have absolutely no idea what you guys are talking about BUT today on Twitter was this question about “longtermism” if it’s really true that there are guys who masturbate before in order to last longer –
And is that true?
(OR – if we would take it as a Philosophical Parable for the collapse of Bitcoin?)
nastywoman 11.13.22 at 10:56 pm
AND my favourite joke –
STILL
is
when Elon Musk and Donald Greenwald get asked at a Party in Belgium by a member of the Flemish minority -(or are they in the majority now)
DO YOU HAVE ELECTIONS IN YOUR COUNTRY?
AND
both Glenn Trump and Elon Must answer unison:
YES!
Every Morning!
Fergus 11.13.22 at 11:55 pm
OP is a perfectly reasonable criticism of utilitarianism, but I’m really struggling to see what it has to do with FTX, Sam Bankman-Fried or Will MacAskill. Plenty of moral tragedies have arisen from utilitarian reasoning, but is this one of them? In those cases we have a lot of written evidence about how utilitarian thinking led to awful decisions – here we don’t have anything like that, we just know there’s been a bit of crypto fraud. There are a lot of crypto frauds – it’s not like we need a special explanation to make sense of it. Even if his thinking was “well, I can move the funds around a bit, probably no harm will come of it and I’ll make a lot of money, which can be used for altruistic purposes” – that seems a lot more similar to “well, I can move the funds around, no harm will come of it and I’ll make a lot of money, which I would enjoy” than it does to any thorough utilitarian reasoning. That is – it seems plausible that he’s a guy who worked at a hedge fund, then started his own, had a lot of money around, and got corrupted by the heady feeling of that power and resources. Not a story that really has anything meaningful to do with utilitarianism, even if it’s idiosyncratic in certain ways because the guy who was corrupted, in this case, professed to be a utilitarian.
Fergus 11.14.22 at 12:36 am
A second comment with some more general thoughts…
Re: ‘esoteric morality’, I’ve personally never understood why this is meant to be an objection to utilitarian/consequentialist theory. It’s really no more than the simple point that even if “do what maximises the benefit of all consequences of your action” is a true principle, it’s totally useless to try and apply it as a decision-rule. You can’t be trying to calculate all the downstream consequences of your action every time you make a choice with any moral significance. I know some theorists (notably Bernard Williams, but bizarrely many people take him seriously) have an objection to the idea of a moral principle being ‘true, but not to be used as a decision-rule.’ But I genuinely don’t understand why – Aristotle doesn’t claim that every choice should be made after a deliberate consideration about virtue (in fact the opposite!), and only a very ungenerous critic would insist that Kantian ethics means working out categorical imperatives every time you act. People use rules of thumb all the time, and the fact that utilitarianism probably involves more rules of thumb than other theories doesn’t seem like a serious point against it. (The ‘esoteric/Government House’ utilitarianism as described in the OP, where it involves secretive governance of an unsuspecting mass who can’t be trusted with moral truth, can obviously be sinister – but I think that in any modern application, a good utilitarian would say that transparent good government is the right rule of thumb.)
JQ @ 20 – the claim that utilitarianism is “a political philosophy not a claim about cosmic good” is (these days) a controversial one, and you’re right it’s at the root of this. I was in a grad class taught by MacAskill about some issues related to long-termism, and he endorsed what some people call the ‘container view’ of morality, where the relevant picture really is the cosmos (which ‘contains’ people, societies, pain, pleasure, etc) rather than a fixed society and set of relationships. If you stipulate that you’re interested in moral/political claims bounded by existing relationships and the duties they generate, yes, the strange cases don’t arise so much.
Peter Dorman @ 22-23 – I am not an effective altruist myself but have been around enough to answer a few things you raise:
– On long-termism, I think EAs would not say there should be a ‘discount rate’ per se, but uncertainty is built into all the calculations and they’d accept that far-off consequences are less likely. The core piece of logic in the whole long-termist concept is that if you’re talking about consequences of adequate scale, then it still makes sense to care a lot about them even if you are very unsure of how they’ll play out or how your actions will affect them. And the core empirical claim is that the issues long-termists are talking about (AI risk, predominantly, but also pandemics much more serious than Covid, and so on) could cause harms on the scale of many billions of lives lost.
– I think EAs (at least those not wholly sunk on AI stuff) are very rigorous at thinking about your questions (a) and (c): they genuinely do alter priorities as evidence emerges of whether one RCT doesn’t generalise, and think about the level of funding/effort that has gone into understanding and solving a problem when they’re assessing their certainty about it and whether it would benefit from more work.
– Your question (b) tends to get extremely irritating responses from EAs, but I think the good answer that can be given is that at the relevant margin (what should you do with your money, or even your career) supporting small-scale things doesn't make institutional change more or less likely. I don't remember exactly where it is, but there's a good (slightly ill-tempered) articulation of this point by Peter Unger: he advocates giving virtually everything to effective charities, and a critic made your point with the slogan "if Oxfam ran the world", to which his response was effectively to agree that it shouldn't, but also to point out that it doesn't and isn't anywhere near doing so.
– Re: costs of wealth accumulation, I haven’t closely kept track of this, but I do remember a few years ago some of the relevant organisations/people (80,000 Hours is one, MacAskill may have spoken about it) coming out and saying that they were revising their lists of ‘best’ professions for earning-to-give, to incorporate the harms done by different professions. I am sure their calculations are relatively limited in scope, though, compared to the point you’re making.
Alex SL 11.14.22 at 1:13 am
The key problem with all these – Utilitarianism, Effective Altruism, Longtermism – is that they are textbook cases of Motte and Bailey arguments. Or to put it another way, in each case they can be defined in a way to make them acceptable but utterly trivial, and in another way that makes them remarkable new concepts but abhorrent.
I google Utilitarianism, and I get “a family of normative ethical theories that prescribe actions that maximize happiness and well-being for all affected individuals” or “a theory of morality that advocates actions that foster happiness or pleasure and oppose actions that cause unhappiness or harm”. Defined like that, how many people would really disagree with it? Everybody is utilitarian, it seems, and so it is about as relevant a school of thought as “regularly breathe in and out-ism”. It is only when we come to implausible thought experiments on the lines of torturing one person to make millions happy that it all falls over.
Effective Altruism can withdraw to the motte of, as per their own website, “a research field and practical community that aims to find the best ways to help others, and put them into practice”. Who doesn’t love that? Who would openly argue, no, actually we should waste enormous amounts of our donations on administrative overheads instead of helping people? Seems we are all EAs, then. Not really, though, because in practice they like to spread out on the bailey of earning fifty billion by poisoning the water supply, donating fifty million to water remediation, and then getting a pat on the back for their charity. The dichotomy here is no-brainer versus flimsy fig-leaf for selfishness.
And longtermism “is the view that we should be doing much more to protect future generations”, fide William MacAskill. Few environmental activists, I think, would reject the idea that we should leave the planet in a habitable state for future generations instead of short-sightedly destroying their opportunities. In practice, however, Longtermists mostly seem to (supposedly) work against extinction risks and towards utopian scenarios that are most charitably explained either as innumeracy or the as result of having read too much science fiction.
Nobody should be taken remotely seriously who argues that global warming isn’t such a biggie because even if it collapses complex economies and kills billions, it will still only kill part of humanity, and the rest can adapt and recover, but malicious self-improving AI will somehow be able to ignore the laws of physics and behave effectively like a vengeful Old Testament god. But that is what many of them argue.
The final piece of the puzzle is the mere existence of future people. Again, most people would immediately agree that, assuming we have a descendant in the future, we should leave the world behind in a state that gives them a chance at a good life just like we had. But the idea of a duty to ensure that they exist in the first place seems extremely odd, because until they exist, they don’t even exist. (Weird having to actually spell that out.) Breaking it down to a personal level, surely it is immediately obvious to anybody except certain flavours of religious zealot that a commandment to produce as many children as possible, even to the detriment of your own quality of life, is morally odious. The same logic extends, in my eyes, to the idea that the hypothetical existence of quadrillions of future minds across the galaxy should be prioritised over the welfare of people alive right now.
But, again, those minds will never exist anyway, because interstellar travel and mind simulation are, best we can tell right now, physically impossible fever dreams.
Joe B. 11.14.22 at 2:40 am
@22 Peter Dorman. Thank you for bringing up uncertainty about the future. I am not a philosopher, but it must be true that the whole epistemology of the future must be part of their considerations? Right? I sometimes think longtermists get around this by engaging in a secular form of Pascal's Wager, at least in the popular accounts I have read. Since there are essentially an infinite number of people in the future, any (positive) utility multiplied by infinity is …infinite! The analogy to religious arguments is striking. Take the Franciscan fathers who enslaved and converted the indigenous population of California. For them, saving a soul saved it for eternity. One could cast this as a sort of longtermist utilitarianism as well, and I suspect they saw it in more or less those terms. I'm not sure the modern secular version is any better.
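The wager structure can be written in one line (a sketch of the popular argument as I've encountered it, not anyone's official formulation): for any fixed probability $p > 0$, however small,

$$EV = p \cdot V \to \infty \quad \text{as } V \to \infty,$$

so once the payoff $V$ is allowed to grow without bound, the probability term does no work at all, and the speculative scenario dominates every finite present-day concern.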
nastywoman 11.14.22 at 7:10 am
‘And longtermism “is the view that we should be doing much more to protect future generations”,
Good Lord?
and I thought it is when guys masturbate before in order to last longer.
Can’yall forgive me – as I must be really…
sophisticated?
Thom S 11.14.22 at 7:45 am
@26 Alex SL
I’m glad to see someone saying it – so much of EA appears to be this weird semantic game where you agree that everyone being dead is bad and that there’s no inherent law of nature that prevents a sufficiently advanced machine from being, in principle, as smart as a human being. And then someone jumps out of a bush and starts ranting about how we all need to drop everything and put 100% of the planet’s resources into preventing a god-machine from arising and reducing the world to paperclips by 2035 and the only way to do this is to inexplicably pay for even more AI research overseen by a think tank run by a weirdo who dropped out of college and…
It’s this tall tower of reasoning with ordinary concerns at the bottom, sheer insanity at the top and a bunch of shaky reasoning in the middle (naïve assumptions about the nature of intelligence, concepts of exponential explosions in self-directed recursive improvement that seem to ignore the present-day realities of AI research, risk assessment that weighs infinite possible future lives against finite present ones, half-baked theories of mind etc etc.). And it seems to be catnip for a certain type of self-professed smart person.
It also appears to come in a cluster with some other ideas that range from interesting (polyamory and other flavours of alternate relationship structuring, traditional meditative practices as guides to certain theories of mind), to vaguely worrying (nootropics and other fringe health science stuff) to deeply concerning (concerns about dysgenics, HBD).
Matt 11.14.22 at 8:50 am
Fergus at 25: Re: ‘esoteric morality’, I’ve personally never understood why this is meant to be an objection to utilitarian/consequentialist theory. It’s really no more than the simple point that even if “do what maximises the benefit of all consequences of your action” is a true principle, it’s totally useless to try and apply it as a decision-rule. You can’t be trying to calculate all the downstream consequences of your action every time you make a choice with any moral significance.
At least as I understand it, that's not the objection to "esoteric morality". Rather, the idea is that "the elite" really do follow utilitarian principles – this allows them to lie when it's beneficial, including, for example, about the nature of morality. The common herd, however, can't be expected to do this, and so they are sold a sort of Sunday-school morality which, while not the "true" morality, is the best thing for the limited sort of folk to follow. That's to say, for most people, believing in utilitarianism would be bad for them, so you sell them Sunday-school morality. The elite, even though they can't of course really hope to rationally calculate all the time, know the true morality and follow it in their own lives, and also do so while directing the lives of the herd. That, I think, is what's supposed to be problematic.
Fergus 11.14.22 at 10:58 am
@ Matt – I agree that is the critique of what Sidgwick talked about (the ‘government house’ version.) I guess my point was just that I don’t think anybody particularly defends that, which is why in my head it’s linked to the question of fully-self-effacing utilitarianism and whether that makes sense. Though I wouldn’t be shocked if there are some EAs out there subscribing to a view of themselves as an esoteric morality elite, in which case fair enough…
Related to that point – since my earlier comment about FTX/SBF I have read a few things that make me more inclined to agree the fraud is related to his moral views: https://twitter.com/KerryLVaughan/status/1591508697372663810?t=at7h6-03XgnWjPQzlzR_9g&s=19
So that would support the OP against my earlier complaint. Though it should also be said that this Twitter thread makes it sound like other EAs closest to him recognised and rejected his approach.
TM 11.14.22 at 1:50 pm
It seems to me that when the hypothetical happiness of hypothetical entities in a hypothetical future is touted as a real world policy guideline, there is no end to the absurdity that can follow. JQ’s analogy of a proof for 1=0 is a good one I think. Somewhere along all the hypotheticals, the philosopher-magician has divided by zero and hopes that the audience hasn’t noticed.
TM 11.14.22 at 1:57 pm
JQ: “Most people, in deciding whether to have more children, take into account whether those children are likely to have a happy life, and make that judgement with reference to the society they live in.”
I’m not sure at all that is true for most people. I’m not disputing that most people do want happy lives for their children (if they have any) but I doubt that their decision making process is much influenced by hypothetical calculations. Perhaps with the exception of those who choose “not to put any more children into this miserable world”, but even they are probably more motivated by their own perceived interest than that of their hypothetical offspring.
TM 11.14.22 at 2:04 pm
“I wouldn’t think that extends to ‘defraud as much as you can and then donate almost all of it’ but I’m also not completely sure it doesn’t.”
This is easy. Obviously the immense good you do with the money outweighs the damage caused to your rich victims' happiness, so you should opt for as much fraud as possible. Isn't this line of argument one of the main objections to utilitarianism? (Not that we wouldn't all like some good old Robin Hood, but in real life Robin Hood mostly turns out to be just another greedy kleptocrat.)
LFC 11.14.22 at 2:46 pm
I don’t know all that much about ‘effective altruism’, but I thought it involved, for instance and as just one example, doing research to make sure that one’s charitable donations go to organizations that spend money in ways that are consistent with their mission and help people, rather than mostly on administrative overhead and fundraising.
I don’t really understand what altruism — effective or ineffective — has to do with artificial intelligence (AI). An affluent or even a not-so-affluent person could decide to give a very large proportion of his or her income to ’causes’ (e.g. global public health, poverty reduction, education, climate, etc.) — and to organizations that work effectively on those issues– without having any particular interest in artificial intelligence. Apparently there is some connection between effective altruism as a movement and artificial intelligence, but I frankly have no idea what the connection is.
Trader Joe 11.14.22 at 4:47 pm
The saga of FTX is really as old as the stones – it's simple theft and conversion. It's not even really a Ponzi scheme, since most who will lose money quite willingly put it there on the false notion that they would profit by doing so – in that instance it's no different or grander than any other failed investment (apart from scale, perhaps).
The fact that the failure in question involved a colorful ringleader and cryptocurrency is really just window dressing and has rather little to do with whether the 'profits' of the enterprise were ultimately to be used for good or evil. His gains were little more honestly gotten than Al Capone's (albeit with fewer Tommy Guns), and it's hard not to look at the source of funds in deciding if they poison the end use.
I'm nowhere near knowledgeable enough to weigh in on utilitarian arguments, but I know the pretty basic fact of: don't take what isn't yours. That's the bottom line here, no matter how many loans, coins or legal entities are involved.
P.S. Even the tiniest bit of regulation and oversight would have helped here – simply having published financials audited by a credible firm would have given a few investors pause (though it wouldn't have stopped what happened).
TM 11.14.22 at 4:59 pm
LFC: This article might help answer your question:
https://aeon.co/essays/why-longtermism-is-the-worlds-most-dangerous-secular-credo
My understanding is that some proponents of EA and “Longtermism” speculate that AI will lead to humans being replaced with computer simulations, which they seem to see as a vast improvement in aggregate utility: “That is what our ‘vast and glorious’ potential consists of: massive numbers of technologically enhanced digital posthumans inside huge computer simulations spread throughout our future light cone.”
engels 11.14.22 at 6:09 pm
Perhaps with the exception of those who choose “not to put any more children into this miserable world”
Worth noting this is almost the opposite of what average utilitarianism says you should do: widespread misery doesn’t matter as long as your offspring won’t be even more depressed and might even be a reason to pop the sprog (to get the average up).
LFC 11.14.22 at 8:25 pm
TM @39
Thank you for the link; I’ll check it out.
Colin Danby 11.15.22 at 12:00 am
Stepping back, this seems like another case of bad actors using philosophers as cover, and some philosophers naively playing along. I am sure MacAskill is totally, painfully sincere, but when he writes that "I had put my trust in Sam," in cultural terms the "trust" was endogenous. Bankman-Fried presented himself as the good guy in crypto, and his charitable activities were a large part of creating an image as a selfless innovator. A lot of people ate it up. Felix Salmon tweeted last week that he "was in many ways the most trusted human in all of crypto."
Consider for example a long May 13 2022 interview in the FT (the most skeptical of the pre-downfall profiles I’ve seen) which includes this passage:
“I take a perfunctory bite of my burger and steer the conversation towards his ethics. Bankman-Fried says he switched from charities to crypto because he became convinced that he would do the most good through earning as much as he could, then giving it away.
“I talked to a few of the charities [and asked] . . . would you like my money or my time?” he recalls. The charities said it wasn’t even close. “Your money — by, like, a factor of five.”
His background in effective altruism, a philanthropic movement with a utilitarian bent, gives him a framework for his decisions, weighing up costs and benefits to achieve the greatest good for the greatest number.
Like other adherents, Bankman-Fried favours causes with the highest impact on human lives, from climate change to preventable disease. The primary limit on his donations is how fast the money can be effectively deployed, he says.”
The interviewer pushes him a bit on how he squares all this with a private jet, but the overall frame goes unchallenged.
Fergus 11.15.22 at 5:28 am
LFC @ 37: I think the missing link is this – probably the most important idea in effective altruism is that you should try to rank and assess causes. So it’s not just (or even mostly) about reducing overheads and ensuring ‘consistency with mission’, which is a much older idea. It’s about identifying which missions are doing the most good overall, and where you can make the most contribution to that good. So, in the older days of EA, the point was generally that a donation to buy malaria nets for people in the Global South would save and improve far more lives than the same donation to a guide dog charity in the West, making it a much more effective act of altruism. (They really, really loved the guide dogs example.)
The AI connection is that a critical mass of influential EA figures and organisations have become convinced that there’s a serious risk of a bad AI emerging which could cost billions of lives, and so, by the same sort of measurement approach, the most effective donation is to research on AI that could help prevent that. A lot of other people have a lot of questions about that logic, but that’s what it is…
John Quiggin 11.15.22 at 7:23 am
@Fergus I think this is right, with a couple of additional points to consider.
1. Implicit in longtermism is the claim that people in the future count just as much as people today. I agree strongly with this. Any coherent alternative implies weighting earlier-born people among those alive now above later-born ones.
2. Following Parfit, "potential people" count just as much as those who are actually alive in some given timeline. Discussion of this is in the thread above.
1&2 imply (with Parfit, AFAICT) that extinction of the human race is the worst thing that can possibly happen and that almost any level of misery (above that which would induce mass suicide) among those alive for any finite period of time is worthwhile to reduce the risk of this happening.
As you say, the fact lots of EA people have decided the AI apocalypse is the big risk is logically unrelated. I’d say nuclear war is a more likely path, myself.
John Quiggin 11.15.22 at 7:31 am
As regards utilitarian anti-natalism, this post https://crookedtimber.org/2021/11/28/the-case-for-being-born/ and the one it links to may be of interest https://crookedtimber.org/2008/09/03/better-never-to-have-been
TM 11.15.22 at 9:43 am
JQ “Implicit in longtermism, people in the future count just as much as people today. I agree strongly with this.”
Do you also agree with the premise that the interests of a larger number of hypothetical future people swamp the interests of the real actually living people?
Susanac 11.15.22 at 10:13 am
Hmm… the EA argument for being concerned about the AI apocalypse might be that even if it's not the most likely end of the world, it's the one that can be averted at least cost (reduction in the probability of the end of the world per dollar spent).
Not sure I buy that.
I could believe this as an example of Mary Douglas's cultural theory of risk: that attention to risks is disproportionately directed to things that involve a change to a way of life. Thus: global warming – doing something about it would require a change to the status quo, so it is less salient as a risk. AI apocalypse – to avert this, we have to carry on as before, without AIs – so it is highly salient as a risk.
LFC 11.15.22 at 12:36 pm
Fergus and JQ
Thank you. That is helpful.
Not sure I would agree that future lives count just as much as current lives when it comes to where research money should be spent. (But I’ll think about it.)
Sashas 11.15.22 at 3:49 pm
I’ve been struggling with how to phrase this for the last few days. Apologies in advance if I don’t get it right:
It feels like there are three different interesting threads to this conversation, all of which are done a disservice by being combined together into one thread.
First and of most interest to me personally, I would love to hear more about the claim that utilitarianism has a unique history of ignoring its own history. I'm not convinced yet – the skeletons mentioned so far are skeletons shared, as far as I'm aware, by every philosophical tradition in existence – but I acknowledge my own philosophical background is pretty limited. Combining this conversation with discussion of a group of billionaire scammers (EA movement) doesn't seem at all helpful.
Second, the EA movement I know started out as the laudable project of trying to focus charity toward actually effective work rather than "feel good" work. I thought it pretty quickly got coopted by billionaire scammers and basically stopped being an actual charity/altruism movement, but I think it could be very interesting to talk about what it was and what it is now. But while its proponents call it utilitarian, it seems like the cheapest sort of "gotcha" to use it to critique utilitarianism in general. I'd love to take apart the absolute moon logic of their utility function as long as I'm not having to do all the leg work (see above about how I think they're just scammers to begin with).
Finally, the FTX thing seems like a potentially interesting exploration. But scams exist everywhere. The scammers are claiming that they are justified under EA and utilitarianism, but since when do we take scammers at their word? Maybe link to EA, if the interesting bit is that they’re all scammers?
MisterMr 11.15.22 at 5:14 pm
A somewhat OT opinion about utilitarianism.
Years ago I read Jung's book Psychological Types, the book where Jung introduced the concepts of introversion and extroversion that are commonly used today.
The book, though, which as a whole is not considered scientific today, contains a lot of arguments debating whether various philosophies and/or historical thinkers exhibit extrovert thinking or introvert thinking.
Jung uses the terms differently than we use them today: they refer to people whose thinking pattern is directed toward their inner world (introverts) or the outer world (extroverts).
I think this difference is relevant when we think about morality (regardless of the dubious scientificity of Jung). For example, take these four cases:
Amy is an old rich woman who wants to help poor people. She spends all her money to fund a school in a very poor part of the planet, the kids from there can get an education thanks to her and will have a better life.
Bob is a rich white dude who spends some money to create a diamond-mining business somewhere in Africa. He totally mistreats his workers (whom he drove off their ancestral lands by buying the land, so they have no choice other than working for him) to squeeze the most profit from them, leading to very short and sad lives for them.
Emily is a rich old woman with religious obsessions; she founds a school in a poor place where she traumatizes all the kids and forces them into religious extremism, while their parents don't realize this because they are poor or ignorant people who are overawed by Emily.
Richard is a rich white dude who creates a tourism business somewhere in Africa. While Richard couldn't care less about the locals, the tourism that he brings to the place makes the locals much richer and better off.
In the first two examples, introvert morality and extrovert morality are aligned: Amy is a good person and leads to good effects, while Bob is a bad person who leads to bad effects.
But in the last two examples introvert morality and extrovert morality are misaligned: Emily has arguably good intentions but fails, Richard is an asshole but by coincidence his actions have good results.
Utilitarianism is a theory whose purpose is that of judging actions, not people, and is completely extrovert: in utilitarian terms both Amy and Richard are equally OK, while both Bob and Emily are equally wrong.
However, we also have an introvert side that (regardless of what Jung meant by the term) includes self-judgement (the eye of conscience, if you will), and the way we are used to thinking about morality is mostly introvert morality.
For this reason, many people find utilitarianism lacking or objectionable, often at a gut level.
But I think that if we agree that the object of utilitarianism is only the extrovert part of morality, most people would agree with utilitarianism.
There are certainly ambiguous points, like how to evaluate the utility of the unborn, or how to calculate utils when “utils” do not exist in reality and so can’t be counted, but I think these are objections of detail rather than of principle.
CJColucci 11.15.22 at 5:21 pm
Nastywoman@25
True story. Our local legal newspaper had a puzzling classified ad in which someone was looking for a “correction lawyer.” While there are lawyers who specialize in representing prisoners, they don’t refer to themselves as “correction lawyers” and often aren’t in the private, for-profit sector, looking for clients. The address on the ad indicated a largely Chinese/Japanese neighborhood, so the most likely explanation was that the person in the classified department mis-heard a request for a “collection lawyer,” someone to go out and sue to collect a debt. (That’s not just a stereotype. My Japanese step-mother used to refer to both my father and me as “Crem.”)
As it happened, the political season was approaching, so I said: “Good thing it wasn’t a politician looking for an election lawyer.”
MisterMr 11.15.22 at 5:25 pm
An addendum to my previous comment.
There are some people who have a “mystic” relationship with religion, like, for example, the hermit monks of the Middle Ages. Jung was extremely interested in this form of mysticism (both Christian and of other religions), and such mystics appear a lot in his books.
This is the most obvious example of introvert morality, and it is something completely lacking in utilitarianism and in modern concepts of morality.
The way I see things, we are built with a natural tendency to seek out figures of authority and seek their approval (as a kid desires the approval and love of his parents), and this sort of introvert morality comes from there; it is, however, also the base, or part of the base, of extrovert morality.
But the two sides of morality are two sides of the same coin; it is just that the introvert part of morality is implicit, and doesn’t appear, in utilitarianism, whereas in virtue ethics it is often the extrovert part that is implicit (it doesn’t appear, but if we imagine a form of virtue ethics that botches the effect on the outer world, it clearly becomes untenable).
RealLongtermistsPlsStep4rd 11.16.22 at 6:35 am
TM @39 LFC@37
I don’t think the Aeon article is a very good explainer. AFAICT Torres has concerns about something in “longtermism” that they can’t coherently express, and instead they write fearmongery pieces trying to portray the EA movement as a far-right boogeyman. They occasionally have some real critiques, but it’s hard to engage with them since they’ll reject any disagreement as dishonest PR by someone hoping to get funding. (It’s a bit uncharitable, but in my head I sometimes compare their argumentative style to antivaxxers).
Effective altruism started as a small team trying to figure out how people could achieve the most good, on a quantitative level. The idea being, the actual impact of different careers or charities might vary a lot, and there actually wasn’t much work done to compare them. E.g., Charity Navigator at the time only considered how much funding went to overhead, but it didn’t compare whether buying diapers for poor American mothers would improve lives more than funding cataract surgeries in rural India. The EA people’s best guess was that the most effective charities could save about 1 life for every 5-8 grand, and since most charities are more constrained for money than talent, most people would do best by choosing higher-income jobs and donating the excess to the best ones.
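To make that arithmetic concrete, here is a minimal sketch; every number in it is a hypothetical assumption except the rough $5,000-8,000-per-life estimate just mentioned.

```python
# Back-of-envelope "earning to give" comparison. All figures are
# illustrative assumptions, except the $5k-8k cost-per-life range
# cited above.

def lives_saved(donation_usd: float, cost_per_life_usd: float) -> float:
    """Crude point estimate of lives saved by a donation."""
    return donation_usd / cost_per_life_usd

living_costs = 60_000  # hypothetical: fixed lifestyle; everything above it is donated

# Hypothetical salaries for two career paths
# (this ignores the direct impact of the charity job itself,
# which a real comparison would have to include):
careers = {"direct charity job": 60_000, "higher-paying job": 160_000}

for career, salary in careers.items():
    donatable = max(salary - living_costs, 0)
    for cost_per_life in (5_000, 8_000):
        n = lives_saved(donatable, cost_per_life)
        print(f"{career}: ${donatable:,} donated -> "
              f"{n:.1f} lives/yr at ${cost_per_life:,}/life")

# direct charity job: $0 donated -> 0.0 lives/yr at $5,000/life
# direct charity job: $0 donated -> 0.0 lives/yr at $8,000/life
# higher-paying job: $100,000 donated -> 20.0 lives/yr at $5,000/life
# higher-paying job: $100,000 donated -> 12.5 lives/yr at $8,000/life
```

The only point of the toy comparison is that plausible differences between options can be an order of magnitude or more, which is what motivated the quantitative turn described above.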
As the movement grew a lot of people became convinced of a couple ideas. One is that if nonhumans can suffer, some animal-focused charities prevent far more harm than poverty-focused ones. Another is that society seems to have a blind spot for potential catastrophes worse than WWII, and even if it’s extremely difficult to quantify, trying to reduce risks of such events may be even more effective than poverty or animals. People tend to favor one of those three, but there’s never been much desire to explicitly split into three movements.
The “AI crowd” is part of the 3rd group. Many people think it’s possible to make a program that’s more intelligent than a human (even if there are currently large gaps between a human and what current ML paradigms can make). A superhumanly smart AI could be extremely good or extremely bad, depending on whether it does what humanity wants or some mangled proxy goal.
Sometimes people get into arguments about whether extinction risks are extra important: wouldn’t it be great if there were lots of future people living badass fun lives, and therefore extra bad if instead we had a dead planet or a galaxy full of emotionless von Neumann probes? And then they get nerdy about how many humans you could fit in the universe, or whether simulated humans could be even happier due to fewer physical constraints. This mostly doesn’t change anyone’s career choices, unless you count Torres writing about it on Aeon.
RealLongtermistsPlsStep4rd 11.16.22 at 7:12 am
AlexSL @28
“Not really, though, because in practice they like to spread out on the bailey of earning fifty billion by poisoning the water supply, donating fifty million to water remediation, and then getting a pat on the back for their charity. The dichotomy here is no-brainer versus flimsy fig-leaf for selfishness.”
That’s ridiculous. Nobody in EA thinks like that.
It is not marketed towards self-serving billionaires with a sudden urge to buy utilitarian-brand indulgences. I suspect that such people are rarely driven to get indulgences; in the rare event it happens, they would likely prefer a warmer, fuzzier indulgence, or something that plays well to a popular audience, on account of being self-serving. If SBF’s thought process had anything to do with charity-related moral justifications, it was likely an “evil for the sake of good” decision, not an “I’m cashing in on my good deeds” decision.
In fact, the general consensus in the movement is that utilitarianism is bad for humans, because humans tend to underestimate the risk and magnitude of their “justifiable” evils. People tend towards “Do as much good as is possible, under the constraint of not doing stuff that common sense tells you is bad. No really, you aren’t the exception; don’t do evil things even a little bit.”
It’s unclear what was going on in SBF’s head: whether the apparent fraud was sheer pride/greed, utilitarian justification, or just drug-fueled incompetence. If it’s the middle one, there have been 15 years of people specifically requesting the opposite of this.
Ray Davis 11.16.22 at 5:39 pm
Eric Schliesser, there are a couple of bad links in your original post & first comment, which makes it harder to follow the references. These seem to work:
Bart Schultz);
Julia Driver has shown that moral complexity can be fruitfully discussed in the tradition.
Sequoia’s investment
Doug K 11.16.22 at 9:34 pm
as I understand it, EA began with Peter Singer and found its best proselytizer in MacAskill. Once MacAskill was captured by the longtermists, EA became just another cult, not easily distinguishable from the long con. It’s amusing that MacAskill, for all his cleverness, doesn’t recognize this and is shocked by his funder’s behavior.
Indeed in the linked New Yorker article,
“Last year, the Centre for Effective Altruism bought Wytham Abbey, a palatial estate near Oxford, built in 1480. Money, which no longer seemed an object, was increasingly being reinvested in the community itself. The math could work out: it was a canny investment to spend thousands of dollars to recruit the next Sam Bankman-Fried. But the logic of the exponential downstream had some kinship with a multilevel-marketing ploy.”
I liked and saved this paragraph from Amia Srinivasan in the London Review of Books,
“There is a small paradox in the growth of effective altruism as a movement when it is so profoundly individualistic. Its utilitarian calculations presuppose that everyone else will continue to conduct business as usual; the world is a given, in which one can make careful, piecemeal interventions. The tacit assumption is that the individual, not the community, class or state, is the proper object of moral theorising. There are benefits to thinking this way. If everything comes down to the marginal individual, then our ethical ambitions can be safely circumscribed; the philosopher is freed from the burden of trying to understand the mess we’re in, or of proposing an alternative vision of how things could be. The philosopher is left to theorise only the autonomous man, the world a mere background for his righteous choices. You wouldn’t be blamed for hoping that philosophy has more to give.”
engels 11.16.22 at 10:29 pm
The tacit assumption is that the individual, not the community, class or state, is the proper object of moral theorising…. The philosopher is left to theorise only the autonomous man, the world a mere background for his righteous choices. You wouldn’t be blamed for hoping that philosophy has more to give.
What a silly put-down. Ofc morality and moral theory are about what individuals should do; ofc there’s more to philosophy than that.
Alex SL 11.16.22 at 11:20 pm
RealLongtermistsPlsStep4rd,
I do not doubt that there are EA theoreticians who are serious about maximising good outcomes and at least attempt to do a cost-benefit calculation.
But I have also seen quite a few people online who consider themselves EAs for taking the strategy of pursuing the highest-paying job or investment they can find, regardless of the damage it causes (or rationalising that damage away), because then they can donate more later – running a large crypto exchange merely being an extreme example. (Even if it wasn’t theft of customer money and/or a Ponzi scheme, the best-case scenario for this business is a kind of casino where the house and some customers win at the expense of most customers, thus causing damage either way.)
The question then is only to what degree that kind of EA is conscious of, or in denial about, using EA as a fig leaf for the damage they are causing in the here and now with their business.
M 11.17.22 at 12:02 am
Given that this post was at least partially inspired by the implosion of Sam Bankman-Fried’s FTX, this interview/conversation with SBF (via Twitter DMs, natch) may be of interest: https://www.vox.com/future-perfect/23462333/sam-bankman-fried-ftx-cryptocurrency-effective-altruism-crypto-bahamas-philanthropy
John Quiggin 11.17.22 at 1:35 am
TM @46 “Do you also agree with the premise that the interests of a larger number of hypothetical future people swamp the interests of the real actually living people?”
“Hypothetical” is the tricky term here. I don’t place any weight on the “interests” of hypothetical people who never actually come into existence. This is where I break with Parfit’s repugnant conclusion.
On the other hand, everything about the future is in some sense hypothetical, including the set of people who will actually exist, contingent on both chance and the choices we all make in the present. We do the best we can with probabilistic judgements about average outcomes across average populations.
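To fix ideas with a rough sketch (the notation here is purely illustrative, not anyone’s canonical formulation): if state $s$ occurs with probability $p_s$ and contains $n_s$ people with utilities $u_i(s)$, the total and average views rank prospects by

$$
\mathbb{E}[U_{\mathrm{total}}] = \sum_s p_s \sum_{i=1}^{n_s} u_i(s),
\qquad
\mathbb{E}[U_{\mathrm{avg}}] = \sum_s p_s \,\frac{1}{n_s} \sum_{i=1}^{n_s} u_i(s).
$$

People who never come into existence in a given state contribute no term for that state under either view; the difference is the division by the realized population $n_s$, which is why adding extra lives barely worth living raises the total but can lower the average. That is where the repugnant conclusion bites for the total view and not for the average view.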
The question of uncertainty is a big issue for consequentialists of all kinds, including utilitarians. But, AFAICT, critics of consequentialism make much of this problem and then ignore it when proposing alternatives.
John Quiggin 11.17.22 at 1:57 am
It’s important to avoid too much hindsight here. Although it’s now obvious to everyone (including, I assume, SBF) that FTX was a fraud, this wasn’t the case until a week or so ago. The fact that SBF was clear on the Ponzi aspect of the whole thing was public knowledge, but the underlying assumption in the crypto world is that all financial assets, particularly fiat currencies, are like this. All of crypto is a fraud of the same kind as FTX, but that hasn’t stopped the financial system, the media and the political system treating it as a creator of genuine assets.
So, SBF wasn’t (as some above suggest) arguing for defrauding the rich in order to help the poor. Rather, he saw himself making money in a morally neutral or even socially beneficial way.
reason 11.17.22 at 4:05 am
To quote JQ
“As regards Parfit, the “repugnant conclusion” reads to me like a reductio on Sidgwick, who seems to have been the first person to argue for a notion of total utility. Bentham, the Mills, and (AFAICT) all the early utilitarians were Malthusians, which implies an average utility view. Longtermism, again AFAICT, goes the other way, picking up the repugnant conclusion and running with it.”
So, if I may stray off topic a bit: picking up the repugnant conclusion and running with it has always seemed a speciality of “Austrian” economics. They want to turn the scientific process on its head: instead of following axioms to their logical conclusions and then accepting or rejecting those conclusions based on their feasibility (or acceptability), they hold fast to the axioms and evaluate the conclusions based on their being supported by the axioms.
reason 11.17.22 at 4:13 am
JQ @60
I know that some people like simple pithy statements to clarify moral principles (most famous of all is the golden rule, the clearly humanist basis of “Christian” morality). Regarding our responsibility to future generations, I like the simple sentence often found in public toilets: please leave (the toilet) as you would like to find it. I think that is as good a principle as any complicated philosophical structure could provide. Any outcome that strays too far from that is suspect in my view.
Alex SL 11.17.22 at 4:47 am
John Quiggin,
That is what I meant in my last comment about the degree to which it is conscious vs. denial. There are probably people who think that they make their money in a socially neutral way by mining coal or patent trolling. That just should not be accepted at face value.
I assume you do not believe that currency is a Ponzi scheme but were citing cryptoenthusiasts’ thinking?
engels 11.17.22 at 8:23 am
Regarding our responsibility to future generations I like the simple sentence often found in public toilets – please leave (the toilet) as you would like to find it
An inspiring vision!
Alex SL 11.17.22 at 9:56 pm
Further to my comment @58, I only just learned that William MacAskill, “a Scottish philosopher and ethicist, who is one of the originators of the effective altruism movement”, wrote a chapter called “The Moral Case for Sweatshop Goods”. Ye gods.
Taken at face value, his main argument seems to be that if the workers didn’t have that exploitative wage, they’d have even less! That is quite revealing of how somebody who considers themselves rational can blatantly argue from a false-dilemma fallacy.
And the thinness of this argumentation is also why I find it very, very difficult to assume honesty and take it at face value. The logical consequence of this thinking is that it supposedly leads to better outcomes if a billionaire pays starvation wages and then has more profits to disburse to charitable ventures of his personal choosing than if he pays good wages and/or high taxes so that his workers and their wider communities don’t need charity in the first place.
LFC 11.17.22 at 10:03 pm
engels @57
But it is possible to take, say, the “basic structure” of society (per Rawls) as the (to quote the LRB passage) main object of moral theorizing, though I agree the passage is not an especially devastating criticism of EA.
Colin Danby 11.18.22 at 2:22 am
Given the Business Insider piece today, which makes connections between Peter Thiel, Elon Musk, EA, long-termism, and white supremacist pronatalism, William MacAskill has some explaining to do.
“MacAskill has never explicitly endorsed pronatalism, and he declined to be interviewed for this article. He did, however, devote a chapter of his best-selling book, “What We Owe the Future,” to his fear that dwindling birth rates would lead to “technological stagnation,” which would increase the likelihood of extinction or civilizational collapse. One solution he offered was cloning or genetically optimizing a small subset of the population to have “Einstein-level research abilities” to “compensate for having fewer people overall.”
Malcolm said he was glad to see Musk bring these issues to the forefront.”
The text is here (scroll down): https://www.reddit.com/r/ScienceBasedParenting/comments/yxzpuu/billionaires_like_elon_musk_want_to_save/
engels 11.18.22 at 9:33 am
it is possible to take, say, the “basic structure” of society (per Rawls) as the (to quote the LRB passage) main object of moral theorizing
I think there’s a reason Rawls’s book is called A Theory of Justice and not A Theory of Morality…
Trader Joe 11.18.22 at 12:42 pm
@63 & @65 regarding “leave it as you found it”
Unfortunately this is a standard that at best works in the short run. Even if all users remain diligent (and we all know the tragedy of the commons), eventually the bowl becomes discolored, the pipes begin to corrode, and ultimately you completely run out of bog wipe. Accordingly, someone (or society as a whole) periodically needs to make things substantially better than ‘the way they found it’, or else the original will cease to function at all.
I find the similarly trite phrase “Everyone wants to change the world, but no one wants to change the loo roll” to be more useful. We’re constantly presented with people who want to build things; we’re rarely presented with people who delight in maintaining what we already have.
engels 11.19.22 at 9:02 pm
Sad I couldn’t post this on the “death of the humanities” thread:
“I’m very skeptical of books. I don’t want to say no book is ever worth reading, but I actually do believe something pretty close to that. I think, if you wrote a book, you fucked up, and it should have been a six-paragraph blog post.”—Sam Bankman-Fried
engels 11.21.22 at 9:28 pm
Agree with Trader Joe about the toilet rule. It also has the controversial implication that I should leave the seat up.