In defence of effective altruism

by John Q on November 23, 2023

With corrupt crypto king Sam Bankman-Fried as its most prominent representative, the Effective Altruism movement is not particularly popular these days. And some other people associated with the movement have bizarre and unappealing ideas. More generally, the association of the idea with the Silicon Valley technobro culture we’ve been discussing here has put a lot of people off. But the worst form of ad hominem argument is “someone bad asserts p, therefore p is bad”[1].

Whether under this name or not, most economic research on both welfare policy and development aid takes the premise of effective altruism for granted. The central premise of effective altruism is simple: if you want to help poor people, give them what they most need. The practical force of this premise arises from a lot of evidence showing that, in general, what poor people need most is money.

(Image generated with DALL-E)

The starting point for the premise is some version of consequentialism. It is most directly opposed to the idea that altruism should be evaluated in terms of personal virtue. To take a typical example, effective altruism would say that someone with professional skills that are highly valued in the market should not volunteer in a soup kitchen, they should spend the time working for high pay and then donate their pay to the people who are hungry. [2] And there are plenty of highly popular interpretations of personal altruism that are even less effective, such as “thoughts and prayers”.

Coming to the question: why give people money, rather than addressing their needs directly? I will quote from my book Economics in Two Lessons, which presents the issue in terms of opportunity cost.

What would a desperately poor family do with some extra money? They might use it to stave off immediate disaster, buying urgently needed food or medical attention for sick children. On the other hand, they could put the money towards school fees for the children, or save up for a piece of capital like a sewing machine or mobile phone that would increase the family’s earning power.

The poor family is faced with the reality of opportunity cost. Improved living standards in the future come at the cost of present suffering, perhaps even starvation and death. Whether or not their judgements are the same as we would make, they are in the best possible position to make them.

There are plenty of qualifications to make here. Maybe the most important is that family heads (commonly men) may make decisions that are more in their own interests than in those of the family as a whole. Giving money to mothers is often more effective.

And sometimes delivering aid in kind is the only way to stop corrupt officials stealing it along the way.

In addition, there are some kinds of aid (local public goods) that can’t be given on a household basis. If people in a village don’t have clean drinking water, then the solution may be a well that everyone can use.

Nevertheless, whenever anyone advocates a policy on the basis that it will help poor people it is always worth asking: wouldn’t it be better to just send money?

[1] Ad hominem arguments aren’t always bad, as I’ve discussed before. If someone is presenting evidence on a factual issue, rather than a logical syllogism, it’s necessary to ask whether you are getting all the facts, or just those that suit the person’s own position.

[2] Contrary to the SBF example, it doesn’t suggest stealing money to buy an island in the Bahamas, then covering up by giving some of the loot away.

{ 60 comments }

1

John Q 11.23.23 at 9:42 am

I’m talking about “effective altruism” the idea, not Effective Altruism (EA) the movement. But it’s worth observing that EA isn’t a global monolith. In Australia, it focuses much more on the original question “what’s the best way to help poor people”. In the US, it seems (from the Antipodes) to be much more about the crypto culture and in the UK to be influenced by an eccentric philosopher.

2

Matt 11.23.23 at 10:18 am

As you note in the comment (I would have put this first in the text!) there is a sort of common-sense meaning of “effective altruism”, and then there is a movement. The movement is as much influenced by Peter Singer as by any other person, I think. The movement (and Singer, of course) claim to be guided by utilitarianism in a straightforward way, but say some things that are importantly different from what’s claimed in the post, different enough to make them worth more discussion than is given here. (You don’t have to do that – you can write on what you want to, of course, but it seems worth thinking about by someone, especially by someone who wants to use the “effective altruism” label.)

So, you say: “The central premise of effective altruism is simple: if you want to help poor people, give them what they most need.”

I don’t think this is what Effective Altruism says. What EA says is that you should do the thing that helps the most people in the biggest way for the money you spend. But that’s importantly different from the claim above, because the EA claim implies that you (we) should be focusing on particular people – the poorest ones, or at least the poorest ones that can be effectively helped. So, if you are trying to help poor people in Australia (or maybe even Vietnam) by “giving them what they need”, you’re almost certainly not getting as big of a bang for your buck as you could be by doing something else for much poorer people in some other country. And so you should be doing this other thing. Maybe that’s right! But it’s a much more controversial claim.

Interestingly, the EA people also seem to not think that giving people money is the right thing, at least in many cases. Rather, they focus on malaria nets, vaccines, and things like that. One justification for this might be that these are things that can’t easily be supplied by poor people themselves if you just give them more money, but that will make a big difference if the people in question can get them. (I don’t have any idea if this is true, but it would be a non-paternalistic justification.) It does seem plausible that if a rich guy can fund work on a vaccine that will help a huge number of poor people (or might do so) that might be better than giving a relatively small amount of money to individual families.

In any case, it seems hard to argue against claims like “It’s better to take effective steps towards your goals than ineffective ones”, or “if you want to help people, it’s more important that you do something that actually helps them than to do something that makes you feel somewhat better about yourself subjectively”, and even “it’s a good idea to figure out what the effective means to your ends are, and this often includes listening to experts”, but I’m not sure these things need a label (let alone a movement) or are the province of utilitarianism alone.

3

SusanC 11.23.23 at 11:21 am

People have a problem with EA the movement, not effective altruism the idea.

In particular, people have a problem with an apocalyptic religious cult based on fear of imminent AI apocalypse.

Not clear if the idea necessarily leads you to the movement, except maybe that utilitarianism breaks down when you have low-probability outcomes with very large negative utility.

4

Alex SL 11.23.23 at 1:08 pm

The problem is that Effective Altruism is a classic motte-and-bailey. There is an interpretation that is sensible but utterly banal – as you wrote, “most economic research on both welfare policy and development aid takes the premise of effective altruism for granted” – and another interpretation that is seemingly exciting and special but ends up in ends-justify-the-means reasoning and tunnel vision.

No reasonable person would, if given a direct and singular choice, donate a thousand dollars to a charity that wastes 80% on administration instead of a charity that spends 80% directly on aid. But no reasonable person would agree that therefore only the one most-lives-saved-per-dollar cause ever deserves funding even after it has received ten times as much money as it needs, while lower-urgency causes that merely improve the lives of the poor have to remain forever unfunded. Yet that is where prominent EAs tend to reason themselves to: they’d rather have more money to donate for malaria nets and AI alignment than pay the taxes that go to infrastructure, social welfare, education, etc.

No reasonable person would be opposed to the idea of a talented person pursuing a well-paying career and donating a lot of their income to good causes. But all reasonable people would agree that if a talented person pursues a career that extracts a lot of money by harming people (e.g., by defrauding customers or investors, or massively underpaying workers) and then donates a bit of money, they are merely using the donations as a fig leaf, and their overall impact is negative. Yet there is a great tendency for EA to end up as a fig leaf. Reasonable people may also question the fairness of extremely well-paying careers and poorly paying careers existing in the same economy in the first place, but oddly EAs aren’t generally socialists. Because first and foremost, they want to be rich.

No reasonable person would be opposed to risk-benefit calculations. But any reasonable person is weirded out if every social question gets turned into a pseudo-rationalist, pseudo-Bayesian gambling addict’s calculation of Expected Value. Not all problems are math problems! There are sometimes simply things we don’t do because they violate people’s rights, not because it turns out upon careful calculation that they have a high risk of leading to lower EV. Yet prominent EAs are often pseudo-rationalists who see everything as a nail to their probabilistic hammer.

Now, one could say, No True Effective Altruist. My point is, if you remove all the No True EA, all that is left is the motte, the exact same moral reasoning about charity donations that every rational person uses even if they forcefully reject EA: hey, maybe we should avoid waste and be efficient.

5

Matt Bailey 11.23.23 at 1:12 pm

As I’m sure astute observers have previously noted, effective altruism (the idea) is the motte, and the other, more suspect tenets of Effective Altruism (mind uploading, longtermism, robot gods) are the bailey.

6

Bill Benzon 11.23.23 at 1:53 pm

@Matt. Yes. In the extreme, EAs (we’re talking about the movement, some of the “bizarre and unappealing ideas” mentioned in the OP) will reason that the worst thing that can happen is that we’ll create an artificial general intelligence (AGI), it’ll start improving itself and go FOOM (a term of art in that world) and become superintelligent. It will then turn bad and destroy us. Thus, so they reason, they can get the most bang for their bucks by attempting to stave that off by figuring out how to ‘align’ AIs with human values. A lot of philanthropic money has been given to that cause. That’s why SBF donated/invested $500 million, I believe, in Anthropic, which was founded by people who split off from OpenAI because they thought OpenAI wasn’t sufficiently concerned about AI safety.

And beyond even that, some EAs reason that, given that AI alignment is possible and successful, the very best thing we can do with our money is foster the development of fully aligned AI. Why? Because then those AIs can multiply like (artificial) bunny rabbits and fan out and colonize the whole galaxy over the next billion years. Think of all those (artificial) lives we’ve been able to bring into existence (save) by investing in AI. So much goodness! And all of it in the hypothetical but oh-so-wonderful future.

7

Phil H 11.23.23 at 2:35 pm

Not really the point of this post, I know, but this just struck me as I was reading:
“Whether or not their judgements are the same as we would make, they are in the best possible position to make them.”
Are they? I mean, the one thing we know about your hypothetical family is that they’re poor. One of the conclusions we might draw from their poverty is that they are not very good at ending their own poverty. One of the conclusions we might draw from our superior wealth is that we know more about being wealthy.
Obviously I’m ignoring an assumption that is built in here: we know that for most families, the biggest factor that determines their level of wealth is the society that they find themselves in. So the fact that family X in Liberia is poorer than my family is not to be ascribed to the financial incompetence of family X, but just to the fact that they live in Liberia. But that then immediately calls the ability of the family to change their own situation into doubt: if the family is poor because they’re in Liberia, then handing some cash over to this family isn’t going to fundamentally change their circumstances…
I’m not really anti direct aid. I just thought this justification was a little bit too simplistic.

8

engels 11.23.23 at 2:38 pm

British government policy is based on the principle that what poor people need is work, whether or not it pays well, or at all, and whether or not they want it, or are able to do it. Whether our charity sector agrees with that is not always as clear as it could be.

9

TF79 11.23.23 at 3:20 pm

The motte-and-bailey EA clarification is appreciated. As someone who has only partly paid attention to the topic, I was trying to understand how a bunch of weird shit had glommed on to some reasonably sensible intuitions.

10

LFC 11.23.23 at 3:31 pm

There are reasonable ways to criticize EA (the movement) and then there are unreasonable ways to criticize it. The latter approach is to spew snark, invective, and ad hominem, often without bothering to spell out what is objectionable (cf. quite a few posts at Lawyers Guns & Money). To their credit, the comments above take a reasoned/ reasonable approach to criticizing the movement.

Btw, re the OP’s point about money vs. in-kind aid: U.S. food aid to developing countries under PL480 is mostly in-kind (which incidentally benefits U.S. farmers, and probably the better-off among them) whereas other developed countries tend to give food aid in-kind.

11

Sashas 11.23.23 at 3:42 pm

One potential problem with Effective Altruism the Banal Idea: Even if we concede that “if you want to help poor people, give them what they most need” is basically correct, we need to ask how important it is. If someone is giving but not according to this principle, how much effort should I spend getting them to change how they give? If the answer is “zero effort”, then that’s a problem even for the banal idea of EA.

I’m arguably pretty into EA the Banal Idea myself, and my position is that all individual efforts pale in comparison to changes in public policy. So I channel most of my “altruistic” investment into labor organizing. But outside of a philosophical discussion like this, you won’t see me shitting on individual or private charitable efforts, because it’s a waste of my time and effort to fight people who are pointed in the correct direction, even if they aren’t being as effective as I think they can be. There’s always limits, and I have some, but these still aren’t something I throw much of my effort into. It helps that I’m not in a position to have to support or cooperate with other people’s charitable projects.

12

LFC 11.23.23 at 3:54 pm

Correction: meant to say that other developed countries (other than the U.S.) give food aid as cash earmarked to buy food, rather than in-kind.

13

Peter Dorman 11.23.23 at 5:23 pm

I agree with Alex SL @4 but would add one further point that takes us away from narrower versions of act utilitarianism: sometimes it is good to do a publicly virtuous thing aside from its direct outcomes because it inspires others to do the same. When a celebrity spends a day at a soup kitchen instead of doing another gig and donating the proceeds, it encourages the rest of us to emulate the spirit on display. And this also works in smaller scale communities like workplaces, in public spaces (parks, buses, etc.) and so on — you don’t have to be a celebrity to be seen and have a moral force on others.

Taking that one step further, sometimes specific acts or choices are embedded in a wider network of social relations, and the health of that system can be affected by decisions that are overly calculating. I have argued that unhealthy working conditions belong in that box even when the reduction in risk doesn’t seem to fully warrant the economic costs of improving health and safety. It’s also about caring.

14

MisterMr 11.23.23 at 5:30 pm

“To take a typical example, effective altruism would say that someone with professional skills that are highly valued in the market should not volunteer in a soup kitchen, they should spend the time working for high pay and then donate their pay to the people who are hungry.”

The core question is whether the money comes from greater material output (as orthodox economics usually implies) or from exploiting some sort of rent (as e.g. the LTV usually implies, but also orthodox economics in the case of rents).

If for example Rich Guy X makes money by somehow exploiting the inhabitants of the Fiji Islands, but then gives [some of] this money back to the inhabitants of the Fiji Islands as help, he is not really helping them.

So a common implicit doubt against this kind of altruism is that a group of rich guys become rich by exploiting others, then give away a percentage of the loot as charity and believe they are benefactors; if instead they worked in a soup kitchen this doubt would not exist (there would be a surplus of soup kitchen workers though).

Incidentally, this is very similar to the problem of simony: in the Middle Ages, the church maintained that making money from money (as bankers but also merchants in general do) is sinful, and therefore these people would go to hell.
In order to save their souls these rich people gave money to the church as charity.
But as the economy of the late Middle Ages picked up, the quantity and influence of “merchants” increased, so that in the end the ones who were supposed to be sinners became the bosses, and they too could buy “paradise” through charity; this became unacceptable to many, who then started accusing the church of “simony” (selling absolutions for money).
This same ethical problem exists for people who make money with (supposedly) unethical means and then give out some in charity.

15

Thomas P 11.23.23 at 6:42 pm

Phil, if you want to help one family, maybe you can come up with a way that is better than what they would do themselves, but if you want to help a thousand, sending money seems more efficient than trying to figure out what each of them needs most. This is especially true as wealthy people usually know little about how to live poor. Being good at investment banking might be great for turning one million into two, but it’s useless if you start out living on the street.

16

SusanC 11.23.23 at 7:06 pm

Maybe SBF was just a fraudster who wasn’t the least bit altruistic.

But I can also see a way the ideology leads to SBF-like behaviour.

So, you’ve convinced yourself that you can better benefit charity by working a well paying job and then donating some of your wages, rather than by helping the charity directly.

So, then the thought goes, what if I defraud millions of dollars from these rich guys, and donate it to charity? That sounds virtuous, maybe, at least within the EA ideology.

-> SBF

17

StevenAttewell 11.23.23 at 7:46 pm

“What EA says is that you should do the thing that helps the most people in the biggest way for the money you spend.”

I think this is the gap in the motte through which the “bizarre and unappealing ideas” come in to lay siege to the bailey: people can differ about what “the thing that helps the most people in the biggest way” is, and in the case of EA the movement the problem is that a bunch of wealthy and influential people have concluded that, rather than poverty in the global south or climate change or something actually useful, the thing that would help the most is to focus on Skynet and asteroids, because the “most people” are actually the unborn trillions of holographic minds uploaded to Dyson sphere clouds in space.

Perhaps the next step for effective altruism the idea is banning people from Silicon Valley from reading science fiction novels until we can figure out the negative side effects.

18

Alex SL 11.23.23 at 9:47 pm

There is also an element of arrogance and narcissism to EA (again: not the utterly trivial idea that efficiency is good, but the character of the EA movement that makes it a movement beyond “we should be efficient” – “yeah, well, duh”):

A bunch of rich people think that they specifically, personally, know better how to do charity or save the world than thousands of experienced professionals and thousands of years of moral philosophy, because they Dunning-Kruger themselves smarter than every other human being and can do some probability estimates with parameters they entirely made up to fit their libertarian and futurist biases.

Conclusions like it is best to give money to mothers or it is best to give aid directly are based on evidence, empirical studies, and hard-won experience. Conclusions like AI alignment is the most important cause because it saves trillions of future virtual minds simulated on supercomputers powered by Dyson spheres in the Andromeda galaxy are certainly based on something, but not on evidence or even being able to touch reality with a very long pole.

19

Chetan Murthy 11.23.23 at 11:28 pm

TF79: “I was trying to understand how a bunch of weird shit had glommed on to some reasonably sensible intuitions.”

As Prof. Temkin taught us in freshman Moral Philosophy (nearly 40 years ago), every codified moral system can be gamed. Every one. I remember the gaming of Utilitarianism, and even of Kantian Ethics. Any system for ordering human behavior that depends on human judgment can be gamed. It’s up to us to notice when somebody is arguing in bad faith, gaming the rules, and smack ’em down.

20

Matt Bailey 11.24.23 at 2:42 am

@6Bill

The grey goo/paperclip maximiser is not even the full extent of craziness.

Consider the (chimeric) Basilisk, the idea that a hypothetical superintelligence will punish, in a virtual hell, simulated copies of the consciousness of those who foreknew its inevitability but did not work to bring about its existence.

Much virtual ink has been spilled and substantial sums donated to EA boffins to theorise about “alignment” to forestall this Yeatsian Second Coming.

Meanwhile, the individual who dreamed up the whole Basilisk scenario has gone completely mask-off “traditionalist” neoreactionary.

21

both sides do it 11.24.23 at 4:33 am

There’s also a kind of micro/macro epistemic gap in the ‘banal’ EA thinking: in the aggregate the strategy of “give people what they most need” might produce the ‘best’ results of any alternative “strategy”, but for any given individual decision that might not be the case at all

e.g. “I’m not going to volunteer cooking at this soup kitchen because I can make more working and donate it” -> that specific soup kitchen closes 3 months later because the volunteer cooks are burnt out

We don’t actually know the consequences of our individual decisions

22

Alan White 11.24.23 at 6:45 am

All I know is that I have benefitted from a lucky life where I have more than I need to live comfortably and so I follow my instincts about improving the world. I’m not rich by any means, but I’ve given to Oxfam, Doctors Without Borders, Planned Parenthood, The Southern Poverty Law Center, and lots of democrats among many other seemingly good charities to the tune of low tens of thousands of bucks because I naively believe that my meager largess might do some good if I send cash to those who I believe can keep the world from falling farther into some sort of abyss created by greed and hate. That’s all I can do by sheer instinct. If my instinct is somehow corrupted by failures to appreciate subtle distinctions about what constitutes effective altruism, well then I guess I’m a total idiot.

23

oldster 11.24.23 at 10:59 am

“But the worst form of ad hominem argument is “someone bad asserts p, therefore p is bad”[1].”
I agree that is a bad form of ad hominem argument, but I don’t think it’s the worst.
Consider: Hitler used to use a different form of ad hominem argument, and he was an even worse person. So, his form of ad hominem argument was even worse than the one you mention.

24

bekabot 11.24.23 at 1:29 pm

“not the utterly trivial idea that efficiency is good, but the character of the EA movement that makes it be a movement beyond ‘we should be efficient'”

Is efficiency good on its own? I don’t know that it’s not, but I think it’s open to question. If you want to do good, of course it’s better to do good more rather than less efficiently, because then you can do more good with less outlay (‘more bang for the buck’, etc.). But if what you want is just to be efficient, where does the do-gooding come in? The two aren’t necessarily connected.

(“I’m bound for the kingdom of heaven — I was efficient!” Dickens [for one] would have killed himself laughing. He would have responded: “Efficient at what?”)

25

Dr. Hilarius 11.24.23 at 8:14 pm

@17 mentions the possible role of science fiction in shaping longtermist EA. SF writer John Scalzi has written in his blog about the negative consequences of tech bro science fiction readers mistaking SF for reality.

@20 mentions an AI, the Basilisk, punishing late adopters with eternal virtual hellfire. I was unfamiliar with Roko’s Basilisk (and Yudkowsky taking it seriously) but immediately recognized the idea as a major plot device in Iain M. Banks’s “Surface Detail.” Surface Detail was published in 2010, the same year that Roko presented his Basilisk thought experiment. Life imitates fiction? It is astounding that grown people fret about such things. They need some more immediate problems in their lives to distract from this nonsense.

26

LFC 11.24.23 at 11:58 pm

@ bekabot

I think the context of Alex SL’s comment @18, along with the topic of the OP and the whole thread, makes fairly clear that he means “efficient” as in: helping people without a lot of administrative waste/overhead.

27

Chetan Murthy 11.25.23 at 12:31 am

Matt Bailey: “Consider the (chimeric) Basilisk, the idea that a hypothetical superintelligence will punish, in a virtual hell, simulated copies of the consciousness of those who foreknew its inevitability but did not work to bring about its existence.”

For me, the truly funny bit about this, is the idea that somehow this superintelligence will be able to simulate the consciousness of a person existing today (long before the existence of this superintelligence). I mean ….. so much stupidity in that assumption, soooo much stupidity.

Na ga ha pen.

28

engels 11.25.23 at 12:42 am

I was trying to understand how a bunch of weird shit had glommed on to some reasonably sensible intuitions

I think this may be a general effect of how wealth corrupts political discourse.

29

Logothete 11.25.23 at 2:24 am

If you want to help poor people, why not support them in deciding what they need, and simply ask them what they need? If the structures aren’t there for them to bring their will to power, it might be more effective to ally with them and their nation, plus the international order, in promoting more accountable and effective governments. I respect Peter Singer. I’m very skeptical of EA. In comments you say effective altruism is different from the movement; forgive us if outsiders can’t tell the difference. Anyway, the way EA organizes, welcomes people, and decides how to engage with people outside will convince more people to take it seriously or not. I suspect that will have a larger impact than apologetics. I have no problem with the idea, which seems banal but a good starting place. I think of EA the way I think of sociobiology: it had useful and valuable things to say until reactionaries dominated the public sociobiology discourse with just-so stories defending gender, racial, and class hierarchies, making it harder for people outside the field to take it seriously. That might happen to EA, but that’s something for effective altruists to figure out. Schisms are ok. Peter Singer is right about a lot of things.

30

SusanC 11.25.23 at 10:27 am

Although AI doomerism seems to have grown out of effective altruism (the banal idea) it is, in some ways, counter to it.

In a normal EA (the banal idea) project, you can measure how effective your intervention is. That’s kind of the point.

Problem: we don’t know how likely AI doom is, so we have absolutely no idea how effective the attempts to prevent it are.

31

SusanC 11.25.23 at 1:10 pm

@Chetan Murthy

I had always understood Roko’s Basilisk as a reductio ad absurdum of certain ideas that were prevalent in EA. If you don’t believe its premises, then it’s just ridiculous. On the other hand, if you do believe the premises, and end up with Roko’s Basilisk, it’s a call to consider which of the premises might be wrong.

32

SusanC 11.25.23 at 1:15 pm

33

Matt B 11.25.23 at 2:13 pm

It doesn’t matter much, and I don’t mean to nitpick, but the fallacy you’re describing is probably better termed the genetic fallacy (attacking an idea based on its origins).

The ad hominem, I think, is a subtype of the genetic fallacy that needn’t take the form you’ve laid out, but can simply be an irrelevant attack on someone’s character.

But absolutely, we shouldn’t lose faith in the idea of altruism because of billionaire crooks. When it comes to the capital EA movement, we’re better off looking to people such as Will MacAskill, who aren’t rich (in US/EU terms) but are still donating most of their salaries to causes most likely to help people.

34

bekabot 11.25.23 at 2:28 pm

@ LFC

I wasn’t speaking specifically but in general. My comment wasn’t aimed at any individual, certainly not at any individual who’s here.

35

Bill Benzon 11.25.23 at 7:45 pm

Here’s an interesting long excerpt (from X) about how EA went over to the dark side. It begins:

Effective altruist now seems synonymous with AI doomer, but it wasn’t always that way. My own experience & why I think a lot of it is bullshit AI doomerism now:

I was an active member – went to multiple of their global summits and still mostly donate to causes that they recommend.

I love a bargain, and EA was about evidence-based ways to reduce global poverty & suffering. You donate to deworming, vaccines, malaria nets – they save lives and it’s quantifiable. Don’t donate to a university, give money to save lives today.

But then it shifted to being more about tail risks, mainly AI. I first saw it in 2015 at the global summit on Google’s campus – the most high-profile person talking about it there was @elonmusk. He read Nick Bostrom’s book Superintelligence and donated $10M to the Future of Life Institute to study existential risks from AI, and made a convincing argument to the EA crowd about AI risk. A lot of folks were donating to such causes at the time – Peter Thiel was the top donor to the Machine Intelligence Research Institute, a similar research institute to manage existential risk.

And so forth.

36

Alex SL 11.25.23 at 9:16 pm

bekabot,

As LFC wrote, I thought it was implicit in the whole discussion that it is about efficiency with doing good – that is what the A in EA means, after all. Of course efficiency is bad if it is somebody doing bad things efficiently. (That is why I don’t understand why people complain about incompetence in government when the other side are in power. Don’t you want them to be incompetent at implementing policies you want to fail, at entrenching themselves in power, etc.?)

37

Bart Barry 11.26.23 at 5:19 am

I lean toward irrespective altruism, myself, as a rule.

38

Matt Bailey 11.26.23 at 5:41 am

This article provides a quick overview of EA and its offshoots, including some fairly recent developments (e/acc, Andreessen), the main arguments, and their flaws/issues.

https://newsletter.mollywhite.net/p/effective-obfuscation

39

Fake Dave 11.26.23 at 6:07 am

I don’t think anything calling itself effective altruism could ever be banal or uncontroversial. The label itself implies that altruism without qualification is likely to be ineffective and thus a wasteful or even counterproductive way to distribute resources. That’s not necessarily wrong.

The charity model has lots of problems and good intentions don’t justify bad policy. Critiquing altruism while respecting its existence and merit is at least a refreshing alternative to the tedious and circular “is real altruism even possible?” arguments of the Rational Choice crowd. The problem is that EA concedes a lot of ground to the cynics and I’m not sure it actually presents any compelling alternative model for philanthropy or humanitarianism. It’s still affluent people cutting checks, serving on boards, and lobbying governments for pet causes.

Really they seem mostly to be drawing lines in the sand about who gets to decide what “effective” means and disparaging approaches that fall outside it. Even the more ethical ones like MacAskill spill ink warning people away from channeling their desire to do good into causes that are unworthy or small potatoes in a way that betrays a certain grandiosity. It’s about having a vision to change the world, not random acts of kindness. The way that dovetails with individualism and the entrepreneurial ethos may help explain how it attracted so many people who are more Randian than Millian and the weird road it has gone down.

40

Thomas P 11.26.23 at 8:03 am

What makes the Basilisk even more silly is that you can just as well imagine an anti-Basilisk: a future AI that only punishes those who tried to create something malevolent like the Basilisk. No matter what you do, either the Basilisk or the anti-Basilisk will torture you for eternity.

This is the same problem as with other religions: worship the wrong God and you end up in hell.

41

engels 11.26.23 at 11:26 am

One thing I’ve never understood is why worrying about an asteroid hitting the Earth or robots killing everybody is even considered to be “altruism”. Like, that’s going to suck for EAs themselves I’d have thought?

42

John Holbo 11.26.23 at 2:04 pm

The paradox of EA, in my opinion, is it’s a true ethical philosophy (probably) that, on average, is only readily subscribed to, in practice, by someone who is subscribing to it wrongly, i.e. if it is sending them on some ego-trip. For example, it is very hard to care about the billions of potential beings that might live in our light cone if we survive the next 100 years and colonise space, etc. But if you have delusions of grandeur that it might fall to your lot to save the future from anti-human wokesters and bugmen by wisely spending your billions on whatever, then you can fight for tax cuts and freedom of your industry from regulation in the moral confidence that you are saving the future. You are Hari Seldon or some Heinlein-style protagonist. EA turns into a hero’s journey fantasy (cf. Andreessen’s recent techno-optimist manifesto).

43

steven t johnson 11.26.23 at 2:07 pm

Alex SL@36 asks a rhetorical question about how people complain about incompetence in government even when the government is carrying out bad policies. But as sometimes happens there is an answer: The objection is to government, i.e., taxes, in principle, not to the consequences of bad or inadequate policies. So far as I can tell, the underlying animus in EA is against government, i.e., taxes, not against any particular policy. As the wealthy and their academia move inexorably further and faster to the right, pursuing the objection to government, i.e., taxes, like a dog chasing its tail, the university’s version of the Overton Window moves with them, bringing into view ever more varieties of right-wing thought. They are freshly bizarre and have none of the comforting familiarity of old inanities too numerous to name. It’s the intellectual version of the “New religions are crazy but old religions meet an innate need of the people, hence must be respected” approach.

I suppose at some point somebody will make a reputation showing with great learning, penetrating acuity and scholastic style that EA and AI are not really related, a revelation that will earn lasting éclat.

44

SusanC 11.26.23 at 3:19 pm

@steven t johnson

… but the AI doomer part of EA is in favour of government regulation, particularly government regulation of AI.

And some of the strongest opponents of EA are totally opposed to any government regulation of anything, and oppose the EAs for trying to regulate AI.

45

Alex SL 11.26.23 at 8:37 pm

Fake Dave,

That seems very odd to me, because everybody who cares about altruism and charity would be in favour of doing it effectively by default. If you offered an altruist who is not an EA a binary choice between two options for the use of their donation, save the life of a child or help one family not go hungry for a day, how many people would seriously pick the second option? The issue was never that people want to be ineffective or that altruism becomes effective merely by slapping that label on, it is that this binary choice doesn’t exist outside of armchair reasoning. There are usually enough resources to buy malaria nets AND to provide that family with food, and it would be downright idiotic if every human who makes a charitable donation gave it to the exact same one cause at the same time, neglecting every other thing that would improve the world.

steven t johnson,

I agree about EA being very attractive to libertarian anti-government ideologues, but my point was more about cases like people on the centre-left, who aren’t anti-government, complaining about the incompetence of Trump, Johnson, Bolsonaro, etc: do you want to swap one of them for a very competent far-right populist, who is therefore able to implement more of his far-right policies and also to win another election? No? Exactly – one would logically wish for a driver whose aim it is to drive us against a wall to be so incompetent that they miss the wall.

46

KT2 11.26.23 at 10:34 pm

Matt Bailey @20 & Dr. Hilarius @25
“I was unfamiliar with Roko’s Basilisk (and Yukowsky taking it seriously)”

Basilisk (chimeric) I’m intimately familiar with due to my school.** Roko’s, no.

Yet it seems Yudkowsky is much maligned:
Eliezer Yudkowsky:
“… Absolute statements are very hard to make, especially about the real world, because 0 and 1 are not probabilities any more than infinity is in the reals, but modulo that disclaimer, a Friendly AI torturing people who didn’t help it exist has probability ~0, nor did I ever say otherwise. If that were a thing I expected to happen given some particular design, which it never was, then I would just build a different AI instead—what kind of monster or idiot do people take me for? Furthermore, the Newcomblike decision theories that are one of my major innovations say that rational agents ignore blackmail threats (and meta-blackmail threats and so on). It’s clear that removing Roko’s post was a huge mistake on my part, and an incredibly costly way for me to learn that deleting a stupid idea is treated by people as if you had literally said out loud that you believe it, but Roko being right was never something I endorsed, nor stated. Now consider this carefully: If what I just said was true, do you think that an Internet hater would care, once they had a juicy bit of hate to get their teeth into?

“There is a lot of hate on the Internet for HPMOR. Do you think the average hater cares deeply about making sure that their accusations are true? No? Then exercise the same care when you see “Eliezer Yudkowsky believes that…” or “Eliezer Yudkowsky said that…” as when you see a quote “All forms of existence are energy” attributed to Albert Einstein. I have seen many, many false statements along these lines, though thankfully more from haters than friends (my friends, I am proud to boast, care a lot more about precision and accuracy in things like quotes). Don’t believe everything you read.

“Now a request from the author: Please stop here and get this material off this subreddit. This is a huge mistake I made, I find it extremely painful to read about and more painful that people believe the hate without skepticism, and if my brain starts to think that this is going to be shoved in my face now and then if I read here, I’ll probably go elsewhere.”
Thread: “Some strangely vehement criticism of HPMOR on a reddit thread today”
https://web.archive.org/web/20140626092902/http://www.reddit.com/r/HPMOR/comments/28yjbx/some_strangely_vehement_criticism_of_hpmor_on_a/cifxmzb

I am not a fact checker. I assume EY is the real EY.

What amazes me is that by bringing up Roko’s Basilisk, CT is providing a training set for the next misinformed AI.

** A chimeric basilisk is still the emblem of my theocratic, militaristic, single sex – males only! – school: a training ground for bullies, captains of industry and finance, political insiders, mansplainers, and the entitled powerful who are ignorant of reality. Straight out of Abrahamic imagery.
Keep on training. Did I mention I loathed my school by yr 10?

JQ’s eminently sensible question – “… policy on the basis that it will help poor people it is always worth asking: wouldn’t it be better to just send money?” – is not only never asked; due to the training (theocratic militaristic schooling), it cannot even be IMAGINED. Yet they can, and do, imagine a chimeric basilisk fondly, replete with bounded religiosity.

Trained Truth is stranger than AIs or EA or Roko’s Basilisk. Trained Truth’s bounded ‘rationality’ is still here now, and made worse by billionaires and chimeric manifestos.

47

Tim Sommers 11.26.23 at 10:40 pm

As an ethics professor who has read a bunch on this and even written a little, I am still scratching my head.
John Q, for example: the bit about the soup kitchen is spot-on, but I can’t see where you get the bit about EA favoring cash transfers at all. No. I don’t think so. Maybe, in Australia? But I like the argument. You should start a movement.
Long-termism? Faddish and dangerous, sure, but essential?
Here’s my take for what it is worth.
EA just started out as applied act utilitarianism. But they had an interesting strategy.
When criticized, instead of pushing back, they would (or would pretend to) adopt the criticism. Most important example: when criticized for neglecting institutional change, they started trying to calculate it into their estimates. Which is good, I guess, but now you have a moving target. Maybe it’s my bias as a nonutilitarian, but the good-faith failing of EA to me is just this. You can’t meaningfully estimate so much of what they hope to estimate and (worse) if you throw out any limitations on what you can do that aren’t quantifiable, you can justify whatever you want to do by fiddling with your estimates. Here’s a tell in. Why do they even need a name for it? Why didn’t they just say we help you pick reliable charities to invest in?

48

J-D 11.27.23 at 1:34 am

I don’t think anything calling itself effective altruism could ever be banal or uncontroversial. The label itself implies that altruism without qualification is likely to be ineffective …

It depends on what you mean by ‘likely’.

Obviously the use of the expression ‘effective altruism’ suggests a contrast with ‘ineffective altruism’, which in turn suggests that the probability of altruism being ineffective is high enough to be worth taking into account. However, there’s a big difference between ‘having a probability large enough to be worth taking into account’ and ‘being more likely than not’.

Of course, the fact that we have the expression ‘being more likely than not’ is itself an indication that ‘likely’ by itself doesn’t always mean that, but it can mean that.

So if somebody refers to ‘effective altruism’, yes, it does mean they think that ineffective altruism is likely in the weak sense (probable enough to be worth taking into account), but it doesn’t have to mean that they think ineffective altruism is likely in the strong sense (more likely than not).

49

Jerry Vinokurov 11.27.23 at 2:19 am

When it comes to the capital EA movement, we’re better off looking to people such as Will MacAskill

Will MacAskill expended considerable effort to provide intellectual cover for the FTX fraud machine and then, when the machine exploded, he made a very surprised Pikachu face and fainted on his couch. He might have started with some good ideas but he turned into nothing more than a court philosopher for billionaires.

50

Fake Dave 11.27.23 at 3:16 am

“… but the AI doomer part of EA is in favour of government regulation, particularly government regulation of AI.”

Bankman-Fried, who checks most of the sociopath boxes, had a whole song and dance going, appearing before Congress and calling for responsible regulation while he was engaged in serious lawbreaking. He also donated heavily to both political parties (while only advertising his donations to Democrats to, in his words, keep the “liberal” media on his side). The point wasn’t that he wanted to play by the rules, but rather that he recognized that regulation was inevitable and he wanted to be in the room so that he could influence things from the start. Sam Altman has done something similar with his world tour warning about the future AI apocalypse while he opposes most attempts (external and internal) to regulate his own irresponsible company and dangerous product. I don’t know if he’s exactly like SBF, but the point again seems to be publicity, influence peddling, and regulatory capture, not defending the public from people like himself.

51

Moz in Oz 11.27.23 at 5:31 am

One of the things I took from Singer is that the altruism you can sustain is more important than the perfect altruism. There’s a whole lot of “I know I should but I can’t” in the world. This also applies to the altruism you choose, if you’re antisocial or anorexic then volunteering in a soup kitchen is a poor choice even if it is the most effective one you have available.

Charles Stross has also blogged about billionaires mistaking dystopian science fiction for utopian or just for prediction: https://www.antipope.org/charlie/blog-static/2023/11/dont-create-the-torment-nexus.html

The actual EA movement seems to have taken my critique of emissions offsets via the fictional equivalent “murder offsets” and run with it. Do whatever horrible, evil, unethical things you want now and use some small part of the proceeds to buy absolution. In some cases they’re quite literally killing people while claiming to have “everyone’s” best interests at heart (Tesla self-driving cars for example, but also the “death by defrauding” that is barely even illegal because so few societies are willing to take white collar crime seriously)

Tim Sommers asks “Why didn’t they just say we help you pick reliable charities to invest in?”

I think the answer is that “they” did, just not the same “they” as run round calling themselves effective altruists. There are quite a number of groups producing rankings of charities by various measures. A web search like this should turn up ones in your area: https://duckduckgo.com/?t=ffab&q=charity+ranking+by+effectiveness

52

Tim Sommers 11.27.23 at 1:09 pm

“Why didn’t they just say we help you pick reliable charities to invest in?”
Sorry to nitpick Moz, but that was a rhetorical question. And the intro to it was “Here’s a tell.”

53

steven t johnson 11.27.23 at 2:07 pm

SusanC@44 Yes, there are purists who claim they reject using taxes to fund government repression/regulation. Somehow these people in recent centuries never cut military budgets. Occasionally something like cutting the IRS’s criminal enforcement budget happens, but more often police and prison budgets are as sacred as military budgets. I omitted them from consideration, yes.

Alex SL@45 Partly it’s a response similar to mine to SusanC: are there enough of these people to require consideration? In particular, I’m rather inclined to think that “centre-left” is never actually left but moderate conservative still committed to aspects of the New Deal—which is coming up on a century old, possibly hoary enough to count as tradition?—as well as the panaceas of elections, civility and the golden mean in all things. Besides, the “centre” is there to signal their commitment to the status quo in direct opposition to the left.

The other part though is that incompetence always exacts a price. In the case of Trump, his demented determination to politicize Covid as Chinese flu and all that other rot contributed to Covid getting loose and killing hundreds of thousands of people. So yeah, “incompetence” is always to be feared in the pilots of planes, the drivers of tanker trucks, the commanders of missile batteries and Presidents. Also, in the case of Trump, it is remarkable how competent he was in tax relief for the wealthy. And he is far more competent in running in this phase of the primaries, where the support from the mass media’s owners is still his and his allegedly competent competitors are losing the big donor primaries. (The official primaries are rather like auditions to see who is apt to be the most effective sales manager for the generally unpopular policies the owners want.)

54

Trader Joe 11.27.23 at 8:49 pm

+1 Peter @13
+1 Alan @22

My idea of EA is if there is a guy ringing a bell with a pot to throw money into, it’s better to throw a bit right there in that moment than to go home, research the related cause for two hours, conclude I’d be better off buying malaria nets, and then never send any money at all. At the scale of 99.9% of all individuals, Alan’s approach (and Peter’s) is closest to the pin as far as being effective, in the moment and virtuous to both recipient (and donor).

As a professional trader each year I select at least one investment and buy it with the view I’ll give all of the return to a charity I select. If the investment goes up, they benefit from my skill (and I get a double tax benefit under US code). If it goes down I give the original principal anyway. That’s as close as I come to effective altruism in practice but I’m at a loss as to whether that is EA or not.

55

Alex SL 11.27.23 at 9:39 pm

steven t johnson,

Point taken about whether there is actually a left left anyway. But regarding people like Trump, I stand by it: the real danger to US democracy is somebody who has the same political views as Trump but is less cowardly, narcissistic, and short-sighted. Somebody who, instead of thinking on every occasion, what do I have to blurt out to get out of this situation with my self-image intact?, thinks, okay, I have four years now to strategically make sure I cannot lose the next five presidential elections, what exactly do I need to do and in what order? What alliances do I need to build and maintain instead of constantly throwing people under the bus so that nobody who is competent wants to work with me anymore?

56

KT2 11.27.23 at 10:34 pm

JQ asks “wouldn’t it be better to just send money?”
LA Food Bank agrees.
(Why 900,000 individuals in LA seek food assistance is another post! LA population estimate (2022): 3,819,538, per Wikipedia – or 23%!?)

96% of cash goes direct to food: “those same funds provide 100 meals – 85 more meals than purchasing the canned items”… “Donate
“Your gift counts TWICE as much this…
“The high cost of everyday goods hurts everyone. Around 900,000 individuals seek food assistance from the Los Angeles Regional Food Bank and our partners each month. You can help provide meals for those struggling with hunger in our community by making your generous gift today.

“Give now and the first $200,000 will be matched by an anonymous donor.
380,000 Meals”

Via;
“Why Cash Donations are So Valuable to Food Banks
Published on June 30, 2022

“While food donations and donated time are greatly appreciated, the Food Bank also relies on monetary donations, which provide the greatest flexibility in the fight against hunger.

“Suppose you want to help the Food Bank, and you decide to go to the store to purchase 15 cans of soup, and you spend about $25. If the cans of soup are 10 ounces, that’d be less than the 1.2 pounds that equal a meal according to Feeding America, but for this example, let’s say one can is one meal, so for $25, you provided 15 meals.

“Now let’s say you donated $25 to the Food Bank. Because that money is combined with donated food and donated labor, those same funds provide 100 meals – 85 more meals than purchasing the canned items and bringing them to the Food Bank.

https://www.lafoodbank.org/stories/cash-donations-are-most-valuable-way-help/

Via “Giving locally is key. I support our area food shelf year-round, with an extra gift for Thanksgiving and the December holiday; giving money instead of food is best.”
https://kottke.org/23/11/the-2023-kottke-holiday-gift-guide

57

Moz in Oz 11.27.23 at 10:50 pm

Tim, you actually said “Here’s a tell in. ” Not to nitpick or anything.

And I’m sorry, but your question doesn’t make sense to me as a rhetorical question “why don’t ridiculously rich people focus on helping poorer people give more effectively”… the naive explanation that they can get more money where they want it more easily by just writing a cheque doesn’t seem to have any obvious flaws.

The deeper explanation that they’re personally justifying evildoing now on the basis of possible future good also seems plausible. It’s what they say they’re doing, and their evil behaviour matches that part of the plan. Saying “they lie like Eichmann”… yes, and they’re being found guilty just as he was, although they seem unlikely to be executed.

What am I missing? Is your point that they can skim money off or divert public interest via fake “rate my charity” efforts? Or that they could just say they help and not do anything? Or that if they’d been smarter they could have got other people to commit the crimes while they got credit for the charitable giving that resulted? Maybe that’s what really happened, and we should be looking for their inspiration (Singeeer in an angry teacher voice?)

58

Tim Sommers 11.28.23 at 1:20 am

“…the real danger to US democracy is somebody who has the same political views as Trump but is less cowardly, narcissistic, and short-sighted.”

You see that out-of-control car careening towards us at high speed? You might think that that car is the real danger to us, but the real danger is a purely hypothetical bus that might one day careen towards us while being larger than a car.

59

Tim Sommers 11.28.23 at 2:00 pm

“Tim, you actually said “Here’s a tell in. ” Not to nitpick or anything.”
True. That was a typo. Drop the “in.”
You:
“I’m sorry, but your question doesn’t make sense to me as a rhetorical question “why don’t ridiculously rich people focus on helping poorer people give more effectively””
Yeah. That wasn’t my question. I blame myself since, as far as I can tell, nothing you say has anything to do with what I was talking about. Here it is again minus typo.
“Here’s a tell. Why do they even need a name for it? Why didn’t they just say we help you pick reliable charities to invest in?”
In other words, you can have a discussion about effective charitable giving and the other issues without giving yourselves a team name, a mascot, and a flag. Or even some specific label to demarcate the people that are on your “side.” My super subtle or befuddled rhetorical question was meant to suggest that the people involved self-consciously sought to present themselves as a movement.
It’s hard to get tone sometimes on a comment thread, so just to be clear. I really did think I was nitpicking, since I just don’t think there was much overlap in what we were talking about. And I don’t blame you for not seeing the question as rhetorical. I just didn’t want to be put on a side in a debate where my only point was, “Don’t you think it’s revealing that some people thought it was important that they have a new name for giving to charity?”

60

Fake Dave 11.29.23 at 4:38 am

“(Why 900,000 individuals in LA seek food assistance is another post! LA population estimate (2022): 3,819,538, per Wikipedia – or 23%!?)”

Their map covers most of Los Angeles County, which has a population of about 10 million. The LA metropolitan area is even bigger and they probably do draw people from outlying rural areas. The way LA sprawls, people often don’t realize how many distinct municipalities there are. They sort of blur together unless you live in one. Long Beach has about half a million people and there are a couple dozen other cities with over 100k within an hour’s drive of LAX.

Comments on this entry are closed.