On MacAskill’s *What We Owe the Future*, Part 1

by Eric Schliesser on November 24, 2022

The effect of such extreme climate change is difficult to predict. We just do not know what the world would be like if it were more than seven degrees warmer; most research has focused on the impact of less than five degrees. Warming of seven to ten degrees would do enormous harm to countries in the tropics, with many poor agrarian countries being hit by severe heat stress and drought. Since these countries have contributed the least to climate change, this would be a colossal injustice.
But it’s hard to see how even this could lead directly to civilisational collapse. For example, one pressing concern about climate change is the effect it might have on agriculture. Although climate change would be bad for agriculture in the tropics, there is scope for adaptation, temperate regions would not be as badly damaged, and frozen land would be freed up at higher latitudes. There is a similar picture for heat stress. Outdoor labour would become increasingly difficult in the tropics because of heat stress, which would be disastrous for hotter and poorer countries with limited adaptive capacity. But richer countries would be able to adapt, and temperate regions would emerge relatively unscathed.–William MacAskill (2022), *What We Owe the Future*, "Chapter 6: Collapse," p. 136.

Two ground-rules about what follows:

  1. I ignore all the good non-longtermist effective altruism (EA) has done. It’s mostly wonderful stuff, and no cynicism about it is warranted.
  2. I ignore MacAskill’s association with SBF/FTX. I have said what I want to say about it (here), although if any longtermists associated with the EA movement come to comment here, I hope they remember that the EA community directly benefitted from fraud (and that there is an interesting question to what degree it was facilitated by the relentless mutual backscratching of the intellectual side of the EA community and SBF); and perhaps focus on helping the victims of SBF.
  • Perhaps, for some consequentialists, (1) and (2) cancel each other out?

Anyway, after my post on MacAskill’s twitter thread (here) and my post on the concluding pages of Parfit’s Reasons and Persons (here), I was told by numerous people that I ought to read MacAskill’s What We Owe the Future. And while I am going to be rather critical in what follows (and subsequent posts), I want to note a few important caveats: first, MacAskill is asking very interesting social questions, and draws on a wide range of examples (also historically far apart). I am happy this is a possible future for philosophy today. Second, he is an engaging writer. Third, What We Owe the Future is — as the first and last chapter make clear — quite explicitly intended as a contribution to movement building, and that means that the standards of evaluation cannot be (say) identical to what one might expect in a journal article. In a future post, I’ll have something to say about the relationship between public philosophy and movement building, but in this post I will be silent on it. Fourth, if you are looking for a philosophically stimulating review of What We Owe the Future, I warmly recommend Peter Wolfendale’s essay for a general overview (here). If you are especially interested in objections to the axiology, I warmly recommend Kieran Setiya’s piece in Boston Review (here). It’s also worth re-reading Amia Srinivasan’s high profile, prescient critique of MacAskill’s earlier work (here).*

In today’s post, I offer two (kinds of) criticisms of What We Owe the Future. First, I discuss its cavalier attitude toward injustice. This criticism will be extrinsic to MacAskill’s own project. Second, I argue it treats a whole number of existential risks as uncorrelated which are almost certainly correlated. (This I consider an intrinsic problem.) And this exhibits two kinds of lacunae at the heart of his approach: (a) his lack of theoretical interest in political institutions and the nature of international political coordination; (b) the absence of a disciplining social theory (or models) that can help evaluate the empirical data and integrate them. (That is, lurking in this second criticism is the charge of cherry-picking data and relentlessly privileging some measures/inputs rather than being transparent about these being one of many such measures/inputs, a charge I will develop over subsequent posts.) I have argued elsewhere that all integrative or synthetic philosophy requires such models, so here I illustrate what happens absent such a synthetic glue.

So much for setup.

The quoted paragraph is literally the only time MacAskill confronts injustice in the book. To his credit, he notices that climate change is generating what he rightly calls a “colossal injustice.” The countries and peoples least responsible for climate change shoulder most of its downside risks and burdens. I would emphasize more than he does that this will have foreseeable consequences in such places: civil war, famine, flooding, and the emigration of populations, including skilled workers who may find refuge elsewhere.

While MacAskill treats the possibility of colossal injustice as something in the future, these unjust effects of climate change are already visible in the world today, often exacerbated by imperfect political institutions and/or irresponsible social elites (arguably the case in Sri Lanka and Pakistan). Elsewhere in the book, MacAskill notes that 15% of the world’s adult population (!) wants to move, but he does not connect it to the unfolding climate crisis (see p. 101; while the passage is descriptive, I read its implicature as a defense of the “quality of life” in “rich liberal democracies.” These are the VERY countries responsible for the colossal injustice he diagnoses two chapters later!)+

Now, one would think that the diagnosis of colossal injustice would motivate, say, discussion of reparations or, less ambitiously, mitigation and prevention. But as the quoted paragraph shows, that’s not MacAskill’s route. Because he is so concerned with evaluating the risk of civilizational collapse, the significance of this colossal injustice never quite gets internalized or further developed in his approach. And while climate change is by no means ignored in the book (MacAskill is, for example, a proponent of decarbonization, a topic that recurs regularly), climate justice is ignored except in this quoted passage.

The lack of further focus on this colossal injustice is due to three features of his longtermist population ethics approach (two of which are philosophical in character). First, the nearly infinite number of possible future people simply dwarfs the interests of the living, as Srinivasan already noted (“the expected value of preventing an x-risk dwarfs the value of, say, curing cancer or preventing genocide.“). To be sure, this is not MacAskill’s intent; in the book one can find plenty of statements that one should not sacrifice the interests of present generations to an uncertain future. But there is a clear distinction between not actively sacrificing present interests and actively undoing harms to present generations. On his approach there is simply no reason to privilege attention to (say) intersectionally vulnerable populations. Second, MacAskill’s relentlessly forward-looking longtermism doesn’t really know what to do with the past (and the possible colossal injustice he diagnoses is, in part, the effect of imperialism and colonialism, and, in part, the effect of industrialization). From the vantage point of maximizing expected value, compensation or reparations for unfolding injustices among relatively poor or unskilled populations is simply an inefficient use of money. (Again, this point is not original with me; it echoes the complaints of those who worried about the effects of, say, trading emission rights on the poor without a seat at the technocratic negotiating table.) Third, MacAskill’s implied ‘we’ (and I would bet ‘we’ is one of the most used words in the book), which often seems to speak for all of humanity, clearly does not include the victims of this injustice (this is made explicit when he addresses who his likely readers are on p. 194).

Okay, let me turn to the second issue I promised to discuss in this post. On the very same page that I have quoted above, MacAskill writes,

There is a substantial chance that our decarbonisation efforts will get stuck. First, limited progress on decarbonisation is exacerbated by the risk of a breakdown in international coordination, which could happen because of rising military tensions between the major economies in the world….Decarbonisation is a truly global problem: even if most regions stop emitting, emissions could continue for a long time if one region decides not to cooperate. Second, the risk of prolonged technological stagnation, which I discuss in the next chapter, would increase the risk that we do not develop the technology needed to fully decarbonise. These are not outlandish risks; I would put both risks at around one in three. (136)

The underlying concerns strike me as very apt (even if I would like to see more transparency about how MacAskill arrives at his probabilities; the endnotes often send you to a website, and the calculations are then buried in cited papers). But it is odd to treat the risk of a breakdown in international coordination and the risk of technological stagnation as independent of each other. In fact, if the breakdown of international coordination were to lead to world war, then decarbonization is off the table not just because war is a truly great source of carbon emissions, but also because it (de facto) ends international coordination.
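To make the stakes of the independence assumption concrete, here is a back-of-the-envelope sketch. The one-in-three figures are MacAskill’s (p. 136); the arithmetic and the correlation bound are my own illustration, not anything in the book:

```python
# Illustrative sketch only: MacAskill puts each risk at roughly 1/3.
# Treating the risks as independent understates the joint risk if,
# as argued above, they are positively correlated.
p_coord = 1 / 3  # breakdown of international coordination
p_stag = 1 / 3   # prolonged technological stagnation

# Under the independence assumption:
p_both_indep = p_coord * p_stag                    # about 0.11
p_either_indep = 1 - (1 - p_coord) * (1 - p_stag)  # about 0.56

# Under full positive correlation (the risks move together), the
# joint probability can rise all the way to min(p_coord, p_stag):
p_both_max = min(p_coord, p_stag)                  # about 0.33

print(f"P(both), independence assumed:   {p_both_indep:.2f}")
print(f"P(both), fully correlated bound: {p_both_max:.2f}")
```

On the independence assumption the joint risk is about one in nine; if the risks move together, it can be as high as one in three, roughly three times larger. That is the difference correlation makes to the headline numbers.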

In fact, it is odd that MacAskill misses this point, because earlier he recognizes “that great power war also increases the risk of a host of other risks to civilization.” (p. 116) He reports “a chance of a third world war by 2050 at 23%” and adds that if the “annual risk stayed the same for the following fifty years, this would mean another world war before the end of the century is more likely than not.” (p. 116) If that’s right, then the numbers reported for “engineered pandemics” (risk 1% [p. 113]) and technological stagnation (as well as fossil fuel depletion [which MacAskill wants to prevent]) all seem rather optimistic, because in reality they represent correlated or systemic risks.
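For what it is worth, the compounding claim reported here does check out arithmetically. A quick sketch of my own reconstruction, which assumes a constant annual probability and roughly 28 years from now to 2050 (neither assumption is stated in exactly this form in the book):

```python
# Back out the constant annual risk implied by a 23% cumulative
# chance of a third world war by 2050, then compound it to 2100.
p_by_2050 = 0.23
years_to_2050 = 28   # assumption: roughly 2022 through 2050
years_to_2100 = 78   # assumption: roughly 2022 through 2100

# Annual probability p such that 1 - (1 - p)^28 = 0.23
annual = 1 - (1 - p_by_2050) ** (1 / years_to_2050)

# Cumulative probability of at least one world war by 2100
p_by_2100 = 1 - (1 - annual) ** years_to_2100

print(f"implied annual risk:     {annual:.2%}")
print(f"cumulative risk by 2100: {p_by_2100:.1%}")
```

An implied annual risk of just under one percent does indeed compound to slightly better than even odds by century’s end, which is the claim the text reports.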

While MacAskill is highly interested in great power war (pp. 114-116), he is curiously uninterested in theorizing explicitly about great power politics in the context of international institutions, despite these being the causal source of the main factor in the probabilities he bandies about throughout the book. Throughout his argument, he tacitly black-boxes what he calls “the international system,” “international cooperation,” “international coordination,” and “international norms.” (Obviously, he could claim that great power politics is independent of international institutions and shaped by the interactions of a small number of elite actors; he hints at this in his historical examples, but it is not developed in his future-oriented chapters.) And so, somewhat curiously, a book devoted to building a social movement and changing values leaves under-theorized the main social factor that will determine (by its own lights) whether that movement has a future at all.**

To be continued.




*And re-reading Srinivasan’s review after reading What We Owe the Future generates the unsettling feeling that MacAskill simply ignores objections that he can’t turn into a feature of his approach.

+As regular readers know, I can’t be accused of excessive criticism of liberal democracy.

** In my next post, I’ll suggest that a related problem also shows up in non-trivial ways in MacAskill’s potted history of abolitionism (which is the social movement that is the main rhetorical model or template for longtermism).



Oscar the Grouch 11.24.22 at 6:04 pm

Why is it worth it to spend time on this book or this view?

If our time is a precious resource both for our lives and for the other things we could be doing, why is this book worth reading or talking about?

Yet, people will read and talk about it a lot even though they may see it isn’t contributing to giving them any practical guidance or moving our understanding about the problems it takes up.


Maybe it’s because other people are talking about it, or perhaps they have hopes there’s something worthwhile there.

I can’t blame anyone. Maybe they have persuaded themselves of its significance. But life is short and there’s a lot to be done and I wish we weren’t so often bogged down in conversations about someone’s clever spin that take us nowhere. I was hoping to leave it to the young to become serious again given what they’re facing but I suppose it’s up to us to make sure the young who get a chance to be heard are the type who will understand what’s at stake, since we’re the gatekeepers after all.

I appreciate the review, by the way. My comment is not directed at the post. The situation is such that books like this need such reviews.

Apologies for typos, I am old and need new glasses. As well as attitudes, as I am old and worried about what there is less time to do.


Eric Schliesser 11.24.22 at 7:33 pm

Hi Oscar, Like it or not, MacAskill is at the center of a growing and influential social movement.


Alex SL 11.24.22 at 9:54 pm

Great post addressing the central moral issue with this “longtermism”: The movement (I hesitate to call it a philosophy) is not about injustice; it doesn’t care about injustice. That injustice isn’t mentioned beyond a passing remark is no accident, because the movement is all about what they call extinction risks.

(Which notably, are not about extinction as commonly understood, but about whether humanity will colonise the universe and become transhuman. To longtermism, stagnation at a sustainable population and tech level and with perfect social justice would explicitly be an “extinction” scenario.)

In their minds, the imaginary value of an imaginary future of fantastillions of simulated minds across the galaxy so far outweighs anything happening right now that even the unnecessary suffering of billions becomes insignificant in comparison.

Apart from this being morally odious, there is also a central technical problem, i.e., that there is no good reason to believe in the possibility of their utopian future anyway, and in fact we already know enough about how physics and biology work to be extremely confident that it is impossible.

Interstellar travel is not survivable. Either you travel so fast that hitting a speck of dust annihilates your spaceship, and at any rate you would have to consume unfeasible amounts of energy to theoretically get up to that speed, so that won’t happen; or you travel so slowly that all the gas leaks out of your vessel and all technical components more complex than a floor tile stop working before you are a fifth of the way to the next star. The people who assume it is possible have read too much scifi and have no clue whatsoever about the distances involved. The distances are so huge, and the travel times involved would be so long, that the human mind is simply incapable of grasping them, and that really shows in the case of these longtermists.

Not even Mars is going to be colonised. The soil is toxic, it is too cold, there is virtually no atmosphere, radiation levels are too high, and you can’t fix any of this because having lost its magnetic field, Mars can’t keep an atmosphere even if you started creating one. Earth it is, sorry, all the rest is imaginative scifi stories.

And, although this shouldn’t have to be said: simulated minds are not minds any more than a drawing of a person is a person. Longtermism in this MacAskill sense is based on layers and layers of magical thinking.

The flip side of this is MacAskill’s and the entire movement’s bizarre estimation of risks. Their greatest fears – “biorisk” and “hostile AI” – are, again, based on magical thinking. The logic is always further technological progress -> ??? -> human extinction, where the ??? part is, once you start questioning them about it, something that is known to be biologically or physically impossible. My latest interaction on this revealed the scenario of an AI designing an organism that is entirely protein based but somehow able to reproduce, and also 100% deadly to humans but with sufficient delay to spread before that happens, and getting humans to build and release it for them. There are several steps hidden in that scenario that do not seem biologically plausible at all. Their counter is that the AI will be so very smart that it will be able to do things we know to be impossible, because it is just so smart.

On the other hand, they refuse to appreciate the risk of cascading collapse posed by climate and ecological catastrophe and only mention it at all so that they can claim to have taken it into account when challenged. The quoted paragraphs at the top of the post are extremely revealing: no, richer countries would not be able to adapt, they would collapse into a dark age with a fraction of their previous population levels and at best early middle age technology, and temperate regions would not emerge unscathed but turn into regions that aren’t temperate anymore. Also, all currently coastal population centres would be under water, displacing and immiserating most of the population of rich/temperate countries even if they could survive the collapse of global food supplies.

MacAskill seems to understand nothing about climate and even less about human society. You can’t look at a future where billions of people in poorer countries will die of starvation, wars, and disease and/or migrate to already highly stressed wealthier nations in their hundreds of millions and then expect those wealthier nations to “adapt”. Five to seven degrees within a few centuries means complete collapse of global civilisation, because complex technological societies are complex, and complex is another word for fragile.

But based on other quotes I have seen, MacAskill seems to think that even 15 degrees, which is much hotter than even the early Tertiary, would be survivable and still allow agriculture in “many” parts of the world. All of this shows that he should not be taken seriously on futurology and risk, full stop. Given that he is all about future risks, his wilful ignorance on these issues is exactly as disqualifying as if somebody published a book on astrophysics and then revealed that they believe in a flat earth, or as if you plan a surgery with your doctor and then learn that they believe the function of the brain is to cool the blood.

intended as a contribution to movement building, and that means that the standards of evaluation cannot be (say) identical to what one might expect in a journal article

Bit puzzled by this, though. What does it mean if not, “you are allowed to talk nonsense while you are trying to convince people to join your sect?”


J-D 11.24.22 at 11:33 pm

Hi Oscar, Like it or not, MacAskill is at the center of a growing and influential social movement.

Being at the centre of a growing and influential social movement doesn’t always make books worth reading.


Moz in Oz 11.25.22 at 1:53 am

The longtermist nonsense seems to miss one key step in the process: first you have to survive until the long term. That makes what’s happening right now not just important, but critical. In that context selfish white men in London are disposable, especially when their goal seems to be “first we make things worse, much worse, then eventually the triumph of the will!”

I think great power war is not the primary war risk. As we see in Ukraine and Syria, the great powers are fairly good at playing silly buggers with other people’s lives while not risking too much of their own, or a global war.

I’m more concerned about the militarised arguments over water and agricultural land that seem to be ramping up. And the use of fences-as-genocide that we’re seeing for non-white Ukrainians and anyone trying to get from Africa to Europe. I don’t see any philosophical or political reason why we’d stop slaughtering refugees in whatever numbers they present themselves. That’s ramping up around the world, with Australia too often “leading” the way. Just because Europe can kill off the first billion refugees doesn’t mean the numbers won’t eventually overwhelm them.

In another context I’ve plotted out how the rich countries would deal with India-is shipping refugees wholesale in containers, for example. “Tomorrow, When the War Began” style. It’s quite practical to ship “refrigerated” (ie, air conditioned) boxes full of people in very large numbers, and that would be hard to detect if done by a nation-state. Sure, once a million Bangladeshis step out of the containers in Sydney people will notice, but now what do you do? Machine guns in Port Botany?


J-D 11.25.22 at 4:45 am

Which notably, are not about extinction as commonly understood, but about whether humanity will colonise the universe and become transhuman. To longtermism, stagnation at a sustainable population and tech level and with perfect social justice would explicitly be an “extinction” scenario.

Even if humanity colonised the universe and became transhuman, its ultimate fate would still be extinction. Nothing lasts forever or, if you prefer, ‘decay is inherent in all complex beings’, or, if you prefer, ‘in the long run, we are all dead’–meaning not only that every individual human will die, but also that the species will become extinct and the universe devoid of life.


novakant 11.25.22 at 10:27 am

The effect of such extreme climate change is difficult to predict. We just do not know what the world would be like if it were more than seven degrees warmer; (…) it’s hard to see how even this could lead directly to civilisational collapse.

I wanted to stop reading there – why bother with such nonsense – but I’m glad I went on to read your very thoughtful takedown, thank you.

The trouble is that we have seen a shift from outright climate change denialism (not many are seriously promoting this anymore) to blaming the individual, to promoting “solutions” that are none (biofuels, carbon capture etc.) and greenwashing, and finally to “adaptation”.

This is all part of a strategy to prevent or slow down real change and has been an ongoing PR effort since Exxon scientists et al. predicted climate change in the 70s. If this sounds slightly conspiracy-minded I urge people to read Michael E. Mann’s “The New Climate War”.

And if one needs to disabuse oneself of the notion that billionaires / big money will save the world, I suggest Jane Mayer’s “Dark Money”.


oldster 11.25.22 at 12:39 pm

“He reports “a chance of a third world war by 2050 at 23%”….”
23%, eh? Not 25%? Not 1 in 4, or less likely than even odds?
So, this guy has no understanding of spurious precision. Or was 23% just a rounding off from his highly technical determination that the chance is 23.479%?
Nothing shows me that someone is bad at numbers more quickly than this kind of spurious precision. It also tells me that they will lack the appropriate doubts about their results, because they are misled by their own numbers.
Not a good sign.


bekabot 11.25.22 at 5:32 pm

It makes me sad when I think of the ad guys for Balenciaga knocking themselves out trying to be decadent. It makes me sad because the longtermists are already so far out ahead of them that the rag sellers will never catch up.



Jonathan Goldberg 11.26.22 at 4:02 am

MacAskill thinks he’s Hari Seldon. He’s wrong.


Peter Dorman 11.26.22 at 6:10 am

@3 is devastating. I haven’t read WWOTF, but if this analysis is accurate, MacAskill is a big waste of time at best. As in “like Jordan Peterson but with even bigger donors”.


Abigail Nussbaum 11.26.22 at 8:35 am

The essence of longtermism is “it’s OK for billions of people in the Global South to die so long as people who look like me can continue living a comfortable, technologized existence, if only in small enclaves”. It’s an inherently eugenicist project – as this post notes, the word “we” is doing a lot of work. Under the guise of saving humanity from extinction, it’s narrowing the definition of humanity to include only people who share the author’s culture, class, and yes, race.

It’s also, as noted in these comments, an absurd fantasy. Inherent to the claim that humanity will “adapt” to much of the planet becoming uninhabitable is the assumption that the people who live in those places will just lie down and die. But there is no future in which the Global South is mired in starvation, resource wars, and mass refugeeism, but Global North continues unaffected. At the very least we will see the (further) rise of autocratic, supremacist, fascist regimes who promise to “protect” their borders from the brown-skinned hordes. But then, people like MacAskill tend to imagine themselves at the top of the world orders they propose, so perhaps he doesn’t see the loss of democracy and human rights as a particularly onerous one.


Alex SL 11.26.22 at 10:32 am

MacAskill thinks he’s Hari Seldon.

Wow. I hadn’t even made that connection, but that is pretty much it, isn’t it?


J-D 11.26.22 at 9:55 pm

MacAskill thinks he’s Hari Seldon. He’s wrong.

But you repeat yourself.


bekabot 11.27.22 at 12:12 am

Gangrene: “…but I didn’t do anything wrong; that limb needed to come off! I was trying to help!…”


J, not that one 11.27.22 at 2:54 pm

The quote at the start strikes me as very “everybody knows environmentalists hate people, I love people, everyone knows they think we should have less people*, I’ll say we should have more (no matter how many, more is better).” (Reading other comments, I seem to have interpreted civilizational collapse as referring to the global food system, which could be incorrect.)

Effective altruism predates MacAskill by a lot. Is he really that influential except as a spokesperson for someone else’s version?


Moz in Oz 11.28.22 at 2:58 am

{interstellar travel} or you travel so slowly that…

… you might as well stay here and build those habitats with less propulsion?

By the time we’re confident that anyone will survive the trip we’ll have so much experience with large colonies outside the orbit of Jupiter for long periods that it’s an open question as to whether we’ll even be human (the difference between us and Neanderthals or Denisovans is only ~1M years, and we’re interfertile). Yes, I’m pessimistic about the timeframes for making this work (we need portable nuclear fusion or some similarly efficient power generation, but also long-term political stability… having a revolution after 1000 years on a spacecraft that’s still 1000 years from the next star system is likely to be fatal).

Worth noting that Musk isn’t the first to propose one-way trips to form colonies on nearby planets, but he is currently one of the more enthusiastic. Notably he doesn’t currently have a working prototype somewhere more hospitable like Antarctica or Atacama. So the “one way” nature of the trip is likely to manifest very quickly (overwintering in Antarctica is hard, despite regular emergency flights… recall the person who DIY’d their appendectomy).
