On What We Owe the Future, part 6

by Eric Schliesser on March 3, 2023

This is my sixth post on MacAskill’s What We Owe the Future. (The first is here; the second here; the third here; the fourth here; the fifth here; and this post on a passage in Parfit here.) I paused the series in the middle of January because most of my remaining objections to the project involve either how to think about genuine uncertainty or disagreements in meta-ethics that are mostly familiar already to specialists and that probably won’t be of much wider interest. I was also uneasy with a growing sense that longtermists don’t seem to grasp the nature of the hostility they provoke, alongside (simultaneously) the recurring refrain on their part that the critics don’t understand them.

In what follows, I diagnose this hostility by way of this passage in Kukathas’ (2003) The liberal archipelago (unrelated to Effective Altruism (hereafter: EA) and longtermism), which triggered this post:

In rejecting the understanding of human interests offered by Kymlicka and other contemporary liberal writers such as Rawls, then, I am asserting that while we have an interest in not being compelled to live the kind of life we cannot abide, this does not translate into an interest in living the chosen life. The worst fate that a person might have to endure is that he be unable to avoid acting against conscience. This means that our basic interest is not in being able to choose our ends but rather in not being forced to embrace, or become implicated, in ends we find repugnant.–Chandran Kukathas, The Liberal Archipelago: A Theory of Diversity and Freedom, p. 64.

Given my present purpose, I can allow that Kukathas is mistaken that the worst fate a person might have to endure is being unable to avoid acting against conscience. Maybe this is merely a very bad fate (consider, as Adam Smith suggests, being framed and convicted for a murder one didn’t commit; or being tortured for no good reason, etc.). All I stipulate here is that Kukathas is right that being (directly) implicated in bad ends is really very bad. This is, in fact, something that seems to be motivating longtermists and is compatible with their official views. While ‘repugnant’ is a good concept to use here, having one’s conscience violated is, in turn, a source of indignation. I think that’s fairly uncontroversial, and I don’t mean to import Kukathas’ wider political theory into the argument (although I am drawing on his sensitivity to the significance of moral disagreement).

MacAskill’s book doesn’t use, I think, the word ‘conscience.’ This is a bit surprising because the key example of successful moral entrepreneurship (his term) in the service of moral progress (again his term) is Quaker abolitionism inspired by Benjamin Lay. And Lay certainly lets conscience play a role in (say) his All Slave-keepers that Keep the Innocent in Bondage (although he is also alert to the existence of hypocritical appeals to conscience). It’s also odd because one gets the sense that MacAskill and many of his fellow-travelers are incredibly sincere in wishing to improve the world and do, in fact, have a very finely honed moral sense (and conscience) despite arguing primarily from first principles, and with fondness for expected utility, and about (potentially very distant) ends.

Now, it’s not wholly surprising, of course, given the (defeasible) orientation toward total wellbeing that MacAskill is de facto attracted to, that conscience is not high on his list. In fact, in general the needs and views of presently existing people are a drop in the bucket in his overall longtermist position. But this lack of attention to the significance of conscience also leads to a kind of (how to put it politely) social, even political, obtuseness.

Let me explain what I have in mind in light of a passage that expresses some of MacAskill’s generous sentiments. He writes,

The key issue is which values will guide the future. Those values could be narrow-minded, parochial, and unreflective. Or they could be open-minded, ecumenical, and morally exploratory. If lock-in is going to occur either way, we should push towards the latter. But transparently removing the risk of value lock-in altogether is even better. This has two benefits, both of which are extremely important from a longtermist perspective. We avoid the permanent entrenchment of flawed human values. And by assuring everyone that this outcome is off the table, we remove the pressure to get there first—thus preventing a race in which the contestants skimp on precautions against AGI takeover or resort to military force to stay ahead.

Now, MacAskill isn’t proposing anything illegal or untoward here. His good intentions (yes!) are on admirable display. But it is worth reflecting on the fact that he or the social movement he is shaping (notice that ‘we’) is presuming to act as humanity’s (partial) legislator without receiving authority or consent to do so from the living or, if that were possible, the future. And he is explicitly aware that this might well generate suspicion (which is, in part, why transparency and assurance are so important here).* One suspicion he generates is that he will promote ends and means that go against the conscience of many (consider his views on human enhancement and what is known as ‘liberal eugenics’).

So, while MacAskill is explicit on the need to preserve “a plurality of values” (in order to avoid early lock-in), that’s distinct from accepting deeply entrenched moral pluralism–which means tolerating, at minimum, closed-minded and morally risk-averse views. MacAskill does not have a theory, political or social, that registers the significance of the reality of such entrenched moral pluralism and the political and inductive risks (even backlash) for his project that follow from it. I don’t think he is alone in drifting into this problem: variants of it show up in the technical version of population ethics, in multi-generational climate ethics, and in other fundamentally technocratic approaches to longish-term public policy. That is, it is not sufficient to claim to be promoting “open-minded, ecumenical, and morally exploratory” values, or even to reject premature lock-in of “a single set of values,” if one never shows much sensitivity toward those who seriously disagree over ends and means.

In addition, to feel unseen and unacknowledged is a known source of indignation. MacAskill’s longtermism constantly flirts with a lack of interest in the needs and aspirations of those whose wellbeing it aims to promote. But even if that’s unfair or mistaken on my part, given that MacAskill really doubles down on the need to promote “desirable moral progress” and on tying the “moral principles” thereby “discovered” to a “more general worldview,” it is entirely predictable that he will advocate ends and means that many who reject such principles will find repugnant, and a source of indignation. As, say, Machiavelli and Spinoza teach, this leads to political resistance, and worse.


*Yes, you can object that the suspicion officially operates at a less elevated level (the risk of AGI value lock-in or conquest), but he is effectively describing a state of nature, or a meta-coordination problem, when it comes to dealing with certain kinds of existential risk.



Sashas 03.03.23 at 8:51 pm

How much of MacAskill’s argument relies on the notion of “values lock-in”? I ask because that part still seems so implausible to me. I am very familiar with the concept of stable social and political systems that are nonetheless quite bad. (Case in point, I live in the failed state of Wisconsin where elections for our state legislature are largely decoupled from the will of the voters. If you know of an actual way out for our state, please let me know!) However, values lock-in seems to be a much stronger statement about the future. It appears to be implying that (again – using my WI example) the WI state legislature will from now on and forever be dominated by the GOP. It’s the extrapolation to forever from as far as I can predict that gets to me.


Eric Schliesser 03.03.23 at 9:04 pm

It’s a big part of his argument. He offers both historical examples of value lock-in and worries about how AGIs can generate it.


J, not that one 03.03.23 at 10:49 pm

It seems to me there are a number of ways values lock-in could come to be assumed and unavoidable. (1) It might be impossible for people to imagine that things could be different or that people could have different beliefs. (2) It might be impossible for people to imagine that different beliefs could be publicly tolerated. (3) People might believe that all difference of belief is superficial or deceptive — that everyone really believes the same thing deep down. (4) People might believe that all dissent is really a complete lack of belief, because there’s really only one actual belief system.

But given that things do change, people do have different beliefs, some degree of dissent is almost always permitted explicitly and can almost never be completely suppressed, . . . MacAskill’s way of formulating the problem (if it is the same problem) seems strange.

But that’s a kind of values lock-in I’d be interested in reading about. And MacAskill sounds like he’s interested in a different kind. He’s interested in a kind of meta-tolerance that would get impatient if too many people were too tolerant, because maybe intolerance would be more socially useful, I think. He sounds a little like the people who say that only religion can teach morality and discipline, so it’s all very well if there are a lot of “takers” who are tolerant and stuff and have different cultures and subcultures, but they still have to defer to the “makers” who belong to conservative churches (probably of the dominant, i.e. white, ethnic group). That seems to be why conservatives often seem to think they are forced to act against their conscience, while assuming liberals and progressives couldn’t possibly be.

But I suspect MacAskill will still sound too liberal and tolerant to many religious believers who fall into one of the 4 categories in my first paragraph.


Lee A. Arnold 03.03.23 at 11:47 pm

It’s all a bit too complicated. Does MacAskill discuss the ways in which most common people conceived of the future, in ages past? In a rather nonstandard, nonacademic book called The Image of the Future (1973), Fred Polak observed that in the West there were two futures, one replaced by the other: eschaton, which gave way to utopia. Traditional religious society conceived of the past as a golden age from which we had fallen. We were all fallen parts of the body of God, born into an immobile hierarchy, and your best virtue was to be the best peasant or farmer or merchant or lord or king that you could be, and that was how others should judge you. In the future was only the eschaton, the proper culmination of preserving this fateful way of life.

With the rise of modernity came the new notion that there might be improved times ahead while still living on this earth: utopia. Actually there were many published speculations of utopia, helping to fuel the common appetite for the new age of exploration and scientific discovery. But now we have lost even this image of the future. If anything the current predominant image of the future is dystopia, spurred on by the reality of huge wars and the social and environmental problems of crowding and swift technological change. This bled right into storytelling, aiding the simple requirement of science fiction writers and screen entertainers for dramatic conflict with villains: the dramatic image must always be a dystopia. And storytelling is how most people learn and remember.

So it strikes me that among the academics there are two philosophical mistakes here: one is simply forgetting that society always needs an image of the future, and at present we don’t really have one. The other mistake is to try to argue people out of their rather juvenile dystopias by calling upon them to remember the trillions yet unborn in order to solve our current problems. Most common people are untutored in the sophistications of modern philosophy, and will find this a roundabout, uninspiring and unnecessary approach.


MisterMr 03.04.23 at 6:48 am

In my view, “conscience” is a word that means the introiected expectations of others. However, the introiected expectations and what others actually expect can be quite different: for example, I might have introiected a certain model of male-female relationships, and therefore feel that I should behave in a certain way with my wife, but my wife might expect differently.

When we speak of morals, some people will think more about the internal aspects of it, and therefore see it as a problem of one’s relationship with one’s own conscience; others will look more to the external aspect and see the moral problem as “what exactly should my conscience be telling me?”

In particular, utilitarianism is mostly an answer to “what should my conscience be telling me,” and thus conscience is often taken as a result of the theory and not as a starting point of it.


Matt 03.04.23 at 7:01 am

MisterMr: Not to take us off-topic, but what are “introiected expectations”, or even, what does “introiected” mean? I was curious to learn a new word, but google gives me nothing on any of this, unless it’s a typo. (I tried a few variants and it was just blank.)


MisterMr 03.04.23 at 11:11 am

Apologies, the correct form of the verb is “introject” if you want to search on a dictionary.

It is a term used in psychoanalysis (I think) and means that there is something that comes from the outside, like the moral values of the parents, but these are absorbed by the child so that when he/she grows up he/she will perceive said moral values as something that comes from the inside.

E.g. many women have anti feminist beliefs because they introjected the values of patriarchy.

It’s a very cool word.


Alex SL 03.04.23 at 10:36 pm

Yes, this value lock-in idea is really odd, as discussed in one of the previous threads.

The key problem isn’t the assumption that lock-in happens, because one could argue that it is just another way of saying that beliefs and values can achieve a hegemony that makes them very hard to dislodge. For example, the following ones are currently assumed to be obvious truths, and their alternatives considered completely ridiculous and unimaginable, by a decisively large proportion of the population in most ‘Western’ countries, although they are all obviously false:

Everything works better if in private hands and organised as a market. I.e., if a public servant is paid $50k salary to dig a hole, that is inefficient statism wasting taxpayer money, but if the job is outsourced and a private company is paid $75k to dig a hole ($40k digger’s salary plus $35k profit for the owners), that is super-efficient.

Financial success is evidence of some combination of skill and moral fiber, and poverty is evidence of their absence. Like, you can’t criticise Elon Musk or Jeff Bezos for mistreating their employees or writing things that are demonstrably false until you have first become a billionaire entrepreneur yourself, because they have “built something” and you haven’t.

If you own something, you can do whatever you want with it, and nobody has the right to criticise you. Even if what you own is a gigantic company whose behaviour affects hundreds of millions of people. Also, of course, if you own something, you must deserve to own it, see previous point.

Everybody owning a car, the entire society being organised around massive fields of asphalt for parking, and cars as opposed to pedestrians having right of way on roads and streets, is normal and desirable and the only imaginable or possible way to live.

These are locked-in in the sense that I find it very hard to imagine how they can be replaced with more accurate beliefs and better values without societal collapse and formation of a new, successor society, or at the very least a massive crisis that forces the current society to transform itself to survive.

The main difference is that MacAskill somehow assumes that current society is malleable, but that soon beliefs and values will become locked in for the next few millions of years. Merely spelling it out like this should reveal the problem. It is such a laughably implausible assumption that nobody advancing it should be taken seriously on anything they write or say. Also, it is revealingly narcissistic: “my time is special, and that makes me special, because I get to influence all those people in the future in a way that soon nobody else will be able to”.


KT2 03.04.23 at 11:26 pm

I’m introjecting – very cool word.

Did god have a values lock in moment? Maybe he was in Wisconsin recently?

Frank Zappa said:
“When God created Republicans, he gave up on everything else.”


Alex SL 03.06.23 at 3:14 am

Isn’t it an argumentative device?

He has no way to explain why we can’t wait for more information, let future people have influence over their destinies given their preferences, and avoid wildly speculative forecasting.

So he comes up with a ridiculous way to make it seem necessary to act now, and to make the stakes seem dire and unavoidable given what will happen.

Is it more than grasping at straws?


engels 03.08.23 at 1:47 am

It’s a very cool word.

Cathexis is better.
