On What We Owe the Future, Part 3

by Eric Schliesser on December 5, 2022

To illustrate the claims in this book, I rely on three primary metaphors throughout…. The second is of history as molten glass. At present, society is still malleable and can be blown into many shapes. But at some point, the glass might cool, set, and become much harder to change. The resulting shape could be beautiful or deformed, or the glass could shatter altogether, depending on what happens while the glass is still hot.–William MacAskill (2022) What We Owe the Future, p. 6

This is the third post on MacAskill’s book. (The first one is here, which also lists some qualities of the book that I admire; the second one is here.)

A key strain of MacAskill’s argument rests on three contentious claims: (i) that society is currently relatively plastic; (ii) that the future values of society can be shaped; (iii) that in history there is a dynamic of “early plasticity, later rigidity” (p. 43). Such rigidity is also called “lock-in” by MacAskill, and he is especially interested in (iv) “value lock-in.”

Before I get to criticizing these three claims, it’s worth stating that in a previous post I noted that from (ii) MacAskill and the social movement he is creating (pp. 243-246) have (v) claimed for themselves the authority to act as humanity’s legislators without receiving authority or consent to do so. It’s quite odd that MacAskill doesn’t reflect on the dangers in the vicinity here, because most of the examples MacAskill offers of relatively long-lasting ‘value lock-in’ are, by his lights, the effects of “conquest” (p. 92). Not to put too fine a point on it, but in general the project of ‘value lock-in’ is team evil (as MacAskill notes, it is the project of imperialists, colonialists, religious monopolists, etc.). You would hope that one of the lessons one takes from this fact is that it’s not a good idea to be on team lock-in. I return to this below.

On (i), I don’t think MacAskill ever offers a metric of social plasticity or even really provides thorough evidence that our age is genuinely molten. (And, in fact, I have noted that at times he undermines this claim by suggesting that our age is characterized by “homogeneity” (p. 96) and the effects of “modern secular culture” or a “single global culture” (p. 158).) But it’s worth looking at how MacAskill articulates the first claim:

In China, the Hundred Schools of Thought was a period of plasticity. Like still-molten glass, during this time the philosophical culture of China could be blown into one of many shapes. By the time of the Song dynasty, the culture was more rigid; the glass had cooled and set. It was still possible for ideological change to occur, but it was much more difficult than before.
We are now living through the global equivalent of the Hundred Schools of Thought. Different moral worldviews are competing, and no single worldview has yet won out; it’s possible to alter and influence which ideas have prominence. But technological advances could cause this long period of diversity and change to come to an end.–(p. 79)

Not unlike MacAskill, I am fascinated by the period of the Hundred Schools of Thought. Each year I spend some time on it with my undergrads. But I never fail to point out that this era is also known as the ‘Warring States period.’ (In fact, I observe, as a puzzle, that the combination of intellectual fertility and relentless war within a relatively fractured political system is something it shares with the Italian Renaissance of Machiavelli and, perhaps, Kautilya’s age in India.) The absence of empire is beneficial to value pluralism.

I don’t mean to suggest that value pluralism is a necessary effect of a multi-polar world. Presumably there are other social and institutional sources: Max Weber thought such pluralism was the effect of the advanced division of labor; Plato and Al-Farabi seem to have thought it was the effect of the diversity of human passions that flourish in democratic societies alongside freedom of speech and a lack of educational uniformity. That’s to say, one may obtain value pluralism without a context of permanent war.

So, if one thinks cultural plasticity is worth having — or at least if one thinks rigidity is something threatening — then one should be thinking about the practices and institutions that prevent empire and that promote enduring cultural diversity. MacAskill is rather fond of thinking about society in terms of cultural evolution. But, as I have repeatedly noted, he is thoroughly uninterested in thinking of the role of institutions — as selection mechanisms or ecological structures — in generating and sustaining such pluralism.* And the effect of this is to flatten (to use his lingo) the cultural fitness space. For MacAskill treats technology as a determining cause of value lock-in.

Now, I don’t want to suggest technology never shapes values, but it is peculiar that MacAskill doesn’t notice that technology can be neutral among competing values and that technology is often shaped by values. I call it peculiar because (a) the hot topic in AI ethics today is that even neutral algorithms often reflect and amplify existing structural (that is, institutional) injustice(s); and (b) lots of military technology gets used for competing ends. We can observe this peculiarity in the very next paragraph:

When thinking about lock-in, the key technology is artificial intelligence. Writing gave ideas the power to influence society for thousands of years; artificial intelligence could give them influence that lasts millions. I’ll discuss when this might occur later; for now let’s focus on why advanced artificial intelligence would be of such great longterm importance.–(p. 79)

Let’s stipulate that it’s true that “writing gave ideas the power to influence society for thousands of years”; but writing itself does not limit the number of ideas that can be expressed. From the perspective of cultural evolution, the invention of writing creates an explosion of cultural variation. And while, surely, some technologies may be homogenizing along some dimensions (including as instruments of empire), it is simply not intrinsic to technology to be value-homogenizing. (As an aside, it is notable that in his work MacAskill draws on economists who think about productivity, but that he has ignored the rich area of philosophy of technology and what we might call STS (science and technology studies). MacAskill is engaged in (what Nathan Ballantyne calls) epistemic trespassing without realizing, it seems, which fields he has ignored.)

In fairness, MacAskill cites this paper. But a key premise in the argument is this: “If a large majority of the world’s economic and military powers agreed to set up such an institution, and bestowed it with the power to defend itself against external threats, that institution could pursue its agenda for at least millions of years (and perhaps for trillions).” The dangerousness of AGI would be a possible effect of (near) world peace. So, even if one grants that the probability of AGI in the next fifty years is “no lower than 10%” (p. 91), the whole argument for (iv) relies on the utopian thought that under the stress of rising climate change the great powers of humanity opt for world peace!

In fact, MacAskill’s argument for (ii/iii) rests on the idea that stagnation is inevitable because scientific and technological innovation become harder and harder (pp. 150-151) and because, as countries grow wealthier, fertility drops (and there is an implied absolute plateau to population on Earth (pp. 152-155)). MacAskill is clearly influenced by Tyler Cowen’s work (pp. 147-148), but he cites as authority research by Stanford University’s Chad Jones on “longer timescales” (p. 150). And since I try to keep up in philosophy of economics, I thought it useful to take a look at some of Jones’ papers (whose framework is, as MacAskill himself notes in an endnote, a dressed-up Solow-Swan growth model — these models leave considerable known uncertainty in long-range forecasting).+

Jones assumes that “a larger population means more researchers which in turn leads to more new ideas and to higher living standards,” something MacAskill also embraces (p. 152). (I sketch this mechanism in toy form after the quotation below.) Here’s a passage from the conclusion of one of the key papers MacAskill cites:

Of course, the results in this paper are not a forecast—the paper is designed to suggest that a possibility we have until now not considered carefully deserves more attention. There are ways in which this model could fail to predict the future even though the forces it highlights are operative. Automation and artificial intelligence could enhance our ability to produce ideas sufficiently that growth in living standards continues even with a declining population, for example. Or new discoveries could eventually reduce the mortality rate to zero, allowing the population to grow despite low fertility. Or evolutionary forces could eventually favor groups with high fertility rates (Galor and Moav 2002). Nevertheless, the emergence of negative population growth in many countries and the possible consequences for the future of economic growth make this a topic worthy of further exploration.–“The End of Economic Growth? Unintended Consequences of a Declining Population” American Economic Review, November 2020
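For readers who want the mechanism concrete, here is a minimal toy sketch (my own illustration with invented parameter values, not code from Jones’ paper) of the kind of semi-endogenous idea production function Jones works with. Once ideas get harder to find and the population shrinks, the knowledge index plateaus:

```python
# Toy semi-endogenous "idea production function" (parameters invented):
#   new ideas per year = theta * N**lam * A**phi, with phi < 1
# ("ideas get harder to find" as the knowledge stock A grows), while the
# population N shrinks at a constant rate, as in a low-fertility scenario.

theta, lam, phi = 0.02, 1.0, -0.5   # assumed values, purely illustrative
A, N = 1.0, 1.0                     # knowledge index and population, normalized

for year in range(1, 501):
    A += theta * (N ** lam) * (A ** phi)  # knowledge accumulates...
    N *= 0.995                            # ...while population falls 0.5%/year
    if year % 100 == 0:
        print(f"year {year}: knowledge index {A:.3f}, population {N:.3f}")

# Growth in A slows toward zero as N shrinks: since living standards track A
# in such models, a permanently declining population implies stagnation.
```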

The reason I quote this passage is not to refute MacAskill (although I think MacAskill is not sufficiently attentive to the implied difference between a model-driven scenario and a forecast), but because it helps explain why MacAskill is so focused on artificial intelligence. (Some critics of longtermism suggest that it is the effect of the values of Silicon Valley and its donors on the EA movement, but while one cannot rule that out, I like to think it’s model-driven.) At one point MacAskill writes:

Think of the innovation happening today in a single, small country—say, Switzerland. If the world’s only new technologies were whatever came out of Switzerland, we would be moving at a glacial pace. But in a future with a shrinking population—and with progress being even harder than it is today because we will have picked more of the low-hanging fruit—the entire world will be in the position of Switzerland. Even if we have great scientific institutions and a large proportion of the population works in research, we simply won’t be able to drive much progress. (p. 155)

Neither Jones nor MacAskill really considers the benefits of educating a large part of the world’s population at, say, Switzerland’s levels. (Go look up, say, its patents or education spending per capita.) Presumably that is because in the Solow-Swan model such benefits are a one-off and don’t generate a permanent productivity spiral. But this seems to have the perverse effect on MacAskill’s program/longtermism that the economic development of poor countries (and, say, the opening of markets to their products) does not figure in What We Owe the Future as an especially important end worth pursuing. As an aside, in another paper (also cited by MacAskill) Jones and his co-authors note the significance of the fact that ideas are non-rivalrous. Their model implies that ‘educating a large part of the world’s population at, say, Switzerland’s levels’ would be worth doing.

I quote Jones for two other reasons. First, Solow-Swan does not imply that technology-driven future productivity or intensive growth is impossible. It’s important to MacAskill’s general argument that something like “past [scientific/technological] progress makes future progress harder” (p. 151) is true (this is Cowen’s influence on MacAskill). And the main empirical argument for it is the record of declining productivity growth of the last half century or so (which gets accentuated by the drop in fertility in countries with good education systems). We are at risk of reaching what in the eighteenth century was called a ‘stationary state.’ But even if we really understood what caused the scientific and industrial revolutions to happen, there is no reason to think a future leap in productivity would have to follow the same underlying causal structure.

As a non-trivial aside, for MacAskill a civilizational plateau is (if we don’t destroy ourselves) inevitable due to the physical constraints of the universe. He thinks the number of atoms puts an absolute upper limit on growth (p. 27). But I really don’t understand the argument for why increasing value-added per atom is impossible on his view. Again, it is noticeable that institutions are irrelevant to MacAskill’s argument.

In a future post, I will explore MacAskill’s “ideal” that shapes his argument for (ii). Here I just want to close with the observation that it is odd to see a model, Solow-Swan, which (let’s stipulate) is “foundational for all of modern growth theory” (note 18, p. 304) but which has known problems as a forecasting device because there is considerable room for uncertainty,+ presented as a reliable guide to very long-term developments. This is a scientific field that is still in its infancy. And the known uncertainty in the error margins of the models doesn’t get eliminated over the very long term; rather, the sensitivity to even minor modeling mistakes gets worse. To sum up, any collective decision about the long-term future made on the prospect of world peace and this model is an expression of a lovely faith.
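To see how quickly that sensitivity compounds, consider a toy calculation (mine, not MacAskill’s or Jones’): two forecasts whose assumed annual growth rates differ by a mere tenth of a percentage point, well inside any plausible error margin, diverge without bound as the horizon lengthens.

```python
# Toy illustration (not from the book): levels implied by compound growth
# are extremely sensitive to tiny errors in the assumed annual rate.
for horizon in (50, 100, 500, 1000):
    low = 1.020 ** horizon    # forecast assuming 2.0% annual growth
    high = 1.021 ** horizon   # forecast assuming 2.1% annual growth
    print(f"{horizon:>4} years: forecasts differ by a factor of {high / low:.2f}")

# After 50 years the two forecasts differ by ~5%; after 1,000 years the
# "same" model yields answers that differ by a factor of roughly 2.7.
```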

 

+I thank John Quiggin for this reference.

*Yes, on p. 86 MacAskill mentions the significance of institutional design. But it plays no role in his actual argument.

{ 31 comments }

1

Luis 12.05.22 at 11:19 pm

(This is excellent!)

2

Alex SL 12.06.22 at 6:23 am

I find it highly ironic and also just plain weird that MacAskill considers the present time of a single global economy under the absolute dominance of capitalism, and saturated with Western entertainment media, to be an age of particularly diverse and fluid values. The opposite is clearly the case. For better or for worse (I don’t particularly envy the sacrificial victims of the Aztecs, for example), values and worldviews were much more diverse in pre-colonial times. Which also means they were so at much lower and much more sustainable population numbers than today.

There seems to be no reason why the relationship between population size and speed of innovation would be linear (even assuming all else being equal, i.e., all other factors such as degree of education or institutional arrangements being constant). Surely adding researchers at the lower end will have disproportionate benefits but adding them to a community of hundreds of thousands will run into diminishing returns as people find out the same things in parallel, so I would expect a saturation curve.
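A back-of-the-envelope way to see the duplication effect (a deliberately crude model of my own, with invented numbers): if N researchers each independently hit on one of K equally findable results, the expected number of distinct results is K * (1 - (1 - 1/K)^N), which rises almost one-for-one at first and then flattens:

```python
# Crude duplication model (illustrative only): N researchers each
# independently hit on one of K equally likely findable results.
# Expected distinct results = K * (1 - (1 - 1/K)**N), a saturating curve.
K = 1000  # size of the pool of findable results
for N in (10, 100, 1000, 10_000, 100_000):
    distinct = K * (1 - (1 - 1 / K) ** N)
    print(f"{N:>7} researchers -> ~{distinct:6.1f} distinct results")

# Output rises almost one-for-one at first (10 -> ~10.0) but flattens hard:
# 10,000 researchers recover ~999.95 of the 1,000 findable results.
```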

More generally, and not having read the book beyond the various summaries, critiques, and excerpts that are being quoted everywhere, it seems to me that MacAskill may not take seriously enough the possibility that there are diminishing returns to innovation full stop.

Now, admittedly, people could have made the claim in 1000 CE that they already knew all the stuff that is easy to find out, and that most of the rest would likely remain mysteries forever, and they would have been wrong. But just because they were wrong then, it doesn’t follow that there will never be a later time at which all the low-hanging fruit have indeed been harvested. As a biologist myself, I just do not see how anything in the future of my field, even assuming thousands of years of contemporary investment into research, could possibly be as transformative to our understanding of biology as the theory of evolution. It really is the case that sometimes we figure something out, and then it stays figured out, and we largely fill in detail with diminishing returns of insight per research effort. Why should it be different in power plant design or computer science?

MacAskill seems to assume not only that adding more researchers could resume or maintain the past speed of innovation; he and his ilk also believe that a single self-improving AGI, thinking much faster than a human, could suddenly revolutionise all of science, and they expect variously utopian or catastrophic results (“x-risk”).

I commented before that although I find MacAskill’s ethics odious (eradicate all animals, because they don’t have happy lives anyway), I nonetheless find the singularitarians’ and longtermists’ ridiculous assumptions about empirical reality more intellectually upsetting. Because that is simply not how innovation works.

Put aside the implausibility of it and imagine a superintelligence that can formulate and ponder ideas within microseconds that would take an entire nation’s worth of researchers years to come up with. Now what? Well, now it will have to test whether it theorised correctly, by conducting empirical experiments. So, it will have to request funding to run lab experiments, collect field samples, build and test prototypes, etc. And the various ideas it developed in the first twenty microseconds will all compete with each other for funding.

Even under the best-case, charitable assumption of no diminishing returns from having run out of low-hanging fruit, that is still an unavoidable bottleneck. The idea of a singularity, and by logical extension even the much less ambitious idea of accelerating progress merely by adding a few million more researchers into a world of resource constraints and competing social and political interests, are simply idiotic. Why are people being taken remotely seriously on the future of progress and its risks who have so little understanding even just of how science works? MacAskill on research is the equivalent of me going into a conference of epidemiologists, asking them why they don’t just give every patient a hammer to squash the viruses themselves, and then being lauded as a visionary by thousands of gullible tech bros.

3

nastywoman 12.06.22 at 6:40 am

Within communication design, practitioners use reality testing to influence the patient or client to recognize their negative thoughts, evaluate the thoughts logically rather than emotionally, and then determine whether the thoughts are valid (ie: internally consistent and grounded in reality). The focus of reality testing is not necessarily concentrated on the source of the behavior or thought, but rather on the fact that current thoughts are occurring and influencing behaviors in the here-and-now. After undergoing this technique, the patient or client is often able to see that the thoughts they have been experiencing are, in fact, not valid or based on reality, and should therefore not be used as the basis for life decisions.[2] Reality testing can be used in this way to help facilitate corrective emotional experiences by disconfirming and altering previously held negative or unrealistic expectations in favor of more adaptive functions.[3] Psychotherapy methods such as rational emotive behavior therapy and cognitive behavioral therapy rely heavily on the client’s ability to frequently self-examine internal thoughts and assess their preceding influence on perceptions, judgments, and behaviors. Continual reality testing directed by therapists can help educate clients on how to habitually examine their own thought patterns and behaviors without the ongoing need for a therapist. Constant and prolonged exposure to a multitude of corrective experiences can lead clients to form their own internal and enduring changes in thoughts, expectations, feelings, and behavior.[4] Reality testing has also been identified as a curative factor when implemented within a group therapy setting. In group counseling, clients can use the perspectives of other group members as the basis of reality testing and receive instant feedback through group discussions, roleplaying, and other group activities.[5]’

4

TM 12.06.22 at 10:31 am

Worth pointing out that Graeber and Wengrow have made a claim (in “The Dawn of Everything”) about our age being politically “stuck” compared to prehistory, a claim that I think is quite nonsensical (and likewise isn’t backed up with any meaningful empirical metric).

5

TM 12.06.22 at 10:39 am

“Or new discoveries could eventually reduce the mortality rate to zero, allowing the population to grow despite low fertility.”

Or perhaps 1=0 and everything is possible… How dumb can economists be?

6

TM 12.06.22 at 10:45 am

“But I really don’t understand the argument for why increasing value-added per atom is impossible on his view.”

So let’s impute a million-dollar value to each atom. Problem solved?

Is the author seriously defending the idea of limitless growth on a finite planet? Makes it hard to take anything else seriously.

7

reason 12.06.22 at 2:52 pm

I would think the declining measured productivity growth (in which countries?) in the last 50 years is largely a result of neo-liberalism. Not even considering the difficulty of measuring productivity growth in a mainly service economy, there has definitely been a fall in physical investment rates in rich countries because of off-shoring. And concentrating income growth amongst the very richest (i.e., not in net consumption increases by the masses) hardly encourages productivity growth.

8

reason 12.06.22 at 2:59 pm

Alex SL,
it might be relevant to contemplate that power plant design and computer science weren’t even fields of study in Darwin’s time, and what that might mean for your musings.

9

J, not that one 12.06.22 at 3:56 pm

It is strange that MacAskill would assume lock-in is always a result of technology and never of ordinary human behavior.

10

J, not that one 12.06.22 at 4:32 pm

It occurs to me that the 18th- and 19th-century readings in a history of science seminar I took maybe 20 years back, on the boundary between the scientific revolution and sociology, uniformly assumed that culture was non-plastic, and, if they addressed social change at all, grasped at outlandish metaphors to explain how it might happen (excluding the ones that saw human civilization as a living entity with a youth that always leads to maturity). This was so different from my own views at the time, more like Alex’s in 2, that it eventually became probably my main takeaway from the class.

11

DK2 12.06.22 at 4:48 pm

Thanks much for the explanation of MacAskill’s thinking. It’s valuable to get an informed opinion, even if I can’t follow a lot of the references.

But here’s something I don’t understand. It seems to me self-obvious that we can’t meaningfully predict the consequences of our actions 100 years in the future, much less millions of years. The error bars are far too high. Attempting to guide present actions by reference to consequences for the distant future is an absurdity. Imagine what we’d think about a ruler in 1000 AD who attempted to act in a manner designed to increase human happiness today. We’d laugh at the arrogance.

Take uncontrolled super-intelligent AGI as an example. Assuming such a thing is possible, it’s not clear whether future humans would be better or worse off. Maybe an uncontrollable god-like computer process would exterminate humanity to create paperclips, or whatever, so maybe we should focus on making sure that entity is under tight human control.

But human control means government control, and governments inevitably weaponize anything weaponizable. Do we really want North Korea building and controlling a god-like device that has the ability to kill everyone? Do we want ISIS controlling that device? Maybe human-controlled super-intelligent AGI is a bad idea.

So maybe we should abandon efforts to control super-intelligent AGIs on the theory that we’re better off if such entities make their own inscrutable decisions, since human control means they’d probably be used in a catastrophic extinction-level war. An uncontrolled super-intelligent AGI might in fact save humanity from such a war, by seizing control of nuclear weapons from unpredictable and dangerous humans.

Or maybe instead we should attempt to block development of superintelligent AGIs in the first place. But doing so means attempting to control technological development throughout the world, which is almost certainly impossible and if successful could lead humanity into a suffocating world-wide totalitarian state that would have to exist literally forever. Which also seems like a bad outcome from the perspective of a million years in the future.

I don’t know what the right choice is here, other than that we should be careful about creation of weaponizable technologies. What that means exactly is above my pay grade, but trying to answer it based on the well-being of humans a million years in the future is so pointless that I’m wondering how anyone could take the exercise at all seriously.

Or take the assertion that future progress is limited by the number of atoms in the universe. By the time we get to that point (assuming we ever do), we’ll presumably have god-like powers that could well render the limitation meaningless. Maybe we’ll be able to quantum-tunnel into an infinite number of parallel universes and end up with infinite resources. Or maybe we’ll be able to take advantage of the essentially infinite number of quasi-particles that wink into and out of existence every second everywhere in the universe. Or maybe something else entirely. Who knows?

Trying to predict scientific and technical advances a century out is notoriously difficult, and any school of thought that takes the number of atoms in the universe as a serious constraint on human development millions of years from now is the equivalent of trying to figure out how many angels can dance on the head of a pin. Worse than that, actually, since as far as I know no one ever pretended that calculating that number would lead to actionable consequences for current human behavior.

It seems to me that long-termism is so inherently absurd as to be interesting solely from the perspective of what it says about the psychology of a type of intellectual.

So I can’t figure out why anyone would spend much time responding to it.

On the other hand, I’m quite positive that everything I’m saying and thinking has been debated at length somewhere and there must be a good explanation of why long-termism is worth taking seriously. I’d appreciate a pointer, particularly to something written for a general audience.

12

Peter Dorman 12.06.22 at 9:35 pm

I’d like to thank ES and the various commenters for laying bare the idiocy of MacAskill’s longtermism. Its problems are not subtle or matters of fuzzy boundaries or loose ends; it is shot through with grandiose, ill-supported assumptions, ignorance of the relevant academic literatures, and severe shortcomings in quantitative reasoning. He’s not worth our time.

So now the question becomes, why is he the center of so much attention? Yes, people are susceptible to intellectual fads, confirmation bias and all the rest, but why this guy? Now we are into the sociology of knowledge, which is where all the interesting questions about MacAskill remain.

I wouldn’t be surprised if very generous sponsorship isn’t part of the story. Money makes all sorts of nonsense respectable, after all. And it’s not hard to see how longtermism can anoint a range of earning and giving strategies with an aura of benevolence without rich people actually needing to challenge the institutions and conditions on which their wealth depends. Hell, they don’t even have to give handouts to the poor any more.

The patina of pseudo-rigor, with its array of models and vast historical generalizations that touch down on disparate, specialist-appealing episodes, appeals to a sort of intellectual conspicuous consumption. (Look at the clever stuff I’m into!)

And how should we explain the delicate reticence of critics to eviscerate this BS? Even ES, who delivers what I view as crushing blows to longtermism, is so measured in the way he words them.

I don’t know what humans, if they still exist, will be doing with themselves in a million years, but I doubt many of them will care about MacAskill and his “movement” a decade or two from now.

13

Alex SL 12.06.22 at 9:36 pm

reason @8,

Not sure what exactly you are implying. That although some research fields will see diminishing returns, that doesn’t matter because others will open up?

But does that imply that there are infinity research fields and potential technologies? It just seems highly implausible that there is infinite meaningful stuff to discover about how the universe works. Coming at it again as a biologist, the biodiversity of the planet Wrstg!gl hundreds of light-years away (if we could ever get there) would be a whole new world of discovery, yes, but that doesn’t mean that it isn’t also subject to the same laws of nature that apply here. Its biochemistry would have to build on the same atoms and available power sources as that of Earth, i.e., sunlight or chemical. The geology of Wrstg!gl is another such field, but again it will also be constrained by the atoms etc etc. Not saying that that isn’t worth knowing, but it isn’t the same level of transformative insight as discovering natural selection or the Hardy-Weinberg formula.

Even if tomorrow we discover a new research field such as hyperthaumic oscillometry, it may be a great opportunity for a few thousand additional researchers, yes, but it will then in turn run into diminishing returns within decades at most. At a more practical level, whatever it is, it is unlikely to have direct use for increasing agricultural productivity or the efficiency of solar cells. All the stuff that would be required to gain the kind of powers-like-a-god type scenarios imagined by these futurists (be it from sudden singularity or, as DK2 discusses, hypothetical millions of years of linear progress) is still constrained by physical limits on how efficient/productive it can be even in theory, so as one approaches those limits one will unavoidably see a saturation curve. Maybe not in 2030, but certainly before 20,000 CE.

At the heart of all this longtermist and transhumanist futurism is the naive extrapolation of the first half of what will very likely turn out to be a saturation curve once one considers the broader context of constraints and just plain physics.

14

Gavin 12.06.22 at 9:39 pm

“Our beginnings never know our ends” – TS Eliot

On the causes of the Industrial Revolution, I’d like to propose a universal historical rule: “the more a period is studied, the less we find we really know about it.”

DK2 – although I generally agree with you, the example of a ruler in 1000 AD gave me pause for thought. Would the founding of a university, say Bologna a bit later, not count as a long-lasting good?

Roughly, I’d say doing good things now in the hope but not certainty of fruitful reward isn’t too bad.

15

Alex SL 12.06.22 at 10:22 pm

Given the potential for confusion and conflation, it might be useful to say that the following are all independent statements. Even if one finds reason to disagree with one that doesn’t mean the others are also wrong.

Research progress looks like a saturation curve because it is likely that there isn’t infinity stuff to figure out about how the universe works.

Research output as a function of the number of researchers is a saturation curve because at some point additional researchers will just find the same insights in parallel.

Technological progress is likely a saturation curve because of physical limits on what efficiency or productivity is achievable even in theory, much less in practical implementation.

Research per time is constrained by the need to test hypotheses in the real world. Merely thinking in an armchair isn’t generating scientific or engineering progress, so the idea of AGI solving all of STEM in five minutes is based on a ludicrous misunderstanding of STEM.

And all of that is before considering the constraints that resource limits on Earth put on population levels and economic growth, plus the utter, fantastical physical and biological implausibility of colonising certain other planets that do not have an atmosphere, let alone traveling to other stars.

16

KT2 12.06.22 at 10:57 pm

Eric S said “He [MacAskill] thinks the number of atoms puts an absolute upper limit on growth (p. 27).”

We are not locked in to atoms in the universe.

Atoms in the universe is so quaint relative to quantum. MacAskill needs to read up on the proton (^1.) and add another rule to a Turing machine using the Busy Beaver numbers: “But adding just a few more rules instantly blows up the number of machines to consider.” (^2.)

DK2 is on the right track;
DK2 @11 “Or take the assertion that future progress is limited by the number of atoms in the universe. By the time we get to that point (assuming we ever do), we’ll presumably have god-like powers that could well render the limitation meaningless”
*

^1.
“Inside the Proton, the ‘Most Complicated Thing You Could Possibly Imagine’

“The positively charged particle at the heart of the atom is an object of unspeakable complexity, one that changes its appearance depending on how it is probed.

“Such subtle details about the proton’s makeup could prove consequential. … The occasional apparition of giant charm quarks would throw off the odds of making more exotic particles.

“Rojo’s collaboration plans to continue exploring the proton by searching for an imbalance between charm quarks and antiquarks. And heavier constituents, such as the top quark, could make even rarer and harder-to-detect appearances.

“Next-generation experiments will seek still more unknown features.

https://www.quantamagazine.org/inside-the-proton-the-most-complicated-thing-imaginable-20221019
*

^2.
“How the Slowest Computer Programs Illuminate Math’s Fundamental Limits

“The goal of the “busy beaver” game is to find the longest-running computer program. Its pursuit has surprising connections to some of the most profound questions and concepts in mathematics.

“The logician Kurt Gödel proved the existence of such mathematical terra incognita nearly a century ago. But the busy beaver game can show where it actually lies on a number line, like an ancient map depicting the edge of the world.

 https://www.quantamagazine.org/the-busy-beaver-game-illuminates-the-fundamental-limits-of-math-20201210/

17

DK2 12.06.22 at 10:59 pm

“DK2 – although I generally agree with you, the example of a ruler in 1000 AD gave me pause for thought. Would the founding of a university, say Bologna a bit later, not count as a long-lasting good?”

Wanting to improve conditions for the benefit of future generations has been a human trait for as long as we have reliable records.

Maybe that’s all long-termism comes down to. But if so, it seems pretty trivial, and I still have the where’s-the-beef question. I don’t understand why anyone takes this seriously.

Truth be told I never got crypto either. There may be a deficiency in the quality of my imagination.

18

Oscar the Grouch 12.07.22 at 2:51 am

We can explain it, Peter. Can we justify it?

As said to me on the first post ‘it’s a significant movement.’

I guess that’s a reason but it shouldn’t apply to all movements, clearly.

There’s an international white supremacist movement, and I’d venture to argue we shouldn’t delve into their ideas and take them seriously. Doing so only puts their ideas on the table as serious and well-founded. What they want — what climate change denial and covid denial also benefit from — is to pull less informed people into a muddle of half-baked ideas that can be used to wear down any scientific consensus and create dunking moments that seem like a win for people who approve of their nitwit and vile dogma.

It’s an attempt to fashion a permission structure, not a search for truth.

If one wants to argue this is a similar situation, it won’t work, since these ideas have a certain imprimatur: pointing out that they’re sloppy, half-baked, and blithely ignore a long, long tradition of moral argument, leading to morally repellent and even ridiculous conclusions, will put one outside the conversation about it.

To be taken seriously, one must abide by the best intellectual norms. We can’t talk about this author as we talk about Judith Butler. He’s one of us.

However, if we’re in a post-reason or post-truth situation, demolishing the argument in the usual way won’t work either.

Quite a dilemma. It might seem the solution is to ignore the book. That won’t work either, though, because the ideas of the book are titillating and so will captivate people who pay no attention to what professors think.

Overall it seems better to have the counterarguments at hand. So I am glad for these posts.

It’s funny to me that the reason this book is bullet-proof is similar to the way crypto is bullet-proof. People won’t dismiss what is self-evidently sketchy, because smart people say it is smart and it can be made complicated. If you say ‘that’s obviously sketchy nonsense’ you run the risk of sounding dumb. And people don’t want to sound dumb.

19

TM 12.07.22 at 8:48 am

Alex 15: I would add two more relevant saturation curves, namely the number of people on the planet and the human life span. There is no absolute proof of this, but everything we know about physiology, anatomy, and biology indicates that life expectancy is unlikely to increase much beyond what has already been achieved in rich countries. I mention this because, for some reason, the longtermists are adamant in denying that these two curves are bounded (or deny that they matter).

DK2 11: I don’t know what to make about the rest of your comment but this is spot on:
“It seems to me self-obvious that we can’t meaningfully predict the consequences of our actions 100 years in the future, much less millions of years. The error bars are far too high. Attempting to guide present actions by reference to consequences for the distant future is an absurdity.”

It’s not just that the error bars are huge; I think it’s even more fundamentally a question of imagination. Hardly anybody in the year 1900 had the imagination to anticipate to any meaningful extent the world of 1920, let alone 1950 or 2000. Any attempt at long-term prognostication is pure hubris and deserves to be dismissed out of hand.

That doesn’t mean we know nothing about the future, however. We know a great deal about the laws of nature, and they give us meaningful constraints on what is physically possible, boundary conditions for the development of human societies. Some would simply ignore or deny the existence of these constraints, resort to science-fiction “inventions,” or come up with deus-ex-machina justifications for ignoring them (DK2: “By the time we get to that point (assuming we ever do), we’ll presumably have god-like powers that could well render the limitation meaningless” – is this parody or what?). But the laws of thermodynamics won’t be repealed, humanity will continue to be constrained to a limited planet subject to physical, physiological and ecological limitations, and humans won’t reach immortality. The best we can do is to keep the boundary conditions on our limited planet in a range conducive to the existence of future generations.

20

Alex SL 12.07.22 at 9:24 am

TM,

Thanks, and good point. I realised how implausible immortality is when I learned that a sufficiently fine-tooth-comb autopsy of any human of any age reveals several cancers. Nearly every one of them is so slow-growing that it will not become an issue before the affected human dies of old age; the ones that do kill humans are the very few that grow faster. But now if we somehow managed to rejuvenate our cells to extend our natural lifespan to 500 years… we would die horribly of twenty concurrent cancers by age 180 or so, actually, because by then more and more of them will have become an issue.

Our bodies are so complicated that something always goes sproing sooner or later. Actually amazing they mostly work okay for a few decades.

21

bekabot 12.07.22 at 1:15 pm

“MacAskill and the social movement he is creating have claimed for themselves the authority to act as humanity’s legislators without receiving authority or consent to do so”

“Unacknowledged legislators.” As is the case with lots of catchy phrases, there’s a joker in the pack and a worm in the bud. (FWIW, I think the affinity of “unacknowledged legislators” with “invisible superiors” should be recognized more widely than it is.)

22

reason 12.07.22 at 2:44 pm

Alex SL
What I was implying is that “technical efficiency” (an absolutely poorly defined term in economics anyway) is only relevant if you continue to produce the same things, and history shows that we actually don’t. What you are saying might be true if we were somewhere near the limits of human ingenuity and understanding; I see no reason to believe that is so. Keynes was right that “in the long run we are all dead,” even in terms of our species, but I trust we still have the chance to continue for some time and that massive improvements are still possible. The Libertarian belief that material welfare (rather than social organisation) is all that matters is naive in the extreme, and it is even more foolish to assume that having more is always better than having less.

23

J, not that one 12.07.22 at 3:53 pm

Good points from Oscar @ 18. In an ideal world, MacAskill sparks a discussion, but in this world, he’s staked a claim to essentially everything his book touches on and established himself as the reigning expert, with the methods approved by those aligned with him as the only appropriate ones.

A book preview shows me he begins with the preposterous assumption that if “I” could live every human life ever in succession, “I” would still think the same way I actually do right now. Maybe that’s a philosophically interesting thought experiment, but from a reader-response perspective he’s authoritatively established that his assumption is an accepted truth. Good luck digging ourselves out of that hole.

24

J-D 12.08.22 at 12:45 am

A book preview shows me he begins with the preposterous assumption that if “I” could live every human life ever in succession, “I” would still think the same way I actually do right now. Maybe that’s a philosophically interesting thought experiment

‘The Egg’, by Andy Weir
http://galactanet.com/oneoff/theegg_mod.html

25

Dr. Hilarius 12.08.22 at 1:54 am

26

Alex SL 12.08.22 at 9:17 am

reason @22,

The question is in what context we are discussing efficiencies and progress more generally. The context here is the idea of MacAskill and other like-minded futurists that there can and will be, if we only stack up enough “research points” in our society, either an AI with god-like powers that can turn us into paperclips or solve all our issues depending on its values, or a future human society with god-like powers that can build computers simulating trillions of minds, harvest the entire energy output of entire stars, and find itself limited at most by the number of atoms in the universe.

So, efficiency would here in many cases be about fairly straightforward issues: how much output or benefit can you get from a technical solution to a given technical problem. That is not so culturally contingent; an ancient Chinese artificer and an ancient Greek artificer both trying to design a crossbow may have had different religious beliefs, culinary tastes, and clothes, but they would still both have dealt with the question of storing kinetic energy and releasing it suddenly to shoot off a bolt, under the same physical laws constraining their solution space. Their designs weren’t identical, but the point is, neither of them would have been able to produce a crossbow that can shoot the bolt at the speed of light, for example. You can replace that with a gun at some point, but again designs of guns face their own physical constraints.

The same then applies to all manner of other problems. Take farming, for example. We want to feed a larger population. How much energy can we, even in theory, harvest from a given crop on a hectare of field? The upper limit is how much energy is in the sunlight that hits the hectare between germination and harvest, and in reality we capture only a minuscule fraction of that, due to all the unavoidable losses along the way. No progress, innovation, or research, even by a super-AI, will ever be able to exceed that upper limit, so it can at best be approached in a saturation curve.
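One can put rough numbers on that ceiling; the figures below are my own illustrative assumptions (a mid-latitude average solar flux, a 100-day season, a good wheat harvest), not anyone’s measurements:

```python
# Sanity check of the sunlight ceiling, with illustrative assumptions only:
flux    = 200.0          # time-averaged solar flux, W/m^2 (mid-latitude guess)
hectare = 1e4            # field area, m^2
season  = 100 * 86_400   # ~100-day growing season, in seconds

ceiling = flux * hectare * season   # total solar energy on the field, joules
harvest = 8_000 * 15e6              # 8 t/ha of grain at ~15 MJ/kg, joules

print(f"sunlight ceiling: {ceiling:.2e} J")
print(f"energy in grain:  {harvest:.2e} J = {harvest / ceiling:.1%} of ceiling")

# A good harvest captures well under 1% of the incident solar energy, and
# photosynthesis itself tops out at a few percent: yield gains must saturate.
```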

27

TM 12.08.22 at 2:18 pm

Alex 25: As an additional point: regarding physical constraints, techno-optimistic futurists often assume that the relevant constraint is energy input. Thus the idea of “harvesting” the energy output of multiple suns (going back to the 1960s, https://en.wikipedia.org/wiki/Kardashev_scale). But that is only taking the first law of thermodynamics into account. The second law is actually the more restrictive constraint. Solar output even of “just” our one and only sun is fantastically high. But even if we could “harvest” a much higher fraction of that energy bounty, we couldn’t solve the problem of disposing of the waste heat. (Similarly, if nuclear fusion gave us the limitless and fantastically cheap energy source we’ve been promised for decades, we’d run into the same constraints).

Many of our ecological constraints are not due to a lack of resources but to a lack of assimilative capacity. It is amazing that, still in this day and age, a significant number of very clever people are ignoring or outright denying the reality of physical constraints.

28

Alex SL 12.08.22 at 9:34 pm

Apologies for posting this often in one thread, but as it is turning into a great collection of limits to growth, I would like to add detail to TM’s last post:

In the post “Exponential Economist Meets Finite Physicist” on the Do the Math blog, Tom Murphy explained that 3% annual economic growth would boil the planet’s oceans off after 400 years, for exactly the reason TM mentions: unavoidable waste heat, irrespective of what kinds of regenerative energy are used to power that economic activity. And we’d be in trouble long, long before the oceans boil, of course.
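The arithmetic is easy to check in outline. The sketch below uses my own round-number assumptions (about 18 TW of civilizational power today, Murphy’s 2.3% annual energy growth, and a simple Stefan-Boltzmann radiative balance) and lands in the same few-hundred-year ballpark:

```python
# Rough check of the waste-heat argument (my assumptions, not Murphy's exact
# figures): grow today's ~18 TW of civilizational power at 2.3% per year and
# ask when radiating the waste heat forces Earth's mean surface past 373 K.
SIGMA = 5.67e-8    # Stefan-Boltzmann constant, W m^-2 K^-4
AREA  = 5.1e14     # Earth's surface area, m^2
SOLAR = 240.0      # absorbed solar flux, W/m^2 (standard rough value)

power, year = 18e12, 0
boiling_flux = SIGMA * 373.0 ** 4          # ~1,100 W/m^2 radiated at 373 K
while SOLAR + power / AREA < boiling_flux:
    power *= 1.023
    year += 1

print(f"surface reaches ~373 K after roughly {year} years")   # ~440 years
```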

At this point a cornucopian economist may start saying “efficiency” and “doing things differently” a lot to keep the fantasy of eternal growth alive, but if economic growth is supposed to mean anything beyond shuffling ever larger numbers between different cells in an Excel sheet (and to be a cornucopian or transhumanist or singularitarian, one must mean more than that), thermodynamics apply as described by Tom Murphy.

Also, having now written that, I am wondering if TM is Tom Murphy.

29

Tm 12.09.22 at 10:57 am

;-) Nope but I also enjoyed the Do the Math blog and learned a lot from it…
O wow just noticed he continued the blog after lapsing in 2015! Cool looking forward to reading the new posts!

https://dothemath.ucsd.edu/

30

Sneerclub Wanderer 12.09.22 at 8:22 pm

“Neither Jones nor MacAskill really considers the benefits of educating a large part of the world’s population at, say, Switzerland’s levels.”

This is because EA/longtermism is, among other things, highly committed to eugenics and the race-science view of “IQ” and “heredity”. They think it’s more likely that they’ll be able to create an AI that can be a scientist than that they can educate an African person to be one.

31

TM 12.14.22 at 10:06 am

How effective altruism let Sam Bankman-Fried happen
Profound philosophical errors enabled the FTX collapse.
https://www.vox.com/future-perfect/23500014/effective-altruism-sam-bankman-fried-ftx-crypto

The article is rather long and rambling but I found this part discussing the philosophy of EA interesting:

“… the problem is the dominance of philosophy
Even before the fall of FTX, longtermism was creating a notable backlash as the “parlor philosophy of choice among the Silicon Valley jet-pack set,” in the words of the New Republic’s Alexander Zaitchik. Some EAs like to harp on mischaracterizations by longtermism’s critics, blaming them for making the concept seem bizarre.

That might be comforting, but it’s mistaken. Longtermism seems weird not because of its critics but because of its proponents: it’s expressed mainly by philosophers, and there are strong incentives in academic philosophy to carry out thought experiments to increasingly bizarre (and thus more interesting) conclusions.

… There are professional incentives to defend surprising or counterintuitive positions, to poke at widely held pieties and components of “common sense morality,” and to develop thought experiments that are memorable and powerful (and because of that, pretty weird). …

When Bostrom writes a philosophy article for a philosophy journal arguing that total utilitarians (who think one should maximize the total sum of happiness in the world) should prioritize colonizing the galaxy, that should not, and cannot, be read as a real policy proposal… The value in that paper is exploring the implications of a particular philosophical system, one that very well might be badly wrong. It sounds science fictional because it is, in fact, science fiction, in the ways that thought experiments in philosophy are often science fiction. The dominance of academic philosophers in EA, and those philosophers’ increasing attempts to apply these kinds of thought experiments to real life — aided and abetted by the sudden burst of billions into EA, due in large part to figures like Bankman-Fried — has eroded the boundary between this kind of philosophizing and real-world decision-making.”

Comments on this entry are closed.