Better, Fitter, Happier

by Henry on August 8, 2003

Glenn Reynolds outs himself as a Transhumanist – why am I not surprised?

Would I like to be smarter? Yes, and I’d be willing to do it via a chip in my brain, or a direct computer interface. (Actually, that’s already prefigured a bit in ordinary life, too, as things like Google and wi-fi give us access to a degree of knowledge that would have seemed almost spooky not long ago, but that everyone takes for granted now). And I’d certainly like to be immune to cancer, or AIDS, or aging.

Fair enough if that’s what turns him on. What’s a little less impressive is his dismissal of skeptics as cheerleaders for AIDS, irritable bowel syndrome, and everyday stupidity. Contra Reynolds, there are serious, principled reasons why you might want to disagree with transhumanism. And this argument has been going on for a long, long time.

As usual, Max Weber has interesting things to say on the subject. Which is why I propose to treat you to a big, quasi-digestible chunk of Science as a Vocation.

Now, this process of disenchantment, which has continued to exist in Occidental culture for millennia, and, in general, this ‘progress,’ to which science belongs as a link and motive force, do they have any meanings that go beyond the purely practical and technical? You will find this question raised in the most principled form in the works of Leo Tolstoi. He came to raise the question in a peculiar way. All his broodings increasingly revolved around the problem of whether or not death is a meaningful phenomenon. And his answer was: for civilized man death has no meaning. It has none because the individual life of civilized man, placed into an infinite ‘progress,’ according to its own immanent meaning should never come to an end; for there is always a further step ahead of one who stands in the march of progress. And no man who comes to die stands upon the peak which lies in infinity. Abraham, or some peasant of the past, died ‘old and satiated with life’ because he stood in the organic cycle of life; because his life, in terms of its meaning and on the eve of his days, had given to him what life had to offer; because for him there remained no puzzles he might wish to solve; and therefore he could have had ‘enough’ of life. Whereas civilized man, placed in the midst of the continuous enrichment of culture by ideas, knowledge, and problems, may become ‘tired of life’ but not ‘satiated with life.’ He catches only the most minute part of what the life of the spirit brings forth ever anew, and what he seizes is always something provisional and not definitive, and therefore death for him is a meaningless occurrence. And because death is meaningless, civilized life as such is meaningless; by its very ‘progressiveness’ it gives death the imprint of meaninglessness. Throughout his late novels one meets with this thought as the keynote of the Tolstoyan art.

Now this is very ponderous and Germanic, but I think that Weber is onto something. What he’s saying, I think, is that Tolstoy, and people like him, ask some interesting and important questions, which ‘progress’-obsessed types don’t. They may not have the right answers to those questions, but that’s beside the point. They’re interested in whether life is meaningful, not whether it can be infinitely extended. And meaning, for Tolstoy, requires some reference point other than the internal desires of the individual.

Which maybe allows me to articulate a little better what I find creepy about transhumanism than I could last week. It isn’t the prospect of brain-machine interfaces, Singularities, telomere hacks and the like, few of which are likely to be with us anytime soon, if at all. It’s the underlying philosophy behind this geek aesthetic – the idea of the self as a sort of infinitely extensible meccano-set, where you can plug in new bits and pieces all the time, just because it’s cool. And, in the best of all possible worlds, keep on doing this forever. Myself, I’d rather be dead.



W. E. Wade 08.09.03 at 12:46 am

“It’s the underlying philosophy behind this geek aesthetic – the idea of the self as a sort of infinitely extensible meccano-set, where you can plug in new bits and pieces all the time, just because it’s cool. And, in the best of all possible worlds, keep on doing this forever. Myself, I’d rather be dead.”

And in the course of all human history, that was the only option. Granted, the existence of suicide gave a choice between Die Now and Die Later, but we are now looking at the option of Die Never.

In the Tolstoi excerpt, if we were to offer the old peasant immortality, he would supposedly refuse because he has had all that life can offer him. On the other hand, has he only had what he can imagine life has to give him? Satiation can only arise from one’s world view, or, more accurately, one’s view of what lies just beyond the world.

Ultimately, the only thing that can sustain a person’s desire for life is curiosity. I would want to live long enough to see the questions answered, but no longer.


Avram 08.09.03 at 1:12 am

So, Henry, would you refuse to have an electronic pacemaker installed if you needed one to stay alive? Have you refused tetanus booster shots?

I can already tell you make use of a common millennia-old memory-enhancement technology, as well as some much more recent memory- and communication-enhancing tech. You’ve already transcended what it was to be human a thousand generations ago. What makes you think life will be so awful for our descendants a thousand generations hence?


Geoff Pynn 08.09.03 at 2:28 am


If you set aside the geek aesthetics and the – admittedly juvenile – Transhumanist manifestos, what do you disagree with? Is it:

(A) It is always good for humans to try to improve and lengthen their lives using technological means.


(B) There is no theoretical limit to the improvements humans can make to their lives and longevity by technological means.

A is, on the whole, a moral claim, while B is an empirical one. There are many reasons one might disagree with B, though all of them would have to be rooted in specific claims about problems inherent to technology, biology, or human nature. I don’t see that you’ve offered any such argument here or in your previous post on the subject, though I could be missing something.

From what I can make out, your problem is with A, and here you seem to be endorsing something like:

(C) Some elements of human nature are better left unimproved by technology.

You suggest (via Weber) that one of those elements is our natural mortality, since the recognition of our mortality pushes us to ask questions about the meaning of life. But couldn’t we just as easily ask those questions while simultaneously extending life indefinitely? Maybe the questions would take on a different character, but I’m not sure why very long-lived humans would be unreflective humans. Or perhaps it’s some other element of human nature you think is better left unimproved?

Maybe instead you think:

(D) Technological improvements to human life and longevity tend simultaneously to degrade the quality of human life.

If this were the case, there might be a point at which technological improvements to human life were outweighed by the degradations. I think this is a promising objection, but you’d need to flesh out precisely what the trade-offs were. Again, I’m not convinced by the asking-questions-about-the-meaning-of-life line of argument, since unless technological improvements to human life somehow directly limited our ability to ask those questions, it seems to me that the more time and leisure we have, the better we can refine and pursue them. What do you think?


Henry 08.09.03 at 5:25 am


Quick thoughts in response as I’m running to catch a plane. My objection is not to any technological innovation as such, although some of the ideas being floated seem icky to me. It’s rather to the implied philosophy of personality and identity that lies behind transhumanism as I see it; this aspiration to a world in which one can swap in and out extensions to cognition like plug’n’play peripherals. I’m not worried that this is likely to happen – it seems to me to rest on some very dodgy science – but I find it weird that some people think of themselves and others as beings of that sort. It’s a sort of radical individualism without real individuals – a weird kind of solipsism in which you want to remake your own individuality.

The immortality stuff isn’t really what I was trying to get at (my fault if you misunderstood – I was being a little abstruse), but for what it’s worth I suspect that human beings probably couldn’t be immortal and stay human in any meaningful sense of the word. One of my favourite SF series of the last few years talks directly to this subject – Paul McAuley’s Confluence, which is in part a scathing parody of Tipler-type fantasies about intelligence going on forever. In McAuley’s words:

The problem of Tipler’s heaven — the problem with all secular heavens, including those of the virtual reality variety — is that he can’t imagine what it could be like to retain all the human attributes of one’s present personality yet also to be truly immortal. It’s a general problem in any SF dealing with transcendence in any serious kind of way, and of course the answer is that one cannot become immortal without becoming other than one’s self. That’s the root of Angel’s dilemma, and the Great Lie at the heart of the propaganda of the heretics, and one of the lies that SF tells itself over and over.


Neel Krishnaswami 08.09.03 at 3:54 pm

The self-as-meccano-set is precisely what I find attractive about transhumanism. It doesn’t take any great leaps of self-reflection for me to realize that I don’t live up to a lot of my ethical and social ideals, and it doesn’t take much more to see that my personality changes over time, as my health, experiences, and social networks change. Both of these facts seem to strongly contradict the sort of Romantic conception of life and experience that you seem to be arguing for.

I mean, as far as I’m concerned the existentialists were right — life has no extrinsic meaning. It only has what meaning we build into it. For me the potential for transhuman technologies is that they open the door to making new and qualitatively richer modes of existence possible.

For example, according to primatologists, the average human has about 150 people in his or her social network — people he or she knows well enough to maintain a continuing relationship with. This is about 3 times as large as chimpanzee networks, which has led the ethologist Frans de Waal to quip that small talk is about three times as efficient as mutual grooming. This is, incidentally, the reason that working in a large firm or government agency of several thousand people can feel dehumanizing — decisions that are important to you will often be made by people outside your social network, whom you have no social influence over.

Now, let’s suppose that a brain upgrade is feasible that increases your memory and linguistic abilities enough to multiply the number of people you can maintain relationships with by a factor of, say, 10, so that you can keep track of fifteen hundred people. Now in most firms every decision will be taken by someone you know, or by someone known to someone close to you. Life becomes dramatically more personal than it was. To a regular human you will be inhumanly sociable, but is that such a bad thing?


Mike Van Winkle 08.11.03 at 2:10 am

I’m intrigued. I’ve posted an extended response here


Stefanie Murray 08.11.03 at 9:21 am

Eternal life plus human fecundity equals a crushing population problem right quick. So here’s a wee thought: would Glenn and others looking for the No Exit clause be willing to stipulate in return that they would not breed?


Curtiss Leung 08.11.03 at 4:12 pm

On the prospect of immortality in machine form, I’m surprised that no one has dragged out this old chestnut yet:

Death is the mother of beauty; hence from her,
Alone, shall come fulfillment to our dreams
And our desires. Although she strews the leaves
Of sure obliteration on our paths,
The path sick sorrow took, the many paths
Where triumph rang its brassy phrase, or love
Whispered a little out of tenderness,
She makes the willow shiver in the sun
For maidens who were wont to sit and gaze
Upon the grass, relinquished to their feet.
She causes boys to pile new plums and pears
On disregarded plate. The maidens taste
And stray impassioned in the littering leaves.

Wallace Stevens, "Sunday Morning"

Mortality is the price to be paid for the experience of beauty. Immortality would be life without beauty, and so a life not worth living.

IIRC, along the same lines, Stevens also wrote in "Esthetique du Mal" that the greatest poverty is not to live in a physical world, and that if such non-physical beings, like ghosts, actually existed, they would be unable to tell their desires from their despair.

So the transhumanist ideal is a life without beauty and one where longing would only ever be misery. Transhumanists, beware—you may just get what you want.


Stephen J Fromm 08.12.03 at 6:59 pm

My chief concern with transhumanism isn’t the philosophical questions people like to bandy about (e.g. Bill McKibben, in his recent book, _Enough_).

Rather, I think the main *PRACTICAL* point is that germ-line manipulation (admittedly only one general way to try to “improve” humanity) is a TERRIBLE idea.

Practical, because messing with the natural frequency of genes could have unpredictable social consequences. I’m sure every middle class parent would want their child to be beautiful, brilliant, assertive, a leader… And in the aggregate, we might end up with a society of a**holes.


Ratherworried 08.12.03 at 7:59 pm

Hilton dealt with the issue of immortality in Lost Horizon. He described it in such glorious terms that the name Shangri-La has become synonymous with paradise. His immortality was caused by the unique food and environment of the Himalayan steppes; today we look to technology to provide the tool.

In Lost Horizon the characters all had different reactions to the concept: some fought it and wanted to escape, others eventually embraced it because it solved life’s problems for them, and a few relished the opportunity.

I’ve always seen the sentiment of Hilton’s Lost Horizon, “…time to study and learn the least important information…” as describing paradise to some and hell to others.

From reading the many comments posted here I gather that Hilton describes well the mixture of emotions toward perpetual life and learning. It has been many years since I’ve read Lost Horizon; I think I’ll pick it up again tonight and see how the discussion here measures against the book. My recollection is that it is quite close, and I find that intriguing.


Brad DeLong 08.13.03 at 8:43 am

>>It’s rather to the implied philosophy of personality and identity that lies behind transhumanism as I see it; this aspiration to a world in which one can swap in and out extensions to cognition like plug’n’play peripherals.<<

But we do already. We have since the invention of writing. What are video, radio, printing, writing but extensions to cognition? What is learning a discipline – music, painting, sociology, physics, public health – but a swapping in of an extension to cognition?

If it is the case that there is a time to call a halt to this process, why should it be now? Why wasn’t it before the development of modern IT, or before printing, or before writing?

Ever since the first descendants of the primeval slime crawled out of the swamp, stood upright, lit their brands, and brandished their hand axes at the stars while exclaiming “I AM HUMAN!” we have striven to amplify and extend the only significant skills that evolution has given us: our minds, and our opposable thumbs with which to carry out the projects that our minds think of. To strive to extend and amplify cognition is the essence of being a homo sapiens, rather than an African plains ape.


Jeffrey Gordon 08.14.03 at 3:49 am

Transhumanists are betting on one very simple thing: humans can outdesign four billion years of evolution. I’m betting against. I personally believe the Nanotechnologists/Genetic Engineers are vastly underestimating exactly how non-reductionistic the human design is. If we can’t divide and conquer we’re not going to be able to engineer it. We’ll only be able to “evolve” it, which will probably not be any faster than natural evolution.


Neel Krishnaswami 08.14.03 at 10:28 pm

We already outdesign nature in many respects, and have done so for thousands of years. There are no animals that make use of a wheel and axle system for propulsion. There are no animals that make use of ceramics or pure metals as structural materials. There are no animals that make use of combustion as an energy source. Evolution is cool, but it’s a hill-climbing process — it can get stuck in local optima really easily, and high-energy processes (like, say, smelting) aren’t going to be optima that will be found very rapidly.


Jeffrey Gordon 08.26.03 at 2:39 am

All good points … but all of these examples have one thing in common: they require a factory. When I say I don’t think we can outdesign nature, I’m not talking about any given specific trait, which we have definitely outdone, but the big picture. Natural evolution has had one giant constraint: no factory. Nature’s designs have had to be essentially self-produced and self-maintained. Humans, up till now, have not had this constraint: we can make and repair stuff in a factory. We’ve been able to get away with this for some time, because our designs have been relatively simple. However, what the Nano/Genetic engineers are envisioning are systems thousands of orders of magnitude more complex than anything humans have created before. Once we try to do this, the systems will have to be self-organizing, because otherwise it would take an eternity to make or repair one. And self-organizing means no factory. Moore’s second law is a foreshadowing of this.


Greg Slade 01.13.04 at 11:21 am

Actually, I would disagree that these systems are thousands of times more complex than what we have already created; biological systems are just that, systems full of mechanical pieces. If you examine any one of these pieces, it is not that complicated. There are technical challenges, to be sure (the protein folding problem, for example), but they are not insurmountable ones. As for requiring a factory, this is just a shift of perspective: instead of thinking that human invention has required factories up until now, you need to start looking at the human body as the factory.

As for outdesigning nature, I am sure it can be done through a lot of incremental upgrades, and the accumulated knowledge of those upgrades is what will in time allow a new system to be designed. Nature generally takes the slightly more advantageous path with a relatively low energy cost; in short, nature designs for efficiency first and performance second. With the limited energy sources available within biological cells this makes perfect sense. Humans, however, may well be able to design a less efficient system with much greater advantages than a natural system will ever have on its own – for example, one that produces a maximum effect for a maximum energy cost. This could be adapted from other already existing systems, e.g. the regeneration systems of the salamander family. The cost will of course be much higher than nature would ever design on its own; we may have to eat a great deal more, but I don’t think most people would have a problem with this if the designed system conferred significant benefits such as extremely rapid healing, regeneration, or some other kind of increased physical or mental capability.

As for the self-organising argument: I happen to agree with this one. Self-organisation is a nice plus when you are dealing with the sheer volume of machinery that would be required in these systems; at the very least it would be highly beneficial. But I disagree that it is critical. It should be quite possible to design equipment that links into some kind of personal area network for instructions and is controlled from a central location. I think a mixture of the two methods will probably result at first: partial self-organisation, with some manual input required for fine-tuning, depending on what systems are developed (I am talking primarily about nano systems). Biological systems are likely to be largely self-organised, since they link into existing machinery with its own control mechanisms, but they will be more open to incremental upgrades than nanosystems, through methods such as introducing an artificial plasmid into a cell rather than introducing new code into the main chromosomal structures.

Comments on this entry are closed.