But such a form as Grecian goldsmiths make

by Henry Farrell on July 28, 2003

Chris "wrote":https://www.crookedtimber.org/archives/000292.html a couple of days ago about his sense of discomfort at

bq. an attitude that sees the non-human world as merely an instrument for or an obstacle to the realization of human designs and intentions.

I’ve been interested for a while in a small group of people who take that attitude one step further. “Transhumanists” and “extropians” are extreme techno-libertarians who argue that _human_ nature is an obstacle to the realization of human designs and intentions.

In their "own words":http://www.transhumaninstitute.com/cgi-bin/pageserver.cgi?page=what_is_transhumanism

bq. Transhumanism is an extension of humanism, the philosophy of emphasizing human ability and intelligence as the main shaper of our personal reality and the realities of others. Transhumanism, like humanism, rejects superstition and religion in favor of empiricism and reason. However, transhumanism goes one step further than humanism and argues that to obtain true freedom, human beings deserve the opportunity to become more than human through the use of science and technology, in order to improve our mental, emotional, and physical capacities.

Transhumanists would like to escape human beings’ ‘natural’ limitations through various technological tricks: cryonic preservation, genetic engineering for immortality, human-machine interfaces, personality re-engineering, the uploading of personalities to computers. They’re fascinated with Vernor Vinge’s idea of the "Singularity":http://www-rohan.sdsu.edu/faculty/vinge/misc/singularity.html – a posited event in the near future in which humans either invent or become super-intelligences, and all the rules change ("Charlie Stross":http://www.antipope.org/charlie/blosxom.cgi accurately describes this as “the Rapture for nerds”). In short, they want to escape the human condition for something else.

All of these ideas make for some good science fiction, but the aspirations behind them make my skin creep; they bring out the conservative in me. There’s something deeply unpleasant – and scary – about the assumption that human nature is an engineering problem. Not only does it hark back to some of the nastier elements of 1930s socialist theory, as Chris has already pointed out (although those guys preferred eugenics), but it seems fundamentally ungrounded in any sense of what human beings are, and what that says about what they should be. And I don’t think that you need to make strong claims about natural law, or an Aristotelian _telos_ for human beings, to be concerned.

This said, these are real questions, even if the big predictions of Singularities and the like turn out to be utterly bogus. We already face some of them today. For example, drugs like Prozac allow a sort of primitive psychoengineering. I’m only vaguely familiar with bioethics, but this seems to me to be a rather tricky set of questions. How do we decide which ‘enhancements’ to human capacities are praiseworthy, and which beyond the pale? I’ve no idea: over to you.



Nick Blesch 07.28.03 at 7:20 pm

Well, let’s just hope that anything that’s possible is also praiseworthy – it would be very difficult indeed to imagine a world in which something which was possible remained unrealized.


back40 07.28.03 at 8:22 pm

Chris’s earlier blog entry “Evolving altruism” is as relevant as Orwell on food technology and modernity, since they both deal with what it is to be human.

An idea that a number of CTers seemed to find sensible is that genes and culture coevolved to their present state, and by implication the process continues. The culture that excites transhumanists seems certain to have an eventual effect on genes, but we can’t know what those effects will be.

The inability to know that future, to see beyond the singularity where by definition the rules change, is what makes the yearning for that future analogous to a rapture.

One possibility is that there will be a proliferation of sapient species, with attendant concerns about legal and ethical norms and preferences. Our quaint struggles with race, gender and ethnicity are child’s play compared to the potential problems to come.


Loren 07.28.03 at 10:05 pm

I’m often impressed by the speculations about consciousness, identity, permanence, uniqueness, physical form, etc. that I dig out of so-called “hard” science fiction (esp. Greg Egan and Neal Stephenson), and I occasionally get the creepy feeling I think Henry describes. Furthermore, I think that much of the extropian “philosophy” is tedious and silly. But they do make one very simple, intuitive point about data preservation and network design: we have this inordinately complex computing device sitting on our necks, connected to an incredible array of input/output devices, yet we have no way to back up the software and data structures we’ve accumulated over a lifetime of experience and reflection (at least none that preserves the software in its functioning state — sorry, beating this metaphor to death here) … that’s a frustrating design flaw for anyone who’s ever set up and maintained a server and network!! 8)


sander 07.28.03 at 11:29 pm

I don’t think it’s possible to escape the human condition, but an attempt to alter it should not be constrained by notions of humanness or naturalness so long as those remain unexamined as to the many assumptions — and thus intentions — that may be (are surely) hidden in them.

When you say that the assumption that human nature is an engineering problem is fundamentally ungrounded in a sense of what human beings are, my question is: what does your concept of humanness look like? Because I don’t know. To me, it feels like a good thing to have an attitude towards my own identity where I can easily embrace change; and I’ve found that when I have convictions about my real nature that are too strong, I inhibit healthy change. OK, that’s an “intramental” example. But I think it’s interesting to see how cultural value-systems that heavily stress an explicitly defined form of humanness have a harder time innovating than those that leave room for interpretation and reinterpretation by the humans themselves.

Sure, Prozac is psychoengineering, but so are alcohol and caffeine and giving to charity and going dancing on Saturday night. Sure, there’s a difference, and we may learn a lot when we try to define those distinctions, but the basic project of enjoying your life more by tweaking your brain with added chemicals or practiced behaviour is as old as, well, *really* old. And it’s a good thing.


Stentor 07.29.03 at 12:05 am


Henry 07.29.03 at 12:27 am

Thanks everyone – one of the nice things about blogging is being able to express vague feelings of ickiness, throw them at people, and then figure out better what you really think from their responses.

Sander’s point is fair enough – I’m proceeding from a squishy and ill-defined sense of what is human, and what isn’t. But I don’t find his answer satisfactory – to say that notions of humanness are rife with loaded and unexamined assumptions isn’t to show that we should instead simply embrace change. Some kinds of change might definitely not be for the better.

I think I can better define what it is that I find slightly repulsive about the transhumanists – it’s the idea that not only our abilities can and should be edited, but so too our emotions and our driving forces. Loren brings up Greg Egan’s work – I was actually thinking of his fiction all the while I was writing this post. I find his stuff positively unpleasant to read, because of his assumption (for example, in _Diaspora_) that human minds can be edited to get rid of all sorts of redundant caveman thinking, and that this is a good thing. That religions are nothing more than pernicious self-referential, self-reproducing memes. The man’s a Bright _avant la lettre_. And he and others like him scare me.


Invisible Adjunct 07.29.03 at 12:53 am

Very interesting thread. These ideas bring out the conservative (and dare I say it, the prelapsarian Catholic) in me, too. I think the need to grapple with such issues is one of the prices we pay for giving up on notions of natural law. And since I can’t argue that we should return to natural law notions of human nature, I am of course forced to confront just such issues.


sander 07.29.03 at 1:06 am

Ah no, Henry, that wasn’t what I meant — I’m not very used to expressing myself in writing yet, you see.

Notions of humanness are definitely rife with loaded and unexamined assumptions, and I should first of all respect those assumptions, because they’ve brought me where I am, and this is good. Transhumanism is often taken for a desire to radically do away with those assumptions, and sure enough there is a big fringe of Singularitarians who think this will be easy and great. I don’t agree with that. I think that some changes could be good and useful, and if my assumptions keep me from trying to effect a useful change, I should challenge that assumption and, if possible, do away with it. I like the way transhumanism gets me to consider a lot of those assumptions, to think about why I do or don’t hold them central to my humanness, and to consider whether there could be a different kind of humanness built on different assumptions.

I generally agree on the creepiness of Egan — not only does his style get extremely didactic, infodumpy and tiresome, he also seems to have something against Buddhism, which I deplore. Ha, yeah, sure — he challenges something I like and I think that’s creepy, there we go. I wonder if he would come out as a Bright; I think Brights are practicing identity politics, and Egan seems in his writings pretty much concerned with the mutability (ephemerality?) of identity, so he might not take that too seriously.

Oh by the way: by even reading this blog I’ve been changing and thus editing my mind — the editing itself is not what you find creepy, I presume. Is it the technology that we assume would be used for it that is more creepy, or the fact that Egan targets specific memes for editing out?


Krupnick 07.29.03 at 1:15 am

Henry, if you haven’t already, you ought to read Montaigne, especially the essays “On Experience” and “On Drunkenness”. His view of humans and human nature, and how it all relates to ambition and the pursuit of perfection, is very similar to yours, I think. One of his main ideas is that by trying to transcend human nature we become subhuman or unhuman, not superhuman. And I agree with that. Plus he is just a pleasure to read. One quote: “And upon the highest throne in the world, we are seated, still, upon our arses.”


krupnick 07.29.03 at 1:26 am

Wait, a better and more transparently relevant quote: “They want to be beside themselves, want to escape from their humanity. That is madness: instead of changing their Form into an angel’s they change it into a beast’s; they crash down instead of winding high. Those humors soaring to transcendency terrify me as do great unapproachable heights.” Not much argument for his view in this quote, but nevertheless a pretty good expression of it.


Brian 07.29.03 at 1:51 am

My favorite website about this kind of thing is David Pearce’s Hedonistic Imperative, where he argues that eventually genetic engineering will let us rewire our brains to escape certain cycles (related to brain chemicals like dopamine) that cap the pleasure we normally experience. When something good happens, rather than just feel happy, we feel like we just took a hit of crack, and we don’t become indifferent to it, because the brain activity that would numb such pleasure is shut off. Well, that’s not quite what he says. He puts it like this: “The Hedonistic Imperative outlines how genetic engineering and nanotechnology will abolish suffering in all sentient life.

The abolitionist project is hugely ambitious but technically feasible. It is also instrumentally rational and ethically mandatory. The metabolic pathways of pain and malaise evolved because they served the fitness of our genes in the ancestral environment. They will be replaced by a different sort of neural architecture. States of sublime well-being are destined to become the genetically pre-programmed norm of mental health. The world’s last unpleasant experience will be a precisely dateable event.

Two hundred years ago, powerful synthetic pain-killers and surgical anesthetics were unknown. The notion that physical pain could be banished from most people’s lives would have seemed absurd. Today most of us in the technically advanced nations take its routine absence for granted. The prospect that what we describe as psychological pain, too, could be banished is equally counter-intuitive. The feasibility of its abolition turns its deliberate retention into an issue of social policy and ethical choice.”

and seems to think that if we thought regular drugs were good, we ain’t seen nothing yet. Of course, all the bad effects, such as addiction and stupor, would be cut out, so we can be super productive while constantly enjoying all that better-than-any-drug bliss.

Sign me up.


back40 07.29.03 at 1:54 am

For sci-fi fans:

David Zindell’s Neverness series deals with transhumanism of several sorts in the context of religion and epistemology. It’s hard SF (he’s a mathematician), but it has literary qualities lacking in Egan’s work.

Ken MacLeod’s Fall Revolution series deals with transhumanism in the context of leftist political history and speculation. It’s hard SF (he’s a programmer), with even harder politics. His Engines of Light series deals with transhumanism too, though less explicitly. MacLeod’s work also has some literary qualities.

Adding to the quote quotient:

“Trying to define yourself is like trying to bite your own teeth.”
—Alan Watts


clew 07.29.03 at 2:05 am

Rodney Brooks’s _Flesh and Machines_ surprised me a lot by delicately avoiding creepiness. Where I expected him to extol the machine-like possibilities of future humans, he described current humans as more mechanical than we think, and was very affectionate about it.


Jeremy Osner 07.29.03 at 2:22 am

Nice Montaigne quotes, Krupnick — Loren’s frustration at being unable to back up her wetware makes me think: but what are we doing right now? Well, my initial thought was that’s what the pencil and paper are for — as I read down the list and thought about that, I expanded the “backup” metaphor to other ways of recording data; and just now, as I started to type, I realized that it really applies to all communication — data transfer between humans is a (messy, error-prone) means of backup.


Keith M Ellis 07.29.03 at 4:22 am

Huh. Yet another example that the tendencies toward conservatism or progressivism are the core organizing intellectual principles.

That it is “best” to conserve some presumed essential “humanness” or, conversely, that it is “best” to progress beyond it are two ideas that seem to me to be naive and intellectually lazy. But common.


Loren 07.29.03 at 4:33 am

Jeremy, note that the pencil-and-paper method of backup stores data, and may even allow some complex structure, but doesn’t necessarily leave the software in a functional state (again, beating the metaphor to death, or at least stretching it way beyond reasonable expected tolerance). That is, the scribblings and legacies we leave on blogs and usenet threads don’t preserve what we take to be our consciousness. Oh, and I’m a “him” not a “her” (part of my identity that doesn’t get translated clearly by this particular data dump protocol)! Okay, long and rambling, speculative and far-from-rigorous musings follow: Identity must be bound up in at least some measure, probably intimately, with our visceral experiences, the tapestry of our corporeal lives — not a daring conjecture, I suppose, but it raises the interesting (unsettling? creepy?) question of whether radically different ways of interacting with an environment might lead to very different sorts of consciousness (sentience?), such that meaningful communication is practically unfeasible (I add “practically” because I’ve always found Davidson’s thoughts on translatability rather appealing: given some shared referents, roughly convergent ways of framing experiences of those referents, and time and sincerity, meaningful utterances can be translated across conceptual schemes). Let me throw out the following half-baked and absurdly speculative query (and hopefully lay out some grounds for thinking that this query is relevant here). If self-awareness and intelligence could develop in entities with very different ways of interacting with their environment (on distributed networks, for instance, or near intense gravitational fields), what could *we* say with (to?) them about ethics? Chris and Brian raised Jerry Cohen’s ongoing assault on Rawls and self-styled constructivists (especially those who endorse a sort of Quinean holism).
Cohen has recently published a much-presented paper on why the ultimate normative principles we affirm cannot be dependent on facts. If you say “I affirm normative principle P because of some state of the world F,” then Cohen conjectures that he can always find some normative principle P’ that conditions your grounding of P in F, yet does not itself depend upon any fact for its affirmation by you. And if you find some grounding fact F’ for P’, Cohen says he’ll find P’’, and so on, until there are no more facts to ground your principles. To tie this back to Henry’s thread: Cohen reflects that this sort of independence will hold for any normative principle, even if we imagine situations very different from those familiar to us. Think about beings much like us, but who only live for twenty-four hours: Cohen is a bit murky in this example, but he seems to think that facts about their lives will not bear on the principles they ultimately affirm. Perhaps (although I wonder about more extreme hypotheticals — beings who live backward in time, relative to our experience? or who can move through time but not space?). But suppose Cohen is correct (and his limited argument seems plausible to me): even for such “here today, gone tomorrow” beings, normative principles will not ultimately be grounded in factual claims, beliefs about states of the world. But could we ever come to any sort of principled agreement with such beings on matters of ultimate normative principles? In more extreme cases, of radically different ways of interacting with the world, could we even count on finding reliable ways to frame such debates that are mutually intelligible, given radically different life experiences?
I have an as-yet-undeveloped suspicion that this points to the real beef Cohen has with Rawls (that justice might be about something other than our ultimate normative principles, yet still be more fundamental than mere regulative principles for institutions and day-to-day affairs), but explaining why would require more thought than I’ve yet given the matter.


ApollosDaughter 12.06.03 at 5:19 am

I look forward to the day I can get bioluminescent skin in different colours, and blue hair, and a chip, and new organs. Really, what’s the worry? Humans don’t adapt to the environment any more; what’s left is to perfect, without eugenics, or improve, and at the very least improve on our present state. How about if we were green with chlorophyll — no need for food, no waste; we would have just solved a third world of problems. We can adapt to the environmental changes before they happen. If we can improve ourselves and do a world of good simultaneously, then what’s the harm? I don’t need flesh and blood to be human; I am.
As for singularities, there is already too much apocalyptic thinking out there; no need to add to it. Drivel.
Nice to see a lively discourse without rhetoric flying — better than Congress.


motorola 01.13.04 at 9:56 pm

My hobby is visiting guestbooks. It’s always quite interesting, and it reflects what people on the Internet really think. Yours was interesting too! Until next time. All the best for the new year. Sorry for my English — I’m from Germany.

Comments on this entry are closed.