Singularity draft review

by John Quiggin on October 5, 2005

My draft review of Ray Kurzweil’s Singularity is below. Comments much appreciated, and thanks to commenters on earlier posts on this topic.

Update Lots of great comments, thanks. This will improve the final version a lot, and is one of the ways in which blogging works really well for me. Keep ‘em coming.

I’ve finally received my copy of Ray Kurzweil’s Singularity, which was posted to me by its American publisher six weeks ago …

The title refers to the claim that the process of technological change, notably in relation to computing, biotechnology and nanotechnology, is accelerating to the point where it will produce a fundamental, and almost instantaneous, change in what it means to be human, arising from the combination of artificial intelligence and the use of biotechnology to re-engineer our bodies and minds.

The term Singularity, used to describe this event, apparently arose in discussions between the mathematicians Stanislaw Ulam and John von Neumann. The idea of the Singularity was popularised in the 1980s and 1990s by mathematician and science fiction writer Vernor Vinge, and later by Kurzweil, a prominent technologist and innovator.


Kurzweil’s argument has two main components. The first is the claim that continuing progress in microelectronics and computer software will, over the next few decades, realise the ultimate ambitions of the artificial intelligence (AI) program, producing computers that first equal, and then dramatically outstrip, the reasoning capacities of the human mind.


The key to all this is Moore’s Law. This is the observation, first made in the mid-1960s by Gordon Moore, who went on to co-found Intel, that computer processing power, roughly measured by the number of transistors on an integrated circuit, doubles every eighteen months to two years. Over the intervening forty years, the number of transistors on a typical integrated circuit has gone from less than a thousand to hundreds of millions.
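As a rough sanity check on that arithmetic (a sketch using the round numbers quoted above, not actual industry data):

```python
# Rough check on the Moore's Law arithmetic: a doubling every two years
# over forty years is 2**20, roughly a million-fold increase.
def projected_count(start_count: int, years: int, doubling_years: float = 2.0) -> int:
    """Project a transistor count forward under a fixed doubling time."""
    return int(start_count * 2 ** (years / doubling_years))

# Starting from ~1,000 transistors in the mid-1960s:
print(f"{projected_count(1_000, 40):,}")  # about a billion -- the same
# order of magnitude as the hundreds of millions on a 2005-era chip
```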


No exponential trend can continue indefinitely, and the end of the expansion described in Moore’s Law has been predicted on many occasions, often with reference to seemingly unavoidable constraints dictated by the laws of physics. The constraint most commonly cited at present relates to the size of components. On present trends, transistors will be smaller than atoms within 15 years or so; this does not appear to be feasible, and current industry plans only extend to two or three more generations of progress, enough for perhaps a 100-fold increase in computing power.


Not surprisingly, Kurzweil dismisses such talk, arguing that just as transistors displaced vacuum tubes and integrated circuits displaced discrete transistors, new computing paradigms based on quantum effects will allow continued progress along the lines of Moore’s Law right through this century, and well past the point at which computers are powerful enough to permit functional emulation of human brains.


The second part of Kurzweil’s argument is based on three overlapping revolutions in genetics, nanotechnology and robotics. These revolutions are presented as being in full swing today, but in any case it is assumed that AI will smooth out any rough spots. Between them, Kurzweil argues, developments in these three fields will transform medicine, science, finance and the economy. Although all sorts of miracles are promised, the most dramatic is human immortality, achieved first through dramatic extensions in lifespans delivered by nanorobots in our bloodstreams and, more completely, by the ability to upload ourselves into infinitely-lived computers.


Kurzweil has attracted passionate support from a small group of people and derision from a much larger group, particularly within the blogosphere, which might have been expected to sympathise more with techno-utopianism. The wittiest critique was probably that of Daniel Davies at the Crooked Timber blog (disclosure: I’m also a blogger there), who modified Arthur C Clarke’s observation about technology and magic to produce the crushing ‘Any sufficiently advanced punditry is indistinguishable from bollocks’. Riffing off a link from Tyler Cowen on the expected value of extreme forecasts, and a trope popularised by Belle Waring, Davies outbid Kurzweil by predicting not only that all the Singularity predictions would come true, but that everyone would have a pony (“Not just any old pony, by the way, but a super technonanopony!”).


Before beginning my own critical appraisal of the Singularity idea, I’ll observe that the fact that I’ve been waiting so long for the book is significant in itself. If my great-grandfather had wanted to read a book newly-published in the US, he would have had to wait six weeks or so for the steamship to deliver the book. A century later, nothing has changed, unless I’m willing to shell out the price of the book again in air freight. On the other hand, whereas international communication for great-grandad consisted of the telegraph, anyone with an Internet connection can now download shelves full of books from all around the world in a matter of minutes and at a cost measured in cents rather than dollars.


This is part of a more general paradox, only partially recognised by the prophets of the Singularity. Those of us whose lives are centred on computers and the Internet have experienced recent decades as periods of unprecedentedly rapid technological advance. Yet outside this narrow sector the pace of technological change has slowed to a crawl, in some cases failing even to keep pace with growth in population. The average American spends more time in the car, just to cover the basic tasks of shopping and getting to work, than was needed a generation ago, and in many cases, travels more slowly.


The advocates of the Singularity tend either to ignore these facts or to brush them aside. If there has been limited progress in transport, this doesn’t matter, since advances in nanotech, biotech and infotech will make existing technological limits irrelevant. Taking transport as an example, if we can upload our brains into computers and transmit them at the speed of light, it doesn’t matter that cars are still slow. Similarly, transport of goods will be irrelevant since we can assemble whatever we want, wherever we want it, from raw atoms.


Much of this is unconvincing. Kurzweil lost me on biotech, for example, when he revealed that he had invented his own cure for middle age, involving the daily consumption of a vast range of pills and supplements, supposedly keeping his biological age at 40 for the last 15 years (the photo on the dustjacket is that of a man in his early 50s). In any case, nothing coming out of biotech in the last few decades has been remotely comparable to penicillin and the Pill for medical and social impact (a case could be made that ELISA screening of blood samples was crucial in limiting the death toll from AIDS, but old-fashioned public health probably had a bigger impact).


As for nanotech, so far there has been a lot of hype but little real progress. This is masked by the fact that, now that the size of features in integrated circuits is measured in tens of nanometers, the term “nanotech” can be applied to what is, in essence, standard electronics, though pushed to extremes that would have been unimaginable a few decades ago.


Purists would confine the term “nanotechnology” to the kind of atomic-level engineering promoted by visionaries like Eric Drexler and earnestly discussed by journals like Wired. Two decades after Drexler wrote his influential PhD thesis, any products of such nanotechnology are about as visible to the naked eye as their subatomic components.



Only Kurzweil’s appeal to Moore’s Law seems worth taking seriously. There’s no sign that the rate of progress in computer technology is slowing down noticeably. A doubling time of two years for chip speed, memory capacity and so on implies a thousand-fold increase over twenty years. There are two very different things this could mean. One is that computers in twenty years’ time will do mostly the same things as at present, but very fast and at almost zero cost. The other is that digital technologies will displace analog for a steadily growing proportion of productive activity, in both the economy and the household sector, as has already happened with communications, photography, music and so on. Once that transition is made, these sectors share the rapid growth of the computer sector.



In the first case, the contribution of computer technology to economic growth gradually declines to zero, as computing services become an effectively free good, and the rest of the economy continues as usual. Since productivity growth outside the sectors affected by computers has been slowing down for decades, the likely outcome is something close to a stationary equilibrium for the economy as a whole.



But in the second case, the rate of growth for a steadily expanding proportion of the economy accelerates to the pace dictated by Moore’s Law.  Again, communications provides an illustration – after decades of steady productivity growth at 4 or 5 per cent a year, the rate of technical progress jumped to 70 per cent a year around 1990, at least for those types of communication that can be digitized (the move from 2400-baud modems to megabit broadband in the space of 15 years illustrates this).  A generalized Moore’s law might not exactly produce Kurzweil’s singularity, but a few years of growth at 70 per cent a year would make most current economic calculations irrelevant.
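The 70 per cent figure can be roughly backed out of the modem example. A sketch, where the endpoint speeds are illustrative assumptions and the implied rate is quite sensitive to them:

```python
# Compound annual growth rate implied by going from a 2400 bit/s modem
# to megabit-class broadband over roughly fifteen years.
def implied_annual_growth(start: float, end: float, years: float) -> float:
    return (end / start) ** (1 / years) - 1

# A 1 Mbit/s endpoint implies ~50% a year; an 8 Mbit/s connection,
# also plausibly "megabit broadband", implies ~72% a year.
print(f"{implied_annual_growth(2_400, 1_000_000, 15):.0%}")
print(f"{implied_annual_growth(2_400, 8_000_000, 15):.0%}")
```

Either endpoint gives a rate that dwarfs the 4 or 5 per cent a year of the analog era, which is the point that matters for the argument.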



One way of expressing this dichotomy is in terms of the aggregate elasticity of demand for computation. If it’s greater than one, the share of computing in the economy, expressed in value terms, rises steadily as computing gets cheaper. If it’s less than one, the share falls. It’s only if the elasticity is very close to one that we continue on the path of the last couple of decades, with continuing growth at a rate of around 3 per cent.
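The dichotomy can be made concrete with a toy constant-elasticity sketch (an illustration of the logic, not a calibrated model): if demand for computation has constant price elasticity e, a fall in price to a fraction p of its initial level multiplies spending by p^(1−e).

```python
# Toy constant-elasticity demand for computation.
# Quantity q = p**(-e), so spending p*q = p**(1 - e): a falling price
# raises spending when e > 1, lowers it when e < 1, leaves it flat at e = 1.
def relative_spending(elasticity: float, price_ratio: float) -> float:
    """Spending relative to its initial level after price falls to price_ratio."""
    return price_ratio ** (1 - elasticity)

thousandfold_cheaper = 1 / 1000  # roughly twenty years of Moore's Law
for e in (0.5, 1.0, 1.5):
    print(f"elasticity {e}: spending x{relative_spending(e, thousandfold_cheaper):.3g}")
```

With e = 1.5 the spending share grows more than thirty-fold; with e = 0.5 it shrinks to about a thirtieth; only e = 1 exactly holds the share constant.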


This kind of result, where only a single value of a key parameter is consistent with stable growth, is sometimes called a knife-edge. Reasoning like this can be tricky – maybe there are good reasons why the elasticity of demand for computation should be very close to one. One reason this might be so is if most problems eventually reach a point, similar to that of weather forecasting, where linear improvements in performance require exponential growth in computation (such problems are said to be exponential in complexity).


If the solution to a problem involves components that are exponential (or worse) in complexity, initial progress may be rapid as the easier, polynomial components of the problem are solved, but progress on the exponential components will at best be linear, even if the cost of computation falls exponentially.
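Weather forecasting makes this concrete. Suppose, purely for illustration, that each extra day of reliable forecast horizon costs ten times as much computation as the last. Then the affordable horizon grows only logarithmically in the computing budget, that is, linearly in time under Moore’s Law:

```python
import math

# If a forecast horizon of h days costs about 10**h units of computation,
# the horizon affordable with budget B is log10(B): exponential growth
# in B buys only linear growth in h.
def affordable_horizon_days(budget: float, cost_base: float = 10.0) -> float:
    return math.log(budget, cost_base)

for year in (0, 20, 40):          # budget doubling every two years
    budget = 2.0 ** (year / 2)
    print(year, round(affordable_horizon_days(budget), 2))
# the horizon gains a fixed ~3 days every twenty years, however fast budgets grow
```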


So far it seems as if the elasticity of demand for computation is a bit greater than one, but not a lot. The share of IT in total investment has risen significantly, but the share of the economy driven primarily by IT remains small. In addition, non-economic activity like blogging has expanded rapidly, but also remains small. The whole thing could easily bog down in an economy-wide version of ‘Intel giveth and Microsoft taketh away’.


In summary, I’m unconvinced that the Singularity is near. But unlike the majority of critics of Kurzweil’s argument, I’m not prepared to rule out the possibility that information technology will spread through large sectors of the economy, producing unprecedentedly rapid economic growth. Even a small probability of such an outcome would make a big difference to the expected returns to investments, and would be worth planning for. So it’s certainly worthwhile reading Kurzweil’s book and taking the time to consider his argument.


At this stage, though, the Singularity is still best considered as science fiction. If you really want to get a feel for the ideas that drive discussion of the Singularity, read Ian McDonald’s River of Gods or, better still, Charles Stross’ Accelerando.




{ 61 comments }

1

Cristobal Senior 10.05.05 at 2:20 am

Very good analysis of Kurzweil’s Singularity. It seems that he wrote it after watching re-runs of the old Star Trek series.
Cristobal Senior

2

anno-nymous 10.05.05 at 2:41 am

One reason this might be so is if most problems eventually reach a point, similar to that of weather forecasting, where linear improvements in performance require exponential growth in computation (such problems are said to be polynomial in complexity).

If the solution to a problem involves components that are polynomial (or worse) in complexity, initial progress may be rapid as non-polynomial components of the problem are solved, but progress with the polynomial component will at best be linear, even if the cost of computation falls exponentially.

Ummm… Don’t problems of polynomial complexity require at most *polynomial* increases in computation with respect to linear improvements in performance? Isn’t that, like, why they’re called “polynomial”?

3

Tyler Emerson 10.05.05 at 2:48 am

“No exponential trend can continue indefinitely,…”

Ray explores at great length the misappropriation of this common statement to computation. Several independent authors, notably Seth Lloyd, have shown that computational growth can continue “forever” for pragmatic consideration by analyzing the in principle computational limits of matter.

See TSIN p. 131 “How Smart Is a Rock?” and p. 133-4 “The Limits of Nanocomputing.”

“Not surprisingly, Kurzweil dismisses such talk [i.e. the potential limits of computational growth],…”

Saying he “dismisses such talk” implies he ignored rather than analyzed the issue. On p. 111-122 he supports at length why he places significant confidence in computational growth continuing smoothly (exponentially) rather than slowing after integrated circuits have run their course. The generalized Moore’s Law, where growth is measured in calculations per second per $1000 over five computing mediums, shows that this exponential trend has held for 100 years. I think you need extraordinary evidence to support the argument that this 100 year pace will slow rather than continue, or will need to find legitimate fault in the trend analysis. I fear that the analyzed computational limits of known physical laws are also weighted against this argument.

The work of Ed Fredkin, Tommaso Toffoli, Charles Bennett, Rolf Landauer, K. Eric Drexler, and Seth Lloyd is relevant here.

I want to express my thanks for the TSIN review and discussion here. The book’s thesis deserves rigorous rather than derisive analysis.

Best,
Tyler

4

Luc 10.05.05 at 3:13 am

From memory, I’d say that the divide is between the polynomials (with constants in the exponents) and the exponentials (with the variable in the exponent). Easy computational problems are polynomial and hard ones are exponential.

5

John Quiggin 10.05.05 at 3:23 am

Slack of me. I’ll fix this ASAP.

6

yabonn 10.05.05 at 3:36 am

Not surprisingly, Kurzweil X 2. Rather predictable chap, isn’t he?

It didn’t look to me the repetition was intented, oder?

7

John Quiggin 10.05.05 at 4:25 am

“It didn’t look to me the repetition was intented, oder?”

Fortunately, my team of unpaid editors picks up this kind of infelicity before publication :-)

8

Stephen Bryant 10.05.05 at 4:53 am

The “ability to upload ourselves into infinitely-lived computers” would not offer immortality. Uploading is copying. The original remains on the glass. Without this disingenuous confusion we’re just talking about AI.

9

abb1 10.05.05 at 5:06 am

Not surprisingly, Kurzweil dismisses such talk, arguing that just as transistors displaced vacuum tubes and integrated circuits displaced discrete transistors, new computing paradigms based on quantum effects will allow continued progress along the lines of Moores Law right through this century, and well past the point at which computers are powerful enough to permit functional emulation of human brains.

I don’t know what’s in the books, but I heard him on the radio saying, IIRC, that he’s more conservative than many of his comrades and he doesn’t expect this trend to continue indefinitely.

Which is, I think, an important point. Tyler above said: I think you need extraordinary evidence to support the argument that this 100 year pace will slow rather than continue…. Well, if you don’t expect the trend to continue indefinitely (do you, Tyler?), then why do you expect it to continue for the next 30 years (or the next 5 minutes for that matter)? Seems to me that it’s you who needs extraordinary evidence…

10

Matt Daws 10.05.05 at 5:38 am

‘Intel giveth and Microsoft taketh away’

For me, that sums up the IT industry pretty well. Today’s PCs are faster and generally more stable than anything that’s come before, and yet they really seem to struggle to give much productivity boost to workers. The most interesting things to come out of IT recently are things like the ubiquity of the World Wide Web, tools like Wiki and blogs, and integration of music, cameras etc. with computers. It seems like it’s software which is both holding us back, and software which is taking us in interesting directions.

If one looks at computer games, it’s amazing how much more eye-candy they have these days (I mean, AMAZING: compare your average Xbox game to a movie in the 1980s or 90s). This has mostly come about because computer graphics are very amenable to having lots of specialist silicon thrown at them, and lots of good artists generating the artwork. In contrast, there’s a general moan in computer games circles about a lack of innovation in gameplay: in many cases, today’s games are simply much prettier versions of yesterday’s games.

The problem is the balance. We seem fairly bad at improving software: it is happening, but at a slower pace than Moore’s law. Furthermore, all the extra computing power is being sucked up by, essentially, lazy programming. That makes sense: why not use the extra power to allow you to use easier programming tools etc. However, it leads to the fact that you still need a shiny new PC to run even, say, Microsoft Office. That new PC is cheaper than it was 10-20 years ago, but it’s certainly not vanishingly cheap. I’m not sure I see this trend changing soon. Furthermore, for big business, it seems to get even worse. I presume I don’t need to point you towards the UK government’s recent IT disasters. Our ability to handle really big IT projects seems to have remained exactly as bad as it always was (although not from a lack of academic work: I’ve heard a colleague from the Computer Lab complaining about how he spends his mornings researching project management, and his afternoons seeing everything being ignored in the university’s managing of a real-life IT project!)

I don’t know much about biotech or nanotech: I think, in reality, we (as in collective human knowledge) know a lot less than for computers. This makes forecasting much harder, but I guess makes it more exciting as well, as we don’t really know what we’ll come up with!

11

Backword Dave 10.05.05 at 5:41 am

This is merely a stylistic criticism, but I’d cut “No exponential trend can continue indefinitely, and” and start that paragraph with “The “. You still explain the arguments why Moore’s Law can’t continue indefinitely, and you lose the generalisation. I know it’s pedantic and annoying of me, but when I come across a statement like that I always think “Can’t it?” and “Why not?” and “Is there a clear exception?” and this sends me off into a tangential reverie …

12

Tyler Emerson 10.05.05 at 6:00 am

I recommend reading TSIN if you’re interested in a thorough review of the ideas presented.

Ray doesn’t expect the original formulation of Moore’s Law (integrated circuits shrinking) to continue indefinitely; he expects a new computing medium to take its place around the time we begin to face the physical limitations of ICs. His greatest confidence lies in nanotubes becoming the sixth computing medium because multiple research breakthroughs – showing their computing viability – have been made in recent years. (He covers nanotubes and molecular computing at length in the book.) WRT the weight of evidence being greatest for anyone expecting computation’s continued exponential growth: I pointed out originally that the data says otherwise. The data shows 100 years of exponential growth in computation through five mediums: electromechanical, relay, vacuum tube, transistor, and IC. Even if we didn’t have multiple points of tangible indication for nanotube circuitry being a viable replacement for ICs, the burden of evidence cannot reasonably be said to weigh greatest on the shoulders of those who stand on a century of supporting data. With that said, data trends have a lot more qualitative than quantitative utility in my opinion, especially in the context of attempting to predict when scientific innovation may lead to the Singularity.

13

abb1 10.05.05 at 6:39 am

I just listened to the radio program again. He said: “it doesn’t become infinite but it does become vast … it’s limited, but it’s not very limiting”. It’s 18-19 minutes into the show. This is not about integrated circuits shrinking, this is about technology in general.

If you believe it’s limited, then you’re just guessing where the limit is. Is it Dow 36000 or is it Dow 11000? Your guess is as good as mine.

14

Tyler Emerson 10.05.05 at 8:06 am

Again, the “limits” of computation have been analyzed by several authors, and they’re incomprehensibly immense. In the book, Ray gives a back-of-the-hand analysis of the computing capacity of a 2.2 pound rock, noting that there are ~10^15 changes in state per bit per second inside the rock, representing ~10^42 cps, if its particles were organized effectively for computation.

Regarding an upper bound on computation: Seth Lloyd has given the most notable analysis, showing that a one kilogram and one volume liter computer could achieve ~10^50 cps based on known physical laws. Ray notes, “If we relate that figure to the most conservative estimate of human brain capacity (10^19 cps and 10^10 humans), it represents the equivalent of about five billion trillion human civilizations.” So, yes, there are limits, but the range of possible computation in a tiny amount of matter (1 kilogram) is so vast, that when you consider the amount of matter on Earth alone, you can get a sense of why he would say the limitations are not very limiting.

Japan recently announced plans to begin building the world’s fastest supercomputer in 2006, to be completed by 2011, with 10^16 cps. We’re a long ways off from computational “limits.”

15

Tom T. 10.05.05 at 8:17 am

One small point: The delay JQ experienced in receiving the book was due to legal constraints, not technological. Had there been a contract in place for a local publisher, or some legal regime that would have enabled an Internet download of the book, he could have had it much quicker.

Moreover, the relative expense of air freight presumably drops considerably with economies of scale. I live in the eastern US, and I regularly see fresh apples in our stores that purport to be from New Zealand.

16

Chris 10.05.05 at 8:22 am

Even more rapid growth in technology, while making a big difference to quality of life, would not make a difference to expected investment returns as any firm’s comparative advantage will be competed away, indeed under this scenario this will happen more and more quickly.

17

abb1 10.05.05 at 8:30 am

OK, Tyler, fair enough; this is highly speculative but much better than your saying: it’s been going for 100 years so it’s bound to keep going for the next 100 years, because that was kinda weak, IMO. ‘Cause a lot of things keep happening for hundreds of years and then stop.

18

paul 10.05.05 at 9:00 am

I think (probably mistakenly) that your “stagnant rest of the economy” argument could benefit from more explicit invocations of both Amdahl and Baumol.
Between the two of them, you get a promise that the time and cost structure of any activity will be dominated by the non-automatable parts, with the obvious economic implication that those parts will have to wither in either relative or absolute terms.

So in a sense the notion that non-IT-driven activities will stagnate isn’t a caveat to the Singularity, it’s a necessary part of its structure.

19

Peter 10.05.05 at 9:50 am

Sometimes, the religious folks claim that science is a religion too! This singularity stuff is what they are referring to. It is a weird version of the Rapture. TechnoRapture™ I call it.

When I dream, I have a pony.

20

Tom T. 10.05.05 at 10:39 am

Matt Daws makes an excellent point in #10. The proposed Singularity is purely technological in nature. No one is postulating a Singularity in artistic creativity or project management.

21

soru 10.05.05 at 10:48 am

Surely the religious viewpoint would be to claim that the human brain was something whose capabilities could never be technologically duplicatable?

If it is really true that the human brain runs on some kind of magic wierdness that cannot be reproduced by any material means (including even some newly discovered ‘wierd quantum physics’ a la Penrose), that would seem to more or less disprove modern evolutionary theory, which is a theory of computation, and at least deal a severe blow to the entire materialist worldview.

In other words, if God or some close facsimile didn’t super-intelligently design the human brain, then what is the justification for thinking that engineering cannot reproduce what evolution did?

soru

22

Jason Orendorff 10.05.05 at 11:11 am

Terminology: Computer scientists use “non-polynomial” (NP) to refer to problems that are more difficult than polynomial-complexity (P) problems.

I would avoid the whole mess and say “problems that you can solve by throwing extra computing power at them” and “problems that you can’t”.

23

anon 10.05.05 at 11:12 am

The “ability to upload ourselves into infinitely-lived computers” would not offer immortality. Uploading is copying. The original remains on the glass. Without this disingenuous confusion we’re just talking about AI.

No no, your immortal soul, which is the seat of consciousness, attaches itself to the copy as soon as the copy is ready.

24

abb1 10.05.05 at 11:27 am

If it is really true that the human brain runs on some kind of magic wierdness that cannot be reproduced by any material means (including even some newly discovered ‘wierd quantum physics’ a la Penrose), that would seem to more or less disprove modern evolutionary theory, which is a theory of computation, and at least deal a severe blow to the entire materialist worldview.

But at this point engineering can hardly even reproduce a simple single-cell organism, it’s that complicated. People die from flu, there’s no cure for malaria. It’s just that technology is not that advanced yet. Sure, if the humans survive, they may (or may not) eventually progress to the point where human-level AI is possible, but in 25 years? I’ll bet my brick and mortar house against your computer-in-a-rock that it’s not going to happen.

A man’s gotta know his limitations.

25

WhichFerdinand 10.05.05 at 11:46 am

Terminology: Computer scientists use “non-polynomial” (NP) to refer to problems that are more difficult than polynomial-complexity (P) problems.

NP stands for “non-deterministic polynomial”, and problems that are in P are by definition also in NP. For a problem to be in NP, you need to be able to accept or reject a proposed solution (answer) to the problem in time polynomial in the length of the input.

Whether NP-complete problems are “non-polynomial” is an open problem, though it’s likely they really are.

26

Jeremy Osner 10.05.05 at 11:48 am

I regularly see fresh apples in our stores that purport to be from New Zealand.

And the kicker is they’re not particularly good, certainly nowhere near as good as the fresh apples available from just up the road in NYS, which take way less technology to transport.

27

Antoni Jaume 10.05.05 at 11:52 am

“Surely the religious viewpoint …” means all religion holds the same point of view, and that’s not so. Soru should write “maybe some religious viewpoint …”

DSW

28

Daniel 10.05.05 at 11:57 am

only comments would be 1) I wouldn’t bother to go out of your way to credit me for that joke; consider it covered by a public domain licence. and also when you say “functional emulation of human brains” what weight does “functional” carry? I seem to remember that functionalism is a quite controversial position in philosophy of mind and you might want to stay out of those trenches.

cheers
dd

29

Jake 10.05.05 at 1:18 pm

Kurzweil:

Giant squids are wondrous sociable creatures with eyes similar in structure to humans (which is surprising given their very different phylogeny) and possessing a complex nervous system. A few fortunate human scientists have developed relationships with these clever cephalopods.

whah?

30

Graham Heffern 10.05.05 at 1:19 pm

Those of us whose lives are centred on computers and the Internet have experienced recent decades as periods of unprecedently rapid technological advance. Yet outside this narrow sector the pace of technological change has slowed to a crawl, in some cases failing even to keep pace with growth in population. The average American spends more time in the car, just to cover the basic tasks of shopping and getting to work, than was needed a generation ago, and in many cases, travels more slowly.

Is it just me or is this wrong? Not so much in the facts but in the way of looking at it. Hasn’t the average car gone up in efficiency that is derived solely from technological advances?

This efficiency generally goes towards faster and bigger, then safer and more gas mileage but from my eyes I still can picture singularity type growth rates in design and development of cars. Even more so in the future as IT spreads more pervasively into that industry.

Also as far as time it takes to travel to do various task. That seems almost entirely limited by human norms or limitations. As IT technology improves and spreads to cars and roads and such perhaps we will see gains in this area as the human equation is taken out and most of the functions of transportation are automated.

31

Jake 10.05.05 at 1:23 pm

sorry for the petty ad hominem that really has nothing to do with your very nice review.

also, I forgot to add a link.

32

Anonymous 10.05.05 at 2:06 pm

Having read the book myself now, I too felt the review was a wee bit overly derisory.

For instance, I didn’t see any discussion at all about Kurzweil’s fairly thorough explanation of how brain scanning and reverse engineering progress has been accelerating, and how it will likely continue. This goes a long way towards explaining “what we’ll do with all that computer power” once we have it, and how human-level (and beyond) AIs could become reality, whereas our current brain information today limits us from constructing real AIs. And understanding how real AIs and/or dramatically augmented human minds can come to pass is KEY to understanding how something like a Singularity could occur. Without this increasing quality and speed of intelligence, there can be no dramatic Singularity. If you want to shoot holes in the Singularity, prove that that can’t occur.

Also, another example is the mention that there doesn’t appear to be much biotech on the horizon that will have a large social impact. Did you read the book? There are examples given of drugs already successful in mice and being moved to human tests that can eliminate obesity, let you eat all you want, and actually live longer. Plus there is news almost weekly now on successful cancer vaccines, methods of extending lifespan in various animals, stem cell progress – including eventually being able to regrow entire organs, etc. Even the insurance guys are catching on: Dramatic rise in human longevity – and that’s only using technology from more than a decade ago (backwards looking information). There’s even a large prize now called the M-Prize that is around $2 million for awarding work on radical life extension biotech.

This progress is being fueled, as Kurzweil explains, by combining computing with biotech research in ways that just weren’t possible a scant few years ago. Huge (and continuing) drops in the price of genomic sequencing, rapidly improving “virtual cell / biology” allowing experimentation purely in computers, and recently discovered mechanisms like RNAi will continue to increase the pace of these breakthroughs. It looks to me, as an interested observer, that we’re right on the cusp of some rather dramatic new biotech.

Besides reading the book, I would encourage you to subscribe to some daily science news sources and follow them for a few months (assuming you don’t already). I myself like KurzweilAI.net’s daily news email, The Speculist, Science Blog, transhumantech (email), Green Car Congress, Robotic Nation Evidence, and New Scientist’s feeds, for example. There’s nothing better than drinking from the firehose for a while to give you a better perspective on what’s going on right at this moment.

P.S. For the person who mentioned that maybe human brains use “weird physics” or whatever to get their magic thinking powers, this also is addressed briefly in the book, and is easily answered: if the brain uses weird quantum effects, there is no reason we can’t build computer hardware that also operates using similar tricks. After all, it’s all just atoms at the lowest level, and as Kurzweil notes the atoms in your brain turn over completely in a matter of weeks.

P.S. #2 For the person who wonders if Kurzweil talks about also increasing creativity, emotions, and other “non-analytical” areas of human capabilities, he does so, and expects those also to increase.

33

Anonymous 10.05.05 at 2:18 pm

By the way, I’d like to make one other comment, this one regarding the whole “point out other things in life that aren’t speeding up dramatically” line of argument.

Isn’t that just a big non-sequitur?

I don’t recall Kurzweil making any claims about things like that in the book. He’s not claiming your traffic jams will clear up, your car speeds will increase exponentially, etc.

He’s making claims about carefully selected fields, which DO show the exponential growth, which DO have an explainable basis as to why they will continue growing, and which DO contribute significantly to showing how they could combine into a Singularity event.

Bringing in other stuff is just avoiding the issue, in my opinion. It’d be nice if we could stick to the book’s ideas and hypotheses instead of rigging up straw men to attack.

34

Matt Daws 10.05.05 at 3:19 pm

Shorter comment: my main point was that, to date, we seem to be using faster computers to do the same things, but faster. Has anyone got any idea how to do AI properly? AFAIK, not really. I haven’t read the book, but I would need Kurzweil to make some pretty specific claims about why faster computers can be used for AI. I really doubt that, simply by having insanely quick computers, AI will magically become easy.

To be more specific: it seems that the trend is going to be faster, wider computing, which means tomorrow’s computers will be little better at a single task than today’s, but will be able to do many, many tasks at the same time. If we’re going to get AI on the desktop in, say, 20 years, you’d really expect today’s supercomputers to have some limited form of AI. Instead, we have Deep Blue…

35

abb1 10.05.05 at 3:31 pm

Anonymous, aren’t you being a bit sensationalistic here? Where are all those tremendous breakthroughs – still in the research phase, aren’t they?

Meanwhile, in real life, old people with real health problems can hardly use any of these new miracle pills because the side-effects are horrendous and/or untested; they have to take dozens of pills every day to compensate for each other’s side-effects, and pray. Even the famous V-ra apparently can cause blindness. It’s a mess.

36

lemuel pitkin 10.05.05 at 3:46 pm

Elasticity of demand for computing is doing a lot of work here, as it did in your earlier Kurzweil post. And I’m still not quite buying it.

1. For new goods in general, apparent price elasticity should start out very high and move toward one. After all, the initial price is finite and falling, while consumption is rising from zero. In fact, most of your argument would seem to apply to any new technology. It’s not clear why computing is special.

2. You claim that, if productivity increases in computing faster than in the economy as a whole, its share will increase if the price elasticity of demand is greater than one, and decrease if it is less than one. But this is only true if price is the only factor in computing consumption — which of course it isn’t. It’s easy to think of goods or sectors whose share of the economy has changed dramatically without any change in relative prices. (This also creates problems for the calculation of price elasticity.) Symptomatic of this confusion is your habit of writing “elasticity of demand” when you mean “price elasticity of demand.”

Also, when you write “the rate of technical progress [in communications] jumped to 70 per cent a year around 1990,” what specific series are you referring to?

37

Graham Heffern 10.05.05 at 3:59 pm

AI does not magically become easy with higher computing power, but it does become easier; it’s pretty easy to see why.

38

John Quiggin 10.05.05 at 4:02 pm

Lemuel, if the price of computation is falling at 70 per cent a year and income is rising at 3 per cent (mostly due to computation in any case) it’s safe enough to focus on price elasticity, I think. For communications, I’m using speed of modems as an index series.
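To make that compounding concrete, here is a rough back-of-envelope sketch (the 70 per cent and 3 per cent rates are the figures quoted above; the ten-year horizon is purely illustrative):

```python
# Back-of-envelope: a price falling 70% a year vs. income rising 3% a year.
# Illustrative numbers only, taken from the comment above.

def compound(rate, years):
    """Multiplicative factor after `years` of annual growth at `rate`."""
    return (1 + rate) ** years

price_factor = compound(-0.70, 10)   # price of computation, after a decade
income_factor = compound(0.03, 10)   # income, after the same decade

# The price of computation ends up below a millionth of its starting
# level, while income is up only about a third -- so the price effect
# dwarfs the income effect, which is why it seems safe to focus on
# price elasticity here.
print(price_factor)    # ~5.9e-06
print(income_factor)   # ~1.34
```

The asymmetry is the whole point: at these rates, no plausible income elasticity could matter next to the price change.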

Anon, we have seen some notable improvements in life expectancy recently, but the bulk of the gains have come from fairly boring things like fewer road accidents (outside the US, at least), better emergency treatment, reduced smoking, and gradual improvements in the management of heart disease. Add them all up and you still don’t get anything like the gains from clean water supply and sewage disposal.

39

abb1 10.05.05 at 4:05 pm

Elasticity of demand is the change in demand divided by the change in price, correct? So, then, when the price is high and demand is zero, and is probably going to stay near zero even as the price starts going down significantly (say, from $20K for a plasma TV to $10K), the elasticity will not necessarily be high, correct? In fact, it may stay close to zero for a while and then start rising. Am I getting this wrong?

40

lemuel pitkin 10.05.05 at 4:56 pm

abb1-

Basically right. But it is the percent change in demand divided by the percent change in price. So if demand is increasing from a very low base (as it must, in the case of a new good) and price is falling at all, measured elasticity will be very high.
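A minimal sketch of the arithmetic, with invented numbers (the plasma-TV prices echo abb1’s example above; the midpoint formula is one standard way of measuring arc elasticity that avoids dividing by a zero starting quantity):

```python
# Measured price elasticity of demand for a new good, using midpoint
# (arc) percentages. All quantities are invented for illustration.

def price_elasticity(q0, q1, p0, p1):
    """Arc elasticity: % change in quantity / % change in price,
    with each percentage taken relative to the midpoint."""
    pct_dq = (q1 - q0) / ((q0 + q1) / 2)
    pct_dp = (p1 - p0) / ((p0 + p1) / 2)
    return pct_dq / pct_dp

# Price falls from $20,000 to $10,000 while sales grow from a tiny base
# of 1,000 units to 5,000 units: a 133% quantity rise (midpoint basis)
# against a 67% price fall gives a large measured elasticity.
e = price_elasticity(1_000, 5_000, 20_000, 10_000)
print(e)   # -2.0
```

Because the quantity base starts so low for any new good, the numerator is huge almost by construction, which is the sense in which early measured elasticities overstate the role of price.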

There’s a larger conceptual issue here, which is the tendency of economists (and especially philosophers and political theorists with an interest in economics) to see price signals as decisive in the economy. Exposure to economic history or various heterodox traditions (structuralism, marxism, etc.) would lead to more skepticism about the role of prices, but those approaches aren’t interesting to the philosophers, precisely because they aren’t as clean and axiomatic.

41

lemuel pitkin 10.05.05 at 5:02 pm

Lemuel, if the price of computation is falling at 70 per cent a year and income is rising at 3 per cent (mostly due to computation in any case) it’s safe enough to focus on price elasticity, I think. For communications, I’m using speed of modems as an index series.

Now we’re making progress. But income and price elasticity still don’t exhaust the story.

I am 100% certain that, if productivity in computing dropped to the economy-wide average tomorrow and never rose from that level again, the share of computing in the economy would still rise substantially in coming years. Don’t you agree? (And if you do, what does that do to your elasticity-of-demand argument?)

42

Daniel 10.05.05 at 5:11 pm

AI does not magically become easy with higher computing power, but it does become easier; it’s pretty easy to see why.

I can thoroughly recommend Fred Brooks’ excellent essay “No Silver Bullet” in “The Mythical Man-Month”.

43

John Quiggin 10.05.05 at 5:15 pm

Lemuel, I am actually an economist with an interest in philosophy and political theory, not vice versa. And, as economists go, I’m not known for being overly price-oriented, but as you say, nearly all economists pay a lot of attention to prices.

Of course, in this case, prices are just a reflection of technical change, so we could do the whole analysis in terms of quantities and nothing much would change – we just wouldn’t have an easy equivalent of the elasticity measure I’m using.

I agree that, even with no more technical progress we’d still see expanding use of computers, essentially because we haven’t caught up with the potential created by past progress. This is one reason long-run price elasticity of demand is greater than short-run. But I don’t see a problem for my argument here.

44

Matt Daws 10.05.05 at 5:28 pm

D^2, Thanks: that seems interesting (I might even look it up in the library if tomorrow is a slow day). As ever, Wiki has a nice (if short) summary.

Graham, I disagree. My point is that we don’t have slow, poor AIs at the moment (which would lead to fast, good AIs with extra computing power); it’s that we don’t have *anything* which gets close to human-like intelligence. Some sorts of things which come under the heading of “AI” will certainly become ubiquitous (we *should* have handwriting and voice recognition at some point in the future). But Kurzweil is talking about human-level AIs, which are something very different. Nice blog, btw.

45

Graham Heffern 10.05.05 at 5:43 pm

I would think that we would at least need the computational power to simulate the brain to start creating things that could be considered human-like AI. So I think increasing computational power certainly makes it easier in some respects, but not necessarily easy.

I think we are making steady and plodding progress in philosophy of mind and AI. I am, however, more of a proponent of hybridization: we will be internalizing devices to increase the functioning level of our brains long before entirely artificial human-like AI is achieved.

46

lemuel pitkin 10.05.05 at 5:43 pm

John, thanks for the reply.

Another way of framing my argument is that a large part of the increased use of computing is independent of price — call it the result of changing tastes. Improvements in computer performance, for instance, probably have very little to do with the increasing popularity of computer games.

This means that a smaller share of the increased consumption of computing is explained by price declines than under your implicit assumption of unchanging tastes. Which means the price elasticity is also lower. So your first scenario — computing becomes a free good — looks more likely than your second one — singularity-lite.

47

soru 10.05.05 at 6:04 pm

“Surely the religious viewpoint …” means all religion holds the same point of view, and that’s not so. Soru should write “maybe some religious viewpoint …”

I meant that surely the viewpoint that the singularity (in the sense of ‘dramatic perceived change’, not so much ‘everyone resurrected in the Eschaton’ or whatever) cannot occur can only plausibly be based on religiously-founded assumptions rather like Intelligent Design.

It’s like you look at engines getting better in the 19C, and say ‘in a few years, flying machines will be practical’.

One person says flight is physically impossible, you say ‘look, a seagull’.

Another person says no one has yet built a flying machine, and you say, ‘no, it needs a certain power/weight ratio, which should be attainable in 10 to 20 years’.

A third person says that power/weight ratio will be unattainable due to physical limits, and you say ‘I previously pointed at the seagull, which by now has flown off. Stop rephrasing the same question’.

A fourth person says ‘but seagulls only fly by God’s will, and it is not God’s will that a machine fly’, and you say ‘well, there’s no arguing with that’.

soru

48

mtraven 10.05.05 at 6:08 pm

Step 1) Exponential increases in computer power.
Step 2) ???
Step 3) Artificial Intelligence! and Profit!

“Time to go to work, work all night
search for singularity hey!
We won’t stop until we transcend humanity
Yum tum yummy tum tay!”

49

jasmindad 10.05.05 at 6:50 pm

Soru:
If it is really true that the human brain runs on some kind of magic weirdness that cannot be reproduced by any material means (including even some newly discovered ‘weird quantum physics’ à la Penrose), that would seem to more or less disprove modern evolutionary theory, which is a theory of computation, and at least deal a severe blow to the entire materialist worldview.

We need to separate two kinds of computationalism-is-not-enough argument. One is that there are indeed material phenomena at the heart of minds, but they are not captured by the traditional Turing computation account. Something like this is Penrose’s proposal. Searle also thinks that brain phenomena are completely materialistic, but that the computational story is not enough. The second is the mystical version — that something beyond the properties of matter is involved in making brains (and minds).

The non-mystical, i.e., materialist, anti-computationalist would laugh off Kurzweil’s prediction based on great breakthroughs in computing, not because he is against materialism, but because he is against poor materialist accounts of the brain.

Soru:
then what is the justification for thinking that engineering cannot reproduce what evolution did?

How about complexity, or lack of the right kind of theory/technology? This is not a proof that engineering cannot reproduce minds, but simply an argument against the idea that just because nature made something, we will be able to successfully copy it.

50

Jed Harris 10.05.05 at 8:51 pm

Tom T.: “No one is postulating a Singularity in artistic creativity or project management.”

Well, open source development does represent a radical change in project management — i.e. what we can accomplish, how fast, with what $$

And it is only possible due to exponential trends.

Similar transitions may occur in music and movies.

Graham Heffern: “I would think that we would at least need to have the computational power to simulate the brain to start creating things that could be considered human like AI.”

That depends on what you mean by “simulate”. We don’t have to know how to grow feathers to build things that fly.

More generally, I think we need conceptual breakthroughs as well as computational power to do some things. Happily, cheap computing and bandwidth permit even crazy experiments. That is how we get things like Wikipedia and Spambayes.

51

Jed Harris 10.05.05 at 9:21 pm

Interesting that this topic generates so much snark.

The elasticity discussion is useful, this is very important to get right. The classical example is lighting — the efficiency of lighting improved exponentially, but just reduced the amount of money spent on lighting. We know it *can* go either way; what are the key factors that will decide the outcome?

Various comments about AI and “things in life that aren’t speeding up dramatically” are salient, but not necessarily correct (as I hope my previous comment shows).

But they don’t seem to move on to the interesting question: what limits the impact of the exponential trends? In AI we have logistics planning, check; voice recognition, check (see TellMe Networks); spam recognition, check; etc. But are these just special cases, or the camel’s (exponential) nose under the tent of general AI? We need good arguments about this, not just snark.

I think we do know, empirically, that “good old-fashioned” AI, and more generally computer programming, has fatal flaws when we try to scale it exponentially (in size, parallelism, complexity, etc.). There are alternatives like Bayesian techniques that seem to scale better — but what will it take to fully ride the underlying exponential trends? It is issues like this that will ultimately decide whether we get a singularity or not.

52

P.M.Lawrence 10.05.05 at 11:34 pm

I read the blurring between exponential and polynomial as mere loose expression anyway. After all, being exponential doesn’t have anything to do with approaching a singularity either; if anyone starts talking about “exponential growth”, that’s just a common expression by now, standing for something like “bigger than I can imagine”.

That’s sort of the effect a singularity produces on physics. If we had a real singularity in human affairs it wouldn’t be characterised by exponential growth but by something that went beyond anything easy to cope with. You might want to try renormalising the infinities or something, but even that might not work.

Anyhow, the point is that it’s not the exponential stuff that gives you singularities, not as such anyway. There’s a blurring of concepts going on here, which doesn’t matter for purposes of gee-whiz illustration but does if you’re getting serious.

53

Jed Harris 10.06.05 at 12:43 am

P. M. Lawrence: “it’s not the exponential stuff that gives you singularities, not as such anyway.” True but not really responsive.

We aren’t talking a “real” singularity, like a black hole, because the exponential trend continues, rather than reaching a singular point. (I really do mean exponential, not just loosely.) However from the human perspective, if everything becomes incomprehensible because it is moving so fast, it might as well have passed through an event horizon.

And knowing what is happening doesn’t help all that much. We’ve consistently found that we can’t grasp exponential trends. Working in the computer industry for twenty-five years, I and others around me kept underestimating the trends we were part of. The only way to get them right is to do the math, and then believe the numbers rather than your lying intuition.
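“Do the math” is easy to sketch. A toy calculation (the 18-month doubling period is the Moore’s Law figure from the review; the 25-year horizon roughly matches the career span mentioned above) shows why intuition keeps losing:

```python
# Why intuition fails on exponential trends: repeated doubling
# compounds far past anything a linear guess would suggest.
# Illustrative sketch; periods and horizon are assumptions, not data.

def growth_factor(months, doubling_period=18):
    """Total growth factor after `months`, doubling every `doubling_period` months."""
    return 2 ** (months / doubling_period)

# Over 25 years (300 months) at one doubling per 18 months:
factor = growth_factor(300)
print(f"{factor:,.0f}x")   # ~104,000x
```

A linear extrapolation from the first couple of years would miss this by several orders of magnitude, which is exactly the underestimation described above.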

54

Chris 10.06.05 at 2:53 am

However, please do not then go ahead and make any stockmarket investment decisions on that basis! This all sounds a lot like 1999 to me… are our memories that short? Whatever happened to tech stocks’ “exponential” growth? Er, it stopped.
Even if growth continues, don’t confuse growth with investment returns. If competition ensures that abnormally high returns on capital are quickly competed away – it will – then growth in itself creates no extra shareholder wealth.

55

Graham Heffern 10.06.05 at 3:24 am

“However from the human perspective, if everything becomes incomprehensible because it is moving so fast, it might as well have passed through an event horizon.” Is this alone enough to put the brakes on the advance of technology and short-circuit the singularity? As the amount of information moves beyond what society can absorb, the trend should level off.

But it also seems like as the technology advances society is capable of processing and absorbing more information and the two tend to reinforce each other.

56

James Wimberley 10.06.05 at 3:35 am

A different issue. John writes:
“On the other hand, whereas international communication for great-grandad consisted of the telegraph, anyone with an Internet connection can now download shelves full of books from all around the world in a matter of minutes and at a cost measured in cents rather than dollars.”

This is only true for old books. The intellectual property régime that underpins the creation of new works like Kurzweil’s has until now made it impossible to take advantage of the opportunities offered by technology for their rapid dissemination. Jared Diamond’s Collapse, a very different long-range take on the human condition, suggests that human societies do not necessarily solve such contradictions.
For one not impossible example, the victory of the IP fundamentalists would bring technical progress to a crawl, as innovators would spend nearly all their time fighting claims to prior rights in ideas. Another scenario is the collapse of IP, and the transfer of innovation to socialist structures like universities and NASA, or to communitarian networks. I can’t see either scenario doing as well as Intel in moving along Moore’s curve.
Oh, and great-grandad’s telegraph network was global and near-instantaneous if low bandwidth. Contrary to Kipling’s prediction in his poem The Deep-Sea Cables, the world hasn’t become one yet.

57

Chris 10.06.05 at 4:43 am

Robert Gordon has a better perspective on all this “rate of change is getting faster and faster” stuff in this excellent paper comparing the “New Economy” – OK I know the “Singularity” is the New New Economy – to the great inventions of the 20th century.

http://faculty-web.at.northwestern.edu/economics/gordon/GreatInvention.pdf

58

soru 10.06.05 at 5:23 am

We need to separate two kinds of computationalism-is-not-enough argument.

Absolutely, I thought I made that distinction, I guess it was not clear. Also, I am only claiming this is in some sense possible, if there was a nuclear war tomorrow then obviously it wouldn’t take place.

How about complexity, or lack of the right kind of theory/technology?

Mere complexity can, by definition, be tackled by sufficient computation, as provided by generation N-1 of the technology. Look at the computer-assisted proofs of things like the four-colour problem.

Assuming that insight, a new theory, is _necessary_, as opposed to merely helpful (something that would reduce the amount of computation required), really is equivalent to the Intelligent Design position on evolution. At the very least, it would imply the mechanism for evolution was not random discrete changes to DNA, as that is just another way of saying ‘computation’.

The standard scientific model of evolution is computation: if computation can’t design a brain, then evolution can’t, and so something very like Intelligent Design theory would become accepted as true. Presumably the evolution of the brain, at least, would turn out to involve the same kind of meso-scale weird quantum stuff that allowed non-Turing computation, i.e. ‘thought’.

soru

59

Matt Daws 10.06.05 at 9:00 am

Soru, interesting comparison with ID. I guess I was partially guilty of saying “I can’t see how AI can be done, hence it can’t be done”. However, I think it’s fair to push the burden of proof onto Kurzweil: why does he think AI will be “automatic” in, say, my lifetime? After all, we’ve understood the physics of nuclear fusion for quite a while now, but are still seemingly some way off (well, I hear 20 years bandied around as an absolute minimum) from having a working, electricity-producing fusion reactor. Furthermore, no-one in the comments has suggested that we really have *any* understanding of true, human-level AI: look at what the guys at MIT have been doing, say. It’s great, but it’s very, very slow progress.

I absolutely don’t wish to suggest that we’ll never get human-level AIs (that really would be like ID). However, I just don’t buy the idea that *merely* having more computing power will solve anything. Has anyone even a suggestion of what software you’d put on this super-powerful computer? Software still has to be written by humans, and we need to have an *idea* before we can write that software.

As a mathematician, I’d say your point about the four-colour problem is off the mark: the level of computation required was *tiny* by today’s standards, and would at least in principle be human-checkable (though over many years). The proof is interesting because it did require a computer, and because the mathematicians learnt a lot by programming the computer and playing around with it (to a working mathematician this is the more interesting point: it suggests the ability to use a computer as an experimental tool). To suggest it had anything to do with AI is incorrect, though.

60

soru 10.06.05 at 10:01 am

Half a fusion reactor isn’t very useful or profitable.

A machine that could think half as well as a human (say, as well as a penguin) would be immensely valuable in many areas, not least the military.

soru

61

abb1 10.06.05 at 10:18 am

Lol, good one, Soru. I’m of the same opinion about the military.
