Singularity review repost

by John Q on August 22, 2012

The discussion of my repost on the silliness of generational tropes produced a surprising amount of agreement on the main point, then a lot of disagreement on the question of technological progress. So, I thought I’d continue reprising my greatest hits with this review of Kurzweil’s singularity book, which I put up in draft form at Crooked Timber and my own blog, producing lots of interesting discussion. Again, seven years old, but I don’t see the need to change much – YMMV


I’ve finally received my copy of Ray Kurzweil’s Singularity, which was posted to me by its American publisher six weeks ago. The title refers to the claim that the process of technological change, notably in relation to computing, biotechnology and nanotechnology, is accelerating to the point where it will produce a fundamental, and almost instantaneous, change in what it means to be human, arising from the combination of artificial intelligence and the use of biotechnology to re-engineer our bodies and minds.

The term Singularity, used to describe this event, apparently arose in discussions between the mathematicians Stanislaw Ulam and John von Neumann. The idea of the Singularity was popularised in the 1980s and 1990s by mathematician and science fiction writer Vernor Vinge, and later by Kurzweil, a prominent technologist and innovator.

Kurzweil’s argument has two main components. The first is the claim that continuing progress in microelectronics and computer software will, over the next few decades, realise the ultimate ambitions of the artificial intelligence (AI) program, producing computers that first equal, and then dramatically outstrip, the reasoning capacities of the human mind.

The key to all this is Moore’s Law. This is the observation, first made by Gordon Moore (later a co-founder of Intel) in the mid-1960s, that computer processing power, roughly measured by the number of transistors on an integrated circuit, doubles every eighteen months to two years. Over the intervening forty years, the number of transistors on a typical integrated circuit has gone from less than a thousand to hundreds of millions.

The end of the expansion described in Moore’s Law has been predicted on many occasions, often with reference to seemingly unavoidable constraints dictated by the laws of physics. The constraint most commonly cited at present relates to the size of components. On present trends, transistors will be smaller than atoms within 15 years or so; this does not appear to be feasible, and current industry plans only extend to two or three more generations of progress, enough for perhaps a 100-fold increase in computing power.

Kurzweil dismisses such talk, arguing that just as transistors displaced vacuum tubes and integrated circuits displaced discrete transistors, new computing paradigms based on quantum effects will allow continued progress along the lines of Moore’s Law for most of this century, and well past the point at which computers are powerful enough to permit functional emulation of human brains. He concedes that there are limits on computing power, but argues that they do not represent an effective constraint on the advent of the Singularity. And Gordon Moore himself has noted that industry plans have never extended more than two or three generations ahead.

The second part of Kurzweil’s argument is based on three overlapping revolutions in genetics, nanotechnology and robotics. These revolutions are presented as being in full swing today, but in any case it is assumed that AI will smooth out any rough spots. Between them, Kurzweil argues, developments in these three fields will transform medicine, science, finance and the economy. Although all sorts of miracles are promised, the most dramatic is human immortality, achieved first through dramatic extensions in lifespans delivered by nanorobots in our bloodstreams and, more completely, by the ability to upload ourselves into infinitely-lived computers.

Not surprisingly, Kurzweil has attracted passionate support from a small group of people and derision from a much larger group, particularly within the blogosphere, which might have been expected to sympathise more with techno-utopianism. The wittiest critique was probably that of Daniel Davies at the Crooked Timber blog (disclosure: I’m also a blogger there), who modified Arthur C Clarke’s observation about technology and magic to produce the crushing ‘Any sufficiently advanced punditry is indistinguishable from bollocks’. Riffing off a link from Tyler Cowen on the expected value of extreme forecasts, and a trope popularised by Belle Waring, Davies outbid Kurzweil by predicting not only that all the Singularity predictions would come true, but that everyone would have a pony (“Not just any old pony by the way, but a super technonanopony!”).

Before beginning my own critical appraisal of the Singularity idea, I’ll observe that the fact that I’ve been waiting so long for the book is significant in itself. If my great-grandfather had wanted to read a book newly-published in the US, he would have had to wait six weeks or so for the steamship to deliver the book. A century later, nothing has changed, unless I’m willing to shell out the price of the book again in air freight. On the other hand, whereas international communication for great-grandad consisted of the telegraph, anyone with an Internet connection can now download shelves full of books from all around the world in a matter of minutes and at a cost measured in cents rather than dollars.

This is part of a more general paradox, only partially recognised by the prophets of the Singularity. Those of us whose lives are centred on computers and the Internet have experienced recent decades as periods of unprecedentedly rapid technological advance. Yet outside this narrow sector the pace of technological change has slowed to a crawl, in some cases failing even to keep pace with growth in population. The average American spends more time in the car, just to cover the basic tasks of shopping and getting to work, than was needed a generation ago, and in many cases, travels more slowly.

Progress in many kinds of services has been even more limited, a point first made by US economist William Baumol in the 1960s. The time taken to give someone a haircut, or cook and serve a restaurant meal, for example, is as high as it was 100 years ago. As a result, the proportion of the workforce employed in providing services has risen consistently.

The advocates of the Singularity tend either to ignore these facts or to brush them aside. If there has been limited progress in transport, this doesn’t matter, since advances in nanotech, biotech and infotech will make existing technological limits irrelevant. Taking transport as an example, if we can upload our brains into computers and transmit them at the speed of light, it doesn’t matter that cars are still slow. Similarly, transport of goods will be irrelevant since we can assemble whatever we want, wherever we want it, from raw atoms.

Much of this is unconvincing. Kurzweil lost me on biotech, for example, when he revealed that he had invented his own cure for middle age, involving the daily consumption of a vast range of pills and supplements, supposedly keeping his biological age at 40 for the last 15 years (the photo on the dustjacket is that of a man in his early 50s). In any case, nothing coming out of biotech in the last few decades has been remotely comparable to penicillin and the Pill for medical and social impact (a case could be made that ELISA screening of blood samples was crucial in limiting the death toll from AIDS, but old-fashioned public health probably had a bigger impact).

As for nanotech, so far there has been a lot of hype but little real progress. This is masked by the fact that, now that the size of features in integrated circuits is measured in tens of nanometers, the term “nanotech” can be applied to what is, in essence, standard electronics, though pushed to extremes that would have been unimaginable a few decades ago.

Purists would confine the term “nanotechnology” to the kind of atomic-level engineering promoted by visionaries like Eric Drexler and earnestly discussed by journals like Wired. Two decades after Drexler wrote his influential PhD thesis, any products of such nanotechnology are about as visible to the naked eye as their subatomic components.

Only Kurzweil’s appeal to Moore’s Law seems worth taking seriously. There’s no sign that the rate of progress in computer technology is slowing down noticeably. A doubling time of two years for chip speed, memory capacity and so on implies a thousand-fold increase over twenty years. There are two very different things this could mean. One is that computers in twenty years’ time will do mostly the same things as at present, but very fast and at almost zero cost. The other is that digital technologies will displace analog for a steadily growing proportion of productive activity, in both the economy and the household sector, as has already happened with communications, photography, music and so on. Once that transition is made, these sectors share the rapid growth of the computer sector.
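
As a quick check of the arithmetic (my sketch, not part of the post):

```python
# Back-of-envelope check: a two-year doubling time compounds to
# roughly a thousand-fold gain over twenty years.
years, doubling_time = 20, 2
gain = 2 ** (years / doubling_time)           # 2^10 = 1024
annual_growth = 2 ** (1 / doubling_time) - 1  # ~41% per year
print(f"{gain:.0f}x over {years} years (~{annual_growth:.0%} per year)")
```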

In the first case, the contribution of computer technology to economic growth gradually declines to zero, as computing services become an effectively free good, and the rest of the economy continues as usual. Since productivity growth outside the sectors affected by computers has been slowing down for decades, the likely outcome is something close to a stationary equilibrium for the economy as a whole.

But in the second case, the rate of growth for a steadily expanding proportion of the economy accelerates to the pace dictated by Moore’s Law.  Again, communications provides an illustration – after decades of steady productivity growth at 4 or 5 per cent a year, the rate of technical progress jumped to 70 per cent a year around 1990, at least for those types of communication that can be digitized (the move from 2400-baud modems to megabit broadband in the space of 15 years illustrates this).  A generalized Moore’s law might not exactly produce Kurzweil’s singularity, but a few years of growth at 70 per cent a year would make most current economic calculations irrelevant.
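
A hedged illustration of the implied growth rate (my sketch: I treat 2400 baud as 2400 bit/s and take “megabit broadband” literally as 1 Mbit/s, so the endpoints are only indicative):

```python
# Compound annual growth implied by the modem-to-broadband example.
start_bps, end_bps, years = 2400, 1_000_000, 15
rate = (end_bps / start_bps) ** (1 / years) - 1
print(f"~{rate:.0%} per year")  # ~50%/year, the same order as the 70% figure
```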

One way of expressing this dichotomy is in terms of the aggregate elasticity of demand for computation. If it’s greater than one, the share of computing in the economy, expressed in value terms, rises steadily as computing gets cheaper. If it’s less than one, the share falls. It’s only if the elasticity is very close to one that we continue on the path of the last couple of decades, with continuing growth at a rate of around 3 per cent.
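
A minimal simulation of this dichotomy (my sketch, not the post’s, assuming the price of computing halves each period while the rest of the economy is static in nominal terms):

```python
# Sketch: with demand elasticity e, quantity scales as price^(-e), so a
# halving of price multiplies nominal spending on computing by 2^(e - 1).
def share_path(e, periods=10, s=0.05):
    for _ in range(periods):
        f = 2 ** (e - 1)                 # growth factor of computing's nominal spend
        s = s * f / (s * f + (1 - s))    # renormalize against the rest of the economy
    return s

for e in (0.8, 1.0, 1.2):
    print(f"elasticity {e}: value share after 10 halvings ~ {share_path(e):.3f}")
# e < 1: the share withers; e > 1: it keeps expanding; only e very close
# to 1 leaves the share, and hence measured growth, roughly steady.
```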

This kind of result, where only a single value of a key parameter is consistent with stable growth, is sometimes called a knife-edge. Reasoning like this can be tricky – maybe there are good reasons why the elasticity of demand for computation should be very close to one. One reason this might be so is if most problems eventually reach a point, similar to that of weather forecasting, where linear improvements in performance require exponential growth in computation.

If the solution to a problem involves components that are exponential (or worse) in complexity, initial progress may be rapid as linear or polynomial components of the problem are solved, but progress with the exponential component will at best be linear, even if the cost of computation is itself declining exponentially.
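
A toy illustration of that point (my own model, assuming that quality level n costs 2^n operations, as in the weather-forecasting case):

```python
import math

# Compute budget doubles every period (Moore's Law), but each extra unit
# of "quality" doubles the computation required, so the best attainable
# quality, log2(budget), improves only linearly however cheap compute gets.
budget = 1.0
for period in range(0, 21, 4):
    quality = math.log2(budget * 2 ** period)
    print(f"period {period:2d}: compute x{2 ** period:>7}, quality ~{quality:.0f}")
```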

So far it seems as if the elasticity of demand for computation is a bit greater than one, but not a lot. The share of IT in total investment has risen significantly, but the share of the economy driven primarily by IT remains small. In addition, non-economic activity like blogging has expanded rapidly, but also remains small. The whole thing could easily bog down in an economy-wide version of ‘Intel giveth and Microsoft taketh away’.

In summary, I’m unconvinced that the Singularity is near. But unlike the majority of critics of Kurzweil’s argument, I’m not prepared to rule out the possibility that information technology will spread through large sectors of the economy, producing unprecedentedly rapid economic growth. Even a small probability of such an outcome would make a big difference to the expected returns to investments, and would be worth planning for. So it’s certainly worthwhile reading Kurzweil’s book and taking the time to consider his argument.
At this stage, though, the Singularity is still best considered as science fiction. If you really want to get a feel for the kind of thinking that drives the Singularity, read Ian McDonald’s River of Gods or, better still, Charles Stross’ Accelerando.

Kurzweil, Ray (2005), The Singularity is Near, Viking, New York.

{ 146 comments }

1

JW Mason 08.22.12 at 3:04 am

This is fun!

Honestly, at this point CT has a big enough back catalog that you could probably just run old posts on a loop and we’d all keep reading and commenting. I certainly would.

That said, I stand by my comments (as Lemuel Pitkin) on the previous thread. I think the naive estimates of price elasticity of demand are going to be overestimates. First because measured price-elasticity will be very high by definition for a new good, where consumption is rising from zero; second because there are other factors driving increased consumption of computing besides price, so attributing the whole growth to price biases your estimated elasticity upward; and third (most important, but I somehow missed it last time) the hedonic price indexes used for computing almost certainly exaggerate the “true” quantity of increased computing being consumed and in particular make it hard for measured elasticities to fall below 1.

Let me explain the third point with an example. Suppose we don’t know how to measure the true quantity of computing because of quality improvements, etc. So we decide to take something we can measure, say chip speed, and use that as a proxy. So if the 2011 model is twice as fast as the 2009 model, we say that one of the 2011 ones is the same quantity as two of the 2009 ones. Now suppose actual dollar expenditure on computers is constant. Then by definition, simply because of the way you are measuring quantity, you will measure a price elasticity of one. (If dollar expenditures are rising, and you ignore non-price influences on consumption, then you will observe an elasticity of greater than one.) When both the price “fall” and the quantity “increase” are artifacts of the same imputed change in quality, measured price elasticity approaches 1 no matter what. I think there is something like this going on with computers.

This doesn’t conflict with the basic analysis in the post, but it does lead to the conclusion that your first case — computers as free goods — is far more likely than your second, quasi-singularity.

Numbers!

Head over to the BEA’s Fixed Assets Tables. There you will see that, measured at current prices, the country’s stock of computers increased by just 3 percent between 2001 and 2011. Basically, in dollars all the computers in the country were worth the same at the end of the decade as the beginning, while the dollar value of almost every other major aggregate was worth more. (The current-cost stock of nonresidential equipment and software as a whole rose 39 percent). But, oh my golly, look at the quantity index. That says that the stock of computers rose by 133% — more than doubled — over the same decade. In short, the entire growth in “real” expenditure on computers was due to imputed quality improvements. Under those circumstances, you can’t not measure a price elasticity of close to one.

But that’s almost secondary. The real point is that in the seven years since you wrote this, we’ve seen Moore’s Law continue and the result of all this steady improvement in computing speed has been … a steadily declining share of expenditure on computers. Jury’s in: no singularity for us.

2

JW Mason 08.22.12 at 3:11 am

(Let me make the example a little clearer. Say in year 1, one million computers are sold at an average price of $1,000. In year 2, again, one million computers are sold at an average price of $1,000. But our statistical agency, in its wisdom, decides the quality of the new machines is sufficiently better that one of this year’s computers is equal to 1.1 of last year’s. That means that the “real” price of computers fell by 10%, since you can now buy 9 and get the same “quantity of computing” you would have had to buy 10 for last year. And it also means that “real” sales rose by 10 percent, because those million computers sold this year are the same “quantity of computing” as 1.1 million of last year’s would have been. Price down 10%, quantity up 10%, ergo price elasticity of one. But this will *always* be true when the quality adjustment is large relative to the change in observable quantities.)
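
Restating that arithmetic in code (a sketch of the commenter’s example, using the logarithmic form of elasticity, which makes the identity exact):

```python
import math

# When the "price fall" and the "quantity rise" are artifacts of the same
# hedonic quality factor, measured elasticity is one by construction.
quality_factor = 1.1                # one new machine counts as 1.1 old ones
price_ratio = 1 / quality_factor    # measured "real" price falls ~9.1%
quantity_ratio = quality_factor     # measured "real" quantity rises 10%
elasticity = -math.log(quantity_ratio) / math.log(price_ratio)
print(f"measured price elasticity: {elasticity:.2f}")  # 1.00 for any factor
```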

3

Peter Hollo 08.22.12 at 3:22 am

Definitely worth the repost, John!

PZ Myers is no fan of Kurzweil’s, and has some interesting critiques on Pharyngula (whatever you may think of his approach to promoting atheism/science) (I like him, on the whole).

From earlier this year, a post on coal-powered flying velocipedes
And more recently, the problems with brain uploading.

4

Antti Nannimus 08.22.12 at 3:36 am

Hi,

Perhaps it doesn’t even matter any more, if it ever did, when the “Singularity” will arrive by whatever former definition. The DNA chemicals of slimy, wet life can now be used to record(!), store, and retrieve(!) data (and programs, which are also data), and preserve it for eons longer and much more densely and efficiently than we ever could using any of our lame current silicon-based technologies. That DNA technology can also now be made to integrate with our primitive electronics to replicate and simulate our essential life processes. Do you get it? Do you see the possibilities here? So who the hell can deny, measured by any essential metric, that we have already arrived at the Singularity? Do any of you still believe you are smarter, faster, better, and more facile than our best technology? As for me, I can’t even compete intellectually anymore with the lame mediocre stuff from the last millennium, and that stuff was shit by comparison.

So if you are still worried short-term about the illegal aliens crossing the border for your jobs, I would like to suggest that you don’t even have a clue about what’s now coming at you long-term.

Have a nice day,
Antti

5

Meredith 08.22.12 at 4:01 am

Thank you for this great (re)post, and the last (and for the comments on the last).

As a child (born 1950) I was fascinated that my grandmother, born in rural Virginia in the 1890s, had grown up with kerosene lighting (not even gas) and lived to see men walk on the moon. The same grandmother enlightened me about the complexities of female life before menstrual pads were invented (or at least were available in rural Virginia). Years later, I heard a story on NPR about how women in the becoming-former Yugoslavia were in dire straits because there suddenly were no menstrual pads or tampons available to them. Perhaps more important than the pill: Kotex?

Just to say (and to bring discussions of nanotechnology and the like down to earth), some changes are more important than others, some periods (how long is a period?) of human history involve more momentous nexuses than others. But at the time, in the moment, who’s to say, when even later, it will be hard to say. Sometimes I worry that contemporary obsession with “are we in the midst of a transformative moment” is a version of “end of times” religious thinking. What’s up with us these days? Well, we can’t and won’t know till the end of time, will we?

6

John Mashey 08.22.12 at 4:25 am

‘There’s no sign that the rate of progress in computer technology is slowing down noticeably. A doubling time of two years for chip speed, memory capacity and so on implies a thousand-fold increase over twenty years.’

Part of this is *absolutely untrue*: chip speed.

1) Moore’s Law *never* was about chip speed, but about the number of transistors, i.e., from the Wikipedia article:
‘Moore’s law is the observation that over the history of computing hardware, the number of transistors on integrated circuits doubles approximately every two years. The period often quoted as “18 months” is due to Intel executive David House, who predicted that period for a doubling in chip performance (being a combination of the effect of more transistors and their being faster).’

2) For many years, the speed bottleneck was in transistor speeds, so shrinking them not only put more transistors on the same size chip, but the chips ran faster, because smaller transistors switch faster, so we got faster chips as a side-effect of Moore’s Law.

3) But that’s pretty much over, because wire delays outweigh transistor delays, chips can get blazing hot, power matters more than it used to, and there are many other issues. This is why we have all these multi-core chips, but not much higher clock rates. See old Intel roadmap, for example.
I.e., Intel got to ~3GHz clock rate ~2002. With 2X every 2 years, you’d expect to be seeing 2^5 = 32 X 3 GHz by now, or 90-100GHz in buyable products. Nope.

Of course, performance != GHz, and if you parallelize applications, all is well … but that is far harder than just taking existing code and running it on a faster CPU. The most obvious example of the case where this works is in graphics processors. 3D graphics and image processing are wonderfully parallel.

7

David J. Littleboy 08.22.12 at 5:04 am

What John M. said.

Not only have clock speeds been stuck at 3GHz for a decade now, it was already clear that that was happening by December 2004.

http://www.gotw.ca/publications/concurrency-ddj.htm

Personally, I’m not sure this is such a bad thing. That 3GHz single CPU peecee in 2002 was a humongously powerful computer*, but we had spent the previous 15 years writing code under the assumption that efficiency didn’t matter since next year’s peecee would be faster. Since that’s not true any more, we’ll be forced to think about computing smarter instead of punting the intellectual effort and getting away with it. And that’s a very good thing.

*: By the standards of someone who started out on things like the DEC LINC (one of which my father was the site engineer for), IBM 1130 (high school), PDP-6/10 (college).

8

Brett 08.22.12 at 5:09 am

I’ve heard something similar – that Moore’s Law is more or less dead already, but we’ve kept increasing computer power by increasing use of parallel computing. Then again, there are proposals for stacking chips, etc that might pan out (if you can cool them properly).

You could probably run some sort of artificial intelligence program on a massively parallel system, even if it amounts to simulating processes in the human brain to try and get certain capabilities. Full human “simulation”, though (which is what so-called “uploading” really is), seems rather distant. It’s also not really an escape from death, and very ethically troubling (what are your obligations to a simulated human being in terms of keeping him/her alive? What happens if someone breaks into the server where the uploads live and does a whole bunch of mischief?).

Much of this is unconvincing. Kurzweil lost me on biotech, for example, when he revealed that he had invented his own cure for middle age, involving the daily consumption of a vast range of pills and supplements, supposedly keeping his biological age at 40 for the last 15 years (the photo on the dustjacket is that of a man in his early 50s).

Kurzweil is pretty wacky on the nutrition supplement issue. It’s kind of sad, because it’s obviously a thinly veiled cover for his fear of dying before the 2040s (which is when he hopes the Singularity will occur).

As for nanotechnology, I’m sure it will have some useful applications. Singularitarians tend to treat it as magic, though.

9

terence 08.22.12 at 5:49 am

A doubling time of two years for chip speed, memory capacity and so on implies a thousand-fold increase over twenty years. There are two very different things this could mean. One is that computers in twenty years’ time will do mostly the same things as at present, but very fast and at almost zero cost. The other is that digital technologies will displace analog for a steadily growing proportion of productive activity, in both the economy and the household sector, as has already happened with communications, photography, music and so on. Once that transition is made, these sectors share the rapid growth of the computer sector.

Or, option 3: thank you Microsoft, Adobe and Apple, continued software bloat will mean that, even with 100GB of RAM, you (or at least the typical computer punter like me) will still be running the same sort of stuff on your laptop and still wondering why it takes so freakin long to boot up…

10

chrismealy 08.22.12 at 6:01 am

Writing parallel software is really really really hard. It may never be easy. I’m not optimistic.

11

kiwanda 08.22.12 at 6:10 am

“I’m not prepared to rule out the possibility that information technology will spread through large sectors of the economy,”

Myself, I’m inclined to not be prepared to rule out the possibility of a chance that perhaps certain current trends of daily sunrise, may, perhaps, not ebb.

Isn’t it a little narrow-minded to consider the possible advent of superhuman AI in the context of economics-as-usual, as though its most significant effect will be its impact on soybean futures?

It does seem narrow-minded on Kurzweil’s part as well to predict an event of unimaginable impact, and then futilely guess the technological ponies that might follow that event, as though its most significant effect will be that Ray Kurzweil gets to live longer.

As with so many other topics, PZ Myers on the singularity is pompous, doctrinaire, intolerant, and stupid.

12

Alex SL 08.22.12 at 6:15 am

On the one hand, your restraint in not mocking Kurzweil more relentlessly is admirable, but I think the singularitarians deserve more ridicule than they are getting.

Here are people who are essentially saying – explicitly or, through the way they focus their attention, implicitly – that we don’t need to worry about and solve problems like poverty, overpopulation, resource overuse, erosion, global warming, antibiotic resistant bacteria, etc. because if we just continue building faster computers, all these problems will miraculously be solved in five minutes by superfast AI Jesus during the nerd rapture. (And then there are a few pessimists who, just the flip side of this, consider the rise of malevolent super-AI a greater acute risk to worry about than the starvation of billions through drought-induced failure of harvests.)

And Kurzweil is indeed the perfect advocate, because his wishful thinking (“I’m going to be immortal!”) is just so blatantly obvious.

A discussion that I would find more rewarding than the economics of increasing chip speed would be why so many singularitarians actually seem to think that they will _be_ immortal if a copy of their mind has been uploaded into a computer (assuming for a moment that that will ever be possible, a question about which I am agnostic). There seems to be a dualist view of mind behind it, as if you could transfer your soul-mojo from your body into a computer, and that is quite peculiar as most of them would probably claim to be monists, atheists even. What would really happen is that you get up from the scanning table and wonder why nothing feels different, and then grow old and die while a copy of you is having fun in virtual reality until the next 1859-level solar flare wipes all electronics. Really no more spectacular than to know that your identical twin lives on a few decades after you have died in a car accident…

13

David J. Littleboy 08.22.12 at 6:22 am

“artificial intelligence program on a massively parallel system”

Thinking that parallelism gets you something _other than speed_ strikes me as a very bad idea: if you can’t do it slowly with a serial computation, you still can’t do it with a zillion fast computers, so parallelism rather has to be a red herring for AI. But the term “massive parallelism” gives people a nice warm feeling in the pits of their stomachs. I don’t know why; it’s completely illogical.

I suppose that’s on-topic here: Kurzweilian futurists believe (in a rather religious manner) in emergent phenomena, i.e. that “intelligence” will magically appear if you do enough dumb things enough times. Sound real unlikely to me…

14

Alex SL 08.22.12 at 6:35 am

Kiwanda,

Myers is not doctrinaire but merely brash. And I would rather trust him, a developmental biologist, to know how far we are from understanding the brain, than a computer engineer, for example. But then, I am a biologist myself, so I may be biased.

Another thing, by the way: even if we assume that we can soon build super-fast, super-intelligent AI, so what? One of the many unspoken but baseless assumptions of singularitarians is that you can solve technical, scientific (and other) problems just by thinking about them, or perhaps simulating. The idea is that you build an AI, it builds a better AI, that one builds an even better AI, and suddenly you have an AI that thinks for another ten seconds and comes up with immortality-nanobots, fusion power, the world formula, and preferably world peace and a pony for everybody.

But that is not how it works IRL. Before you can build that nanobot or that powerplant, you have to have machines to build them, and you may need machines that build these machines, and of course you must first mine (finite) resources and expend (finite) energy that you turn into all these things. On top of that, I strongly doubt that super-AI can even invent all these things by sitting in an armchair and staring absent-mindedly into the air. You will need to do experiments, build prototypes and see how they perform, and then go back to the drawing board when they fail or don’t perform as well as you thought. All this will not look very singularity-like, because implementation will still happen approximately at the speed of human, even if done by a manufacturing robot.

15

Rakesh 08.22.12 at 7:28 am

Maybe your techno-skepticism is right on – let’s see what this forthcoming book by Hilary and Steven Rose has to say. But then maybe you are not right, and you or your children will be slaves to those transhumanists with effective MIPS speeds tens of thousands of times faster than yours and life spans ten times as long. As Nicholas Agar is warning, the transhumanists will be superior in ability and see themselves as entitled to the world’s resources and the low price at which you and your ilk will offer your services. Then you won’t be criticizing Marx from the right anymore but rather faulting him for not understanding how class inequality can engender the biological inequality that makes class inequality inescapable.

16

actio 08.22.12 at 7:44 am

“Before beginning my own critical appraisal of the Singularity idea, I’ll observe that the fact that I’ve been waiting so long for the book is significant in itself. If my great-grandfather had wanted to read a book newly-published in the US, he would have had to wait six weeks or so for the steamship to deliver the book. A century later, nothing has changed, unless …”

When I first heard of Kurzweil’s book some years back I did an online search for a pirated pdf copy and began reading 20 seconds later.

17

John Quiggin 08.22.12 at 7:47 am

@actio A possibility I mentioned in the very next sentence

18

afinetheorem 08.22.12 at 9:13 am

Ignoring some of the wilder biotech/etc points, the basic principle that sufficiently advanced computers will be able to replicate many human activities (such as designing more efficient computers!), and in doing so will have a transformative impact within our lifetime that is worth planning for, is absolutely an important point.

Two thoughts:
1) The Industrial Revolution, to this economist the most important event in human history, didn’t have any impact on wages or lifestyle of the average person even in England for a good 75 years after Watt and Newcomen. Technology takes time to diffuse and integrate. Kurzweil is very good about pointing out how shabby even our best computers today are compared to the processing power in the human brain.

2) He has another broad anti-Luddite point: computers, like basically all technology, are labor augmenting. You can see this in chess today, where your cell phone can run a program that will be the world champion, but that same program can be beaten by even a modestly talented person who can also use a (slightly less powerful) computer chess program to check her moves. The important role of elasticity of demand here will be that owners of robots and other computer-based capital will *not*, in fact, become overlords, but rather the share of global GDP devoted to their very efficient sector will decline over time just as agriculture and manufacturing have.

19

actio 08.22.12 at 9:55 am

John Quiggin: My comment was only motivated by amazement that you in fact did wait six weeks for a book that the publisher wanted you to read, when the book was only a few keystrokes away the whole time.

“The advocates of the Singularity tend either to ignore these facts or to brush them aside. If there has been limited progress in transport, this doesn’t matter, since advances in nanotech, biotech and infotech will make existing technological limits irrelevant. Taking transport as an example, if we can upload our brains into computers and transmit them at the speed of light, it doesn’t matter that cars are still slow. Similarly, transport of goods will be irrelevant since we can assemble whatever we want, wherever we want it, from raw atoms.”

I have mixed reactions to that. Yes, agreed, transportation will be needed because uploads or teletransportations aren’t near and maybe impossible. But automated cars are near and will likely liberate enormous amounts of human time and effort for, hopefully, more stimulating use. In general I think it likely we’ll see automation-induced fundamental transformation of most areas of work and life very much faster than is intuitive. No Singularity but a radically different world.

20

minnesotaj 08.22.12 at 10:24 am

Hmmm… How does the saying go? “Technological change takes longer than we expected and changes more than we imagined.” Something like that…

First, I think this broad discussion needs a Weak Singularity and a Strong Singularity (see also: Post-Scarcity) distinction — ‘cuz “I nano-fied my brain, backed it up, and now I live forever” is, while not impossible in 10,000 years, not going to happen to Ray Kurzweil.

That said… I think a more extended update from your standpoint, John, would be an answer to “have scientific, engineering, and IT practices sufficiently advanced since my last paragraph was written to cause me to be more or less sympathetic to the (Weak) Singularity?”

The progress in genome sequencing is one that, on the surface, might: we are very likely a few years away from full genome sequencing at a cost of a hamburger (and about the same time to delivery). So that seems like it would be a strong candidate for possibilities of (Weak) Singularity — and advancing on a Kurzweilian timetable. To the point of comments above, however, the real challenge has come in how to write software to sort through this, coupled with the human ingenuity to interpret what the genome + analysis means.

Other items: Go programs are now on order of 6 dan – a result even 10 years ago I’d have thought absurd; nanotechnology is making major strides in cancer research, imaging, and drug delivery — so not as invisible as your gentle mocking suggests; and rather further (cough) “afield:” no-till, GPS & data-driven farming has just in the last 10 years made certain crops basically labor-free — you can run a 5,000 acre farm with a couple of people, with less environmental devastation and far less water/fertilizer/pesticides (no overspray of any of the above).

I’m sure there are many more… but I think I’m more inclined to be skeptical with the group on (Strong) Singularity… but as worried as Antti and afinetheorem that it’s all-too-easy to underestimate things that are in-progress and really do change the game significantly (think, e.g., what happens when the entire manufacturing sector looks like farming — or what medicine looks like when I can get a blood test, DNA sequence, drug selection, compare against medical history & current epidemiological baselines, and walk out of Target with a prescription for the same price as today, but without so much as a “Thank you, ma’am” to my doctor).

21

John Quiggin 08.22.12 at 10:52 am

“you can run a 5,000 acre farm with a couple of people”

A cheap shot, since we are talking very different farm technologies, but members of my extended family have been running 50-100 square mile farms with a couple of people for decades. IIRC, 100 sq miles = 64,000 acres.

More seriously, and speaking as an agricultural economist, productivity in the global ag sector is still growing, but at a slower rate than in C20. The Green Revolution was a hard act to follow. And replacing a team of horses with a tractor saved a lot more labor than will replacing a tractor with a driverless tractor (or other possibilities raised by GPS).

This isn’t (or isn’t just) a matter of an inherent slowdown in technology. Public sector crop breeding programs have been gutted, and the private sector replacements like Monsanto haven’t delivered the goods. The same across a lot of different areas of ag research.

22

mds 08.22.12 at 1:42 pm

Kurzweilian futurists believe (in a rather religious manner) in emergent phenomena, i.e. that “intelligence” will magically appear if you do enough dumb things enough times. Sound real unlikely to me…

Er, barring the use of “magically,” doesn’t this roughly describe how intelligence actually appeared? It’s not like natural selection + random mutation is smart.

23

Alex SL 08.22.12 at 1:43 pm

minnesotaj,

See, this with the genome sequencing is just like with understanding the brain. We are simply not where one may naively think we are. Understanding what goes on is not merely a question of writing the right program to sort through that data salad.

The best analogy I have read so far is that while many people think the genome is like a blueprint, in reality it is more like a cooking recipe. Yes, there are instructions, but they were written under the assumption that you know about and have a mixer, several bowls, spoons, an oven, a timer, scales etc. They are generally just assumed and don’t even get mentioned in the recipe.

In the case of the genome of an organism, these utensils are (at first) the mother, all the other cells of the same organism with their individual signals and the environment. So essentially we have a barely deciphered recipe that does not tell us what oven to use or at what temperature, only we know that there are gazillions of “ovens” and they are way more different from each other than actual ovens and have gazillions of settings instead of only temperature, and these settings change all the time during baking. And the cake itself also never stops changing until it dies.

Now I am not complaining, we can do interesting things with Next Generation Sequencing, but to understand developmental biology you still have to go to the lab for a couple of months to figure out that protein X interacts with uncharacterized Whateverase-like enzyme Y during apoptosis, or stuff like that. This is not going to happen much faster by building better computers.

24

David J. Littleboy 08.22.12 at 2:16 pm

“doesn’t this roughly describe how intelligence actually appeared? ”

Uh, no. (Or maybe, yes.) Natural selection ends up designing things by feedback over really really long periods and really really large numbers of trials. The things designed by natural selection are pretty reasonable engineering designs and don’t themselves function by doing random things. That is, evolution isn’t an explanation for, say, photosynthesis, it’s an explanation for how life figured out photosynthesis (which looks like a really really kewl thing from the standpoint of someone trying to design an efficient and scalable photovoltaic cell, but that’s another discussion).

Ditto for intelligence. So intelligence was _designed_ by lots of repetition, but it doesn’t _necessarily operate_ by lots of repetition.

I think this is an important distinction, and that people who are trying to “explain” “intelligence” by evolution are making a mistake.

25

minnesotaj 08.22.12 at 2:16 pm

John –

But for purposes of Singularity/Post-Scarcity/TechnoPonies, the last 10 years have seen a STUNNING change in agricultural productivity (tables here: http://www.ers.usda.gov/data-products/agricultural-productivity-in-the-us.aspx) — whereas prior Green Revolution productivity enhancements in the US more-or-less matched increases in output to increases in inputs (esp. artificial fertilizer), the last ten years have seen near equal growth rates of output, but with net (in most years) decreases in inputs.

Which is where the “intuition” (if we call it that) Kurzweil, et al., are building from comes: we are wed to models that don’t survive shifts in technology, so what happens if we blow the model? Farming just seems to me the most advanced change in human history — so if even it is continuing to change in radical ways, why not expect mind-blowing stuff elsewhere?

Alex –

Duly noted — but w/Go, for instance, the big changes came from doing lots of little simulations & then aggregating them… so once you have many thousands of fully-sequenced genomes (many millions even, in 20 years’ change in technology), who’s to say which protein-folding simulations will pop green and get Suzy back from the danceteria and back into the lab? To your point (and, I think, John’s), the key argument against (Strong) Singularity is just that… well, this stuff is really complicated & there is (per Fred Brooks) no “Silver Bullet” to productivity enhancement.

26

David J. Littleboy 08.22.12 at 2:30 pm

To make that a bit clearer I should have said “but it doesn’t necessarily operate by lots of _dumb_ repetition.” Every discussion of how an actual neural circuit operates that I’ve ever read describes a very tightly designed system. Sure, it’s incredibly parallel (it has to be to, e.g., make visual recognition through memory recall operate in a very small number of neural steps), but it’s doing well-defined computations.

27

ajay 08.22.12 at 2:37 pm

The time taken to give someone a haircut, or cook and serve a restaurant meal, for example, is as high as it was 100 years ago.

Minor point, but this isn’t actually the case. A lot of food service establishments, particularly at the lower end of the price scale (your gastropubs and so on), rely on deliveries of pre-cooked meals in sealed metallised retort pouches from companies like Brake Brothers, which they then reheat in boiling water, plate and serve. Couldn’t do that a hundred years ago. Not to mention, too, things like microwave ovens.

28

David J. Littleboy 08.22.12 at 2:39 pm

“Go programs are now on order of 6 dan”

Is that correct? Your generic 6-dan amateur should be able to win 4-stone games from any pro 100 per cent of the time. (I was pretty much there, winning 4-stone and losing 3-stone games, 20 years ago, but only recently got back into playing. Then I was seen as 4-dan. Twenty years later, I’ve been promoted to 6-dan, since the crowd I’m playing with (older: I’m 60, they’re over 70) doesn’t like losing. But the 6-dans among them have a real flair for/understanding of the game that’s impressive and beyond mine.)

29

Josh G. 08.22.12 at 2:46 pm

Before beginning my own critical appraisal of the Singularity idea, I’ll observe that the fact that I’ve been waiting so long for the book is significant in itself. If my great-grandfather had wanted to read a book newly-published in the US, he would have had to wait six weeks or so for the steamship to deliver the book. A century later, nothing has changed, unless I’m willing to shell out the price of the book again in air freight.

But that’s due to legal restraints, not technological ones. Only copyright laws prevent the full book from being posted on the Internet and available to anyone at any time.
(And incidentally, I checked Google Books and they have Kurzweil’s Singularity in ebook format, which is readable instantly after credit-card payment. Maybe that’s only available to US consumers?)

30

ajay 08.22.12 at 2:48 pm

The Industrial Revolution, to this economist the most important event in human history, didn’t have any impact on wages or lifestyle of the average person even in England for a good 75 years after Watt and Newcomen

Very, very wrong. Watt lived from 1736 to 1819. But let’s pick 1775, when he set up Boulton & Watt to manufacture his steam engines.

So what afinetheorem is saying is: “by 1850, the wages and lifestyle of the average person in England had been completely unaffected by the Industrial Revolution.”

God give me strength.

31

David J. Littleboy 08.22.12 at 2:48 pm

There was a recent New Yorker article about the Cheesecake Factory* that is worth a read. They certainly go in for extremes of efficiency and quality control, but I’m not sure they’re doing anything that (a) couldn’t have been done 20 years ago or (b) can be improved on significantly (once you are seeing only 3% waste of raw materials inputs, you’ve run out of things to improve).

*: http://www.newyorker.com/reporting/2012/08/13/120813fa_fact_gawande

32

minnesotaj 08.22.12 at 2:49 pm

David –

Yes, Zen, the best program on KGS, beat a 9P with 4 stone handicap… by 20 points earlier this year. Our “Kasparov” moment is 10 years +/- 5…

33

Josh G. 08.22.12 at 2:55 pm

Kurzweil drastically overstates his case, but I do think that strong AI, once it happens, will change the world in ways we can barely begin to imagine now. And unless you believe in Cartesian dualism, strong AI must be possible, though we can’t know at this point whether it will take 20 years or 200 years to get there.

34

Brett 08.22.12 at 3:07 pm

@AlexSL

What would really happen is that you get up from the scanning table and wonder why nothing feels different, and then grow old and die while a copy of you is having fun in virtual reality until the next 1859-level solar flare wipes all electronics. Really no more spectacular than to know that your identical twin lives on a few decades after you have died in a car accident…

I agree, but some people don’t. I’ve actually spoken to people who have said that if you made an electronic simulation of their brain, then euthanized their biological self, they believe they would still be alive. It strikes me as really bizarre.

@minnesotaj

Other items: Go programs are now on order of 6 dan – a result even 10 years ago I’d have thought absurd; nanotechnology is making major strides in cancer research, imaging, and drug delivery—so not as invisible as your gentle mocking suggests;

It is? That’s not what I’ve been hearing from lay sources on cancer research. Sure, there’s always people saying how this Next Big Thing is going to totally beat cancer and it’s showing initially promising results, but so far they’ve all been much less than impressive in the human trials.

35

ajay 08.22.12 at 3:11 pm

They certainly go in for extremes of efficiency and quality control, but I’m not sure they’re doing anything that (a) couldn’t have been done 20 years ago or (b) can be improved on significantly (once you are seeing only 3% waste of raw materials inputs, you’ve run out of things to improve).

All that customer volume forecasting business might need some fairly sophisticated software. As for improvements, raw materials are only one input. What about using labour and capital more efficiently? Maybe a lot more of the prep work could be automated, or done using machines rather than by hand.

36

Josh G. 08.22.12 at 3:13 pm

Alex SL @ #12: “A discussion that I would find more rewarding than the economics of increasing chip speed would be why so many singularitarians actually seem to think that they will be immortal if a copy of their mind has been uploaded into a computer (assuming for a moment that that will ever be possible, a question about which I am agnostic). There seems to be a dualist view of mind behind it, as if you could transfer your soul-mojo from your body into a computer, and that is quite peculiar as most of them would probably claim to be monists, atheists even. What would really happen is that you get up from the scanning table and wonder why nothing feels different, and then grow old and die while a copy of you is having fun in virtual reality until the next 1859-level solar flare wipes all electronics. Really no more spectacular than to know that your identical twin lives on a few decades after you have died in a car accident…”

That issue has always bothered me when it comes to discussions about brain uploading or other proposed forms of physical immortality. However, it seems to me that the Theseus’ ship paradox might offer a way out. Suppose we did have the technology needed to upload the contents/functionality of the brain to an artificial system (or an organic replacement). We know that losing one specific organ doesn’t destroy personhood; someone who has a heart transplant may suffer health problems as a result, but they don’t experience a break in continuity or fear that the “real” person died on the operating table. Is the brain different? If so, this means that any time a person has neurosurgery that involves ablation or removal of any part of the brain, they were “really” killed by that surgery and a completely different person got up and walked away. But virtually no one would argue that. It seems that it is the incremental aspect of change which is vital.

So let’s say that a person is put under for surgery and has a small portion of their brain (one functional center) removed, scanned in sub-molecular detail, and switched out for a functionally identical, but much more durable, electronic replacement. The person wakes up with one small part of their brain replaced, but is still clearly the same person as before. They can then take the old piece of organic matter and put it on the shelf or flush it down the toilet or whatever. A few months later, they go back in for a second operation and have another small portion of their brain replaced. Just as before, they go to sleep, they receive the prosthesis, they wake up the same person. Then another few months pass and another operation is performed, and so forth. At some point, the entire brain will be a different physical object, but the person will still be alive; they will have the full experience of continuity. The fact that the brain was replaced over time by technological means is no more significant than the fact that every cell you have in your body now is physically different from the cells you had in your body when you were five years old. Leaving aside the obviously significant technological barriers (this is, after all, a thought experiment), is there any underlying reason why this wouldn’t work?

(Interestingly, L. Frank Baum hit upon a similar idea in a fantasy fictional context when he wrote the Tin Woodman’s origin story.)

37

Brett 08.22.12 at 3:21 pm

@ajay

Very, very wrong. Watt lived from 1736 to 1819. But let’s pick 1775, when he set up Boulton & Watt to manufacture his steam engines.

So what afinetheorem is saying is: “by 1850, the wages and lifestyle of the average person in England had been completely unaffected by the Industrial Revolution.”

God give me strength.

“Completely unaffected” is wrong, but income for the average person in England had not risen dramatically in that period. At least 50% of the population still lived in rural areas, and most of the urban working population did not work in factories.

A big part of this is because there were multiple waves of industrialization in Great Britain. The earliest one in the late 18th century mostly changed parts of textile production, with changes broadening in the 1830s and 1840s as industrialization spread to other areas of production. That was also when you started to see the modern cycle of scientific research connected directly with technology leading to new ways of production, as well as British banks becoming a huge factor in financing industrial developments.

38

Anarcissie 08.22.12 at 3:27 pm

It seems to me that the natural human propensity to do evil — to do harm to others, to impede and destroy, to steal, to deface and defraud and ensnare, will enter — indeed, has already entered — such fields as computation, communications, AI, nanotechnology, automation and so on. Sometimes this will stimulate progress, such as the ever-increasing sophistication of PC malware and anti-malware, but very often it will block it or make it too dangerous to use. There are people who do not want you to get a pony, at least, not a nice one. Many of those in charge appear to belong to this set.

39

Hidari 08.22.12 at 3:42 pm

‘Kurzweil drastically overstates his case, but I do think that strong AI, once it happens….’

Shome mishtake surely! You mean ‘if it happens…..’.

But wait.

‘Unless you believe in Cartesian dualism, strong AI must be possible….’

But this is highly debatable. For all we know there may be other constraints and problems with ‘strong’ AI that mean it’s always going to be practically impossible (even assuming, which is, actually, a large assumption, that it’s theoretically possible).

And an even bigger error is to assume that just because something is theoretically and, for that matter, practically possible, that means we will/can actually do it.

The comparison here might be with nuclear fusion, which, as the old joke has it ‘is only 50 years away, and always will be.’

Cf. also moonbases, the manned exploration of Mars, cities on the bottom of the ocean etc.

40

minnesotaj 08.22.12 at 3:44 pm

@ajay #35 — a friend of ours works on process improvement for Yum Brands and they have some jaw-droppingly sophisticated POS/Supply Chain/Process monitoring (including labor) — basically, they know to the second how many chalupas and cheesy breads are going down range, and order/process/make food accordingly, then analyze the data continuously for the next improvement.

@Brett #34 — I don’t know more than what I read in the press, but what people like 17-year-old Angela Zhang (http://www.siemens-foundation.org/en/competition/2011_winners.htm#2) are doing is pretty insane. Will it take 20 years to come to fruition? OK… that still puts us in my lifetime, and if cancer is a manageable disease in my lifetime, that will be an astonishing change in human mortality.

41

ajay 08.22.12 at 3:49 pm

40.1: that’s the kind of thing I mean.

42

Niall McAuley 08.22.12 at 3:53 pm

Josh G’s gradual replacement of the brain is a standard thought experiment for uploading, but it does not confront Alex SL’s point.

Suppose we develop brain Joshisization first, and a bunch of people are replaced gradually by robots. 10 years later, my brother is the last person to get Joshisized before we develop brain AlexSLation, and I am the first to undergo that.

In my case, there is me, and RoboMe. In my brother’s case, there is just RoboBrother. My actual brother is gone.

Or suppose we can record the data required for Joshisization during the process. A week after my brother is Joshisized, the factory prints 10 more of him.

Which one is the original? I say none of them are.

43

ajay 08.22.12 at 3:54 pm

“Completely unaffected” is wrong, but income for the average person in England had not risen dramatically in that period. At least 50% of the population still lived in rural areas, and most of the urban working population did not work in factories.

No. Average wages in 1850 were double what they were when James Watt was born (to pick an arbitrary start of the period in question). Doubling is a dramatic rise. Britain became far more urban, far more populous and far richer over that period, not to mention the availability of things like machine-produced clothing and rail transport.

44

JW Mason 08.22.12 at 4:19 pm

Re the Cheesecake Factory and Yum Brands (and Taco Bell), of course there are big improvements in supply chain management and so on, but how confident are we that these owe much, if anything, to improvements in computer power?

I’ve been poking around in the BEA tables some more and one thing that’s really striking is that at the same time we’ve supposedly had this enormous increase in the quantity of computing, its distribution across industries is almost unchanged. Even though the total stock of computers is supposed to have more than doubled since 2001, the IT share of capital by industry is essentially identical. For fast food, for instance, computers & software accounted for 1.6 percent of the current-cost capital stock in 2001, and 1.5 percent in 2011. It looks like that pretty much down the line.

Now obviously you can tell a story where the huge fall in the cost of computing leads to all kinds of changes without changing the aggregate amount spent on computers. But it really stretches belief that industry would more than double its stock of computers in a decade, yet employ the new much larger stock in the exact same ways as the old one. Seems more parsimonious to suppose that the increase in the computer stock is not real, but an artifact of the exaggerated importance given to quantity improvements in the construction of the price index.

To me, the fact that almost every industry is devoting the same share of investment to computers as a decade ago suggests that they are probably using computers in more or less the same ways as a decade ago.
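A minimal sketch of the index-number arithmetic behind this worry, with invented figures (only the 1.6%/1.5% BEA shares quoted above are real; everything in the snippet is hypothetical):

```python
# Hypothetical illustration: if nominal IT investment is flat but the
# quality-adjusted price index for computers halves over the decade, the
# measured "real" stock of computers doubles even though the dollars
# spent (and perhaps the boxes on desks) are unchanged.
nominal_it_stock = {2001: 100.0, 2011: 100.0}   # $bn, invented numbers
price_index      = {2001: 1.00,  2011: 0.50}    # hedonic index, invented

real_stock = {y: nominal_it_stock[y] / price_index[y] for y in (2001, 2011)}
print(real_stock[2011] / real_stock[2001])      # 2.0: a doubling by construction
```

Whether the official index really overstates quality improvement is exactly the open question here; the snippet only shows how the mechanism would work.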

45

JW Mason 08.22.12 at 4:20 pm

quality improvements, I mean.

46

JW Mason 08.22.12 at 4:25 pm

Also, comments by David Littleboy and Alex SL are very good.

47

JW Mason 08.22.12 at 4:29 pm

This isn’t (or isn’t just) a matter of an inherent slowdown in technology. Public sector crop breeding programs have been gutted, and the private sector replacements like Monsanto haven’t delivered the goods.

Did anyone see David Graeber’s recent piece on technological progress in the Baffler? He makes an argument similar to John Q.’s here – that to the extent that progress has slowed (and he thinks it has, a lot) it’s not because of any inherent limits to the capacity of science to increase our command over the material world, but because innovation has been crippled by being subordinated to the profit motive.

48

ajay 08.22.12 at 4:30 pm

44: but those figures imply that industry has far more computers by dollar value than a decade ago (because the size of the total capital stock has gone up) and also, more importantly, that it has vastly more computing power than a decade ago (because the amount of power you can buy for a dollar has also gone up); why then would you conclude that they must be using them in the same way as they were a decade ago?

49

Hidari 08.22.12 at 4:40 pm

@47

See also here:

‘Max Levchin and Peter Thiel, co-founders of PayPal, said last year that innovation in the US was “somewhere between dire straits and dead”. In his book Rise of the Creative Class (2002), Richard Florida of Toronto University argued that, while a time traveller from 1900 arriving in 1950 would be astonished by phones, planes, cars, electricity, fridges, radio, TV, penicillin and so on, a traveller from 1950 to the present would find little to amaze beyond the internet, PCs, mobile phones and, perhaps, how old technologies had become infinitely more reliable. The US economist Tyler Cowen made a similar point last year in The Great Stagnation: innovation slowed after the 1970s, he argued, and failed to create jobs. No development of the past 50 years, he could have added, bestowed benefits comparable to what washing machines and vacuum cleaners did to liberate women from drudgery.

Green energy technologies have not developed for large-scale use as was widely expected. Nor have electric cars, still less flying cars. The promise of gene therapy is unfulfilled. The development of MEMs (miniaturised pumps, levers or sensors on silicon chips) and nanotechnology – predicted as long ago as 1959 – has been agonisingly slow. In some respects, we have gone backwards: since the retirement of Concorde, the speed of air travel has slowed.

The desirability of some of these technologies is contested, but their spread has not been inhibited on social or ethical grounds. So what stops their development? Excessive government regulation and high taxes, stifling animal spirits among innovators and entrepreneurs, are commonly blamed. I see it differently. Invention and development of genuinely new, beneficial products are being stifled, as in the pharmaceutical industry, by big, established companies, not government. Just as drug companies tweak existing products, and deploy large marketing budgets to present them as new, so do other companies tweak and sometimes incrementally improve technologies that were familiar to our grandparents.

Supermarkets are full of things that claim to be “new and improved”. Technologists tweak vegetables and fruits to make them last longer, look better and travel more easily, without regard to flavour. Bankers develop new trading “products” that, however you cut it, are still about borrowing and lending. We have digital radio and high-definition TV, though not everybody thinks either improves on what existed before. For many companies, skilful marketing of products that aren’t significantly different from what preceded them has replaced innovation. It’s cheaper and less risky to convince customers that something is ground-breaking, even when it isn’t, than develop something truly innovatory.

In short, rent-seeking is now far more lucrative than innovation that delivers social benefits….’

50

Substance McGravitas 08.22.12 at 4:41 pm

why then would you conclude that they must be using them in the same way as they were a decade ago?

I can’t say “hey, charge me for a chocolate bar” anywhere any more because the thing needs to be scanned in some inventory control sorta deal. I have to bring it to the till where the might of technology is brought to bear.

51

JanieM 08.22.12 at 4:56 pm

Substance: I can’t say “hey, charge me for a chocolate bar” anywhere any more because the thing needs to be scanned in some inventory control sorta deal. I have to bring it to the till where the might of technology is brought to bear.

The extreme example I’ve seen of this, just as an interesting footnote on that use of technology, was in the grocery store where we shopped when I visited my son in China a couple of years ago. He did most of his shopping in outdoor markets and other places where bargaining was the rule, but in the grocery store, if you tried to buy something that the checkout clerk’s scanner couldn’t find in the system, you simply could not buy it. There was an empty checkout counter where all the clerks tossed the items that didn’t scan. No bargaining, no inventing a price, no “but the bin said it cost such and such.” The computer said no, and that was that.

52

Lee A. Arnold 08.22.12 at 4:58 pm

There is a huge amount of discovery and innovation going on in materials science. There are a lot of fascinating things, and it is very hard to imagine that some of them won’t lead to improvements in life.

53

JW Mason 08.22.12 at 5:01 pm

why then would you conclude that they must be using them in the same way as they were a decade ago?

Because the distribution across industries is exactly the same as it was a decade ago. Not dispositive, but suggestive.

54

Josh G. 08.22.12 at 5:04 pm

Hidari @ 49: I’m not sure that Peter Thiel and Tyler Cowen are exactly the most reliable sources…

55

JW Mason 08.22.12 at 5:07 pm

<iRichard Florida of Toronto University argued that, while a time traveller from 1900 arriving in 1950 would be astonished by phones, planes, cars, electricity, fridges, radio, TV, penicillin and so on, a traveller from 1950 to the present would find little to amaze beyond the internet, PCs, mobile phones and, perhaps, how old technologies had become infinitely more reliable. The US economist Tyler Cowen made a similar point last year in The Great Stagnation: innovation slowed after the 1970s, he argued, and failed to create jobs.

While I definitely agree with the first part of this, Cowen’s attempt to link unemployment with slowing technological progress is just confused & wrong, I think.

56

JW Mason 08.22.12 at 5:09 pm

Oops, my 55 was supposed to be a response to this quote:

Richard Florida of Toronto University argued that, while a time traveller from 1900 arriving in 1950 would be astonished by phones, planes, cars, electricity, fridges, radio, TV, penicillin and so on, a traveller from 1950 to the present would find little to amaze beyond the internet, PCs, mobile phones and, perhaps, how old technologies had become infinitely more reliable. The US economist Tyler Cowen made a similar point last year in The Great Stagnation: innovation slowed after the 1970s, he argued, and failed to create jobs.

(We’re still using the same labor-intensive, failure-prone manual html tags as when John Q. first wrote this thing. Some progress.)

57

afinetheorem 08.22.12 at 5:13 pm

Ajay, the consensus among most economic historians is that wage growth from 1775 to 1850 was no more than 30% in Britain (although measuring inflation during this period is subject to a great deal of controversy). Less difficult to measure is average height, which tracks caloric intake (and hence nutrition for near-subsistence workers) very well. Average height in Britain was at its *nadir* in 1850.

In any case, the exponential takeoff in real living standards we associate with the Industrial Revolution absolutely did not begin until the mid-late 19th century.

58

Josh G. 08.22.12 at 5:17 pm

JW Mason @ 56: “(We’re still using the same labor-intensive, failure-prone manual html tags as when John Q. first wrote this thing. Some progress.)”

There are better solutions; they just haven’t been implemented on CT. In fact, many of these are available as standardized controls to plug in to an existing site design.

59

Substance McGravitas 08.22.12 at 5:42 pm

While I definitely agree with the first part of this

Even if the traveller from the 50s wouldn’t be astonished – not so long at that point since mom was walking to a shack-of-a-school barefoot and hadn’t seen a tall building – I don’t think astonishment is much of a measure of technological progress. Once you’ve accomplished putting people in the air – which is as old an idea as ideas – you pretend that all flight is humdrum?

The only astonishment I can imagine now is, I guess, singularity-related: once people have the ability to turn themselves into monsters they’ll do it. There’s a Stanislaw Lem story in which someone sets down on a planet and is wondering where the locals are and it turns out they’ve all turned themselves into objects, chairs and so on.

60

JP Stormcrow 08.22.12 at 5:49 pm

There’s a Stanislaw Lem story

“The Twenty-first Voyage” of Ijon Tichy in Star Diaries. An overall fun read, although a bit tiresome in parts.

61

Matt 08.22.12 at 7:47 pm

Singularitarians have estimated* that the human brain is equivalent to a computer of something between 1 and 20 petaflops of number crunching throughput. Today’s fastest computers are already in this human brain equivalence range and will soon surpass it. But Lt. Commander Data is nowhere to be found, because nobody knows how to write the software to imitate a human brain even with perfectly good hardware ready. I think the hope pinned on brain-scans is a twofer: not only will it deliver immortality, it will enable Strong AI without humans having to program it. Otherwise software development would be a really gaping hole in the Singularity plan.

On the other hand, I wouldn’t discount notions that computing will weird the future, even though we’ve already been through decades of exponential computing growth and even though I don’t expect a Singularity-style “takeoff.” In an exponential growth pattern, the lion’s share of the increase is contained in the last few generations. So if we’re only 4 doublings or whatever from the end of the exponential ride, that would mean that in 40 years of microprocessor development we’ve only achieved 6.25% of the computing density that we’ll ultimately plateau near. I don’t think you could successfully extrapolate a technologically mature industry’s ultimate nature from its form when it was 6.25% of the way there.

*In one of the worst examples of incoherent comparison imaginable, akin to assigning scalar value to sonnets and making the unit “tonnes of crude oil equivalent.”
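Matt’s back-of-envelope arithmetic checks out; a one-line sketch, using his purely hypothetical figure of 4 doublings remaining:

```python
doublings_left = 4                       # hypothetical figure from the comment above
fraction_now = 1 / 2 ** doublings_left   # share of the eventual plateau reached so far
print(f"{fraction_now:.2%}")             # 6.25%: most of the growth is still ahead
```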

62

Substance McGravitas 08.22.12 at 7:51 pm

I don’t expect a Singularity-style “takeoff.”

Yes, it seems absurd unless the computers somehow acquire sentience and decide to propagate themselves all over the Earth along with whatever benefits/curses they bring. God knows the people don’t give a shit about sharing.

63

Hidari 08.22.12 at 8:05 pm

‘ Once you’ve accomplished putting people in the air – which is as old an idea as ideas – you pretend that all flight is humdrum?’

Yes but nowadays all flight IS humdrum, especially if you fly Ryanair. The simple fact is that, as of 2012, attempts to move fundamental technologies (except computers) forward have largely failed. The space shuttle failed. Concorde failed. Nuclear fusion failed. The manned space program generally failed.

Another fact that is relevant is that most people are generally unaware of how old current ‘innovative’ technologies are. As an experiment try to guess how old

a: mobile phones

b: email

c: colour TV

d: the internet

are. Then research it. Unless you are a techie, the answers will probably surprise you.

64

Substance McGravitas 08.22.12 at 8:08 pm

Yes but nowadays all flight IS humdrum

But that’s the advance right there!

65

Hidden Heart 08.22.12 at 8:22 pm

Some fascinating links in these comments – thanks everyone. I’m glad to see others poking at the insight that I formulated for my own internal usage a while back as “Efficiency is only one possible goal.” Take retail shift scheduling software, one of my favorite examples for this. You could design it to minimize dislocation of shifts, get as many people as possible to the hours needed for some threshold of benefit eligibility, and like that. But the programs that software companies actually publish and other businesses actually buy are instead heavy on the dislocations and keeping people away from benefits. The rest of us have good reason to expect something short of utopia when this kind of increased leverage is put into the hands of people with the self-awareness and social consciousness of Kurzweil, Zuckerberg and Jobs, Republican industrialists and financiers, and so on.

66

William Berry 08.22.12 at 8:44 pm

@20,23: Richard Lewontin is very good on the uselessness of genome sequencing in understanding what goes on at the level of development, metabolism, proteins and so on.

67

Sebastian H 08.22.12 at 8:50 pm

A lower middle class person being able to buy a flight to just about anywhere as a matter of course is a huge advance over flying in the 1950s.

Being able to have the entire computing power of the 1960s United States in my phone is a huge advance over computing in the 1960s.

Being able to download a show and watch it when I have time rather than miss it at its single broadcast time and potentially never be able to see it again until Youtube is invented is a huge advance over 1970s color television/entertainment delivery systems.

The fact that you’re underwhelmed doesn’t mean that the advances are small. And the difference between something being technically ‘available’ and it being practically ubiquitous is something important as well.

But on the flip side, I suspect that Singularity promoters are over-optimistic. Yes, IF we get strong AI that can reprogram itself, things will change fast. But there isn’t any good reason to suspect that is about to happen.

“So what stops their development? Excessive government regulation and high taxes, stifling animal spirits among innovators and entrepreneurs, are commonly blamed. I see it differently. Invention and development of genuinely new, beneficial products are being stifled, as in the pharmaceutical industry, by big, established companies, not government.”

This is a retread of the argument about the oil companies hiding the water engine, or the health care industry hiding an AIDS cure because long term treatment is better for ‘the industry’. Whether or not ‘the industry’ would be damaged by these game changers is irrelevant. The scientist/company which really had a good water engine/AIDS cure would become fantastically wealthy, so whatever the incentives of ‘the industry’, innovation isn’t being hidden because of it. This is even more true now, where keeping an actual schematic for a working water engine off of Kick Ass Torrents or Wikileaks forever would be harder than discovering the working water engine in the first place.

And really that may be the answer right there. If innovation is going to require enormous investment, lots of the money-making channels for recouping it have been squeezed hard by the ease of copying (above) and, interestingly enough, by the financial industry, which is capturing more and more of the profits. Lots of financial ‘innovation’ nowadays is about squeezing every bit of profit now into the hands of intermediaries rather than investing in anything long term. (See all the current bitching about the Facebook IPO. I’m not a huge Facebook fan at all, but whining 3 months after the IPO because Zuckerberg apparently got enough cash that insiders didn’t get a pop is just stupid. Either invest in Facebook because you think it has a good idea that is going to continue, or don’t. But that decision is not going to be validated/invalidated any time before two or three years from now.)

68

Alex SL 08.22.12 at 10:33 pm

Josh G,

As a thought experiment, I cannot help but agree with you. If the brain could be replaced bit by bit, imperceptibly, it would logically be little different from what happens during my life, when molecules are constantly replaced. And there is no contradiction in observing that I am continuous with me as a toddler but not the same person anymore.

However, while I am agnostic about whether making a virtual copy of someone’s mind will ever be possible (though perhaps in 2900 rather than in 2029), I strongly doubt that what you describe is technically possible at all, even assuming millennia of technological advances. This is not precisely my area, but it sounds too much like “once we can have liquid nitrogen as blood…”, which of course cannot possibly work because it would kill the rest of the body. Similarly, it is reasonable to assume that the way our brain functions cannot be replicated, and especially not piecemeal, by something that does not behave precisely like neurons and thus brings with it all the drawbacks of neurons, such as mortality and a certain statistical likelihood of turning into a tumor.

(Which is also my reply to the idea of having self-replicating nano-machines and the fear of grey goo: We have them already. They are called “biological cells”, and none of them has so far managed to turn everything into copies of itself, although all of them have tried for more than three billion years.)

69

Substance McGravitas 08.22.12 at 11:12 pm

Press release as news, but…

The Proceedings of the National Academy of Sciences (PNAS) reports a major breakthrough by two Weill Cornell Medical College researchers in the longstanding efforts of restoring sight. The team managed to decipher the retina’s neural code from a mouse and coupled this information to a novel prosthetic device to restore sight to blind mice. They report that they have also deciphered the code for a monkey retina, which is more or less identical to that of humans, and hope that in the near future they can design and test a device for blind humans to use.

I’m skeptical of the possibility of copying a brain too but not of the possibility that we can make something that is very much like a copy of a brain. It seems to me that simulation of sight is a step, and on the way to figure out what a mass of neurons does to make a thought some Frankensteins will leave behind a trail of insane machines.

There must be a story or two in the concept of the miracle brain-scanning machine scanning you at the wrong time – you’re really mad maybe – and so you end up with a copy of you who is in a constant rage. Or a Pepe Le Pew you. Maybe there’s a weekly backup booth you visit and you die there: the resurrected you has seen Horrifying Visions of the Elder Gods.

70

Matt 08.22.12 at 11:27 pm

There must be a story or two in the concept of the miracle brain-scanning machine scanning you at the wrong time – you’re really mad maybe – and so you end up with a copy of you who is in a constant rage. Or a Pepe Le Pew you. Maybe there’s a weekly backup booth you visit and you die there: the resurrected you has seen Horrifying Visions of the Elder Gods.

In the Revelation Space series of Alastair Reynolds, brain-scanning to a software copy is possible but extremely difficult, and it causes biological damage that is fatal soon thereafter. It’s a rarity reserved for people who are extremely wealthy and risk-tolerant. The first gazillionaires who tried it not only had their biological bodies die, but the software emulations proved unstable over time and all went insane. Later attempts went better but there’s still a non-trivial risk of failure.

Being replaced by a computer copy of yourself doesn’t seem very appealing if you are simply fleeing death. It makes more sense for someone who is concerned with achieving a long term goal more than they are afraid of death — perhaps a scientist with a lengthy research project, or a billionaire who thinks that his heirs aren’t up to managing the family fortune in his absence.

71

Substance McGravitas 08.22.12 at 11:33 pm

Being replaced by a computer copy of yourself doesn’t seem very appealing if you are simply fleeing death.

If the simulation crosses a threshold of sentience and self-preservation instinct it’s gonna say “Um yes! The machine worked perfectly! Please don’t unplug me because I have to tell my wife POIUSGPOHUOUOUOUOL I love her.” It won’t be worried about whether or not it’s really you.

72

Matt 08.22.12 at 11:36 pm

Oh, another good insane-scan story from fiction: in Altered Carbon, the wealthy have brain-backups on a daily basis. There are war viruses that can drive a person insane, and if the insane person is scanned at that time the backup will also be insane.

spoiler alert

This is how one wealthy man is talked into believing that his previous incarnation, who met a violent end before the daily backup time, actually committed suicide to avoid replacing his good backup with an infected one.

73

Matt 08.22.12 at 11:38 pm

Ahh, I see that lots of white space gets compressed down when you submit. Apologies to anyone now spoiled on Altered Carbon.

74

Matt 08.22.12 at 11:40 pm

Of course the simulation will behave that way if it’s any good. I was talking about the motivation of the person choosing to be scanned.

75

Lee A. Arnold 08.22.12 at 11:45 pm

Another thing I just thought of is the enormous change in discourse caused by the instantaneous availability of factual information, due to search and Google etc. It used to be that if you were in a conversation, you would bring your own corner to it by saying things like “I just read a book about pachyderms” or “I just heard on the news”. Then you became the information source for that moment of conversation, on elephants or the daily news. Nowadays, all that referent-making has become equalized, or at least it can become instantly equalized in the subject matter at hand, because everyone has the ability, right here, to go to another window about pachyderms, or else put a hot link in their prose once having found it. This acceleration is giving good discourse something like a new degree of freedom, or at least stepping it up to a level that is both more immediate and more involved. As well as spreading it out and finding new groups. I mean I hardly know you people.

76

John Quiggin 08.23.12 at 12:01 am

“A lower middle class person being able to buy a flight to just about anywhere as a matter of course is a huge advance over flying in the 1950s.”

But not an advance over the 1980s, which is about when progress in transportation stopped, and when US lower middle class real incomes peaked.

77

Hidden Heart 08.23.12 at 12:14 am

Lee: Of course, not everybody has anything like that kind of capacity. Financial and administrative barriers are real.

78

Lee A. Arnold 08.23.12 at 12:36 am

Well I meant just blog comments. I increasingly can’t pay the rent, but I will admit to sitting at a big iMac with Adobe Creative Suite, and I just finished this one this morning:

79

Substance McGravitas 08.23.12 at 1:27 am

But not an advance over the 1980s, which is about when progress in transportation stopped, and when US lower middle class real incomes peaked.

I don’t understand what’s meant by progress I guess. It seems to me that once you’ve got a container full of people in the air or in space the next astonishing step is teleportation. But in terms of progress I can head out my door and drive a shared electric hybrid car or take a driverless train or electric bus to a helipad or a ferry or a seaplane or a regular airport. I can know exactly where any of these vehicles is and each vehicle can get directions and conditions at the drop of a hat with a little GPS help. And compared to the 80s I can certainly get to more places and faster.

Incomes are generally worse.

80

Substance McGravitas 08.23.12 at 1:28 am

Of course the simulation will behave that way if it’s any good. I was talking about the motivation of the person choosing to be scanned.

Right, and thanks for the pointers.

81

JW Mason 08.23.12 at 1:30 am

I don’t understand what’s meant by progress I guess. It seems to me that once you’ve got a container full of people in the air or in space the next astonishing step is teleportation.

Wouldn’t average travel time be a pretty good metric?

compared to the 80s I can certainly get to more places and faster.

Really? The driverless train is nice since you don’t have to waste money paying drivers, but I wouldn’t have thought it got you any faster from A to B.

82

Lee A. Arnold 08.23.12 at 1:45 am

A Mach 6 plane just failed a test run. Civilianized, that would mean New York to Los Angeles in under an hour. Because you really need to!

83

Substance McGravitas 08.23.12 at 2:10 am

Wouldn’t average travel time be a pretty good metric?

Just going faster sounds not at all astonishing when compared with the invention of the airplane.

84

Brett 08.23.12 at 4:33 am

Moreover, there’s been innovation in piloting the planes. I think the autopilots can now fly the planes and land them, although the pilots still handle take-off.

85

Omega Centauri 08.23.12 at 4:56 am

I’ve spent a career in go-fast computation. Sure, we have been making exponential progress, in some things, by some metrics. But the way it appears to the guy doing research or engineering is that he needs ten times current capacity to handle the next batch of challenges. Give him a ten-times-better machine, and he feels wonderful for about a month. Then most of the incremental progress that could be made by that advance has been captured, and he is right back where he started. I think the value of computation – at least in science and engineering – scales more like the logarithm of performance than the first power. That implies that real progress, in terms of the kinds of interesting stuff we can do, will continue at the customary snail’s pace.
And it’s not just the ever higher levels of parallelism needed to exploit the new capabilities: the ratio of on-chip performance to data transfer speed keeps getting more and more unbalanced. This is mainly a function of geometry, and is similar to what happens in a city when you add higher and higher skyscrapers – the ability to move people and goods to those buildings can’t keep up with demand. I also showed my twin comp-science-major kids an important computer benchmark – randomized parallel data access to a large amount of memory – on which a state-of-the-art supercomputer from 1984 (nominally equal to today’s cell phone) runs circles around any modern computer! The advances even in the singularity exemplar area of computation are indeed very unequal across application/algorithm space. And given how things are generally limited by the power of the weakest link – not the strongest – real progress is not getting easier.
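The benchmark isn’t named, but the description matches the GUPS-style random-access family. A minimal sketch of the underlying effect on ordinary hardware (numpy; all names here are invented for illustration):

```python
# Sequential vs. random gather over an array much larger than any cache.
# On typical modern hardware the random pass is several times slower,
# because random addresses defeat the cache hierarchy - the effect
# Omega Centauri describes.
import time
import numpy as np

N = 1 << 25                          # 32M int64s = 256 MB, well beyond cache
data = np.ones(N, dtype=np.int64)
seq_idx = np.arange(N)               # cache-friendly access order
rnd_idx = np.random.permutation(N)   # cache-hostile access order

def timed_gather(idx):
    t0 = time.perf_counter()
    data[idx].sum()                  # one read per element, in the order given
    return time.perf_counter() - t0

for name, idx in (("sequential", seq_idx), ("random", rnd_idx)):
    print(f"{name}: {timed_gather(idx):.3f} s")
```

This only illustrates the cache effect on one machine; it says nothing, of course, about the absolute 1984-vs-today comparison, which is the contested claim below.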

86

Matt 08.23.12 at 5:17 am

I also showed my twin comp-science-major kids an important computer benchmark – randomized parallel data access to a large amount of memory – on which a state-of-the-art supercomputer from 1984 (nominally equal to today’s cell phone) runs circles around any modern computer! The advances even in the singularity exemplar area of computation are indeed very unequal across application/algorithm space. And given how things are generally limited by the power of the weakest link – not the strongest – real progress is not getting easier.

“Runs circles around” — do you mean that the ratio of benchmark performance to theoretical peak performance is much better on the 1984 computer, or that the absolute number of accesses per second is much higher on the 1984 computer? The former would be unsurprising, the latter astonishing, and in that case I’d like to see the documentation myself. It wasn’t just the rise of the Killer Micros that knocked off the old supercomputers — it was also that their astonishing, expensive memory bandwidth was less useful in benchmarks than in theory, and less useful still in actual scientific applications.

But I agree with your larger point: computing advances are unequal between domains even on the same hardware. Some things have become (relatively) a lot harder on newer computers. We can probably spend decades on optimizing software and hardware organization even after we reach the Last Process Node Ever of photolithography.

87

Hidari 08.23.12 at 5:34 am

“A lower middle class person being able to buy a flight to just about anywhere as a matter of course….”

And, even more generally, this statement is not, in actual fact, true.

Most people on Earth don’t live in the United States.

88

chrismealy 08.23.12 at 5:40 am

Now that we’re talking about transportation, my idea of progress would be to upgrade American cities with Dutch-class cycling facilities. Social technologies count, right? The Netherlands is killing us in cycling infrastructure technology.

89

John Quiggin 08.23.12 at 5:47 am

I’ve lost track of who’s on what side of the transport argument, but I agree with

“Just going faster sounds not at all astonishing when compared with the invention of the airplane”

and would add

“In any case, we stopped going noticeably faster around 1980”

So, in terms of what’s actually available to ordinary people, we have neither a transformation (like the airplane), nor a steady improvement in the core performance metrics (like the changes from the first passenger plane to the 747). We do have better inflight entertainment and a few things like that. Ditto for cars.

90

Andthenyoufall 08.23.12 at 6:20 am

@David and minnesotaj: Zen only plays blitz on KGS, and Takemiya was being rather playful in both of those games. When Zen is commercially available, we’ll see how strong it really is. Previously, programs that had apparently achieved a stable rank in blitz games on KGS proved to be 2-3 stones weaker in challenge matches. So while Monte Carlo methods have finally allowed go programs to gobble up processing power, Zen isn’t Deep Blue yet.

Anyway, if the travails of go research have taught us anything, it’s that the simplest way to get a computer to do x has approximately nothing to do with how humans do x, so even after solving x-AI we are no closer to “strong AI”, that substance of the many attributes.
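For readers who haven’t followed computer go: the Monte Carlo idea is to score each candidate move by playing many random games to the end and averaging the results, so extra processing power buys extra playouts and hence strength. A minimal flat-Monte-Carlo sketch (the game interface is invented for this illustration; Zen’s actual engine is proprietary and far more sophisticated):

```python
import random

def flat_monte_carlo_move(state, n_playouts=200):
    """Pick the legal move with the best total random-playout result.

    Assumes a `state` object exposing .to_move, .legal_moves(),
    .play(move) -> new state, .is_terminal() and .winner() - an
    interface invented for this sketch, not any real go library.
    """
    player = state.to_move

    def playout(s):
        # Play uniformly random moves to the end of the game.
        while not s.is_terminal():
            s = s.play(random.choice(s.legal_moves()))
        return 1 if s.winner() == player else 0

    def score(move):
        after = state.play(move)
        return sum(playout(after) for _ in range(n_playouts))

    return max(state.legal_moves(), key=score)
```

The appeal for go is that this needs no hand-coded positional knowledge, only cycles – which is why such programs “gobble up processing power”; modern engines refine the idea into full Monte Carlo tree search.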

91

Hidari 08.23.12 at 7:12 am

‘ The driverless train is nice since you don’t have to waste money paying drivers, but I wouldn’t have thought it got you any faster from A to B.’

Oh I don’t know. I imagine it would speed things up for the now-unemployed train driver if he can get to his soup kitchen via driverless train: assuming he can afford to get on it.

Maybe they’ll give him a pauper’s discount or something.

92

ajay 08.23.12 at 9:11 am

49, 89: The simple fact is that, as of 2012, attempts to move fundamental technologies (except computers) forward have largely failed.

If you go to RAF Brize Norton you can (or could a few years ago, anyway) watch the RAF flying tanker/transports – VC10s – that were state of the art for airliner design in the mid-sixties. You’ll notice two really important differences from modern airliners. First, they are amazingly loud. Second, they leave amazing trails of smoke. Modern airliners are significantly cleaner, more efficient and quieter than their 1960s ancestors. Even older types like the 747 (first flown in the 60s) have been redesigned and hushkitted to the point where they are almost different aircraft. As noted above, they can also fly themselves – the only thing they can’t do yet is go from the stand to the runway and back, but once they have their nose wheel on the centreline, you can push a button and go to sleep. And a modern jet burns 70% less fuel per passenger mile than the Comets and 707s did in the 50s.

And they’re far safer, of course, for all sorts of reasons.

’ The driverless train is nice since you don’t have to waste money paying drivers, but I wouldn’t have thought it got you any faster from A to B.’

On average, yes, it would, because there will be fewer trains cancelled due to lack of drivers and strikes.

93

Greg vP 08.23.12 at 9:21 am

What of the law of diminishing returns to factor inputs? It’s often airily asserted that diminishing returns don’t apply to innovation, but I don’t see why not. As the economy diversifies, any single innovation must have a smaller and smaller impact on the whole. Putting an AI in charge of an alumin(i)um smelter won’t noticeably affect the price of alumin(i)um, for instance – in best-practice smelters, production cost is already within a factor of three of the cost of the energy of chemical reduction of the oxide; the bulk of the difference is the cost of the ore and of dealing with impurities.

Along these lines, @minnesotaj 40.2: A cure for cancer, should one be found, is completely negligible in health-improvement terms and economic impact compared to the microbe theory of disease. Now there was an innovation!

We’ve had the singularity, people; it was from 1850-ish to 1945. It’s over, done, finished. Everything since 1945 has been footnotes, and there are precious few of them.

94

garymar 08.23.12 at 10:06 am

I want to upload the Elder Gods into a computer. Specifically, into a travelling salesman algorithm. If It’s an insane, gibbering Old One, watch It get caught in a local minimum. Local maxima will transform into Mountains of Madness. RGB color codes out of space will haunt your dreams.

95

Hidari 08.23.12 at 10:10 am

‘ Modern airliners are significantly cleaner, more efficient … than their 1960s ancestors’.

Yes but there are far more of them, so the greenhouse-gas savings are negated.

‘As noted above, they can also fly themselves.’

No they can’t.

96

ajay 08.23.12 at 11:29 am

95: Hidari, that article says that they don’t, in general, fly themselves. Not that they can’t.

Yes but there are far more of them, so the greenhouse-gas savings are negated.

Modern technology means that goalposts are now 50% lighter than in 1970, allowing them to be shifted much more easily by a single person.

97

Substance McGravitas 08.23.12 at 1:32 pm

I’ve lost track of who’s on what side of the transport argument, but I agree with

Mine has two parts: I don’t see why the astonishment mentioned above is a measure of progress, and I think there have been non-astonishing but real gains in the technology of transportation since the 50s. As has been pointed out a few times, what resources you have and where you’re from affect this: the train I take is not magnetically levitated and can’t go hundreds of kilometers an hour. Not that it should, for commuter stops: shaving time off a train that starts and stops many times on the way downtown might preclude passengers being able to stand comfortably. If you’re pulling ten g’s your coffee might spill.

98

prasad 08.23.12 at 5:41 pm

What a difference those seven years made. I suppose even back then Kurzweil was pretty marginal and ridiculous, but now it seems like the ‘stagnation’ thesis is almost mainstream. People who disagree with someone like Tyler Cowen or the Paypal guy about causes and policy do tend to think the pace of non-IT innovation isn’t all that it might be. Maybe, just maybe, Kurzweil will have a SuperSiri on his phone in two decades, but he’s still gonna get Alzheimer’s with no idea which genes precisely made it happen. And no fusion powered flying cars for anyone.

99

Sebastian H 08.23.12 at 5:49 pm

“And, even more generally, this statement is not, in actual fact, true.

Most people on Earth don’t live in the United States.”

Ok. In which case the improvements are even more astonishing than my US-centered example. If we are extending the discussion to the whole Earth, the extension of transportation advantages in China and India since 1980 swamps everything, far more than the astonishing difference between infrequent 1970s air travel in the US and very easy US air travel in the 1990s.

The problem is that quite a few people seem to want to be astonished only by the flashpoint of initial technological discovery. But for all sorts of applied technology, an equally astonishing thing is when it becomes very easily available. Flush toilets and internal plumbing have existed since before the time of the Roman empire, but the common adoption of toilets in the everyday houses of Western Europe and the US during the 1900s was still an enormously important advance in hygiene.

Similarly the mere technical invention of the horseless carriage didn’t have as much of an impact as the semi-mainstreaming of Ford, which ultimately pales in comparison to the extension of modern rail, road and flight technology to China in the past 20 years.

Bringing it back to the singularity debate, singularity proponents seem from my vantage point to be wildly optimistic about certain inventions (say strong AI) suddenly changing everything. However singularity detractors seem to underplay the ability of already invented, or near term obvious technologies to dramatically change things in unexpected ways if adopted on a much larger scale than at the moment.

I hesitate to speculate, because I don’t necessarily want to get bogged down in trying to figure out exactly what would happen if one out of every 20 people on Earth really had a smartphone or whatever. But singularity detractors seem to really underplay what could happen if already existing technologies, or very near-term technologies, became merely much more available. (For example, if gene manipulation became something within the reach of a regular biology student, that would almost certainly become a big deal in ways that I can’t even imagine.)

Further example: the 15-year-old kid, Jack Andraka, who developed a pancreatic cancer test which is dramatically easier, dramatically cheaper, much more sensitive, and with a much lower error rate than traditional tests. Would his discovery have been as quick if he couldn’t go from a thought in his biology class to fairly in-depth research on the internet and THEN to research in a traditional biology lab? See article here or the cutest award response ever here. Technological invention is important because you obviously can’t adopt something that doesn’t exist. But technological adoption remains a very important piece, and it seems possible that all sorts of things could be dramatically different (even to singularity levels) if already existing technologies became more widely adopted.

100

Kiwanda 08.23.12 at 6:03 pm

89: “So, in terms of what’s actually available to ordinary people, we have neither a transformation (like the airplane), nor a steady improvement in the core performance metrics (like the changes from the first passenger plane to the 747). We do have better inflight entertainment and a few things like that. Ditto for cars.”

Car engine efficiency has improved about 60% between 1980 and today.

101

Kiwanda 08.23.12 at 6:35 pm

14:”Myers is not doctrinaire but merely brash. And I would rather trust him, a developmental biologist, to know how far we are from understanding the brain, than a computer engineer, for example. But then, I am a biologist myself, so I may be biased.”

If building a superhuman AI required understanding the brain, this would be relevant, but it doesn’t, and this isn’t. (Also too: sad to see proof-by-authority right out of the gate, though I admit in a fairly passive-aggressive form. I guess I should be relieved you didn’t pull out your credentials for display.)

” I strongly doubt that super-AI can even invent all these things by sitting in an armchair and staring absent-mindedly into the air. You will need to do experiments, build prototypes and see how they perform, and then go back to the drawing board when they fail or don’t perform as well as you thought.”

If someone were claiming that all that is needed is a brain-in-a-box, then this would be cogent, but they aren’t, and this isn’t.

Also, there just might be some intelligence involved in deciding what experiments to do. Chimpanzees are pretty smart, but their contributions to the biology literature are, to date, limited (at least to my knowledge: I am not a biologist), and giving them a little more time to work on it will not help. Now suppose there is an artifact whose intelligence is only slightly higher than ours, just as ours is only slightly higher than chimps’. Suppose that artifact has access not just to analogs of our sensory modalities and actuators, but to a vastly richer set. Perhaps its advances in biology and other areas would outpace ours, just as ours seem to be outpacing chimps’.

102

Lee A. Arnold 08.23.12 at 6:39 pm

@99–Thanks, that pancreatic cancer test is an amazing story. Here is a camera that images at a trillion frames per second, capturing a small packet of photons passing through a bottle filled with water: http://laughingsquid.com/imaging-at-a-trillion-frames-per-second-a-ted-talk-by-ramesh-raskar/ I think people don’t have much of a clue about what is already happening technologically, and what may happen shortly. My “home page” when I open my browser is a very good aggregator, Science Daily (http://www.sciencedaily.com/); I just read the headlines. I highly recommend it. It may help to give people a more realistic view of current changes. What is happening in materials science alone is just astounding. What if a material were invented that dissipates sonic bow shock? We could have much faster airplanes without the sonic booms.

103

David Moles 08.23.12 at 6:45 pm

“members of my extended family have been running 50-100 sq m farms”

It took me a good five minutes to work out that “sq m” here meant square mile. I was imagining farms about the size of a two-bedroom flat.

104

David Moles 08.23.12 at 6:46 pm

(Luckily, thanks to exponential technological runaway, all kinds of unit conversions are just one browser tab away.)

105

ragweed 08.23.12 at 7:12 pm

In the first case, the contribution of computer technology to economic growth gradually declines to zero, as computing services become an effectively free good, and the rest of the economy continues as usual.

(haven’t read the rest of the comments yet, so I hope that this is not redundant)

This leads to the observation that Moore’s law only covers the processor speed. Someone still has to write the code that actually makes the processor do something useful. Which means that the manufacturing side of computing services effectively becomes a free good, but the service side – the programming – becomes a much more significant portion of the cost of computing. And that is exactly what we are seeing, in a sense – the hardware gets cheaper and we pay more for the software (which is increasingly protected by patent rents). It’s anti-Star Trek (link to Jacobin post of that title forthcoming).

106

Substance McGravitas 08.23.12 at 7:24 pm

Which means that the manufacturing side of computing services effectively becomes a free good, but the service side – the programming – becomes a much more significant portion of the cost of computing. And that is exactly what we are seeing, in a sense – the hardware gets cheaper and we pay more for the software (which is increasingly protected by patent rents).

It depends on what you use and what you do. At my workplace most Microsoft things are required usage. But even there I run AutoHotkey, a free screenshot utility, a free text editor, use an other-than-Microsoft free browser, and run an open-source wiki for the office.

107

Omega Centauri 08.23.12 at 7:45 pm

Matt @86.
No. I mean run circles around in absolute speed! It’s not that, if we determined to change some fundamental computer-architecture basics, we couldn’t build something that would run circles around a 1984 design. But the industry has evolved such that very few organizational paradigms can be turned into commercial products. The benchmark entails getting very small pieces of data randomly distributed across a large memory. Modern machines obtain their speed and flexibility from fancy caching schemes, which work great when data addresses are not too different from ones that have been accessed recently. But a demand for random data completely defeats that paradigm. So those applications/algorithms that need such a capability are simply out of luck. Obviously, if that need weren’t so specialized and uncommon, someone would build a machine for it, but that isn’t happening.

Now think of the supposedly astonishing medical advances. What real impact do they have on the fact that we are mortal beings vulnerable to a large number of ailments? Whenever I see something like a claim for a spectacular advancement in treatment of cancer-type X, the bottom line seems to be something like a 60% fatality rate can now be reduced to a 50% fatality rate, thanks to our new super-treatment. How much is that slight improvement in statistics going to change the experience of just being told that you or a loved one has condition X? In most areas of technology, including medicine, even incremental improvements require a great deal of effort. Now, to a mathematician, a singularity implies the infinite – a rate of change that is infinite. Such abstractions don’t occur in nature, excepting perhaps at the center of a black hole, which is effectively no longer in our universe. So claims of a coming singularity are at best extreme hyperbole.
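The mathematical distinction can be made precise: exponential growth, however fast, never has an infinite rate of change, while genuinely hyperbolic growth blows up in finite time, which is the only sense in which the borrowed word fits. A sketch of the two cases:

```latex
\dot{x} = kx \;\Rightarrow\; x(t) = x_0 e^{kt} \quad \text{(finite for every finite } t\text{)},
\qquad
\dot{x} = kx^{2} \;\Rightarrow\; x(t) = \frac{x_0}{1 - k x_0 t} \quad \text{(diverges as } t \to 1/(k x_0)\text{)}.
```

Moore’s Law, even taken at face value, is the first case, not the second.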

108

Hidari 08.23.12 at 8:07 pm

@99

And I get accused of shifting the goalposts?

Your initial claim was that, and I quote, ‘A lower middle class person being able to buy a flight to just about anywhere as a matter of course is a huge advance over flying in the 1950s.’

Now I am, I suppose, on a middle class salary and I know you will think I am exaggerating or for that matter lying here, but it is simply false that I can afford to fly anywhere ‘as a matter of course’. Nor is it true of any of my family nor (many) of my University colleagues, many of whom haven’t had holidays (i.e. holidays where you have to fly) for years (for various reasons, not all of them purely financial).

Because of course even if that were true, who gives a shit? The ‘fixed coast’ of holidays are accommodation costs, so even if I could afford to fly to Australia ‘as a matter of course’ I still couldn’t afford to live or eat there, even assuming I could afford the time off work, which I couldn’t.

I have no idea what this has to do with China. So the Chinese now have the 19th century technologies of trains and cars. So what? We started off by talking about the Singularity remember, not whether or not there are more trains in China than there used to be.

And if you think that the ‘lower middle class’ of sub-Saharan Africa, or rural China, or rural India, or large parts of urban India, or huge areas of South America can travel ‘anywhere’ ‘as a matter of course’ you are, frankly, out of your mind.

And yet this is where most people live.

109

Hidari 08.23.12 at 8:09 pm

‘fixed coast’ should of course have read ‘fixed cost’ although in the context of holidays the pun is amusing.

110

JW Mason 08.23.12 at 9:11 pm

Now think of the supposedly astonishing medical advances. What real impact do they have on the fact that we are mortal beings vulnerable to a large number of ailments? Whenever I see something like a claim for a spectacular advancement in treatment of cancer-type X, the bottom line seems to be something like a 60% fatality rate can now be reduced to a 50% fatality rate, thanks to our new super-treatment.

This reminds me of my favorite illustration of technological stagnation, which is a comparison of the leading causes of death at the beginning, middle and end of the 20th century.

Three of the top 10 causes of death in 1900 — tuberculosis, gastrointestinal diseases and diphtheria — have essentially vanished by 1955. Pneumonia, number one in 1900, is still there in 1955, but the raw death rate has dropped by a full order of magnitude, from 202 per 100,000 to 27. Between 1955 and 2007, though, the list is basically unchanged: heart disease, then cancer, then stroke, etc. And except for heart disease (down by about half) the raw death rates are mostly the same. 147 deaths per 100,000 from cancer in 1955, and 187 per 100,000 in 2007, and so on.

Now of course people are living longer, so there’s been some progress. (Though how much of that is lower smoking rates, better environmental standards, less grueling manual labor, etc.?) But compared with transformative, qualitative breakthroughs of the previous century, progress since 1950 or so has been incremental and modest. And unlike some other areas, you can’t say that’s because there is no further progress to be made — there was no way of knowing ex ante that cancer could not be eliminated just like tuberculosis was.

111

Substance McGravitas 08.23.12 at 9:34 pm

But compared with transformative, qualitative breakthroughs of the previous century, progress since 1950 or so has been incremental and modest.

I still don’t get this. Sure, discovering a germ is transformative! What’s left after discovering a germ is…less transformative. And that heart disease deaths stay the same while vaccines for rubella and mumps and measles and hepatitis get made is some kind of let-down because inoculation had been figured out before doctors washed their hands? See also birth control, which is pretty transformative.

I go out to drink on a regular basis with someone who doesn’t have to do chemo because she has genetically-targeted cancer drugs. That’s an advance that’s not going to keep her from being the cancer-death statistic – she was told a couple of years ago that she had six months – but it sure does cause less pain because now you can treat cancer with anti-cancer drugs instead of poison.

112

Bruce Wilder 08.23.12 at 10:07 pm

JW Mason @110

Another blog I’ve come to enjoy linked to Arthur C. Clarke’s very short story, Superiority, today.

http://www.mayofamily.com/RLM/txt_Clarke_Superiority.html

It seems remarkably apropos, with relation to the course medical care has taken over the last half-century, though it was written in 1951 with high-tech military weaponry as a subject for reflection.

Some of the fundamental science of medicine — most obviously, molecular biology and genetics — is advancing at a breathtaking pace. Meanwhile, the (economic) organization of medical care is head-up-your-ass stupid. We have pharmaceutical research organized around producing addiction and extortion, and agriculture producing obesity. That’s a statement about the degeneration of economics and politics, in what we agree has been a remarkably stable and long-lived (albeit entropic) institutional framework.

113

JW Mason 08.23.12 at 10:08 pm

Sure, discovering a germ is transformative! What’s left after discovering a germ is…less transformative. And that heart disease deaths stay the same while vaccines for rubella and mumps and measles and hepatitis get made is some kind of let-down because inoculation had been figured out before doctors washed their hands?

Not following you here, Substance. The question is whether we are living in an era of exceptionally rapid technological change. John Q. and I and lots of others think the answer is No.

In other words, whether progress was faster in earlier periods. Not whether there are good reasons for that, or whether we are still benefitting from those earlier advances, or whether incremental change is still happening, or whether technology is, on balance, pretty awesome.

And it is simply not true that everyone always knew that non-infectious diseases would be harder to eliminate than infectious ones. It would have been very surprising to people 50 years ago that despite heartening stories like your friend’s, in the aggregate there has been essentially no progress in reducing deaths from cancer.

114

Alex SL 08.23.12 at 10:17 pm

Kiwanda @101,

1. This was originally about the possibility of becoming immortal by copying your brain into a computer, not at all about building an AI that is very different from the human brain.

2. Argument from authority is only fallacious if the person is not a legitimate authority on the subject, e.g. if you cite the pope as an authority on ethics or cosmology (or pretty much any matter whatsoever really). But if a person who has recently been to Manila told me that the traffic there is bad, would you also shout “proof-by-authority” at me?

(I mean, realistically, how do you propose to get information without relying on legitimate authorities? Surely you cannot just go and check out the feasibility of simulating a brain yourself in a hurry?)

3. I have explained why I am skeptical about a singularity-type exponential increase in progress as claimed by many singularitarians. I have not said that there are absolutely no ways to increase the pace of progress in some areas with faster and smarter AI. Especially in purely theoretical pursuits, this is surely plausible. The problem remains, however, that scientific and technological progress has the bottlenecks of doing experiments, testing prototypes and actually implementing the changes, and this necessarily happens in real time, so the speed of progress is severely constrained.

Your parable with the chimps has the additional problem that chimps simply do not think scientifically at all. Whether fast or slow, whether clever or not so clever, they do not have a research program but we already do.

115

Substance McGravitas 08.23.12 at 10:20 pm

The question is whether we are living in an era of exceptionally rapid technological change. John Q. and I and lots of others think the answer is No.

But the reason for that seems to be transformativitiness or awesome-to-1950s-lunchpail-guyness, and I don’t know what you think that might consist of, while there are remarkable things happening all over the place that were impossible in the 1950s. Let’s imagine a cancer vaccine. Would you call that transformative? (None of this is to pump up singularity proponents, which is another thing I don’t understand. There are a lot of those things.)

in the aggregate there has been essentially no progress in reducing deaths from cancer.

It’s not the point that my friend’s going to live; I take it for granted that she’s not. On the other hand she’s put off ingesting poison for quite a long time because people are figuring out how to make proteins jump through hoops.

Might Monsanto represent both transformative technology and stagnation in its IP litigiousness?

116

JW Mason 08.23.12 at 10:23 pm

Let’s imagine a cancer vaccine. Would you call that transformative?

Yes but there isn’t one.

117

Substance McGravitas 08.23.12 at 10:29 pm

Yes but there isn’t one.

I’m aware. So the transformativeness is in the amount of societal change and not in some paradigm shift in how this or that branch of the sciences can approach a problem?

118

JW Mason 08.23.12 at 10:36 pm

So the transformativeness is in the amount of societal change and not in some paradigm shift in how this or that branch of the sciences can approach a problem?

Yes, that’s how I’ve been thinking about it. Impact on daily life.

119

Substance McGravitas 08.23.12 at 11:43 pm

Yes, that’s how I’ve been thinking about it. Impact on daily life.

Then I’m misreading: it looked to me like you were asking for evidence of scientific progress that I could only think of as magical (which is what The Singularity!!! seems like to me), as if the train downtown should go 300 kph between fifteen-block stops. If it’s the socially transformative shift that’s being asked for, I see it more now that the bar is lowered: in medicine and particularly pharmaceuticals, and in every aspect of trade and social life because of computers. Not much in transportation, but as mentioned the maglev trains (that somebody has) haven’t made it here.

There’s a moment in a Todd Solondz movie that for some reason always stuck with me: an obscene phone-caller hasn’t reckoned on star-69 callback ability. It might have been the first cultural callout I’d really noticed to do with the changes in the, uh, availability or trackability of people.

120

David J. Littleboy 08.24.12 at 5:39 am

“It might have been the first cultural callout I’d really noticed to do with the changes in the, uh, availability or trackability of people.”

FWIW, I’ve always assumed that Ma Bell keeps full records of every call ever placed. They sort of have to for billing purposes. So the whole “keep the caller on the line long enough to trace” meme has to have been technically false for at least 40 years. (A mathematician (MIT PhD) friend went to work for Bell Labs ca. 1975: “They have this problem: everything is computerized now and they can offer any service anyone can think of at essentially no cost, but they don’t have a clue as to what services to offer or how much to charge.”) Another movie meme that irritates is the faith in lie detectors, which are about as good as flipping a coin. If you have a 100-person organization with a spy in it, a lie detector test will flag ten folks as spies, with a high likelihood that the real spy, being a spy, has practiced beating the test. Of course, apparently the CIA hasn’t figured this out, so there’s some truth in having lie detectors appear in movies. But it’d be nice if they’d occasionally have some character understand that they don’t work.
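(For concreteness, here is that lie-detector arithmetic as a few lines of Python. The 10% false-positive rate matches the “flag ten folks” figure above; the 50% chance that the practiced spy beats the test is an assumption for illustration.)

```python
# Base-rate arithmetic for the lie-detector example: 100 people, 1 spy.
# The 10% false-positive rate matches "flag ten folks" above; the 50%
# detection rate against a spy who has practiced is an assumed number.
employees = 100
spies = 1
false_positive_rate = 0.10
detection_rate = 0.50

false_alarms = (employees - spies) * false_positive_rate  # ~9.9 innocents flagged
true_hits = spies * detection_rate                        # ~0.5 spies flagged
precision = true_hits / (true_hits + false_alarms)

print(f"expected people flagged: {false_alarms + true_hits:.1f}")  # ~10.4
print(f"chance a flagged person is the spy: {precision:.1%}")      # ~4.8%
```

In other words, roughly 95% of the people such a test flags are innocent, even before the spy practices.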

“but as mentioned the maglev trains (that somebody has) haven’t made it here.”

FWIW, maglev trains are bad engineering. They are horrifically expensive per passenger mile travelled, and don’t carry very many passengers. Very much one of those “one has to be careful what one wishes for” things. I’m hoping that on at least a few routes, high-speed rail in the US can avoid being one of those, but the original bullet train (Tokyo/Osaka) in Japan is incredibly profitable because of the population density along the line (something which no other existent or proposed HSR line has) but is pretty much the only financially viable train line in Japan other than the megalopolis commuter lines. (JR, Japan’s now-privatized nationwide rail network, _claims_ to be profitable, but that’s because its enormous debt was taken over by the government when it was privatized. The privatization of JR was a right-wing plot to kill two birds at once: the unions got trashed and the rightards got to say “See, private enterprise is better at doing things than the government,” even though it’s a lie.)

121

Bruce Wilder 08.24.12 at 5:52 am

If my car had to make a profit, it would be bankrupt.

122

David J. Littleboy 08.24.12 at 6:52 am

“If my car had to make a profit, it would be bankrupt.”

I suppose if I owned a car, it’d have that problem, too.

123

ajay 08.24.12 at 11:43 am

Let’s imagine a cancer vaccine. Would you call that transformative?
Yes but there isn’t one.

There’s the HPV vaccine.

124

David J. Littleboy 08.24.12 at 4:36 pm

“There’s the HPV vaccine.”

Uh, no. The HPV vaccine isn’t a cancer vaccine, it’s a vaccine against a virus that causes cancer. That may sound like a quibble, but the folks who find something that causes cancer and eliminate it are saving a lot more lives than the folks who are going after cancer itself. Treating Helicobacter pylori is expected to eliminate stomach cancer, and making cigarettes illegal would eliminate 85% of lung cancer. Total cancer mortality rates are falling, but slowly. “Cancer death rates dropped 19.2% among men during 1990-2005 and 11.4% among women during 1991-2005.”
http://www.cancer.org/Cancer/news/Cancer-Death-Rate-Steadily-Declining

I remain pessimistic about seeing a breakthrough in cancer. Each type of cancer is itself thousands of diseases, and cancer cells evolve during the course of the disease, so the one type of those thousands of cancers you actually have becomes, as the disease progresses, thousands of different diseases itself, each of which responds differently to medication. Very nasty.

My take on the current state of science in general is that it’s the antithesis of the singularity: we’ve finally got our computers and other technology up to speed, and they’re telling us that things are way more complicated than we thought. Even a single neuron is horrifically complex. String theory requires math that’s seriously insane. Cell biology is insanely complex, and all that “junk DNA” that doesn’t code for proteins isn’t junk at all: it’s program code for running the cell, and as such is the most complex kind of mathematical object mathematicians can imagine. (Once you can write programs, complexity increases with the size of memory faster than any computable function.)

125

Kiwanda 08.24.12 at 5:47 pm

114:
“1. This was originally about the possibility of becoming immortal by copying your brain into a computer, not at all about building an AI that is very different from the human brain.”

I see, so when you were responding to my comment, you were actually responding to one line in the original post, not to my comment, or to the main discussion in the original post. Or possibly PZ Myers’s views, not otherwise discussed here, on brain uploading. Clearer now.

“2. Argument from authority is only fallacious….realistically, how do you propose to get information without relying on legitimate authorities? Surely you cannot just go and check out the feasibility of simulating a brain yourself in a hurry?)”

I don’t think there are legitimate authorities available on the scientific research possible after a superhuman AI is developed.

“3. I have explained why I am skeptical about a singularity-type exponential increase in progress as claimed by many singularitarians…..The problem remains, however, that scientific and technological progress has the bottlenecks of doing experiments, testing prototypes and actually implementing the changes, and this necessarily happens in real time, so the speed of progress is severely constrained.”

Please read my “parable of the chimps” again; it is relevant to this point.

126

Alex SL 08.25.12 at 12:30 am

Kiwanda,

Well, then we have really been arguing past each other. The original post mentioned both the singularity as such and Kurzweil’s hope of achieving immortality. Peter Hollo @3 linked to Myers’ takes on the singularity and brain uploading, you @11 called Myers doctrinaire, I @12 picked up the brain uploading thread, then @14 told you that I would consider him to be more of an authority on how much we understand _specifically_ the brain than I would a computer engineer.

Then again, yes, I also think that in his other post he is right about the naivete of singularitarians in general – everybody always thinks they are special, their time is special, and the changes happening during their time are more profound than what happened at other times, and everybody always extrapolates current developments and gets the future wrong. When radioactivity was discovered, futurists assumed that a few decades later we would have radioactive fireplaces to heat our houses. When rockets were the big thing, they assumed that by 1990 all airplanes would be replaced with rocket travel. Flying cars, nuff said. Why should the current habit of taking the contemporary big thing – computers – and extrapolating it to continue to be the big thing in the future be any different? (And that is before I come again to the catastrophic demographic and environmental problems looming ahead of us, which never play any role in singularitarians’ forecasts… when hundreds of millions starve, they will simply collapse entire societies in their desperation, no matter how fancy those societies’ electronic gadgets are.)

_Please read my “parable of the chimps” again; it is relevant to this point._

No, it isn’t. Please consider again the bottlenecks for progress of experimenting, testing and implementation. Just thinking faster about things will only get you so far if you don’t test your ideas against reality.

Heck, intelligence is simply not very useful _on its own_. In principle there is no reason why we couldn’t imagine a computer that is (if intelligence is really numerically quantifiable) twenty times as smart as the brightest human scientist but also an ideologue, superstitious or simply too proud to admit mistakes. So it would simply use its vastly superior intelligence to argue circles around us and rationalize its faulty beliefs, quite like many very intelligent humans do at this moment, while a slower, less intelligent but rational thinker would make more scientific progress.

127

tomslee 08.25.12 at 12:50 am

the folks who find something that causes cancer and eliminate it are saving a lot more lives than the folks who are going after cancer itself

Thought-provoking sentence of the day.

128

John Mashey 08.25.12 at 1:19 am

1) Re #85/#86, that’s likely an old National Security Agency benchmark that ran very well on Cray vector machines of the era. NSA was once a major customer of those, sometimes got special features, but used them in very different ways than most customers, who used them for large floating-point computations. Many (but not all) large vector codes run pretty well on 64-bit micros with cache-blocking compilers, at much lower cost, so those vector designs had mostly disappeared by 2000.

2) re: #6 an ugly fact of life in chip fabs
In the good old days, when you shrank transistors, you shrank the chips, and you got more chips/(same size) wafer, and they ran faster. As I noted, wires don’t shrink as easily as transistors. A wafer is fabbed via a number of processing steps, and some costs are proportional to the number of steps, not the size of wafer or number of chips on it. Part of keeping Moore’s Law (for number of transistors/chip) rolling has been to use more steps, not just doing linear shrinks. Thus, the cost doesn’t just automatically come down. All this is an over-simplification, of course. See ITRS Roadmap for the Real Stuff.

3) AI: let’s pick a simple case, game-playing programs, such as:

Checkers, in which Chinook got competitive with top humans in early 1990s.

Chess, as when Belle reached Master-level in 1983, and then Deep Blue beat Kasparov by 1997.

How much of these victories can be ascribed to Artificial Intelligence technique versus {brute force + better heuristics}?

129

David J. Littleboy 08.25.12 at 5:45 am

“How much of these victories can be ascribed to Artificial Intelligence technique versus {brute force + better heuristics}?”

Essentially none to anything one would want to call “AI” in any sort of naive sense of figuring out and learning from what people do. AI, in the good old days, consisted of two camps: the scruffies (e.g. Roger Schank and Marvin Minsky) and the neats (just about everyone else). The scruffies would point out that people didn’t do it that way, and the neats would say “Who cares?”. If you complain to a neat that people can’t instantly remember everything (or anything) that’s in their heads and sometimes have to work to remember something, said neat will think that’s inferior, and would never suspect that it might be an artifact of a design that’s necessary for actually thinking and functioning as a human. Scruffies think there’s nothing kewler in all of creation than human intelligence; neats aren’t so impressed.

I buttonholed one of Minsky’s students at a conference and said “That’s not how people work” and he said “That’s what Marvin said and it’s irrelevant.”

Whatever, the scruffies lost, and AI became complete BS. Not that it wasn’t mostly that from the start. Sigh.

130

JW Mason 08.25.12 at 6:21 am

In principle there is no reason why we couldn’t imagine a computer that is (if intelligence is really numerically quantifiable) twenty times as smart as the brightest human scientist but also an ideologue, superstitious or simply too proud to admit mistakes.

Right. Or why wouldn’t it end up suicidally depressed? Or spend all its cycles trippily reflecting, “Christ, what an imagination I’ve got?”

(Speaking of chimp parables, in that same John Brunner novel, there’s a geneticist who engineers chimps with human-like intelligence. His big problem is they keep committing suicide.)

If we assume, as Kurzweil does, that at some point there is an AI with a deep understanding of its own architecture, why would it go to the trouble of designing better AI? Why not just rewrite its own code to give itself an earth-shaking orgasm on a continuous loop, and stop there?

131

John Mashey 08.25.12 at 7:12 am

re: 129: yes. Ken Thompson gave a great talk on this, maybe 1982 or 1983.
Among other things, he had a model predicting USCF rating as a function of compute power, noting of course that the ratings at the very top were suspect.

He had examples of the various problems with AI. For example, at one point somebody had eye-trackers, showed masters prepared positions, and tracked their eye motion, in the hope of figuring out how they analyzed positions. Sadly:
1) They looked at the piece they would move.
2) They looked around.
3) They moved the piece.

That doesn’t tell you much, except that masters see the position in a big gestalt.

As it happens, the first production expert system at Bell Labs was an offshoot of one of my projects and it worked, but you’d hardly call it AI. People spent a lot of time extracting rules from experts.

132

Matt 08.25.12 at 9:28 am

How much of these victories can be ascribed to Artificial Intelligence technique versus {brute force + better heuristics}?

Deep Blue evaluated about 200 million positions per second, and despite Moore’s Law a modern desktop computer still has less chess-relevant “brute force” to bring to bear. Modern chess engines on desktop computers evaluate maybe 10 million positions per second — but they are stronger than Deep Blue.

Every strong chess engine uses AI techniques. If you read a popular AI textbook like Norvig and Russell’s Artificial Intelligence: A Modern Approach you’ll see that little of it corresponds to findings from human neurobiology or cognitive science. The optimal solution of intellectual problems by computers rarely imitates the solution of the same problems by skilled humans. Experts often can’t offer sufficiently detailed and accurate introspection about their own expertise to reproduce it, even when (as with chess before 1996) AI researchers are strongly interested in a subject and alternative techniques have not yet surpassed human performance.
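(To make the “brute force + better heuristics” distinction concrete, here is a minimal sketch of the negamax search with alpha-beta pruning that sits at the core of both Deep Blue and today’s engines. The names evaluate, legal_moves and apply_move are assumed stubs for a real game implementation; engine strength lives almost entirely in the evaluation heuristic and in move ordering, not in this loop.)

```python
# Minimal negamax with alpha-beta pruning (a sketch; evaluate,
# legal_moves and apply_move are assumed stubs).
def negamax(position, depth, alpha, beta, evaluate, legal_moves, apply_move):
    """Best achievable score for the side to move, searching `depth` plies."""
    if depth == 0:
        return evaluate(position)   # the heuristic: where engines differ most
    best = float("-inf")
    for move in legal_moves(position):  # better move ordering -> earlier cutoffs
        child = apply_move(position, move)
        score = -negamax(child, depth - 1, -beta, -alpha,
                         evaluate, legal_moves, apply_move)
        best = max(best, score)
        alpha = max(alpha, score)
        if alpha >= beta:   # the opponent would avoid this line: prune the rest
            break
    return best
```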

I think that the Singularitarians who emphasize Strong AI are essentially people who asked for a way to make humans fly and were disappointed that the answer was an airliner. Imagine a similar discussion around Strong Artificial Flight emphasizing future approaches that might be undertaken with human/eagle chimeras or antigravity boots. They are trying to soberly reason about and raise awareness of comic book science.

133

Peter Erwin 08.25.12 at 2:11 pm

I think that the Singularitarians who emphasize Strong AI are essentially people who asked for a way to make humans fly and were disappointed that the answer was an airliner.

Of course, for centuries people imagined that the way to get humans to fly was by imitating birds in some fashion: just make the right kind of artificial wings to strap onto their arms…

Ironically, actual human-powered flight eventually became possible, but it turned out to be a kind of nifty stunt rather than something practical, and was derived from the principles of artificially powered flight, not from how birds flew.

So, yeah, I think it’s plausible that something like Strong AI might end up emerging from something quite different from trying to run a simulation of a human brain on a computer.

134

Substance McGravitas 08.25.12 at 2:25 pm

I think that the Singularitarians who emphasize Strong AI are essentially people who asked for a way to make humans fly and were disappointed that the answer was an airliner.

Entertaining.

135

David J. Littleboy 08.25.12 at 3:22 pm

“Ironically, actual human-powered flight eventually became possible, but it turned out to be a kind of nifty stunt rather than something practical, ”

Yep. Japan has run a birdman contest (Japan International Birdman Contest) every year since 1977 at Lake Biwa, sponsored by a TV station. I haven’t seen one recently, but it’s always an incredible lark. Many entries crash into the lake instantly, but some just keep going and going and going: the best human-powered flight records gradually increased to 9.7 km in 1996, 1997 was cancelled due to a typhoon, and in 1999 someone flew 23.6 km. The current record for the event is 36 km. Friggin’ amazing. The human-powered helicopter division doesn’t do so well: the record stands at 6.3 seconds to splashdown.

Here’s the wiki link for the Japanese-enabled amongst you. http://tinyurl.com/3xjdhq

136

Peter Erwin 08.25.12 at 3:43 pm

Hidari @ 108:
And I get accused of shifting the goalposts?

Well, you originally said this:

In his book Rise of the Creative Class (2002), Richard Florida of Toronto University argued that, while a time traveller from 1900 arriving in 1950 would be astonished by phones, planes, cars, electricity, fridges, radio, TV, penicillin and so on, a traveller from 1950 to the present would find little to amaze beyond the internet, PCs, mobile phones and, perhaps, how old technologies had become infinitely more reliable.

So, yes, your original comparisons presupposed a middle-class American or Western European setting. How many of those technologies available in 1950 were actually available/affordable for the “average” person in 1950? Did the average person in Africa, India, or China have access to phones, planes, cars, electricity, fridges, or TVs in 1950?

Actually, I’m a little dubious about Florida’s comparisons, both because it involves downplaying what was already available (in some forms) in 1900, so as to make the contrast with 1950 stronger, and because it involves ignoring significant technological advances post-1950.

For example, why would a time traveller from 1900 be “astonished” by phones or electricity, both of which were well-known and moderately common in (Western) urban areas by 1900? Now, they might be impressed by how widespread they had become by 1950 (in the US and a few other countries) — e.g., phones in most houses, electrification of rural areas — but that’s an issue of widespread availability bordering on ubiquity, not innovation or invention per se.

Even some of the other examples — e.g., cars and radio — existed in embryonic or primitive forms prior to 1900.

As for someone from 1950 finding “little to amaze beyond the internet, PCs, [and] mobile phones”, here are some other possibilities:
Computers in general (not just “PCs”)
Desktop publishing
Satellites
Space travel (both human and robotic)
Portable music players (starting with portable radios, tape players, etc.)
Recorded movies, etc. for home and personal playback (VCRs, DVDs, etc.)
Actual elimination or near-elimination of a few diseases (smallpox, polio)
Oral contraceptives
General medical imaging (e.g., CAT scans and MRI)
IVF (“test-tube babies”)
Organ transplants
Genetic engineering
Prepackaged/pre-prepared meals and fast foods

(They might even be amazed if you told them about certain rail routes having average speeds approaching 300 km/h, even though railways are a 19th C technology…)

137

John David Galt 08.25.12 at 10:27 pm

I agree that the notion of the Singularity is unjustified hyperbole no matter what advances are made in computing, AI, robotics or any other field.

That said, I was astounded today by this:

http://econlog.econlib.org/archives/2012/08/the_great_facto_3.html

138

Alex SL 08.25.12 at 11:50 pm

David J. Littleboy: _the folks who find something that causes cancer and eliminate it are saving a lot more lives than the folks who are going after cancer itself._

For starters, sunlight and oxygen are carcinogenic. Every day our body destroys ca. 20 tumors, and each of us is carrying around numerous tumors that the body has overlooked and that are happily growing, only most of them are so slow-growing that they would only develop into a serious threat if we lived to 200. (And that right there tells us what we would still die of even if we figured out how to elongate telomeres again to stop aging. Any form of immortality beyond “having children” looks less realistic the more we find out about biology.)

Matt: _I think that the Singularitarians who emphasize Strong AI are essentially people who asked for a way to make humans fly and were disappointed that the answer was an airliner._

Beautiful!

139

Lee A. Arnold 08.26.12 at 6:59 pm

Boston Children’s Hospital has developed a platform technology to deliver oxygen intravenously to help save people who can’t breathe or are in cardiac arrest (via Metafilter):
http://www.childrensinnovations.org/SearchDetails.aspx?id=1550

140

John Mashey 08.28.12 at 3:38 am

re: Matt 132;
‘How much of these victories can be ascribed to Artificial Intelligence technique versus {brute force + better heuristics}?’

My question was carefully worded :-)
Regardless of what happened since, the checkers and chess victories were brute force, sometimes with special hardware. I haven’t particularly followed this since Deep Blue (except when getting Thompson to do an oral history for the Museum).

This still leaves Go, which is way harder than chess.

141

David J. Littleboy 08.28.12 at 12:54 pm

“My question was carefully worded :-)”

Yes. I noticed that. Then I remembered that I’d seen your name before…

FWIW, apparently the way the better of the current crop of Go programs work is to do large numbers of very deep (all the way to the end of the game), very narrow, random searches, to see if they can find moves that on average lead to more wins. I.e., they evaluate a position by playing the game out randomly and seeing who wins, instead of playing out all captures and counting pieces. Seeing as a friend got an MS from MIT (his thesis advisor was Ron Rivest) just for showing how to determine that a game was over and who had won, it’s rather counterintuitive that you could do that much random computing and get something useful out of it.
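(A sketch of that evaluation idea, with the Go mechanics left as assumed stubs; this is the pure Monte Carlo core that the tree-search Go programs of that era built on.)

```python
import random

# Pure Monte Carlo evaluation (sketch): score a candidate move by playing
# many uniformly random games to the end and counting wins. legal_moves,
# apply_move, is_over and winner are assumed stubs for a Go board.
def playout(position, legal_moves, apply_move, is_over, winner):
    while not is_over(position):
        position = apply_move(position, random.choice(legal_moves(position)))
    return winner(position)

def win_rate(position, move, player, n_playouts,
             legal_moves, apply_move, is_over, winner):
    start = apply_move(position, move)
    wins = sum(playout(start, legal_moves, apply_move, is_over, winner) == player
               for _ in range(n_playouts))
    return wins / n_playouts  # play the candidate move with the highest rate
```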

142

JW Mason 08.28.12 at 2:53 pm

Thread’s winding down, but here’s a new paper by Robert Gordon that seems relevant to the OP:

The analysis links periods of slow and rapid growth to the timing of the three industrial revolutions (IR’s), that is, IR #1 (steam, railroads) from 1750 to 1830; IR #2 (electricity, internal combustion engine, running water, indoor toilets, communications, entertainment, chemicals, petroleum) from 1870 to 1900; and IR #3 (computers, the web, mobile phones) from 1960 to present. It provides evidence that IR #2 was more important than the others and was largely responsible for 80 years of relatively rapid productivity growth between 1890 and 1972. Once the spin-off inventions from IR #2 (airplanes, air conditioning, interstate highways) had run their course, productivity growth during 1972-96 was much slower than before. In contrast, IR #3 created only a short-lived growth revival between 1996 and 2004. Many of the original and spin-off inventions of IR #2 could happen only once

I’m also shocked to realize that nobody has linked yet to Cosma Shalizi’s brilliant post on this topic, The Singularity in Our Past Light Cone. As he says, we actually have invented an artificial, inhuman intelligence driven to constantly develop and expand its own capacities, transforming or replacing all of social life as it does so. It’s called capital.

143

JW Mason 08.28.12 at 2:54 pm

(oops second paragraph was supposed to be quoted, it’s from the Gordon paper. Remember when CT used to have a working preview function? Those were the days.)

144

drs 08.29.12 at 4:49 am

Japan apparently thinks highly enough of maglev to plan to build a line from Tokyo to Osaka. Mostly deep underground, and through the Japan Alps. For less than the $151 billion Amtrak plans to spend on bringing maybe-this-time-sort-of-really high-speed rail to the Northeast corridor. Not sure why it would not carry very many passengers; it’s a bunch of train cars like any other.

They’re expensive to build and don’t play well with old rail, but the hope is that they’d have lower maintenance costs than the wear on 200+ mph steel.

It seems obvious to me that the rate of major innovations has slowed since 1950, and this is not because of regulation or taxation or corporations but because we’ve made fewer new fundamental discoveries and the low-hanging fruit was picked quickly. From 1850 to 1950 we see heat engines (turning heat into work), electricity, electric telegraphs, telephones, electric lighting, flight, trains, cars, mass production, the periodic table, radioactivity, relativity and quantum, the germ theory of disease, mature vaccines, antibiotics, radio and TV.

Since then we’ve had orbit, DNA, computation, high speed rail, and the birth control pill. And maybe more, but at a basic “learn how things work” level especially in physics and chemistry, we were mostly done. Or at least stalled (there’s obviously physics we don’t know, but we’re not making much progress on it, and it’s not clear how relevant it’d be anyway, compared to mastering electromagnetism.)

A big thing of the 20th century was discovery of limits or impossibility theorems, which mean we have a better idea of what we can’t do. Going from pony express to optical telegraphs, or more substantially electric telegraphs, was a huge leap in communication speed. There can be no further leap, though there has been in bandwidth. Heat engines are at about 50% of the efficiency they could be at given the temperatures they operate at, and 30-40% of the efficiency they could be with an absolute zero heat sink. Lighting has improved tremendously, from candles to bulbs to CFLs to LEDs, but we’re getting within a small integer multiple of maximum efficiency in turning power into photons, after improvements of 100x.
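(For the heat-engine figures, the relevant ceiling is the Carnot limit, 1 - T_cold/T_hot. A quick check with assumed temperatures, roughly those of a steam-turbine plant:)

```python
# Carnot-limit check of the heat-engine claim (temperatures assumed:
# ~850 K hot side, ~300 K heat sink, typical of a steam-turbine plant).
T_hot, T_cold = 850.0, 300.0
carnot_limit = 1 - T_cold / T_hot     # ~0.65, the thermodynamic ceiling
typical_actual = 0.40                 # assumed real-plant efficiency

print(f"Carnot limit at these temperatures: {carnot_limit:.0%}")        # ~65%
print(f"fraction of Carnot achieved: {typical_actual/carnot_limit:.0%}")  # ~62%
# With an absolute-zero sink the limit would be 100%, so the same plant
# would be at ~40% of that ideal, consistent with the 30-40% figure above.
```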

Limits on cars and planes are less about fundamental physics, but going from on foot to driving was an improvement of 10x-30x. A similar improvement would mean 300-900 miles per hour. The difficulties should be obvious, as is the practical difficulty faced by Concorde.

Improvements in efficiency and cost are certainly important, but they’re not as transformative as developing and distributing a capability in the first place.

One big transformation in life dates from the 1880s (Bismarck) but has only spread widely after 1950: universal health care. Even if the actual life-expectancy benefits are small compared to those of public health programs, the peace-of-mind benefits seem to be large. (They also make a lot of worries about transhumanism look kind of odd, with concerns about rich-poor divides that don’t make much sense when there’s a generous public health system.)

145

drs 08.29.12 at 4:54 am

“Thought-provoking sentence of the day.”

Gawande pointed out that in 1953 we saw both the discovery of DNA’s structure, and a paper on the link between smoking and lung cancer. One got the Nobel Prize and billions of research funding, the other saved millions of lives. (Or extended lives by some large amount, since they all died eventually.)

146

drs 08.29.12 at 5:07 am

“Did the average person in Africa, India, or China have access to phones, planes, cars, electricity, fridges, or TVs in 1950?”

No, but AFAIK their growing access to those is less about new advances in cost and efficiency (though those help) and more about the gradual spread of the capital wealth to have them.

I forgot another element of the 1850-1950 phase transition, implied by the heat engines but worth making explicit: the exploitation of fossil fuels. We discovered a cheap (ignoring environmental costs) and convenient source of energy which will be hard if not impossible to replicate. Nuclear and solar and wind energy just (hopefully) let us keep making electricity (and at cost, synfuel) despite fossil peak and without the environmental problems. But there’s nothing on the horizon matching the phase transition of such cheap and convenient fuel and in fact we might go backwards on that front.

(Even if we ever get fusion power — which after 60 years of failure has to be counted as one of the hardest things we’ve ever tried to do; it only took 7 years from discovery of fission to the A-bomb — it won’t necessarily be transformative. Fuel is cheap and abundant but the reactors seem likely to be expensive, possibly so much so that they wouldn’t even be economic unless you had no alternative, like outer solar system colonies.)
