The singularity and the knife-edge

by John Q on September 27, 2005

I’ve been too busy thinking about all the fun I’ll have with my magic pony, designing my private planet and so on, to write up a proper review of Ray Kurzweil’s book, The Singularity is Near. The general response seems to have been a polite version of DD’s “bollocks”, and the book certainly has a high nonsense-to-signal ratio. Kurzweil lost me on biotech, for example, when he revealed that he had invented his own cure for middle age, involving the daily consumption of a vast range of pills and supplements, supposedly keeping his biological age at 40 for the last 15 years (the photo on the dust jacket is that of a man in his early 50s). In any case, I haven’t seen anything coming out of biotech in the last few decades remotely comparable to penicillin and the Pill for medical and social impact.

But Kurzweil’s appeal to Moore’s Law seems worth taking seriously. There’s no sign that the rate of progress in computer technology is slowing down noticeably. A doubling time of two years for chip speed, memory capacity and so on implies a thousand-fold increase over twenty years. There are two very different things this could mean. One is that computers in twenty years’ time will do mostly the same things as at present, but very fast and at almost zero cost. The other is that digital technologies will displace analog for a steadily growing proportion of productive activity, in both the economy and the household sector, as has already happened with communications, photography, music and so on. Once that transition is made, these sectors share in the rapid growth of the computer sector.
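(For the record, here is the compounding behind that thousand-fold figure, as a minimal Python sketch; the clean two-year doubling time is the stylised assumption above, not a measured constant.)

```python
# Compounding under a fixed doubling time: 2 ** (years / doubling_time).
doubling_time_years = 2
for years in (10, 20, 30):
    factor = 2 ** (years / doubling_time_years)
    print(f"{years} years -> roughly {factor:,.0f}-fold")
# 20 years -> roughly 1,024-fold, i.e. the thousand-fold increase above.
```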

In the first case, the contribution of computer technology to economic growth gradually declines to zero, as computing services become an effectively free good, and the rest of the economy continues as usual. Since productivity growth outside the sectors affected by computers has been slowing down for decades, the likely outcome is something close to a stationary equilibrium for the economy as a whole.

But in the second case, the rate of growth for a steadily expanding proportion of the economy accelerates to the pace dictated by Moore’s Law. Again, communications provides an illustration – after decades of steady productivity growth at 4 or 5 per cent a year, the rate of technical progress jumped to 70 per cent a year around 1990, at least for those types of communication that can be digitized (the move from 2400-baud modems to megabit broadband in the space of 15 years illustrates this). A generalized Moore’s law might not exactly produce Kurzweil’s singularity, but a few years of growth at 70 per cent a year would make most current economic calculations irrelevant.

One way of expressing this dichotomy is in terms of the aggregate elasticity of demand for computation. If it’s greater than one, the share of computing in the economy, expressed in value terms, rises steadily as computing gets cheaper. If it’s less than one, the share falls. It’s only if the elasticity is very close to one that we continue on the path of the last couple of decades, with continuing growth at a rate of around 3 per cent.
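To see why the elasticity is the hinge, here is a minimal sketch assuming a constant-elasticity demand curve for computation; the spending figures are invented for illustration, not drawn from the national accounts.

```python
# Spending share on computation after its price falls by `price_fall` times,
# assuming constant-elasticity demand Q = base * P**(-eps). Numbers invented.
def computing_share(eps, price_fall, base_spending=5.0, other_spending=100.0):
    p = 1.0 / price_fall                       # new relative price
    spending = base_spending * p ** (1 - eps)  # expenditure = P * Q
    return spending / (spending + other_spending)

for eps in (0.8, 1.0, 1.2):
    print(eps, round(computing_share(eps, price_fall=1000), 3))
# eps < 1: the share withers toward zero; eps > 1: it grows to dominate;
# only eps very close to 1 keeps the share, and hence aggregate growth, steady.
```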

This kind of result, where only a single value of a key parameter is consistent with stable growth, is sometimes called a knife-edge. Reasoning like this can be tricky – maybe there are good reasons why the elasticity of demand for computation should be very close to one. One reason this might be so is if most problems eventually reach a point, similar to that of weather forecasting, where linear improvements in performance require exponential growth in computation (I still need to work through this one, as you can see).

So far it seems as if the elasticity of demand for computation is a bit greater than one, but not a lot. The share of IT in total investment has risen significantly, but the share of the economy driven primarily by IT remains small. In addition, non-economic activity like blogging has expanded rapidly, but also remains small. The whole thing could easily bog down in an economy-wide version of ‘Intel giveth and Microsoft taketh away’.

I don’t know how much probability weight to put on the generalized Moore’s Law scenario. But as Daniel points out, even a small probability would make a big difference to a mean projection of future growth. Since the Singularity (plus or minus pony) has already been taken, I’ll claim this bit of the upper tail as my own pundit turf.

{ 76 comments }

1

tom lynch 09.27.05 at 2:42 am

Moore’s Law is firstly an empirical observation about the rate of growth of “computing power” (which, as this post points out, doesn’t necessarily have a proportional impact on economic realities), and only secondarily a well-founded principle for predicting that rate of growth.

For years we have been told that the curve would flatten out and Moore’s Law would start to outpace reality as the semiconductor-chip-with-billions-of-transistors paradigm approached some hard physical limits. This hasn’t happened yet, but we are currently seeing a shift to multicore processors as a result of the diminishing returns of just trying to make one core do more and more. This in itself is going to make writing the software that runs on these processors significantly more difficult, not to mention the fact that there are some computing tasks that are inherently difficult to parallelize, and that therefore cannot be sped up by increasing the number of distinct computing cores on a chip.

In short, we seem to be seeing the start of some real signs that Moore’s Law will slow down and eventually grind to a halt.

At this point the scientific meliorist or the extropian will claim that there is always some unanticipated advance still to come that will allow us to continue “pushing the envelope”. This is always unconvincing (at least until the advance happens along, which I suppose it might).

For me this is the key problem with the Singularity: for it to occur, some tech advances are required that seem very close, but may in fact be very far away. Advances for which we don’t even (seem to) have the basic ingredients of a convincing implementation – e.g. human-level computer consciousness.

I really enjoy the optimism of the Spike theory, but I doubt it will happen in my lifetime, or Ray Kurzweil’s for that matter.

2

Doug 09.27.05 at 3:25 am

but the share of the economy driven primarily by IT remains small.

What about the share of growth driven primarily by IT?

3

Luc 09.27.05 at 4:22 am

You used to be able to have a discussion about what Moore’s law actually says, but Google has spoiled that fun, because you can find the original article with just a few clicks here.

You could use the original version to construe an argument that the cost of computing would not reduce to zero, even if Moore’s law holds till far into the future.

“The complexity for minimum component costs has increased at a rate of roughly a factor of two per year”

Thus, since any computer needs at least one chip, and Moore’s law does not predict that the cost of a chip decreases – only that the cost-optimal number of components on a single chip will double every two years – there will likely be a minimum cost of using a computer that is far from zero.

I don’t think there’s much value in this argument, but I think that holds for most arguments based around Moore’s law.

Though maybe it explains why computers will become ever faster, even if most people won’t need the additional computing power, and leave it essentially unused.

4

Keith M Ellis 09.27.05 at 5:34 am

Jon Stokes wrote a very nice little whitepaper, Understanding Moore’s Law, a couple of years ago. I don’t have time to summarize it at the moment, but I highly recommend the article to anyone who wishes to refer to “Moore’s Law” and actually know what they’re talking about. The vulgar interpretation is sensational and simplistic at best and false at worst. It really isn’t correct to say that Moore claimed that computing capacity will double every year. Luc has the correct quote; but its implications require some illumination, especially for the layperson.

5

José Ángel 09.27.05 at 5:51 am

The computing capacity of the brain of average computer users will not double every two years, anyway.

6

Brett Bellmore 09.27.05 at 5:57 am

Moore’s law can not continue forever, based on our current understanding of physical law. However, most people have very little grasp of just how much room there IS for it to continue. Even setting aside iffy things like quantum computing, molecular scale computing elements that have been demonstrated to work in the lab could keep it going for a few more decades. Certainly long enough to provide the computational resources necessary to support a neural network model as complex as a human brain.

The key point about the singularity is that, at present, technological progress is constrained by the human intelligence of engineers. The curve wouldn’t really spike until you got AI tools that were capable of improving themselves, so that the engineers were getting smarter as technology advanced. Which would, at least for a little while, make technological progress a double exponential.

7

Matt Daws 09.27.05 at 6:08 am

Similarly, for a techie take on Moore’s Law (and its abuses) you can do worse than see here: Ars Technica article.

8

Robb Lutton 09.27.05 at 6:35 am

The problem with Kurzweil’s timeline for increases in computer power is that he doesn’t understand the big picture. Computer speed is made up of memory speed, disk speed, bus speed and processor speed. The biggest bottlenecks are outside of the processor. These are not doubling at anywhere near Moore’s law type rates.

My company has sold graphic processing software into the printing industry for 13 years. The work involves image manipulation of very large files. One color magazine page might be 50 megabytes.

The time to process work has gone down of course, but the increase in speed is more like doubling every decade than doubling every two years. In fact our biggest increases in speed have come more from software development than increases in machine speed.

Everyone who really thinks about this can see it for themselves. Download a catalog from some online store that also publishes one of those catalogs that you get in the mail. Even after 30 years of computer speed supposedly doubling every x years, you can flip through the physical catalog far faster than your computer can display the pages.

If you look at the progress in disk speed and memory access speed, you will see that this situation is not about to change any time soon.

9

Cranky Observer 09.27.05 at 7:25 am

> The other is that digital technologies will
> displace analog for a steadily growing proportion
> of productive activity, in both the economy and
> the household sector, as has already happened with
> communications, photography, music and so on.
> Once that transition is made these sectors share
> the rapid growth of the computer sector.

Digital technologies have certainly led to great benefits for business, particularly in profit margins and encouragement of throwaway cultures. It is quite a bit cheaper to manufacture a cheap digital radio than a high-quality Hallicrafters analog unit, for example, and even at $17 the digital unit has a much higher profit margin. But does the digital technology do anything to advance human quality of life? In my experience, quite often not. We now have digital technologies that incorporate some improvements over their analog predecessors but are very complex, hard to use, impossible to repair, and which are obsoleted on a 2-year schedule. It is not clear to me that “progress” can be measured by higher profit margins.

Cranky

10

jw 09.27.05 at 8:02 am

Actually, there are clear signs that the rate of computing progress has slowed down, and physics dictates that Moore’s Law will halt its progress in the near future because it predicts transistor sizes smaller than a silicon atom by around 2020.

Reduced size introduces a variety of other issues, such as heat and leakage current, which have prevented processor clock speeds from growing in recent years. Intel released its fastest processor, the 3.8 GHz Pentium 4, in November 2004, almost a year ago. In turn, that processor is only 26% faster than the 3.0 GHz P4 which was released two years earlier in November 2002.

11

Brendan 09.27.05 at 8:14 am

The key question is why a man who is clearly insane, like Kurzweil, is being given the cover story in this week’s New Scientist. Apart from the fact that he has a crappy book out?

Sorry, on reflection that great scientist of our time Glenn Reynolds thinks that Kurzweil is great, so I am obviously wrong.

12

vinc 09.27.05 at 8:23 am

It seems to me that computing’s share of the economy has grown darn fast over the past few decades and therefore your elasticity (if you *must* pick a single value) has to be greater than one. You have a couple of huge computers owned by governments in the 1940s, a handful of room-sized computers in the 1970s, a small number of personal computers in the 1980s, everyone getting one in the 1990s, and now computing is invading your car, your phone, etc. That looks like exponential growth from a very small baseline, not staying constant as a share of the economy.

13

des von bladet 09.27.05 at 8:27 am

Brendan: New Scientiste has been shite for years; shun it, if you don’t already.

14

Andrew Bartlett 09.27.05 at 8:29 am

Oh, this IS the guy in this week’s New Scientist. There are some amazing graphs in his article [could someone scan these and mail them to me?]. If I remember right, one of these graphs traces the increase in complexity, plotting such diverse events as the creation of the solar system, the invention of the wheel and the development of democracy. Now, I might be wrong in my remembrances here – I did just flick through it in WH Smith – but that graph (and some of the text accompanying it) displays such a parochial attitude towards the development of the universe (and human society) that it cannot help but be wrong.

15

abb1 09.27.05 at 8:45 am

In my Microsoft-dominated world, the increase in processing speed is pretty much offset by the immediate infusion of crappier, more bloated software.

So, here’s the ABB1’s law for you: doubling of computing power doubles the waste of computing power.

16

Matt McGrattan 09.27.05 at 9:01 am

All this singularity-bollocks (that’s a technical term) seems predicated on the idea that the hard/interesting problems can be solved just by chucking raw processing power at them.

This doesn’t seem to be the case. The PCs I use now are several orders of magnitude faster than those I first used in the late 80s, but usable speech recognition, to take one example, is _still_ nowhere near a reality.

17

Brendan 09.27.05 at 9:05 am

Yes, but Andrew, GLENN REYNOLDS thinks Kurzweil is great. That PROVES he is. Don’t you know ANYTHING? After all it was Reynolds who proved that the Lancet study on Iraqi deaths was wrong, by the simple expedient of pointing out that it clashed with Reynolds’s own political preconceptions. A proof Pythagoras would have been envious of.

18

Doctor Memory 09.27.05 at 9:08 am

The real problem with the pop-science gloss on Moore’s Law is that it misses the obvious intrinsic corollary:

There is no Moore’s Law for software quality.

Yes, CPUs get faster, along with (substantially behind the curve) memory, disk and general system throughput. Unfortunately, software is still crap. All of the big “breakthroughs” in software development of the last few decades (high-level languages, object orientation, the ‘open source bazaar’, what have you) have (at best and not often even that) made it possible to continue development with a constant level of bugginess while the complexity of the underlying hardware increases.

Absent some highly unlikely breakthrough in software engineering methodologies, the only “singularity” that we’re approaching is that of IT budgets collapsing under their own weight and forming a fiscal black hole from which no information can escape.

19

Doctor Memory 09.27.05 at 9:12 am

BTW, I propose — in honor of Ray Kurzweil’s actual and indisputable positive accomplishments prior to his going round the bend into George-Gilder-cum-Marshall-Applewhite territory — that all such wanking about the “Singularity” in the future be termed “keybollocks.”

20

paul 09.27.05 at 9:56 am

As robb lutton notes, Moore’s Law in the real world is generally trumped by Amdahl’s Law, which says that when you have a problem with parts that are easy to speed up and parts that are hard to speed up, speeding up the easy parts doesn’t really get you very far.
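To put a number on that, here is a minimal sketch of Amdahl’s formula; the 80 per cent ‘easy’ fraction is purely illustrative.

```python
# Amdahl's Law: overall speedup when only a fraction p of the work
# is sped up by a factor s; the remaining (1 - p) is untouched.
def amdahl_speedup(p, s):
    return 1.0 / ((1.0 - p) + p / s)

# Even an unbounded speedup of 80% of the work caps the overall gain at 5x.
for s in (10, 100, 1_000_000):
    print(s, round(amdahl_speedup(0.8, s), 2))
```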

One thing that speeding up the easy parts gets you, though, is a spurious increase in the amount of computing being done. Where you used to flip a switch to turn off the lights, now you push a button that tells a microcontroller to send a message packet across your powerline network to a server, which in turn sends messages to the microcontrollers attached to each of the light sockets in the room in which you wish to extinguish the illumination.

So yes, certain kinds of computation become free, and therefore more attractive. But much of what that does is not to increase utility but rather to color our perceptions of the kinds of computations we might want done. Meanwhile the hard-to-speed-up part of the problem involving getting computers to do useful stuff that they don’t do now remains as hard as ever.

(It’s a very strange feeling looking back to the computer science research of the 70s and 80s, and seeing the toy examples and prototypes of that time now working much faster, more reliably and with somewhat greater scope. But not much new.)

21

Dylan 09.27.05 at 10:11 am

Matt McGratan wrote:

The PCs I use now are several orders of magnitude faster than those I first used in the late 80s, but usable speech recognition, to take one example, is still nowhere near a reality.

Have you called your bank recently? Many banks and other companies now have voice prompt systems that work vastly better than earlier ones. When I called T-Mobile just now they asked me to state my problem however I wanted.

Yes, it’s not full speech recognition, but progress is being made.

doctor memory wrote:

Unfortunately, software is still crap. All of the big “breakthroughs” in software development of the last few decades (high-level languages, object orientation, the ‘open source bazaar’, what have you) have (at best and not often even that) made it possible to continue development with a constant level of bugginess while the complexity of the underlying hardware increases.

That’s an interesting observation. Perhaps there’s a constant level of bugginess that people are willing to tolerate.

22

paul 09.27.05 at 10:56 am

Dylan writes:

Have you called your bank recently? Many banks and other companies now have voice prompt systems that work vastly better than earlier ones. When I called T-Mobile just now they asked me to state my problem however I wanted.

Pretty much all of what you’re talking about was demonstrated 20 years ago, but not implemented then because it would have required hundreds of megabytes of memory and hundreds of millions of processor cycles. Now a cell phone or a PDA has that much power. On the other hand, projects like Cyc (codifying enough of the everyday world so that programs can reason about it with some hope of success) are 15 years in and still plugging along. That next step, as they say, is a doozy.

I also think that the idea of a constant level of bugginess may be misleading. There may be some level of bugs that’s a maximum for any program to survive deployment. But the common buzzstat is that roughly three quarters of all major software projects die before deployment, so we’re talking more in descriptive than prescriptive terms.

If software has to be both complex and safe (e.g. fly-by-wire control systems, which aren’t even that complex in the scheme of things) then development costs can easily run into the billions of dollars. Since so many of the things we’d like to do with computers in the supposed next phase of history do involve killing people if something goes wrong, development cost and time will probably turn out to be limiting resources.

23

Matt McGrattan 09.27.05 at 11:32 am

Yeah, those kinds of ‘restricted domain’ speech recognition tasks have been done for a while. We’re still really not very close to being able to, say, dictate to a computer that’s not been extensively trained with your own particular voice and know that we’ll achieve acceptable standards of accuracy.

Ditto most natural language processing tasks. Within narrowly specified domains computers are quite good at it now. Stray outside them, and the results aren’t much better than they were 20 years ago.

24

abb1 09.27.05 at 11:32 am

It’s not so much a question of bugginess but rather the evolution of programming: from highly efficient assembler coding in the 60s and early 70s, to Dijkstra’s ‘structured programming’ – sacrificing a bit of efficiency for the sake of clarity and readability – to high-level languages – sacrificing a whole lot of efficiency for simplicity – to the object-oriented approach, to portability, etc., to the point where a program that does ‘i=1’ will compile into a 100K executable.

Every time your cpu speed doubles, you use it for a variety of things: to improve graphics, compression, OS in general, to simplify your programming languages, to make your code more portable – but hardly for actual computing, ‘intelligence’. IOW, almost all of it goes into the overhead.

25

mkl 09.27.05 at 11:45 am

To the extent this is a discussion of economic growth paths, I think it is a very reasonable question as to how many bites of the economy digital can take as costs to process / store / transmit data come down.

Photography seems the best example here. The cost to produce an image of a given quality is now generally lower with current digital equipment than chemical. Plus, the value of a digital image is higher than a chemical negative given the relative efficiency of manipulation, archiving and distribution. (I’m excluding artistic considerations here, just looking at media and snapshot level consumption) So, the increase in digital imaging, processing and bandwidth over the last 15 years has broadly given us more, and more useful, pictures of the kids playing in the park at lower cost than we had before.

And that, my friends, is true economic progress.

Now, the matter is how many areas are subject to outright replacement or substitution by digital counterparts. Even if bowling can’t be digitally improved, people abandoning the lanes to play on their xbox (presumably something other than a bowling simulator) are accomplishing the same result.

26

Bill Gardner 09.27.05 at 11:55 am

You people are no fun. Are you telling me that the Rapture — sorry, I meant the Singularity — won’t happen?

Nevertheless, SOMETHING is happening:

(1) There is something to the idea that computing costs are falling exponentially

and

(2) We are watching Google build a global scale (150,000+ server) computing entity…

and I’m really curious to hear what others think this will lead to by, say, 2015.

27

joe o 09.27.05 at 11:59 am

Intelligent computers will start to program themselves and some version of the singularity will happen. Just not anytime soon.

28

abb1 09.27.05 at 12:06 pm

has broadly given us more, and more useful, pictures of the kids playing in the park at lower cost than we had before

I think one could plausibly argue that a $5 disposable cardboard camera bought at a convenience store on the way to the park provides better value than a $200 digital camera with rechargeable batteries, usb cables, drivers, PC, and its propensity to be left home when you need it in the park.

29

Doctor Memory 09.27.05 at 12:11 pm

Bill: we’ve already, I think, disposed of point A.

With regard to point B, what I expect Google’s build-out to lead to in the long run is a massive and long-overdue downward adjustment of their market capitalization when the market finally realizes that all that shiny new tech has been put together to build Yet Another Advertising Company, albeit a reasonably profitable one.

A culture of coolness-via-secrecy and trophy PhD hires does not translate into a paradigm shift. Usually quite the opposite, actually: cf. Worldcom, Enron, et al.

30

Marcus Stanley 09.27.05 at 12:39 pm

It seems to me the big issue is how well and how fast we can break what might be called the “physical stuff/computing barrier”. Computers are very good at algorithmic manipulations of information, and then they can display the results of those manipulations. But translating that into actually producing or transporting physical goods more cheaply is something else. There are a whole set of physical and human interfaces required right now to do that (see e.g. comment #8 above), and computers have not had much effect on the physical energy costs of production or transport. Unless the exponential growth in raw processing power can be translated somehow into exponential declines in the cost of moving or manipulating physical stuff, it seems to me there are some pretty formidable barriers to turning processing power into improved quality of life.

This is one area where the simultaneous advance of medical knowledge needs to be brought into the equation, since if computers help us model, understand, and then manipulate the human body better then that would have massive implications. But pace Kurzweil I’m not sure about the evidence for that in the near future.

31

Sebastian Holsclaw 09.27.05 at 12:45 pm

“The fact that for it to occur, some tech advances are required that seem very close, but may in fact be very far away.”

I vote for battery technology as something that is well in need of an advance, seems on the surface like it should be doable, but on further inspection seems very far away.

32

Bill Gardner 09.27.05 at 12:48 pm

Dr. Memory, points well-taken, but my use of the word ‘Google’ may have been misleading. I wasn’t interested in Google as a corporation. Rather, I was curious about what can be done with computing engines of Google’s scale.

paul also had an insightful comment:

“One thing that speeding up the easy parts gets you, though, is a spurious increase in the amount of computing being done… (It’s a very strange feeling looking back to the computer science research of the 70s and 80s, and seeing the toy examples and prototypes of that time now working much faster, more reliably and with somewhat greater scope. But not much new.)”

Yes, if you are talking about relational databases. However, one of those 80s CS technologies — with, of course, much older roots — was hypertext. One could say that it now works “much faster, more reliably and with somewhat greater scope” than HyperCard or the various lab projects. That wouldn’t do justice to what’s happened.

33

luci phyrr 09.27.05 at 1:18 pm

I think Joe O.’s right – maybe about 500-1,000 years.

34

abb1 09.27.05 at 2:08 pm

It probably took Paul Revere about an hour to ride his horse 10 miles from Charlestown to Lexington.

If someone had told him that in 200 years people would have horseless carriages powerful enough to go 200 miles/hour, he would’ve imagined that his lucky great-great-..-grandchildren would be spending 3 minutes to get from Charlestown to Lexington, right?

You know how long it takes to get from Charlestown to Lexington today, using these amazing lightning-fast horseless carriages? About an hour.

35

fyreflye 09.27.05 at 2:21 pm

The concept of The Singularity comes out of the Extropian community via some s-f stories by Vernor Vinge, and was hijacked by Kurzweil for his own self-promoting books. Leaving aside the fact that most Extropian ideas came out of s-f (except for those they got from “Atlas Shrugged”), their basic economic position is libertarianism; and Glenn Reynolds hates government except when it subsidizes his salary, so there you have the basis of the current Unholy Alliance.

36

Brackdurf 09.27.05 at 2:27 pm

This does seem like an odd time to discuss this, like discussing the merits of Dow 36,000 in late 2001. Though I suppose with Saudi Arabia suddenly declaring it has an extra 200 billion barrels, perhaps I shouldn’t make another analogy to debates about infinite versus peak oil. Moore’s law is due to the amazing scaling potential of silicon processors, but as jw points out in comment 10, that scaling will obviously end quite soon. Moore’s law has never held for anything other than silicon transistors, and there’s no good reason besides hope to expect it will hold after silicon has been scaled down as far as it can go.

And echoing another of jw’s points, what’s most amusing is that this is in fact the year that Moore’s law seems to have ended! Intel and IBM have gotten completely stuck for at least the last year, and all new initiatives seem to be heading in the low-power, not higher-speed, direction. Maybe Kurzweil can be famous in the future for a book predicated on Moore’s law’s indefinite advance that was published in the year that Moore’s law stopped working.

37

Peter 09.27.05 at 2:47 pm

Unfortunately, software is still crap. All of the big “breakthroughs” in software development of the last few decades (high-level languages, object orientation, the ‘open source bazaar’, what have you) have (at best and not often even that) made it possible to continue development with a constant level of bugginess while the complexity of the underlying hardware increases.

Absent some highly unlikely breakthrough in software engineering methodologies, the only “singularity” that we’re approaching is that of IT budgets collapsing under their own weight and forming a fiscal black hole from which no information can escape.

The reason why software development hasn’t improved in the last 40 years is mismanagement. The issues raised in The Mythical Man-Month over 30 years ago haven’t, to this day, been dealt with in companies. Another book: Death March. Ea_spouse hit a raw nerve last fall when writing about her spouse working 90-hour work weeks.

Some years ago, Denver was building a new airport, and they wanted a state-of-the-art baggage handling system. The engineers said that such a system takes 4 years to build (based on past history making them). The sales department said the airport opens in 2 years, so they promised that it would be done in 2 years. Two years after the airport opened, the baggage handling system was working. On time or late? If you think it was late, you’ve been drinking the wrong kool-aid. You can promise to deliver a baby in 6 months, but that won’t change nature taking 9 months to make one.

Managers and developers like to take some super Butch/Marine Corps mentality towards developing code: you have 168 hours per week and 169 of them better be spent in front of a monitor. There is almost 100 years of operations/organizational research showing that working people more than 40 hours per week reduces productivity (working less than 40 hours per week also reduces productivity – if I didn’t state that, a zillion trolls would start frothing at the keyboard).

The humor behind Dilbert is that there are large numbers of companies, and mismanagers, who think that the lessons of the past are to be forgotten, and the future can be made out of slogans. Unrealistic budgets and unrealistic timeschedules are the reason that so much money has been wasted in IT.

Is “software is still crap” true? As long as people refuse to learn from history, yes. Back in 1968, NATO realized that programming could not continue in the same ad hoc fashion, so they tried to turn programming into software engineering. In engineering, you learn from mistakes, preferably ones that someone else made. In current methods of software development, you make the same mistakes over again, because every lesson to be learned is covered by an NDA or CYA.

Singularity? That sounds like some baloney out of the marketing department, promised by yet another mismanager. They probably want it tomorrow, for 50 cents and some belly button lint.

38

No Nym 09.27.05 at 2:50 pm

You people just don’t have any appreciation for nerd pr0n, do you? After you download your sux0r a$$ into the global distributed superintelligence, you can score with Britney 3.0.5 [now open source]. Just think of the possibilities!

Alternatively, you might just end up as a janitor, polishing my silicon a$$. Seriously, how can you not be looking forward to all this.

Did I mention the pony?

39

abb1 09.27.05 at 3:42 pm

It’s not the management. It’s a normal slow evolutionary process with many a dead-end and long detours.

40

lemuel pitkin 09.27.05 at 3:45 pm

What I don’t get about the whole singularity argument is why people would think technological change is accelerating. Can John Quiggin or anyone else here straightfacedly argue that the period 1950-2000 (say) saw greater technological change by whatever metric than 1900-1950 or 1850-1900?

41

Erich Schwarz 09.27.05 at 3:47 pm

“I haven’t seen anything coming out of biotech in the last few decades remotely comparable to penicillin and the Pill for medical and social impact.”

There is indeed such a thing, but it’s something you didn’t see, because it’s what didn’t happen. Because of the availability of ELISA and Western blotting — both techniques developed for and by academic molecular biology — the entire blood supply of the western world did not get totally infected with HIV between 1980 and 2000.

Not a small thing. But it’s a quiet big thing, and it involves avoiding a calamity that’s too horrible for people to be happy imagining. So, mentally, it’s invisible.

Anyway, there’s actually a lot of important stuff coming out of biotech, but it’s either stuff that is hard to see (like the above instance) or it’s stuff that is just beginning to have an impact on ordinary human life, and will probably take some decades to fully do so (as the study of electricity and magnetism did).

As for software being no better than before: check out the open-source human (and other organism) genome site http://www.ensembl.org. This site routinely handles billions of nucleotides of DNA sequence data. It would have been essentially impossible to handle these data without the advances both in hardware and in open-source software that have taken place over the last 20 years. Because we can handle these data, we have a serious possibility of being able to develop an entire new set of tools for medicine based on these genomic data — thus giving an instance of how two rather different technologies are synergizing as I write this.

42

bago 09.27.05 at 6:44 pm

It’s obvious that a lot of these commenters don’t understand information theory. When you run into one wall different approaches open up. When you run into that silicon wall you have to start parallelizing. This requires meta-data and orchestration, leading to a crude form of self awareness. This has been happening on chip for years now, with the instruction schedulers. Scale this up to multiple cores and multiple chips and soon you’ll be seeing an entire multichip system as one black box capable of even more power.

Programming has also gotten a lot better, with standardized error reporting mechanisms, code analysis tools to prevent common mistakes, an establishment of design patterns to solve common problems, and heuristics to measure and manage attack surfaces and data integrity. There are binary analysis tools available to automatically calculate the test matrix for any checkin in a complex system. Code can be viewed as data, refactored at whim, re-styled and loaded with arbitrary meta-data.

Ten years ago if I asked you the population of Botswana, you’d take at least 30 minutes to dig that up. Now you can google it in ten seconds.

43

Andrew 09.27.05 at 7:07 pm

“Can John Quiggin or anyone else here straightfacedly argue that the period 1950-2000 (say) saw greater technological change by whatever metric than 1900-1950 or 1850-1900?”

That depends. 1950-2000 brought huge technological gains to a large number of people. Yeah, 1900 looked nothing like 1850, and rich people had some nice new advancements, but the vast majority were still living much like they did in 1850. Compared to 1950, when one third of the US population had cars, half had phones and two-thirds had indoor plumbing, I’d say things have changed more for most people.

44

lemuel pitkin 09.27.05 at 10:11 pm

Quiggin, you’re not an economist, right, but a philosopher who writes about economics?

Because the relevant question is not the elasticity of demand for computation, but how that elasticity varies with the price. We can calculate an elasticity for any good that has a price, some above 1, some below it. But for few if any of those goods would consumption actually approach infinity as price approached zero.

If, as the Kurzweils of the 1950s had it, nuclear power led to electricity “too cheap to meter,” how many desk lamps would you have in your office?

45

lemuel pitkin 09.27.05 at 10:12 pm

Andrew — all true, and all irrelevant. If Kurzweil’s argument were that as computers get cheaper, more people will own computers, no one would object — or buy his books.

46

John Quiggin 09.27.05 at 11:30 pm

Lemuel P, I don’t think productivity growth has accelerated, except in IT, where it has stayed at exceptionally high rates for a long time. So the question is: will the growth rate in IT spread to other sectors as IT is applied in those sectors?

On your point #44, I read you as claiming that if the elasticity is currently above 1 it will fall below 1 for sufficiently low prices. I think it’s obvious, if you reread the post, that I have anticipated this possibility.

That may be so but your argument doesn’t prove it. The relevant question is: how many things would be electrified that currently are not. As an obvious example: if electricity were free we would drive electric cars and use oil to make petrochemicals.

And, as regards computation, your claim is demonstrably false if you replace “infinity” and “zero” by large finite numbers like “by a factor of a million”. The price of computation has declined by a factor of about a million in the last thirty years or so, and the number of computations demanded has increased by an even larger amount.
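A rough way to put a number on that, treating both factors as averages over the whole period; the ten-million-fold quantity figure below is a made-up stand-in for “an even larger amount”.

```python
import math

# Average (log) price elasticity implied by a price fall and a quantity rise.
def implied_elasticity(quantity_factor, price_factor):
    return math.log(quantity_factor) / math.log(price_factor)

# Price down a million-fold, quantity up (say) ten-million-fold: roughly 1.17,
# i.e. greater than one but not a lot, as the post suggests.
print(round(implied_elasticity(1e7, 1e6), 2))
```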

47

abb1 09.28.05 at 1:48 am

It’s not obvious to me that if electricity were free we would all be driving electric cars.

In fact, electricity is free: you install solar panels (just like you have to develop software before you can start using your free computing power) – and then electricity is free. Yet few people are interested.

48

bad Jim 09.28.05 at 2:43 am

Even if electricity were free, an electric car would be more expensive to own than a gasoline car. Batteries don’t last very long and are very expensive to replace.

Cheap computing has made possible vast improvement in the productivity of manufacturing and in every other skill- and knowledge-intensive field, but it doesn’t necessarily lead to growth in the home country, since it just as easily facilitates outsourcing.

49

FMguru 09.28.05 at 2:44 am

Pumping up the speed of a computer is a pretty straightforward process – build a gizmo that will execute this list of instructions faster than the previous one. It’s pretty easy to throw engineering resources at such a well-defined problem (as Intel, et al, have done to great success over the decades).

But we don’t even have a clear idea of how to define AI, much less a clear path to implementing it. It’s not like there are a bunch of really promising ideas sitting around in AI researchers’ labs, quietly waiting for the day that computers become fast enough to implement them. Back in the 1970s, AI was the buzziest of next-big-thing buzzwords, and people were sure that another generation or two of faster hardware was all that stood between ourselves and the HAL 9000. They were wrong then, and there’s little reason to believe they’re right now.

I mean, if you went back in time to 1951 and souped up all the UNIVACs and ENIACs by a million-fold, you don’t have the internet and Windows XP developing in 1952 – you still need decades to develop all of the useful CS ideas (like databases and filesystems and GUIs and 3D graphics and object-oriented programming and the very idea of a network protocol, to name just a few) before you can get to that point.

Once we have AI, then maybe we’ll start seeing the Nerd Rapture play out in front of us. But until a number of very tricky problems are solved (which could happen in one or a hundred years or not happen at all), all the fast hardware in the world isn’t going to help. Expecting AI to spontaneously arise from a great enough concentration of computing power is like an Egyptian Pharaoh trying to mount a lunar landing by throwing enough slaves at the problem.

50

ian 09.28.05 at 5:30 am

We already have AI – they just haven’t told us yet.

51

bago 09.28.05 at 6:50 am

*sigh* Most of the commenters are stuck in a linear extrapolation mode, fundamentally missing the exponential growth curve as technology is able to branch out and solve more deterministic problems faster than ever before. Who needs intuition when you’ve got the math to back yourself up?

Look at the human genome project. The first 500 genes were more expensive and time-consuming to record than the last 2 million. Armed with this knowledge, genetic research is going to take off even faster, as you can now run experiments in-virtual as opposed to intravenously. The more data you have, the more operations you can run on the data, leading to new and previously undiscovered bits of data. It’s an exponential curve, a feedback loop into itself. Pessimists just have no sense of imagination.

52

John Quiggin 09.28.05 at 7:29 am

Electricity isn’t free in the relevant sense if it requires expensive capital to generate, transmit or store. To make the hypothetical analogy work you have to imagine (as is true in the case of computers) that you can shrink a generation plant capable of yielding kilowatts of power down to a size that will fit inside a car.

53

Cranky Observer 09.28.05 at 8:55 am

> Programming has also gotten a lot better, with
> standardized error reporting mechanisms, code
> analysis tools to prevent common mistakes, an
> establishment of design patterns to solve common
> problems, and heuristics to measure and manage
> attack surfaces and data integrity.

I have been working with software for more than 30 years, and I can personally attest: no it hasn’t.

In fact, I am now seeing the second generation of graduates/new employees raised on Microsoft products: people who not only think that the Microsoft attitude is the right attitude, but who don’t even know that there was ever another way. Nothing terrifies me more than to be at the doctor’s office and see Embedded Windows(tm) running on test instruments where once there was Bill Packard-supervised HP code.

Back around 1995 Steve Gibson made a jokey prediction that by 2010 one half the world’s population would be employed supporting Windows(tm) for the other half. It doesn’t seem so funny today, and it makes the statements about massive improvements in human lives due to IT a bit ridiculous.

Cranky

54

lemuel pitkin 09.28.05 at 9:50 am

It’s only if the elasticity is very close to one that we continue on the path of the last couple of decades, with continuing growth at a rate of around 3 per cent.

Another way of explaining this is that the majority of both the increase in computing consumption and the fall in its price is an artifact of the same statistical distortion introduced by hedonic pricing.

I.e. if the nominal price and nominal consumption of computers is roughly stable over a decade, but the gov’t statisticians decide that increases in processor speed mean that the “real” price of computers has fallen by a factor of ten, then “real” consumption appears to have increased by ten times also. Hey presto, elasticity of one and your knife edge. My money is on this being exactly the story here.
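A minimal sketch of that mechanism, with invented numbers: flat nominal spending and prices, plus an assumed hedonic quality factor q.

```python
import math

# With nominal spending and nominal prices flat, a hedonic quality adjustment q
# deflates the price by q and inflates "real" quantity by q, so the measured
# elasticity comes out as exactly one whatever q is.
nominal_spending, nominal_price = 100.0, 1.0
base_real_quantity = nominal_spending / nominal_price

for q in (2, 10, 100):
    real_price = nominal_price / q
    real_quantity = nominal_spending / real_price
    eps = math.log(real_quantity / base_real_quantity) / math.log(nominal_price / real_price)
    print(q, round(eps, 6))   # 1.0 every time
```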

Dean Baker’s done some good stuff on this.

55

Peter 09.28.05 at 10:40 am

It’s obvious that a lot of these commenters don’t understand information theory. When you run into one wall different approaches open up.

You aren’t describing information theory. Information theory is about how much information something can hold, or how much information you can squeeze down a wire. The backbone of the telephone system uses a 56k bits/sec rate (it’s actually 64 kbps, but some regions of the US steal a bit per word, leaving only 56 kbps that you can depend on), and the phone line carries 2400 tokens per second. Back when 2400 baud modems were the fastest, you’d be sending a 0 or 1 per token. The really smart guys who worked for the modem manufacturers figured out ways of representing the tokens with amplitude and phase, so that you could send a lot more bits per token. So a 56k modem is sending 8 bits of information per token. The way the phone lines work, you can never send more than 2400 tokens per second (Nyquist showed it can’t be done; you’d learn that in electrical engineering).
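A minimal sketch of the general relationship being described here, a fixed symbol (‘token’) rate with more bits packed into each symbol via amplitude and phase; the 2400-baud, 16-point constellation below is illustrative (roughly what a 9600 bps modem did), not a model of any particular standard.

```python
import math

# Bit rate = symbols per second * bits per symbol, where bits per symbol
# is log2 of the number of distinguishable amplitude/phase combinations.
def bit_rate(symbols_per_second, constellation_points):
    return symbols_per_second * math.log2(constellation_points)

print(bit_rate(2400, 2))    # 2400.0 bps: one bit per symbol
print(bit_rate(2400, 16))   # 9600.0 bps: four bits per symbol
```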

Programming has also gotten a lot better, with standardized error reporting mechanisms, code analysis tools to prevent common mistakes, an establishment of design patterns to solve common problems, and heuristics to measure and manage attack surfaces and data integrity. There are binary analysis tools available to automatically calculate the test matrix for any checkin in a complex system. Code can be viewed as data, refactored at whim, re-styled and loaded with arbitrary meta-data.

This passage is so far removed from reality that it could be the plot of a sci-fi novel.

In the real world half the companies I’ve worked for refuse to use source code control. People I network with report similar results.

Standardized error reporting is almost always Microsoft Outlook, delivering a poorly worded nastygram. Outside of graduate school, I’ve never seen a company actually use code coverage tools, bug tracking or code analysis tools. Those are criticized as “ivory tower stuff.”

Code can be viewed as data, refactored at whim… That is the goal of one of the current fashion trends: reflection. Software development is infested with silver bullet syndrome, the belief that some new magical thingie will come along and rescue us from hard work. Even politicians are infected with that syndrome, always hoping that some magical deus ex machina will rescue them before they have to pay the price of their pontifications, posturing and pedantry. People have been promising Programming without programmers for 30+ years. CASE, Software Factories, Reflection, Object Oriented are all hyped as some new snake oil that will automagically cure everything. Fashion in software development is one of the things that holds it back from becoming a profession.

There are binary analysis tools available to automatically calculate… No. No possible way. Not even theoretically possible. The task you are describing is equivalent to Turing’s halting problem, which has been proved to be undecidable. Everyone who takes a class on computability learns that (which you’d take if you were a computer science major). It is also why there can never be a virus-checking program that detects viruses that haven’t been written yet: it is impossible. All they can do is compare what they see with something they’ve already seen in the past.

The saddest thing about all this is that academic research in software development is between 5 and 30+ years ahead of commercial software development. Things like relational databases took 10 years to go from academic paper (1968) to commercial product (1978), with lots of companies spending millions of dollars to get there (IBM, and another company now called Oracle).

No. On second thought, I’d love to work for any company that has an environment like the one you’re describing. It would be such a wonderful change from every single company that I’ve worked at and interviewed at that I’d probably be willing to pay to work there. They’d probably score higher than a 6 on the Joel Test.

56

Doctor Memory 09.28.05 at 12:17 pm

Props to Peter for dissecting “programming has also gotten a lot better” very thoroughly and thus saving me the effort.

So, tackling the part that you didn’t:

Bago: This requires meta-data and orchestration, leading to a crude form of self awareness. This has been happening on chip for years now, with the instruction schedulers. Scale this up to multiple cores and multiple chips and soon you’ll be seeing an entire multichip system as one black box capable of even more power.

No on all counts. Metadata and whatever-it-is-you-think-you-mean-by-“orchestration” are not forms of self-awareness, crude or otherwise. Nor, for the love of god, are instruction schedulers: branch prediction is a straightforward (although not necessarily simple) exercise in probability gaming, not some sort of AI pixie dust. (Nor does superscalar processor design have anything to do with metadata in the what-is-this-web-page-about sense. Or really any sense at all.) And on-chip parallelism does not get you closer to anything but the whip-hand of Amdahl’s Law.

Back to Peter: you won’t find a bigger fan of Brooks than me, but I personally suspect that it’s something of a copout to say that the problem of buggy software is merely a management one and that the solutions are “known”. Brooks described a number of helpful techniques to reduce bugs, but the fundamental problem is economic: producing quality code takes more time and more money than producing shovelware, and as long as shovelware solves 80% of the user’s problems and explodes infrequently enough that the user doesn’t feel like it’s worth their while to attack the learning curve of a competing product, shovelware usually wins because it was first to market. In other words, worse really is better, a fact which can drive a sane geek to drink. (But, I suspect, you know all this, and we are merely picking nits around the edges of a violent agreement.)

57

MTraven 09.28.05 at 12:28 pm

Re: 40, did 1900-1950 have more or less technological change than the following half-century?

I’m far from a singularitarian, but I think Kurzweil’s argument for exponential growth is not entirely bogus. No doubt the earlier 50 years saw more technological impact on the population, as electrification and auto transport spread (and antibiotics). But if you look at invention rather than impact, which is probably Kurzweil’s bias, you get a different story. The sheer number of inventions, and the ability to combine them in complex devices, is what has really taken off in the last 50 years. Our lives may not be exponentially better, but our devices are exponentially more complicated.

58

lemuel pitkin 09.28.05 at 12:52 pm

The sheer number of inventions, and the ability to combine them in complex devices, is what has really taken off in the last 50 years.

Really?

In what areas other than computers?

Not transportation, where the state of the art in both 1950 and 2000 was internal combustion engines and jet aircraft.

Not medicine, where the major causes of death and their treatments were essentially unchanged between 1950 and 2000, after being transformed completely over the preceding 50 years.

Not in communications (telephone, radio, movies, TV: all new and world-transforming in the first half of the 20th century, all essentially unchanged in the second half). Not in the generation of power. Not in the construction of buildings. Not in food preparation.

Look around the home or office where you’re sitting and ask yourself how much of what you see would be unfamiliar to someone 50 years ago. Then imagine the same exercise for some preceding 50-year period. Then see if you still believe what you just wrote.

59

abb1 09.28.05 at 12:53 pm

It’s not the management, and I don’t think it’s the ‘time and money’ either. Sometimes they spend tons of time and money and have adequate management and still produce crap. Software’s the answer, but they don’t know the question. Good software design is a rare talent.

60

John Quiggin 09.28.05 at 3:04 pm

Lemuel, that’s not how hedonic pricing works, though there are plenty of legitimate questions about whether the US has been too aggressive in doing it.

If things worked that way, your computer would be valued in the national accounts at millions of dollars.

I agree entirely with your point at #58, though. Outside computers and things they have affected, progress has been slowing down, not accelerating.

61

agm 09.28.05 at 3:15 pm

It is well known that Moore’s law will be over, simply on quantum mechanical grounds, within the next 10 years – 15 if there are a few miracles. So any predictions based on the advance of technology that rely on Moore’s law are crap. This is not a joke; it’s laid out in the semiconductor industry’s own roadmaps.

62

Peter 09.28.05 at 3:22 pm

Dr Memory, there are things I can change, and things I cannot. All too often I cannot change things because of the whim of my boss (or his, or his). Many are of the “move the couch to the left. Move the couch to the right. Left a bit. No. I think it looks better under the window over there” sort of thing that many husbands have experienced (irrelevant changes for the sake of looking important). I can take care of my own education, but I can’t make silk purses out of sows’ ears just because the seagull mismanager won’t let silk into the shop.

63

lemuel pitkin 09.28.05 at 4:58 pm

Lemuel, that’s not how hedonic pricing works, though there are plenty of legitimate questions about whether the US has been too aggressive in doing it.

Admit I’m a little rusty on this stuff. In what way isn’t that how it works?

Anyway, do you disagree that if over-aggressive use of hedonic pricing exaggerates quality improvements in computers, that will bias the apparent elasticity of demand toward 1?

64

John Quiggin 09.28.05 at 5:45 pm

The central idea of hedonic pricing is that you estimate a market value for characteristics like CPU speed using regression analysis. Wikipedia gives a good summary and you can get more detail from the BLS.
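For concreteness, a minimal sketch of what such a regression might look like, with made-up data; this is an illustration, not the BLS procedure.

```python
import numpy as np

# Regress log price on characteristics; the coefficients are read as the
# implicit market value of each characteristic. Data below are invented.
rng = np.random.default_rng(0)
n = 200
speed_ghz = rng.uniform(1.0, 4.0, n)
ram_gb = rng.uniform(0.25, 2.0, n)
log_price = 5.0 + 0.4 * speed_ghz + 0.3 * ram_gb + rng.normal(0.0, 0.1, n)

X = np.column_stack([np.ones(n), speed_ghz, ram_gb])
coef, *_ = np.linalg.lstsq(X, log_price, rcond=None)
print(coef)  # roughly [5.0, 0.4, 0.3]: intercept plus per-characteristic values
```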

Hedonic pricing replaces a matching model, in which quality improvements are valued by looking at the market premium of the newer, higher-quality item over the older one for the period.

Over-aggressive hedonic pricing will bias the estimated elasticity of demand downwards.

However, you don’t need to worry about this problem for my analysis, since you can just look at the share of IT-driven products in total expenditure. If this is rising the elasticity of demand for the relevant services is greater than one in the sense I want.

65

Andrew 09.28.05 at 6:12 pm

“In the real world half the companies I’ve worked for refuse to use source code control. People I network with report similar results”

WHAT? You and your friends must work for tiny tiny tiny companies or ones that are so poorly run that they shouldn’t even be considered when arguing whether software development practices have improved. I, and everyone I’ve ever worked with, have used source control on every multi-developer project since undergrad.

66

Tracy W 09.28.05 at 10:11 pm

Interesting factoid to carry around in your head. Since about 1950, the amount of R&D in the States has about tripled (count this in real dollars spent or scientists/engineers per capita). Yet the growth rate per capita has not tripled.

One of my economics lecturers suggested that growth may have been so high in NZ and the US in the 1950s because this was when the internal combustion engine (cars and trucks) took off and governments really started building motorways. So goods could be transferred far faster and cheaper across land than before.

67

bago 09.29.05 at 12:49 am

Peter: Creating a functional call graph within a closed system is possible. I’ve done it for Windows. Any time someone checks into the Windows source tree, a delta graph is generated that goes over managed, native, COM, registry, and file dependencies. Download and use FxCop to run a static analysis of your code, and see how many issues pop up. Look up F# from MSR Cambridge. Read a Design Patterns book. It sounds like you want to work for a good group at Microsoft. Source Depot is integrated with the bug-tracking system, which is integrated with mail, so you can perform deep analysis of your code churn, and optimize workflows like never before.

Just last week we hit zrbb one day after zabb, a first for the org.

People keep getting hung up on metrics that are easy to measure, and blind themselves to the massive set of possibilities that all of this new metadata presents for further refinement and achieving your actual goals. It’s one thing to make widget A go 50% faster, and it’s quite another to discover you can abandon widget A because it’s not important to the big picture. The pessimists get so caught up in widget A that they can’t see the forest of new innovations that is rapidly eclipsing it.
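The delta-graph idea is easy to sketch in toy form. This is obviously not the actual Windows tooling, just an illustration of diffing dependency edges between two check-ins, with every component name invented:

    # Toy "delta graph": compare call/dependency edges before and after a
    # check-in. Not real tooling; all component names are made up.
    from typing import Dict, Set, Tuple

    Graph = Dict[str, Set[str]]   # component -> things it depends on

    def edges(g: Graph) -> Set[Tuple[str, str]]:
        return {(src, dst) for src, deps in g.items() for dst in deps}

    def delta(before: Graph, after: Graph) -> Tuple[Set, Set]:
        """Return (edges added, edges removed) between two snapshots."""
        b, a = edges(before), edges(after)
        return a - b, b - a

    before = {"Widget.Draw": {"Gdi.Blit"}, "Widget.Save": {"File.Write"}}
    after  = {"Widget.Draw": {"Gdi.Blit", "Registry.Read"},
              "Widget.Save": {"File.Write"}}

    added, removed = delta(before, after)
    print("edges added by this check-in:  ", added)
    print("edges removed by this check-in:", removed)

The interesting part is less the graph itself than the metadata you can hang off those edges (bugs, churn, ownership) once check-ins, bug tracking and mail are wired together.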

68

bago 09.29.05 at 12:57 am

You know how long it takes to get from Charlestown to Lexington today, using these amazing lightning-fast horseless carriages? About an hour.

Do you know how long it would have taken him to inform people of this data with a cell phone network? 10 minutes.

69

FMguru 09.29.05 at 1:36 am

“Do you know how long it would have taken him to inform people of this data with a cell phone network? 10 minutes.”

Which is the same amount of time it would have taken with a long-distance landline call in 1950 or a telegraph message in 1880. Wooo, progress!

70

abb1 09.29.05 at 1:58 am

No way, man, they would’ve all ended up in Framingham jail.

Instead, he [bin Laden] has fallen back on ancient methods of communication, denying the U.S. and its allies the chance to track electronic footprints, satellite signals or even the radiation emissions from cellular phones. A grid of trusted human couriers, foot soldiers melding in with civilians, crisscross Afghanistan and flow into neighboring countries carrying written and whispered messages that are then electronically shot around the world.

Can’t beat trusted human courier.

71

John Quiggin 09.29.05 at 5:43 am

A belated thought in response to Erich S. “Because of the availability of ELISA and Western blotting—both techniques developed for and by academic molecular biology—the entire blood supply of the western world did not get totally infected with HIV between 1980 and 2000”

Not a small thing. But at most, the significance of this can be no greater than that of the discovery of blood groups, which permitted safe transfusion in the first place. And effective public health measures (restricting blood donation by high-risk groups, self-transfusion and so on) could have greatly reduced the risk of infection even without a test.

If Kieran is reading this far down the thread I’m sure he’d have something interesting to say here.

72

Brett Bellmore 09.29.05 at 5:52 am

“Interesting factoid to carry around in your head. Since about 1950, the amount of R&D in the States has about tripled (count this in real dollars spent or scientists/engineers per capita). Yet the growth rate per capita has not tripled.”

It’s the damned regulations. Government regulation is enormously more detailed and proactive than it formerly was. Take, for instance, the whole complex rule system of medical testing and approval, which stops some tragedies but creates an ongoing tragedy of greater proportions: slowed medical progress.

73

Peter 09.29.05 at 7:52 am

Bago, now (in 67) you’re describing source code analysis tools. Earlier (in 42) you were claiming those were binary analysis tools. Very different things. All those tools sound nice, but if the boss refuses to allow them into the shop, it doesn’t matter if you, I or that rock over there like them.

Andrew, the smallest company I’ve worked for that used source code control had 2 developers. The largest company I’ve worked for that refused to use source code control had over 60 developers. I’ve also worked at one where installing a purchased source-control product wasn’t possible because of political power games between the keepers of the computers and the software development group (it sat in a box on a shelf for over a year). Ineptly run? I agree. Many companies are ineptly run. Which is why I lay the blame for a lot of the foul-ups (the lack of improvement in software development, among other things) at the feet of mismanagement.

74

mikmik 09.29.05 at 11:12 am

#5 and #51 raise two very important points. Humans can only change and learn so fast, and the more powerful a technology, the more possible paths for development arise.

These two seem to be at odds. So I wonder: there are people who will always be a danger to themselves using a sharp knife to cut vegetables, LOL, yet these same people can use an ATM.

I believe it is the cohesiveness of society that enables growth.
Photon switches, flash memory, who knows what next. It is only a matter of time. It seems very arbitrary, and isn’t this the crux of Moore’s Law, that it is an observation and not a theory? Growth is not smooth.

What happened with walkmans? People isolate?
How about a direct brain-computer electro-chemical (what else, LOL) wireless interface to my 300-teraflop, 4096-core, 4-gig desktop at home? Daydreaming always?

90 hour work week?

75

Austin Parish 09.29.05 at 12:44 pm

Oh my, such misunderstanding…

“The fact that for it to occur, some tech advances are required that seem very close, but may in fact be very far away. Advances for which we don’t even (seem to) have the basic ingredients of a convincing implementation – e.g. human level computer consciousness.”

Human level computer consciousness is not necessary for a singularity to occur.

“Actually, there are clear signs that the rate of computing progress has slowed down, and physics dictates that Moore’s Law will halt its progress in the near future because it predicts transistor sizes smaller than a silicon atom by around 2020.”

So what? What does this have to do with computing speed and power? Once even crude nanocomputers are factored in, any limited prediction of possible computing speeds from our point in time becomes meaningless (see Drexler’s Nanosystems for the detailed physics of nanocomputation). But let’s say that there is some magical barrier to computing speed and power somewhere in the future. What does this have to do with the singularity?
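For what it’s worth, the quoted 2020 date is just a straight extrapolation. A rough back-of-envelope version, where the 90 nm starting point, the two-year halving time and the 0.25 nm atomic spacing are all loose assumptions rather than physics:

    # Back-of-envelope version of the "smaller than a silicon atom" claim.
    # Every input here is a rough assumption, not a measured fact.
    import math

    feature_nm  = 90.0   # assumed 2005 process feature size
    atom_nm     = 0.25   # rough silicon atomic spacing
    halving_yrs = 2.0    # assumed time for linear feature size to halve

    halvings = math.log2(feature_nm / atom_nm)
    year = 2005 + halvings * halving_yrs

    print(f"feature size hits atomic scale around {year:.0f}")
    # prints roughly 2022 with these numbers; nudging the assumptions moves
    # it a few years either way, which is where "around 2020" figures come from.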

The singularity is a difficult thing to wrap your mind around. It just seems so impossible – like going to the moon seemed in 1920, like any major technological change seems before it occurs. It inspires the deep feelings of “How could something like this ever happen? How could it ever happen in my lifetime? How could change like this occur?” This thread is an exemplar of how most people react to these feelings.

And the singularity is especially unfit for public understanding, because it is not just a major change, it is the major change.

76

bago 09.30.05 at 1:53 am

Hey, the technology is there. Just because some managers are too stupid to use it does not invalidate the technology. If a company with 60 devs is not using source control, they’re fucked. Period. If their bug-tracking system is TPS reports every Tuesday, they’re fucked.

My point is that the technology is there, and it can be used in stupendously great ways. The companies that know this will succeed, while the inept ones will die under the weight of their own wasted effort.

See you in the future.
