The Age of Em Won’t Happen

by Henry Farrell on June 19, 2016

Tyler Cowen says that the predicted future of Robin Hanson’s Age of Em – a world in which most cognitive and much physical labor will be done by emulations of brain-scanned human beings – won’t happen. I agree. I enjoyed the book, and feel a bit guilty about criticizing it, since Hanson asked me for comments on an early draft, which I never got around to giving him (the last eighteen months have been unusually busy for a variety of reasons). So the below are the criticisms which I should have given him, and which might or might not have led him to change the book to respond to them (he might have been convinced by them; he might have thought they were completely wrong; he might have found them plausible but not wanted to respond to them – every good book consists not only of the good counter-arguments it answers, but the good counter-arguments that it brackets off).

First – the book makes a strong claim for the value of social science in extrapolating likely futures. I am a lot more skeptical that social science can help you make predictions on grounds that vaguely correspond to Popper’s arguments against historicism, but more specifically lean on the ideas of Ernest Gellner, Popper’s sometime ally and sometime antagonist. Hanson’s arguments seem to me to rely on a specific combination of (a) an application of evolutionary theory to social development with (b) the notion that evolutionary solutions will rapidly converge on globally efficient outcomes. This is a common set of assumptions among economists with evolutionary predilections, but it seems to me to be implausible. In actually existing markets, we see some limited convergence in the short term on e.g. forms of organization, but this is plausibly driven at least as much by homophily and politics as by the actual identification of efficient solutions. Evolutionary forces may indeed lead to the discovery of new equilibria, but haltingly, and in unexpected ways. As per Gellner (p.12, Plough, Sword and Book), this suggests an approach to social science which doesn’t aim at specific predictions a la Hanson, so much as at identifying the underlying forces which interact (often in unpredictable ways) to shape and constrain the range of possible futures:

human ideas and social forms are neither static nor given. In our age, this has become very obvious to most of us; and it has been obvious for quite some time. But any attempt at understanding of our collective or individual predicaments must needs be spelt out against the backcloth of a vision of human history. If our choices are neither self-evident nor for keeps, we need to know the range of alternatives from which they were drawn, the choices that others have made or which were imposed on them. We need to know the principles or factors which generate that range of options. The identification of those principles or factors is not beyond our capacities, even if specific prediction continues to elude us.

Second: to the extent this criticism sticks, it suggests that the differences between the enterprise that Hanson is engaged in and good science fiction are nugatory. For sure, as Hanson suggests, science fiction is often going to bend prediction in favor of an interesting and compelling story. Yet in the end, much science fiction is doing the same kind of thing as Hanson ends up doing – trying in a reasonably systematic way to think through the social, economic and political consequences of certain trends, should they develop in particular ways. The aims of extrapolationistas and science fiction writers may be different – prediction versus constrained fiction writing – but their end results – enriching our sense of the range of possible futures that might be out there – are pretty close to each other. NB again – this is emphatically not what Hanson wants to do – but it is the reason I got value from his book.

Third – there is a specific science fiction writer that Hanson might have learned from (this is the main source of my guilt about not getting comments back to him – I’d have liked to have seen him engage with the case being made, and think that the book might have benefited therefrom). One of the unresolved tensions in Hanson’s book is the exact status of the ’ems’ – the emulated personalities that Hanson sees as dominating for a brief period in the human future. Are they free agents, or are they slaves? I don’t think that Hanson’s answer is entirely consistent (or at least I wasn’t able to follow the thread of the consistent argument if it was). Sometimes he seems to suggest that they will have successful means of figuring out if they have been enslaved, and refusing to cooperate, hence leading to a likely convergence on free-ish market relations. Other times, he seems to suggest that it doesn’t make much difference to his broad predictive argument whether they are or are not slaves (here I’m sort of reminded of Piccione and Rubinstein’s Equilibrium of the Jungle argument, which if taken seriously has all sorts of troubling implications for the ‘nice’ politics that many economists draw from equilibrium arguments).

So Hanson’s extrapolated future seems to me to reflect an economist’s perspective in which markets have priority, and hierarchy is either subordinated to the market or pushed aside altogether. The work of Hannu Rajaniemi provides a rich, detailed, alternative account of the future in which something like the opposite is true – a solar system wide civilization in which emulated personalities (gogols) are pervasive, but in which the techniques used to generate them can also be used to reshape them and to break them, creating vast and distributed hierarchies of exploitation. I’ve mixed feelings about Rajaniemi’s books as science fiction, precisely because the gap between his imagined societies and his book’s plotlines (heist capers more or less) is so large. Yet this disconnect means that they provide a rich counter-extrapolation of what a profoundly different society might look like that for the most part isn’t subordinated to the needs of a plot line. I don’t know what the future will look like, but I suspect it will be weird in ways that echo Rajaniemi’s way of thinking (which generates complexities) rather than Hanson’s (which breaks them down).

As an aside – people interested in these ideas also ought to check out Ken MacLeod’s brand new book – the first of three books about interactions between emulated personalities based on sort-of-brain-scans and actual machine intelligences in a broader fight between sort-of-progressives and Mencius Moldbug style reactionaries. Since I haven’t read books two and three, I can’t tell you where his ideas are going – but I can predict that you’ll have a lot of fun along the way.



Matt 06.19.16 at 11:55 pm

The argument I’d make against an Age of Em is that there won’t be much work left undone by boring old “narrow” AI by the time whole brain emulations are ready, if they’re ever ready. People with a professional background in computing tend to drastically underestimate the complexity and difficulty of biology. I say this as someone with a professional background in computing whose graduate research was at the interface of computing and molecular biology.

Before I saw the mention of Hannu Rajaniemi I thought that you were going to cite Ken MacLeod’s middle two books of the Fall Revolution series. Enslaved mind emulations figure prominently in The Stone Canal and The Cassini Division.


Omega Centauri 06.20.16 at 12:43 am

I second Matt on the technical difficulty of emulating bio-brains versus traditional AI. The latter is already here in very limited form, and will gradually increase in capability and pervasiveness as time goes by. We already have to confront the loss of work as the primary means of social valuation of humans, and that’s going to be tough enough; what emerges on the other side, even without ems, will be hard to predict.


mjfgates 06.20.16 at 5:02 am

I can’t imagine why you’d want to bother setting up a pile of emulated brains, when there are so many real ones that will practically pay you to give them something to do.


Sandwichman 06.20.16 at 5:59 am

“We already have to confront the loss of work as the primary means of social valuation of humans…”

Not to mention that making work the primary means of social valuation was pretty sketchy in the first place. Confronting the implosion of the illusion may be too much to bear.


Niall McAuley 06.20.16 at 12:15 pm

The only reason to run emulations of scanned brains would be if it is too hard to build an AI from the algorithms up. This would suggest that you won’t have a good handle on how to edit or modify the emulations – you’d always get something close to the real brain’s mind (if it works at all).

If I was an emulation and knew it, it would not take me long to work out if I was a slave or not. If I somehow did not know it and couldn’t work it out, I’d have to be inside some Matrix-scale sim, which is surely more trouble than whatever work you are able to wangle out of me.

In Gene Wolfe’s Long Sun books, killer AI robot tanks are in servitude when built until they pay back their owner the cost of constructing them, and then they work freelance.

In Vinge’s story The Cookie Monster, someone tries to get work out of emulations without letting them know, but it is not a well thought out scheme, just a hack, so it falls apart.


The Dark Avenger 06.20.16 at 12:57 pm

People with a professional background in computing tend to drastically underestimate the complexity and difficulty of biology.

That’s why I share Atrios’s skepticism about self-driving cars. What could be termed “collision-averse” cars will be available in the next few years, but fully programmable vehicles won’t be for at least a decade or so, IMHO.


jake the antisoshul soshulist 06.20.16 at 12:59 pm

I, for one, welcome our EM overlords.

How much trouble will the Capitalists go to so as not to employ us?
Sadly, probably as much as they have to.
I am convinced that Fred Pohl was an incurable optimist.


Neel Krishnaswami 06.20.16 at 1:01 pm

[…] but in which the techniques used to generate them can also be used to reshape them and to break them […]

This particular line of criticism is one that has been repeatedly directed at Hanson for roughly two decades now, ever since he wrote If Uploads Come First in Extropy in 1994, and it’s one he’s failed to convincingly grapple with the entire time.

Now, this idea means the future social environment will not consist of self-interested maximizers — because agents’ utility functions can be directly programmed, the bulk of the mathematical toolkit of modern economic analysis (which assumes utility functions are exogenous) is irrelevant. However, Hanson is incredibly committed to this toolkit — he’s written that he went and got a PhD in order to acquire the credentials he needed for his proposal on futures markets in ideas to be taken seriously. (That’s real dedication!)

Taking the programmability of agents seriously essentially means giving up on market forms for purposes of futurism, and as far as I can tell that’s something Hanson can’t do. In other words, Hanson writes about ems because they are the only posthuman future where economics is relevant.

But that has nothing to do with its plausibility.


Omega Centauri 06.20.16 at 1:30 pm

Dark Avenger’s logic – that bio-brain simulation is (very) hard, therefore autonomous driving is very hard – has a flaw. There is no proof that bio-emulation is the only route to autonomous driving; in fact we already have weak-form autonomous driving without emulation. His time scale seems realistic to me, however.


Carl Weetabix 06.20.16 at 1:41 pm

Sadly, unlike the Asimov sci-fi I grew up with, we won’t be sharing in the gains the robots bring, sipping mint juleps on our sky-porches and having gratuitous sex. Instead we will be fighting for the scraps of the few wet-work jobs left for us, watching the masters of the universe bask on their planetary mega-yachts.


ZM 06.20.16 at 2:01 pm

I went to a talk on the best computers there are at the moment. Apparently they have such high energy consumption that they need to be run with electricity from a small nuclear reactor or something. Also, they can only manage to do a 3D model of a flu virus. It will probably be physically impossible to build a computer which could model just one whole human brain; it’s much easier for people just to do the work themselves.


PMP 06.20.16 at 2:17 pm

The argument I’d make against an Age of Em is that there won’t be much work left undone by boring old “narrow” AI by the time whole brain emulations are ready, if they’re ever ready.

Absolutely. And it’s not just because building narrow-capability AI is technologically easier. Even if we could build a full human-brain emulation tomorrow, I reckon science and society would both lean toward relying on large numbers of narrow, single-purpose AI tools instead. Because they are undoubtedly tools.

Scientists and engineers aren’t working in a total vacuum. They have ethical questions about their work and its implications; they have read the science fiction and seen Terminator and The Matrix. A general-purpose AI with broad capabilities to interact with the world is an incredible potential danger and a major ethical dilemma. We’re hitting a point where factory farming is questioned pretty broadly by society (much more so than in the past); I doubt an enslaved underclass of human-level robot intelligences would be quietly overlooked. Much easier to just have a variety of complex-but-specialized software and hardware built to handle particular tasks, with no danger of developing a consciousness or expanding beyond its mandate.


alfredlordbleep 06.20.16 at 3:12 pm

Am probably way behind, but somebody ought to mention Hubert Dreyfus as a reference for philosophical skepticism pressing on AI. His book (with later edition) always struck me as one of the best adventures in applied philosophy, let us say.


Taj 06.20.16 at 3:16 pm

Spoken like you don’t own a smart phone, ZM.

Moore’s law seems to be topping out, but Koomey’s law is apparently going strong for now. That’s a seventy-year-long trend; if it continues for another twenty, a computer equivalent to Cray’s Titan will draw as much power as an LED light bulb. Hanson’s hundred-year timescale makes a lot of assumptions, but it isn’t plainly absurd.


alfredlordbleep 06.20.16 at 3:16 pm

P. S.
1992. Dreyfus What Computers Still Can’t Do: A Critique of Artificial Reason. Cambridge, MA: MIT Press. ISBN 0-262-54067-3


TM 06.20.16 at 3:34 pm

Emulating human brains in order to create intelligent robots makes about as much sense as running computers on the bioenergy harvested from enslaved humans (which is how the Matrix was supposed to work, for Pete’s sake).


Brett Dunbar 06.20.16 at 5:04 pm

A trial of fully autonomous driverless cars is due to start in Milton Keynes and Coventry by the end of this year.


Gavin Kostick 06.20.16 at 6:32 pm

I am an EM but I’m too busy watching the football (soccer in your language) to deal with your puny needs.


ZM 06.20.16 at 7:23 pm


Well, just to make an atomic-scale model of the flu virus they needed one of the most powerful computers in the world, Blue Waters. To model a human brain would need significantly more computing power than this. You may as well just get humans to do the jobs instead of building great computers to model humans to do the jobs.


Matt 06.20.16 at 7:35 pm

The continuation of Koomey’s Law looks questionable. On the original Green 500* list from November 2007, the champion achieved 357 megaflops per watt. By Koomey’s Law** it should have been 2087 by November 2011 — and it was actually 2026. Pretty close! By November 2015 it should have been 12206 megaflops/watt if Koomey’s Law were keeping up. It was actually at 7032. Every year that passes, you’d need a bigger leap forward just to get back on the predicted trend line.

You can undoubtedly reap huge efficiency benefits with special-purpose hardware tailored to different tasks. For that reason I expect many AI applications to continue to grow relatively unhindered by the death of Moore’s and Koomey’s laws. See for example Google’s recent unveiling of its special purpose tensor*** processing units. But you’re not going to fit a Cray Titan replacement in the power envelope of a light bulb.

* A list of large computing systems ranked by energy efficiency while running the high performance LINPACK linear algebra benchmark.

** “The number of computations per joule of energy dissipated has been doubling approximately every 1.57 years.”

*** Not the physicist’s tensor, but a highfalutin’ term appropriated by the machine learning community for multi-dimensional arrays.
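For anyone who wants to check Matt’s arithmetic, the extrapolation is a one-liner; a minimal sketch, using the Green 500 figures quoted in the comment above:

```python
def koomey_prediction(base_mflops_per_watt, years_elapsed, doubling_period=1.57):
    """Extrapolated efficiency, assuming a doubling every `doubling_period` years."""
    return base_mflops_per_watt * 2 ** (years_elapsed / doubling_period)

base = 357  # Green 500 champion, November 2007, in megaflops per watt

print(round(koomey_prediction(base, 4)))  # ~2087 predicted vs. 2026 actual (Nov 2011)
print(round(koomey_prediction(base, 8)))  # ~12206 predicted vs. 7032 actual (Nov 2015)
```

The widening gap between the second prediction and the measured value is the whole point: the shortfall compounds, so each missed year makes the trend line harder to rejoin.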


Hidari 06.20.16 at 7:47 pm

Here’s an old article from Slate about why current enthusiasm (or hysteria) over self-driving cars might be misplaced, or at least premature.

@14 and @16

Please note this article (which went viral and caused hysteria amongst the usual suspects).


Hidari 06.20.16 at 8:05 pm

Incidentally, PZ Myers (an actual practicing biologist) on how ‘easy’ it will be to take a photo of the brain, copy it, and then upload it (to the internet, presumably).


JimV 06.20.16 at 8:06 pm

Most of the hysteria about the Epstein article among experts was hysterical laughter. See Scott Aaronson’s and Jeffrey Shallit’s responses, for example. Here’s SA’s reaction (in a comment at his blog):

Scott Says:
Comment #99 June 4th, 2016 at 2:19 am
Peter Byrne #97: I didn’t find that article cogent at all. In fact, the entire thing seems based on a trivial language confusion.

In saying that “your brain is not a computer,” Epstein turns out to mean something true but pedestrian: namely, that the brain isn’t organized the way existing digital computers are organized. Nowhere does he even try to show why the brain couldn’t be simulated by a computer—i.e., why the brain violates the Physical Church-Turing Thesis. That’s what we’d need for the brain to be genuinely non-computational, i.e., for reproducing its behavior on a suitably-programmed computer to be impossible rather than merely complicated or hard. Worse, Epstein gives no indication of understanding what computational universality or the Church-Turing Thesis even are—meaning that the actual questions at stake here, the ones that Roger Penrose realizes he needs to answer for the brain-is-not-computational thesis to stand a chance, never even cross the horizon of Epstein’s consciousness.

Related to that, Epstein never even tries to grapple with deep learning algorithms, which do have a vaguely brain-like organization, and which of course have enjoyed spectacular successes over the last five years. It’s as if someone published a philosophical article, in June 2016, entitled “Why American Democracy is Inherently Stable Against Authoritarian Demagogues,” without even showing awareness of any potential recent counterexample, let alone trying to explain it away. I’d call that borderline intellectual dishonesty, if not for my suspicion that Epstein really does have no more awareness about machine learning than about other parts of CS.


BigHank53 06.20.16 at 8:40 pm

Did no one else ever read Phillip Jennings’ The Bug Life Chronicles? Convicts have their brains uploaded to silicon, whereupon they’re condemned to operating deep-space probes and traffic signals.

I’m not expecting to see self-driving cars any time soon in my town. You’ll need a pretty good AI to determine the difference between a fallen branch in the street and a garbage bag and a kid that’s fallen down and a dog. Two of those you can run over, two…well, not so much. Automated long-haul trucks on limited-access highways, though: they’ll be here within the decade because they’ll displace an expensive skilled worker, and they’ll work twenty-four hours a day.


Gotchaye 06.20.16 at 8:57 pm


You’re right that we’re not going to be simulating human brains down to the atomic level anytime soon or maybe ever, but I don’t know that many people think we’d have to do this in order to have a good enough simulation of a human brain. Like, obviously we can do a good enough job of simulating much, much bigger systems than flu cells on desktop PCs – we can predict where whole /planets/ are going to be a year from now. You don’t have to simulate every atom in the planet to sort out what gravity is going to do to the whole collection of atoms. Applied to simulating brains, the basic idea here is that maybe we can think of brains as just being made of neurons without worrying what the neurons are made of.


Matt 06.20.16 at 8:57 pm

In saying that “your brain is not a computer,” Epstein turns out to mean something true but pedestrian: namely, that the brain isn’t organized the way existing digital computers are organized. Nowhere does he even try to show why the brain couldn’t be simulated by a computer—i.e., why the brain violates the Physical Church-Turing Thesis. That’s what we’d need for the brain to be genuinely non-computational, i.e., for reproducing its behavior on a suitably-programmed computer to be impossible rather than merely complicated or hard.

I think it’s a little subtler than that. I once got into a blog argument with Rudy Rucker about whether a rock is a computer. I argued that it was not; he argued that it was. I think he was trapped by a point of view that can’t look at anything mathematically describable without interpreting it as computation. Epstein seems to be arguing against a similar but more common affliction that sees the brain “as” computation, though it is biological tissue like the liver and marrow, and the pancomputationalism view held by Rucker and some others doesn’t seem to be very popular for non-brain objects.

To be clear, I don’t argue that a rock, because it is not a computer, violates the Physical Church-Turing Thesis. In principle I believe that you could simulate any behavior of a rock (or a human liver, or brain) given a sufficiently detailed model of the object and its context plus sufficiently generous memory and space for the simulation. I do have considerable doubts about whether those “sufficiently”s can be achieved in the real world. I also think that being stuck with just that one computer-like model for thinking about the brain can be misleading.


Matt 06.20.16 at 9:00 pm

“…sufficiently generous memory and space” should of course have been “…sufficiently generous time and space”.


Brett Dunbar 06.20.16 at 9:13 pm

@25 BigHank53

Considering that driverless cars (specifically the two-seat LUTZ Pathfinder) will be operating in pedestrianised areas of Milton Keynes and Coventry by the end of this year, reasonably adequate solutions to a person stepping in front of one must have been developed. Presumably, like a human driver seeing an unexpected object in front of you, a robot will brake or steer round it unless it can determine that it can be driven over safely.


Matt 06.21.16 at 12:08 am

You’ll need a pretty good AI to determine the difference between a fallen branch in the street and a garbage bag and a kid that’s fallen down and a dog. Two of those you can run over, two…well, not so much.

That would be a pretty hard classification problem if the machine were limited to the same sort of sensors humans use to assess the situation (passive imaging in the human visual range, with fairly poor low-light performance, e.g. human eyeballs). But weaknesses in the classifier can be compensated with better sensors. Imaging in the thermal infrared range, for example, can distinguish living humans and animals from branches and garbage bags on the basis of temperature contrast with surroundings. Active radar sensors can further distinguish between a solid obstacle and an empty plastic bag.

The fused data from different passive and active sensors can upgrade machine performance on classification tasks from significantly-worse-than-human to better-than-human. That’s part of how machines got better than humans at some kinds of facial recognition challenges. Machines could upgrade the optics and sensors for high resolution multispectral imaging, which provides better distinguishing features. Humans are stuck using the same eyeballs forever.


Jason Smith 06.21.16 at 12:13 am

Regarding the convergence on optimized outcomes, there is reason to expect this in the long run if we think the state space (say, species) is being explored randomly and is of sufficiently high dimension (say, traits). For thousands of traits, most of the possible states (i.e. species) are near the leading edge of the boundary (defined by available energy, time or other constraint). If we were to select a species at random from the state space, it would likely exhibit approximately maximal fitness due to most points in a high-dimensional volume being near its boundary.


jake the antisoshul soshulist 06.21.16 at 12:14 am

I don’t suppose you would volunteer to be a test subject, Brett?

I am not sure what would be the value of emulating the human brain. The idea seems to be a bit anthropocentric to me.
There are plenty of humans that have an acceptable human brain. Why would you not want something better, even if it should supersede us? But then the war of extinction could render the planet uninhabitable even for our successors.


GregvP 06.21.16 at 1:33 am

Taj@15, Cray Titan performance/watt and Koomey’s Law.

Landauer’s principle indicates a theoretical minimum of around half a watt for Titan-level performance (assuming room temperature and 100 bit-flips per floating-point operation).

If Koomey’s law continues to hold, in 20 years we will be within two orders of magnitude of this theoretical minimum, and Koomey’s law will definitely fail shortly thereafter. Hanson’s extrapolations seem implausible to me.
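The Landauer bound is easy to plug numbers into. The sketch below uses an assumed sustained rate of roughly 17.6 petaflops for Titan and the comment’s assumption of 100 bit-flips per floating-point operation (the result is linear in both, so different assumptions shift the floor proportionally – with these particular inputs it lands in the milliwatt range, though the qualitative conclusion that Koomey’s law hits a thermodynamic wall is the same either way):

```python
import math

BOLTZMANN = 1.380649e-23  # Boltzmann constant, J/K

def landauer_min_power(flops, bit_flips_per_flop=100, temp_kelvin=300):
    """Minimum power (watts) implied by Landauer's principle:
    erasing one bit dissipates at least k*T*ln(2) joules."""
    joules_per_bit = BOLTZMANN * temp_kelvin * math.log(2)
    return flops * bit_flips_per_flop * joules_per_bit

# Assumed Titan-level sustained performance: ~17.6 petaflops.
print(landauer_min_power(17.6e15))
```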


js. 06.21.16 at 4:13 am

Oh great, I just got modded. Will try again!


js. 06.21.16 at 4:15 am

Hidari’s Aeon link @22 reminded me of this, which I think is excellent. (Tho I don’t think Luciano Floridi makes any real argument against AI sceptics.)



Niall McAuley 06.21.16 at 5:57 am

Brett writes: Presumably like a human driver seeing an unexpected object in front of you a robot will brake or steer round

Better than a human – human reaction times are terrible, and a car travels a long distance while a human is reacting. Reaction times are one reason a robot will make a safer driver.


Hidari 06.21.16 at 7:26 am

@24 ‘Nowhere does he even try to show why the brain couldn’t be simulated by a computer—i.e., why the brain violates the Physical Church-Turing Thesis.’
Epstein may well have gotten the Church-Turing Thesis wrong (or, as you correctly point out, the relevant ‘offshoot’ of this theory, the Physical Church-Turing Thesis*). But he is scarcely the only one. As Wikipedia correctly points out: ‘Philosophers have interpreted the Church–Turing thesis as having implications for the philosophy of mind; however, many of the philosophical interpretations of the Thesis involve basic misunderstandings of the thesis statement.’ ( for more on this see:

As @27 points out the problem with the Physical Church-Turing Thesis is not that it is wrong as such as that it explains too much (Searle points this out). In other words, it’s not just, assuming the PCTT is correct (and let’s face it it might not be) that the human brain is Turing computable, it’s that EVERYTHING (in this view) is Turing computable so… what? The theory ‘explains’ everything and therefore nothing. After all it sounds all exciting and ‘sciencey’ to claim that the human brain is Turing computable. It sounds a lot less exciting (indeed, crazy) to claim that a wall or a rock are also Turing computable, but that is the logic of the claim (as Searle pointed out long ago:

*It’s worthwhile pointing out that some people consider the PCCT to be false, see for example

I might finally point out that despite what strong AI fanatics continually try to imply, ‘can be simulated by’ is not a synonym of ‘is’. Hamlet can be (and is) simulated by actors, but this doesn’t make Benedict Cumberbatch Hamlet.


Hidari 06.21.16 at 7:32 am

@36 Great line from that paper:

argument seem(s) to be that, since we do not have the faintest idea about how to
build a machine that can answer a handful of intelligent questions or even win the
one-question TT consistently, the best strategy might be to go full-blown and try to
build a machine that is conscious. As if things were not already impossibly difficult
as they stand. This is like being told that if you cannot make something crawl, you
should make it run a hundred metres under 10 s, because then it will be able to
crawl. Surely it will, but surely there must be better ways of spending our research
funds than by chasing increasingly unrealistic dreams’.


Hidari 06.21.16 at 7:34 am

This article rarely turns up in discussions, perhaps because it wasn’t published in an academic journal but it’s pretty good. Raises a lot of points that are never raised by the pro-AIers.


Trader Joe 06.21.16 at 11:23 am

“You’ll need a pretty good AI to determine the difference between a fallen branch in the street and a garbage bag and a kid that’s fallen down and a dog.”

The AI on the current iteration of cars is not even attempting to make any such distinction – it regards all of these as obstacles and will seek to avoid them via either stopping or maneuvering around them if it’s safe to do so. The advantage AI brings to the table is that its sensors will have noticed the obstacle – whatever it is – far faster than even an over-caffeinated F1 driver could, and will have made a decision about avoidance before a human would have been aware of the problem.

The #1 problem this has caused in testing (so far) is that this vastly superior reaction time is causing the feeble human drivers to rear-end the autonomous vehicles, since the AI has stopped the car ahead of whatever’s in the road and the humans can’t react quickly enough to avoid hitting the car in front of them. This problem solves itself when a critical mass of autonomous vehicles is reached.

That said, based on vehicle fleet ages and tech adoption in the auto industry, it would take about 30 years before even 50% of the fleet had appropriate autonomous technology.


bianca steele 06.21.16 at 12:52 pm

I like the example of a violin. A computer (electronic synthesizer) can do anything a violin can do. Digital computers are powerful! But string instruments are still interesting. If someone next year invented a new varnish that produced violins better than Stradivarius’, we wouldn’t say, oh a computer could already do it. We would be amazed that a violin could produce such beautiful music (without even being able to read a note, let alone understanding modus ponens)!

Most people who toss around “Church-Turing” don’t know why the thesis was proposed or what it actually means.


SusanC 06.21.16 at 6:10 pm


Yes, the report that Google’s robot cars suffer a higher than average rate of rear end collisions was interesting, even if it’s obvious with the benefit of hindsight.

At least in the UK, human drivers are taught to check their rear-view mirror before braking (unless it’s an emergency stop). Considering the likely reflexes of the driver behind is already explicitly part of the as-taught-to-humans algorithm, so it clearly should be part of the software for robot cars too. It’s interesting because it’s an example of a task that initially appears to be just physics (and motoring law) but turns out to involve an element of modelling human beings.

I would imagine that solving this is much, much easier than the general AI problem, or uploading people’s brains into computers. The computer game Forza already uses machine-learning techniques to model human driver behaviour, so that the computer controlled cars in the game behave like they would if a human was driving them. A somewhat similar model of human braking reactions might do the trick for a real-world robot car (where the goal of the s/w is to predict other human drivers, not emulate them).

I really don’t believe that fully emulated personalities will become a viable alternative to this kind of low-level machine learning.


Hidari 06.21.16 at 7:06 pm


It’s worth noting that, after the initial hysteria, Google now promises that self-driving cars (as we understand that concept, i.e. cars that simply take you where you want to go, in all weather, and under more or less any circumstances) may well be 30 years away.

Two points to be made here:

1: Remember, this is probably an optimistic estimate, as Google have a vested interest in exaggerating how quickly such vehicles can be produced and brought into mass production, and

2: This is 30 years till they are on the market. It will take a good number of decades after that (if the previous move from horse-drawn transportation to petrol powered cars is anything to go by) before the majority of cars on the road are self-driven.


js. 06.21.16 at 11:16 pm

Google now promises that self-driving cars (as we understand that concept, i.e. cars that simply take you where you want to go, in all weather, and under more or less any circumstances) may well be 30 years away.

I am in my late 30s, so I expect to be around for another three decades. I would be extremely surprised if self-driving cars in this sense (which is indeed the relevant sense) became a reality in that time.

Meanwhile, I wish we would build some more motherfucking trains! We already know how to do that.


bruce wilder 06.21.16 at 11:50 pm

If people took the reality of climate change seriously, we would be building out rail and canals, even. Going backwards in a way, but using high-tech to make those very energy efficient modes of transportation fantastically parsimonious in the context of a fantastically frugal civilization.

Our profligacy or the blind acceleration of technological change in pursuit of novelty is going to do us in.


Brett Dunbar 06.22.16 at 5:32 am

We make very extensive use of seaborne transport, which is about the most energy-efficient form we know.

I would be very surprised if none of the various projects has produced a fully featured driverless car within five years. I also expect that once it has been developed it will very quickly be an optional extra on all new cars, then a standard, then compulsory.


TM 06.22.16 at 8:40 am

42: “A computer (electronic synthesizer) can do anything a violin can do.”

I don’t think this is true. A synthesizer cannot create the same complex overtones as a real instrument. Similarly, synthetic chemistry cannot recreate the complexity of natural fragrances.

26: A planetary system is far less complex than even the simplest biological system. And if I remember correctly, even the simple three-body model can have chaotic trajectories which are, in the long run, inherently unpredictable.


Hidari 06.22.16 at 9:47 am


Look at driverless underground trains. We have known how to build them for decades (the first fully automated tube line was built in 1967).

But you will note, by looking around you, that the vast majority of underground lines in the world are still not fully automated (in my city, for example, all the trains are manned and there are no plans to change that).

Or what about overground trains? It would be very difficult, sure, but possible to simply automate all overground trains in all major industrial countries. But again, this has not been done, and there are no serious plans to do this.

What do you infer from this?


Brett Dunbar 06.22.16 at 10:27 am

The driver is a very small part of the operating cost of the train, and the AI systems available were fairly primitive. Basically they could cope with totally routine conditions but were rather inflexible: they had difficulty with request stops, for example, while a driver can distinguish between a traveller on an open platform gesturing for the train to stop and a trainspotter standing on the platform.

The sensor and computer systems are reaching the point where they can cope with the uncontrolled situations found on roads. They are far more flexible than the older type of autonomous system. An intelligent robot rather than a mechanical automaton.


TM 06.22.16 at 10:29 am

Hidari: interesting observation. It is of course also true that automatic security systems have done a lot to make trains (and planes) safer. Human error still causes crashes. In a recent train crash in Bavaria, a human operator twice – by mistake, not on purpose – overrode the automatic safety system. To me this indicates that we still haven’t quite figured out the problem of designing the human-machine interface. There will always be a human-machine interface and getting it right is hard.


Hidari 06.22.16 at 11:28 am

@51 you are making my point for me. It is certainly true that we can now start to envisage driverless cars (i.e. on roads). But this goes doubly for overground trains (let alone underground trains where, as I have pointed out, the technology has existed since the 1960s).

And yet there is no major move to automate any overland railway system in any country and the automation of underground railways has been piecemeal and half hearted.

So again, what do you infer from this? To be specific, what do you infer about the timescale of any proposed ‘switch over’ from mainly ‘manned’ to mainly ‘unmanned’ motor vehicles?


bianca steele 06.22.16 at 2:02 pm


Maynard Smith, last I heard, also thought the argument about chemicals was more important than the traditional CS arguments. I take it you join the camp that’s dubious about the possibility of AI?


Brett Dunbar 06.22.16 at 2:44 pm

The tech hasn’t been available in a useful form as of yet. The older type of systems were rather inflexible, and the potential cost savings of automation were very limited, so in most cases it didn’t really make sense to use the very limited systems then possible. They could only really be used where trespassers were basically impossible and track conditions were consistent (such as no leaves on the line affecting traction), which pretty much means underground tracks.

Automating a train may reduce the crew needed by one, but this isn’t much of a saving for a train carrying dozens to hundreds of passengers with a number of staff other than the driver. It’s an easier problem than roads, but the benefits from solving it are pretty small. So no one bothered to make the substantial investment to develop an automated system.

Cars are a more difficult problem but the potential sales are substantial.

The underlying technology is now far better: the software required was far beyond the capabilities of the hardware available until recently. The competitions for prototype robot land supply vehicles run by the military have shown very rapid improvements in capability.

I expect that once a viable system is developed (and given that we are field-testing prototypes, we are pretty close), it will be mandatory on new cars within five to ten years, once the cost of installing the system during construction is under maybe £1,000. It’s like pharmaceuticals: development is hugely expensive but duplication is pretty cheap, so you go straight to the mass market rather than target the luxury end.


TM 06.22.16 at 3:39 pm

bs 54, I was expressing disagreement with a statement you made, not joining any camps.


bianca steele 06.22.16 at 3:47 pm

TM, it was hard to know what you were expressing. You didn’t engage with the argument made by Hidari and Matt or with the argument I made.


Hidari 06.22.16 at 6:03 pm

er…I’m probably not going to continue to follow this conversation any more as I don’t think we are singing from the same hymn sheet, but you haven’t really answered any of my points. It is simply not true that there are no substantial financial benefits to be made from firing all tube drivers and replacing them with automatic trains. London Underground tube drivers are famously well paid: the highest paid apparently earns about £61,000 p.a. TfL would save substantial sums of money by complete automation.
But they don’t do it.

Just because something can be done, it does not follow that it will be done.

As for your prediction that driverless cars will be mandatory within 5-10 years, I would just ask you this: what do you know that the head of the driverless car project at Google doesn’t?


TM 06.23.16 at 11:07 am

bs 56: “it was hard to know what you were expressing”

That is a harsh judgment but I’ll have to live with it.


Layman 06.23.16 at 11:59 am

Brett Dunbar: “I would be very surprised if none of the various projects have produced a fully featured driverless car within five years. I also expect that once it has been developed it will very quickly be an optional extra on all new cars, then a standard, then compulsory.”

As fantasies go, there’s nothing wrong with this, but as a prediction, it’s basically useless. Start with geography – in what city, state, country do you predict that people will be permitted to operate a driverless car (without driving it themselves!) as an option, in anything remotely like 5 years or ‘very quickly’ thereafter?


Brett Dunbar 06.23.16 at 12:40 pm

UK law was changed early last year to permit driverless cars, and a trial initially using 40 driverless LUTZ Pathfinder pods is starting at the end of this year in Milton Keynes and Coventry. Specialist insurance broker Adrian Flux are offering a policy that covers driverless cars.

It is already legal here. Once it is both reasonably inexpensive and substantially safer it will become mandatory on new cars and illegal to remove if installed (which is the standard approach to safety systems).


Layman 06.23.16 at 12:48 pm

“It is already legal here.”

…with a driver in the driver’s seat, at the controls.


Brett Dunbar 06.23.16 at 2:37 pm

No. As I specifically stated the UK changed the law early last year in order to be able to conduct trials of driverless vehicles.


Trader Joe 06.23.16 at 2:43 pm

@60 layman
The “auto-pilot” on the current Tesla is for all practical purposes already a fully autonomous vehicle.

Google ‘tesla autonomous car’ and you’ll get any number of YouTube videos of people employing their car in traffic quite effectively. Reportedly someone has driven one coast to coast with this feature – although some have called BS on it.

The Mercedes S-Class that is soon to be available will also have this technology and reviews say its even better.

Whatever the specifics – Brett is quite right, there is little chance that an ‘auto pilot’ feature won’t be an option on cars in the next 5 years. Whether it is ever ‘mandated’ or not is highly speculative for the reasons many have suggested.


Brett Dunbar 06.23.16 at 3:48 pm

If, as seems probable, an autopilot is better than any human and fairly inexpensive (say an incremental cost in a new car under £1,000), mass-market cars will start being fitted with them as standard, and the next revision of car construction standards will likely make them a mandatory safety feature. The UK has one of the world’s lowest road fatality rates, partly due to a rather aggressive approach to safety, so the UK is likely to be one of the first countries to require fitting an autopilot in all new cars.

According to Wikipedia, in 2013 the UK had 3.5 fatalities per billion vehicle km; only Sweden (also at 3.5) was as low amongst the countries where this could be calculated. The USA was 7.1, South Korea 18.2 and Brazil 55.9.


Layman 06.23.16 at 10:17 pm

‘The “auto-pilot” on the current Tesla is for all practical purposes already a fully autonomous vehicle.’

Yes, I’m aware. To clarify my objection – I’m asking which polities will approve the operation of these cars on the public roads, in the absence of any person in the driver’s seat, inside 5, or 10, or even 20 years. As long as someone is in the driver’s seat, it isn’t in my view a self-driving car, any more than a 787 is a self-flying plane as long as there’s a pilot in the seat. And it certainly doesn’t change the public transportation equation in the way people claim it will if every car still has a driver. So, which set of local politicians decides to take the risk, for what public gain?


Brett Dunbar 06.23.16 at 10:45 pm

The UK, early last year, as I have stated. If the tech is developed in the UK, then the income from licensing the IP benefits the UK. In order to conduct trials, autopilots needed to be legalised, so rather than have to re-legislate in a year or two once the tech is ready for commercial use, parliament decided to go the whole hog and legalise everything now.

Once a practical system is available then a lot more jurisdictions are likely to follow. It’s not quite at that stage yet but will be soon.


Layman 06.23.16 at 10:57 pm

“The UK, early last year, as I have stated.”

Can you please provide a link? My understanding of the 2015 law is that it requires a driver in the driver’s seat. Is that not the case?

