When the Machine Started

by Henry on March 9, 2011

The great “what will we do when the machines take over” debate continues, but surprisingly little attention has been paid to the arguments of licensed speculative economists, i.e. science fiction writers, who have been engaged in this debate for some decades at least. The Bertram/Cohen thesis receives considerable support from Iain Banks’ repeated modeling exercises with slight parameter variations, which find that the advent of true artificial intelligence will free human beings to spend their time playing complicated games, throwing parties, engaging in various forms of cultural activity (more or less refined), and having lots and lots of highly varied sex. With respect to the last, it must be acknowledged that extensively tweaked physiologies and easy gender switching are important confounding factors.

But it isn’t the only such intellectual exercise out there. Walter Jon Williams’s Green Leopard Hypothesis (update: downloadable in various formats here – thanks to James Haughton in comments) suggests, along the lines of the Cowen/DeLong/Krugman argument, that a technological fix for material deprivation will lead to widespread inequality and indeed tyranny, unless there be root and branch reform to political economy. But perhaps the most ingenious formulation is the oldest – Frederik Pohl’s Midas Plague Equilibrium, under which robots produce consumer goods so cheaply that they flood society, leading the government to introduce consumption quotas, under which the proles are obliged to consume extravagant amounts so as to use the goods up (the technocrats fear that any effort to tinker with the system will risk reverting to the old order of generalized scarcity). This is a world of conspicuous non-consumption, in which the more elevated one’s social position, the fewer possessions one is obliged to have. Crisis is averted when the hero realizes that robots can be adjusted so that they want to consume too, hence easing the burden. One could base an entire political economy seminar around Pohl’s satirical stories of the 1950s and 1960s – he was (and indeed arguably still is, since he is still alive and active) the J.K. Galbraith of the pulps. If, that is, J.K. Galbraith had been a Trotskyist. I’m sure that there are other sfnal takes on this topic that I’m unaware of – nominations?

{ 138 comments }

1

Courtney Ostaff 03.09.11 at 2:57 am

See also:
http://www.sigmaforum.org/

I suggest that these are all happening at once. Good sci-fi takes *one* aspect of society and exaggerates it. Real life is more complex.

Bertram/Cohen + Cowen/DeLong/Krugman = bread and circuses, no?

“The proles are obliged to consume and use up extravagant amounts so as to use the goods up” = current obesity crisis, too.

2

Sandwichman 03.09.11 at 3:01 am

Just a reminder that we are on the eve of the 200th anniversary of the start of the Luddite riots: March 11, 1811.

The Luddite disturbances started in circumstances at least superficially similar to our own. British working families at the start of the 19th century were enduring economic upheaval and widespread unemployment. A seemingly endless war against Napoleon’s France had brought “the hard pinch of poverty,” wrote Yorkshire historian Frank Peel, to homes “where it had hitherto been a stranger.” Food was scarce and rapidly becoming more costly. Then, on March 11, 1811, in Nottingham, a textile manufacturing center, British troops broke up a crowd of protesters demanding more work and better wages.

That night, angry workers smashed textile machinery in a nearby village. Similar attacks occurred nightly at first, then sporadically, and then in waves, eventually spreading across a 70-mile swath of northern England from Loughborough in the south to Wakefield in the north. Fearing a national movement, the government soon positioned thousands of soldiers to defend factories. Parliament passed a measure to make machine-breaking a capital offense.

But the Luddites were neither as organized nor as dangerous as authorities believed. They set some factories on fire, but mainly they confined themselves to breaking machines. In truth, they inflicted less violence than they encountered. In one of the bloodiest incidents, in April 1812, some 2,000 protesters mobbed a mill near Manchester. The owner ordered his men to fire into the crowd, killing at least 3 and wounding 18. Soldiers killed at least 5 more the next day.

Earlier that month, a crowd of about 150 protesters had exchanged gunfire with the defenders of a mill in Yorkshire, and two Luddites died. Soon, Luddites there retaliated by killing a mill owner, who in the thick of the protests had supposedly boasted that he would ride up to his britches in Luddite blood. Three Luddites were hanged for the murder; other courts, often under political pressure, sent many more to the gallows or to exile in Australia before the last such disturbance, in 1816.

3

Sandwichman 03.09.11 at 3:03 am

I misplaced the end of the blockquote in the above post. It should go all the way to the end.

4

marcel 03.09.11 at 3:15 am

I thought that J Holbo had monopoly rights to this sort of material on the CT site. How do you guys sanction one another?

5

Sandwichman 03.09.11 at 3:17 am

And then there is William Cobbett’s Letter to the Luddites (1816), an excerpt from which I quote below:

I shall now return to the subject of machines, and beg your patient attention, while I discuss the interesting question before stated: that is to say, whether machinery, as it at present exists, does, or does not, “operate to the disadvantage of journeymen and labourers.”

The notion of our labourers in agriculture is, that thrashing machines, for instance, injure them, because, say they, if it were not for those machines, we should have more work to do. This is a great error. For, if, in consequence of using a machine to beat out his corn, the farmer does not expend so much money on that sort of labour, he has so much more money to expend on some other sort of labour. If he saves twenty pounds a year in the article of thrashing, he has that twenty pounds a year to expend in draining, fencing, or some other kind of work; for, you will observe, that he does not take the twenty pounds and put it into a chest and lock it up, but lays it out in his business; and his business is to improve his land, and to add to the quantity and amount of his produce. Thus, in time, he is enabled to feed more mouths, in consequence of his machine, and, to buy, and cause others to buy, more clothes than were bought before; and, as in the case of the ten sailors, the skill of the mechanic tends to produce ease and power and happiness.

The thrashing machines employ women and children in a dry and comfortable barn, while the men can be spared to go to work in the fields. Thus the weekly income of the labourer, who has a large family, is, in many cases, greatly augmented, and his life rendered so much the less miserable. But, this is a trifle compared with the great principle, upon which I am arguing, and which is applicable to all manufactories as well as to farming; for, indeed, what is a farmer other than a manufacturer of corn and cattle?

That the use of machinery, generally speaking, can do the journeyman manufacturer no harm, you will be satisfied of in one moment, if you do but reflect, that it is the quantity of the demand for goods that must always regulate the price, and that the price of the goods must regulate the wages for making the goods.

I think, then, that it is quite clear, that the existence of machinery, to its present extent, cannot possibly do the journeyman manufacturer any harm; but, on the contrary, that he must be injured by the destruction of machinery. And, it appears to me equally clear, that if machines could be invented so as to make lace, stockings, &c. for half or a quarter the present price, such an improvement could not possibly be injurious to you. Because, as the same sum of money would still, if the country continued in the same state, be laid out in lace, stockings, &c., there would be a greater quantity of those goods sold and used, and the sum total of your wages would be exactly the same as it is now.

But, if machinery were injurious to you now, it must always have been injurious to you; and there have been times, when you had no great reason to complain of want of employment at any rate. So that it is evident, that your distress must have arisen from some other cause or causes. Indeed, I know that this is the case; and, as it is very material that you should have a clear view of these causes, I shall enter into a full explanation of them; because, until we come at the nature of the disease, it will be impossible for us to form any opinion as to the remedy.

Your distress, that is to say, that which you now more immediately feel, arises from want of employment with wages sufficient for your support. The want of such employment has arisen from the want of a sufficient demand for the goods you make. The want of a sufficient demand for the goods you make has arisen from the want of means in the nation at large to purchase your goods. This want of means to purchase your goods has arisen from the weight of the taxes co-operating with the bubble of paper-money. The enormous burden of taxes and the bubble of paper-money have arisen from the war, the sinecures, the standing army, the loans, and the stoppage of cash payments at the Bank; and it appears very clearly to me, that these never would have existed, if the Members of the House of Commons had been chosen annually by the people at large.

6

TGGP 03.09.11 at 3:21 am

One of the most obvious systematic biases of fiction is that it’s supposed to be a good story. In the Dreamtime we have lots of stories, but the Malthusian future (which I’m inclined to predict on grounds of evolution) of tiny replicators, however engrossing their pursuits may be to them, is at a memetic disadvantage.

7

James Haughton 03.09.11 at 3:22 am

The bleeding obvious example you have missed is Huxley’s “Brave New World”, which was all about engineering society to mass Fordist consumption/production parameters.

8

James Haughton 03.09.11 at 3:29 am

Ps you can download “The Green Leopard Plague” from the publisher’s website here: http://www.nightshadebooks.com/downloads ; along with some Paolo Bacigalupi and other cool stuff.

9

JP Stormcrow 03.09.11 at 3:35 am

I assumed there was some manner of taboo in the other thread against going the obvious sci-fi route.

Early Vonnegut–Player Piano–and Philip José Farmer’s “Riders of the Purple Wage” are two that resonated for me back in the day. You are probably quite aware of them, however.

10

bc 03.09.11 at 3:40 am

Long time lurker finally commenting…

Besides Banks’ several masterful novels describing the Culture universe, you should also include Neal Asher’s several Polity novels, Peter Hamilton’s Commonwealth series, Rudy Rucker’s Postsingular, as well as works by Charles Stross, Ken MacLeod, and Alastair Reynolds. All of these authors have posited some form of strong AI, either envisioned as developing separately from human evolution, or else humanity evolving alongside or because of integration with such artificial intelligences.
The point here is that many, many writers have explored this territory, yet sadly few people seem to know this. The topic is central to most of the recent work I’ve read and enjoyed. And this body of work has made me much less pessimistic about humanity’s future. I’ve come to live with the hope that such a future might come into being.

11

PHB 03.09.11 at 3:44 am

Consider the fraction of clerical work that essentially consists of a person reading from one computer-generated report and typing it into another computer. Computerization has greatly reduced the amount of internal paperwork corporations generate but has significantly increased the amount of paper exchanged between them.

Another major source of inefficiency is needless duplication of bureaucratic forms. When taking a job or enrolling a kid in school one can easily spend two or three hours filling in thirty-plus forms, every one of which requires a full name, address, social security number, etc. to be entered at the start.

The next phase of the Web is Web Services and they will enable direct transfer of information from one computer to another in machine readable form. Even a simple step such as automating the invoicing process would realize considerable savings.
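The machine-to-machine transfer described above can be sketched in a few lines: one program emits a structured invoice, another parses it, with no human re-keying in between. The layout and field names below are invented for illustration (a real 2011 deployment would more likely use an agreed XML schema), and prices are kept in integer cents to avoid floating-point rounding:

```python
import json

def make_invoice(seller, buyer, items):
    """Build a machine-readable invoice (illustrative field names)."""
    return {
        "seller": seller,
        "buyer": buyer,
        "lines": [{"sku": sku, "qty": qty, "unit_cents": cents}
                  for sku, qty, cents in items],
        "total_cents": sum(qty * cents for _, qty, cents in items),
    }

# Sender serializes, receiver parses: no clerk re-typing the figures.
payload = json.dumps(make_invoice("Acme Ltd", "Widgets Inc",
                                  [("bolt", 100, 5), ("nut", 200, 2)]))
received = json.loads(payload)
```

The savings come precisely from the receiver trusting the parsed structure instead of paying someone to read a printout and key it in again.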

12

Omega Centauri 03.09.11 at 4:09 am

I’m pretty skeptical about AI. True machine intelligence was thought to be merely ten years off in 1960! The estimate of how much longer it will take only grows with time. I have a great anecdotal story about computer chess. I once taught an evening class at Los Alamos with a woman (I’m wracking my brain, but I can’t remember her name) whose claim to fame was that she was the first human ever to lose a chess game to a computer. They had just got their marvelous machine to play chess. From what I heard, it was actually simplified chess, as they only had enough memory for a six by six board. So they needed to find someone whose chess skills were sufficiently low that the computer had a fighting chance. They started knocking on doors, asking “do you know how to play chess?”. She was the first person to say no, and her lack of knowledge of the game was enough to cost her the match.

13

Timothy Scriven 03.09.11 at 4:48 am

Who do you think you are, John Holbo?

14

Tom T. 03.09.11 at 4:56 am

John C. Wright’s Golden Age trilogy has some discussion of the economics and governance of a far-future world with true artificial intelligence and enormous computing power, and explores how much authority and privacy to cede to the machines. Wright’s thinking is ultimately probably too libertarian for most people here, but he deals with issues of class and poverty, and addresses society’s legitimate need to safeguard itself against excesses of individual action.

15

Matt 03.09.11 at 5:04 am

Cory Doctorow, After the Siege: Completely automated manufacturing makes it possible for people to freely duplicate almost any material goods. Nations that allow their citizens to do so get bombed and invaded by corporations/governments who claim ownership of the duplicated goods.

Charles Stross, Accelerando: Earth and the rest of the inner Solar System are devoured by incomprehensible AI that is the descendant of legal/financial software. A lot of other stuff happens too.

Ken MacLeod, The Stone Canal and The Cassini Division: Posthuman intelligence proves incomprehensible and hostile. There are two major factions of humans, one left-libertarian and one right-libertarian. Both develop postscarcity economies. The right-libertarians have cooler technologies in a few ways, but they also manage to reinvent slavery (of intelligent machines and mind-uploads) and nearly exterminate humanity when they try to open commercial relations with the posthumans.

Ken MacLeod, The Sky Road: A diverging future history connected to The Stone Canal and The Cassini Division, but very different from them. Post-singularity technology is ubiquitous, but generally low-energy and benign. Earth’s inhabitants are secure, peaceful, prosperous, and in no hurry to go to the stars. Not much happens, but enjoyable for the technological and social daydreaming; like Edward Bellamy for eco-conscious computer geeks.

16

Matt 03.09.11 at 5:22 am

“I’m pretty skeptical about AI. True machine intelligence was thought to be merely ten years off in 1960! The estimate of how much longer it will take only grows with time. I have a great anecdotal story about computer chess…”

This is because after a machine can do something that only humans could do before, it is retroactively deemed not-really-intelligence. Building a world-champion chess machine took longer than many researchers expected, and once it was done the skeptic’s refrain quickly moved on from “it will never happen” to “big deal.”

But — good news for this discussion! — the moving goalposts of what people will accept as “true machine intelligence” have nothing to do with the economic effects of more capable automation. The machines are being built to provide goods and services at lower prices, not to win fame in the Stanford Encyclopedia of Philosophy. If the displaced worker points at his replacement and exclaims “that machine LACKS QUALIA!” to the delight of his fellow unemployed, everyone involved is missing the mark by a mile.

17

SpeakerToManagers 03.09.11 at 5:25 am

Also, Charlie Stross’ Glasshouse for a look at a climax post-scarcity culture and the evils that still affect it. And a classic from back in the mid-20th Century, Damon Knight’s A for Anything: the birth pangs of post-scarcity economy when matter duplicators remove the need for manufacturing.

As for strong AI, it will probably happen eventually, but it’s going to take a lot longer than the researchers have been predicting. What AI has been good for in the last 50 years is finding out what intelligence isn’t. Not to say that’s not useful, but it wasn’t what the enthusiasts were hoping for.

18

Thomas 03.09.11 at 5:42 am

John Barnes: A Million Open Doors, Earth Made of Glass et seq. People largely adjust to the economic consequences, apart from a few weird religious extremists, but the fact that no-one has anything worthwhile to do is a problem. This seems to be Ezra Klein’s view.

19

Greg 03.09.11 at 6:30 am

http://en.wikipedia.org/wiki/Karel_%C4%8Capek

The word robot comes from the word robota meaning literally serf labor, and, figuratively, “drudgery” or “hard work” in Czech, Slovak and Polish.

The original and still the best.

20

joel hanes 03.09.11 at 6:51 am

Phillip Jose Farmer Riders of the Purple Wage

21

garymar 03.09.11 at 7:45 am

What a coincidence — I just finished my second reading of Stross’ Accelerando last night. The intelligence of the AIs he postulates is to our intelligence as ours is to the nematode worm’s.

So, when a nematode worm writes science fiction about humans, what does it talk about?
1) other nematode worms with which it would like to exchange genetic material;
2) how to avoid getting stepped on by humans;
3) the evil of nematophagous fungi;
4) talking to the one or two humans (out of billions) who for some inexplicable reason are interested in nematode worms and don’t step on them on purpose, and;
5) raising the next generation of nematode worms.

It’s a great book; it’s just that postulating beings far outside our ken means… they’re just too far outside our ken! (I had a somewhat similar reaction to Niven’s Ringworld series.)

22

Cian 03.09.11 at 8:26 am

Building a world-champion chess machine took longer than many researchers expected, and once it was done the skeptic’s refrain quickly moved on from “it will never happen” to “big deal.”

Well that’s also because computers win chess through brute force methods. That’s just about feasible for chess, apparently infeasible for Go, and completely ludicrous for the day-to-day problems that humans (and for that matter animals) solve very easily.

Most of the other problems solved by AI are actually algorithmic, probabilistic or statistical solutions to problems that humans manage in very different ways; usually things for which humans didn’t evolve, and so our solutions are pretty suboptimal. Yes, all kinds of things, usually very boring things (the legal and financial work that’s being computerised is extremely tedious), that require very little creativity or imagination. And a lot of these things aren’t really AI solutions, but mathematical solutions that have been algorithmized.

The industrial revolution eliminated the craftsman, but it didn’t eliminate the designer. Similarly, there’s no sign so far that computers are going to eliminate the legal strategist, or the skilled advocate.
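The chess/Go contrast above can be put in rough numbers. A naive game tree has about branching^plies positions to consider; the branching factors and game lengths below are commonly quoted approximations, not exact figures:

```python
import math

def log10_tree_size(branching, plies):
    """Base-10 exponent of the naive game-tree size, branching ** plies."""
    return plies * math.log10(branching)

# Rough averages often cited: chess ~35 legal moves over ~80 plies,
# Go ~250 legal moves over ~150 plies (illustrative, not exact).
chess = log10_tree_size(35, 80)
go = log10_tree_size(250, 150)

print(f"chess ~10^{chess:.0f} lines, go ~10^{go:.0f} lines")
```

The gap of well over two hundred orders of magnitude is why the pruning tricks that tame chess search did not, at the time, carry over to Go.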

23

Random lurker 03.09.11 at 8:29 am

Fritz Leiber, “The Silver Eggheads”, where the machines replace artistic work (writers) instead of manual work and robots have unions and so on.

Also, “Star Trek”, where a tool, the replicator, can produce everything, so that nobody does any productive work but everybody is then part of the army, embarked on a thinly disguised imperialist enterprise (military Keynesianism!).

24

Chris Bertram 03.09.11 at 9:31 am

Pedantic and humourless corrective comment:

The Bertram-Cohen-*Marx* thesis is that the development of the material productive forces makes _possible_ the freeing of human beings from the need to devote most of their time to burdensome toil to satisfy their material needs, and hence means that people could have more leisure time or engage in the kind of labour that expresses their creative potential.

Only when combined with a doctrine of historical inevitability is this a prediction about what will happen. It isn’t, therefore, in conflict with the Krugman et al thesis. In fact, given your phrase “unless there be root and branch reform to political economy” it can agree with this view also. I believe that Rosa Luxemburg had a pithy phrase expressing the choice between these alternatives.

25

Henri Vieuxtemps 03.09.11 at 9:37 am

Well that’s also because computers win chess through brute force methods.

I’m pretty sure at least some of them have a self-learning component, though.

26

ajay 03.09.11 at 9:59 am

Well that’s also because computers win chess through brute force methods.

Cian, this is not true. Brute-forcing a noughts and crosses game is dead easy and can be done with pencil and paper. (here http://xkcd.com/832/) Brute-forcing a game of draughts is not so easy but can be done if you have a few years. (http://news.bbc.co.uk/1/hi/sci/tech/6907018.stm) Brute-forcing chess is orders of magnitude more difficult and has not yet been done at all, because it has about 10^45 possible positions.

(Humans are generally very bad at intuitively understanding exponential growth, witness the old story about putting a grain of rice on the first square of the chessboard, two on the second, four on the third…)
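“Dead easy” can be made concrete: a complete negamax search of noughts and crosses fits in a few lines and confirms that perfect play from both sides is a draw. A minimal sketch:

```python
from functools import lru_cache

LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8),   # rows
         (0, 3, 6), (1, 4, 7), (2, 5, 8),   # columns
         (0, 4, 8), (2, 4, 6)]              # diagonals

def winner(board):
    """Return 'X' or 'O' if a line is complete, else None."""
    for a, b, c in LINES:
        if board[a] is not None and board[a] == board[b] == board[c]:
            return board[a]
    return None

@lru_cache(maxsize=None)
def value(board, player):
    """Negamax value for the side to move: 1 win, 0 draw, -1 loss."""
    if winner(board) is not None:
        return -1                       # the previous move just won
    if None not in board:
        return 0                        # full board, no winner: draw
    other = "O" if player == "X" else "X"
    best = -1
    for i, cell in enumerate(board):
        if cell is None:
            child = board[:i] + (player,) + board[i + 1:]
            best = max(best, -value(child, other))
    return best

result = value((None,) * 9, "X")   # 0: perfect play is a draw
```

The whole state space is only a few thousand distinct positions, which is exactly why this exhaustive approach works here and collapses for chess and Go.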

27

ajay 03.09.11 at 10:02 am

Most of the other problems solved by AI, are actually algorithmic, probabilistic or statistical solutions to problems that humans manage in very different ways. Usually things for which humans didn’t evolve, and so our solutions are pretty suboptimal. Yes all kinds of things, usually very boring things (the legal and financial work that’s being computerised is extremely tedious) that requires very little creativity, or imagination.

Again, I think this is a rather limited view. Computer software exists that can compose orchestral music that passes the Turing test: i.e. even musical experts can’t tell which pieces were composed by machine and which by humans. The whole “machines can’t be creative” argument is a bit 70s.

28

Ginger Yellow 03.09.11 at 10:07 am

BSG/Caprica of course: make them our slaves, then flee across the universe when they turn on us.

29

Latro 03.09.11 at 10:28 am

Nancy Kress’s “Beggars in Spain” and sequels, although there an important (central) aspect is human augmentation. It ends up in a world where the vast majority of “normal” people are paid by the State to live lives of absolute apathetic leisure. Why do anything when (a) energy is ultracheap, so everything is cheap and plentiful, and (b) the real truth of the world is that you, Normal Proletarian, are not capable of understanding anything at the level that the Genetically Modified Geniuses (substitute AI here if you like) do, so why study, create, or participate in politics?

It has some libertarian vibes that I don’t know whether are meant to be exalted or ridiculed or both, but they left me a bad aftertaste after reading it.

30

guthrie 03.09.11 at 11:13 am

Also Stross’ “Singularity Sky”, in which a highly advanced interstellar travelling entity called “the Festival” starts granting people’s wishes, thereby waging economic and cultural warfare on a planet of techno-rejectionists whose government is firmly 19th-century in outlook. The Festival relies upon nanotechnology and such to manufacture pretty much anything that anyone asks for, and is thus cornucopian.

31

Keir 03.09.11 at 11:42 am

Cian, this is not true. Brute-forcing a noughts and crosses game is dead easy and can be done with pencil and paper. (here http://xkcd.com/832/) Brute-forcing a game of draughts is not so easy but can be done if you have a few years. (http://news.bbc.co.uk/1/hi/sci/tech/6907018.stm) Brute-forcing chess is orders of magnitude more difficult and has not yet been done at all, because it has about 10^45 possible positions.

I think that this is right, tho’, in that chess machines just try things; they don’t have much intuitive play. Also, I think it is incorrect to say that machines can compose; rather, we know how to compose aleatory pieces that machines can implement.

32

Anonymoose 03.09.11 at 11:59 am

@21

There’s actually very good Go AI these days. This is yet another example of people going from “a machine will never beat a good human at Go” to “big deal”.

33

Cian 03.09.11 at 12:30 pm

Brute-forcing chess is orders of magnitude more difficult and has not yet been done at all, because it has about 10^45 possible positions.

Okay, put it this way. It uses clever programming techniques that, while making the search tractable, rely heavily on brute force. Despite this, it can still be beaten by a human being with very limited processing resources (short-term memory/processing speed). Human beings and chess-playing computers do not play in a similar fashion, which is how it is possible for a grandmaster to sometimes beat the world’s best chess computer.

Go is harder because there are more possible moves, but given enough time/computer power and research into programming techniques (which is where much of the intelligence comes from: the designers), it too will be solved and there will be a computer world champion. That doesn’t get us anywhere closer to AIs running the world, though. Any more than a computer which can do sums really fast does, or one that can fetch more records in a second than an office full of clerks could in a year.

What much of AI is good at is finding alternative ways of solving problems that humans can solve, ways that do not require all the extra machinery of human intelligence. They are like a human being in the sense that a spinning jenny is. I think that’s a real achievement: important, but with real limitations. And it’s dependent upon extremely smart and gifted programmers to encode all this stuff in the first place.

There are other styles of AI that are more successful in the real world, though they essentially work by mimicking animal intelligence (they’re embodied, and use neural networks loosely patterned on the way that animals, insects and humans work). But they’re a long way from being able to replace us, though they might be able to replace a cleaner or driver in the not too distant future.

Again, I think this is a rather limited view. Computer software exists that can compose orchestral music that passes the Turing test: i.e. even musical experts can’t tell which pieces were composed by machine and which by humans. The whole “machines can’t be creative” argument is a bit 70s.

You can write software to produce pastiches of particular composers, or styles. It’s basically a form of algorithmic, or aleatory, composition; to my mind not a hugely interesting one (at least musically; the analytical work required to achieve it can be pretty interesting). You can also use both neural networks and genetic algorithms as filters on the compositional process to winnow out more interesting pieces, but the actual training is conducted by a human being. There’s no self-directed learning.

There’s also some visual art that’s generated this way, but again it’s just pastiche. Then there’s the poetry…

34

Cian 03.09.11 at 12:32 pm

There’s actually very good Go AI these days. This is yet another example of people going from “a machine will never beat a good human at Go” to “big deal”.

I’ve always assumed it would happen at some point. Moore’s law if nothing else. I just don’t think it demonstrates anything significant about true intelligence. Human beings can also do long division, but computers are much better at it. What do we conclude from that exactly?

35

Scott Martens 03.09.11 at 1:02 pm

I’m sceptical of the idea of the machines taking over in the good old-fashioned strong AI sense too, and not over anything as silly as qualia. I think the moving-goalposts argument for AI is a good one: as soon as we figure out how to make a machine do something, it doesn’t seem so smart anymore. The thing is, I think this implies a definition of intelligence that contradicts the adjective “artificial.”

If we ever manage to build a machine that’s really capable of taking over, we’ll have so replicated the human reasoning process that it will no longer be artificial in any real sense. At which point, it’ll probably veg in front of the TV and get drunk at the pub like the rest of us rather than conquer the world. Until then, AI is just like the mechanical thresher: An extension of human abilities rather than an autonomous intelligence.

36

ajay 03.09.11 at 1:13 pm

32: there’s a difference, I think, between aleatoric music and what these programs are doing. It’s not just randomly recombining a list of musical phrases, it’s creating new pieces in the general style of a given set of input pieces.

I suspect this is another example of the retreat happening: we go from “computers can’t be musically creative, so they’re not intelligent” to “computers can write original music indistinguishable from music written by a human, but they can’t develop their own original musical styles, so they’re not truly creative, so they’re not intelligent”.

“It doesn’t count because they’re trained by humans”… most human artists are also extensively trained by humans!

I think that this is right tho’ in that chess machines just try things, they don’t have much intuitive play.

Deep water here on the nature of human intuition…

37

LFC 03.09.11 at 1:22 pm

C.B. @23
I believe that Rosa Luxemburg had a pithy phrase expressing the choice between these alternatives.

Unfortunately, some of us (or perhaps I should just speak for myself) suffering from less-than-great memories or cognitive overload or less-than-complete educations in the history of radicalism (or all three) remember offhand only one R. Luxemburg quote: “If I can’t dance I don’t want to be part of your revolution.” [printed on numerous T-shirts at one time and attributed, possibly wrongly, to RL]

38

LFC 03.09.11 at 1:27 pm

Ok, socialism or barbarism: thanks google for jogging my memory

39

Keir 03.09.11 at 1:35 pm

32: there’s a difference, I think, between aleatoric music and what these programs are doing. It’s not just randomly recombining a list of musical phrases, it’s creating new pieces in the general style of a given set of input pieces.

The issue is the transition from input to output; I think that the art content of that transition arises when the rules are laid out, not when they are followed.

But of course this is a moving goal post: the point is the changing border of art and craft, and changing notions of creativity. Descriptively, if a machine can do it, it isn’t creative. There’s a definite reluctance to let machines do art.
But a machine would never write 4’33”. That’s a problem. (The spectre of intentionality.)

40

Henry 03.09.11 at 1:55 pm

And a classic from back in the mid-20th Century, Damon Knight’s A for Anything: the birth pangs of post-scarcity economy when matter duplicators remove the need for manufacturing.

Thanks for this Bruce – was trying to remember the title of this book when I wrote the post and failing (I read it sometime in my teens).

41

ajay 03.09.11 at 2:07 pm

the point is the changing border of art and craft, and changing notions of creativity. Descriptively, if a machine can do it, it isn’t creative. There’s a definite reluctance to let machines do art.

This is very true.

But a machine would never write 4′33″. That’s a problem.

It’s a problem for a lot of art: most music isn’t as innovative as 4′33″ and yet it’s still classed as creative. Haydn emulated Mozart’s style in a lot of his pieces. Does that mean that he wasn’t a creative artist? If he’d done nothing but churn out Mozartish music, he wouldn’t have been a great composer, but wouldn’t he still have been a creative artist?

42

Ed 03.09.11 at 2:13 pm

I’m surprised more people haven’t mentioned Player Piano, whose plot is quite germane to the other discussion.

43

Cian 03.09.11 at 2:47 pm

32: there’s a difference, I think, between aleatoric music and what these programs are doing. It’s not just randomly recombining a list of musical phrases, it’s creating new pieces in the general style of a given set of input pieces.

It’s a stochastic process. Granted, the algorithms being used are more sophisticated by virtue of having a computer to use, rather than some simple rules in a book, but it’s essentially a progression of the same idea. I think it shines an interesting light on what creativity is and isn’t (there are definite limitations, which are clear if you’ve ever played around with this kind of stuff, which I have). But all creative people use rules, processes, etc. – the fact that computers can both automate and extend this isn’t surprising to me. But I don’t really consider that (as somebody who uses them myself) “creative” exactly. They’re tools that get you further along.

And the stuff is original, but it’s definitely a pastiche, and it’s possible that it works partly because people are familiar with the original oeuvre (i.e. it sounds familiar). I’m kind of doubtful that you could write a program that composed like Mozart if Mozart hadn’t already existed. BBCut is a program that generates Drum’n’Bass-style breakbeats, but somebody invented the idea first.

It’s also really just a progression of something that’s been going on for a long time. Most music is essentially algorithmic, or at least rule-based (rules for what works, what doesn’t); much of the creativity came from the way that artists challenged, tested and experimented with these rules. I don’t know of any composition software that does this in a particularly interesting way. And I think it would be quite hard to do, as it would require a computer system that could hear/feel music like /we/ do. And that goes even more so for stuff like poetry and the visual arts.
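[Editor’s aside: the stochastic recombination described above can be made concrete with a toy first-order Markov chain – learn which note tends to follow which in an input melody, then random-walk those transitions to get a new sequence “in the general style of” the input. The note names and melody here are illustrative, not drawn from any real composition system.]

```python
import random

def train(melody):
    """Build a first-order Markov model: note -> list of notes seen after it."""
    model = {}
    for a, b in zip(melody, melody[1:]):
        model.setdefault(a, []).append(b)
    return model

def generate(model, start, length, seed=0):
    """Random-walk the learned transitions to produce a new melody."""
    rng = random.Random(seed)
    out = [start]
    for _ in range(length - 1):
        choices = model.get(out[-1]) or [start]  # dead end: restart at the opening note
        out.append(rng.choice(choices))
    return out

# Toy input: the opening phrase of "Twinkle Twinkle Little Star" as note names.
melody = ["C", "C", "G", "G", "A", "A", "G", "F", "F", "E", "E", "D", "D", "C"]
model = train(melody)
print(generate(model, "C", 12))
```

Because the walk only ever follows transitions that occurred in the input, the output always sounds vaguely like the source – the pastiche point in miniature.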

Deep water here on the nature of human intuition…

There’s a reasonable definition of what that means in practice, so less so than you’re implying.

“It doesn’t count because they’re trained by humans”… most human artists are also extensively trained by humans!

And some of them go beyond their training and produce something original. Computer software doesn’t currently.

I think the problem with both the term “intelligent” and “creativity” is that they’re not very well defined. The first is defined as “stuff humans do”, the second as “stuff creative people do”.

Neither are terribly good definitions when you get down to it.

Cian

44

Cian 03.09.11 at 2:54 pm

Until then, AI is just like the mechanical thresher: An extension of human abilities rather than an autonomous intelligence.

Quite. I think the example that Paul Krugman used was sifting through millions of documents to try and find something that’s relevant. Even if you had an autonomous intelligence, you really wouldn’t want to use it on that. It would get as bored as the rest of us. It’s a similar thing with the chip design stuff he was talking about. That isn’t the creative stuff; it’s the necessary work that is quite dull and that human beings are not particularly good at.

45

Gepap 03.09.11 at 3:14 pm

The idea that we can’t replicate intelligence artificially seems strange to me – there is nothing magical about human intelligence; it is a continuation of the cognitive abilities of other animals, as we know that other animals can solve problems, use tools, and communicate. Perhaps using a digital silicon-based system is not the way we will replicate the complexity of our synaptic system, but there seems no reason to state that we won’t get there eventually.

Also, and this is to bring up an issue that Lem raised – would we even know AI when it came into existence? The Turing test, the idea that intelligence can be seen when a machine can fool a human, is, well, very anthropomorphic. Our intelligence can’t be separated from our past, from the reality that we are social apes and thus still, at base, think like social apes. Why would true intelligence in, say, a collective being (the idea of the intelligent ant or bee colony) or a machine be like human intelligence? To assume that we have a monopoly on what intelligence can be or how it would work and think also seems a bit silly to me.

46

JP Stormcrow 03.09.11 at 3:59 pm

Ed@42: I’m surprised more people haven’t mentioned Player Piano, whose plot is quite germane to the other discussion.

As someone who did mention it, I was having second thoughts as it really is more germane to the other discussion since it is not “post-scarcity”–the material lot of those who were not the Engineer* Masters of the Universe is pretty grim. “Purple Riders” is a bit more creative in looking at how the resulting Cognitive Surplus (to not coin a phrase) might be spent.

*Which now seems quaint in a ’50s “better living through chemistry” kind of way.

47

Anarcho 03.09.11 at 4:00 pm

Another pedantic and humourless corrective comment:

>The Bertram-Cohen-*Marx* thesis

that should be

The Bertram-Cohen-Marx-Proudhon thesis…

Proudhon, System of Economic Contradictions: Chapter IV

“What the economists ought to say is that machinery, like the division of labour, in the present system of social economy is at once a source of wealth and a permanent and fatal cause of misery”

and:

“very far from freeing humanity, securing its leisure, and making the production of everything gratuitous, these things would have no other effect than to multiply labour, induce an increase of population, make the chains of serfdom heavier, render life more and more expensive, and deepen the abyss which separates the class that commands and enjoys from the class that obeys and suffers”

His solution? Replacing wage-labour and the hierarchical capitalist workplace with workers’ associations — System of Economic Contradictions: Chapter V:

“it is necessary to procure for all the means of competing; it is necessary to destroy or modify the predominance of capital over labour, to change the relations between employer and worker, to solve, in a word, the antinomy of division and that of machinery; it is necessary to ORGANISE LABOUR: can you give this solution?”

A position which, needless to say, libertarians like Bakunin, Kropotkin, and Chomsky follow.

48

mpowell 03.09.11 at 4:27 pm

One thing I wanted to mention in this discussion, tangentially related to the Culture novels… maybe it’s a good thing our machine intelligence is developing more along the lines of brute force than true creativity, intuition and intentionality. If you finally develop a real AI, you create moral questions about whether you can just make it your slave (almost by definition, you can’t). So finding algorithmic ways of solving more and more problems is actually useful in finding a morally untroubled method for easing human work requirements. In Banks’ novels I believe there is an official intelligence classification system that allows machines of sufficiently low intelligence to be treated functionally differently. But these machines (the knife missile, for example?) are still capable of quite extraordinary and useful functions.

49

ajay 03.09.11 at 4:31 pm

Even if you had an autonomous intelligence, you really wouldn’t want to use it on that. It would get as bored as the rest of us.

That’s another interesting point: is the capacity for boredom really an essential part of intelligence? Or would it be possible to build an AI that found legal documents really, really interesting and liked nothing better than spending all its time going through them?

50

ajay 03.09.11 at 4:32 pm

48 written before 47, but it does raise the question: if you wanted a legal-document-processing AI, would it be immoral to build one that really enjoyed processing legal documents?

51

JP Stormcrow 03.09.11 at 4:36 pm

Cian@44: Even if you had an autonomous intelligence, you really wouldn’t want to use it on that. It would get as bored as the rest of us.

I find the implicit anthropomorphism in the second statement to be rather misguided. Doing the task well does require some relatively sophisticated model of human motivation, but there’s no reason that it would need to be linked to the computer’s “motivation”.

52

JP Stormcrow 03.09.11 at 4:43 pm

50 likewise before reading 49, 48 & 47.

53

Ginger Yellow 03.09.11 at 5:20 pm

In Banks’ novels I believe there is an official intelligence classification system that allows machines of sufficiently low intelligence to be treated functionally differently

This is one of the more troublesome aspects of Banks’s novels (I would say of the Culture, but unlike other problematic aspects of the Culture, Banks really doesn’t explore this one very much). The supposedly low/restricted-intelligence AIs in the novels seem to me to be very close to, if not at, the philosophical zombie stage of AI. They have minds at least sufficient to understand human language with a lot of nuance (which in turn suggests a fairly sophisticated theory of mind). Is it possible to have that without the trappings that would make enslaving a sentient being morally wrong? Alas, it doesn’t seem to be something that Banks is all that interested in. He’s rather more amused by the conceit of the higher Minds benignly tolerating humans.

54

ScentOfViolets 03.09.11 at 5:41 pm

This is where I get to feeling really old. The people who wrote about matters economic back in the day? Asimov, Bester, Clarke, Simak, Reynolds, Harrison, Sheckley, . . . and from all different perspectives too[1]. For example, Asimov has the working class solidly against the introduction of robots in “The Caves of Steel”; we see Daneel put down a mob rioting in a shoe store less than an hour after Elijah meets him for the first time. The very next novel in the series? An idle Utopia with the ratio of robots to humans 20,000 to 1, where everyone lives on an estate and no one works save to fulfill nonmaterial wants.

Of course, lately there seems to be a lot of dystopic stuff being written based upon the premise that B is inevitable, but getting from A to B is going to be a problem. You have everything from the Lem I quoted where the poor die off from benign neglect (and that’s actually from about forty or so years back) to stuff like the rich simply killing off the bottom 99.9% with plagues and terminator robots once their services are no longer required.

[1]I’ll see Pohl’s “Midas Plague” and raise you with Dick’s “Autofac”. Yeah, this stuff really has mostly all been done before. And not once, but many, many, many times.

55

mpowell 03.09.11 at 5:55 pm

@52:

You’re correct that Banks doesn’t spend much time investigating this issue, but it’s a really hard one! In general, determining when an AI becomes ‘real’ is not a question anyone actually knows the answer to and Banks just doesn’t try to address it, imo. But he also doesn’t create any moral dilemmas. A knife-missile may be capable of complex language processing, but it really isn’t represented as a being. On the other hand, the really stupid AI running a shuttle craft in one of his stories is represented as being a respected (by the Culture at least) person. Banks sort of posits a clear dividing line between AI and not-AI without really getting into the question of whether that is feasible.

56

Ginger Yellow 03.09.11 at 6:04 pm

Banks sort of posits a clear dividing line between AI and not-AI without really getting into the question of whether that is feasible.

Yeah, that’s kind of what I was getting at with my philosophical zombie comparison. While knife missiles seem single-purpose and “brainless” enough to be relatively unproblematic in that regard, there are plenty of other non-person drones which couldn’t do what they do without a mind, and which I personally wouldn’t be comfortable enslaving. Searle might be, though.

57

mpowell 03.09.11 at 6:43 pm

What I was meaning to say, and just sort of forgot to, is that it’s tough to criticize an author just because they don’t want to get involved in an issue you’re interested in. SciFi is about asking questions about humanity under certain technological conditions. But the author gets to choose which issues he wants to investigate.

58

Matt 03.09.11 at 6:49 pm

You don’t need to give machines “true” autonomous intelligence and creativity (whatever that may be) for them to render conventional economic structures obsolete. If the only jobs left for humans are those as creatively demanding as composing new pieces of music in an original style then the number of humans still employed in 100 years’ time can comfortably fit in a single football stadium.

I would also note that computers have produced truly original designs, not preconceived by their creators or by any other humans, in the domain of machine-design. NASA has been using evolutionary programs to design optimized antennas for space missions for 5 or 6 years. You can easily find pictures of the first ones from 2006 by searching for NASA evolved antennas. The humans provided guidelines to the program, but the actual design doesn’t look like anything those NASA engineers or any other human engineer would have designed by hand. It looks more like part of a plant than any traditional antenna shapes, it outperforms traditional antenna shapes on all figures of merit, and it was faster to develop too.
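[Editor’s aside: the evolutionary approach described above can be sketched in miniature. This is a hypothetical toy, not NASA’s actual code: a genetic algorithm over bitstrings with tournament selection, one-point crossover and bit-flip mutation, where the stand-in fitness function (counting 1s) would in a real antenna run be replaced by an electromagnetic simulation of each candidate design.]

```python
import random

def evolve(fitness, length=16, pop_size=30, generations=60, seed=0):
    """Toy genetic algorithm over fixed-length bitstrings:
    tournament selection (size 3), one-point crossover, bit-flip mutation."""
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(length)] for _ in range(pop_size)]
    for _ in range(generations):
        nxt = []
        while len(nxt) < pop_size:
            # two tournament winners become the parents of each child
            a = max(rng.sample(pop, 3), key=fitness)
            b = max(rng.sample(pop, 3), key=fitness)
            cut = rng.randrange(1, length)          # one-point crossover
            child = a[:cut] + b[cut:]
            if rng.random() < 0.1:                  # occasional bit-flip mutation
                i = rng.randrange(length)
                child[i] ^= 1
            nxt.append(child)
        pop = nxt
    return max(pop, key=fitness)

# Stand-in fitness: count the 1s ("one-max"). A real antenna run would score
# each candidate with an electromagnetic simulation instead.
best = evolve(fitness=sum)
print(best, sum(best))
```

The program only ever sees scores, never “antenna-ness”, which is why the designs it converges on can look like nothing a human engineer would draw.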

59

SpeakerToManagers 03.09.11 at 6:59 pm

ajay @ 48:

We seem to be able to create humans who find legal documents really interesting, so I expect creating AIs that do so won’t be an insurmountable problem. But the ethical issues are very similar, and have been treated for the human side of the question in Vernor Vinge’s A Deepness in the Sky, where it is possible to turn neurotypical humans into what amounts to autistic spectrum individuals. The ethical question here of course is when, if ever, is it permissible to treat humans as machines, as objects? And the dual question is, when, if ever, is it obligatory to treat machines as humans?

Right now I’m reading Alone Together by Sherry Turkle, in which she confronts the question of how humans do, and how we should, relate to affective robots head on. It’s a fascinating and somewhat scary book.

60

Ginger Yellow 03.09.11 at 7:14 pm

What I was meaning to say and just sort of forgot to is that it’s tough to criticize an author just because they don’t want to get involved in an issue you’re interested in. SciFi is about asking questions about humanity under certain technological conditions. But the author gets to choose what issues he wants to investigate.

Of course. It just seems odd to me that he apparently doesn’t want to investigate this issue, especially given that he does play up the whole “computers run the Culture” side of things a lot, and he also enjoys questioning the Culture’s sense of moral superiority. This issue would seem to be a perfect one to pin a Culture novel (or several) on, but it doesn’t come up at all. Doesn’t stop me enjoying the books, but it does feel odd.

61

chris 03.09.11 at 7:29 pm

Banks sort of posits a clear dividing line between AI and not-AI without really getting into the question of whether that is feasible.

ISTM that the presence of the “A” in “AI” doesn’t really affect this question, except contingently. The real problem is deciding whether intelligence (in the relevant sense) vs. non-intelligence can be clear-cut, and if not, how your ethical or legal system should reflect that. Because the fact that no edge cases currently live on Earth (if, indeed, you don’t think that nonhuman apes are an edge case, or humans with certain mental disabilities, or humans at certain stages of cognitive development) isn’t something we can rely on in all possible future circumstances.

If the designation of “intelligent” is vulnerable to the paradox of the heap, then what? I picture someone trying to explain that a line has to be drawn somewhere, even if it seems rather arbitrary, to a being that falls just on the wrong side (although, if such an explanation is possible, you may have misclassified the being in question).

62

Metatone 03.09.11 at 7:31 pm

If the only jobs left for humans are those as creatively demanding as composing new pieces of music in an original style then the number of humans still employed in 100 years’ time can comfortably fit in a single football stadium.

Matt @57 nails it.

This is what has been on the fringes of thinking for the last 20 years or so, but Krugman is one of the first relatively respected ones to face up to it.

Someone less respected put it this way: We can probably already manufacture the entire world’s needs for physical goods using the population and manufacturing cities of China. Now it will probably stay more spread out than that, but the key questions are:

1) What are the rest of the people going to do in our economic system?
2) How do we get to there from here, given our economic and political system?

63

chris 03.09.11 at 7:34 pm

But the ethical issues are very similar, and have been treated for the human side of the question in Vernor Vinge’s A Deepness in the Sky, where it is possible to turn neurotypical humans into what amounts to autistic spectrum individuals.

Moon’s _The Speed of Dark_ contemplates the reverse operation. It might be interesting to compare their approaches.

Also: what happens if artificial intelligence is intelligent, but in a way at least as different from neurotypical *and* autism-spectrum humans as they are from each other, or even more so? Most of our species has basically no familiarity with communicating with beings whose minds function substantially differently from their own (autistics and their close acquaintances being arguable exceptions).

64

Cian 03.09.11 at 7:38 pm

@50: Animals get bored too.

I find the idea that intelligence can be created from separate “models” of what we consider to be aspects of behaviour, human or otherwise, rather misguided. I would guess that boredom is emergent from a combination of other factors that are important. There’s reasonably good evidence that emotions, hormones, etc. are fairly important to how human intelligence functions – as is the fact that we operate in bodies. The idea that you could have a non-emotional creature that functions like humans do seems fundamentally misguided (or a disembodied one, for that matter). Like many of the assumptions of the hard AI crowd.

@48: if you wanted a legal-document-processing AI, would it be immoral to build one that really enjoyed processing legal documents?

What makes you think it would be possible to create one that could do it well? I mean, you could theoretically do it with a human being – but I suspect the error rate would be sky-high nonetheless. Computers are really good at repetitive behaviour; humans not so much. Perhaps that will turn out to be one of the tradeoffs moving forward?

People have a bad tendency to assume that intelligence is possible with a conventional computer, but there’s no evidence for this. The most successful attempts are either things with fairly low processing power (various kinds of embodied robot), or probabilistic (genetic algorithms, neural nets). The human mind is nothing like one. And if intelligence isn’t possible with a conventional computer, then assumptions based upon what computers are good at are probably not good ones.

65

bianca steele 03.09.11 at 7:39 pm

mpowell @ 47 wins the thread! (By explaining the hidden agenda behind computer scientists’ insistence that most of what “AI” can now do isn’t “really intelligent”: they are just trying to keep their creations from being protected by humanitarian considerations, and are not to be trusted.)

Actually, I was nonplussed to read on Wikipedia that Banks thought of his world as a left alternative. Dreaming up the Culture strikes me as something a rightward leaning person is more likely to do (though I’m not sure I can justify this feeling).

66

Ginger Yellow 03.09.11 at 7:53 pm

Actually, I was nonplussed to read on Wikipedia that Banks thought of his world as a left alternative. Dreaming up the Culture strikes me as something a rightward leaning person is more likely to do (though I’m not sure I can justify this feeling).

There’s a pretty strong left (or at least liberal) bent to both the Culture and the novels. His most recent book has some pretty unsubtle anti-Iraq war messaging, to pick the most convenient example. More generally, for all its flaws, the Culture is presented as ultimately benevolent and pretty much the pinnacle of galactic civilisation past and present (the Sublimed aside). And it’s a post-capitalist society which (nominally) prefers peace over war, has no commerce to speak of and is supremely socially and sexually liberal. There are certainly conservative spins you could put on it, but it’s pretty clearly not what Banks intends.

67

bianca steele 03.09.11 at 7:54 pm

Also, no one is saying, “You people aren’t able to do anything more complicated than sort through legal documents with 90% reliability, therefore we consider you to be AIs with no rights.” They may be saying, “Therefore, you are not going to be hired as associates,” or even, “Therefore, you will be hired as temps, and therefore, you will not receive health benefits.” It isn’t the same thing at all.

68

bianca steele 03.09.11 at 7:59 pm

@Ginger Yellow:
I guess one of the things that interests me is, are there people who read an SF book to learn something about the real world, and what do those who learn about something-like-the-Culture for the first time from an Iain M. Banks novel make of the fact that the world is presented as a left or left-liberal view of the world? (Additionally, I wonder how much the post-scarcity thing reads, to most US readers younger than about 50, as so utopian as to change the thing utterly.)

69

Cian 03.09.11 at 8:04 pm

You don’t need to give machines “true” autonomous intelligence and creativity (whatever that may be) for them to render conventional economic structures obsolete. If the only jobs left for humans are those as creatively demanding as composing new pieces of music in an original style then the number of humans still employed in 100 years’ time can comfortably fit in a single football stadium.

Don’t get too hung up on the example, but if you take music, that’s probably the jingle writer and the generic film score writer. Or in pop music, the guy who writes the music for boybands (Girls Aloud would probably be okay, though). How much of law would be wiped out by programs that can find relevant documents?

There’s a big difference between rendering conventional economic structures obsolete and the kind of scenario that Krugman describes, though. The industrial and agricultural revolutions rendered conventional economic structures obsolete, destroying the majority of human occupations. And yet here we are.

The humans provided guidelines to the program, but the actual design doesn’t look like anything those NASA engineers or any other human engineer would have designed by hand. It looks more like part of a plant than any traditional antenna shapes, it outperforms traditional antenna shapes on all figures of merit, and it was faster to develop too.

This doesn’t surprise me, but this is essentially (well, okay, not really, but good enough for the purposes of this argument) a brute-force solution to a tractable, well-bounded problem (like chess). It’s using the strengths of a computer (processing speed, ability to be very repetitive) and combining them with some clever algorithmic stuff to get round the NP-completeness issues (good enough, rather than perfect – mating, etc.). And it’s a well-bounded problem because human beings have worked out the constraints, and they’re not too complex.

Computers are good at that stuff. What they’re not good at is wicked problems, which is what much of design is, because that involves playing with, exploring, and testing constraints. I suppose that might change, but there’s little sign of it on the horizon. So the genetic algorithms here are very sophisticated, but they’re still another tool. We’re a long way from having the shuttle designed entirely by a computer.

70

Cian 03.09.11 at 8:08 pm

Someone less respected put it this way: We can probably already manufacture the entire world’s needs for physical goods using the population and manufacturing cities of China.

Less respected presumably because he didn’t know what he was talking about. Maybe that’s where we’re going, but currently there’s a reason why lots of stuff isn’t made there despite the cost.

71

albatross 03.09.11 at 8:32 pm

A new form of intelligence will probably be *vastly*, unimaginably different from our own, so much so that we won’t recognize it as “like us.” We’re not talking about the intelligence of a normal human vs. an autistic human, or even a normal human vs a smart chimp, we’re talking the intelligence of a normal human vs. the intelligence of an anthill with symbiotic fungi, or vs. the intelligence of the vertebrate immune system, if those things were competing with us in the realm of economics or technology where we perceive that our kind of intelligence is important.

It’s worth thinking about places where we run into adaptive, learning kinds of behavior we have a hard time overcoming. Think about pathogens or pests evolving around our attempts to control them. It’s not intelligence anything like we normally think of, and yet it adapts around our development of antibiotics and disinfectants, or around our antivirals, or our insecticides, as though it were a thinking system beating us at some complicated war game. I have the sense that competing with an AI would have that feel.

72

bianca steele 03.09.11 at 8:35 pm

Re. “AIs with complex language capacity,” it’s interesting to look at what Richard Powers, in Galatea 2.2, projects as the very first really intelligent AI. IMHO he has some intriguing misconceptions.

73

Cian 03.09.11 at 9:08 pm

@71: Oddly, most of that stuff tends to be the result of a few simple rules interacting. It’s amazing how simple the rules are that control flocking behaviour in birds, for example. Or, for that matter, the behaviours of insects. You can do a lot of stuff that we think of as requiring intelligence, and which is quite hard for us, with a few simple rules.
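[Editor’s aside: the simple-rules point is easy to demonstrate with a toy Reynolds-style “boids” step, in which each bird follows only three local rules – move toward the flock’s centre (cohesion), keep a minimum distance from neighbours (separation), and match the average heading (alignment). The constants and starting positions below are illustrative, not from any particular published implementation.]

```python
def step(boids, dt=0.1):
    """Advance one tick. boids is a list of (x, y, vx, vy) tuples."""
    n = len(boids)
    cx = sum(b[0] for b in boids) / n          # flock centre, shared by all
    cy = sum(b[1] for b in boids) / n
    avx = sum(b[2] for b in boids) / n         # average heading, shared by all
    avy = sum(b[3] for b in boids) / n
    out = []
    for i, (x, y, vx, vy) in enumerate(boids):
        ax = (cx - x) * 0.05 + (avx - vx) * 0.05   # cohesion + alignment
        ay = (cy - y) * 0.05 + (avy - vy) * 0.05
        for j, (ox, oy, _, _) in enumerate(boids):
            if j != i and abs(ox - x) + abs(oy - y) < 1.0:
                ax += (x - ox) * 0.1               # separation: push apart
                ay += (y - oy) * 0.1
        vx, vy = vx + ax * dt, vy + ay * dt
        out.append((x + vx * dt, y + vy * dt, vx, vy))
    return out

flock = [(0.0, 0.0, 1.0, 0.0), (5.0, 5.0, 0.0, 1.0), (10.0, 0.0, -1.0, 0.0)]
for _ in range(200):
    flock = step(flock)
```

No individual rule mentions “flocking” at all; the group behaviour emerges from the interaction of the three.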

74

Matt 03.09.11 at 10:14 pm

Consider watching a car with heavily tinted windows moving in traffic, so you can’t see the driver. Is it a machine or a human driving? Has maybe a non-human animal been trained to drive it? Is it conceivable that there’s no intelligence at all inside the vehicle, that some random or evolutionary process is coincidentally causing the vehicle to stop for red lights and activate signals before changing lanes? Claiming that there’s intelligence at work here if you open the door and see a human, but not if you open the door and see a computer, is absurd. I doubt that AI goalpost-carriers would demand to see the human driver’s portfolio of original art before conceding his or her intelligence, either.

You can of course object that the car is just a projection of the human intelligence that built it, that the robot car could not have learned to drive without human support and therefore doesn’t count as real intelligence. But humans who grow up without human support — feral children — are also not intelligent by colloquial standards.

The only real intelligence on Earth is the feral child who grows up to single handedly invent the space shuttle, and that child’s name is Godot.

75

Metatone 03.09.11 at 10:23 pm

@Cian @70

The point is about productivity, not China.

You’d look smarter declaring people as not knowing what they are talking about if you actually read what was said.

76

johne 03.09.11 at 10:29 pm

@50: “…if you wanted a legal-document-processing AI, would it be immoral to build one that really enjoyed processing legal documents?”

I think it’s the last of the Hitchhiker’s Guide to the Galaxy novels, in which an intelligent animal bred to be eaten tries to converse with the intended diner much as a chef would, pointing out especially delicious cuts of its own body, how they would fit the occasion, and suggesting mouth-watering ways of preparation. As I recall, the matter-of-fact conversation causes the would-be consumer to demur, only to be met with the animal’s earnest arguments that in so doing, he is placing trivial obstacles in the way of what will be its glorious fulfillment, the final consummation it has been anticipating and yearning for all its life.

Of all the satirical twists and perspectives raised in Douglas Adams’ series, this was the one I found funniest and most — thought provoking? Paradoxical? Horrifying?

77

JP Stormcrow 03.09.11 at 10:32 pm

Cian@64: The idea that you could have a non-emotional creature that functions like humans do seems fundamentally misguided [emphasis added]

Oh, I absolutely agree, and for the tasks under discussion I never assumed that the computer would function like a human. I’m right with albatross in 71 on what they are liable to be “like”.

“The question is,” said Humpty Dumpty, “which is to be master–that’s all.”

78

Cian 03.09.11 at 10:53 pm

Consider watching a car with heavily tinted windows moving in traffic, so you can’t see the driver. Is it a machine or a human driving? Has maybe a non-human animal been trained to drive it? Is it conceivable that there’s no intelligence at all inside the vehicle, that some random or evolutionary process is coincidentally causing the vehicle to stop for red lights and activate signals before changing lanes? Claiming that there’s intelligence at work here if you open the door and see a human, but not if you open the door and see a computer, is absurd. I doubt that AI goalpost-carriers would demand to see the human driver’s portfolio of original art before conceding his or her intelligence, either.

79

Cian 03.09.11 at 10:54 pm

Arrgh, try again:

@74: Consider a box. After 5 seconds a light comes on. Is this the result of a human intelligence? Or a random element? Could be either. In and of itself it’s no demonstration of intelligence, despite the fact that the person inside the box might be Beethoven. The mistake here is to think that just because an intelligent person does something, the activity itself requires intelligence. Or that the activity can only be done in a way that requires intelligence. Which is essentially the mistake made by AI proponents for nearly 60 years now. Nobody thinks a computer is intelligent because it can divide 1032424 by 32. But that’s something that humans used to do all the time.

Or to put it another way – the difference shows once the person emerging from the car can do other things.

As it happens, we’re probably quite close to that car being at least theoretically possible. And yes, I think it demonstrates a rudimentary form of intelligence, much like you’d find in an insect (but nowhere near what you’d find in a crow). It’s a long way from that to the human driving the car (and it’s probably not driving as well as the insect brain would). It’s a huge leap from that to the kind of stuff that people are talking about in this thread. Nor is an insect brain likely to make many white-collar workers redundant.

80

Cian 03.09.11 at 11:07 pm

The point is about productivity, not China.

If the point was to say that labour demand for industrial production is falling and that this trend seems to be continuing (and I’m not sure that it is in absolute terms), then he should have just said that. I mean, you could equally say that all the world’s agriculture could come from China, and I guess in terms of labour force it probably could. But it’s still a dumb thing to say.

81

Salient 03.10.11 at 12:01 am

As soon as we figure out how to make a machine do it, it doesn’t seem so smart anymore.

“When a measure becomes a target, it ceases to be a good measure.” —Goodhart’s law.

Similarly, any distinguishing characteristic which becomes a target for convergence ceases to be a good basis for distinction. In particular, most any definition of intelligence in terms of performance capability will become obsolete over a sufficiently long time horizon (and with good riddance, because it wasn’t a terribly good definition in the first place).

People have a bad tendency to assume that intelligence is possible with a conventional computer, but there’s no evidence for this. […] What they’re not good at is wicked problems, which is what much of design is

Not just design. A large proportion of abstract problems that seemed intractable until they were solved were rendered tractable by an insightful reformulation of the question or hypotheses.

If we define “intelligence” as “the capacity to learn how to do things you weren’t initially certain you were capable of doing, relatively quickly and with relatively minimal guided instruction” then various specialized robot overlords likely have us matched within the century, and if we define intelligence as “the ability to reformulate characterizations of phenomena in frameworks which render previously tractable problems solvable” then there’s not much sense in calling it a human trait.

82

Matt 03.10.11 at 12:27 am

As it happens we’re probably quite close to that car being at least theoretically possible. And yes, I think it demonstrates a rudimentary form of intelligence much like you’d find in an insect (but nowhere near what you’d find in a crow). It’s a long way from that to the human driving the car (and probably not as well as the insect brain).

We’re not just close to that car being theoretically possible, it’s already here. The 2007 DARPA Urban Challenge demonstrated real cars driven by machines in a city environment, following rules of the road and contending with other traffic. Google hired a number of people involved with the winning teams and last year announced that it had built its own robot cars, and that they’d already driven thousands of miles on public California roads (albeit with a human driver always ready to snatch control away from the machine).

I don’t know how you concluded that such robo-cars are about as intelligent as insects, but nowhere near as much as crows — or what it could mean even if that statement were true in some sense. You’ll never teach an insect or a crow to drive a car across town even if you had a control interface suitable for mandibles or beak. You can’t compare intelligence without talking about the domain or domains where it’s applied. When we speak of general intelligence it’s just a collection of abilities that have traditionally marked their human bearers as intelligent. Are dolphins completely unintelligent because they don’t cultivate even basic literacy? Are humans completely unintelligent because they can’t interpret the most elementary prey-echoes? It’s not just AI critics who seem to think that intelligence is a scalar value; I’ve seen many a singularity-enthusiast talk glibly about future machines “millions of times more intelligent than a human” without asking whether there’s any semantic content in that prediction.

83

bianca steele 03.10.11 at 1:33 am

@ScentOfViolets,
The estates might be explainable by the fact that I don’t remember them at all, but do remember the “caves of steel,” the overcrowded apartment towers and communal dining halls, and the elaborate system of conveyor belts instead of subways.

84

StevenAttewell 03.10.11 at 2:41 am

One thing that’s always bugged me about this debate: the larger topic of the social organization of technology aside, what stops us from implanting microprocessors and micro drives into our heads to supplement what evolution has given us?

I.e., why do we assume AI/human conflict rather than co-option?

85

derrida derider 03.10.11 at 2:41 am

I just don’t think it [an AI achievement] demonstrates anything significant about true intelligence. – Cian @34

The phrase “No True Scotsman” comes to mind. Which is exactly the point AI proponents make: whenever a machine does something, we tend to declare that it cannot be “true intelligence,” by definition.

86

Matt 03.10.11 at 3:26 am

One thing that’s always bugged me about this debate: the larger topic of the social organization of technology aside, what stops us from implanting microprocessors and micro drives into our heads to supplement what evolution has given us?

I.e., why do we assume AI/human conflict rather than co-option?

I think that is perfectly reasonable and already happens even without surgical integration. For example, strong chess programs playing by themselves can beat the strongest human players. But strong chess programs guided by strong human players are even stronger, defeating the machines that have no human guidance. In many domains this pattern may repeat: the best machines are better than unaided humans, but skilled humans working with the best machines are better yet.

That doesn’t exactly refute the possibility of economic upheaval, though. The conflict there isn’t (never was) about human interests vs. machine interests but about the interests of owners and elites vs. those of everyone else. It might turn out that human-machine cooperation produces superior goods and services, but the machine-only product is good enough that the humans still aren’t in demand as partners to machines. By way of example, lace made with hand tools can still surpass anything purely machine-made in quality. But you have to know what to look for to appreciate the difference, and even then most people opt for the lower quality at the vastly lower price.

As I’ve said before, I think that if automated manufacturing ever closes the cycle and capital equipment is also automatically manufactured, it will mean the obsolescence of capital and labor alike in industrial production. The competitive pressure will always be there to automate a little more, and until the end this increases the bargaining power of capital against labor. But in the end capital will render itself obsolete if machines do all needful work. For once it’s a collective action problem where the problem favors the average human rather than elites! You only need to beg, buy, imitate, or steal the machine seeds once to take control away from the original owners.

87

David 03.10.11 at 3:42 am

@ajay: When you can point me to a CD of music composed by a computer that has the same emotional impact on me that first hearing a Beethoven or Mahler symphony had, then I’ll be impressed. A few years ago some guy embarked on digital renditions of classical works by taking literally a note here and a note there from all the best recordings of these works to make a perfect performance of a given work. He had a demo up on the web that challenged you to tell which was an actual human-conducted performance and which was the composite. I aced it, four for four. May happen, but I’m not holding my breath.

88

David 03.10.11 at 5:09 am

I’m nonplussed at bianca steele being nonplussed about Banks and the Culture. He has had some very explicit passages in his last two Culture novels that excoriate capitalism and conservatives. On his own website he has been very explicitly left.

89

Salient 03.10.11 at 8:20 am

When you can point me to a CD of music composed by a computer that has the same emotional impact on me that first hearing a Beethoven or Mahler symphony had, then I’ll be impressed.

This sort of thing seems unfairly set up. Why should we compare fledgling artificial intelligence to the sort of people you would presumably uphold as among the greatest geniuses of all time? We don’t demand that of people, and humans as a species do have quite a head-start. A more reasonable standard might be, can I [someday] point you to a CD of music composed by a computer that you’d respond to with a, “say, that’s not bad, who is this?”

Consider:

Watson, the Jeopardy!-playing supercomputer, did give the right response. In a way. The clue, in the category “Rhyme Time,” was “A hit below the belt.” Though this was just a practice round, held years before the public man-vs.-machine challenge that airs this week, Watson was dealing with authentic game-show material. Back in 1992, when human contestant Marty Brophy saw that $200 stumper in a broadcast episode, he correctly replied, “What is low blow?” The state-of-the-art AI, by contrast, scanned its elephantine database of documents and came up with something else: “What is wang bang?”

So, yeah, teach a computer to think and communicate and innovate somewhat like a human, and the first thing it comes up with is a dick joke. Give the poor things a little time to mature…

90

Cian 03.10.11 at 8:44 am

We’re not just close to that car being theoretically possible, it’s already here. The 2007 DARPA Urban Challenge demonstrated real cars driven by machines in a city environment, following rules of the road and contending with other traffic. Google hired a number of people involved with the winning teams and last year announced that it had built its own robot cars, and that they’d already driven thousands of miles on public California roads (albeit with a human driver always ready to snatch control away from the machine).

Fair enough. It was essentially a question of somebody throwing enough money at the problem, but not something I’d really been following since the first road drivers were created several years ago. Mind you, I’d be intrigued to see how well they functioned in a fairly hostile urban environment like, I dunno, N. London or something. But yay, a triumph for sensible approaches to AI.

I don’t know how you concluded that such robo-cars are about as intelligent as insects, but nowhere near as much as crows—or what it could mean even if that statement were true in some sense.

I based it upon knowing something about how they work, the size of the brains involved and the principles they’re based upon. I also know that the state of the art is nowhere near creating a system that can use and create tools flexibly, or operate in a complex social environment. Like, for example, a crow.

You’ll never teach an insect or a crow to drive a car across town even if you had a control interface suitable for mandibles or beak.

No, and we don’t know how to teach a robot to do this. A crow and an insect can both get across town without crashing into obstacles, etc. Which is all this device is doing. It’s an embodied intelligence, only its body is a car. It turns out that it requires a very tiny brain, in combination with the neurophysiology of the body, to achieve this.

Much of engineering is about reducing a problem to its simplest form. It requires much less intelligence for a neural thingymajig to control the robot car it’s embedded in than it does for a robot to drive a car. So that’s what engineers have done. They’ve recreated the problem in a form that requires less intelligence, which is what much of AI work actually is in practice. To conclude from this that we’re close to the point where an AI will be able to do all the other things that a human driver can do is wrong. We’re nowhere close to it.

A crow, however, operates in a complex social environment, and can create and use tools in new and creative ways to solve problems. A robot which could do that is way beyond what is currently possible, or even really imaginable.

You can’t compare intelligence without talking about the domain or domains where it’s applied. When we speak of general intelligence it’s just a collection of abilities that have traditionally marked their human bearers as intelligent.

I would agree that intelligence is an unhelpful word. Part of the problem is that we don’t really understand how the brain works, or what it does. Not helped by some of the more enthusiastic proponents of AI using computer models that are hopelessly out of date.

But actually I’ve been arguing that much of what we’ve considered the result of intelligence probably isn’t. Much of what humans do is fairly repetitive – the intelligence is embedded in the algorithm that we follow. Other things that humans do which use our intelligence can also be performed in non-intelligent ways.

To take an example: a blacksmith was a pretty smart guy generally, but nobody would argue that a steel factory (or whatever) is intelligent. Now there is intelligence involved, but it’s not there in the factory. A similar thing holds with many “AI”s today. They are tools. Flexible, sophisticated tools in many ways. But still tools. Currently these things are less self-aware than the “tools” we’ve been using since we first discovered agriculture thousands of years ago. And there’s no sign of that changing any time soon.

Are dolphins completely unintelligent because they don’t cultivate even basic literacy?

Well no (though I wouldn’t assume that they’re hugely intelligent; we don’t really know what the big brain is used for), but literacy is a tool and without it there are all kinds of things that are impossible. Much of human intelligence is distributed in our tools, and socially embedded in the various ways we divide up and solve problems.

Language is useful because it is socially and culturally embedded, has a certain shorthand, resonances, etc. I’m not sure how you can create an AI which can use language beyond the mathematical approaches currently being used (useful, but limited), without it also operating within that world. Without it having practical experiences, and being able to negotiate the ambiguities, interpretation, etc. of human existence. Nor can I see why anyone should start there. Human beings evolved from very simple unintelligent creatures – there might be a reason for that.

91

Zamfir 03.10.11 at 9:10 am

I would also note that computers have produced truly original designs, not preconceived by their creators or by any other humans, in the domain of machine-design. NASA has been using evolutionary programs to design optimized antennas for space missions for 5 or 6 years.

I used to work a bit in this field (not antennas, but similar optimization techniques for aerodynamic shapes). You might call it intelligent, though I am more in Cian’s “tool” camp. But the relevant part here is that such techniques are not big labour savers.

A big part of engineering design work is defining the problems: what are the targets, how will you make trade-offs between them, what are the hurdles and limitations we will run into, do we understand these limitations, can we model them, if not do we have data from experiment or practice, if not can we create such data or otherwise add a safe margin around the uncertainty?

The other big chunk is interacting with other domains of expertise: you understand your part of the work fairly well, but how will your choices affect other parts? That takes up lots of time, from informal meetings to years-long design iteration cycles.

By the time such issues are clear enough to be turned into computer code, most of the work is already done. Programs can be very useful both in finding a better solution within the defined problem and in fast communication with other domains. Surprisingly often, their main use is simply documentation: they force experienced designers to cast part of their experience in an explicit form before they retire or move on to another field.

But the result is almost always better solutions with the same amount of human work, not less work for satisfactory solutions. And because making computer code is rather labour intensive too, the result is often better solutions at the price of more human work.

Might change in future. I don’t have a crystal ball. But right now, the trend is not that computers lead to across-the-board reductions in the number of people needed to design things.
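Zamfir’s point that such optimizers are tools rather than labour savers is easy to see in miniature. The following is a toy sketch (not any real design code) of the evolutionary loop such programs run; the one-line fitness function here is a stand-in for an expensive simulation, and writing a realistic version of it is precisely the design work that doesn’t go away.

```python
import random

def fitness(x):
    # Toy stand-in for an expensive simulation (antenna gain, drag, ...).
    # Defining a realistic version of this function *is* the design work.
    return -((x - 3.0) ** 2)

def evolve(generations=200, pop_size=20, sigma=0.5, seed=0):
    """(mu + lambda)-style evolution: mutate every design, keep the best half."""
    rng = random.Random(seed)
    pop = [rng.uniform(-10.0, 10.0) for _ in range(pop_size)]
    for _ in range(generations):
        children = [x + rng.gauss(0.0, sigma) for x in pop]
        # Truncation selection over parents and children (elitist).
        pop = sorted(pop + children, key=fitness, reverse=True)[:pop_size]
    return pop[0]

best = evolve()  # converges near the optimum at x = 3
```

The loop itself is a few lines; everything that made the real projects years of work sits behind the `fitness` placeholder.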

92

Matt 03.10.11 at 9:44 am

But actually I’ve been arguing that much of what we’ve considered the result of intelligence probably isn’t. Much of what humans do is fairly repetitive – the intelligence is embedded in the algorithm that we follow. Other things that humans do which use our intelligence can also be performed in non-intelligent ways.

I see the same thing, but interpret it differently: things that marked intelligence 100 years ago still mark intelligence today, even if they can now be done by repetitive logic applied at high speed instead of human brains. Rather than keep my conception of intelligence fixed as something which is ineffable and the near-exclusive property of humans, I keep the markers fixed and allow my conception to accommodate any successful completion of tasks that traditionally signified intelligence when performed by humans.

I don’t think AI research is going to produce Lt. Cmdr. Data in my lifetime, and I wouldn’t put good odds on it ever happening. Machines are divorced from our evolutionary and social context and their intelligence is task-oriented. We have few ideas and fewer resources directed to building a generalist machine intelligence that socially integrates with humans, and I don’t see that changing soon. But the lion’s share of formal economic activity is also rather specialized, repetitive, task-oriented, and not particularly sociable, so I am fairly confident that the narrow AI approach will continue to be fruitful and may even spawn a zeroth-order approximation of a “post scarcity economy” in my lifetime. Of course the precursor to those good times is probably the “70% unemployment economy” due to legacy economic structure and power so I’m not excessively impatient about it.

93

Cian 03.10.11 at 9:56 am

Salient, but is there any evidence that a computer is about to make the leap independently from pastiche (something for which the rules are fairly well known, and can easily be embedded) to something innovative that sounds good to a human ear? Even slightly innovative.

I’m really not convinced, and this is partly based upon my own personal experience, that pastiche leads inevitably to the second: that musical composition is somehow a linear progression from derivative dreck to genius. We already have music schools teaching compositional rules to students, who churn out fairly derivative dreck. That that could be automated doesn’t seem terribly surprising. But we don’t know how to teach innovation, and yet every year plenty of people create innovative music in a variety of fields.

Best I can see, there are two kinds of phenomena here. The first is the human ability to recognise and learn a pattern, and then consistently apply it. We’re now at the point where we can also teach a computer that pattern (though a human being has to be on hand to train it – hints here, perhaps, of where the jobs in a new economy might lie) and it can apply it far faster than we can. But we don’t seem to have learned how to automate the second.

And these compositional tools will inevitably be applied by some nutcase in ways that never occurred to anybody who designed them, and some new genre of music will arise. The drum machine and the sequencer were going to get rid of musicians. Never quite happened.

So, yeah, teach a computer to think and communicate and innovate somewhat like a human, and the first thing it comes up with is a dick joke.

Unless one of the programmers put that in deliberately (always pick the dick joke when given a choice), it’s total coincidence.

94

ajay 03.10.11 at 10:08 am

When you can point me to a CD of music composed by a computer that has the same emotional impact on me that first hearing a Beethoven or Mahler symphony had, then I’ll be impressed.

Which, generalised, means that David has yet to be convinced that anyone who isn’t German is truly intelligent. When you can point him to a CD of music composed by an Irishman that has the same emotional impact on him that first hearing a Beethoven or Mahler symphony had, then he’ll be impressed.

95

Cian 03.10.11 at 10:17 am

I see the same thing, but interpret it differently: things that marked intelligence 100 years ago still mark intelligence today, even if they can now be done by repetitive logic applied at high speed instead of human brains.

Why did they mark intelligence 100 years ago? What were the criteria?

Again to look at it a different way. Say that there was an operation which used to be very difficult, but then a mathematician found a formula that meant anyone with a high school diploma could do it. Does being able to do this task still denote the same level of intelligence?

Or if we’re using cultural markers. Being able to read and write used to be markers of intelligence. Now not so much. Have we all somehow got more intelligent, or is just that our cultural measures of intelligence were not terribly good?

Rather than keep my conception of intelligence fixed as something which is ineffable and the near-exclusive property of humans, I keep the markers fixed and allow my conception to accommodate any successful completion of tasks that traditionally signified intelligence when performed by humans.

Why? What do you gain from doing this?

Currently we have really poor measures of human intelligence. We don’t really know how to measure it. We know very little about how the brain works. There’s increasing evidence that much of what we consider “cognitive processing” takes place outside the brain – either through the body, or distributed through our tools and socially. Given that we know so little, I think making confident pronouncements about what intelligence might be is profoundly misguided.

Machines are divorced from our evolutionary and social context and their intelligence is task-oriented.

Actually the car you mentioned above is not task-oriented, it’s stimulus/response-oriented, just like us. I think we’re so far from the point where computers could even deal appropriately with our social context that it’s impossible to say what will happen there.

But the lion’s share of formal economic activity is also rather specialized, repetitive, task-oriented, and not particularly sociable

Is it? Parts of it definitely aren’t, and other jobs where you’d think that would be the case actually involve a surprising amount of tacit knowledge, fuzziness, compromise, etc. All things that computers are appalling at. Even something like accounting, which seems like it’s all hard edges, isn’t really when you look at it closely.

People made these confident predictions before with the IT revolution, yet it never really happened that way, for exactly those reasons.

And while it might mean the elimination of jobs, it might equally mean a lot more jobs of a different kind. I mean look at the transformation of accounting, or all the jobs created by the database. The lesson from the Industrial revolution is that massive increases in productivity do not necessarily eliminate jobs, they can simply create new spaces where people are now needed as a consequence of the world created by that productivity.

That said, I think there is an economic problem looming. And that’s that markets don’t seem to be a particularly good way of managing service economies. And as these new technologies promise (perhaps falsely) to bring to the service economy productivity improvements that have hitherto only been possible in the industrial economy, that could become a bigger problem.

96

Alex 03.10.11 at 10:33 am

Regarding automatically-composed music, it strikes me that nobody would consider implementing the same idea with a many-sided dice and blank sheet music to be evidence that musical originality doesn’t require intelligence. They might consider that the whole conceit of randomly recombining musical phrases and picking the good ‘uns was an original idea that requires unique creativity, basically a conceptual artwork, or perhaps that the real creativity was in the filtering process. But they wouldn’t think the process intelligent in its own right.

I wonder if there’s a threshold level of technological magic at work? Would they think it if we had two hundred students, each with their set of dice, at work in a production-line process? Or if we had a steam-powered Babbage engine?
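Alex’s dice-and-filter conceit is trivially mechanizable, which is rather the point. A minimal sketch (using an invented five-phrase library of scale degrees, not any real musical corpus) makes it plain where any “creativity” would have to live: in the filtering step, not the dice.

```python
import random

# Invented five-phrase library (scale degrees) -- not a real corpus.
PHRASES = [
    [1, 3, 5, 3], [5, 4, 3, 2], [1, 2, 3, 4], [5, 5, 6, 5], [3, 2, 1, 1],
]

def roll_a_piece(rng, phrases=4):
    """The many-sided-dice step: splice randomly chosen phrases end to end."""
    return [note for _ in range(phrases) for note in rng.choice(PHRASES)]

def acceptable(piece):
    """The filtering step, where all the actual judgement would live.
    Here it is the crudest possible rule: the piece must end on the tonic."""
    return piece[-1] == 1

def compose(seed=0):
    """Keep rolling until the filter accepts something."""
    rng = random.Random(seed)
    while True:
        piece = roll_a_piece(rng)
        if acceptable(piece):
            return piece
```

Nobody would call `roll_a_piece` intelligent; whether a vastly more sophisticated `acceptable` would deserve the word is exactly the question under dispute.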

97

dsquared 03.10.11 at 11:11 am

Brute-forcing chess is orders of magnitude more difficult and has not yet been done at all, because it has about 10^45 possible positions.

This is true if you were talking about brute-forcing a game of chess from beginning to end. However, there are lots of chess-like games (to take a non-trivial example, the game of “queen and pawn versus rook”) which are combinatorially smaller and which can be brute-forced. And I think Cian is right to say that the additional step which took computer chess from “good, but not able to beat grandmasters” to “better than the best human players” was the development of massive endgame databases (someone who knows more about computer chess than me might be able to gainsay this).
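The brute-force idea behind an endgame database can be shown on a game small enough to solve completely. This sketch uses a toy subtraction game rather than chess (real tablebases use retrograde analysis over vastly larger position sets), but the principle is the same: compute the game-theoretic value of every position once, then just look answers up.

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def wins(pile):
    """True iff the player to move wins the toy game: remove 1, 2, or 3
    stones from the pile; whoever takes the last stone wins."""
    if pile == 0:
        return False  # no stones left: the previous player just won
    # You win if any move leaves the opponent in a losing position.
    return any(not wins(pile - take) for take in (1, 2, 3) if take <= pile)

# The "tablebase": the exact value of every position, computed once.
table = {n: wins(n) for n in range(101)}
# Losing positions turn out to be exactly the multiples of 4.
```

Once `table` exists, “playing perfectly” is a dictionary lookup, which is why the intelligence question gets murky: all the work happened offline.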

Interestingly, successful backgammon programs (which can currently beat most human players but not world champions) don’t look very much like successful chess programs at all; they’re based on neural networks and don’t use leaf-and-branch searches as their main method of analysis. I suspect that further success will be achieved in backgammon (ie, that there’s nothing really intrinsic to the stochastic element of the game or to the nonlinearities introduced by control of the doubling cube which makes it intractable), but they’ll be the result of advances in nonlinear nonparametric regressions rather than anything that looks like “intelligence”.

David in #87 is pretty much bound to end up being surprised one day; there are already machine-generated pseudo-Mozart compositions which can’t be told apart from the real thing. But but but …

I don’t think that the “No True Scotsman” objection is valid, and I think it’s actually sensible to say that all of these solved problems are not evidence of underlying intelligence. Intelligence isn’t about answering questions; it’s about knowing which questions to ask (there are elements of this in the discussion of 3’44” and of drum and bass – give a computer a musical genre and it can follow the rules, but inventing an entirely new musical genre is a lot more difficult, although I would guess still basically possible as it’s really just more problem-solving).

In other words, I for my part will be impressed when we have a computer that can come up with a convincing criterion for what might constitute a credible test of whether it is intelligent.

98

ajay 03.10.11 at 11:48 am

I for my part will be impressed when we have a computer that can come up with a convincing criterion for what might constitute a credible test of whether it is intelligent.

The danger is that you then encounter one that can then successfully convince you that you yourself don’t meet the criterion.
http://xkcd.com/329/

99

Matt 03.10.11 at 5:41 pm

Why? What do you gain from doing this?

Currently we have really poor measures of human intelligence. We don’t really know how to measure it. We know very little about how the brain works. There’s increasing evidence that much of what we consider “cognitive processing” takes place outside the brain – either through the body, or distributed through our tools and socially. Given that we know so little, I think making confident pronouncements about what intelligence might be is profoundly misguided.

Again, I see the same thing and draw the opposite conclusion. Since the mechanisms of human intelligence are so poorly understood, and so ill suited for comparing across different organisms or entities, it makes more sense to me to judge intelligence by outcomes: at minimum, outcomes that were widely understood as intelligence-markers before programmable computers made people defensive about which markers should count. I want a collection of yardsticks that don’t change in length all the time. If lifting 200 kilograms is easier now than it was before machines, and understanding a menu in a foreign language is easier now than it was before machines, why should I redefine strength or redefine intelligence to minimize the capabilities of machines? The alternative is as if the story of John Henry came to emphasize that a steam hammer couldn’t run or throw and didn’t use real muscles, and therefore it was a cheat with no true strength.

100

bianca steele 03.10.11 at 6:22 pm

derrida derider @ 85 and the no true Scotsman thing
On the one hand, and not taking away from what Cian has been saying, whether an activity seems intelligent might have something to do with how the program that does it works. If someone played chess by memorizing board positions, we would consider them pretty intelligent, but when a computer plays tic-tac-toe by looking up board positions calculated ahead of time by a human, the computer doesn’t seem to be intelligent. We’re not talking about a computer being able to learn the rules of the game by being taught them and working out the best strategy by itself.
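The distinction bianca draws can be made concrete. The sketch below computes tic-tac-toe values itself by minimax; but the cache it fills in is exactly the sort of precomputed lookup table that could instead be shipped with the program, after which “playing” involves working nothing out at all.

```python
from functools import lru_cache

LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8), (0, 3, 6),
         (1, 4, 7), (2, 5, 8), (0, 4, 8), (2, 4, 6)]

def winner(board):
    """board is a 9-character string over 'XO.'; returns 'X', 'O', or None."""
    for a, b, c in LINES:
        if board[a] != '.' and board[a] == board[b] == board[c]:
            return board[a]
    return None

@lru_cache(maxsize=None)
def value(board, to_move):
    """Minimax value: +1 if X forces a win, -1 if O does, 0 for a draw."""
    w = winner(board)
    if w is not None:
        return 1 if w == 'X' else -1
    if '.' not in board:
        return 0
    nxt = 'O' if to_move == 'X' else 'X'
    results = [value(board[:i] + to_move + board[i + 1:], nxt)
               for i, cell in enumerate(board) if cell == '.']
    return max(results) if to_move == 'X' else min(results)

# The lru_cache that value() fills in is, in effect, the precomputed
# position table: once built, perfect play is pure lookup.
```

Whether the lookup table or the minimax that generated it is the “intelligent” part is precisely the question the comment raises.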

On the other hand, things that don’t seem very intelligent turned out to be very hard for a computer to do: an example is walking with jointed legs. Another example is understanding arbitrary written stories, even simple ones, to the point of being able to answer unpredictable questions about them. If walking and understanding whether the waiter or the customer pays the check don’t seem to most people like criteria of intelligence, maybe it’s a mistake to think AI is about the dividing line between an intelligent animal and an unintelligent animal.

You can argue it should be if you like. You can even count a robot that can win a chess game but can’t find the door as more intelligent than a five year old child if that’s what you want to do. But these don’t seem to represent the issues people here are discussing. Nobody is saying being able to navigate with a map isn’t intelligent behavior, but it seems undeniable that it’s somewhat less impressive to follow step-by-step written instructions drawn up by someone else who used the map–it’s not just redefining terms to say of a robot that was supposed to do the former that it only did the latter, and in a sense was thus cheating.

David @ 88,
Fair enough. I hadn’t heard of Banks before I started reading Crooked Timber and haven’t yet read any of his books.

101

bianca steele 03.10.11 at 6:34 pm

From the other direction: a slide rule “knows” how to calculate cosines, but it isn’t intelligent. Admittedly, if an alien race turned out to have an actual physical analogue of slide rules in their brains, and thus could calculate cosines really well, we would call that intelligent. But we still wouldn’t say a person who looked up positions on a slide rule without any understanding could intelligently compute cosines (even though in this case a slide rule doesn’t model human behavior well, simply because we can’t compute cosines as well as it can).

102

ScentOfViolets 03.10.11 at 7:01 pm

This is true if you were talking about brute-forcing a game of chess from beginning to end. However, there are lots of chess-like games (to take a non-trivial example, the game of “queen and pawn versus rook”) which are combinatorially smaller and which can be brute-forced. And I think Cian is right to say that the additional step which took computer chess from “good, but not able to beat grandmasters” to “better than the best human players” was the development of massive endgame databases

What often gets lost in the noise is that there is nothing stopping a plain old meat bopper from implementing those same algorithms using pencil-and-paper and beating any human who plays chess the old-fashioned way.

The same can be said of any of a number of achievements attributed to machines; in reality, computers are doing nothing more than playing back the thoughts of a dead man. They are a more sophisticated type of gramophone, playing prerecorded algorithms rather than soundtracks, nothing more.

The question then is what it is that humans do and what they mean by the word “think”. I would argue that the term has a lot of semantic baggage, in the same sense that concepts like “free will” are freighted with similar medieval nonsense.

103

ScentOfViolets 03.10.11 at 7:07 pm

Currently we have really poor measures of human intelligence. We don’t really know how to measure it. We know very little about how the brain works.

I’m guessing that ultimately it all comes down to just selection rules operating on what comes out of some process which has random variation built in. Yes, even that much-vaunted “creativity” that so many humanist types go on about.

The switcheroo and the mysticism comes in because the casual observer isn’t privy to all the failures. Yes, instinctual behaviour can be quite complex, in some cases seeming to be miraculously so. But focusing on the behaviour of individual members in a successful ant colony is kind of pointless unless one looks at the trillions of instances where other ants got it wrong and paid the price.
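That picture – selection rules operating on blind variation – is easy to make concrete with a toy, in the spirit of Dawkins’ famous “weasel” demonstration (illustrative only; every parameter below is invented for the sketch, and nothing here models any real biological or mental process):

```python
import random

TARGET = "METHINKS IT IS LIKE A WEASEL"
ALPHABET = "ABCDEFGHIJKLMNOPQRSTUVWXYZ "

def score(s):
    # Fitness: how many characters already match the target.
    return sum(a == b for a, b in zip(s, TARGET))

def evolve(seed=0, pop=100, mut_rate=0.05):
    rng = random.Random(seed)
    parent = "".join(rng.choice(ALPHABET) for _ in TARGET)
    generations = 0
    while parent != TARGET:
        # Random variation: many imperfect copies of the current parent...
        children = [
            "".join(rng.choice(ALPHABET) if rng.random() < mut_rate else c
                    for c in parent)
            for _ in range(pop)
        ]
        # ...then a selection rule: keep the highest-scoring string.
        parent = max(children + [parent], key=score)
        generations += 1
    return generations
```

Starting from a random string, selection on blind variation reaches the target in a modest number of generations – which is the whole (limited) point of the toy.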

104

Matt 03.10.11 at 7:29 pm

What often gets lost in the noise is that there is nothing stopping a plain old meat bopper from implementing those same algorithms using pencil-and-paper and beating any human who plays chess the old-fashioned way.

Unless there’s a time limit, of course. “In principle” a human with a blackboard can produce a 5 day weather forecast just as well as computers, as long as the human is allowed a million years with a blackboard the size of Greater Los Angeles. Quantity has a quality of its own.

The same can be said of any of a number of achievements attributed to machines; in reality, computers are doing nothing more than playing back the thoughts of a dead man. They are a more sophisticated type of gramophone, playing prerecorded algorithms rather than soundtracks, nothing more.

The people who created today’s top chess programs could not defeat top human players if they played directly instead of writing programs to play. Top human players could not defeat today’s top chess programs if you gave them access to powerful computer hardware and a software development environment. A good chess program isn’t just a gramophone for chess play or a Wizard of Oz disguise for computer scientists.

105

Cian 03.10.11 at 8:17 pm

I’m guessing that ultimately it all comes down to just selection rules operating on what comes out of some process which has random variation built in. Yes, even that much-vaunted “creativity” that so many humanist types go on about.

You know you could read the odd book on what we do know, rather than just randomly guessing…

106

Cian 03.10.11 at 8:18 pm

Sorry that was unfair. Bad day.

107

Cian 03.10.11 at 8:25 pm

@99: Your argument is basically that you want to use a criterion that is fixed, chosen from an arbitrary point in time before people allegedly got “defensive” about intelligence because of computers (evidence of this forthcoming no doubt), and you don’t care how rubbish, unrepresentative or plain wrong it is, just so long as it’s fixed DAMN IT. A hundred years ago people thought somebody who could add and multiply large sums in their head was pretty damn smart. Why not just use that? It’s no less arbitrary than anything else. Plus that would really prove that computers are more intelligent than humans.

Well I can’t stop you, but I don’t really see any point continuing the argument.

108

johne 03.10.11 at 8:30 pm

“…instinctual behaviour can be quite complex, in some cases seeming to be miraculously so. But focusing on the behaviour of individual members in a successful ant colony is kind of pointless unless one looks at the trillions of instances where other ants got it wrong and paid the price.”

Anyone who has had the chance to examine a colony of leafcutter ants, whose nests are always at the center of long files of workers traipsing out in different directions to a suitable tree, each returning with a bit of leaf for the colony’s fungus gardens, has seen that there are always a few individuals who get turned around at some point and are now carrying their burden back to the tree.

In a number of countries, politicians and economists have decided that the appropriate response to the current economic downturn is to further restrict economic activity by cutting government spending. The difference between instinctual and intelligent behavior then, seems to be that in the latter, the turned-around have more influence.

109

Cian 03.10.11 at 8:33 pm

@104: A top singer probably couldn’t make a good recording of themselves singing. Still a good singer. Your argument might need some work.

110

Matt 03.10.11 at 9:04 pm

There was no special revelation about the human brain or human thought that caused people to conclude that playing world-class chess is an unintelligent activity. It was the triumph of a computer at the task. It was enough to see the computer win to understand that chess playing is No True Intelligence. The yardstick for intelligence deflates at exactly the rate required to keep the machine a midget. Its deflation is also threatening the rest of us intelligent (for now!) humans who cannot claim to have created groundbreaking art, asked never-before-voiced questions, or found fortune as lone inventor-geniuses. My cynical take is that “intelligence” should be read as a sacred in-group mark of the tribe species that proves the gods appointed us lords of creation and infused us with divine fire, hence the mark must change as fast as out-groups try to counterfeit it.

111

chris 03.10.11 at 9:37 pm

There was no special revelation about the human brain or human thought that caused people to conclude that playing world-class chess is an unintelligent activity. It was the triumph of a computer at the task.

Well, technically, there is that one quote from von Neumann. But his point of view wasn’t widely accepted at the time. He seems to have been right in hindsight, though.

112

Matt 03.10.11 at 11:17 pm

Well, technically, there is that one quote from von Neumann. But his point of view wasn’t widely accepted at the time. He seems to have been right in hindsight, though.

His comment could (and I think was meant to) apply to all games with perfect information. Yet in practice it was harder to build champion chess machines than champion checkers machines, and it proves harder yet to build champion Go machines. Human Go champions do not seem vastly different from human chess champions, but machines have much more trouble with Go.

As another angle on the AI debate, it’s interesting to consider what sorts of challenges are rendered difficult by practical engineering limits versus those where we don’t yet know how to do well even with arbitrarily fast and capacious computers. Winning games with perfect information falls into the first class. Making AI professors of history falls into the second class. Facial recognition and many other tasks straddle the line: they required both considerable algorithm development and hardware advances before the performance went from abominable to admirable. If you took 2006’s champion facial recognition software and paired it with computers built in 1976, it would be useless because the computers weren’t good enough. If you tried to run 1976’s recognition software on 2006 computers it would give wrong answers much faster, but wouldn’t be much more accurate. And, similarly, although facial recognition machines can now outperform humans and require considerable computing power to do so, there is no indication that you could significantly decrease the error rate by running the same software on computers with 1000 times as much memory or processing speed.

113

David 03.10.11 at 11:37 pm

Yes, when a computer offers up River Dance, I’ll be most mightily depressed.

114

Salient 03.11.11 at 12:17 am

when a computer offers up River Dance, I’ll be most mightily depressed.

I once wrote a program to randomly sample from the most popular online music resources and ‘compose’ a mix-mashup from the results, but I had to discard it because it kept returning slight deformations of this.

115

ScentOfViolets 03.11.11 at 12:27 am

What often gets lost in the noise is that there is nothing stopping a plain old meat bopper from implementing those same algorithms using pencil-and-paper and beating any human who plays chess the old-fashioned way.

Unless there’s a time limit, of course. “In principle” a human with a blackboard can produce a 5 day weather forecast just as well as computers, as long as the human is allowed a million years with a blackboard the size of Greater Los Angeles. Quantity has a quality of its own.

Uh-huh. That’s part of the rules of the game and that’s why playing chess by mail over a period of months is imp– Oh. Wait. That’s not true.

My point, for people like you who don’t seem to be terribly well versed in the subject, is that the distinction between the machine and the algorithm must be kept firmly in mind at all times. This is basic stuff.

The same can be said of any of a number of achievements attributed to machines; in reality, computers are doing nothing more than playing back the thoughts of a dead man. They are a more sophisticated type of gramophone, playing prerecorded algorithms rather than soundtracks, nothing more.

The people who created today’s top chess programs could not defeat top human players if they played directly instead of writing programs to play. Top human players could not defeat today’s top chess programs if you gave them access to powerful computer hardware and a software development environment. A good chess program isn’t just a gramophone for chess play or a Wizard of Oz disguise for computer scientists.

Sigh. I had thought that the notion that machines implement algorithms that are put into them by hand would be not only not controversial, but universally known by anyone who would presume to speak with any authority on the subject.

Apparently I was wrong.

116

Charles Peterson 03.11.11 at 12:30 am

First, I argue that even assuming computers with greater intelligence than humans could be constructed, it won’t necessarily make life better for humans; in fact it could make most people’s lives worse by furthering inequality, plutocracy, etc. And maybe *all* natural lives worse or nonexistent (though we seem to be making good progress in that direction all by ourselves…we are among the most perfect grey goo enablers ever seen).

Second, about intelligence itself: it means nothing like wisdom, or getting to the truth, or compassion; greater intelligence can simply mean the formation of more complex, useless or destructive lies, myths, etc.

Third, about intelligence itself, there could be speed-of-light-type limits of various sorts, which we might already be at or nearly so. These could arise in physical domains (such as the speed of light, or quantum limits) or in something more mathematical/topological/categorical/philosophical.

One illustration is that the advantage of adding more and more processors goes away when the job of coordination grows faster than the sum of capability.

Another illustration is that of determinism. Maintaining constrained determinism is expensive, and gets more so as systems get more complex. We generally like determinism in our computerized systems (that’s the way we build hardware and most software now), even though we seem to be far from that ourselves. We gain a huge advantage because of our acceptance of our own quasi-non-deterministic capabilities. It may be that more deterministic systems will never fully be able to compete with that, and/or it will be difficult to get them to fit our acceptability profiles. Of course, we don’t often fit our own profiles either, but we were already here.

Though the plutocracy keeps trying to have determinism (do as I say, etc.) without having to pay the extra costs. Or to have perfect obedience with independent thinking, but the two are incompatible.

117

ScentOfViolets 03.11.11 at 12:32 am

I’m guessing that ultimately it all comes down to just selection rules operating on what comes out of some process which has random variation built in. Yes, even that much-vaunted “creativity” that so many humanist types go on about.

You know you could read the odd book on what we do know, rather than just randomly guessing…

Sorry that was unfair. Bad day.

To the contrary, it is painfully evident that I apparently know enormously more than just about anyone else commenting on the subject here. At least, judging by the comments ;-)

You may read “I’m guessing” as “noncontroversial mainstream opinion” if that helps any.

118

Matt 03.11.11 at 1:15 am

Sigh. I had thought that the notion that machines implement algorithms that are put into them by hand would be not only not controversial, but universally known by anyone who would presume to speak with any authority on the subject.

Surely, but your conclusions do not follow from that notion. An algorithm for playing a game is not a victory in competition any more than a program of research is a published result. In theory, tic-tac-toe, chess, and Go are equivalent as games of perfect information that a Turing machine can easily solve with fairly simple algorithms. In practice the algorithms must be implemented on a real machine and proved empirically.
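Tic-tac-toe is the one game in that list small enough to show the point directly – the textbook negamax search below solves it exactly, and a rough sketch under standard rules is all this is:

```python
from functools import lru_cache

# Board is a 9-character string ("X", "O", or "." for empty),
# read left to right, top to bottom.
WIN_LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8),
             (0, 3, 6), (1, 4, 7), (2, 5, 8),
             (0, 4, 8), (2, 4, 6)]

def winner(board):
    for a, b, c in WIN_LINES:
        if board[a] != "." and board[a] == board[b] == board[c]:
            return board[a]
    return None

@lru_cache(maxsize=None)
def value(board, player):
    # Game value from `player`'s point of view: +1 win, 0 draw, -1 loss,
    # assuming both sides play perfectly (plain negamax with memoization).
    w = winner(board)
    if w is not None:
        return 1 if w == player else -1
    if "." not in board:
        return 0  # board full: draw
    other = "O" if player == "X" else "X"
    best = -1
    for i, cell in enumerate(board):
        if cell == ".":
            best = max(best, -value(board[:i] + player + board[i + 1:], other))
    return best
```

`value("." * 9, "X")` comes out 0: perfectly played tic-tac-toe is a draw. The same code, unchanged, is hopeless for chess or Go; the difference is purely the size of the state space, which is the point being argued.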

119

Ebenezer Scrooge 03.11.11 at 1:19 am

Matt,
SoV is always correct. Its opponents are always snivelling fools, or arguing in bad faith. If you hang around this comment section long enough, you will know this irrefragable truth.

120

ScentOfViolets 03.11.11 at 1:28 am

Sigh. I had thought that the notion that machines implement algorithms that are put into them by hand would be not only not controversial, but universally known by anyone who would presume to speak with any authority on the subject.

Surely, but your conclusions do not follow from that notion. An algorithm for playing a game is not a victory in competition any more than a program of research is a published result.

Uh-huh. Why don’t you state what my “conclusions” were that “do not follow from that notion”? Specifically. Given your previous confusion about chess and your lack of distinction between an algorithm and what is implementing that algorithm, well, let’s just say you’d better be real specific and not drift off into airy-fairy land.

Particularly since everything I said is pretty much 100% mainstream. And on such a low level that you should know this stuff already.

121

Matt 03.11.11 at 1:53 am

Again, with feeling:

What often gets lost in the noise is that there is nothing stopping a plain old meat bopper from implementing those same algorithms using pencil-and-paper and beating any human who plays chess the old-fashioned way.

Plain old meat boppers, and even the computers of a few decades ago, are not capable of defeating skilled human players by running modern chess algorithms because they are too slow to play a full game before expiring of old age. A computer implementing algorithms to play chess is not just a recording of old games, a sleight of hand trick for disguising the programmer’s chess skill behind a silicon mask, or something that plain old meat boppers could imitate by hand to win tournaments.

122

ScentOfViolets 03.11.11 at 2:06 am

Again, with feeling:

What often gets lost in the noise is that there is nothing stopping a plain old meat bopper from implementing those same algorithms using pencil-and-paper and beating any human who plays chess the old-fashioned way.

Plain old meat boppers, and even the computers of a few decades ago, are not capable of defeating skilled human players by running modern chess algorithms because they are too slow to play a full game before expiring of old age. A computer implementing algorithms to play chess is not just a recording of old games, a sleight of hand trick for disguising the programmer’s chess skill behind a silicon mask, or something that plain old meat boppers could imitate by hand to win tournaments.

Really? “Again”, eh? The last time around you claimed that chess playing was timed. It’s not. You were dead wrong. Nor did I say anything like what you would have me say, in particular, the part that I have bolded. Perhaps that’s why you – quite by accident I’m sure – fail to quote me on where I said any such thing.

Now, can you at least admit that much? I get the sense that you know you’re wrong, but now you’ve thought yourself into the position that I’m the bad guy because I’m chivvying you. In which case I’ve got no time for your type of trolling (there’s another word for it, but I won’t bother to use it.)

123

Matt 03.11.11 at 2:24 am

Whether it’s a tournament with time control or not, you have to finish the game in one lifetime. That’s not very much time at all if you propose to win games by stepping through computer chess algorithms with pencil and paper.

I’m not trolling or arguing in bad faith. As proof I admit that I was wrong about your idea of recordings: you originally compared chess-playing machines to gramophones with recorded algorithms, not recorded games.

124

Matt McIrvin 03.11.11 at 2:25 am

In many domains this pattern may repeat: the best machines are better than unaided humans, but skilled humans working with the best machines are better yet.

This reminds me of when I was learning to use Mathematica to do heavy-duty symbolic algebra. After a while I got this definite impression (illusion?) that I was dealing with a kind of intelligence, but a completely nonhuman one. And that the important thing to learn to make progress was the precise nature of the boundary between tasks best done by me and tasks best done by the program. The machine (actually a 1990s Macintosh that would have been far outclassed by a modern-day cell phone) had a raw power in applying symbolic rules that I couldn’t match, but I had to put in significant effort to get it pointed in the right direction. It felt almost as if I was coaxing a very strong but stubborn pack mule over a complicated mountain range.

125

Salient 03.11.11 at 2:52 am

I had thought that the notion that machines implement algorithms that are put into them by hand would be not only not controversial

Look, I hate to be the one to question you on your qualifications, but — you’re a math guy, right? You did take a neural networking class as part of your undergraduate studies, right? I’m assuming this is somewhat standard material for people interested in computers and maths. Try to recall the stuff about node-weighting and training data.

Roughly/loosely speaking, the whole point of a neural network is to design a computer that writes programs for itself, in a probabilistic rather than deterministic way, so that the end-result behaviors are non-deterministic. Like, you can program a computer not just to drive a car, but also to respond to unanticipated phenomena. I remember I had a homework assignment where I wrote a program that let a car teach itself the best way to avoid moving obstacles, by trial and error. It kind of sucked, if I remember correctly.

But to be clear, I didn’t give the computer any instruction about how to avoid obstacles, I just taught it to reward itself whenever it happened to avoid an obstacle, and over time (lots of time) it taught itself (somewhat) to maximize its self-rewarding (maximize is too strong; it learned to not suck quite so badly).

So I’m just not sure what you mean. On a very meta level, it’s kinda sorta true. I mean, if I had never written a program, if I had never given the computer anything to start with, then sure, it’s a box of silicon.

But I didn’t write a program for obstacle avoidance; I wrote a program that said “avoidance is good; collision is bad; badness of a collision is a function of the area overlap and the net velocity (sprites could move through each other); now go trial-and-error and flounder around until you figure out how to do it well.” Awesomely, early on it just slammed on the brakes all the time, and over time it did more swervy type stuff and braked less. I didn’t teach/program it to compute velocities or trajectories other than its own; it just trial-and-errored its way through a field of moving rectangles. A friend of mine in the class chose to teach a van how to parallel park, which was funny because he didn’t know how to parallel park, at all. His program also kind of sucked, but it learned over time (it was rewarded for speed as well as for not hitting other cars).

Neither of us wrote deterministic algorithms for the computer to follow. The program acted randomly at first (literally – we used random.org data) and modified its behavior in response to how good or bad the outcome was. We definitely didn’t include any instructions for computing the velocity or trajectory of an object other than itself.
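That style of reward-driven trial and error can be sketched in miniature – the toy below (a three-column dodge game with tabular “Q-learning”-style updates) is a hypothetical stand-in, not the actual class assignment, and every name and rate in it is invented:

```python
import random

def train_dodger(episodes=2000, seed=0):
    """Trial-and-error learning on a toy dodge game: an obstacle appears in
    one of 3 columns; the agent sees both columns, picks a move (left / stay /
    right), and is rewarded +1 for ending in a different column, -1 for a
    collision. It is never told HOW to dodge, only what counts as good."""
    rng = random.Random(seed)
    q = {}  # (agent_col, obstacle_col) -> estimated value of each of 3 moves
    alpha, epsilon = 0.5, 0.1
    for _ in range(episodes):
        state = (rng.randrange(3), rng.randrange(3))
        values = q.setdefault(state, [0.0, 0.0, 0.0])
        # Epsilon-greedy: mostly exploit the best estimate, sometimes explore.
        if rng.random() < epsilon:
            move = rng.randrange(3)
        else:
            move = max(range(3), key=lambda m: values[m])
        new_col = min(2, max(0, state[0] + move - 1))  # moves map to -1, 0, +1
        reward = -1.0 if new_col == state[1] else 1.0
        # Nudge the stored estimate toward the observed reward.
        values[move] += alpha * (reward - values[move])
    return q
```

After training, the learned table says that sitting still in the obstacle’s column is a bad idea – behavior the programmer specified only indirectly, through the reward.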

They are a more sophisticated type of gramophone, playing prerecorded algorithms rather than soundtracks, nothing more.

I guess, broadly speaking, you’re right, but what kind of gramophone behaves randomly? What kind of gramophone changes its own configuration and operation in response to crackling to improve signal-to-noise, without having been taught anything about what kinds of changes would be helpful?

I feel like your idea of a computer is deterministic. I dunno, maybe when you went through undergrad they didn’t encourage neural networking for computer types, so you’re working off an outdated mental model of a computer? I’m a lot younger than you IIRC, that might explain it. (Doesn’t explain your hostility, though.)

Nor did I say anything like what you would have me say, in particular, the part that I have bolded.

C’mon, you’re a math guy. Since when can you refute “A or B or C” on the grounds of not-A? It was/is more than a little unclear what you meant with that initial statement of yours (below), so Matt took three guesses, increasingly general with each guess, so that C was very nearly a restatement of your claim.

I agree that A is not a super-accurate interpretation of what you said, but it takes quite a bit of effort to understand quite what you mean. Seriously, read what you wrote:

What often gets lost in the noise is that there is nothing stopping a plain old meat bopper from implementing those same algorithms using pencil-and-paper and beating any human who plays chess the old-fashioned way.

The trouble with this statement is that it’s technically true in a rather preposterous way. Nobody has the mental energy to memorize the ten billion or so twelve-piece end game algorithms. Buuuut, perhaps not nobody. Perhaps, I dunno, one person in ten billion can do that. At a rate of completely memorizing one algorithm per second, they could complete the task in about 310 years, so let’s assume that from the age of ten they can memorize ten algorithms per second, with time to practice general chess mastery, so they’re a double-plus-good grand master by 55. Technically, if they managed this, and they could recall each one instantly to mind, then yeah, they could pencil-and-paper it out.

But… it’s preposterous. I’m assuming that you googled and learned that, hard-coding the reasonably characterized back end of the endgame, there are billions of endgame states (2^33) with independent algorithms. (I have no idea if this has been implemented, and I might be badly off with that estimate, I didn’t exactly do a rigorous search or thorough read.) A computer can just look it up in a database. Even if you allowed the human lots of books so memorization wasn’t necessary, just storing in mind the organizational system necessary to find the right book would be horrendous and impossible.

Hell, why would they need pencil and paper? If you can recall billions of different procedures to mind, you presumably have the type of mind that could implement each of those billions of algorithms in your head, right? Why allow the person a pencil?

So, ok, I suppose someone could implement chess algorithms, in theory, in the same way that someone could, in theory, be able to state an arbitrarily chosen digit of pi with perfect accuracy. This is kind of abusing the phrase in theory, if you ask me, because to me, and Matt, and most everyone else, it’s totally inconceivable. But okay, whatever, I can’t prove it’s impossible for a super-genius to do, so… you win!
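For what the lookup idea amounts to at a scale where tabulating everything genuinely is feasible, here is a sketch for the subtraction game rather than chess (all names invented for the illustration):

```python
def build_tablebase(max_pile=100, moves=(1, 2, 3)):
    """Endgame-tablebase idea in miniature, for a game small enough to
    tabulate completely: the subtraction game (remove 1-3 objects from a
    pile; whoever takes the last object wins). For every position we record
    whether the player to move wins with perfect play: a position is winning
    iff some move reaches a position that is losing for the opponent."""
    table = {0: False}  # empty pile: the player to move has already lost
    for pile in range(1, max_pile + 1):
        table[pile] = any(not table[pile - m] for m in moves if m <= pile)
    return table

def best_move(pile, table, moves=(1, 2, 3)):
    # "Playing by lookup": pick any move into a position marked as lost
    # for the opponent, if one exists.
    for m in moves:
        if m <= pile and not table[pile - m]:
            return m
    return moves[0]  # the position is lost anyway; any move will do

table = build_tablebase()
```

From a pile of 7, lookup says take 3, leaving 4 (a multiple of 4, i.e. a lost position). Chess tablebases are this same construction run backward from checkmates, just with billions of entries per piece combination instead of a hundred.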

126

Matt 03.11.11 at 3:03 am

This reminds me of when I was learning to use Mathematica to do heavy-duty symbolic algebra. After a while I got this definite impression (illusion?) that I was dealing with a kind of intelligence, but a completely nonhuman one. And that the important thing to learn to make progress was the precise nature of the boundary between tasks best done by me and tasks best done by the program. The machine (actually a 1990s Macintosh that would have been far outclassed by a modern-day cell phone) had a raw power in applying symbolic rules that I couldn’t match, but I had to do significant effort to get it pointed in the right direction. It felt almost as if I was coaxing a very strong but stubborn pack mule over a complicated mountain range.

Macsyma, one of the earliest powerful symbolic algebra systems, explicitly embraced this “partnership” mode of work. If you asked it to do something, and certain assumptions (do you care about complex solutions? do you want assume N is greater than 0?) could substantially affect the outcome or the difficulty of the operation, it would prompt you with those sorts of questions instead of giving an error, guessing what you would like to assume, or doing something really slow and complicated. It was a more teachable mule. Unfortunately it lost out to Maple and Mathematica because of weak numerics and some dumb hardware-bundling tactics by the company that commercialized it.
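A toy imitation of that prompting style – emphatically not Macsyma’s real interface or API, just the idea of refusing to guess an assumption:

```python
def simplify_sqrt_square(assume_nonnegative=None):
    """Toy 'ask, don't guess' routine: sqrt(x**2) simplifies to x only if x
    is known to be nonnegative, and to Abs(x) otherwise, so the routine
    insists on being told rather than silently assuming. (Invented example;
    the function name and interface are illustrative only.)"""
    if assume_nonnegative is None:
        raise ValueError("Assume x >= 0? Call again with "
                         "assume_nonnegative=True or False.")
    return "x" if assume_nonnegative else "Abs(x)"
```

Real systems lean different ways on this: some silently assume, some always return the guarded Abs(x) form, and Macsyma, per the comment above, asked.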

127

Salient 03.11.11 at 3:12 am

Adding, I remember getting frustrated because the car had figured out that it could just hide in one corner of the screen and not get hit by stuff, and I couldn’t figure out how to teach it that staying still for too long was a bad thing. It had a tendency to exploit faults in my crude programming of what the rectangles would do, like travel at constant velocity in a straight line forever. A friend in the class helped me design (ok, a friend designed) a “vicious” rectangle that would actually hunt down the car (in a crude and deterministic way), and I added some kind of poorly implemented badness measurement for staying fairly still for fairly long. It resulted in kind of erratic and unproductive behavior by the car (probably my fault, I’m not the best programmer). I caught myself feeling a pang of hesitance, conflicted, as if I had somehow traumatized and emotionally scarred a living thing.

Also, for those who are curious about the details, IIRC Gurney’s intro to neural networks book [Amazon link] wasn’t our textbook, but it was on the recommended list on the syllabus and in the library reserves, and I remember enjoying reading it a lot more than our textbook. There’s probably better out there.

I always wanted to write a program with three cars, Cars A and B trying to avoid Car C and each other and Car C trying to hit Car A or Car B, to see what would happen. Like, would the prey cars learn to hide behind each other? When would the hunter car break off from chasing one car to chase the other? … but then that conflicted feeling returns. Take that for what you will.

128

ScentOfViolets 03.11.11 at 3:21 am

Salient:

Yes, all of what you say is pretty much true . . . and irrelevant to the question of whether or not machines “think” or can be said to be “creative”. None of the stuff I’ve said is controversial or obscure, btw.

Take for example this last bit (I don’t have a lot of time now):

So, ok, I suppose someone could implement chess algorithms, in theory, in the same way that someone could, in theory, be able to state an arbitrarily chosen digit of pi with perfect accuracy. This is kind of abusing the phrase in theory, if you ask me, because to me, and Matt, and most everyone else, it’s totally inconceivable. But okay, whatever, I can’t prove it’s impossible for a super-genius to do, so… you win!

This goes back to what it means to be a strong AI proponent. This group maintains that what they believe in is nothing more than a rejection of dualism. There is no mysterious paranormal stuff that is necessary for thought, no soul that animates a human like a tiny mannequin taking up residence inside their skulls. They say that in principle a machine intelligence is possible because all you need is a computer powerful enough to emulate all the atoms and molecules that make up the human brain (or larger or smaller units, going up to synapses and neurons or larger functional blocks, or going down to electrons and nuclear material – that quantum weirdness the Deepak Chopra types insist upon.) Disassemble a human brain and feed the mapping to the computer, hit the “go” switch and – voila! – instant machine intelligence.

And in those circles – quite standard, I assure you – the question of how fast the machine runs is considered completely irrelevant as to whether or not the machine can be said to think “like a human”. It could be that this hypothetical computer runs one twentieth or one one-thousandth as fast as your standard human. Doesn’t make a difference. Just as it wouldn’t make a difference if the machine turned out to run over a thousand times faster than the natural article. None of these strong AI types would say that this machine could only be considered to be thinking “like a human” if it ran at exactly the same rate as the strictly biological variety.

You may disagree, but it certainly seems like a reasonable position to me. And – as I’ve said repeatedly – the one taken by the overwhelming majority of researchers.

And since it is in that sense that we are talking (at least, insofar as I read the conversation), it doesn’t matter how long it takes someone to run through the algorithm or the size of the blackboard they have to use.

Now, if the conversation were to be of a different sort, a more pragmatic one where we were talking about the sorts of tasks we could expect these machines to do rather than arguing over whether or not they are doing them in a “human” or “thinking” or “creative” way, objections of this sort would be perfectly allowable. In fact, that’s more of the sort of conversation I’d prefer – discussions like this one descend very quickly into metaphysical wankery. But that’s not what’s been happening, unfortunately.

129

Matt 03.11.11 at 3:30 am

Salient, thanks for the support, but playing chess by algorithm-by-hand is actually even harder than you have indicated. The problem isn’t needing to memorize a vast tablebase, although it’s true that the very strongest programs incorporate them. You can still implement strong chess algorithms without large tablebases, and the algorithms are within demonstrated human capacity to memorize.

The problem is the sheer number of operations needed to run the algorithm by hand! A modern PC with a strong chess program may execute on the order of 10 billion elementary instructions per second to generate and evaluate on the order of 10 million distinct moves per second. There’s a little wiggle room because a human working by hand may use slightly more complex elementary operations than a modern CPU offers, but not a lot. You’d need many lifetimes to replicate on paper what the computer did in the course of one quick game.
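The arithmetic behind “many lifetimes”, using the rough rates above and a deliberately generous guess at human speed (the game length and per-move thinking time are assumptions for the estimate, not figures from the thread):

```python
# Back-of-envelope version of the claim above. The machine-side figure is the
# rough one from the comment; the human rate of one elementary operation per
# second is a deliberately generous assumption.
SECONDS_PER_YEAR = 365 * 24 * 3600           # ~3.15e7 seconds

computer_ops_per_sec = 10_000_000_000        # ~1e10 elementary instructions/s
thinking_seconds_per_game = 40 * 30          # say 40 moves at ~30 s each
total_ops = computer_ops_per_sec * thinking_seconds_per_game  # 1.2e13 ops

human_ops_per_sec = 1
years_by_hand = total_ops / human_ops_per_sec / SECONDS_PER_YEAR
# On these numbers, the pencil-and-paper replay of one quick game takes
# several hundred thousand years.
```

Even if a human managed a hundred operations per second, the replay of a single game would still outlast several lifetimes.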

130

Matt 03.11.11 at 4:08 am

SoV, you’re the first one to bring up simulating molecular blueprints of people on some yet-to-be-built computer. In reality what we can do with Turing-like machines is limited by storage space and time, and I made this very explicit already in post 112.

131

Salient 03.11.11 at 4:53 am

The problem is the sheer number of operations needed to run the algorithm by hand!

Oh, ok, that makes sense. I was hacking together an estimate from a quick search, and probably bundled operation count with independent algorithm count without thinking about it (and come to think of it, I was assuming that each possible collection of pieces remaining on the board would require a different algorithm to execute, which probably isn’t true).

And in those circles – quite standard, I assure you – the question of how fast the machine runs is considered completely irrelevant as to whether or not the machine can be said to think “like a human”.

I feel like we’re just sliding around definitions. Look, I would say my puppy is intelligent. I taught him tricks and games with puppy toys. You can say, naw, that’s not intelligence. I don’t know if it’s “thinking like a human” because the word “like” is doing an awful lot of work there. How would I evaluate whether I am currently “thinking like a human” let alone my puppy? I don’t know what the phrase means.

None of the stuff I’ve said is controversial or obscure, btw.

Seriously, that earlier statement about humans being able to run designed-for-computer chess algorithms and play chess accordingly was super-duper-obscure. And then the part where you said chess is not a timed game? WTF? What game isn’t timed, and how do I sign up, so that I can be immortal and stuff?

And in those circles – quite standard, I assure you – the question of how fast the machine runs is considered completely irrelevant as to whether or not the machine can be said to think “like a human”.

Dude, we’re not trying to figure out if a brain can run like a human [at least, at this point in the conversation and for about the last X hours or so], we’re trying to figure out w.t.f. you meant when you said humans can run chess algorithms on pencil and paper. Or hell, at least that’s what I’ve been trying to figure out, and it sure seems to me that I’m not alone here.

I don’t know what the phrase “think like a human” means. To me, “intelligent” means something like “capable of independently adapting to new input” where “independently” probably just means “without additional strictly deterministic instruction.” So humans surely aren’t the only intelligent beings, by my definition of intelligence. Is there a better definition? I dunno. I’d like to hear it. And I got some intriguing ideas from some of the commenters here about it, which is great! So hey, what’s your definition of intelligence?

where we were talking about the sorts of tasks we could expect these machines to do rather than arguing over whether or not they are doing them in a “human” or “thinking” or “creative” way

Uh, ok. Lots of people were talking about that stuff, e.g. Cian upthread. If that’s what you wanted, you should’ve just started talking about that and given up on the vitriolic condemnation of people who want to talk about creative robots on a bleepin’ science fiction thread. The stuff you said about people playing chess was quite a distraction from your stated interest.

132

ScentOfViolets 03.11.11 at 5:06 pm

Uh, ok. Lots of people were talking about that stuff, e.g. Cian upthread. If that’s what you wanted, you should’ve just started talking about that and given up on the vitriolic condemnation of people who want to talk about creative robots on a bleepin’ science fiction thread. The stuff you said about people playing chess was quite a distraction from your stated interest.

Sigh. You’ve rather got the order of how things happened reversed, as well as apparently not bothering to read what I write.

It’s also quite apparent that no, you don’t know much about this sort of stuff and my interjections are going way over the heads of people like you and Matt (both of whom – oddly enough – seem to think of themselves as experts).

So there’s no point in further conversation – apparently you think that actually learning something new to you counts as some sort of beat-down, and well, I get enough of your sort of oppositional attitude from my own students.

Enough.

133

David 03.12.11 at 1:05 am

ajay@#94: Ok, I’ll embrace the snark. To paraphrase Mencken, “there are two kinds of intelligence. German intelligence and bad intelligence.” Beethoven and Mahler were just the first two, and the earliest, that popped into my head. I could well have singled out Diabate, instead.

dsquared@#97: I’m not all that big on Mozart, but I’m willing to bet that aficionados hearing a meat orchestra performance of one of these pieces would have a reaction along the lines of “this was a really off day for Amadeus, he must have phoned it in” rather than “oh wow! an undiscovered Mozart masterpiece.”

134

ScentOfViolets 03.12.11 at 2:12 am

But David, just what do you think creativity is anyway? It’s pretty much just repeated random variation filtered by criteria for good and bad.

This is sort of like modern debates about “free will”: what you’ve got to work with in the physical world is randomness and determinacy. That’s it. What you want to identify as “free will” in humans you have to whip up out of just those two ingredients.

Unless you’re a dualist of some flavor, of course.

135

SpeakerToManagers 03.14.11 at 4:47 am

what you’ve got to work with in the physical world is randomness and determinacy.

No, you’ve also got classical (as opposed to quantum) chaos, which is neither random nor deterministic in the normal senses of those words. And it’s the most important factor when you’re dealing with systems built up from 6 or 8 entangled self-organized layers, like the human mind: biomolecules -> organelles (axons, dendrites, etc.) -> neurons -> circuits -> cell layers -> maps -> specialized areas (cerebellum, Broca’s region, etc.) -> brain -> nervous system is one way to slice it.
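The classical-chaos point can be illustrated with the logistic map, a standard textbook example (my illustration, not anything from the thread): the update rule is strictly deterministic, yet two trajectories starting one ten-millionth apart end up bearing no resemblance to each other, which is why such systems are unpredictable in practice despite being deterministic in principle.

```python
def logistic(x, r=4.0):
    """One step of the logistic map: deterministic, no randomness anywhere."""
    return r * x * (1.0 - x)

# Two starting points one ten-millionth apart.
a, b = 0.4, 0.4000001
max_gap = 0.0
for step in range(50):
    a, b = logistic(a), logistic(b)
    max_gap = max(max_gap, abs(a - b))

# The tiny initial difference is amplified step by step until the two
# trajectories are effectively unrelated, although each one remains
# fully determined by its starting value.
print(f"gap after 50 steps: {abs(a - b):.3f}, max gap seen: {max_gap:.3f}")
```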

136

ajay 03.14.11 at 11:07 am

apparently you think that actually learning something new to you counts as some sort of beat-down, and well, I get enough of your sort of oppositional attitude from my own students.

I am utterly dumbfounded to learn that SoV doesn’t always get on well with his students.

137

ScentOfViolets 03.15.11 at 1:45 am

No, you’ve also got classical (as opposed to quantum) chaos, which is neither random nor deterministic in the normal senses of those words.

Nope. Not true. Not for this sort of stuff. Let me quote Scott Aaronson over at Shtetl-Optimized:

An algorithm doesn’t have to be something whose properties we can prove. It doesn’t have to work all the time. It doesn’t have to be “serial” or “deterministic” or “logic-based.” It can be inspired by evolution or any other natural process. Essentially any law-governed process you can possibly imagine can be modeled algorithmically—in other words, simulated by a Turing machine.

In fact, short of doing things like solving the halting problem, essentially the only way the brain could fail to be simulable by a Turing machine would be if it wasn’t governed by physical laws at all.

If rather than sophistically evading this point, the AI critics could show me that they understood it, I’d be much more interested in talking to them.

Note, btw, we had someone up above claim that neural nets didn’t follow an algorithm. It’s this sort of thing that exasperates me, when someone is only half-educated but feels that they are an expert; enough so that they feel obliged to slap anyone down who points out a) that that Just Ain’t So, and b) that this is well-known in the field itself.

In any event, these computationally difficult processes are irrelevant to the question. All you’ve got to work with, I repeat, is determinism and randomness. It’s sort of hard to go from those basic ingredients to what a lot of people apparently mean by “creativity”. Apparently it’s such an ineffable process that it’s impossible to even conceive of it being nothing more than an application of random variation coupled with fitness selection rules. Even if they are very . . . complicated :-)

138

BlaiseP 03.15.11 at 2:26 am

I don’t like the term AI. Machine reasoning seems to make more sense, but I guess we’re stuck with AI, much as we’re stuck with “computer”, though precious little actual computing is done on them anymore. I write both rules-based AI and neural networks, usually a combination, mostly doing prosaic policy and quality control apps.

AI suffers from a serious problem, overtraining. We call it “brittle” AI, a system which gets hidebound and can’t adapt to even the smallest changes. We’ve all met people like that, as well. For some reason, it’s a serious irritation on the assembly lines: a robot which fails too much product gets blamed for the problem, not the upstream process which has obviously changed in some way the robot has spotted. The operators’ first reaction is to open the QC slalom gates a little wider, which lets bad product through. It’s even more interesting, watching a group of underwriters override a credit scoring app.

It’s a perception problem: intelligence isn’t exactly equal to reasoning. We built these robots to take the drudgery out of inspection and enforce policy, only to find ourselves instinctively doubting their conclusions.
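BlaiseP’s “brittle” failure mode can be caricatured in a few lines: a toy QC gate that accepts only the exact range of part widths seen during training, then starts failing perfectly serviceable parts after a small upstream drift. All the numbers here (nominal width, tolerance, drift) are invented for the illustration.

```python
import random

random.seed(1)  # fixed seed for a reproducible run

# "Training": 200 part widths from a well-behaved process (nominal 10 mm).
train = [10.0 + random.gauss(0.0, 0.05) for _ in range(200)]

# The brittle gate: accept only widths inside the exact range seen in training.
lo, hi = min(train), max(train)
def accept(width):
    return lo <= width <= hi

# Upstream drift of 0.2 mm: the parts are still serviceable, but the
# hidebound gate starts failing most of them.
shifted = [10.2 + random.gauss(0.0, 0.05) for _ in range(200)]
fail_rate = sum(not accept(w) for w in shifted) / len(shifted)
print(f"fail rate after drift: {fail_rate:.0%}")
```

The gate is perfect on everything it was trained on and useless the moment the process moves, which is the pattern BlaiseP describes: the robot gets the blame, and widening the gates just lets bad product through.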
