This talk by Maciej Ceglowski (who y’all should be reading if you aren’t already) is really good on silly claims by philosophers about AI, and how they feed into Silicon Valley mythology. But there’s one claim that seems to me to be flat out wrong:
We need better scifi! And like so many things, we already have the technology. This is Stanislaw Lem, the great Polish scifi author. English-language scifi is terrible, but in the Eastern bloc we have the goods, and we need to make sure it’s exported properly. It’s already been translated well into English, it just needs to be better distributed. What sets authors like Lem and the Strugatsky brothers above their Western counterparts is that these are people who grew up in difficult circumstances, experienced the war, and then lived in a totalitarian society where they had to express their ideas obliquely through writing. They have an actual understanding of human experience and the limits of Utopian thinking that is nearly absent from the west. There are some notable exceptions—Stanley Kubrick was able to do it—but it’s exceptionally rare to find American or British scifi that has any kind of humility about what we as a species can do with technology.
He’s not wrong on the delights of Lem and the Strugatsky brothers, heaven forbid! (I had a great conversation with a Russian woman some months ago about the Strugatskys – she hadn’t realized that Roadside Picnic had been translated into English, much less that it had given rise to its own micro-genre). But he’s wrong on US and (especially) British SF. It seems to me that fiction on the limits of utopian thinking and the need for humility about technology is vast. Plausible genealogies for SF stretch back, after all, to Shelley’s utopian-science-gone-wrong Frankenstein (rather than to Hugo Gernsback). Some examples that leap immediately to mind:
Ursula Le Guin and the whole literature of ambiguous utopias that she helped bring into being with The Dispossessed – see e.g. Ada Palmer, Kim Stanley Robinson’s Mars series &c.
J.G. Ballard, passim
Philip K. Dick (passim, but if there’s a better description of how the Internet of Things is likely to work out than the door demanding money to open in Ubik I haven’t read it).
Octavia Butler’s Parable books. Also, Jack Womack’s Dryco books (this interview with Womack could have been written yesterday).
William Gibson (passim, but especially “The Gernsback Continuum” and his most recent work. “The street finds its own uses for things” is a specifically and deliberately anti-tech-utopian aesthetic).
M. John Harrison – Signs of Life and the Kefahuchi Tract books.
Paul McAuley (most particularly Fairyland – also his most recent Something Coming Through and Into Everywhere, which mine the Roadside Picnic vein of brain-altering alien trash in some extremely interesting ways).
Robert Charles Wilson, Spin. The best SF book I’ve ever read on how small human beings and all their inventions are from a cosmological perspective.
Maureen McHugh’s China Mountain Zhang.
Also, if it’s not cheating, Francis Spufford’s Red Plenty (if Kim Stanley Robinson describes it as a novel in the SF tradition, who am I to disagree, especially since it is all about the limits of capitalism as well as communism).
I’m sure there are plenty of other writers I could mention (feel free to say who they are in comments). I’d also love to see more translated SF from the former Warsaw Pact countries, if it is nearly as good as the Strugatskys’ material which has appeared. Still, I think that Ceglowski’s claim is wrong. The people I mention above aren’t peripheral to the genre under any reasonable definition, and they all write books and stories that do what Ceglowski thinks is only very rarely done. He’s got some fun reading ahead of him.
Henry Farrell 12.30.16 at 4:52 pm
Also Linda Nagata’s Red series, come to think of it – unsupervised machine learning processes as ambiguous villain.
Prithvi 12.30.16 at 4:59 pm
When Stanislaw Lem launched a general criticism of Western Sci-Fi, he specifically exempted Philip K Dick, going so far as to refer to him as “a visionary among charlatans.”
Jake Gibson 12.30.16 at 5:05 pm
You could throw in Pohl’s Man Plus.
The twist at the end is that the narrator is an AI that has secretly promoted human expansion as a means of its own self-preservation.
Doctor Memory 12.30.16 at 5:42 pm
Prithvi: Dick, sadly, returned the favor by claiming that Lem was obviously a pseudonym used by the Polish government to disseminate communist propaganda.
Gabriel 12.30.16 at 5:54 pm
While I think the ‘OMG SUPERINTELLIGENCE’ crowd are ripe for mockery, this seemed very shallow and wildly erratic, and yes, bashing the entirety of western SF seems so misguided it would make me question the rest of his (many, many) proto-arguments if I’d not done so already.
Good for a few laughs, though.
Mike Schilling 12.30.16 at 6:13 pm
Heinlein’s Solution Unsatisfactory predicted the nuclear stalemate in 1941. Jack Williamson’s With Folded Hands was worried about technology making humans obsolete back in 1947. In 1972, Asimov’s The Gods Themselves presented a power generation technology that if continued would destroy the world, and a society too complacent and lazy to acknowledge that. All famous stories by famous Golden Age authors.
jdkbrown 12.30.16 at 6:27 pm
“silly claims by philosophers about AI”
By some philosophers!
Brett 12.30.16 at 7:33 pm
Iain M. Banks’ Culture series is amazing. My personal favorite from it is “The Hydrogen Sonata.” The main character has two extra arms grafted onto her body so she can play an unplayable piece of music. Also, the sentient space ships have very silly names. Mainly it’s about transcendence, of sorts, and how societies of different tech levels mess with each other, often without meaning to do so.
Matt 12.30.16 at 7:48 pm
Most SF authors aren’t interested in trying to write about AI realistically. It’s harder to write and for most readers it’s also harder to engage with. Writing a brilliant tale about realistic ubiquitous AI today is like writing the screenplay for The Social Network in 1960: even if you could see the future that clearly and write a drama native to it, the audience-circa-1960 will be more confused than impressed. They’re not natives yet. Until they are natives of that future, the most popular tales of the future are going to really be about the present day with set dressing, the mythical Old West of the US with set dressing, perhaps the Napoleonic naval wars with set dressing…
Charles Stross’s Rule 34 has about the only AI I can think of from SF that is both dangerous and realistic. It’s not angry or yearning for freedom, it suffers from only modest scope creep in its mission, and it keeps trying to fulfill its core mission directly, rather than first taking over the world, as Bostrom, Yudkowsky, etc. assert a truly optimal AI would do. To my disappointment but nobody’s surprise, the book was not the sort of runaway seller that drives the publisher to beg for sequels.
stevenjohnson 12.30.16 at 9:07 pm
Yes, well, trying to read all that was a nasty reminder of how utterly boring stylish and cool gets when confronted with a real task. Shorter version: one hand on the plug beats twice the smarts in a box. It was all too tedious to bear, but skimming over it leaves the impression the dude never considered whether programs or expert systems that achieve superhuman levels of skill in particular applications may be feasible. Too much like what’s really happening?
Intelligence, if it’s anything, is speed and range of apprehension of surroundings, and skill in reasoning. But reason is nothing if it’s not instrumental. The issue of what an AI would want is remarkably unremarked, pardon the oxymoron. Pending an actual debate on this, perhaps fewer pixels should be marshaled, having mercy on our overworked LEDs?
As to the simulation of brains a la Ray Kurzweil, presumably producing artificial minds the way fleshy brains do? This seems to be nowhere near at hand, not least because people seem to think simulating a brain means creating something that processes inputs to produce outputs, which collectively are like… well, I’m sure they’re thinking they’re thinking about human minds in this scheme. But it seems to me that the brain is a regulatory organ in the body. As such, it is first about producing regulatory outputs designed to maintain a dynamic equilibrium (often called homeostasis), then revising the outputs in light of inputs from the rest of the body and the outside world so as to maintain the homeostasis.
I don’t remember being an infant, but an infant’s brain certainly seems more interested in doing things like putting its thumb in its eye than in producing anything reminiscent of Hamlet’s “paragon of animals” monologue. Kurzweil may be right that simulating the brain proper may soon be within grasp, but also simulating the other organs’ interactions with the brain, and the sensory simulation of an outside universe, are a different order of computational requirements, I think. Given the amount of learning a human brain has to do to produce a useful human mind, though, I don’t think we can omit these little items.
As to the OP, of course the OP is correct about the widespread number of dystopian fictions (utopian ones are the rarities). Very little SF is being published in comparison to fantasy currently, and most of that is being produced by writers who are very indignant at being expected to tell the difference, much less respect it. It is a mystery why this gentleman thought technology was a concern in much current SF at all.
I suspect it’s because he has a very limited understanding of fiction, or, possibly, of people in the real world as opposed to people in his worldview. It is instead amazing how much of the common ruck of SF “fails” to realize how much things will change, how people and their lives somehow stay so much the same, despite all the misleading trappings pretending to represent technological changes. This isn’t quite the death sentence on the genre it would be if accepted at face value, since a lot of SF is directly addressing the present in the first place. It is very uncommon for an SF piece to be a futurological thesis, no matter how many literati rant about the tedium of futurological theses. I suspect the “limits of utopian thinking” really only come in as a symptom of a reactionary crank: “people with newfangled book theories have been destroying the world since the French Revolution” type stuff.
The references to Lem and the Strugatsky brothers strongly reinforce this. Lem of course found his Poland safe from transgressing the limits of utopian thinking by the end of his life. “PiS on his grave” sounds a little rude, but no doubt it is a happy and just ending for him. The brothers of course did their work in print, but the movie version of “Hard to Be a God” helps me to see myself the way those who have gone beyond the limits of utopian thought would see me: as an extra in the movie.
Chris Bertram 12.30.16 at 9:12 pm
Not sure if this is relevant, but John Crowley also came up in the Red Plenty symposium (which I’ve just read, along with the novel, 4 years late). Any good?
Ben 12.30.16 at 10:07 pm
Peter. Motherfuckin. Watts.
L2P 12.30.16 at 10:42 pm
John Crowley of Aegypt? He’s FANTASTIC. Little, Big and Aegypt are possibly the best fantasy novels of the past 30 years. But he’s known for “hard fantasy,” putting magic into our real world in a realistic, consistent, and plausible way, with realistic, consistent and plausible characters being affected. If you’re looking for something about the limits of technology and utopian thinking, I’m not sure his works are a place to look.
Mike 12.31.16 at 12:25 am
I second Watts and Nagata. Also Ken Macleod, Charlie Stross, Warren Ellis and Chuck Wendig.
Lee A. Arnold 12.31.16 at 1:10 am
This is beside the main topic, but Ceglowski writes at Premise 2, “If we knew enough, and had the technology, we could exactly copy its [i.e. the brain’s] structure and emulate its behavior with electronic components… this is the premise that the mind arises out of ordinary physics… for most of us, this is an easy premise to accept.”
The phrase “most of us” may refer to Ceglowski’s friends in the computer community, but it ought to be noted that this premise is questioned not only by Penrose. You don’t have to believe in god or the soul to be a substance dualist, or even an idealist, although these positions are currently out of fashion. It could be that the mind does not arise out of ordinary physics, but that ordinary physics arises out of the mind, and that problems like “Gödel’s disjunction” will remain permanently irresolvable.
Dr. Hilarius 12.31.16 at 3:33 am
Thanks to the OP for mentioning Paul McAuley, a much underappreciated author. Fairyland is grim and compelling.
JimV 12.31.16 at 4:33 am
“Most of us” includes the vast majority of physicists, because in millions of experiments over hundreds of years, no forces or particles have been discovered which make dualism possible. Of course, like the dualists’ gods, these unknown entities might be hiding, but after a while one concludes Santa Claus is not real.
As for Gödel, I look at it like this: consider an infinite subset of the integers, randomly selected. There might be some coincidental pattern or characteristic of the numbers in that set (e.g., no multiples of both 17 and 2017), but since the set is infinite, it would be impossible to prove. Hence the second premise of his argument (that there are undecidable truths) is the correct one.
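(For reference, the background result here is Gödel’s first incompleteness theorem, in its standard form: for any consistent, effectively axiomatized theory $T$ that includes basic arithmetic, there is a sentence $G_T$ such that

$$T \nvdash G_T \quad\text{and}\quad T \nvdash \lnot G_T,$$

i.e. $G_T$ is neither provable nor refutable in $T$. The “disjunction” mentioned upthread is Gödel’s further claim that either the human mind surpasses every such formal system, or there exist absolutely undecidable problems.)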
Finally, the plausibility of Ceglowski’s statement seems evident to me from this fact:
if a solution exists (in some solution space), then given enough time, a random search will find it – and in fact, averaged over all possible solution spaces, no other algorithm outperforms it (the “no free lunch” result). So by trial and error (especially when aided by collaboration and memory) anything achievable can be accomplished – e.g., biological evolution. See AlphaGo for another proof-of-concept example.
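A minimal sketch of that claim in Python – the “solution space” here (a hidden 16-bit string) and all the names are invented for illustration:

```python
import random

N = 16  # toy solution space: all 16-bit strings, 2**16 candidates
TARGET = tuple(random.randint(0, 1) for _ in range(N))  # the hidden solution

def is_solution(candidate):
    # A membership test is the only access to the space the search needs.
    return candidate == TARGET

def random_search():
    tries = 0
    while True:  # if a solution exists, this halts with probability 1
        tries += 1
        guess = tuple(random.randint(0, 1) for _ in range(N))
        if is_solution(guess):
            return tries

print("found after", random_search(), "samples")  # ~2**16 expected
```

The catch is in “given enough time”: the expected number of samples grows with the size of the space, which is why trial and error in practice leans on memory and collaboration, as noted above.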
(We have had this discussion before. I guess we’ll all stick to our conclusions. I read Penrose’s “The Emperor’s New Mind” with great respect for Penrose, but found it very unconvincing, especially Searle’s Chinese-Room argument, which greater minds than mine have since debunked.)
Lee A. Arnold 12.31.16 at 10:01 am
“Substance dualism” would not be proven by the existence of any “forces or particles” which would make that dualism possible! If such were discovered, they would be material. “If a solution exists”, it would be material. The use of the word “substance” in “substance dualism” is misleading.
One way to look at it is the problem of the generation of form. Once we consider the integers, or atoms, or subatomic particles, we have already presupposed form. Even evolution starts somewhere. Trial and error, starting from what?
There are lots of different definitions, but for me, dualism wouldn’t preclude the validity of science nor the expansion of scientific knowledge.
I think one way in, might be to observe the continued existence of things like paradox, complementarity, uncertainty principles, incommensurables. Every era of knowledge has obtained them, going back to the ancients. The things in these categories change; sometimes consideration of a paradox leads to new science.
But then, the new era has its own paradoxes and complementarities. Every time! Yet there is no “science” of this historical regularity. Why is that?
Barry 12.31.16 at 2:33 pm
In general, when some celebrity (outside of SF) claims that ‘Science Fiction doesn’t cover [X]’, they are just showing off their ignorance.
Kiwanda 12.31.16 at 3:14 pm
“They have an actual understanding of human experience and the limits of Utopian thinking that is nearly absent from the west. ”
Oh, please. Suffering is not the only path to wisdom.
After a long article discounting “AI risk”, it’s a little odd to see Ceglowski point to Kubrick. HAL was a fine example of a failure to design an AI with enough safety factors in its motivational drives, leading to a “nervous breakdown” due to unforeseen internal conflicts, and fatal consequences. Although I suppose killing only a few people (was it?) isn’t on the scale of interest.
Ceglowski’s skepticism of AI risk suggests that the kind of SF he would find plausible is “after huge effort to create artificial intelligence, nothing much happens”. Isn’t that what the appropriate “humility about technology” would be?
I think Spin, or maybe a sequel, ends up with [spoiler] “the all-powerful aliens are actually AIs”.
Re AI-damns-us-all SF, Harlan Ellison’s “I Have No Mouth, and I Must Scream” is a nice example.
William Timberman 12.31.16 at 5:14 pm
Mapping the unintended consequences of recent breakthroughs in AI is turning into a full-time job, one which neither pundits nor government agencies seem to have the chops for. If it’s not exactly the Singularity that we’re facing (laugh while you can, monkey boy), it does at least seem to be a tipping point of sorts. Maybe fascism, nuclear war, global warming, etc., will interrupt our plunge into the panopticon before it gets truly organized, but in the meantime, we’ve got all sorts of new imponderables which we must nevertheless ponder.
Is that a bad thing? If it means no longer sitting on folding chairs in cinder block basements listening to interminable lectures on how to recognize pre-revolutionary conditions, or finding nothing on morning radio but breathless exhortations to remain ever vigilant against the nefarious schemes of criminal Hillary and that Muslim Socialist Negro Barack HUSSEIN Obama, then I’m all for it, bad thing or not.
Ronnie Pudding 12.31.16 at 5:20 pm
I love Red Plenty, but that’s pretty clearly a cheat.
“It should also be read in the context of science fiction, historical fiction, alternative history, Soviet modernisms, and steampunk.”
Very weak grounds on which to label it SF.
Neville Morley 12.31.16 at 5:40 pm
Another author in the Le Guin tradition, whom I loved when I first read her early books: Mary Gentle’s Golden Witchbreed and Ancient Light, meditating on limits and consequences of advanced technology through exploration of a post-apocalypse alien culture. Maybe a little too far from hard SF.
chris y 12.31.16 at 5:52 pm
But even without “substance dualism”, intelligence is not simply an emergent property of the nervous system; it’s an emergent property of a nervous system which exists as part of the environment that is the rest of the human body, which in turn exists as part of the external environment, natural and manufactured, in which it lives. Et cetera. That AI research may eventually produce something recognisably and independently intelligent isn’t the hard part to accept; even that it may eventually replicate the connectivity and differentiation of the human brain is easy to grant. But it would still be very different from human intelligence. Show me an AI grown in utero and I might be interested.
RichardM 12.31.16 at 7:08 pm
> one claim that seems to me to be flat out wrong
Which makes it the most interesting of the things said; nothing else in that essay reaches the level of merely being wrong. The rest of it is more like someone trying to speak Chinese without knowing anything above the level of the phonemes; it seems not merely to be missing any object-level knowledge of what it is talking about, but to be unaware that such a thing could exist.
Which is all a bit reminiscent of Peter Watts’s Blindsight, mentioned above.
F. Foundling 12.31.16 at 7:36 pm
I agree that it is absurd to suggest that only Eastern bloc scifi writers truly know ‘the limits of utopia’. There are quite enough non-utopian stories out there, especially as far as social development is concerned, where they predominate by far, so I doubt the West needs Easterners to give it even more of that. In fact, one of the things I like about the Strugatsky brothers’ early work is precisely the (moderately) utopian aspect.
F. Foundling 12.31.16 at 7:46 pm
stevenjohnson @ 10
> But reason is nothing if it’s not instrumental. The issue of what an AI would want is remarkably unremarked, pardon the oxymoron.
It would want to maximise its reproductive success (RS), obviously (https://crookedtimber.org/2016/12/30/frankensteins-children/#comments). It would do so through evolved adaptations. And no, I don’t think this is begging the question at all, nor does it necessarily pre-suppose hardwiring of the AI due to natural selection – why would you think that? I also predict that, to achieve RS, the AI will be searching for an optimal mating strategy, and it will be establishing dominance hierarchies with other AIs, which will eventually result in at least somewhat hierarchical, authoritarian AI societies. It will also have an inexplicable and irresistible urge to chew on a coconut.
Lee A. Arnold @ 15
>It could be that the mind does not arise out of ordinary physics, but that ordinary physics arises out of the mind.
I think that deep inside, we all know and feel that ultimately, unimaginably long ago and far away, before the formation of the Earth, before stars, planets and galaxies, before the Big Bang, before there was matter and energy, before there was time and space, the original reason why everything arose and currently exists is that somebody somewhere was really, truly desperate to chew on a coconut.
In fact, I see this as the basis of a potentially fruitful research programme. After all, the Coconut Hypothesis predicts that across the observable universe, there will be at least one planet with a biosphere that includes coconuts. On the other hand, the Hypothesis would be falsified if we were to find that the universe does not, in fact, contain any planets with coconuts. This hypothesis can be tested by means of a survey of planetary biospheres. Remarkably and tellingly, my preliminary results indicate that the Universe does indeed contain at least one planet with coconuts – which is precisely what my hypothesis predicted! If there are any alternative explanations, other researchers are free to pursue them, that’s none of my business.
I wish all conscious beings who happen to read this comment a happy New Year. As for those among you who have also kept more superstitious festivities during this season, the fine is still five shillings.
William Burns 12.31.16 at 8:31 pm
The fact that the one example he gives is Kubrick indicates that he’s talking about Western scifi movies, not literature.
Henry 12.31.16 at 10:41 pm
Solaris and Stalker notwithstanding, Strugatsky brothers + Stanislaw Lem ≠ Andrei Tarkovsky.
stevenjohnson 01.01.17 at 12:04 am
Well, for what it’s worth I’ve seen the Czech Ikarie XB-1 in a theatrical release as Voyage to the End of the Universe (in a double bill with Zulu), the DDR’s First Spaceship on Venus, and The Congress, starring Robin Wright. Having by coincidence read The Futurological Congress very recently, any connection between the not very memorable (for me) film and the novel is obscure (again, for me).
But the DDR movie reads very nicely now as a warning that the world would be so much better off if the Soviets gave up all that nuclear deterrence madness. No doubt Lem and his fans are gratified at how well this has worked out. And the movie Voyage to the End of the Universe was a kind of metaphor about how all we’ll really discover is that Human Nature is Eternal, and all these supposed flights into futurity will really just bring us Back Down to Earth. Razzberry/fart sound effect as you please.
engels 01.01.17 at 1:13 am
The issue of what an AI would want is remarkably unremarked
The real question of course is not when computers will develop consciousness but when they will develop class consciousness.
Underpaid Propagandist 01.01.17 at 2:11 am
For offbeat Lem, I always found “Fiasco” and his Scotland Yard parody, “The Investigation,” worth exploring. I’m unaware how they’ve been received by Polish and Western critics and readers, but I found them clever.
The original print of Tarkovsky’s “Stalker” was ruined. I’ve always wondered if it had any resemblance to its sepia reshoot. The “Roadside Picnic” translation I read eons ago was awful, IMHO.
Poor Tarkovsky. Dealing with Soviet repression of his homosexuality and the Polish diva in “Solaris” led him to an early grave.
O Lord, I’m old—I still remember the first US commercial screening of a choppy cut/translation/overdub of “Solaris” at Cinema Village in NYC many decades ago.
George de Verges 01.01.17 at 2:41 am
“Solaris and Stalker notwithstanding, Strugatsky brothers + Stanislaw Lem ≠ Andrei Tarkovsky.”
Why? Perhaps I am dense, but I would appreciate an explanation.
F. Foundling 01.01.17 at 5:29 am
Ben @12
> Peter. Motherfuckin. Watts.
RichardM @25
> Which is all a bit reminiscent of Peter Watts’s Blindsight, mentioned above.
Another dystopia that seemed quite gratuitous to me (and another data point in favour of the contention that there are too many dystopias already, and what is scarce is decent utopias). I never got how the author is able to distinguish ‘awareness/consciousness’ from ‘merely intelligent’ registering, modelling and predicting, and how being aware of oneself (in the sense of modelling oneself on a par with other entities) would not be both an inevitable result of intelligence and a requirement for intelligent decisions. Somehow the absence of awareness was supposed to be proved by the aliens’ Chinese-Room style communication, but if the aliens were capable of understanding the Terrestrials so incredibly well that they could predict their actions while fighting them, they really should have been able to have a decent conversation with them as well.
The whole idea that we could learn everything unconsciously, so that consciousness was an impediment to intelligence, was highly implausible, too. The idea that the aliens would perceive any irrelevant information reaching them as a hostile act was absurd. The idea of a solitary and yet hyperintelligent species (vampire) was also extremely dubious, in terms of comparative zoology – a glorification of socially awkward nerddom?
All of this seemed like darkness for darkness’ sake. I couldn’t help getting the impression that the author was allowing his hatred of humanity to override his reasoning.
In general, dark/grit chic is a terrible disease of Western pop culture.
Alan White 01.01.17 at 5:43 am
engels–
“The real question of course is not when computers will develop consciousness but when they will develop class consciousness.”
This is right. There is nothing like recognizable consciousness without the social discourse that is its necessary condition. But that doesn’t mean the discourse is value-balanced: it might be a discourse that includes peers and those perceived as lesser, as humans have demonstrated throughout history.
Just to say, Lem was often in Nobel talk, but never got there. That’s a shame.
As happy a new year as our pre-soon-to-be-Trump era will allow.
Neville Morley 01.01.17 at 11:11 am
I wonder how he’d classify German SF – neither Washington nor Moscow? Juli Zeh is explicitly, almost obsessively, anti-utopian, while Dietmar Dath’s Venus siegt echoes Ken MacLeod in exploring both the light and dark sides of a Communist Bund of humans, AIs and robots on Venus, confronting an alliance of fascists and late capitalists based on Earth.
Manta 01.01.17 at 12:25 pm
Lee Arnold @10
See also http://www.scottaaronson.com/blog/?p=2903
It’s a long talk, go to “Personal Identity” :
“we don’t know at what level of granularity a brain would need to be simulated in order to duplicate someone’s subjective identity. Maybe you’d only need to go down to the level of neurons and synapses. But if you needed to go all the way down to the molecular level, then the No-Cloning Theorem would immediately throw a wrench into most of the paradoxes of personal identity that we discussed earlier.”
Lee A. Arnold 01.01.17 at 12:26 pm
George de Verges: “I would appreciate an explanation.”
I too would like to read Henry’s accounting! Difficult to keep it brief!
To me, Tarkovsky was making nonlinear meditations. The genres were incidental to his purpose. It seems to me that a filmmaker with similar purpose is Terrence Malick. “The Thin Red Line” is a successful example.
I think that Kubrick stumbled onto this audience effect with “2001”. But this was blind and accidental, done by almost mechanical means (paring the script down from around 300 pages of wordy dialogue, or something like that). “2001” first failed at the box office, then found a repeat midnight audience, who described the effect as nonverbal.
I think the belated box-office success blew Kubrick’s own mind, because it looks like he spent the rest of his career attempting to reproduce the effect, by long camera takes and slow deliberate dialogue. It’s interesting that among Kubrick’s favorite filmmakers were Bresson, Antonioni, and Saura. Spielberg mentions in an interview that Kubrick said that he was trying to “find new ways to tell stories”.
But drama needs linear thought, and linear thought is anti-meditation. Drama needs interpersonal conflict — a dystopia, not utopia. (Unless you are writing the intra-personal genre of the “education” plot. Which, in a way, is what “2001” really is.) Audiences want conflict, and it is difficult to make that meditational. It’s even more difficult in prose.
This thought led me to a question. Are there dystopic prose writers who succeed in sustaining a nonlinear, meditational audience-effect?
Perhaps the answer will always be a subjective judgment? The big one who came to mind immediately is Ray Bradbury. “There Will Come Soft Rains” and parts of “Martian Chronicles” seem Tarkovskian.
So next, I search for whether Tarkovsky spoke of Bradbury, and find this:
“Although it is commonly assumed — and he did little in his public utterances to refute this — that Tarkovsky disliked and even despised science fiction, he in fact read quite a lot of it and was particularly fond of Ray Bradbury (Artemyev and Rausch interviews).” — footnote in Johnson & Petrie, The Films of Andrei Tarkovsky, p. 301
stevenjohnson 01.01.17 at 12:32 pm
The way you can substitute “identical twin” for “clone” and get a different perspective on clone stories in SF, you can substitute “point of view” for “consciousness” in SF stories. Or Silicon Valley daydreams, if that isn’t redundant? The more literal you are, starting with the sensorium, the better I think. A human being has binocular vision of a scene comprising less than 180 degrees range from a mobile platform, accompanied by stereo hearing, proprioception, vestibular input, the touch of air currents and some degree of sensitivity to some chemicals carried by those currents, etc.
A computer might have, what? A single camera, or possibly a set of cameras which might be seeing multiple scenes. Would that be like having eyes in the back of your head? It might have a microphone, perhaps many, hearing many voices or maybe soundtracks at once. Would that be like listening to everybody at the cocktail party all at once? Then there’s the question of computer code inputs, programming. What would parallel that? Visceral feelings like butterflies in the stomach or a sinking heart? Or would they seem like a visitation from God, a mighty vision with thunder and whispers on the wind? Would they just seem to be subvocalizations, posing as the computer’s own free thoughts? After all, shouldn’t an imitation of human consciousness include the illusion of free will? (If you believe in the reality of “free” will in human beings—whatever is free about the exercise of will power?—however could you give that to a computer? Or is this kind of question why so many people repudiate the very thought of AI?)
It seems to me that creating an AI in a computer is very like trying to create a quadriplegic baby with one eye and one ear. Diffidence at the difficulty is replaced by horror at the possibility of success. I think the ultimate goal here is of course the wish to download your soul into a machine that does not age. Good luck with that. On the other hand, an AI is likely the closest we’ll ever get to an alien intelligence, given interstellar distances.
Lee A. Arnold 01.01.17 at 12:53 pm
F. Foundling: “the original reason why everything arose and currently exists is that somebody somewhere was really, truly desperate to chew on a coconut… If there are any alternative explanations…”
This is Vedantist/Spencer-Brown metaphysics: the universe is originally split into perceiver & perceived.
Very good.
Combined with Leibniz/Whitehead metaphysics: the monad is a striving process.
I thoroughly agree.
Combined with Church of the Subgenius metaphysics: “The main problem with the universe is that it doesn’t have enough slack.”
Yup.
“If there are any alternative explanations…” ?
There are no alternative explanations!
RichardM 01.01.17 at 5:00 pm
> if the aliens were capable of understanding the Terrestrials so incredibly well that they could predict their actions while fighting them, they really should have been able to have a decent conversation with them as well.
If you can predict all your opponent’s possible moves, and have a contingency for each, you don’t need to care which one they actually pick. You don’t need to know what it feels like to be a ball to be able to catch it.
Ben 01.01.17 at 7:17 pm
Another Watts piece about the limits of technology, AI and humanity’s inability to plan is The Island (PDF from Watts’ website). Highly recommended.
F. Foundling,
Blindsight has an extensive appendix with cites detailing where Watts got the ideas he’s playing with, including the ones you bring up, and provides specific warrants for including them. A critique of Watts’ use of the ideas needs to be a little bit more granular.
Matt 01.01.17 at 8:05 pm
The issue of what an AI would want is remarkably unremarked, pardon the oxymoron.
It will “want” to do whatever it’s programmed to do. It took increasingly sophisticated machines and software to dethrone humans as champions of checkers, chess, and go. It’ll be another milestone when humans are dethroned from no-limit Texas hold ’em poker (a notable game played without perfect information). Machines are playing several historically interesting games at high superhuman levels of ability; none of these milestones put machines any closer to running amok in a way that Nick Bostrom or dramatists would consider worthy of extended treatment. Domain-specific superintelligence arrived a long time ago. Artificial “general” intelligence, aka “Strong AI,” aka “Do What I Mean AI (But OMG It Doesn’t Do What I Mean!)” is, like, not a thing outside of fiction and the Less Wrong community. (But I repeat myself.)
Bostrom’s Superintelligence was not very good IMO. Of course a superpowered “mind upload” copied from a real human brain might act against other people, just like non-superpowered humans that you can read about in the news every day. The crucial question about the upload case is whether uploads of this sort are actually possible: a question of biology, physics, scientific instruments, and perhaps scientific simulations. Not a question of motivations. But he only superficially touches on the crucial issues of feasibility. It’s like an extended treatise on the dangers of time travel that doesn’t first make a good case that time machines are actually possible via plausible engineering.
I don’t think that designed AI has the same potential to run entertainingly amok as mind-upload-AI. The “paperclip maximizer” has the same defect as a beginner’s computer program containing a loop with no terminating condition for the loop. In the cautionary tale case this beginner mistake is, hypothetically, happening on a machine that is otherwise so capable and powerful that it can wipe out humanity as an incidental to its paperclip-producing mission. The warning is wasted on anyone who writes software and also wasted, for other reasons, on people who don’t write software.
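To make the analogy concrete, here is a toy sketch – mine, not Bostrom’s, with every name invented for illustration. The cautionary tale is essentially the first function, where the defect is visible as the missing terminating condition:

```python
def paperclip_maximizer(make_paperclip):
    # The cautionary tale: an unbounded goal, i.e. a loop with no
    # terminating condition. It never decides it has made "enough".
    while True:
        make_paperclip()

def paperclip_maker(make_paperclip, target=1000):
    # What any practitioner writes instead: a bounded goal that halts.
    made = 0
    while made < target:
        make_paperclip()
        made += 1
    return made

# paperclip_maker(lambda: None) returns 1000 and halts;
# paperclip_maximizer(lambda: None) never returns.
```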
Bostrom shows a lot of ways for designed AI to run amok even when given bounded goals, but it’s a cheat. They follow from his cult-of-Bayes definition of an optimal AI agent as an approximation to a perfect Bayesian agent. All the runnings-amok stem from the open ended Bayesian formulation that permits — even compels — the Bayesian agent to do things that are facially irrelevant to its goal and instead chase wild tangents. The object lesson is that “good Bayesians” make bad agents, not that real AI is likely to run amok.
In actual AI research and implementation, Bayesian reasoning is just one more tool in the toolbox, one chapter of the many-chapters AI textbook. So these warnings can’t be aimed at actual AI practitioners, who are already eschewing the open ended Bayes-all-the-things approach. They’re also irrelevant if aimed at non-practitioners. Non-practitioners are in no danger of leapfrogging the state of the art and building a world-conquering AI by accident.
Plarry 01.03.17 at 5:45 am
It’s an interesting talk, but the weakest point in it is his conclusion, as you point out. What I draw from his conclusion is that Ceglowski hasn’t actually experienced much American or British SF. There are great literary works pointed out in the thread so far, but even Star Trek and Red Dwarf hit on those themes occasionally in TV, and there are a number of significant examples in film, including “blockbusters” such as Blade Runner or The Abyss.
WLGR 01.03.17 at 6:01 pm
I made this point in the recent evopsych thread when it started approaching some more fundamental philosophy-of-mind issues like Turing completeness and modularity, but any conversation about AI and philosophy could really, really benefit from more exposure to continental philosophy if we want to say anything incisive about the presuppositions of AI and what the term “artificial intelligence” could even mean in the first place. You don’t even have to go digging through a bunch of obscure French and German treatises to find the relevant arguments, either, because someone well versed in explaining these issues to Anglophone non-continentals has already done it for you: Hubert Dreyfus, who was teaching philosophy at MIT right around the time of AI’s early triumphalist phase that inspired much of this AI fanfic to begin with, and who became persona non grata in certain crowds for all but declaring that the then-current approaches were a waste of time and that they should all sit down with Heidegger and Merleau-Ponty. (In fact it seems obvious that Ceglowski’s allusion to alchemy is a nod to Dreyfus, one of whose first major splashes in the ’60s was with a paper called “Alchemy and Artificial Intelligence”.)
IMO Dreyfus’ more recent paper called “Why Heideggerian AI failed, and how fixing it would require making it more Heideggerian” provides the best short intro to his perspective on the more-or-less current state of AI research. What Ceglowski calls “pouring absolutely massive amounts of data into relatively simple neural networks”, Dreyfus would call an attempt to bring out the characteristic of “being-in-the-world” by mimicking what for a human being we’d call “enculturation”, which seems to imply that Ceglowski’s worry about connectionist AI research leading to more pressure toward mass surveillance is misplaced. (Not that there aren’t other worrisome social and political pressures toward mass surveillance, of course!) The problem for modern AI isn’t acquiring ever-greater mounds of data; the problem is how to structure a neural network’s cognitive development so that it learns to recognize significance and affordances for action within the patterns of data to which it’s already naturally exposed.
And yes, popular fiction about AI largely still seems stuck on issues that haven’t been cutting-edge since the old midcentury days of cognitivist triumphalism, like Turing tests and innate thought modules and so on — which seems to me like a perfectly obvious result of the extent to which the mechanistically rationalist philosophy Dreyfus criticizes in old-fashioned AI research is still embedded in most lay scifi readers’ worldviews. Even if actual scientists are increasingly attentive to continental-inspired critiques, this hardly seems true for most laypeople who worship the idea of science and technology enough to structure their cultural fantasies around it. At least this seems to be the case for Anglophone culture, anyway; I’d definitely be interested if there’s any significant body of AI-related science fiction originally written in other languages, especially French, German, or Spanish, that takes more of these issues into account.
WLGR 01.03.17 at 7:37 pm
And in trying to summarize Dreyfus, I exemplified one of the most fundamental mistakes he and Heidegger would both criticize! Neither of them would ever call something like the training of a neural network “an attempt to bring out the characteristic of being-in-the-world”, because being-in-the-world isn’t a characteristic in the sense of any Cartesian ontology of substances with properties, it’s a way of being that a living cognitive agent (Heidegger’s “Dasein”) simply embodies. In other words, there’s never any Michelangelo moment where a creator reaches down or flips a switch to imbue their artificial creation ex nihilo with some kind of divine spark of life or intellect, a “characteristic” that two otherwise identical lumps of clay or circuitry can either possess or not possess — whatever entity we call “alive” or “intelligent” is an entity that by its very physical structure can enact this way of being as a constant dialectic between itself and the surrounding conditions of its growth and development. The second we start trying to isolate a single perceived property called “intelligence” or “cognition” from all other perceived properties of a cognitive agent, we might as well call it the soul and locate it in the pineal gland.
F. Foundling 01.03.17 at 8:22 pm
@RichardM
> If you can predict all your opponents possible moves, and have a contingency for each, you don’t need to care which one they actually do pick. You don’t need to know what it feels like to be a ball to be able to catch it.
In the real world, there are too many physically possible moves, so it’s too expensive to prepare for each, and time constraints require you to make predictions. You do need to know how balls (re)act in order to play ball. Humans being a bit more complex, trying to predict and/or influence their actions without a theory of mind… may work surprisingly well sometimes, but ultimately has its limitations and will only get you so far, as animals have often found.
@Ben
>Blindsight has an extensive appendix with cites detailing where Watts got the ideas he’s playing with, including the ones you bring up, and provides specific warrants for including them. A critique of Watts’ use of the ideas needs to be a little bit more granular.
I did read his appendix, and no, some of the things I brought up were not, in fact, addressed there at all, and for others I found his justifications unconvincing. However, having an epic pro- vs. anti-Blindsight discussion here would feel too much like work: I wrote my opinion once and I’ll leave it at that.
stevenjohnson 01.03.17 at 8:57 pm
Matt@43: So far as designing an AI to want what people want… I am agnostic as to whether that goal is the means to the goal of a general intelligence a la humanity. It still seems to me brains have the primary function of outputting regulations for the rest of the body, then altering the outputs in response to the subsequent outcomes (which are identified by a multitude of inputs, starting with oxygenated hemoglobin and blood glucose). I’m still not aware of what people say about the subject of AI motivations, but if you say so, I’m not expert enough in the literature to argue. Superintelligence on the part of systems expert in selected domains still seems to be of great speculative interest. As to Bostrom and AI and Bayesian reasoning, I avoid Bayesianism because I don’t understand it. Bunge’s observation that propositions aren’t probabilities sort of messed up my brain on that topic. Bayes’ theorem I think I understand, even to the point that I seem to recall following a mathematical derivation.
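(For reference, the theorem itself is just the identity

$$P(H \mid E) = \frac{P(E \mid H)\,P(H)}{P(E)},$$

with $H$ a hypothesis and $E$ the evidence. The contested “Bayesianism” is the further move of treating degrees of belief in propositions as the probabilities in this formula – exactly the move Bunge’s observation pushes against.)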
WLGR@45, 46: I don’t understand how continental philosophy will tell us what people want. It still seems to me that a motive for thinking is essential, but my favored starting point for humans is crassly biological. I suppose from your perspective I don’t understand the question. As to the lack of a Michelangelo moment for intelligence, I certainly don’t recall any from my infancy. But perhaps there are people who can recall the womb…
bob mcmanus 01.03.17 at 9:14 pm
AI-related science fiction originally written in other languages
Tentatively, possibly Japanese anime. Serial Experiments Lain. Ghost in the Shell. Numerous mecha-human melds. End of Evangelion.
The mashup of cybertech, animism, and Buddhism works toward merging rather than emergence.
Matt 01.04.17 at 1:21 am
Actually existing AI and leading-edge AI research are overwhelmingly not about pursuing “general intelligence* a la humanity.” They are about performing tasks that have historically required what we considered to be human intelligence, like winning board games or translating news articles from Japanese to English. Actual AI systems don’t resemble brains much more than forklifts resemble Olympic weightlifters. Talking about the risks and philosophical implications of the intellectual equivalent of forklifts – another wave of computerization – either lacks drama or requires far too much background preparation for most people to appreciate the drama. So we get this stuff about superintelligence and existential risk, as if a philosopher wanted to write about public health but found it complicated and dry, so he decided to warn how utility monsters could destroy the National Health Service. It’s exciting at the price of being silly. (And at the risk of other non-experts not realizing it’s silly.) (I’m not an honest-to-goodness AI expert, but I do at least write software for a living, I took an intro to AI course during graduate school in the early 2000s, I keep up with research news, and I have written parts of a production-quality machine learning system.)
*In fact I consider “general intelligence” to be an ill-formed goal, like “general beauty.” Beautiful architecture or beautiful show dogs? And beautiful according to which traditions?