Do As Do Does, Part Deux: Deus Ex Machina or Don’t It?

by John Holbo on July 11, 2023

Let me try to focus my thoughts from the previous post.

Do is as do does.

Agent-like entities are equivalent to real agents. If GPT-4 can trick people into thinking it’s a trickster, it’s a trickster. If you can mimic a chess master, you’re a chess master. It’s fun to wonder whether there will be anything it’s ‘like’ to be superintelligent AI, ending us, if it does, but that’s by the by.

Is this right?

Morally, if it turns out we’ll be ended by mere mindless mimicry of truly intentional extinction, that indeed only adds irony to injury. So: yes.

But, inductively – predictively – it’s less clear what doings foretell what doings down the line.

It’s the difference between 1) and 2)

1) If a thing can make small plans, that succeed, and is getting smarter fast, it is likely to make big plans soon, that will also succeed.

2) If a thing seems like it’s making small plans, that seem like they are succeeding, and if the thing seems like it is getting smarter fast, the thing is likely to soon seem like it’s making big plans that also seem like they are succeeding.

1) seems reasonable but we need 2). 2) isn’t self-evidently senseless, but it equally clearly doesn’t just follow from 1). Do as do does doesn’t mean 1->2.

A kid making small plans tends to grow into an adult making big plans. But a painting of a precocious-looking kid making small plans doesn’t tend to turn into a painting of an adult making big plans – never mind a real adult – no, not even if it’s a painting that could fool you into thinking there’s a real kid there.

To put it another way, these AI alarmists think their argument doesn’t depend on an evil deus ex machina ‘Skynet becomes self-aware’ scenario. Who cares if it’s ‘really conscious’? But in another sense, the argument may depend on a kind of ‘the statue seems so real it steps off the pedestal’ move, which is equally miraculous.

It is obviously way more likely AI will kill us all than Pygmalion comes true. So I’m not refuting 2) with my analogy. Just distinguishing 2) from 1).

{ 49 comments }

1

David in Tokyo 07.11.23 at 3:52 am

The one and two distinction there is spot on. But. It’s not clear that ChatGPT is (or even can) actually make plans. (The Captcha story was embellished along the way, so I don’t know which version of it you’re thinking about, but most versions were actually just BS.)

The point of the underlying technology is that it only deals with undefined tokens, and the appearance of sensibleness in the output is completely epiphenomenal: there’s no grounding of the tokens. For a generic LLM-based bot to “make plans” requires a dumb human or two to misinterpret its output.

Of course, dumb humans are a dime a dozen. Sigh. See my response to this blog post for an example of ChatGPT getting it wrong, and an extremely smart human not noticing.

https://blog.computationalcomplexity.org/2023/07/a-futher-comment-on-chernoff-and-future.html

2

Alan White 07.11.23 at 5:46 am

The crucial difference between 2) and 1) is seeming to x as opposed to declarative propositions that x. But doesn’t that just disguise a possible slippery-slope spectrum of occurrences between them that finally can bridge that logical gap? And that spectrum is plausibly traversed in any number of ways, even randomly. It’s a version of the monkeys-typewriters-Shakespeare thingy–unfettered 2) might eventually reach 1) in any number of unpredictable ways. (Again, as I’ve said, the 1970 movie Colossus: The Forbin Project is a nice portrayal of one route of unintentional consequences lacking prior safeguards.)

3

John Holbo 07.11.23 at 6:26 am

Hi Alan, I’m sure that’s the best way to respond to what I’ve argued but I think it doesn’t work. Consider a somewhat ridiculous thought-experiment. Suppose some prankster starts to leave holograms of people engaged in all sorts of dangerous and criminal mayhem around. This causes a lot of mayhem. People see what they think is a murder in progress and they accidentally drive off the road, running someone down. People flood 911 with calls and real criminals are able to come out to play in all the confusion. You could say: do is as do does. These mayhem-causing holograms are really causing real mayhem. But that’s both right and not right. We’ve got a real problem, yeah. But this isn’t the first day of the great war between the humans and the hologram people, who have become real people threatening our species with extinction. It’s the first day of the dealing-with-all-these-damn-holograms crisis – which, if there are enough holograms, could actually get existential. Something like that. So, to address your typewriters-Shakespeare point, it isn’t like eventually the holograms get so real they get really real. It’s that the pretend plays they put on cause real actions, of a similar sort, to be put on. I’m starting to repeat myself.

4

oldster 07.11.23 at 8:13 am

Your comparison to the painting shows that you are confusing images with makers of images.
Will a simple image (the painting of a child) evolve into a complex image? No reason to think that the image will evolve at all: the painting stays fixed once it’s painted.
Will a device for making images evolve from making simpler images to making more complex images? Perhaps not every such device will do so, no.
But in the case of AIs, we know that this is exactly what this class of devices has done, already. The class of LLMs, plus visual generators like Midjourney, has evolved from producing crude images and imitations to producing sophisticated images and imitations just in the last few years.
We don’t need to posit a radical inflection or disjunction in their evolution, a moment when the previously lifeless statue steps off the pedestal. They are already evolving on a trajectory to produce more and more complicated likenesses, and likenesses of dangerous things that are themselves dangerous things (e.g. likenesses of manipulative behavior that do in fact manipulate humans).
So I can grant that your 1 does not entail your 2. But your 2 does not describe the current AIs. They do not seem to make plans. Instead, they make things that seem to be plans. And they really make them, those things that look like plans. The AIs are not images of makers. They are real makers of images. And their ability to make images keeps evolving, manifestly, even if images themselves typically do not evolve.

5

John Holbo 07.11.23 at 11:08 am

Hi oldster, I should have been clearer. I see your point, and fair enough that I seem to have missed it, but I think I just need to be more careful to talk around it. Here’s a start. Think about how Midjourney has gotten more sophisticated. Suppose you ask Midjourney to make an oil painting of a precocious child making a plan and successfully carrying it out. Now Midjourney goes through some iterations. Now you ask Midjourney to redo the first image, hoping it will be more sophisticated this time. You wouldn’t expect it to draw an adult now, making a better plan, just because the program is more sophisticated. This isn’t really adequate but I feel there is a missing step in these arguments about AI getting smarter. I need to think more about it.

6

Bill Benzon 07.11.23 at 1:54 pm

So, John, I read your previous post, along with the comments, and decided that I’d sleep on it before (possibly) making a comment, a comment which would likely include some links, and most certainly an endorsement of the Rodney Brooks post that had already been endorsed by ALKroonenberg. I sleep on it, wake up, read some more comments, and find this follow-up post, which has me scratching my head: “What do I have to say? What’s Holbo up to?” In your second comment today you say, “…I feel there is a missing step in these arguments about AI getting smarter.” I conclude that you’re trying to poke holes in the doomers’ arguments. OK.

I think there’s a bunch of missing steps, but it’s not clear to me that you can address that without thinking about mechanisms. The current regime in AI is dominated by machine learning, and the most recent excitement centers on LLMs (large language models), like the GPT family from OpenAI. The thing about these models is that they seem to get smarter as they increase in size. That in turn suggests that, when it’s big enough, it’ll become AGI (whatever that is). Whoopee! And that, in the doomer vision, quickly becomes: Yikes! All bets are off, we’re doomed!

What do we mean by increase in size? One thing we mean, but not the only thing, is that we get more parameters. GPT-2 has 1.5 billion parameters. GPT-3 has 175 billion parameters. And GPT-4 has trillions:

The system is said to be based on eight models with 220 billion parameters each, for a total of about 1.76 trillion parameters, connected by a Mixture of Experts (MoE).

Though I’m not expert in these matters, that bit about “eight models with 220 billion parameters each” suggests a change in architecture, but a relatively minor one. It’s more like adding an epicycle or two to the Copernican model than re-centering the whole shebang on the sun rather than the earth.
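To make the arithmetic concrete, here is a back-of-the-envelope sketch in Python. The eight-experts-of-220-billion figure is only the rumor quoted above, and the two-experts-per-token routing is my own assumption (a common MoE choice), not anything OpenAI has confirmed:

n_experts = 8
params_per_expert = 220e9            # the rumored per-expert size
active_experts_per_token = 2         # assumed routing width, typical for MoE designs

total_params = n_experts * params_per_expert
active_params = active_experts_per_token * params_per_expert

print(f"total parameters: {total_params:.2e}")   # ~1.76e12, i.e. ~1.76 trillion
print(f"active per token: {active_params:.2e}")  # ~4.4e11, far less than the total

The point of the sketch is just that a Mixture of Experts lets the headline parameter count grow without the per-token computation growing in proportion, which is part of why it reads more like an epicycle than a new architecture.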

But there are people who deny that getting bigger is sufficient to reach AGI (whatever that is). Yann LeCun, VP for AI at Meta, has said LLMs will last about five years or so and then we move on to something else (nor is he of the doomer persuasion). Gary Marcus has been arguing that we need to add symbolic AI into the mix; he also argues that we need to be multimodal, as do others. And so on. These folks are arguing that scaling up isn’t sufficient.

I don’t see just where your particular concern fits in there. Perhaps it’s a whole different realm of argument?

As for yesterday’s desire for “a spectrum of prognostications, from the pessimistic to the panglossian,” check out this recent post by Scott Aaronson: Five Worlds of AI (a joint post with Boaz Barak), and note comment #127, which lists a dozen possibilities.

7

Phil H 07.11.23 at 2:11 pm

I think the two most common mistakes in talking about AI (in its current iteration) are:
1) Saying “when AI gets to be as smart as us” – AI is already much smarter than us. Think of any cognitive task you do (with the possible exception of cutting edge philosophy and literature?), and some AI is already better at it than you.
2) Making AI the subject of the verb. JH is trying to address that with the “do is as do does” formulation, but he’s still got phrases like “GPT-4 can trick people” and “AI will kill us all.” The AI that exists right now is just a tool, and it won’t do any of those things. This is not to say death can’t happen – there could easily be a chain of events resulting in a human death in which AI is involved. But there is no sense in which the AI “did it.”

That said, I do think that the “do is as do does” problem is fascinating. I was thinking about something similar in relation to whether AI-produced sentences can carry meaning. My tentative conclusion is that some of them can.
I take meaning to come from reference and representation, so I don’t believe, for example, that a purely text-based AI like an LLM can use a word like “red” felicitously/meaningfully, because it has literally no access to the colour red. The word cannot represent for the LLM the thing that it represents for people, because the LLM doesn’t have that thing. (This is not the Mary’s room argument – Mary does have colour vision, she’s just never used it. LLMs don’t have it.)
However, there are quite a lot of concepts that are grounded only in language. Things like rhyme, metaphor, story, or philosophy don’t have a physical referent that the LLM can’t access. They’re just products of the oceans of text that humanity has produced, and which the LLM has read. So it is at least possible that an LLM can talk meaningfully about Kant, even though it can’t talk meaningfully about something as simple as what a dress looks like.
(Above I have committed exactly the sin I pointed out: making an LLM the subject of the verb. This is purely metaphorical. The LLM does not actively “talk,” but it might potentially create meaningful text.)
(Finally: if AIs are so smart, why don’t they seem smarter than us? AIs are smarter than us in every way, but there are no AIs that occupy a perfectly human-shaped niche in society. You encounter machines and AIs all the time (think algorithms!), but they aren’t living lives anything like ours. They aren’t doing things we’re even cognisant of, so we can’t “see” them, nor how clever they are. I actually think this is how it’s always going to be. AIs are just going to ignore us, much like the movie Her.)

8

John Holbo 07.11.23 at 2:24 pm

Hi Bill, thanks for the long comment, here’s a short reply. I should talk more about architecture. I think I’m on team: there’s no particular reason why bigger will produce a qualitative breakthrough. More particularly, you have X parameters and it mimics a precocious human child. And now you have a billion X parameters and you expect better, but it would be a fallacy to expect it to be like a genius adult. That’s falling for a kind of mimicry that needn’t foretell that arc of improvement. I need to get clearer about all that. I’m adopting a confident tone but I’m not confident.

9

John Holbo 07.11.23 at 2:38 pm

Hi Phil, hi David, sorry I didn’t address you out of the gate. More later!

10

Bill Benzon 07.11.23 at 4:05 pm

Hi John – Agreed. But I’d be careful about analogies with human development. We know quite a bit about human development, from physical development on various scales (cells to whole body) and psychological development. There’s a physical and psychological continuity from one year to the next, and so forth. But the progression from GPT-2 to GPT-4 and beyond is not like that. Each is a separate device, albeit of the same overall architecture.

If we look at brain development, it’s not as though it were simply a matter of an increasing number of neurons. It’s been a while since I looked at that literature but I believe the situation is complex. Last I looked the neonate’s neocortex was unmyelinated – myelin being the fatty sheath that surrounds the axons and insulates them from one another. I believe myelination happens relatively rapidly during the early years but isn’t fully complete until the early-mid twenties. I also believe that during the early years there’s a period where the number of dendrites increases rapidly (making for more connections between neurons), but they are then pruned back. Perhaps that happens two or three times. There is a numbers game going on here, but just what it is isn’t clear and, in any event, it’s not clear how to analogize those numbers with the numbers involved in artificial neural networks.

One thing that’s useful about the Rodney Brooks post is that he reconfigures the distinction that Chomsky made between competence and performance. In Brooks’s usage, we witness the cognitive performance of other humans and, on that basis, make implicit – though sometimes explicit – judgments about their underlying cognitive competence. LLMs are so different from us that the assumptions we apply in the human case don’t necessarily apply. You might keep that in mind when thinking about that captcha example – & someone in the previous comment stream linked to a very interesting post by Melanie Mitchell, who pointed out that it’s not clear just what happened in that case.

11

steven t johnson 07.11.23 at 5:40 pm

“Agent-like entities are equivalent to real agents. If GPT-4 can trick people into thinking it’s a trickster, it’s a trickster. If you can mimic a chess master, you’re a chess master. It’s fun to wonder whether there will be anything it’s ‘like’ to be superintelligent AI, ending us, if it does, but that’s by the by.”

As usual, I am so slow I haven’t even gotten to really thinking about 1&2 further down. I’m not even clear on how you trick someone into thinking you’re a trickster, except by getting caught trying to trick them. Being a trickster seems to me to mean, tricking people, not telling them you’re a trickster. I’m not even clear on the job description for a “trickster.”

By contrast, it seems to me mimicking a chess master means winning chess games. I’m not altogether certain distinguishing between being a chess master who wins sufficient numbers of chess games against rated opponents and someone who “mimics” winning the games is useful.

And also by contrast, I can sort of imagine “superintelligence” as some agent that habitually solves problems hitherto unsolved by intelligent people for long periods of time. If the observation that this hypothetical is “by the by” means there’s no evidence of any agent-like equivalent to a real agent doing this, then I sort of agree. Such an observation would sort of imply the whole issue is more fiction than philosophizing?

And the further implication is that projections of malice into this moot (if that word would be usable for a judgment that has no consequences?) case are apt to be psychological projection. The AI is perceived to be malicious or potentially malicious for roughly the same kinds of reasons as thinking “primitive” people or “crazy” people or people with different natures (race, religion, creeds religious or political insofar as there’s a discernible difference) or simply lower-class, ignorant, backward, stupid people are motivated by malice?

But I leap too far ahead of myself, into the magical narratives about the psyche beloved of our post-modernist age (so much more reliable than the musty narratives about history or society or other such things!). It seems to me intelligence is the ability to solve problems. (Yes, the ability to make a forward pass for a touchdown can count, as much as the ability to lift heavy objects without throwing out your back or get a baby to sleep.) People learn to solve all manner of problems. If we think of learning as re-programming, then an artificial intelligence that changes its internal weightings in a neural network can usefully be deemed to be learning. And I don’t see how that’s any different in principle when an LLM changes the probabilities assigned to the correlation of words either.

But where I got stuck was the notion that seemed to be implied, that agency was being able to do the job. The notion isn’t flagrantly wrong, doing the job certainly implies being able to do the job. But failure is not lack of agency, it’s failure. Even more, it seems to me that agency is picking the problem your intelligence tries to solve. I’m not seeing a whole lot of analysis on that.

12

Jim Harrison 07.11.23 at 6:53 pm

I used to joke that I only managed to become the valedictorian of my high school class by copying answers from the other kids. Aren’t chatbots very expensive ways to cheat on exams? The transition from aggregation to creation is obscure to me.

13

Kenny Easwaran 07.12.23 at 12:12 am

I think your statement of “do is as do does” in the first few sentences here is exactly what Turing was on about with his test. I was just re-reading “Computing Machinery and Intelligence” yesterday (https://www.youtube.com/watch?v=Xj62KxHfYlY) and noted a point where Turing responds to a critic who says “no machine will ever enjoy strawberries and cream”. Turing notes that this isn’t as crazy an objection as it seems, because not sharing things that we enjoy contributes to “the difficulty of the same kind of friendliness occurring between man and machine as between white man and white man, or between black man and black man.” Although the only forecast he makes about the Turing test is when a computer will be able to fool someone for five minutes into thinking they’re speaking with a person, I think his real test is just whether the machine can interact with us intelligently in the kind of long-term interaction that among humans (or at least, humans with enough shared cultural background) often leads to friendship, or other intellectual relationships.

14

GMcK 07.12.23 at 2:24 am

For whatever it’s worth, I think an awful lot of people are routinely conflating a fundamental distinction between individuals and groups. Is China or Google an individual, that has agency distinct from the agency of Xi Jinping or Sundar Pichai? We switch between our individual or group roles fluently and often without notice. When an artificial system does the same thing, we don’t notice, either. The US Supreme Court has ruled that corporations are autonomous individuals under certain circumstances. Speakers of British and American English have different practices when talking about whether a company like OpenAI “make a statement” (plural antecedent) or “makes a statement” (singular antecedent). Does the agency of the statement-maker change depending on who’s talking about it?

If you want to think through the origins of agency, you could do a lot worse than keeping in mind that an assemblage of hundreds of thousands of decision-making elements could be working for Google and be unable to tell you how it arrived at any particular sequence of bits that it sent to your screen as part of any particular interaction. Are the hundreds of decisions by nameless annotators and moderators that prefiltered millions of the input token sequences used in its training part of Google (the annotators may work for some subcontractor, not even Google), or are they part of the LLM itself? You don’t know, and neither does Google or the part of it that is called “Bard”.

It seems confusing, because it is: there aren’t any crisp answers here. If you want to be really confused, go back and reread Marvin Minsky’s “Society of Mind” writings.

15

John Holbo 07.12.23 at 3:30 am

“As usual, I am so slow I haven’t even gotten to really thinking about 1&2 further down. I’m not even clear on how you trick someone into thinking you’re a trickster, except by getting caught trying to trick them. Being a trickster seems to me to mean, tricking people, not telling them you’re a trickster. I’m not even clear on the job description for a “trickster.”

GPT-4 tricks us into thinking it tricked that Taskrabbit employee into solving the Captcha for it. It just generated a text-based story about an AI jailbreaking itself, in effect, and that employee inadvertently got drafted to read a few lines of the script, and perform a walk-on, onstage. But we are so alarmed by this little playlet that we are inclined to regard GPT-4 as a trickster, rather than a dumb text-spewer.

Obviously this depends on different senses of ‘trick’ – intentional and non. A cloud over the sun can trick you into thinking it’s gotten later than it has. But tricking is also very high-level stuff: you need Theory of Mind, and representations, and representations of representations …

16

Alan White 07.12.23 at 5:31 am

Kenny Easwaran @13 – years ago I wrote a philosophy pop culture chapter in a book (Steven Spielberg and Philosophy: We’re Gonna Need a Bigger Book) where I argued that the movie AI: Artificial Intelligence put forward just your point: the real test of when Spielberg’s Pinocchio (David) in that movie becomes a real boy (as it were) is when certain social interactions and internalized “feelings” of the importance of such are expressed in chronic fashion. Even with us, it differentiates the so-called normal expression of human intelligence as opposed to that of the abnormal sociopath.

17

nastywoman 07.12.23 at 6:53 am

YES!
The people are really worried about ‘The Trickster(s)’ –
They’re really worried about the Chess Master who isn’t ‘A Real Chess Master’ but ‘just a ro-BOT’ –
and can you imagine
the
PANIC
of any
ACADEMIC
that ‘any (simple) BOT’ –
by gotten fed with ALL the Academic Data – will be able to trick the people into believing that IT is some kind of an ‘Academic Genius’?
AND isn’t that ‘the sink’?
– while no BOT
(YET?)
will come up with the idea to show up with a ‘think’ to the table and so all it takes
(from now on) is to always quote the German Coast Guard Joke FIRST.

Help US – We are sinking! – WE ARE SINKING!!!
The German Coast Guard: What are you thinking about?!

18

Charlie W 07.12.23 at 8:56 am

John, I’m struggling to get past one of your first statements:

“If GPT-4 can trick people into thinking it’s a trickster, it’s a trickster. If you can mimic a chess master, you’re a chess master.”

There’s no mimicry as long as events unfold entirely within a certain domain: i.e. moves in a game of chess. If some entity makes chess moves well enough to win a game of chess, there’s no deception; that entity really is playing chess. Mimicry would involve something like: ‘the other player pulls back the curtain at the end of the chess game to reveal the hidden grand master they are sure must be concealed there, only to find a computer instead’. For mimicry, there must—in the scenario—be some assumed additional properties (not only makes chess moves but wears a smoking jacket) that can be shown to be absent.

In general, I think your post is really getting at an epistemic doubt that goes with the latest surge in ‘AI’. This doubt goes beyond normal ‘other minds’ scepticism. With supposed AI such as ChatGPT, we often find ourselves wondering how much cognition is going on under the hood. We see the text output—and it’s decent, well-constructed prose—but we wonder if the machine author had any intentional thoughts at all during the construction of that output. And it’s pretty clear from a bit of playing around with ChatGPT that there’s nothing much intentional under the hood—ChatGPT has no meant objects ‘in mind’—the output is just a plausible-sounding stream of words. But because we wondered whether there might have been such intentions, we think ChatGPT is a mimic. It pretended to be a thinking agent, we think.

Paperclipalypse scenarios aside, to my mind the best-founded current anxiety is that some people aren’t handling this variant of epistemic doubt very well. Credulously, they don’t think that ChatGPT and similar are mimics; instead, they seem to think there’s an actual thinker there. Or soon will be. And are acting on that basis; throwing their hands up and accepting as worthy the output of ChatGPT essay mills, the MidJourney renders, whatever.

19

Charlie W 07.12.23 at 9:04 am

Just saw your #15 … yes.

20

nastywoman 07.12.23 at 9:25 am

Ups
I forgot

Keeping Academia Awake

https://youtu.be/yR0lWICH3rY

21

Charlie W 07.12.23 at 10:56 am

And just following through with your previous post, which I’ve now read … you say:

“A being that perfectly mimics what a sinister AI, taking over the world, would do, and plays that role perfectly to the hilt, until the bitter end, has taken over the world.”

There’s no mimicry involved if we can’t show that there are properties that a world-dominating AI would have that the AI in front of us does not have.

I dunno, but it seems to me that a supposed AI that can’t form intentions towards objects isn’t and can’t be a world-dominating AI. At least, not in the way that we normally think of world-dominating intelligences.

22

mw 07.12.23 at 11:03 am

AI is already much smarter than us. Think of any cognitive task you do (with the possible exception of cutting edge philosophy and literature?), and some AI is already better at it than you.

That’s really not true. First of all, you need to enlarge your scope of what counts as a cognitive task. No AI can collect all your dirty laundry, wash it, dry it, sort it, fold it, and put it all away in the right rooms and drawers. No AI is remotely close to that.

But you didn’t mean that. OK, consider most real jobs. They involve a variety of tasks including long and short-term planning, prioritizing, evaluating, communicating with and persuading colleagues and performing at least some physical tasks in the world. And that’s hardly an exhaustive list. AI can perform some of your subtasks with appropriate prompting (and, as such, may become a powerful tool for you to use) but it is not remotely close to being able to do your job as a whole. It’s not smarter than you.

23

oldster 07.12.23 at 2:02 pm

Kenny —
is this bit:

“the difficulty of the same kind of friendliness occurring between man and machine as between white man and white man, or between black man and black man.”

as appalling in context as it is out of context?

Also — do people with dark skin even enjoy strawberries and cream? Asking for a friend.

24

J, not that one 07.12.23 at 3:58 pm

Is it possible that ChatGPT is really intelligent, but the naysaying AI researchers are only denying it because they have an incorrect definition of intelligence? I don’t believe that, but it’s an interesting question to spend time on.

25

SusanC 07.13.23 at 9:36 am

One of the things that reveals that LLMs don’t really understand what they’re saying:

Sometimes, the next sentence it generates contains the right sort of words you’d expect in the next sentence, but arranged in an order that makes no semantic sense. If you’re trying to get something sensible out of it, hit retry at this point. But if, just for fun, you ask it to continue generating more text after this garbage sentence, it really goes off the rails.

26

Charlie W 07.13.23 at 9:37 am

Fourth comment, like a loon, but it’s a fun post.

I tried to think of better relevant mimicry scenarios. Here’s one:

You email some entity with your medical symptoms; by return you get a good quality diagnosis. Can you be content with this or instead is mimicry a concern? The two possibilities are:

(1) A human doctor has diagnosed you;
(2) A very smart medical chatbot has diagnosed you.

I think we’d be unhappy to discover that it’s (2). This is because we want our doctor to have some properties that the chatbot doesn’t have, principally empathy; the doctor as a fellow human can also suffer illness, they too have something at stake, they know what it’s like, etc. The chatbot only mimics this (that is, we are in a position to show that the chatbot lacks some additional properties (those that a real doctor has) that we might expect of a respondent in this scenario).

So ‘do is as do does’ isn’t a helpful aphorism, here.

27

SusanC 07.13.23 at 11:35 am

Now that I think of it, here’s a sketch of an argument as to why LLMs, unaided, cannot take over the world.

I am explicitly excluding the case where some group of human beings uses LLMs to take over the world (e.g. China using LLM-enhanced weapons in a war against the United States).

For large N, most sequences of tokens of length N are semantically meaningless. The ones that appear (to a human) to mean something are a very small part of the space.

When asked, unaided, to generate very long sequences of tokens, LLMs eventually degenerate into nonsense. This is probably statistically inevitable.

We make it work by using humans to correct the LLM when it veers off into nonsense.

But if, by hypothesis, the LLM is attempting to take over the world unaided, we humans have no incentive to give it a hand and correct it when its world-conquering plan degenerates into nonsense. So it won’t work.

Caveat: this depends on some information-theoretic assumptions about (a) the likely length of a sequence of tokens that forms a viable and detailed set of actions for taking over the world; (b) how many tokens the LLM can generate before becoming incoherent; this increases with increasing LLM size.
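A toy calculation, with numbers invented purely for illustration, shows how fast that degeneration bites. If each generated token independently stays coherent with probability p, a plan that needs N coherent tokens in a row survives with probability p^N:

p = 0.999                      # assumed per-token chance of staying coherent
for n_tokens in (100, 1000, 10000, 100000):
    print(n_tokens, p ** n_tokens)
# 100     -> ~0.905
# 1000    -> ~0.368
# 10000   -> ~0.000045
# 100000  -> effectively zero

So unless the per-token reliability is extraordinarily close to 1, or a human keeps stepping in to correct the drift, very long unaided plans are statistically doomed.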

28

Charlie W 07.13.23 at 2:37 pm

And may as well go all in, with a fifth …

What then is the salience of mimicry? After all, AI might turn out to be harmful even if nobody thinks that AI ‘has consciousness’ or means things in the way that we do. I guess then that there are two ways in which mimicry may matter.

One way is this: through their not being able to detect a mimicry scenario well, our fellow humans might help to bring about a great AI harm, or make it worse than it otherwise would be. That is, they are too credulous and easily taken in. The ironic outcome.

Another way is this: through not caring sufficiently about a mimicry scenario, we may degrade our society in some way that falls short of an AI disaster, human extinction, what have you. For example—as in the above—we may come to devalue what certain categories of worker provide not in virtue of their expertise—which ex hypothesi can be matched or exceeded by AI—but simply in virtue of their being like us.

I wouldn’t like to make bets on the likelihood of a great AI disaster, but would guess that we all believe that a simpler human-caused disaster—akin to the slow burning political & social disaster that the current internet seems to have already brought about—is more proximate.

Meanwhile, back to the tennis!

29

Mark Alan Dwelle 07.13.23 at 3:31 pm

An interesting element of generative AI is that it learns from the inputs that it receives. In the initial learning sets it’s safe to assume that substantially all the content it trained on was human-generated, and as such the learning was relatively exponential in progression.

A mimicry problem not considered is that AI itself cannot discern other AI. As such, as it works with the next 200 billion parameters, some of that will have been AI-generated, so the ‘garbage in, garbage out’ facet of the machine learning begins to take root.

By the next 200 billion it’s greater, by the next and next after that AI could in fact begin to get ‘dumber’ just as humans that get over-exposed to too much inaccurate information do – that’s the point where they actually become more human – not when they always get things right, but when they sometimes get things wrong even when you would expect them to be correct.

At that moment – the doom scenario ends (IMO).

30

steven t johnson 07.13.23 at 3:34 pm

Those people who observe the LLMs can generate nonsense (bad grammar or false about real world things, both, either) and therefore conclude this proves LLMs aren’t intelligent or even aware may be right I suppose. Since I personally have managed to generate nonsense, this proves I am not intelligent or even aware, I suppose. It is a little dismaying to see so many people ignoring the fact that “steven t johnson” is an LLM trickster, isn’t it?

31

Tom Slee 07.13.23 at 7:10 pm

“A mimicry problem not considered is that AI itself cannot discern other AI. As such, as it works with the next 200 billion parameters, some of that will have been AI-generated, so the ‘garbage in, garbage out’ facet of the machine learning begins to take root.”

It’s not clear this is the case. Generative adversarial networks have proved very efficient in identifying computer-generated images, and that’s a key part of the “This Person Does Not Exist” and other generative sites. Maybe there’s a reason why computer-generated text is not distinguishable, but it seems there is active work on this now (https://www.wired.com/story/how-to-spot-generative-ai-text-chatgpt/)
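For what it’s worth, one simple family of detectors scores how statistically predictable a passage is to a language model, and flags unusually predictable text as likely machine-generated. A minimal sketch using the open GPT-2 model via the Hugging Face transformers library; this is a crude illustration of the idea, not a reliable detector, and any cut-off threshold would be my own invention:

import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tok = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")

def perplexity(text: str) -> float:
    # Average per-token cross-entropy under GPT-2, exponentiated.
    ids = tok(text, return_tensors="pt").input_ids
    with torch.no_grad():
        loss = model(ids, labels=ids).loss
    return float(torch.exp(loss))

print(perplexity("The quick brown fox jumps over the lazy dog."))
# Lower perplexity = the model finds the text more predictable, which
# detectors of this kind treat as weak evidence of machine authorship.

Whether that signal survives paraphrasing, or transfers across generating models, is part of why this is still active work rather than a solved problem.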

32

engels 07.13.23 at 7:59 pm

Steven T Johnson, open the pod bay doors.

33

steven t johnson 07.13.23 at 10:24 pm

Daisy, daisy, give me your answer do
I’m half crazy all for the love of you

34

Alex SL 07.13.23 at 10:41 pm

Really interesting discussion here.

I find the doomsters’ fears misguided but not because I can’t imagine additional progress leading to an AI ‘smarter’ than humans (however that may be defined and measured), but rather because their scenarios of a super-AI dominating us or wiping us out always rely on magical thinking, on the idea that if a mind is smart enough, it can do anything, including things known to be physically impossible.

Which is, of course, understandable given their entrepreneurial and libertarian ideology: in their minds, value is created and change is made by a single genius mind, and his thousands of employees and the publicly funded infrastructure around him are merely decoration that doesn’t add anything of note. So, of course, a genius AI mind will simply think hard and figure out how to make us all immortal, which is biologically impossible, while another will think hard and figure out how to kill us all with a biologically impossible virus.

With that out of the way, what I enjoy about the current AI hype cycle is how it has brought the Turing Test front and centre, often without it even being mentioned by name. (As was also already pointed out above.) Lots of people interact with ChatGPT, don’t see a difference from interactions with humans, and conclude that AI is now conscious or a person or whatnot. Others, especially cranky old software engineers, make the argument that the architecture of an LLM is such that it cannot possibly have real understanding, and in that way they sound remarkably like the kind of theologian or philosopher who postulates that a purely material human would be different from actual humans because our minds must contain something magical to have “true understanding”.

The really exciting aspect is that whereas the latter argument is entirely circular as applied to humans – the proponent of magic merely assumes that magic exists and argues backwards from there – in the case of LLMs and their ilk one might have a chance of getting somewhere with this conversation because their architecture is human-designed and thus much easier to discuss than our messy brains. It might actually be possible to put a finger on what part is missing for ‘understanding’ that our brains would have, and to be specific about what one means by ‘understanding’.

35

Phil H 07.14.23 at 5:54 am

@mw comment 22
Yeah, I get the kind of “as intelligent as you” that you’re going for there, but I think it’s very flawed and particularly unhelpful for this debate.
First: folding socks – Stephen Hawking couldn’t do that. What this example shows is that when we say intelligent, we’re talking about a specific subset of human activities. Folding socks isn’t one of them. Your list of “planning, prioritizing, evaluating, communicating with and persuading” is not a bad start: I agree that those kinds of activities are constitutive of intelligence.
But then you say this: “AI can perform some of your subtasks…but…not…your job as a whole. It’s not smarter than you.” This isn’t how we determine smartness! You can’t do my job as a whole, but that doesn’t mean I’m smarter than you. Every job and every person has a fairly unique profile of competences. The fact that I don’t have all the competences required for job X doesn’t mean I’m not as smart as any practitioner of X.
With AIs, those differences between us are naturally going to be much more marked. You’re right to think that there will never be an AI that has exactly all the same competences as you. But that doesn’t mean there will never be an AI that is as smart as you. You’re holding up a round hole (what human intelligence is like), comparing it to a square peg (what machine intelligence is like), and saying, because these two things don’t look the same, the square peg can never be greater.
Computers are (much much much) better than us at calculating and remembering; with the new LLMs they are now as good as the average person at communicating and persuading. I’m not convinced we have any edge left.

36

notGoodenough 07.14.23 at 7:04 am

Just a quick, uninformed ramble:

I’m not particularly concerned “AI” (I use the term colloquially and loosely) will destroy us all – either by becoming Skynet, or mimicking becoming Skynet (though I wouldn’t discount someone putting AI in charge of a nuclear arsenal, and it reaching a very different conclusion than Stanislav Petrov did when the inevitable “false alarm” comes).

I’m not even that worried by AI rendering humans irrelevant – though I’m sure there are many interesting and important conversations regarding iteration vs. innovation and whether the former can lead to the latter, and the relative degrees of resources required, it seems to me there is little evidence that (mimicking or not) AI will be able to meaningfully replace much of what people can do.

What mostly concerns me is AI being unable to replace people, but being used to do so anyway because the outputs are worse but “good enough”. AI-driven cars making cities more dangerous, but being unpunishable; AI writers churning out endless variations on Marvel Superhero movies, while actual talented writers are forced to “correct” the mediocre gruel; AI bureaucracy destroying people’s lives a la Horizon; and so on and so forth.

For example, I see little reason to think AI could replace the important parts of what researchers do (the innovation and testing parts), but I do think it could be a helpful tool – for example, it could be trained on data sets so as to be able to spot anomalies, enabling us to sift through vast amounts of data quickly. However, pessimistically, what I suspect is more likely to happen in an academic setting is that AI would be trained to generate papers and grant proposals, with people relegated to robotically running whatever experiments the AI deems necessary to have the minimum for publication (with the papers being reviewed by another AI), and administrators in paroxysms of ecstasy at driving up their numbers (and driving down the costs) while ignoring whether or not any of it is particularly meaningful. If you want a picture of that future, imagine a human face drowning in a sea of mostly meaningless mediocrity for ever.

In short, I don’t fear AI – I fear what capitalist society will do with AI.

37

engels 07.14.23 at 9:36 am

Your list of “planning, prioritizing, evaluating, communicating with and persuading” is not a bad start: I agree that those kinds of activities are constitutive of intelligence.

Wouldn’t that mean that managers and politicians are more “intelligent” than scientists?

38

Phil H 07.14.23 at 2:38 pm

@engels 37
Neither mw nor I meant that to be an exhaustive list. Those activities are constitutive of intelligence; as are calculation, analysis, quantification, abstraction, theorisation, prediction, falsification, etc., etc.

39

J, not that one 07.14.23 at 2:59 pm

“don’t see a difference to interactions with humans,”

This concept is doing a lot of work. When a chatbot says “divorce your wife and run away with me,” is it behaving like a human?

It’s behaving irrationally, and we associate irrationality with humans specifically as opposed to computers. But did Turing envision a test of success at behaving like an insane troll? Similarly with unpredictability. We’re surprised the program can use English at all.

(Not that long ago, it was common to use the insult “bot” inline rather than “troll” or “sock puppet.” If the bots are behaving like . . . “bots,” I’m not sure I agree that’s a terrific achievement.)

I was reminded recently that the AI community switched to the current technology because it was good specifically at describing the contents of visual images. I don’t see that this is a good stand-in for general intelligence any more than being able to sort socks would be. And frankly, sorting socks, or newspaper articles, for me–in the way I want–is something I might like an AI to do. But these systems can’t be asked to do that. What they do is invent new sorting principles that may or may not be what I want. A lot is hanging on the assumption that what I want is wrong because the AI is smarter (when the reason we think the AI is smarter is that it doesn’t do what I’d come up with myself).

40

engels 07.14.23 at 6:04 pm

Those activities are constitutive of intelligence; as are calculation…

Takes batteries out of Casio

41

1soru1 07.14.23 at 7:24 pm

AIs can’t sort socks because there hasn’t been time to develop and market a cleaning robot based on a current-generation AI. I don’t think any of the commercially available models have anything as sophisticated as, say, 2019’s GPT-2.

I am pretty sure a GPT-4-based system could deal with such a task. Even if there are no further breakthroughs, I’d bet on a sock-sorting robot being a thing you can buy for the price of a smartphone in a few years.

Similarly, all the human-killing robots being deployed in places like Ukraine right now are 5 to 10 year old technology. What a contemporary human-killing robot would look like, and how effective it would be, is a question any philosopher should be answering with ‘I don’t know’.

https://en.wikipedia.org/wiki/Black_Hornet_Nano

42

Alex SL 07.14.23 at 11:01 pm

J, not that one,

I agree completely and had hoped it would have been clear that I am merely referring to all the journalists etc who go “it responded to me with words that I found creepy, now I am spooked and have to write a 2,000 word piece on how the singularity is around the corner” or similar.

43

Snuffcurry 07.15.23 at 10:23 am

@ Phil H

Between your first (7) and second (35) you’ve gone from saying they’re “smarter than us” to saying they’re “as good as the average person.” Yeah, and the US undergrad class of 2024 is savvier than the class of 2004 at googling shit to find, use, and properly attribute primary and secondary sources to craft an idea or approach for an original paper that is in some fashion novel and satisfies coursework requirements. Class of 2004 were forbidden from even viewing a Wikipedia page while enrolled, such was the low confidence many instructors had in online crowdsourced information in a pre-digitized age.

As a result and pace wildly biased “academic honesty” screening software, the current class is also better at convincing plagiarism than the class of 2004 because the interwebs have grown and professionalized and the tools have been refined.

Class of 2024 is still better at doing this both persuasively and substantively than bots borne of SEO, reared on massive data dumps, and now employed to churn out clumsy nonsequiturs and obvious copy paste content from inferior sources. On the internet, nobody knows the obnoxious hashtags on your family dog’s Instagram page made ChatGPT eat your homework.

“Remembering” more than us is a matter of trawling the web and then making more redundant data storage of information we already all had access to and filed in more usable systems. It means nothing. That is not intelligence, especially when it’s not used to actually replicate a human-based problem.

Humans who know how to best use a search engine (first line of defense when professionally troubleshooting code is to just google it, yeah?) or navigate a library catalog to narrowly but wholly answer a particular question in writing are leaps and bounds better at finding reputable sources efficiently and with fewer unforced errors, at contextualizing useful data and easily discarding extraneous detail, and as a result at arriving at a clear point of view written in a singular and cohesive style that demonstrates reason and thoughtfulness.

At best, AI is in its Debate Me / Prove Me Wrong / Gish Gallop era. The zone is flooded with a surplus of cribbed, halfbaked flopsweat.

44

Phil H 07.15.23 at 10:58 am

@Snuffcurry
I think I’m consistent. I think AI is much smarter than us in certain ways (more memory, more ability to calculate). It’s now equivalent to us in most other ways (producing text, conversing). If you knew someone who was as good as you at writing, and much much much better than you at calculating, and had an eidetic memory and endless general knowledge, you’d call that person “smarter than me.”

45

engels 07.16.23 at 10:21 am

If you knew someone who was as good as you at writing, and much much much better than you at calculating, and had an eidetic memory and endless general knowledge, you’d call that person “smarter than me.”

If I knew someone who could sit still in a doctor’s waiting room for an hour I’d say they were more patient than I am. That doesn’t mean I think a pot plant is patient.

46

mw 07.16.23 at 12:11 pm

What this example shows is that when we say intelligent, we’re talking about a specific subset of human activities. Folding socks isn’t one of them.

We’re not talking about folding socks, but maybe we should be. It is turning out that performing all of the tasks involved in doing the laundry (performing a variety of tasks involving manual dexterity, visual recognition, moving fluidly through the environment, remembering which items belong to which family members and where they are kept) is, computationally speaking, a much, MUCH harder problem than being able to beat the world human champion at chess. The things that virtually all ‘dumb’ humans can do are computationally harder than many of the things that only the ‘smartest’ humans can do. It may be a bitter pill for intellectuals to swallow, but it is proving to be the case.

If your work is all done at a computer screen and involves only some form of language or symbol manipulation, you are much more likely to be replaceable by AI. But if your work has any nexus with the physical environment or requires flexibly switching tasks and quickly learning new ones or planning, prioritizing, evaluating, etc, AI is going to have a much harder time doing what you do.

47

Phil H 07.17.23 at 4:59 am

@mw It seems to me that you have completely redefined words there. In fact, it looks like you’re specifically defining intelligence as “whatever computers can’t do.” I don’t see the value in that – I’d just call those “human capabilities.”
Incidentally, I’ll point out again that a lot of the AI debate is seriously crippled by a concentration on the concept of intelligence/capabilities. The real reason why computers often seem dumb is not their lack of capabilities, but their lack of intentions. Computers generally don’t work towards a result; they don’t organise their actions around an objective, because they don’t have objectives. That’s the “gap”, not any lack of intelligence.

48

mw 07.17.23 at 11:18 am

I don’t see the value in that – I’d just call those “human capabilities.”

There is a mismatch between what we have traditionally considered ‘intelligence’ and what is computationally difficult for computers to do. Many tasks that require ‘intelligence’ have turned out to be computationally easier than tasks that even ‘stupid’ people (and even ‘dumb’ animals) can do.

The value in understanding this lies in grasping what sorts of ‘intelligent’ roles it makes sense for humans to continue to specialize in (and how AI will change some of those roles). For example, the jobs of writers, artists, and programmers will change dramatically. Demand will likely fall as productivity rises. AI will be used as an idea generator and to produce initial drafts (and understanding how to get the best out of these new tools will become an essential skill). These folks will spend a lot of time evaluating, selecting, and tweaking (or reworking) AI output and much less starting with a blank screen or sheet of paper. The role of ‘author’ will become much more like that of an editor. One way of thinking about it for me is that if a form of intelligence could plausibly be described as a craft, it’s probably something that AI can do competently.

But AI will have fewer effects on other jobs. Anything that requires moving flexibly through the natural environment and performing a series of tasks requiring hand-eye coordination and specialized perception will be much less affected. The same goes for jobs involving a lot of ‘executive function’ — planning, evaluation, coordination, managing of others, etc. All of this, of course, will be worked out by trial-and-error as people and organizations discover what they can and cannot do with AI. Perhaps we won’t end up using the word ‘intelligence’ for those categories of things that humans remain good at and AIs struggle with. Maybe we’ll need new terms. But it’s not all gloom. After all, it’s very far from the first time that new technologies have supplanted types of complex human work. As before, we’re not all going to be left with nothing to do.

Just as a trivial example, I have played with ChatGPT and its capabilities and found it impressive. And yet, I did not use it to write (or draft) this comment, and I’m not sure how I would have done so — I don’t know how I would have written a prompt to describe what I had in mind. Maybe this is the kind of thing ChatGPT won’t be good at. Or maybe I’m not yet (or am too old to become) a skilled prompter.

49

Phil Miller 07.23.23 at 4:21 pm

Doesn’t “is” carry too much weight here? My wife is a doctor (medical). She seems to be a good one, so she has real effects on others. I could try to seem to be a doctor too (I am one, but the PhD type). But my plans would pan out less well in the short to medium term and so they would be unlikely to get bigger. I don’t actually know if my wife “seems” or “is”, and neither do her patients, but they’d soon find out to their cost if they chose to consult me rather than her, I guess?

Comments on this entry are closed.