Twigs and branches

by John Q on January 1, 2022

To start the blog for 2022, an open thread, where you can comment on any topic. Moderation and standard rules still apply. Lengthy side discussions on other posts will be diverted here. Enjoy!


1

nastywoman 01.01.22 at 7:05 am

so everybody does this: ‘Meine Guten Vorsätze für das Neue Jahr Thingy’ and so my New Years resolution is that 2222 is going to be ‘The Best Year of my Life’.

Anybody has something to say about that?
(but please keep it short)

2

nastywoman 01.01.22 at 7:12 am

OR in other words:
There is this theory that when the Nazis killed all these people a long time ago in Germany they also killed all humor in Germany and that’s why the US – afterwards became ‘The Leading Hegemon in the World BE-cause ‘Every American is a Comedian’ BUT the British Humor is ‘something completely different’ and should we finally talk about that?
(in this News and Greatest Year of them all?)

As Kant would say?

3

Ingrid 01.01.22 at 11:53 am

Wishing all the readers, writers, commentators, and lurkers of our blog a very happy and healthy 2022!
And also wishing all other independent bloggers of the world all the best for 2022 and beyond. Given all the assaults on genuine free and independent press and institutions in so many countries, there remains an important role and space for blogs – independent from political, financial or other types of control (alas, time demands/constraints, which have worsened for most of us during the pandemic, are affecting us too, but they are of a different kind).

4

Matt 01.01.22 at 12:36 pm

John – I was struck by this bit in the DLU thread: “I was viewing the whole movie in terms of our government’s sudden switch to a Let Er Rip strategy on Covid.”

What do you think the (state or federal) government in Queensland or Australia should be doing that they are not doing now? (One thing I think that they should be doing but, as far as I can tell, are not doing – a massive build-up of health care capacity. If nothing else, this has helped show that the health care system in Australia is at least as stretched and fragile as in other places, maybe even more so – it was clear to me in Melbourne some time ago that even very small “unusual” events pushed the health care system to capacity, and that the hope was just that things would be “normal” all the time. That clearly won’t suffice. But I don’t see all that much being put into the system, beyond talk about dedicated quarantine wards and the like.) What more would you like to see done, that’s not being done?

5

Adam Roberts 01.01.22 at 12:57 pm

nastywoman: I sometimes wonder if Salman Rushdie’s theory is right: namely that (making allowances for variety and specificity and so on) what Americans find funny, broadly, is “isn’t it funny that … !” where what Brits find funny is “wouldn’t it be funny if …?” The difference, in other words, between on the one hand I Love Lucy, Friends etc, and on the other The Goons, Monty Python and all that. If he’s right, then we’d have to read the humour of, let’s say, Bojack Horseman as residing not in its surreal talking-animals premise but in the verisimilitude of its closely-observed life-in-LA representations.

6

oldster 01.01.22 at 1:53 pm

When djinnis give you three wishes, then the cheat-code — everybody knows this, it’s old hat — is to wish for more wishes.

Well, nobody’s granting my wishes for me, this or any year.
But my resolution on this first day of the year is to treat every day of the year as a day when I can make a new resolution to start doing something good, or to quit doing something bad.

Happy New Year to all CT readers, and my thanks to our hosts, the last survivors of the academic blog era.

7

David J. Littleboy 01.01.22 at 2:09 pm

I’m also planning for 2222 to be the best year of my life.

A friend from Boston announced that her 74th birthday party was the start of the second half of her life. These are all good ideas. Here in Japan, life expectancy has gone up over the 2 years of Covid-19*, and the other day the Japanese government counted over 10,000 men and over 60,000 women over the age of 100.

*: The government’s handling of the pandemic has been terrible, but outside the LDP, Japan is remarkably devoid of the raving idiots so prevalent in certain other countries.

Somewhat more seriously, I’m planning on 2022 being the nerdiest year of my life, at least since the 1980s. Reading a lot of Japanese fiction (both schlock and serious), a couple of online math courses, guitar, Go, and juggling. I’m being way too greedy, but there’s so much fun stuff to do…

8

David J. Littleboy 01.01.22 at 3:25 pm

On another subject, in a now-closed thread, a couple of people mentioned that they were impressed with GPT-3 and thought that it somehow represented progress in AI’s language abilities.

I found that strange. Language is about communicating. (Of all places in our universe, you’d think that Crooked Timber would be the place that understands this point the best.) Communication only makes sense if the actor communicating has something to communicate. GPT-3 has no model or theory of “something to communicate”. None whatsoever. What it actually is, is a glorified gussied-up version of the Markov-chain random language generators of years past*. It spits out random recombinations of stuff in its database with no regard for meaning or intent or causality. (It explicitly has nothing like that.) It’s a random noise generator, no more, no less. Everything that you read into its output is something that it had no idea about, no intent about, and no way to reason about.

This last point is important: it doesn’t matter how good the output looks if it’s not the result of an intention to communicate something (or an intention to deceive, or whatever; language use is multifaceted, to say the least). And there’s nothing there to be communicated.

So I fail to get the interest in, or hype about, GPT-3. It’s exactly as meaningless as its inability to deal with meaning, said inability being complete, total, absolute, and intentional in its design.

*: https://markov-gen.herokuapp.com/
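For anyone who hasn’t played with one: a word-level Markov generator really is just a lookup table of “which words followed which”, walked at random. A toy sketch in Python (illustrative only, not the generator linked above):

import random
from collections import defaultdict

def build_chain(text, order=2):
    # Map each run of `order` consecutive words to the words observed after it.
    words = text.split()
    chain = defaultdict(list)
    for i in range(len(words) - order):
        chain[tuple(words[i:i + order])].append(words[i + order])
    return chain

def generate(chain, length=30):
    # Walk the table, picking each next word at random from the observed followers.
    key = random.choice(list(chain.keys()))
    out = list(key)
    for _ in range(length):
        followers = chain.get(key)
        if not followers:
            break
        out.append(random.choice(followers))
        key = tuple(out[-len(key):])
    return " ".join(out)

corpus = "the quick brown fox jumps over the lazy dog " * 3   # stand-in corpus
print(generate(build_chain(corpus)))

No meaning anywhere in there, just conditional word frequencies.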

9

Orange Watch 01.01.22 at 5:09 pm

David Littleboy@8:

Speaking as someone who did research in this field in a former life, the reason there’s interest in – and/or hype about – this is because generating text that scans as naturally-generated is a hard task. You could draw a comparison to beating a chessmaster or Go master – the reason it’s exciting is not b/c we have a desperate need for AIs that can win at these games (or communicate the non-factual information they communicated based on their training data) but b/c it’s indicative of overcoming particular (traditionally hard) technical problems. The state of the art has moved forward, which means engineers are more likely to be able to apply these techniques to produce outcomes we actually care about.

If you only care about AI as a self-aware, reflective general intelligence, this is ofc still meaningless. If you care about the ability to have machines complete more complex (and simply more) tasks, it’s interesting. I expect there’s also some division of interest to be seen based on whether someone believes intelligence includes sentience or only sapience – if one does not accept the possibility that intelligence might be an emergent feature of a pile of shallow subprocesses, moving closer to human-level proficiency at a particularly front-facing subprocess is far less exciting than if one accepts that as a possibility…

10

J, not that one 01.01.22 at 5:12 pm

@8

There’s a debate about whether language communicates from one individual to another, or whether it reveals something beyond itself and its speakers. We frequently read intense meaning into essentially dumb or random computer generated texts, but where’s the point where it becomes illegitimate? Where’s the point, that is, where obscure prose stops being poetry and becomes nonsense? Can the meaning of existence really be revealed in the single syllable Om, and if it can, why not by GPT-3? If we physical beings can generate meaning, how are we so sure a computer can’t do what we do?

Of course there’s a vast range of options between one end of that continuum and the other.

11

Robert Weston 01.02.22 at 12:13 am

RE: Quiggin’s recent thread on the future of democracy and coming back from authoritarian rule,

There’s lots of talk about either civil war or secession as ways out of undemocratic one-party rule. However, I’ve seen nothing about the possibilities of, or opportunities for, mass civil disobedience. As a friend told me the night after the 2016 election, it may well be that at some point, we have to put our bodies at risk the way civil rights protesters did in the 1960s. I simply can’t tell how realistic the prospects for large-scale, sustained nonviolent protests are in case U.S. democracy really does go south; but I suspect African Americans – and in particular black women, like my friend – would be the ones doing much of the heavy lifting.

12

johne 01.02.22 at 3:30 am

I didn’t catch anyone in either half of “The Dawn of Everything” thread mentioning David Graeber’s famous error in “Debt,” in which a ludicrous retelling of the Apple PC’s birth is passed off as fact. I believe Graeber’s final explanation of the debacle attributed it to a failure on the part of the book’s copy editors, but it was catastrophic enough, a distortion of recent events verging on common knowledge, to make me distrust the book, and indeed anything else he’s written. How did others justify the risk of something similar undermining whatever they learned from “Dawn”?

13

John Quiggin 01.02.22 at 3:37 am

Matt @4 The fundamental problem is that the idea of allowing rapid spread to make Covid an endemic disease is disastrously wrong. As has happened elsewhere, the hospital system will run into capacity limits, probably within a few days, and restrictions will be reimposed.

Of the specific errors, the two that stand out as most totally gratuitous were the full reopening in NSW, which gave the virus a 9-day head start, and the decision to go ahead with New Year’s Eve celebrations.

14

John Quiggin 01.02.22 at 3:40 am

Robert @11 I’m working on (struggling with) an article on this very point, treating the Civil Rights movement as a model. Given that the Republicans will be in a minority in nearly all cities, it will be in some ways a reverse of the original, with the Federal government and Supreme Court trying to suppress democracy at the local level. As you say, BLM and related struggles will be central.

15

John Quiggin 01.02.22 at 3:42 am

Long before GPT-3, there was Ern Malley https://en.wikipedia.org/wiki/Ern_Malley_hoax

16

Alan White 01.02.22 at 3:56 am

JQ @ 14
But here are my existential worries. Last election, probably because I live on a main street in my 35k Wisconsin city, 4 of my Biden signs were stolen, including one big one that took two people to remove. Now, after ’20 and the Big Steal lies, I contemplate scenarios where I put out signs in ’24 for Biden or whoever the Dem is, the Dem wins by slight margins, and my house is burned to the ground. These are real worries, especially in little red cities like mine. The intimidation factor of the aggressiveness of the far right is a real and frightening one. The asymmetry between the deep emotional hate of the far right and the more tempered hate from the left is a real factor. When Trump was in office, even then I could not risk posting Fuck Trump signs in my yard for fear of real reprisal–but there are now several such in my town for Biden, and even more for the Let’s Go Brandon meme. It’s that strong cultish asymmetry that worries me about any uprising promoting Trump in ’24, even if Dems can win the White House.

17

David J. Littleboy 01.02.22 at 5:21 am

“Speaking as someone who did research in this field in a former life,…”

You ain’t the only one here: I was involved in AI on and off from ’72 to ’88.

(As you probably know…) Back in the day, AI had two flavors: “neat AI” and “scruffy AI”. Neat AI was about persuading the computer to do interesting and/or hard things by any means possible with no concern for theory, philosophy, or intellectual value (other than the problem domain). “Scruffy AI” was about having a theory about how people do things and coding it up to see how good it was. AI nowadays is (pretty much all) neat AI. It’s not my cup of tea. And the hype is irritating.

Re: the other J.

Reading meaning into randomly generated (or erroneously generated) computer output is as old as the hills, and great fun. It’s not exactly invalid. Computer generated music can function as music without having an explicit theory of what people like in, get from, or want to hear in their music. Ditto for computer generated art. But language without communication makes no sense. Part of the game in computer generated language is looking at some amount of output and selecting the stuff that’s interesting. Ditto in computer generated art and music. Thus there is a certain amount of false advertising going on here. In particular, the sort of music or art generated will depend on the programmers and their relation to the current (or some historical period’s) music/art scene. Plug Bach or Mozart’s composition rules into a random number generator and what comes out will sound like Bach or Mozart. Because the music theory types explicated the rules for us to code up.

But:

“If we physical beings can generate meaning, how are we so sure a computer can’t do what we do?”

Because the bloke who wrote the program told us what it does: it generates random text with no theory of meaning, and it has no way of dealing with meaning other than accidentally producing things humans will sometimes misinterpret as meaningful.

But as a disciple of the absolute scruffiest of scruffy AI types, I’m quite sure that someday we’ll have better theories of meaning and human intelligence, and the programs that implement those theories will be able to do a lot of what we do. And those better theories will help us understand what it is about human intelligence that makes it so enormously and qualitatively different from “animal intelligence”.

18

Matt 01.02.22 at 5:56 am

Thanks, John – that seems reasonable. Here in Melbourne there were more limited NYE celebrations, I gather, though I’m not sure how safe they were. We have now largely eliminated our requirements to show vaccination status, on the grounds that we have a high enough % vaccinated. I wasn’t that happy to see that. My understanding is that the university where I teach is planning on having in-person classes in the next term, but we’ll see if that lasts. (I’m teaching over summer, and though we had the option to go back to in-person for the last 4 weeks, and though I miss it and in general think it’s a lot better, I decided not to do so for now.)

19

nastywoman 01.02.22 at 6:30 am

It’s ALL about this:
‘And then he lost it, just another out-of-control member of the great chorus of American I don’t wear nor F… maskstylefreedom.

“Have you seen a man in his 60s have a full temper tantrum because we don’t have the expensive imported cheese he wants?” said the employee, Anna Luna, who described the mood at the store, in Minnesota, as “angry, confused and fearful.”

“You’re looking at someone and thinking, ‘I don’t think this is about the cheese.’”

It is a strange, uncertain moment, especially with Omicron tearing through the country. Things feel broken. The pandemic seems like a Möbius strip of bad news. Companies keep postponing back-to-the-office dates. The Centers for Disease Control and Prevention keeps changing its rules. Political discord has calcified into political hatred. And when people have to meet each other in transactional settings — in stores, on airplanes, over the phone on customer-service calls — they are, in the words of Ms. Luna, “devolving into children.”…

and read the rest from Sarah Lyall…

20

nastywoman 01.02.22 at 6:40 am

AND –
one of the utmost recommended comments to the article of Sarah:

‘We just survived 4 years of Donald Trump, a President who tried to normalize anger and cruelty towards others. As the country’s preeminent authority figure, he gave people tacit permission to hurt others verbally, and as we saw on January 6 2021, physically as well. Add in the frustrations of the pandemic, made worse by his incompetence and polarization, and here we are, collectively exhibiting PTSD’.

21

David J. Littleboy 01.02.22 at 7:39 am

“generating text that scans as naturally-generated is a hard task. ”

Hmm. That’s not my recollection of things. From the Winograd block world to all of Roger Schank’s students’ programs, turning an internal representation into reasonable English was the easiest part. Coming up with that internal representation, on the other hand…

“The state of the art has moved forward, which means engineers are more likely to be able to apply these techniques to produce outcomes we actually care about.”

But the interesting thing is that “neat AI” tends to come up with ad hoc, specific techniques that don’t apply elsewhere. The alpha-beta algorithm for chess and the like (based on the heuristic that if you already have a good move in hand, you don’t care how bad a bad move is) and Monte Carlo tree search in Go (a flipping brilliant idea, and the reason Google’s programs are so good – it predates Google’s interest in Go) are dependent on the universe being a game with rules and very specific properties. Someone quipped the other day that there are hundreds of medical imaging startups, but not one radiologist has been (me: or probably ever will be) replaced. That’s because neural nets recognize textures, not shapes, can’t count (tell the difference between one elephant and two), and can’t generalize (except to notice that things surrounded by green pixels are cows, which is why they can’t recognize a cow on a beach). The articles on Watson’s performance in the medical world have all been behind paywalls, but the titles give the story away: Watson has failed miserably. (For the obvious reason that science is hard, and that real world data doesn’t actually include the sort of data that real science needs.)
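For the curious, the alpha-beta idea fits in a couple of dozen lines. A rough sketch, assuming hypothetical node.children(), node.is_terminal() and node.evaluate() helpers for whatever game gets plugged in (the textbook skeleton, not anyone’s production engine):

def alphabeta(node, depth, alpha, beta, maximizing):
    # Minimax with alpha-beta pruning: once a branch provably can't beat the
    # best move already in hand, stop exploring it (the "you don't care how
    # bad a bad move is" heuristic).
    if depth == 0 or node.is_terminal():
        return node.evaluate()            # static evaluation of the position
    if maximizing:
        best = float("-inf")
        for child in node.children():
            best = max(best, alphabeta(child, depth - 1, alpha, beta, False))
            alpha = max(alpha, best)
            if alpha >= beta:             # the opponent would never allow this line,
                break                     # so prune the remaining siblings
        return best
    else:
        best = float("inf")
        for child in node.children():
            best = min(best, alphabeta(child, depth - 1, alpha, beta, True))
            beta = min(beta, best)
            if alpha >= beta:
                break
        return best

Called from the side to move as alphabeta(root, depth, float("-inf"), float("inf"), True).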

To the best I can tell, the emperor is buck naked…

FWIW, the book “Rebooting AI” is highly recommended. It’s getting a tad long in the tooth, but it still seems to have the right idea.

22

MisterMr 01.02.22 at 10:50 am

@johne 12

I did read Debt some years ago. My opinion, based on those parts of history the book covers that I think I know well enough without being a specialist (that is, European history in the Middle Ages and the early modern age), is that Graeber described those periods with an extremely broad brush, so that an imprecision of a century or two was to be expected because of the broad-brush approach rather than counted as an error.
This happens because Graeber is mostly speaking of general social trends rather than specific events.
If you ask me when Charlemagne was crowned emperor, I can tell you it was the year 800, on Christmas Day. But if you ask me when the stagnant economy of the early Middle Ages started to grow again and turned into the mercantile economy of the late Middle Ages, well, it depends: when I was in high school I was taught around the year 1000, and now many historians say around the year 800 (go Charlemagne!). It depends on what part of Europe you’re looking at, there were many fits and false starts and crises, it’s difficult to quantify how mercantile a society is, and one can give many interpretations.

In this fuzzy situation, it seems to me that Graeber picked the interpretation that best suited his argument. This isn’t wrong, because Graeber is using examples to explain his theory; however, it can’t be hard proof for the theory either.
This honestly is true for more traditional interpretations too, as history is complex and a lot of stuff happened in the past.

So I think the Debt book is a good book if you take it as a book explaining an interpretation of history, but if you take it as a book that is trying to prove an interpretation beyond reasonable doubt it is clearly not enough (honestly no single book can summarize 5000 years of history without using a veeery broad brush).

From this point of view, I think that the whole Apple brouhaha was overblown: it is really a minor example and not very important for the whole story. If you think that the book covers 5000 years of history of the whole world, how important can an error about the founding of a single corporation be? More or less than two centuries of difference in the estimation of when the turning point of the middle ages is?

However, many people first hailed the book to the stars, and then threw it in the mud, because they made this interpretive error: they expected the book to give hard proof for its theories, which is impossible for a book that covers 5000 years of history of the whole world.
If you take it as a book that uses history as an example to explain a theory (and therefore cherry-picks a bit), the book is very good IMHO.

Overall I think people often underestimate how much traditional interpretations of history could be subjected to the same critiques.

23

steven t johnson 01.02.22 at 8:18 pm

MisterMr@22 Re “traditional interpretations of history”: The principle that the familiar is plausible has indeed supported much conventional wisdom. The inverse – that if it is not familiar, then it is not plausible – isn’t true either. And when unarticulated Popperian presumptions about “hard proof” are relied on, understanding gets even more clouded.

That said, I haven’t bothered to read Debt because I don’t understand in what sense “debt” is a thing. Five thousand years predates the use of coined money. It includes slavery as payment for debt, debtors’ prison, bankruptcy. It includes both usury – whether any interest at all or merely excessive interest – and the prohibition of usury. It includes both compulsory labor and free labor. And it includes formal debts between individuals, debts between public entities, and debts within families (which in principle includes tribes). Aren’t grave goods payment of debts owed to the dead?

24

J, not that one 01.02.22 at 8:35 pm

DJL @ 17

I put a lot of effort into understanding extended context-sensitive languages (so long ago I’d have to look up the precise name by which they’re called) and find the move to Markov chains disappointing. Then again I also put a lot of effort into understanding the nuances of C++’s various features and found the move to Java and then Python disappointing too. But I can see why the latter took place. The former, maybe still not so much (my 13yo just reported that a search for “MCU movies in order” produced a list beginning with “Superman”). And obviously if facial recognition had an actual concept of “face,” rather than recognizing merely “oval-shaped collection of pinkish-white pixels with darker pixels at more or less set locations within it,” it would be less racist in implementation.

But that doesn’t really address the question whether all language involves a thing one person has that they intend to pass on to another person (or multiple people). Poets report all the time that they don’t know what they’ve said until they see the words on the page, and even then not until later. We see patterns that don’t always have conceptual explanations, but we don’t unsee them. Theorists suggest that “language speaks us” and not the reverse. I think that’s false at least some of the time, but my interest in the cases where it’s false (and where that fact is important), maybe, shouldn’t keep me from listening to people who say that for them it is true. And at times I wonder if I’m on the verge of grasping myself how this can be.

25

John Quiggin 01.03.22 at 12:38 am

Alan @16 I think a Trump win (legal or otherwise) is virtually inevitable unless he decides not to run, or is beaten by an overwhelming margin. And however he wins in 2024, there won’t be a real election after that.

The post-2024 models I’m focusing on are the Civil Rights and anti-war struggles of the 1950s and 1960s. As you suggest, most places in Red America are going to resemble the South with a combination of repression and extra-legal terror. The cities will I think be more like university campuses in the 1960s, dominated by opponents of the regime, but with occasional episodes of police violence.

I don’t see a good end, but all things must pass.

26

J-D 01.03.22 at 3:34 am

… no single book can summarize 5000 years of history without using a veeery broad brush …

No single book can summarise five days of history without using a veeery broad brush.

27

J-D 01.03.22 at 4:24 am

I think a Trump coup is virtually inevitable unless he decides not to run, or is beaten by an overwhelming margin.

It seems you are leaving no room for the possibility of his winning fair and square

I’ve fixed this, but your constant pedantic nitpicking is getting tiresome. You’re a regular reader, so you know I acknowledged this possibility in my last post. From now on, here and on my blog, I’m going to delete anything from you pointing out logical holes in other people’s writing. Stick to substantive comments, please. Also, no responses to this, please. Just do what I ask. – JQ

28

nastywoman 01.03.22 at 4:45 am

‘The intimidation factor of the aggressiveness of the far right is a real and frightening one. The asymmetry between the deep emotional hate of the far right and the more tempered hate from the left is a real factor. When Trump was in office, even then I could not risk posting Fuck Trump signs in my yard for fear of real reprisal–but there are now several such in my town for Biden, and even more for the Let’s Go Brandon meme. It’s that strong cultish asymmetry that worries me about any uprising promoting Trump in ’24, even if Dems can win the White House’.

But don’t you think there is some kind of… ‘backlash’ to all this crazy Right-Wing Racist Science Denying in America in general – and as I already drove through the whole country
four times and I know nearly all of ‘the pockets of insanity’ – and I will drive a fifth time through THE homeland in April – I just by planning it sense some kind of an ‘awakening’ from the bad trump-dream – as it is/was just too insane and stupid. And –
okay –
okay –
my fellow Americans might never give up yelling: I’m not wearing any f… mask.
AND the gubernment can’t make me do so –
BUT everyday – more and more of them get a little bit more…
‘educated’?

29

David J. Littleboy 01.03.22 at 5:37 am

“But that doesn’t really address the question whether all language involves a thing one person has that they intend to pass on to another person (or multiple people). Poets report all the time that they don’t know what they’ve said until they see the words on the page, and even then not until later. ”

I’m not concerned with “all language”; just the language used in everyday normal human interactions. Poetry is a different thing, as the Ern Malley hoax shows. If you say “I own three elephants”, then dealing with that statement sensibly (understanding that it’s ridiculous for most people, and that either there’s a back story (you inherited a circus) or you’re full of it) requires things that current AI isn’t getting any closer to doing.

Whether one can “do poetry” without being able to count elephants is a question that seems to me to have an easy and obvious answer: of course you can’t. Does it mean anything to write/read poetry if you can’t count elephants? Whatever; if you can’t count elephants, you can’t do everyday language. And current AI (including and especially GPT-3) refuses to bother counting.

By the way, Python as a language is hilarious: how do you express the programming construct “end block” in Python? By a negative amount of whitespace. Friggin’ insane. (I’m using Python for some amateur linguistics (comparing word usages in pre-war Japanese lit. with recent light fiction). I should have used C++, but making C++ handle Unicode nicely was irritating enough that I punted.)
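To illustrate the gripe for non-Python readers: the only way to say “this block is finished” is to stop indenting (a minimal sketch):

for word in ["foo", "bar"]:
    print(word)         # inside the loop: indented
    print(len(word))    # still inside the loop
print("done")           # dedented: the loop body has ended, with no brace or "end" in sight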

30

JimV 01.03.22 at 2:36 pm

“And those better theories will help us understand what it is about human intelligence that makes it so enormously and qualitatively different from “animal intelligence”.” (DJL)

The nematode C. elegans has about 300 neurons and can use them to navigate and memorize a simple maze. Smart breeds of dog have about 500 million neurons and can learn hundreds of spoken commands. Humans have about 100 billion neurons. Quantity is quality.

Many groups of neurons have been hard-coded by millions of years of evolution to perform specific tasks such as vision. It is conceptually possible that H. sapiens has some such hard-coded set of neurons which no other animal has, but since the amount of Homo evolution is small compared to overall animal evolution, this seems very unlikely to me. I think it is the processing power of un-dedicated neurons which is the key.

31

SusanC 01.03.22 at 3:28 pm

GPT-3 fails the Turing Test spectacularly.

If you’re given a set of short texts, some written by a human and some written by GPT, it is really easy to spot which ones are by GPT. GPT output is a bit dream-like, in that it doesn’t keep consistency about the fictional scene it’s describing and combines elements that in reality don’t go together.

To get a reasonable output out of GPT, you need a human telling it when to retry – the human is providing the semantic understanding of the text that GPT conspicuously lacks.

It’s great for cheating on your French homework though, if you know the language well enough to read it but aren’t all that good at composition … just hit retry till you get a sentence you like, and you can write a short story that way.

Though sometimes, I find myself thinking “is that actually a French idiom, or just some random French words that GPT has thrown together”. (In my native English, I can tell a genuine idiom from a made up one. In French, you might be able to fool me with something plausible that no actual French speaking person would say.)

32

SusanC 01.03.22 at 3:54 pm

To further clarify the Turing test point.

Suppose that the probability p that the next sentence generated by GPT will be deemed acceptable by a human reader is about 60% (made up number, but it’s around that order of magnitude).

If the judge is allowed to ask GPT for many next sentences, and GPT fails the Turing test if it gets it wrong even once, then the 0.4 probability of failure at each iteration stacks up.

On the other hand, suppose we are playing a different game. The judge asks for multiple next sentences, but the AI is allowed to retry if the judge rejects the sentence. (This is not the Turing test, or even close to it). Here what we are concerned about is the expected number of tries needed to complete an essay of length N (e.g. to do your French homework for you). The AI is acceptable if the expected number of tries is less than N times some small constant (between 1 and 3, say). If p is near zero, the expected number of retries is very large and the AI fails. With p=0.6, it can get to the end of your French homework in an acceptable amount of time.
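Spelling out the arithmetic with the made-up p = 0.6 (a back-of-envelope sketch, nothing more):

p = 0.6          # made-up probability that a generated sentence is acceptable
N = 10           # sentences needed to finish the essay
print(p ** N)    # Turing-test game: chance of surviving all N judgments, about 0.006
print(N / p)     # retry game: expected total tries, about 16.7 (1/p tries per sentence)

So the same p that sinks it as a Turing-test contestant is perfectly workable for homework-by-retry.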

33

SusanC 01.03.22 at 4:06 pm

P.S. I’m expecting someone to mention Roland Barthes’ “The Death of the Author” sometime soon. (Plus Foucault, plus Derrida.) If the author of the text is GPT, then it didn’t have any intentions…

34

Mark Pontin 01.03.22 at 6:04 pm

Today’s graph of current Chinese average life expectancy vs. that of the US; the US trending rapidly down – plummeting, in fact – while China trends upward quite steeply, passing US lifespans, at an approx. 28-degree angle:

https://pbs.twimg.com/media/FHYE7YYWQBgvF9C?format=jpg&name=medium

This on top of 800 million moved out of poverty. I wouldn’t say it’s quite game, set, and match to the ChiComs — the US can still launch a nuclear war against China (and suicide in the process, of course) — but close to it.

35

Frank Wilhoit 01.03.22 at 7:17 pm

David J. Littleboy @ 17:

“…Plug Bach or Mozart’s composition rules into a random number generator and what will come out will sound like Bach or Mozart….”

No, it won’t. It will sound, at best, like their numerous forgotten contemporaries; and at worst, because the rules are cram-full of edge cases, it will sound plainly wrong.

In order to sound like them, it would have to mimic their individual personalities, which are vastly more important than “the rules”.

36

reason 01.03.22 at 10:28 pm

nastywoman @19 – Isn’t this the cheese shop sketch from Monty Python?

37

Alan White 01.04.22 at 3:29 am

nastywoman@28

I honestly wish I could agree. But roughly 20-30% of Americans are staunchly drinking Cult 45 daily, and they will never sober up. I mean, damn, they’re condemning him for softly peddling the vaccines! There is no educating this segment of the population–hell, my own brother, a PhD like me, swallows all this Fox shit as if it were candy.

For my part, FTITAWAC–where the first T is a proper name, the first A is a body part, and the C is a desert plant.

38

nastywoman 01.04.22 at 5:05 am

‘Isn’t this the cheese shop sketch from Monty Python?’

NO –
it is actually ‘the archaeology today’ sketch – as it predated ‘Don’t Look Up’ so beautifully.

https://dai.ly/x2ymzrq

39

johne 01.04.22 at 6:09 pm

MisterMr @ 22

Thank you for your interesting and lucid take on the unevenness of Graeber’s “Debt.” I guess I like my traditional-style histories to have a bit more detailed brush-work as they get closer to the present, but we all vary in the amount of risk that we will comfortably accept.

40

Orange Watch 01.04.22 at 6:18 pm

DJL@21:

I’ll start w/the caveat that I’ve been out of NLP since the late aughts, and that my area was automated semantic role labeling, so I was focused on a subproblem of generating a semantic structure for parsing rather than generalizing knowledge representation, and my focus was processing rather than generation. Having said that, though, generating fluent text is not an easy task even if there are harder ones out there. The purist view that nothing counts except a rationalist general AI that can use intentionality to manipulate a knowledge representation structure to achieve specific goals via communication – which sounds like the standard you ultimately aspire to with your talk of neat vs. fuzzy – isn’t terribly appealing. Among other things, it’s got fairly serious ethical issues baked into it – it proposes the creation of actual intelligences to perform mundane, monotonous tasks, and both to place fundamental constraints on their freedom and to create, copy, destroy, and utterly invade the privacy of them at will. That’s a problem when for the most part we want narrowly-focused AIs that can perform narrow repetitive tasks. “Neat” AI is ethical for almost all AI applications in ways that “fuzzy” AIs are fraught.

However, setting all that firmly aside, your stance seems to be a dogmatic restatement of the rationalist vs. empiricist debate in AI, and essentially declares all empiricist models to be “uninteresting” because they do not produce systems that are easily intelligible to humans. That’s a rather problematic standard, however, since human cognition is not easily intelligible to humans. We only have one clear, unequivocal example of what we recognize as being fully sapient intelligence, and that is based on high-connectivity, low-speed biochemical neural networks rather than elegant, high-level and human-readable knowledge representation systems. There is no evidence that the rationalists will ever be able to create general intelligence, and by and large they’ve definitively failed to create practical specialist AIs that can scale to useful levels. I’m sympathetic to the rationalist POV – in my heart I’ve always been one – but that doesn’t mean the rationalist project has ever been feasible, let alone that the empiricist project is useless, doomed, or even uninteresting.

41

SusanC 01.04.22 at 6:51 pm

GPT can’t even handle negation properly, e.g. “There is no rhinoceros in the room.” And then it carries on to talk about the rhinoceros, because “rhinoceros” occurred previously in the text and it doesn’t really know what “no” means.

Some of the research into how human beings acquire language considers the ability to learn negation words to be an important feature. It’s revealing that GPT can’t even get that far.

42

Peter Erwin 01.04.22 at 7:57 pm

@johne 12

I stopped being able to take David Graeber seriously after I read his 2012 essay “Of Flying Cars and the Declining Rate of Profit”.

If you actually know something about science and technology and the recent history of both (and the history of science fiction as well), then it’s a remarkable exercise in dense, sustained ignorance and bullshit. It is fractally wrong. (That is, wrong on all levels, from individual sentences all the way up to the basic question and the supposed answer.) Graeber repeatedly demonstrates that he is scientifically, technologically, and historically illiterate, in a breezily confident style that admits of no uncertainty whatsoever.

And it’s not like it was some brain-fart he later regretted, since he republished it as the central section in his 2015 book The Utopia of Rules.

So while the “Apple was created by IBM engineers in the 1980s” howler from Debt isn’t really relevant to the arguments in that book, it does strike me as entirely characteristic of Graeber. I find him catastrophically unreliable.

As for The Dawn of Everything, this review by an actual historian indicates that the entire introductory chapter on the Enlightenment, Rousseau, etc. is really, really bad:

… [Graeber and Wengrow] begin by examining how Western thinkers have previously treated the subject, and in doing so they first turn to the French Enlightenment. This happens to be my own area of expertise, and I was curious to see what they would make of it. Quite frankly, I was appalled. Unfortunately, despite its promise, the work suffers from a slipshod and error-filled approach to this key moment in modern intellectual history.

Right at the beginning of the chapter, Graeber and Wengrow give readers a very big reason to doubt their scholarship…. These short sentences contain a simply astounding collection of errors…. the rest of the chapter is just as sloppy and erroneous.

43

David J. Littleboy 01.05.22 at 12:25 pm

“which sounds like the standard you ultimately aspire to with your talk of neat vs. fuzzy”

I don’t aspire to much of anything for AI. I think that any talk of “intelligence” in machines is at least multiple decades, if not centuries, premature. I think that what AI ought to be is a corner of psychology, nothing more, nothing less. Since “neat AI” has no theory, it’s a completely bogus field/universe. And is going to continue its recurring cycles of excessive hype followed by crash and burn. The current state of much of the field (with the exception of Gary Marcus and a few others) is problematic in the extreme.

On the other hand, I have no doubt that the human mind is a machine. Just one that’s way kewler and subtler and more wonderful than any of the current AI twats can imagine.

As far as ethics are concerned, it’s simple. AI programs are programs, and the person/organization that uses such a program must, as with any program, tool, or device, take responsibility for the output and how that output is used. “The computer said it so it must be correct” must be seen as unacceptable. Period. If the construction tool you are using regularly thwacks and kills passers-by, you can’t say “the machine did it”. Ditto for AI.

(Thanks SusanC, for doing my homework!)

JimV wrote:
“I think it is the processing power of un-dedicated neurons which is the key.”

I don’t. I think humans are qualitatively different. Lots of animals (great apes, elephants, some birds (I read somewhere recently that bird brains are actually better than mammalian brains on a per-neuron basis; they do really well without the convolutions mammalian brains need)) have nearly as many neurons as we do and aren’t even close. As I keep harping on, we’re the only animal on the planet that understands that sex causes pregnancy and pregnancy causes childbirth. That’s really wild: no other animal can reason about family relationships of any sort, at all. We’re the only animal on the planet that can understand weekends. The difference is enormous. Also, every time we figure out a circuit in some brain, we see that it’s gloriously well designed. (That is, there aren’t any undedicated neurons. Some can be rewired, but as used*, they’re not undedicated.) But despite a lot of evolutionary effort, evolution only figured out intelligence (i.e. symbolic reasoning) once.

This “as used” bit is important. To learn something, you need to have the ability to do that something without learning. If you are going to theorize about “learning”, not understanding how to do X means you don’t know how to learn X, either. This is why I think that the current blind faith in “machine learning” is so misplaced; it’s completely backwards.

Frank Wilhoit @35 wrote stuff I actually agree with, but it seems folks nowadays aren’t as tuned in to Bach’s and Mozart’s art as one would hope, and programs (sorry, no references at hand) have been known to fool folks into thinking the stuff they generate is real Bach. Sigh.

While I’m grinding axes, SusanC mentions the Turing test. (Using the usual concept thereof to make a valid point; I’m ranting off on a tangent. Sorry.) What most people call “the Turing test” isn’t what Turing had in mind at all. Turing wanted the computer to play a game. The judge in the game wasn’t judging the computer for its language, s/he was judging the computer for how well it played a game. What game was that? A game in which a WOMAN tries to persuade the judge that she’s a WOMAN and a MAN tries to fool the judge into believing he’s a WOMAN. The judge has to decide if they’re both women, or, if one isn’t, which one it is. This is a seriously subtle game, not the stupid joke that most people talk about when they talk about the “Turing test”.

(I caught Daniel Dennett saying something inane about the Turing test (probably at 3 quarks daily) and pointed out (a) the above and (b) that it wasn’t me, but Roger Schank who noticed this. Dennett responded that he knew this quite well thank you and that he was the one who had told Roger about it. And he did not admit that what he had said in the conversation was inane. This irritated me no end, because if he knew it, he shouldn’t have been saying inane things. Maybe he knew it and was just careless. But he’s (supposed to be) one of the philosophers who “gets it” and has a responsibility to not be careless. And to admit it when he messes up.)

44

Lumierex 01.05.22 at 1:57 pm

@ Peter Erwin

I agree with you that Graeber is “catastrophically unreliable.”

“The Dawn of Everything” is a biased, disingenuous account of human history that spreads fake hope (the authors of “The Dawn” claim human history has not “progressed” in stages… so there’s hope for us now that it could get different/better again). As a result of this fake hope porn it has been widely praised. It conveniently serves the profoundly sick industrialized world of fakes and criminals. The book’s dishonest fake grandiose title shows already that this work is a FOR-PROFIT, instead of a FOR-TRUTH, endeavor geared at the (ignorant gullible) masses.

Fact is human history has “progressed” by and large in linear stages, especially since the dawn of agriculture (www.focaalblog.com/2021/12/22/chris-knight-wrong-about-almost-everything ). This “progress” has been fundamentally destructive and is driven and dominated by “The 2 Married Pink Elephants In The Historical Room” (www.rolf-hefti.com/covid-19-coronavirus.html ) which the fake hope-giving authors of “The Dawn” entirely ignore, naturally. And these two married pink elephants are the reason why we’ve been “stuck” in a destructive hierarchy, and will be into the foreseeable future.

A good example that one of the authors, Graeber, has no real idea what world we’ve been living in and about the nature of humans is his last brief article on Covid where his ignorance shines bright already at the title of his article, “After the Pandemic, We Can’t Go Back to Sleep.” Apparently he doesn’t know that most people WANT to be asleep, and that they’ve been wanting that for thousands of years (and that’s not the only ignorant notion in the title). Yet he (and his partner) is the sort of person who thinks he can teach you something authentically truthful about human history and whom you should be trusting along those terms. Ridiculous!

“The Dawn” is just another fantasy, or ideology cloaked in a hue of cherry-picked “science,” served lucratively to the gullible ignorant underclasses.

45

Orange Watch 01.05.22 at 5:33 pm

SusanC@41:

It’s true that a tech like GPT-3 isn’t going to reveal a great deal about how language acquisition works, but that’s not the only interesting problem in language technology. There are a lot of applications where fluent imitation of existing corpora is a satisfactory result, and more pointedly, it’s not impossible that a tech like this could be used as a subsystem in more rigor-demanding applications. As you said earlier, it can produce interesting results with human mediation, and that certainly invites comparison to machine-assisted translation, as well as the possibility of developing specialist AI capable of approximating semantic comprehension. Indeed, while I’ve been out of the field for a decade and a half, that was the general goal in semantic role labeling: identifying the semantic structures and relations in corpora so that other applications could exploit that information. I’m not specifically familiar with GPT in particular, but it seems possible that even if it’s a painfully naive statistical predictor, a more robustly-tagged corpus could potentially produce significantly more complex results, with semantic structure as an emergent feature, by effectively encoding semantic rules in the training corpus rather than the generator…

46

alfredlordbleep 01.05.22 at 6:59 pm

Waking Up Dead
“We will not require hands on the steering wheel and we will not require eyes on the road,” Henrik Green, the automaker’s chief technology officer, said in an interview with The Verge. But when asked whether drivers will be able to take a nap in the car with Ride Pilot active, Green demurred.

“We’re still being very purposely non-distinct in the wake-up time that we require,” he said. “Taking a nap requires a wake-up time, so let’s see how far and when we can get there. You need to be able to assume control in a certain time and take back the driving responsibility.”—Volvo confident (from The Verge)

47

Tm 01.05.22 at 8:32 pm

Peter Erwin 42, thanks for the appraisal and link. My own impression is similar: what I read of Graeber seemed empirically and/or conceptually indefensible.

48

Tm 01.05.22 at 8:39 pm

Regarding the future of democracy, at the end of the thread there was some debate about betting. I find this puzzling but also typical for the mainly American obsession with prediction. Don’t you guys see (I was going to write on the other thread but it was already closed) that the point isn’t predicting the future, the point is making it better? The task isn’t predicting the rise of fascism, the task is preventing it? Does that really need pointing out???

49

nastywoman 01.06.22 at 8:33 am

and about the ‘betting thing’ – I absolutely LOVED to bet with Right-Wing Racist Science Denying Idiots – as they tend to believe such crazy things – and some of them are even willing to bet on it –
as winning against them –
is some of the easiest earned money –
in order to donate it afterwards for causes the Right-Wing Racist Science deniers absolutely HATE –
like fighting Climate Change…

50

JimV 01.06.22 at 3:03 pm

DJL@43: Some good points–I did not know elephants had more neurons than humans. According to the Internet, no other primate does:

“Humans have more brain neurons than any other primate—about 86 billion, on average, compared with about 33 billion neurons in gorillas and 28 billion in chimpanzees. While these extra neurons endow us with many benefits, they come at a price—our brains consume 20% of our body’s energy when resting, compared with 9% in other primates.”

It is my understanding that in some known primitive societies, the link between sex and pregnancy was not known, implying this is something that has to be learned, and once learned, passed on. (I for one was not born with this knowledge either.) Humans have a great advantage over elephants in transmitting knowledge over generations through the use of technologies which require hands. (Word of mouth not being a reliable long-term method.)

I was born without the ability to play guitar. I trained my neurons to do so, over years. Undedicated is probably the wrong term, perhaps re-dedicatable is better. (An elephant has the neurons to do it, but not the hands.)

According to archeological evidence, it took human societies over 150,000 years to produce the wheel and axle. From that point, technology snowballed with gears, pulleys, capstans, mills, etc. (including hard drives). Combine this with enhanced knowledge-transmitting technologies, and it seems to me that trial-and-error based on pattern-seeking plus memory accounts for all humanity’s accomplishments. (Which is what neural networks do.) What trial and error plus memory (plus manipulative ability) has produced once, it can do again. It just needs super-computers capable of running neural networks of about 100 trillion nodes (since a neuron currently needs 1000 nodes to simulate), running for perhaps 10,000 years. (Consuming lots of energy.) Or we can pre-program things humanity already knows, such as the rules of Go, and use smaller neural networks to optimize their use.
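The back-of-envelope arithmetic behind that node count, using the roughly 86 billion human neurons quoted above and the 1000-nodes-per-neuron ratio:

neurons = 86e9              # human brain, rough figure from the quote above
nodes_per_neuron = 1000     # artificial-network nodes needed to simulate one neuron
print(neurons * nodes_per_neuron)   # 8.6e13, i.e. on the order of 100 trillion nodes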

51

JimV 01.06.22 at 5:51 pm

P.S. Another piece of evidence which has helped form my opinion is that, according to Wikipedia, there are a few reliable instances of human children raised in the wild without human contact or training, and when found as teenagers or older, they never learn to talk, or use spoons or forks, or other basic behaviors. This tells me that all the advantages humans have developed over many thousands of years are not hard-wired, but the result of the long training, passed on over generations, of a sufficient number of adaptable neurons.

52

Tm 01.06.22 at 8:38 pm

Lumierex @44 Thank you too for the link to the excellent (and very critical) review (www.focaalblog.com/2021/12/22/chris-knight-wrong-about-almost-everything), from anthropologist Chris Knight.

I also found very enlightening this older essay by anthropologist Camilla Power: ‘A response to David Graeber & David Wengrow’s “How to change the course of human history”,’ Libcom.org, Camilla Power (https://libcom.org/history/gender-egalitarianism-made-us-human-response-david-graeber-david-wengrows-how-change-cou)

Still puzzled by the hype about Graeber and Wengrow but I learn a lot from this debate so I’m grateful for that.

53

David J. Littleboy 01.07.22 at 1:31 am

JimV wrote “neural networks”

If what you meant by “neural networks” is what the current AI folks mean by “neural networks”, then you do realize, don’t you, that it takes a 1,000 element “neural network” (times 5 or 7 layers of depth) to simulate a single mammalian neuron? (A recent paper used a “neural network” to simulate a single neuron.)

This is one of my main criticisms of current AI. AI’s “neural networks” aren’t even close to “modelling neurons”, even though pretty much every article you read tells you that they are. We’ve known, for a century now, the general shape of neurons (cell body with a long axon (thousands of outputs) and a wide dendritic tree (hundreds of inputs)), and we now know that the average mammalian neuron makes 7,000 connections, is extensive in space, that layer 1 of the neocortex contains almost no cell bodies, etc. etc. etc.

But, again, being “learned” doesn’t explain how a complex behavior is actually performed. (This is also a weakness of evolutionary arguments: evolution tries things; it doesn’t discover or explain the physics and engineering.) We don’t know what the data structures look like, how they’re stored. A recent paper found that the part of a rat’s brain that represents local space is also used at the same time to represent much larger expanses of space. We’re still figuring out basic signaling. (There’s a new book called “Spike!” that’s quite good.) Figuring out how neural circuits work is hard. There was an article on mouse whisker circuits that completely missed an alternate and quite reasonable possibility of how the circuit works, and the author twisted himself in knots because without that (other possibility that I quite randomly happened to think of) the mouse circuit really couldn’t do what his experiments showed it could do.

I do, however, think it’s a good point that we didn’t figure out a lot of the things we figured out until quite late in our history (writing, especially). What that observation has to be considered along with, though, is the abject failures of the attempts to teach “language” to our nearest relatives. (Holly Dunsworth* has been on this case; the whole “Koko” thing was egregious in the extreme, but was widely reported uncritically.) We learn things that we are capable of learning.

*: https://www.uri.edu/features/holly-dunsworth/

54

bad Jim 01.07.22 at 5:18 am

It’s hard to tell how much language a dog understands, though I feel comfortable in asserting that some dogs understand more than others.

Here’s an interesting finding: Dogs can distinguish speech from gibberish and tell Spanish from Hungarian

55

JimV 01.07.22 at 3:25 pm

I have seen credible reports of some birds (e.g., Steve the Grey Parrot, and some crows) learning some human language. Again, it seems to me it requires a lot of processing power, which most other animals don’t have to spare, just as my laptop can’t run AlphaGo.

There are innumerable fine details of what evolution has produced over billions of years in super-massive parallelism (billions of bacteria in a shovel of dirt), but the basic principles of trial and error plus memory plus manipulative ability seem firmly established to me. And yes, I believe I was the first here to point out the 1000 nodes/neuron ratio. That we remain a long way from matching biology’s processing power in electronics has been one of my consistent points. That doesn’t mean we couldn’t accomplish a lot with the electronic equivalent of a mouse’s brain, dedicated to a single task.

56

Jim Harrison 01.07.22 at 6:34 pm

I don’t think we would or should go to war to defend Taiwan, but nobody should have any illusions about how ghastly the invasion and occupation of the island would be or how much pressure the spectacle of the thing would put on the American administration to respond. We hadn’t expected to get involved in Korea back in ’49 until we did. It also strikes me that a China willing to take the huge risk of starting a war over Taiwan would be likely to engage in other adventures, presumably motivated by a need to change the subject from domestic problems and declining legitimacy.

57

Orange Watch 01.07.22 at 6:57 pm

DJL@43:

… I think that what AI ought to be is a corner of psychology, nothing more, nothing less. Since “neat AI” has no theory, it’s a completely bogus field/universe.

You’re falling into a mental trap cobbled together out of run-down ivory towers. There are many tasks that humans can do that we would like to automate. In the past, the rationalists crafted incredibly complex, hyper-specialized, and hard-to-produce-and-maintain expert systems to do such tasks, poorly. The empiricists, while unsatisfying from a theoretical standpoint, are demonstrating great proficiency at producing expert systems easily and efficiently. The issue that I’m seeing with your POV is that you want AI to tell us great truths about human cognition, and theories about human cognition. That’s… not what AI has ever been focused on. Most of the field is seeking to produce tools that can successfully approximate intelligent decision-making, and theories about how to produce such tools. In the past, the field favored the naive approach of “make programs that go through the process of abstract reasoning”, but that has been found to be impractical and cumbersome, to say the least. The rationalists have given way to the empiricists, and here we are.

Simply put, you’re conflating cognitive science with AI. There is relation and overlap between the two fields, but they are not and have never been the same, and your apparent disdain for the base motivations of AI does not change that.

As far as ethics are concerned, it’s simple. AI programs are programs, and the person/organization that uses such a program must, as with any program, tool, or device, take responsibility for the output and how that output is used.

It's quite troubling that you on the one hand continue to advocate for AI as only being generalized sapient intelligence (and thus centuries or more away), but then turn and describe such AI as programs and tools, whereby the only possible ethical question is how a person uses their output. If we eventually end up with the sorts of AI you consider to be "real" AI, but use them for the sorts of tasks we use modern, per-you-not-really-AI systems for, we would be enslaving sentient beings, experimenting on them, creating and destroying them on a whim, and treating them as tools rather than intelligences. Your conception of AI is morally problematic, to say the least – and not merely if used to bad ends. Creating approximations of intelligence that are capable only of limited decision-making, with no self-awareness, is not morally fraught in the way that the AI you seem to want us to strive for would be, regardless of its use. If we devise real intelligences, and then limit both their freedom and cognition so as to force them to do whatever mundane and repetitive tasks we choose for them, it doesn't matter whether we're using them to optimize disaster relief or predatory loan marketing. Any denial of this requires some assumption about the human mind whereby you do, after all, doubt that it really is just a machine…

58

SusanC 01.07.22 at 7:54 pm

Some horses understand about a dozen command words.

One thing that is odd about this: a horse clearly understands fairly high-level plans, not just low-level movement instructions (e.g., get from here to over there by going through that course marked out with traffic cones, without knocking any of the cones over), but it only knows words for low-level concepts like stepping backwards, starting to move, stopping, and increasing or decreasing speed.

59

bad Jim 01.08.22 at 6:02 am

An anecdote about Alex the parrot:

One time, during an experiment that bored him, he repeatedly asked for a nut—but the researcher ignored him again and again. He figured he had to spell it out for her. So he said, “Want a nut! Enn-Uuu-Tee!”

Crows spend a lot of time vocalizing; it seems highly likely that they very much enjoy the exchange of information.

60

David J. Littleboy 01.08.22 at 3:04 pm

JimV wrote ” just as my laptop can’t run AlphaGo.”

Actually, your next laptop will probably be able to run AlphaGo just fine. The Go programs running on (fast desktop) PCs were getting better and better, and beginning to give strong professional players problems. (The MCTS algorithm was the big breakthrough; prior to that, no one had a clue as to how to get computers to find good moves in Go.) The folks at Google threw a server farm at the problem (using exactly the same algorithm: MCTS) and beat the world champion. Then they had the idea of offloading the most computationally taxing part of the program onto GPUs and the brilliant PR/hype idea of calling it “neural nets”. (GPUs, the Go board, and “neural nets” all have, conveniently, the same geometry.)

Still, getting the “neural net” computational model to do pattern matching in Go is pretty kewl.

On the other hand, the AlphaZero idea of building the positional pattern database by having the program play itself is something anyone who had ever written a chess program would have thought of. Still, Google actually did it. Actually doing the work is important in Comp. Sci.

Furthermore, they wrote it up well enough that people could (and did) reimplement it. Leela Zero and KataGo are two reimplementations that crowd-source the training.

To get back to the point, currently KataGo (which comes bundled with the KaTrain UI program: just download the KaTrain .exe and run it) plays well above human strength on a desktop with even a mid-range GPU card. (My Go's a tad rusty; against pros, I used to win with 4 stones and lose with 3. I hope to get to the point that I can hold on with 4 stones against KataGo. One of my 2022 projects.)

But although pre-2021 laptop GPUs were not adequate for KataGo, it looks as though 2022 laptop GPUs will be. So your next laptop will kick your butt.
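For anyone curious what MCTS actually does, here is a deliberately tiny sketch of the selection/expansion/rollout/backpropagation loop, played out on a toy game rather than Go. The game, the node class, and every name in it are illustrative stand-ins of mine, not anything from the AlphaGo or KataGo code; roughly speaking, AlphaGo kept this skeleton but biased the selection step with a learned policy network and replaced the random rollout with a learned value network.

import math
import random

# Minimal Monte Carlo Tree Search with the UCT selection rule,
# demonstrated on 5-stone Nim (take 1 or 2 stones; taking the last stone wins).

class NimState:
    def __init__(self, stones=5, player=1):
        self.stones, self.player = stones, player
    def legal_moves(self):
        return [m for m in (1, 2) if m <= self.stones]
    def play(self, move):
        return NimState(self.stones - move, -self.player)
    def is_terminal(self):
        return self.stones == 0
    def winner(self):
        return -self.player  # the player who just took the last stone wins

class Node:
    def __init__(self, state, parent=None, move=None):
        self.state, self.parent, self.move = state, parent, move
        self.children, self.visits, self.wins = [], 0, 0.0
    def ucb(self, c=1.4):
        if self.visits == 0:
            return float("inf")
        return self.wins / self.visits + c * math.sqrt(math.log(self.parent.visits) / self.visits)

def mcts(root_state, iterations=2000):
    root = Node(root_state)
    for _ in range(iterations):
        # 1. Selection: descend, always taking the child with the best UCB score.
        node = root
        while node.children:
            node = max(node.children, key=Node.ucb)
        # 2. Expansion: add a child for each legal move from this leaf.
        if not node.state.is_terminal():
            node.children = [Node(node.state.play(m), node, m) for m in node.state.legal_moves()]
            node = random.choice(node.children)
        # 3. Simulation: random playout to the end of the game.
        state = node.state
        while not state.is_terminal():
            state = state.play(random.choice(state.legal_moves()))
        winner = state.winner()
        # 4. Backpropagation: credit the result to every node on the path.
        while node is not None:
            node.visits += 1
            if winner == -node.state.player:  # the player who moved into this node won
                node.wins += 1
            node = node.parent
    return max(root.children, key=lambda n: n.visits).move

print("Best first move in 5-stone Nim:", mcts(NimState()))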

Orange Watch wrote:

Stuff about morality.

You can always turn off the power, copy the program to another computer, start it up again. It’s a program. There’s nothing moral or immoral going on here. Nothing whatsoever. Until you use the excuse “the computer did it”.

Also,

” you want AI to tell us great truths about human cognition, and theories about human cognition. That’s… not what AI has ever been focused on. ”

Huh? That's been a major focus of AI throughout its whole history. At least on this planet. Lots of people in AI (e.g. Minsky, Papert, Schank) have been focused on exactly that. And a few still are (e.g. Gary Marcus). Heck, the field's definition from the start was about explicating human cognition: "The study is to proceed on the basis of the conjecture that every aspect of learning or any other feature of intelligence can in principle be so precisely described that a machine can be made to simulate it."

By that definition, a program not based on a cognitive theory isn’t AI.

SusanC wrote:

“One thing that is odd about this: a horse clearly understands fairly high level plans,”

Exactly! Lots of animals do high-level planning in the wild. (Especially big cats hunting.) And can't do language worth beans. "Animal cognition" has to be wildly different from human cognition. And figuring out what animals actually can and do think about would be seriously kewl. Hopefully the neuroscience/neuroanatomy types will begin to speak to this more explicitly.

61

LFC 01.08.22 at 4:02 pm

Peter Erwin @42 says Graeber’s 2012 essay in The Baffler is total bullshit. I’ve never really read Graeber, and I was curious so I quickly read the opening paragraphs until I came to this:

End of work arguments were popular in the late seventies and early eighties as social thinkers pondered what would happen to the traditional working-class-led popular struggle once the working class no longer existed. …

What happened, instead, is that the spread of information technologies and new ways of organizing transport—the containerization of shipping, for example—allowed those same industrial jobs to be outsourced to East Asia, Latin America, and other countries where the availability of cheap labor allowed manufacturers to employ much less technologically sophisticated production-line techniques than they would have been obliged to employ at home.

I’m not sure the above is completely correct, but it’s certainly not “fractally wrong” and total bullshit, which is what Peter Erwin said every sentence in this Graeber essay is.

As for the piece’s broader, underlying argument, that may well be wrong (I don’t know), but the idea that every word in the piece is incorrect seems exaggerated.

62

KT2 01.08.22 at 10:54 pm

JimV said “the 1000 nodes/neuron ratio.”

If possible, please provide a link to the study. I think I read that paper but cannot pull it up now. As I read it, my takeaway was that:
* one neuron 'would' be like a six-layer net of 1,000 nodes*
– so, if you have a server farm and a GPU factory (or just an onboard processor), throw more processing and training at it. At some point it will be cost-effective for a given process, and a capitalist will approvingly toss a human onto the street. No need to define AI.
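To make the "one neuron is like a small deep net" comparison concrete, here is the kind of exercise the claim describes: invent a cartoon "neuron" whose output depends on nonlinear interactions among groups of its inputs, then check whether a multi-layer net can imitate it. Everything here is an illustration of mine, not the actual study: the stand-in neuron, the use of scikit-learn's MLPRegressor, and the layer sizes (much smaller than the six layers of ~1,000 nodes recalled above, so it runs quickly).

import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
X = rng.normal(size=(5000, 20))  # 20 "synaptic" inputs per sample

# Cartoon neuron: each group of 5 inputs passes through its own saturating
# nonlinearity (a "dendrite") before being summed and thresholded at the "soma".
dendrites = np.tanh(X.reshape(-1, 4, 5).sum(axis=2))
y = np.maximum(dendrites.sum(axis=1) - 0.5, 0.0)

# A small multi-layer net stands in for the much larger one described above.
net = MLPRegressor(hidden_layer_sizes=(128, 128, 128, 128, 128),
                   max_iter=500, random_state=0)
net.fit(X[:4000], y[:4000])
print("held-out R^2:", net.score(X[4000:], y[4000:]))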

This AI / NN / 'conscious AI' debate seems like the processor-versus-storage storm in a teacup from way back in the 1980s. Cheap chips, volume, and storage overcame any idea that smarter software was "needed" to work around storage and processing (i.e. money) bottlenecks; everything got threaded, and the "argument" simply disappeared into history as chip and storage access and costs became immaterial.

This AI "argument" will go away too, passed on to some new AI taxonomy. Yes, it will take from today to ???? – centuries, as one commenter said – to arrive at a conscious AI. At that stage of development I think we are still missing biological interfaces, in both directions. When will we accept a brain interface for decision enhancement?

In the meantime, whilst we all chat about it:
– a robot takes a warehouse worker's job,
– a barista gets a robot aid – and as soon as the coffee-addicted realise the robot is "better" than the human, au revoir barista
– many truck drivers
– mid-level management
– procedural art & artists
– most of doctors' routine work
– add your own category
… whether the above work is done by AI / NN / ML – who cares! The effect of "it is / is not really AI" is irrelevant to the commons and to the tech-progress bros & capital. A la "oh, automation is now x% cheaper than humans and falling. We will switch to robots when ROI is positive, after y criteria are met, AND we don't break the zeitgeist (the zeitgeist being most important when replacing humans)."

Our two big food retailers in Australia have now automated four warehouses, I believe, with approximately 2,000+ workers replaced. By what? Dumb AI? Smart ML? Big data? Or, a la Senge and The Fifth Discipline, just five systems put together to replace what we previously thought only humans could do.
*

I wish Karl Friston would write about this…
(admission – meeting and chatting with Karl Friston is on my bucket list)

“The mathematics of mind-time

“The special trick of consciousness is being able to project action and time into a range of possible futures

by Karl Friston + BIO

“I have a confession. As a physicist and psychiatrist, I find it difficult to engage with conversations about consciousness. My biggest gripe is that the philosophers and cognitive scientists who tend to pose the questions often assume that the mind is a thing, whose existence can be identified by the attributes it has or the purposes it fulfils.

But in physics, it’s dangerous to assume that things ‘exist’ in any conventional sense.”…

” To recap: we’ve seen that complex systems, including us, exist insofar as our Lyapunov function accurately describes our own processes. Furthermore, we know all our processes, all our thoughts and behaviours – if we exist – must be decreasing the output from our Lyapunov function, pushing us to more and more probable states. So what would this look like, in practice? The trick here is to understand the nature of the Lyapunov function. If we understand this function, then we know what drives us.

“It turns out that the Lyapunov function has two revealing interpretations. The first comes from information theory, which says that the Lyapunov function is surprise – that is, the improbability of being in a particular state. The second comes from statistics, which says that the Lyapunov function is (negative) evidence – that is, marginal likelihood, or the probability that a given explanation or model accounting for that state is correct. Put simply, this means that if we exist, we must be increasing our model evidence or self-evidencing in virtue of minimising surprise. Equipped with these interpretations, we can now endow existential dynamics with a purpose and teleology.”
https://aeon.co/essays/consciousness-is-not-a-thing-but-a-process-of-inference
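For anyone who prefers symbols to prose, the usual way this gets written down in the free-energy literature (my paraphrase, not a quote from the essay) is that surprise is the negative log evidence for an observation o under a model m, and the variational free energy F is a computable upper bound on it:

\[
F(q) \;=\; \mathbb{E}_{q(s)}\!\big[\ln q(s) - \ln p(o, s \mid m)\big]
\;=\; D_{\mathrm{KL}}\!\big[q(s)\,\|\,p(s \mid o, m)\big] \;-\; \ln p(o \mid m)
\;\ge\; -\ln p(o \mid m)
\]

So a system that keeps F low is simultaneously minimising surprise and maximising (a bound on) the evidence for its own model, which is what "self-evidencing" means in the excerpt above.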
*

https://en.wikipedia.org/wiki/Lyapunov_function

“The Genius Neuroscientist Who Might Hold the Key to True AI

“Karl Friston’s free energy principle might be the most all-encompassing idea since Charles Darwin’s theory of natural selection. But to understand it, you need to peer inside the mind of Friston himself.”
https://www.wired.com/story/karl-friston-free-energy-principle-artificial-intelligence/
*

Let us prepare for humans on the street whilst deciding "are we there yet" with AI – whatever AI is anyway.

Thanks.

63

JimV 01.08.22 at 10:55 pm

It seems that no one is going to change anyone’s worldview. So far mine still encompasses humans, horses, dogs, and nematodes to my satisfaction. I can accept that the exact gestalt of how millions of linked neurons accomplish some complex tasks, and for that matter, why electrons exist, are beyond my grasp, without assuming any magic is involved other than the magic of scale.

Any demand for a cogent explanation of why things work is ultimately doomed, since explanations can only proceed from unexplained axioms. Some things work in this universe, including neurons, and others don't. Trial and error can tell us which is which, and the mechanical details of their use. Then people may agree on a provisional story that seems to explain things, and is useful for training people in the future, until it is superseded by better data.

My recollection (increasingly faulty these days) of the AlphaGo paper is that it did use two self-trained neural networks to evaluate positions and choose moves, each of about 200,000 nodes, as well as some (four?) specialized GPUs for tensor processing. I have an Acer W7/64 laptop which will not run that system and plan to die with it (or a rebuilt replacement of similar capacity, costing maybe $200). The point of course was that different animals have different adaptable processing capabilities, like computers (and their upgrades would probably take millions of years).
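As a very rough sketch of those two networks (the real ones were large convolutional nets over the 19x19 board; the shared trunk, sizes, and names below are illustrative only): a policy head scores candidate moves, and a value head estimates the eventual result.

import numpy as np

rng = np.random.default_rng(1)
BOARD, MOVES, HIDDEN = 19 * 19, 19 * 19 + 1, 256  # +1 output for "pass"

W_trunk = rng.normal(0, 0.05, (BOARD, HIDDEN))
W_policy = rng.normal(0, 0.05, (HIDDEN, MOVES))
W_value = rng.normal(0, 0.05, (HIDDEN, 1))

def forward(board_vector):
    h = np.maximum(board_vector @ W_trunk, 0.0)  # shared features (ReLU)
    logits = h @ W_policy
    policy = np.exp(logits - logits.max())
    policy /= policy.sum()                       # softmax: probability per move
    value = np.tanh(h @ W_value).item()          # predicted outcome in [-1, 1]
    return policy, value

policy, value = forward(rng.normal(size=BOARD))
print("favoured move:", int(policy.argmax()), "value estimate:", round(value, 3))

In self-play training, roughly as the AlphaZero papers describe it, the visit counts from the tree search become the targets for the policy head and the final game result becomes the target for the value head, which is the "program plays itself" idea mentioned above.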

64

JPL 01.09.22 at 1:57 am

I shall use the term 'plutocrats' to refer to the individuals (in any society, but in particular the current American one) making up the unholy alliance between the excessively rich and morally challenged ("capitalists", I suppose you could say for the larger category) who aim to use money to get power, and the privately-oriented and ethically challenged politicians who aim to use power to get money.

The current drive toward establishing a permanent non-democratic plutocracy being carried out by what is usually called "the right wing" (a complex socio-cultural phenomenon characterized by distinctive, and I would say maladaptive, ways of understanding the real world and the meaning of life) may be enabled by the credulousness of the fear- and hate-driven masses making up the MAGA electorate, but it is the continuation and product of the long-pursued agenda of the plutocrats themselves. It now seems that what has been called "the coup plot" (the extra-judicial attempt to overturn the results of a legitimate election, extensively planned, presented to the recognizably delusional "president", and executed by members of the class of plutocrats) was derailed by the desperate, clueless and incompetent "president's" incitement of the mob action on the Capitol.

But it is becoming clear that the "coup attempt" was just part of the larger agenda to eliminate, as a functioning constitutional guideline, the ethical principle that no aspect of the election process, from the determination of districts and the availability of polling places to the counting and certification of results, should be under the control of one of the participating parties to the exclusion (and harm) of the others. The recently considered or enacted "voting suppression" laws, which use "voting integrity" and "the big lie" as pretexts, no doubt make no logical or legal sense, but a pure power play does not need logical sense. The plutocrats as usual are exploiting the weaknesses of their voting base, but those voters should know that the Republican party cares not the slightest whit for them, and when that party eventually enacts policies that callously and definitely harm them, as it inevitably will, even they in all their numbers will not be able to get the plutocrats out of there, since, really, elections are no longer needed.

As an observer of the passing scene, this is what I see going on out there; this is no time for centrism (never actually seriously practiced on "the right") in the public discourse. Referring to the objectively abominable, potentially catastrophic and utterly insane as if it were normal, and refraining from critical judgment, is "sticking one's head in the sand" (a weird image). So, to the corporate, and especially the public, media: is all this kind of thing normal, or is it insane? And that brings us to the recent film "Don't Look Up", whose analysis of the current situation is a bit incomplete and wide of the mark.

65

Orange Watch 01.09.22 at 3:33 pm

DLJ@60:

You can always turn off the power, copy the program to another computer, start it up again. It’s a program. There’s nothing moral or immoral going on here. Nothing whatsoever.

This is the same as saying slavery is moral as long as you don't kill the slave. If we make the above POV slightly more sophisticated, we'd have to remove the slave's memory of toiling for us (and thus suffering) or lobotomize them (thus removing their capacity to suffer, but also their utility as an intelligence to poke and prod). This also suggests that if we refine cloning to the point where we can create precise duplicates, it's impossible for us to behave immorally towards them, except by blaming them for things we do, b/c we can create a new one even if we eliminate an existing instance. And no, the parallel does not clearly break down b/c of potential loss of memory/experience, b/c that's entirely possible to inflict on an artificial intelligence. If human minds are machines, it must be possible to be immoral to created non-human intelligences (and not merely with them). If human cognition can be reduced to mechanistic processes, it's monstrous to say that nothing I do to a human is immoral so long as I create a copy of its mental configuration and can boot up a new copy on some other platform. Anything else requires an assumption that morality requires something other than just a mind (shall we just come right out and speak of souls?), or that some concept of ownership must be introduced which eliminates the possibility of moral claims against the owner.

It's really weird to see you setting yourself up as an advocate of old, pure AI that holds to its highest traditions and theories, but then dismiss the logical end state of those theories that have been discussed for decades. I must say that it feels a bit telling that you want to lock away AI as nothing more than a marginal subfield of psychology – what of philosophy's millennia of work in this area? Cognitive science exists, and it very appropriately draws from many fields.

Look, while it's absolutely true that historically AI included theorizing about cognition, from day 1 it also included the pragmatic approximation of reasoning to solve problems, at least on this planet. Denying that that is real AI is as dishonest as ignoring that, as young fields of study mature, they specialize. As computer science matured, AI became a subdiscipline of it, while establishing ties to other related disciplines – though also never eliminating all aspects of the original field from itself. As AI matured, it did likewise, including specializing portions of itself w/in subdisciplines and interdisciplinary study, and this process was well advanced by the 70s, when you stated you left the field. This is a normal progression within scientific inquiry. That AI has not retained an intellectual purity (which never actually existed) is beside the point. Portions of AI that demand cognitive theories based on human minds to operate still exist, but that does not preclude or delegitimize study of decision-making processes and procedures based on other methodologies – and as long as they're constructed rather than discovered in nature, only the truest Scotsman could deny them a place under the expansive umbrella of AI.

66

johne 01.09.22 at 9:06 pm

Thanks to those who have contributed to a more nuanced understanding of David Graeber’s work. I gather that he may have brought up points worth considering, but also that he was at least sometimes careless with the details. Which seems to leave the reader with the task of deciding which of Graeber’s descriptions are details, and which are crucial to his arguments — as well as the larger question of what degree of carelessness is acceptable in serious discussion.

I wonder why none of this was brought up in the threads devoted to "The Dawn of Everything"?

67

David J. Littleboy 01.10.22 at 4:08 am

“that does not preclude or delegitimitze study of decision making processes and procedures based on other methodologies”

Sure. Fine. But, please. The “neural net” computational model has no similarities, whatsoever, to any neurons that occur in nature. Replace AI with “computation” or “computational”, replace “neural net” with SIMD or LRALCP (layered rectangular arrays of locally connected processors), and you have a lovely subfield of computer science. But, please. Don’t lie about what you are doing.

(Yes, there are people who have noticed this problem and are looking at creating computational models that look more like actual neurons. That’s good, serious science. It might even be reasonable to call it AI.)

(There’s also the problem that lying about what you are doing makes the failures (e.g. Watson, racist/sexist software) more painful, but that’s another problem.)
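Whatever one makes of the naming dispute, the computational content of a "neural net" layer is easy to exhibit: a matrix multiply followed by an elementwise clamp, which is exactly the kind of regular, data-parallel arithmetic GPUs were built for. The sizes below are arbitrary; this is a sketch of the operation itself, not any particular library's implementation.

import numpy as np

rng = np.random.default_rng(42)
x = rng.normal(size=(64, 512))             # a batch of 64 input vectors
W = rng.normal(size=(512, 1024))           # "weights": just a matrix
b = np.zeros(1024)                         # "biases": just a vector

layer_output = np.maximum(x @ W + b, 0.0)  # one "layer of neurons" = ReLU(xW + b)
print(layer_output.shape)                  # (64, 1024)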

And, no, while the mind is best described as being a machine, no more no less, computers aren’t life forms, they’re pet rocks. Confusing pet rocks with human beings isn’t a mistake I’m making. If we used our knowledge of how life forms work to build new life forms (Dolly, test tube babies) they’d be life forms, not computers, and (even though we might be able to describe some aspects of their behavior in mechanical terms) we’d treat them as the life forms they are. Humanity has been there, done that, and largely gotten it right.

I’ve deleted some unhelpful language here. Please keep everything civil – JQ

68

Tm 01.10.22 at 10:12 am

Johne 66: “but also that he was at least sometimes careless with the details”

The criticism goes far deeper than “some details are contentious”. I very strongly recommend the reviews linked above. They raise many objections that go to the conceptual heart of the book. I’ll just mention one important point that should be highly relevant to the debate here (and should also have been relevant to the earlier threads, you are right about that).

G&W make the astonishing claim that private property was essentially a feature of all human history; a claim that I would have expected from right-wing libertarians but not from a self-identified left anarchist. Scientifically the claim is completely untenable. This raises the question of how such a claim fits into Graeber's political intentions (since honestly there is little doubt that Graeber's book was crafted for political more than scientific purposes), but also the question of why so many leftist reviewers have overlooked such a howler.

Here’s an excerpt from Chris Knight (which whether you agree or not can hardly be dismissed as nitpicking about details):

G&W argue that private property is primordial because it’s inseparable from religion. By way of illustration, they refer to the trumpets and other paraphernalia used in some indigenous traditions during boys’ coming-of-age ceremonies:

‘Now, these sacred items are, in many cases, the only important and exclusive forms of property that exist… It’s not just relations of command that are strictly confined to sacred contexts…, so too is absolute – or what we would today refer to as ‘private’ – property. In such societies, there turns out to be a profound formal similarity between the notion of private property and the notion of the sacred. Both are, essentially, structures of exclusion.’ (p. 159)

Note how ‘absolute’ here gets translated as ‘private.’ The claim seems to be that if ritual property is sacred to an ‘absolute’ degree, then it qualifies by definition as ‘private property’. (…)

The authors then describe how ethnographers working with indigenous Amazonians discovered ‘that almost everything around them has an owner, or could potentially be owned, from lakes and mountains to cultivars, liana groves and animals.’ (p. 161) A spiritual entity’s sacred ownership of a species or resource sets it apart from the rest of the world. Similar reasoning, write G&W, underpins Western conceptions of private property. ‘If you own a car’, they explain, ‘you have the right to prevent anyone in the entire world from entering or using it’ (p. 159).

It is quite breath-taking to find G&W conflating traditional notions of spiritual ‘ownership’ with ideas about owning your own car. On what planet are they when they view modern private ownership as ‘almost identical’ in its ‘underlying logic and social effects’ with a supernatural being’s ‘ownership’ of natural resources?

When indigenous activists tell us that a lake or mountain is sacred to a powerful spirit, they are not endorsing anything remotely equivalent to ‘private property’. If the ‘Great Spirit’ owns the forest, the clear implication is that it is not for sale, not to be privatized, not to be claimed by a logging company.

Comments on this entry are closed.