Trolley Problems and AI

by John Holbo on July 15, 2023

More AI madness! A couple of months ago there was a weird Daily Beast piece. It’s bad, but in a goofy way, causing me to say at the time ‘not today, Hal!’

But now I’m collecting op-ed-ish short writings about AI for use as models of good, bad, and just plain weird writing and thinking, to teach undergrads how hard it is to write and think, so they can do better. And this one stands out as distinctively bad-weird. First, the headline is goofy: “ChatGPT May Be Able to Convince You Killing a Person Is OK.” Think about that. But maybe it’s unfair to blame the author for the headline. So read the rest. Go ahead. I’ll wait.

What do you think? It’s funny that the author just assumes you should NEVER let yourself be influenced by output from ChatGPT. Like: if ChatGPT told you not to jump off a bridge, would you jump off a bridge? There is this failure to allow that we can, like, check claims as to whether they make sense. A bit mysterious how we do this, yet we do. And ethics is a super common area in which to do this thing, so it only makes sense that you could get ChatGPT to generate ethical claims, and then people could read them and, if they make sense, believe them on that basis. Never mind that the thing generating these prospectively sensible claims is just a statistics-based mindless shoggoth.

If a shoggoth is talking trolley sense about OK killing, believe it!

Anyway, I thought it was funny.