AI is coming for bullsh*t jobs

by John Q on October 8, 2022

There’s been a lot of excitement about Artificial Intelligence (AI) lately, much of it focused on long-standing “big questions” like “is AI really intelligent?” (short answer: no).

I don’t have an answer to that, so I’ll stick to the easier questions like “will a robot take my job”. I’ve argued before that this isn’t a good way to think about the issue. New technology has been changing the way we work for centuries, and will continue to do so. But for particular jobs being transformed by technological change, it is certainly relevant.

One area that’s moved ahead very rapidly is the generation of human-like text. The cutting edge here is a program called GPT-3, launched in 2020, which can produce impressive-looking philosophical discussions. The underlying research has already been commercialised with products like Jasper, which has the much more prosaic (literally!) goal of producing advertising copy, blog posts and so on.

Jasper clearly won’t pass a Turing test if you ask for anything complex, but it is very good for its intended purpose: turning out text that looks as if a human wrote it. This has big implications for a large category of jobs, notably including many that the late David Graeber called “bullsh*t jobs”.

As an example, a fair bit of the content of a typical newspaper consists of press releases that have been lightly edited and perhaps spiced up a bit. With Jasper, the time taken for this task goes from an hour or so to a few minutes. For that matter, the press release itself can be generated from a few prompts in a similarly short time.

As with previous tech advances, that’s not likely to create mass unemployment any time soon. But it will mean that this kind of routine copywriting will be done much faster, by writers who have the skills to give programs like Jasper the right prompts, and then to touch up the final output. And this will extend to lots of admin jobs that have previously been immune from technical change.

Lots of middle management jobs, for example, involve writing memos and reports justifying one corporate decision or another. After you read a few, they all seem the same. AI can distil the essence well enough to mimic the style. After that, it’s just a matter of fitting the verbiage around the desired conclusion.

Over the fold, a few examples.

Here’s an output sample I produced in a couple of minutes, using a couple of prompts such as “create change” (UQ’s current marketing slogan) and “Shanghai rankings”.

If you’re looking for a top-quality university that will help you create change in the world, look no further than the University of Queensland. UQ has a proud history of producing leaders and changemakers, and its graduates are highly sought-after by employers all over the world.

UQ is consistently ranked as one of the world’s top universities. In the 2019 Shanghai Ranking, UQ was ranked 28th in the world and first in Australia for overall university performance. The Shanghai Ranking is one of the most respected university rankings in the world.

A longer version is here.

Most of the info has been scraped from websites, including UQ’s own. But Jasper provides a plagiarism checker to ensure that it isn’t simply a cut-and-paste job. The 28th figure looks a bit suspicious to me, but I assume someone at UQ has found a definition of “overall university performance” that fits the bill.

So, if I were told I had a morning to produce a puff piece for every university in Australia, I’d say I could do it with Jasper, and still have time for an early lunch.

And here’s what I got when I asked Jasper to argue that trail running is better than triathlon, using some first-person testimony. I’m almost convinced.

Finally, for those worried about contract cheating, here’s Jasper pitching its essay writing services, then denouncing itself as a threat to education.

{ 29 comments }

1

SusanC 10.08.22 at 8:58 am

“The 28th figure looks a bit suspicious to me, but I assume someone at UQ has found a definition of “overall university performance” that fits the bill.”

Factual claims like that in GPT output are often false. GPT is mimicking the form, and knows that a number is expected at that point, but it doesn’t necessarily know which number is the correct one.

This is one of those “AI risks” … false factual claims in GPT output being accepted as true by readers who can’t be bothered to check.

2

SusanC 10.08.22 at 9:04 am

At my esteemed institution, it is customary for us to write end-of-term reports on each student.

Rumour has it that certain of the faculty have applied a certain amount of automation to this … the input to the program is the teacher’s assessment of how good the student is (from a pull-down menu); the output is some appropriate platitudes conveying the same idea. (A sketch of what such a program might look like is at the end of this comment.)

No, I don’t do that myself.

Though these things are basically a binary variable, with the common case being “yeah, fine” and the rare case being “I think the university should throw this student out, for the following reason…”
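
Purely as illustration (I haven’t seen the actual software; the menu options and wording below are invented), the whole thing could be a few lines of Python:

    import random

    # Hypothetical sketch of the rumoured report generator: the teacher picks
    # a rating from a pull-down menu, and the program expands it into prose.
    PLATITUDES = {
        "excellent": [
            "{name} has shown an outstanding command of the material this term.",
            "It has been a pleasure to teach {name}, whose work is consistently excellent.",
        ],
        "fine": [
            "{name} has made steady progress and is working at a satisfactory level.",
            "{name} engages well with the material and should continue in this vein.",
        ],
        "concern": [
            "{name}'s performance this term raises concerns that need to be discussed.",
        ],
    }

    def write_report(name, rating):
        # Expand a one-word assessment into a paragraph of platitudes.
        return random.choice(PLATITUDES[rating]).format(name=name)

    print(write_report("A. Student", "fine"))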

3

Alex SL 10.08.22 at 9:38 am

I think maybe the creation of new material that is guaranteed to be both reputationally and legally safe for the institution or team publishing it is still too risky to be left to an AI. Generating something to get started, okay; but then the real bottleneck is nonetheless smoothing it out and correcting it, so how much efficiency is really won?

Much of journalism, however, has degraded to a mixture of (1) printing press releases after only minor edits, (2) hate propaganda and smear campaigns against minorities or leftist politicians, (3) click-bait, (4) badly mangled and misunderstood reporting on science and culture, where those in the affected field can only wince at how badly it is misrepresented, and (5) reposting other people’s outputs under license.

Also, in many countries there are clearly no reputational or regulatory repercussions for any of this, be it spreading deadly misinformation about Covid, blatantly lying about EU policies, or destroying somebody’s life by sending a social media mob after them. So, I can totally see text-processing AI taking over in news and journalism.

4

engels 10.08.22 at 12:48 pm

At my esteemed institution, it is customary for us to write end-of-term reports on each student. Rumour has it that certain of the faculty have applied a certain amount of automation to this … the input to the program is the teacher’s assessment of how good the student is (from a pull-down menu); the output is some appropriate platitudes conveying the same idea.

My school had this software in the 90s, and the teacher who was in charge of it pulled me out of class for the day so I could input the grades (unpaid, ofc).

5

Mike 10.08.22 at 12:58 pm

Alex SL @3: I fully agree.

This brings up the question of whether human reading in general will remain worthwhile. Already, the spammification of most institutionally produced communication makes reading government, corporate, media, university, and advertising output a waste of time. It will be increasingly important to create AI filters to find the few nuggets of valid information we are interested in.

6

Thomas P 10.08.22 at 3:16 pm

The main function will be spreading propaganda or ads on blogs, Facebook, etc. Search the web for relevant subjects and automatically create comments that push the view you want. Much cheaper than hiring a troll army.

Then you will get new bullshit jobs identifying these AIs and erasing their comments.

7

SusanC 10.08.22 at 5:31 pm

I suspect that the position of “AI ethicist” falls into one of Graeber’s categories of bullshit job: specifically, the one about jobs whose purpose is to convince the government that the company is doing something when in fact it isn’t.

I.e., the purpose of an AI ethicist is to delay the government from introducing regulation, by claiming (probably falsely) that the industry is successfully self-regulating.

8

Ebenezer Scrooge 10.08.22 at 5:53 pm

I gotta disagree with you on middle management jobs. Yes, middle management writes memos, which are largely cut-and-paste bullshit. But that’s not their job: it is just a legitimator attached to their job. Most of their job is getting people to work together: underlings, peers, and bosses. That entails understanding work and people: both well beyond the scope of current AI.

9

engels 10.08.22 at 9:48 pm

The five types of bullshit jobs were flunkies, goons, duct tapers, box tickers and taskmasters. AI generated text only really replaces some of the fourth category. There will still be plenty of bullshit jobs for people after automation, especially if we introduce a bullshit job guarantee.

10

Alex SL 10.09.22 at 12:36 am

Mike @5:

Oh, I hadn’t even thought of that aspect. Yes, that could become an interesting arms race of bots producing garbage versus bots filtering out the garbage.

That being said, the more likely outcome is probably that everybody will become extremely selective about what channels they trust. What channels those are will depend on whether the person in question seeks affirmation or accuracy. And to a degree, that is already the case – some get most of their news by perma-watching a right-wing TV network, and others turn to, say, Al Jazeera or CNN when they have reason not to trust their own country’s major news channels on some issue of the day.

11

KT2 10.09.22 at 3:43 am

Mike @5: “create AI filters to find the few nuggets of valid information we are interested in”.

See for example: CT & (most) comments

12

John Quiggin 10.09.22 at 4:06 am

Susan C @1 You’re right about the errors. I asked Jasper for a CV and it gave me visiting professorships at Harvard and MIT, not to mention a non-existent book and a new wife!

13

Mike 10.09.22 at 12:15 pm

Alex SL @10: “What channels they trust” has always been the human condition, probably since before human gossips evolved. Personally, I’ve always felt this problem acutely since I left Catholicism to become an atheist.

KT2 @11: Yes, and from this blog dsquared (Daniel Davies) has been one of the best filters.

“People seem to be faintly drawn to the idea that there might be more political dimensions than just “left” and “right”. Bullshit. Being in favour of allowing other people to take drugs, shag each other or read what they want isn’t a political position; it’s what we call “manners”, “civilisation” or “humanity”, depending on the calibre of yokel you’re trying to educate. The political question of interest splits fair and square down a Left/Right axis: either you think that it is more important to provide a decent life for everyone in the world, or you think it is more important to preserve the rights of people who own property. You can hum and haw as much as you like about whether the two are necessarily incompatible, or whether the one is instrumental to the other, or what constitutes a “decent life” anyway, but when you’ve finished humming and hawing, I’m still gonna be asking you the question, and your answer to it will determine whether or not we’re gonna have an argument. ”

“Good ideas do not need lots of lies told about them in order to gain public acceptance.”

I probably have a lot more from him saved away.

14

Phil 10.09.22 at 2:44 pm

As Susan C @1 says, anyone using Jasper still needs to fact-check – the 2019 Shanghai Ranking actually had UQ 2nd in Australia and 54th in the world.

15

Scott P. 10.09.22 at 3:50 pm

16

Chip Daniels 10.10.22 at 12:12 am

I believe that AI and machine learning will be to white collar professions in the 21st century what electro-motive power was to blue collar workers in the 20th.

It will be the replacement worker, always and forever hovering just outside the door waiting for your wage demand to cross some invisible line.

I am an architect, and I can see that there is actually very little that white-collar professionals like lawyers, doctors and engineers/architects do that can’t be automated.

17

Duke the lost engine 10.10.22 at 2:46 am

As Ebenezer @8 says, this is not a big part of middle management jobs, though perhaps it is the part of the job that is most visible to outsiders. Rather, it is an annoying side task that managers would happily have automated. It would better be framed as AI “coming for bullsh*t tasks” rather than “coming for bullsh*t jobs”.

18

Kenneth Oliver 10.10.22 at 3:40 am

Tell the truth, Jasper – you wrote this post, didn’t you?

19

MisterMr 10.10.22 at 10:57 am

My enlightened opinion about BS jobs:

In an idealized economy where there were no rents whatsoever, neither between businesses nor between workers, all jobs would ultimately be motivated by demand for consumption: either direct consumption, or intermediate demand for investment, which is itself justified by future consumption.

But in the real world, there are lots of “rents”, and therefore a lot of energy is expended to get those rents:
– between businesses, there is competition through advertising, and this advertising damages other businesses, so it is zero-sum; but for the single business it is necessary;
– between workers, there are big differences in income depending on the job you land, so this causes, for example, credential inflation, or workers having to show off that they are hard workers by doing even unnecessary stuff to please their bosses;
– most of finance is also a way to allocate rents;
– there might be other similar situations.

So on the whole, bullshit jobs do exist, but they are due to an economic reality, and not just to weird cultural expectations, as Graeber, IIRC, poses them.

Going back to Jasper: if it becomes possible to automate, say, brochure writing, everyone will do it for free, so said brochures will lose any economic value; but then people will need other ways to signal their hard work and/or to advertise their institutions, so we will just have other new, improved bullshit jobs.

This reminds me of the novel “The Silver Eggheads” (1961) by Fritz Leiber, where most fiction is produced by machines called “word mills”, but there are still “authors” who pretend to have eccentric personalities to make the products of their word mills sell better.

https://www.goodreads.com/book/show/11492012-the-silver-eggheads

20

TM 10.10.22 at 12:14 pm

I don’t particularly trust the “bullshit jobs” thesis, and in any case journalism being replaced by AI garbage isn’t a good example – journalism is serious work that should be done by well-trained professionals. That corporations owning journalistic outlets are degrading the journalistic output for profit reasons is sadly true, but that is still not a good reason for the “bullshit job” condescension; to the contrary, I would think.

An example that hasn’t been mentioned of neural networks potentially replacing white-collar work is translation. Again, by no means a bullshit job, and it is still controversial how useful the “AI” contribution actually is in real-world professional applications.

21

peterv 10.10.22 at 12:31 pm

It is strange that adding the label “AI” to something makes it seem novel. In 1991, I saw the automated news report generation system developed by IBM and deployed in Prodigy, their pre-Web interactive online information service, a joint venture with Sears and RCA. The software automatically generated financial news stories, along with appropriate headlines and graphics, from news agency feeds.

22

SusanC 10.10.22 at 1:01 pm

After looking into Stable Diffusion…

It turns out that there are really an astonishingly large number of pictures on the Internet that could reasonably be summarised as “anime girl with big breasts”. A really, really large number of these went into NovelAI’s training set for their retrain of the SD engine. So, yes, making that kind of art is now automated. I understand that AI training for furry fan art and anthropomorphic ponies is also in the works. On the other hand, other forms of art may be safe for a while yet: (a) the subject matter is less predictable; (b) there isn’t a huge training set of similar images on the Internet.

23

engels 10.10.22 at 2:32 pm

they are due to an economic reality, and not just to weird cultural expectations, as Graeber, IIRC, poses them

Are you sure he does? I haven’t read the book, but I remember him saying it’s “as if” someone is making up useless activities just to keep us occupied, which might be interpreted as suggesting a purpose of population control. I don’t think that’s completely crazy when you consider, e.g., how clearly British government policy is focused on subsidising shitty work rather than improving it or providing liveable out-of-work benefits.

24

JimV 10.10.22 at 6:27 pm

As I’ve mentioned before, according to Wikipedia, in true cases of feral children (who grow up without parental or other human training), once found as teenagers or older, they never learn language or the use of spoons, et cetera. Jasper was trained to mimic patterns of human writing, but aside from that it is feral. That doesn’t mean that the way it learned to mimic isn’t based on the way humans learn, which it is. (Unless you think there is something magical about the way humans think and learn; I don’t.) That’s where the AI designation comes in. (We had automated drafting systems at GE in the 1960s and semi-automated design when I left, but it wasn’t based on neural networks, and had to be physically revised or re-coded for every new feature.)

I for one will welcome our AI overlords, in the form of judges and juries and doctors and engineers (provided we do a good job on them), because, although it will take a lot of computer capacity to match our cerebral cortexes and a lot of training, once you have one AI Einstein, you can duplicate it (with the training included in the copy).

25

J-D 10.11.22 at 12:09 am

“What channels they trust” has always been the human condition, probably since before human gossip evolved. Personally, I’ve felt this problem acutely ever since I left Catholicism to become an atheist.

Human biology being what it is, deciding not to rely on anything anybody tells you and deciding to rely on everything anybody tells you are both very bad choices. Biology being what it is, this has meant people evolving both at least some ability to be selective in their reliance on what they’re told and at least some ability to penetrate other people’s selective filters.

Other animals confront similar problems but not, I think, to anything like the same extent.

26

Edward Gregson 10.11.22 at 2:42 am

GPT-3 may be the only large language model with actual commercial products based on it, but it is not the cutting edge. The cutting edge changes basically every few months, and much larger models than GPT-3 (with seemingly qualitatively better capabilities) have come out.

For both the large language models and the image generation models like DALL-E-2 and Stable Diffusion, the models have achieved success by training to fill in deliberately made holes in their training data, whether guessing the masked word in an input sequence of text, or removing deliberately added noise from a training image. They learn about the domain of images or text, but solely as statistical relations between word or pixel groupings. They don’t know that the text/images are representations of a real world, and they can only seem to reason about the real world by virtue of the fact that they’ve learned to mimic text/art that actually does.
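
To make the “fill in deliberately made holes” idea concrete, here is a toy sketch in Python (purely illustrative; real models train on billions of such examples with large neural networks, not anything this simple):

    import random

    def make_masked_example(tokens):
        # Hide one token; the training target is to recover it from context.
        i = random.randrange(len(tokens))
        masked = list(tokens)
        target = masked[i]
        masked[i] = "[MASK]"
        return masked, target

    tokens = "UQ was ranked first in Australia".split()
    masked, target = make_masked_example(tokens)
    print(" ".join(masked))   # e.g. "UQ was ranked [MASK] in Australia"
    print("the model must predict:", target)

A model rewarded only for guessing the hidden word learns which words are statistically plausible at that position, not which are true – which is exactly why it will confidently fill in a wrong ranking number, as SusanC noted @1.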

They’re probably not trustworthy enough to do a middle manager’s whole job, and if they can speed up the manager’s memo-writing from an hour to a few minutes, that probably just means the company could already have sped up the process by replacing memos with point-form emails or checkboxes in an online form. For copywriting it could help, but advertising is competitive, and consumers will probably learn to spot AI-written copy.

27

engels 10.11.22 at 6:57 pm

OT but John Q’s “mental fitness” post might be catching on:
Princess of Wales: We must look after our minds like we do our bodies at the gym

28

Alex 10.12.22 at 6:12 pm

“As an example, a fair bit of the content of a typical newspaper consists of press releases that have been lightly edited and perhaps spiced up a bit. With Jasper, the time taken for this task goes from an hour or so to a few minutes. For that matter, the press release itself can be generated from a few prompts in a similarly short time.”

Something I’m noticing here is a value-neutral view of the depressing fact that newspaper content is regurgitated propaganda. Sure, stuff like that would be great to automate! But stuff like that should not exist in the first place! The goal should not be to keep doing bullshit, just with AI. The goal should be to end bullshit.

29

MisterMr 10.13.22 at 9:55 pm

@engels 23

No, I’m not sure; I’m working from my memories of Graeber’s argument, but I don’t remember it very well.

Apologies for the late response.

Comments on this entry are closed.