Human beings are creatures who can describe their actions at various levels. Elizabeth Anscombe famously introduced the example of a man who moves his arm, to pump water, to poison the inhabitants of a house, to overthrow a regime, to bring peace.* You can play with this case, or others, to create all kinds of variations: which of these descriptions does a person know of? Which elements could be outsourced to others, who might not know the other descriptions? This is the stuff of comedies, tragedies, and detective stories. And arguably, it matters immensely when it comes to the introduction of AI and other digital technologies into our work.
I was reminded of this basic insight from the philosophy of action, about the multiple descriptions under which our actions can fall, when, the other day, I had to do the proofs for a paper. In the past, I had seen proofing as an act of care – a loving gaze that spots the last mistakes and makes the last improvements before a text goes out into the world. Not the most exciting part of academic work – the arguments have been made, after all – but a meaningful closure of the sometimes bumpy road to publication. I’m of the generation that always got PDFs; the generations before me did it on paper.
Not any more in the new digital era! Now you’re invited to log into a system, where you’re first bombarded by “author queries.” To be sure, some of them are important consistency checks, especially about bibliographical issues. I imagine that there must be people somewhere (India?) who do the checks and insert the queries. But then there were also things that seemed purely AI-generated, because no human being would have asked such stupid questions. I was asked to insert things into fields I could not change, and to provide information that was already there. The AI, if it was one, had added material that contradicted the referencing system, and even changed the text when it apparently did not recognize words, or thought they were too obscure. And in one particularly strange incident, someone or something had inserted the words “is hard” after a journal title (“Implementing employee interest along the Machine Learning Pipeline”), making me wonder whether this was a poor overworked “ghostworker” who wanted to send me a message. Almost every time I made a change in the bibliography, it ended up not looking the way it was supposed to, but the predefined, automated form did not give me a chance to correct it. I had to play many rounds of Tetris to calm myself down, to get through all the queries, and to actually reread, and correct, the text.
What’s so interesting about this first-world problem of Western academics, you might ask. Well, for one thing, it seems that commercial publishers are getting more and more involved in using ghost work, while presumably also using the texts to feed new AIs. This adds to the many arguments for moving away from commercial publishing, but that case is normatively overdetermined anyway – it’s a collective action problem to get there.
But what I want to focus on here is how this use of AI relates to the meaning of work – or rather, undermines it in a quite specific way.
In a strange coincidence, on the same day that I did these proofs I also read about “action identification theory.” This is a psychological theory from the 1980s, which used experiments to understand at what level human beings describe their own actions, explicitly referring to Anscombe and other philosophers of action.** Through various experiments, the researchers explored – and largely confirmed – several theoretical claims: that people typically have a certain description of their action in mind that they use to guide their behavior; that when both higher- and lower-level descriptions are available, they tend towards the higher-level one (and can be influenced into accepting different higher-level descriptions if they only have lower-level descriptions available); and that it is when a higher-level description cannot be maintained that people switch to lower-level descriptions to make sense of what they do.
Applied to proofing: I can think of it in terms of a higher-level description (“give the text a last polish,” or even something like “try to contribute to standards of clarity, precision, and elegance in academic writing”), or I can think of it in terms of lower-level descriptions (“check that the formatting of the bibliography is consistent,” “replace a word,” etc.). And here is the thing that action identification theory holds (and it’s consistent with my n=1 experience): if I get interrupted in my action while I’m in the higher-level mode, because something doesn’t go as expected, then I’ll switch to lower-level descriptions. One funny experiment the researchers did to confirm this mechanism was to let participants drink coffee either from a normal cup or from “an unwieldy cup weighing approximately 0.5 kg.” People in the second group chose lower-level descriptions of what they were doing than those in the first group did.
To broaden the focus to AI use more generally, the question is this: does it support us in understanding our actions at a higher level of description – or does it constantly interrupt us (or make mistakes) and thereby bring us back to the unnerving nitty-gritty details that make us forget the higher-level descriptions and make the work far more tedious than it would have to be?
I’m not an expert on AI, but I am pretty sure that it could be programmed in both ways. I can imagine a chatbot for proofing that would remind me that this is an important step in giving the last polish to my article, before it sees the light of day, and help me with last-minute improvements of inelegant sentences. It could bring out my intrinsic motivation to deliver a good piece of work, and if this AI were run by a not-for-profit academic publisher, I would also not have to worry that it was just cynically trying to do so for some other purpose (e.g. getting good prose on which other AI systems can be trained). What we get at the moment, instead, is Taylorism of the worst form: the tasks broken down into the tiniest units, with patronizing comments about what you are supposed to do next that interrupt the workflow even more. The only redeeming feature is the open comment function, where one can explain if something doesn’t work, in the hope that a real human being takes care of it – but presumably, that’s something that companies would want to abolish in the future, because it’s more expensive than fully automated processes.
Why do companies go for the latter and not the former? You may be tempted to shout “capitalism,” and it’s hard to deny. But then again, why couldn’t even profit-oriented companies try to work with the former mechanism? I guess apart from there being a cartel in the market for academic publishing, it’s simply the mentality of corporations, in which micromanagement is the rule. And it’s the fact that publishers are not interested in their own product per se. For them, an article has to fulfil formal criteria, and something like “beautifully written” is probably simply not among those criteria. And there may also be all kinds of ulterior motives, such as improving their literature databases or, as mentioned above, preparing texts in ways that allow for even fuller automation, or for the training of future AIs.
Proofing systems are only a minuscule part of the overall AI revolution, and if these problems existed only there, we could all learn to live with them, whine a bit, and it wouldn’t really matter. But I worry that these issues might be typical for AI use on a much broader scale. They might take joy and meaning out of the work of the many individuals whose work is algorithmically managed, not only in a side task that happens once in a while, but in all of their work tasks, day in, day out.
Now, if you think that people only work to make a living, then this is not a problem – and insofar as it allows for productivity increases, it might even, theoretically, provide space for wage increases (whether companies actually provide those is another matter…). I’m in the camp of those who think that many people see more in their work than a source of income, and that they should have a right to good, decent, maybe even meaningful work. From that perspective, this new Taylorism is deeply worrying. To be sure, many jobs were soul-crushing already before AI – but it can only get worse, and this Taylorism will probably eat its way into many jobs that were, before AI, still reasonably decent.
Does this make me a Luddite? No, I’m not anti-technology. I’m anti technology-that-gets-used-only-for-extraction, in the sense that I want to see human workers put first, over profit motives, and I want them to have a say in how AI gets used in their work. And that includes the question of how the actions people do in their work can be described, and whether they can draw meaning from higher-level descriptions. To be sure, this question goes far beyond AI. But with AI taking over (or “helping us” with) more and more of the things we do, it’s all the more urgent to find better answers to it.
* G.E.M. Anscombe, 1957, Intention. Oxford: Blackwell.
** The seminal article seems to be Robin R. Vallacher and Daniel M. Wegner, 1987, “What Do People Think They’re Doing? Action Identification and Human Behavior.” Psychological Review 94(1), 3-15. I tried to find information about replication (or failure thereof) for this theory, but did not come across anything.
Chris Bertram 08.19.25 at 7:18 am
Thank you for such an interesting and insightful piece. I think we should probably be more Luddite than we are though!
Lisa Herzog 08.19.25 at 8:12 am
You’re probably right, Chris. I meant “not in principle”. But in practice, given what kind of technology we get, and what gets promised about it, and how it works in reality, I often think we should go back to the basics. What you often see is AI being praised for taking over tasks that should not be there in the first place, like complicated bureaucratic forms, etc.