A modest proposal for the use of AI
Which jobs will be replaced by AI? Here is a modest proposal.* Replace higher management with AI. Not “management” in the sense of the team leader who works alongside their colleagues with a bit more responsibility to make decisions and mediate conflicts, maybe not even the HR person who does performance evaluations, but the C-suite.
The argument? Higher management takes decisions on the basis of aggregated data that provide only an indirect, text- or number-based account of the reality on the ground. So issues like bodily experiences, a sense of a place or personal connections to other people – the lived, daily practice of the actual work and its meaning from the first-person perspective – don’t come up. (Yes, I know, managers do “deep dives” and hold “dialogue sessions” and all that kind of stuff, but is it really the same?) Moreover, higher management often has to make decisions across many different types of units, so it’s a genuine question of how to bring those different perspectives together – unless you hold deliberations with people from these different units, but that’s not what “higher management” usually looks like.
When I was a kid, I regularly wondered how there could be people – bosses, presidents, etc. – who would “run” organizations with thousands of people. How on earth could you know enough about what all these people were doing to keep an overview? How would you keep track of all the financial flows? Over time, I came to understand that from the perspective of bosses, it’s not that difficult after all: depending on the size of the organization, you deal with thousands or millions or billions of dollars in about the same way a normal person deals with numbers that have no zeros at the end. If you’re a boss, other people aggregate those numbers for you, and prepare documents in which the options you have to decide about are already laid out.
This is not to say that there are no real decisions to take, and no moral dilemmas to wrestle with. Close this branch or that one, invest in this technology or that, stand for one policy or another. But the level of abstraction in large organizations is such that the human factor cannot but be at a great distance anyway. So all the problems about AI not having emotional intelligence, or not being able to truly make “judgements” (think: Aristotle, Arendt, etc.) hardly matter. The chances of programming an AI to make these abstract decisions, in line with the organization’s principles and visions and missions, seem good (and maybe organizations would then actually do what they say in their documents about principles and visions and missions – that could get quite interesting!).
Of course, that’s not going to happen. But it would be nice to find an organization that would be willing to try it. Given what higher management earns, the savings would be much bigger than if you replaced the people doing the actual work!
* I’m not claiming originality, though, having searched around a bit, most discussions I found are about lower-level management and AI, with many arguments about why the human factor still matters. That may or may not be true, but the higher up you go on organizational ladders, the weaker that argument becomes…