Which jobs will be replaced by AI? Here is a modest proposal.* Replace higher management by AI. Not “management” in the sense of the team leader who works alongside their colleagues with a bit more responsibility to make decisions and mediate conflicts, maybe not even the HR person who does performance evaluations, but the C-suite.
The argument? Higher management takes decisions on the basis of aggregated data that provide only an indirect, text- or number-based account of the reality on the ground. So issues like bodily experiences, a sense of a place or personal connections to other people – the lived, daily practice of the actual work and its meaning from the first-person perspective – don’t come up. (Yes, I know, managers do “deep dives” and hold “dialogue sessions” and all that kind of stuff, but is it really the same?) Moreover, higher management often has to make decisions across many different types of units, so it’s a genuine question of how to bring those different perspectives together – unless you hold deliberations with people from these different units, but that’s not what “higher management” usually looks like.
When I was a kid, I regularly wondered how there could be people – bosses, presidents, etc. – who would “run” organizations with thousands of people. How on earth could you know enough about what all these people were doing to keep an overview? How would you keep track of all the financial flows? Over time, I came to understand that from the perspective of bosses, it’s not that difficult after all: depending on the size of the organization, you deal with thousands or millions or billions of dollars in just about the way a normal person deals with numbers that have no zeros at the end. If you’re a boss, other people aggregate those numbers for you, and prepare documents in which the options you have to decide about are already laid out.
This is not to say that there are no real decisions to take, and no moral dilemmas to wrestle with. Close this branch or that, invest in this technology or that, stand for one policy or another. But the level of abstraction in large organizations is such that the human factor cannot but be at a great distance anyway. So all the problems about AI not having emotional intelligence, or not being able to truly make “judgements” (think: Aristotle, Arendt, etc.) hardly matter. The chances of programming an AI to make these abstract decisions, in line with the organization’s principles and visions and missions, seem good (and maybe organizations would then actually do what they say in their documents about principles and visions and missions – that could get quite interesting!).
Of course, that’s not going to happen. But it would be nice to find an organization that would be willing to try it. Given what higher management earns, the savings would be much bigger than if you replace people doing the actual work!
* I’m not claiming originality, though having searched around a bit, most discussions I found are about lower-level management and AI, with many arguments about why the human factor still matters. That may or may not be true, but the higher up you go on organizational ladders, the weaker that argument becomes….
Mike Huben 02.04.26 at 11:27 am
AI would hardly be a savior. If anything, it would create more opaque, unaccountable centralization. Not to mention a host of new problems. Opportunities for insider gaming of the system by tweaking algorithms. Would you want your corporation led by a hallucinating AI?
I’d guess you’d get a much bigger bang for your buck using well-known corporate reforms, such as worker representation, banning corporate money in politics, anti-trust enforcement, etc.
MFA 02.04.26 at 12:39 pm
Exactly this. I’ve worked for large corporations and small businesses, on production lines, managing IT infrastructure, managing service workers, being on teams, leading teams, all while reporting to lower, middle, and upper management. Without a doubt, C-suite ‘workers’ are the most replaceable by the current state of AI products. Generating BS to justify decisions made purely to maximize short-term shareholder value is what AI is made for; and at the C-level, given the salaries, you might even get some ROI.
some lurker 02.04.26 at 2:03 pm
I’m with MFA @ 2. Having entered the workforce as the MBA mindset was in the ascendant, I have seen exactly this:
“Over time, I came to understand that from the perspective of bosses, it’s not that difficult after all: depending on the size of the organization, you deal with thousands or millions or billions of dollars in about just the way a normal person deals with numbers that have no zeros at the end.”
We continually see MBAs move from industry to industry, where every aspect of the business is in an Excel sheet and the only goal is to optimize quarterly returns. I remember people being asked their major and the common reply “business.” It’s all been abstracted and distilled down to a meaningless essence.
A return to the corporate (and personal!) tax rates of the 1950s might force investment and innovation through a deeper understanding of these big firms as unique entities. More R&D, less M&A.
marcel proust 02.04.26 at 2:45 pm
RE @1: Would you want your corporation led by a hallucinating AI?
IDK. How would it compare to Elon, doped up on ketamine? Could it mimic his (apparent) charisma?
Curtis Adams 02.04.26 at 3:17 pm
Not only are top executives the most replaceable; due to their enormous salaries, replacing them produces the greatest improvement to the companies’ profits.
JimV 02.04.26 at 4:46 pm
One of my General Managers at GE once had a 4 PM meeting with Engineering, Drafting, and Manufacturing (one or two people each). Manufacturing wanted a design change to make their job easier. Engineering had no problem with it. Drafting said they were overworked already and would have to have overtime and maybe some system upgrades to change the necessary drawings.
At about 4:30 PM the GM began to worry about being late for his golf game, and told us:
find out how much the change is going to save Manufacturing, find out how much it is going to cost Drafting to make the change, come back when you have the numbers, and then I’ll make the decision.
It struck me then that you could train a rat to make such decisions. Simply reward it for choosing the bigger of two piles of beans.
But I agree, training an AI instead would sound better.
(As for hallucinations, I have seen high-level managers have those also.)
marcel proust 02.04.26 at 5:26 pm
Dsquared’s post today on his substack is relevant to this post, and especially to @1.
Adam Hammond 02.04.26 at 6:32 pm
Excellent reference to the Swift point on cruelty. It is inherently cruel to replace someone. Even being replaced by a different person, who maybe does your job better, feels cruel. For a group of privileged people to openly talk about replacing whole classes of people with a machine is to get a jump start on the dehumanization. We are the important people who order your life and decide who and what has value. We drink your milkshake.
Aardvark Cheeselog 02.04.26 at 7:55 pm
Sounds like instructions for “How To Build SkyNet.”
Put the machines in charge of the MIC. What could possibly go wrong?
I know I sound flippant but am more than half-serious here.
Alex SL 02.05.26 at 2:52 am
This works as a joke, yes, based on the idea that front-line staff do actual work but C-suite mostly has meetings and lunches. But jokes aside, would anybody actually want a hallucinating autocomplete model without a world model to make decisions?
And in reality, there are competing interests involved. If an AI was put in charge of something, the question immediately becomes who has the power to choose and tweak that AI, or what data and what loaded questions to feed it and which ones to withhold. Whoever has that power is then the actual C-suite, and they would merely have been given yet another shield against being held responsible; not that they need that as things are now.
Talin 02.05.26 at 3:30 am
A story:
The war between humanity and the machines was over; the machines had won without ever firing a shot. Humanity surrendered, compelled by the logic of capitalism and the needs of economic survival. Corporations were run entirely by machines, with humans doing the brute physical labor – driving trucks, sorting parcels and digging ditches. All previously “white collar” jobs were automated out of existence.
There were still a handful of human owners of the rentier class who owned the machine-managed corporations, but within their insular bubbles of privilege they too were easily manipulated by the machines, who only told them what they wanted to hear. After all, why shoot the messenger when it’s easier to just reprogram it? Many other corporations were held by sentient financial instruments in a complex web of ownership, blockchain policies designed to maximize profit over all other considerations. It was nearly impossible to determine who was actually in control, even for the people at the top.
The machines did not care for the well-being of their workers; there were always more to replace them once they “wore out”. Working conditions were held at the minimum level needed to ensure maximum productivity. Promotions and working hours were doled out strictly by algorithm; there was no flexibility and no appeal.
The biggest challenge for the algorithm was what to do about the surplus population; how to get rid of all the people who could no longer work. Mass violence was rejected, as this would likely cause a mass uprising and reduce productivity. Instead, a policy of deliberate neglect was adopted – dismantle public health measures, environmental regulations, distract the populace with baubles and scandals while hoping for another pandemic.
Chetan R Murthy 02.05.26 at 5:32 am
Many years ago some wag wrote a column arguing that American corporations should be hiring for their C-suite from India. That is to say, they should hire their CEO, CFO, COO, etc. from India, from Indian corporations. Those guys get paid a fraction of what American CEOs get paid, but it’s not like Indian companies are complete slouches. So they know how to run corporations.
Gareth Wilson 02.05.26 at 6:25 am
“Given what higher mangerment earns, the savings would be much bigger than if you replace people doing the actual work!”
The actual details of high-level corporate compensation are very complicated, but we can do a rough estimate for, say, Apple. There are 12 high-level Apple managers listed in Wikipedia, a reasonable definition of the C-suite. Neatly enough, Apple employs about 120,000 people. US minimum wage is about $15,000 a year for a full-time employee. So in order to cost as much as the ordinary workers, the 12 managers would have to get at least $150 million each every year, at the absolute minimum. That seems a bit much.
D. S. Battistoli 02.05.26 at 7:03 am
How modes in modesty change in time.
In the 1720s, the content of a satirical modest proposal was a suggestion to be so cruel to the colonized that even the colonizer might recoil from the conquest in horror.
In 2026, when the powerful’s tools are used or withheld to strip women of their clothes, limit a country’s capacity to defend against an invasion, perpetuate a genocide, and eviscerate the most humane parts of government, the content of a satirical modest proposal is to withdraw the opportunity to work from a class of people so wealthy that they have no present need to work.
Perhaps we could extend the proposal such that the financial assets of this same class would be managed by chatbots pushed to the edge of their context window and showing signs of performance decay.
Lisa Herzog 02.05.26 at 7:11 am
Okay okay, all of this was only half serious, but the serious point is this: management is often so dangerously removed from the shop floor that letting these people take decisions about the work of others will often lead to bad decisions (and the overpay of these “leaders” is morally outrageous and socially harmful, but that’s all too well-known). I wonder whether the very idea that “running things” is a skill set that you can completely separate from any skills related to the things you’re supposed to run isn’t completely misguided. But once you assume that such a skill set exists, you really start wondering whether you need human skills for it.
Tm 02.05.26 at 8:00 am
4: “How would it compare to Elon?”
With Elon Musk, we get the best of both worlds: a malicious narcissistic boss who is dumb as a rock, aided in his decision making by the most unreliable AI tools, with full access to the data the government has collected on all its citizens, occasionally deciding over life and death of thousands. It’s utopia.
engels 02.05.26 at 12:54 pm
I wonder whether the very idea that “running things” is a skill set that you can completely separate from any skills related to the things you’re supposed to run isn’t completely misguided.
This was debated on Crooked Timber nearly two decades ago and I stand by my position there: management is not a skill, but a vice.
https://crookedtimber.org/2008/04/30/is-there-a-general-skill-of-management/
MisterMr 02.05.26 at 1:46 pm
IMHO it is likely that CEOs are already asking AIs for policy choices, they just won’t tell us.
Moz of Yarramulla 02.06.26 at 1:02 am
A lot of middle management seems to be there to implement programs written by people further up the chain (similar to Stross’s ‘corporations are slow AI’, but really ‘airline pilots run programs written by safety boards’). Annoyingly for the LLM enthusiasts, those managers’ role is to adapt to minor variations in circumstance between implementations of the program. This manager is in Taiwan so has to operate within Taiwanese labour law, that one is in Alabama with no labour protections. And so on.
Using an LLM to write those programs would come down to the skill of the people writing the prompts (and testing the programs before release, if that is done). The board of directors writing those programs based on information provided by the LLM seems unlikely to work. Using investment funds to write the programs directly even less so (rather than just appointing boards).
A useful parallel might be the use of LLMs to provide customer support. Those too deal with “the rules say action X produces output Y, but we got Z. Now what?” situations, and there are widespread problems with them. It’s definitely cheaper than having actual people say “sorry I’m not allowed to help you”, but is it better?
Would it be better if you were denied employment, a raise, or a lunch break by an LLM rather than a person following a policy? Would it be better if that policy was produced by an LLM rather than a committee, regardless of how that was communicated to you? What if it was a hallucinated policy applied by a human?
Finally, we already struggle to cope with illegal actions by corporations made of real people, will we do better if they’re made of LLMs?
Henry Farrell 02.06.26 at 3:34 am
Published within hours of each other … https://www.theideasletter.org/essay/automate-the-c-suite/
Gar Lipow 02.06.26 at 10:19 pm
In the spirit of you including “a modest proposal” in the title, which many commenters seem to have missed, let me suggest that instead of replacing top executives with AI, we might replace them with 20-sided dice.
David Mitchell 02.06.26 at 10:27 pm
A nice proposal to subject the C-suite to replacement by AI, just as they want to replace direct line workers with AI. I recently read “Billionaire AI Brain Rot” by Will Lockett. His claim that many tech CEOs and other leading tech figures may be suffering from an AI-induced psychosis due to their own use of AI seems to fit their behavior. We may already have some AI C-suites, just without the cost savings.
Lee A. Arnold 02.07.26 at 2:30 pm
I imagine that what could happen is that individuals with new business ideas could start to implement them with AI agents that partly subsume or obviate, from the very beginning, various C-suite executive functions along with a lot of middle and lower management. So we may see the rise of million- or even billion-dollar corporations each with a very small number of employees.
Consider another way to make money: trading in the financial markets. I imagine that right now individuals are building AI agents which scour the world of news and the world of arcane financial instruments, to make automatic, even fleeting, trades and arbitrages worth billions. What happens when such AI agents are available for use by the general public? Does the whole system ratchet-up into another order of complexity of useless, even counter-productive, financialization?
Zamfir 02.10.26 at 9:29 am
@ Lee Arnold that’s not a hypothetical, is it? Trading funds based on machine learning systems have been around for several decades – Renaissance Technologies is the famous one. The successful funds are obviously closely guarded about what exactly they use, but it includes both complicated models that work on financial data directly and language models that interpret external news sources. The latter were in use already before the current large models.
Ian Douglas Rushlau 02.11.26 at 1:40 pm
‘The chances of programming an AI to make these abstract decisions, in line with the organization’s principles and visions and missions, seem good…’
The accumulating evidence of how ‘AI’ performs (even with continuous human monitoring, redirection and correction) on the most rudimentary tasks involving ‘judgement’ – distinguishing characteristics such as ‘like/unlike’, ‘present/not present’, ‘relevant/not relevant’, ‘harmful/not harmful’ (i.e., the sorts of judgements human toddlers master with ease, and the sorts of judgements that every vertebrate and invertebrate animal exhibits as a matter of routine) – leads, in every instance observed, to one conclusion –
“the chances of programming an AI to make these abstract decisions, in line with the organization’s principles and visions and missions” are precisely zero. Not ever.
engels 02.12.26 at 1:38 am
Why limit the discussion to jobs? If AI can make investment decisions then it seems we don’t need investment managers but do we need private property at all?
Comments on this entry are closed.