Referring back to this 2002 post defining “neoliberalism”, I find the claim that “The (UK) Conservative party is hovering on the edge of extinction”. That wasn’t one of my more accurate assessments, and I’m bearing it in mind when I look at suggestions that the party is now “facing a defeat so dramatic it may not survive.” (That’s the headline; the actual suggestion is that the future may be one of “long periods of Labour with occasional periods of Conservative governments”.)
Brad DeLong (in a recent post summarising a joint podcast with Noah Smith) walks back his previous suggestion that it was time for neoliberals, among whom he had numbered himself, to pass the baton to “the Left”.
The political basis for this is that 20 or so Senate Republicans have been willing to pass legislation from time to time, rather than shutting down the government altogether. I don’t find this compelling, but I also don’t want to debate the issue.
Rather, I’m interested in the following remark, which crystallized a bunch of thoughts I’ve been having for some time:
“How has the left been doing with its baton? Not well at all, for anyone who defines “THE LEFT” to consist of former Bernie staffers who regard Elizabeth Warren as a neoliberal sellout.”
This is a classic, indeed brazen, motte-and-bailey[1], in which the hard-to-defend bailey “the Left of the Democratic party (of which Elizabeth Warren is a prominent member) is doing badly” is replaced by the motte “THE LEFT (as represented, in this case, by disgruntled former Bernie staffers) is doing badly”.
It’s International Workers Day, still celebrated as the May Day public holiday here in Queensland, at least when the Labor party is in office. So, it’s a good day for me to set out some tentative thoughts on work and its future.
Via Matt McManus, I found this quote from Marx’s “Fragment on Machines”.
The hand tool makes the worker independent — posits him as proprietor. Machinery — as fixed capital — posits him as dependent, posits him as appropriated.
Reading this, it struck me that, whereas mainframe computers were archetypal examples of impersonal and alienating machines, personal computers are, or can be, regarded as extensions of their users, that is, as tools. Employers have long struggled to exert control over office computers and the workers who use them, making them extensions of the machine that is corporate IT. But these efforts have always been resisted, and have broken down, to a large extent, with the shift to remote work. My intuition, following Marx, is that this development presages a bigger shift in the relationship between workers and bosses.
Robert Farley has replied to my recent post on the obsolescence of naval power. Unlike our previous exchange, a pile-on where I was (as he points out) in a minority of one, Robert’s tone is mostly civil this time, and I intend to reciprocate. Our disagreements have narrowed a fair way. On many points, it’s a matter of whether the glass is half-full or half-empty.
For example, Farley observes that despite Houthi attacks, 2 million tonnes of shipping per day is passing through the Suez Canal. I’d turn that around and point out that 4 million tonnes of shipping per day has been diverted to more roundabout routes. However, since we agree that naval authorities overstate the macro importance of threats to shipping lanes, we can put that point to one side.
A more relevant case is that of China’s capacity (or lack thereof) to mount a seaborne invasion of Taiwan. I said that China has only a handful of modern landing craft and that their announced plan relies on civilian ferries. Farley points out that China has constructed 16 large, modern amphibious assault vessels in the past 18 years, with more on the way. That’s more than might normally be implied by the word “handful”, but not in a way that meaningfully challenges my argument.
According to Robert’s link, the ships in question can carry 800 troops each, or about 10 000 if all of them were used. That’s enough to do a re-enactment of the Dieppe raid, but not to play a major role in an invasion of a country with a standing army at least ten times as large. And the implied rate of construction (one per year) suggests this isn’t going to change any time soon. This leisurely approach is consistent with the CCP’s need to maintain a public position that it is willing and able to reunite with Taiwan by force, along with a private recognition that this isn’t possible and wouldn’t be wise if it were.
In all the discussion of Leif Wenar’s critique of Effective Altruism, I haven’t seen much mention of the central premise: that development aid is generally counterproductive (unless, perhaps, it’s delivered by wealthy surfers in their spare time). Wenar is quite clear that his argument applies just as much to official development aid and to the long-standing efforts of NGOs as to projects supported by EA. He quotes burned-out aid workers “hoping their projects were doing more good than harm.”
Wenar provides some examples of unintended consequences. For example, bednets provided to fight malaria are sometimes diverted for use as fishing nets. And catching more fish might be bad because it could lead to overfishing (there is no actual evidence of this happening, AFAICT). This seems trivial in comparison to the lives saved by anti-malarial programs.
Update: Wenar’s claim about bednets, as presented by Marc Andreessen, was thoroughly refuted by Dylan Matthews in Vox earlier this year (footnote 1 applies). End update
It’s worth pointing out that, on Wenar’s telling, a project that gave poor people proper fishing nets (exactly the kind of thing that might appeal to the coastal villagers befriended by his surfer friend) might be even worse for overfishing than the occasional diversion of bednets.
Wenar applies his critique to international aid programs. But exactly the same kinds of arguments could be, and are, made against similar programs at the national or subnational level. It’s not hard to find burned-out social workers, teachers and, for that matter, university professors, who will say, after some particularly dispiriting experience, that their efforts have been worse than useless. And the political right is always eager to point out the unintended consequences of helping people. But we have plenty of evidence, most notably from the last decade of austerity, to show that not helping people is much worse.
Over the last year, three of the four most powerful navies[1] in the world have suffered humiliating defeats at the hands of opponents with no navy at all.
First, there’s Russia’s Black Sea Fleet. Until the invasion of Ukraine in February 2022, it was regularly touted as a decisive factor in any conflict, capable not only of blockading Ukraine but of supporting seaborne assaults on ports like Odesa. The desire to secure unchallenged control of Sevastopol in Crimea was widely seen as one of the crucial motives for the Russian takeover in 2014.
Two years after the invasion, most of what’s left of the Black Sea Fleet has fled Sevastopol to take refuge in the Russian port of Novorossiysk, which is, for now, safely out of the reach of Ukrainian drones and anti-ship missiles. (As I was working on this post, Ukraine hit Sevastopol again, damaging three ships and the ship repair plant there.) The Black Sea Fleet has played no significant role in the war, except as a supplier of targets and propaganda opportunities for the Ukrainian side. Its attempted blockade of Ukrainian wheat exports has been a failure.
But for the large community of naval fans, the failure of the Black Sea Fleet hasn’t been a crucial problem. The ominous assessments of its capabilities made before the invasion have been retconned with a narrative of Russian incompetence, Soviet-era holdovers and so on.
The effective closure of the Suez Canal by the Houthi movement is a much bigger problem. Well before the war in Gaza, the US and its allies had a large naval force in the Red Sea and Eastern Mediterranean devoted to keeping this allegedly vital sea lane open. That force now includes two USN carrier strike groups, destroyers and frigates from the Royal Navy and other allies, and a long list of other warships.
At least so far, the Houthis don’t have the capacity to strike US warships. But, even with relatively unsophisticated weapons, they’ve already come close enough to require a US destroyer to use its last line of defence.
The main focus of Houthi attacks has been commercial shipping, particularly any that can be linked in some way to the US, UK and Israel. And it’s these attacks that the joint naval effort is supposed to stop.
The effort has been singularly unsuccessful in this regard. Houthi attacks have reduced shipping through the canal by around 70 per cent, even before the recent sinking of a UK-owned bulk carrier and the claimed escalation into the Indian Ocean. As shippers reconfigure their operations, volumes are likely to fall even further.
Daniel Kahneman, who was, along with Elinor Ostrom, one of the very few non-economists to win the Economics Nobel award, has died aged 90. There are lots of obituaries out there, so I won’t try to summarise his work. Rather, I’ll talk about how it influenced my own academic career.
In a few days’ time, I’ll be lining up in the 65-69 category for the Mooloolaba Olympic triathlon (1500m swim, 40km cycle, 10km run)[1]. People in this age category are commonly described as “aging”, “older”, “seniors”, “elders” and, worst of all, “elderly” (though this mostly kicks in at 70). The one thing we are never called is “old”. But this is the only term that makes any sense. Everyone is aging, one year at a time, and a toddler is older than a baby. Senior and elder are similarly relative terms. And “elderly” routinely implies “frail” (a lot of old people are frail, but many more are not).
What accounts for the near-universal squeamishness that surrounds the term “old”? Apart from the obvious fact that you are a bit closer to death, it’s not that bad being old. Even if not everyone can complete a triathlon, most people maintain (self-assessed) good health to age 85 and beyond. In most developed countries, old people can live a reasonably comfortable life without having to work. And on average, that’s reflected in measures of happiness.
Yet, at least in the Anglosphere, old people don’t seem to be happy in political terms. It’s voters over 65 who provide the core support for conservative parties and are most likely to welcome the drift to the far right represented by Trump and his imitators.
The pattern is particularly striking in the UK, where the YouGov poll shows the right and far right leading easily among voters over 65 (37% Tory + 28% Reform), while gaining essentially no votes from those aged 20-24, where the Tories tie for 5th place with the SNP, behind Labour, Green, Reform and LibDems https://yougov.co.uk/politics/articles/48794-voting-intention-con-20-lab-46-28-29-feb-2024 [2]. Presumably that reflects Brexit, a particularly irresponsible piece of nostalgia politics inflicted mostly by the old on the young.
But it’s the same in the US, Canada, Australia and (though mainly among women) New Zealand. While there has always been a tendency for old people to support the political right, it’s more marked now than it has ever been. And as is particularly evident with MAGA, there’s nothing conservative about this kind of politics. Its primary mode is authoritarian Christian nationalism.
In part, I think this reflects the increasing dominance of culture war issues, where views that were dominant 50 or 60 years ago are now considered unacceptable. Old people whose views haven’t changed in many years are likely to support the right on these issues.
I’d be interested in any thoughts on this.
fn1. Not expecting to do well, thanks to the hottest and stickiest summer I can remember, but I plan to finish.
fn2. A poll last year had the Tories on 1 per cent among young voters.
My latest in Inside Story, reposted from Substack
Managers need to recognise that the best way to dissipate authority is to fail in its exercise
Authority is powerful yet intangible. The capacity to give an order and expect it to be obeyed may rest ultimately on a threat to sanction those who disobey but it can rarely survive large-scale disobedience.
The modern era has seen many kinds of traditional authority come under challenge, but until now the “right of managers to manage” has remained largely immune. If anything, the managers’ power has increased as the countervailing power of unions has declined. But the rise of working from home and, more recently, Labor’s right to disconnect legislation pose unprecedented threats to the power of managers over information workers — those employees formerly known as “office workers.”
To see how this might play out, it’s worth considering the decline of another once-powerful authority, the Catholic Church.
In the early 1960s, following the development of reliable oral contraception, the leaders of the church had to decide whether to accept the Pill as a permissible way for married couples to plan their families. Pope John XXIII established a pontifical commission on birth control to reconsider Catholic doctrine on this topic.
It was a crucial decision precisely because marriage and sex were the most important areas in which the authority of the Church remained supreme and precise rules could be laid down — and generally enforced — among the faithful.
Most people, after all, have no trouble observing the commandments against theft and murder. Other sins like anger, pride and sloth are very much in the eye of the beholder. But the rules regulating who can marry whom and what kind of sexual behaviour is permissible are precise and demanding, to the point that the term “morals” is commonly taken to imply sexual morals. The official celibacy of priests, who thereby showed even more restraint than was demanded of ordinary Catholics, added to the mystique of clerical power.
By the time the commission reported in 1966 John XXIII had been replaced by Pope Paul VI. The commission concluded that artificial birth control was not intrinsically evil and that Catholic couples should be allowed to decide for themselves about the methods they employed. But five of the commission’s sixty-nine members took the opposite view in a minority report.
In the encyclical Humanae Vitae, Pope Paul VI made his fateful rejection of all forms of artificial contraception. As an attempt to exercise and shore up authority it failed completely. The realities of raising large families and dealing with unplanned pregnancies were far removed from the experience of priests and theologians. And the church’s evident demographic motive (the desire for big Catholic families to fill the pews) further undermined the legitimacy of the prohibition.
Previously loyal Catholics ignored Pope Paul’s ruling, in many cases marking their first step away from the Church. Doctrines restricting marriage between Catholics and non-Catholics, including the requirement that children be raised as Catholics, also became little more than formalities commanding at most notional obedience.
The breakdown of clerical authority set the scene for the exposure of clerical child abuse from the 1990s on. Although accusations of this kind had been around for many years, the authority of the church had ensured that critics were silenced or disbelieved.
It is hard to know for sure what would have happened if Pope Paul had chosen differently. The membership and social standing of Protestant denominations, nearly all of which accepted contraception, have also declined, though not as much as those of a Catholic Church that pinned its authority on personal morality. Humanae Vitae’s attempt to exercise papal authority succeeded only in exposing its illusory nature.
In the struggle over working from home and the “freedom to disconnect” we’re seeing something similar happen to the authority of managers.
Following the arrival of Covid-19 in early 2020, working from home went from being a rare indulgence to a general necessity, at least for those whose work could be done with a telephone and a computer. Hardly any time was available for preparation: in mid March, Scott Morrison and Anthony Albanese were still planning to attend football matches; a week later, Australia was in lockdown.
Offices and schools closed. Workers had to convert their kitchen tables or (if they were lucky) spare bedrooms into workstations using whatever equipment they had available. And, to make things even tougher, parents had to take responsibility for the remote education of their children.
Despite the already extensive evidence of the benefits of remote work, many managers expected chaos and a massive reduction in productivity. But information-based work of all kinds carried on without any obvious interruption. Insurance policies were renewed, bills were issued and paid, newspapers and magazines continued to be published. Meetings, that scourge of modern working life, continued to take place, though now over Zoom.
Once the lockdown phase of the pandemic was over, workers were in no hurry to return to the office. The benefits of shorter commuting times and the flexibility to handle family responsibilities were obvious, while adverse impacts on productivity, if any, were hard to discern.
Sceptics argued that working from home, though fine for current employees, would pose major difficulties for the “onboarding” of new staff. Four years into the new era, though, around half of all workers are in jobs they started after the pandemic began. Far from lamenting the lack of office camaraderie and mentorship, these new hires are among the most resistant to the removal of a working condition they have taken for granted since the start.
Nevertheless, chief executives have issued an almost daily drumbeat of demands for a return to five-day office attendance and threatened dire consequences for those who don’t comply. Although these threats sometimes appear to have an effect at first, workers generally stop complying before long. As long as they are still doing their jobs, their immediate managers have little incentive to discipline them, especially as the most capable workers are often the most resistant to close supervision. Three days of office attendance a week has become the new normal for large parts of the workforce, and attempts to change this reality are proving largely fruitless.
The upshot is that attendance rates have barely changed after more than two years of back-to-the-office announcements. The Kastle Systems Back to Work Barometer, a weekly measure of US office attendance as a percentage of February 2020 levels, largely kept within the narrow range of 46 to 50 per cent over the course of 2023.
This fact is finally sinking in. Sandwiched between two pieces about back-to-the-office pushes by diehard employers, the Australian Financial Review recently ran up the white flag with a piece headlined “Return to Office Stalls as Companies Give Up on Five Days a Week.”
This trend, significant in itself, also marks a change in power relations between managers and workers. Behind all the talk about “water cooler conversations” and “synergies,” the real reason for demanding the physical presence of workers is that it makes it easier for managers to exercise authority. The failure of “back to the office” prefigures a major realignment of power relationships at work.
Conversely, the success of working from home in the face of dire predictions undermines one of the key foundations of the “right to manage,” namely the assumption that managers have a better understanding of the organisations they head than do the people who work in them. Despite a vast literature on leadership, the capacity of managers to lead their workers in their preferred direction has proved very limited.
The other side of the remote work debate is the right to disconnect. The same managers who insisted that workers should be physically present at the office in standard working hours (and sometimes longer) also came to expect responses to phone calls and emails at any time of the day or night. The supposed need for an urgent response typically reflected sloppiness on the part of managers incapable of organising their own work schedules to take account of the need for work–life balance.
Once again, managers have attempted to draw a line in the sand. Opposition leader Peter Dutton has backed them, promising to repeal the right to disconnect if the Coalition wins the next election. It’s a striking illustration of the importance of power to the managerial class that Dutton has chosen to fight on this issue while capitulating to the government’s broken promise on the Stage 3 tax cuts, which would have delivered big financial benefits to his strongest supporters.
Can this trend be reversed? The not-so-secret hope is that high unemployment will turn the tables. As Tim Gurner (of “avocado toast” fame) put it, “We need pain in the economy… and employees need to be reminded of who is boss.” US tech firms have put that view to the test with large-scale sackings, many focused on remote workers. But the other side of remote work is mobility. Many of those fired in the recent tech layoffs have found new jobs, often also remote.
In the absence of a really deep recession, firms that demand and enforce full-time attendance will find themselves with a limited pool of disgruntled workers dominated by those with limited outside options.
Popular stories — from King Canute’s attempt to turn back the tide (apparently to make fools of obsequious courtiers who suggested he could do it) to Hans Christian Andersen’s naked emperor — have made the point that the best way to dissipate authority is to fail in its exercise. Pope Paul ignored that lesson and the Catholic Church paid the price. Now, it seems, managers are doing the same.
Over the last few years, the Australian and UK Labor/Labour[1] parties have followed strikingly parallel paths:
- A better-than-expected result with a relatively progressive platform (Oz 2016, UK 2017)
- A demoralizing defeat in 2019, followed by the election of a new, more conservative leader (Albanese, Starmer)
- Wholesale abandonment of the program
- Failure of the rightwing government to handle Covid and other problems
Because we have elections every three years, Australia is ahead of the UK, and we now have a Labor government led by Anthony Albanese. In its election campaign and its first eighteen months in office, Labor stuck to a program of implementing rightwing policies with better processes and minor tweaks to the most repressive aspects. This is, AFAICT, what can be expected from Starmer in the UK.
But over the last month or so, we’ve had a series of significant policy wins, which may set the stage for more.
Continuing my discussion of the recent upsurge in pro-natalism, I want to talk about the idea that, unless birth rates rise, society will face a big problem caring for old people. In this post, I’m going to focus on aged care in the narrow sense, rather than issues like retirement income, which depend crucially on social policy.
Looking at Australian data on location of death, I found that around 30 per cent of people die in aged care, and that the mean time spent in aged care is around three years, implying an average of one year per person. Staffing requirements in Australia amount to around one full-time staff member per resident. So the “average” Australian requires about one full-time working year of aged care in their lifetime, or about 2.5 per cent of a working life. This is, as it happens, about the proportion of the Australian workforce currently engaged in aged care.
But what if each generation were only half the size of the preceding one? In that case, the share of the labour force required for aged care would double, to around 5 per cent.
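As a back-of-the-envelope check, here is a minimal sketch of that arithmetic in Python. The figures are the approximate ones quoted above; the 40-year working life is an assumption implied by the 2.5 per cent figure rather than a number quoted above.

```python
# Sketch of the aged-care arithmetic above. All figures are approximate;
# the 40-year working life is an assumption implied by the text.

share_dying_in_aged_care = 0.30   # ~30% of people die in aged care
mean_years_in_care = 3.0          # mean stay for those who enter care
staff_per_resident = 1.0          # ~1 full-time staff member per resident
working_life_years = 40.0         # assumed length of a working life

# Expected years of residential care per person, averaged over everyone
expected_care_years = share_dying_in_aged_care * mean_years_in_care  # 0.9, rounded to ~1 above

# Full-time care-work years per person, as a share of a working life
care_labour_share = expected_care_years * staff_per_resident / working_life_years

# If each generation were half the size of the one before, there would be
# roughly twice as many old people per worker, so the share roughly doubles
care_labour_share_halved = 2 * care_labour_share

print(f"Care labour per person: {expected_care_years:.1f} full-time years")
print(f"Share of a working life: {care_labour_share:.1%}")         # ~2.2%, i.e. about 2.5 per cent
print(f"With halved generations: {care_labour_share_halved:.1%}")  # ~4.5%, i.e. around 5 per cent
```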
If you find this scary, you might want to consider that children aged 0-5 require more care than old people, and for a much longer time. Because this care is provided within the family, and without any monetary return, it doesn’t appear in national accounts. But a pro-natalist policy requires that people have more children than they choose to at present. To the extent that this is achieved by subsidising the associated labour costs (for example, through publicly funded childcare), it will rapidly offset the eventual benefit of having more workers available to provide aged care.
And that’s only preschool children. There’s a significant childcare element in school education, as we saw when schools closed at the beginning of the pandemic. And school-age children still require plenty of parental care. (I’ll talk about education more generally in a later post, I hope).
Repeating myself, none of this is a problem when people choose to have children, more or less aware of the work this will involve (though, as everyone who has been through it knows, new parents are in for a big shock). But it’s clear by now that voluntary choices will produce a below-replacement birth rate. Policies aimed at changing those choices will have costs that exceed their benefits.
Chris’s post on declining population has prompted me to get started on what I plan, in the end, to be a lengthy critique of the pro-natalist position that dominates public debate at the moment. My initial motivation to do this reflected long-standing concerns about human impacts on the environment but I don’t have any particular expertise on that topic, or anything new to say. Instead, I want to address the economic and social issues, making the case that a move to a below-replacement fertility rate is both inevitable and desirable.
I’m going to start with a claim that came up in discussion here and is raised pretty often. The claim is that the more children are born, the greater the chance that some of them will be Mozarts, Einsteins, or Mandelas who will contribute greatly to human advancement. My response was prefigured several hundred years ago by Thomas Gray’s Elegy Written in a Country Churchyard. Gray reflects that those buried in the churchyard may include some “mute inglorious Milton” whose poetic genius was never given the chance to flower because of poverty and unremitting labour:
But Knowledge to their eyes her ample page
Rich with the spoils of time did ne’er unroll;
Chill Penury repress’d their noble rage,
And froze the genial current of the soul.
Billions of people alive today (the majority of whom are women) are in the same situation, with their potential unrealised through lack of access to education and resources to express themselves. Rather than adding to their numbers, or diverting yet more resources away from them, we ought to be focusing on making a world where everyone has a chance to be a great poet or inventor.
Here’s a piece I wrote for The Guardian. It’s also at my Substack. Some of it is Australia-specific but some may be of more general interest.
The policy debate about the cost of living is among the most confused and confusing in recent memory. All sorts of measures to reduce the cost of living are proposed, then criticised as being potentially inflationary. The argument implies, absurdly, that reducing the cost of living will increase the cost of living.
The issue here is that the “cost of living” is an essentially meaningless concept, rather like the sound of one hand clapping. The problem isn’t the cost of buying goods, but whether our income is sufficient to pay for those goods. For most of us, that means the real (inflation-adjusted) value of our wages, after paying tax and (for homebuyers) mortgage interest.
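To make the distinction concrete, here is a minimal sketch with entirely hypothetical numbers (none of these are actual Australian data): what matters is whether the wage left over after tax and, for homebuyers, mortgage interest keeps pace with prices, not the price level on its own.

```python
# Illustrative only: hypothetical numbers, not actual data.
# "Real" means deflated by consumer prices over the same period.

def real_disposable_wage(gross_wage, tax_paid, mortgage_interest, price_index):
    """Wage left after tax and mortgage interest, expressed in constant prices."""
    return (gross_wage - tax_paid - mortgage_interest) / price_index

# Year 0 vs year 1 for a hypothetical homebuyer household
year0 = real_disposable_wage(gross_wage=80_000, tax_paid=18_000,
                             mortgage_interest=15_000, price_index=1.00)
year1 = real_disposable_wage(gross_wage=84_000, tax_paid=19_500,
                             mortgage_interest=21_000, price_index=1.05)

change = (year1 / year0 - 1) * 100
print(f"Change in real disposable wage: {change:.1f}%")
# A 5% nominal pay rise can coincide with a double-digit fall in living
# standards if tax, interest and prices rise faster; the "cost of living"
# framing obscures exactly this distinction.
```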
I’ve been working a bit on inflation and the highly problematic concept of the ‘cost of living’ (shorter JQ: what matters is the purchasing power of wages, not the cost of some basket of goods). As part of this, I’ve been looking at how particular prices have changed over time, focusing on basics like bread and milk.
One striking thing that I found out is that, until quite late in the 20th century, the standard loaf of bread used to calculate consumer price indexes in Australia weighed 4 pounds (nearly 2kg). That’s about as much as three standard loaves of sliced bread. Asking around, this turns out to be the largest of the standard sizes specified in legislation like the Western Australian Bread Act, which was only repealed in 2004, AFAICT.
Going back a century or so further, the Speenhamland system of poor relief in England specified the weekly nutrition requirements of a labouring man as a “gallon loaf” of bread, made from a gallon (about 5 litres) of flour, and weighing 8.8 pounds (4kg). Bread was pretty much all that poor people got to eat, so the amount seems plausible.
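For anyone who wants to check the conversions, here is a quick sketch (the 650g weight for a modern sliced loaf is my assumption, not a figure quoted above):

```python
# Sanity check of the loaf weights mentioned above. The modern sliced-loaf
# weight (650 g) is an assumed typical value, not taken from the text.

POUND_KG = 0.4536   # kilograms per pound

four_pound_loaf_kg = 4 * POUND_KG    # ~1.8 kg ("nearly 2kg")
gallon_loaf_kg = 8.8 * POUND_KG      # ~4.0 kg

modern_sliced_loaf_kg = 0.65         # assumed typical modern loaf
equivalent_loaves = four_pound_loaf_kg / modern_sliced_loaf_kg   # ~2.8, "about three"

print(f"4 lb loaf: {four_pound_loaf_kg:.2f} kg")
print(f"Gallon loaf: {gallon_loaf_kg:.2f} kg")
print(f"4 lb loaf in modern sliced loaves: {equivalent_loaves:.1f}")
```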
But why one huge loaf rather than, say, seven modern-size loaves? And turning that question around, why are our current loaves so much smaller?
As 2024 dawns, Crooked Timber has a new member. Doug Muir, formerly with A Fistful of Euros, and more recently an insightful commenter here at CT, will be blogging here from now on. I’m looking forward to what he has to say.