The Moral Economy of High-Tech Modernism

by Henry Farrell on March 1, 2023

[The below is the main text of Henry Farrell and Marion Fourcade, “The Moral Economy of High-Tech Modernism,” published in the Winter 2023 issue of Daedalus under a Creative Commons license.]


While people in and around the tech industry debate whether algorithms are political at all, social scientists take the politics as a given, asking instead how this politics unfolds: how algorithms concretely govern. What we call “high-tech modernism”—the application of machine learning algorithms to organize our social, economic, and political life—has a dual logic. On the one hand, like traditional bureaucracy, it is an engine of classification, even if it categorizes people and things very differently. On the other, like the market, it provides a means of self-adjusting allocation, though its feedback loops work differently from the price system. Perhaps the most important consequence of high-tech modernism for the contemporary moral political economy is how it weaves hierarchy and data-gathering into the warp and woof of everyday life, replacing visible feedback loops with invisible ones, and suggesting that highly mediated outcomes are in fact the unmediated expression of people’s own true wishes.


Algorithms—especially machine learning algorithms—have become major social institutions. To paraphrase anthropologist Mary Douglas, algorithms “do the classifying.”1 They assemble and they sort—people, events, things. They distribute material opportunities and social prestige. But do they, like all artifacts, have a particular politics?2 Technologists defend themselves against the very notion, but a lively literature in philosophy, computer science, and law belies this naive view. Arcane technical debates rage around the translation of concepts such as fairness and democracy into code. For some, it is a matter of legal exposure. For others, it is about designing regulatory rules and verifying compliance. For a third group, it is about crafting hopeful political futures.3

The questions from the social sciences are often different: How do algorithms concretely govern? How do they compare to other modes of governance, like bureaucracy or the market? How does their mediation shape moral intuitions, cultural representations, and political action? In other words, the social sciences worry not only about specific algorithmic outcomes, but also about the broad, society-wide consequences of the deployment of algorithmic regimes—systems of decision-making that rely heavily on computational processes running on large databases. These consequences are not easy to study or apprehend. This is not just because, like bureaucracies, algorithms are simultaneously rule-bound and secretive. Nor is it because, like markets, they are simultaneously empowering and manipulative. It is because they are a bit of both. Algorithms extend both the logic of hierarchy and the logic of competition. They are machines for making categories and applying them, much like traditional bureaucracy. And they are self-adjusting allocative machines, much like canonical markets.

Understanding this helps highlight both similarities and differences between the historical regime that political scientist James Scott calls “high modernism” and what we dub high-tech modernism.4 We show that bureaucracy, the typical high modernist institution, and machine learning algorithms, the quintessential high-tech modernist one, share common roots as technologies of hierarchical classification and intervention. But whereas bureaucracy reinforces human sameness and tends toward large, monopolistic (and often state-based) organizations, algorithms encourage human competition, in a process spearheaded by large, near-monopolistic (and often market-based) organizations. High-tech modernism and high modernism are born from the same impulse to exert control, but are articulated in fundamentally different ways, with quite different consequences for the construction of the social and economic order. The contradictions between these two moral economies, and their supporting institutions, generate many of the key struggles of our times.


Both bureaucracy and computation enable an important form of social power: the power to classify.5 Bureaucracy deploys filing cabinets and memorandums to organize the world and make it “legible,” in Scott’s terminology. Legibility is, in the first instance, a matter of classification. Scott explains how “high modernist” bureaucracies crafted categories and standardized processes, turning rich but ambiguous social relationships into thin but tractable information. The bureaucratic capacity to categorize, organize, and exploit this information revolutionized the state’s ability to get things done. It also led the state to reorder society in ways that reflected its categorizations and acted them out. Social, political, and even physical geographies were simplified to make them legible to public officials. Surnames were imposed to tax individuals; the streets of Paris were redesigned to facilitate control.

Yet high modernism was not just about the state. Markets, too, were standardized, as concrete goods like grain, lumber, and meat were converted into abstract qualities to be traded at scale.6 The power to categorize made and shaped markets, allowing grain buyers, for example, to create categories that advantaged them at the expense of the farmers they bought from. Businesses created their own bureaucracies to order the world, deciding who could participate in markets and how goods ought to be categorized.

We use the term high-tech modernism to refer to the body of classifying technologies based on quantitative techniques and digitized information that partly displaces, and partly is layered over, the analog processes used by high modernist organizations. Computational algorithms—especially machine learning algorithms—perform similar functions to the bureaucratic technologies that Scott describes. Both supervised machine learning (which classifies data using a labeled training set) and unsupervised machine learning (which organizes data into self-discovered clusters) make it easier to categorize unstructured data at scale. But unlike their paper-pushing predecessors in bureaucratic institutions, the humans of high-tech modernism disappear behind an algorithmic curtain. The workings of algorithms are much less visible, even though they penetrate deeper into the social fabric than the workings of bureaucracies. The development of smart environments and the Internet of Things has made the collection and processing of information about people too comprehensive, minutely geared, inescapable, and fast-growing for considered consent and resistance.
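The distinction between the two machine learning paradigms named above can be made concrete with a toy sketch. Everything here is invented for illustration — the data, the labels, and the hand-rolled algorithms (a 1-nearest-neighbor classifier and a bare-bones k-means) — and real systems operate at vastly larger scale:

```python
# Illustrative sketch only: toy one-dimensional "spending" data.

# Supervised learning: a labeled training set teaches the classifier.
labeled = [(10, "thrifty"), (12, "thrifty"), (95, "lavish"), (99, "lavish")]

def classify(x):
    # 1-nearest-neighbor: copy the label of the closest training example.
    return min(labeled, key=lambda pair: abs(pair[0] - x))[1]

# Unsupervised learning: no labels; clusters are discovered from the data.
def cluster(points, k=2, iterations=10):
    # Bare-bones k-means, seeded with the first k points.
    centers = points[:k]
    for _ in range(iterations):
        groups = [[] for _ in range(k)]
        for p in points:
            groups[min(range(k), key=lambda i: abs(centers[i] - p))].append(p)
        centers = [sum(g) / len(g) if g else centers[i]
                   for i, g in enumerate(groups)]
    return groups

label = classify(90)                # supervised: assigned a pre-named category
groups = cluster([10, 12, 95, 99])  # unsupervised: self-discovered clusters
```

The point of the contrast: in the supervised case the categories (“thrifty,” “lavish”) exist before the data arrive; in the unsupervised case the groupings emerge from the data and need have no human-legible name at all.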

In a basic sense, machine learning does not strip away nearly as much information as traditional high modernism. It potentially fits people into categories (“classifiers”) that are narrower—even bespoke. The movie streaming platform Netflix will slot you into one of its two thousand–plus “microcommunities” and match you to a subset of its thousands of subgenres. Your movie choices alter your position in this scheme and might in principle even alter the classificatory grid itself, creating a new category of viewer reflecting your idiosyncratic viewing practices.

Many of the crude, broad categories of nineteenth-century bureaucracies have been replaced by new, multidimensional classifications, powered by machine learning, that are often hard for human minds to grasp.7 People can find themselves grouped around particular behaviors or experiences, sometimes ephemeral, such as followers of a particular YouTuber, subprime borrowers, or fans of action movies with strong female characters. Unlike clunky high modernist categories, high-tech modernist ones can be emergent and technically dynamic, adapting to new behaviors and information as they come in. They incorporate tacit information in ways that are sometimes spookily right, and sometimes disturbing and misguided: music-producing algorithms that imitate a particular artist’s style, language models that mimic social context, or empathic AI that supposedly grasps one’s state of mind.8 Generative AI technologies can take a prompt and generate an original picture, video, poem, or essay that seems to casual observers as though it were produced by a human being.

Taken together, these changes foster a new politics. Traditional high modernism did not just rely on standard-issue bureaucrats. It empowered a wide variety of experts to make decisions in the area of their particular specialist knowledge and authority. Now, many of these experts are embattled, their authority nibbled away by algorithms that advocates claim are more accurate, more reliable, and less partial than their human predecessors.


One key difference between the moral economies of high modernism and high-tech modernism involves feedback. It is tempting to see high modernism as something imposed entirely from above. However, in his earlier book Weapons of the Weak, Scott suggests that those at the receiving end of categorical violence are not passive and powerless.9 They can sometimes throw sand into the gears of the great machinery.

As philosopher Ian Hacking explains, certain kinds of classifications—typically those applying to human or social collectives—are “interactive” in that

when known by people or those around them, and put to work in institutions, [they] change the ways in which individuals experience themselves—and may even lead people to evolve their feelings and behavior in part because they are so classified.10

People, in short, have agency. They are not submissive dupes of the categories that objectify them. They may respond to being put in a box by conforming to or growing into those descriptions. Or they may contest the definition of the category, its boundaries, or their assignment to it.11 This creates a feedback loop in which the authors of classifications (state officials, market actors, experts from the professions) may adjust the categories in response. Human society, then, is forever being destructured and restructured by the continuous interactions between classifying institutions and the people and groups they sort.

But conscious agency is only possible when people know about the classifications: the politics of systems in which classifications are visible to the public, and hence potentially actionable, will differ from the politics of systems in which they are not.

So how does the change from high modernism to high-tech modernism affect people’s relationships with their classifications? At its worst, high modernism stripped out tacit knowledge, ignored public wishes and public complaints, and dislocated messy lived communities with sweeping reforms and grand categorizations, making people more visible and hence more readily acted on. The problem was not that the public did not notice the failures, but that their views were largely ignored. Authoritarian regimes constricted the range of ways in which people could respond to their classification: anything more than passive resistance was liable to meet brutal countermeasures. Democratic regimes were, at least theoretically, more open to feedback, but often ignored it when it was inconvenient and especially when it came from marginalized groups.

The pathologies of computational algorithms are often more subtle. The shift to high-tech modernism allows the means of ensuring legibility to fade into the background of the ordinary patterns of our life. Information gathering is woven into the warp and woof of our existence, as entities gather ever finer data from our phones, computers, doorbell cameras, purchases, and cars. There is no need for a new Haussmann to transform cramped alleyways into open boulevards, exposing citizens to view.12 Urban architectures of visibility have been rendered nearly redundant by the invisible torrents of data that move through the air, conveying information about our movements, our tastes, and our actions to be sieved through racks of servers in anonymous, chilled industrial buildings.

The feedback loops of high-tech modernism are also structurally different. Some kinds of human feedback are now much less common. Digital classification systems may group people in ways that are not always socially comprehensible (in contrast to traditional categories such as female, married, Irish, or Christian). Human feedback, therefore, typically requires the mediation of specialists with significant computing expertise, but even they are often mystified by the operation of systems they have themselves designed.13

The political and social mechanisms through which people previously responded, actively and knowingly, to their categorization—by affirming, disagreeing with, or subverting it—have been replaced by closed loops in which algorithms assign people unwittingly to categories, assess their responses to cues, and continually update and reclassify them. The classifications produced by machine learning are cybernetic, in mathematician Norbert Wiener’s original sense of the word. That is, they are self-correcting: categories are automatically and dynamically adjusted in light of the reactions that they produce.
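What “cybernetic” means here can be sketched in a few lines: a classification that updates itself automatically from observed reactions, with no moment at which the classified person can see or contest the category. The numbers, the update rule, and the name of the score are all ours, purely for illustration:

```python
# Hypothetical sketch of a cybernetic classification loop: a hidden
# score is adjusted after every observed reaction, and the person
# being scored never sees the category or the update rule.
def update(score, engaged, rate=0.3):
    # Move the score toward 1.0 on engagement, toward 0.0 otherwise.
    target = 1.0 if engaged else 0.0
    return score + rate * (target - score)

score = 0.5                                 # initial, invisible classification
for reaction in [True, True, False, True]:  # cues shown, responses observed
    score = update(score, reaction)
```

Each pass through the loop is a closed feedback cycle in Wiener’s sense: cue, response, automatic reclassification — with no point of entry for the kind of conscious contestation Hacking describes.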

The changing politics of credit in the United States helps illuminate these differences. Until the 1970s, broad demographic characteristics such as gender or race—or high modernist proxies such as marital status or the redlining of poor, primarily Black neighborhoods—were routinely used to determine a person’s creditworthiness. It was only when categorical discrimination was explicitly forbidden that new actuarial techniques, aimed at precisely scoring the “riskiness” of specific individuals, began to flourish in the domain of credit.14

This did not just change how lenders “saw” individuals and groups, but also how individuals and groups thought about themselves and the politics that were open to them.15 Redlining was overt racial prejudice, visible to anyone who bothered looking at a map. But credit scoring turned lending risk evaluation into a quantitative, individualized, and abstract process. Contesting the resulting classifications or acting collectively against them became harder. Later, the deployment of machine learning—which uses even weaker signals to make its judgments, such as using a phone’s average battery level to predict its owner’s likelihood of repaying a loan—made the process of measuring creditworthiness even more opaque and difficult to respond to.16

Predictive scores that rely on behavioral measures eschew blatant racial discrimination. But it would be a mistake to think that they eliminate racial disparities—they just make them harder to see, sometimes allowing them to ramify further.17 This is why the political struggle against algorithms has emphasized historical biases embedded in training data sets and the inherent unfairness and poor performance of nontransparent, automated decision-making. The European Commission has proposed to regulate the use of “high risk” algorithms that endanger fundamental rights, subjecting them to frequent human review.18 This would include the use of algorithms for public benefit eligibility, credit scoring, law enforcement, immigration control, employment, and more. Finally, traditional high modernist professionals—including judges, journalists, and law enforcement officers—have also pushed back against the use of algorithms in their work, treating them as irrelevant, inefficient, or a status threat.19


The moral economy of high-tech modernism is market-driven, both practically and ideologically. Many algorithm-based start-ups want to expand market share rapidly and aggressively. Once revenues exceed fixed costs, the additional cost of adding a new user is comparatively tiny. Platform companies like Facebook or YouTube can serve billions of customers with tens of thousands of employees. Machine learning algorithms can gather data about users and dynamically provide and adjust flows of content, while auction and matching algorithms can maintain dynamic markets for advertisers who want access to customers with specific demographic characteristics.

Algorithms institutionalize competition between units (whether people, organizations, or ideas) by fostering a market-based vision of fairness.20 The threat of being automated away looms large for all workers. Algorithmic technologies can also be implemented to hire and fire, to predict performance, influence, and riskiness, or to surveil, discipline, and arrest. They do so by rank-ordering according to their own particular versions of merit.21 It is as though anyone who applies themselves can do well, and as though social structure and existing power allocations do not matter. (The irony is that while high-tech modernist firms are happy to turn the market screw on everyone else, they strive to establish monopoly for themselves.)22

Just like the behavior of individuals, the distribution of knowledge must be subjected to the market test. High-tech modernism claims to represent popular judgment against the snobbishness of elites. Remember that Scott identifies high modernism as inherently antidemocratic because it enforces categories and objectives decided on by elites who “know better.”23 High-tech modernism, by contrast, systematically undermines elite judgment, fueling a crisis of expertise.24 Algorithms purport to read X-rays better than radiologists, predict purchases better than market researchers, understand people’s sexuality better than they themselves do, and produce new text or code better than many professional writers and engineers. Meanwhile, they elevate a kind of bottom-up wisdom. The network leaves it up to the crowd to judge what is worth knowing, generating collective sentiments through likes, clicks, and comments. Viral trends and online multitudes provide a kind of pseudodemocratic, if extremely volatile, vox populi.

The absence of visible hierarchy legitimates high-tech modernism’s claim that clouds and crowds best represent people’s wishes. Its new elites echo early libertarian arguments about cyberspace, and quasi-Hayekian defenses of the market, facially justifying the notion that search engines and other algorithms are disinterested means of processing the internet’s naturally dispersed stock of knowledge.25 They flatter high-tech modernism as defending the liberties of the individual, freed from physical and social bonds, against traditional status hierarchies. The abundant data that people “freely” upload or leave behind as they roam cyberspace become “an unqualified good,” fostering beneficial competition for everyone and everything.26

The awkward fact is that hierarchy has not disappeared. It has only become less visible. Platform companies’ business priorities determine the algorithms that are employed, as well as their “objective functions,” the weighted goals that they are supposed to maximize on. Social media corporations employ algorithms that maximize “engagement,” keeping consumers scrolling through feeds or watching video clips so that they keep seeing paid content that may itself be misleading. Amazon, in contrast, cares more about getting people to buy things, and, according to legal scholar and Federal Trade Commission Chair Lina Khan, uses its detailed transaction information and ability to rank search outcomes to fortify its market dominance.27 Platform companies dislike even tweaking their algorithms in response to regulators’ demands for fear that it might create a precedent for further interventions that would conflict with their business model.
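The notion of an “objective function” — the weighted goals an algorithm is built to maximize — can be illustrated with a deliberately simplified sketch. The field names and weights below are invented; real platform objectives are proprietary and far more elaborate:

```python
# Hedged sketch of a weighted objective function used to rank content.
# Both the prediction fields and the weights are hypothetical.
WEIGHTS = {"predicted_watch_time": 0.7, "predicted_clicks": 0.3}

def objective(item):
    # Weighted sum of the business goals the platform has chosen.
    return sum(WEIGHTS[key] * item[key] for key in WEIGHTS)

feed = [
    {"id": "calm_news",    "predicted_watch_time": 0.4, "predicted_clicks": 0.3},
    {"id": "outrage_clip", "predicted_watch_time": 0.9, "predicted_clicks": 0.8},
]
ranked = sorted(feed, key=objective, reverse=True)
```

The business priority is encoded entirely in the weights: tilt them toward watch time and the feed optimizes for scrolling; tilt them toward purchases and it optimizes for buying. The user sees only the resulting order, never the function.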

As search engines have transformed from general-purpose technology to personal digital assistants, they have elevated searching the web and forming an opinion “for oneself” into a normative principle. People think of search engines as oracles, but as sociologist Francesca Tripodi and others have shown, they work more like distorting mirrors that variously confirm, exacerbate, or take advantage of people’s priors.28 Our interests and beliefs are embedded in the vocabulary we use, the questions we ask, perhaps our whole search history. YouTube, Facebook, and other social media present content based on what we have wanted to see in the past, and what other people who are like us across some array of dimensions have wanted to see.

In this way, platform companies have become knowledge intermediaries, like newspapers or school curriculum boards, while insulating themselves from traditional accountability. Their algorithms and (perhaps just as important) sharing and search tools help foster categories that can become self-reinforcing private universes of discourse, producing epistemic bubbles in which other voices are simply left out, or echo chambers that guide users toward apparent authorities who actively work to discredit other sources of information.29 However, the invisibility of hierarchy allows these knowledge intermediaries to justify themselves on laissez-faire principles, not telling the public what to trust, even while they quietly sink deeper into the Augean mire of moderating offensive, false, or misleading content.

Our universe of accessible knowledge is shaped by categorization processes that are invisible and incomprehensible to ordinary users, according to principles that have little regard for whether that knowledge is well sourced. The outcome is that the way that people “take [their] bearings in the world” is slowly changing.30 Visible feedback loops between the people being categorized, the knowledge they have access to, and the processes through which the categories are generated are replaced by invisible loops mediated through algorithms that maximize on commercial imperatives, sometimes creating incompatible and self-sustaining islands of shared (“post-truth”) beliefs among micropublics who have been categorized in particular ways, and who may themselves act to reinforce the categories. A new terrain of political struggle has arisen, involving the exploitation of information systems and algorithmic dynamics for partisan advantage.

This is a different set of moral pathologies than those suggested by social psychologist Shoshana Zuboff, who emphasizes platform companies’ manipulation of people’s wants and beliefs, which might or might not succeed.31 The more corrosive threat may be that people have been convinced that the high-tech modernist system of knowledge generation is an open buffet where “anything goes,” and that keeping it that way is essential to their own freedom. Anyone can offer content, anyone can be their own expert, and it is up to the algorithm to sort it out. Further, the new existential condition of transparency has provided everyone with potent tools to expose or doubt others, moderated only by their own vulnerability to being exposed in turn—an inherently agonistic situation.


At the end of the day, the relationship between high modernism and high-tech modernism is a struggle between two elites: a new elite of coders, who claim to mediate the wisdom of crowds, and an older elite who based their claims to legitimacy on specialized professional, scientific, or bureaucratic knowledge.32 Both elites draw on rhetorical resources to justify their positions; neither is disinterested.

The robust offense and disbelief that many people feel about algorithmic judgments suggests that the old high modernist moral political economy, faults and all, is not quite dead. The new moral political economy that will replace it has not yet matured, but is being bred from within. Articulated by technologists and their financial backers, it feeds in a kind of matriphagy on the enfeebled body (and the critique) of its progenitor. Just as high modernist bureaucracies did before, high-tech modernist tools and their designers categorize and order things, people, and situations. But they do so in distinctive ways. By embedding surveillance into everything, they have made us stop worrying about it, and perhaps even come to love it.33 By producing incomprehensible bespoke categorizations, they have made it harder for people to identify their common fate. By relying on opaque and automated feedback loops, they have reshaped the possible pathways to political reaction and resistance. By increasing the efficiency of online coordination, they have made mobilization more emotional, ad hoc, and collectively unstable. And by insisting on market fairness and the wisdom of crowds as organizing social concepts, they have fundamentally transformed our moral intuitions about authority, truth, objectivity, and deservingness.

We are grateful to Jenna Bednar, Angus Burgin, Eric Beinhocker, danah boyd, Robyn Caplan, Federica Carugati, Maciej Ceglowski, Jerry Davis, Deborah Estrin, Martha Finnemore, Sam Gill, Peter Hall, Kieran Healy, Rebecca Henderson, Natasha Iskander, Bill Janeway, Joseph Kennedy III, Jack Knight, Margaret Levi, Charlton McIlwain, Margaret O’Mara, Suresh Naidu, Bruno Palier, Manuel Pastor, Paul Pierson, Kate Starbird, Kathy Thelen, Lily Tsai, and Zeynep Tufekci for comments on an earlier version of this essay.

1 Mary Douglas, How Institutions Think (Syracuse, N.Y.: Syracuse University Press, 1986), 91.
2 Langdon Winner, “Do Artifacts Have Politics?” Dædalus 109 (1) (Winter 1980): 121–136.
3 Virginia Eubanks, “The Mythography of the ‘New’ Frontier,” MIT Communications Forum, 1999.
4 James Scott, Seeing Like a State: How Certain Schemes to Improve the Human Condition Have Failed (New Haven, Conn.: Yale University Press, 1998).
5 Robyn Caplan and danah boyd, “Isomorphism through Algorithms: Institutional Dependencies in the Case of Facebook,” Big Data & Society 5 (1) (2018): 1–12.
6 William Cronon, Nature’s Metropolis: Chicago and the Great West (New York: W. W. Norton, 1991).
7 Marion Fourcade and Kieran Healy, “Seeing Like a Market,” Socio-Economic Review 15 (1) (2017): 9–29.
8 Luke Stark, “The Emotive Politics of Digital Mood Tracking,” New Media and Society 22 (11) (2020): 2039–2057.
9 James Scott, Weapons of the Weak: Everyday Forms of Peasant Resistance (New Haven, Conn.: Yale University Press, 1985).
10 Ian Hacking, The Social Construction of What? (Cambridge, Mass.: Harvard University Press, 1999), 103–104.
11 Geoffrey Bowker and Susan Leigh Star, Sorting Things Out: Classification and Its Consequences (Cambridge, Mass.: The MIT Press, 1999).
12 Georges-Eugène Haussmann was the prefect responsible for the renewal and reimagining of Paris under Napoleon III.
13 Cathy O’Neil, Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy (New York: Crown, 2016); and Jenna Burrell, “How the Machine ‘Thinks’: Understanding Opacity in Machine Learning Algorithms,” Big Data & Society 3 (1) (2016): 1–12.
14 Martha Poon, “From New Deal Institutions to Capital Markets: Commercial Consumer Risk Scores and the Making of Subprime Mortgage Finance,” Accounting, Organizations and Society 34 (5) (2009): 654–674.
15 Greta Krippner, “Democracy of Credit: Ownership and the Politics of Credit Access in Late Twentieth-Century America,” American Journal of Sociology 123 (1) (2017): 1–47.
16 Kai-Fu Lee, AI Superpowers: China, Silicon Valley and the New World Order (New York: Harper Business, 2018).
17 Solon Barocas and Andrew D. Selbst, “Big Data’s Disparate Impact,” California Law Review 104 (3) (2016): 671–732; Ruha Benjamin, Race After Technology: Abolitionist Tools for the New Jim Code (Cambridge, Mass.: Polity, 2019); and Safiya Umoja Noble, Algorithms of Oppression: How Search Engines Reinforce Racism (New York: NYU Press, 2018).
18 European Commission, Proposal for a Regulation of the European Parliament and of the Council Laying Down Harmonised Rules on Artificial Intelligence (Artificial Intelligence Act) and Amending Certain Union Legislative Acts (Brussels: European Commission, 2021).
19 Angèle Christin, Metrics at Work: Journalism and the Contested Meaning of Algorithms (Princeton, N.J.: Princeton University Press, 2020); and Sarah Brayne, Predict and Surveil: Data, Discretion, and the Future of Policing (New York: Oxford University Press, 2020).
20 Barbara Kiviat, “The Moral Limits of Predictive Practices: The Case of Credit-Based Insurance Scores,” American Sociological Review 84 (6) (2019): 1134–1158.
21 Marion Fourcade, “Ordinal Citizenship,” The British Journal of Sociology 72 (2) (2021): 154–173.
22 Peter Thiel, “Competition Is for Losers,” The Wall Street Journal, September 12, 2014.
23 Scott, Seeing Like a State.
24 Gil Eyal, The Crisis of Expertise (Cambridge, Mass.: Polity, 2019).
25 John Perry Barlow, “A Declaration of the Independence of Cyberspace,” Electronic Frontier Foundation, February 8, 1996; Friedrich von Hayek, “The Use of Knowledge in Society,” American Economic Review 35 (4) (1945): 519–530; Friedrich von Hayek, “Competition as a Discovery Procedure,” The Quarterly Journal of Austrian Economics 5 (3) (2002): 9–23; and Evgeny Morozov, “Digital Socialism? The Socialist Calculation Debate in the Age of Big Data,” New Left Review 116/117 (2019).
26 Wendy Brown, Undoing the Demos: Neoliberalism’s Stealth Revolution (Princeton, N.J.: Princeton University Press, 2015), 157.
27 Lina M. Khan, “Amazon’s Antitrust Paradox,” Yale Law Journal 126 (3) (2016–2017): 710–805.
28 Francesca Tripodi, Searching for Alternative Facts: Analyzing Scriptural Inference in Conservative News Practices (New York: Data & Society, 2018).
29 C. Thi Nguyen, “Echo Chambers and Epistemic Bubbles,” Episteme 17 (2) (2020): 141–161.
30 Hannah Arendt, “Truth and Politics,” in The Portable Hannah Arendt, ed. Peter Baehr (London: Penguin Classics, 2000), 568.
31 Shoshana Zuboff, The Age of Surveillance Capitalism: The Fight for a Human Future at the New Frontier of Power (New York: Public Affairs, 2019); and Tim Hwang, Subprime Attention Crisis (New York: FSG Originals, 2020).
32 William Davies, “Elite Power Under Advanced Neoliberalism,” Theory, Culture and Society 34 (5–6) (2017): 227–250; and Jenna Burrell and Marion Fourcade, “The Society of Algorithms,” Annual Review of Sociology 47 (2021): 213–237.
33 Nitsan Chorev, “The Virus and the Vessel, or: How We Learned to Stop Worrying and Love Surveillance,” Socio-Economic Review 19 (4) (2021): 1497–1513.



J, not that one 03.01.23 at 7:47 pm

There seems to be an inherent — and unbridgeable — conflict between James Scott’s idea that technical categories are oppressive because they don’t arise from the lived worlds of the people they govern, and the opposition to algorithms (taken broadly) for being corrosive of those who’ve spent years learning those technical categories. The very idea of people who seem to agree with Scott on many things, but take for granted that technical categories are enlightened, natural, and unchallengeable, is more difficult for me to grasp than the idea that people might believe (as Scott says) the existence of the kilogram and the dollar are inherently colonialist and oppressive. But I don’t know how to do justice to both groups.

There seems to be an additional contradiction between an algorithm as an amasser of observed data (Netflix) and an algorithm as a way of managing conflicts of opinions. The true critic of consumer capitalism may lament the explosion of choices for television viewers, but I would be reluctant to assimilate that to conspiracy theories about cabals in pizzerias.

The idea that there’s a wisdom of crowds runs deep in US culture in particular (and indeed the idea that the many are generally more correct than the few – the few being identified for being the ones who object to the way things are – is hardly solely American). Algorithms do give us an interesting handle on the set of issues raised by that.


Tim Worstall 03.01.23 at 8:08 pm


“Yet high modernism was not just about the state. Markets, too, were standardized, as concrete goods like grain, lumber, and meat were converted into abstract qualities to be traded at scale.”

Is that actually a standard assumption? Things like those are called commodities, the point being that they can be viewed as very close substitutes (at worst, if not exactly the same) which is what the standardisation is. All the capitalists desperately try to distinguish their products so that they are not price takers – as commodity producers have to be – through market segmentation, branding and so on. The actual capitalist battle is to distinguish one’s production from being a commodity.

As to the AIs and so on, still wandering away from the above but perhaps closer to the point still. I’ve always been amused by those who complain that AIs are racist because they are trained upon the racist materials that so infest society. To which the answer is yes, that’s the point. If society is racist (a fairly common claim, no?) then an AI which is not racist is of no use in decision making in the current society.

If the AI is to be non-racist in a racist society (and please do pick your own -ist or -ism, misog, misand, misanthro) then it’s useless as it’s not making decisions relevant to the extant society. We’re back with the Marxists, awaiting New Soviet Man to turn up to make the system work, aren’t we?


Henry Farrell 03.01.23 at 9:42 pm

Is that actually a standard assumption? Things like those are called commodities, the point being that they can be viewed as very close substitutes (at worst, if not exactly the same) which is what the standardisation is. All the capitalists desperately try to distinguish their products so that they are not price takers – as commodity producers have to be – through market segmentation, branding and so on. The actual capitalist battle is to distinguish one’s production from being a commodity.

Yes – it’s completely standard (pun not intended). You simply can’t have markets at scale without commonly understood technical standards. We are specifically riffing on William Cronon’s Nature’s Metropolis here, but you will find all sorts of literature on standardization processes if you look. And it doesn’t just apply to commodities. Try to make any even moderately sophisticated technical product that isn’t compliant with the relevant standards and you are probably in a world of hurt (there are narrow exceptions for bespoke products or companies with a ton of a rather specific kind of market power, but they are exceptions, not the rule).

There seems to be an inherent — and unbridgeable — conflict between James Scott’s idea that technical categories are oppressive because they don’t arise from the lived worlds of the people they govern, and the opposition to algorithms (taken broadly) for being corrosive of those who’ve spent years learning those technical categories.

One interesting question is the extent to which ML circumvents some of the Hayekian objections to technical reason by providing a statistical approximation to Michael Polanyian tacit knowledge that can’t be formalized using ordinary means. We hint at this, and I’ve mentioned this to people who would know if there is a there there, and they think there is, but I’ve not seen anyone develop the point.


Lee A. Arnold 03.01.23 at 11:05 pm

It might help to characterize the different levels that algorithms and machine learning can respond to, or be trained upon: 1. All of the material on the internet: to generate new patterns like LLMs. 2. All of a customer’s choices: to generate similar choices, perhaps informed by the patterns of other like-minded customers. 3. All of the moves in a game: to win at chess or go. 4. All of a data set in nature, whether pixels or some other data: to find new patterns in physical nature or biology, e.g. find a new chemistry for photovoltaics or a new cell target to cure a disease. (There will be other levels.) Machines can use more data and memory on certain kinds of inputs, for pattern discovery (and pattern generation) that is beyond human ability. Pattern discovery & generation = naming = classification = legibility = hierarchy. The process is mysterious, whether human or machine. We can say that the standardization makes it easier for the central actors (bureaucrats, tech giants, scientists) to solve their various problems. For the decentral actors, it also reduces costs in time and effort (e.g. writers forming prose, customers buying books or booking an Uber, ill patients now healing faster). There are different levels. Seeing like a state is seeing like an individual.


J, not that one 03.02.23 at 12:26 am

@2 it isn’t clear to me that if you had a population that wasn’t equally split among some recognizable trait, a representative training set would produce the results we’d like. The problem is, as a lot of discussions of ChatGPT have noted, a lack of concepts. We look at a human and see “human.” The AI sees something more like “close enough to the average human to get me the digital equivalent of a doggie treat.”

@3 I picked up a book by Leonard Mlodinow (I think) that developed the idea that we learn simply as these neural net models do, by forming mental statistical models based on induction over cases, though I never finished it. It seems like something that would be appealing to certain kinds of people. When I think about it, though, I imagine someone like Roy Cohn at the beginning of Angels in America, surrounded by his bank of phone lines, all feeding into his mental model of the world (which is hardly accurate, or useful for any purposes other than self-serving ones). I don’t think such a theory avoids the problems AI has. But on the other hand, a model formed by someone who actually, say, has a craft, is different from a managerial approach. I feel like that’s the important split.


Tim Worstall 03.02.23 at 9:13 am


“Yes – it’s completely standard (pun not intended). You simply can’t have markets at scale without commonly understood technical standards. We are specifically riffing on William Cronon’s Nature’s Metropolis here, but you will find all sorts of literature on standardization processes if you look. And it doesn’t just apply to commodities. Try to make any even moderately sophisticated technical product that isn’t compliant with the relevant standards and you are probably in a world of hurt”

I guess it sorta depends upon what level of granularity you look at. As someone who spent a couple of decades in the physical metals markets, well, standards, umm, yes. Even at the level of the LME – a futures and options exchange as well as physical, real commodity stuff – the futures and options run on a standardised contract, sure. Here’s the spec for LME qualifying nickel, for example. But when you take physical material out of a warehouse then prices do vary. This producer (say, 99.9% Ni, although don’t hold me to these percentages, which are from memory) is LME plus 1%, this other producer (99.7% Ni) is LME minus 1%.

In one sense I’m fine with standards being the basis for commodification. As above, futures don’t really exist without there being such a standard. But the real world allows, at a detailed level, an awful lot of variation around said standards. Really rather a lot. The US still trades nickel in lbs, Europe in kgs and tonnes. When you get to the stuff I really know about (rare earths and other weird metals) then sure, we have “standards” like this is a 99% Dy2O3, but then the next question is always show me the real chemistry (which is defined down to 0.01% of each element usually) of this specific batch and also, who made it? Because certain manufacturers are known to produce material with certain amounts of this or that which make it better suited to – say – light bulbs or magnets. I’m also going to test that chemistry on arrival at my factory (no, I do not have a factory). We don’t believe other people’s analyses as anything other than a starting point.

Also possible to come at this from the other end. There isn’t a futures market in rare earths and almost certainly never will be. Because the variance from each manufacturer is so large as to stop us from having a standardised product given the tolerances we use for specific uses of the material.

Which is pretty circular really, we can’t commoditise, in the sense financial markets use the word commodity, rare earths because they’re not standardised. Pretty big market too, global, couple of hundred thousand tonnes a year, $ billions (more than $1 billion, less than $100 billion) and so on.

Standardisation, commodification, OK, useful as a concept, sure. But reality, in my experience at least, is messier.


Lee A. Arnold 03.02.23 at 10:39 am

What are the Hayekian objections to technical reason?
I would ask AI, but I don’t trust its answers!


Henry Farrell 03.02.23 at 1:37 pm

Also possible to come at this from the other end. There isn’t a futures market in rare earths and almost certainly never will be. Because the variance from each manufacturer is so large as to stop us from having a standardised product given the tolerances we use for specific uses of the material. … Standardisation, commodification, OK, useful as a concept, sure. But reality, in my experience at least, is messier.

If you think that standardization is just “OK, useful as a concept, sure,” and no more, you’re mistakenly generalizing from personal experience in what is clearly a fairly atypical market to the vast majority of large scale capitalism, which is all about standards, whether for bulk commodities (food, oil, plastics), or any product that you can plug into a wall (there is actually a tiny literature on socket standards and their failure to converge on a global norm). My personal favorite (via David Lazer) is the Codex Alimentarius’s technical standards for different stages of putrescence in fish.

By your own description, rare earths are non-commodifiable – they can’t be traded without inspection, and can’t be abstracted into futures etc. Nor are they reliant on technical product standards to make. That is unusual – there are other cases (e.g. many forms of art) but generalizing from them to arrive at big conclusions about modern capitalism is a mistake. If you’re interested in finding out more, Buethe and Mattli’s The New Global Rulers is one useful starting point (there are others).


Trader Joe 03.02.23 at 3:05 pm

Maybe I’ve lost the thread a bit, but isn’t this just a long way of saying “buyer beware”?

Whether you are a consumer of information or actual products there will be times when you need a high degree of precision and will want to thoroughly understand how information is delivered to you, how it has been categorized and the quality it contains.

Other times (probably most times) you just want a reasonable answer to a general question, and many of the details and concerns expressed here are either moot or ultimately not all that germane to the particular project at hand.

As a question to the author – I guess I’m left with “Now what?” If I understand what you are saying and agree with your conclusions – what am I meant to do now that I know this? Google and all the other algo runners aren’t going to change.


LFC 03.02.23 at 4:06 pm

I think the very last sentence of the piece needs more empirical support than the essay provides. Indeed, if the last sentence’s assertion that “our moral intuitions” have been transformed were entirely correct, then it would be hard to explain the resistance to the algorithmically mediated “wisdom of crowds,” both from “old” elites and, especially, from elements of the “crowds” themselves.

I haven’t read James Scott, but presumably resistance to high modernism was fueled not only by narrowly conceived self-interest but also by appeals to some collective good rooted partly in “moral intuitions.” So too, presumably, with resistance to high-tech modernism. By suggesting that “our moral intuitions” have already been “transformed,” the last sentence implies that resistance to high-tech modernism is likely going to fail. That implication clashes with the earlier references in the piece to actual resistance, whether “from above,” e.g. via EU regulations, or “from below.”


MisterMr 03.02.23 at 4:07 pm

“there are other cases (e.g. many forms of art) [of non-commodifiable products]”

I don’t think that this is the case. If we speak of art in general, like comics, novels, movies, etc., they are usually grouped into genres, which is a form of classification.
This is useful for the buyers because, for example, if I buy a detective novel or watch a superhero movie I sorta know what I’m getting into.
Commercial producers will try to use this classification either to know what people will like or to advertise something in such a way as to pull in as many customers as possible by, for example, framing a certain story as part of a certain genre (e.g. people might frame Star Trek as hard sci-fi or as space opera depending on what sells more in a certain period).

In the past this kind of classification existed, but it wasn’t as detailed: there were perhaps just drama, epic or comic, either because fewer novels were produced or because this sort of marketing was less developed.
When we speak of some form of weird, non classifiable art, though, it usually is produced to be sold in certain markets, that more or less require it to be non classifiable, so it is just a specific genre in itself.

I think the “non classifiable” thing mostly happens for stuff that is produced for personal consumption or as a hobby, and therefore sorta misses the demands of the markets.
In pre-industrial production, when every producer was an artisan, this standardisation didn’t happen, but then if you needed shoes you could only buy them from the local shoemaker; once you get shoe mass production and the fact that you can buy shoes produced on a different continent, some kind of classification will arise by necessity.


Lee A. Arnold 03.02.23 at 7:19 pm

It’s possible that the new politics will put an end to the expertise of market economists first and foremost, while the algorithms are eventually forced to follow the increasing salience of commonsense thinking in the face of rising contradictions, and thus overthrow market fundamentalism. After all, one thing AI could obviously be designed to do is run a fairly good non-market institution.

Consider that the criticism of centralized bureaucracy always takes the worst case as the norm. Yet we all intuit that a centralized rule can increase the individual freedoms of time and attention, because you don’t have to bother to continually make choices about things which ought to be provided for everyone, and you can trust that your other choices in life, in the market, do not impinge on those social goods for others. In other words you are freed from care, which is good because your time and attention, your cognition, is limited. The necessary condition is that this centralized bureaucracy, this institution, follows a very strict set of “rules for good rules”: it is tightly focused on a single problem, the specific rules are short and simple, there is transparency, there are easy procedures for redress and change, and so on. This can also be an algorithm.

A great failure of the left, so far, has been to suppose that you can keep piling on more rules and regulations to fix things. This won’t work, because individual cognition is limited. Non-market institutions must be focused on certain established problems with the shortest, simplest rules possible, so everyone can understand them. Then they will work.

Limited cognition is also ignored by market-favoring philosophers, who misconstrue a negative claim by Hayek, that a central authority cannot know all the facts of local, individual knowledge (an observation originally made by Adam Smith, by the way), and mistake it to mean the positive assertion that the market system makes up for all limited cognition (e.g., so that there is a “wisdom of the crowds”). Nothing could be further from the truth.

For one thing, the market does not transmit distributed knowledge, it ignores almost all of it: by “legibilizing” the resulting supply and demand, using a numerical system (money) that is 1. made scarce, while still 2. used for all transactions. Money is a language, a simple set of rules that reduces the information. Otherwise, your own limited cognition could never deal with everything.

For another thing, supply and demand does not transmit information about morals, labor conditions, criminality, etc., nor about the health of places, neighborhoods, ecosystems, climates. There is no realistic way to adjust prices to reflect these and of course there are long-standing arguments that these should be administered by non-market institutions, which ought to be well-designed bureaucracies.

Finally, the market does not make up for limited cognition because the ubiquity of market failures, defined as generated by the market itself, permanently prevents efficient allocation.

The right wing could argue (and likely it will) that algorithms in free-market information systems might fix some of these things. But their problem is they need to “curate,” or employ content moderation against offensive, inflammatory, or false content, or else they will be shunned as embarrassing and untrustworthy, as the current problem of Twitter shows.


Tim Worstall 03.02.23 at 7:27 pm

“My personal favorite (via David Lazer) is the Codex Alimentarius’s technical standards for different stages of putrescence in fish.”

Ah, yes, but then I differ on the CA. Some seem to think that it’s a set of standards which should be written into law (as the EU does). Which is how we get the bendy bananas thing. The EU took the CA definition of Class 1 bananas – free of excessive curvature – then made that the legal definition. £5,000 or 6 months pokey for breaching it. What the CA really is is a set of rules of thumb – classifications as with “this is Dy2O3” – specific to particular industries. Class 1 bananas exists so that Bert can say to Fred “Send me a box of class 1” and they’ve a good idea what both are talking about. OK, in one sense that is commodification. It’s a set of agreed standards etc.

On the other hand it’s a rule of thumb or even a slang. There’s still a lot of room for variance within “excessive” for example. But the big point about the CA is that it’s not a book of rules that government or any authority has written. Instead it’s a book of all the ways that specific industries and trades have defined things over the years. That certain anal retentives (the EU) then try to make it law doesn’t change that.

Think of France, with the Academie Francaise telling everyone how to say email in French. Or English finding out what everyone calls email and then putting it in the dictionary. Or, from personal experience, English editors like neologisms and variances in grammar – if they work. Americans have read Strunk and White and won’t have them at all (given my failures at grammar, possibly a good idea). The CA is that English way, even if all too many treat it the French or American way.

Not even Michael Caine knows this, but I wrote an international standard once. The Minor Metals Traders Association wanted a model contract for scandium trading. I spent 20 minutes writing down how I was willing to trade it. Whose warehouse, how tested, when did money change hands, etc. That is, two decades later, the model international scandium contract (which about 3 people a year might use, but still). These international standards rarely even rise to the level of a codification; they’re a set of observations about “Aha, that’s how it’s done!”

As such they’re a lot closer to that linguistic example than folk think. OK, so we all need to agree that teal is not taupe when discussing paint colours. That’s just defining a word though, not a technical standard.

Sure, GSM is different, TCP/IP is too. But an awful lot of what people think are “standards” is just observation about linguistic definitions. Well, they are among sensible people, perhaps not the French.


Lee A. Arnold 03.02.23 at 11:20 pm

If you don’t depend upon standardization of a market product, then you use a standardized language (e.g. English and numbers) to read the manufacturer’s advertisement or to communicate with the manufacturer about the chemical composition of the product. You still use standardization somewhere. Or call it regularization or legibilization or rectification or alignment.

This is not a trivial dodge. If you call all of those things over there “giraffes,” you have momentarily stripped away a lot of other information, to form a single-word classification that will fit into a conceptual hierarchy (e.g. regarding the kinds of “animals” that are found on a “savannah”). Language standardizes for easier communication, in the same way that the market uses prices to regularize and interrelate the scarcity of all products for easier comparison and decision-making, and in the same way that the state designs streets for easier transportation or designs administrative areas for easier administration.

I think that Scott’s characterization of “high modernism” is an historical, political-economic case of a far more general process in organisms and organizations at any level. In a standardized recipe for growth, there is necessarily other information lost, or temporarily put aside, in order to reduce the cost, or to complete the communication, transaction or connection among the component subsystems.

So far as I know, the first intimation of the need to standardize some products is in Babbage’s Economy of Machinery and Manufactures (1832), where he observes that some goods can be adulterated so in addition to supply and demand there is the cost (to the purchaser) of verification. This is conceptually akin to the concept of a transaction cost, much later discussed by the institutional economists who were concerned with the various strategies (including non-market strategies) to reduce them. I think Coase mentions somewhere that he read Babbage.


Peter T 03.04.23 at 11:08 am

John Ellis in Language Thought and Logic usefully observed that language is first a means of classification and only then a means of communication. The issue with standardisation is that, while it enables ease of control, it removes us from the ever-changing actuality. Algorithms would seem to amplify this distance.

Note the many dead social media platforms.


Tim Worstall 03.05.23 at 8:39 am

“So far as I know, the first intimation of the need to standardize some products is in Babbage’s Economy of Machinery and Manufactures (1832), where he observes that some goods can be adulterated so in addition to supply and demand there is the cost (to the purchaser) of verification. This is conceptually akin to the concept of a transaction cost,”

Indeed, and thus the rise of the brand. Heinz soups worked not so much because of their taste as because they gained a reputation for poisoning fewer consumers in the era of early canning techniques.


John Q 03.06.23 at 5:57 am

In Australia we are having a massive inquiry (called a Royal Commission) into something called “robodebt”. As it started out, a computer would flag Social Security benefit recipients whose income (measured annually) seemed too high for the benefits they were receiving (typically every two weeks). Then a bureaucrat would examine the case more closely, and, in the absence of any obvious explanation, pursue it further. At some point, the recipient could be found to have received benefits for which they were ineligible. A debt would then be raised against them.

“Robodebt” simply cut out all the intervening steps. As soon as you were flagged by the computer, you got a call from debt collectors demanding repayment or, alternatively, proof that you didn’t owe the money. People committed suicide, suffered severe deprivation and so on. Eventually, the program was found to be illegal. Most of the money was repaid and the public servants and government ministers responsible are now being grilled (most entertainingly, even though they demanded evidence going back many years from those they pursued, the most common response is “this was five years ago. I can’t recall”).

In crucial respects, AI classification is just like this. The basic models (discriminant analysis and multinomial logit) were implemented for computers 50 years ago. Add in stepwise regression techniques to search a big space of potential explanatory variables and you have all you need for a high-speed classification system. Remove the person checking the output, who might see evidence of data dredging, spurious regression and so forth, and you have AI. Then you can rebrand the old-fashioned “computer model” (no one believes those any more) as an “AI algorithm” (a misuse of language, but who cares) and you have high-tech modernism.

Comments on this entry are closed.