In the Next Great Transformation AI will not eliminate genuine expertise; rather it will make it more valuable

by Eric Schliesser on March 2, 2026

About thirty years ago, a Stanford-educated philosopher, Paul Humphreys (1950–2022), realized that as connectionist models began to be developed within AI, a set of questions and debates about Monte Carlo simulations might become salient.* In particular, the fact that connectionist networks might be very complex, inscrutable matrices need not be an objection to their epistemic usefulness. This inscrutability of AI is known as ‘the Black Box problem’ in recent scholarship. After all, some Monte Carlo simulations were in practice also inscrutable, but this didn’t prevent physicists from using them. (There is a nice, accessible discussion by Eric Winsberg of the significance of Humphreys’ work in the philosophy of simulation here.)

In the course of his many papers on related topics, Humphreys coined a term, ‘epistemic opacity’ (or Humphreys opacity), that characterizes one of the key aspects of such inscrutability. (See also here; or here.) Such epistemic opacity — and now I paraphrase Humphreys — involves the inability of the decision-maker or responsible agent to surveil, in a timely manner, the steps of a process from a known input to a known and desirable (or truthful, useful, beautiful, etc.) output. I put it like that to make clear that this ignorance is pragmatic in character and could be modelled in terms of trade-offs between the quality or benefit of the output and the cost of surveillance. (Of course, it’s possible the opacity is not pragmatic, but ontological in character.) In addition, I use the ambiguous language of ‘surveillance’ because the process can be computational, social, or natural in character.
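Humphreys’ suggestion that such ignorance could be modelled as a trade-off between the benefit of the output and the cost of surveillance can be given a toy decision-theoretic form. Here is a minimal sketch; the functional forms and constants are my own illustrative assumptions, not anything found in Humphreys:

```python
# Toy model: how much of an opaque process is it worth surveilling?
# Functional forms and constants are illustrative assumptions only.

def expected_benefit(verified_fraction: float, output_value: float = 100.0) -> float:
    """Value we can safely rely on from an output we have partially verified."""
    return output_value * verified_fraction

def surveillance_cost(verified_fraction: float, unit_cost: float = 200.0) -> float:
    """Cost of surveying the process; assumed to grow steeply with depth of checking."""
    return unit_cost * verified_fraction ** 2

def net_value(verified_fraction: float) -> float:
    return expected_benefit(verified_fraction) - surveillance_cost(verified_fraction)

# Sweep verification depth in 1% steps. A process is pragmatically opaque past
# the point where further surveillance costs more than the added assurance is worth.
best = max((f / 100 for f in range(101)), key=net_value)
print(f"optimal verified fraction: {best:.2f}, net value: {net_value(best):.1f}")
# → optimal verified fraction: 0.25, net value: 12.5
```

On this toy picture, full surveillance (a verified fraction of 1.0) is strictly worse than partial checking, which is one way of cashing out the pragmatic, rather than ontological, reading of opacity.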

I make no claim that epistemic opacity is unique to AI. Often human minds are opaque to each other in this very sense. And in other cases such opacity is characteristic of our self-knowledge. Even if one wishes to keep one’s distance from Freud and his school, it is uncontroversial that there are lots of brain processes that are inaccessible to ourselves even though we can track the input and output to them.

In fact, epistemic opacity in Humphreys’ sense has long been recognized in the study of natural, psychological, and social processes. For example, for a very long time ‘sympathy’ (συμπάθεια) was the term used to describe (a/the) cosmic and psychological mechanism(s) in which the process was invisible, even though the start and end of the process were visible. My interest below is not in this particular example, but I will suggest that the history of social awareness of the significance of epistemically opaque mechanisms may illuminate our discussion of the unfolding impact of AI.

In the pull-quote at the top of this post, Terence Tao (a Fields medalist in mathematics) describes the very specific species of ignorance that I have been calling ‘Humphreys opacity’ (or, if you prefer, ‘epistemic opacity’). What’s neat about this particular instance is that, at the moment, Tao’s state of opacity about the process (the ‘journey’) that led to the AI proof mirrors the opacity of the machine that ‘helicoptered’ there. At the moment there is no way of recovering the machine’s journey to its answer. (Presumably with time and effort some kind of reverse engineering might be possible, even if it involves an intentional stance.)

Tao’s view is that in mathematics the process of discovery is very valuable, even though that process may be slow and involve a lot of possible dead-ends. We may say that during the older process of discovery, one didn’t just learn the truth, but also quite a bit about the tools of the trade that can be used to discover the truth (and how different mathematical objects and fields relate to each other). Now that AIs are starting to reach truth quickly, or, to put it more precisely, without access to the underlying mathematical landscape, we encounter a trade-off between truth and (let’s call it) informativeness.

In the interview, as reported, Tao never uses the word ‘truth.’ Rather, he phrases his analysis in terms of the ‘answer’ the machines provide. It’s worth conveying how he puts it:

One very basic thing that would help the math community: When an AI gives you an answer to a question, usually it does not give you any good indication of how confident it is in this answer, or it will always say, I’m completely certain that this is true. Humans do this. Whether they are confident in something or whether they are not is very important information, and it’s okay to tentatively propose something which you’re not sure about, but it’s important to flag that you’re uncertain about it. But AI tools do not rate their own confidence accurately. And this lowers their usefulness. We would appreciate more honest AIs.

In reflecting on Tao’s comments, it’s worth distinguishing between two issues: first — and this is the topic highlighted by Tao — the AI machine excelling at mathematics does not report its ‘confidence’ in its own answer accurately. Second, even if it reported such confidence accurately, it could still be wrong about the answer it provides (and, perhaps, also misreport its own confidence). This is especially so with AI that is embedded in LLMs (Large Language Models). After all, there is no evidence that such AIs have eliminated hallucinations altogether, or that this is even possible (at low enough cost and time).
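Tao’s complaint that AI tools do not rate their own confidence accurately has a standard quantitative rendering: a system is well calibrated when, among the answers it asserts with confidence p, roughly a fraction p are correct. Here is a minimal sketch of one common measure, expected calibration error; the (confidence, correct) data are invented for illustration, not measurements of any actual model:

```python
# Expected calibration error (ECE): the weighted average gap between stated
# confidence and observed accuracy. Input data below are invented for illustration.

def expected_calibration_error(results, n_bins=5):
    """results: list of (stated_confidence, was_correct) pairs."""
    bins = [[] for _ in range(n_bins)]
    for conf, correct in results:
        idx = min(int(conf * n_bins), n_bins - 1)  # clamp conf == 1.0 into last bin
        bins[idx].append((conf, correct))
    total = len(results)
    ece = 0.0
    for b in bins:
        if not b:
            continue
        mean_conf = sum(c for c, _ in b) / len(b)
        accuracy = sum(ok for _, ok in b) / len(b)
        ece += (len(b) / total) * abs(mean_conf - accuracy)
    return ece

# An answerer that is "always completely certain," as Tao describes,
# yet right only 70% of the time:
overconfident = [(1.0, True)] * 7 + [(1.0, False)] * 3
print(round(expected_calibration_error(overconfident), 3))  # → 0.3
```

A perfectly calibrated answerer scores 0 on this measure; the invented always-certain answerer scores 0.3, which is one way of making precise what Tao means by tools that are not ‘honest’ about their confidence.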

To be sure, the current generation of commercially available flagship LLMs (GPT-5, Claude Opus 4.5, etc.) are genuinely impressive. (And presumably the ChatGPT that solved these outstanding math puzzles, on which Tao comments, is even further ahead of the curve.) During the last month, they have finally reached the level of interesting research assistance in my own field. But don’t let anyone claim they have stopped hallucinating. (If you dislike that phrase, I am happy to call it ‘ungrounded content.’) Crucially, for a lot of purposes this makes LLMs inefficient tools, because you often can’t just eyeball the errors; you really need to pay attention and double-check their output. Keep this in mind, too.

There are super-interesting issues lurking here about what it would mean to have AIs internally model or represent their own confidence. (Would they simulate human confidence reports, as if they were Terence Tao or some much lesser mathematician, or would they develop their own approach; would they have debates about Bayes? etc.) But that’s not my present main interest.

As readers will undoubtedly be aware, there is a persistent and increasingly vocal line of thought that AI will eliminate all knowledge work. And it is no doubt the case that the fate of junior and mid-level computer coders at the moment foreshadows a more general disruptiveness. Let’s stipulate that AI will indeed threaten lots of white-collar work (I call this the ‘next great transformation’).1 And that even in the sciences it will transform discovery and how disciplines interact with each other, as Tao suggests. (Go read the interview.) So, philosophy of science will have a busy time ahead.

My main interest is this: Tao’s comments alert us to the fact that there is a class of problems where answers supplied without surveyable information on the means or steps for finding them are themselves fragile. At the research frontier, somebody very skilled needs to check the ‘answer.’ This is why even in mathematics there is a social component to the process of justification. And, as AI eliminates all the low-hanging fruit, the difficulty and costs of checking themselves go up as understanding of the landscape becomes very thin. Even if we could build machines to check the AI (and so on), we would still need diagnostic tools that must be maintained and repaired, and so on. Since these machines will suffer from Humphreys opacity, this challenge becomes endemic.

As the next great transformation advances, before long AI may well drive discovery and justification and, thereby, as Tao suggests, transform different sciences. We may well have to get used to playing second fiddle to AI practices. However, Tao’s remarks also suggest that genuine expertise will be at a premium as we transition to a world suffused with modern AI. This is because modern AI systematically introduces Humphreys opacity and hallucinates alongside the cutting-edge answers it provides. As the output or ‘answers’ of AI scale up, we will need the skilled judgment of humans as part of quality control. Of course, to what degree genuine expertise will be able to capture that value in our oligarchic data economy is a different question.

That’s the main point I wanted to make. But there is a second point lurking here. The institutional infrastructure of a universe full of Humphreys opacity is itself quite dense. In his (1755) Third Discourse, Rousseau notes that epistemic opacity is introduced as governments switch in scale from estate management to management of whole peoples and nations. That is, such a sovereign cannot survey the population in real time. In fact, the sovereign must introduce regular government and begin to rely on intermediaries (a bureaucracy, viceroys, tax-farmers, delegated parliaments, etc.) in order to overcome the effects of this first-order epistemic opacity. Unfortunately, the very mechanism by which epistemic opacity is tackled often introduces a different, second-order (or ‘derivative’) epistemic opacity.

This phenomenon can be illustrated by the following thought: once one diagnoses how the very social mechanism (say a bureaucracy) one introduces to tackle first-order epistemic opacity generates higher-order epistemic opacities, one may be tempted by two strategies. First, one may introduce monitoring mechanisms that surveil the bureaucracy one has instituted. Second, one may invest in mechanisms (a census, real-time street cameras, infrared tags, a machinery of record that tracks births, deeds, etc.) that make the population that is being governed more legible. In both cases, these mechanisms will themselves generate further kinds of third-order (‘second derivative,’ etc.) opacities, and so on. So, for example, inspired by Rousseau and the scandals at the East India Company, Adam Smith begins to diagnose principal-agent problems. Eighteenth-century European thinkers look to China to learn how to develop an effective bureaucracy and the accompanying institutions that allow the management and governance of dispersed and heterogeneous populations.

From the middle of the eighteenth century to the present, we can discern a fairly large growth in institutional structures to manage epistemic opacity. Governments have organized not just records to make populations legible (as Scott and Foucault might put it) and to provide public goods that can coordinate commerce, but have also developed and maintained mints, measures, weights, and all kinds of data/information/statistics on the natural and social environments, not least the economy and the environment. This has, of course, generated opportunities for profit as well as strategic agents who may wish to undermine trust in the government’s machinery of record and measures.

This is not just a role for government. Companies both encounter and generate epistemic opacity in their own products. Sometimes the quality control required to maintain the standards and homogeneity of a mass-produced product may be as costly as the manufacturing and packaging of the underlying product. And companies may well come into conflict with each other as they do so.

This natural growth in government activity in managing epistemic opacity and providing a legal framework for managing these conflicts is, by the way, the enduring lesson of Walter Lippmann’s (1937) The Good Society. Accurate information that is appropriately public — perhaps even, to adapt a phrase from Tom Pink, witnessed as truth (recall) — requires an enormous machinery of record and a legal infrastructure that helps adjudicate conflicts over the identity of, and property rights in, that information and the consequences of its use. Even some lawyers may survive the next great transformation.


*As it happens, during the last two weeks, I had the opportunity to have long talks with Katie Creel (Northeastern) and Ryan Muldoon (Buffalo) during my visits to their programs. Their views have shaped my own here. In addition, I have benefitted from Nick Cowen’s and Neil Levy’s comments on an earlier post at Digressions&Impressions (here).


J-D 03.02.26 at 1:51 am

As the next great transformation advances, before long AI may well drive discovery and justification and, thereby, as Tao suggests transform different sciences. We may well have to get used to playing second fiddle to AI practices.

Something like this may happen.

Or, then again, it may not happen.

It seems an estimate of the odds might be useful, if anybody can produce one.
