AI tools are like taking a helicopter to drop you off at the site. You miss all the benefits of the journey itself. You just get right to the destination, which actually was only just a part of the value of solving these problems.—Terence Tao, interviewed in The Atlantic, February 24, 2026 [HT: Ryan Muldoon]
About thirty years ago, a Stanford-educated philosopher, Paul Humphreys (1950-2022), realized that, when connectionist models started to be developed within AI, a set of questions and debates about Monte Carlo simulations might be salient.* In particular, the fact that connectionist networks might be very complex, inscrutable matrices need not be an objection to their epistemic usefulness. This inscrutability of AI is known as ‘the Black Box problem’ in recent scholarship. After all, some Monte Carlo simulations were in practice also inscrutable, but this didn’t prevent physicists from using them. (There is a nice, accessible discussion by Eric Winsberg of the significance of Humphreys’ work in the philosophy of simulation here.)
In the course of his many papers on related topics, Humphreys coined a term, ‘epistemic opacity’ (also known as Humphreys opacity), to characterize one of the key aspects of such inscrutability. (See also here; or here.) Such epistemic opacity (and now I paraphrase Humphreys) involves the inability of the decision-maker or responsible agent to surveil, in a timely manner, the steps of a process from a known input to a known and desirable (or truthful, useful, beautiful, etc.) output. I put it like that to make clear that this ignorance is pragmatic in character and could be modelled in terms of trade-offs between the quality or benefit of the output and the cost of surveillance. (Of course, it’s possible the opacity is not pragmatic, but ontological in character.) In addition, I use the ambiguous language of ‘surveillance’ because the process can be computational, social, or natural in character.
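To make the pragmatic reading concrete, here is a minimal, purely illustrative sketch of the kind of trade-off I have in mind; it is my own toy model, not Humphreys’ formulation, and all names and numbers in it are invented for illustration. The idea is simply that a process counts as epistemically opaque for an agent when auditing its steps would cost more than the agent’s surveillance budget (or more than the audit is worth), even though the input and output are known.

```python
from dataclasses import dataclass


@dataclass
class Process:
    """A toy process with a known input, a known output, and hidden intermediate steps."""
    n_steps: int             # how many intermediate steps would have to be checked
    cost_per_step: float     # cost (e.g. hours of expert time) to verify one step
    benefit_of_audit: float  # value of knowing the output was produced correctly


def is_epistemically_opaque(p: Process, budget: float) -> bool:
    """Pragmatic opacity: surveying every step is not worth it, or not feasible,
    within the agent's budget, so the process stays a black box for that agent."""
    total_cost = p.n_steps * p.cost_per_step
    return total_cost > budget or total_cost > p.benefit_of_audit


# A short simulation is cheap to audit; a very large connectionist model is not,
# for the same agent with the same budget.
small_sim = Process(n_steps=100, cost_per_step=0.01, benefit_of_audit=10.0)
big_net = Process(n_steps=10**9, cost_per_step=0.01, benefit_of_audit=10.0)

print(is_epistemically_opaque(small_sim, budget=5.0))  # False: auditable in practice
print(is_epistemically_opaque(big_net, budget=5.0))    # True: opaque in practice
```

On this sketch, opacity is relative to an agent and its resources, which is why the same process can be transparent to one inquirer and opaque to another.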
I make no claim that epistemic opacity is unique to AI. Human minds are often opaque to each other in this very sense. And in other cases such opacity is characteristic of our self-knowledge. Even if one wishes to keep one’s distance from Freud and his school, it is uncontroversial that there are lots of brain processes that are inaccessible to us even though we can track their inputs and outputs.
In fact, epistemic opacity in Humphreys’ sense has long been recognized in the study of natural, psychological, and social processes. For example, for a very long time ‘sympathy’ was the term used to describe (a/the) cosmic and psychological mechanism(s) in which the process itself was invisible, even though its start and end points were visible. My interest below is not in this particular example, but I will suggest that the history of social awareness of the significance of epistemically opaque mechanisms may illuminate our discussion of the unfolding impact of AI.