Martians and the Gruesome

by Brian on January 23, 2007

One of my quirkier philosophical views is that the most pressing question in metaphysics, and perhaps all of philosophy, is how to distinguish between disjunctive and non-disjunctive predicates in the special sciences. This might look like a relatively technical problem of no interest to anyone. But I suspect that the question is important to all sorts of issues, as well as being one of those unhappy problems to which no one seems to have even the beginning of a solution. One of the issues that it’s important to was raised by Brad DeLong yesterday. He was wondering why John Campbell might accept the following two claims.

* There is an important and unbridgeable gulf between our notions of physical causation and our notions of psychological causation.
* Martian physicists–intelligences vast, cool, and unsympathetic with no notions of human psychology or psychological causation–could not understand why, could not put their finger on physical variables and factors explaining why, the fifty or so of us assemble in the Seaborg Room Monday at lunch time during the spring semester.

I don’t know why Campbell accepts these claims. And I certainly don’t want to accept them. But I do know of one good reason to accept them, one that worries me no end some days. The short version involves the conjunction of the following two claims.

* Understanding a phenomenon involves being able to explain it in relatively broad, but non-disjunctive, terms.
* Just what terms are non-disjunctive might not be knowable to someone who only knows what the Martian physicists know, namely the microphysics of the universe.

The long version is below the fold.

The story starts with some relatively banal observations about explanation and understanding. [1] Imagine that I throw a stone at a window, it strikes the window with momentum _m_, and the window breaks. Now one way to explain the window’s breaking is to say that it was struck by a stone with momentum _m_. But while that might be in some sense a complete explanation of the breaking, there are other explanations that promote greater understanding. This explanation doesn’t make explicit the salient fact that the window’s shattering was not particularly dependent on the precise momentum of the projectile, or that the projectile was a stone. Someone who explains the breaking in terms of it being struck with a projectile of momentum between _m1_ and _m2_, where these are the rough limits for what is (a) sufficient to shatter the window and (b) plausible given that I threw the projectile, seems to have a better explanation of the shattering, and a greater understanding of why the window shattered.

It’s tempting to conclude at this stage that if one explanation explains an event in terms of some particular being F, and another explains it in terms of that particular being G, where being F entails being G but not vice versa, then the second explanation provides a deeper understanding of what happens. In short, broader explanations (explanations that are made true in more ways) are better. But that principle seems to have clear counterexamples. Imagine a third ‘explanation’ that says the window broke because it was either struck with a projectile of momentum between _m1_ and _m2_, or struck by a window-breaking spell. This explanation is even broader than the previous two, since the truth of its explanans is entailed by, but does not entail, the truth of the explanans in the previous cases. But it is, intuitively, a worse explanation than what came before. It certainly doesn’t provide a deeper understanding. Indeed, someone who offers this ‘explanation’ has very little understanding of why the window broke.

Still, it seems a qualified principle might work. Broader explanations are better as long as the terms they use are not *disjunctive*. The idea that some terms are disjunctive and others aren’t goes back at least to Goodman’s _Fact, Fiction and Forecast_. Goodman famously defined up a new term _grue_. Something is grue, I’ll say, iff it is green and observed or blue and unobserved. As Goodman noted, observing lots of emeralds and seeing they are all grue provides us with no reason to think the next emerald we see will be grue. This kind of simple induction doesn’t work when dealing with terms like ‘grue’. Various authors, most importantly David Lewis, have argued that the distinction Goodman pointed towards, between disjunctive terms like ‘grue’ and non-disjunctive terms like ‘green’, has many implications across philosophy. Following tradition, I’ll call the ‘grue’-like terms gruesome, and ‘green’-like terms natural. (And I’ll often suppress the fact that the difference between gruesomeness and naturalness is a matter of degree, as there is a spectrum of cases in the middle.)
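Goodman’s definition is mechanical enough to put in toy code. Here’s a sketch (my own, purely for concreteness; nothing here is from Goodman) of why induction over ‘grue’ misfires:

```python
def is_grue(colour: str, observed: bool) -> bool:
    """Grue, per the definition above: green-and-observed,
    or blue-and-unobserved."""
    return (colour == "green" and observed) or (colour == "blue" and not observed)

# Every emerald we have actually examined is green, and hence also grue.
observed_emeralds = ["green"] * 100
assert all(is_grue(c, observed=True) for c in observed_emeralds)

# But projecting 'grue' onto the next, as-yet-unobserved emerald predicts
# it is blue: a green unobserved emerald is not grue.
print(is_grue("green", observed=False))  # False
print(is_grue("blue", observed=False))   # True
```

So the very same evidence supports both ‘all emeralds are green’ and ‘all emeralds are grue’, yet the two hypotheses disagree about every unobserved case.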

So what we want for understanding, I think, are explanations that are as broad as possible but not involving gruesome terms. Or perhaps explanations that strike the best balance between breadth and gruesomeness. I’ve only ‘argued’ for that by a single case, but the principle seems pretty plausible to me. At least it’s plausible enough to be the first half of my reason for believing a position like Campbell’s.

Now the relatively difficult part. It’s easy enough when talking about toy physical explanations to say what are natural and gruesome terms. It is, to say the least, somewhat harder to do the same thing in special sciences. Is the term ‘seasonally adjusted fall in sales’ natural or gruesome? By the standards some philosophers use, it is pretty gruesome because it is explicitly defined relative to calendar dates. (After Goodman’s work it wasn’t too hard to find philosophers saying this was a tell-tale sign of the gruesome.) But there are very good explanations that make use of terms like this. For a simpler case, is ‘weekday’ a natural or gruesome term? Again, it looks pretty gruesome from the perspective of micro-physics. But it is hard to explain/predict/understand traffic patterns without talking about weekdays and weekends. Progress here is not going to be easy.

David Lewis proposed that how gruesome a term is could be a function of the shortest definition of that term in terms of fundamental physics. But that looks to rule most special science explanations out of court pre-emptively. We have no idea how to define, say, a rise in demand for widgets in terms of micro-physics, but I bet that any such definition will be immensely complicated. So complicated that if we took Lewis’s idea seriously, we’d say that explanations of anything in terms of rising demand for widgets would be impossibly gruesome. But such explanations can be perfectly good. So Lewis’s idea fails.
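To see why the proposal at least has the right shape, one can treat definition length as something measurable. The sketch below is my own crude illustration, not Lewis’s: the ‘definitions’ are invented, and compressed size is standing in for shortest-definition length. On any such measure ‘grue’ comes out costlier than ‘green’:

```python
import zlib

def definition_cost(definition: str) -> int:
    """A crude proxy for 'shortest definition length': the compressed
    size, in bytes, of a spelled-out definition."""
    return len(zlib.compress(definition.encode()))

# Invented stand-ins for definitions in a physical vocabulary.
green = "reflects light at wavelengths around 500-565 nm"
grue = ("reflects light at wavelengths around 500-565 nm and is observed, "
        "or reflects light at wavelengths around 450-495 nm and is not observed")

print(definition_cost(green) < definition_cost(grue))  # True: 'grue' costs more
```

The trouble the paragraph above points to is that ‘demand for widgets’, spelled out micro-physically, would come out even costlier than ‘grue’ on any measure of this kind.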

The problem is that no one seems to have a better idea. One constraint on an answer is that we can’t use the contents of mental states in the answer, because there is good reason to think that the naturalness of various terms is part of what makes our mental states have the content that they have. (Again, see Lewis for the arguments for this.) The best answer I know of that doesn’t appeal to beliefs, intentions etc of humans is Lewis’s answer in terms of definition length. And that’s a non-starter I think.

So no one really knows how to answer, or even to make much of a start on answering, the following two questions.

* What makes a term natural rather than gruesome?
* How could one know that various terms, from biology, psychology, anthropology, economics, etc, are natural rather than gruesome? (In particular, could one infer the naturalness of various terms from a micro-physical description of the world?)

The two questions might be related. If there is an answer to the first question that relates the naturalness of special science terms to facts about micro-physics in some relatively straightforward (or at least Turing computable) way, then one way to know what’s natural and what’s gruesome would be to find out all the micro-physical facts, and then do the relevant computation. (That is, the answer to the parenthetical question is yes.) But we don’t know whether such a metaphysical story is true.

What we do know is that we didn’t find out that terms such as ‘cell’, ‘belief’, ‘society’ or ‘demand’ are (relatively) natural by their relation to micro-physics. One somewhat troubling (to reductionists like me) prospect is that the only ways to find out the naturalness of these predicates is something like the way we found out that they are natural. (Not that we know what that way is either.) And if that’s right, it’s possible that the Martian physicists can’t tell the natural from the gruesome terms in biology, psychology, anthropology, economics, etc.

If that’s all correct, then I think there’s a pretty good sense in which the Martian physicists won’t understand human behaviour. (Or, for that matter, the behaviour of all sorts of complex systems. The particular kind of way that human brains are complex might not be doing very much work in this argument.) They might know all sorts of counterfactuals saying that if certain objects were different in certain ways, then certain events would not have come about. But knowledge of such counterfactuals only constitutes understanding if the ways in question are relatively natural. Remember, someone who takes the best explanation of the window’s breaking to be that it was either struck by a projectile with momentum _m_ or a spell doesn’t really understand why the window broke. And that’s the case even though had the window not been struck by a projectile with momentum _m_ or a spell, it wouldn’t have broken.

I think there’s a case to be made that, for all we now know, the Martian physicists will be in an analogous position with respect to understanding/explaining our behaviour. And that’s why I think, or at least worry, that Campbell’s position might be true. And it’s one of several reasons I worry about us lacking any story about what makes special science terms natural rather than gruesome.

fn1. Banal these may be, but I wouldn’t have learned of them if not for discussions with Michael Strevens. Many of the important points relevant to this post are discussed in this paper (PDF), especially in section 5. Michael has a book on explanation coming out some time in the near future, which I very highly recommend.

{ 1 trackback }

Depth First Search » Blog Archive » Martians and the Gruesome
01.24.07 at 11:11 am



Jay 01.23.07 at 4:52 pm

Einstein remarked that explanations should be as simple as possible, and no simpler. If relatively gruesome explanations can explain a phenomenon, and no less gruesome explanation will suffice, then the gruesome explanation should be embraced.

If alien intelligences evolved, they will probably have the ability to understand complex systems with “intent” at a relatively broad level, as well as at the micro level. After all, such an ability is likely to be a prerequisite of large-scale cooperation. Large scale cooperation is likely to be a prerequisite for developing scientific knowledge.


SamChevre 01.23.07 at 5:04 pm

OK–what is a “special science?”


"Q" the Enchanter 01.23.07 at 5:06 pm

“intelligences vast, cool, and unsympathetic with no notions of human psychology or psychological causation”

I wonder if this is even a coherent specification. An intelligence able to develop a luxuriously complete physics as contemplated would itself have to be endowed with some psychology. And so however alien human psychology would be to them, it couldn’t be completely alien. (An analogous point could be made with respect to any intelligence capable of “understanding” in any sense meaningfully related to our idea of understanding.)


robert the red 01.23.07 at 5:07 pm

Even within physics, does a Martian who groks the totality of the Cosmic All at the individual particle level understand the ideas of temperature and entropy? These are average quantities used to simplify descriptions of large assemblies of particles — what use does the Martian have for them? (Unless it is to communicate with the lower orders.)


Brian 01.23.07 at 5:18 pm

Samchevre: Sorry, I shouldn’t have been so jargony. I use “special science” to mean science of any particular special subject matter, so in practice everything except all-encompassing physics.

“Q”: I agree it’s a bit of a mystery how this talk of Martians is meant to be clarifying. One possibility is there are many different ways to grasp the physical truths, and you can do this without having states distinctive of _human_ psychology. Perhaps a better one is (as comes up a bit in Brad DeLong’s thread) that we just treat the Martian psychologist as a metaphor, and just think in the abstract about what’s knowable on the basis of what. Then Campbell’s claim is that the best explanations of human psychology aren’t inferable from a micro-physical description of the universe.

Robert: I think I agree with the basic point here. If something like Campbell’s position is right, it won’t be because there is something distinctive about the psychological; rather, it will be because drawing inferential connections between microscopic and macroscopic properties is a trickier business than we reductionists are given to assuming. Even if the Martian can understand averages, it isn’t obvious that it will be able to understand their importance merely on the basis of microphysical knowledge.


John 01.23.07 at 6:53 pm

Deep waters…

IMHO, “naturalness”, like Goodman’s projectibility, is a philosophical nonstarter: there is no logically prescribed language from which to judge the disjunctiveness of predicates. Another, mathematical, way of saying this is that predicate encodings are not invariant in any non-question-begging way (specifically, they are homeomorphic).

Rather, the existence of a discontinuity, in the mathematical sense, between the micro- and macro-levels presents a real problem for reductionists, because discontinuity implies uncomputability (trust me). However, I am skeptical that there are any uncomputable macro-predicates in the special sciences, though this belief is subject to an interesting paradox, as its determination is, itself, uncomputable.

Another interesting notion of emergence is information theoretic and bound together with questions of computational intractability and complexity. In contrast to the uncomputable case, which I take to be ontological, the information theoretic approaches are pragmatic.

Sorry if the above sounds obtuse, but I haven’t the time to elaborate at the moment.


Bedau, M., “Weak Emergence”. In James Tomberlin, ed., Philosophical Perspectives: Mind, Causation, and World, pp. 375-399. Blackwell Publishers, ISBN: 0631207937

Boschetti & Gray, “Emergence and Computability” Journal Paper, to be submitted to Emergence: Complexity and Organization.

Kelly, K. & Glymour, C., “Why You’ll Never Know whether Roger Penrose is a Computer”, Behavioral and Brain Sciences, 13, 4, Dec. 1990.


John Quiggin 01.23.07 at 7:34 pm

The way I would think about your three explanations is that the first provides a sufficient condition, the second provides necessary and sufficient conditions, and the third provides an additional sufficient condition that is bad because it can’t be fulfilled. So it’s obvious that the second is the best, but I can’t work out how to relate this to the concerns you list.


Brian Weatherson 01.23.07 at 7:45 pm

My talk of spells here might have been distracting. We could ask instead why the window no longer exists. One answer would be that it was either struck with a projectile or melted. I think that is a worse explanation than saying it was hit with a projectile, because it offers a disjunctive condition. What I’m interested in is why it’s true (assuming it is true) that some weakenings of the explanation (from _hit with a stone_ to _hit with a projectile_ for example) make explanations deeper, but some weakenings (from _hit_ to _hit or melted_ for example) make explanations worse. Whether the weakening involves things that don’t happen in worlds like this one isn’t central.


john c. halasz 01.23.07 at 7:45 pm

I fail to see why the “reductionist” thesis that the laws of micro-physics should supply the terms of all explanations should be regarded as necessary, plausible, or even possible. Sure, if there were no physical reality, i.e. matter and energy, nothing else would exist. But such a dependency does not reverse into the thesis that the laws of micro-physics are the necessary and sufficient “metaphysical ground” of all real phenomena. Let’s suppose one were to elaborate an explanation of biological organisms in entirely aggregative micro-physical terms. Note that biological organisms are negentropic organizations, which “violate” the general laws of physics, such that they must be offset by environmental increases in positive entropy. So, starting considerably along the way from the most basic micro-physical laws, there is a planet orbiting a star at a certain distance, with a certain chemically mixed composition, including a gaseous atmosphere, bodies of water, a solid core possibly allowing for land masses or shallow waters, etc., such that it is in a state of quasi-permanent thermodynamic disequilibrium, “fed” by energy from its star, resulting in weather disturbances and thermal vents, which might synthesize new chemical compounds, which compounds might, under certain minutely probable conditions, combine into autocatalytic chemical processes that sustain and reproduce themselves, such that elementary organisms emerge and begin to differentiate themselves according to the local environmental conditions, as well as feed off of each other and their various metabolic waste products, such that a biosphere emerges, altering climatic conditions and facilitating the fusion of elementary organisms into more complex forms, which eventually evolve central nervous systems, and in a few cases, those central nervous systems evolve emergent mental properties (“psychology”), and in one case, natural language, such that those mental properties, as well as physical ones, can be talked about and distinguished, and so forth. Suppose that all of this could be described in terms of elementary physical particles and forces, but specifying all the particular conditions of their “motions”. My objection is not quite that such an overall explanation would be so over-elaborate and complex that it would scarcely measure up to the requirement of economy in explanation. Nor even quite that such a hugely elaborate explanation might amount to so much “noise”, such that it would be virtually impossible to distinguish information from “noise”. It’s rather that such an explanation, starting from the most basic micro-physical laws and regularities, would be so vanishingly improbable by those very laws as to be almost indistinguishable from explaining an impossibility. It would be tantamount to explaining biological life as “a wrinkle in time” or “a singularity”, which is the very sort of disjunction you’re complaining of. (It would be of little avail if there were more than one known instance, since that would only be multiplying an exponent with a very large negative number, and since those two or more physical “disturbances” might not be physically alike, other than with respect to their improbability.)

At this point, I would ask why is this notion of an “ultimate” reduction to micro-physical laws a “problem”? Why is it “philosophy” and not just silliness? Doesn’t what one chooses to see as a “problem” involve some significant “stakes” and the capacity to take responsibility for those stakes? I think the problem with this “problem” is that it partakes of an (Eleatic?) notion of “pure” knowledge, as the counterpart to and placeless representation of sheer existence, entirely abstracted from any of our actual (or possible?) cognitive practices, as born out of the need for orientation in the world on the part of worldly beings. What sort of knowledge would it be that has no conditions of application?

I might add from the other end that the notion of providing an explanation in micro-physical terms for the meaning of our words is absurd. What would be the differentiable micro-physical conditions for “prevarication” or “exaggeration” or “oxymoron”? Yet all knowledge consists in linguistic, or at least symbolic, statements, which presuppose the applicable meanings of words and symbols. And such statements claiming knowledge are supported or “justified” by further statements about reasons and evidence. The desire to escape this “elemental” condition of knowledge is passingly strange, as if the motive force of knowledge were to pass “metaphysically” beyond this world and attain salvation by becoming “Martian”.


John 01.23.07 at 8:28 pm

“What I’m interested in is why it’s true (assuming it is true) that some weakenings of the explanation (from _hit with a stone_ to _hit with a projectile_ for example) make explanations deeper, but some weakenings (from _hit_ to _hit or melted_ for example) make explanations worse.”

What you call weakenings sounds like the addition of parameters (and thus wider compatibility with possible worlds) in the distasteful case, and generalizing a parameter in the desirable one. If this is the case, there are interesting works addressing the subject that explain why the former is undesirable, while the latter is desirable. Kevin Kelly, for instance, has an account of simplicity that jibes with this.

Full disclosure: I was, once-upon-a-time, a student of Kelly’s.


vivian 01.23.07 at 9:09 pm

Some of the extra mystery here comes from equivocation (or at least a lack of specificity) about what we want explained. If you or the Martians care about why the window broke-rather-than-melted when hit by a very hot projectile, we can give a pretty naturalistic and even reductionist account of properties of materials, or perhaps molecules. Why it broke spontaneously might require discussion of extreme local weather, and why it broke right before a small person emitted a loud crying sound would require some social/psychological regularities be invoked.

If the explanatory context is physics, then explanations invoking spells are bad because they (by physicists’ convention) beg physical questions. But “Because I wanted to break it” is also bad, for not answering why my throwing the rock worked. It would be a great answer to give in a therapy session, where invoking physics would be avoiding the “real” issue.

Unless you consider better specifying the kind of explanation sought to be adding disjunctive conditions. I’d argue that they don’t, but it’s your context here.


"Q" the Enchanter 01.23.07 at 9:11 pm

I’m still trying to work out your argument, Brian, so let me risk disproving the old saw about there being no dumb questions. One thing I’m not grasping is just what the puzzle with respect to what you would call “weak broad” arguments (“WBA”) is supposed to be. The earlier examples you give are simply collections of (1) independent explanations and (2) false causal claims labeled as explanations. The later examples of gruesome predicates you give are time-indexed predicates that strike us as intuitively natural, don’t in the manner of the earlier examples invoke pseudovariables or pseudocauses, and don’t in the manner of properly gruesome predicates preclude induction over the class of objects or phenomena they encompass. I know you know this. What I don’t know is what philosophical point I should draw from this.


Z 01.24.07 at 5:20 am

I see three different problems here. The first seems to be “What knowledge can be reduced to micro-physics?”. It is an interesting question, though I surmise many physicists believe nothing outside of micro-physics can be reduced to micro-physics. I distinctly remember a researcher in quantum chemistry saying so (a reduction of sociology to micro-physics seems therefore a slightly too bold project to even contemplate). Besides, I wonder why you would single out micro-physics here. Why not ask for a reduction to pure math? Or even to axiomatic set theory? Or logic? Or, since logic is a manipulation of abstract symbols we understand, to linguistics (which can be defined as the study of abstract symbols we understand, I guess)? And from then on, your reduction process bootstraps, doesn’t it?

The second problem seems to be “What can we know from what else?”. Again, interesting, but hard. I have no particular insights here, I’m afraid (but like you Brian, I am a reductionist).

The third problem would be “How can we decide if a notion is grue?”. Here I think it is possible to give a perfectly reasonable answer, but not, I’m afraid, one that reduces the notion to micro-physics (again, why micro-physics?). A notion is grue in proportion to the degree of specification needed, in the definition of the observer of this notion, for it to make sense. So “weekday” is more grue than “water boils at 100°C” because you clearly have to specify the properties of someone using “weekday” very highly (arguably enough that a Martian would not understand this notion), whereas a clearly smaller amount of specification is needed for the second notion. Still, “water boils at 100°C” is way more grue than a description of the phase transition of H2O in terms of molecular agitation and so on. OK, so how do we measure grueness in general (note that it just happened in our example that we reduced to micro-physics; it wasn’t our intent but rather a natural byproduct of our endeavours to underspecify the observer)? I would say a good enough test to measure grueness is to try to interchange potential observers and see how long the notion makes sense. Using this simple criterion, it is easy to see why “cell” is less grue than “pet”, which in turn is less grue than “demand”. An explanation A is better than B if, all else being equal, A is less grue.

Right. To me at least, this answer is satisfying. Maybe I misunderstood something though, since disjunctive statements play no particular role in my proposition. I fear my redefinition of grue is too far from the one you presented in your post.


Matt Weiner 01.24.07 at 9:54 am

z, I think there may be a problem with saying that ‘weekday’ is just brutely grue. We want to figure out whether certain propositions are gruesome because we want to know whether explanations that advert to them are good explanations. But, as Brian points out, some explanations involving ‘weekday’ are good explanations.

For instance, suppose I’m interested in explaining why there are n people at the local pool. It’s possible that the best explanation may involve the local temperature (a nice physical property) and whether it’s a weekday. We wouldn’t want to say “No, you can’t use ‘weekday’ in your explanation, it’s a gruesome property,” because when we’re explaining why there’s no room at the pool there isn’t any better property to use. At least, it’s not obvious that we can find a better explanation that uses more interchangeable terms.


Z 01.24.07 at 10:19 am

“At least, it’s not obvious that we can find a better explanation that uses more interchangeable terms.”

Indeed, I don’t think there is, and so we have to live with an explanation that is grue. This is probably not surprising, as the question itself is highly grue (in my opinion). As the problem presupposes a high degree of specification (if only via the concept of a pool) in its definition, I think we should expect a high degree of specification in the answer. So the weekday explanation is not bad. A bad explanation would be one that introduces a higher degree of specification than the hypothesis requires. In practice, I realise that it might be hard to tell whether additional specification has been introduced.


Brian 01.24.07 at 10:56 am

I need to think a little more about what Q and Z are saying, but I think the beginning of an answer is this.

On the one hand, we do seem to get the right results about what is more grue than what if we allow ourselves to use as inputs the reactions of human observers. But it would be interesting to know whether those humans are tracking something metaphysically deeper by their reactions. Some philosophers think not. What’s grue and not is just what strikes creatures like us as grue and not, so the quality of any explanation is relative to human conceptual schemes. That might be right – I doubt it, but it might be. But to know whether it is, we have to be able to specify what it is the human observers even might be tracking.

On the other hand, some of the things we take to be good explanations, like explanations in terms of beliefs or demand or the like, might take just as long to explain to a non-human observer as explanations we would find quite gruesome and disjunctive. That’s to say, if you tried to explain what beliefs and demand curves were in purely physical terms, you’d take such a long time that someone else could, in the same time, define up predicates that anyone would say are too gruesome to use in any explanation.

Here’s what I’d like to be able to say about all this. Despite that last observation, explanations in terms of beliefs or demand curves are better explanations than explanations in terms of massively gruesome predicates. And that’s not because human observers find those explanations better, there really is some feature of reality that we’re tracking when we say they are better explanations.

What I don’t know is what that feature might be.

The connection to Campbell is the possibility that the feature might be a brute feature of reality, invisible to the Martian physicists, but visible to us. That seems too spooky to me to be true, but I find most anti-reductionist claims too spooky to be true. So I’d like to have a story about what the feature is.

(I should have said in the post that Barry Loewer has some unpublished work that he’s been presenting at various places on this topic that, if it’s correct, starts to solve this problem. But it’s not in print, or even in a public online site.)


Michael Greinecker 01.24.07 at 12:42 pm

One can formulate “fundamental physics” using gruesome predicates, so Lewis’s way is of no use. The only possible way out I see is to determine the choice of predicates and laws jointly and impose some conditions on both of them together (something like a complexity index). The problem even comes up when learning a language. The Wittgenstein-Kripke puzzle of private language is basically a variant: if you tell me that emeralds are green, how can I know you didn’t want to tell me that emeralds are grue?

@6: “Another, mathematical, way of saying this is that predicate encodings are not invariant in any non question-begging way (specifically, they are homeomorphic).”

?!?!? Whether they are homeomorphic depends on the topology chosen, and there is neither a natural predicate space nor a natural topology for it.

“Rather, the existence of a discontinuity in the mathematical sense, between the micro and macro-levels presents a real problem for reductionists, because discontinuity implies uncomputability (trust me).”

A discontinuity of a real function with the usual topology, that is. I don’t see the relevance of the problem here.


John 01.24.07 at 1:06 pm

One last try at being relevant.

I have been assuming too much about the conversation being about the accounts of explanation available to us from philosophy of science (the deductive-nomological account, statistical relevance, etc.). In this mode the question is tinged with considerations like those I addressed earlier, and reduction is understood in a kind of Nagelian way: T reduces T′ just in case the laws of T′ are derivable from those of T. Then, in the case of microphysics to mind, T = our microphysical theory and T′ = our theory of mind. Further, the language of choice is not relevant to derivability, thus it is not relevant to reducibility. Further still, while the assumption of reducibility is quite useful, the epistemic determination of metaphysical reducibility is not decidable.

However, once a language of inquiry is fixed, though it is not philosophically special, there are plenty of desirable features of hypotheses and explanations that correspond to non-gruesomeness. In model selection, for example, the hypothesis that optimizes the tradeoff between simplicity and fit is proffered. Under the AIC criterion, philosophically endorsed by Elliott Sober and Malcolm Forster, the preferred model balances simplicity and fit while increasing predictive accuracy. Simplicity here refers to minimizing parameters, and fit to minimizing distance (actually Kullback-Leibler divergence) from observed values.
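For concreteness, here is a toy version of that tradeoff (my own example, not Sober and Forster’s; the formula n·ln(RSS/n) + 2k is the standard least-squares form of AIC up to an additive constant): an extra slope parameter has to buy enough fit to offset its penalty.

```python
import math, random

def aic(rss, n, k):
    # Gaussian least-squares AIC, up to an additive constant
    return n * math.log(rss / n) + 2 * k

def fit_constant(xs, ys):
    # one-parameter model y = c; return residual sum of squares
    c = sum(ys) / len(ys)
    return sum((y - c) ** 2 for y in ys)

def fit_linear(xs, ys):
    # two-parameter model y = a*x + b; return residual sum of squares
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    a = sxy / sxx
    b = my - a * mx
    return sum((y - (a * x + b)) ** 2 for x, y in zip(xs, ys))

random.seed(0)
n = 200
xs = [i / n for i in range(n)]
flat = [random.gauss(0, 1) for _ in xs]            # no trend
sloped = [3 * x + random.gauss(0, 1) for x in xs]  # genuine trend

for ys in (flat, sloped):
    a0 = aic(fit_constant(xs, ys), n, k=2)  # mean + noise variance
    a1 = aic(fit_linear(xs, ys), n, k=3)    # slope + intercept + variance
    print("constant AIC %.1f  linear AIC %.1f" % (a0, a1))
```

On the genuinely sloped data the linear model wins decisively; on pure noise the slope is an idle parameter and typically loses to the simpler model.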

The generalization from “hit with a stone” to “hit with a projectile”, for example, abstracts away irrelevant features (and further, to “object with mass m, velocity v”), but some “weakenings” (from “hit” to “hit or melted”, for example) make explanations worse by adding unnecessary parameters (e.g. heat h for melting, perhaps).

For what it is worth, I would suggest an end-run around the disjunctive-predicates issue and instead address what it means to be a fruitful theory of reducibility and explanation in the scientific context. There is, by my lights, no non-question-begging account of a privileged language from which to judge naturalness. For instance, there is an evolutionary story about the veridical nature of our natural concepts, but it fails to provide suitable grounds for them, for several reasons, including that natural selection selects for good-enough concepts (survivable in a human day-to-day, heuristic sense), not true concepts.

I fear none of this will appear relevant, as the discussion seems rooted in the post-Kripkean conceptual-analysis mode, where philosophical intuitions are plumbed for metaphysical implications without fleshing out the logical and mathematical features of those intuitions. I take this to be a doomed methodology, but that is another matter.

Perhaps I have been hopelessly warped into a methodological monomaniac by my time at CMU.


soru 01.24.07 at 4:00 pm

‘The connection to Campbell is the possibility that the feature might be a brute feature of reality, invisible to the Martian physicists, but visible to us. ‘

Two robots play a game of chess inside a box. Everything inside the box is physically understood by the Martians – electrical signals, motors, chemical batteries, and so on.

Except they never happen to spot the rules of the game being played.


Neel Krishnaswami 01.24.07 at 6:23 pm

20: soru, that’s entirely possible.

If I understand Brad, he’s arguing that Martian axioms will consist of the rules of logic (which are the same for everyone) and basic physical law (which are also the same for everyone), and hence the Martians can deduce anything a human can.

This is, unfortunately, not true, even though I agree with him that the Martians will use the same (up to some translation) logic as us, and will have the same laws of physics. The trouble is that the Martians will understand the principle of proof by induction, and when you add that principle to logic, every proof search procedure becomes incomplete. This is because proof by induction can require inventing an induction hypothesis, and this is fundamentally a creative act. That is, there is no mechanical procedure for reliably cooking up the right induction hypothesis. (You can demonstrate this by showing that the subformula property fails to hold in a logic with induction over the natural numbers.)

So as a result, it’s entirely possible that we can come up with mathematical formalizations that the Martians would never have been able to, and vice-versa, even though they and we agree on what mathematics and physics are. We would be able to verify that a given Martian argument is correct, and likewise they would be able to verify that a given human argument is correct, but it’s entirely possible that the standard explanatory schemas we use are ones that Martians will never naturally come up with, and vice-versa.


John 01.24.07 at 7:41 pm

Apologies for this digression…

Whether they are homeomorphic depends on the topology chosen, and there is neither a natural predicate space nor a natural topology for it.

You are completely right about homeomorphism being relative to topology, but whether or not there is a natural topology for representing problems is debatable. The topology used by Kelly captures levels of underdetermination as understood as levels of complexity on the Borel hierarchy. The topology he uses to represent problems is the Baire space.

He presents it this way:

Goodman’s point was that syntactic features invoked in accounts of relational support (e.g., “uniformity” of the input stream) are not preserved under translation, and hence cannot be objective, language-invariant features of the empirical problem itself. The solvability (and corresponding underdetermination) properties of the preceding problem persist no matter how one renames the inputs along the input streams (e.g., the feather [Baire space really] has the same branching structure whether the labels along the input streams are presented in the blue/green vocabulary or in the grue/bleen vocabulary). Both are immune, therefore, to Goodman-style arguments, as are all methodological recommendations based upon them. (from "The Logic of Success")

From his rigamarole he develops a system whereby empirical problems may be classified into complexity classes corresponding to notions of decidable/verifiable/refutable with certainty/in n mind changes/in the limit/gradually. My point, along these lines, is that the coding does not matter to the computability of the reduction (more on this below).
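Kelly’s relabeling point can be illustrated with a finite-depth stand-in for the Baire space (the cutoff stage and labels here are purely illustrative): a stage-wise bijective relabeling of the inputs permutes the streams but leaves the branching structure of the “feather” untouched.

```python
from itertools import product

T0 = 3  # illustrative cutoff stage for the grue/bleen translation

def translate(stream):
    # stage-wise bijective relabeling: swap the labels from stage T0 onward
    swap = {"green": "blue", "blue": "green"}
    return tuple(s if t < T0 else swap[s] for t, s in enumerate(stream))

depth = 5
streams = list(product(("green", "blue"), repeat=depth))
translated = [translate(s) for s in streams]

# The translation is a bijection on the set of finite input streams,
# so the branching structure is exactly preserved:
assert sorted(translated) == sorted(streams)
assert len(set(translated)) == len(streams)
```

Any solvability property defined in terms of that branching structure is therefore invariant under the grue/bleen translation, which is the sense in which the coding drops out.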

A discontinuity of a real function with the usual topology, that is. I don’t see the relevance of the problem here.

Indeed, a discontinuity in a real function is uncomputable. This becomes relevant to the current discussion if reduction is understood in the following Nagelian way: T reduces T′ just in case the laws of T′ are derivable from those of T. Derivability, then, is taken as ‘computably decided from’. That is, given microphysical facts, there exists a computable function mapping these facts to the psychological, or other special sciences. What is derivable is, of course, indexed to the capacities of the agent. In this case I suppose humans are Turing-equivalent, though we can modify this assumption up or down (FSM or analog computer, say), and uncomputability/underivability will reassert itself. This, I think, presents an intrinsic problem for reduction relative to the capacities of an agent, whereas gruesomeness does not. Gruesomeness does, however, present a legitimate coding problem that can be treated information-theoretically, but that is another very long story.
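The underlying fact from computable analysis is that a function computable from approximations must be continuous. A toy sketch (the interface is invented for illustration): evaluating the Heaviside step function given only rational approximations to the input succeeds away from the jump, but no finite precision ever settles the answer at it.

```python
from fractions import Fraction

def heaviside_from_approx(approx, precisions):
    """Try to evaluate H(x) (0 if x < 0, else 1) given only rational
    approximations approx(k) with |approx(k) - x| <= 1/2**k."""
    for k in precisions:
        a = approx(k)
        eps = Fraction(1, 2 ** k)
        if a - eps >= 0:   # the whole interval sits at or above 0
            return 1
        if a + eps < 0:    # the whole interval sits below 0
            return 0
    return None  # every interval straddled the jump: no verdict

# away from the discontinuity, a few bits of precision suffice
assert heaviside_from_approx(lambda k: Fraction(1, 3), range(10)) == 1
assert heaviside_from_approx(lambda k: Fraction(-1, 5), range(10)) == 0
# at the jump itself, no amount of precision decides the value
assert heaviside_from_approx(lambda k: Fraction(0), range(60)) is None
```

The same pattern holds for any finite-precision computational model, which is why a genuine discontinuity at the micro/macro interface would block a computable reduction.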
You can find some relevant philosophical papers on my abortive attempt at a formal epistemology blog. It’s ugly, but it has some classics.


"Q" the Enchanter 01.24.07 at 8:11 pm

That helps, Brian. I’ll be thinking about this…


James Killus 01.24.07 at 8:14 pm

Way too much deviousness here, I think.

The detection of neutrinos requires considerable amounts of equipment, and for some energies, substantial amounts of computing power as well. No one really denies the neutrino’s existence, however.

The detection of (say), the mood of a friend requires a human being as a detector.

Why is a large particle detector and all the attendant folderol considered to be philosophically different (we can agree that there are qualitative and quantitative differences; there always are) from a human “detector”?

If the Martians are that damn smart, they can learn the language and ask somebody.

Also, something is grue if it is green but turns blue at some time in the future, i.e., something that cannot be observed currently but can eventually be observed. The statement in the original post misses that distinction.


Michael Greinecker 01.25.07 at 7:25 am

“This becomes relevant to the current discussion if reduction is understood in the following Nagelian way: T reduces T′ just in case the laws of T′ are derivable from those of T. Derivability, then is taken as ‘computably decided from’. That is, given microphysical facts, there exists a computable function mapping these facts to the psychological, or other special sciences. What is derivable, is of course indexed to the capacities of the agent. In this case I suppose humans are Turing equivalent, though we can modify this assumption up or down (FSM or analog computer, say), and uncomputability/underivability will reassert itself.”

My problem is another one: Why should the real line be a good model of science? If we are living in a discrete world, jumps wouldn’t be a problem for computability.


John 01.25.07 at 11:59 am


Again you are correct; however, I am not claiming that the real line is the canonical model for science, nor am I claiming that “jumps” are somehow inherintly computationally problematic. In my haste, I was simply being too imprecise about the sense of discontinuity I was using.
The universe may indeed be discrete, as the digital physics folks think. As Richard Feynman wrote in The Character of Physical Law:

It always bothers me that, according to the laws as we understand them today, it takes a computing machine an infinite number of logical operations to figure out what goes on in no matter how tiny a region of space, and no matter how tiny a region of time. How can all that be going on in that tiny space? Why should it take an infinite amount of logic to figure out what one tiny piece of space/time is going to do? So I have often made the hypotheses that ultimately physics will not require a mathematical statement, that in the end the machinery will be revealed, and the laws will turn out to be simple, like the chequer board with all its apparent complexities.

Without begging the question, however, we are still stuck with the epistemic conundrum of discovery, whereby the discrete nature of the universe cannot be decided with certainty but can be converged on in the limit. It is not so much about the real line representing the universe as it is about representing assessment and discovery complexity with whatever space is appropriate, as established by representation theorems. As Nancy Cartwright wrote, “…the representation theorems for the concepts we offer in use in modern science that we find our best candidates for ‘constitutive principles’. These are the preconditions for the application of our concepts to empirical reality” (from “In Praise of The Representation Theorem”).
I am afraid that all of this takes us too far afield from Brian’s post. If you wish to discuss this further, please email me.


John 01.25.07 at 1:02 pm

Apparently, I cannot spell “inherently”.


"Q" the Enchanter 01.25.07 at 1:18 pm

So I’ll be thinking about these issues for the next couple of years or so, but for now here are my nascent thoughts.

Brian, you suggest we abstract human psychology away and see if we can find any authentic ontological features our macro-level causal concepts actually track. I’d say that perhaps all we could hope to find are the underlying microphysical phenomena as our idealized Martian physics (“MP”) would predict they would be experienced by beings like us. That is, if MP really does account completely for the way in which the microphysical processes and structure of reality play out dynamically in the generation of human-like consciousness (otherwise, I take it, it wouldn’t be a “complete” physics), then by hypothesis there would be a reliable translation between microphysical descriptions and predictable macrophysical percepts.

Obviously, this account of what’s been called in the discussion “gruesome predicates” isn’t purely psychology-free (for how could anything that required an assay of “understanding” in the explanans be purely psychology-free?), but the residue of psychology doesn’t appeal to the actual existence of percipient beings like us; it only requires that the underlying physics predicts what beings like us would perceive.

Now it may be that we find it impossible to imagine how such a physics could provide beings like our hypothetical Martians with an “understanding” of our own appeals to macro-level causal abstractions like “demand curves” or “society” or “Tuesday.” But this (as I suggested in an earlier comment) is entirely due to the construction of the hypothesis. There’s no reason to think their mode of understanding should be any more comprehensible to us than ours is supposed (by hypothesis) to be to them.

Of course, none of this is to say explanatory reductionism is true, but only that on the face of it MP seems to pose no challenge to it.


soru 01.26.07 at 8:31 am

‘The best answer I know of that doesn’t appeal to beliefs, intentions etc of humans is Lewis’s answer in terms of definition length. And that’s a non-starter I think.’

Could you do something interesting there with counting symmetries (in the physicist’s sense of, roughly speaking, something that could demonstrably be more complex but isn’t)?

For example, the rules of chess are symmetrical between different chess games, between different moves in the same game (mostly; the ‘can castle only once’ rule is an exception), between different pieces of the same type, between different squares on the board, and between the two sides. None of those symmetries has any meaningful physical cause; none can be plausibly shown to be a consequence of underlying physics symmetries.

That’s why it can be said that the rules of chess are a good explanation for a chess game.


Rob 01.26.07 at 9:14 am

Didn’t Davidson have something to say about gruesome predicates in ‘Anomalous Monism’? As I remember (and this may say more about my memory than about Davidson), his argument was that the fact that gruesome predicates don’t work shows that there can be predicates that don’t fit into a particular causal scheme, and that precisely because there can be such predicates, we needn’t worry about it being apparently impossible to get whole sets of other predicates to fit into other causal schemes, because that doesn’t mean that there aren’t other descriptions of the properties which those predicates happen to pick out which could be made to fit into the causal scheme in question. You might put it another way: reductionism is eliminativism. I guess, though, you’d probably want to avoid that.


Anarch 01.26.07 at 2:14 pm

The trouble is that the Martians will understand the principle of proof by induction, and when you add that principle to logic every proof search procedure become incomplete.

This isn’t correct, fwiw. Robinson’s Q, which is basically Peano Arithmetic without induction, is already undecidable; and in fact Kripke showed, by a fairly simple extension of the proof of the Gödel Incompleteness Theorem, that any theory encoding the quantifier-free truths of arithmetic is undecidable too. [In brief: the Gödel statement G is Pi-1, so if it were falsified you could actually pick an explicit witness to that effect.] You might be thinking of existential quantifiers here — the unbounded search I think you’re talking about — in which case it depends on the base theory: unbounded search over, say, the axioms of a dense linear order is perfectly acceptable (DLO is complete, and hence decidable, up to the specification of endpoints), while an unbounded search over the usual axioms of arithmetic produces undecidability in the presence of negation.


Neel Krishnaswami 01.26.07 at 5:59 pm

anarch: I’m talking about the principle of induction over the natural numbers, added to the first-order logic of your choice (classical, intuitionistic, linear, relevant, affine, whatever). Adding it makes the logic incomplete, end of story.

The unbounded search I’m talking about doesn’t have anything to do with existentials. It arises from the failure of the subformula property, which is the property that any derivation of a provable proposition in a sequent calculus involves only subformulas of the goal proposition. This property holds for ordinary first-order logic (which is why it is semi-decidable), but it does not hold for calculi with induction.

This is because in order to use the principle of induction, you have to choose the induction hypothesis to apply it to (because it is schematic over predicates). Unfortunately, the hypothesis you need to make up is not necessarily a subformula of the goal; you may have to come up with a more general induction hypothesis. This creates the infinite search problem: your search space of hypotheses is all formulas, which is an infinite set.
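The stock example from program verification makes this vivid (my example, not drawn from the thread): to prove that accumulator-based list reversal meets its specification, the hypothesis you must induct on is strictly more general than the goal, and not a subformula of it. Python can only spot-check instances, but the shape of the generalization is the point.

```python
def rev(xs):
    # naive reverse: the specification
    return rev(xs[1:]) + [xs[0]] if xs else []

def rev_acc(xs, acc):
    # accumulator reverse: the program we want to verify
    return rev_acc(xs[1:], [xs[0]] + acc) if xs else acc

# Goal: rev_acc(xs, []) == rev(xs).  A direct induction on xs gets stuck:
# the recursive call is rev_acc(xs[1:], [xs[0]]), and the goal instance
# for xs[1:] says nothing about a non-empty accumulator.  The needed
# (strictly more general) induction hypothesis is
#     rev_acc(xs, acc) == rev(xs) + acc   for all acc,
# which is not a subformula of the goal.  Spot-checking instances:
for xs in ([], [1], [1, 2, 3]):
    for acc in ([], [9], [7, 8]):
        assert rev_acc(xs, acc) == rev(xs) + acc

# the original goal follows as the acc == [] special case
assert rev_acc([1, 2, 3], []) == rev([1, 2, 3])
```

Finding that strengthened lemma is exactly the creative step that no complete mechanical search procedure covers.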


Anarch 01.27.07 at 12:22 am

We might be talking about different things here — in particular, I’m not sure what either your theory or metatheory are — but…

Adding it makes the logic incomplete, end of story.

Well, no. Assuming standard first-order logic* and a basic description of the natural numbers (e.g. Robinson’s Q), it was already incomplete — you don’t need induction anywhere in Gödel’s proof. In particular, you don’t need to a) add induction as a mathematical axiom to your system (hence Q not PA), nor b) apply induction metamathematically anywhere in the proof, as neither the Diagonalization Lemma nor the fragment of the Representation Lemma required to represent the relevant predicates necessitates induction.** Adding induction doesn’t “recomplete” it nor does it make it “incompleter”, of course — although I think the Turing degree of (the theory of) Q is strictly below the Turing degree of (the theory of) PA — but it doesn’t produce the incompleteness phenomenon you seem to be ascribing to it.

* Things get more complicated in alternative logics simply because there are too many, well, alternatives, but the same gist applies to most of the major ones I know (2nd order, L_{\kappa \lambda}, etc.).

** This is a subtle but hugely important point: you only need Representation up to Sigma-n for some n large enough to diagonalize “not provable”. In fact, I think you can even get away with just diagonalizing “not provable”, or maybe its subformula closure.

This property holds for ordinary first-order logic (which is why it is semi-decidable),

I’m confused. Ordinary first-order logic is semi-decidable/c.e. because its notion of “proof” is explicitly decidable and hence “provable” is automatically semi-decidable. [Of which I’m sure you’re well aware, but it should be said nonetheless.] This holds for any decidable notion of “proof” — provided you don’t do something wacky like using K to encode formulas or whatever — irrespective of whether it possesses the subformula property or not.***

That said, I think I’ve found the problem: I think what you’re calling induction is more commonly called “omega-completeness” in the mathematical literature, the property that if T is a theory (extending Q or somesuch) which proves A(n) for each standard n, then T proves (x) A(x). [I’m guessing here; you seem to be a proof theorist whereas I’m a computability/set-theory guy.] This is an inherently infinitistic proof principle, absolutely, but it’s not induction, which applies equally (by definition) in standard and non-standard models of arithmetic. If that’s the principle that you’re invoking, fair enough — but it’s not required to get incompleteness, it’s almost never used in mathematical logic, and the problem isn’t “unbounded search” per se; it’s that it’s not even formalizable finitistically except in a much stronger metatheory (e.g. ZFC or L_{\omega_1 \omega} or what have you) than one usually invokes in this context — because you need to diagonalize over predicates, as you noted above — and that formalization itself carries with it incompleteness and the standard/non-standard dilemma, just at a higher level.

*** I’d normally throw in the disclaimer that a “proof” ought to be finite too but I think, by Rice’s Theorem, that being decidable suffices.


Neel Krishnaswami 01.28.07 at 4:28 pm

Hi Anarch: so, first, I definitely agree that you don’t need induction to get undecidability — I’m saying it’s sufficient, not necessary. (You don’t even need quantifiers; propositional linear logic is undecidable.)

I am a proof theorist (or rather, I do research in programming languages, which has a very hazy boundary with proof theory). When we formalize induction in type theory, we really do formalize it the way you call omega-completeness. It’s very nearly a forced choice if you want to be faithful to Martin-Löf’s judgmental method. (Roughly, you have to be able to give a logical connective its introduction and elimination rules without reference to the other connectives, plus other stuff irrelevant to a blog comment.)

I can’t say a whole lot more, because I’m embarrassingly ignorant of mathematical logic — I come at the subject from an odd angle.
