Trompe le Mond: Deceiving Demons and Universal Holograms

by John Holbo on February 17, 2010

Matthew Yglesias says what’s necessary to talk people down from the ledge.

(Me? Last week I taught my students everything’s made of monads; mere universal holograms seem fairly ho-hum.)

But there is one point that should be made in these connections that almost never is: deception is a very different concept than error. Deception is a game for two: one to fool, one to be fooled. Whereas you can be wrong all by yourself. You can smudge the distinction with favorite epistemologist phrases like ‘if it turns out I am massively deceived about the way the world is …’ But if you dramatize the possibility of systematic/fundamental error by imagining deceiving demons, Evil Gods, Agent Smith, mad scientists with brain vats, caves equipped with the latest in projection technology, or giant holograms, you confuse people’s intuitions. Specifically, you confuse them into thinking that error is more conceivable (or differently conceivable) than it may really be. Telling people the universe is a hologram makes it sound as though the universe actually intends to pull the wool over their eyes. Reality itself is the ultimate Long Con! But if you just tell them matter is made of atoms, or water is really H2O, that doesn’t make it sound as though the micro entities think all the macro-types with minds are marks and suckers.

To put it another way, the hologram hypothesis makes it sound as if it is somehow the proper function – the telos – of the universe to fool us: trompe-l’œil. Without pretending I understand the hologram hypothesis one little bit, I’m pretty sure that’s not it.

As a result, I think essays like Chalmers’ “The Matrix As Metaphysics” end up – not being wrong, but irrelevantly eroding their own plausibility by bad choice of illustration; a pattern stretching back to Descartes’ “First Meditation”, possibly Plato’s Cave.

On the one hand, it’s reasonable to suppose my beliefs are about the things that they tend to be caused by (very crude first approximation to a causal theory of mental content). So if I am a brain in a vat, living in a ‘Dream World’, oddly my beliefs turn out to be mostly true! They are about the Dream World and are, mostly, true of that world. ‘Hey look, a woman in red!’ (Read Chalmers’ paper for the longer version. Or read Donald Davidson, or anyone else who has argued in this anti-skeptical vein.) Mutatis mutandis, if it turns out tables and chairs are really made of atoms and quarks, it doesn’t follow that there really aren’t tables and chairs, after all. What has been discovered is just what the tables and chairs really are. (Get it? Sort of?)

All the matrix razzle-dazzle Chalmers helps himself to is supposed to make that last point vivid, but it semi-obscures it. Because the one sort of universal appearance-generating brain vat that this wouldn’t be clearly true of would be … the sort of vat you see in movies like The Matrix. Namely, ones with the proper function of deceiving their inhabitants. Here I emphasize function rather than bad intention because … well, put it this way: suppose we had a serious theory that H2O appears to humans as a colorless, tasteless, odorless liquid because somehow this is, functionally, camouflage for the micro thingies. They are ‘trying to hide from us’ by exhibiting this appearance. (Yes, I know it doesn’t make any sense.) It doesn’t matter whether it’s intentional, just that we’ve decided that this is a reasonable scientific sizing up of the functional set-up. If that turned out to be the case, I think we might well say it turned out water didn’t exist after all, just as we say of a stick bug that it isn’t really a stick. (I pick the bug because, presumably, it doesn’t intend to deceive. Yet, functionally, it is a deceiver.)

Chalmers writes:

It is common to think that while The Truman Show poses a disturbing skeptical scenario, The Matrix is much worse. But if I am right, things are reversed. If I am in a matrix, then most of my beliefs about the external world are true. If I am in something like The Truman Show, then a great number of my beliefs are false. On reflection, it seems to me that this is the right conclusion. If we were to discover that we were (and always had been) in a matrix, this would be surprising, but we would quickly get used to it. If we were to discover that we were (and always had been) in The Truman Show, we might well go insane.

This is true, I think, but confusing. Because, by ‘the matrix’, Chalmers means something like … the hologram hypothesis. He doesn’t mean a situation in which Agent Smith is seriously messing with our heads. I think that’s right.

Probably some fine, upstanding academic epistemologist has made this ‘deception not the same as error’ point already, but I missed the memo, so I will be happy to give credit when properly informed, in comments.


1

chris y 02.17.10 at 10:30 am

2

salacious 02.17.10 at 10:59 am

While this all seems right, I will say that certain errors seem to be much more “deceiving” than others. For instance, the existence of subatomic particles revealed the everyday understanding of matter to be incorrect, but it doesn’t make that lay understanding feel particularly deceptive. Quantum mechanics and mixed states on the other hand…. That knowledge still has the power to make me feel like my subjective experience is pulling the wool over my eyes. Another example might be the discovery of cell biology (doesn’t make the previous understanding feel deceptive) vs. the discovery of evolution (does make the previous understanding feel deceptive).

Most likely, this intuition isn’t philosophically relevant. I’m interested to hear, however, whether others feel the same way.

3

Hidari 02.17.10 at 11:19 am

I had a huge long post about this which I brilliantly managed to delete. However, very briefly, I think the statement ‘Mutatis mutandis, if it turns out tables and chairs are really made of atoms and quarks, it doesn’t follow that there really aren’t tables and chairs, after all. What has been discovered is just what the tables and chairs really are’ is wrong. I don’t think you have discovered anything apart from the fact that you can cut chairs into smaller and smaller bits. It’s like cutting up a chair with an ax and then saying ‘You see! It wasn’t really a chair at all! It was just fragments of wood!’. If you cut a chair up into smaller and smaller bits until it was down to atoms, you would have proved you had a really amazing ax, but that’s it. My point here is about the meaning of the word ‘reductionism’ which I think is an epistemological not an ontological phenomenon, a way of looking at the world, not (or at least not necessarily) a way of making ontological statements.

To put it even more clearly: IMHO reductionism is neither ‘true’ nor ‘false’. It is useful or not useful. But I don’t think you can infer ontological statements from an epistemological method.

4

Marcus Pivato 02.17.10 at 12:34 pm

Perhaps the difference between `error’ and `deception’ is this: If you are merely in error about the underlying metaphysics of the universe, then there remains the possibility that the `true’ universe still obeys causal laws which are well-approximated by the causal laws you thought it obeyed. But if you are being deceived by Agent Smith or Descartes’ Evil Demon or whatever, then all bets are off — the universe is totally capricious, and pretty much anything could happen at any time, depending on the desires of the Deceiver.

For example, at some point in our education, we learn that `folk physics’ is wrong and classical mechanics is a better description; later, we learn that classical mechanics is wrong, and quantum mechanics is an even more accurate description. But these `revelations of error’ do not throw us into a state of epistemological panic, because the new model is still governed by very precise laws (which we think we know), and furthermore, it turns out that the old (incorrect) model is actually a pretty good approximation of the new model, at least within certain parameter ranges (which, of course, is why we believed it in the first place).

But if we learn that quantum mechanics is wrong and that we are being deceived by demons, then the predictive power of our models is totally undermined. We can’t even try to replace quantum physics with a theory of `demon psychology’, because if the demons are trying to _deceive_ us, then they will deliberately act so as to conceal their true intentions and to contradict our expectations. No matter how much scientific effort we devote to studying the behaviour of the demons, there is always the chance that they will do something completely unexpected, simply to thwart us. Epistemologically, you have no legs left to stand on.

5

Salient 02.17.10 at 12:52 pm

A comment on Yglesias’ site:

…when they say “the universe is a hologram!” what they really mean is “we’ve discovered that the mathematical rules describing the behavior of these systems are similar to the mathematical rules describing these macroscopic objects we have intuitive experience with”.

That sounds about right, except in this case it’s not exactly even a discovery. It’s someone like Brian Greene trying to invoke a sense of wonder in people (and to entertain people) by saying something eye-popping. Maybe it works for some folks. I mean, I was inspired to care about science in middle school by hearing about wonder-provoking nonsense like this and wondering what it meant.

I mean, we’ve had Stokes’s Theorem for forever now, and nobody’s freaked out about it except the Calculus & Analysis students who have to learn it each semester. And the idea that often, sufficient information is encoded in boundary data to recover a unique configuration inside the boundary is fundamental to partial differential equations, and it’s not a whole lot more complicated than saying something like, “if we observe this pattern of temperatures at the boundary of our container, we can accurately estimate the temperature inside the container.” Not really surprising, since we know how heat flows.

So sure, if you knew everything about the boundary of our universe, maybe you could accurately estimate the location and momentum and so forth of each particle inside: absolute determinism at the boundary of the universe (assuming that boundary is well-defined and C^1) maybe implies determinism inside the boundary. But on the common and reasonable understanding of the word, calling this a “hologram” is like saying there’s not actually water in your water balloon because the shape of the balloon’s outside tells us where the water is.
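To make the boundary-determines-interior idea concrete, here is a minimal sketch of the heat example (my own illustration, not part of the comment; the function name and grid are invented for the purpose): for steady-state heat on a 1D rod, fixing the two end temperatures fixes every interior temperature.

import numpy as np

# Steady-state heat on a 1D rod: u'' = 0 with fixed end temperatures.
# Jacobi relaxation: each interior point repeatedly moves to the average of
# its neighbours until the profile stops changing.
def interior_from_boundary(t_left, t_right, n_points=9, n_iters=5000):
    u = np.zeros(n_points)
    u[0], u[-1] = t_left, t_right
    for _ in range(n_iters):
        u[1:-1] = 0.5 * (u[:-2] + u[2:])   # average of left and right neighbours
    return u

print(interior_from_boundary(0.0, 100.0))
# prints a straight-line profile from 0 to 100

The boundary data alone determine the whole interior here only because a PDE (the steady-state heat equation) constrains what can happen inside, which is the point of the heat analogy.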

6

Salient 02.17.10 at 1:01 pm

Also this comment at unfogged is quite useful:

since that article was written, the experimenters have claimed to find the real instrumental cause of their noise, and the one guy who was making the predictions has changed his predictions. So, as every reasonable person expected from the start, it was a bunch of hype about nothing.

John, we talked about this deception vs. error idea in the AP psychology class I took in high school, though I don’t think we used the term “error” (a discussion of the differences between being mistaken and being confused and being misled and being insane, I think). But I don’t know that that’s helpful, sorry.

7

John Holbo 02.17.10 at 1:26 pm

“I don’t think you have discovered anything apart from the fact that you can cut chairs into smaller and smaller bits.”

Sorry, I didn’t mean to suggest that we should commit ourselves, without argument, to an obviously extreme form of reductionism. Take ‘really are’ to be elliptical for ‘really are made of’, which is just a longer-winded version of ‘are made of’.

“John, we talked about this deception vs. error idea in the AP psychology class I took in high school, though I don’t think we used the term “error” (a discussion of the differences between being mistaken and being confused and being misled and being insane, I think).”

I don’t think the notions themselves are actually so problematic. Everyone has them ready to go. I think people don’t roll them out at the appropriate time, mostly because there’s now a long and noble tradition of ‘evil demon’ arguments, I suspect.

8

alex 02.17.10 at 2:02 pm

“We can’t even try to replace quantum physics with a theory of `demon psychology’, because if the demons are trying to deceive us, then they will deliberately act so as to conceal their true intentions and to contradict our expectations. No matter how much scientific effort we devote to studying the behaviour of the demons, there is always the chance that they will do something completely unexpected, simply to thwart us. Epistemologically, you have no legs left to stand on.”

This is why all acts of interpretation devoted to written texts can only end in aporia, of course. That, and Derrida said so…

9

Zamfir 02.17.10 at 2:38 pm

Salient, while your point about PDEs seems spot on, I am not sure Stokes’ theorem does the same. With just Stokes’ theorem and no PDE, you can use the information on the boundary to say something about average properties on the inside, not reconstruct the entire inside.
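For instance (my gloss on this point, not Zamfir’s wording), in its divergence-theorem form the result only ties the total boundary flux of a field F to an integral over the interior, not to pointwise values:

% Divergence-theorem form: the boundary flux fixes only the integral of div F
% over the interior region Omega, not its value at each interior point.
\[
  \oint_{\partial\Omega} \mathbf{F}\cdot\mathbf{n}\,\mathrm{d}S
  \;=\;
  \int_{\Omega} \nabla\cdot\mathbf{F}\,\mathrm{d}V
\]

Recovering pointwise values inside needs an extra constraint such as a PDE (Laplace’s equation in the heat example above), which is exactly the distinction being drawn here.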

But your general point about holograms is very true. The use of the word just makes the subject too attractive for popular science magazines, which can use fancy pictures that don’t have anything to do with the subject at all. Holographic data storage suffers from the same problem.

I am not sure whether scientists coin these phrases because of the press attention it gets them or in spite of it. As far as I can tell, “holographic” really is a good way to describe both ‘t Hooft’s theory and the data storage method, both entirely in their own way.

10

HP 02.17.10 at 2:55 pm

Man, remember The Prisoner? That was a cool show.

11

andrew cooke 02.17.10 at 3:06 pm

in reply to salient (5) above:

slightly more was being claimed here, because you had two “ways of looking at something”, which were dual, and which both supported physical intuitions that (1) were not consistent and (2) appeared to be decided by observations in favour of one of the views.

so from a physics pov it was an interesting idea, even if the reason it was interesting to non-physicists had nothing to do with that.

12

Salient 02.17.10 at 3:20 pm

True, Zamfir, I’ve been sitting here regretting the way I constructed that paragraph and the mention of Stokes’ Theorem in particular; I’d only meant to suggest that the milder idea “some information about what’s happening inside the boundary can be obtained from boundary data” is quite uncontroversial. I need to apply Henry’s essay writing advice to my blog comments. :-/

13

bianca steele 02.17.10 at 3:25 pm

I’m inclined to agree with both Salient and Yglesias but to add a Jamesian or Wittgensteinian kind of question about whether we’re supposed to take the implications of these illustrations seriously, like is it supposed to be helpful or interesting that stuff we read about holograms can be applied to the world. And is it possible for The Matrix to be WRONG, for it to be more or less wrong than, say, Dark City? If not, why not? I need to think more about the “error” thing, but I think they are saying that “illustrations” like these are not even wrong (so there is nothing to choose between one and the other), and also that the implications in all their hairy detail are pretty much extraneous to the sort of Aha! effect the “illustration” provides for us.

14

kid bitzer 02.17.10 at 3:26 pm

maybe another reason why it’s useful to distinguish error from deception is this:

if we are victims of deception, then there are other agents who, epistemically speaking, are like us but unlike us.

they are like us in that they have beliefs and intentions of their own. in particular, they have theories of mind about other agents and can pass the false-belief task; all of that is built into the very description of their trying to deceive us.

on the other hand, they are unlike us in that many of their beliefs about the broadest structure of the world are different from our own. (we believe that we are separate animals walking on an earthy sphere; they know that we are only [insert your favorite cave/demon/vat/matrix scenario here].) they can understand our beliefs about the world–hell, they wrote this video-game!–but they know that they are false beliefs. and for each of those false-beliefs, they have a matching true belief, maybe even knowledge, about how the world really is.

so they are like us in being epistemic agents of roughly the same sort we are, and unlike us in the content of their beliefs. but since they are like us in these important ways, then it seems like we ought to be able to occupy their epistemic position vis a vis their beliefs, as well. we ought to be able to learn the truth about the world that they already know.

none of that is guaranteed to us by a mere error theory. it is compatible with our being in error that there are no epistemic agents who have the right view about things, or that there may be, but they are totally unlike us, or that the right view is not one that any epistemic agent could occupy. and that suggests a very different moral for the future of our predicament.

if we are deceived, then it is at least in principle possible for things pretty much like us to be undeceived–after all, they’re the ones deceiving us! if we are in error, then no inference of that sort follows. it may be that the true view awaits us if we persevere. or it may be that the true view is just intrinsically beyond anything’s grasp.

on a different subject: what clever bit of cleverness lies behind your spelling of “monde”?

15

Bloix 02.17.10 at 3:34 pm

“Probably some fine, upstanding academic epistemologist has made this ‘deception not the same as error’ point already”

As Einstein put it: “Subtle is the Lord, but He is not malicious.”

16

Salient 02.17.10 at 3:38 pm

slightly more was being claimed here

I guess my intended point was, more or less, that it should not have been claimed in that article, for at least two reasons — one, it’s a forum which bypasses scrutiny, review, and any reasonable verification of the legitimacy of the results (which have since been attributed to instrumental effects, in my understanding), and two, the audience hasn’t been provided the background necessary to understand what’s actually being said, and so will walk away misinformed. (Didn’t mean to suggest the stuff is uninteresting.)

But I guess this is kind of off the topic of error versus deception, which might be interesting when applied to topics like discrete probability. If someone takes a coin assumed to be fair, flips it 100 times, gets 75+ heads — let’s say in arbitrarily many trials, enough trials so that the person starts to question their assumptions — we want them to question the representativeness of the trials, or the fairness of the coin, before they question the whole concept of theoretical probability. Neither case is a case of deception, but it sure is tempting to draw a distinction and talk about the coin ‘tricking’ the coinflipper.
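Just to put a number on how surprising that run would be (my own back-of-the-envelope addition, not part of the comment), the exact binomial tail for 75 or more heads in 100 fair flips is tiny:

from math import comb

# Exact tail probability P(X >= k) for X ~ Binomial(n, p): the chance of
# seeing k or more heads in n flips of a coin with heads-probability p.
def prob_at_least(k, n=100, p=0.5):
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

print(prob_at_least(75))   # roughly 3e-07 for a fair coin

Which is why doubting the coin, or the trials, is the sane first move, long before doubting probability theory itself.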

17

Michael Drake 02.17.10 at 3:39 pm

I think that it’s more plausible to suppose that if we discovered we were in a matrix, the idea of the “external world” would go the way of the luminiferous aether or absolute simultaneity as a theory that (it turns out) did not at all fit with the discovered fundamental structure of the empirical world. (Cf. “We’ve discovered the true nature of the luminiferous aether; it’s actually a vacuum.”)

18

kid bitzer 02.17.10 at 3:42 pm

o bodies weighed to music/o bright ones peeved,
how can we know deceiver from deceived?

19

thomas 02.17.10 at 3:57 pm

The distinction between deception and error here is not quite the same as, but is similar to, the distinction that Dennett spends the first chapter of Elbow Room on — the difference between events being deterministic and being controlled by a bogeyman.

20

JoB 02.17.10 at 4:03 pm

If there is a Lord, he is most certainly malicious and … everything but subtle.

Luckily, we’re on our own.

21

Platonist 02.17.10 at 4:34 pm

Is “Mond” (sans “e”) a reference to Newtonian theory, or just a typo? Pedantic, sorry. Too bad Leibniz was last week, you could have used “trompe le monad.”

“If it turns out tables and chairs are really made of atoms and quarks, it doesn’t follow that there really aren’t tables and chairs, after all. What has been discovered is just what the tables and chairs really are. (Get it? Sort of?)”

This makes perfect sense, in that it means we have not thrown out our ordinary understanding of objects but added to it — increased the resolution, as it were. But it doesn’t account for the very real sense in which we have overturned prior conceptions. The ordinary understanding of a chair is, in its ordinary usage, incompatible with the atomic version: if it’s true a chair is mostly empty space, then it is false that it is a hunk of stuff. The world is, in fact, less solid, less well defined, less secure than we had assumed.

So “there aren’t really chairs” is a false alternative; rather, chairs aren’t what we thought they were.

22

Bloix 02.17.10 at 4:44 pm

“it sure is tempting to draw a distinction and talk about the coin ‘tricking’ the coinflipper.”

It’s my suspicion that a by-product of consciousness is the predilection to impute consciousness to things that don’t have it. We have evolved to be able to understand that other people, whose consciousnesses are unavailable to our direct perception, nonetheless have them. I suspect that this evolutionary development spills over onto other things, so that people are inclined to infer consciousness in all sorts of objects — either that they are literally conscious, or that they have spirits that embody their consciousnesses.

23

M 02.17.10 at 4:57 pm

I don’t think it’s even true to say that tables and chairs are “mostly empty space.” Just as “tables and chairs” are defined by the macroscopic phenomena that we put our pasta bowls and butts on, respectively, the foundational definition of solidity or emptiness is the quality we find inhering in tables or skeletons. Saying a table is mostly empty space is like saying New York is mostly uninhabited because most spatial coordinates in it aren’t physically occupied by human beings.

24

Keith 02.17.10 at 5:04 pm

The Matrix comparison falls apart, due not so much to the philosophy as to the laziness of the writers. The Wachowski Bros. fall back into the anthropomorphic assumption, making the villain responsible for the world’s evils (and even the world itself) rather than a player in the world, like the heroes.

It would be an apt comparison if it turned out that when Neo, Morpheus and the gang woke up, they discovered that the Matrix was a holographic simulation that had been running on autopilot and that its creator was them, that they voluntarily plugged themselves in and went to sleep, forgetting their dream world was just a dream.

Of course, this would make Agent Smith and the other programs thoughtforms, which would open up another existential problem: does a self-aware tulpa have agency?

25

Platonist 02.17.10 at 5:06 pm

“Saying a table is mostly empty space is like saying New York is mostly uninhabited because most spatial coordinates in it aren’t physically occupied by human beings.”

An apt comparison–and I agree completely. However, here my point was simply that the atomic view is incompatible with the ordinary view. And in this respect, it amounts to the same: to say the ontology of a table is like that of a city is equally at odds with the common view that the basic reality of things is their materiality and, specifically, their material unity.

A better ontology, like the one you imply in your example, is better precisely because it can account for both levels of experience, while the common view is inferior because it excludes the other.

26

Dave Maier 02.17.10 at 5:27 pm

I haven’t read the Chalmers paper, but given his Cartesianism about the mind generally, I’d be surprised if he really had the right to any recognizably Davidsonian response. It’s true that in the Matrix situation, our beliefs turn out to be mostly true, but the reason for this is not well given as that “my beliefs are about the things that they tend to be caused by” (as you note in calling this a “very crude approximation”). That sounds like Putnam, and as Chalmers notes, Putnam’s anti-skeptical argument begs the question and thus fails.

I do think it’s helpful to compare The Matrix with The Truman Show. Truman’s interlocutors are universally attempting to deceive him. But in the Matrix scenario, we are all in the Matrix, while only Truman is deceived. For Davidsonian purposes, this is huge, because this means that I can triangulate with my fellow Matricians (“look, a woman in red”) in a way/to an extent that Truman cannot (and which is not captured by the very idea of a “causal theory of content”).

For Davidson, the content of beliefs is not simply given by causal relation with the world, but instead fixed in interpretation. This requires that the interpreter share a perceived world with his interlocutors, so that they may fix belief and meaning at the same time. This is what is lacking in the Cartesian scenario but present in the Matrix, and partly for Truman too (but only partly).

This is why we can say that our (Matrix-) beliefs are generally true: they’re about what we take them to be caused by in fixing their content via mutual interpretation. But in a way Truman isn’t even speaking the same language as his interlocutors (in the relevant sense): they’re not interpreting in good faith and so never fix shared meanings in that way.

Still, what makes the Truman scenario more disturbing is less that more of my beliefs are false (after all, it’s still true, on Davidsonian reckoning, that most of even poor Truman’s beliefs are true) than the perfectly understandable concern that his “closest friends,” and even his “wife,” are conspiring behind his back to deceive him. This has less to do with the metaphysics of epistemology or semantics than with morality.

Oh, and I agree w/M @23 about chairs/empty space.

27

kid bitzer 02.17.10 at 5:39 pm

i think i’ll be going before we fold our arms and start to weep
i never thought for a moment that human life could be so cheap
‘cos when they finally put you in the ground
they’ll stand there laughing and tromp le mond down

28

shah8 02.17.10 at 5:53 pm

Maybe we should take steps beyond The Truman Show and go to Invisible Man or some other literature of its like.

We all deceive each other with our descriptions of the world, and much of that, really most, intentionally. Yet deciding what is a chair is made up of ideas more than it was ever made up of atoms–utility describes what a chair is, after all. Saying that it’s empty space is merely remarking on a universal fact. Yet the world we all live in is beset by demons of our fellows altering the words and meanings that guide us.

29

Hidari 02.17.10 at 5:58 pm

‘Sorry, I didn’t mean to suggest that we should commit ourselves, without argument, to an obviously extreme form of reductionism. Take ‘really are’ to be elliptical for ‘really are made of’, which is just a longer-winded version of ‘are made of’.’

Well, obviously rephrased like that I couldn’t have any problem with it. My real issue (and I went into it in a lot more detail in my ‘now lost forever’ post…the man from Porlock has nothing on my shitty laptop….) was with, as you say, ‘greedy reductionism’. This normally manifests itself in terms of the social sciences. As Kenan Malik pointed out (in his book, Man Beast and Zombie, which is actually a very good book) some people seem to think that because (e.g.) the United Nations is ‘really’ composed of its individual member states, or the United States is ‘really’ just a bunch of people on a specific rock, that therefore the US and UN don’t actually exist! Therefore, any science (e.g. sociology, political science) that begins with the assumption of, say, classes, nations or organisations is somehow automatically dubious from the start.

In psychology, of course, we have the no less flawed assumption that since all behaviour can, in theory (and this phrase is apt here), be reduced to brain states, therefore human beings are nothing but brain states… Andy Clark recently wrote a book pointing out that this is nonsense.

Incidentally, since we are on the subject of the Matrix, and since I was accused of being anti-Heideggerean a while back, here’s an excellent essay by Dreyfus on these very issues.

http://www.dvara.net/HK/matrixessay5.asp

(originally on the Matrix website but that seems to have gone awol in a slightly mysterious ‘was it all an illusion?’ stylee)

30

Gareth Rees 02.17.10 at 6:16 pm

The idea that solid objects are mostly “empty space” is based on the Bohr–Rutherford model of the atom, which imagined tiny electrons orbiting the nucleus like planets orbiting the sun. This model was superseded back in the 1920s—at some point it would be nice if popularizations caught up.

31

alex 02.17.10 at 6:39 pm

@30 – propose another definition of ‘solid’, ‘object’ and ’empty space’ that fits the current paradigm, then.

32

Platonist 02.17.10 at 6:47 pm

“This model was superseded back in the 1920s”

Fine, but the turtle’s still not turtles all the way through, so it’s still at odds with the common sense model that “real” = physical, solid, unity.

33

john c. halasz 02.17.10 at 6:51 pm

How many pseudo-problems can dance on the head of a pin?

34

lemmy caution 02.17.10 at 6:53 pm

There are religious traditions where life is actually a test and a mere illusion when compared to heaven. The test isn’t any different than what people call real life, but the point is that you are supposed to act differently. It makes a difference to believers that God is a deceiver.

Life within a computer simulation could change at any time. The program could just stop or they could just add dragons or something. It would make a difference if we found out that life was a computer simulation.

35

kid bitzer 02.17.10 at 7:05 pm

“i felt so vertiginous when electrons were tiny planets, orbiting a nuclear sun at astronomical distances. how insubstantial it all felt! but then when i learned it was really all just delocalized probability distributions, i got back that cozy comfy feeling of solid earth under my feet. now my tables and chairs work just the way i thought they did in the nursery! (so round about, and round about and round about i go.)”

36

salacious 02.17.10 at 8:53 pm

“Life within a computer simulation could change at any time. The program could just stop or they could just add dragons or something. It would make a difference if we found out that life was a computer simulation.”

Life in the real world could do this too. The problem of induction has nothing to do with whether we are in a simulation or not.

37

bad Jim 02.18.10 at 2:26 am

The point would seem to be that we never do observe dragons or any generally public miracles. The most capricious things we observe which are not either animals or human creations are storms, and even they seem to follow definite rules. If there are demons in this world, they are the most boring, repetitive, obsessive-compulsive demons imaginable.

38

Richard Chappell 02.18.10 at 2:33 am

John – “by ‘the matrix’, Chalmers means something like … the hologram hypothesis. He doesn’t mean a situation in which Agent Smith is seriously messing with our heads.”

What makes you think this? Chalmers’ discussed “Matrix Hypothesis” is defined as follows: “I have (and have always had) a cognitive system that receives its inputs from and sends its outputs to an artificially-designed computer simulation of a world.” Notice that he explicitly invokes the ‘Creation hypothesis’ as part of this story, which corresponds to the aspect of The Matrix whereby the Matrix-world is intentionally created by other beings. (Agent Smith “seriously messing with our heads”, as you put it; except, if memory serves, that Agent Smith was himself merely a being within the Matrix, not the actual Creator AI.)

Of course, if Chalmers’ story is right, then we aren’t really deceived by this creation process. Unless you’re suggesting that Agent Smith is ‘messing with our heads’ and deceiving us about the counterfactual-supporting causal structure of the manifest (Matrix) world? Because I think that’s the only thing that would count as genuine deception, on Chalmers’ account. Merely having an external agent who envatted us [from birth] so as to prevent us from exercising agency in his world just does nothing at all to undermine the reality of the world that he shifted us into.

So while you can certainly distinguish deception from mere error, it’s not clear to me that this distinction is particularly important, or that it is doing any real work in these sorts of cases. (I say a bit more about what I take to be the more important distinctions here in my post: ‘Hallucination, Virtual Reality, and Reality‘.)

39

John Holbo 02.18.10 at 3:10 am

“How many pseudo-problems can dance on the head of a pin?”

John, let’s start with just one: I actually think the problem discussed in the post isn’t a pseudo-problem because it’s part of the general problem of the nature of intentionality. What determines the content of mental states? I would have thought that you, with your interest in the phenomenological tradition, would find the topic of intentionality interesting, not ‘pseudo’. Am I wrong about that?

40

jholbo 02.18.10 at 3:47 am

Hey, it’s funny, I was thinking about ‘tromp le monad’ but I was also misremembering the title of the Pixies album. I had a distinct – but it turns out wrong – recollection that the title was misspelled as Tromp Le Mond. Ah well.

41

Zamfir 02.18.10 at 11:11 am

“This model was superseded back in the 1920s”

Fine, but the turtle’s still not turtles all the way through, so it’s still at odds with the common sense model that “real” = physical, solid, unity.

Nah, the problem is that we somehow got in our heads that electric fields are not “solid” but atomic nuclei are.

42

Zamfir 02.18.10 at 11:12 am

Brr, layout. In the above, the first two blocks are cited, the last line was my addition.

43

bigcitylib 02.18.10 at 11:30 am

“Probably some fine, upstanding academic epistemologist has made this ‘deception not the same as error’ point already”

Probably J.L. Austin, in S&S.

Philosophers are still prattling on about this stuff? The field is still stagnant, then.

44

John Holbo 02.18.10 at 12:24 pm

“Probably J.L. Austin, in S&S.”

Nah, not in there.

45

John Holbo 02.18.10 at 12:41 pm

Actually, that was a bit glib. In case you actually aren’t aware, “S&S” is sort of late ordinary language-style philosophy and – for better or worse – it is quite different in character from the sort of thing Chalmers writes, and I write (although of course there are probably similarities as well). If you are saying that you think Austin was actually pretty good, and everything since has been downhill … well, I would have to disagree somewhat. But probably you wouldn’t have picked the verb ‘prattle’ if you thought Austin was awesome. So maybe you should give it a second look. (It’s sort of like saying you are pretty sure this rock n’ roll the kids are into is just a bunch of same-old because you heard Elvis singing “Hound Dog” in 1956 and it wasn’t that great. 1956 was a while ago, actually, if you’ll check the calendar.)

46

Hidari 02.18.10 at 1:34 pm

Funnily enough I was just thinking about Austin who, if memory serves (and it might not) does indeed make the basic point of the difference between being wrong and being lied to. If you don’t like Ordinary Language Philosophy (and you should) then perhaps the discussion of S and S in Putnam’s The Threefold Cord: Mind, Body and World would be more to your taste.

Of course Putnam was making a point about ‘sense data’, not The Matrix, but the basic point being made is the same: you can of course be in error about things you see (optical illusions etc.) but can you be fundamentally in error? In other words can you really, permanently be wrong about everything, as the character Neo is in the Matrix? Austin thought you couldn’t, and Putnam came round to agreeing with him.

I should have perhaps pointed out that the Dreyfus paper linked to in my post is a sorta/kinda response to the Chalmers paper linked to in the original post: they are well worth reading together as exemplars of differing approaches in philosophy.

47

Zamfir 02.18.10 at 2:03 pm

Probably J.L. Austin, in S&S

I completely read that as “Jane Austen in Sense and Sensibility”

48

John Holbo 02.18.10 at 3:00 pm

“I completely read that as “Jane Austen in Sense and Sensibility”.

Well, it IS a pretty good author-title pun. Quite intended, I am sure.

49

John Holbo 02.18.10 at 3:05 pm

“Funnily enough I was just thinking about Austin who, if memory serves (and it might not) does indeed make the basic point of the difference between being wrong and being lied to.”

I don’t remember that he makes this basic distinction, although I’m not surprised. He loves his distinctions. But I know that his proposed dissolution of the skepticism issue is unconvincing – utterly different than Chalmers’ or mine, at least. So I’m certain he couldn’t have made an application of the distinction along the lines that I do in the post. But if I’m misremembering … well, it’s always possible to be radically in error about things, I say.

50

Hidari 02.18.10 at 3:47 pm

‘But I know that his proposed dissolution of the skepticism issue is unconvincing.’

Well we will have to disagree about that.

51

Platonist 02.18.10 at 4:16 pm

@40: Oh! Now the ‘e’ is missing in trompe! And yet you get l’œil right, which I can never remember how to spell.

Appropriately, that Pixies album begins with:
“Why do cupids and angels continually haunt her dreams like memories of another life?” (adding, to further the meta-ness: “is written on her shirt in capitals”)

And, in honor of chairs composed mostly of the stuff, it also includes a paean to the existence of space, called “Space (I believe in).”

52

bigcitylib 02.18.10 at 4:26 pm

Actually, you often find Sense and Sensibilia in the English Lit section of your used book store. You also find “How To Do Things With Words” in the self help/powerful speaking section. A little joke on JL’s part, perhaps.

And actually Davidson (or Searle, or Putnam) et al owe a fair bit to Austin, even if they have to tech-up the language to make it all look more profound. Also to hide the origins of their argument because Austin or LW don’t use enough crazy fake mathematical notation to be considered real philosophers anymore.

53

bigcitylib 02.18.10 at 4:39 pm

Hidari, The Matrix just IS a bunch of sense data that don’t correspond. And John Holbo, yes I do think Austin and LW are awesome, and that it’s mostly been downhill from there. Swap “qualia” for “sense data” for “sensation” and hey presto! people think you’ve done other than repackage Locke/Berkeley/Hume.

54

kid bitzer 02.18.10 at 4:54 pm

“Swap “qualia” for “sense data” for “sensation” and hey presto! people think you’ve done other than repackage Locke/Berkeley/Hume.”

ah. well, if you’re someone to whom everything looks the same, then there’s a good chance that everything will look the same to you.

55

John Holbo 02.18.10 at 4:58 pm

bigcitylib, what makes you think anyone – Searle, for example, Austin’s student – is hiding an Austinian/Wittgensteinian influence? This is pretty well known and not considered a source of shame, that I’m aware of. What I was objecting to was the hint that nothing has changed since 1955, not the assertion that 1955 came before now, and influenced what exists now. Also, your sense that it would be possible to swap in ‘qualia’ for ‘sense data’ for ‘sensation’ and wow the crowds is refreshing in its dramatic simplicity but, I think, likely to fail in practice.

56

bigcitylib 02.18.10 at 5:17 pm

John Holbo, actually you are correct about the first generation descendants of the original OL people. These days, however, arguments about contextualism in semantics, just for example, go on with little apparent knowledge of the historical roots of the arguments being deployed (IMHO).

As for the sensation/sense data/qualia thing, Hume and Co. thought that “evil demons” might be fooling them with false sensations. In The Matrix, evil robots fool their human slaves with non-veridical qualia. I would simply note the underlying similarity of the concepts involved. (By the way, even this complaint is from Austin originally, who noted the sense data = sensation connection. Apparently, maybe in the 1960s, the names got changed again and Qualia came onto the stage.)

57

Treilhard 02.18.10 at 6:06 pm

“Hume & Co. thought that ‘evil demons’…”

Apparently you think that all of Early Modern philosophy can be found in Descartes’ Meditations, in which case kid bitzer is on the money, and everything really does look the same to you.

58

Richard Chappell 02.18.10 at 11:43 pm

Huh, looks like my question (#38) got lost in the haystack. Sorry to repeat myself, but I really would be curious what John’s basis is for thinking that Chalmers didn’t mean to extend his account to Matrix-like cases where other agents are “messing with our heads” (so to speak).

Here’s a relevant section of the paper:

Evil Genius Hypothesis: I have a disembodied mind, and an evil genius is feeding me sensory inputs to give the appearance of an external world.

This is Rene Descartes’s classical skeptical hypothesis. What should we say about it? This depends on just how the evil genius works. If the evil genius simulates an entire world in his head in order to determine what inputs I should receive, then we have a version of the God Hypothesis. Here we should say that physical reality exists and is constituted by processes within the genius. If the evil genius is simulating only a small part of the physical world, just enough to give me reasonably consistent inputs, then we have an analog of the Local Matrix Hypothesis (in either its fixed or flexible versions). Here we should say that just a local part of external reality exists. If the evil genius is not bothering to simulate the microphysical level, but just the macroscopic level, then we have an analog of the Macroscopic Matrix Hypothesis. Here we should say that local external macroscopic objects exist, but our beliefs about their microphysical nature are incorrect.

The evil genius hypothesis is often taken to be a global skeptical hypothesis. But if the reasoning above is right, this is incorrect. Even if the Evil Genius Hypothesis is correct, some of the external reality that we apparently perceive really exists, though we may have some false beliefs about it, depending on details. It is just that this external reality has an underlying nature that is quite different from what we may have thought.

59

jholbo 02.19.10 at 1:14 am

“what John’s basis is for thinking that Chalmers didn’t mean to extend his account to Matrix-like cases where other agents are “messing with our heads” (so to speak).”

Sorry for being unclear, Richard. I agree with you that Chalmers does extend his account to Matrix-like cases, but what he’s really interested in (this is what I meant to say) are the hologram-type cases. So I think he ends up saying something about the Matrix case that’s just wrong, and I’m diagnosing the error as due to basic uninterest in seriously considering that sinister forces are behind it all.

A shorter version: if the world is actually designed, then certain Arguments From Design are actually good arguments, and a lot would follow from that.

60

jholbo 02.19.10 at 1:15 am

“As for the sensation/sense data/qualia thing, Hume and Co. thought that “evil demons” might be fooling them with false sensations.”

I have to agree with Treilhard, bigcitylib: it seems like maybe the reason it all looks the same to you is that you are not very familiar with the details.

61

bianca steele 02.19.10 at 1:21 am

Here we should say that physical reality exists and is constituted by processes within the genius.

I assume this is meant to parallel the computational case where the universe is a computer (physical reality exists and is constituted within the computer’s memory)–I think this is based on a deep misunderstanding of the computer, which I hesitate to mention because it seems more likely I’m missing the point. For example, Chalmers discusses Wolfram’s suggestion that at the deepest level the physical world is best thought of as a cellular automaton: he immediately rewords this as thinking of the world as being made up of ones and zeroes. Well, I guess for his purposes, “being made up of ones and zeroes” is the most perspicuous conceptualization.

For my purposes, it isn’t. Although I’m happy to say “constituted in the computer’s memory” if we need a place for them to be constituted somewhere, this would simply be following the pattern (established for this question) that anything real has to be constituted somewhere, not how I would conceptualize it if doing so “from scratch” (or even if I could choose any pattern I liked from anywhere).

It doesn’t really matter. The wording just feels “off” to my ear. Probably there is a reason it has to be that way.

62

noen 02.19.10 at 5:27 pm

Various responses randomly generated.

“We’re totally fictional.”
This is Lacan in a nutshell. Yes, our phenomenological experience of ourselves is a fictional construct. This is new?

“the world is a computer simulation”
This has been disproved by John Searle’s Chinese Room argument. Turing machines (what is meant by “computers” here) can only process syntactic information. No computer program can ever understand semantic information (they cannot understand Chinese). The universe contains entities with semantic understanding. Therefore the universe cannot be a computer simulation.

“I pick the bug because, presumably, it doesn’t intend to deceive.”
Do insects that have evolved camouflage or other deceits possess intentionality? I would think not. Is our subjective phenomenological experience of ourselves likewise an evolved deceit?

“If we were to discover that we were (and always had been) in The Truman Show, we might well go insane. “
Indeed, we might even fly planes into IRS buildings when we come to believe that our entire political structure is a dog and pony show designed to extract the most wealth from us.

There are some serious storm clouds on the horizon kiddies.

63

noen 02.19.10 at 6:20 pm

Another thought:

Hidari @ 29
“…some people seem to think that because (e.g.) the United Nations is ‘really’ composed of its individual member states, or the United States is ‘really’ just a bunch of people on a specific rock, that therefore the US and UN don’t actually exist!”

Searle’s notion of collective intentionality serves to explain how nations like the US are constructed and how they are maintained by the institutional facts that sustain them. Chairs and tables are brute facts that are constructed from the collective behavior of the atoms and molecules they are composed of.

Both chairs and tables and nation states and constitutions are objectively real and observer independent but chairs and tables are epistemically objective whereas nation states and constitutions are not.

64

bianca steele 02.19.10 at 6:22 pm

@62
Neo’s situation is actually a neat but thorough reversal of the Chinese Room scenario, I think. I also think you are probably picking up Chalmers’ equivocation between two different definitions for the word “simulation” (it usually doesn’t mean a virtual reality type thing–there is no reason the inputs to the VR have to be generated by a computer).

bug
I just had a flash of ants trying to diagnose software bugs. Why could this never work?

65

noen 02.19.10 at 7:19 pm

@64
“there is no reason the inputs to the VR have to be generated by a computer”

If the VR deception can be formalized within some syntax, that is, if it is possible in principle for it to be generated by a universal Turing machine, then we cannot be a part of the “Matrix” (the virtual reality deception), because we possess intentionality and can understand semantic information, something that no universal Turing machine can do.

If however there exists a reduction of the “VR deception” along the lines of Wolfram’s “our reality is really just the result of the low level activity of cellular automaton” then that would be a brute fact of how the world works. There would then be a story to be told of how the underlying microstructure of the world can give rise to conscious thinking entities with intentionality.

I think the problem begins elsewhere. I think it begins when Chalmers tries to count the entities in the world and gets to two, when Leibniz counts and gets to one and Frege gets to three. I agree with Searle that we shouldn’t try to count.

66

Gareth Rees 02.19.10 at 9:35 pm

we possess intentionality and can understand semantic information, something that no universal Turing machine can do.

I’m glad to see that this long-standing problem has been solved.

67

noen 02.20.10 at 12:36 am

@66
The Chinese room argument directly attacks the claim that a Turing machine could ever be constructed that is conscious (understands Chinese). I think it succeeds. If so, then it cannot be the case that the universe is a giant simulation performed on a Turing machine in which we are but embedded software programs, because at least one entity exists that cannot be actualized in any Turing machine, namely, me. Just as simulated weather is not weather, simulated consciousness is not conscious.

Therefore any deception that is taking place must take the form of physical brains in physical vats existing in some as yet unknown universe. Not only must we humans be jacked into the Matrix, so must any alien or artificial intelligence. Any Agent Smiths that exist only within the Matrix are dead simulations and not actual disembodied minds living in virtual reality. Mr. Smith is at best a zombie.

All of this confusion emanates from the dualistic assumptions of David Chalmers and others. Once you decide that the world consists of the mental and the physical it then becomes easy to think of ways in which the mental is deceived by the physical, because there is no real connection between the two. This same confusion is also expressed in the zombie problem: the idea that there could exist people who look and behave exactly like us except they lack consciousness. This is dealt with in the same way as above, by noting that there exists at least one person, myself, who is self-conscious. Descartes got at least one thing right. We cannot be deceived about our own consciousness because to doubt one is conscious is to be conscious.

It’s tough work giving existential support to the whole of Being but someone’s got to do it. ;)

68

John Holbo 02.20.10 at 1:29 am

““I pick the bug because, presumably, it doesn’t intend to deceive.”
Do insects that have evolved camouflage or other deceits possess intentionality? I would think not. Is our subjective phenomenological experience of ourselves likewise an evolved deceit?”

Sorry, I thought this was clear: I picked the bug because it probably doesn’t intend anything and therefore, by extension, doesn’t intend the one thing it probably would intend, could it intend anything: namely, to deceive. Being a stick bug, by hypothesis.

69

Treilhard 02.20.10 at 1:50 am

Just as simulated weather is not weather, simulated consciousness is not conscious.

Just as simulated weather is not weather, a simulated text is not a text!

More seriously, the Dreyfus paper that Hidari linked to has proven to be a fun read. I am curious though about the moral implications of ‘waking up from the Matrix’. One underlying assumption of the film is that it is in some sense better to wake up than remain in the Matrix. Dreyfus refers to this as the Buddhist/Platonic reading of the film [though for me it sounds more like Scientology], but notes the peculiar way in which, contrary to the traditional religious reading, Neo has to reenter the dream world in order to fully realize what it means to wake up from the dream. But isn’t it a bit naive of Neo, having lived 30 years in a hard reality, to wake up in a harder reality and just take it for granted that this new world is the hardest reality [it sets up such a convenient dialectic, it must be the hardest!]? Why shouldn’t Neo just assume that it’s “turtles all the way down”, and why is he so obviously a hero for giving us hardness in exchange for pleasantness?

For Dreyfus, Neo’s journey has nothing to do with experiencing the harder real, but is instead just a way to break up bourgeois monotony, in which case, I’d rather Neo just sell all of his possessions and, I don’t know, go on a road trip or something.

70

Richard Chappell 02.20.10 at 3:16 pm

John (@59) – thanks for the clarification. Though I do think it would be odd for anyone who really accepts Chalmers’ core arguments to turn around and give a different story when “sinister forces” are involved. As I put it in my first comment, “Merely having an external agent who envatted us [from birth] so as to prevent us from exercising agency in his world just does nothing at all to undermine the reality of the world that he shifted us into.” I don’t see the motivation for your alternative view.

Think of it this way. If Chalmers’ core metaphysical claims are right, then the Matrix scenario is basically equivalent to the following case:

(Fall From Grace): Generations ago, a bunch of souls were hanging out in heaven, until one (call him ‘God’) decided the others posed a threat. So God creates a physical universe, to which he banishes the other souls (by connecting them up to physical bodies, that will determine their future thoughts and experiences). Whenever the physical humans reproduce, another soul is created in heaven to serve as its non-physical mind. For everyone alive today, physical reality is the only reality they’ve ever known: their non-physical soul has been tied to a physical body from the moment it came into existence.

Two questions: is Fall From Grace a skeptical scenario? If you agree with me that it clearly isn’t, how is The Matrix relevantly different?

(The background challenge here is to defend your claim that Matrix vats have “the proper function of deceiving their inhabitants”, rather than, say, the proper function of exiling their inhabitants.)

71

bianca steele 02.20.10 at 5:08 pm

@noen,
I’m having a tough time understanding your posts. When did it become logical for definitions and quantifications to slip and slide from one sentence to the next? It would take an entire textbook, plus indoctrination in the way of thinking required to understand the textbook, to answer why you can’t just equate “expressible in language” with “formulable in some computer algorithm” with “thinkable,” and why to the extent you can it doesn’t really matter any more than the thing about empty space in chairs.

I’m remembering that Turing (see Davidson’s essay) actually proposed a contest between a computer and a woman. Is the woman supposed to be illogical (slipping and sliding her definitions), or is she supposed to be stubbornly, earthily logical? I wonder.

72

bianca steele 02.20.10 at 5:16 pm

And suppose we’ve been misunderstanding the reason “empty space in chairs” matters. Suppose we were really supposed to be thinking about the fact that “chairs burn–and turn to ash!” or “wood can be distilled–and turned to a liquid!” I mean, really, how long have we been supposed to be misunderstanding that? Where are those people who supposedly–we’re now imagining–really understood all along? And how the heck are we supposed to think about these alternative explanations, not to mention their implications, if we are spending all our time imagining how those “empty space” explanations are really, really important, even as the space within which the importance could reside gets narrower and narrower?

And, then, we could imagine that the reason we don’t is that all the secrets of the universe are in the possession of the stage magicians’ profession. There are lots of novels about magicians–The Prestige, Robertson’s Deptford Trilogy–so why not? Do we dare cross them?

It would be nice if we could find some middle ground between “rigor” and “imagination,” somehow. John H, any cites from the philosophical literature that might help here?

73

Gareth Rees 02.20.10 at 8:23 pm

Noen: I think it succeeds. If so …

Ah, I missed this elegant argument in favour of Searle. Have you considered publishing?

74

noen 02.20.10 at 9:53 pm

@ bianca steele 71
“I’m having a tough time understanding your posts.”

I’m sorry if I was unclear, I’ll try to do better. I’m a lay person. Most of the time I just read. Sometimes I try to be helpful, or funny, or to pose questions that might lead to further discussion. Rarely, I suffer from the delusion that I know something. In this thread I’m pretty confident that the Chinese Room argument refutes the computational model of consciousness. From that I think it necessarily follows that the universe-as-computation hypothesis cannot be true, since we are in it. So if everyday experience really is a VR deception then there must be a “true” physical reality in which we are brains in vats. As Richard Chappell points out this is not at all “a skeptical scenario” as it violates Occam’s razor. Therefore I think we are safe in believing that we are not being deceived and that any misconceptions on our part are merely errors.

Under the section “The Computational Hypothesis” Chalmers just waves off the objection that reality cannot ultimately be composed of bits because bits must be implemented in some physical device. In doing so he assumes it is possible for the program that creates our illusion of reality to run without being realized in any hardware. It just “is”. We are back to mind/body dualism. Chalmers assumes that it must be possible for information to exist “out there” somewhere, just… floating around, and then uses this assumption to dismiss objections out of hand without ever offering a rational explanation why.

In the next section “The Mind-Body Hypothesis” Chalmers again simply dismisses all objections. “Even if contemporary science tends to suggest that the hypothesis is false, we cannot rule it out conclusively. “ I think we can. The problems with dualism are numerous. I don’t see him offering here any sort of justification for retaining it. Other than perhaps it gets him where he wants to go.

The fourth section, “The Metaphysical Hypothesis” simply combines the errors of the previous sections. About this he says: “Even if we accept it, most of our ordinary beliefs about the external world will be left intact.” Yeah, except for the part where everything we believe is a lie.

I don’t think that, if I am a brain in a vat, I am at all helped by the counsel that all of my beliefs are true… for my dream world. No, all of my beliefs are just as false as the hallucinations of a schizophrenic are false. They may seem true to me and I may have even discovered facts that are true for my dream world, but they are not ultimately true because they could be changed at any time on the whim of whoever or whatever programmed the computer I am being deceived by.

“The Matrix as Metaphysics” is full of fail.

75

noen 02.20.10 at 9:59 pm

@ Gareth Rees 73
Why would I duplicate Searle’s argument and try to present it as mine? If you have a definitive rejoinder I’m all ears.

76

Richard Chappell 02.20.10 at 10:23 pm

@noen (74) – it’s worth distinguishing a causally stable VR world from chaotic “hallucinations”. (See Chalmers’ section on the “Chaos Hypothesis”, and why this qualifies as a genuine skeptical scenario, unlike the Matrix.)

You suggest that the manifest facts (internal to the VR world) “are not ultimately true because they could be changed at any time on the whim of [the Creator]”. This seems confused. Just because a being has the power to intervene and upset a stable causal order, doesn’t show that there isn’t a real, stable causal order here in the meantime. Again, just suppose that an omnipotent God existed, who could fiddle with the physical world on a whim. That wouldn’t show that the physical world wasn’t real.

P.S. re: Searle, see ‘The Homunculus in the Chinese Room’ for an explanation of why his argument is misleading.

77

Gareth Rees 02.20.10 at 11:47 pm

Noen: I’m just snarking at your bald assertion that “we posses intentionality and can understand semantic information. Something that no universal turing machine can do” as if this were some kind of settled point of scientific fact.

78

Farren 02.21.10 at 12:46 am

@Noen: Searle’s Chinese Room shows nothing of the sort. It’s a stupid thought experiment that plays to prejudice without demonstrating what it purports to demonstrate. We are asked to imagine a situation which bears no resemblance to a computer AI whatsoever and does not illustrate an understanding/emulation gap.

Unlike, say, Einstein’s thought experiments involving trains, which ask you to suspend some basic aspects of engineering in a manner that does not obviate the point being made, Searle’s room asks us to suspend aspects of computing generally and conversational processing specifically that would otherwise essentially nix his point.

For starters, he sets up the system composed of the room, book and clueless processor (the man in the room) as an analogy for some kind of AI, then claims that the processor’s lack of understanding shows that something can emulate without the internal experience of understanding. This perfectly obvious bait and switch is akin to saying that because your amygdala doesn’t “understand” your feelings (it being only a part of the system that is the totality you), you, the person, don’t have any real understanding or consciousness. I’ve read responses to this “systemic” criticism and found none of them a satisfying rebuttal.

But there are other, equally obvious failings. He proposes an algorithm, encoded in a book, which would allow a man to produce apparently sensible answers to statements in a language he does not understand. Anyone who’s spent any time around AI literature and software (I’m a dev who got into the business because of my fascination with it) will know that one of the great difficulties in it is context-sensitive information. In an old book on logic I have somewhere by Alfred Hodges, he discusses a brief statement along the lines of “Mary had a balloon. She loved that balloon. But the wind plucked it up and carried it into the tree. Mary cried and cried”. He points out that there is a whole bunch of information we infer from the style and subject matter that simply isn’t in the para. We infer that Mary is probably a child. That the wind probably didn’t carry her into the tree with the balloon and so on and so forth. Aspects like the children’s-storybook style of writing and the emotional attachment to the balloon tell us these things, but not the text itself.

Similarly, the use of pronouns necessitates inference from previous statements, idioms require the recognition of non-literal meaning, and so on. Searle’s book-algorithm would require a man not just reading a book and following instructions, but writing a lot of things down in the book, then referring back to them. He would even, to emulate the understanding of the language he does not understand, need to modify the algorithm in a recursive fashion. To top it all off, he would not be fooling anyone outside of the room if his statements came a week after the statements from the outside they corresponded to. And in order to emulate the massively parallel-processing activity of a Chinese-speaker’s brain in an apparently linear processing environment, he would need to flatten out its dimensions into a smaller number, a process that generally massively expands the requirement in the more limited set of dimensions.
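
To make that concrete with a toy sketch of my own (the names and rules are invented for illustration and have nothing to do with Searle’s actual setup): even resolving a single pronoun forces the rule-follower to write notes down and read them back on later inputs, so the “book” cannot be a static lookup table.

    state = {}  # the man's scratch notes, carried between sentences

    def process(sentence):
        words = sentence.lower().rstrip(".").split()
        # 1. Expand a pronoun using whatever was noted down earlier.
        if "it" in words and "it" in state:
            words = [state["it"] if w == "it" else w for w in words]
        # 2. Note a candidate antecedent for later sentences.
        for noun in ("balloon", "tree", "wind"):
            if noun in words:
                state["it"] = noun
                break
        return " ".join(words)

    print(process("Mary had a balloon."))
    print(process("The wind carried it into the tree."))  # 'it' resolves to 'balloon'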

So for the system of the room to emulate understanding, you would have to have a man reading, flipping backwards and forwards through the book, writing stuff down, altering instructions in the book and so on at close to the speed of light, in a blur of activity not unlike the chaotic activity of neural networks. The man and the book would be almost indistinguishable from each other in that situation and if he was doing his job properly it’s doubtful the man would retain any separate sense of identity from the thing he is interacting with at the limits of the speed of material things.

Searle’s Chinese Room is sophomoric gobbledygook and I am always appalled to see it referenced by otherwise intelligent and learned people (like Penrose in “The Emperor’s New Mind”) as if it illustrated anything philosophically interesting at all. It doesn’t.

79

noen 02.21.10 at 1:05 am

@ Richard Chappell76
“Just because a being has the power to intervene and upset a stable causal order, doesn’t show that there isn’t a real, stable causal order here in the meantime.”

“Real” means that something must possess an ontologically objective existence. If however my reality can be altered by another being at its whim then that means my reality is ontologically subjective. If God or Descartes’ demon can alter what I believe to be ultimate reality then it isn’t ultimate reality. There must exist a deeper reality whose rules permit them to change the rules of this reality. I have been deceived.

Your blog post is a restatement of the systems reply which says that it is the whole system of the Chinese room that understands Chinese. I’m just a cog in the machine. But what if I memorize the Chinese understanding program? I am given a piece of paper with a Chinese symbol written on it. I don’t understand what it means but I go through the operations of the program and it formulates a reply that shows it understands, but I still don’t understand Chinese.

80

Richard Chappell 02.21.10 at 1:32 am

Noen – I addressed that objection in the comments thread. In short: Searle conflates the ‘implementing’ and ‘realizing’ levels of cognition. Just as computation-implementing neurons give rise to a newly-realized mind (distinct from the implementing neurons), so we should expect that implementing a computation in your mind would give rise to a newly-realized mind (distinct from your implementing mind). We should not expect you to understand Chinese, any more than we would expect an ordinary Chinese-speaker’s neurons to understand Chinese. The whole argument rests on a levels confusion.

81

noen 02.21.10 at 1:45 am

@ Farren 78
(Re: The processor reply.) The Chinese Room is a philosophical thought experiment not a scientific one. As such it doesn’t matter if it would be physically impossible for any human to perform the operations needed in real time. Nor does it matter if the details of how a modern computer is physically constructed are not adequately reflected in the hypothetical. That isn’t the point. The real point is that syntax is insufficient for semantics. No Turing machine can ever be programmed to understand semantic information because all any Turing machine can do is manipulate syntax.

Oddly enough you bring up the problem of what Searle terms “the background”. Did you really think that one of the leading philosophers of language had never heard of that before? Perhaps you should be a bit more circumspect before you go about tossing around accusations like “Searle’s Chinese Room is sophomoric gobbledygook”. Educate yourself a little and try not to make everything a pissing match.

82

noen 02.21.10 at 2:07 am

@ Richard Chappell 80
“Just as computation-implementing neurons give rise to a newly-realized mind (distinct from the implementing neurons), so we should expect that implementing a computation in your mind would give rise to a newly-realized mind (distinct from your implementing mind).”

You are begging the question. The Chinese room experiment is formulated in order to show the problems with the computational model of consciousness. You cannot conclude that implementing the computation in my mind (memorizing the rules of the Chinese understanding program) would necessarily give rise to a new mind, one that understands Chinese, because you cannot formulate a conclusion by simply assuming your premise. The statement “Consciousness equals computation” is false.

None of this means that consciousness doesn’t arise from the underlying neural activity of the brain. The argument does not imply property dualism. We really don’t know how the brain does it but we can be certain that no Turing machine can.

And if no Turing machine can give rise to conscious thinking rational entities such as ourselves then neither can we be simulated conscious thinking rational entities living in a virtually constructed world.

83

Richard Chappell 02.21.10 at 2:59 am

noen – you’ve confused the dialectic. I’m not offering a positive argument for computationalism, I’m simply explaining why Searle’s argument fails. Searle argues that computationalism is false because your implementing the computation would not cause you, the implementer, to understand Chinese. This relies on the conditional, “if computationalism were true, the implementer would understand Chinese” (from which modus tollens yields Searle’s conclusion that computationalism is false). But the conditional is false. Computationalism does not imply that the implementer understands Chinese. So from Searle’s observation that the implementer wouldn’t understand Chinese, nothing follows about whether Computationalism is true or not. Maybe it is, maybe it isn’t. This argument is no help, so we’ll have to look elsewhere to decide the question.

84

Farren 02.21.10 at 3:33 am

@Noen

Unless I’ve misunderstood it, the original thought experiment was not presented as an illustration of the fact that a Turing machine could not have a mind, but that a computer could not. They are distinct concepts. Modern computers are based on the theoretical model of a Turing machine, but only because of practical limitations of current technology. Searle’s argument is against a computational theory of mind.

Searle’s answer to the systemic criticism boils down to “I could internalise all the elements of the system and be none the wiser”. In essence, “if I put the room, the man and the book in my brain and had it provide me with responses to Chinese, I would be none the wiser”. This isn’t even a response to the criticism. Because the exact same criticism would apply.

By “internalising” the system he means embedding it in Searle and merely vocalising its output and feeding it input, while remaining Searle with Searle’s consciousness. The fault becomes apparent when you consider embedding a Chinese person in Searle and giving him the same relationship to that person. The fact that he is merely a courier does not deprive the Chinese person in that thought experiment of understanding. No-one would accept the argument phrased thus. No, to rebut the criticism he must show that if he were to become the room and its occupants, he would have no understanding. His response does nothing of the sort.

His credentials as a thinker don’t impress me in the slightest. Perhaps he is a great thinker who has provided humanity with many other pearls of wisdom, but that thought experiment was not among them. And in fact an enormous number of equally thoughtful individuals with equally impressive credentials basically reject the experiment for much the same reason as I have above.

85

Farren 02.21.10 at 3:42 am

@Noen

Forgive the emotional tone of my earlier response. Not being an academic I don’t have the reflexively neutral tone that admittedly makes for more rational discussion.

It’s rare that a philosophical proposal actually offends me, but I think Searle’s does, not simply for being (to me) wrong for the most obvious of reasons, but for being wrong for seemingly obvious reasons yet held in high regard by people who are otherwise impressive thinkers. It’s not a pissing contest thing, nor do I have any animosity towards you for bringing it up :)

86

Dave Maier 02.21.10 at 3:48 am

Although I agree with Richard’s #80 that, as he puts it, “[Searle’s] whole argument rests on a levels confusion” of the sort he suggests, there is a sense in which noen’s #81 has a point against Farren.

[and maybe Richard’s #83, which I hadn’t seen when I wrote this, is more directly to the point than this, so if this is TL then DR]

It helps to know the history of the argument. Searle’s foil here is not Dennett or Hofstadter (though he does, thanks to his “levels confusion,” take his argument to apply to them as well) but Roger Schank. Like everyone else even at the time, Schank noticed that computers have problems with what Farren in #78 calls “context-sensitive information.” Schank’s program SAM (Script Applier Mechanism) was an attempt to deal with this. He thought that if the program were supplied with background information — say a description of what generally happens in a restaurant (people order, eat, and pay, but only if the food was okay) — then it could understand the “scripts” (e.g. about someone ordering a hamburger in a restaurant) that it was having trouble with before.
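
Roughly (and this is only a toy sketch of mine, nothing like the real SAM), the script idea looks like this: canned background knowledge about what usually happens fills in steps the story never states, so the program can answer “Did John eat?” even though the text only says he ordered and left.

    restaurant_script = ["enter", "order", "eat", "pay", "leave"]

    def apply_script(mentioned):
        """Assume the unmentioned script steps between the first and last mentioned ones happened."""
        first = restaurant_script.index(mentioned[0])
        last = restaurant_script.index(mentioned[-1])
        return restaurant_script[first:last + 1]

    story_events = ["enter", "order", "leave"]   # what the story actually says
    inferred = apply_script(story_events)
    print(inferred)               # ['enter', 'order', 'eat', 'pay', 'leave']
    print("eat" in inferred)      # True: supplied by the script, not by the story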

Searle thought that this wasn’t going to help, and wrote “Minds, Brains, and Programs” in response. He doesn’t mention Schank directly, I don’t think, but this is why he sets up the problem in terms of “strips of paper” (= “scripts”) shoved under the door of the Room for me to respond to. Now indeed it will not, which is why we never hear about SAM anymore. But Searle’s argument goes farther. He wants to show that even if it did help, and I could indeed respond appropriately because of the added background information, that wouldn’t show that I understood, given that all I’m doing is doing calculations.

This is why Farren’s #78 is a bit off point, even in speaking the truth. Searle is not denying the problem of context-sensitivity; he’s allowing its solution ex hypothesi to set up his thought experiment. Still, the penultimate paragraph of #78 does gesture at the result of Searle’s level confusion (“[t]he man and the book would be almost indistinguishable from each other in that situation”). But that last swipe is a bit harsh. I think the CR argument is kind of like Jerry Fodor’s somewhat analogously confused argument against adaptationism (which he calls “natural selection” – !?). It works against views that were no good to begin with, but isn’t nearly as significant as its author thinks.

I haven’t given an argument here — I think that Searle’s deep commitment to the Cartesian picture is more easily seen in other contexts (and not at all dispelled by his failure to endorse mind-body substance dualism in particular!) — but I think Hofstadter’s response (in The Mind’s I, I think, although his recent book probably has more) is pretty good. I do think that “the systems reply,” while mostly right, is a bit misleading, as I would rather speak not of the “system” but instead of the virtual entity we create in running the program (just as we ourselves come into being as minds due to our neural activity — which of course gives nothing whatever away to dualism in either case).

Again, this is all w/r/t the thought experiment in which the programming problems are all overcome, which I find almost inconceivable. But that “almost” is key in philosophy!

Holbo studied at Berkeley, so he knows Searle, but I think not as a student of his. Maybe he can say something here, yes?

87

Farren 02.21.10 at 3:49 am

@Noen,

and my rambling about the actual required mechanics of such a room was not so much a proof of its wrongness as an attempt to show that his bait-and-switch of placing the man on a pedestal in the experiment only works for some because the use of a human as a component plays to intuitive prejudices. But in any such real-world experiment, the human would actually form a continuous integrated circuit where no such clear separation could be made. In any event, that part is trivial. Searle has never provided a satisfactory reply to the systems response. The experiment is logically flawed in a very unambiguous way.

88

Farren 02.21.10 at 3:51 am

Oh cross post with Dave above. Interesting background. Yeah, I kind of went off on a tangent but didn’t really explain why. Which was that the couching of the experiment played to intuitional prejudices. But the systems criticism is sufficient to show its logical failings.

89

noen 02.21.10 at 4:20 am

@ Farren 85
“Forgive the emotional tone of my earlier response.”

Not a problem. I’m a lay person. Usually I don’t say much but this time I decided to chime in because I feel as though I can add something. I hope I have.

onward ho! –>

90

noen 02.21.10 at 5:51 am

@ Richard Chappell 83
“Searle argues that computationalism is false because your implementing the computation would not cause you, the implementer, to understand Chinese.”

@ Farren 84
“Unless I’ve misunderstood it, the original thought experiment was not presented as an illustration of the fact that a Turing machine could not have mind, but a computer.”

Yes, you’ve both misunderstood the argument. The Chinese room argument was designed specifically to deal with the Strong AI claim that “mental processes are computational processes over formally defined elements.” Here is Searle’s summation of the argument by axioms:

(A1) Programs are formal (syntactic).
(A2) Minds have mental contents (semantics).
(A3) Syntax by itself is neither constitutive of nor sufficient for semantics.

to the conclusion:

(C1) Programs are neither constitutive of nor sufficient for minds.

Computation is defined as syntax. It is the manipulation of symbolic representations by means of the rules of syntax. While symbolic tokens are implemented as physical tokens, voltage levels etc., they are not defined in terms of their physical features. Syntax is not physics. Therefore “computation is not discovered in the physics, it is assigned to it.” While a syntactic interpretation can be assigned to a physical phenomenon, that syntax is observer-relative, whereas the physical phenomenon is observer-independent.
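
To make the point concrete with a toy sketch of my own (not Searle’s, and the rules are invented for illustration): the rules below consult only the shapes of the marks, never what, if anything, the marks are about. Reading the output as addition is an interpretation we assign from outside.

    def rewrite(marks):
        marks = marks.replace("+", "|")   # rule 1: the '+' mark becomes a stroke
        return marks[:-1]                 # rule 2: erase one trailing stroke

    print(rewrite("|||+||"))   # '|||||'  ("3 + 2 = 5" under one reading; just marks under none)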

It follows then that “you could never discover that the brain or anything else was intrinsically a digital computer” even though you could assign a syntactical interpretation to it. When you dissect a frog’s eye no one talks about how it implements the frog’s computational program for vision. Once you know how the frog’s vision actually works the “computational level” is a meaningless question. Even though it might be possible to simulate its vision, the simulation offers no explanatory power. The frog’s vision is not explained by the program used to simulate it. We do not suppose that a program simulation of a hurricane provides a causal explanation of its behavior. Why should the brain be any different? It may be possible in the future to simulate a human brain but such a simulation will tell us nothing about how the brain works in its physical reality because we have imposed our syntactic interpretation onto the brain. Syntax has no causal powers. And if it is true for the human brain then it is true for the universe as a whole. Therefore the universe cannot be a computer simulation.

I am not deceived.

91

John Holbo 02.21.10 at 5:53 am

“Holbo studied at Berkeley, so he knows Searle, but I think not as a student of his. Maybe he can say something here, yes?”

The main trick to the Chinese Room Argument, particularly as a response to the Systems objection, is presenting it properly. You need to be able to imitate Searle’s voice, tolerably, and apply the exactly correct intonation to ‘I mean you don’t want to say the ROOM understands Chinese!’ If you don’t get exactly the right proportions of folksiness, pugnaciousness, brash self-confidence, whimsy, and mild contempt, the argument just won’t go through.

On a more serious note: Dave, funny you should mention Shank and SAM (no ‘c’ in Shank, by the by). I’m doing some stuff about story interpretation and I was thinking about looking into the history of attempts to write programs that can ‘interpret’ stories. Quite apart from the doubtful metaphysics and epistemology of strong A.I. – setting all that to one side – it is obviously right that if you want a program to ‘understand’ (in a functional sense: answer questions about) a story about a car crash, you have to program a lot of basic knowledge about cars and crashes and so forth. When I Googled Shank and SAM I didn’t find what I expected: namely, a lot of stuff about subsequent attempts to program in enough subject-matter knowledge to allow an expert system to do a good job. I’m sure that’s partly because: it’s inevitably going to be incredibly hard, and no doubt a certain frustration set in. But still: what have people done in this area since Shank? Tons of stuff, no doubt. I’m obviously not putting the right terms into Google.

Wolfram Alpha? ‘Knowledge-based computing’?

92

Dave Maier 02.21.10 at 6:28 am

I’m obviously not putting the right terms into Google.

I’m not surprised, because it is indeed “Schank.” But even given that, surely “expert systems” gets you a bunch of hits. Or try Schank’s own site.

93

Dave Maier 02.21.10 at 6:31 am

Oh, and on the less serious note: that description is absolutely spot on. How does he do that?

94

John Holbo 02.21.10 at 7:13 am

Ah, Schank it is then! Trompe le Mond and all that!

95

Farren 02.21.10 at 11:08 am

@Noen

I get that it’s an analog/digital distinction that’s been drawn. But the Chinese Room does not demonstrate that the former can produce mind and the latter can’t. I mentioned Penrose’s citation of the thought experiment because he explicitly references this distinction in the midst of a discussion of the failings of the view that neural networks are simply kinds of digital circuits.

Perhaps you could extend the formal statement of the proof, without reference to the thought experiment, to show that the latter cannot produce mind? Because there seems to be a hidden step in there of just assuming a priori that minds rely on non-digital attributes of the universe.

I’m not sure which side of the debate Holbo’s snark above is aimed at, but it does seem to me that’s really what it boils down to. When I speak of intuitional prejudices, I mean that Searle appears to be making the same kind of erroneous leap as Ayn Rand does when she claims to have bridged the is-ought gap.

96

kid bitzer 02.21.10 at 12:47 pm

oh, i see the response.

searle is no better than rand.
and rand is no better than a bully.
but no one puts up with rand’s bullying.
therefore one ought not to put up with searle, either.

sure; this is the kind of counter-argument by analogy that the greeks called a “pairabullies”.

97

Farren 02.21.10 at 12:55 pm

Heh, I didn’t mean to tar Searle by association with Rand, it was just the first thing that came to mind that had a similar format.

1. …
2. ponies
3. QED

98

JoB 02.21.10 at 1:39 pm

I don’t remember it too accurately but wasn’t Searle’s argument against strong AI so strong that it got rid of all intelligence?

99

jholbo 02.21.10 at 1:46 pm

Just to be clear: I think the Chinese Room Argument is very interesting and worth taking seriously, but it does have the ‘level problems’ Dave M. speaks of. Searle is not like Ayn Rand because she is really just opinionating dogmatically. She makes no serious attempts to address arguments on the other side. Searle is not that at all. But he is a wonderfully rhetorical performer, so when I think he is in the wrong, I can’t help but see him papering over the gaps with that trademark searliness. But I mean that affectionately, not disrespectfully.

Here’s something people don’t know about Searle: his opinions about topics like mind and language and intentionality – stuff he’s written about over and over for decades – are long since cemented, pretty much. No more so than most people who have been defending the same position for decades, probably. But that’s pretty cemented. Searle is not going to throw in the towel on any of his major positions at this point. But at least when I was last at Berkeley, at the end of the 90’s, he was still remarkably spry, flexible and open-minded, therefore a real treat to watch go to town, discussing some new argument or problem someone would pose for him. I sincerely wish him many more years of ‘I mean you don’t want to say the ROOM understands Chinese’.

100

noen 02.21.10 at 11:03 pm

Def: ad hominem
1) Appealing to personal considerations (rather than to fact or reason)

Even if it is intended affectionately it still doesn’t address the argument.

Searle is not at all like Rand, exactly the opposite. She thought she could just bully people into accepting her dismissal of the analytic/synthetic divide. Ayn Rand and her followers violently rejected any suggestion that you cannot derive ought from is because for them materialism is a kind of religion. This continues today with the New Atheists and assorted science geeks who have a strong Objectivist streak. They also become just as violent and rude, as we can see above, at the suggestion that no Turing machine can be conscious and understand Chinese because then it seems we are left with some sort of dualism.

The systems reply to the Chinese room says that it is not the person who understands Chinese or the boxes of Chinese symbols or the rule book, it’s the whole room that understands Chinese. If you ask yourself “Why don’t I, sitting in the Chinese room understand Chinese?” The answer is I have no way to get from the syntax to the semantics. There is no way for me to get from the Chinese glyphs I am handed to what they mean. Therefore if I don’t have any way to get the meaning of the symbols neither does the room. Why don’t I understand Chinese? I passed the Turing test. The syntax of the program is not identical with nor is it by itself sufficient for the semantics of actual understanding.” This applies to whole room just as it applies to the program. The systems reply fails to understand the argument. After all, if we have a Chinese understanding program then we could just as easily model the whole room within the program and we are left back where we started.

So there is a syntax/semantics divide just as there is an ought/is divide and just as many people thought, and still think, that there must be some way to cross that divide, so also many people think there must be some way to derive semantics from syntax only. But the Chinese room argument shows this is impossible, therefore we have all the Sturm und Drang surrounding this issue.

@ Farren 95
“I mentioned Penrose’s citation of the thought experiment because he explicitly references this distinction in the midst of a discussion of the failings of the view that neural networks are simply kinds of digital circuits.”

Penrose argues against the idea that the mind is algorithmic and doesn’t think a digital computer, a Turing machine, can be conscious. I don’t see how that helps you. John Searle does not believe it is impossible for us to someday construct an artificial mind. He thinks it quite likely and sees no difference between that and building an artificial heart. But if we do it will be by duplicating the causal function of the brain and not through simulating its abstract processes.

“Searle appears to be making the same kind of erroneous leap as Ayn Rand does when she claims to have bridged the is-ought gap.”

You have it backwards. I am saying that you are the one who is claiming to have made the leap from syntax to semantics. And just like any Randian when confronted with the flimsiness of their ontology, it was you who struck out in rage when told it cannot be done.

101

Farren 02.22.10 at 1:54 am

@Noen, the hidden premise I’m seeing here is that the universe is not digital, which would imply that there are discrete minimum units and that the universe is essentially symbolic, or syntactical in nature. In fact this is the basis, as far as I can tell, of the holographic model that started this whole thread.

Deviations in gravitational-wave experiment readings appear to conform to the predictions of a model that projects Planck-distance “pixels” from a surface into the volume we experience as space. Which implies that the universe may very well have minimum discrete units. If this is the case, the physical model would actually demand that syntax, as you put it, gives rise to semantics.

So no, it does not appear to be an uncrossable gap. On the contrary, it appears to be a gap constructed by fiat, an unreasonable demand that is taken as a given.

102

Farren 02.22.10 at 1:56 am

My first sentence above is confusing. I meant to say that the universe being digital would imply that the universe is essentially symbolic.

103

Farren 02.22.10 at 1:56 am

…um wouldn’t imply it, more just means the same thing.

104

Farren 02.22.10 at 1:57 am

3:45am posting time for bed

105

Farren 02.22.10 at 2:05 am

Put another way. Rand inserts ponies to get from empirical facts to deontological necessities. Searle inserts ponies to deny the possibility of perfectly possible empirical facts based on, um… intuitions, maybe. They’re not the same, I’ll admit. I’m just saying I see ponies.

106

John Holbo 02.22.10 at 3:16 am

For the record, I think the problem with Searle, re: the Systems objection, is this: the systems position (strong A.I.) takes syntax to be necessarily sufficient for semantics. Searle is, in effect, arguing that syntax is necessarily insufficient. It could be – I think – that they are both wrong. We don’t know whether syntax could be sufficient. So the necessarilies on both sides are premature. A big part of the problem is an unclarity about what exactly counts as ‘syntax’. If you throw your net wide enough, you will surely eventually get enough to generate intentionality. But it starts to look more ecological than ‘syntactical’.

To put it another way, a weaker conclusion to the Chinese Room argument – one aiming at establishing only that syntax is not necessarily sufficient (that’s an epistemic ‘necessarily’) – would seem more reasonable.

107

JoB 02.22.10 at 1:01 pm

noen@100, thanks for that, very informative.

You say: But if we do it will be by duplicating the causal function of the brain & not through simulating it’s abstract processes.

Even that won’t do. You’ll have to put the duplicated causal function in a causal chain that’s sufficiently similar to the one in which the brain is put. Simulating abstract processes is the least of our worries because all evidence points to the brain being very poor at abstract processing.

If we are to take Searle (and other thought experimenters) seriously then we have to accept that with strong AI also the concept of the brain as ‘sufficient’ for consciousness is out. De facto this is the assumption of most current philosophy (Davidson, Habermas to name 2 of the ‘different’ traditions: that you need to at least have ‘triangulated’ before you get to consciousness) – as far as I can see Searle is much less meaningful when the sexy thought experiment stops and the hard constructive work starts.

108

Farren 02.22.10 at 1:26 pm

John I think you’re on the mark. The claim that syntax is necessarily sufficient is too strong. But the presumption that it isn’t is not obvious at all and in fact the Chinese Room seems to take this as a given, if Noen’s portrayal of it is accurate.

The latter position relies on the intuition that the universe is a continuum rather than being composed of discrete minimum units, and that its continuous nature is essential to consciousness, which is among Penrose’s arguments in The Emperor’s New Mind (btw Noen I didn’t mention that to bolster my argument, just to indicate that I was aware of the distinction you thought I was unaware of and its relationship to the Chinese Room)

A continuum defies complete description using any system of discrete, unambiguous symbols, hence the intuition. But when that intuition is stated as an obvious and unassailable fact, the claimant is making unsupported claims about physical reality, not metaphysics. They are claiming to know that the universe is analog, not digital.

It’s interesting that you mention monads in the blog post above because based on my very limited understanding of the historical concept as well as related concepts in computing they fit very well within my own intuition about consciousness and its role in the universe, which is essentially panpsychic. Leibniz’s ideas about monads I only know through fiction, specifically Neal Stephenson’s System of the World books, in which he characterises Leibniz’s view as being one where instead of matter being acted on by external forces, the smallest units of matter have volition and the apparent laws of physics emerge from them acting on that volition, in response to matter around them. i.e. a kind of cellular automaton with volitional units or tiles. I don’t know how accurate this portrayal is.

My intuition is that every smallest part of the universe has some kind of internal awareness, or qualia and a “sense” of being itself. And that the universe is holographic in the sense that the configuration of each such qualia is a reflection of the appearances of the parts that surround it.

This comes very close to describing a monad (a triple) in functional programming, whose bind operation takes a function as input. A universe of such monads is a universe of functions of functions of functions of functions, ad infinitum. It feels comfortable to me to imagine what we label consciousness arising out of this, because it makes all values in essence non-local, relying as they do on all other values. The holographic quality of it makes it conceivable that matter spread across an area could acquire a sense of unary identity and internal state.
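
For what it’s worth, here is a rough sketch (mine, and only of the programming-language sense, not Leibniz’s) of a monad as a triple: a type constructor, a unit that wraps a plain value, and a bind operation, which is the part that takes a function as input and chains computations.

    class Maybe:
        """A toy Maybe monad: the usual first example of the triple."""
        def __init__(self, value, ok=True):
            self.value, self.ok = value, ok

        @staticmethod
        def unit(value):          # wrap a plain value
            return Maybe(value)

        def bind(self, f):        # f takes a plain value and returns a Maybe
            return f(self.value) if self.ok else self

        def __repr__(self):
            return f"Just({self.value})" if self.ok else "Nothing"

    def safe_div(x):
        return Maybe(10 / x) if x != 0 else Maybe(None, ok=False)

    print(Maybe.unit(5).bind(safe_div))                  # Just(2.0)
    print(Maybe.unit(0).bind(safe_div).bind(safe_div))   # Nothing, and it stays Nothing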

109

Z 02.22.10 at 1:53 pm

(A1) Programs are formal (syntactic). (A2) Minds have mental contents (semantics). (A3) Syntax by itself is neither constitutive of nor sufficient for semantics.

I like this axiomatic presentation, but find none of the 3 axioms entirely convincing. Take (A1). The way we sing this tune these days is more like “algorithms (syntax) + data structures (?) = programs”. It seems possible that data structures have semantic properties. For instance, does a computer “understand” that the empty list is empty?

Now, suppose you answer no to this last question, then I am not sure I have semantic knowledge (Noen seems convinced that s/he has some). After all, if “being the empty list” is now a syntactical property (whose syntactical property would be to terminate the program, as in “if the list is empty do this”), then “liquid”, “red” or “interesting” could be, as in “liquid=noun+can be used as direct object complement for the verb drink”. And “drink” would be the “terminating instruction” similar to “being the empty list”.
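
To put the same worry in code (a toy example of mine): the program’s handling of “empty” is just a structural branch; whether anything here counts as understanding that the list is empty is exactly what is in dispute.

    def total(xs):
        if not xs:       # "if the list is empty do this": a test on shape alone
            return 0     # the terminating case
        return xs[0] + total(xs[1:])

    print(total([]))         # 0
    print(total([1, 2, 3]))  # 6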

So, like J.Holbo in the comment just above mine right now, I think a proper version of the Chinese Room Argument would require a precise definition of where syntax stops. Incidentally, my wife, who is infinitely smarter than I, offered what I consider the best argument against the “Trompe le monde” argument: that such scenarios presuppose that we are important and special enough so that “evil geniuses”, “super-computers” or what have you will make the effort of building something especially designed to fool us. In other words, it is yet again a rehash of the ever so comforting illusion that we are the center of a world revolving around us, and that if we are maybe not a beloved child, we are at least a victim valuable enough to deceive ever so cleverly. Nah, the only ones deceiving anybody, she concluded, are our egos. And everyone thinks he can “tromper son monde” that way (this last expression implies a rather subtle meaning shift; it suggests that the deceiver knows very well that he is up to something fishy about himself, and that he is trying to convince himself that others are not noticing it).

110

Farren 02.22.10 at 2:59 pm

@Noen just returning to this:

“The systems reply to the Chinese room says that it is not the person who understands Chinese or the boxes of Chinese symbols or the rule book, it’s the whole room that understands Chinese. If you ask yourself “Why don’t I, sitting in the Chinese room understand Chinese?” The answer is I have no way to get from the syntax to the semantics. There is no way for me to get from the Chinese glyphs I am handed to what they mean. Therefore if I don’t have any way to get the meaning of the symbols neither does the room. Why don’t I understand Chinese? I passed the Turing test. The syntax of the program is not identical with nor is it by itself sufficient for the semantics of actual understanding.” This applies to whole room just as it applies to the program. The systems reply fails to understand the argument. After all, if we have a Chinese understanding program then we could just as easily model the whole room within the program and we are left back where we started.”

This part “The answer is I have no way to get from the syntax to the semantics. There is no way for me to get from the Chinese glyphs I am handed to what they mean.” is what I have a problem with. It seems in your mind the “meaning” is like the “ought” Rand fails to link to “is”. IOW, it is a well-defined term which does not mean the same thing as “statement” (alternatively “semantics” and “syntax”), and there is a burden of proof on anyone who makes the claim that the one can arise from the other.

Yet in my (folk) understanding of “meaning”, it speaks to the purpose or intent of the statement. So that if I were to distinguish between whether someone merely acquired the statement (syntax) or understood what I actually meant, I would measure that by their response. If I asked them to pass me the butter and they stared blankly at me before mouthing “pass me the butter?” I would assume I had communicated only the former. If they passed me the butter, they would have completely fulfilled the requirements to assume they understood the meaning.

Appropriate response, as far as I can see, is the only conceivable way of measuring understanding. Which is why the Turing test hinges on it. And based on that way of measuring understanding, it’s perfectly clear that you can arrive at semantics from syntax, because you can write a program that responds appropriately using a formal syntax. Unlike the “is-ought” problem, it does not represent a logical gap that cannot be bridged.
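
A purely illustrative sketch of what I mean (the rulebook and phrases are made up): a responder that matches token patterns and emits an action. By the behavioural standard above, “doing the right thing” is all there is to measure.

    RULEBOOK = {
        "pass me the butter": "hands over the butter",
        "what time is it": "reads the clock aloud",
    }

    def respond(utterance):
        key = utterance.lower().strip(" ?.!")
        return RULEBOOK.get(key, "stares blankly")   # no rule matched

    print(respond("Pass me the butter."))        # 'hands over the butter'
    print(respond("Qing ba huangyou gei wo."))   # 'stares blankly'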

Unless, of course, there are extra attributes to ascribe to “understanding”, “meaning” and “semantics” which you’re not sharing. Assuming there are, the question must be asked “Are they measurable attributes?” and if not, how can they bolster the logical structure of the Chinese Room?

111

AcademicLurker 02.22.10 at 3:43 pm

Doesn’t the understanding of Chinese in this situation reside with whoever wrote the rule book?

112

noen 02.22.10 at 5:19 pm

@ Z 109
“We way we sing this tune these days is more like “algorithms (syntax)+data structures (?)=programs”. It seems possible that data structures have semantic properties.”

Separating a program into algorithm + data structure and then asking if the semantics resides in the data doesn’t help. Every digital computer ever made, or that ever will be made, can be completely formalized as a Turing machine. Any data structures are just different regions on the tape where it stores information. This is also why the Connectionist reply fails to refute the Chinese room argument. Any massively parallel computer can still be fully formalized as a universal Turing machine. The Chinese room argument addresses the abstract concept we call Turing machines and claims that no Turing machine can understand Chinese. The bits of hardware we call computers are just different implementations and any limitation of Turing machines must also apply to them.

By the way, here is John Searle’s own article “Chinese room argument” on Scholarpedia. It’s quite short and sums up his current thinking accurately.

In addition to claiming that Turing machines cannot understand Chinese the argument also refutes the Turing test or any other behavioral test. I am in the room and I pass the Turing test with flying colors, yet I do not understand Chinese. It is true that to outside observers I or the room appear to understand Chinese but I have something that they do not. I have access to my own inner subjective experience and on that basis I know that I do not understand Chinese.
==========
@ John Holbo 106
Thank you for that. John Searle says this

“I never really had any doubts about the argument as it seems obvious that the syntax of the implemented program is not the same as the semantics of actual language understanding. And only someone with a commitment to an ideology that says that the brain must be a digital computer (“What else could it be?”) could still be convinced of Strong AI in the face of this argument.”

I do not know enough to give my own opinion on whether or not syntax can get one to semantics. For the time being I’d have to say that I accept that as true just as I accept that one cannot get from “is” to “ought”. I think that’s how these things go isn’t it? Pick someone and go with them until you give up or surpass them.

113

Substance McGravitas 02.22.10 at 5:44 pm

It is true that to outside observers I or the room appear to understand Chinese but I have something that they do not. I have access to my own inner subjective experience and on that basis I know that I do not understand Chinese.

You don’t understand brainese either yet you put that into language.

114

Greg 02.22.10 at 5:46 pm

I have access to my own inner subjective experience and on that basis I know that I do not understand Chinese.

What if I told you I have access to my own inner subjective experience and yet I don’t think I have ever understood anything in the way you mean ‘understand’. I speak English, by which I mean I know English syntax – when Farren says ‘pass the butter’ I can respond appropriately, and I know that I can respond appropriately. To put it very crudely, I process the inputs and provide some outputs. Even my internal monologue has the same structure. I can’t fathom what it is to ‘understand’ English beyond that. If Farren spoke Chinese to me, I would not know what to do, because I don’t know those rules. I don’t ‘understand’ Chinese. What is there to ‘understanding’ beyond knowing those rules? I have no way to get from syntax to semantics either. (The objection of intentionality, namely that unlike a computer or what-have-you, I know what the sentence is ‘about’, seems to be entirely accountable for in syntactic terms, e.g. when someone, including myself, says ‘what is the sentence about’ I can provide a successful output.)

Or, to repeat Farren’s question, only because I am curious as to your answer:

Unless, of course, there are extra attributes to ascribe to “understanding”, “meaning” and “semantics” which you’re not sharing. Assuming there are, the question must be asked “Are they measurable attributes?” and if not, how can they bolster the logical structure of the Chinese Room?”

115

noen 02.22.10 at 6:01 pm

@ Farren 105
“Searle inserts ponies to deny the possibility of perfectly possible empirical facts”

What empirical facts does the Chinese room deny? The argument is completely abstract and not empirical at all.

“This part “The answer is I have no way to get from the syntax to the semantics. There is no way for me to get from the Chinese glyphs I am handed to what they mean.” is what I have a problem with. It seems in your mind the “meaning” is like the “ought” Rand fails to link to “is”.”

I don’t know Ayn Rand that well. I read “Intro to Objectivist Epistemology” when I was a teenager and thought it was nonsense then and never bothered with her since. My understanding however is that she says exactly the opposite of what you say here. Rand doesn’t fail to link ought with is, she explicitly connects them. The whole premise of Objectivism is the claim that we can deduce values from facts. That is why it is considered a cult.

“the hidden premise I’m seeing here is that the universe is not digital, which would imply that there are discrete minimum units and that the universe is essentially symbolic, or syntactical in nature. In fact this is the basis, as far as I can tell, of the holographic model that started this whole thread.”

The universe divides up the way we divide it up.

Speaking for myself now I don’t think the universe is essentially anything at all. Neither symbolic nor syntactic. Science and mathematics are descriptions of the world. They are not Nature’s own language. So I think that when people today make the claim that the universe is nothing but information they are making the same errors that people made in the 19th century. They are mistaking their own descriptions of the world for the world.

I cannot know what it is like to be a bat because the bat has something that I cannot have, the subjective experience of what it is like to navigate with sonar. Likewise I cannot know what it is like to be a pigeon who navigates by the Earth’s magnetic field or a shark who locates prey with electromagnetic signals.

There are things in the world which are not reducible to any formal syntax. Therefore the world cannot be pure syntax. This means that I am not a Materialist (in the philosophical sense) because materialism claims that everything is reducible to something else. It claims that there is a smooth reduction from states and constitutions all the way down to one dimensional vibrating strings. I see this as a religion on all fours with its mirror religion that says we are really just immaterial minds.

A former student of Searle’s says that his conception of the mind closely resembles that of 1st century Christians before they were polluted with Greek philosophy. I tend to agree. ;)

116

noen 02.22.10 at 6:09 pm

@ Substance McGravitas 113
“You don’t understand brainese either yet you put that into language.”

I understand Snarkese, which is “I got nothing but I can be a jerk on the internet, this means I R SMRT.” It’s a coward’s position that refuses to engage the world and chooses to snipe from a place of ultimate safety because one might fail and look foolish. I prefer to put myself out there and allow myself to be vulnerable and to potentially make mistakes. That’s how we grow. The kingdom of spite is a dead man’s land.

117

Walt 02.22.10 at 6:10 pm

Every time I see the Chinese Room argument trotted out, it confirms the truth for me of John Holbo’s characterization way up thread. I don’t find the argument convincing, so at the critical moment where I’m supposed to jump over the gap in the logic, some sort of dramatic performance is required to encourage me to slap myself in the head and say “Of course!”

This is where the Internet falls down. I’ve read all of your comments, noen, and I have no idea why you find that argument convincing. (I don’t find the “Don’t you see, consciousness is just an epiphenomenon!” argument all that convincing either.) The best you can convince me of is that I possess something that one could plausibly label “semantics”, but I don’t see how I can conclude that you, Searle, or my wife and children possess it as well. And then you’ve failed to answer what seems to me the relevant question, which is how did “consciousness” get into these slabs of meat that are walking around, interrupting me while I’m trying to watch YouTube videos?

118

Walt 02.22.10 at 6:12 pm

Maybe complaining about how people are being rude is the dramatic performance that’s supposed to push me over the gap?

119

Substance McGravitas 02.22.10 at 6:14 pm

I understand Snarkese, which is “I got nothing but I can be a jerk on the internet, this means I R SMRT.”

Noen, it’s an actual argument. Nobody has a reasonable idea about how thoughts are constructed – brainese – yet you’re using language to worry about it. I wasn’t using “brainese” as a “haw haw you’re stupid” insult.

120

bianca steele 02.22.10 at 6:43 pm

@noen
Interesting that the phenomenological argument is never wielded against the Zen argument.

Of course Searle’s argument works against behaviorism; he says so himself. In fact what the argument boils down to is an accusation against the field of “strong AI”* that they are behaviorists and operationalists yet are secretly naive dualists, which, despite any qualifications in the main body of the paper, is going to seem pretty devastating to many readers. Yet it’s not clear to me that the people he is talking about ought to worry about that.

*Not sure about this, since though he uses the term “strong AI” throughout the paper, in his responses to the readers’ responses, it is clear that he means something much broader.

121

bianca steele 02.22.10 at 6:53 pm

Would also appreciate some substantiation for the easy equation of UTM with “syntax,” if you have any.

122

Treilhard 02.22.10 at 7:26 pm

In addition to claiming that Turing machines cannot understand Chinese the argument also refutes the Turing test or any other behavioral test. I am in the room and I pass the Turing test with flying colors, yet I do not understand Chinese

It’s not immediately obvious to me that the Chinese Room could pass the Turing Test, nor that the Turing Test is relevant to the claims of strong AI, nor that strong AI insists that the Room has consciousness, nor that the Chinese Room is sufficiently complex to give us any meaningful insights into how neurobiological machines actually function [I’m thinking of something like Churchland’s “Illuminated Room” experiment]. I’m sorry noen; I am really keen on Searle [particularly his takedowns of Quine on indeterminacy], but I just don’t see how it falls out from anything you’ve argued here that strong AI-ists are nothing but blind ideologues.

Separating a program into algorithm + data structure and then asking if the semantics resides in the data doesn’t help. Every digital computer ever made today or in the future can be completely formalized as a Turing machine. Any data structures are just different regions on the tape where it stores information.

But in what relevant aspect is this different from “data structures are just different regions in the brain where it stores stores information”? If the Chinese room suddenly gains a mind when a book that assigns the Chinese characters to pictographs is slid through the door, then why can’t this same book be presented to the machine?

123

Treilhard 02.22.10 at 7:37 pm

@noen

It’s not immediately obvious to me that the Chinese Room could pass the Turing Test, nor that the Turing Test is particularly relevant to the claims of strong AI, nor that strong AI insists that the Room is conscious, nor that the Room is sufficiently complex to make any analogy (or disanalogy) with a neurobiological machine [I am thinking of something like Churchland’s “Illuminated Room” experiment]. I am really keen on Searle’s work, but it simply doesn’t fall out of anything you’ve argued here that strong AI is nothing but blind ideology in the face of the Chinese Room experiment.

Separating a program into algorithm + data structure and then asking if the semantics resides in the data doesn’t help. Every digital computer ever made today or in the future can be completely formalized as a Turing machine. Any data structures are just different regions on the tape where it stores information.

But in what important aspect is this different from, “Any data structures are just different regions in the brain where it stores information”? Further, if the Room suddenly gains a mind when a book that appropriately assigns Chinese characters to a series of pictographs is slid through the door, why can’t we give this same book to the Turing machine?

124

Treilhard 02.22.10 at 7:38 pm

Whoa sorry, I thought I closed the tab too soon and did a rewrite.

125

kid bitzer 02.22.10 at 7:51 pm

““I got nothing but I can be a jerk on the internet, this means I R SMRT.” It’s a coward’s position that refuses to engage the world and chooses to snipe from a place of ultimate safety because one might fail and look foolish.”

hey, if substance mcg was doing that, then you are totally right to slap him down.

because that’s my gig, damn it! tell him to get his own damn schtick and leave mine to me!

126

noen 02.22.10 at 9:18 pm

@ Walt 116
“I have no idea why you find that argument convincing.”

I find it convincing because I accept that there exists an abstraction called “syntax” which is separate from meaning. John upthread suggests that the separation is fuzzy or ambiguous. Maybe, I don’t know, I’ll have to think about that. So you see my beliefs are not dogmatic. For the time being though I am going to continue to choose to believe that there is indeed something called syntax and something called semantics with no obvious means of deriving the latter from the former. If you can *prove* to me that this is possible then hey, that would be great. I am an autodidact so I know there are big holes in what I know. If you can point me in the right direction, not a proprietary place like JSTOR please, I’d be more than happy to go and read. I am sick and tired of the Global Internet Pissing Contest that every single wannabe alpha male is engaged in. I’d rather learn something.

@ Substance McGravitas 118
“Nobody has a reasonable idea about how thoughts are constructed”

Non sequitur. The CRA does not address the problem of consciousness. It is designed specifically to counter the computational model. I, like Searle, believe that humans are conscious machines, that we can compute.

@ AcademicLurker 111
“Doesn’t the understanding of Chinese in this situation reside with whoever wrote the rule book?”

Yes.

@ bianca steele 120
“Would also appreciate some substantiation for the easy equation of UTM with “syntax,” if you have any.”

A Turing machine is defined by its formal syntax. It consists of a read/write head, a tape on which it can operate and a set of instructions. All digital computers from your digital clock to Big Blue can be reduced to a Turing machine. They are in a one-to-one relation. Any limitations of Turing machines in general will also necessarily apply to all physical computers ever made or that ever could be made. “Computer” here does not mean “one that computes”. In that sense I am a computer because I can compute. Computer here means an abstract formal syntax that has been realized in some physical device.
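A minimal sketch of the formal object being described here, in Python purely for illustration (the two-state transition table is an invented example, not anything from the thread): the machine is nothing but a finite rule table, a read/write head, and a tape, and the tape makes no distinction between “program” regions and “data” regions.

# A minimal Turing machine: a finite transition table, a tape, and a head.
# The table is an arbitrary example (a two-state "busy beaver"); the machine
# itself is nothing but rule-following over uninterpreted symbols.

def run_turing_machine(table, tape, state="A", halt="HALT", max_steps=1000):
    tape = dict(enumerate(tape))                  # sparse tape: position -> symbol
    head = 0
    for _ in range(max_steps):
        if state == halt:
            break
        symbol = tape.get(head, "0")              # blank cells read as "0"
        write, move, state = table[(state, symbol)]
        tape[head] = write                        # write a symbol
        head += 1 if move == "R" else -1          # move the head left or right
    return "".join(tape[i] for i in sorted(tape))

# Example transition table: (state, symbol read) -> (symbol written, move, next state)
busy_beaver_2 = {
    ("A", "0"): ("1", "R", "B"),
    ("A", "1"): ("1", "L", "B"),
    ("B", "0"): ("1", "L", "A"),
    ("B", "1"): ("1", "R", "HALT"),
}

print(run_turing_machine(busy_beaver_2, "0"))     # prints "1111"

Started on a blank tape, the example table halts after six steps leaving “1111” behind; the only point of the sketch is that every step is bare symbol manipulation.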

127

Farren 02.22.10 at 9:25 pm

@Noen

“Speaking for myself now I don’t think the universe is essentially anything at all. Neither symbolic nor syntactic. Science and mathematics are descriptions of the world. They are not Nature’s own language. So I think that when people today make the claim that the universe is nothing but information they are making the same errors that people made in the 19th century. They are mistaking their own descriptions of the world for the world.

I cannot know what it is like to be a bat because the bat has something that I cannot have, the subjective experience of what it is like to navigate with sonar. Likewise I cannot know what it is like to be a pigeon who navigates by the Earth’s magnetic field or a shark who locates prey with electromagnetic signals.”

I for the most part agree with this, but fail to see how it positively precludes a symbol-manipulating system from having understanding, unless your definition of understanding is “experiencing it exactly as humans do”.

128

JoB 02.22.10 at 9:56 pm

124- yeah, you’re a regular kid bitcher (he said – watching this valiant knight cut head after head of the dragon that attacked the bat in the chinese cave)

PS: O come on, Nagel ánd Searle in one thread, and Chalmers and Fodor mentioned in it too; time must be standing still after all

129

noen 02.22.10 at 10:24 pm

I have a question for those who would claim that computation equals consciousness.

Does your watch know what time it is?

Does your hand calculator know what 2 + 2 = 4 means?

If consciousness equals computation then that means that *anything* that can meet the formal definition of a Turing machine can at least potentially be conscious or possess intentionality.

Is DNA conscious, or does it possess intentionality?

Do species that use camouflage to avoid being eaten possess the collective intention, perhaps at the species level, to deceive those who prey on them?

Must you not then at least assert the possibility of Intelligent Design by means of intelligent DNA? It clearly meets the preconditions for a Turing machine. Therefore if it is at all *ambiguous* that syntax can give rise to semantics then the question of intelligent DNA must also be ambiguous.

Are cells conscious?

Aboriginal shamans claim that they can speak to plants. If DNA is intelligent then you must admit this as a possibility.

Does this not also imply that the strong Gaia hypothesis is at least possible?

If the universe is syntax then must it not also follow (syntax = computation = consciousness) that God as conscious universe could exist?

The answer to all the above is no. My watch does not know what time it is any more than a piece of chalk knows about the law of gravity. A calculator does not know it is calculating. Syntax is a human abstraction. It is something that we do and then we impose our meaning on the world. We assemble a mechanical calculator (Mechanical or digital makes no difference. The syntax is exactly the same.) by abstracting the problem, designing the gears and assembling them into a working machine. But we assign meaning to the results. The adding machine itself contains no semantic information whatsoever.

If you, like me, think the above examples are a little silly, then please do not give me a hard time for having the courage to deny what you yourself believe and yet are not willing to deny.
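A small sketch may make noen’s adding-machine point above concrete (Python purely for illustration; the routine is invented, not anything from the comment): binary addition carried out by lookup over the characters “0” and “1”. Nothing in the rules refers to quantity; that the strings denote numbers, and that the operation is addition, is an interpretation we supply.

# Binary addition as pure symbol manipulation: a lookup table over the
# characters "0" and "1".  The table happens to encode a full adder, but
# nothing in the procedure "knows" that the strings stand for numbers.

FULL_ADDER = {  # (digit_a, digit_b, carry_in) -> (sum_digit, carry_out)
    ("0", "0", "0"): ("0", "0"), ("0", "0", "1"): ("1", "0"),
    ("0", "1", "0"): ("1", "0"), ("0", "1", "1"): ("0", "1"),
    ("1", "0", "0"): ("1", "0"), ("1", "0", "1"): ("0", "1"),
    ("1", "1", "0"): ("0", "1"), ("1", "1", "1"): ("1", "1"),
}

def add_numerals(a, b):
    """Combine two strings of '0'/'1' characters by rule, right to left."""
    width = max(len(a), len(b))
    a, b = a.rjust(width, "0"), b.rjust(width, "0")
    carry, out = "0", []
    for da, db in zip(reversed(a), reversed(b)):
        digit, carry = FULL_ADDER[(da, db, carry)]
        out.append(digit)
    if carry == "1":
        out.append("1")
    return "".join(reversed(out))

print(add_numerals("1011", "0110"))  # "10001" -- 11 + 6 = 17, but only to us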

130

Walt 02.22.10 at 10:37 pm

See, that’s what’s puzzling. Is my calculator conscious? No. But is it potentially conscious? Why not? Is DNA potentially conscious? Why not? Is a cell potentially conscious? Why not? If you believe there is not some hard and fast line between things that can be conscious, and things that can’t, then it’s not much of a gotcha to point out that you believe that things that aren’t currently conscious potentially are.

131

john c. halasz 02.22.10 at 10:41 pm

@106:

“A big part of the problem is an unclarity about what exactly counts as ‘syntax’. If you throw your net wide enough, you will surely eventually get enough to generate intentionality. But it starts to look more ecological than ‘syntactical’.”

Ya, but then why would a “more ecological” approach,- (such as general systems theory?),- be at all syntactic, in any normal narrow sense of such an abstract rule system? It can’t merely be because a model is at least in principle formalizable and thus, very broadly speaking, “computable”, that it amounts to “syntax”, no?

Personally, I find discussion of “mind” in terms of “mind”/brain causality (parallelism or whatnot) alone fairly sterile and uninteresting, running along well-worn impasses. I don’t think “mindedness” can be fruitfully addressed in abstraction from embodiment, nor such embodied mindedness in abstraction from interaction with an environment. Chalmers, whom I’ve never read,- and nothing said here would inspire me to wish to do so,- strikes me as caught up in a Cartesian/positivistic mode of “pure” theory, (i.e. a passive, contemplative, abstractly detached and sheerly “neutral” form of “reflection theory”, what Max Horkheimer termed in the 1930’s “traditional theory”), which blocks off access to the sort of real phenomena involved, and just serves to generate meaningless, non-existent “skeptical” pseudo-problems about “brains-in-vats” and “zombies”, which are the effect of the assumption that his form of theory is necessarily consonant with the issue about “mind” that it actually obstructs.

For the rest, the analogy of brains to digital computers is severely misleading, and mostly just an artefact of the arbitrary conjunction and availability of new technology. For good evolutionary biological reasons, brains are far more likely to function as analog pattern-recognition devices, (though the analog/digital distinction is not simple, nor binary). And how brains function has nothing to do with the ultimate structure of the external universe. Brains function on the basis of quite ordinary physio-chemical and electrical causal processes, (though likely in highly stochastic and multivalent ways), and the “ultimate” explanations of their functioning don’t require any occult principles about the “ultimate” structure of the universe, or some sort of mystic communion with quantum level phenomena.

132

Substance McGravitas 02.22.10 at 10:55 pm

Non sequitur. The CRA does not address the problem of consciousness. It is designed specifically to counter the computational model.

Not a non sequitur, because the CRA does address the problem of consciousness: that’s why it exists.

I, like Searle, believe that humans are conscious machines, that we can compute.

That’s the same thing as saying “A machine will one day be able to simulate this.”

133

John Holbo 02.23.10 at 1:25 am

“Personally, I find discussion of “mind” in terms of “mind”/brain causality (parallelism or whatnot) alone fairly sterile and uninteresting, running along well-worn impasses.”

Are there any ways of discussing the nature of mind that do not run along well-worn impasses? Surely you can’t be proposing Max Horkheimer, of all figures, as a way past that. Quite possibly his impasses are GOOD ones, I quite freely grant the point without the rack, but it could hardly be denied that people have been running along in them.

To put it another way, john, I think you have a habit of expressing personal impatience with a given approach – a thing to which you are richly and inviolably entitled – as if that were tantamount to having an objection to the given approach – a thing you may have as well, but that remains to be seen.

I also think there is something kinda suspect to confessing to having not read Chalmers in the same breath that you say that the problem with Chalmers seems to be that he is cut off from the phenomena he presumes to discuss. How could you possibly KNOW a thing like that, john?

“the “ultimate” explanations of their functioning don’t require any occult principles about the “ultimate” structure of the universe, or some sort of mystic communion with quantum level phenomena.”

This assumes that Chalmers, to the contrary, must require such occult principles. But whence have you derived this astonishing datum that Chalmers is an occultist, john? There is, after all, a perfectly good sense in which it is false that “how brains function has nothing to do with the ultimate structure of the external universe”: namely, presumably brains are part of the universe, and partake of its ultimate structure. How about those apples? And why does Chalmers need more occult apples than that?

134

john c. halasz 02.23.10 at 2:57 am

@133:

Nitpicky!

I happen to like old Max, but any number of other figures would do just as well. The point about “traditional” theory, as somehow purely and passively reflective of a pre-given reality, on the basis of a subject/object split, whereby the phenomena in question must be dealt with in entirely “objective” terms, thereby occluding the very phenomena at issue, contrasts with the notion that some interested, participatory involvement in the “system” in question is required for the relevant phenomena to emerge and be accessed, let alone for some sort of theory about them to be erected on that basis. In other words, “mind” needs to be dealt with on an organic, embodied basis, in which there are no clear bright lines between the physiological, the behavioral and the mental, but rather a gradient in which all three interact processually to generate what we are wont to consider “mind”. (Why would there be anything emergently mental at all, if it were not a matter of mediating between organic needs and environmental events on the basis of behavioral interaction with that environment?) Asking after what “mind” is and proceeding on the basis of “pure” logical analysis may well be a self-obstructive and question-begging procedure. There’s no reason why the ways these questions are dealt with in “Analytic philosophy of mind” are necessarily the most perspicuous or fruitful approaches. For that matter, not even natural science deals with “observer independent reality”, (as Searle was cited above,- reprising the Ding-an-Sich?-), since the experimental investigator is arranging and manipulating the parameters of his apparatus/theoretical framework, in order to observe the effects that result. In other words, there is already interestedness and participatory involvement, there, too, without any need to cite Heisenberg.

At any rate, the main point is rejecting the terms of reference for these sorts of issues set up by reductionist physicalism, a la Quine, in which physics is the basic description of “ultimate” reality and everything else is a mere matter of “special sciences” derivable from such terms of reference, (else the only alternative is some sort of untenable idealism), in favor of a non-reductive emergentist realism, as prefigured by Whitehead.

It’s not as if I’ve never heard of Chalmers or read any discussions of him. It’s that I don’t understand why he’s considered such a hot-shot or making some kind of signally interesting contribution, so why I should care. AFAICT, his “Cartesian” approach, and its “skeptical” offshoots, was already “destroyed” by Wittgenstein a long time ago. Perfect mindless “zombies” without any behavioral differences? Even “inverted spectrum” claims in this neck-of-the-woods appeal to hue and ignore intensity and value. Neurologists use charts they call “Mondrians”, after the similarity to the works of the painter, by which they test their brain-damaged visually impaired patients: there are empirical tell-tale signs, which you’d think “scientifically” minded philosophers might pay attention to, rather than relying speculatively on logical analysis, which, er, begs the question of any pre-logical roots to the matter. Which is to say that many of these matters aren’t really philosophical matters at all, to which philosophical arguments would have much to contribute, but rather neuro-physiological questions to be investigated, with whatever guile, empirically.

You might try reading some of the works of Gerald Edelman et alia:

http://www.amazon.com/s/ref=nb_sb_ss_i_0_14?url=search-alias%3Dstripbooks&field-keywords=gerald+edelman&sprefix=gerald+edelman

Only my second paragraph above referred to Chalmers and his evident “Cartesianism”. The stuff about “ultimate structure of the universe” being irrelevant to consideration of the neuro-physiological and thus “causal” basis of “mind” was referring to other comments made on this thread, not to Chalmers, hence was misapplied in your response.

135

Farren 02.23.10 at 1:03 pm

The distinction between our mental models of nature and nature itself is useful, and it must be conceded that errors can arise from reifying the model. But at the same time it also has to be conceded that “mind”, “consciousness” and so on, as long as they are used in discussions like this, are symbols with symbolic meanings.

Those symbols are meant to signify some perceived attribute of nature and if they do not signify the same thing to all participants in a discussion, it’s a fruitless exercise. If the only consensus that can be reached is “a behaviour I recognise in other people” then Turing was right. Because “mind”, the symbol, should lend itself to the set of symbolic transforms we call “logic” in a consistent manner. And performing those transforms we must conclude that if something appears to be a mind, it is a mind. If the meaning is taken as “the experience of being human” then obviously nothing that is not human could ever share the quality that symbol represents.

Of course there are ethical ramifications to what we ascribe mind, or consciousness to, and I think they are at the heart of discussions like this. Our ethics regarding other animals are strongly informed by our views on whether and to what degree they share this slippery quality. Already, in ascribing it to humans – as opposed to only one human – we have created a fuzzy category that recognises it is a quality that is found in various distinct physical assemblies that are only similar at a certain macroscopic level of granularity. In the process of claiming that all humans have mind, we implicitly quantize humans, reduce them to a symbolic description.

To speak of mind as anything else but “my qualia”, then, is to speak of a symbolic quality, not some ineffable quality of nature. As intuitively appealing as it is to dwell on the Taoist-like realisation that our individual experiences of being ourselves might defy being completely rendered in symbols, I think it’s irrelevant to the discussion.

136

Farren 02.23.10 at 1:25 pm

Which is to say I actually subscribe to the view that words are not the moon, they are the fingers pointing at the moon. That there are things that can be learned through symbols and things that can only be learned through being. And so on.

But I think it’s just plain silly to act as if the numbers on the screen at the checkout counter are meaningful, and respond appropriately, to pick out a specific product at the shop because your partner said “we need milk”, to stop at a traffic light because it’s red, then to turn around and say “all your symbols for mind R useless” when trying to settle on a satisfactory definition of something that will similarly influence social contracts in the future.

137

bianca steele 02.23.10 at 3:31 pm

Is this still a discussion of The Matrix?

@noen
Please forgive me if I don’t find what you’re saying interesting or intelligent enough to even momentarily concede that your “definitions” are more interesting than mine, even for the sake of being nice.

138

JoB 02.23.10 at 4:24 pm

Nope, this is most definitely a discussion in The Matrix.

139

noen 02.23.10 at 5:09 pm

@ Walt 130
See, that’s what’s puzzling. Is my calculator conscious? No. But is it potentially conscious? Why not?

Because syntax has no causal power and a calculator is nothing but syntax realized in either whirring gears or silicon. In order to build a conscious mind we will need to duplicate the causal relationships that give rise to consciousness in biological minds. A camera replicates the causal relationships that exist within biological eyes. It doesn’t simulate them. I could build a model of a camera out of any material I choose but if it did not duplicate the causal relation between the lens and the film, if I made the lens from wood for instance, it would not be a camera. It would only be a model of a camera. The computational model of consciousness claims that a model of a mind is a mind. The Chinese Room refutes that narrow specific claim.

==
@ bianca steele 137
“Please forgive me if I don’t find what you’re saying interesting or intelligent enough to even momentarily concede that your “definitions” are more interesting than mine, even for the sake of being nice.”

Thank you for sharing your ideas with us. I’m looking forward to your future contributions.
==
@ Substance McGravitas 132
“Not a non sequitur, because the CRA does address the problem of consciousness: that’s why it exists.”

I’m sorry but no it does not. It specifically attacks the computational model for consciousness. Other solutions to the problem of consciousness are not addressed.

“I, like Searle, believe that humans are conscious machines, that we can compute.”
That’s the same thing as saying “A machine will one day be able to simulate this.”

Again no, those two things are not the same. The computational model makes a very specific claim that consciousness can be reduced to a formal abstract system that is independently realizable in any physical system you like. One could therefore build a conscious mind from tinker toys or hydraulic valves or a vast expanse of clerks sitting at their desks scribbling in a book. The claim was that this is what a mind is and that when they wrote their programs they were creating minds.

The Chinese Room argument refutes that claim, and I have yet to see many here besides John Holbo who even understand that, much less attempt a reply.

140

bianca steele 02.23.10 at 6:12 pm

JoB, I assume you are joking, but noen does seem to have set her/himself up as a kind of discount-store Morpheus here. If you admit there is some meaning behind the non-syntactical “nonsense,” you get to go through the rabbit hole and be “liberated” or “enlightened”–though once through you’ll have to supply all de-matricizing cells and the rest on your own. If you try to keep noen to utterances that make sense, you’re still only a “computer.”

Of course, noen may be joking too, just trying to make people realize they don’t understand what the discussion is really about, never will understand, and should leave and shut up. Probably noen thinks he does understand. But noen might think s/he also does not understand and thus has a responsibility to keep the other people who also don’t understand out of the discussion.

Though the latter could also be called “enlightenment” or “liberation.”

And anyway, I’m joking too: I don’t know what noen is thinking, and for all I know, I’m the only person in the world who would react to what he says the way I do.

Of course, it’s possible he really knows something and if I stop being so mean to him, get over all my “emotional baggage and so forth,” I’ll learn something about what he knows. But I don’t know whether being a kind of discount-store Morpheus is something s/he would be proud of.

141

Greg 02.23.10 at 6:16 pm

One could therefore build a conscious mind from tinker toys or hydraulic valves or a vast expanse of clerks sitting at their desks scribbling in a book. The claim was that this is what a mind is and that when they wrote their programs they were creating minds. The Chinese Room argument refutes that claim.

No, the Chinese Room argument as you’ve presented it says ‘Look at this counter-intuitive scenario. Surely this isn’t possible! Therefore it isn’t.’ I am fine with saying ‘our theories of cognition make it reasonable to believe that structures homologous to the structures of our brains, albeit instantiated in different materials, are functionally equivalent. We would be justified in granting the presence of consciousness to them to the extent we would grant it to other people.’

I would like to know how you think human minds get from syntax to semantics. Maybe you could revisit my comment at #114 (delayed posting, I screwed up my e-mail).

142

Substance McGravitas 02.23.10 at 6:31 pm

I’m sorry but no it does not. It specifically attacks the computational model for consciousness. Other solutions to the problem of consciousness are not addressed.

Assuming “consciousness” is something of a tell that an argument is about consciousness. I agree with Greg that the Chinese Room argument is pretty much handwaving. Does an ear understand English? It accepts the input! Does a tongue understand English? It forms the output! Which nerve is the one that understands English? Which brain cell?

143

JoB 02.23.10 at 8:46 pm

140- sure I was joking but my joking has a meaning; jokes without that are just mere computation. I’ll say more: noen passed the Turing test imho. No Morpheus – but whatever the hell that subroutine was called that appeared as a black grandma.

I have no sympathy whatsoever for computational models of mind or consciousness in this life or the other, in brains or in elementary quantum particles (somebody once had a very emotional opinion about charms being a unit of consciousness). But I think Searle, Nagel, Penrose & so on are getting too much mileage out of something that is essentially a cheap trick.

Of course there is more to language than ‘syntax’ – that is probably why there is a whole field called ‘semantics’, du-huh.

144

Greg 02.23.10 at 8:59 pm

Of course there is more to language than ‘syntax’ – that is probably why there is a whole field called ‘semantics’, du-huh.

If that’s a dig at my comments, I know and appreciate the difference – but I am trying to use the terms the way noen is (syntax meaning something like rule-following, semantics meaning something like having a subjective/first-person experience rather than simply displaying behavior). According to his use of those terms, I don’t see how his position makes sense.

145

Walt 02.23.10 at 9:09 pm

noen: This is where the dramatic performance comes in. You’re not giving an argument, just like Searle is not giving an argument. You’re just repeating your conclusion in various analogies. Since we don’t actually know the causal mechanisms behind consciousness, we don’t know whether a sufficiently complex Turing machine could do it. Once upon a time, Turing machines couldn’t do optical character recognition, and now they can. The Chinese Room argument doesn’t give us any reason why consciousness can’t be in that same category, except by an appeal to an intuition that the analogy seems so unreasonable. I’m prepared to accept the possibility that the whole room understands Chinese, or whatever other shocking conclusion I’m supposed to draw. Searle’s argument is a good way for him to articulate his (and apparently, your) intuitions, but those intuitions don’t carry enough force to be called a refutation.

146

noen 02.24.10 at 6:50 am

@ Walt 145
“This is where the dramatic performance comes in. “

I don’t wish to engage in any drama. I don’t want people to think I am upset or for them to get upset. It’s just philosophy, nothing important. ;) What I would like to do is explain myself as best as I can. I still feel the argument is valid, just that I have failed to defend it well. I’ll try harder. You all say I am just repeating myself, well, so are you.

The Chinese room argument does not rely on the reader to just “get it” and somehow grok or intuitively grasp its meaning, have the clouds part and the Truth descend from above. It relies on direct contradiction.

The computational model states that syntax is sufficient for semantics. That the appropriate series of purely syntactic instructions (a computer program) executed by a Turing machine (the computer hardware) is all that is needed for a machine to be conscious (to understand Chinese).

Hypothesis:
IF (the syntax is implemented) AND (it is executed by a Turing machine) THEN (this is sufficient to understand Chinese).

Nevertheless, the man in the room does not understand Chinese. Therefore the hypothesis is false by contradiction.

Why doesn’t the man understand? The computational hypothesis states that he has all he needs to understand Chinese. The answer is that the man has no way to get from the purely formal symbolic manipulation of syntax to the semantic content or meaning of the Chinese symbols. And if the man has no way to get from the syntax to the semantics then neither does the whole room or system.

Searle:
“I never really had any doubts about the argument as it seems obvious that the syntax of the implemented program is not the same as the semantics of actual language understanding. And only someone with a commitment to an ideology that says that the brain must be a digital computer (“What else could it be?”) could still be convinced of Strong AI in the face of this argument.”

Now I expect that some will complain that it is obvious that the program must be the one that understands Chinese, not the man. But this is impossible because all that computer programs are is syntax. They have no semantic content whatsoever. This is not a proposition about which there may be some ambiguity, as John Holbo and others may claim (as above). A program is defined by its syntax just as a triangle is defined as a polygon with three sides. It may be true that in other disciplines there is some fuzziness between syntax and semantics, but this is not the case in computers. Formal languages alone do not have semantics.

“Turing machines couldn’t do optical character recognition, and now they can.”

This is a non sequitur and what Searle calls the “wait till next year reply”. Just because computers today can do things they could not do before does not mean there is not something no computer can do. And there is one thing that no Turing machine will ever have, a Verstehen of Chinese.

FYI — I am a she, not that it should matter.
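A brief illustration of the “purely formal symbolic manipulation” noen describes above: a toy rule book written down in Python (the rules are invented for illustration, and the sketch takes no side on whether following them could ever amount to understanding). The operator, like the man in the room, consults nothing but the shapes of the symbols.

# A toy "rule book": incoming strings are matched and answered purely by
# pattern lookup.  The operator (this program) never consults anything but
# the shapes of the symbols it is handed.

RULE_BOOK = {
    # question (characters in) -> reply (characters out); entries invented
    "你好吗？": "我很好，谢谢。",
    "今天天气怎么样？": "今天天气很好。",
}

def chinese_room(slip_of_paper: str) -> str:
    """Return whatever reply the rule book dictates, or a stock fallback."""
    return RULE_BOOK.get(slip_of_paper, "请再说一遍。")

print(chinese_room("你好吗？"))      # the room answers fluently...
print(chinese_room("雪是白的吗？"))  # ...and falls back when no rule matches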

147

Farren 02.24.10 at 8:07 am

@Noen

What’s being proposed in the Chinese Room, and what Searle was originally responding to, was not formal languages possessing understanding, but physical devices running programs written in formal languages, so “formal languages don’t have semantics” is already off the mark.

Also, as I’ve tried to point out, you haven’t provided a satisfactory definition for understanding (or “semantics” as you put it), some distinction between an assembly of matter processing a word and “understanding” a word. Turing did provide a satisfactory definition and ran with it. And by that definition, anything that produces an appropriate response implicitly possesses understanding.

So you’re employing some different, ephemeral meaning for “meaning”, then declaring that a device running a program that employs a formal syntax “obviously” doesn’t have that. Therein lies the intuitive leap. You’re simply declaring that humans possess some quality of grokking words in a way that the device cannot, “obviously”, based apparently on nothing more than your intuition about how humans grok things.

Clearly understanding is an experience, and any assembly of matter can have an experience. It is also clear that no two humans have the same experience, so the “experience of being human” isn’t a sufficient definition of understanding. And absent some clearly defined quality that we can show exists in humans and doesn’t in machines running programs, you really haven’t shown that humans have understanding and machines can’t.

Perhaps it would make it clearer to imagine yourself telling someone that humans have snerf and that snerf clearly doesn’t exist in syntax-processing machines. Not knowing what snerf is, your audience would obviously inquire what it is so that they could examine a machine and see if it’s present. And at that point, following your pattern on this thread, you would reply, “something that obviously can’t arise from these machines processing syntax”, which is no answer at all.
