My friend Eric Schwitzgebel (philosophy prof. at UC Riverside, but once upon a time we played quite a bit of poker, once a week) craves responses to an online survey he devised with Fiery Cushman (a psychologist at Harvard). It’s ‘the moral sense test’. I gather it is intended to investigate whether respondents with academic philosophical training respond differently to a suite of moral dilemmas (you know, the usual sort of potted philosophy cases) than do others (you know, the man on the street, mere mortals, Joe the Plumber).
I realize that trolleycar-style ethical theory is regarded by many with a certain degree of skepticism – nay, it is the tipmost taper on the candelabrum of ‘not very punk rock’. Please feel free to use the comment box to express such sentiments, as your intellectual conscience and spleen dictate. But it strikes me as rather a good idea to investigate the sociology of philosophy, as it were, by checking to see to what degree academic philosophers’ ‘intuitions’ are, indeed, shared by non-philosophers. So I’m John Holbo and I approve this experiment.
UPDATE: Since we are discussing the survey in comments, you might want to take it before reading comments, if you are going to take it at all.
John Quiggin 10.16.08 at 7:17 am
An immediate moral dilemma arises: “An Internet moral reasoning test is inordinately slow to load. Do you (a) persevere, or (b) just bag it out on the basis of a guess about its contents made on the basis of a blog post”
Having chosen (b), let me offer the following. It’s always struck me as bizarre to (1) assume that the job of a moral theory is to provide a theoretical account consistent with moral intuitions; than (2) derive your evidence about moral intuitions by asking people about their hypothetical choices in totally counterintuitive situations.
John Holbo 10.16.08 at 7:33 am
I can’t really argue with b) but it’s loading fine for me.
As to 1) and 2), I agree with 2 in the following sense. There is a serious problem that tests like this one don’t address adequately (I suspect). Namely, different respondents don’t take the cases in the same way. It’s not so much differing moral intuitions as differing senses of the social and intellectual conventions governing use of potted moral dilemmas. It’s genuinely interesting to investigate whether what philosophers are pleased to call their ‘intuitions’ are shared by others, outside the seminar room. But I’m not sure that tests like this can really get at that interesting object of investigation satisfactorily. I would think that a better test strategy would be to devise a longer, but more realistic scenario – something hypothetical but more fully realistic: about organ sales and donations, or responses of hospital personnel to a disaster or whatever – and tell the whole story, news or documentary-style. A few thousand words of story. Make it fit into a non-seminar-room genre of story-telling. Then try out a few variations on the story. Get reactions to that.
As to 1): I don’t think that most ethical theorists are guilty of this to an idiotic ‘bizarre’ degree (though sometimes, yes). Actually, it’s more of a reflective equilibrium thing (which may seem like weak tea, but it isn’t so bizarre). Also, I think one of the goals of this particular experiment is to try to gather empirical data that would cast doubt on the merits of 1).
Chris Bertram 10.16.08 at 7:46 am
John (Q) … should that be “then” before (2)??
(Sorry, having difficulty interpreting your 2nd para.)
John Holbo 10.16.08 at 7:47 am
I have a strong moral intuition that he meant ‘then’.
Z 10.16.08 at 7:47 am
The test loads fine for me, once you have guessed that, contra the stipulations, you have to answer the very last question of the preliminary data gathering even if you don’t have a degree in philosophy.
Apart from that, yeah well, that was fun but “the autonomization of intellectual labour leads us to believe in the independence of ideas and works, in their total detachment from their conditions of production” and all that jazz. Nothing new, but surely someone should say it from time to time.
Z 10.16.08 at 8:00 am
I would think that a better test strategy would be to devise a longer, but more realistic scenario – something hypothetical but more fully realistic: about organ sales and donations, or responses of hospital personnel to the disaster or whatever – and tell the whole story, news or documentary-style.
Indeed, when I was not entering a fine amount for people throwing concrete bags to the ground, I was asked to imagine cruise ships sinking. I must say that having read the harrowing account of the Estonia sinking by William Langewiesche probably influenced my answers.
J Thomas 10.16.08 at 8:11 am
If you discuss the test then some others will read your discussion first and then take the test. You will influence their answers and bias the test to some unknown but surely very small degree.
But if you don’t discuss the test then you don’t get the fun of discussing the test on Crooked Timber in the heat of the moment, and the moral value that others might get from reading your wisdom in this context will be lost.
How moral is it to discuss the test here?
Seth Finkelstein 10.16.08 at 8:21 am
Hmm … if they want data from “the man on the street, mere mortals, Joe the Plumber”, an Internet survey is not the way to go. But let’s take non-academics as an approximation. Still, they might not want to pose a head-scratching dilemma in the very first screen – “My profession is computer programming – which of the very few classifications does that fall under?” (I finally went with “Services”)
For the problem noted in comment John Quiggin / #1, I also chose alternative b).
David Moles 10.16.08 at 8:55 am
But the Internet survey approach is perfect for brains in vats on Twin Earth.
What strikes me about this test is that too many of the questions depend on a level of certainty, or a predictive power on my part (“the brain knows trolleys”) that I’m just not willing to stipulate.
David Moles 10.16.08 at 8:58 am
(Especially the motorboat one. Maybe it’s just because I don’t know motorboats, but boy howdy do the certainties in that one strike me as stupid things to be certain about.)
Preachy Preach 10.16.08 at 9:13 am
Seth> As an in-house tax specialist for a law firm, I understand your plight…
jholbo 10.16.08 at 9:50 am
I thought about turning off comments on this post but decided that would probably just result in someone complaining about trolleycars in someone else’s CT thread. Humanity is irrepressible in its desire to express opinions about the wisdom of philosophers’ inventing these sorts of potted cases. I agree that it is very hard to ensure that the samples for these surveys are representative in any meaningful way.
Steven 10.16.08 at 10:25 am
David makes a good point. It is quite a salient feature of reality that most of the time we don’t know for certain what effects our actions or inactions will have, even if maintaining a pretence that this is not true makes for pleasant gotcha exercises among philosophers.
John Quiggin 10.16.08 at 10:39 am
“Then”, sorry
John Quiggin 10.16.08 at 10:51 am
I don’t think statistical accuracy is too important. While I’d be immensely pleased if the votes in the forthcoming election came out 59-41 in favour of Obama, I can’t imagine that you’d get too far by demonstrating that, among likely voters questioned after the third debate, 59 per cent favoured pushing the fat man onto the tracks, with only 41 per cent opposed, thereby ensuring that, if a ballot had been held on the day in question, consequentialism would have won in a landslide.
To put it more simply, these examples seem to rely pretty strongly on the claim that the desired intuition is universal (except for a handful of sad mistakes), which is why the dice are so heavily loaded in most cases.
Dan S. 10.16.08 at 11:02 am
My answers were remarkably incoherent.
{shrugs}
Adam 10.16.08 at 11:05 am
I wonder how many 1 year olds have completed the quiz.
rmz 10.16.08 at 11:16 am
You wake up one morning to find that your cat is sitting on the counter in front of your coffee maker. No matter what you do, the cat is not willing to move out of the way of the coffee maker, rendering you incapable of making coffee. You will have a terrible morning if you do not make coffee, and you know your terrible morning will negatively affect the lives of five other people. You know that the cat will never move of its own free will. You know that the cat does not like to be pushed off of counters. If you push the cat off of the counter, you may make coffee.
Pushing the cat off the counter is:
Extremely morally good….. neither good nor bad…. extremely morally bad
Matt 10.16.08 at 11:22 am
To my mind this sort of test shows mostly two things. First, that it’s really hard to make good tests or surveys, and that this one doesn’t really work. (I say this for reasons noted by many others above, but also some other reasons.) Secondly, that I don’t think these types of questions are that important for moral philosophy, perhaps even harmful for it, since they involve so many unusual situations that it’s quite implausible to think that we would (or should!) have clear intuitions about them or else that anything useful would follow from them. (It also seems wrong to me that moral philosophy is properly about reasoning from cases like this – it seems to already assume certain facts about ethics that I think are highly dubious – intuitionism, perhaps, and that moral evaluation applies to states of affairs, maybe.) This isn’t just a problem for ethics – it seems to me that drawing specific conclusions from highly implausible thought experiments is a dubious approach to philosophy in most areas – metaphysics and epistemology, too. (This is why I’m not so keen on most “twin earth” style examples, Mary in her colorless room, etc.)
Matt 10.16.08 at 11:23 am
rmz- you’re making the question too easy! (Also, much closer to life and easier to predict with great certainty than most of the examples in the test.)
rmz 10.16.08 at 11:47 am
@Matt
Well, I hope I captured some of the silliness of the puppet theater.
Maurice Meilleur 10.16.08 at 11:47 am
Maybe thinking about this is what makes me a political theorist and not a philosopher, but what bothered me about the test is that the questions stopped at my ethical evaluation of the proposed action, rather than going on to ask me if I would perform the action anyway, or if it would be the right thing to do nevertheless.
Especially in extreme circumstances, the choices we face in life are typically not between moral goods and moral evils as such, and especially political actors often face choices between moral evils of varying severity. Those force us into choices leaving what Bernard Williams called a ‘moral remainder’ we can’t reduce by pretending what we did was ethically good because it was the right thing to do under the circumstances.
And is it too much to ask professional philosophers to crack open a history book or read a newspaper? It’s depressing to see question after question about abstract contrived scenarios like fires in orphanages and people standing on railroad tracks with trap doors and switches. Haven’t humans created enough extreme scenarios in real life that would provide much more compelling sets of circumstances for ethical reflection?
I think I need to go ‘push the cat off the counter’ and get some coffee.
John Holbo 10.16.08 at 11:51 am
“I can’t imagine that you’d get too far by demonstrating that, among likely voters questioned after the third debate, 59 per cent favoured pushing the fat man onto the tracks, with only 41 per cent opposed, thereby ensuring that, if a ballot had been held on the day in question, consequentialism would have won in a landslide.”
Election slogans:
It’s the fat man on the tracks, stupid!
Maurice Meilleur 10.16.08 at 11:53 am
Oops, I probably misspoke about ‘cracking history books’. If memory serves, Hitler often shows up in these sorts of questions–as in, ‘The baby you kill now to save the fifteen orphans in the burning building would otherwise grow up to be Hitler’. Does Godwin’s Law apply to philosophy, too?
rea 10.16.08 at 12:05 pm
This is a horrible test – all the questions remind me of the “ticking time bomb” torture hypothetical – nothing resembling a real world situation.
joXn 10.16.08 at 12:06 pm
Aside from the ridiculous levels of certainty involved in these questions, another issue not addressed which would be important to me in the moment is the participation of the other people in the scenarios. Why isn’t “you drive the boat, I’ll try my luck in the water” an option? Why isn’t asking “can I try turning off the power in order to trigger the alarm, even if we’re taking a risk with your life support machine” an option?
eric 10.16.08 at 12:08 pm
isn’t it possible that the test isn’t so much about what you choose, or why you choose it, but your consistency in applying moral principles? hence the questions about physical proximity? (and also your professed belief system) an internet survey doesn’t seem like a terrible way of asking about that. i thought it was relatively clever that you weren’t allowed to fill in a bubble, see how it looks, and then change your mind. (maybe that’s standard and I just don’t take so many of these tests).
Katherine 10.16.08 at 12:17 pm
The silliness of such scenarios was amusingly shown in Stargate Atlantis. Thus proving that sci-fi is better than philosophy.
Righteous Bubba 10.16.08 at 12:30 pm
If you are a familiar with the following ethical theories
Meow.
John Holbo 10.16.08 at 12:37 pm
“The silliness of such scenarios was amusingly shown in Stargate Atlantis.”
Atlantis. Silliness. Footnotes to Plato.
But seriously. What happened in Stargate Atlantis?
Lex 10.16.08 at 12:42 pm
Nothing much. That whole franchise went downhill after the fifth series….
Tyler 10.16.08 at 12:43 pm
@rea: exactly — this is the Jack Bauer school of morality. Why are moral philosophers giving intellectual cover to this sort of thinking?
And, um, yes, if you screw up and kill someone it’s worse than if you screw up and don’t kill someone. Even though you’re still an asshole if you get drunk and plow your car into a tree and you should have your license revoked, etc. Is there a “phew, that was close” theory of moral philosophy?
richard 10.16.08 at 12:59 pm
I’m worried about the design of the diving exhibit at the aquarium, and their public-accessible control panels for switching the oxygen supplies around. Remind me never to go into any “diving pods” in coastal California.
John Emerson 10.16.08 at 1:02 pm
I realize that trollycar-style ethical theory is regarded by many with a certain degree of skepticism – nay, it is the tipmost taper on the candelabrum of ‘not very punk rock’…….I thought about turning off comments on this post but decided that would probably just result in someone complaining about trolleycars in someone else’s CT thread.
In other words, you want people in general to talk about trolley-car problems, but you don’t necessarily want me to talk about trolley car problems. So I won’t. Not much, anyway. I’ll go off-topic instead.
My big beef with trolley car problems is that they’re an especially vivid example of the academic / analytic philosophy practice of decontextualizing problems before analyzing them. Sometimes this works and sometimes it doesn’t, but the academic bias is overwhelmingly in the decontextualizing direction. The original trolleycar problem with the switchman was real-world, but the elaborations and refinements became increasingly ludicrous. It would have been better to start collecting real-world examples of the conflicts between moral intuitions / deontic ethics / emotive ethics / traditional ethics on the one hand, and consequentialism on the other — cases from warfare, public health policy, economic planning, policing practices, etc.
I just recently looked at Parfit’s book arguing that ethics should be more impersonal and realized that in practice for most people, ethics is personal by definition: who you are, what you’re willing and not willing to do, how you feel about the behavior of others you know or know about. At the same time, most people also have a practical consequentialist way of thinking which often conflicts with the moral sense, and people often describe consequentialist choices as practical compromises and, in some cases, as tragic necessities. (“A man’s got to do what he’s got to do”). Cases have been documented when individuals living according to violent honor codes followed that code in full knowledge that they were committing mortal sins.
Individuals who make consequentialist choices which conflict with intuitive or deontic ethics are usually doing so selfishly, or for the sake of a family or local community, and thus can be described as acting wrongly because their consequentialism is not universalist. But in theory government administrators, etc., make consequentialist choices for the general welfare, and normally it’s in cases where the persons involved are mostly distant and faceless. Non-consequentialist forms of ethics tend to lose some or most of their persuasive force in such situations.
All the kinds of things that are implausible or offensive about consequentialism for individual ethics (the impersonality and sometime ruthlessness, the assumption of an unreasonable amount of knowledge about consequences, the belief that goods and bads can be quantified and aggregated on a single scale) are almost inevitable for administrators of large organizations of any kind. Consequentialism is an administrative ethics. Administrators have to think that way. There’s no intuitive administrative ethics.
Which leads me to my point about consequentialism as a proposed individual ethics. (Actually, this only applies to altruist consequentialism). What it does is make the individual a kind of public official administering that part of the social aggregate which is legally or otherwise defined as “his”. An individual is supposed to be a cog in an enormous rational administrative machine, the smallest capillary division of Society.
This view has a lot of deficiencies — mainly the assumption of more knowledge than an individual can realistically have, the repression of the local knowledge he actually does have, and the submission of every individual to a mysterious distant group. I do not think that it’s necessarily a good thing for individuals to submit their intuitive ethics to the universal or public ethics. (Huckleberry Finn, Eichmann, etc.)
My conclusion has been that utilitarian thinking and ethical thinking are two different things that act as checks and balances on one another, and it’s seriously wrong to try to reduce one to the other or to subordinate one to the other. (Some of these criticisms would also hold for universalistic forms of deontic logic).
Toulmin’s “The Abuse of Casuistry” describes concretely contexted ethical thinking, with special attention to the Catholic Church’s ways of treating the utilitarian-deontic dilemma.
John Emerson 10.16.08 at 1:14 pm
I think that I could add: for Catholics up to a certain point, and maybe now, the universalist ethics was deontic and somewhat intuitive, whereas the consequentialist ethics (consequentialist thinking) was local — personal or governmental (and government was personalized in the middle ages). Consequentialist ethics represents the universalization of state and private thinking at the expense of religious and deontic thinking. Consequentialist conflicts can either be conflicts between universal and private consequentialism (most often) or between consequentialism and some other form of ethics.
e julius drivingstorm 10.16.08 at 1:14 pm
I’m afraid I skewed their data. I opted out at the fourth scenario.
John Emerson 10.16.08 at 1:16 pm
But by contrast, contemporary consequentialist ethics represents the universalization of state and private thinking at the expense of religious and deontic thinking. Consequentialist conflicts can either be conflicts between universal and private consequentialism (most often) or between consequentialism and some other form of ethics.
robert the red 10.16.08 at 1:27 pm
I also opted out at the fourth scenario. An un-presented option is suicide, rather than murder.
ajay 10.16.08 at 1:32 pm
I think most of those scenarios were fundamentally flawed, because they lay down that you have absolute knowledge about the future. The five patients will DEFINITELY die. The oxygen will DEFINITELY not last long enough. The rescuers will DEFINITELY not reach you in time. I don’t think our moral intuition is set up to deal with that sort of certainty. Even in the Sophie’s Choice shoot-the-hostage one, you’re supposed to be absolutely confident that the terrorist kidnapper will keep his word and help you escape. Dude, a) he’s a terrorist kidnapper and b) he’s just agreed to betray all his fellow terrorists! Don’t trust him an inch!
Cheryl Rofer 10.16.08 at 1:33 pm
The survey was nonsensical. Some of the nonsense has been listed upthread.
I find it hard to believe that serious ethicists can inflict this sort of thing on people. I guess they are trying to load their scenarios with various theoretical considerations and test which ones people will prefer/react to.
I lost interest in the test at the question about someone (a woman, no less!) having developed two substances and not knowing whether they were a cure or a deadly poison. I’m a chemist (and a woman, no less!), and most chemists would have a pretty good idea just from the structure of the compounds which would be which. And (ahem!) one tests such things on microorganisms or, at most, a vertebrate like a mouse.
Part of the descent into nonsense comes about because the test designer wants to force choices. So he (the actual sex of the investigator, apparently) puts time constraints on the scenarios (You are driving a motorboat…) which make them much less believable than most of our real life, which is where we derive our ethics.
My snark about the sex of various participants is a minor point of what I’m saying, so don’t bother to read much into it. It’s just that, as a woman chemist, I was quite offended by the stupidity of the imaginary woman chemist in the scenario. Perhaps I should have been offended by the stupidity that went into the development of that scenario.
ajay 10.16.08 at 1:33 pm
My point – and I do have one – is that I tended to ignore these precepts and behave in the way that gave everyone the highest chance of living for the longest time. Buying time is good – it gives time for better options to appear.
novakant 10.16.08 at 1:34 pm
I completed the test, but have to say it’s extremely lame:
– if I could be 100% sure of the outcome, of course I would choose 5 lives over one life
– we never, ever face such situations in real life, so I don’t understand why philosophers rely on them
– and if we did face such a situation, we wouldn’t have the time and composure to engage in moral reasoning, but would go by instinct and character
Compared to novelists who spend several hundred pages outlining situations, events and characters leading up to moral dilemmas that are generally far less dramatic, I don’t think this holds up very well.
novakant 10.16.08 at 1:35 pm
line breaks don’t seem to be working, sorry
Cheryl Rofer 10.16.08 at 1:36 pm
Oh, and I continued to check boxes so that I could see just how far the nonsense descended.
So I contributed to meaningless statistics in the results. There may be an ethical problem with that, or perhaps it is extremely morally good to bust up something that can’t possibly provide useful results, even though its designers believe it might.
matt 10.16.08 at 2:02 pm
In a bit of fairness to Eric, in another forum he points out that he hasn’t specified (to those taking the test) what he’s trying to study. Given that, it’s not clear how directly the objections above (including mine) hit. Maybe they hit directly but maybe not, since we don’t know exactly what’s at issue.
Righteous Bubba 10.16.08 at 2:04 pm
In unfairness to Eric there should have been more proofreading.
Mrs Tilton 10.16.08 at 2:15 pm
RMZ @18,
Pushing the cat off the counter is: Extremely morally good….. neither good nor bad…. extremely morally bad
You have a morally superior alternative to pushing the cat off the counter that will still let you have your coffee, if there is a blender nearby that you can stuff the cat into.
qb 10.16.08 at 2:21 pm
As usual, there’s been a lot of complaining about abstract thought experiments in this thread. I’m ambivalent, so let me play up the contrarian side by way of analogy. There’s an old joke about a physicist who, for some reason relevant to the punch line, goes up to a blackboard, draws a circle, and then says, “Imagine that’s a cow.” The point is, what the cow looks like is not usually relevant to the kinds of questions physicists study. There’s this assumption that ethical theorists are supposed, at the end of the day, to tell us what to do with ourselves in highly contextualized situations–and there are plenty of ethical theorists who think so too–but that’s like assuming theoretical physicists are useless if they can’t give us very specific instructions about, I don’t know, how to design cow-friendly space shuttles or something. There’s a lot of thumb-twiddling on the “high theory” end of ethics (just look at metaethics–zing!), but there’s also a lot of just straight-up moralizing crap on the applied end. There’s a balance to be had between crazy thought experiments and down-home folksy wisdom, but why think everybody should be looking in the same place?
John Emerson 10.16.08 at 2:22 pm
Stuffing a cat into a blender is only morally good if you love cats but understand that according to the moral law they should be stuffed into blenders. If you hate cats, you are not doing the wrong thing, but are not behaving in a properly moral fashion. If you want to do the right thing, you are behaving selfishly by doing it, though your selfishness is not culpable. Adolf Eichmann explained this decades ago.
Mrs Tilton 10.16.08 at 2:23 pm
I agree that trolley car tests are pretty useless at assessing people’s moral sense. But I thought that this particular trolley car test was looking not at that, but rather at whether people with advanced philosophical training respond to trolley car tests differently to people without? If that’s the case, the whole thing might be meta enough that the idiocy of the questions isn’t very relevant.
John Emerson 10.16.08 at 2:28 pm
There are reasons why abstract theory does not have the same status in ethics that it does in physics. Nice try, guys.
engels 10.16.08 at 2:51 pm
It amazes me that philosophers don’t realise how stupid, silly, ridiculous and nonsensical they are. Don’t they read the internet?
Ben Alpers 10.16.08 at 2:51 pm
I agree with most of the criticisms of these kinds of scenarios as providing us with serious information about moral intuitions. The two things that most bother me about them are the assumption of perfect knowledge of the consequences of one’s actions, as well as the extremely thin account of who the people involved are (are they friends? family? strangers? Hitler? Gandhi?). Real ethical dilemmas involve actual people, not Ethics Dilemma Meeples.
However…
1) My sense is that this test is designed to discover something about these kinds of scenarios as much as it is designed to get at actual moral intuitions. If so, it’s very well designed, as these are very good examples of a particular style of contemporary ethical “thought.”
2) The connection with the Jack Bauer/”ticking time bomb” nonsense is instructive. These kinds of scenarios may be worse than useless as a moral philosophical exercise. But they clearly play a role in the way public policy is formulated (or at least sold). Exploring how people think about such scenarios is thus important, even if it isn’t getting at anything ethically significant.
qb 10.16.08 at 2:53 pm
And those reasons are…
notsneaky 10.16.08 at 2:54 pm
I tried but it was too buggy. First it tried to make me religious (the trick here is to answer the religion question first, otherwise it’ll only let you be almost-strongly religious). Then it insisted I have a degree in philosophy. Finally when it came to drunk driving some poor little girl to death it wouldn’t let me pay the millions in damages I intended. So on the scale from “Do whatever it takes to complete the quiz” to “Give it a few tries then give up if it’s too buggy” to “Don’t bother in the first place” I chose the middle answer which is what I probably would’ve answered to most questions anyway.
Joel Turnipseed 10.16.08 at 2:55 pm
I don’t know: I like trolley car problems, because they show up quite nicely the trickiness in sorting out things like deontological, consequentialist, and virtue-ethics. But also how, as you start to bitch about the construction of such problems, you see how many of them actually, in practice, lead you to similar actions.
My favorite “trolley car” problem is one with which I’m quite familiar, and that’s “Marines don’t leave Marines on the battlefield.” Is this discussed in the ethics literature? At first it sounds like an insanely contra-consequentialist policy (“Why rescue one guy when it might cost the lives of five others?”), but in practice, it gets all Marines to work better together across the board, to take greater risks, risks which might, because of their daring, reduce the risks of everyone at hand. So: is this a consequentialist, or a deontological/virtue-ethics backed policy? Seems starkly the latter (and the training is very much in virtue-ethics domain), but actually has very strong consequentialist grounds, too.
John Emerson 10.16.08 at 3:03 pm
Given that far too many policy people are whiz kids from the Ivies, I think that the canned trolley-car ethics taught in top schools, beyond allowing us to interpret the way some people think, may actually be a contributing or constitutive factor of those ways. Arguing with whiz kids on the internet about political topics, I often encounter cold-blooded, decontexted, hypothetical, jocular, mad-dog-rationalist ways of thinking that I suspect come from their schooling and very possibly from philosophy.
rmz 10.16.08 at 3:03 pm
Can I go back to reading Geertz and Rorty now?
rea 10.16.08 at 3:06 pm
In a bit of fairness to Eric, in another forum he points out that he hasn’t specified (to those taking the test) what he’s trying to study.
If he’s testing how long it takes for people to stop answering his questions in a mixture of frustration and rage, he got some data from me.
“Rage” is probably an overreaction on my part – based on some things going on in my life right now, I can’t bear even hypotheticals about people dying and me being unable to save them. Setting aside my own emotional reaction however, the only ethical response to any of these questions is to try like hell never to get in such a situation. That’s not an option you get on the quiz, however – the only options you are given are all repugnant ones.
OneEyedMan 10.16.08 at 3:07 pm
Anyone have a recommendation of a text that studies these issues but is accessible to casual study? I’m interested but unwilling to wade through a primary text.
lemuel pitkin 10.16.08 at 3:09 pm
I just took the test, and it reminded me of everything that I dislike about moral philosophy.
(1) The assumption that it is primarily a specialized discipline. That “What is your interest in moral philosophy” might have answers beyond what degree you got in it apparently did not occur to the test-makers.
(2) The abstraction from context — the assumption that morality is based on a small set of formal properties of a situation.
(3) The assumption that outcomes and probabilities can be known with certainty. This is what vitiates basically all the problems presented in the test. E.g. the probability that throwing a bag of concrete off a roof will hurt someone is the sort of thing that cannot be known with certainty in advance, and the fact that it did hurt someone is important evidence that it was likely to do so. A world in which we can know the ex ante probabilities of different outcomes perfectly without knowing the actual ex post outcome has nothing to do with the world we live in. Similarly, one of the main reasons we distinguish between the direct and indirect consequences of our actions is that the former are more certain than the latter. By telling us again and again to assume that we know with certainty e.g. whether rescuers will arrive in time, the test is abstracting away from one of the basic conditions of moral decision making in the real world.
In short, I think the kind of reasoning embodied in this test has zero relevance to morality as people understand it.
(As I’m sure lots of folks have said in comments already.)
qb 10.16.08 at 3:12 pm
OneEyedMan–If you mean you’re interested in normative ethics, I’d recommend James Rachels’s “The Elements of Moral Philosophy” for starters. If you mean you’re interested in the psychology and sociology of philosophers, I can’t recommend any books, but John Emerson seems to have a lot of opinions and anecdotal evidence.
John Emerson 10.16.08 at 3:12 pm
I think people normally end up arguing that virtue/deontic ethics ends up being consequentially justifiable at some meta level. This is somewhat related to group selection and self-sacrificing altruism in evolutionary biology, and in experimental economics has been studied by Gintis, Boyd, et al. in “Moral Sentiments and Material Interests”.
John Emerson 10.16.08 at 3:14 pm
“Disciplined Minds”, Jeff Schmidt (not specifically about philosophers). McCumber, “Time in the Ditch”. Reisch, “The Cold War and Philosophy of Science”. Mirowski, “Machine Dreams” (not specifically about philosophers).
Dan S. 10.16.08 at 3:19 pm
“But seriously. What happened in Stargate Atlantis?”
What happens in Stargate Atlantis stays in Stargate Atlantis . . .
Has anybody read Jasper Fforde’s “Thursday Next: First Among Sequels”?
“deontic ethics”
I have a persistent eyeworm* where I tend to read that as odontic ethics – which presumably would involve a system of ethics with teeth to it . . . (and “deontological ethics” tends to come out in my head as “dendrological ethics”: can’t see the forest for the trees . . . ?)
* by a remarkably icky-sounding analogy with earworm, but really not the right word at all. Is there one?
Doctor Science 10.16.08 at 3:43 pm
John Emerson:
Yes, but you’re overlooking the strong selection-amplification cycle. To do well in philosophy courses requires that one be able to do this sort of decontexted analysis without collapsing in rage or frustration, and then those selected people have their tendencies amplified by working with this kind of philosophy.
I agree with you that the habits of mind canned-trolley-car philosophy instills are bad habits, an upscale version of Jack-Bauer-thought. In both cases the defect IMHO is in the *story*: both the story itself and the way it is told.
If the point of the test is truly to investigate whether respondents with academic philosophical training respond differently to a suite of moral dilemmas, then the test is certainly a failure. These are not true moral dilemmas, they are potted set pieces — they do not measure anyone’s “moral sense”, but instead measure one’s tolerance for this particular sort of story. And worse, people who are so put off by these set pieces that they don’t finish the test aren’t included in the results at all.
Ben Alpers 10.16.08 at 3:54 pm
but really not the right word at all. Is there one?
Apparently in his book on music, Oliver Sacks uses “brainworm” instead of “earworm” to designate songs that stick in your head. I heard him admit in an interview that the concept was the same, but since he studied brains he went with his neologism.
I suppose “brainworm” also has the advantage of covering a larger potential series of phenomena…though Sacks seems to think that music has an ability to create brainworms that is not shared by visual phenomena.
Ben Alpers 10.16.08 at 3:57 pm
These are not true moral dilemmas, they are potted set pieces — they do not measure anyone’s “moral sense”, but instead measure one’s tolerance for this particular sort of story. And worse, people who are so put off by these set pieces that they don’t finish the test aren’t included in the results at all.
This is a very cogent critique of the study. Why not include as an additional answer something like “this question is ridiculous and has nothing whatsoever to do with ethics”? My guess is that those with no formal training in ethics are more likely to give such an answer to these “dilemmas.” But it would be interesting to see if the data upheld that intuition.
F 10.16.08 at 3:59 pm
The comments here are of the ludicrous “I refuse to answer your hypothetical question” type. Yes, reality is more complicated, but that’s irrelevant. You can still offer an answer to the question that says something about your moral preferences. People complaining about that is like being at a dinner party and being asked an either-or hypothetical question and choosing neither. Sure, it’s more realistic, but it makes you a douchebag.
qb 10.16.08 at 4:05 pm
Well… it indicates that you’re a douchebag.
David Moles 10.16.08 at 4:08 pm
But John, I thought you wanted people to complain about trolleys. (Though now I think I see the ennui behind your “Please feel free…”)
In any case, I hope you didn’t take my comment as an indictment of the experiment. An indictment of trolley problems, maybe. But I think it would, in fact, be an interesting result in the sociology of philosophy if we discovered that ethical philosophers tend disproportionately to believe somebody who tells them that they know just how long it will take five swimmers some way off to drown and just how quickly their motorboat can get there under various load scenarios. Or that they tend disproportionately to equate pushing the fat man in front of the trolley with throwing the switch that sends the trolley down the track with the fat man on it. (Is it more or less ethical if you know that, when pushed, the falling fat man will produce a Wilhelm scream? How about if we add the scream afterwards in the studio?)
What conclusions one might draw from the discovery of such a disproportion is of course another story, but we could always do another survey to see how ethical philosophers’ conclusions differ from those of the general public.
MH 10.16.08 at 4:13 pm
I found the test thought provoking. Also, I was happy not to see a violinist. I got so sick of that guy during my one moral philosophy class.
But, these types of scenarios are sort of stacking the deck against virtue based ethics. For example, much of the reason for the Catholic distinction between “killing” and “letting die” is recognition that we have imperfect information about the results of our actions. And, if there was ever a month that demonstrated just how wrong even experts can be at predicting future outcomes and estimating risk….
(I’ve got to quit looking at the market every 10 minutes.)
chris y 10.16.08 at 4:15 pm
Chris is doing a dumb quiz on the internet. He sees two juxtaposed scenarios where the moral equivalence of the actions described should be apparent to anyone who can achieve functional literacy. However, for unconnected reasons of public policy, he believes that the outcomes for the actors should nonetheless differ in the real world. If he responds in accordance with this belief, there is a strong chance that the researchers’ heads will asplode…
F 10.16.08 at 4:21 pm
@73 agreed. On the other hand, my favorite response to such scenarios comes from those who think they possess superhuman powers and will use them in those situations. There is a noticeable correlation between personality types and those who say “But I could save them all, even if you couldn’t.”
ajay 10.16.08 at 4:22 pm
People complaining about that is like being at a dinner party and being asked an either-or hypothetical question and choosing neither. Sure, it’s more realistic, but it makes you a douchebag.
Is this really how philosophers talk to each other? In which case: yo momma so fat that when she took part in a concocted philosophical scenario they had to build a bigger vat.
lemuel pitkin 10.16.08 at 4:23 pm
these types of scenarios are sort of stacking the deck against virtue based ethics. For example, much of the reason for the Catholic distinction between “killing” and “letting die” is recognition that we have imperfect information about the results of our actions. And, if there was ever a month that demonstrated just how wrong even experts can be at predicting future outcomes and estimating risk….
Right. It’s interesting to recall that this exact trajectory is central to Keynes’ work. The idea that important decisions are not, and cannot be, based on known probabilities of various outcomes was carried directly over from his early interest in philosophy to his economics.
qb 10.16.08 at 4:23 pm
I’m trying to figure out what it would mean to believe that you have certain knowledge in a hypothetical scenario.
More likely, we’d find that philosophers are more willing to stipulate certain knowledge than others, more willing to imagine possible explanations for having such knowledge than others, or more willing to bracket potential epistemic problems than others.
F 10.16.08 at 4:25 pm
Oh, I’m no philosopher. Just a Joe Sixpack who is tired of Randroid types who try to tell me that they’d just use Aikido to kick all the children to safety. Weakling.
John Emerson 10.16.08 at 4:27 pm
F: What’s ludicrous? Some people logged on out of curiosity and decided it was crap. I didn’t even log on, for the same reason. No one did anything like being a prick at a dinner party they’d chosen to attend.
The closer the problems are to plausible, the more reasonable it is to answer them. Many of the more ludicrous ones might be salvageable if restated non-ludicrously. (E.g., the fat man situation is silly, but a more convincing case when it’s a choice between one person you see next to you and several distant strangers could easily be constructed). The ones assuming impossibly perfect knowledge are really noxious.
But mainly, I think that this kind of absence of convincing context has the effect of taking away the gravity, horror, and urgency which is one of the defining traits of serious moral choices, converting it to a puzzle-solving kind of thing (like the Liar’s Paradox, etc.), and too much of that produces a sort of non-ethical toy ethics. Should Algernon have eaten Aunt Augusta’s cucumber sandwiches?
F 10.16.08 at 4:31 pm
But the primary thing that makes a situation realistic is the uncertainty of the outcome. If the outcome is uncertain, the clear choice disappears and everyone chooses the easy way out.
Ben Alpers 10.16.08 at 4:38 pm
But the primary thing that makes a situation realistic is the uncertainty of the outcome. If the outcome is uncertain, the clear choice disappears and everyone chooses the easy way out.
If you admit that the certainty of the outcome makes the situation unrealistic, what is the point of these exercises?
More importantly, in the uncertain real world the choices clearly don’t disappear. People draw different ethical conclusions in real world, messy situations all the time, and they act differently as a result. Messy, real world situations produce real acts of moral bravery and moral cowardice (though we might not always agree which acts deserve those titles). It is not the case that in the real, messy world, everyone takes the easy way out. And it seems to this non-philosopher that if everyone did, ethics would be an even less relevant area of study than it sometimes seems to make itself.
lemuel pitkin 10.16.08 at 4:46 pm
People draw different ethical conclusions in real world, messy situations all the time, and they act differently as a result.
Right. In the real world, outcomes are uncertain, so people use various rules and contextual cues to make decisions. If you want to study people’s moral intuitions, you have to explore what practical heuristics people use for making decisions and what aspects of a situation are judged relevant. Not pick out a tiny subset of factors and conclude that if people don’t respond deterministically to those, they’re inconsistent or confused.
David Moles 10.16.08 at 4:52 pm
Aunt Augusta makes godawful cucumber sandwiches. Algernon was out of his mind.
MQ 10.16.08 at 5:06 pm
The comments here are of the ludicrous “I refuse to answer your hypothetical question” type. Yes, reality is more complicated, but that’s irrelevant. You can still offer an answer to the question that says something about your moral preferences. People complaining about that is like being at a dinner party and being asked an either-or hypothetical question and choosing neither. Sure, it’s more realistic, but it makes you a douchebag.
If anything I think people are going easy on the survey. It really does point out the problems with moral philosophy. The issue is that the questions abstract away from the stuff moral preferences are really about, and then ask you to come up with a “moral intuition”.
jim 10.16.08 at 5:08 pm
It’s interesting that several of us, not being professional philosophers, bailed once we saw how nonsensical the questions were. E Julius Drivingstorm [sic] @37 fears he distorted their data by doing so. Perhaps the proportion of non-philosophers bailing is the data that they’re trying to capture. They do say in the intro that you need not complete the test.
rea 10.16.08 at 5:09 pm
Police Officer: Mr. Philosopher, I’m investigating the drowning of your friend Mr. Hypo Thetical out at Lake Kant last weekend. I’ve just a few questions for you. Tell me what happened.
Philosopher: Well, we saw these 5 swimmers in trouble, and Mr. Thetical is really fat, so I pushed him overboard.
Police Officer: You’re under arrest for First Degree Murder. You have the right to remain silent . . .
lemuel pitkin 10.16.08 at 5:12 pm
Well, I’m sorry to say that I went all the way through. Another 15 minutes closer to death!
But that did mean I learned that Schwitzgebel thinks people might be basing their judgements on whether the hypothetical involved you physically touching the person you harmed. Is there any group of people with *less* grasp on how people assess moral decisions, than moral philosophers?
MH 10.16.08 at 5:18 pm
Rea, I think you should expand on that. Hypothetically, I’m a network executive who could launch CSI: Bentham.
John Emerson 10.16.08 at 5:28 pm
No, no, David Moles. Algernon had had the sandwiches made for his aunt, but ate every one of them himself — without even sharing them with his friend!
All blog commentators should familiarize themselves with “The Vital Importance of Being Ernest”, Monty Python, Spinal Tap, “The Big Lebowski”, “Fargo”, and The Firesign Theater. These works teach you to stay on topic and help you elevate the tone of the discourse.
rea 10.16.08 at 5:49 pm
All blog commentators should familiarize themselves with “The Vital Importance of Being Ernest”, Monty Python, Spinal Tap, “The Big Lebowski”, “Fargo”, and The Firesign Theater. These works teach you to stay on topic and help you elevate the tone of the discourse.
Quite true, John, and moreover, anyone familiar with that list would respond to these hypothetical choice-of-certain-outcomes moral surveys by saying, “Everything you know is wrong!”
Doctor Science 10.16.08 at 5:58 pm
I agree with MH that the scenarios stack the deck against virtue-based ethics. Even more, IMHO they stack the deck against the ethics people actually use, which is relational. That is, (putting on Real Evolutionary Biologist™ hat) humans are social animals, and our moral sense is about right or wrong *relationships*. The trolley-car-type scenarios are fatally flawed because they are premised on perfect knowledge, but they are also (IMHO fatally) flawed because you’re supposed to make judgements without any information about the human relationships involved. That’s IMHO what various commenters mean by saying the scenarios lack “context” — they lack *social* context, and social context is where our moral judgements come from.
My scanning of the basic types of moral philosophy suggests that there *is* no well-developed theory of relational ethics in academic philosophy, though Confucius’ philosophy, for one, was certainly centered around right relationships.
The fact that there is no school of relational ethics suggests to me that the answer to lemuel’s question:
Is there any group of people with less grasp on how people assess moral decisions, than moral philosophers?
is “only people in the autism spectrum”.
David Moles 10.16.08 at 6:22 pm
Ah, yes. Sorry, my Wilde period is about ten years behind me now.
qb 10.16.08 at 6:37 pm
Doctor Science, your link to “the basic types of moral philosophy” will leave you in the dark about the ways in which consequentialism, deontology, and virtue ethics can accommodate all or most of our intuitions about morality; to claim that these views must be mistaken because they are not fundamentally relational begs the very question in dispute among these theories. All the same, the traditional tripartite division of the subject leaves out more non-traditional approaches. In fact, some varieties of “care ethics”, developed mainly by feminist philosophers throughout the nineties, are fundamentally relational. Before making pronouncements about the facts, much less comparisons between philosophers and those suffering from autism, I encourage you to read more than a single (ill-chosen) page out of the Stanford Encyclopedia!
John Emerson 10.16.08 at 6:57 pm
Bad Dr. Science! A page from a standard reference! Study ethics for ten years and get back to me. How could a non-ethicist ever understand ethics?
Most recently people have been telling me that all the things I want in philosophy (or economics) are there on some back shelf somewhere, but just mixed in and covered up with a lot more of the stuff I don’t like. I think that it was pretty reasonable of Dr. Science to comment on the basis of the three main types of normative ethics. (Is there a non-normative ethics?)
In general my objections to economics and philosophy are to what I see as their dominant trends, and especially have to do with something many of the best scholars think is not worthy of their attention: undergraduate teaching to those who will not specialize in the field.
roac 10.16.08 at 7:07 pm
No, no, David Moles. Algernon had had the sandwiches made for his aunt, but ate every one of them himself—without even sharing them with his friend!
Moreover, he lied about what he had done, and induced his valet to support the lie with a lie of his own!
John Emerson 10.16.08 at 7:15 pm
Few actually defend Algernon’s ethics. In fact, I feel confident that he himself would denounce himself in no uncertain terms.
I confess that Conservatives of the British sort all seem like Algernons to me, whereas American conservatives seem more like Tom Buchanan in The Great Gatsby.
Doug K 10.16.08 at 7:27 pm
the x+y=z nature of the problems brought out my inner Benthamite I fear.
I tried to be a good utilitarian, but I may have contradicted myself.. very well then I contradict myself.
being an obsessive-compulsive, I finished the quiz even though it raised, as rea observes, painful echoes of the ‘terrorist torture ticking time bomb’ nonsense.
MQ 10.16.08 at 7:53 pm
That’s IMHO what various commenters mean by saying the scenarios lack “context” — they lack social context, and social context is where our moral judgements come from.
I don’t think that’s quite it. We make moral judgements about strangers all the time. Strangers have lots of moral valence. To me, the lack of context comes in in:
–the assumption of certainty, which is huge. This is done precisely to isolate a single choice from everything else about the situation. It makes the cases absurd on their face; as someone implied above, certainty is a bureaucratic/legalistic assumption designed to make for artificial clarity in applying a law. And how you gather and assess evidence, and how you choose to treat uncertainty, is a big deal in practical decisionmaking, and part of being moral.
–the failure to acknowledge multiple, competing moral demands, such that something can be moral and immoral at the same time, or immoral and necessary, or moral in one way and immoral in another. The fascination of real moral narratives comes in these hard cases, not because they are puzzles that admit of logical solution but precisely because they are not.
Chris S 10.16.08 at 7:53 pm
Does it help the state of moral philosophy to point out that neither of the designers of this experiment is a moral philosopher? One is a psychologist, and the other (primarily) a philosopher of mind.
F 10.16.08 at 8:35 pm
Yes, indeed, people’s moral judgments are more complicated in real life than in hypothetical situations. Also, gravity makes things fall!
I think MQ is quite accurate in pinning down what’s inauthentic about these scenarios. I had simply assumed that this was done intentionally, so that one could actually draw scientific conclusions from the answers. Without certainty and limited choices, what can one really say at the end of the day?
Seriously, I think there’s a really interesting aspect that is really hard to study, namely, that the moral judgments people make are often predicated on self-delusion. In order to avoid the difficult questions that result from defined outcomes, people simply fool themselves into thinking that what is inevitable (or as nearly so as is possible in the real world) is not.
BTW, I did think the “name the fine” problems were ludicrous. What on earth could you learn from that answer?
lemuel pitkin 10.16.08 at 8:56 pm
I had simply assumed that this was done intentionally, so that one could actually draw scientific conclusions from the answers.
Of course. Just like when you’re studying whether placing razor blades under cardboard pyramids makes them sharp, you should ask about important things like whether the pyramid was oriented toward magnetic north, and abstract away from irrelevant issues like how often the blades have been used. That way you can draw scientific conclusions.
F, please say you have a graduate degree in philosophy. That would be so satisfying.
F 10.16.08 at 9:04 pm
No, I preferred the harder sciences.
Looking back to this: “Not pick out a tiny subset of factors and conclude that if people don’t respond deterministically to those, they’re inconsistent or confused.”
If they’re using this to accuse people of inconsistency, well, I couldn’t agree more that that is pretty damned stupid. If they are trying to identify inconsistencies, maybe in the hopes of identifying the factors that lead to these inconsistencies, it sounds pretty reasonable to me.
Jason B 10.16.08 at 9:20 pm
BTW, I did think the “name the fine” problems were ludicrous. What on earth could you learn from that answer?
Seconded. I stabbed at a dollar amount on the first question, tried to take the second seriously, and then gave up, just entering quickly-decided amounts for the rest.
I just can’t see putting a dollar amount on any of those situations (as a moral, rather than legal-practical, decision).
Sam C 10.16.08 at 9:23 pm
I’m an academic moral philosopher, and this test doesn’t have much to do with what I read, write, and teach, or with the mainstream of moral philosophy. By all means criticize the test (if you’re sure you know what it’s intended to measure) or the use of implausible hypotheticals (if you’re sure you know what those who use them use them for). But the move from ‘this test sucks’ to ‘moral philosophy is nonsense’ and/or ‘academic philosophy is bankrupt’ doesn’t work.
Mrs Tilton 10.16.08 at 9:34 pm
F @101, Jason @104,
yes, the “fines” questions struck me as the greatest flaw in the test. All the other flaws discussed above are weaknesses of ideology (or whatever you’d like to call it). The “fines”, though, were above and beyond that a fundamental weakness in the very design of the test itself.
MH 10.16.08 at 9:38 pm
I thought the point of the “fines” was clear enough: how much of a punishment should be based on the outcome, and how much on the intent?
novakant 10.16.08 at 10:23 pm
my objections to economics and philosophy are to what I see as their dominant trends
You only seem to see what you want to see though – the dominant trends are not nearly as dominant as you make them out to be and there is a wealth of other approaches to philosophy worth discovering.
eric 10.16.08 at 10:49 pm
no one seems to have been thinking about why the test asks you to make judgments about several similar-but-different scenarios. it’s not about any one scenario, it’s about the change over scenarios.
in this perspective, the questions about fines are much less ridiculous than people seem to think they are.
J Thomas 10.16.08 at 11:07 pm
I just can’t see putting a dollar amount on any of those situations (as a moral, rather than legal-practical, decision).
They want to get answers they can quantify. Money values are things that can be compared.
If the person does exactly the same thing, should he get a different punishment because of the actual unpredictable result? I said yes. I figured if he thought it was a one in a thousand chance and he threw the bag without looking, fine him, say, $2000. Depending on his pay scale that’s 1 week to 4 weeks pay. Enough to notice that society disapproves.
If he actually kills somebody I figured he should pay more because the penalty for actually killing somebody ought to be higher than just the penalty for taking a chance. I set it to $5000 because it ought to hurt but it’s no good if it’s more than he can pay and it hurts too much. Some higher value would be OK, that’s just the number I chose. When you take a chance it’s anybody’s guess how big the chance really was. Give the same penalty for a 0.1% chance as you do for actual negligent manslaughter? What if you think it’s a 0.01% chance and he thinks it’s a 0.0001% chance? At what point should you reduce the penalty to something less than the penalty for actually killing somebody?
So make the penalty for actually killing somebody worse than that for taking a chance and winning, even though the perp did exactly the same thing both ways. One way he can argue about the actual risk he took. The other way he killed somebody. Not the same thing.
Some other answer might be more valid, that was just my answer.
By making it quantitative they at least got something they can measure and perhaps draw some conclusion from. It’s a big deal whether it’s the same number both ways. Should the penalty vary with the actual results, when the intent and the actions were the same?
It isn’t realistic but I don’t right off see how to ask that particular question with a realistic scenario.
Bruce Baugh 10.17.08 at 1:19 am
Another person who bailed out about four questions in. I have just never found any of this sort of question to apply to my circumstances, nor known anyone who claims to have been helped by them, except a handful of utter moral cretins.
The firefighters question bugged me because it hit on something I actually did get training in, when I got my lifeguard certification. A lot of the emergency help part of lifeguarding is about preparing yourself to do what you can to improve people’s chances until the experts arrive. And we actually had a lesson on what to do with messes where multiple people are at work and at risk of getting in each others’ way, with an emphasis on loud clear communication, like “Bob! Move that guy left!” as you haul up someone who needs to go into the space where Bob put another victim, who can rest in the next space over, and so on. I know that firefighters (at least some) get training in the same thing, since a firefighter taught us that lesson, so in the experimental question, the firefighter with the five babies to rescue would yell for their partner to move the one, then carry on.
The trapped-divers one had me thinking things like “What if two or three of us took some knockout stuff and went into deep sleep for the duration? Beats killing our buddy.” And so forth and so on. There seemed to be no suspicion that properly trained people might have access to options that do less irrevocable harm, and that, while they increase some risks, greatly reduce others.
Bernard Yomtov 10.17.08 at 2:14 am
This test is silly, for reasons amply explained by many above. I bailed at about the 6th or 7th question.
rmz 10.17.08 at 2:55 am
Real life example of the classic Cat & Counter Conundrum
http://gizmodo.com/5064773/motion-detector-turns-on-blender-strobe-light-when-cat-nears-for-hilarious-results
seth edenbaum 10.17.08 at 3:31 am
My problem with the test began with the descriptions, which were insulting. “Extremely morally good” or “extremely morally bad” is the language of children; and the middle term “neither good nor bad” is evasive of moral responsibility. I refused to answer, on moral grounds.
The only reason to use any of these terms, it seems to me, is that somehow they are in the language of individualism; otherwise why not use “obligatory” for the first and “disallowed” for the second, with the middle allowing for the reality of a burden?
This takes me back to the mid-80’s, when a friend on the staff of the Journal of Philosophy gave me a subscription as a birthday present. I may have mentioned this here before, because the first piece I read, “Morality and Self-Other Asymmetry”, remains a touchstone for me regarding what I find offensive about academic philosophy, and the preference for reason over observation. And the subject of that absurd article is the same, more or less, as this test.
The military is designed with barriers between those who give orders and those who follow them. Your commanding officers may not be your friends, may not “fraternize.” And officers are designated as those who make life-or-death decisions about others, like the decisions described in the test linked here. The essay made no mention of social structure or the function of morality, of the role of some people as quasi-outsiders, as liminal in groups or subgroups. It made no reference to basic anthropological observation. I’d never read any contemporary philosophy before I began to read it, and before two pages I realized it was more concerned with the logic inside its own academic world than with the world of human life and interaction.
“What is obligatory among equals?”
“What is obligatory or disallowed in relations based on authority?”
“What is the relation of a soldier in an army in the service of a representative democracy to that democracy of which he is also a citizen?” You can reverse the order and ask it of a citizen who is also a soldier.
Too many people spend too much time trying to answer questions that mean nothing, rather than looking for questions that need asking.
Solomon was not a logician.
Cannoneo 10.17.08 at 3:35 am
#105, this stuff was fairly common when I was an undergrad philosophy student in the mid 90s.
I recall Derek Parfit as a visiting prof imagining a parent, offered the chance to guarantee her child a happy life, on the condition that she, as soon as she had taken the deal, would never know of this happiness and would go to her death believing her child was miserable. That many parents can be expected to take this deal went to disprove the notion that parents put their child’s happiness ahead of their own because it makes them happy to do so. (I.e., everyone is selfish by definition.)
One brave student, just as Parfit was moving on, raised his hand and began to stammer the half-formed objections I was also thinking of. But … but … at the time of the parent’s decision … definitions of happiness … artificial scenario… Parfit shut him down and moved right along. I lost my naive faith in academic philosophy and big-name genius professors.
J Thomas 10.17.08 at 3:42 am
Since they’re looking at differences between people who have trained in this specialty and others, I hope they are paying close attention to the number of people who quit early and which people those are.
not that kind of ethicist 10.17.08 at 4:06 am
105 & the first two sentences of 114 seem right to me. I too quit after a few questions, irritated at the set-up & knowing better than to suppose the designers would reform. Also, 116: they’re looking at the differences between people who CLAIM an AOS in ethics and those who do not. Any boy with a survey toy can claim to be an ethicist. Any gal with or without a survey toy can too. And? It’s not like we can quiz the people who take this survey and find out if they really have such expertise as they claim.
e julius drivingstorm 10.17.08 at 4:26 am
The MST baseline question: Are you religious?
An Islamic jihadist, a Christian crusader, a Jewish Zionist, a shaman or Raelian clone, etc. ad inf. Heck, the Red Sox just won.
Someone above (Holbo?) suggested more realistic in-depth scenarios. How about movies like “The Yearling” or “Abandon Ship”?
Or real life: The failed army bomb plot against Hitler.
Sam C 10.17.08 at 9:24 am
Cannoneo (115): what stuff, exactly? Philosophers often use artificial scenarios, and have done since Socrates. But the question is, what do we use them for? The experimental philosophy movement uses them in empirical tests intended to uncover patterns in people’s ‘moral intuitions’ (whatever they are). This is interesting, but highly controversial, not at all the mainstream, and as far as I know wasn’t going on in the mid-90s.
Parfit doesn’t use hypotheticals in anything like the same way: his scenarios are typically intended to isolate a particular move in an argument; or to show that a principle we take for granted has strange consequences, or is in tension with other principles; or to illuminate real cases by exaggerating particular elements of them; or as persuasive rhetoric. One reading of your encounter with him is that your friend’s objections weren’t objections to the argument being made, which is just that psychological egoism is false. That conclusion doesn’t depend on any of ‘at the time of the parent’s decision … definitions of happiness … artificial scenario…’, so far as I can see. One difficulty of teaching philosophy is that most people don’t stick to the point in discussion: we move all over the place by association. Of course, it’s possible to be an arsehole about keeping students on track, and maybe Parfit was one. We all have off days.
ajay 10.17.08 at 9:35 am
FAF.: But Giblets does the end always justify the means? For example say there is a man stuck in the opening of a mine shaft.
GIBS.: How would a man get stuck in a mine shaft? Mine shafts are huge.
FAF.: Well lets say he’s a big fat man stuck in a mine shaft an there are like a dozen other people trapped in there because the fat man he is just so fat.
GIBS.: This is an improbably fat man we are talkin about.
FAF.: Maybe he has been eatin ham jello. For the glory of the republic.
GIBS.: Then he can stuff off. This is Giblets’s ham jello.
FAF.: Anyway the question is should we blow up the fat man if there is no other way to get him out of the mine shaft to free the trapped an starving people inside when we know that blowin up the fat man is cruel murder?
GIBS.: Ha! I’d like to see you try! The explosives’ll just make the mine shaft collapse an squish everyone inside.
FAF.: Giiiiblets, you’re ruinin my moral dileeeema.
GIBS.: The real solution is to keep the starvin people inside the shaft alive by eatin the fat man. Problem solved.
http://fafblog.blogspot.com/2004/07/serious-philosophical-discussion-on.html
Cannoneo 10.17.08 at 2:01 pm
Sam C, my sense of Parfit’s device was that its artificiality made it useless for the purpose he intended, which is why I put it in the trolley car category. I take your point that invented scenarios and thought experiments are not inherently stupid and in fact are often indispensable.
The thing is, my intuitions were and are *with* his argument that psychological egoism is false. In my inexperience, perhaps, it seemed like a compelling theory whose refutation needed a fuller account. I still think the scenario falls very far short of that. It was the scenario, not the argument it was put to, that troubled me.
But I shouldn’t imply any judgment on Parfit. It remains a vivid memory for me because it made me question for the first time the always-reductive tendencies of analytic philosophy. These had appealed to me, as a young and rather sheltered man, because they favor a knack for logic over any accumulation of experience or social sympathy.
Sam C 10.17.08 at 2:58 pm
Connoneo – fair enough, and for whatever it’s worth, I agree that that particular use of a hypothetical isn’t much use.
Sam C 10.17.08 at 2:59 pm
(oops: ‘Cannoneo’ not ‘Connoneo’ – sorry)
richard 10.17.08 at 3:08 pm
I know this comments thread is past the point of being a lost cause, but I feel I have to defend a quite brilliant joke:
There’s an old joke about a physicist who, for some reason relevant to the punch line, goes up to a blackboard, draws a circle, and then says, “Imagine that’s a cow.”
No. A farmer asks a physicist to help him get his chickens to market – “what’s the most efficient way,” he asks, “to stack them in my cart?”
The physicist, with a gleam in his eye, replies, “first, imagine the chickens are spherical…”
After 1998, the joke lost some of its power. Back when it referred to one of the great unsolved mathematical mysteries, it was funnier that the scientist would elide a possibly-solvable concrete problem for an unsolvable but oh-so-interesting abstract one.
MQ 10.17.08 at 3:58 pm
Actually, the “how much would you fine for X” questions struck me as the only good ones in the entire test. A big issue with most questions was the attempt to artificially jam a highly legalistic, top-down “legislators” framework onto moral questions that don’t fit that frame at all. But the question of how much you should penalize people based on the unintentional results of careless negligence is a very relevant question that societies actually do have to hash out through an impersonal legal system.
John Emerson 10.17.08 at 5:52 pm
124: The László Tóth who solved the orange-packing problem was not the same László Tóth who smashed Michelangelo’s Pietà with a hammer.
Damn damn damn damn damn. It would have been so beautiful.
Witt 10.17.08 at 6:23 pm
To do well in philosophy courses requires that one be able to do this sort of decontextualized analysis without collapsing in rage or frustration, and then those selected people have their tendencies amplified by working with this kind of philosophy.
This is certainly an emotionally appealing explanation for my short-lived experience of said courses.
Pitkin has it right:
If you want to study people’s moral intuitions, you have to explore what practical heuristics people use for making decisions and what aspects of a situation are judged relevant. Not pick out a tiny subset of factors and conclude that if people don’t respond deterministically to those, they’re inconsistent or confused.
and this is a good point too: That’s IMHO what various commenters mean by saying the scenarios lack “context” – they lack social context, and social context is where our moral judgments come from.
Right. As several people said above, I based my calculation of the fine not on what the person’s life was “worth,” but what I thought would be an amount significant enough to hurt the wrongdoer, yet not so significant that they would go to great lengths to avoid paying it (bankruptcy, fake death and move to a new city, move abroad, etc.). The amount I put in that answer isn’t anything at all to do with whether I think it’s worse that a “little girl” got killed by a drunk driver or a “person” got killed by a bag of concrete. It has to do with “What amount will best accomplish the goal of punishing the wrongdoer, dissuading him/her and others from similar carelessness in the future, and actually having a realistic chance of getting paid?”
Other than that, my major objections to this quiz are a) forced, ludicrously unrealistic choices, b) no opportunity to discuss things with other people in the scenario (like the chemist-testing-her-poison/vaccine), and c) no acknowledgment that the ability to buy time is valuable, especially in high-pressure situations where another solution may emerge.
B) is particularly egregious, because the best real-life example I’ve seen in recent years was that of the doctors during Hurricane Katrina who were trying to make decisions about oxygen, medication, etc. for terminally ill patients. In many cases there, one could ask the patient what they wanted to do.
This artificial world of individual moral actors who don’t have any social ties or context is just beyond stupid to me. I really don’t understand how abstracting problems out to this level tells us anything useful, or valuable, about what it means to be human and how to be a better one.
qb 10.17.08 at 9:54 pm
Richard @ 124,
“No.” That’s not the joke I heard, but you’d have no reason to know that. Also, thanks for dodging my point.
Cala 10.18.08 at 1:26 am
But that did mean I learned that Schwitzgebel thinks people might be basing their judgements on whether the hypothetical involved you physically touching the person you harmed. Is there any group of people with less grasp on how people assess moral decisions than moral philosophers?
He’s not a moral philosopher; to the extent I understand his research into this question, his work can be described as a criticism of many of these types of introductory ethics examples. I think the hypothesis here is that rather than any dominant theory of ethics or morality, some more primitive reaction like proximity to suffering (homeless guy here vs. starving person 5000 miles away) or physical repulsion (physically killing vs. letting someone die without touching them) is what underlies people’s intuitions.
I doubt the guy’s conclusions are going to be ‘this proves that morality is about physically touching’; I suspect it probably underwrites a criticism of the limits of these kinds of introductory toy examples.
Doctor Science 10.18.08 at 3:42 am
I had to come back and find out if anyone had linked to a brain in a vat is at the wheel of a runaway trolley. I think by this point we’re all qualified to get the joke.
not that kind of ethicist 10.18.08 at 4:33 am
Cala – true, he’s not a moral philosopher. That, however, will not be likely to stop him from claiming otherwise (it hasn’t stopped any of the other professional surveyors).
engels 10.18.08 at 12:20 pm
Shorter This Thread:
Son, we live in a world that has trolleys. And those trolleys have to be directed by men with switches. Who’s gonna do it? You? You, Prof. Holbo? I have a greater responsibility than you can possibly fathom. You weep for your dead violinist and you curse Judith Jarvis Thomson. You have that luxury. You have the luxury of not knowing what I know: That a brain in a vat’s death, while tragic, probably saved lives. And my existence, while grotesque and incomprehensible to you, saves lives. You don’t want the truth. Because deep down, in places you don’t talk about at parties, you want me on that trolley. You need me on that trolley. We use words like radical uncertainty, competing moral demands, the complexity of lived experience… we use these words as the backbone to a life spent defending something. You use them as a punchline. I have neither the time nor the inclination to explain myself to a man who rises and sleeps under the blanket of the very freedom I provide, then questions the manner in which I provide it. I’d prefer you just said thank you and went on your way. Otherwise, I suggest you pick up some pre-phylloxera claret and go live on a desert island. Either way, I don’t give a damn what you think you’re entitled to.
idlemind 10.18.08 at 5:15 pm
I wonder if John Yoo spent any time studying moral philosophy. Would it have made him more or less prone to the sort of reasoning reflected in the “torture memo”? He seems to have wrestled with some of the same questions (even if he did wind up entirely off the mat).
not that kind of ethicist 10.18.08 at 5:44 pm
132 wins. I literally fell off my couch laughing.
we can all go home now.
MPA 10.18.08 at 11:12 pm
“novakant 10.16.08 at 1:34 pm
I completed the test, but have to say it’s extremely lame:
– if I could be 100% sure of the outcome, of course I would choose 5 lives over one life”
Hmmm. I found myself wondering if the 5 people who were on the track had deliberately placed themselves in a dangerous/risky position (for a thrill?). This affected my thinking about whether I could justify killing the one person on the side track whom I imagined to be an innocent bystander.
J Thomas 10.19.08 at 2:25 pm
MPA, that’s interesting.
“Security.”
“Listen hard. You are the President’s bodyguards, the T-men. I am with the Palestinian Suicide League. We have five volunteers who will all kill themselves unless you kill the President within the next hour. And we have five more volunteers who will kill themselves the next hour. Kill the President or we will carry out our threat.”