Philippa Foot Has Died

by John Holbo on October 7, 2010

Following up my Trolley post: I didn’t realize until just now that Philippa Foot died on October 3. So let’s have a separate, more serious thread for anyone with thoughts about that.

1

BenSix 10.07.10 at 11:59 am

Her friendship with Iris Murdoch was broken, however, on account of the suffering that the novelist caused the historian MRD Foot, when she left him for Thomas Balogh, a suffering for which Philippa consoled Foot by marrying him in 1945.

Makes MRD sound like a sad, neglected Labrador.

RIP

2

Lawrence Solum 10.07.10 at 1:15 pm

I was Foot’s student in the late 70s & early 80s at UCLA. She was a magnificent teacher. Some thoughts and a borrowing of a trolley image here.

3

John Holbo 10.07.10 at 1:19 pm

Thanks Lawrence. Your link was broken, but I fixed it.

4

Harry 10.07.10 at 1:21 pm

I fixed it too, Larry — that two of us did so simultaneously shows how well-respected you are round here!

5

John Holbo 10.07.10 at 3:03 pm

Sorry, it’s not my example, Beauregard, so I didn’t specify the details. But in the other thread I made it clear that I in fact accept all these criticisms of the case.

6

josh 10.07.10 at 6:32 pm

Thanks for this.
Was I the only one puzzled, then irritated, by the fact that the obit always referred to people by their full names? I don’t think I’ve seen this in an obituary before.

7

Pascal Leduc 10.07.10 at 7:12 pm

The first thing that came to mind when I read the title of this post was “I hope the rest of her is ok”.

I am a terrible person.

8

Timothy Scriven 10.08.10 at 12:28 am

Beauregard

It is possible that not flipping the switch maximises utility, since the person crushed may go on to do a lot of good. However, it’s more likely that a future Nobel laureate is among the many than among the few.

So many points made in objection to this hypothetical seem to focus on the uncertainty of the situation. What if there is another way? What if the fat man will go on to cure cancer? This seems to me to miss the point, since the suggestion being made by the consequentialist is that we should maximise EXPECTED outcomes, rather than obey moral norms like ‘do not kill’.
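To make the arithmetic explicit, here is a toy sketch of that expected-value comparison (Python, with invented probabilities and value units – the hypothetical supplies none):

```python
# Toy expected-value comparison for the switch case. The probability
# and value numbers are made up purely for illustration.
p_laureate = 0.001                    # chance any one person does enormous future good
v_ordinary, v_laureate = 1.0, 100.0   # arbitrary value units

def expected_loss(n_people):
    """Expected value lost if n_people die on the track."""
    per_person = (1 - p_laureate) * v_ordinary + p_laureate * v_laureate
    return n_people * per_person

print(expected_loss(5))   # leave the switch: five die
print(expected_loss(1))   # flip the switch: one dies
# With identical per-person expectations, losing five always costs more
# than losing one: a possible laureate is likelier to be among the many.
```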

Philippa Foot was a fine philosopher and a fascinating person, and the world is diminished by her passing.

9

John Holbo 10.08.10 at 4:56 am

“This seems to me to miss the point, since the suggestion being made by the consequentialist is that we should maximise EXPECTED outcomes, rather than obey moral norms like ‘do not kill’.”

I agree with you that some people are totally missing the point, Timothy. But others are, plausibly, making the reasonable point that the scenario is so odd that the ‘results’ are tainted. I think you will respond that it all depends on what use you are putting the examples to. And I would agree that that is the right thing to get clear on. But I think it’s fair to say that, as intuition pumps go, trolleys are problematic because it’s hard to be sure what’s getting into the pump.

10

John Quiggin 10.08.10 at 6:03 am

I find the general notion of an intuition pump problematic. If it has to be pumped, in what sense is it intuitive?

More fundamentally, to what extent is it appropriate to treat ethics as a kind of philosophometric exercise of finding the moral theory that gives the best fit for a data set consisting of assorted intuitions?

Particularly in relation to consequentialism, it seems to me that hypothetical contradictory data is not very helpful. At least with real-world cases, our intuitions are actually intuitive, and we can make some sort of judgement as to whether we should revise our intuitions where they would imply bad consequences or revise consequentialism where the results are counter-intuitive.

11

John Holbo 10.08.10 at 6:20 am

John, I agree that the metaphor of an intuition pump may be problematic, but I don’t think the idea itself is. Or not totally. It just means you come up with cases that isolate variables, so that you are only ‘pumping’ people’s thoughts about the thing you want to investigate. Maybe you should call it a ‘filter’, then. I dunno. You call it a pump, I guess, because if you just ask people what their ‘intuition’ is about consequentialism, you may draw a blank. But if you ask them a series of questions that are, in effect, about consequentialism, then – because they actually do give you responses, rather than saying ‘I have no opinion’ – you are pumping their intuitions.

Which doesn’t exactly answer your objections, but maybe a bit.

12

dsquared 10.08.10 at 7:28 am

This seems to me to miss the point, since the suggestion being made by the consequentialist is that we should maximise EXPECTED outcomes, rather than obey moral norms like ‘do not kill’.

IMO this is the big problem with most of these versions of consequentialism – that they help themselves to “the expected value” in contexts where:

a) it is not clear whether they are talking about the mathematical expectation or the subjective actual expectation of the person making the decision;
b) it is not at all clear that the mathematical expectation actually exists; and
c) it doesn’t seem like they would care to defend a version of the theory based on the subjective expectation of the decision maker.

Creating perfect-information examples is a way of avoiding getting rigorous about exactly what you mean by the expectation; in actual fact, utilitarianism has real problems with uncertainty.

or, to put it in the terms of John and John:

It just means you come up with cases that isolate variables

I think John’s objection (or rather mine, which is, as usual, a less mathematically and economically sophisticated version of John’s) is that isolating variables is very often a very wrong thing to do when you are dealing with a complicated system. Anyone who has worked in practical economics for any length of time knows how badly wrong you can go by taking intuitions and models which were meant to explain isolated variables and applying them to an actual problem. My opinion is that trolley-problems are the equivalent of perfect-competition models – not useless, certainly an important achievement, but basically and fundamentally flawed and IMO probably more likely a blind alley than a step on the way to a more workable theory.
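To make point (a) above concrete, a minimal sketch (all numbers invented) of how “the expected value” shifts with the decision maker’s subjective model of the same physical setup:

```python
# Two decision makers face the same footbridge case but hold different
# subjective probabilities that pushing actually diverts the trolley.
def expected_deaths(act, p_works):
    """Expected deaths under an invented model of the footbridge case."""
    if act == "wait":
        return 5.0
    # "push": one certain death, plus five more if the diversion fails
    return 1.0 + (1.0 - p_works) * 5.0

for p_works in (0.9, 0.1):
    push = expected_deaths("push", p_works)
    wait = expected_deaths("wait", p_works)
    verdict = "push" if push < wait else "don't push"
    print(f"p(diversion works)={p_works}: push={push}, wait={wait} -> {verdict}")
# p=0.9 recommends pushing (1.5 < 5.0); p=0.1 recommends not (5.5 > 5.0).
# Same act, same physics, opposite "expected value" verdicts.
```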

13

Alex Gregory 10.08.10 at 8:25 am

Just a few small points:
a) Dealing with uncertainty about the future is not a problem only for consequentialists. Any sensible normative theory will take account of some uncertain effects of actions. To that extent, understanding how to deal with uncertainty is an issue independent of the consequentialism/non-consequentialism debate. (Which is probably why the relevant uncertainty is generally ignored with respect to the trolley problem.)

b) It’s true that the trolley problem is a little odd, but it’s not much of a stretch to see what it has in common with other more familiar decisions, like those to go to war, to redistribute funding between various medical causes, or to focus one’s teaching efforts on some students at the expense of others. Clearly, these real life cases have *additional* morally relevant features, but facts about the numbers of people they affect, and the distinction between causing harm and allowing harm, are nonetheless part of what is at stake.

c) Nonetheless, there is a worry about whether we can learn much from isolating variables in this way. The problem is – as D2 implies – that they might operate differently when placed in context than when by themselves. A good place to start is a paper by Shelly Kagan called “The Additive Fallacy”, though I think most people associate this worry with Jonathan Dancy. (Disclosure: Dancy is my PhD supervisor.)

Rest in peace, Foot.

14

dsquared 10.08.10 at 9:12 am

Dealing with uncertainty about the future is not a problem only for consequentialists. Any sensible normative theory will take account of some uncertain effects of actions

I think it’s a particular problem for consequentialists though, because unless it’s buttressed with basically non-consequentialist rules (rule-utilitarianism), it actually doesn’t give answers to most important questions, viz:

A trolley is careering out of control – it looks like it’s going to plow into the playground of the School For Gifted Violinists, and if it did you don’t know what sort of injuries it might cause, but there would almost certainly be many deaths. You can sort of see how you could divert it by pushing a fat man off a bridge, but this might work or it might not – you can’t be sure. The fat guy kind of looks like Goebbels, but he’s wearing a white coat and a badge marked “Cancer Research Experts’ Convention”. Pop quiz, hotshot: what do you do?

Any decision rule that’s purely dependent on consequences is basically either impossible to operationalise, or dependent on purely subjective assessments in a way which significantly reduces its attractiveness.

15

Alex Gregory 10.08.10 at 12:43 pm

Any plausible moral theory is going to cover not only cases where we do know all of the relevant features of the situation, but also cases where we don’t (i.e. almost all cases). So *any* plausible moral theory needs some account of how uncertainty affects what we ought to do. That’s going to be true whether the uncertainty is about the consequences, or about what motive we’re performing the action from, or about whether our will could be consistently universalised, or about whatever else.

Moreover, *any* plausible moral theory will take account of uncertain consequences when the stakes are high enough. To take an extreme case, faced with a choice between killing a mad scientist or letting him live in the face of some evidence that he’ll destroy the universe, any plausible theory is going to at least have some story to tell about how we weigh the options against one another.

I think it’s fair to say that these days everyone thinks that the consequences matter. The question is not whether they ever do, but whether anything else ever does. Whichever of these options we pick, the problems you mention are problems we need to address.

16

Duncan 10.08.10 at 12:52 pm

Just in case anyone thinks less of Foot for having invented the tram/trolley problem, she introduced it in order to help clarify the doctrine of double effect and how a proponent of this doctrine would think differently about certain cases than a consequentialist would. It does this job very well, I think. She also writes: “In real life it would hardly ever be certain that the man on the narrow track would be killed. Perhaps he might find a foothold on the side of the tunnel and cling on as the vehicle hurtled by. The driver of the tram does not then leap off and brain him with a crowbar.” So she was quite aware of the difference between real life cases and hypothetical, philosophers’ cases. She also had a nice sense of humour.

17

Timothy Scriven 10.08.10 at 1:21 pm

John, I accept the point that they are bizarre and strangely stripped of emotion and may thus not be representative of our truest ethical feelings, but this is quite separate from some other criticisms that are being made.

I don’t want to fully defend the examples – I really do accept the suggestion that the examples lack an emotional truth to life. Even this point can be overplayed though. What the thought experiments offer is just one scenario for thinking through a particular tension in our ethical beliefs, between rights and consequences. I’d put more weight on various concerns I have about the case if I put more weight on the cases themselves.

18

Tom Hurka 10.09.10 at 10:27 am

In Foot’s original version of the trolley example, it’s the driver who has to decide whether to turn the trolley. Foot says that, since in either case he’ll be actively killing (vs. allowing to die), the fact that he’s permitted to turn the trolley doesn’t count against the anti-consequentialist view that there’s an important moral difference between killing and allowing to die, or between negative and positive duties — the only duties at issue in the case are negative.

It was Judith Jarvis Thomson who changed the example so it’s a bystander who can turn the trolley. Now the case does involve both positive and negative duties, since if the bystander doesn’t turn the trolley he’ll only be allowing the five to die. It’s this revised version of the case that causes problems for anti-consequentialism, or rather for one natural explanation it might give for its anti-consequentialist claims, i.e. just the claims about negative vs. positive duties that Foot’s earlier article had defended.

So Foot invented the trolley problem, but it’s Thomson’s revised version that’s the philosophically challenging one and the one that’s discussed. (And much of the discussion above continues to misunderstand the point of the example: it’s *not* about consequentialism. For one thing, consequentialism would say you’re required to turn the trolley, but many people’s intuition is that though you’re permitted to turn the trolley you don’t have to. That’s not a consequentialist intuition.)

19

bianca steele 10.09.10 at 6:08 pm

If there is a correct interpretation of the problem, I’ve been wondering, what is the point of asking students for their intuitions? I don’t mean a correct philosophical interpretation of the problem; I mean a correct academic (or “scholarly”) interpretation of the problem. (Though then Thomson’s modification of the problem’s expression seems to raise questions about precedence: is the essential trolley problem the first one formulated, the most recent one, or the one formulated by the most prominent philosopher?) For example, Thomson seems to say that whether or not the person is actually driving is an inessential part of the dilemma, so to engage with Thomson’s article, you presumably have to take that as given.

There’s nothing wrong with this. However, it does raise the question whether the purpose of teaching the problem is to show people how to bring new problems under its umbrella, or to show people what problems have traditionally been considered already to be under its umbrella. I think both of these purposes have been assumed by many people who look into philosophy.

20

John Holbo 10.10.10 at 2:17 am

Sorry, Tom, I really don’t understand why you want to say that something that causes problems for anti-consequentialism is not ‘about consequentialism’. This is what you said in the previous thread, and here you are again, and I still don’t get it. Suppose I’m writing an argument against Nietzsche’s philosophy. Would you think it wasn’t ‘about Nietzsche’ because it’s arguing against him? Suppose I am a consequentialist, writing arguments against anti-consequentialism. Would it make sense to say my writings are not ‘about’ consequentialism?

Also, since people disagree about the case, it’s just not clear whether the case ‘really’ is, in effect, a standing argument for consequentialism. It’s a thing that cuts different ways, potentially. The fact that people will pull the lever but not push the fat man is an index of discontent with consequentialism. Which is why I was so baffled at your objection to my use of the phrase ‘consequentialism and its discontents’. Even you admit as much yourself in your own comment, above: that many people have non-consequentialist intuitions, and the example brings this out. If you are willing to say so, what’s so wrong with me saying so?

“If there is a correct interpretation of the problem, I’ve been wondering, what is the point of asking students for their intuitions? I don’t mean a correct philosophical interpretation of the problem; I mean a correct academic (or “scholarly”) interpretation of the problem.”

There may be a correct answer to the question of what the intentions of the authors of the original versions of the problem were – Foot and Thomson. But there is no reason to assume that ‘correct academic interpretation’ of the problem reduces to figuring out what the original authors intended. For one thing, the original authors certainly didn’t intend that the purpose of the problems was to be a sort of puzzle, the solution of which was realization of their intentions. Rather, they meant for the conceptual features of the puzzle – the responses it engenders – to function quite independently of what Foot and Thomson intended them to do.

21

John Quiggin 10.10.10 at 4:20 am

As regards uncertainty, I had a go at this topic here

https://crookedtimber.org/2009/11/21/consequentialism-compassion-and-confidence/

22

Tim Wilkinson 10.10.10 at 10:14 am

the intuition clamp

I agree with JQ @11 that an ‘intuition pump’ is of little use or interest. We are not gathering data in order to find a best-fit theory, nor even aiming to achieve ‘reflective equilibrium’ between our (the most widely accepted) general principles and our (the most widely held) intuitions about cases real or imagined, where these two are seen as independent and potentially countervailing vectors.

Pace JH passim, constructed examples are (or should be) a form of argument (@22) – specifying a target theory and deriving clear implications (@12) that opponents must agree are incorrect, so that the theory must be defective (possibly just because its recommendations apply only prima facie, being defeasible by other factors – sometimes, wrongly, described as ‘ceteris paribus’ – perhaps ‘subvailing’, ‘subvalent’?). This is precisely analogous to a Popperian conception of scientific investigation. You set up a carefully controlled lab experiment (highly contrived example) to contradict the predictions (recommendations) of a certain theory or claim.

This kind of example is not (properly) concerned with generating or gleaning anything and, in particular, not part of any Romantic quest to find unspoilt moral truth from the mouths of babes and sucklings (still less from teenagers trying to impress Michael Sandel). Instead it is a dialectical or rhetorical technique which aims to force rejection of a well-defined theory – rejection being included in amendment or refinement, i.e. replacement. Like pretty much all such techniques, it can be resisted of course.

(Or else it is just a springboard for further open-ended discussion, in which case it is immune from ‘real’ criticism, being too springy for that.)

23

Tim Wilkinson 10.10.10 at 10:16 am

actively killing by doing nothing?
Tom Hurka @20 – going only on your summary, the idea that the driver would be actively killing whether they turn the trolley or not is an interesting one – I’m inclined to suspect some equivocation there. Presumably we are talking about intentional action here, so if the driver ceases all such action on becoming aware of the situation (i.e. at the start of the example), it would be hard to make the case for killing rather than letting die. To compel agreement, I demand acceptance that this is undeniable if the driver, in despair at the moral dilemma, commits suicide immediately, and that there is no relevant difference between that and just doing nothing. Was the example intended to show that the active/passive distinction is ultimately untenable?

24

Tim Wilkinson 10.10.10 at 10:16 am

no epistemic free lunch
dsquared @15 – uncertainty about the future is…a particular problem for consequentialists though, because unless it’s buttressed with basically non-consequentialist rules (rule-utilitarianism), it …is basically either impossible to operationalise, or dependent on purely subjective assessments in a way which significantly reduces its attractiveness.

Rule-utilitarianism and its close relative, Kantianism, are no easier to operationalise, being no less dependent on ‘subjective’ – in the sense of non-omniscient – judgement, and much more dependent on subjective – in the sense of inchoate, non-algorithmic, or (essentially) contestable – judgement. Both require a rule/maxim of some level of specificity to be selected as the one that governs any given case. The fact (or assumption) that Kantian rules are extremely general, and indeed that they are based on a certain highly restricted set of features, does help to reduce the impact of uncertainty and risk, given that the rules tend to be about events very close to the actor (if one takes the Kantian view down an antinomian ‘good will’ path, the ‘rule’ concerns a property transcendentally ‘internal’ to the actor, and empirically featureless – try operationalising that! But that’s an evaluative ‘rule’, not a practical one; see next slab). But generality and determinacy reduce uncertainty only at the cost of hiving off subjectiveness or arbitrariness onto Kant’s casuistic works, or some other source of accreted doctrine like religious or social tradition. (This is a bit like Libertarians sweeping all the questions of distributive justice under the carpet of ‘property’.)

It’s not entirely demented to suggest that having an easy determinate answer in every case is a good thing even at the cost of getting the wrong answer (the utilitarian is entitled to rely on their own answer being right if determinable, otherwise we have moved on to a different argument). But the more you argue for that position, the more you provide a basis for the utilitarian to build the considerations adduced into a utilitarian action strategy (call it indirect utilitarianism if you wish, to distinguish it from vulgar – i.e. myopic, arbitrarily restrictive, underdone – ‘act’-utilitarianism; this is not the same as rule-utilitarianism, which simply introduces a formal consistency constraint whose only determinate content is the stipulation that it is capable of conflicting with ordinary utilitarianism).

John Q’s link is IMO on the right lines here – lines elaborated by R. M. Hare, Fred Feldman in Doing the Best We Can, Hintikka IIRC, and of course Mill, who in particular notes that one of the things we have to decide is which things to blame others for (enforcing ethics) and punish (enforcing law), as well, IIRC, as noting the importance of self-inculcation of stable dispositions to act in certain ways – in a sense colonising the territory of virtue ethics before the term was invented. Utilitarianism carries all before it – the only way out is dogma.

25

Tim Wilkinson 10.10.10 at 10:17 am

expected value – ‘that’s just your opinion’

@13: a) it is not clear whether they are talking about the mathematical expectation or the subjective actual expectation of the person making the decision,

The rule specifies the former (if that means, as I say it does, the best expectation on all the evidence possessed by/possibly available to the actor – the correct credence relative to the ideal info set, however exactly that is to be specified). In fact one can never do better than the latter, but there is both a sui generis epistemic duty, and – on utilitarian grounds – a moral interest, in getting the latter to conform to the former. (I think Lewis’s Principal Principle is a case of the epistemic duty mentioned, because I don’t recognise objective chance as a real and relevant thing, but never mind that.)

There is no substantial practical difference from the point of view of the actor implementing the imperative, any more than there is between ‘go away’ and ‘do your utmost to go away’. There may be a difference in retrospective evaluation, e.g. blame – if it is in fact not feasible to go away, then the former command may not have been met when the latter has, but we are then, unless we are strict-liability nutters, talking extraneous excuse v. perfect exculpation.

b) it is not at all clear that the mathematical expectation actually exists

It is not at all clear to me that (b) is true.

c) it doesn’t seem like they would care to defend a version of the theory based on the subjective expectation of the decision maker.

Given response to (a), the answer to this is that there is really no difference to speak of, and what they might not care to defend is in fact a theory that places great strain on the decision maker’s ability to adopt optimal expectations. But if this is meant to move the utilitarian, it is really only to say that there is some other strategy which would be better given epistemic limitations. And the utilitarian will listen to your proposal and, since I am sure it will be a good one, adopt it. Utilitarians are infuriating like that, though they tend to try (or try to tend) not to be.

Uncertainty is a problem for the util’ian, but only insofar as it’s a better problem to have than any alternative.

(sorry if these are a nuisance. I of course think this stuff is pure gold – GOLD, I tell you)

26

Tim Wilkinson 10.10.10 at 4:47 pm

The price of saving a hypothesis by means of such an objection is an account of emergency/anomaly: when it occurs and what its consequences are. Otherwise the tin ear of the analytic philosopher hears only indignant spluttering. Of course no-one has to accept the need to be, or the possibility of being, precise about the content of morality, but either you’re in or out.

I’m not sure ‘most utilitarians’ here is right – if you mean most utilitarian moral philosophers of any sophistication. IIRC Mill contrasted expedience (the good – utility-maximisation) with, on the one hand, morality (the right – blamelessness) and, on the other, law (the non-criminal – impunity).

This is very rough and handwavy, to avoid further word-splurge, but such an account provides a framework for making just the kind of distinctions invoked here – something which brute deontological-retributivist accounts would have much more trouble with. (Blamelessness is complicated by knotty problems of subj/obj – e.g. prospective v retrospective, but I stop here)

27

bianca steele 10.10.10 at 5:56 pm

John Holbo:
I admit I’m still working out the difference between “philosophical” and “academic” interpretations, but I have in mind something along the lines of something Stanley Fish wrote: That what academics do–as opposed to practitioners (though what a practitioner would be in a pretty exclusively academic field such as philosophy has been for quite a while isn’t clear to me)–is study what has already been said, and the best arguments for and against what has already been said (those arguments having already been stated), and the best conclusion to reach considering all those arguments (and here is where one’s independent judgment comes in, I think). So in these terms, the student is expected to display a scholarly inclination (not a creative ability to invent interpretations, which isn’t an academic ability on these terms), a sense of how to understand how philosophy thinks of the issue.

Where a “philosophical” interpretation, I think, would interrogate the example without regard for what had been said before, without looking up scholarly literature, without paying a lot of attention to dogmatic statements from any quarter.

Authorial intention seems to have to do with how to figure out the essence of the problem, given that you know there are different formulations put forward by different people, and you know people within the profession have disagreed with one another, and so on. I don’t think it’s necessary to get into intention (in the way MacIntyre does), with a book as a part of a thinker’s intellectual/moral/spiritual narrative, or into rhetoric, or anything along these lines. It’s more a question of how you know which considerations carry the most weight.

These are all considerations for undergraduates, I guess. I ran into brainteasers like these long ago in high school (not the trolley problem, though, I think), and I think they are intended to get kids thinking, to knock them out of their complacency and their unwillingness to think hard or work hard. That’s in high school, though (and with teachers who weren’t trying to get us indoctrinated), and I’m sure they knew that in college we’d be expected to think about such problems in a different way than the way we were taught in high school.

28

Tom Hurka 10.10.10 at 10:22 pm

John: Reading my earlier post I see I wrote carelessly in sometimes making it seem as if I think the trolley case causes problems for non-consequentialism in general, though I suppose it can be used that way. I see it as causing problems primarily for one particular non-consequentialist view as against others. Foot, recall, introduced the problem in a paper, “The Problem of Abortion and the Doctrine of Double Effect”, which argued that the important non-consequentialist distinction is not between intending and foreseeing, as Anscombe held, but between doing and allowing, or between negative and positive duties. It was a paper addressing an internal dispute among non-consequentialists, and arguing that the trolley case is no problem for her particular version of non-consequentialism. (Actually, when you switch to Thomson’s version of the case, the intending/foreseeing distinction does much better with the case than Foot’s distinction.) And as I said above, to support consequentialism the trolley case would have to elicit the intuition that you’re positively *required* to turn the trolley. And few people have that intuition.

Tim Wilkinson: That the driver would be killing no matter what he did was what Foot supposed. I think her idea was that even if the driver commits suicide now, the deaths of the five will be something he actively caused, because they result from his earlier action of starting the train. If I shoot an arrow at you and kill myself before it reaches your heart, I still actively killed you though I was dead when the arrow did the killing.

29

Tim Wilkinson 10.10.10 at 11:32 pm

Not so obviously if I have jumped out from behind a bush into the path of your (unusually slow) arrow, though.

30

Tim Wilkinson 10.10.10 at 11:35 pm

I remember a somewhat similar – but not quite on-point – example from Asimov’s three laws of robotics. But a side issue anyway I suppose – though incidentally, the reality of robots has I suppose given new relevance to rigorously systematic attempts at ethics (well, more deontic logic I think, since the fact of being designed for a slave ‘race’ makes a system of rules quite different from ethics as we know it – unlike the otherwise parallel case of epistemics).

31

Anderson 10.11.10 at 1:05 am

Anyone know whether Philippa Bosanquet Foot was any kin to Bernard B.?

32

bianca steele 10.11.10 at 1:23 am

@35 Not necessarily, if the three laws are just a simplified description of a supposed human ethics: beginning with an injunction against unmotivated suicide, an absolute requirement to prevent harm to others. Similarly to the trolley problem, I don’t remember an Asimov plot where “harm” was other than bodily harm. As you suggest, the purpose of the three laws was to prevent an uprising (brought out even more in the recent movie version), or at least to make sure the robots’ other jobs didn’t tempt them to do things like push people off bridges to get them out of the way.

33

Anderson 10.11.10 at 3:16 am

I don’t remember an Asimov plot where “harm” was other than bodily harm.

In one of the I, Robot stories, romantic disappointment to Susan Calvin was considered “harm” by a robot. Can’t recall the story, but I’m pretty sure “romantic disappointment to Susan Calvin” was a unique occurrence.

34

dsquared 10.11.10 at 7:27 am

b) it is not at all clear that the mathematical expectation actually exists

To clarify this – when we’re talking about events and moral decisions sufficiently complicated to be interesting (as opposed to constructed perfect-information examples), they are individual historic events which can happen a maximum of once and which have causes. To assert that they’re actually random variables, that there exists a probability distribution which describes the outcomes, and that this probability distribution maps onto some value-function in such a way as to allow for an expected value of the consequences conditional on each of a set of actions (and that the different possible actions can be distinguished and enumerated in a sensible way) is actually to make quite a strong and specific set of claims about the universe, which I don’t think anyone ever really supports. My particular point of interest is in “nonergodicity” in economics (in Paul Davidson’s sense), which is the claim that the expected value of most business decisions doesn’t exist, but in general, it can’t be assumed without checking that all these expectations actually exist.

And to clarify that in turn – this isn’t merely an objection about the calculability or knowability of these expectations and distributions (which would be a separate problem if it was deep or intrinsic) – it’s an objection that in a lot of cases of interest there actually is no expected value.
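A minimal sketch of point (b), using the textbook case of a payoff with no expectation – a Cauchy distribution (my illustration; Davidson-style nonergodicity is a further, distinct route to the same conclusion):

```python
# The integral defining E[X] diverges for a Cauchy payoff, so running
# sample means wander indefinitely instead of converging.
import numpy as np

rng = np.random.default_rng(0)
draws = rng.standard_cauchy(10_000_000)
for n in (10**3, 10**5, 10**7):
    print(f"mean of first {n:>8} draws: {draws[:n].mean():9.3f}")
# For a normally distributed payoff these running means would settle
# down as n grows; here there is simply no expected value to estimate.
```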

35

John Quiggin 10.11.10 at 9:50 am

I agree that the concept of a mathematical expectation is generally inapplicable, particularly if you want a frequentist/objective definition of probability. But I don’t see this as a fatal problem for consequentialism. Consequentialism means choosing the action that yields the best consequences, insofar as these can be foreseen. Under uncertainty, “best consequences” means “the best distribution of consequences”. Working out which distribution is best is problematic of course, but the same is true under certainty as regards the distribution of consequences for different individuals.
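One illustrative sketch – expected utility with a concave utility function is just one assumed way of ranking whole distributions, not the only candidate:

```python
# Two acts with the same mean consequence can still be ranked once we
# evaluate the whole distribution, e.g. with a risk-averse utility.
import math

safe = [(1.0, 10.0)]                 # (probability, outcome): 10 for sure
risky = [(0.5, 0.0), (0.5, 20.0)]    # same mean outcome, more spread

def expected_utility(dist, u):
    return sum(p * u(x) for p, x in dist)

def mean_value(x):
    return x

def concave(x):
    return math.log(1.0 + x)         # one concave choice among many

print(expected_utility(safe, mean_value), expected_utility(risky, mean_value))  # 10.0 10.0
print(expected_utility(safe, concave), expected_utility(risky, concave))        # safe ranks higher
```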

As I said before, all of this provides good reasons for rule-based approaches, and reasons for doubting the usefulness of examples where absurd hypotheticals produce conflicts between rule-based and act-based procedures.

36

dsquared 10.11.10 at 10:24 am

Consequentialism means choosing the action that yields the best consequences, insofar as these can be foreseen

Yes, except that “insofar as these can be foreseen” often means “not at all”. To take a topical example, consider the consequences of Britain involving itself in the First World War.

37

John Quiggin 10.11.10 at 10:38 am

As regards WWI, Lord Grey seemed to have a pretty good idea of the consequences, at least in general terms.

More generally, as Alex said @16, I don’t see this as a problem that’s specific to consequentialism.

Most obviously, it arises in an equally acute form for any system which demands truth-telling. If we can never know the truth for sure, or even describe our uncertainty properly, how can such a demand be met?

38

dsquared 10.11.10 at 11:30 am

I don’t think that’s right – is there really any moral system that demands truth-telling in that sense? “Thou shalt not bear false witness” is the commandment – it’s a proscription on intentionally lying (which is defined subjectively in exactly the way that isn’t a problem). I don’t think that there’s any system that demands truth-telling in a sense that would make that sort of epistemic demand.

But I agree it’s not a special problem for consequentialism (it isn’t even a problem for all forms of consequentialism – specifically, it’s not a problem for forms that bite the bullet and make it a theory about subjective beliefs). It is a problem though for a) forms of utilitarianism that want to downplay the status of rules and b) anything that gets most of its intuitive support from forms of appeal to perfect-information cases.

39

bianca steele 10.11.10 at 7:21 pm

Beauregard,
Perjury is not at issue if the witness believes the statement to be true, even if the statement is false, whether the cause is a simple mistake or Alzheimer’s. Or is your comment a sly reference to a well-known 1998 case?

40

bianca steele 10.11.10 at 7:32 pm

FWIW the Three Laws of Robotics seem very similar, in some ways, to the eugenically-bred ethics of the lesser races of the Dominion (in Star Trek), with the shape-shifters in the role of humans with rights, and all other races in the role of sub-human slaves (fighters in the case of the Jem’Hadar, advisers in the case of the Vorta).

42

Tim Wilkinson 10.11.10 at 7:42 pm

I think what goes for truth-telling goes also for expected-value maximisation, without the need to specify ‘subjective beliefs’ – in fact making such a specification is going to be an invitation to try and game the system, neglect moral duties of an epistemic nature, etc.

There’s an ought/can thing going on there that can for the most part be left unspoken so far as prospective, practical rules go (impossibility has a way of asserting itself whatever you may say), though it will be explicitly invoked when it comes down to post hoc judgements (in crime terms, things like mens rea, the necessity defence).

43

John Quiggin 10.11.10 at 8:08 pm

DD, we are in furious agreement as usual. Consequences must be assessed in terms of subjective judgements – these are the only kind available to us. And awareness of our own fallibility should lead us to prefer rules to acting as judges in our own case, or in any cases where we lack the knowledge needed for discretionary rule-breaking.

This seems obvious to me. But then, I am so steeped in (post-)Bayesianism that the idea of an expectation independent of someone doing the expecting has something of an air of paradox for me.

44

g 10.12.10 at 9:33 am

dsquared, arguably W K Clifford’s famous “The ethics of belief” demands truth-telling in something like that sense. Not the utterly unreasonable sense of saying “it is wrong to say something unless it is, in fact, true” but the more reasonable but still unusual and challenging “it is wrong to say, or even to believe, something unless you have good rational grounds for thinking it is true”.

45

dsquared 10.12.10 at 11:50 am

Oh yes, it does, doesn’t it – I think I did that one at university. But in this case “good rational grounds” are going to mean something like “rules”, aren’t they?

46

g 10.12.10 at 2:29 pm

For every account of what it means to have good rational grounds for believing something, there’ll be a corresponding version of Clifford’s principle. Unless anyone’s crazy enough to endorse the theory that it’s only ever rational to believe things that are actually true, all these versions will still make what it’s reasonable to believe (and to say) depend on one’s own epistemic situation, so it’s still not going to be a matter of demanding that no one ever say anything that turns out to be untrue. But it’s a step in that direction, relative to the usual position that you’re entitled to say whatever you sincerely believe to be true.

(Perhaps Clifford would have agreed that you’re entitled to say what you sincerely believe, but that there are constraints on what you are entitled to believe; the latter half of that was certainly his position. But I think what his arguments for that position really show (in so far as they work) is that there are constraints on what you’re allowed to act-as-if-you-believe.)

47

Tim Wilkinson 10.12.10 at 3:43 pm

#52 – Not really – WKC only goes as far as referring to evidence and inquiry in formulating his ‘moral duties of an epistemic nature’. I don’t think these count as rules in the sense of heuristics.

But in any case, it’s not as if mentioning rules is the end of the story: formulating or selecting rules is exactly the problem that a utilitarian faces. Deciding what rules to follow, habits to adopt etc. is of course a problem for anyone except the dogmatist or the moral nihilist: the utilitarian theorist is distinctive in refusing to decide this issue for us at the level of moral theory (except insofar as, like Sidgwick, one wishes to make utilitarianism an esoteric elite doctrine). Instead a standard is offered by which to decide it: maximise utility. That provides a constraint on what, as g @53 puts it, you’re allowed to behave as if you believe (even if you do in fact believe it for whatever reason).

A maxim analogous to g’s Cliffordian one, applying to utilitarian practical reason, would be something like ‘it is wrong to do something unless it has the highest expected value’, not ‘unless it accords with the following rules (which btw I think are optimal)…’, nor ‘unless you happen at the time to think it’s optimal (regardless of how you got there)’.

Including expected value in the specification of morality means appealing to objective probabilities/prospects – not merely personal ones, which is how various remarks are sounding to me at the moment. Once again the prospective-practical/retrospective-judgmental distinction is relevant – we may excuse someone for having got things wrong, but we don’t give them permission to do so in advance.

If we are formulating a rule of conduct (the fundamental utilitarian rule), concessions to subjectivity must stop somewhere, and they stop well short of saying ‘do whatever you think best’. And where they stop, I’d suggest, at least along the dimension under discussion, is at objective probability. (Objective probability is objective in the sense of intersubjectively valid, but not in the sense of non-relative: one thing it is always relative to, explicitly or not, is an information set).
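A minimal sketch of that relativity to an information set (the coin setup is invented for illustration):

```python
# The same objective setup yields different, equally correct,
# expectations depending on what the actor is in a position to know.
p_heads_given_bias = {"H-biased": 0.9, "T-biased": 0.1}  # two possible coins
p_bias = {"H-biased": 0.5, "T-biased": 0.5}              # prior over which coin is in play

# Information set 1: the actor does not know which coin is in play.
p_heads_uninformed = sum(p_bias[b] * p_heads_given_bias[b] for b in p_bias)

# Information set 2: the actor has learned the coin is heads-biased.
p_heads_informed = p_heads_given_bias["H-biased"]

print(p_heads_uninformed, p_heads_informed)  # 0.5 vs 0.9
# Neither figure is brute personal opinion: each is the correct credence
# relative to its information set.
```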

I think there may be a danger here of inflating objective probability into something utterly inaccessible, with the result that the only alternative seems to be brute personal opinion. Maybe it’s more a question of getting a bit equivocal about the ‘subjective’ in ‘subjective probability’. Or perhaps relying on a particular conception of probability as though that were an invariant (ergodic?) component of utilitarianism. Though maybe I’m falling into the trap of doing that in referring to ‘expected value’ – but I’m thinking more of the idea of unquantifiable uncertainty as extraneous to probability and, as a matter of necessity, pervasive; or that probabilities must be based on simple frequencies, that kind of thing.

(I can think of two specific reasons, besides the conceptual point that it’s just wrong, why building personal opinion into the specification of the utilitarian rule is a problem: one would be if our fallibility has the form of a proportional shortfall rather than a ceiling on what can be achieved, so that any standard which takes account of error bakes that error in, and allows more error on top, in a sort of aiming-low or double-counting sort of way. But I don’t think that is a very good one, even if it were expressed better.

The other, which I think is better but not really distinct from the main thrust, is that appealing only to actual beliefs eliminates the need – or even removes the basis – for modesty about how good those beliefs are.)

48

dsquared 10.12.10 at 4:26 pm

I think there may be a danger here of inflating objective probability into something utterly inaccessible, with the result that the only alternative seems to be brute personal opinion

Yes, I am also in (hopefully not violent) agreement with Tim here. The trouble (if it is one) is that we end up making the principles of decision theory into moral principles (I don’t have much problem with this but a) I think some might and b) it means that the status of moral rules depends on the status of mathematical propositions, which is at least interesting). And also that there will be cases in which the decision-theoretic toolkit doesn’t give an answer (or even sensible bounds on an answer).

49

Tim Wilkinson 10.13.10 at 11:20 am

DD – well I’m not sure how much of decision theory really gets promoted to the status of fundamental moral principle, rather than derived or subsidiary principle. As indicated, I suspect ‘expected value’ is maybe a bit too technical a term, and maybe builds in a bit too much immodesty about current methods – especially if those methods are going to be useless in cases where uncertainty gets treated as ‘cluelessness’ and the sums come up with null/unknown results.

Whatever is at the top rung will have to be better than the notoriously dual maximand of ‘greatest benefit for greatest number’, of course.

In particular, it will need to take account of some kind of probability-distribution type of thing, since I’d say the difference between that and a single-consequence rule ‘do the thing that has the best consequence’ is a difference in the kind of standard, rather than one which can be built in in the course of imperfect attempts to achieve that standard – if that makes sense. (Of course the single-consequence rule will be incorporated as a limit case of the prob-dist version – this might be one way of establishing a difference in kind or ‘order’ or something.)

The limited epistemic content of the fundamental rule would also of course have to avoid building in paralysis through ‘cluelessness’, as suggested above – to the person who has to make a decision, risk and uncertainty can’t be distinguished too fastidiously – if the numbers are vague, you’re going to need to sharpen them up somehow quick smart, before that trolley gets here.

Anyway, isn’t risk ultimately based on the princ of indifference? And not just as a matter of quick and dirty actuarial ‘cet. par.’ assumptions: even if you wanted to build a working Bayesian model of the world at the most fundamental level, the idea would still have to be that when you have divided and subdivided all the facts – in a logical-atomist kind of way – each distinct possible state of the world counts for exactly one. Depending on your prob-semantics, maybe this can be expressed as a Leibnizian principle – no duplication of indiscernible possible worlds, or something. But that is – or may as well be – just a construction; the princ of indiff is still ineliminable.

That’s all a bit rarefied, but I’m still not satisfied that risk and uncertainty are really distinct. Separately, it does remind me of another issue – say you want to include things like the probability of experts being right: how can that get incorporated into a decision calculus? The looming issue of how we pick our ways of dividing up (conceptualising) the world to pick our propositions comes in here. Is there some kind of a problem of double-counting if different levels of reality/analysis (social, biological, chemical, physical, quantum, etc.) are combined in the data? I think we should be told.

50

John Quiggin 10.13.10 at 7:32 pm

“I think we should be told”

When my long-running research program on decisions under differential awareness and bounded rationality reaches its triumphant conclusion[1], you will be.

fn1. I plan to time the announcement to coincide with the opening of the first commercial nuclear fusion plant

51

Tim Wilkinson 10.13.10 at 9:46 pm

I await it with an unknown degree of eager anticipation.

52

john c. halasz 10.14.10 at 1:26 am

Just as an irrelevant aside here, I find the affirmations of utilitarianism in some of the above comments actually self-stultifying. Just to raise one basic point, if there are to be practicable and accessible moral rules, then they will be just rough heuristics, with a synoptic and imprecise relation to experience and the likely distribution of relevant events. Pretending to some sort of more precise calculus of consequences or their distributions amounts to an idealizing illusion, which is impracticable in terms of the actual capacities and negotiations of deliberating agents. There are lots of other problems, as well, such as: a failure adequately to relate consequences to intentions (lest we regress to a barbarous notion of taboos); the assumption that morality and deliberation can be dealt with in terms of a basically instrumentalist calculus, when the whole problem is set by the non-manipulable status of human beings, whether or not that can adequately be expressed in terms of “respect”; the assumption that a calculus in some sort of homogeneous medium could eliminate the conflicts and the incommensurable commitments to values or ends involved at the root of the problem; etc.

But then any account of morality or ethics and deliberation, choice, values, norms and ends, und so weiter, requires not just a basic account of human agency, “freedom”, and its limits, but also a basic phenomenological description of what the “nature” and complexion of the domain of morality and the limits of that domain are. I remember, in Stanley Cavell’s “The Claim of Reason”, a discussion of the work in moral philosophy of some early Analytic, Stevenson, I think, in which it was clear that he hadn’t the foggiest clue of what “morality” was about. But then, the domain for “morality” and its application, especially under the highly structurally differentiated conditions of advanced modern societies, is much more restricted than most liberalistic thinkers would like to imagine. Minima moralia, I say!
