Consequentialisms

by Brian on December 20, 2003

I’m in the odd position that my favourite ethical theory is one I regard as having been decisively refuted. The theory is a form of consequentialism that I used to think avoided all the problems with traditional forms of consequentialism. I now think it avoids all but one or two of those problems, but those are enough. Still, whenever I feel like letting out my inner amateur ethicist, I keep being drawn back to this theory.

It’s a form of consequentialism, so in general it says the better actions are those that make for better worlds. (I fudge the question of whether we should maximise actual goodness in the world, or expected goodness according to our actual beliefs, or expected goodness according to rational beliefs given our evidence. I lean towards the last, but it’s a tricky question.) What’s distinctive is how we say which worlds are better: w1 is better than w2 iff, behind the veil of ignorance, we’d prefer to be in w1 rather than in w2.
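Put a bit more formally (a rough sketch only; the labels are mine, and the quantifiers could certainly be refined):

\[
w_1 \succ w_2 \;\iff\; \mathrm{Pref}_{\mathrm{VoI}}(w_1, w_2)
\qquad\qquad
\mathrm{Right}(a) \;\iff\; w(a) \succeq w(a') \text{ for every alternative } a' \text{ to } a
\]

where $\mathrm{Pref}_{\mathrm{VoI}}(w_1, w_2)$ says that from behind the veil of ignorance we’d prefer to be in $w_1$ rather than $w_2$, and $w(\cdot)$ maps an action to the world it would lead to.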

What I like about the theory is that it avoids so many of the standard counterexamples to consequentialism. We would prefer to live in a world where a doctor doesn’t kill a patient to harvest her organs, even if that means we’re at risk of being one of the people who are not saved. Or at least I think we would prefer that; I could be wrong. But I think our intuition that the doctor’s action is wrong is only as strong as our preference for not being in that world.

We even get something like agent-centred obligations out of the theory. Behind the veil of ignorance, I think I’d prefer to be in a world where parents love their children (and vice versa) and pay special attention to their needs, rather than in a world where everyone is a Benthamite maximiser. This implies it is morally permissible (perhaps even obligatory) to pay special attention to one’s nearest and dearest. And we get that conclusion without having to make some bold claims, as Frank Jackson does in his paper on the ‘nearest and dearest objection’, about the moral efficiency of everyone looking after their own friends and family. (Jackson’s paper is in Ethics 1991.)

So in practice, we might make the following judgment. Imagine that two children, a and b, are at (very mild) risk of drowning, and their parents A and B are standing on the shore. I think there’s something to be said for a world where A goes and rescues her child a, and B rescues her child b, at least if other things are entirely equal. (I assume that A and B didn’t make some prior arrangement to look after each other’s children, because the prior obligation might affect who they should rescue.)

But what if other things are not equal? (I owe this question to Jamie Dreier.) Imagine there are 100 parents on the beach, and 100 children to be rescued. If everyone goes for their own child, 98 will be rescued. If everyone goes for the child most in danger, 99 will be rescued. Could the value of paying special attention to your own loved ones make up for the disvalue of having one more drown? The tricky thing, as Jamie pointed out, is that we might ideally want the following situation: everyone is disposed to give preference to their own children, but they act against their underlying dispositions in this case so the extra child gets rescued. From behind the veil of ignorance, after all, we’d be really impressed by the possibility that we would be the drowned child, or one of her parents.
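To make the trade-off vivid, here’s a deliberately crude bit of arithmetic (the quantities $p$ and $d$ are placeholders of my own, and the model unrealistically assumes the nearest-child policy carries no partiality value at all). Suppose each of the 200 people on the beach gets value $p$ from living in a world where parents favour their own children, and each drowning has disvalue $d$. Behind the veil we compare

\[
\underbrace{200p - 2d}_{\text{everyone rescues their own child}}
\quad\text{with}\quad
\underbrace{-d}_{\text{everyone rescues the child most in danger}}
\]

and the partiality world wins iff $200p - 2d > -d$, i.e. iff $200p > d$: the aggregate value of universal parental partiality has to outweigh one extra drowning.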

It’s not clear this is a counterexample to the theory. It might be that the right thing is for every parent to rescue the nearest child, and that this is what we would choose behind the veil of ignorance. But it does make the theory look less like one with agent-centred obligations than I thought it was.

This leads to a tricky taxonomic question. Is the theory I’ve sketched one in which there are only neutral values (in Parfit’s sense) or relative values? Is it, that is, a form of ‘Big-C Consequentialism’? Of course in one sense there are relative values, because what is right is relative to what people would choose from behind the veil of ignorance, and different people might reasonably differ on that. But given a community with common interests, do we still have relative values or neutral values? This probably just reflects my ignorance, but I’m not really sure. On the one hand we have a neutrally stated principle that applies to everyone. On the other, we get the outcome that it is perfectly acceptable (perhaps even obligatory) to pay special attention to your friends and family because they are your friends and family. So I’m not sure whether this is an existence proof that Big-C Consequentialist theories can allow this kind of favouritism, or a proof that we don’t really have a Big-C Consequentialist theory at all.

(By the way, the reasons I gave up the theory are set out in this paper that I wrote with Andy Egan. The paper looks like a bit of a joke at first, but it makes a moderately serious point. Roughly, the point is that although the form of consequentialism set out here is not vulnerable to precisely the form of ‘moral saints’ objection that seems devastating to Benthamite utilitarianism, there’s still a moral saints objection, and that is a problem.)


1

Anno-nymous 12.20.03 at 10:10 pm

Perhaps the paper would look like less of a joke if it weren’t entitled “Prank.pdf” — just a thought.

More seriously, I feel like our preference for a world where parents love their children rests on our actual, not rational, beliefs: we value it because it was important in our own lives.

On the other hand, though, there’s probably some strong “human nature” that tends towards the usual parent-child bond. Since we’re dealing with happiness, it seems like human nature might be able to get us out of just about any jam if we’re able to say “Because of how we’ve evolved, people are a lot happier when they act in this manner. Thus it is moral.”

Of course, I’m not an ethicist, and I think I can fairly assume any thoughts I’ve had have already been had, better, by professionals.

2

John Q 12.20.03 at 11:29 pm

Brian, I posted what I thought was a definitive refutation of the organ bank example a while back, but my blog archives are inaccessible so I can’t link to it. In the standard example, the proposed course of killing the healthy patient would be dominated by the alternative of killing one of the sick patients, chosen by lot, and distributing their organs to the others.

This version of the example is arguably trickier for the kind of consequentialism you describe, or maybe for the usefulness of intuition. It seems plausible that lots of people would agree, behind the veil of ignorance, to drawing lots in the case I’ve described, but it still seems icky (compare shipwreck and cannibalism).

3

Brian Weatherson 12.20.03 at 11:46 pm

I agree that’s a tougher version of the doctor problem, but I don’t really see how it affects my theory – unless some moderately spectacular assumptions are made.

If we let the lottery process just be the process of waiting to see who dies first, with all five agreeing to donate spare organs to the other four if they are first to go, that doesn’t seem icky at all. (At least not to me – but I have been accused of being insensitive on these matters.)

Now that lottery might not work if (a) we expect two or more of them will go at once, or close enough to it, or (b) waiting for natural death will ruin the organs. But without one of those two assumptions there’s a very natural lottery to run.

Just why we should be happy with nature’s lotteries but not humans’ is a harder question. Perhaps as anno says we’re just used to some things and know they work.

4

degustibus 12.21.03 at 2:03 am

The darkest refutation of consequentialism, as I understand it, comes from Mark Twain’s Mysterious Stranger. The point being that one can never know the ultimate consequences of any act. (A woman donates a kidney to someone who turns out to be a serial rapist/murderer; the child saved at the beach turns out to be a 21st-century Adolf. Or this one, where a guy gets drunk, gets in a wreck, and is pried from his vehicle, saved by the Jaws of Life, then later gets drunk again and kills a family in a wreck.)

“Morally permissible”: this sounds like nonsense to me, in a world where all is permitted. Imagine someone (not wearing a badge and uniform or carrying a weapon) saying to you, “Stop doing that, it’s morally impermissible.”

Says who? According to what standards? And what will happen if I don’t stop doing it? (Some bearded galoot in a burqa will come along and hit ya with a stick.)

Ethicists should be having a great time with the real life ethical problem posed by killing many innocent people to bring one tyrant to trial.

But then ethicists typically don’t have anything to say in these matters, other than to offer a rationale after the fact that can be used to justify the action. (It seemed like a good idea at the time.)

5

Chris Bertram 12.21.03 at 10:16 am

I’ve been trying to puzzle out what the theory might be and what it is a theory of and I’m not seeing it. So maybe you can help me out.

Is it a theory about the _rightness_ of actions?

Assuming that it is ….

Does it apply to action types or tokens?

Is it a theory that tracks those actions that are right, that explains why they are the right actions, that identifies a right-making property?

Perhaps a crisp canonical formulation would help.

6

Chris Bertram 12.21.03 at 11:10 am

Reading my last comment I realise that it might come across as that staple of philosophical seminars: the deliberately disingenuous question. But it wasn’t meant that way – I’d really appreciate some clarification and unpacking.

7

Brian Weatherson 12.21.03 at 3:39 pm

Yep, it’s meant to be about which actions are morally right.

It’s meant to apply to action tokens not types, and for now I’d just be happy if it were extensionally correct – i.e. if it tracks those actions that are right. So the claim is just that an action is right iff it leads to a better world than its alternatives, where a is a better world than b iff we’d prefer being in a to being in b from behind the veil of ignorance.

(I’ve assumed there the objective version rather than the subjective version. If need be, replace ‘would lead to’ with ‘would be expected to lead to’. I’ve also assumed something like causal determinism – that there’s one world an action would lead to. Removing that requires complicating the formulation, but only in familiar ways. It also requires us to have preferences over lotteries from behind the veil of ignorance. I think that’s OK, but others might disagree.)

I certainly don’t think it’s an account of the right-making features of actions. It’s right to visit your sick friend in hospital because she’s your friend, and she’s sick, and she could use the company. Those reasons, I think, set out the right-making features of the action. It’s possible that the consequentialist claim could be read as an explanation for why those things are right-making features. I’m mostly interested in the weaker claim that the consequentialist theory tracks the right actions.

8

Matt McIrvin 12.21.03 at 3:47 pm

These discussions of moral systems, whether they be utilitarianism or divine command theory or something else entirely, always seem strange to me: they amount to attempts by someone to axiomatize their gut feelings about right and wrong, and then they always seem to founder on some situation in which they give an answer contrary to our gut feelings about right and wrong. But if that’s really the decisive criterion—if we’re going to regard any abhorrent-sounding consequence of a consistent moral system as fatal—why do we need anything other than the gut feelings in the first place? (This idea of “worlds in which we would prefer to live” is perhaps a stab in that direction.)

Maybe the thoughts of a well-brought-up moral agent constitute an essentially irreducible system. Or, given that we know that gut feelings are sometimes wrong (consider the many bigotries which have been discarded by the enlightened), are we trying eventually to establish some higher-level criterion for when to follow our guts and when to follow an axiomatic system? And if so, is there evidence that we can make any progress?

9

roger 12.21.03 at 7:58 pm

There is, I think, a problem with the general statement of consequentialism and the articulation of the counter-example. The problem has to do with levels of description. The notion of a world populated by humans is one level of description, and on this level, one doesn’t want organs harvested from one human to benefit another human. But in your example, you can go down, without semantic loss or gain, from that level to the level in which the harvester is a doctor and the human is a poor person. I’d question that logical continuity, and the premise that these descriptions are unproblematically held within your general description of “humans.” I think that the filling out of humans by their relationships to one another requires a thicker description of those relations before you can move from ethical judgments that are valid on level one to ethical judgments that are valid on the micro-level, level two. In fact, I would think that the mismatch between descriptions of worlds in terms of complete generality — Kant’s world, for instance — and worlds that are captured only by a thicker description of human relationships is the whole impetus behind some version of consequentialism. In a sense, this transposes the sorites paradox into ethical terms. Just as there is no formula for getting heaps out of sand, there is no formula for getting doctors and poor people out of humans.

The drowning child example is a good case for the transformations that thicker descriptions can bring about. From one end, any rescue of 99 children is equivalent to any other rescue of 99 children. But introduce a variable into the mix: what if — looking into the minds of the participants — there was an intention, on the part of one child’s parents, to murder him? The intention, however, is not to murder him by drowning, but to murder him in some other way. Accidentally, his drowning, in spite of the maximum effort put in by all parents to save the children, fulfilled that parental will to murder. One would want to say that the result, then, could be judged to be wrong vis-à-vis the intent of the murderous parents. On the other hand, the same result could have happened without any bad intentions in the minds of the rescuing parents. Now, a consequentialist, trying to get out of the tar baby posed by motive, has a problem here. This problem is invisible from the macro level, but very visible from the micro level.

10

robin green 12.22.03 at 7:38 am

Brian – nice formulation, and exposition of the advantages over other consequentialisms. I had despaired of ever finding a “realistic” consequentialism, but I will have to think about this some more.

Roger – Yes, ethical decisions depend on context. This is obvious. So what? The beauty of pure act-consequentialism is *precisely* that it fails to make any hard-and-fast rules that can be thwarted by contextual surprises (apart from the defining principle, which is so general as to be supposedly almost perfect). So your argument seems to be pro-consequentialist to me. Is that what you intended?

And I’m not sure what you mean by “the result was wrong”. How would that be phrased? “It was wrong for the child to have died?”

But in order to make a moral judgement about a person we have to be able to say that their actions (or inaction) was/were right or wrong, no? So do you mean that the honourable action was bad because it was performed by someone with dishonourable intentions – not someone intending to kill the child now, but later?

Or do you mean instead that the parent in question surreptitiously let their child drown, but pretended to try and save them? That would be quite different, and is obviously a wrong act. But that doesn’t seem at all consistent with what you said.

In fact, what you said seems incoherent. How can someone be held responsible for anyone’s death when they performed the “maximum effort” possible to try and save them?

As the film Minority Report I think shows, we should not be condemned for “evil thoughts” that we might have – no matter how horrible – before we have had the chance to enact them. Of course, that doesn’t mean that preventative measures are never in order – and it also shouldn’t be taken to apply to imminent actual threats to life or limb (so, it’s a fuzzy line).

degustibus – I’m not sure how Mark Twain’s refutation works against the “expected utility given what you know” type formulations. Since you could only have expected those life-saving acts to have good consequences (er… assuming you aren’t a serious misanthrope), the actions were still right and no-one could have expected you to have done anything better. Simple. End of story.

I think where “morally impermissible” comes into the equation with things like Iraq is with demonstrators, like myself, walking past Downing Street and shouting “Shame on you!” in Mr. Blair’s general direction, into the video cameras of the helpful policeman standing guard. If “Shame on you!” isn’t a moral exclamation, what is?

Of course, you don’t need very complicated ethical theories to see that the Iraq war was wrong on many levels.

Matt – The purpose of ethical argument is argument. Not as in ranting for ranting’s sake, but as in persuading your codisputant of the rightness of your position – or, alternatively or additionally, the rightness of the principle(s) from which your position springs. The well-known Harean refutation of the crude emotivism you espouse there (“Ethical debate is merely dressed-up Yays and Boos”) is that it implies that ethical arguments are not actually arguments worth having, when clearly they are. They are because it is possible – it may be rare, but it is possible, I know as I have been on the receiving end – to convince someone that the principles that you and I share, when viewed correctly and without fear or favour, motivate my position, not yours.

Without some bedrock of relevant shared principles, however, debate is fruitless. This is why the abortion debate, for example, is so intractable. There may be some shared principles on the table, but not, it seems, enough.

11

Chris Bertram 12.22.03 at 8:33 am

I guess I’m sceptical. If the consequentialist test that you advance here merely tracks the right answers then we don’t have anything like a consequentialist theory in the traditional sense. And it looks like you’ve just found a way of incorporating some deontological constraints within a system that looks consequentialist.

I’m also puzzled about the connection (if any) to ordinary ethical judgements and decisions. So take the decision to visit your friend in hospital. You rightly identify the reason-giving (and right-making) features of the situation. But it looks as if you should want to say that there are circumstances where that decision should be overridden. No doubt there are. But I doubt that the best way of capturing those circumstances is to say that you should do something else instead just in case doing that other thing leads to a better world in the sense specified…

But then perhaps you want to say that from behind the VoI we’d judge that we don’t want commitments to be too easily overridden by the application of the test itself. But when the test gets self-referential in that way I find myself staring into a hole with n different meta-levels, regresses, and no proper control on what is and isn’t allowed to count, &c.

12

Backword Dave 12.22.03 at 9:46 am

Surely there is a huge problem with the way the question is formulated? How can anyone know in advance that the first action will lead to 98 lives being saved and the second to 99? Kahneman and Tversky have lots of examples which show that common sense ignores statistical principles. Who thinks that morals can be determined by processes most people can’t handle?
I don’t expect to rescue anything like 100 drowning kids in my lifetime, so for most people, either course of action works. Finally, at risk of sounding like a heartless Tory, if you plan to take your kids to the beach, learn to swim first. And of course save your own; do you really think the next person on the beach is (likely to be) an ethicist?

13

Matt Weiner 12.22.03 at 3:05 pm

Backword Dave hits on something that bothers me about the drowning-children example: How do you calculate the consequences of your action when many people are acting? Do you simply hold everyone else’s actual action fixed, or calculate what they would have done if you had acted differently – in which case the policy with the best consequences is to save the child that no one else is actually saving?

The problem with this view comes when many people do things that collectively overdetermine something very bad, but where each individual act, past the threshold for the bad thing, has small positive consequences. If you calculate the consequences of an individual agent’s actions, holding the rest fixed, the consequences are positive – the bad thing would have happened anyway. But the agents collectively did something bad.

(A canned example: Two people are pointing guns at a third. Each simultaneously resolves a minor itch by squeezing their trigger finger. If A had not squeezed the trigger, the target would have died anyway, so A’s action has the net consequence of relieving the itch, and so is right. But that can’t be right.)
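To make the structure of the canned example explicit, here’s a toy calculation (the utilities are illustrative placeholders, not anything from the example itself):

```python
# Toy model of the two-shooter case. The numbers are placeholders:
# a tiny benefit for relieving the itch, a large disvalue for the death.
ITCH_RELIEF = 1
DEATH = -1000

def outcome_value(a_fires, b_fires):
    """Total value of the world, given who fires."""
    value = ITCH_RELIEF * (a_fires + b_fires)
    if a_fires or b_fires:   # one bullet suffices: the death is overdetermined
        value += DEATH
    return value

# A's marginal consequence, holding B's actual act (firing) fixed:
print(outcome_value(True, True) - outcome_value(False, True))   # +1
# The collective consequence of both firing versus neither firing:
print(outcome_value(True, True) - outcome_value(False, False))  # -998
```

On the hold-others-fixed test each act comes out mildly good, while the pair of acts together is disastrous – which is just the gap between individual and collective assessment that the example trades on.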

You could say: we wouldn’t choose to live in a world where so many people sully themselves by collaborating in a collective bad action. But that seems like smuggling in deontology, as I think Chris B is saying.

This is much treated in the literature, I’m sure, so I’d be happy if more clued-in people spoke up.

14

roger 12.22.03 at 3:56 pm

Robin, I am not averse to consequentialism, but I do think that it too hastily decouples motive from act in order to reach an admirable goal: embedding ethics in practice. To do this, I think you have to move from talk about worlds and humans to talk about situations and human relationships. As I said, I think that the assumption that you can move logically from one level of description to another reproduces all the problems of a more Kantian moral theory — and this is a problem, insofar as consequentialism is supposed to be an alternative to Kantian moral theory.

As an example of what I mean: when you say “As the film Minority Report I think shows, we should not be condemned for ‘evil thoughts’ that we might have – no matter how horrible – before we have had the chance to enact them…,” the scare quotes don’t rescue “evil” from its moral value. How, actually, can these thoughts be evil if evil is defined solely on actions? Of course, you can speak of thought acts that are like speech acts, but at that point you have dissolved the distinction you are trying to preserve. That you, as an Other, embodied in some social institution, shouldn’t punish evil thoughts tells us only, or at least tells me only, that punishment and reward are not the necessary coefficients of moral judgement. If I were to pinpoint the one problem I have with both Kantian and consequentialist moral theory, it is the move that identifies judgments of good and bad with punishment and reward.

As to mixing in the motive of the parents, this comes from positing that the parents are rescuing children regardless of their personal relationship to the rescued child. For 99 parents, this comes from a consequentialist p.o.v.; for one, it comes from a small homicidal wager: that their child won’t get rescued.

Oh, and here’s one further problem for consequentialism — it does make it rather difficult to pick out moral agents. After all, it is only because human beings have something like consciousness and they act that we distinguish their killings and charity from the acts of, say, a guppy. On the level of pure action, however, a guppy killing its thousand children and a homo sapiens killing its one aren’t morally distinguishable.

16

sennoma 12.22.03 at 4:25 pm

Who thinks that morals can be determined by processes most people can’t handle?

Singer (in Practical Ethics; he’s discussing someone else’s ideas, but I can’t remember whose) makes a distinction between day-to-day ethical decisionmaking and carefully, critically reasoned ethical decisionmaking. He suggests that it is useful to have a set of carefully worked-out guiding principles on which to base those day-to-day choices which must be made without devoting hours of introspection to each one. I think (and I could be very wrong in this) that most people can handle statistical principles if they have time to sit down and nut them out, so that such principles can at least play a role in, er, pre-emptive ethical decisionmaking.

17

roger 12.22.03 at 4:28 pm

Robin, I’m sympathetic to the motives for consequentialism — which I take to be the effort to embed ethical judgments in the world of practice. But I don’t think consequentialism quite gets there.

1. Levels of description. The problem with starting out on the level of the most general description and then descending to the next level, in which human relationships are fleshed out, is that it isn’t logically clear how you make that move. In other words, the hierarchy Brian presents is linear — you merely add things to worlds and to humans and you get doctors and parents. But I think that it is more likely that the moral world is non-linear — that there isn’t a “human being” to which you add the “doctor” formula and you get a doctor. This actually vitiates the whole point, which is to understand practice — how doctors are made, etc. etc.

2. When you write: “As the film Minority Report I think shows, we should not be condemned for ‘evil thoughts’ that we might have – no matter how horrible – before we have had the chance to enact them,” the scare quotes around evil don’t rescue it from its moral status. But of course, if evil — or any moral judgment — only attaches to acts, then how can there be such a thing as evil thoughts? It would be like thoughts that are colored blue. I think that there are bad thoughts — and that the thinker reacts to them as bad thoughts. This is the whole phenomenon of conscience. A moral theory that has no place for, or explanation of, conscience is not without problems.

3. Perhaps by condemnation you meant merely that we describe something as evil — but there is a larger sense in which this is the flaw in consequentialist reasoning. I think we have to disjoin punishment or reward from good or bad. The consequentialist seems to think that the essence of bad is punishable. This, I think, reproduces the worst habit of utilitarianism. Ultimately, we know that punishment and reward stem from a different class of motives — motives circulated within various social institutions — than the motives that prompt us to describe an act or a thought as good or bad. It has long been a philosophical platitude that a good act is not good because it makes us feel good — but philosophers have a much harder time decoupling bad acts from punishment.

4. I presented a confusing interpretation of the parent example; you are right. Here’s what I meant. If the one hundred parents decide to rescue children regardless of relationship, the homicidal parents could then rescue a child — something good — only to cover up their wager — admittedly a long shot — that their own child wouldn’t be rescued. Now, would we say this was bad?

5. Speaking of which — if we do exclude all mention of motive or thought from our moral talk, it is puzzling how we pick out moral actors. Why distinguish humans from guppies, morally? It seems that you need some way of packing in certain distinguishing things about humans to begin with. And surely that can’t be derived from their acts — it has to be derived, somehow, from what they think of acts, and that they think of acts.

18

roger 12.22.03 at 4:31 pm

Oops — I’m TERRIBLY sorry — my browser has been hanging up. I only meant the last thing to get on line.

19

robin green 12.23.03 at 8:54 am

Matt – You bring up a very important point about collective action which I don’t have a good answer to. I remember reading Parfit’s “Reasons and Persons”, which is supposed to be a very good book on consequentialism and personhood, and I remember putting the book down because he completely fudged and/or ignored the vital question that you have raised. It was a good book up to that point, but I just couldn’t continue after I’d thought through the implications of that hole in the theory. It seemed to me then, and still seems, insoluble.

Roger’s parent example – in its clarified form – seems to me to be a fascinating, more convoluted example of the kind of collective action problem that consequentialism does badly at.

Roger – I don’t see what follows from taking into account a person’s personal history, or whatever, which isn’t necessarily relevant. Where does this lead?

As for bad thoughts, let me modify my absolutist and simplistic “anti-thought-crime” stance, which as you rightly pointed out is untenable. Yes, a thought _can_ be bad, but only because of what it predisposes us to *do*, if anything – and it can’t ever be as bad as the actual act. If it is only a “joke thought” that does not have any serious intention to do anything in reality attached to it, then it’s less harmful still.

For me, moral actors are distinguished by a very simple pragmatic criterion – their ability to comprehend moral judgements. It is pointless to upbraid a lion for “inhumanely” killing its prey.

As an aside, the “ability to comprehend” has to be defined quite loosely if you want to include full-blown psychopaths as moral agents – because they appear to have no functioning conscience, but we still want to be able to talk about them doing bad things.

20

sennoma 12.23.03 at 4:34 pm

Robin, Matt — in re: collective action, I haven’t read Reasons and Persons (it’s on my guilt list), but I wonder if Parfit’s fudge is along the lines of good/bad faith? That is, each agent must determine, in good faith and from as much of the evidence as they can see, their own right course of action. If it is reasonable to expect a given agent to have seen the big picture, then taking part in the collective wrongdoing was wrong on that agent’s part. This at least avoids the recursive nature of trying to work out what each agent would have done if some other agent/s had done this or that or the other, and allows one to focus on what each actually did. Example: the “tragedy of the commons” model wherein the commons is ruined by each agent maximising his/her own utility always seemed a bit bogus to me, because it is not difficult to incorporate “having a viable commons” into the idea of maximal personal utility.
