Utilitarianism

 
 
djbt
 
Reply Mon 5 Sep, 2005 06:45 am
Val, I too will be going over old arguments here, but I don't think this is boring or futile, since I still don't think we fully understand each other.

val wrote:
Happiness for the greatest number of people ... but how are we going to define happiness? In utilitarianism we have no prior philosophy that allows us to give definitions like that. And the concept of happiness is not univocal.

It doesn't matter if the word, or concept, is not univocal, as long as its definition in the context is univocal. Words, as they say, are our slaves, not our masters. Now, clearly I don't want to speak for all utilitarians, but this is the way I understand utilitarianism (and I'm happy to discuss this in isolation from anyone else's understanding of it):

An increase in happiness = an increase in pleasure and/or a decrease in pain.

Pleasure = a sensation you'd rather have than not
Pain = a sensation you'd rather not have.

If you object that others may not define happiness, pleasure or pain this way, I can easily sidestep this objection by restating my summary of utilitarianism to:

The greatest increase in sensations that are gladly felt and decrease in sensations that would rather not be felt.

val wrote:
And why must "the great number of people" prevail over individual happiness? Happiness for the majority frequently implies suffering for the minority. What is the criterion?

There is a very big difference between the 'greatest number of people' and a 'great number of people'. The former I would take to mean 'as near to all as is possible', definitely not 'just a majority will do fine'. And not only does individual happiness matter, it is the only thing that matters, and all individuals are counted as equally important. Allowing any individual unhappiness that could possibly have been avoided is a failure in utilitarian terms.
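
To make the arithmetic behind this explicit, here is a rough sketch in Python of the calculus I have in mind (the function names and structure are mine, purely for illustration, not a claim about how any utilitarian 'really' computes):

Code:
# Each individual's change in happiness = pleasure gained minus pain suffered,
# and every individual is weighted equally.
def happiness_change(pleasure_gained, pain_suffered):
    return pleasure_gained - pain_suffered

def total_happiness_change(outcomes):
    # 'outcomes' is a list of (pleasure_gained, pain_suffered) pairs, one per affected person.
    return sum(happiness_change(p, q) for p, q in outcomes)

def best_act(candidates):
    # 'candidates' maps each possible act to its list of per-person outcomes.
    return max(candidates, key=lambda act: total_happiness_change(candidates[act]))

So, for instance, best_act({'help': [(2, 0), (2, 0)], 'ignore': [(3, 0)]}) returns 'help', because two people each gaining a little outweighs one person gaining slightly more.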
 
Ray
 
Reply Tue 6 Sep, 2005 12:30 am
Is life worth more than happiness? To what degree are happiness and pain measured?

What about human dignity?
 
djbt
 
Reply Tue 6 Sep, 2005 02:12 am
Ray wrote:
Is life worth more than happiness?

A tough one. Clearly, without life one would have no pleasure at all, which makes unnecessary death a very bad thing in utilitarian terms. If we assume that life generally contains (or has the potential to contain) enough pleasure to make it worth the pain, the taking away of life is the taking away of a lifetime's worth of happiness, much, much worse than taking away any temporary moment of happiness.

However, it is conceivable that one could be so unhappy that without a little more happiness, they would not wish to continue living. In this case, you could perhaps say that happiness is worth more than life, although, of course, you would not be choosing between the two.

Ray wrote:
To what degree are happiness and pain measured?

I don't understand this question; can you explain more fully what you mean?

Ray wrote:
What about human dignity?

What do you mean by 'human dignity'? If you mean the moral worth of living by a set of social constraints considered 'dignified', then the existence of this feeling would be something to be taken into account, but not something given a priori importance. If you mean the 'right' of a person not to be humiliated, ill-treated, or commodified, then I think we would agree that being humiliated, ill-treated and commodified would make one very unhappy, and therefore such experiences should be minimised, although I don't see what being human has to do with it.
 
Thomas
 
Reply Tue 6 Sep, 2005 02:16 am
Ray wrote:
Is life worth more than happiness?

Depends on how much life, how much happiness, and the individual you're asking.

Ray wrote:
To what degree are happiness and pain measured?

By observing people's willingness to seek the former and avoid the latter.

Ray wrote:
What about human dignity?

I don't think utilitarianism has a good way to measure that -- but neither does any other moral philosophy.
 
val
 
Reply Tue 6 Sep, 2005 05:15 am
djbt

I see you define pleasure and pain in terms of physical sensations.
I think this is a drastic limitation in human behavior, individual and social.

See this example: Socrates died not because he felt pleasure in drinking the hemlock, nor because he felt pleasure in dying. In fact, he felt pleasure in living as he had always lived.
He died because he chose not to deny his own beliefs and because he chose not to escape from the prison and go into exile. He made a choice based on values.

But even at the level of sensation, your example is not convincing:
Imagine a father and his son. Both are hungry, but there is only enough food for one person. So the father decides not to eat - not to have that sensation of pleasure from eating - in order to allow the son to eat and have that sensation. The decision he made was not based on that sensation of pleasure: he is still hungry. He made an affective choice.

In fact, I don't believe that most human choices are made on that sensation/pleasure basis.
A man who dies trying to save a stranger from the flames does not act in order to receive a good sensation. He does it because of a sense of duty, or of solidarity, but in both cases the morality involved is not utilitarian. To be utilitarian, that man would have to weigh the value of the victim's life against the value of his own life - and in the meantime, while he was making that calculation, the other man would burn.
 
djbt
 
Reply Tue 6 Sep, 2005 06:30 am
val wrote:
djbt

I see you define pleasure and pain in terms of physical sensations.
I think this is a drastic limitation in human behavior, individual and social.

I hope you are not trying to straw man me! But seriously, this is what I meant about us still not fully understanding each other.

I didn't limit it to physical sensations, I just said sensations. Obviously, psychological sensations can be pleasurable and painful too.

val wrote:
See this example: Socrates died not because he felt pleasure in drinking the hemlock, nor because he felt pleasure in dying. In fact, he felt pleasure in living as he had always lived.
He died because he chose not to deny his own beliefs and because he chose not to escape from the prison and go into exile. He made a choice based on values.

I would say that, for people with moral values, to betray those morals would be psychologically painful, painful enough to outweigh other pleasures, perhaps.

But in any case, my utilitarian position is one about how people should act, not how people do act. Whether or not people do act just for their personal pleasure is an entirely separate question. The issue under discussion is how utilitarianism says people should act.

Just to be clear, it certainly does not say that people should act to maximise their own pleasure and minimise their own pain. Rather, they should act to maximise pleasure and minimise pain for all. In utilitarian ethics, I don't count for any more than anyone else; everyone counts equally.

So, if you are holding up Socrates' actions as something that should be considered morally good, then, as a utilitarian, I would agree with you. I think that by making himself a martyr, Socrates made a powerful statement about freedom of expression, and the importance of an honest and benevolent quest for knowledge. Of course, I don't think freedom of expression etc. have a priori moral value, but I do think that, generally, such values tend to reduce suffering. Clearly, utilitarianism requires benevolence and knowledge - benevolence to consider the happiness of every other individual as important as your own happiness, and knowledge to know how to act to succeed in making individuals happier.

val wrote:
In fact, I don't believe that most human choices are made on that sensation/pleasure basis.

Just to be doubly clear, I'm not saying they are. I'm saying they should be.
val wrote:
A man who dies trying to save a stranger from the flames does not act in order to receive a good sensation. He does it because of a sense of duty, or of solidarity, but in both cases the morality involved is not utilitarian. To be utilitarian, that man would have to weigh the value of the victim's life against the value of his own life - and in the meantime, while he was making that calculation, the other man would burn.

Well, it seems a little ridiculous to debate the motivations of hypothetical people...

However, two points:

(1) A utilitarian (if sufficiently quick-thinking) would not assess the value of his own life against that of the other man, because (a) he is not in a position to exchange their positions, and (b) all lives have the same inherent value in utilitarian ethics (though he could think about whose death would cause most suffering, though it is highly unlikely he would have any useful information to make any assessment of this).

What he might do is assess the likelihood of him being able to save the man. If there were no chance of saving the other man, he might decide not to try, to avoid the risk of causing, by his death, great suffering to his family and friends. He must also bear in mind that trying to save the man might, in fact, make the man more likely to die, if any fire brigade officers had to spend time getting him out of trouble, rather than saving the other man.

If there were a reasonable chance of saving the other man (and there was no-one else in a better position who was willing to do it), he might well decide that the odds of them both surviving were greater than them both dying, and so try to save the man, out of a utilitarian sense of duty.

(2) The question of how a utilitarian should act when they don't have time to assess the situation is an interesting, but separate, one. I would suggest that the utilitarian thing to do would be to have 'rule of thumb' reactions for such situations, which tend to maximise happiness. But in a situation where someone does not have time to consider the morality of the situation, their moral beliefs, utilitarian or otherwise, are irrelevant anyway.
 
joefromchicago
 
Reply Tue 6 Sep, 2005 08:04 am
Thomas wrote:
I guess that much of the ill-feeling towards utilitarianism exists for the same reasons other people like it. It is at its core a mechanical construction that can work entirely independent of the moral sentiments that we want justice to satisfy.

That statement makes no sense. If utilitarianism claims to be a system of morality, then it cannot work "independent of the moral sentiments that we want justice to satisfy." Moral sentiments that are inconsistent with morality are pure figments. Either utilitarianism is a system of morality and is, thus, consistent with moral sentiments, or else there are no such things as "moral sentiments."

Thomas wrote:
Utilitarianism is an amoral theory of morality, and that makes people uncomfortable about it.

There can be no such thing as an "amoral theory of morality." You're describing a chimera.
 
joefromchicago
 
Reply Tue 6 Sep, 2005 08:12 am
djbt wrote:
(1) A utilitarian (if sufficiently quick-thinking) would not assess the value of his own life against that of the other man, because (a) he is not in a position to exchange their positions, and (b) all lives have the same inherent value in utilitarian ethics (though he could think about whose death would cause most suffering, though it is highly unlikely he would have any useful information to make any assessment of this).

I agree with (a), but you'll have to explain your position on (b). If "all lives have the same inherent value in utilitarian ethics," then that rule must be justified by a utilitarian calculus (if it isn't, then you would be attempting to build a utilitarian system on a non-utilitarian foundation). So what is the utilitarian rationale for the rule that all lives have the same inherent value?

Take this hypothetical: there are five patients at a hospital. One needs a heart transplant, two need kidney transplants, and two need lung transplants. Each will die in the next 48 hours unless they receive the needed transplant. A perfectly healthy man arrives at the hospital for a routine checkup. It is discovered, during the checkup, that he is a perfect match for all five of the transplant patients. If the doctors remove the healthy man's heart, lungs, and kidneys, he will, of course, perish, but the lives of five people will be saved.

Under your rule, djbt, all lives are equal, but it is equally certain that 5>1. So does the utilitarian doctor kill the healthy man to save the five sick patients?
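
To spell out the arithmetic I am attributing to the utilitarian doctor, here is a sketch of my own, with the hypothetical's stipulations baked in and names invented purely for illustration:

Code:
# Naive act-utilitarian tally for the transplant hypothetical,
# counting each life as one unit of equal value and ignoring every side effect.
lives_lost_if_doctor_operates = 1   # the healthy man
lives_lost_if_doctor_refuses = 5    # the five transplant patients

# On this count alone, operating "wins", since 1 < 5.
operate = lives_lost_if_doctor_operates < lives_lost_if_doctor_refuses
print(operate)  # True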
 
Thomas
 
Reply Tue 6 Sep, 2005 08:14 am
Fair enough, Joe. My point is that once you have accepted that you ought to make as many people as possible as happy as possible, everything else reduces to is-questions. There are barely any ought-questions left in practically pursuing the best state of affairs, as judged by the utilitarian calculus.
 
joefromchicago
 
Reply Tue 6 Sep, 2005 08:16 am
Thomas wrote:
Fair enough, Joe. My point is that once you have accepted that you ought to make as many people as possible as happy as possible, everything else reduces to is-questions. There are barely any ought-questions left in pursuing the best state of affairs, as judged by the utilitarian calculus.

I thoroughly disagree. Morality is nothing but "ought" questions. But perhaps you'd like to tackle the hypothetical that I posed to djbt while avoiding any notions of "should" or "ought."
 
Thomas
 
Reply Tue 6 Sep, 2005 08:58 am
joefromchicago wrote:
Thomas wrote:
Fair enough, Joe. My point is that once you have accepted that you ought to make as many people as possible as happy as possible, everything else reduces to is-questions. There are barely any ought-questions left in pursuing the best state of affairs, as judged by the utilitarian calculus.

I thoroughly disagree. Morality is nothing but "ought" questions. But perhaps you'd like to tackle the hypothetical that I posed to djbt while avoiding any notions of "should" or "ought."

My answer is that yes, my moral intuitions agree with sacrificing the healthy man to the five unhealthy ones, as the utilitarian calculus predicts. But only if you explicitly state the hypothetical's implicit assumption, which is that the world will come to an end the next day. If you don't make that assumption, the story isn't over at the point you so cleverly chose to stop telling it. Here is how the story might continue:

As time goes on, every potential patient will come to realize that going to a hospital is potentially lethal to a healthy person, or to one who is only slightly sick. Being maximizers of their own utility, people will increasingly avoid hospitals unless the chance of being cured there exceeds the risk of being slaughtered for organs. A vicious circle sets in: hospitals treat more and more people who need organs, and fewer and fewer who could potentially "donate" them against their will. At some point, the Big Computer for Utility Maximization (BICUM), which has replaced the government, recognizes a great drop in aggregate utility because nobody goes to hospitals anymore. It calculates that aggregate utility could be much higher if it defined and enforced a rule that everybody has property rights in their own body, along with freedom of contract to part with any body parts. So it enacts it -- even in the situation you have described, Joe. Over time, enforcing the rule is occasionally a utility loss and usually a utility win (because people trust hospitals again), for a total utility win that is still enormous when averaged over all the occasions it concerns.

From what I can see, there is no "ought part" in the story except the minimal one I mentioned in my last post. And BICUM, acting with no "ought" opinions of its own, comes up with a set of rules that happens to feel just to my ethical intuition.

I believe that you, Joe, made a mistake in neglecting that individuals react to incentives. As a result, the rule you propose (doctors can grab their organs from wherever they find them) does not maximize utility except for the very short time while people haven't reacted to the incentives it sets. It falls very far short of maximizing utility after people have reacted to those incentives. Note that if I am right, your mistake has been one about the "is" part, not about the "ought" part.
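
If it helps, here is a toy calculation of the incentive point, in Python; every number is invented and only the direction of the effect matters:

Code:
# One-off gain from harvesting a healthy patient's organs versus the long-run loss
# once people learn that hospitals are dangerous and start staying away.
lives_saved_per_harvest = 4                 # five recipients saved minus the one donor killed
harvests_per_year = 10

lives_saved_by_trusted_hospitals = 10000    # per year, while people still trust hospitals
fraction_who_now_stay_away = 0.5            # after the incentive has done its work

short_run_gain = lives_saved_per_harvest * harvests_per_year                     # 40
long_run_loss = lives_saved_by_trusted_hospitals * fraction_who_now_stay_away    # 5000

print(short_run_gain - long_run_loss)       # heavily negative: the rule fails to maximize utility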
 
joefromchicago
 
Reply Tue 6 Sep, 2005 10:05 am
Thomas wrote:
My answer is that yes, my moral intuitions agree with sacrificing the healthy man to the five unhealthy ones, as the utilitarian calculus predicts. But only if you explicitly state the hypothetical's implicit assumption, which is that the world will come to an end the next day. If you don't make that assumption, the story isn't over at the point you so cleverly chose to stop telling it. Here is how the story might continue:

I never implied that the world would end the next day. Certainly the participants in that little drama would have acted far differently if they believed the world would end the next day (there would, for instance, be no need for the healthy man to get a checkup, let alone for transplant operations on the sick patients).

But go ahead and posit the long-term consequences of your "moral intuitions." That simply identifies you as a rule-utilitarian rather than an act-utilitarian -- a common enough choice. Indeed, I would have been surprised if you hadn't considered the long-term consequences of operating on the healthy man.

Thomas wrote:
At some point, the Big Computer for Utility Maximization (BICUM), which has replaced the government, recognizes a great drop in aggregate utility because nobody goes to hospitals anymore.

A society will only replace government with a BICUM if it has decided that the BICUM itself is justified by a utilitarian calculus.

Thomas wrote:
It calculates that aggregate utility could be much higher if it defined and enforced a rule that everybody has property rights in their own body, along with freedom of contract to part with any body parts. So it enacts it -- even in the situation you have described, Joe. Over time, enforcing the rule is occasionally a utility loss and usually a utility win (because people trust hospitals again), for a total utility win that is still enormous when averaged over all the occasions it concerns.

Oh brave new world that has such machines in't!

Thomas wrote:
From what I can see, there is no "ought part" in the story except the minimal one I mentioned in my last post. And BICUM, acting with no "ought" opinions of its own, comes up with a set of rules that happens to feel just to my ethical intuition.

Actually, there are "oughts" scattered throughout your story. For instance, why ought the next doctor who is presented with my hypothetical refuse to operate on a healthy man? If it is because the BICUM said so, then the next question is: why ought the doctor follow the dictates of the BICUM? And if the answer to that question is "because following the dictates of the BICUM maximizes utility," the doctor may logically ask: why ought I maximize utility? And I'm not sure anyone has yet answered that question.

Thomas wrote:
I believe that you, Joe, made a mistake in neglecting that individuals react to incentives.

I merely set out a hypothetical. It was not my job to consider the incentives, that was your job.

Thomas wrote:
As a result, the rule you propose (doctors can grab their organs from wherever they find them) does not maximize utility except for the very short time while people haven't reacted to the incentives it sets.

I proposed no rule whatsoever. I posed a question. You proposed a rule.

Thomas wrote:
It falls very far short of maximizing utility after people have reacted to those incentives. Note that if I am right, your mistake has been one about the "is" part, not about the "ought" part.

If we concentrate on what the doctor will do, then we simply don't have enough information to form an opinion. The doctor, after all, could be a homicidal maniac. On the other hand, if we want to ask what the doctor should do, then we have enough information at hand to arrive at an answer.
 
djbt
 
Reply Tue 6 Sep, 2005 10:39 am
joefromchicago wrote:
djbt wrote:
(1) A utilitarian (if sufficiently quick-thinking) would not assess the value of his own life against that of the other man, because (a) he is not in a position to exchange their positions, and (b) all lives have the same inherent value in utilitarian ethics (though he could think about whose death would cause most suffering, though it is highly unlikely he would have any useful information to make any assessment of this).

I agree with (a), but you'll have to explain your position on (b). If "all lives have the same inherent value in utilitarian ethics," then that rule must be justified by a utilitarian calculus (if it isn't, then you would be attempting to build a utilitarian system on a non-utilitarian foundation). So what is the utilitarian rationale for the rule that all lives have the same inherent value?

In my utilitarian system (the only one I can defend), the three foundation assumptions (and they are just that, assumptions) are that:

(1) Pleasure is good.
(2) Pain is bad.
(3) All things that experience have the same claim to happiness. (In other words, everything that has interests should have their interests equally considered).

So, assuming that we know nothing of the consequences of either of the men's deaths, and assuming that they both want to stay alive, they both have the same claim to staying alive.

joefromchicago wrote:
Take this hypothetical: there are five patients at a hospital. One needs a heart transplant, two need kidney transplants, and two need lung transplants. Each will die in the next 48 hours unless they receive the needed transplant. A perfectly healthy man arrives at the hospital for a routine checkup. It is discovered, during the checkup, that he is a perfect match for all five of the transplant patients. If the doctors remove the healthy man's heart, lungs, and kidneys, he will, of course, perish, but the lives of five people will be saved.

Under your rule, djbt, all lives are equal, but it is equally certain that 5>1. So does the utilitarian doctor kill the healthy man to save the five sick patients?

Well, this is a bit like asking - if you had to kill your mother or your sister, which would you kill? Really? You monster, you'd kill a member of your own family?!

But I won't dodge the question, I'll trust that you will take my answer as an answer only to be applied to this very specific, extremely improbable, contrived hypothetical situation, and not consider it a guideline for behaviour in reality (where your question would be an example of the false dilemma fallacy).

To save time, can I list some extra hypothetical facts, without which I could escape the hard question you are posing? Let us assume:

The 5 patients will definitely die if they don't get the healthy man's organs - there is absolutely no other way they could live (i.e., the patients couldn't donate organs to each other, and no other organs will turn up).

The patients will definitely live if they get the organs.

No one will ever find out what has happened, no precedent will be set, and there will be absolutely no other consequences except the death of the healthy man, or the deaths of the 5 patients.

If these are fair assumptions, then yes, you are quite right: five people dying is worse than one person dying, and therefore the doctor should kill the one man to save the five.

Note, once again, this answer bears no relation whatsoever to what my answer would be in a similar, real-life situation, because in a real-life situation there tends to be less certainty, and more than just two options. The danger of a hypothetical like this is that it can be misapplied, and related to the real world. However, I have answered it because it brings up an important point, namely the question of the moral status of action and inaction.

While I think that a legal distinction between action and inaction is very important, I don't think a moral distinction is. There is no inherent difference between the consequences of killing someone and not saving someone. To not save someone is to act in a way that results in that person's death. To kill someone is to act in a way that results in that person's death. So killing and not saving are morally equivalent actions (for the purposes of deciding what one should do, not for judging a person for what they have done).

So the doctor, in this hypothetical (and in hypothetical land only), has the unenviable task of choosing to act in a way that results in one death or to act in a way that results in five deaths, and would minimise suffering by choosing the former.
 
Thomas
 
Reply Tue 6 Sep, 2005 10:51 am
joefromchicago wrote:
Thomas wrote:
My answer is that yes, my moral intuitions agree with sacrificing the healthy man to the five unhealthy ones, as the utilitarian calculus predicts. But only if you explicitly state the hypothetical's implicit assumption, which is that the world will come to an end the next day. If you don't make that assumption, the story isn't over at the point you so cleverly chose to stop telling it. Here is how the story might continue:

I never implied that the world would end the next day.

In this case, the behavior of the doctors does not maximize utility over the relevant timeframe, so your example does not test the moral implications of utilitarianism.

joefromchicago wrote:
But go ahead and posit the long-term consequences of your "moral intuitions." That simply identifies you as a rule-utilitarian rather than an act-utilitarian -- a common enough choice. Indeed, I would have been surprised if you hadn't considered the long-term consequences of operating on the healthy man.

Thanks -- I take that as a compliment of sorts. As an aside, I find the distinction mostly academic. Act utilitarianism becomes rule utilitarianism once you account for the facts that the expectation of future happiness makes people happy at present, and that people like to develop habits and dislike thinking too hard too often.

joefromchicago wrote:
Thomas wrote:
At some point, the Big Computer for Utility Maximization (BICUM), which has replaced the government, recognizes a great drop in aggregate utility because nobody goes to hospitals anymore.

A society will only replace government with a BICUM if it has decided that the BICUM itself is justified by a utilitarian calculus.

I agree; I made that assumption to simplify away the possibility of government failure -- since you generally worry less about that possibility than I do, perhaps my assumption does not simplify much for you. In this case, feel free to forget it.

joefromchicago wrote:
Actually, there are "oughts" scattered throughout your story. For instance, why ought the next doctor who is presented with my hypothetical refuse to operate on a healthy man? If it is because the BICUM said so, then the next question is: why ought the doctor follow the dictates of the BICUM? And if the answer to that question is "because following the dictates of the BICUM maximizes utility," the doctor may logically ask: why ought I maximize utility? And I'm not sure anyone has yet answered that question.

He ought to maximize utility because I assumed he ought to. Note that the claim I had made, that you quoted and "thoroughly disagree"d with, was carefully phrased as a hypothetical: "My point is that once you have accepted that you ought to make as many people as possible as happy as possible[...]" (emphasis added).

joefromchicago wrote:
Thomas wrote:
It falls very far short of maximizing utility after people have reacted to those incentives. Note that if I am right, your mistake has been one about the "is" part, not about the "ought" part.

If we concentrate on what the doctor will do, then we simply don't have enough information to form an opinion. The doctor, after all, could be a homicidal maniac. On the other hand, if we want to ask what the doctor should do, then we have enough information at hand to arrive at an answer.

He could be a homicidal maniac -- but we know for a fact that almost the overwhelming majority of doctors is indeed a homicidal maniacs. This gets reflected in our opinions on what people ought to do. If humans were a species with radically different preferences, one where most individuals would rather grill on an electric chair than refrain from murder, utilitarianism predicts that we would have radically different legal and moral systems. As another aside, David Friedman (Milton's son, professor of law and economics at Santa Clara University and my favorite poster on Usenet) is currently trying to test such predictions in a series of seminars called "Legal systems very different from our own." The idea is to observe the preferences people have in very different societies, then use utilitarianism (or rather "law and economics") to try and make refutable predictions about them. It's interesting stuff, and I'm curious where this line of research will lead.
 
djbt
 
Reply Tue 6 Sep, 2005 10:57 am
joefromchicago wrote:
Why ought I maximize utility? And I'm not sure anyone has yet answered that question.

'One ought to maximize utility' would be the one ought Thomas referred to, I believe.

As to why one ought to maximise utility, I have no answer, except that it follows from:
(1) Pleasure is good.
(2) Pain is bad.
(3) All things that experience have the same claim to happiness. (In other words, everything that has interests should have their interests equally considered).
... but, of course, one could disagree with any of these premises.

But even if one agreed with these premises, the problem doesn't go away. If you agree that an ought statement cannot be derived from an is statement, then it seems to me that it is impossible to conceive of a moral system that does not assume at least one unfounded ought statement - one ought to do good.
 
joefromchicago
 
Reply Tue 6 Sep, 2005 11:32 am
djbt wrote:
But I won't dodge the question, I'll trust that you will take my answer as an answer only to be applied to this very specific, extremely improbable, contrived hypothetical situation, and not consider it a guideline for behaviour in reality (where your question would be an example of the false dilemma fallacy).

If you are simply following an ad hoc series of expedients, then I can understand how your response to a hypothetical situation should not be considered a guideline for behavior in reality. But if you are following some kind of rule about moral behavior, then the rule should apply equally in both real and hypothetical situations.

djbt wrote:
To save time, can I list some extra hypothetical facts, without which I could escape the hard question you are posing? Let us assume:

The 5 patients will definitely die if they don't get the healthy man's organs - there is absolutely no other way they could live (i.e., the patients couldn't donate organs to each other, and no other organs will turn up).

The patients will definitely live if they get the organs.

No one will ever find out what has happened, no precedent will be set, and there will be absolutely no other consequences except the death of the healthy man, or the deaths of the 5 patients.

These are all fair assumptions.

djbt wrote:
If these are fair assumptions, then yes, you are quite right: five people dying is worse than one person dying, and therefore the doctor should kill the one man to save the five.

Note, once again, this answer bears no relation whatsoever to what my answer would be in a similar, real-life situation, because in a real-life situation there tends to be less certainty, and more than just two options.

How do you know that? How do you know that the next time you are faced with this situation it won't be exactly as I have described it? And why would that make any difference to your response?

djbt wrote:
The danger of a hypothetical like this is that it can be misapplied, and related to the real world. However, I have answered it because it brings up an important point, namely the question of the moral status of action and inaction.

While I think that a legal distinction between action and inaction is very important, I don't think a moral distinction is. There is no inherent difference between the consequences of killing someone and not saving someone. To not save someone is to act in a way that results in that person's death. To kill someone is to act in a way that results in that person's death. So killing and not saving are morally equivalent actions (for the purposes of deciding what one should do, not for judging a person for what they have done).

Killing and not saving are "morally equivalent?" Is that because they are equivalent in a utilitarian calculus?

Consider this hypothetical: A sees B standing on a train track with a train fast approaching. It would take some effort on A's part to save B from the oncoming train, and he would expose himself to a moderate amount of risk. Weighing his options, A decides that the effort and risk are not worth it, even though, by not saving B, he will (under your brand of utilitarianism, djbt) face the same amount of moral censure as he would if he killed B outright. Given that fact, A figures he might as well murder B (something that he has always secretly wanted to do), so he takes out a gun and shoots B to death before the train runs over his corpse.

Question: in this situation, should it be more blameworthy for A to have murdered B than for him simply to have done nothing?
 
joefromchicago
 
Reply Tue 6 Sep, 2005 11:50 am
Thomas wrote:
In this case, the behavior of the doctors does not maximize utility over the relevant timeframe, so your example does not test the moral implications of utilitarianism.

Of course it does. You just don't see it because you're begging the question: the only reason that utility is relevant at all to the doctor's decision is because it is the basis for morality in a utilitarian system.

Thomas wrote:
Thanks -- I take that as a compliment of sorts. As an aside, I find the distinction mostly academic. Act utilitarianism becomes rule utilitarianism once you account for the facts that the expectation of future happiness makes people happy at present, and that people like to develop habits and dislike thinking too hard too often.

Depends on how much one values future happiness. But then that needs to be subject to a utilitarian calculus.

Thomas wrote:
He ought to maximize utility because I assumed he ought to. Note that the claim I had made, that you quoted and "thoroughly disagree"d with, was carefully phrased as a hypothetical: "My point is that once you have accepted that you ought to make as many people as possible as happy as possible[...]" (emphasis added).

I'll grant that you assumed that one should maximize utility. But then that's really the problem, isn't it?

Thomas wrote:
He could be a homicidal maniac -- but we know for a fact that almost the overwhelming majority of doctors is indeed a homicidal maniacs.

Really? I was unaware of that fact.
 
Thomas
 
Reply Tue 6 Sep, 2005 11:53 am
joefromchicago wrote:
Consider this hypothetical: A sees B standing on a train track with a train fast approaching. It would take some effort on A's part to save B from the oncoming train, and he would expose himself to a moderate amount of risk. Weighing his options, A decides that the effort and risk are not worth it, even though, by not saving B, he will (under your brand of utilitarianism, djbt) face the same amount of moral censure as he would if he killed B outright. Given that fact, A figures he might as well murder B (something that he has always secretly wanted to do), so he takes out a gun and shoots B to death before the train runs over his corpse.

Question: in this situation, should it be more blameworthy for A to have murdered B than for him simply to have done nothing?

Under my version of utilitarianism, yes. People get unhappy when they expect that their corpses will be desecrated, so there is a utilitarian gain if A's shot is discouraged by legal or moral censure. In the limit where A's bullet hits B's head the very moment the train hits B, the censure of A's action should equal everybody else's statistical expectation of having their corpses flagellated in this manner, plus the fact that A has wasted a perfectly good bullet.
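
Put as an expected-value sketch, again with invented numbers chosen only to show the shape of the claim:

Code:
# Expected disutility of tolerating A's shot: each person's small chance of having
# their own corpse treated this way, times how much that prospect bothers them,
# summed over everyone, plus the (trivial) cost of the wasted bullet.
population = 1000000
p_own_corpse_shot = 1e-6        # chance any given person's corpse ends up like B's
disutility_of_prospect = 0.01   # how much the mere expectation bothers each person
cost_of_wasted_bullet = 0.5

expected_disutility = population * p_own_corpse_shot * disutility_of_prospect + cost_of_wasted_bullet
print(expected_disutility)      # the censure attached to A's act should scale with this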
 
joefromchicago
 
Reply Tue 6 Sep, 2005 11:56 am
djbt wrote:
'One ought to maximize utility' would be the one ought Thomas referred to, I believe.

As to why one ought to maximise utility, I have no answer, except that it follows from:
(1) Pleasure is good.
(2) Pain is bad.
(3) All things that experience have the same claim to happiness. (In other words, everything that has interests should have their interests equally considered).
... but, of course, one could disagree with any of these premises.

Or I could disagree with your conclusion. I don't see how it necessarily follows that, if pleasure is good, we then ought to maximize it for society as a whole.

djbt wrote:
But even if one agreed with these premises, the problem doesn't go away. If you agree that an ought statement cannot be derived from an is statement, then it seems to me that it is impossible to conceive of a moral system that does not assume at least one unfounded ought statement - one ought to do good.

"One ought to do good" is implicit in every moral system, since every moral system is concerned with "good" and "bad" actions.
 
djbt
 
Reply Tue 6 Sep, 2005 11:59 am
joefromchicago wrote:
djbt wrote:
If these are fair assumptions, then yes, you are quite right: five people dying is worse than one person dying, and therefore the doctor should kill the one man to save the five.

Note, once again, this answer bears no relation whatsoever to what my answer would be in a similar, real-life situation, because in a real-life situation there tends to be less certainty, and more than just two options.

How do you know that? How do you know that the next time you are faced with this situation it won't be exactly as I have described it? And why would that make any difference to your response?

Were I faced with this situation, I would say that I should act as I described. I just don't think that such a situation is possible.


joefromchicago wrote:
djbt wrote:
The danger of a hypothetical like this is that it can be misapplied, and related to the real world. However, I have answered it because it brings up an important point, namely the question of the moral status of action and inaction.

While I think that a legal distinction between action and inaction is very important, I don't think a moral distinction is. There is no inherent difference between the consequences of killing someone and not saving someone. To not save someone is to act in a way that results in that person's death. To kill someone is to act in a way that results in that person's death. So killing and not saving are morally equivalent actions (for the purposes of deciding what one should do, not for judging a person for what they have done).

Killing and not saving are "morally equivalent?" Is that because they are equivalent in a utilitarian calculus?

Consider this hypothetical: A sees B standing on a train track with a train fast approaching. It would take some effort on A's part to save B from the oncoming train, and he would expose himself to a moderate amount of risk. Weighing his options, A decides that the effort and risk are not worth it, even though, by not saving B, he will (under your brand of utilitarianism, djbt) face the same amount of moral censure as he would if he killed B outright. Given that fact, A figures he might as well murder B (something that he has always secretly wanted to do), so he takes out a gun and shoots B to death before the train runs over his corpse.

Question: in this situation, should it be more blameworthy for A to have murdered B than for him simply to have done nothing?

As I thought I had made clear, I do not consider the point of a moral system to be working out how to apportion blame, or 'moral censure'. Apportioning blame or censure is the job of law, not morality. Since I would base any legal system on my utilitarian moral system, I would make laws that tend to maximise happiness, and assess 'blameworthiness' accordingly. This is a separate, lengthy question, which I'll go into if you wish, but perhaps on another thread, like 'How can utilitarianism be used to decide on laws?', although it would probably turn into a historical debate rather than a philosophical one.
 
 
