Happiness for the greatest number of people ... but how are we going to define happiness? In utilitarianism we have no prior philosophy that allows us to give such a definition. And the concept of happiness is not univocal.
And why must "the greatest number of people" prevail over individual happiness? Happiness for the majority frequently implies suffering for the minority. What is the criterion?
Is life worth more than happiness?
To what degree can happiness and pain be measured?
What about human dignity?
djbt,
I see you define pleasure and pain in terms of physical sensations.
I think this is a drastically limited account of human behavior, individual and social.
Consider this example: Socrates died, not because he felt pleasure in drinking the hemlock, nor because he felt pleasure in dying. In fact, he took pleasure in living as he had always lived.
He died because he chose not to deny his own beliefs, and because he chose not to escape from the prison and go into exile. He made a choice based on values.
In fact, I don't believe that most human choices rest on that sensation/pleasure basis.
A man who dies trying to save a stranger from the flames does not act in order to receive a pleasant sensation. He does so out of a sense of duty, or of solidarity, and in either case the morality involved is not utilitarian. To be a utilitarian, that man would have to weigh the value of the victim's life against the value of his own life - and in the meantime, while making that calculation, the other man would burn.
I guess that much of the ill-feeling towards utilitarianism exists for the same reasons other people like it. It is at its core a mechanical construction that can work entirely independently of the moral sentiments that we want justice to satisfy.
Utilitarianism is an amoral theory of morality, and that makes people uncomfortable about it.
(1) A utilitarian (if sufficiently quick-thinking) would not assess the value of his own life against that of the other man, because (a) he is not in a position to exchange their positions, and (b) all lives have the same inherent value in utilitarian ethics (he could think about whose death would cause the most suffering, though it is highly unlikely he would have any useful information with which to make such an assessment).
Fair enough, Joe. My point is that once you have accepted that you ought to make as many people as possible as happy as possible, everything else reduces to is-questions. There are barely any ought-questions left in pursuing the best state of affairs, as judged by the utilitarian calculus.
Thomas wrote:Fair enough, Joe. My point is that once you have accepted that you ought to make as many people as possible as happy as possible, everything else reduces to is-questions. There are barely any ought-questions left in pursuing the best state of affairs, as judged by the utilitarian calculus.
I thoroughly disagree. Morality is nothing but "ought" questions. But perhaps you'd like to tackle the hypothetical that I posed to djbt while avoiding any notions of "should" or "ought."
My answer is that yes, my moral intuitions agree with sacrificing the healthy man to the five unhealthy ones, as the utilitarian calculus predicts. But only if you explicitly state the hypothetical's implicit assumption, which is that the world will come to an end the next day. If you don't make that assumption, the story isn't over at the point you so cleverly chose to stop telling it. Here is how the story might continue:
At some point, the Big Computer for Utility Maximization (BICUM), which has replaced the government, recognizes a great drop in aggregate utility because nobody goes to hospitals anymore.
It calculates that aggregate utility could be much higher if it defined and enforced a rule that everybody has property rights in their own body, along with freedom of contract to part with any body parts. So it enacts that rule -- even in the situation you have described, Joe. Over time, enforcing it is occasionally a utility loss and usually a utility win (because people trust hospitals again), for a total utility win that is still enormous when averaged over all the occasions it concerns.
From what I can see, there is no "ought part" in the story except the minimal one I mentioned in my last post. And BICUM, acting with no "ought" opinions of its own, comes up with a set of rules that happens to feel just to my ethical intuition.
I believe that you, Joe, made a mistake in neglecting that individuals react to incentives.
As a result, the rule you propose (doctors may take the organs their patients need from wherever they find them) does not maximize utility except for the very short time before people have reacted to the incentives it sets.
It falls very far short of maximizing utility after people have reacted to those incentives. Note that if I am right, your mistake has been one about the "is" part, not about the "ought" part.
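Thomas's point here is, at bottom, an empirical comparison of two aggregate-utility sums: the organ-grabbing rule and the body-property rule, evaluated after people have adjusted their behaviour. A minimal sketch of that comparison, using entirely made-up numbers (the utility weights, transplant counts, and hospital-attendance rates below are illustrative assumptions, not figures from the discussion), might look like this:

```python
# Toy comparison of aggregate utility under two rules, in the spirit of
# Thomas's incentive argument. All numbers are illustrative assumptions.

def aggregate_utility(rule_allows_harvesting: bool,
                      years: int = 10,
                      population: int = 1_000_000) -> float:
    """Sum utility over several years under one of the two rules."""
    # Placeholder utility weights -- not measured quantities.
    utility_per_life_saved = 1.0
    utility_per_routine_treatment = 0.01

    if rule_allows_harvesting:
        # Short-run gain: some extra transplant patients are saved each year.
        extra_transplants_per_year = 100
        # Long-run loss: once people react to the incentive, most stop
        # going to hospitals, so routine care (and its utility) collapses.
        hospital_attendance_rate = 0.05
    else:
        extra_transplants_per_year = 0
        hospital_attendance_rate = 0.90  # people trust hospitals again

    total = 0.0
    for _ in range(years):
        total += extra_transplants_per_year * utility_per_life_saved
        total += (population * hospital_attendance_rate
                  * utility_per_routine_treatment)
    return total

print(aggregate_utility(rule_allows_harvesting=True))   # organ-grabbing rule
print(aggregate_utility(rule_allows_harvesting=False))  # body-property rule
```

Under these assumed numbers the body-property rule comes out far ahead once the behavioural response is counted in; the sketch only illustrates the shape of the "is" claim, not its actual magnitudes.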
djbt wrote:(1) A utilitarian (if sufficiently quick-thinking) would not assess the value of his own life against that of the other man, because (a) he is not in a position to exchange their positions, and (b) all lives have the same inherent value in utilitarian ethics (he could think about whose death would cause the most suffering, though it is highly unlikely he would have any useful information with which to make such an assessment).
I agree with (a), but you'll have to explain your position on (b). If "all lives have the same inherent value in utilitarian ethics," then that rule must be justified by a utilitarian calculus (if it isn't, then you would be attempting to build a utilitarian system on a non-utilitarian foundation). So what is the utilitarian rationale for the rule that all lives have the same inherent value?
Take this hypothetical: there are five patients at a hospital. One needs a heart transplant, two need kidney transplants, and two need lung transplants. Each will die in the next 48 hours unless they receive the needed transplant. A perfectly healthy man arrives at the hospital for a routine checkup. It is discovered, during the checkup, that he is a perfect match for all five of the transplant patients. If the doctors remove the healthy man's heart, lungs, and kidneys, he will, of course, perish, but the lives of five people will be saved.
Under your rule, djbt, all lives are equal, but it is equally certain that 5>1. So does the utilitarian doctor kill the healthy man to save the five sick patients?
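The "5>1" step is just a tally over the two available outcomes, counting every life equally and letting nothing else enter the sum. A toy version of that tally (the option labels and the assumption that only lives saved and lost matter are illustrative, not part of the hypothetical as stated) might be:

```python
# Minimal act-utilitarian tally for the transplant hypothetical.
# Assumes, as the hypothetical stipulates, that each life counts equally
# and that no further consequences enter the calculation.

options = {
    "operate on the healthy man": {"lives_saved": 5, "lives_lost": 1},
    "do nothing":                 {"lives_saved": 0, "lives_lost": 5},
}

def net_lives(outcome: dict) -> int:
    """Net lives under the assumption that only lives saved/lost count."""
    return outcome["lives_saved"] - outcome["lives_lost"]

best = max(options, key=lambda name: net_lives(options[name]))
print(best)  # -> "operate on the healthy man", i.e. 5 > 1
```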
Thomas wrote:My answer is that yes, my moral intuitions agree with sacrificing the healthy man to the five unhealthy ones, as the utilitarian calculus predicts. But only if you explicitly state the hypothetical's implicit assumption, which is that the world will come to an end the next day. If you don't make that assumption, the story isn't over at the point you so cleverly chose to stop telling it. Here is how the story might continue:
I never implied that the world would end the next day.
But go ahead and posit the long-term consequences of your "moral intuitions." That simply identifies you as a rule-utilitarian rather than an act-utilitarian -- a common enough choice. Indeed, I would have been surprised if you hadn't considered the long-term consequences of operating on the healthy man.
Thomas wrote:At some point, the Big Computer for Utility Maximization (BICUM), which has replaced the government, recognizes a great drop in aggregate utility because nobody goes to hospitals anymore.
A society will only replace government with a BICUM if it has decided that the BICUM itself is justified by a utilitarian calculus.
Actually, there are "oughts" scattered throughout your story. For instance, why ought the next doctor who is presented with my hypothetical refuse to operate on a healthy man? If it is because the BICUM said so, then the next question is: why ought the doctor follow the dictates of the BICUM? And if the answer to that question is "because following the dictates of the BICUM maximizes utility," the doctor may logically ask: why ought I maximize utility? And I'm not sure anyone has yet answered that question.
Thomas wrote:It falls very far short of maximizing utility after people have reacted to those incentives. Note that if I am right, your mistake has been one about the "is" part, not about the "ought" part.
If we concentrate on what the doctor will do, then we simply don't have enough information to form an opinion. The doctor, after all, could be a homicidal maniac. On the other hand, if we want to ask what the doctor should do, then we have enough information at hand to arrive at an answer.
Why ought I maximize utility? And I'm not sure anyone has yet answered that question.
But I won't dodge the question. I'll trust that you will take my answer as an answer to be applied only to this very specific, extremely improbable, contrived hypothetical situation, and not consider it a guideline for behaviour in reality (where your question would be an example of the false-dilemma fallacy).
To save time, can I list some extra hypothetical facts, without which I could escape the hard question you are posing? Let us assume:
The 5 patients will definitely die if they don't get the healthy man's organs - there is absolutely no other way they could live (i.e., the patients couldn't donate organs to each other, and no other organs will turn up).
The patients will definitely live if they get the organs.
No one will ever find out what has happened, no precedent will be set, and there will be absolutely no other consequences except the death of the healthy man, or the deaths of the 5 patients.
If these are fair assumptions, then yes, you are quite right: five people dying is worse than one person dying, therefore the doctor should kill the one man to save the five.
Note, once again, that this answer bears no relation whatsoever to what my answer would be in a similar, real-life situation, because in real-life situations there tends to be less certainty, and more than just two options.
The danger of a hypothetical like this is that it can be misapplied and related to the real world. However, I have answered it because it brings up an important point, namely the question of the moral status of action and inaction.
While I think that a legal distinction between action and inaction is very important, I don't think a moral distinction is. There is no inherent difference between the consequences of killing someone and not saving someone. To not save someone is to act in a way that results in that person's death. To kill someone is to act in a way that results in that person's death. So killing and not saving are morally equivalent actions (for the purposes of deciding what one should do, not for judging a person for what they have done).
In this case, the behavior of the doctors does not maximize utility over the relevant timeframe, so your example does not test the moral implications of utilitarianism.
Thanks -- I take that as a compliment of sorts. As an aside, I find the distinction mostly academic. Act utilitarianism becomes rule utilitarianism once you account for the facts that the expectation of future happiness makes people happy at present, and that people like to develop habits and dislike thinking too hard too often.
He ought to maximize utility because I assumed he ought to. Note that the claim I had made, that you quoted and "thoroughly disagree"d with, was carefully phrased as a hypothetical: "My point is that once you have accepted that you ought to make as many people as possible as happy as possible[...]" (emphasis added).
He could be a homicidal maniac -- but we know for a fact that the overwhelming majority of doctors are not homicidal maniacs.
'One ought to maximize utility' would be the one ought Thomas referred to, I believe.
As to why one ought to maximise utility, I have no answer, except that it follows from:
(1) Pleasure is good.
(2) Pain is bad.
(3) All things that can experience pleasure and pain have the same claim to happiness. (In other words, everything that has interests should have its interests equally considered.)
... but, of course, one could disagree with any of these premises.
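One rough way to formalise these three premises (a sketch of one possible reading, not djbt's own formulation) is as a single sum in which each being's pleasure counts positively, its pain negatively, and every being's interests get the same weight:

```python
# Hypothetical formalisation of premises (1)-(3): pleasure counts positively,
# pain negatively, and every being gets an equal weight of 1 in the total.

def total_utility(beings: list[dict]) -> float:
    """Sum of (pleasure - pain) over all beings, weighted equally."""
    return sum(b["pleasure"] - b["pain"] for b in beings)

# Illustrative example: swapping which being gets the pleasure leaves the
# total unchanged, which is one way of cashing out "equal consideration".
world_a = [{"pleasure": 3, "pain": 0}, {"pleasure": 0, "pain": 1}]
world_b = [{"pleasure": 0, "pain": 1}, {"pleasure": 3, "pain": 0}]
assert total_utility(world_a) == total_utility(world_b)
```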
But even if one agreed with these premises, the problem doesn't go away. If you agree that an ought statement cannot be derived from an is statement, then it seems to me that it is impossible to conceive of a moral system that does not assume at least one unfounded ought statement - that one ought to do good.
djbt wrote:If these are fair assumptions, then yes, you are quite right: five people dying is worse than one person dying, therefore the doctor should kill the one man to save the five.
Note, once again, that this answer bears no relation whatsoever to what my answer would be in a similar, real-life situation, because in real-life situations there tends to be less certainty, and more than just two options.
How do you know that? How do you know that the next time you are faced with this situation it won't be exactly as I have described it? And why would that make any difference to your response?
djbt wrote:The danger of a hypothetical like this is that it can be misapplied and related to the real world. However, I have answered it because it brings up an important point, namely the question of the moral status of action and inaction.
While I think that a legal distinction between action and inaction is very important, I don't think a moral distinction is. There is no inherent difference between the consequences of killing someone and not saving someone. To not save someone is to act in a way that results in that person's death. To kill someone is to act in a way that results in that person's death. So killing and not saving are morally equivalent actions (for the purposes of deciding what one should do, not for judging a person for what they have done).
Killing and not saving are "morally equivalent"? Is that because they are equivalent in a utilitarian calculus?
Consider this hypothetical: A sees B standing on a train track with a train fast approaching. It would take some effort on A's part to save B from the oncoming train, and he would expose himself to a moderate amount of risk. Weighing his options, A decides that the effort and risk are not worth it, even though, by not saving B, he will (under your brand of utilitarianism, djbt) face the same amount of moral censure as he would if he killed B outright. Given that fact, A figures he might as well murder B (something that he has always secretly wanted to do), so he takes out a gun and shoots B to death before the train runs over his corpse.
Question: in this situation, should it be more blameworthy for A to have murdered B than for him simply to have done nothing?