
Utilitarianism

 
 
joefromchicago
 
Reply Fri 9 Sep, 2005 11:26 am
Thomas wrote:
In principle, yes -- if his displeasure from helping is so great, or the happiness he could bring to society so minor, that his displeasure from helping is always greater than the pleasure of society from being helped by him. I don't believe such people are common enough to be a problem in practice though.

But what about the long-term consequences? If people knew that they could be relieved from onerous obligations by displaying a certain amount of displeasure, then that would simply encourage more people to be displeased (game theory would predict just such a result -- it's a classic free rider problem). Ultimately, a large number of people would exhibit this type of displeasure (disingenuously, to be sure), which would, in effect, create a privileged group of hedonists in the midst of a utilitarian society.
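
To make the free-rider logic concrete, here is a toy calculation -- the payoff numbers and the little script are purely my own illustration, not anything utilitarian doctrine prescribes:

Code:
# Toy payoffs for one agent deciding whether to feign displeasure when
# society excuses anyone who displays enough of it. All numbers are invented.

COST_OF_HELPING = 4        # effort the agent spends if obliged to help
COST_OF_FAKING = 1         # small effort of acting displeased
BENEFIT_FROM_OTHERS = 10   # value of the help the agent receives from society

def payoff(feign_displeasure: bool, others_still_help: bool) -> int:
    """Agent's personal payoff under the exemption rule."""
    benefit = BENEFIT_FROM_OTHERS if others_still_help else 0
    cost = COST_OF_FAKING if feign_displeasure else COST_OF_HELPING
    return benefit - cost

for others in (True, False):
    honest = payoff(False, others)
    faker = payoff(True, others)
    print(f"others_still_help={others}: honest={honest}, faker={faker}")

# Whatever the rest of society does, the faker comes out ahead of the honest
# helper, so feigning displeasure is a dominant strategy for any individual --
# the classic free-rider structure.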

Want to re-think your response?
 
joefromchicago
 
Reply Fri 9 Sep, 2005 11:37 am
djbt wrote:
Well, I'm sure you can see what I'm trying (trying evidently being the operative word...) to say: 'Killing is not inherently any worse or any better than not saving'.

You also say the opposite.

djbt wrote:
So what type of premises could this 'ought-conclusion' be built upon? Ought-premises? Too question-begging, apparently. Is-premises? But then you would be deriving an 'ought' from an 'is', which, as far as I understand it, is logically impossible. How would you resolve this?

Deriving an "ought" from an "is" is not, strictly speaking, logically impossible. Even Hume, who first came up with the "is-ought" problem, wasn't clear on that. And it certainly is not begging the question.

The problem isn't that an "ought" cannot be derived from an "is," but that an "ought" cannot necessarily be derived from an "is." In other words, just because something is doesn't mean that it necessarily ought to be.

djbt wrote:
Well, whatta you know? I guess I'm a watered-down Kantian as well as a utilitarian.

Well, you're certainly not a Kantian, and I increasingly doubt that you're much of a utilitarian either.

djbt wrote:
However, I require enlightenment. How have you come to the conclusion that this is watered-down Kantianism?

It's a mangled version of the categorical imperative.
 
joefromchicago
 
Reply Fri 9 Sep, 2005 11:40 am
Thomas wrote:
My answer to your question is close to "yes". If he committed suicide, that would be ethically preferable to his not doing so.

I'm not sure I understand the notion of "ethically preferable." If he does not commit suicide, would his decision be morally blameworthy? In other words, if he did not commit suicide would we say that he did a bad thing?

Thomas wrote:
Say somebody has an illness that sends him into an extremely painful epileptic seizure every time he holds the door open for someone, helps an old lady cross a street, pays a single dollar in taxes, or does anything else for society. I have no problem saying that a society that relieves such a person from such duties is ethically preferable to a society that does not. Would you rather put him through those seizures?

I repeat: what about the long-term consequences of permitting such an exception?
 
Thomas
 
Reply Fri 9 Sep, 2005 12:10 pm
joefromchicago wrote:
But what about the long-term consequences? If people knew that they could be relieved from onerous obligations by displaying a certain amount of displeasure, then that would simply encourage more people to be displeased (game theory would predict just such a result -- it's a classic free rider problem).


I just rethought my response as you suggested, and it didn't change my conclusion. First of all, opportunistically acting like a selfish sociopath is not what registers as "pain from helping" in the utilitarian calculus. If somebody could help other people without much displeasure to himself, but opportunistically pretends he doesn't because he thinks he'll get away with it, the utilitarian calculus finds, consistently with our moral intuitions, that this is unethical. Utilitarianism only excuses your hedonists if they are really, honestly feeling extreme pain from helping, like the epileptic seizure guy I mentioned. I agree that society, by relieving these people of their duty to cooperate, would give them an incentive to free-ride. But the requirement that we'd have to be talking of actual, honest pain -- not just opportunistic faking of it -- makes it hard to respond to that incentive. So unlike in the example where the doctor slaughters the healthy patient, considering the long run does not change my assessment, except perhaps in evolutionary time.
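
Here is a minimal sketch of the calculus I have in mind; the utility numbers are invented for illustration only:

Code:
# Minimal sketch of the point: the calculus sums *actual* pleasure and pain,
# so a merely faked displeasure contributes nothing. All numbers are invented.

def net_utility(benefit_to_others: float, actual_pain_of_helper: float) -> float:
    """Aggregate utility of the act of helping."""
    return benefit_to_others - actual_pain_of_helper

# The seizure sufferer: helping costs him enormous real pain.
print(net_utility(benefit_to_others=5, actual_pain_of_helper=100))  # -95 -> excused

# The opportunist: he merely pretends to suffer; his actual pain is trivial.
print(net_utility(benefit_to_others=5, actual_pain_of_helper=1))    #   4 -> not excused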

joefromchicago wrote:
Ultimately, a large number of people would exhibit this type of displeasure (disingenuously, to be sure), which would, in effect, create a privileged group of hedonists in the midst of a utilitarian society.

Why "ultimately"? Even assuming that people could react to this particular incentive to free-ride: Why are you assuming that the process can't go on forever, until everyone feels a previously abnormal displeasure from helping? At this point, the whole population would feel an equal level of displeasure again. The previously abnormal level of displeasure would no longer be abnormal. And you're back in a regime where utilitarians don't have to worry about utility monsters anymore. And that's another reason why the long run does not change my conclusion.
 
Thomas
 
Reply Fri 9 Sep, 2005 12:15 pm
joefromchicago wrote:
Thomas wrote:
My answer to your question is close to "yes". If he committed suicide, that would be ethically preferable to his not doing so.

I'm not sure I understand the notion of "ethically preferable." If he does not commit suicide, would his decision be morally blameworthy? In other words, if he did not commit suicide would we say that he did a bad thing?

For reasons I can't really put my finger on, I'm uncomfortable talking about "good" and "bad" in absolute terms. My term "ethically preferable" was an attempt to sound profound without really saying anything about good and bad. Let me try another way to get around it, one that may or may not come closer to answering your question: If he does commit suicide, he is acting better than if he doesn't. If he doesn't, he is acting worse than if he does.
 
Ray
 
Reply Sat 10 Sep, 2005 01:58 pm
It's true that I still don't agree with you, Thomas. It is complicated enough to try to quantify people's wants. There is also the problem of a person wanting to do something but not particularly liking it. An addict might dislike what they're doing but still have the craving or desire to use the substance they are addicted to. It is thus not sufficient to decide morality in terms of willingness to seek something.

As for the problem of the masochist, I do find it innately wrong to associate pleasure with something that is obviously painful. It is harmful to the person and potentially harmful to children who may pick up such behaviours from the person. What if a person were to harm himself to the point of exhaustion or death? How would you calculate with your utilitarian calculus then?

This is actually my last post for a while, since I have college, and ironically I'm reading papers on utilitarianism for next week's philosophy class. Confused
 
djbt
 
Reply Sun 11 Sep, 2005 10:44 am
joefromchicago wrote:
djbt wrote:
So what type of premises could this 'ought-conclusion' be built upon? Ought-premises? Too question-begging, apparently. Is-premises? But then you would be deriving an 'ought' from an 'is', which, as far as I understand it, is logically impossible. How would you resolve this?

Deriving an "ought" from an "is" is not, strictly speaking, logically impossible. Even Hume, who first came up with the "is-ought" problem, wasn't clear on that. And it certainly is not begging the question.

Well, it was you who said an ought premise would beg the question, but never mind. Please, describe, or better, show an example of, how an ought statement can be derived from an is statement.
 
djbt
 
Reply Sun 11 Sep, 2005 10:46 am
Thomas wrote:
joefromchicago wrote:
Thomas wrote:
My answer to your question is close to "yes". If he committed suicide, that would be ethically preferable to his not doing so.

I'm not sure I understand the notion of "ethically preferable." If he does not commit suicide, would his decision be morally blameworthy? In other words, if he did not commit suicide would we say that he did a bad thing?

For reasons I can't really put my finger on, I'm uncomfortable talking about "good" and "bad" in absolute terms. My term "ethically preferable" was an attempt to sound profound without really saying anything about good and bad. Let me try another way to get around it, one that may or may not come closer to answering your question: If he does commit suicide, he is acting better than if he doesn't. If he doesn't, he is acting worse than if he does.

Surely it would be best, by utilitarian calculus, if he pretended to commit suicide and went to live someplace else, where he would be happy (and free from suicidophile freaks) for the rest of his days.
 
djbt
 
Reply Sun 11 Sep, 2005 10:52 am
joefromchicago wrote:
But what about the long-term consequences? If people knew that they could be relieved from onerous obligations by displaying a certain amount of displeasure, then that would simply encourage more people to be displeased (game theory would predict just such a result -- it's a classic free rider problem).

Are we discussing whether utilitarianism is a good moral philosophy for a person to adopt, or a good one to attempt to force on people? Surely no moral system is immune from people cheating the rules.
 
joefromchicago
 
Reply Mon 12 Sep, 2005 06:16 am
Thomas wrote:
I just rethought my response as you suggested, and it didn't change my conclusion. First of all, opportunistically acting like a selfish sociopath is not what registers as "pain from helping" in the utilitarian calculus.

No doubt, and I'm sure that the opportunistic hedonist would be regarded as morally blameworthy under utilitarian ethics. But it seems you've set up a system of rewards that encourages opportunistic hedonism.

Thomas wrote:
If somebody could help other people without much displeasure to himself, but opportunistically pretends he doesn't because he thinks he'll get away with it, the utilitarian calculus finds, consistently with our moral intuitions, that this is unethical. Utilitarianism only excuses your hedonists if they are really, honestly feeling extreme pain from helping, like the epileptic seizure guy I mentioned. I agree that society, by relieving these people of their duty to cooperate, would give them an incentive to free-ride. But the requirement that we'd have to be talking of actual, honest pain -- not just opportunistic faking of it -- makes it hard to respond to that incentive. So unlike in the example where the doctor slaughters the healthy patient, considering the long run does not change my assessment, except perhaps in evolutionary time.

But remember that your concern about the doctor was the example that it would set for others. It wasn't that there would be a huge number of healthy patients murdered for their organs, but rather that people in society would alter their behavior in an inutile way because of the example. In the same way, if people knew that they could get out of onerous moral obligations through the simple expedient of cynically adopting hedonism, wouldn't that discourage genuine utilitarians from carrying out their duties?

Thomas wrote:
Why "ultimately"?

Well, because I don't think it would take place immediately.

Thomas wrote:
Even assuming that people could react to this particular incentive to free-ride: Why are you assuming that the process can't go on forever, until everyone feels a previously abnormal displeasure from helping? At this point, the whole population would feel an equal level of displeasure again. The previously abnormal level of displeasure would no longer be abnormal. And you're back in a regime where utilitarians don't have to worry about utility monsters anymore.

No, ultimately there are no more utilitarians -- everyone is a hedonist.

Thomas wrote:
For reasons I can't really put my finger on, I'm uncomfortable talking about "good" and "bad" in absolute terms. My term "ethically preferable" was an attempt to sound profound without really saying anything about good and bad. Let me try another way to get around it, one that may or may not come closer to answering your question: If he does commit suicide, he is acting better than if he doesn't. If he doesn't, he is acting worse than if he does.

Well, there's nothing magical about "good" and "bad." Utilitarians believe that utility is good and disutility is bad.
 
joefromchicago
 
Reply Mon 12 Sep, 2005 06:48 am
djbt wrote:
Well, it was you who said an ought premise would beg the question, but never mind.

I'll go over this one more time: if you start with an "ought" premise, you would be begging the question. If you go from "is" to "ought," you would not be begging the question. For an explanation of what it means to "beg the question," check out this site.

djbt wrote:
Please, describe, or better, show an example of, how an ought statement can be derived from an is statement.

"All citizen-voters in a democracy are equal. Therefore, all citizen-voters in a democracy should get an equal vote."

Now, to be sure, I've skipped over a lot of preliminary and intermediate steps, but I think you can understand the basic idea. Hume implied that one cannot infer a normative prescription from a descriptive statement, but an ethical theory that dispensed with all descriptive statements would be completely divorced from reality. Hume, after all, rejected the notion that morality was based on any kind of universal rules (he regarded morality as arising from a mix of personal psychology and social convention). In other words, Hume could say that an "ought" cannot be derived from an "is" because he didn't believe that an "ought" could be derived from anything. A utilitarian, however, must believe that an "ought" can be derived from something, because utilitarianism is a theory of morality.

djbt wrote:
Are we discussing whether utilitarianism is a good moral philosophy for a person to adopt, or a good one to attempt to force on people? Surely no moral system is immune from people cheating the rules.

Quite true, but if a moral theory erects a system of incentives that rewards selfish behavior on the part of some people to the detriment of society as a whole, then one is justified in asking whether that is moral.
 
djbt
 
Reply Mon 12 Sep, 2005 07:31 am
joefromchicago wrote:
djbt wrote:
Please, describe, or better, show an example of, how an ought statement can be derived from an is statement.

"All citizen-voters in a democracy are equal. Therefore, all citizen-voters in a democracy should get an equal vote."

Now, to be sure, I've skipped over a lot of preliminary and intermediate steps, but I think you can understand the basic idea.

Well, you've skipped quite a lot else, too. Like a definition of the word 'equal' in this context, and an explanation of how exactly you make the logical leap from the first statement to the second.

If I presume by 'equal' you mean something like 'equally possessing the same relevant attribute', then I do not see how the second statement follows from the first, unless you add a further ought statement - 'we ought to treat things that are the same in the same way'.

If by 'equal' you mean 'having an equal claim to a certain right', then you must add 'we ought to respect rights'.

If by equal you mean 'deserving to be treated equally', then the first statement would appear to be an ought statement in disguise.

I am still unconvinced you can derive an ought from an is.

joefromchicago wrote:
A utilitarian, however, must believe that an "ought" can be derived from something, because utilitarianism is a theory of morality.

Well, I would say that all systems of morality have to be based on at least one unsupported ought premise, even if it is just 'one ought to be good', or 'one ought to care about the well-being of those other than oneself'.

joefromchicago wrote:
djbt wrote:
Are we discussing whether utilitarianism is a good moral philosophy for a person to adopt, or a good one to attempt to force on people? Surely no moral system is immune from people cheating the rules.

Quite true, but if a moral theory erects a system of incentives that rewards selfish behavior on the part of some people to the detriment of society as a whole, then one is justified in asking whether that is moral.

If something is 'to the detriment of society as a whole', then utilitarian calculus would not support it. Your argument seems to be: 'A utilitarian government would follow utilitarian calculus and do x. The consequences of this would, in fact, be bad by utilitarian standards. Therefore there is a problem with utilitarian calculus'.

Clearly this is a ludicrous argument. All that is in error is your initial assumptions about what utilitarian calculus would support. Clearly, if you are capable of anticipating exploitation of a system, any BICUM would also anticipate this, and adjust any policy decisions accordingly.
 
Thomas
 
Reply Mon 12 Sep, 2005 10:32 am
joefromchicago wrote:
But it seems you've set up a system of rewards that encourages opportunistic hedonism.

I haven't set up a system. I say that people who truly feel pain from cooperating get a free ride, but opportunists don't. Stated the other way round, by responding to encouragement for opportunistic hedonism, you prove to the utilitarian calculus that you don't deserve a free ride. You could argue that opportunists are hard to separate from true pain-feelers in practice. But as a matter of theory, which is what we're talking about at this point, there is no incentive anybody can respond to.

joefromchicago wrote:
But remember that your concern about the doctor was the example that it would set for others. It wasn't that there would be a huge number of healthy patients murdered for their organs, but rather that people in society would alter their behavior in an inutile way because of the example.

The potential patients have a choice of not going to the hospital. The hedonist has a choice of what philosophical position he takes. But the person who is thrown into an epileptic seizure when he helps old ladies across the street does not have a choice, after he has helped the old lady. This latter case is the only one that would be excused. I agree that in practice it is difficult to distinguish the honest philanthropophobe from the opportunistic hedonist. But that's an is-problem, not an ought-problem. Or in other words, it is an epistemological problem, not a moral one.

But even to the extent that the epistemological problem does cause a moral problem, there is a utilitarian fix for it: Make the utilitarian calculus account for the disutility of determining who truly feels pain from helping, and who is faking it. As a result, you end up with a rule that is non-trivial but reasonably enforceable, with some but not all true philanthropophobes being excused, and some but not all cynics slipping through. Within the utilitarian framework, this is an engineering task, similar to the legal system's optimization of civil and criminal procedure. I don't see how either poses a hard philosophical problem.
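
To sketch the kind of engineering trade-off I mean -- with every number and the toy cost model being pure assumption on my part:

Code:
# Pick how strictly to verify claims of "pain from helping", trading off the
# cost of screening against the cost of wrongly excusing fakers or wrongly
# forcing genuine sufferers to help. Every quantity here is invented.

SCREENING_COST_PER_UNIT = 2      # disutility of administering the test
COST_OF_EXCUSED_FAKER = 10       # lost help when a cynic slips through
COST_OF_UNEXCUSED_SUFFERER = 30  # pain inflicted on a genuine philanthropophobe

def expected_disutility(strictness: float) -> float:
    """Toy model: stricter tests catch more fakers but also reject more
    genuine sufferers, and cost more to run."""
    fakers_excused = 1.0 - strictness        # fraction of cynics slipping through
    sufferers_rejected = strictness ** 2     # fraction of genuine cases rejected
    return (SCREENING_COST_PER_UNIT * strictness
            + COST_OF_EXCUSED_FAKER * fakers_excused
            + COST_OF_UNEXCUSED_SUFFERER * sufferers_rejected)

# Search a grid of strictness levels for the least-bad rule.
best = min((s / 100 for s in range(101)), key=expected_disutility)
print(f"least-bad strictness: {best:.2f}")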

joefromchicago wrote:
Ultimately there are no more utilitarians -- everyone is a hedonist.

You haven't convinced me that is true. But if it were, that would present no problem for me. As a utilitarian, I have no moral problem with a society consisting of six billion happy hedonists. And note that while hedonists won't help for the sake of helping, they will happily cooperate with one another if they gain more pleasure from the other side of their bargains than they are sacrificing on theirs.
 
djbt
 
Reply Mon 12 Sep, 2005 12:42 pm
Thomas wrote:
djbt wrote:
So, are you saying there is an objective morality that is discoverable by one's conscience,

Yes.

djbt wrote:
If the former, how will we know which of our consciences isn't working properly?

In much the same way as you find out that someone is red-green blind, or that somebody is incapable of speaking grammatically correct English. Our language instinct is perhaps even more comparable to our justice instinct than our physical senses are.

Well, we learn language, as it is a human construct - am I seeing the wrong part of your analogy?

And my question wasn't general. I meant, with reference to animals, how are you and I to know which of our consciences isn't working properly? Which of us is red-green blind?
 
Thomas
 
Reply Mon 12 Sep, 2005 12:50 pm
djbt wrote:
Well, we learn language, as it is a human construct - am I seeing the wrong part of your analogy?

Yes, but as Noam Chomsky has shown, there is a deep structure to grammatically correct sentences that is the same for all human languages. Without having done any research to support it, I believe that the same could be shown for all systems of ethics and law.

djbt wrote:
And my question wasn't general. I meant, with reference to animals, how are you and I to know which of our consciences isn't working properly? Which of us is red-green blind?

Gotcha. I have no good answer to that.

(My own private guess is that you're just making up a cute philosophical position. If you actually had to choose between either saving one child or two cats from drowning, I'm pretty sure your conscience would tell you to save the child. But I have no way of proving that.)
 
djbt
 
Reply Mon 12 Sep, 2005 01:02 pm
Thomas wrote:
(My own private guess is that you're just making up a cute philosophical position. If you actually had to choose between either saving one child or two cats from drowning, I'm pretty sure your conscience would tell you to save the child. But I have no way of proving that.)


I probably would save the child. Less chance of vengeance from the kittens' mother. However, judging someone's conscience by how they react under pressure is a bit like judging someone's grammar under the influence of vodka.

Not sure what you mean by 'cute', do you mean insincere? If you do, you are mistaken.
 
Thomas
 
Reply Mon 12 Sep, 2005 01:21 pm
djbt wrote:
However, judging someone's conscience by how they react under pressure is a bit like judging someone's grammar under the influence of vodka.

I disagree. After all, it's an influence where one's conscience actually matters.

djbt wrote:
Not sure what you mean by 'cute', do you mean insincere? If you do, you are mistaken.

What I had in mind was an exaggeration of what you really think in the direction of what makes for a spectacular argument. Nothing wrong with that -- I like to indulge in it myself.
 
djbt
 
Reply Mon 12 Sep, 2005 01:32 pm
Thomas wrote:
djbt wrote:
Not sure what you mean by 'cute', do you mean insincere? If you do, you are mistaken.

What I had in mind was an exaggeration of what you really think in the direction of what makes for a spectacular argument. Nothing wrong with that -- I like to indulge in it myself.

No, I am not doing this (although I must say that it's a refreshing change for someone who advocates the interests of animals to be accused of too much cynicism and not enough genuine feeling - rather than the other way around!)
 
joefromchicago
 
Reply Tue 13 Sep, 2005 08:14 am
djbt wrote:
Well, you've skipped quite a lot else, too. Like a definition of the word 'equal' in this context, and an explanation of how exactly you make the logical leap from the first statement to the second.

You asked for an example of an "ought" statement derived from an "is" statement. I provided that. You didn't ask for the method by which the former was derived from the latter, and this thread is, in any event, not the place for that discussion. If you still question whether an "ought" can be derived from an "is," then explain the reason for your doubt rather than get sidetracked into irrelevant tangents.

djbt wrote:
I am still unconvinced you can derive an ought from an is.

No doubt you heard someone say that once and now it has stuck in your mind, but you offer no evidence that suggests that you understand the proposition. For Hume, it was easy to contend that an "ought" cannot be derived from an "is," because Hume questioned all "ought" statements. But you describe yourself as a utilitarian, and, as I mentioned previously, utilitarianism is a theory of morality; therefore, utilitarians must believe that there are "ought" statements. Your task, then, is to explain why you adhere to Humean skepticism regarding the basis of morality yet still believe in it.

djbt wrote:
Well, I would say that all systems of morality have to be based on at least one unsupported ought premise, even if it is just 'one ought to be good', or 'one ought to care about the well-being of those other than oneself'.

Then you would be begging the question. Didn't you read the link that I provided?

djbt wrote:
If something is 'to the detriment of society as a whole', then utilitarian calculus would not support it.

Then the selfish hedonists would not be allowed to avoid their obligations to maximize utility for others?

djbt wrote:
Your argument seems to be: 'A utilitarian government would follow utilitarian calculus and do x. The consequences of this would, in fact, be bad by utilitarian standards. Therefore there is a problem with utilitarian calculus'.

Clearly this is a ludicrous argument. All that is in error is your initial assumptions about what utilitarian calculus would support. Clearly, if you are capable of anticipating exploitation of a system, any BICUM would also anticipate this, and adjust any policy decisions accordingly.

That doesn't answer my question. I asked "if a moral theory erects a system of incentives that rewards selfish behavior on the part of some people to the detriment of society as a whole, then one is justified in asking whether that is moral." And if your answer is that utilitarians would not be put in that position because they would always (under the benevolent guidance of their big computer) enact rules that maximized utility, then the question is: is it moral to enact rules that punish selfish hedonists who experience great disutility when called upon to maximize the utility of others?
 
joefromchicago
 
Reply Tue 13 Sep, 2005 08:21 am
Thomas wrote:
I agree that in practice it is difficult to distinguish the honest philanthropophobe from the opportunistic hedonist. But that's an is-problem, not an ought-problem. Or in other words, it is an epistemological problem, not a moral one.

I tend to agree, and I'm afraid we're in danger of veering off into a discussion of human nature rather than strictly one of utilitarianism. That's something better left to psychologists and sociologists. I will simply repeat what I've said in the past: you and I have vastly different views of human nature.

Thomas wrote:
You haven't convinced me that is true. But if it were, that would present no problem for me. As a utilitarian, I have no moral problem with a society consisting of six billion happy hedonists. And note that while hedonists won't help for the sake of helping, they will happily cooperate with one another if they gain more pleasure from the other side of their bargains than they are sacrificing on theirs.

Well, perhaps as a libertarian you have no problem with a world of hedonists. But as a utilitarian you should have a problem with a society consisting of six billion happy hedonists, because such a society would not necessarily be a utility maximizing society.
 
 
