Destroy My Belief System, Please!

 
 
InfraBlue
 
  1  
Wed 29 Jan, 2014 05:49 pm
 
joefromchicago
 
  3  
Thu 30 Jan, 2014 07:33 am
@Thomas,
Thomas wrote:
Perhaps I technically don't know it --- as Frank would say, it's not a fact, just a guess about a fact. But it's a guess I'm fairly confident about.

Haven't you read Kahneman and Tversky yet? It's easy to believe your estimation of the future is more likely than not when it coincides with your inherent biases. I can just as easily imagine a scenario where people are encouraged to go to the hospital under the hypothetical I presented. For instance, people understand that there's a trade-off between safety and efficiency in a wide variety of contexts, such as speed limits, and they accept a certain level of risk - even the risk of death - in order to enjoy greater benefits. Suppose the government said "you'll be guaranteed free replacement organs if you'll agree that, occasionally, somebody will have to be sacrificed to provide those organs." Is it so far-fetched to think that people would accept that trade-off, in the same way that they now accept that speed limits of 75 mph mean that more people will die in accidents?

Thomas wrote:
Whatever maximizes the expectancy value of the present discounted value of all future happiness. If this involves costs and benefits in the far future, discount their present value at the long-term market interest rate. In principle, the field of economics has a fairly standard procedure for this kind of tradeoff.

You seem to think you can weigh happiness the way that you can weigh bananas. The problem is that someone else might think they're weighing apples.
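For concreteness, the "standard procedure" in the quoted passage can be sketched in a few lines. This is a minimal illustration of expected present discounted value with invented numbers; nothing here comes from the thread itself.

```python
# A minimal sketch of the "present discounted value" procedure quoted
# above: yearly happiness flows are discounted at a long-term interest
# rate, and possible futures are weighted by probability.
# All flows, probabilities, and the rate are invented for illustration.

def present_value(flows, rate):
    """Discount a list of yearly happiness flows back to today."""
    return sum(h / (1 + rate) ** t for t, h in enumerate(flows, start=1))

def expected_present_value(scenarios, rate):
    """Probability-weighted present value across possible futures."""
    return sum(p * present_value(flows, rate) for p, flows in scenarios)

scenarios = [
    (0.8, [10, 10, 10]),  # 80% chance: steady happiness of 10 per year
    (0.2, [30, 30, 30]),  # 20% chance: happiness of 30 per year
]
epv = expected_present_value(scenarios, rate=0.05)  # 5% long-term rate
```

Of course, this sketch takes the numbers as given; joefromchicago's objection is precisely about where such numbers would come from.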

Thomas wrote:
Perhaps. Do you know of any respectable ethic that doesn't have this problem?

A system of absolute morality doesn't have that problem in the same way. For instance, when Kant said that it is always immoral to lie, he didn't leave a lot of leeway for someone to argue that, in this particular case, lying should be OK. In contrast, a utilitarian can always argue that, in this case, I should be allowed to do what is normally not allowed. All it takes is a recalculation of the consequences.

Thomas wrote:
joefromchicago wrote:
You'd be immobilized, unable to speak or hear or do anything, but you'd be conscious and you'd be supremely happy. Would you take that drug?

Sure!

You may be surprised to learn that your position is not universal.
wandeljw
 
  2  
Thu 30 Jan, 2014 09:42 am
Why follow any system? Why not take each situation as it comes and try to make the best decision?
Thomas
 
  1  
Thu 30 Jan, 2014 10:05 am
@igm,
igm wrote:
It also means that you can't know your belief system is correct.

Of course not. By the time you know something, believing it becomes superfluous. I don't know that the Goldbach conjecture is true. I don't know it's okay to lie when lying is likely to prevent a murder. But I believe both those things. Sure, it would be nice to know if my beliefs are correct. But life is short, and I can't hold out for every question to be answered with unimpeachable evidence. The whole exercise of believing would be pointless if I could!
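The Goldbach conjecture mentioned above makes the point vividly: it is unproven in general, yet trivially checkable case by case. A short illustrative sketch (not part of the original post):

```python
# The Goldbach conjecture: every even number greater than 2 is the sum
# of two primes. Unproven in general, which is exactly the point, but
# easy to check for any particular case.

def is_prime(n):
    return n > 1 and all(n % d for d in range(2, int(n ** 0.5) + 1))

def goldbach_pair(n):
    """Return some prime pair (p, q) with p + q == n, or None."""
    for p in range(2, n // 2 + 1):
        if is_prime(p) and is_prime(n - p):
            return (p, n - p)
    return None

# Holds for every even number we bother to test, yet no amount of
# testing turns the belief into knowledge.
all_hold = all(goldbach_pair(n) is not None for n in range(4, 1000, 2))
```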

igm wrote:
Therefore your belief system is useless because you can't know if it is more or less detrimental than not having your belief system

Perhaps you want to reconsider what makes a belief system useful. As human beings, we are often forced to make decisions without conclusive evidence to base them on. Whenever that happens, we use beliefs to fill the gaps between the points on which we are sure. Although our beliefs are false sometimes (which diminishes their usefulness), they also save us the trouble of reasoning our way to every little fact of life from scratch, and sometimes enable us to make decisions when we otherwise couldn't. That is what makes belief systems useful. Your demands for certainty ignore the whole problem that belief systems are useful for solving.

igm wrote:
It is not worth maintaining; it should be destroyed.

I don't think this conclusion follows from the points you have demonstrated. But thank you for a vigorous effort!
 
Thomas
 
  2  
Thu 30 Jan, 2014 10:07 am
@wandeljw,
wandeljw wrote:
Why follow any system? Why not take each situation as it comes and try to make the best decision?

Define "best". Without using any kind of system, please. Smile
Thomas
 
  1  
Thu 30 Jan, 2014 10:57 am
@joefromchicago,
There's a lot of meat in your post. It will take several posts of my own to reply properly. Let's start with the bias problem.

joefromchicago wrote:
Haven't you read Kahneman and Tversky yet?

Incidentally, I'm about 50 pages into Kahneman's Thinking, Fast and Slow. But I have stalled before ever finishing it, and I haven't read any of their other books yet. You're right, this is a serious omission, which I must fix.

joefromchicago wrote:
It's easy to believe your estimation of the future is more likely than not when it coincides with your inherent biases.

True, but this isn't just true of normative beliefs; it's true of factual beliefs as well. As we write, for example, inherent biases are causing at least one third of Americans to hold absurd factual beliefs:
  • that the Earth is 6000 years old,
  • that global warming is a hoax,
  • that taxation in America is on the downward-sloping side of the Laffer curve,
  • and multiple others.
We're talking about tens of millions of people, not just the odd strawman here and there. So if the problem of inherent biases frustrates my normative tenet (#2), why aren't you arguing that it's frustrating my factual tenet (#1) as well?
wandeljw
 
  1  
Thu 30 Jan, 2014 11:02 am
@Thomas,
Thomas wrote:

wandeljw wrote:
Why follow any system? Why not take each situation as it comes and try to make the best decision?

Define "best". Without using any kind of system, please. Smile


...the "best" decision that we as individuals can make based on our own abilities
Thomas
 
  1  
Thu 30 Jan, 2014 11:32 am
@joefromchicago,
Next, about your wicked-healthcare scenario and your perpetual-Soma-high scenario.

joefromchicago wrote:
Suppose the government said "you'll be guaranteed free replacement organs if you'll agree that, occasionally, somebody will have to be sacrificed to provide those organs." Is it so far-fetched to think that people would accept that trade-off, in the same way that they now accept that speed limits of 75 mph mean that more people will die in accidents?

Yes, because empirical evidence establishes that people's preferences are risk-averse rather than wealth-maximizing. And when they have to choose between competing risks, they prefer accidents and omissions to conscious agents deliberately coming after them. Discussing the evidence would make my already-too-long response even longer, so I'll just state that it goes far beyond the point where cognitive bias can compromise it. It is one thing to assume that a 75-mph speed limit won't trigger people's risk aversion. It's quite another to assume that a carte blanche to be butchered for parts would not. But you do make a good point: if aliens on a distant planet hired me as their ethical advisor, and if I was satisfied of the fact that the aliens' psychologies are wealth-maximizing and not risk-averse, I would advise that their government pursue such a policy. Wouldn't you, too?
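The risk-aversion point above has a standard formal rendering: model utility as concave in the good at stake, and a gamble can lose to a sure thing even when the gamble has the higher expected payoff. A toy sketch (square-root utility and all numbers are invented, purely for illustration):

```python
import math

# Why a risk-averse agent can refuse the organ-lottery deal even though
# it raises expected life-years. Utility is concave in life-years
# (square root here, purely illustrative); all numbers are invented.

def utility(years):
    return math.sqrt(years)  # diminishing returns: concavity models risk aversion

baseline_years = 40
deal_years, p_survive = 41, 0.98  # free organs add a year, but there is
                                  # a 2% chance of being "harvested"

expected_years = p_survive * deal_years  # 40.18 > 40
expected_utility = p_survive * utility(deal_years) + (1 - p_survive) * utility(0.0)

wealth_maximizer_accepts = expected_years > baseline_years        # True
risk_averse_accepts = expected_utility > utility(baseline_years)  # False
```

With these (made-up) numbers the wealth-maximizer takes the deal and the risk-averse agent declines it, which is the shape of the disagreement between the two scenarios.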

joefromchicago wrote:
You may be surprised to learn that your position [on the crippling happiness drug] is not universal.

Only because the scenario tricks you into making factual assumptions that are false. It comes just close enough to describing junkies that it triggers people's knee-jerk reflexes about the ethics of being a junkie. But on closer examination, the stated facts are quite different. There is no hangover after the high, no crime to get money for the next fix, no desperate families and friends. If there was, I might well change my judgment. Conversely, if the story did not subtly mislead people about unstated facts, I bet my assessment would be much more mainstream.
joefromchicago
 
  1  
Thu 30 Jan, 2014 12:35 pm
@Thomas,
Thomas wrote:
So if the problem of inherent biases frustrates my normative tenet (#2), why aren't you arguing that it's frustrating my factual tenet (#1) as well?

I didn't take your first tenet as a factual statement. It's a statement about facts, not a factual statement.
Thomas
 
  1  
Thu 30 Jan, 2014 12:42 pm
@joefromchicago,
Finally, does a system of absolute morality protect you against corruption by biases and prejudices? I don't think so.
joefromchicago wrote:
A system of absolute morality doesn't have that problem in the same way. For instance, when Kant said that it is always immoral to lie, he didn't leave a lot of leeway for someone to argue that, in this particular case, lying should be OK.

I disagree. While we didn't agree on much when we last talked about Kant's take on lying, I think we can agree on the following points:
  • "Don't ever lie" is a maxim consistent with the Categorical Imperative. Kant thinks our duty to act on this maxim is absolute.
  • Other maxims, such as "don't ever make yourself an accessory to murder", are consistent with the Categorical Imperative as well. Our supposed duty to act on those would be just as absolute.
  • When we are faced with absolute duties to act on conflicting maxims, Kant gives us no meta-rules for choosing between them. Nor does he guarantee to us that there exists any "right thing" for us to do.
So it seems to me that in practical situations like lying to prevent a murder, Kant's formalism just throws up its hands, avoids the question "What am I supposed to do now?" altogether, and retreats to the comfortable but useless position that sometimes we're damned no matter what we do. I doubt that this reduces the potential for corruption by prejudice, bias, and ill-supported seat-of-the-pants judgments.
Thomas
 
  1  
Thu 30 Jan, 2014 12:44 pm
@joefromchicago,
joefromchicago wrote:
I didn't take your first tenet as a factual statement. It's a statement about facts, not a factual statement.

Fair enough. My usage was off. Thanks for correcting it. It was intended as a statement about facts and my approach to them.
 
joefromchicago
 
  1  
Thu 30 Jan, 2014 12:47 pm
@Thomas,
Thomas wrote:
if aliens on a distant planet hired me as their ethical advisor, and if I was satisfied of the fact that the aliens' psychologies are wealth-maximizing and not risk-averse, I would advise that their government pursue such a policy. Wouldn't you, too?

Probably not. But then I'm not a utilitarian.

Thomas wrote:
Only because the scenario tricks you into making factual assumptions that are false. It comes just close enough to describing junkies that it triggers people's knee-jerk reflexes about the ethics of being a junkie.

Well, actually, it's just a variation on Robert Nozick's "experience machine." I could make it a hypothetical about being hooked up to a machine that reproduces happy experiences and that avoids any reference to drugs if that would help.

Thomas wrote:
But on closer examination, the stated facts are quite different. There is no hangover after the high, no crime to get money for the next fix, no desperate families and friends. If there was, I might well change my judgment. Conversely, if the story did not subtly mislead people about unstated facts, I bet my assessment would be much more mainstream.

Well, there's no "after the high." The drug renders you catatonic. That's a permanent state. Does that change your response?
joefromchicago
 
  2  
Thu 30 Jan, 2014 12:55 pm
@Thomas,
Thomas wrote:
So it seems to me that in practical situations like lying to prevent a murder, Kant's formalism just throws up its hands, avoids the question "What am I supposed to do now?" altogether, and retreats to the comfortable but useless position that sometimes we're damned no matter what we do. I doubt that this reduces the potential for corruption by prejudice, bias, and ill-supported seat-of-the-pants judgments.

I agree that Kant doesn't really have much to say about competing moral imperatives, and that, implicit in his system of morality, is the possibility that sometimes the only choice is a bad choice. But that's different from saying that the categorical imperative allows the individual to "put his finger on the scale" when it comes to deciding what to do. As I see it, the utilitarian is never placed in the same situation as a Kantian, where the only choices are between incompatible moral duties. For the utilitarian, there's always a "better" choice - it's just that the person making the choice is also the one who is making the moral calculus.
Finn dAbuzz
 
  1  
Thu 30 Jan, 2014 06:32 pm
@Thomas,
Thomas wrote:

1. Believe in facts if the balance of the evidence supports them, and for no other reason.

2. Believe in values if acting on them will tend to increase the overall surplus of happiness over suffering, and for no other reason.
I haven't read through all of the replies, so this may be redundant, but it seems that if there is a flaw with #1 (and I believe this has been addressed), it is with the nature of the "balance of evidence." Presumably the balance of evidence consists solely of "facts;" otherwise what you consider a fact could be supported solely by your personal subjective take on things. Each one of these facts that constitute the "balance of evidence" would be subject to the same test, and in turn the "facts" that make up the "balance of evidence" supporting them would be as well.

Unless you are prepared to subject each "fact" in the chain (It's turtles all the way down) to the test you've established for belief, at some point, it seems to me, you must be engaging in faith, if only to keep yourself from following the turtles all the way down.

Now, I don't have a problem with faith in Newtonian physics (for example). I don't have to test all the facts that have led to it to believe in it, but unless I duplicate Newton's process, aren't I exhibiting faith?

I don't think, by the way, that there is a substantive flaw in #1, but you've suggested that it is unassailable. If it is, you should be able to shred my assault.

I think #2 is a bit more assailable.

What is the scope of "tend?"

And it is hard to imagine that this formula doesn't lead to someone possessing values that butt up against each other.

I might say that I believe in freedom and acting on that belief has led me to kill someone. The killing of this person (at least in my mind) will tend to increase happiness over suffering. With him gone, there will, overall, be less suffering and more happiness.

I also believe in non-violence, and acting on that belief I don't kill the aforementioned horrible tyrant. My refusal to kill him could be supported by the happiness/suffering equation: Clearly he and those who, if not love him, depend upon him will be happier (although this may not outweigh the future suffering of his potential victims), but isn't there a chance that my non-violent response to an opportunity for arguably justified violence will inform others and so increase happiness and reduce suffering?

Additionally, for all I know, he may give up his horrible ways tomorrow and there will be no current or future victims.

Retribution for past suffering doesn't seem to have a place in #2, unless of course I assume his death will be a warning to other horrible tyrants and thereby perhaps reduce suffering and increase happiness.

In the end, it's not a question of whether or not you should believe in facts etc, or values etc, but a recognition of the sponginess of #'s 1 and 2. Both depend largely on subjectivity and (I would argue) faith.

Whether or not this belief system of yours is unassailable, I'll give you credit for having one, and even more if you are as consistent with it as possible.

Quote:
I try, and sometimes manage, to live my life by a minimalistic, two-tenet religion.


Obviously you're not suggesting that you live a life that is always in keeping with your stated belief system, but I wonder to what extent you appreciate that your failures may not be as much a flaw in you as they are in the system.

Personally, I would have a very hard time trying to reduce my beliefs and values into any number of bullet points, let alone two.

Not that I wouldn't like to be able to do so, but I just don't see life as so pat.
Thomas
 
  1  
Thu 30 Jan, 2014 08:32 pm
@joefromchicago,
joefromchicago wrote:
For the utilitarian, there's always a "better" choice

True. I consider this a feature, not a bug.

joefromchicago wrote:
it's just that the person making the choice is also the one who is making the moral calculus.

Not necessarily. I don't think there's anything wrong for a Utilitarian to outsource the calculating. That's what Act Utilitarians do when in practice they usually just follow ordinary common-sense rules such as don't lie, don't steal, don't throw sand at other kids, do as you would be done by, and so forth.
Thomas
 
  1  
Thu 30 Jan, 2014 08:37 pm
@joefromchicago,
joefromchicago wrote:
Well, actually, it's just a variation on Robert Nozick's "experience machine." I could make it a hypothetical about being hooked up to a machine that reproduces happy experiences and that avoids any reference to drugs if that would help.

You're kidding. Robert Nozick (1974) plagiarized the orgasmotron from Woody Allen's Sleeper (1973)? I did not remember this section! No, it would not change my view. Each individual can freely spend the rest of their lives in the orgasmotron --- or not, if she prefers. It's an individual preference with no moral implications as far as I am concerned.
joefromchicago
 
  2  
Fri 31 Jan, 2014 09:46 am
@Thomas,
Thomas wrote:
True. I consider this a feature, not a bug.

Utilitarianism certainly is flexible, I'll give it that.

Thomas wrote:
Not necessarily. I don't think there's anything wrong for a Utilitarian to outsource the calculating. That's what Act Utilitarians do when in practice they usually just follow ordinary common-sense rules such as don't lie, don't steal, don't throw sand at other kids, do as you would be done by, and so forth.

I don't think Bentham or Mill articulated anything definitive on who gets to make the calculations. I believe, however, that they took a rather Protestant view of the matter. Each person, presumably, should be able to determine whether his or her actions tend to produce more or less overall happiness. I'm not sure who would be placed in a better position to make that decision, and I'm not sure if Bentham or Mill knew either.

Rule (not act - you have that backwards) utilitarians are no different, since they determine for themselves what rules are more productive of happiness. If a bunch of people come to the same conclusion, then that just means the rule gets widely adopted; it doesn't mean the rule is per se utile. Utilitarianism doesn't work as a democracy, it works more like the market. Rules aren't imposed by the will of the majority, they are followed by the majority because they prove more utile in practice.
 
joefromchicago
 
  2  
Fri 31 Jan, 2014 09:49 am
@Thomas,
Thomas wrote:
You're kidding. Robert Nozick (1974) plagiarized the orgasmotron from Woody Allen's Sleeper (1973)? I did not remember this section! No, it would not change my view. Each individual can freely spend the rest of their lives in the orgasmotron --- or not, if she prefers. It's an individual preference with no moral implications as far as I am concerned.

I think you're missing Nozick's point. It's not that there's any kind of moral implication in a person's decision to hook up to the experience machine, it's that the unwillingness of many to hook up to the experience machine demonstrates that hedonic systems of morality (including utilitarianism), based on the presumption of happiness as the greatest good, are inherently flawed.
Thomas
 
  1  
Fri 31 Jan, 2014 03:51 pm
@joefromchicago,
joefromchicago wrote:
I think you're missing Nozick's point. It's not that there's any kind of moral implication in a person's decision to hook up to the experience machine, it's that the unwillingness of many to hook up to the experience machine demonstrates that hedonic systems of morality (including utilitarianism), based on the presumption of happiness as the greatest good, are inherently flawed.

This problem, if it is one, received a narrow, technical fix with the advent of preference utilitarianism. Preference utilitarianism is like classical utilitarianism, except that it identifies people's preferences as the relevant measure of happiness for the utilitarian calculus. If some people prefer the orgasmotron while others prefer their reality straight and tough, this only goes to show that different people have different tastes. It does not reveal any deep flaw about hedonism in general.
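The "narrow, technical fix" can be made concrete with a toy calculation. The names and numbers below are invented for illustration; the point is only that the calculus aggregates each person's own preference satisfaction, so divergent tastes about the orgasmotron are no embarrassment to it.

```python
# A toy preference-utilitarian calculus (names and numbers invented).
# Each person's own preference ranking, not pleasure as such, is what
# gets aggregated into the social score of an outcome.

preferences = {
    "hedonist": {"orgasmotron": 10, "plain_reality": 2},
    "realist":  {"orgasmotron": 1,  "plain_reality": 9},
}

def social_score(outcome):
    """Sum of each person's preference satisfaction under an outcome."""
    return sum(person[outcome] for person in preferences.values())

# A one-size-fits-all policy ties at 11 either way, but letting each
# person choose for herself scores 10 + 9 = 19.
uniform_best = max(social_score(o) for o in ("orgasmotron", "plain_reality"))
free_choice = sum(max(person.values()) for person in preferences.values())
```

The free-choice column dominating both uniform policies is the formal echo of Thomas's claim that the choice is "an individual preference with no moral implications."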
Thomas
 
  1  
Fri 31 Jan, 2014 04:19 pm
joefromchicago wrote:
Rule (not act - you have that backwards)

No, I actually meant that. According to Google ngrams, the distinction between act and rule utilitarianism arose in the late 1950s. Further Googling suggests to me that it got popular because of J.J.C. Smart's article "Extreme and Restricted Utilitarianism" (1956). Smart, an Act Utilitarian (or "extreme Utilitarian" as he puts it), points out that nothing keeps Act Utilitarians from setting rules and following them, and that in practice that's what they usually do. The distinction is that for Act Utilitarians, rules are mere labor-saving devices for the practical work of churning out useful acts. For Rule Utilitarians, rules are the center of attention. They don't care about the usefulness of individual acts.

I don't think the distinction is terribly important for my point or your counterpoint. I was just bringing up the rule-following Act Utilitarians as an example of outsourcing the moral calculus. (The outsourcing in this case is to whoever made the rules.)
 
