Preserve humanity despite suffering or no?

SLAM: debunk creationism, pseudoscience, and superstitions. Discuss logic and morality.

Moderator: Alyrium Denryle

Push Reddy?

Yes: 5 (12%)
No: 38 (88%)

Total votes: 43

Rye
To Mega Therion
Posts: 12493
Joined: 2003-03-08 07:48am
Location: Uighur, please!

Post by Rye »

I wouldn't push the red button unless I was in a truly foul mood.
EBC|Fucking Metal|Artist|Androgynous Sexfiend|Gozer Kvltist|
Listen to my music! http://www.soundclick.com/nihilanth
"America is, now, the most powerful and economically prosperous nation in the country." - Master of Ossus
Nova Andromeda
Jedi Master
Posts: 1404
Joined: 2002-07-03 03:38am
Location: Boston, Ma., U.S.A.

Post by Nova Andromeda »

Boyish-Tigerlilly has conveniently written a detailed description of the ethical view I used in the OP that argues against preserving humanity. I used a pain/pleasure metric, but you could swap that out for some other metric (belief in God, hair color, a generalized goal metric, etc.). Preference is another popular metric. However, I have partially undermined the preference metric by proposing a scenario in which humanity is largely ignorant of its condition. I did this because I’ve seen it suggested that people should be protected from themselves if they aren’t making rational decisions. What is the ethical thing to do when a person’s preference for no morphine is irrational and they are in severe pain from a bad injury? In this scenario the stakes have just been raised.



-Here is a list of objections. If the bandwagon wants to make its own list, that would be fine too. However, I'm not going to field everyone's responses individually.


Objection 1: I don't trust myself to decide. I don't have the authority to decide.

This line of reasoning ignores the possibility of consulting others, which is well within the bounds of the OP.


Objection 2: The OP doesn't conform to reality.

Hypothetical scenarios don't need to conform to reality. This particular scenario is set up to force people to defend humanity's existence even though the suffering/pleasure ratio is > 1. The suffering/pleasure metric is one of the major secular metrics used to make ethical decisions. The general idea is to reduce the ratio to 1 or less. The obvious method for doing this in the OP is to kill off humanity. So far only one person is willing to off humanity. Failure to provide a logical reason for this suggests a severe lack of logical thought with respect to decisions involving humanity's existence.


Objection 3: There is some immutable ethical right for humanity to persist regardless of a major reason to the contrary such as an overall suffering/pleasure ratio > 1.

Where is the logic or evidence to support the existence of the immutable right?


Objection 4: Self interest. I don't want to die.

Q is happy to accept that. Nevertheless this is SLAM and I want you to justify your decision. How is your self interest ethical in the larger context or are you simply stating a preference for unethical behavior?


Objection 5: There is no such thing as an objective ethical system.

An ethical system is basically a system for making decisions. All one needs to do is pick a function to maximize/minimize. One widely accepted secular criterion is suffering vs. pleasure. Of course, this criterion doesn't encompass all human goals. One could choose some theoretical "human goal" criterion. Another criterion is preference. These are objective in the sense that pain, pleasure, desire, etc. can be observed. In reality exact quantification may be difficult, but not in a hypothetical scenario.
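The "pick a function to maximize/minimize" idea above can be sketched in a few lines. This is a minimal illustration of the naive suffering/pleasure criterion being debated, assuming a hypothetical list of experiences with made-up intensities; nothing here is specified in the OP.

```python
# Hedged sketch of the naive suffering/pleasure criterion discussed above.
# The experience list and the decision rule are invented illustrations.

def suffering_pleasure_ratio(experiences):
    """experiences: list of (kind, intensity) pairs, kind is 'pain' or 'pleasure'."""
    pain = sum(i for kind, i in experiences if kind == "pain")
    pleasure = sum(i for kind, i in experiences if kind == "pleasure")
    return pain / pleasure if pleasure else float("inf")

def naive_utilitarian_verdict(experiences):
    # The "general idea" from the post: drive the ratio down to 1 or less.
    return "acceptable" if suffering_pleasure_ratio(experiences) <= 1 else "unacceptable"

life = [("pain", 3), ("pleasure", 2), ("pain", 1), ("pleasure", 4)]
print(suffering_pleasure_ratio(life))   # 4/6, about 0.67
print(naive_utilitarian_verdict(life))
```

Swapping the metric, as the post suggests, just means swapping out the function being minimized; the rest of the decision procedure stays the same.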


Objection 6: The OP said Q didn’t require a reason for not pushing Reddy.

Q is happy to accept that. Nevertheless this is SLAM and I want you to justify your decision. How is your decision to avoid thinking about the problem ethical? Congratulations, you have fallen into a logical trap.
Nova Andromeda
Boyish-Tigerlilly
Sith Devotee
Posts: 3225
Joined: 2004-05-22 04:47pm
Location: New Jersey (Why not Hawaii)
Contact:

Post by Boyish-Tigerlilly »

I can see the value of the hypothetical, even though it's not realistic, insofar as it sketches out and tests the depths to which you are prepared to take a particular philosophy. As I mentioned earlier, this hypothetical points out a major problem with the previous version of Utilitarianism.

This is one major reason why the old version of Utilitarianism, based on the pleasure/pain ratio alone, has been ditched. Very few ethicists actually use it. It has evolved over time because it simply had too many problems, especially if you take the doctrine to its logical conclusions. It leads to results no one is prepared to accept.

But this problem continues, to an extent, in all forms of Utilitarianism/maximizing consequentialism, as they are often grossly at odds with intuition compared to, say, traditional medical ethics. No matter what has been done, new forms of utility theory end up attracting new, unavoidable criticisms that are fixed only by making still more new forms. I have come to the conclusion that if you want to follow the system, as it leads to generally good results in practice, you simply need to bite the bullet and accept its problems in the more extreme applications. Then again, it's probably not a good idea to be a pure Utilitarian, simply because it can be impossible in a society of non-utilitarians. If everyone were utilitarian there would be no problem, as everyone would rationally calculate costs and benefits on an aggregative basis as an extension of the core principles of the theory, but most people don't do that. Many people will, for instance, save their own child rather than a room full of equal children simply due to kinship ties. It takes an extreme person to be a hard-core utilitarian.

Utility is probably the only major ethical system I adhere to, but I admit that, a lot of the time, I don't always know what to do, and I doubt it/lose faith in it because of some of the unpalatable, unintuitive things it advocates. I try to tell myself, though, that intuition is a shitty way of making decisions.
Objection 3: There is some immutable ethical right for humanity to persist regardless of a major reason to the contrary such as an overall suffering/pleasure ratio > 1.

Where is the logic or evidence to support the existence of the immutable right?
I would agree with this objection to the objection. There is no real reason why humanity must survive at all costs regardless of the quality of life of its members. This position has already been advanced several times on this board alone. I don't think that's the actual problem. I think the problem is killing people who actually value life more than the bads, regardless of whether or not there is a greater quantity of bads than goods. The trick is calculating quality of life, which isn't as easy as simply adding up bads and goods, since they come in different intensities and often affect the individual based on his system of self-evaluation. He might value the fewer goods more than a greater number of bads. It's a weighting system. For instance, I can imagine someone's life filled with a far greater quantity of annoyances and small things that make you sad for a few seconds each day, but spread out over a lifetime, that's rather insignificant in practice.


I don't see a tremendous difference between the people being informed or ignorant on this issue if it's given that we know the people would choose to live even if they knew the truth just as they would if they knew nothing. Let's assume they do know, but still don't choose to die. This merely means they value the goods more than the bads, even if the latter has a greater quantity.

How do you determine rational ethical actions? I would assume in a manner similar to economics, no? Could you consider life an economic transaction, and the "bads" the costs to purchase the goods?
Boyish-Tigerlilly
Sith Devotee
Posts: 3225
Joined: 2004-05-22 04:47pm
Location: New Jersey (Why not Hawaii)
Contact:

Post by Boyish-Tigerlilly »

I should make an addition. I didn't think of this until later, so I am sorry for the double post. Perhaps they could be merged.


1. I have a problem with the classical ledger system of deciding life quality precisely because it doesn't reflect the real-time psychology or emotional status of the individuals. I am not prepared to tell someone who is obviously happy, looking back on his life, that he's unhappy/miserable etc. and thus that his life isn't worth living, even if there is a greater quantity of equal-intensity bads. When discussing people having things done to them by others or by themselves, I can't make a decision for him, because I would be imposing my own utility valuation over his. Imo, I would first think something wrong with MY calculations if they add up to him not having a life worth living based on a suffering/happiness ratio, given that the calculations don't actually correspond to his observable mentality.

2. For instance, let's use Bob again. Say Bob only lives 10 years, and over that ten-year period he has 100 experiences of equal intensity, bad and good. From the classical ledger perspective, you would not have a life worth living if you had a higher quantity of equally intense bads than goods. This makes no distinction between a high or low majority: 51 out of 100 experiences being bad would, under this system, produce a negative util ratio. But what if this mathematical abstraction doesn't match the empirical observation of his emotional status? I would say my calculations would be wrong or irrelevant, since if he truly were leading a bad life, it would probably show.

I don't think most people here would off themselves if they had a life with 51 bad experiences out of 100, regardless of what the ledger says. This shows there's a strong intuitive disconnect with ledger morality. You would probably start to see some real problems if the "cut off" point were not simply any "net" number, but rather a high percentage of the total experiences (e.g. perhaps 89%).

All I am saying is that if observations of suffering/happiness don't match the adding, there's probably something wrong with the adding.
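The Bob example above can be worked through numerically. This is a hedged sketch of the "ledger" critique, assuming the 51-bad/49-good split from the post; the 1.5 weight standing in for Bob valuing his goods more highly is an invented illustration, not anything the post specifies.

```python
# Bob's 100 equal-intensity experiences, 51 of them bad (from the post).
bob = ["bad"] * 51 + ["good"] * 49

def ledger_score(experiences, good_weight=1.0, bad_weight=1.0):
    """Classical ledger: sum of weighted goods minus sum of weighted bads."""
    goods = sum(good_weight for e in experiences if e == "good")
    bads = sum(bad_weight for e in experiences if e == "bad")
    return goods - bads

# Naive equal-weight ledger: 49 - 51 = -2, so "not a life worth living".
print(ledger_score(bob))                   # -2.0
# If Bob weights his goods at (a hypothetical) 1.5x, the verdict flips:
print(ledger_score(bob, good_weight=1.5))  # 73.5 - 51 = 22.5
```

The point of the sketch is that the sign of the score depends entirely on the weighting, which is exactly why a raw count of bads versus goods can disagree with the person's observable happiness.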
Nieztchean Uber-Amoeba
Sith Devotee
Posts: 3317
Joined: 2004-10-15 08:57pm
Location: Regina Nihilists' Guild Party Headquarters

Post by Nieztchean Uber-Amoeba »

Why not gas the people who suffer the most? Say, wipe out the populations of the Congo, Indonesia, the Middle East, etc. That would solve the problem to a much greater extent.

Oh wait. That would be psychotic genocide which only the most despicable of cretins would conceive of.

As far as I'm concerned, murder is wrong because arbitrarily ending the existence of another person is the greatest harm one can possibly cause them, as it is the only thing which entirely extinguishes the possibility of further pleasure in the person's life. Obviously the vast majority of people are happy with the pain/pleasure ratio they live in, because mass suicide is uncommon. Deciding to murder them in the interests of no longer causing them suffering is a nauseatingly evil, cruel and narcissistic act, as it presumes you know better than they do that their lives mean nothing, and that you should take their decisions from them based only on your own belief that their suffering outweighs their pleasure.

Frankly, anyone who would push the button deliberately is the worst kind of monster imaginable, because pressing it would be the most evil action you could undertake. Ever.

This moral dilemma sucks ass.
Nova Andromeda
Jedi Master
Posts: 1404
Joined: 2002-07-03 03:38am
Location: Boston, Ma., U.S.A.

Post by Nova Andromeda »

Boyish-Tigerlilly wrote:I don't see a tremendous difference between the people being informed or ignorant on this issue if it's given that we know the people would choose to live even if they knew the truth just as they would if they knew nothing. Let's assume they do know, but still don't choose to die. This merely means they value the goods more than the bads, even if the latter has a greater quantity.

How do you determine rational ethical actions? I would assume in a manner similar to economics, no? Could you consider life an economic transaction and the "bads" the costs to purchase the goods?
-The idea for the scenario is that current human preferences to live are irrational for reasons like: human memories of things like severe pain are forgotten and hard times in the past are remembered as the good old days. The humans in the midst of severe pain would literally wish they were never born, and humans suffering through hard times only carry on because they hope for a better day (which probably won't come) and think old times were good when they weren't.
-In the general case a rational ethical action is a decision that is made with accurate information where the decision criteria are known to the "decider." If the person is unaware of making a decision it won't fall into a rational/irrational metric.
Nova Andromeda
18-Till-I-Die
Emperor's Hand
Posts: 7271
Joined: 2004-02-22 05:07am
Location: In your base, killing your d00ds...obviously

Post by 18-Till-I-Die »

I would push it just to see what would happen. I think it'd look AWESOME! Like Q would teleport the Drej mothership from Titan AE over Earth and a huge blast of energy would scorch the planet in an instant, leaving the world a sterile desert where life can never grow again. Then all those people who laughed at me in high school would pay! Suck on it girls who never dated me! You just want to be friends? Be friends with the void! :finger:


Did i do it right? I never tried nihilism before, i dont know if i got the "angry teen virgin" tone most nihilists sport down or not. :lol:


No seriously, of course i wouldnt. I honestly cant fathom how anyone could justify doing so.
Kanye West Saves.

Lusankya
ChiCom
Posts: 4163
Joined: 2002-07-13 03:04am
Location: 人间天堂
Contact:

Post by Lusankya »

Nova Andromeda wrote: -The idea for the scenario is that current human preferences to live are irrational for reasons like: human memories of things like severe pain are forgotten and hard times in the past are remembered as the good old days. The humans in the midst of severe pain would literally wish they were never born and humans suffering through hard times only carry on because they hope for a better day (which probably won't come) and think old times were good when they weren't.
Most people would say that my boyfriend was irrational if he wanted me to take to him with a whip and leave bruises on his tender buttocks. That said, if he wanted me to do it, then I'd consider it. Why? Because when dealing with other people, you're not dealing with what they should rationally want, but with what they do want. And given that people aren't going around offing themselves on a regular basis, and that when they do, it's usually due to mental illness, I'd say that what people do want is to not die. It's not as though nobody in the world has a concept of death, so I can't even say that I'm just doing something for them that they haven't thought of yet.

I don't believe I have the ethical right to go against the desires of every single person on the planet, especially since in this case I can't even try to justify it as being in their best interests. I mean, once I push the button, they'll be frelling dead, which is like a permanent state of being as unhealthy as you can possibly get. And, being permanent, it means that the people who are displeased with my decision aren't in a position to do anything about it. If I don't push the button, and a bunch of people complain, then they will still have the option of locking themselves in a helium tent or something similar, to give themselves the same result as though I had pushed the button, albeit a while later.

That, of course, is assuming that people are being irrational by wanting to live, which is a retarded concept in and of itself. Living provides the option of "benefits" outweighing the "pain", whereas with death, you only ever get the option of there being no benefits, along with the no pain. Besides, if you live in a developed nation, then the majority of the pain you experience will be along the lines of "Oh, the world sucks! This girl totally didn't go down on me, because she thinks I'm a total douchebag, or something. Well, whatev. She wasn't as hot as she thought she was anyway. She had, like, this pimple on her left cheek, and it was totally pink and stuff. I would have been totally embarrassed if I were her. This is sooo lame. I'm going to go and write about it on my livejournal while listening to Linkin park on my iPod." Except with worse grammar and spelling.
"I would say that the above post is off-topic, except that I'm not sure what the topic of this thread is, and I don't think anybody else is sure either."
- Darth Wong
Free Durian - Last updated 27 Dec
"Why does it look like you are in China or something?" - havokeff
Surlethe
HATES GRADING
Posts: 12267
Joined: 2004-12-29 03:41pm

Post by Surlethe »

Nova Andromeda wrote:Objection 3: There is some immutable ethical right for humanity to persist regardless of a major reason to the contrary such as an overall suffering/pleasure ratio > 1.

Where is the logic or evidence to support the existence of the immutable right?
Since you seem to reject the notion that humans have immutable rights, let's turn this on its head. Where is the logic or evidence to support the use of an extreme form of classical utilitarianism with no mitigation by preference?
Objection 4: Self interest. I don't want to die.

Q is happy to accept that. Nevertheless this is SLAM and I want you to justify your decision. How is your self interest ethical in the larger context or are you simply stating a preference for unethical behavior?
My code of ethics presumes that my self-preservation instinct is ethical, when it comes down to it. And at an instinctual level, I don't give a fuck whether or not the instinct is rational or irrational; it exists nonetheless. My answer in this situation is the same as my answer in any scenario where I'm fighting for my life and might have to kill more people than would otherwise die if I died. If six men were coming to execute me and I had a chance to kill or maim them (or otherwise stop them), I would.

EDIT: In fact, I'd argue that codes of ethics exist to attempt to approximate the general consensus of instinctual responses to a given situation, not vice-versa. For example, the golden-rule philosophy, the assumption of fundamental human rights, and the extreme form of classical utilitarianism described above all give essentially the same results in a vast majority of situations: act to minimize harm, help people, treat them well; it's only in extreme, contrived situations like this one that they differ appreciably. And we can see that people are rejecting the extreme utilitarian approach and embracing the notion of human rights or moral symmetry because the result that utilitarianism gives is unconscionable.
A Government founded upon justice, and recognizing the equal rights of all men; claiming no higher authority for existence, or sanction for its laws, than nature, reason, and the regularly ascertained will of the people; steadily refusing to put its sword and purse in the service of any religious creed or family, is a standing offense to most of the Governments of the world, and to some narrow and bigoted people among ourselves.
F. Douglass
Boyish-Tigerlilly
Sith Devotee
Posts: 3225
Joined: 2004-05-22 04:47pm
Location: New Jersey (Why not Hawaii)
Contact:

Post by Boyish-Tigerlilly »

Surlethe wrote:And we can see that people are rejecting the extreme utilitarian approach and embracing the notion of human rights or moral symmetry because the result that utilitarianism gives is unconscionable.

You should note, though, Surlethe, that this particular problem largely only exists within the realm of an out of date version of Utilitarianism. Almost no one uses the base pleasure/pain principle. It's been considerably modified.


Although, the other ones have some problems as well. New ones.


I would want to ask though what you mean by ethics being a refined approximation to the consensus of instinctual responses. Do you believe that the goal of ethics is to simply explain and conform to what is our natural tendency?

One problem with ethics is that it rests on the idea that what is natural isn't necessarily what ethics should prescribe, even if it is approximating your natural instincts. Do you mean like it should conform to natural states or intuition of the people?

For example, is it intuitive or instinctual for most mothers or parents to sacrifice their children for a far greater number of total strangers? Utilitarianism would probably tell you to sacrifice your children for the whole in such a case (a building on fire, for instance).

Do you ignore the principle or go by the instincts? I have seen on this forum that people would consider it morally bankrupt to save one and let the many die, but that's entirely natural and instinctual if you have no connection to them.
Nova Andromeda
Jedi Master
Posts: 1404
Joined: 2002-07-03 03:38am
Location: Boston, Ma., U.S.A.

Post by Nova Andromeda »

Surlethe wrote:
Nova Andromeda wrote:Objection 3: There is some immutable ethical right for humanity to persist regardless of a major reason to the contrary such as an overall suffering/pleasure ratio > 1.

Where is the logic or evidence to support the existence of the immutable right?
Since you seem to reject the notion that humans have immutable rights, let's turn this on its head. Where is the logic or evidence to support the use of an extreme form of classical utilitarianism with no mitigation by preference?
-You are free to mitigate by preference all you want, however I have undermined it in this scenario as I have already pointed out. If a person's preference varies depending on whether something is a memory or a current experience then how much value does it really have? Don't forget that the scenario is deciding the future of humanity. If people inaccurately predict their future preferences then their current preference regarding the future isn't nearly as useful.
Surlethe wrote:
Nova Andromeda wrote:Objection 4: Self interest. I don't want to die.

Q is happy to accept that. Nevertheless this is SLAM and I want you to justify your decision. How is your self interest ethical in the larger context or are you simply stating a preference for unethical behavior?
My code of ethics presumes that my self-preservation instinct is ethical, when it comes down to it. And at an instinctual level, I don't give a fuck whether or not the instinct is rational or irrational; it exists nonetheless. My answer in this situation is the same as my answer in any scenario where I'm fighting for my life and might have to kill more people than would otherwise die if I died. If six men were coming to execute me and I had a chance to kill or maim them (or otherwise stop them), I would.

EDIT: In fact, I'd argue that codes of ethics exist to attempt to approximate the general consensus of instinctual responses to a given situation, not vice-versa. For example, the golden-rule philosophy, the assumption of fundamental human rights, and the extreme form of classical utilitarianism described above all give essentially the same results in a vast majority of situations: act to minimize harm, help people, treat them well; it's only in extreme, contrived situations like this one that they differ appreciably. And we can see that people are rejecting the extreme utilitarian approach and embracing the notion of human rights or moral symmetry because the result that utilitarianism gives is unconscionable.
-I'm not saying the existence of a person's desires is irrational. I'm saying that their decisions don't maximize the fulfillment of those desires, and that is irrational.
-Ethical codes aren't some fundamental part of nature like gravity. Instead, ethical codes are more like mathematical functions. They exist because humans generate them as part of various attempts to make "better" decisions. People are generally not attempting to simply write down a consensus of instinctual responses. The apparent reason most people reject the logical conclusions of various utilitarian approaches has more to do with groupthink and gut feelings than with the various problems with those approaches. I say this due to the severe lack of rational reasoning in their rejections of those approaches, as pointed out previously. This is ironically exemplified in your response when you call some utilitarian results unconscionable, like it's some sort of immutable truth.
Nova Andromeda
Starglider
Miles Dyson
Posts: 8709
Joined: 2007-04-05 09:44pm
Location: Isle of Dogs
Contact:

Post by Starglider »

Nova Andromeda wrote:Ethical codes aren't some fundamental part of nature like gravity.
True
Nova Andromeda wrote:Instead ethical codes are more like mathematical functions.
Strictly, you can model anything consistent (i.e. probably anything that exists) with a function of sufficient complexity. Human instinctual ethics are sufficiently non-transitive and context-sensitive that they look more like a heuristic system on a spreading-activation substrate (which is unsurprising, since that's a good micro-summary of mid-level brain architecture in general). When humans invent codes of honour and laws they are trying to create formal, context-insensitive systems that approximate these instinctual ethics, often with additional game-theoretic elements (i.e. demonstrated adherence to a code of honour makes possible a level of trust, and bargains to the benefit of both sides, that would not be possible without it).
The apparent reason for most people rejecting logical conclusions of various utilitarian approaches has more to do with group think and gut feelings than with the various problems with those approaches.
Sometimes, but in this case (for intelligent responses) any conflict with utilitarianism is due to a misapplication of the latter technique (probably excessively naive extreme classical utilitarianism, as Boyish-Tigerlilly mentioned; utilitarianism as a philosophy goes much further than just the application of utility theory, which is really just the simplest way to ensure consistency/transitivity in a system with non-binary value judgements).
I say this due to severe lack of rational reasoning for their rejections of those approaches as pointed out previously.
Still bitching and whining I see.
This is ironically exemplified in your response when you call some utilitarianism results unconscionable like it's some sort of immutable truth.
No, you muppet, it's the observed discrepancy between the strictly utilitarian answer and personal judgement in numerous experiments. 'Unconscionable' means 'humans don't consider it moral'; nothing immutable there.
Surlethe
HATES GRADING
Posts: 12267
Joined: 2004-12-29 03:41pm

Post by Surlethe »

Boyish-Tigerlilly wrote:
And we can see that people are rejecting the extreme utilitarian approach and embracing the notion of human rights or moral symmetry because the result that utilitarianism gives is unconscionable.
You should note, though, Surlethe, that this particular problem largely only exists within the realm of an out of date version of Utilitarianism. Almost no one uses the base pleasure/pain principle. It's been considerably modified.
This is true; I wasn't trying to ignore what you'd said.
Although, the other ones have some problems as well. New ones.


I would want to ask though what you mean by ethics being a refined approximation to the consensus of instinctual responses. Do you believe that the goal of ethics is to simply explain and conform to what is our natural tendency?
First, there's no objective way of picking a system of ethics: you have to treat it like theology or math. Some ethics systems assume that your goal is to minimize suffering; some assume humans have rights; some assume the truth of two-thousand-year-old tomes; there are probably many others I don't know. So it seems coincidental that, in general, different ethics systems seem to reach the same conclusions in a vast majority of circumstances. For example, a moral code based on the golden rule will reach the same conclusions as Nova Andromeda's naive utilitarianism in what seems like a majority of circumstances: in mundane, daily activities, strive to be a good person, increase utility, help other people, etc.

There's no a priori reason I can think of that such different assumptions should lead to so similar results, differing in only extreme cases, like this one. From this, I'd think that you have people living all over the world who, in their daily lives, act essentially the same despite subscribing to radically different assumptions about the foundations of morality. You probably act in largely the same manner toward your neighbors as Joe Q. Muslim, even though you are formally a utilitarian while Joe derives his formal moral structure from the Koran and what his Imams teach him.

Add to this the fact (as I recall from The God Delusion) that there is an overwhelming consensus in place about what to do in various moral dilemmas, and it seems that the various ethical schemes that exist serve to more or less approximate what you could call the mass conscience: the general moral zeitgeist.
One problem with ethics is that it rests on the idea that what is natural isn't necessarily what ethics should prescribe, even if it is approximating your natural instincts. Do you mean like it should conform to natural states or intuition of the people?
I'm not talking about individual instinct, but the consensual instinct (if that means anything). As below, you note that most people consider saving one person at the expense of many lives reprehensible, even though that might be the individual's instinct. Is any of this making sense? What I'm saying is that we all generally agree on what's the right thing to do when we're not in the situation, even if in the situation our instincts overpower our brains. So ethics is what's natural for the group -- the consensus of the group -- instead of for the individual.
For example, is it intuitive or instinctual for most mothers or parents to sacrifice their children for a far greater number of total strangers? Utiltiarianism would probably tell you to sacrifice your children for the whole in such a case. (A building on fire, for instance).
Again, I'm talking about the general consensus. If they're not your children, is it right to sacrifice them? What do your instincts say -- what do average Joe's instincts say if they're not his children? That's what I'm talking about, and that's what I tend to think moral codes try to approximate.
Do you ignore the principle or go by the instincts? I have seen on this forum that people would consider it morally bankrupt to save one and let the many die, but that's entirely natural and instinctual if you have no connection to them.
What I'm looking for is where the principles come from. In the case that there's a dilemma, obviously principles and instinct are in conflict; else, there wouldn't be a dilemma.
A Government founded upon justice, and recognizing the equal rights of all men; claiming no higher authority for existence, or sanction for its laws, than nature, reason, and the regularly ascertained will of the people; steadily refusing to put its sword and purse in the service of any religious creed or family, is a standing offense to most of the Governments of the world, and to some narrow and bigoted people among ourselves.
F. Douglass
User avatar
Surlethe
HATES GRADING
Posts: 12267
Joined: 2004-12-29 03:41pm

Post by Surlethe »

Nova Andromeda wrote:
Surlethe wrote:Since you seem to reject the notion that humans have immutable rights, let's turn this on its head. Where is the logic or evidence to support the use of an extreme form of classical utilitarianism with no mitigation by preference?
-You are free to mitigate by preference all you want, however I have undermined it in this scenario as I have already pointed out. If a person's preference varies depending on whether something is a memory or a current experience then how much value does it really have? Don't forget that the scenario is deciding the future of humanity. If people inaccurately predict their future preferences then their current preference regarding the future isn't nearly as useful.
You seemed to miss my point. You're essentially demanding objective justification for a different system of ethics; consequently, I'm demanding the same objective justification for your naive utilitarianism. Why should I believe in this sort of utilitarianism instead of subscribing to some notion of, say, universal human rights?
-Ethical codes aren't some fundamental part of nature like gravity. Instead, ethical codes are more like mathematical functions: they exist because humans generate them as part of various attempts to make "better" decisions. People are generally not attempting to simply write down a consensus of instinctual responses. The apparent reason most people reject the logical conclusions of various utilitarian approaches has more to do with groupthink and gut feelings than with the actual problems those approaches have.
You're missing my point again. I'm saying that ethical systems exist to approximate the group consensus rather than saying the group consensus is flat-out wrong when it contradicts a given ethical system (and since you yourself have admitted that choice of ethical systems is arbitrary, you have no recourse when someone applies a system other than your utilitarianism).
I say this due to the severe lack of rational reasoning behind their rejections of those approaches, as pointed out previously. This is ironically exemplified in your response when you call some utilitarian results unconscionable as if that were some sort of immutable truth.
What are you talking about? Immutable truth? The vast majority of responses in this thread alone have demonstrated quite satisfactorily that the utilitarian justification you give is unconscionable.
A Government founded upon justice, and recognizing the equal rights of all men; claiming higher authority for existence, or sanction for its laws, than nature, reason, and the regularly ascertained will of the people; steadily refusing to put its sword and purse in the service of any religious creed or family is a standing offense to most of the Governments of the world, and to some narrow and bigoted people among ourselves.
F. Douglass
User avatar
Boyish-Tigerlilly
Sith Devotee
Posts: 3225
Joined: 2004-05-22 04:47pm
Location: New Jersey (Why not Hawaii)
Contact:

Post by Boyish-Tigerlilly »

Surlethe wrote:
Boyish-Tigerlilly wrote:
And we can see that people are rejecting the extreme utilitarian approach and embracing the notion of human rights or moral symmetry because the result that utilitarianism gives is unconscionable.
You should note, though, Surlethe, that this particular problem largely only exists within the realm of an out-of-date version of Utilitarianism. Almost no one uses the base pleasure/pain principle; it's been considerably modified.
This is true; I wasn't trying to ignore what you'd said.
Although, the other ones have some problems as well. New ones.


I would want to ask though what you mean by ethics being a refined approximation to the consensus of instinctual responses. Do you believe that the goal of ethics is to simply explain and conform to what is our natural tendency?
First, there's no objective way of picking a system of ethics: you have to treat it like theology or math. Some ethics systems assume that your goal is to minimize suffering; some assume humans have rights; some assume the truth of two-thousand-year-old tomes; there are probably many others I don't know. So it seems coincidental that, in general, different ethics systems seem to reach the same conclusions in a vast majority of circumstances. For example, a moral code based on the golden rule will reach the same conclusions as Nova Andromeda's naive utilitarianism in what seems like a majority of circumstances: in mundane, daily activities, strive to be a good person, increase utility, help other people, etc.

There's no a priori reason I can think of that such different assumptions should lead to so similar results, differing in only extreme cases, like this one. From this, I'd think that you have people living all over the world who, in their daily lives, act essentially the same despite subscribing to radically different assumptions about the foundations of morality. You probably act in largely the same manner toward your neighbors as Joe Q. Muslim, even though you are formally a utilitarian while Joe derives his formal moral structure from the Koran and what his Imams teach him.

Add to this the fact (as I recall from The God Delusion) that there is an overwhelming consensus in place about what to do in various moral dilemmas, and it seems that the various ethical schemes that exist serve to more or less approximate what you could call the mass conscience: the general moral zeitgeist.
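Surlethe's point that radically different moral foundations converge in mundane cases and only split in extremes can be sketched as a toy model. Everything below is invented for illustration (the two decision rules, the scenarios, and the payoff numbers are not anyone's actual ethical theory): each candidate action is scored as a list of welfare changes, one per person affected, and the two rules pick among them differently.

```python
# Toy model: a naive-utilitarian rule and a crude rights-based rule
# choose among candidate actions. Each action maps to a list of
# welfare changes, one entry per affected person. All numbers are
# invented for illustration.

def utilitarian(actions):
    """Pick the action with the greatest total welfare."""
    return max(actions, key=lambda name: sum(actions[name]))

def rights_based(actions, harm_floor=-10):
    """Exclude any action that inflicts severe harm on some individual,
    then pick the best remaining action by total welfare."""
    allowed = {n: ws for n, ws in actions.items() if min(ws) >= harm_floor}
    return max(allowed, key=lambda name: sum(allowed[name]))

# Mundane case: small cost to the actor, larger benefit to a neighbor.
mundane = {"help_neighbor": [-1, 5], "ignore": [0, 0]}

# Extreme case: sacrifice one person for a net welfare gain to four others.
extreme = {"sacrifice_one_for_five": [-100, 30, 30, 30, 30],
           "do_nothing": [0, 0, 0, 0, 0]}

print(utilitarian(mundane), rights_based(mundane))  # both pick help_neighbor
print(utilitarian(extreme), rights_based(extreme))  # the rules diverge
```

In the mundane case both rules agree; only the extreme scenario, where maximizing total welfare requires a catastrophic harm to one individual, exposes the difference in foundations, which is the pattern the post describes.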
One problem with ethics is that it rests on the idea that what is natural isn't necessarily what ethics should prescribe, even if it is approximating your natural instincts. Do you mean like it should conform to natural states or intuition of the people?
I'm not talking about individual instinct, but the consensual instinct (if that means anything). As below, you note that most people consider saving one person at the expense of many lives reprehensible, even though that might be the individual's instinct. Is any of this making sense? What I'm saying is that we all generally agree on what's the right thing to do when we're not in the situation, even if in the situation our instincts overpower our brains. So ethics is what's natural for the group -- the consensus of the group -- instead of for the individual.
For example, is it intuitive or instinctual for most mothers or parents to sacrifice their children for a far greater number of total strangers? Utilitarianism would probably tell you to sacrifice your children for the whole in such a case (a building on fire, for instance).
Again, I'm talking about the general consensus. If they're not your children, is it right to sacrifice them? What do your instincts say -- what do average Joe's instincts say if they're not his children? That's what I'm talking about, and that's what I tend to think moral codes try to approximate.
Do you ignore the principle or go by the instincts? I have seen on this forum that people would consider it morally bankrupt to save one and let the many die, but that's entirely natural and instinctual if you have no connection to them.
What I'm looking for is where the principles come from. In the case that there's a dilemma, obviously principles and instinct are in conflict; else, there wouldn't be a dilemma.
It seems as if you are hinting at the concept of Reflective Equilibrium. I think I understand now.
User avatar
Surlethe
HATES GRADING
Posts: 12267
Joined: 2004-12-29 03:41pm

Post by Surlethe »

Boyish-Tigerlilly wrote:It seems as if you are hinting at the concept of Reflective Equilibrium. I think I understand now.
What is "Reflective Equilibrium"?
A Government founded upon justice, and recognizing the equal rights of all men; claiming higher authority for existence, or sanction for its laws, than nature, reason, and the regularly ascertained will of the people; steadily refusing to put its sword and purse in the service of any religious creed or family is a standing offense to most of the Governments of the world, and to some narrow and bigoted people among ourselves.
F. Douglass
User avatar
Boyish-Tigerlilly
Sith Devotee
Posts: 3225
Joined: 2004-05-22 04:47pm
Location: New Jersey (Why not Hawaii)
Contact:

Post by Boyish-Tigerlilly »

Reflective Equilibrium is a concept devised by John Rawls wherein one seeks a balance among the judgments produced by a collection of ethical doctrines. One modifies each position until they tend to overlap toward the general ethical consensus of all ethical systems.

One tries to find the common ground of ethical positions for everyday, non-extreme interactions.

Of course, a lot of ethicists oppose the concept of reflective equilibrium because they historically track the shifting moral Zeitgeist that you mentioned from Dawkins' book. Non-intuitionists tend to view the collective morality with some skepticism.

Rawls in part created the concept as a counterargument against Utilitarian conceptions of justice, which many ethicists seem to see as at odds with traditional notions of justice.

Reflective Equilibrium
User avatar
Boyish-Tigerlilly
Sith Devotee
Posts: 3225
Joined: 2004-05-22 04:47pm
Location: New Jersey (Why not Hawaii)
Contact:

Post by Boyish-Tigerlilly »

Sorry for the double post, but I forgot something. If you scroll down to the criticisms portion of the document, the primary criticisms of Rawls' ethics often come from Utilitarians.

This isn't a surprise, as Utilitarianism is often regarded as a radical ethical doctrine (the diametrical opposite of another radical doctrine: Objectivism). In many cases, not just a small minority, Utilitarian prescriptions run counter to traditional social ethics. There are certainly areas in which it overlaps, but Utilitarians tend to argue that they overlap because the opposing systems ultimately have social utility. This is one reason why Utilitarianism faces such considerable opposition in professional ethics. It's a lot like the mocked cripple of ethics: many of its prescriptions were, and continue to be, mocked and haggled over, but eventually, with that shifting moral Zeitgeist, become accepted through acculturation.

One side is generally intuitionist, while the other is prescriptivist.