Consequentialism - is it ultimately flawed ethical idea

SLAM: debunk creationism, pseudoscience, and superstitions. Discuss logic and morality.

Moderator: Alyrium Denryle

User avatar
Formless
Sith Marauder
Posts: 4143
Joined: 2008-11-10 08:59pm
Location: the beginning and end of the Present

Re: Consequentialism - is it ultimately flawed ethical idea

Post by Formless »

Bakustra wrote:So let's break this down: scenario n is worse, because "there is less happiness in that society relative to its size". Happiness relative to size is precisely the argument of average welfare. But let's address the argument that you thought you were making, again. Nobody lacks in their basic needs in scenario n. The minimum level of welfare is always met within these societies. The only other way you can go without being outright insane would be to leap to variable-value welfare, which I addressed when responding to Simon.
Wait, let me get this straight. If their basic welfare has been met... what is the problem, again? :wtf:

Think about what you are saying. No famine. No lack of housing. No lack of access to healthcare, including mental health care. Education. Employment. Etc. We don't even have these things fully addressed in real life-- what you are proposing sounds like a socialist utopia, and everything after that is the icing on the cake. Okay, maybe they lack a few privileges, compared to their neighbors, but they aren't unhappy. Did you stop and think about how fucking nice these people in your hypothetical have it? Where is the problem? Purely in the comparison? Okay, comparatively they aren't as happy. But I fail to see how that is a critical weakness in utilitarianism.
"Still, I would love to see human beings, and their constituent organ systems, trivialized and commercialized to the same extent as damn iPods and other crappy consumer products. It would be absolutely horrific, yet so wonderful." — Shroom Man 777
"To Err is Human; to Arrr is Pirate." — Skallagrim
“I would suggest "Schmuckulating", which is what Futurists do and, by extension, what they are." — Commenter "Rayneau"
The Magic Eight Ball Conspiracy.
User avatar
Formless
Sith Marauder
Posts: 4143
Joined: 2008-11-10 08:59pm
Location: the beginning and end of the Present

Re: Consequentialism - is it ultimately flawed ethical idea

Post by Formless »

Nieztchean Uber-Amoeba wrote:This doesn't have anything at all to do with my point, and it's pretty stupid even then, but, uh, congratulations on knowing some words?
What it means is that even if Nigerians are happy with what they have, it does not necessarily mean our society can emulate Nigeria.
Your criticism fails on a few counts: first, I never mentioned humans specifically, so I don't know where you got that, aside from being retarded; I was talking about beings in general. While we might be able to perfectly predict our own human needs - and we can't - a utilitarian system presumably weighs something's value on more than its gene code.
I never said it was our genes that grant us value. Where did you get that impression? Most ethical questions pertain to humans. If you want to talk about animals, fine, then we can talk about animals and how answers pertaining to them differ from answers pertaining to us. How does that change the ethics we use to address human problems? I don't think it does, but go ahead and explain that one to me.
Second, your argument is self-defeating: you are arguing that you can make rational ethical decisions about other people based on your presumed intuitive understanding of their mental states. Even worse, you invoked the Naturalistic Fallacy, presumably because you were going for a hat trick of being wrong. If you are arguing for the power of our minds based on evolution, then surely you are aware that our brains are wired to interpret and make decisions that are best for our survival, not for what is the most accurate or correct insight. Let's use a very, very basic example. Your brain is also naturally good at attaching intentionality to inanimate objects and forces. But I suspect you would agree that making a rational utilitarian decision would not involve weighing the value of the Thunder's spirit; it would require deliberately acting contrary to those intuitions, because Utilitarianism fundamentally rejects intuition as a sound basis for decision-making. I know that your worldview is probably riddled with self-contradictions in all areas, but please refrain from making them so transparent.
You are calling out a fallacy where none exists. I never stated that we must evaluate ethical questions pertinent to humans based on our social instincts alone (though we certainly do, every day); I used that as a counter-argument against the assumption that other human minds are so alien to the individual that he or she cannot have knowledge of them or of how consequences affect them. Whether that knowledge is scientific or intuitive or just practical is beside the point, though I prefer scientific knowledge where I can get it. Is it not that supposed lack of knowledge that underlies why you think Utilitarianism or other Consequentialist systems are infeasible or impossible to work out?

In fact, if we cannot have such knowledge, then deontological ethics are also infeasible. But I digress.
Finally, your first paragraph basically doesn't make any goddamned sense. I'm not arguing against improving your knowledge, I'm arguing that a moral system based on the sum total result of an action you take is a shitty one, because determining that sum total is heinously difficult in a practical situation, to the point of being worthless as a way of governing our actions, and, even more basically, it's theoretically irrevocably flawed and irrational.
How, exactly, is it so hard to understand? Your argument is that the world is too complex to make accurate predictions about the future, and that other humans are too alien from the individual to assign utility values properly. But although we aren't perfect at it, we do it all the time, with enough accuracy to make your criticism seem... weird. Heck, at least two methods of making these predictions easier have been stated in this thread-- Rule Utilitarianism and Virtue Ethics. What do you really think we need before such ethics are possible? And I do hope you manage to come up with a criticism that doesn't hamper all ethics pretty equally, though given your username I guess I can't be disappointed if you don't.
"Still, I would love to see human beings, and their constituent organ systems, trivialized and commercialized to the same extent as damn iPods and other crappy consumer products. It would be absolutely horrific, yet so wonderful." — Shroom Man 777
"To Err is Human; to Arrr is Pirate." — Skallagrim
“I would suggest "Schmuckulating", which is what Futurists do and, by extension, what they are." — Commenter "Rayneau"
The Magic Eight Ball Conspiracy.
Nieztchean Uber-Amoeba
Sith Devotee
Posts: 3317
Joined: 2004-10-15 08:57pm
Location: Regina Nihilists' Guild Party Headquarters

Re: Consequentialism - is it ultimately flawed ethical idea

Post by Nieztchean Uber-Amoeba »

Formless wrote:What it means is that even if Nigerians are happy with what they have, it does not necessarily mean our society can emulate Nigeria.
It means that we don't have a good basis for wanting Nigeria to better emulate, say, a classically Liberal Democracy or Welfare State, nor, vice versa, for making it our goal to construct such a model of society when it is apparent that such a State is not what gives its citizens the most happiness. Not that emulating Nigeria would - but once we can't say what specifically would, what use is the system?
I never said it was our genes that grant us value. Where did you get that impression? Most ethical questions pertain to humans. If you want to talk about animals, fine, then we can talk about animals and how answers pertaining to them differ from answers pertaining to us. How does that change the ethics we use to address human problems? I don't think it does, but go ahead and explain that one to me.
Goddamn it, no. I brought this up in my first post, because you haven't brought up a good reason to be talking about specifically humans or animals at all. Under Utilitarianism, it's just the general happiness of a number of beings. What sort of beings, and what sort of happiness, then? The point is that you need to find a consistent yardstick for what constitutes a being that should be judged and for its worth as a being, and once you do, we still won't be talking about humans, because that still covers an indefinite number of possible beings, and the fact that you can intuitively understand a few of them still won't make Utilitarianism valid.
You are calling out a fallacy where none exists. I never stated that we must evaluate ethical questions pertinent to humans based on our social instincts alone (though we certainly do, every day); I used that as a counter-argument against the assumption that other human minds are so alien to the individual that he or she cannot have knowledge of them or of how consequences affect them. Whether that knowledge is scientific or intuitive or just practical is beside the point, though I prefer scientific knowledge where I can get it. Is it not that supposed lack of knowledge that underlies why you think Utilitarianism or other Consequentialist systems are infeasible or impossible to work out?
Oh, I was confused. I thought you were actually proposing an argument against the epistemic problems with Utilitarianism, but it seems you just wanted to counter the fact that it is difficult or impossible to judge others' mental states well enough to make the best decision for their well-being with 'lol no i bet we can'.
In fact, if we cannot have such knowledge, then deontological ethics are also infeasible. But I digress.
There are barriers for deontological ethics, but it isn't concerned specifically with A. Producing the best possible consequences of any single action you take, or B. Requiring knowledge of all your subjects' mental states, so it comes out of this critique pretty clean.
How, exactly, is it so hard to understand? Your argument is that the world is too complex to make accurate predictions about the future, and that other humans are too alien from the individual to assign utility values properly. But although we aren't perfect at it, we do it all the time, with enough accuracy to make your criticism seem... weird. Heck, at least two methods of making these predictions easier have been stated in this thread-- Rule Utilitarianism and Virtue Ethics. What do you really think we need before such ethics are possible? And I do hope you manage to come up with a criticism that doesn't hamper all ethics pretty equally, though given your username I guess I can't be disappointed if you don't.
To sum up:

1. The fact that we are good at intuiting other human beings and working together for our shared survival is not helpful to consequentialism, since making it a universal system would require that it be just as applicable to beings who are not intelligible to us,

2. 'We are good at using instincts to get along with humans' /= 'We have enough information to make the choice with the most possible happiness for the greatest number of beings, where 'beings' and 'happiness' have yet to be defined.'

How are you not getting this?

And as for your last few sentences there... I don't even... No.
User avatar
Purple
Sith Acolyte
Posts: 5233
Joined: 2010-04-20 08:31am
Location: In a purple cube orbiting this planet. Hijacking satellites for an internet connection.

Re: Consequentialism - is it ultimately flawed ethical idea

Post by Purple »

Let me start with a short defense of utilitarianism. Human welfare can be measured and expressed in terms of an individual's needs and the fulfillment of these, or lack thereof. As in, a person who is not starving is better off than a person who is starving, the existential need for nourishment being fulfilled in the former case. In sociology, the term "need" extends beyond just the physical needs of the body into various categories, including existential needs (nourishment, housing and health care), needs for self-improvement and self-realization, needs for freedom, etc. You can find a short version here: http://en.wikipedia.org/wiki/Fundamental_human_needs

This as I understand is one of the basic premises of Utilitarianism.

Now, one thing I see being done here that I consider wrong is that people seem to forget that not all needs are equal. In fact, most of the critiques presented here seem to implicitly claim quite the contrary. That is, they claim that (to use a simplistic example) a person who can get his favorite brand of candy bar contributes as many positive points as a starving person contributes negative ones, and that therefore the two cancel out. This indeed breeds the many repugnant conclusions, like the one where a small group of super-satisfied people can offset huge starving masses. However, I personally think this is a major oversight and does not, or at least should not, reflect the spirit of utilitarianism.

Something that, as I have said before, is often quite conveniently ignored is that in order for any sort of utilitarian decision making to work, human needs have to be stacked in a distinct and clearly defined order where each level is many orders of magnitude more valuable than those coming after it. The first in this list would be the existential needs for life, nourishment, housing and health care. These need to be many orders of magnitude more valuable than all others, so that one starving person easily outweighs thousands if not millions of ecstatic ones (under perfect conditions it should outweigh an arbitrarily large number of them). Only then can we move on to other things, going down the list slowly, step by step, assigning a weight factor to each along the way, and only reaching things like candy bars or arbitrary freedoms near the far end of the line. What this achieves is that as a person gains access to a new level of happiness (let's call it that), the gain delivers a progressively smaller amount of positive points to the whole. And thus increasing the happiness of any one person only achieves a net positive result once every person in the society has reached the level that person is currently on - in other words, once the net loss from people lacking access to the level he was on before the increment is multiplied by zero.

Once such a list is established, it does away with any sort of repugnant conclusion because of two things. Firstly, it massively stacks the calculation in favor of a society where no one goes hungry, sick or homeless, and continues stacking it, at a progressively decreasing rate, for things further away from those needs, until we reach abstract concepts like the freedom to own your favorite candy bar or whatever. And secondly, it stacks the calculation massively in favor of equal welfare across the board and against extreme values.

It's 4AM so I might not be 100% clear or understandable in my typing. And my examples are admittedly simplistic for the sake of description but I think you will understand the point I am trying to make.
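
To make the weighting concrete, here is a rough sketch in Python. The tiers and the actual weights are made up for illustration; the only property that matters is that each tier dwarfs everything after it by orders of magnitude.

# Illustrative tier weights: each tier dwarfs the ones below it.
TIER_WEIGHTS = {
    "existential": 10**9,       # life, nourishment, housing, health care
    "self_realization": 10**4,  # education, meaningful work, and so on
    "luxury": 1,                # the favorite candy bar and the like
}

def unmet_score(unmet_needs):
    # Weighted "negative points" for one person's unmet needs (0 = all met).
    return sum(TIER_WEIGHTS[tier] for tier in unmet_needs)

def society_score(people):
    # Total weighted lack of fulfillment across the society; lower is better.
    return sum(unmet_score(person) for person in people)

starving = ["existential"]   # lacks a basic existential need
no_candy = ["luxury"]        # lacks only a luxury
print(society_score([starving]))               # 1000000000
print(society_score([no_candy] * 1_000_000))   # 1000000: one starving person still outweighs them all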
It has become clear to me in the previous days that any attempts at reconciliation and explanation with the community here has failed. I have tried my best. I really have. I pored my heart out trying. But it was all for nothing.

You win. There, I have said it.

Now there is only one thing left to do. Let us see if I can sum up the strength needed to end things once and for all.
User avatar
Bakustra
Sith Devotee
Posts: 2822
Joined: 2005-05-12 07:56pm
Location: Neptune Violon Tide!

Re: Consequentialism - is it ultimately flawed ethical idea

Post by Bakustra »

That has nothing to do with the Repugnant Conclusion, Purple.
Invited by the new age, the elegant Sailor Neptune!
I mean, how often am I to enter a game of riddles with the author, where they challenge me with some strange and confusing and distracting device, and I'm supposed to unravel it and go "I SEE WHAT YOU DID THERE" and take great personal satisfaction and pride in our mutual cleverness?
- The Handle, from the TVTropes Forums
User avatar
Purple
Sith Acolyte
Posts: 5233
Joined: 2010-04-20 08:31am
Location: In a purple cube orbiting this planet. Hijacking satellites for an internet connection.

Re: Consequentialism - is it ultimately flawed ethical idea

Post by Purple »

Bakustra wrote:That has nothing to do with the Repugnant Conclusion, Purple.
How so, when I have clearly demonstrated how my method makes said conclusions invalid? Furthermore, my entire point was that the only way to reach such conclusions is to start from very wrong base assumptions, like the assumption that adding happiness to one person up to an arbitrary level is equal to adding the same amount of happiness distributed across the board. Something that is not, and should never be, considered true.
It has become clear to me in the previous days that any attempts at reconciliation and explanation with the community here has failed. I have tried my best. I really have. I pored my heart out trying. But it was all for nothing.

You win. There, I have said it.

Now there is only one thing left to do. Let us see if I can sum up the strength needed to end things once and for all.
User avatar
Feil
Jedi Council Member
Posts: 1944
Joined: 2006-05-17 05:05pm
Location: Illinois, USA

Re: Consequentialism - is it ultimately flawed ethical idea

Post by Feil »

Early morning moral calculus, go!

Yeah, it seems to me that all we need to do is assign some diminishing marginal returns on utility transactions.

m = goodness of a given situation
u = utility possessed by an individual
k = arbitrary constant of proportionality
n = index number assigned to each individual in a population, for purposes of summation

functions shaped like:
dm = k*Σdu_n -> repugnant conclusion?


whereas functions shaped like:
dm = k*Σdu_n/(u_n+du_n) -> no repugnant conclusion?

problem solved?
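
To put rough numbers on the difference (the utility values below are arbitrary; only the sign of dm matters), here is a quick sketch. Ruining one person to hand small gains to many who are already well off comes out positive under the first shape and negative under the second.

def dm_total(changes, k=1.0):
    # First shape: dm = k * sum(du_n). Only the total change matters.
    return k * sum(du for _, du in changes)

def dm_diminishing(changes, k=1.0):
    # Second shape: dm = k * sum(du_n / (u_n + du_n)).
    # Each person's change is weighted by their utility after the change.
    return k * sum(du / (u + du) for u, du in changes)

# Ruin one person (100 -> 10) to hand ten already well-off people +10 each.
action = [(100, -90)] + [(100, +10)] * 10
print(dm_total(action))        # +10.0: the plain sum calls this an improvement
print(dm_diminishing(action))  # about -8.1: the diminishing version calls it bad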
User avatar
K. A. Pital
Glamorous Commie
Posts: 20813
Joined: 2003-02-26 11:39am
Location: Elysium

Re: Consequentialism - is it ultimately flawed ethical idea

Post by K. A. Pital »

Suffering clearly increased, since most people are on the edge of survival. It cannot be "no worse" than Y people with a good standard of living.

Happiness is bullshit, since it is an entirely subjective psychological construct. You can measure physiological damage objectively and reliably, but you can't do the same with psychological preferences (at least with our current understanding of psychology and neurophysiology).
Lì ci sono chiese, macerie, moschee e questure, lì frontiere, prezzi inaccessibile e freddure
Lì paludi, minacce, cecchini coi fucili, documenti, file notturne e clandestini
Qui incontri, lotte, passi sincronizzati, colori, capannelli non autorizzati,
Uccelli migratori, reti, informazioni, piazze di Tutti i like pazze di passioni...

...La tranquillità è importante ma la libertà è tutto!
Assalti Frontali
User avatar
Purple
Sith Acolyte
Posts: 5233
Joined: 2010-04-20 08:31am
Location: In a purple cube orbiting this planet. Hijacking satellites for an internet connection.

Re: Consequentialism - is it ultimately flawed ethical idea

Post by Purple »

@Destructionator XIII.

Thing is, by my formula the act of killing a person (by definition depriving him of the fulfillment of all his needs) produces sufficient negative points that it outweighs infinite gain to infinite people. The whole concept is that a society is never good; it is only less or more bad.

Now, I can't understand a thing Feil said, so I can't comment on it. But I can comment on the rest. Also, I agree with Stas on the suffering issue. Allow me to explain through a simplified version:
  • Each individual has equal value to any other.
  • Each individual has the same basic needs (personal preferences are ignored).
  • Each category of needs has a weight expressed as X to the Nth power, where N is a function of the item's location in the list and X is some number. So item 1 is more valuable than items 2+; item 2 is less valuable than item 1 but more valuable than items 3+; etc.
Since each person, without exception, has all of these needs, we can for the sake of calculation abstract a person as a binary array where each bit represents a different need group. (Yes, I am a programmer, so I represent things through flags. So sue me.) In this representation, 0 indicates that the particular need is fulfilled and 1 that it is not. Now, in order to calculate the moral value of a society, all we need to do is multiply each bit by the value of its assigned need and sum over everyone: for each person, Sum from 1 to N, where N is the total number of need categories, of (1 or 0 x [need group value]). This produces a number that is a numerical representation of the lack of need fulfillment (in other words, suffering) in a society. The lower this number is, the better. An ideal utopia, where all the needs of each individual are met, would therefore score 0, whereas an arbitrarily huge number would indicate the opposite state.

Admittedly, my concepts are designed for social engineering, but they can be adapted to making any sort of decision. All you have to do is look at which needs of how many people you are enabling or disabling the fulfillment of. Extremely simplistic, I know, but it can be a good starting point for someone far more skilled in philosophy than me to develop from.
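
Since I already think in flags, here is a minimal sketch of that calculation in Python; the need list, its ordering and the base X are placeholders, and only the structure matters.

X = 1000  # each need category is X times weightier than the next one down
NEEDS = ["nourishment", "housing", "health care", "education", "candy bars"]
WEIGHTS = [X ** (len(NEEDS) - i) for i in range(len(NEEDS))]  # item 1 heaviest

def person_suffering(flags):
    # flags: one 0/1 per need category, 1 = unfulfilled, as described above.
    return sum(flag * weight for flag, weight in zip(flags, WEIGHTS))

def society_suffering(population):
    # Sum over every person; 0 is the utopia where every need of everyone is met.
    return sum(person_suffering(person) for person in population)

starving = [1, 0, 0, 0, 0]        # an existential need unmet
fed_but_bored = [0, 0, 0, 0, 1]   # lacks only the candy bar
print(society_suffering([starving]))               # 1000**5 = 10**15
print(society_suffering([fed_but_bored] * 10**6))  # 1000 * 10**6 = 10**9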
It has become clear to me in the previous days that any attempts at reconciliation and explanation with the community here has failed. I have tried my best. I really have. I pored my heart out trying. But it was all for nothing.

You win. There, I have said it.

Now there is only one thing left to do. Let us see if I can sum up the strength needed to end things once and for all.
User avatar
Bakustra
Sith Devotee
Posts: 2822
Joined: 2005-05-12 07:56pm
Location: Neptune Violon Tide!

Re: Consequentialism - is it ultimately flawed ethical idea

Post by Bakustra »

Feil wrote:Early morning moral calculus, go!

Yeah, it seems to me that all we need to do is assign some diminishing marginal returns on utility transactions.

m = goodness of a given situation
u = utility possessed by an individual
k = arbitrary constant of proportionality
n = index number assigned to each individual in a population, for purposes of summation

functions shaped like:
dm = k*Σdu_n -> repugnant conclusion?


whereas functions shaped like:
dm = k*Σdu_n/(u_n+du_n) -> no repugnant conclusion?

problem solved?
Not really.
Bakustra wrote:Congratulations, Simon!!! You just discovered the variable-value "solution"! Guess where it leads to? It leads to two different conclusions based on how you set up the math- either you end up with the Sadistic Conclusion, wherein it becomes better to add suffering people rather than happy people to a population at certain points, because at some point the relative harm of adding people with lower positive welfare outweighs the harm of adding a sufferer. If suffering dilutes too, then this extends further and so adding more people in suffering outweighs adding people with lower positive welfare under the right conditions.

Or you end up with an absurd conclusion, where if you take a society with a million people with good welfare and one person who suffers, which is good, and multiply it upon itself, while the proportions do not change, at some point the society becomes bad, because there is no dilution of suffering while positive welfare dilutes. These are inescapable using those means of calculating value, unless you can show a way that doesn't fall victim to these somehow to avoid the disquieting conclusions that they end up with. There are plenty of ways to avoid the Repugnant Conclusion, but most of them are frightening and unintuitive in their own right.
So it only leads to further uncomfortable conclusions which are also nonintuitive.
Purple wrote:
Bakustra wrote:That has nothing to do with the Repugnant Conclusion, Purple.
How so, when I have clearly demonstrated how my method makes said conclusions invalid? Furthermore, my entire point was that the only way to reach such conclusions is to start from very wrong base assumptions, like the assumption that adding happiness to one person up to an arbitrary level is equal to adding the same amount of happiness distributed across the board. Something that is not, and should never be, considered true.
No. You don't understand what I originally posted and your bullshit has to do with welfare distribution, which is assumed to be equal in the arguments leading to the Repugnant Conclusion. Shut the fuck up and read what I posted originally.
Stas Bush wrote:Suffering clearly increased, since most people are on the edge of survival. It cannot be "no worse" than Y people with a good standard of living.

Happiness is bullshit, since it is an entirely subjective psychological construct. You can measure physiological damage objectively and reliably, but you can't do the same with psychological preferences (at least with our current understanding of psychology and neurophysiology).
It's not the "edge of survival", it's the edge of suffering. Nobody suffers, but they are on the edge of it. Your second paragraph seems to suggest that non-physiological sources of harm and suffering are irrelevant to utilitarian calculations, so I will assume that you didn't really mean it.
Invited by the new age, the elegant Sailor Neptune!
I mean, how often am I to enter a game of riddles with the author, where they challenge me with some strange and confusing and distracting device, and I'm supposed to unravel it and go "I SEE WHAT YOU DID THERE" and take great personal satisfaction and pride in our mutual cleverness?
- The Handle, from the TVTropes Forums
User avatar
K. A. Pital
Glamorous Commie
Posts: 20813
Joined: 2003-02-26 11:39am
Location: Elysium

Re: Consequentialism - is it ultimately flawed ethical idea

Post by K. A. Pital »

Destructionator XIII wrote:The best way to minimize suffering is to exterminate all life. The dead feel no pain.
By causing the ultimate amount of suffering? You can't do that without suffering peaking at the point of extermination. So that's a no-go. :lol: Though good try.
Bakustra wrote:It's not the "edge of survival", it's the edge of suffering. Nobody suffers, but they are on the edge of it.
In this case I hardly see a problem. It is not an inferior state if suffering does not occur. How would you judge?
Bakustra wrote:Your second paragraph seems to suggest that non-physiological sources of harm and suffering are irrelevant to utilitarian calculations
Utilitarianism has trouble with that, because psychological preferences are much harder to observe, record and quantify objectively. Psychology is not as easy as physiology. That's why. It is not irrelevant theoretically, but practically, as of now, it is.
Lì ci sono chiese, macerie, moschee e questure, lì frontiere, prezzi inaccessibile e freddure
Lì paludi, minacce, cecchini coi fucili, documenti, file notturne e clandestini
Qui incontri, lotte, passi sincronizzati, colori, capannelli non autorizzati,
Uccelli migratori, reti, informazioni, piazze di Tutti i like pazze di passioni...

...La tranquillità è importante ma la libertà è tutto!
Assalti Frontali
User avatar
Feil
Jedi Council Member
Posts: 1944
Joined: 2006-05-17 05:05pm
Location: Illinois, USA

Re: Consequentialism - is it ultimately flawed ethical idea

Post by Feil »

Expressed in plain English, now that it's not ohgodwhy in the morning:

Suppose that the change of goodness, dm, in a situation is the arbiter of morality.

Option 1: Let the change in goodness of a situation be proportionate to the sum over the entire population of the change in individual utility to each member of a population, du_n. Or,
dm = k*Σ[du_n]

CONCLUSIONS OF OPTION 1:
+Reducing the total utility of one group is immoral unless balanced by increasing the total utility of another group. Note that this would invalidate the Repugnant Conclusion of the first kind, under which a society could tend towards misery by increasing the population while holding utility constant.
+The addition of a new member to the set is a morally null act, because du_n=0 for a member who is added to a set.
+Lowering the total utility of one group is moral if balanced by increasing the total utility of another group. This allows the Repugnant Conclusion of the second kind, the One Village In Paradise result.


Option 2: Let the change in goodness of a situation be proportionate to the sum over the entire population of a more complex formula: the change in individual utility to each individual, du_n, divided by their individual utility after the proposed change, (u_n+du_n). Or,
dm = k*Σ[du_n/(u_n+du_n)]

CONCLUSIONS OF OPTION 2:
+As in Option 1, the RC of the First Kind is invalidated.
+As in Option 1, adding a new member to the set is morally null.
+The RC of the Second Kind is also invalidated: morality is maximized when utility is most evenly distributed.
+If death is defined as removing all of a person's utility, death caused by an action (du_n = -u_n) results in a divide by zero error and must thus be excluded from the set and treated separately. Proposition:

Option 3:
Define set p to hold all those individuals who would be killed or saved from death by an action.
Let n be the set of all individuals in the population which are not members of set p.
dm_1 = k*Σ[du_n/(u_n+du_n)]
dm_2 = (k_2)Σ(du_p)
if dm_2 >= 0, dm = dm_1+dm_2
if dm_2 < 0, action is immoral.

CONCLUSIONS OF OPTION 3:
+RC1 and RC2 are invalidated
+adding a new member to the set is still morally null
+Spock dying to save the Enterprise is good. Killing Spock to give everybody in the world a cup of coffee is evil. Divide-by-zero errors from death are avoided.
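
As a sanity check, here is Option 3 in code, with arbitrary k, k_2 and made-up utility numbers:

def dm_option3(survivors, life_or_death, k=1.0, k2=1.0):
    # survivors: (u_n, du_n) pairs for set n, people neither killed nor saved.
    # life_or_death: du_p values for set p, killed (negative) or saved (positive).
    dm1 = k * sum(du / (u + du) for u, du in survivors)
    dm2 = k2 * sum(life_or_death)
    if dm2 < 0:
        return None, False   # net loss of life: the action is immoral outright
    return dm1 + dm2, True

# Spock (utility 80) dies so that 400 crew members are saved (+80 each):
print(dm_option3([], [-80] + [80] * 400))        # (31920.0, True): good
# Killing Spock so a million comfortable people each gain a trivial +1:
print(dm_option3([(50, 1)] * 1_000_000, [-80]))  # (None, False): immoral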
User avatar
Bakustra
Sith Devotee
Posts: 2822
Joined: 2005-05-12 07:56pm
Location: Neptune Violon Tide!

Re: Consequentialism - is it ultimately flawed ethical idea

Post by Bakustra »

Feil wrote:*snip*
You are seriously misunderstanding the problem. The means of transit from one state to the other are not important to the question. What is important is whether the Garden of Eden, Asimov's Caves of Steel, or a world where everybody has a sustainable equivalent to the First-World standard of living are each equivalent or if one is morally preferable.

And again, weighting welfare shifts to diminish in value as additional members are added to the society generally produces some variant of the Sadistic Conclusion, or else absurd conclusions.
Invited by the new age, the elegant Sailor Neptune!
I mean, how often am I to enter a game of riddles with the author, where they challenge me with some strange and confusing and distracting device, and I'm supposed to unravel it and go "I SEE WHAT YOU DID THERE" and take great personal satisfaction and pride in our mutual cleverness?
- The Handle, from the TVTropes Forums
User avatar
K. A. Pital
Glamorous Commie
Posts: 20813
Joined: 2003-02-26 11:39am
Location: Elysium

Re: Consequentialism - is it ultimately flawed ethical idea

Post by K. A. Pital »

Destructionator XIII wrote:long term prosperity
Of whom? The peak of suffering will signify the obsolescence of all moral measurements, as there'd be no one left to experience suffering or "gain" any sort of "utility". Suicide is not "painless" unless you use a very particular method, and even if you do, most people do not desire to be killed. Going against one's desires is not suffering per se, but since in the end it causes him to die, I guess that qualifies.

If you found a way to end humanity with it being absolutely content with that, sure, you just solved all moral problems. Duh.
Lì ci sono chiese, macerie, moschee e questure, lì frontiere, prezzi inaccessibile e freddure
Lì paludi, minacce, cecchini coi fucili, documenti, file notturne e clandestini
Qui incontri, lotte, passi sincronizzati, colori, capannelli non autorizzati,
Uccelli migratori, reti, informazioni, piazze di Tutti i like pazze di passioni...

...La tranquillità è importante ma la libertà è tutto!
Assalti Frontali
User avatar
Feil
Jedi Council Member
Posts: 1944
Joined: 2006-05-17 05:05pm
Location: Illinois, USA

Re: Consequentialism - is it ultimately flawed ethical idea

Post by Feil »

Bakustra wrote:You are seriously misunderstanding the problem. The means of transit from one state to the other are not important to the question. What is important is whether the Garden of Eden, Asimov's Caves of Steel, or a world where everybody has a sustainable equivalent to the First-World standard of living are each equivalent or if one is morally preferable.
What is that even supposed to mean? Morality is associated with actions, not states of being. Is it evil to be sad? Is happiness heroic? :lol:
User avatar
Bakustra
Sith Devotee
Posts: 2822
Joined: 2005-05-12 07:56pm
Location: Neptune Violon Tide!

Re: Consequentialism - is it ultimately flawed ethical idea

Post by Bakustra »

Feil wrote:
Bakustra wrote:You are seriously misunderstanding the problem. The means of transit from one state to the other are not important to the question. What is important is whether the Garden of Eden, Asimov's Caves of Steel, or a world where everybody has a sustainable equivalent to the First-World standard of living are each equivalent or if one is morally preferable.
What is that even supposed to mean? Morality is associated with actions, not states of being. Is it evil to be sad? Is happiness heroic? :lol:
Well, yes, if you reject the premises of an argument, it often becomes incoherent. But I guess we can't judge whether some state is better, because you said so while being too fucking stupid to read one fucking post - why are people finding this so hard? - so there's that major problem of population ethics rendered irrelevant!
Invited by the new age, the elegant Sailor Neptune!
I mean, how often am I to enter a game of riddles with the author, where they challenge me with some strange and confusing and distracting device, and I'm supposed to unravel it and go "I SEE WHAT YOU DID THERE" and take great personal satisfaction and pride in our mutual cleverness?
- The Handle, from the TVTropes Forums
User avatar
RRoan
Padawan Learner
Posts: 222
Joined: 2005-04-16 09:44pm

Re: Consequentialism - is it ultimately flawed ethical idea

Post by RRoan »

I never understood how the Repugnant Conclusion is actually, you know, repugnant. Their lives are still worth living, they aren't suffering, and they're still happy. That's better than many people have it in real life, so what, exactly, is the issue?
User avatar
Feil
Jedi Council Member
Posts: 1944
Joined: 2006-05-17 05:05pm
Location: Illinois, USA

Re: Consequentialism - is it ultimately flawed ethical idea

Post by Feil »

It's easy to judge whether one state is better or not using any of the formulae I named above, since goodness is easily expressed as:

m = m_initial + Σdm from m_initial to m_final
by definition of dm

As one would expect, better states are those which can be reached from inferior states by moral actions; inferior states are those which can be reached from better states by immoral actions.
User avatar
Bakustra
Sith Devotee
Posts: 2822
Joined: 2005-05-12 07:56pm
Location: Neptune Violon Tide!

Re: Consequentialism - is it ultimately flawed ethical idea

Post by Bakustra »

Feil wrote:It's easy to judge whether one state is better or not using any of the formulae I named above, since goodness is easily expressed as:

m = m_initial + Σdm from m_initial to m_final by definition of dm

As one would expect, better states are those which can be reached from inferior states by moral actions; inferior states are those which can be reached from better states by immoral actions.
So basically, you're trying to argue about something without accepting any of the basic premises of it (namely, that we can measure the total and average utility in a system of people independent of measuring actions). Good on you for being quixotic, I guess.
RRoan wrote:I never understood how the Repugnant Conclusion is actually, you know, repugnant. Their lives are still worth living, they aren't suffering, and they're still happy. That's better than many people have it in real life, so what, exactly, is the issue?
Because most people would think that a world where a million people, comprising all of humanity, each had their own spaceship would be better than a world where a trillion people had the essentials of survival and little else, but this shows that not to be the case - and in many formulations, the latter world is actually better!
Invited by the new age, the elegant Sailor Neptune!
I mean, how often am I to enter a game of riddles with the author, where they challenge me with some strange and confusing and distracting device, and I'm supposed to unravel it and go "I SEE WHAT YOU DID THERE" and take great personal satisfaction and pride in our mutual cleverness?
- The Handle, from the TVTropes Forums
User avatar
Feil
Jedi Council Member
Posts: 1944
Joined: 2006-05-17 05:05pm
Location: Illinois, USA

Re: Consequentialism - is it ultimately flawed ethical idea

Post by Feil »

No, indeed, you cannot define a measurement system without some single-valued function which takes you from one measurement to another.
User avatar
Purple
Sith Acolyte
Posts: 5233
Joined: 2010-04-20 08:31am
Location: In a purple cube orbiting this planet. Hijacking satellites for an internet connection.

Re: Consequentialism - is it ultimately flawed ethical idea

Post by Purple »

Ok, you people beat me with maths. I can't keep up with all this any more. I tried reading Feil's post that supposedly makes things simpler and now my brain hurts. I give up.
It has become clear to me in the previous days that any attempts at reconciliation and explanation with the community here has failed. I have tried my best. I really have. I pored my heart out trying. But it was all for nothing.

You win. There, I have said it.

Now there is only one thing left to do. Let us see if I can sum up the strength needed to end things once and for all.
User avatar
K. A. Pital
Glamorous Commie
Posts: 20813
Joined: 2003-02-26 11:39am
Location: Elysium

Re: Consequentialism - is it ultimately flawed ethical idea

Post by K. A. Pital »

Destructionator XIII wrote:
Stas Bush wrote:Of whom? The peak of suffering will signify the obsoletion of all moral measurements as there'd be no one left to experience suffering or "gain" any sort of "utility".
OK, if you do value happiness in the moral equation, it doesn't (necessarily) lead to the extermination conclusion.

I think I just read too much into the "happiness is bullshit" words.
I said it is merely impractical to universalize happiness, as psychology is understood worse than physiology and it is a lot harder to make a definite statement that something is "happiness" and can be objectively measured. Damage to human tissues can be objectively measured. Disappointment or pleasure, etc., is hard to measure. Theoretically, if it were possible to understand the mechanics of "happiness" at a level where you understand what each person needs to maximize happiness, one could have factored it in.

It is obviously a part of utilitarianism, just a hard-to-use part.
Lì ci sono chiese, macerie, moschee e questure, lì frontiere, prezzi inaccessibile e freddure
Lì paludi, minacce, cecchini coi fucili, documenti, file notturne e clandestini
Qui incontri, lotte, passi sincronizzati, colori, capannelli non autorizzati,
Uccelli migratori, reti, informazioni, piazze di Tutti i like pazze di passioni...

...La tranquillità è importante ma la libertà è tutto!
Assalti Frontali
User avatar
Straha
Lord of the Spam
Posts: 8198
Joined: 2002-07-21 11:59pm
Location: NYC

Re: Consequentialism - is it ultimately flawed ethical idea

Post by Straha »

There are a couple major objections to Consequentialism that I hold, and that I don't think are satisfactorily answered:

First: The (simplified) Heideggerian critique of how consequentialism turns people into cogs in machines, things that can be understood and controlled like electrical circuits (or math equations). Not only does this sort of approach empirically fail according to its own standards (whenever a government has applied it to min-max happiness/welfare, at any rate), but it destroys the essence of being human on a grander scale and turns us all into 'standing reserve' where we are only valued by how well we serve the formula. This enframing of what we are matters, and simplifying living creatures down is complicated and problematic in all its forms.

Second, every consequentialist/utilitarian system of ethics will have massive blind spots with drastic consequences. The ultimate question of who counts as an individual to be valued can never fully be answered; people will always be left outside the system of ethics, and once they are outside the system their loss doesn't matter at all, because they were never counted in the first place. Illegal immigrants, young children, aboriginal populations, the Holocaust, and our treatment of animals are all vivid proof of this in action. Moreover, even when you try to amend for previous oversights it will still inevitably be fucked up; South Africa's, America's, Australia's, and Canada's absolute failure to in any way remedy the injustices committed against their respective aboriginal populations, America's inability to solve the latent racism resulting from centuries of chattel slavery, and the West's schizophrenic approach towards animal reform/welfare are all horrifying proofs of this.

Third, these impacts are inevitable (as Bakustra touched on before). Any system of consequentialist ethics will invariably value certain attributes and populations, and devalue others. These devaluations can, and will, lead to the conclusion that the cost of maintaining the lives of certain people(s) outweighs the benefits of their livelihood. The endpoint of this is what Michael Dillon calls the "Zero point of the Holocaust", which is alarmist but accurate. Moreover any consequentialist system that tries to account for this and changes the initial set-up so as to make this zero-point devaluation impossible (an impossibility, at any rate, see point 2) has already accepted that the system of ethics is inherently flawed and destined for inevitable failure. Rather than use a tragically flawed system we should look elsewhere for our ethical grounding.


In some ways, admittedly, for the present consequentialism will always apply, but it should always be tertiary to other considerations when we are making ethical judgments.
'After 9/11, it was "You're with us or your with the terrorists." Now its "You're with Straha or you support racism."' ' - The Romulan Republic

'You're a bully putting on an air of civility while saying that everything western and/or capitalistic must be bad, and a lot of other posters (loomer, Stas Bush, Gandalf) are also going along with it for their own personal reasons (Stas in particular is looking through rose colored glasses)' - Darth Yan
User avatar
Feil
Jedi Council Member
Posts: 1944
Joined: 2006-05-17 05:05pm
Location: Illinois, USA

Re: Consequentialism - is it ultimately flawed ethical idea

Post by Feil »

Straha wrote:Second, every consequentialist/utilitarian system of ethics will have massive blind spots with drastic consequences. The ultimate question of who counts as an individual to be valued can never fully be answered; people will always be left outside the system of ethics, and once they are outside the system their loss doesn't matter at all, because they were never counted in the first place. Illegal immigrants, young children, aboriginal populations, the Holocaust, and our treatment of animals are all vivid proof of this in action. Moreover, even when you try to amend for previous oversights it will still inevitably be fucked up; South Africa's, America's, Australia's, and Canada's absolute failure to in any way remedy the injustices committed against their respective aboriginal populations, America's inability to solve the latent racism resulting from centuries of chattel slavery, and the West's schizophrenic approach towards animal reform/welfare are all horrifying proofs of this.
Perhaps I do not understand you. You propose that because an ethical system does not work when the person applying it considers some people less than human, it is the ethical system that is flawed?
User avatar
Straha
Lord of the Spam
Posts: 8198
Joined: 2002-07-21 11:59pm
Location: NYC

Re: Consequentialism - is it ultimately flawed ethical idea

Post by Straha »

Feil wrote: Perhaps I do not understand you. You propose that because an ethical system does not work when the person applying it considers some people less than human, the ethical system which is flawed?
You're only getting half of it. What I am saying, in a nutshell, is that any consequentialist approach to ethics will inevitably leave certain groups out of consideration. It's not the person applying it that's the problem, per se, it's that the system forces blinders on the person.
'After 9/11, it was "You're with us or your with the terrorists." Now its "You're with Straha or you support racism."' ' - The Romulan Republic

'You're a bully putting on an air of civility while saying that everything western and/or capitalistic must be bad, and a lot of other posters (loomer, Stas Bush, Gandalf) are also going along with it for their own personal reasons (Stas in particular is looking through rose colored glasses)' - Darth Yan