Consequentialism - is it an ultimately flawed ethical idea?
Moderator: Alyrium Denryle
- SpaceMarine93
- Jedi Knight
- Posts: 585
- Joined: 2011-05-03 05:15am
- Location: Continent of Mu
Consequentialism - is it an ultimately flawed ethical idea?
Some people once told me that Consequentialism, the class of ethical theories which holds that the consequences of one's actions are the ultimate basis for any judgment about the rightness of those actions, is a very dangerous way of thinking, despite its rational approach to assessing how just a given course of conduct is.
Their reasoning is that in many cases, whether the consequences of a certain course of conduct are right or wrong is entirely subjective. Alternatively, it may lead people to commit atrocities in order to achieve a certain goal, under the belief that so long as the end results are good, the means of achieving them are inconsequential. "The ends justify the means," as they say.
After much research, I found two more objections to Consequentialism:
According to philosopher William Gass, Consequentialism is ineffective as a means to judge whether a morally wrong action is morally wrong. He proposes the "obliging stranger" analogy: Let's say an obliging stranger agrees to be shot with a high-caliber rifle, without protection, meaning certain death. Gass claims that the rationale that any moral theory might attempt to give for this being wrong, e.g. that it does not bring about good results, is simply absurd. To Gass, it is simply wrong to shoot a stranger, however obliging, and nothing more can or need be said about it.
Philosopher Bernard Williams further states that such ethical thinking is self-destructive for the person holding it, and alienating: anyone holding such views would have to be very impersonal in their view of their actions, since it is only the consequences, and not who produces them, that matter. Williams argues that this is too demanding of moral agents, since consequentialism demands that they be willing to sacrifice any and all personal projects and commitments in any given circumstance in order to pursue the most beneficent course of action possible.
Even after much consideration, I still could not reach a conclusion about the validity of these arguments, mainly because I could not find any good counter-arguments. Can anyone here suggest a good point supporting Consequentialism?
Life sucks and is probably meaningless, but that doesn't mean there's no reason to be good.
--- The Anti-Nihilist view in short.
- RazorOutlaw
- Padawan Learner
- Posts: 382
- Joined: 2006-06-21 03:21pm
- Location: PA!
Re: Consequentialism - is it an ultimately flawed ethical idea?
I don't have time to get into detail, but I think I can give at least one point supporting Consequentialism over rule-based ethics (deontological ethics, I think).
Deontological ethics can have you support a rule, like "never lie", under any circumstances, i.e., regardless of the consequences, because it's the rule that matters.
Consequentialism will have you look at scenarios where lying has good consequences, like allowing an innocent person who is being pursued by a killer to escape, by telling the killer that the person went down the street instead of into the alley.
I know I'm not covering all scenarios, but ethical work usually advances by coming up with scenarios where a given theory might run into trouble. Hearing that consequentialism's results are/can be subjective is a new one to me.
Sig.
- Alerik the Fortunate
- Jedi Knight
- Posts: 646
- Joined: 2006-07-22 09:25pm
- Location: Planet Facepalm, Home of the Dunning-Krugerites
Re: Consequentialism - is it an ultimately flawed ethical idea?
Some of the same issues plague deontological systems as well. They are ultimately arbitrary. To say "murder is wrong because the moral code says murder is wrong" is at least as arbitrary as "murder is wrong because we value human life, and killing one person will also negatively affect many others." However, the latter case, basing actions on terminal values, gives more flexibility in moral dilemmas than absolute rules do. Plus, you still have to choose a deontological system. Of course, most people inherit some form of one from their culture, but there are lots of cultures with different rules that come into conflict. How is that resolved? Might makes right? Do any of the culturally inherited systems allow that as a rule?

Consequentialism isn't impractical for most people, in the sense that the absolute majority of humans value the same things, though they may rank them differently in terms of priority. Nothing prevents deontological systems from having contradictory rules, and then ad hoc rationalizations have to be made to make a course of action acceptable. It can be mentally taxing to try to calculate every outcome (in fact it is impossible for humans), but we should make a best effort. Nobody lives up to deontological systems perfectly either.
The main part of the problem with "the ends justify the means" is that the full cost of the means is usually not factored into the end result, the real likelihood of the desired result is generally not well estimated, and less destructive alternatives are usually not well developed. Historically, that mindset was applied not in a well-reasoned, scientific way, but as a rationalizing band-aid over irrational fears and historical bigotry. Those who suffered were usually vulnerable, voiceless parts of the population to start with, and couldn't put much input into the moral equations of the rulers, who often did not personally have to suffer the consequences of their choices. Hence eugenics as it was practiced in the first half of the twentieth century.

This is why more subtle versions of utilitarianism have developed, which make allowances both for personal weakness (we aren't readily willing to sacrifice ourselves to help the abstract Greater Good, and any system that expects us to do so at every opportunity is impractical) and for cognitive weakness: we cannot realistically know what the long-term outcome will be with any certainty, so we should avoid committing very dangerous and destructive acts for the Greater Good if we can possibly avoid it. Hence Rule Utilitarianism: because humans suck at math.
Every day is victory.
No victory is forever.
Re: Consequentialism - is it an ultimately flawed ethical idea?
Well, let me provide the strongest argument against utilitarianism and consequentialism in general- the Repugnant Conclusion, which is, simply put, this:
Let's suppose a situation A where there is a population with lives that are very much worth living and everybody is very happy. Now let's suppose a situation A+ where there are two populations: the one in A, and a new population that is less happy, isolated from the A-population, but still with lives worth living. Both populations are the same size. Under utilitarianism, this is not a worse situation than A. So let's now suppose a situation B- where the two populations, still isolated, have been equalized in happiness. The total happiness has increased, because the decrease in the happiness of the original A-population is smaller than the gain in the happiness of the new population; the average happiness of the two groups together is therefore higher than in A+ (though still lower than in A). So let's now suppose a situation B where the populations merge into a single population twice as big as the original, with the same happiness overall as B-. B is, clearly, not worse than A.
This can continue indefinitely until we reach a situation n where each individual person has a life just barely worth living, but there are so many more of them that we can conclude that this is not worse than A, because it was not worse than n-1, which was not worse than n-2, and so on. This is why it's called the Repugnant Conclusion: the end result is disturbing and, indeed, repulsive, because it says that as long as a world on the edge of misery is sufficiently more populated than a world where everyone is happy, they are equally good outcomes. This seems plainly wrong, but most efforts within the utilitarian frame to get away from the conclusion end up in contradictions of their own.
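If it helps to see the arithmetic, here's a minimal sketch; the population sizes and welfare numbers are invented for illustration, and only their ordering matters:
[code]
# Walk the A -> A+ -> B- -> B chain under total utilitarianism.
# All population sizes and welfare levels are invented for illustration.

def totals(groups):
    # groups: list of (population, per-person welfare) pairs
    people = sum(p for p, _ in groups)
    utils = sum(p * w for p, w in groups)
    return utils, utils / people

steps = [
    ("A",  [(10, 100)]),            # one very happy population
    ("A+", [(10, 100), (10, 40)]),  # plus an isolated, less happy group
    ("B-", [(10, 75), (10, 75)]),   # welfare equalized between the groups
    ("B",  [(20, 75)]),             # the two groups merged
]

for name, groups in steps:
    total, avg = totals(groups)
    print(f"{name:2}  total = {total:4}  average = {avg:5.1f}")

# Totals run 1000 -> 1400 -> 1500 -> 1500: no step is "worse" by the
# total measure, yet per-person welfare has fallen from 100 to 75.
# Iterating the same move pushes welfare toward "barely worth living"
# while ever-larger totals keep each step from counting as worse.
[/code]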
Invited by the new age, the elegant Sailor Neptune!
I mean, how often am I to enter a game of riddles with the author, where they challenge me with some strange and confusing and distracting device, and I'm supposed to unravel it and go "I SEE WHAT YOU DID THERE" and take great personal satisfaction and pride in our mutual cleverness?
- The Handle, from the TVTropes Forums
-
- Emperor's Hand
- Posts: 30165
- Joined: 2009-05-23 07:29pm
Re: Consequentialism - is it an ultimately flawed ethical idea?
Re Bakustra:
This strikes me as a fixable problem in the utilitarian's utility function- that your sample utilitarian values increasing the population of the world, independent of other concerns. If we consider "add more people" to be a morally neutral act, the Repugnant Conclusion goes away, because having 100 billion people living on the edge of misery that are just barely preferable to death is no longer better than having 10 billion people living non-miserable lives.
You might respond that this makes genocide ethically neutral. I would reply that it doesn't. "Adding more people" being morally neutral doesn't mean that "killing some people" is morally neutral.
Failing to add more people isn't depriving anyone of anything, it doesn't actively make the world worse from a consequential standpoint. But actively removing people from the world deprives those people of opportunities and freedom, and it hurts the people around them.
So I would argue that this is a good way to avoid the Repugnant Conclusion: to not automatically assume that diluting the standard of living so that more people 'enjoy' less of it will lead to a higher utilitarian positive score. In some conditions, that will be true, but in other conditions, it won't be, so there's no need to take the theory to illogical extremes.
...
On the flip side of the argument (because you can construct an equally repugnant conclusion of the form "one incredibly happy person is worth a million miserable people"), I would introduce the idea of diminishing returns. It is much easier to double the happiness of a person living in ordinary conditions than to double the happiness of someone who already lives in a paradise, so sacrificing the happiness of the many people in Ordinary-land to increase the happiness of the one person in paradise doesn't pay off.
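To illustrate with invented numbers (the square-root function here is just a stand-in for any happiness curve with diminishing returns; it's an assumption, not a measured fact):
[code]
import math

# Assume, purely for illustration, that happiness grows as the square
# root of resources -- any concave curve makes the same point.
def happiness(resources):
    return math.sqrt(resources)

for label, r in (("ordinary", 100), ("paradise", 1_000_000)):
    # Doubling sqrt(r) requires quadrupling r, so the extra resources
    # needed scale with how well-off you already are.
    extra = 4 * r - r
    print(f"{label}: happiness = {happiness(r):6.0f}, "
          f"cost to double it = {extra:,} units")

# Doubling the ordinary person's happiness costs 300 units; doubling the
# paradise-dweller's costs 3,000,000 -- which is why draining Ordinary-land
# to make paradise even nicer doesn't pay off in total happiness.
[/code]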
Re the original post:
One way to provide an argument for X is to consider what happens when we assume X isn't true.
Assume that rule-based deontological ethics are better than consequential ones. Try using them instead.
Don't we run into problems when rules contradict each other? What if our rules are too simplistic to cover the situation we find ourselves in? That's a common source of conflict- say, when our rule against lying clashes with our rule against hurting people's feelings unnecessarily.
When we bear this in mind, Gass's argument just falls apart under its own weight. According to him, "it is simply wrong" to do X, "it is simply wrong" to do Y, but we are forced to choose between doing X and doing Y. The only real option is to pick whichever one is less wrong- X, or Y. And how do you do that under a pure rule-based ethics?
Do you write specific exceptions? Do you say "the rule against Y overrides the rule against X when Special Case #1, or #2, or #3, or..., or #500 apply?" Do you create intricate rules that really will give you the right call in all situations? At what point does the choking complexity of your rules make them impossible to follow?
Is it not so much simpler to look at the consequences of an action and say "this will make bad things happen, so don't do it"? We do that all the time when thinking about issues where ethics aren't at stake.
This space dedicated to Vasily Arkhipov
Re: Consequentialism - is it an ultimately flawed ethical idea?
You're misunderstanding the scenario fundamentally, I'm afraid. X people on the edge of misery aren't better than Y people extraordinarily happy (as long as X is sufficiently greater than Y); they are simply no worse, which in this case means that the outcomes are morally equivalent. It doesn't assume that diluting happiness increases it, but rather that happiness overall isn't decreased by spreading it out. So your solution doesn't really get you away from the conclusion in either direction. Having childbirth be morally neutral doesn't really do it either, since at some point there needs to be some sort of moral negativity associated with the transition in order to avoid the full-on Repugnant Conclusion.
Simon_Jester wrote:
Re Bakustra:
This strikes me as a fixable problem in the utilitarian's utility function- that your sample utilitarian values increasing the population of the world, independent of other concerns. If we consider "add more people" to be a morally neutral act, the Repugnant Conclusion goes away, because having 100 billion people living on the edge of misery that are just barely preferable to death is no longer better than having 10 billion people living non-miserable lives.
You might respond that this makes genocide ethically neutral. I would reply that it doesn't. "Adding more people" being morally neutral doesn't mean that "killing some people" is morally neutral.
Failing to add more people isn't depriving anyone of anything, it doesn't actively make the world worse from a consequential standpoint. But actively removing people from the world deprives those people of opportunities and freedom, and it hurts the people around them.
So I would argue that this is a good way to avoid the Repugnant Conclusion: to not automatically assume that diluting the standard of living so that more people 'enjoy' less of it will lead to a higher utilitarian positive score. In some conditions, that will be true, but in other conditions, it won't be, so there's no need to take the theory to illogical extremes.
...
On the flip side of the argument (because you can construct an equally repugnant conclusion of the form "one incredibly happy person is worth a million miserable people"), I would introduce the idea of diminishing returns. It is much easier to double the happiness of a person living in ordinary conditions than to double the happiness of someone who already lives in a paradise, so sacrificing the happiness of the many people in Ordinary-land to increase the happiness of the one person in paradise doesn't pay off.
Diminishing returns don't really help either, because it's not a question of sacrifice or "one person in paradise is worth a thousand million impoverished souls", but rather "a world where a few lived in paradise is morally equivalent to a world where a number lived happily and to a world where a multitude lived on the edge of misery." It doesn't speak about transitioning between the worlds, though it applies to things like childbirth most heavily if you accept it or try to deny it.
Invited by the new age, the elegant Sailor Neptune!
I mean, how often am I to enter a game of riddles with the author, where they challenge me with some strange and confusing and distracting device, and I'm supposed to unravel it and go "I SEE WHAT YOU DID THERE" and take great personal satisfaction and pride in our mutual cleverness?
- The Handle, from the TVTropes Forums
Re: Consequentialism - is it an ultimately flawed ethical idea?
Clarifications- "happiness" is a bit of a misnomer- "welfare" is a much better term because it's something that's externally measurable.
Invited by the new age, the elegant Sailor Neptune!
I mean, how often am I to enter a game of riddles with the author, where they challenge me with some strange and confusing and distracting device, and I'm supposed to unravel it and go "I SEE WHAT YOU DID THERE" and take great personal satisfaction and pride in our mutual cleverness?
- The Handle, from the TVTropes Forums
- Nieztchean Uber-Amoeba
- Sith Devotee
- Posts: 3317
- Joined: 2004-10-15 08:57pm
- Location: Regina Nihilists' Guild Party Headquarters
Re: Consequentialism - is it an ultimately flawed ethical idea?
Which only brings up another problem with consequentialism - what exactly is this nebulous concept of 'welfare' or 'happiness' that the system is working towards? They aren't the same thing, and it could easily be argued that there's no inherent moral value to material upgrades to the former. One of the statistics that gets tossed around the most in utilitarianism debates is that Nigeria is the nation with the highest happiness among its citizens. Yet clearly most utilitarians would recoil at the thought of an ethics system that would attempt to bring society more into line with what it's like in Nigeria, preferring a Classical Liberal approach that would favour prosperity and longevity for the population.
Bakustra wrote:
Clarifications- "happiness" is a bit of a misnomer- "welfare" is a much better term because it's something that's externally measurable.
Personally, my favourite argument against consequentialism is the Epistemic one, since it so rarely gets brought up. The fact is that it's difficult or almost impossible to know the full consequences of any action you take. This is a large problem in itself, surmountable only by some uncomfortable means. If, for example, you predicate your actions on those beings you know about specifically, you run into the problem of not knowing how similar their minds are to yours, and therefore how to weight them in a Utilitarian measurement of worth. This includes standard ethical dilemmas about how to weight the worth of beings' lives - take any standard problem involving animals, children, the dying or mentally handicapped, etcetera, presuming you're basing your decisions on the quality of the subjects' minds - but also whether you can determine their happiness at all by the argument from analogy.

The most rational Utilitarian would be one that privileges his own welfare above all others, since he can be infinitely more certain of his own mind and what would maximize its happiness than of others', and thus the consequences of his actions, ironically, would be no different than those of a total sociopath.
- Formless
- Sith Marauder
- Posts: 4143
- Joined: 2008-11-10 08:59pm
- Location: the beginning and end of the Present
Re: Consequentialism - is it an ultimately flawed ethical idea?
The word he is looking for is "situational". Of course, if you view ethics as "this behavior is always right or always wrong," then consequentialism is naturally going to appear subjective. In fact, it is using a totally different standard, which would make any such complaint ridiculous on its face.
RazorOutlaw wrote:
I know I'm not covering all scenarios, but ethical work usually advances by coming up with scenarios where a given theory might run into trouble. Hearing that consequentialism's results are/can be subjective is a new one to me.
What a rube. Given no context for why you are shooting someone, of course any consequentialist will say it's wrong to shoot someone. How do you know it is wrong to shoot someone? Because it fucking harms them! Shooting someone who agrees to be shot doesn't change the fact that it harms them; it just indicates that the obliging stranger is mentally ill, or otherwise abnormal for a human being in the modern world.
SpaceMarine93 wrote:
According to philosopher William Gass, Consequentialism is ineffective as a means to judge whether a morally wrong action is morally wrong. He proposes the "obliging stranger" analogy: Let's say an obliging stranger agrees to be shot with a high-caliber rifle, without protection, meaning certain death. Gass claims that the rationale that any moral theory might attempt to give for this being wrong, e.g. that it does not bring about good results, is simply absurd. To Gass, it is simply wrong to shoot a stranger, however obliging, and nothing more can or need be said about it.
Furthermore, he isn't arguing against consequentialism. This is just a "philosopher" in need of a clue. Consequentialists assign an action immoral status not when it results in nothing good, but when it results in harm coming to sentient beings. Otherwise, we would have to say that choosing between drinking cocoa and coffee is immoral regardless of which drink you choose.
In other words, this is a strawman of consequentialist systems, not a valid argument.
And? So? What, convenience to oneself outweighs all other concerns? If you think that, you are a selfish fuck, and have no ethics at all. You aren't a virtue ethicist, you aren't a Deontologist by any standard, you are just a selfish fuck.
SpaceMarine93 wrote:
Philosopher Bernard Williams further states that such ethical thinking is self-destructive for the person holding it, and alienating: anyone holding such views would have to be very impersonal in their view of their actions, since it is only the consequences, and not who produces them, that matter. Williams argues that this is too demanding of moral agents, since consequentialism demands that they be willing to sacrifice any and all personal projects and commitments in any given circumstance in order to pursue the most beneficent course of action possible.
Also, in consequentialism, the moral actor is perfectly permitted to assign himself moral value as well - just no more moral value than anyone else, assuming all other things are equal. For instance, laypeople may be applauded for going into burning buildings to save others, but the job of doing that is still given to trained firefighters with proper firefighting gear, including safety equipment that allows them to get back out of the building unharmed. What your dumbass philosophers have missed is that real sacrifices (and I mean real sacrifices, as opposed to "Wahh, I can't have privilege X!") are weighed on the same side of the scale as any other objective harm that might come to someone. Not everyone is expected to be Jesus; again, another strawman.
That is because you need to actually read what theories like Utilitarianism say before assuming their critics are presenting honest or accurate criticism.
SpaceMarine93 wrote:
Even after much consideration, I still could not reach a conclusion about the validity of these arguments, mainly because I could not find any good counter-arguments. Can anyone here suggest a good point supporting Consequentialism?
My solution to this, rather than Rule Utilitarianism, is to incorporate elements of Aristotelian-style virtue ethics (in fact, logically they share the same premises, and just use a different set of heuristics to conclude what is good or bad). If you practice virtuous behavior every day, eventually the calculations become second nature to you.
Alerik the Fortunate wrote:
It can be mentally taxing to try to calculate every outcome (in fact it is impossible for humans), but we should make a best effort. Nobody lives up to deontological systems perfectly either.
"Still, I would love to see human beings, and their constituent organ systems, trivialized and commercialized to the same extent as damn iPods and other crappy consumer products. It would be absolutely horrific, yet so wonderful." — Shroom Man 777
"To Err is Human; to Arrr is Pirate." — Skallagrim
“I would suggest "Schmuckulating", which is what Futurists do and, by extension, what they are." — Commenter "Rayneau"
"To Err is Human; to Arrr is Pirate." — Skallagrim
“I would suggest "Schmuckulating", which is what Futurists do and, by extension, what they are." — Commenter "Rayneau"
The Magic Eight Ball Conspiracy.
- Formless
- Sith Marauder
- Posts: 4143
- Joined: 2008-11-10 08:59pm
- Location: the beginning and end of the Present
Re: Consequentialism - is it an ultimately flawed ethical idea?
@ Bakustra: Simon isn't misunderstanding. The argument you present assumes that happiness/welfare need not scale with the size of a society, and that the absolute amount of happiness/welfare is the only thing worth measuring. But on an individual level, everyone has the same needs and desires happiness more or less equally. Think of the populations as vehicles of different sizes, with different-sized gas tanks to accommodate them. In this scenario, each of those gas tanks has the same absolute amount of gas in it. Relative to their sizes, though, the larger vehicle's tank is clearly more "empty" than the other, despite holding the same amount of fuel. In the same way, a utilitarian would look at the percentage of people in that society who are living in misery, and immediately conclude that there is less happiness in that society relative to its size. That's not a perfect analogy, of course, but it reveals the measurement error made by the Repugnant Conclusion.
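To put rough numbers on the analogy (the capacities and fuel amounts here are invented purely for illustration):
[code]
# Rough numbers for the gas-tank analogy; capacities and fuel amounts
# are invented for illustration.
small_tank = {"capacity": 50, "fuel": 40}   # the smaller society
large_tank = {"capacity": 500, "fuel": 40}  # ten times the size, same total fuel

for name, tank in (("small", small_tank), ("large", large_tank)):
    fill = tank["fuel"] / tank["capacity"]
    print(f"{name}: absolute fuel = {tank['fuel']}, relative fill = {fill:.0%}")

# Both tanks hold the same absolute "fuel" (total welfare), but the large
# tank is only 8% full versus 80%: measured relative to the society's
# size, the bigger population has far less happiness to go around.
[/code]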
"Still, I would love to see human beings, and their constituent organ systems, trivialized and commercialized to the same extent as damn iPods and other crappy consumer products. It would be absolutely horrific, yet so wonderful." — Shroom Man 777
"To Err is Human; to Arrr is Pirate." — Skallagrim
“I would suggest "Schmuckulating", which is what Futurists do and, by extension, what they are." — Commenter "Rayneau"
"To Err is Human; to Arrr is Pirate." — Skallagrim
“I would suggest "Schmuckulating", which is what Futurists do and, by extension, what they are." — Commenter "Rayneau"
The Magic Eight Ball Conspiracy.
- Formless
- Sith Marauder
- Posts: 4143
- Joined: 2008-11-10 08:59pm
- Location: the beginning and end of the Present
Re: Consequentialism - is it an ultimately flawed ethical idea?
Heh. I think that leans more towards the conclusion that "ignorance is bliss". Nigerians likely do not measure their fortune against people from other nations, but against other Nigerians; and the reverse is also true of people living in first-world nations, who often measure themselves against the higher classes of their society. If nothing else, if you never meet (or are reminded of) people far richer than yourself you can't feel envy of them, and if you never meet (or are reminded of) people far poorer than yourself you will never feel gratitude for being born into relative wealth. Of course, that can change should you ever gain such knowledge.
Nieztchean Uber-Amoeba wrote:
Which only brings up another problem with consequentialism - what exactly is this nebulous concept of 'welfare' or 'happiness' that the system is working towards? They aren't the same thing, and it could easily be argued that there's no inherent moral value to material upgrades to the former. One of the statistics that gets tossed around the most in utilitarianism debates is that Nigeria is the nation with the highest happiness among its citizens. Yet clearly most utilitarians would recoil at the thought of an ethics system that would attempt to bring society more into line with what it's like in Nigeria, preferring a Classical Liberal approach that would favour prosperity and longevity for the population.
Bakustra wrote:
Clarifications- "happiness" is a bit of a misnomer- "welfare" is a much better term because it's something that's externally measurable.
How I hate this criticism. It's so lazy-- I mean, if we applied it to any other intellectual or scientific pursuit it would get chased back into a corner as being terminally stupid. Of course you can never have perfect predictive abilities. That doesn't mean you should stop trying to predict things. It means you should seek to improve your predictive abilities.
Nieztchean Uber-Amoeba wrote:
Personally, my favourite argument against consequentialism is the Epistemic one, since it so rarely gets brought up. The fact is that it's difficult or almost impossible to know the full consequences of any action you take. This is a large problem in itself, surmountable only by some uncomfortable means. If, for example, you predicate your actions on those beings you know about specifically, you run into the problem of not knowing how similar their minds are to yours, and therefore how to weight them in a Utilitarian measurement of worth. This includes standard ethical dilemmas about how to weight the worth of beings' lives - take any standard problem involving animals, children, the dying or mentally handicapped, etcetera, presuming you're basing your decisions on the quality of the subjects' minds - but also whether you can determine their happiness at all by the argument from analogy. The most rational Utilitarian would be one that privileges his own welfare above all others, since he can be infinitely more certain of his own mind and what would maximize its happiness than of others', and thus the consequences of his actions, ironically, would be no different than those of a total sociopath.
Think about it. In order for this argument to work you have to assume something contrary to the facts-- that other human minds are so alien that your every attempt to understand them is at high risk of failing. But instead, humans evolved a remarkable ability to empathize and communicate their feelings with one another, and even other animals. Even without our huge intellectual capacity, we still make these judgements just fine.
Edit: also, I've often found that I don't necessarily understand myself better until I talk to others and get their impressions of me. Indeed, why else do you think people like to come to their friends when they need help? Their friends shouldn't know them better than they know themselves. And yet, it helps.
"Still, I would love to see human beings, and their constituent organ systems, trivialized and commercialized to the same extent as damn iPods and other crappy consumer products. It would be absolutely horrific, yet so wonderful." — Shroom Man 777
"To Err is Human; to Arrr is Pirate." — Skallagrim
“I would suggest "Schmuckulating", which is what Futurists do and, by extension, what they are." — Commenter "Rayneau"
"To Err is Human; to Arrr is Pirate." — Skallagrim
“I would suggest "Schmuckulating", which is what Futurists do and, by extension, what they are." — Commenter "Rayneau"
The Magic Eight Ball Conspiracy.
Re: Consequentialism - is it an ultimately flawed ethical idea?
Nobody is living in misery within the premises of the Repugnant Conclusion. But if you measure average welfare, as you are doing, then you reach another, harsher conclusion: the smaller society is better, for it has more welfare relative to its size. This leads to "one man in paradise" (as shorthand for an arbitrarily few people with an arbitrarily high welfare) as something to aspire to, which is actually worse than the Repugnant Conclusion as I've presented it, which doesn't suggest that state n is something to be aspired to. That's not a "measurement error"; it's one of the simplest arguments made to counter the Repugnant Conclusion, but it fails for the same reasons the Repugnant Conclusion does - it leads to extraordinarily uncomfortable and counter-intuitive ends.
Formless wrote:
@ Bakustra: Simon isn't misunderstanding. The argument you present assumes that happiness/welfare need not scale with the size of a society, and that the absolute amount of happiness/welfare is the only thing worth measuring. But on an individual level, everyone has the same needs and desires happiness more or less equally. Think of the populations as vehicles of different sizes, with different-sized gas tanks to accommodate them. In this scenario, each of those gas tanks has the same absolute amount of gas in it. Relative to their sizes, though, the larger vehicle's tank is clearly more "empty" than the other, despite holding the same amount of fuel. In the same way, a utilitarian would look at the percentage of people in that society who are living in misery, and immediately conclude that there is less happiness in that society relative to its size. That's not a perfect analogy, of course, but it reveals the measurement error made by the Repugnant Conclusion.
It also leads to some others- namely, under this principle, a society of millions who suffer horribly is better than one in which one individual suffers more, because the average is all that matters.
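To put arbitrary numbers on that (a quick sketch; the welfare scale is invented, with negative values meaning suffering):
[code]
# Arbitrary numbers showing the perversity of ranking worlds purely by
# average welfare; negative welfare means suffering.
def average(population):
    return sum(population) / len(population)

millions_suffering = [-10.0] * 1_000_000  # a multitude suffering horribly
one_worse_off = [-10.1]                   # a single person, slightly worse off

print(average(millions_suffering))  # -10.0
print(average(one_worse_off))       # -10.1

# The averaging view ranks the million-sufferer world *above* the
# one-person world, because -10.0 > -10.1, even though it contains
# incomparably more total misery.
[/code]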
People have actually thought about this and many attempts to get around it have failed, so don't treat it with such contempt- it's probably the single most important question of population ethics.
Invited by the new age, the elegant Sailor Neptune!
I mean, how often am I to enter a game of riddles with the author, where they challenge me with some strange and confusing and distracting device, and I'm supposed to unravel it and go "I SEE WHAT YOU DID THERE" and take great personal satisfaction and pride in our mutual cleverness?
- The Handle, from the TVTropes Forums
- Formless
- Sith Marauder
- Posts: 4143
- Joined: 2008-11-10 08:59pm
- Location: the beginning and end of the Present
Re: Consequentialism - is it an ultimately flawed ethical idea?
Intuition is flawed, anyway. But so are your facts. A society below a certain size is simply unable to function, and the possible existence of "one man in paradise" is not supported by the facts of human psychology. We are social animals; one man in paradise would quickly succumb to loneliness or even madness. Give him a family, and he won't die of loneliness-- but he won't have any support to draw upon either. Use your damn head-- the very idea of Utilitarianism is that it is supposed to be scientific, and you are being anything but.
"Still, I would love to see human beings, and their constituent organ systems, trivialized and commercialized to the same extent as damn iPods and other crappy consumer products. It would be absolutely horrific, yet so wonderful." — Shroom Man 777
"To Err is Human; to Arrr is Pirate." — Skallagrim
“I would suggest "Schmuckulating", which is what Futurists do and, by extension, what they are." — Commenter "Rayneau"
"To Err is Human; to Arrr is Pirate." — Skallagrim
“I would suggest "Schmuckulating", which is what Futurists do and, by extension, what they are." — Commenter "Rayneau"
The Magic Eight Ball Conspiracy.
Re: Consequentialism - is it an ultimately flawed ethical idea?
There are two explanations for your behavior. The first is that you've been engaged in vigorous exercise by slamming your head into a brick wall. The other is that you're a low-down dirty rotten coward who's unwilling to actually engage in thought when challenged. It's a pity either way- as the UNCF reminds us, a mind is a terrible thing to waste.
Formless wrote:
Intuition is flawed, anyway. But so are your facts. A society below a certain size is simply unable to function, and the possible existence of "one man in paradise" is not supported by the facts of human psychology. We are social animals; one man in paradise would quickly succumb to loneliness or even madness. Give him a family, and he won't die of loneliness-- but he won't have any support to draw upon either. Use your damn head-- the very idea of Utilitarianism is that it is supposed to be scientific, and you are being anything but.
I bolded the part you either missed or pretended didn't exist. It's shorthand for an arbitrarily small, functioning society that maximizes individual and average welfare. Your premises inevitably lead to the conclusion that this is an ideal to aspire to, as well as to the other conclusion that I mentioned in my post. Deal with it, either by admitting that you were wrong, or by finding some way to avoid this and the other end of the repugnant spectrum.
This leads to "one man in paradise" (as shorthand for an arbitrarily few people with an arbitrarily high welfare) as something to aspire to,
Invited by the new age, the elegant Sailor Neptune!
I mean, how often am I to enter a game of riddles with the author, where they challenge me with some strange and confusing and distracting device, and I'm supposed to unravel it and go "I SEE WHAT YOU DID THERE" and take great personal satisfaction and pride in our mutual cleverness?
- The Handle, from the TVTropes Forums
- Simon_Jester
- Emperor's Hand
- Posts: 30165
- Joined: 2009-05-23 07:29pm
Re: Consequentialism - is it an ultimately flawed ethical idea?
I don't think this really changes my argument.
Bakustra wrote:
You're misunderstanding the scenario fundamentally, I'm afraid. X people on the edge of misery aren't better than Y people extraordinarily happy (as long as X is sufficiently greater than Y); they are simply no worse, which in this case means that the outcomes are morally equivalent. It doesn't assume that diluting happiness increases it, but rather that happiness overall isn't decreased by spreading it out. So your solution doesn't really get you away from the conclusion in either direction. Having childbirth be morally neutral doesn't really do it either, since at some point there needs to be some sort of moral negativity associated with the transition in order to avoid the full-on Repugnant Conclusion.
Diminishing returns don't really help either, because it's not a question of sacrifice or "one person in paradise is worth a thousand million impoverished souls", but rather "a world where a few lived in paradise is morally equivalent to a world where a number lived happily and to a world where a multitude lived on the edge of misery." It doesn't speak about transitioning between the worlds, though it applies to things like childbirth most heavily if you accept it or try to deny it.
The Repugnant Conclusion comes in two versions, as I see it.
Version 1:
Let State X have 1000 billion people, each of whose lives contains 0.01 utils of happiness.
Let State Y have 10 billion people, each of whose lives contains 1 util of happiness.
Each state contains 10 billion utils of happiness. Therefore, they are morally equivalent.
Version 2:
Let State X have 1 person with 1000000 utils, and 1000000 people with 0.01 utils each.
Let State Y have 1000001 people with 1.01 utils each (rounded to three significant figures).
Each state contains 1.01 million utils of happiness. Therefore, they are morally equivalent.
Both these conclusions are repugnant, and both are perverse results of the same simple assumption: the claim that we can superpose different quantities of utils evenly in different people's hands indefinitely - that all the weighting functions we use to judge how much a given outcome is worth are linear.
That is, giving ten people a million utils each is morally equivalent to ten million people with one util each. Giving ten people two million utils is (under this kind of calculation) actually better than giving ten million people one util each.
Now, if this leads us to perverse conclusions, perhaps the problem is in the way we tally the value of moral outcomes. I would argue that Formless has got my position about right- the problem is that we're calculating the value of State X and State Y incorrectly.
A correct calculation would tend towards a higher grade of justice- the whole premise of consequentialism is that if following the rules leads to bad consequences, we're using the wrong rules and need better ones.
A correct calculation would look at Version 1 and conclude "This is the high-population limit, where happiness is spread thin throughout the population. In this case, doubling the number of relatively-happy people and diluting a constant amount of happiness among them is not an improvement."
A correct calculation would look at Version 2 and conclude "This is the low-population limit, where happiness is concentrated among a few people and very little is left for everyone else. In this case, doubling the number of relatively-happy people and diluting a constant amount of happiness among them, at the expense of the existing happy people, is an improvement."
Those claims are not mutually exclusive. They just mean that the function by which we work out the utility-function-value of a society by counting up the amount of happiness is not linear: 1 person with 1000 utils is not necessarily equivalent to 1000 people with one util each, who are in turn not necessarily equivalent to a million people with 0.001 util each.
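To make the point concrete, here is a minimal sketch of one possible nonlinear weighting. The S-shaped function below is an arbitrary choice of mine, made purely for illustration - nothing in utilitarian theory privileges it, and (per the caveat below) it doubtless has pathologies of its own:
[code]
# Compare the naive linear aggregation used in Versions 1 and 2 with one
# arbitrary nonlinear alternative. The S-shaped weighting f(u) = u^2/(1+u^2)
# (convex near zero, saturating for large u) is purely illustrative.

def linear_score(groups):
    # groups: list of (population, per-person utils) pairs
    return sum(pop * u for pop, u in groups)

def weighted_score(groups):
    # Lives barely worth living count for almost nothing per util, while
    # utils piled onto one already-blissful person saturate.
    return sum(pop * (u * u) / (1 + u * u) for pop, u in groups)

version1 = {"X": [(10**12, 0.01)], "Y": [(10**10, 1.0)]}
version2 = {"X": [(1, 10**6), (10**6, 0.01)], "Y": [(10**6 + 1, 1.01)]}

for label, states in (("Version 1", version1), ("Version 2", version2)):
    for name, groups in states.items():
        print(f"{label} {name}: linear = {linear_score(groups):,.0f}  "
              f"weighted = {weighted_score(groups):,.0f}")

# The linear sums tie within each version (up to the rounding noted
# above) -- the repugnant equivalences. The weighted score prefers Y both
# times: in Version 1 it refuses to reward diluting welfare across 1000
# billion near-empty lives, and in Version 2 it prefers spreading utils
# evenly over leaving one person in paradise -- the two verdicts argued
# for above.
[/code]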
Now, if I screw up the function, I may wind up with a function that says, in some situations, that the most ethical thing to do is to kill off people in the name of making society a paradise for the people who are left. If that itself becomes a repugnant conclusion, then that means I screwed up, and need to start over. Which is kind of the point of this philosophy- if you have a rule that's telling you to do something wrong, you need to start over and come up with new rules.
So I don't deny that a repugnant conclusion can arise from doing a utilitarian calculation. But if you find one, it's not a sign that utilitarianism is inherently bad, it's a sign that you did the math wrong.
This space dedicated to Vasily Arkhipov
Re: Consequentialism - is it an ultimately flawed ethical idea?
Did you even read the reasons why determining consequences is a tricky business? It's not just a matter of heightening the resolution of your magical consequence machine or whatever. The sheer complexity of the world around us would seem to make it almost impossible. Your analogy with scientific endeavour is laughable. You state that if this reasoning were applied to those fields, it would get laughed at - which would be swell if the trial-and-error approach to finding out about the universe were comparable to actively shaping it.
Formless wrote:
How I hate this criticism. It's so lazy-- I mean, if we applied it to any other intellectual or scientific pursuit it would get chased back into a corner as being terminally stupid. Of course you can never have perfect predictive abilities. That doesn't mean you should stop trying to predict things. It means you should seek to improve your predictive abilities.
Think about it. In order for this argument to work you have to assume something contrary to the facts-- that other human minds are so alien that your every attempt to understand them is at high risk of failing. But instead, humans evolved a remarkable ability to empathize and communicate their feelings with one another, and even other animals. Even without our huge intellectual capacity, we still make these judgements just fine.
Edit: also, I've often found that I don't necessarily understand myself better until I talk to others and get their impressions of me. Indeed, why else do you think people like to come to their friends when they need help? Their friends shouldn't know them better than they know themselves. And yet, it helps.
You... vastly overestimate our ability to understand others, particularly at the scope consequentialism would demand. If nothing else, this is the underlying premise of every sitcom we have uncovered since the paleolithic era.
Jupiter Oak Evolution!
- Formless
- Sith Marauder
- Posts: 4143
- Joined: 2008-11-10 08:59pm
- Location: the beginning and end of the Present
Re: Consequentialism - is it an ultimately flawed ethical idea?
And out come the Ad Hominim attacks. Wow, that was, like, one post? Amazing, Bakustra. How quick you are to discard honesty. I did address your argument, you just failed to put two and two together. So read, asshole:
Bakustra wrote:
There are two explanations for your behavior. The first is that you've been engaged in vigorous exercise by slamming your head into a brick wall. The other is that you're a low-down dirty rotten coward who's unwilling to actually engage in thought when challenged. It's a pity either way- as the UNCF reminds us, a mind is a terrible thing to waste.
*snip bullshit*
I bolded the part you either missed or pretended didn't exist. It's shorthand for an arbitrarily small, functioning society that maximizes individual and average welfare. Your premises inevitably lead to the conclusion that this is an ideal to aspire to, as well as to the other conclusion that I mentioned in my post. Deal with it, either by admitting that you were wrong, or finding some way to avoid this and the other end of the repugnant spectrum.
A human society (especially a modern one, with modern utilities, architecture, and logistics) cannot function below a certain number of individuals. Even if we go back to the stone age, you need a minimum population to keep the species alive. So the idea that there can be a "one man living in paradise" even if we say one man isn't one man but a small group of people (*facepalm*), you are still ignoring the facts in favor of a hipothetical senario. Which misses the whole goddamn point of utilitarianism.
Furthermore, I love how you automatically disregard any method of measuring happiness besides averages. You realize that there are other statistics that people might find important, right? For instance, people analyze the happiness quotient of different nations because they are also interested in the distribution of happiness-- that way, we know where to direct our efforts to improve the world. Averaging the whole world together can be useful, but you also lose large swaths of important information in the process. That too is a utilitarian idea, but you obviously don't care about any form of consequentialism besides the one in your head.
"Still, I would love to see human beings, and their constituent organ systems, trivialized and commercialized to the same extent as damn iPods and other crappy consumer products. It would be absolutely horrific, yet so wonderful." — Shroom Man 777
"To Err is Human; to Arrr is Pirate." — Skallagrim
“I would suggest "Schmuckulating", which is what Futurists do and, by extension, what they are." — Commenter "Rayneau"
"To Err is Human; to Arrr is Pirate." — Skallagrim
“I would suggest "Schmuckulating", which is what Futurists do and, by extension, what they are." — Commenter "Rayneau"
The Magic Eight Ball Conspiracy.
Re: Consequentialism - is it an ultimately flawed ethical idea?
Formless, I hate to tell you this pal, but that's not what an ad hominem actually is. I just thought you'd like to know, you being such a learned scholar of rhetoric.
Jupiter Oak Evolution!
- Formless
- Sith Marauder
- Posts: 4143
- Joined: 2008-11-10 08:59pm
- Location: the beginning and end of the Present
Re: Consequentialism - is it an ultimately flawed ethical idea?
So it's all right to just shrug and say "we can't predict anything, everybody go home"?
Zablorg wrote:
Did you even read the reasons why determining consequences is a tricky business? It's not just a matter of heightening the resolution of your magical consequence machine or whatever. The sheer complexity of the world around us would seem to make it almost impossible. Your analogy with scientific endeavour is laughable. You state that if this reasoning were applied to those fields, it would get laughed at - which would be swell if the trial-and-error approach to finding out about the universe were comparable to actively shaping it.
You... vastly overestimate our ability to understand others, particularly at the scope consequentialism would demand. If nothing else, this is the underlying premise of every sitcom we have uncovered since the paleolithic era.
Fuck that, the world isn't that goddamn complex. You wouldn't be able to live even one day in the world if you held to that attitude. Furthermore, you act like we have to start from a state of no knowledge. Except that we already have huge amounts of knowledge passed down by culture to draw upon, meaning you don't have to build up a personal experience base to start from when making an ethical code to live by.
In other words, the existence of the unpredictable does not invalidate the existence of the predictable. That a given child might turn out to be the next Hitler if you save him from a burning vehicle doesn't mean you should let that override the fact that he's in a burning vehicle, with obvious consequences right there in front of you.
Eh, whatever, he still ignored a key point I made. The only reason he did so, so far as I can tell, is because he has, as usual, gotten worked up about the flaws he perceives in his opponents.
Zablorg wrote:
Formless, I hate to tell you this pal, but that's not what an ad hominem actually is. I just thought you'd like to know, you being such a learned scholar of rhetoric.
Or maybe not, but if he likes to push other people's buttons he can have a taste of his own medicine.
"Still, I would love to see human beings, and their constituent organ systems, trivialized and commercialized to the same extent as damn iPods and other crappy consumer products. It would be absolutely horrific, yet so wonderful." — Shroom Man 777
"To Err is Human; to Arrr is Pirate." — Skallagrim
“I would suggest "Schmuckulating", which is what Futurists do and, by extension, what they are." — Commenter "Rayneau"
"To Err is Human; to Arrr is Pirate." — Skallagrim
“I would suggest "Schmuckulating", which is what Futurists do and, by extension, what they are." — Commenter "Rayneau"
The Magic Eight Ball Conspiracy.
Re: Consequentialism - is it ultimately flawed ethical idea
Simon_Jester wrote:

Congratulations, Simon!!! You just discovered the variable-value "solution"! Guess where it leads? To two different conclusions, depending on how you set up the math. Either you end up with the Sadistic Conclusion, wherein at certain points it becomes better to add suffering people rather than happy people to a population, because at some point the relative harm of adding people with lower positive welfare outweighs the harm of adding a sufferer. If suffering dilutes too, then this extends further, and adding more people in suffering outweighs adding people with lower positive welfare under the right conditions.

Or you end up with an absurd conclusion: take a society with a million people with good welfare and one person who suffers, which is good, and multiply it upon itself; while the proportions do not change, at some point the society becomes bad, because there is no dilution of suffering while positive welfare dilutes. These conclusions are inescapable under those means of calculating value, unless you can show some way of computing it that avoids both. There are plenty of ways to avoid the Repugnant Conclusion, but most of them are frightening and unintuitive in their own right.
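Both halves of this can be checked with toy arithmetic. The sketch below (Python) uses plain average welfare as the simplest stand-in for a size-relative value function in the first part, and a bounded positive term (so positive welfare "dilutes" while suffering does not) in the second; all the numbers and the `capped_value` function are illustrative assumptions, not anyone's canonical formalization.

```python
import math

# Part 1: the Sadistic Conclusion under a size-relative (average) measure.
def average_value(welfares):
    return sum(welfares) / len(welfares)

base = [10.0] * 1000                # 1000 people at high welfare
option_a = base + [1.0] * 1000      # add many people barely worth living
option_b = base + [-5.0] * 10       # add a few outright sufferers

print(average_value(option_a))      # 5.5
print(average_value(option_b))      # ~9.85: the measure prefers adding
                                    # sufferers over barely-happy people

# Part 2: positive welfare "dilutes" (its contribution saturates) while
# suffering accumulates linearly; now scale a good society up unchanged.
def capped_value(positive_total, suffering_total, cap=1e7):
    positive = cap * (1 - math.exp(-positive_total / cap))  # saturates at cap
    return positive - suffering_total                       # no dilution here

for k in (1, 10, 1_000_000):
    # k copies of: a million people at welfare 10, one sufferer at -50
    print(k, capped_value(k * 1_000_000 * 10.0, k * 50.0))
# k=1 and k=10 come out strongly positive; by k=1,000,000 the linear
# suffering term has outgrown the capped positive term and the total is
# negative, even though the society's proportions never changed.
```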
Formless wrote:
Bakustra wrote: There are two explanations for your behavior. The first is that you've been engaged in vigorous exercise by slamming your head into a brick wall. The other is that you're a low-down dirty rotten coward who's unwilling to actually engage in thought when challenged. It's a pity either way- as the UNCF reminds us, a mind is a terrible thing to waste.
And out come the Ad Hominim attacks. Wow, that was, like, one post? Amazing, Bakustra. How quick you are to discard honesty. I did address your argument, you just failed to put two and two together. So read, asshole:
*snip bullshit*
A human society (especially a modern one, with modern utilities, architecture, and logistics) cannot function below a certain number of individuals. Even if we go back to the stone age, you need a minimum population to keep the species alive. So with the idea that there can be a "one man living in paradise", even if we say one man isn't one man but a small group of people (*facepalm*), you are still ignoring the facts in favor of a hipothetical senario [sic]. Which misses the whole goddamn point of utilitarianism.
Furthermore, I love how you automatically disregard any method of measuring happiness besides averages. You realize that there are other statistics that people might find important, right? For instance, people analyze the happiness quotient of different nations because they are also interested in the distribution of happiness-- that way, we know where to direct our efforts to improve the world. Averaging the whole world together can be useful, but you also lose large swaths of important information in the process. That too is a utilitarian idea, but you obviously don't care about any form of consequentialism besides the one in your head.

Hey, guess what, dumbass? Those may be "ad hominim" attacks, but they're not argumenta ad hominem, since I am merely expressing incredulity at your inability or unwillingness to accept the concept of a "thought experiment". Your use of patronizing analogies is incompatible with this, since you're unwilling to accept anything other than literalism from the people you're arguing with. Similarly, you're apparently unwilling to accept the use of shorthand to denote things.

I bolded the part you either missed or pretended didn't exist. It's shorthand for an arbitrarily small, functioning society that maximizes individual and average welfare. Your premises inevitably lead to the conclusion that this is an ideal to aspire to, as well as to the other conclusion that I mentioned in my post. Deal with it, either by admitting that you were wrong, or by finding some way to avoid this and the other end of the repugnant spectrum.
Secondly, you fucker, you presented an argument for average welfare with your gas tank analogy, so don't back away from it now while claiming that I'm dishonest for insulting you. The rest of your post is really just red herrings that underline your pathetic attempts to avoid engaging with the actual argument, babbling about welfare distribution (which is assumed to be equitable in the original argument), attempting to squirt ink to get away. Unfortunately, Formless, octopi and squid are arguably cute-to-adorable, and you are little more than annoying and putrid. Please go away if you're not willing to work with the thought experiment.
Your "argument" is really just "I, Formless, am either literally incapable of parsing figurative speech and interpreting simple shorthand, or else am refusing to do so for reasons unfathomable to other human beings" (which, ironically, sinks your other argument). I'll respond to arguments when you make a coherent one, rather than a load of cowardly babble.
Invited by the new age, the elegant Sailor Neptune!
I mean, how often am I to enter a game of riddles with the author, where they challenge me with some strange and confusing and distracting device, and I'm supposed to unravel it and go "I SEE WHAT YOU DID THERE" and take great personal satisfaction and pride in our mutual cleverness?
- The Handle, from the TVTropes Forums
- Starglider
- Miles Dyson
- Posts: 8709
- Joined: 2007-04-05 09:44pm
- Location: Isle of Dogs
- Contact:
Re: Consequentialism - is it ultimately flawed ethical idea
Simon Jester & Formless are of course correct; you can express any consistent system of morality as a utility function as long as you are careful enough about specifying it, and in fact this is the only globally consistent way to specify a goal system. This exercise just demonstrates the point that designing a good utility function for an all-powerful genie is an extremely difficult task, which is the basis of the Friendly AI Problem. Non-utilitarian systems are usually intransitive and produce chaotic and arbitrary results as the power of the moral agent is scaled up. Predictive power is a canard; a rational (Bayesian) agent will still always achieve better results with pure utilitarianism than a (static) rule-based approximation, because expected utility inherently takes predictive limitations into account. That said of course utilitarianism is not practical for humans for day to day decisions, as we are hardly optimal probablistic reasoners.
Re: Consequentialism - is it ultimately flawed ethical idea
Formless wrote:
"Formless, I hate to tell you this pal, but that's not what an ad hominem actually is. I just thought you'd like to know, you being such a learned scholar of rhetoric."
Eh, whatever, he still ignored a key point I made. The only reason he did so, so far as I can tell, is because he has, as usual, gotten worked up about the flaws he perceives in his opponents. Or maybe not, but if he likes to push other people's buttons he can have a taste of his own medicine.

Sure, resort to Ad Hominem attacks. Stay classy, asshole.
Jupiter Oak Evolution!
- Formless
- Sith Marauder
- Posts: 4143
- Joined: 2008-11-10 08:59pm
- Location: the beginning and end of the Present
Re: Consequentialism - is it ultimately flawed ethical idea
@ Bakustra: I made an argument for fulfilling potential happiness, you complete moron, based on the fact that every individual human being requires a minimum level of happiness/welfare. If every individual requires a minimum level of welfare/happiness, so too do societies require a minimum level of welfare/happiness, just as different vehicles have different gas requirements, and different-sized gas tanks as a result. It isn't my problem you failed to read and comprehend what I said, nor is it my problem that you like to fuck straw men in the ass.
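One way to cash the gas-tank analogy out numerically, as a minimal sketch (Python): treat a society of n people as "needing" about n times some per-person minimum, the way a bigger vehicle needs a bigger tank. `NEED` and all the numbers are illustrative assumptions, not anything specified in the post.

```python
NEED = 10.0                        # assumed minimum welfare per person

def fullness(total_welfare, population):
    """Fraction of the society's welfare 'tank' that is actually filled."""
    return total_welfare / (population * NEED)

# The same absolute quantity of welfare in two different-sized societies:
print(fullness(10_000.0, 1_000))   # 1.0: the small society's tank is full
print(fullness(10_000.0, 10_000))  # 0.1: the large society's is mostly empty
```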
"Still, I would love to see human beings, and their constituent organ systems, trivialized and commercialized to the same extent as damn iPods and other crappy consumer products. It would be absolutely horrific, yet so wonderful." — Shroom Man 777
"To Err is Human; to Arrr is Pirate." — Skallagrim
“I would suggest "Schmuckulating", which is what Futurists do and, by extension, what they are." — Commenter "Rayneau"
"To Err is Human; to Arrr is Pirate." — Skallagrim
“I would suggest "Schmuckulating", which is what Futurists do and, by extension, what they are." — Commenter "Rayneau"
The Magic Eight Ball Conspiracy.
Re: Consequentialism - is it ultimately flawed ethical idea
Formless wrote: @ Bakustra: I made an argument for fulfilling potential happiness, you complete moron, based on the fact that every individual human being requires a minimum level of happiness/welfare. It isn't my problem you failed to read what I said, nor is it my problem that you like to fuck straw men in the ass.

Jesus Christ, Formless, if you really think that I'm trying to "push your buttons", the way to "win" is- get this- not lose your temper.
Now, your argument is as follows:
Formless wrote: The argument you present assumes that happiness/welfare need not scale with the size of a society, and that an absolute amount of happiness/welfare is the only thing that is worth measuring. But on an individual level, everyone has the same needs and desires happiness more or less equally. Think of the populations as vehicles of different sizes, with different-sized gas tanks to accommodate them. In this scenario, each of those gas tanks has the same absolute amount of gas in it. Relative to their sizes, though, the larger vehicle's tank is clearly more "empty" than the other, despite having the same amount of fuel in it. In the same way, a utilitarian would look at the percentage of people in that society that are living in misery, and immediately conclude that there is less happiness in that society relative to its size. That's not a perfect analogy, of course, but it reveals the measurement error made by the Repugnant Conclusion.

So let's break this down: scenario n is worse because "there is less happiness in that society relative to its size". Happiness relative to size is precisely the argument of average welfare. But let's address the argument that you thought you were making, again. Nobody lacks for their basic needs in scenario n. The minimum level of welfare is always met within these societies. The only other way you can go without being outright insane would be to leap to variable-value welfare, which I addressed when responding to Simon.
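Bakustra's reduction here is easy to verify mechanically: dividing total welfare by (population times per-person need) is just average welfare divided by a constant, so it ranks societies exactly as average welfare does. A minimal check, reusing the same illustrative `NEED` assumption as the earlier sketch:

```python
import math

NEED = 10.0                                 # assumed per-person requirement

def fullness(welfares):
    return sum(welfares) / (len(welfares) * NEED)

def average(welfares):
    return sum(welfares) / len(welfares)

society_a = [10.0] * 1000 + [2.0] * 50      # mostly well-off, a few low
society_b = [7.0] * 5000                    # uniformly middling

for s in (society_a, society_b):
    # fullness is average welfare rescaled by the constant 1/NEED
    assert math.isclose(fullness(s), average(s) / NEED)

print(fullness(society_a) > fullness(society_b))   # True
print(average(society_a) > average(society_b))     # True: same ranking
```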
Invited by the new age, the elegant Sailor Neptune!
I mean, how often am I to enter a game of riddles with the author, where they challenge me with some strange and confusing and distracting device, and I'm supposed to unravel it and go "I SEE WHAT YOU DID THERE" and take great personal satisfaction and pride in our mutual cleverness?
- The Handle, from the TVTropes Forums
- Sith Devotee
- Posts: 3317
- Joined: 2004-10-15 08:57pm
- Location: Regina Nihilists' Guild Party Headquarters
Re: Consequentialism - is it ultimately flawed ethical idea
Formless wrote: Heh. I think that leans more towards the conclusion "ignorance is bliss". Nigerians likely do not measure their fortune against people from other nations, but against other Nigerians; and the reverse is also true of people living in first-world nations, who often measure themselves against the higher classes of their society. If nothing else, if you never meet (or are reminded of) people far richer than yourself, you can't feel envy of them, and if you never meet (or are reminded of) people far poorer than yourself, you will never feel gratitude for being born into relative wealth. Of course, that can change should you ever gain such knowledge.

This doesn't have anything at all to do with my point, and it's pretty stupid even then, but, uh, congratulations on knowing some words?
Formless wrote: How I hate this criticism. It's so lazy-- I mean, if we applied it to any other intellectual or scientific pursuit it would get chased back into a corner as being terminally stupid. Of course you can never have perfect predictive abilities. That doesn't mean you should stop trying to predict things. It means you should seek to improve your predictive abilities.

Think about it. In order for this argument to work you have to assume something contrary to the facts-- that other human minds are so alien that your every attempt to understand them is at high risk of failing. But instead, humans evolved a remarkable ability to empathize and communicate their feelings with one another, and even with other animals. Even without our huge intellectual capacity, we would still make these judgements just fine.

Edit: also, I've often found that I don't necessarily understand myself better until I talk to others and get their impressions of me. Indeed, why else do you think people like to come to their friends when they need help? Their friends shouldn't know them better than they know themselves. And yet, it helps.

Your criticism fails on a few counts. First, I never mentioned humans specifically; I was talking about beings in general, so I don't know where you got that, aside from being retarded. While we might be able to perfectly predict human needs - and we can't - a utilitarian system presumably weighs something's value on more than its gene code. Second, your argument is self-defeating: you are arguing that you can make rational ethical decisions about other people based on your presumed intuitive understanding of their mental states. Even worse, you invoked the Naturalistic Fallacy, presumably because you were going for a hat trick of being wrong. If you are arguing for the power of our minds based on evolution, then surely you are aware that our brains are wired to interpret and make decisions that are best for our survival, not for what is the most accurate or correct insight. Let's use a very, very basic example. Your brain is also naturally good at attaching intentionality to inanimate objects and forces, yet I suspect you would agree that making a rational utilitarian decision would not involve weighing the value of the Thunder's spirit; it would require deliberately acting contrary to those intuitions, because Utilitarianism fundamentally rejects intuition as a sound basis for decision-making. I know that your worldview is probably riddled with self-contradictions in all areas, but please refrain from making them so transparent.
Finally, your first paragraph basically doesn't make any goddamned sense. I'm not arguing against improving your knowledge; I'm arguing that a moral system based on the sum total result of an action you take is a shitty one, because determining that sum total is heinously difficult in any practical situation, to the point of being worthless as a way of governing our actions, and, even more basically, because it's irrevocably flawed and irrational at the theoretical level.