Morality is relative

SLAM: debunk creationism, pseudoscience, and superstitions. Discuss logic and morality.

Moderator: Alyrium Denryle

Broomstick
Emperor's Hand
Posts: 28822
Joined: 2004-01-02 07:04pm
Location: Industrial armpit of the US Midwest

Re: Morality is relative

Post by Broomstick »

Surlethe wrote:After all, in other areas we don't choose the best behavior to fulfill an evolutionary function: for instance, we use birth control even though the evolutionary function of sex is reproduction.
Incorrect. The measure of evolutionary success is successful reproduction; that is, your descendants grow up to have more of the same. If you have a hundred offspring and yet none of them produce offspring of their own, you might as well not have bothered at all. If limiting the number of offspring significantly increases their chances of growing up and reproducing in their own right, then limiting the number of offspring actually is the sounder strategy. Thus, birth control can be a better option, from an evolutionary standpoint, than simply popping out more babies.
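The trade-off can be sketched with a toy calculation (the survival curve below is invented purely for illustration, not measured from any species):

```python
import math

def expected_reproducing_offspring(clutch_size):
    # Per-offspring survival falls as parental care is spread thinner
    # (an arbitrary curve, chosen only to make the trade-off visible).
    survival = math.exp(-clutch_size / 5.0)
    return clutch_size * survival

best = max(range(1, 21), key=expected_reproducing_offspring)
print(best)  # an intermediate brood size beats both extremes
```

With this curve the maximum number of surviving, reproducing offspring comes from a middling brood, not the biggest one.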
A life is like a garden. Perfect moments can be had, but not preserved, except in memory. Leonard Nimoy.

Now I did a job. I got nothing but trouble since I did it, not to mention more than a few unkind words as regard to my character so let me make this abundantly clear. I do the job. And then I get paid.- Malcolm Reynolds, Captain of Serenity, which sums up my feelings regarding the lawsuit discussed here.

If a free society cannot help the many who are poor, it cannot save the few who are rich. - John F. Kennedy

Sam Vimes Theory of Economic Injustice
Simon_Jester
Emperor's Hand
Posts: 30165
Joined: 2009-05-23 07:29pm

Re: Morality is relative

Post by Simon_Jester »

Samuel wrote:
but the human species as a whole does not have to die, unlike the dinosaurs (which, as far as we know, operated instinctually and had little ability to adapt to catastrophes).
You do know humanity almost went extinct in the past? We went through a bottleneck with only about a thousand people surviving to pass on their genes - it is the reason humans have such low genetic diversity.

If we had to face what dinosaurs had to face at that time, there wouldn't be a human race.
Not in the Stone Age, when Mount Toba let go. But I think today we might be able to pull it off... maybe. We've passed a fairly critical threshold of tool use since then.
Broomstick wrote:
Surlethe wrote:After all, in other areas we don't choose the best behavior to fulfill an evolutionary function: for instance, we use birth control even though the evolutionary function of sex is reproduction.
Incorrect. The measure of evolutionary success is successful reproduction; that is, your descendants grow up to have more of the same. If you have a hundred offspring and yet none of them produce offspring of their own, you might as well not have bothered at all. If limiting the number of offspring significantly increases their chances of growing up and reproducing in their own right, then limiting the number of offspring actually is the sounder strategy. Thus, birth control can be a better option, from an evolutionary standpoint, than simply popping out more babies.
Evolution's stock response to overpopulation isn't birth control; it's baby-eating. Or having the alphas within any given group coerce the others into not breeding.

And yes, it's appalling. Think about it, though. Which behavior produces more descendants: having fewer offspring, or killing another organism's offspring?

This has been studied in a variety of ways, including mathematical models; see here
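The models themselves aren't reproduced here, but the selective logic can be sketched with a deliberately crude payoff comparison (all numbers invented; "capacity" stands in for limited food):

```python
def surviving(my_young, total_young, capacity=50):
    # Each juvenile survives with probability capacity/total (capped at 1),
    # a stand-in for competition over limited resources.
    return my_young * min(1.0, capacity / total_young)

n_parents, full_brood = 10, 10

# Everyone shows restraint: 5 young each, total exactly at capacity.
restraint = surviving(5, n_parents * 5)

# One parent keeps its full brood and culls rivals' young back down to
# capacity instead: the total is unchanged, but 10 survivors are its own.
killer = surviving(full_brood, 50)

print(restraint, killer)  # 5.0 10.0: the killer leaves twice as many descendants
```

Under these made-up numbers, the organism that kills others' offspring rather than limiting its own leaves more descendants, which is exactly why selection can favour the appalling option.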
This space dedicated to Vasily Arkhipov
Broomstick
Emperor's Hand
Posts: 28822
Joined: 2004-01-02 07:04pm
Location: Industrial armpit of the US Midwest

Re: Morality is relative

Post by Broomstick »

That's a very simplistic viewpoint, really. Yes, infanticide occurs in nature. It occurs today in civilization, too. The problem with infanticide, though, is that resources have already gone into a baby that's killed - better not to invest in what will be destroyed in the first place. Thus, "intimidation by alphas" is arguably much more sensible from an evolutionary standpoint, as resources are not diverted into dead ends but can be used to sustain adult members of the population.

Beyond that, though, evolution has also developed mechanisms where fertility drops off sharply during stress or starvation, when survival of an infant is likely to be poor. Other species - I'm thinking insects like ants, wasps, and bees - have evolved mechanisms where a large percentage of the population forgoes personal reproduction for the sake of helping to raise up massive numbers of close relatives. Species do not always rely upon death to limit their numbers; limiting offspring does occur time and again. If humans do it consciously as opposed to responding to physical stimuli, so what? We use culture to do many things that animals rely upon instinct for; we evolved as social and cultural animals.
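The insect case has a standard quantitative form, Hamilton's rule: forgoing your own reproduction can still pay, genetically, when relatedness times benefit exceeds cost (r×b > c). A toy check, with invented numbers:

```python
def altruism_favoured(relatedness, benefit, cost):
    # Hamilton's rule: helping kin spreads when r*b > c.
    return relatedness * benefit > cost

# A sterile worker gives up, say, 2 offspring of her own (cost) but helps
# the queen raise 10 extra full sisters; haplodiploid sisters share r = 0.75.
print(altruism_favoured(0.75, 10, 2))  # True: worker sterility can evolve
```

The specific costs and benefits here are placeholders; the point is only that "limiting your own offspring" can be the winning move when the beneficiaries are close kin.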
Darth Wong
Sith Lord
Posts: 70028
Joined: 2002-07-03 12:25am
Location: Toronto, Canada

Re: Morality is relative

Post by Darth Wong »

Alyrium Denryle wrote:
Darth Wong wrote:
Alyrium Denryle wrote:I have not once in this discussion declared any particular system to be correct (because it is based on natural laws). Rather, I am attempting to show how ethics are the result of natural laws, and how, given this, we can understand and describe ethics (rather than justify or prescribe, as to do that WOULD be a naturalistic fallacy).
So you're saying that your entire contribution to this thread has been intentionally irrelevant to the conundrum we are asked to address in the OP? Forgive me for assuming that you were actually trying to say something relevant, instead of basically tossing long-winded red-herrings into the discussion.
The question essentially was, how we deal with the infinite regress of justifying ethical positions. I have given you how I deal with it, by rejecting absolutist ethics in favor of something I think is more tenable.
You did not deal with it at all. You just said that you have no intention of trying to justify anything. Your response to the question of how we justify an ethical system is to say "I won't justify anything. Instead, I'll say that this is where I say that ethics came from, based on a few premises which I won't bother justifying either".
"It's not evil for God to do it. Or for someone to do it at God's command."- Jonathan Boyd on baby-killing

"you guys are fascinated with the use of those "rules of logic" to the extent that you don't really want to discussus anything."- GC

"I do not believe Russian Roulette is a stupid act" - Embracer of Darkness

"Viagra commercials appear to save lives" - tharkûn on US health care.

http://www.stardestroyer.net/Mike/RantMode/Blurbs.html
Darth Wong
Sith Lord
Posts: 70028
Joined: 2002-07-03 12:25am
Location: Toronto, Canada

Re: Morality is relative

Post by Darth Wong »

Count Chocula wrote:I snipped RAH's prior paragraphs. What he had posited prior to my quote was that individuals die, but the human species as a whole does not have to die, unlike the dinosaurs (which, as far as we know, operated instinctually and had little ability to adapt to catastrophes). From there, Heinlein took the personal urge for survival into the levels of abstraction, i.e. towns, cities, states, societies we have today.
Still irrelevant. All he does is explain the evolutionary roots of ethics. That in no way justifies the continued use of such roots as the basis of future ethics.
Count Chocula wrote:I disagree with you here. Also, this speech was given in 1973, when the Vietnam war was still a festering psychic wound and the Cold War was still in full swing. I see where you're coming from, but patriotism in most places I've read entails placing the welfare of your state ahead of whoever your current enemy was. For John Adams, George Washington, Thomas Jefferson et al it was their fellow colonials vs. England. For the Tories, many of whom moved to Canada, the enemy of their tribe was the colonials. For the Vietnamese, the welfare of fellow Vietnamese against the French took precedence. IMO, all patriotism has its roots in tribalism, just of an expanded nature; however, except for extreme examples (like Nazi Germany) it's rarely been an "us against the world" mindset.
Or innocent bystanders. You mention the Vietnam War; that is a great example. The United States killed perhaps a half million people (and possibly more) in that war by bombing Laos and Cambodia: neutral nations which were merely adjacent to their enemy, just to cut off their enemy's supply lines. This would have been considered a horrifying war crime if anyone but the US did it. Two million Vietnamese died in the Vietnam War, and Americans considered their sacrifice worthwhile "in the fight against communism". Must be nice to decide how much of other peoples' sacrifice is worthwhile. Vietnam showed that American patriotism is "my group over anyone else that gets in the way of our goals", not just someone you could call an "enemy".
You raise a good question, and one for which I don't have a ready answer. Speculating, I'd say that the overall "group" is defined by the extent of communication and ease of travel an extended group has, along with geographic barriers and the military capability to keep other tribes at bay. You can see it in the US's southern border, where there's been no real estate worth arguing over since the 1800s (the Rio Grande is an arbitrary, not practical, barrier to transit); in its northern border, where the Great Lakes limited the influence of competing governments and where the differing cultures north and south of the arbitrarily drawn Great Lakes lines limited common interests; and in Europe, where geography dominated the formation of states (Switzerland being a good example). The only place my speculation falls apart is the Middle East, but most of those states' borders were a result of WWI, and little good has come of the Western partition of the ME.
I think you're missing the point: this is one potential definition (whose particular validity is somewhat arbitrary), but there are many others. As long as you define ethics based on in-group vs out-group competition, you will have people whose group self-identification differs from yours. As long as we're using the US as an example, recall that many Americans deeply despise other Americans and do not consider them part of the group. Is this kind of fratricidal behaviour ethical? Recall the Civil War.
I can't really disagree with you there. The big question is, how to eliminate the "out-group"? All of recorded history chronicles the struggle of group against group. Even positing that the Western standard of living and wealth is the ne plus ultra of Earthly life, there will still be significantly large populations that disagree with that proposition and will fight the notion. We in the West won't give up our standard of living willingly, and I suspect that millions of people in Japan, India, China, Russia and elsewhere wouldn't want that outcome either. I am obviously making a reference to Muslim Luddites in Afghanistan, Iran, Saudi Arabia, and Indonesia who are being rather... intransigent in their beliefs. While it may be a nice idea to consider, I don't see competition between groups/tribes/nations dying out in my lifetime.
No, the question is not how to eliminate the out-group. It is how to eliminate the kind of primitive tribal thinking that leads people to think that all ethics is based on the need to achieve supremacy of the in-group.
Surlethe
HATES GRADING
Posts: 12267
Joined: 2004-12-29 03:41pm

Re: Morality is relative

Post by Surlethe »

Covenant wrote:You need a set of principles to work towards, but that's not surprising. A system of absolute ethics presumes that you can agree on an ideal end goal. You may say that means there's no absolute, in the sense that it's not written in the stars, and in that sense you are correct. But that does not mean it is relative, since you could run the data through the wringer and find out that doing thus-and-so is indeed a system that encourages lower crime, greater freedom, and a higher reported standard of living from amongst the participants. If those are goals to work towards, and the attendant costs (taxes, alterations to laws, changes in the way criminal justice systems are handled) are acceptable, then that's the kind of ethic you're better off having.
Right - as long as you have postulated an end-goal. But why should I prefer end-goal A over end-goal B? That's where I think you're ...
If you want to assert that the Sarah Palins would disagree, that's true, but they're still subject to research. We can look at the kinds of things they want to do and evaluate them in an ethical context. We can evaluate what happens when you take away government aid, increase war spending, rein in freedom of choice, devalue scientific research... and then come away from it and say "No, this is less good for everyone." That's the kind of ethical absolutism you can work towards achieving. No one system may be the supreme best system unless you have a very large number of goals to work towards, but you can develop a set of principles that would more adequately achieve it than other ways.
... begging the question here, unless I misunderstand you. What is "good [for everyone]" is precisely what the initial goals of the system are, and my question is, is there an objective determination of those goals? Once you have the goals, you can figure out which system achieves them best, but initially is the choice completely arbitrary? If the Sarah Palins of the world profess agreement with my goals, I can reason with them; if not, I can't persuade them otherwise. In other words, if we agree on what "good" is, we can argue out how the best way to achieve that is; if we disagree, we are at an absolute impasse.
And in an effort to achieve that, I think it's handy to have a separate term for it. Morality is just too emotional, subjective, and vague.
Okay, no point in arguing over semantics; I think we're on the same page as far as terms go. My objection is to the notion of an absolute ethical system including goals, if that makes sense.
A Government founded upon justice, and recognizing the equal rights of all men; claiming no higher authority for existence, or sanction for its laws, than nature, reason, and the regularly ascertained will of the people; steadily refusing to put its sword and purse in the service of any religious creed or family, is a standing offense to most of the Governments of the world, and to some narrow and bigoted people among ourselves.
F. Douglass
Surlethe
HATES GRADING
Posts: 12267
Joined: 2004-12-29 03:41pm

Re: Morality is relative

Post by Surlethe »

Darth Wong wrote:
Surlethe wrote:I've been thinking about absolute morality recently. We all know that, ideally, moral codes are simply a set of deductions from noncontradictory axioms.
By that definition, the term "moral code" is completely synonymous with "logical conclusion". Methinks this definition is far too broad. Moral codes refer to a very specific sort of rule: namely, a human social conduct rule which is supposedly for the benefit of the larger group.
Right. I stated it imprecisely. I probably should have said that moral codes are a specific sort of logical conclusion arrived at by applying axiomatic rules to human behavior. I'm not sure, though, that in principle moral codes are necessarily in place for the benefit of the larger group; one could conceive of a moral code which elevates impulse fulfillment above all other concerns, although a group following such a code would quickly dissolve.
The best (and only, in fact) argument I've seen for choosing one moral code over another is this: we ought to choose the moral code that is best suited to fulfilling the evolutionary function of morality in society. But why is that an objective criterion?
There are other justifications for choosing one moral code over another: for example, it is entirely possible for many people to agree on certain desired outcomes for society while radically disagreeing about the preferred moral rules intended to achieve that outcome (a good example is sex education and teen pregnancy; both sides agree that high teen pregnancy is bad, but one side's preferred solution can be objectively demonstrated to be inferior, given the mutually agreed-upon goal).
See below re. terminology; my statement ought to be amended to: the best (and only) argument I've seen for choosing one goal set over another ... .
I've said this before and I'll say it again: in order to evaluate a moral code, you must first determine what the goal of morality is. You cannot evaluate performance without first deciding what constitutes success. The evolutionary impulse for morality is important in recognizing where morality historically came from, but it does not necessarily dictate where it must go.
You're using better terminology than I am, but I think we're still on the same page. The chief question of the OP is: why should we choose one set of goals over another set of goals?
After all, in other areas we don't choose the best behavior to fulfill an evolutionary function: for instance, we use birth control even though the evolutionary function of sex is reproduction. More to the point, though, why should we use the criterion of evolutionary function? That seems to imply some underlying preference scheme among criteria, which in turn implies either a preexisting moral code we're trying to justify (i.e., circular reasoning), or some sort of design scheme - in which case, there are design goals, which beg the question all over again.
There's nothing wrong with having design goals for human morality schemes. All human activity is based on goals, after all. The question is: what do we do when we have competing goals? One approach is to strip away all goals except those that are almost universally shared among human beings, such as "I want to live" and "I would prefer to avoid pain" and "I would like to have as much leisure and luxury as possible", and then see which system can best achieve those goals for the largest possible number of people in society.
Practical, but not a philosophically satisfactory solution: if someone has a different arbitrarily chosen goal system, is there any logical reason for him to switch to this one? It seems like there's no philosophically satisfactory solution.
Even if you can't necessarily produce an ironclad justification for the goals of a particular moral code, I would argue that any moral code which has an up-front stated goal is still philosophically superior to a moral code whose only known goal is to be true to itself. That is obviously circular.
I'd agree it's superior to any particular circular moral code, but is it superior to any other system with arbitrarily chosen goals? And if we have criteria for choosing which goal system is superior, how do we justify them? It seems to me that your consensus approach above is a practical solution, but ultimately it's as arbitrary as a sociopathic goal system or one that requires covering the earth with solar nodes dedicated to solving the game of Go.
Darth Wong
Sith Lord
Posts: 70028
Joined: 2002-07-03 12:25am
Location: Toronto, Canada

Re: Morality is relative

Post by Darth Wong »

Surlethe wrote:Right. I stated it imprecisely. I probably should have said that moral codes are a specific sort of logical conclusion arrived at by applying axiomatic rules to human behavior. I'm not sure, though, that in principle moral codes are necessarily in place for the benefit of the larger group; one could conceive of a moral code which elevates impulse fulfillment above all other concerns, although a group following such a code would quickly dissolve.
I would argue that some kind of group-oriented goal is intrinsic to the definition of morality, since ethics and morality are meaningless for a hypothetical person who is completely alone in the universe.
You're using better terminology than I am, but I think we're still on the same page. The chief question of the OP is: why should we choose one set of goals over another set of goals?
I think the real question you're asking is: "can you give me some kind of philosophically absolute reason to choose one set of goals over another", hence your dissatisfaction with my proposed answer.
Practical, but not a philosophically satisfactory solution: if someone has a different arbitrarily chosen goal system, is there any logical reason for him to switch to this one? It seems like there's no philosophically satisfactory solution.
Only if you define "philosophically satisfactory" as something other than "practical". I don't see why you should do that. Of course, I'm an engineer so perhaps I am just naturally inclined to pragmatism, but you admit yourself that your attempt at philosophical absolutism goes nowhere, so why not employ pragmatism?
It seems to me that your consensus approach above is a practical solution, but ultimately it's as arbitrary as a sociopathic goal system or one that requires covering the earth with solar nodes dedicated to solving the game of Go.
Hardly. Any attempt to derive social goals must have some kind of reference to the collective desires of the human beings who make up society, since a goal follows naturally from a desire. My solution of employing consensus-based determination of collective goals is a pragmatic attempt to produce goals which optimize the wish-fulfillment of members of society. How is that as arbitrary as completely random or sociopathic solutions, when the whole point of goal selection is wish fulfillment? Do you personally select goals which are completely contrary to your desires? Is a goal in fact not merely a different way of describing desires and wishes?
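That consensus procedure can be sketched as follows (the goals and candidate systems below are invented placeholders, not anyone's actual positions): keep only the goals everyone shares, then rank systems by how many of those shared goals they advance.

```python
people = [
    {"live", "avoid_pain", "leisure", "solve_go"},
    {"live", "avoid_pain", "leisure"},
    {"live", "avoid_pain", "leisure", "status"},
]
shared = set.intersection(*people)  # goals literally everyone holds

# How many of the shared goals each candidate moral system advances:
systems = {
    "A": {"live", "avoid_pain"},
    "B": {"live", "avoid_pain", "leisure"},
    "C": {"solve_go"},
}
best = max(systems, key=lambda s: len(systems[s] & shared))
print(best)  # B: it serves the most near-universal goals
```

The Go-solving system loses not because it is "wrong" in some cosmic sense, but because it serves no goal that the members of society actually share.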
Rye
To Mega Therion
Posts: 12493
Joined: 2003-03-08 07:48am
Location: Uighur, please!

Re: Morality is relative

Post by Rye »

Surlethe wrote:I've been thinking about absolute morality recently.
That's your first mistake. Since morality in real life is delivered mostly by a) custom and b) genetic predisposition towards social behaviour, going for "absolute morality" is going to be pretty silly. The whole desire for a justification for it, undertaken while rejecting any point of view or preference that would explain why you might want a justification in the first place, seems a bit misguided and redundant.
We all know that, ideally, moral codes are simply a set of deductions from noncontradictory axioms. The best (and only, in fact) argument I've seen for choosing one moral code over another is this: we ought to choose the moral code that is best suited to fulfilling the evolutionary function of morality in society. But why is that an objective criterion? After all, in other areas we don't choose the best behavior to fulfill an evolutionary function: for instance, we use birth control even though the evolutionary function of sex is reproduction.
Because it's measurable. Relative to fertility, using birth control scores measurably poorly. Relative to enjoyment from sex, it scores pretty high.
More to the point, though, why should we use the criterion of evolutionary function? That seems to imply some underlying preference scheme among criteria, which in turn implies either a preexisting moral code we're trying to justify (i.e., circular reasoning), or some sort of design scheme - in which case, there are design goals, which beg the question all over again.
Work out what morality is and what you think it's for. Then you can evaluate which course of action achieves this best. What you can't do is take on the aura of a universal computer and work out why people ought to prefer life over death or ice cream over putting their dick in a blender. They prefer these things for obvious human reasons that boil down to biological origins.
I guess at the end of the day, I'm arguing that you can reduce to absurdity by always pushing the question of absolute morality back one more step, sort of like how in deistic arguments you can always ask who created God or what the purpose of God's life is. Man, religious people have it so easy. What are your thoughts?
What exactly do you think you're going to be able to prove "absolutely" in human life? We know how we got morality and, to a significant extent, how it works; we can measure outcomes relative to that. In morality as in all else, Hume supposed our beliefs and actions are the products of custom or habit, and now we understand a lot of the genetic influences that underpin those. Pure reason cannot "prove" information from the senses, and I have stated that I suspect our consciousness and our awareness of morality are sensory in basis too, meaning morality ought not to be provable in some metaphysically absolute way either.

Frankly, the search for metaphysical truth from epistemology is, I think, a totally imaginary goal.

People mostly do good things because it makes them feel good. The hedonic calculus we all use, intuitively and rationally, has the same ultimate basis. Moral philosophies and codes (in theory) ought to describe that basis and how to achieve satiation; why it feels good to do these things, mainly to address our underlying feelings of fairness that appeared to facilitate familial and then social behaviours. As for why they should, well, now you're getting into an assumption of a reason that goes beyond the mechanistic operation of the human race. I don't think there is an "intentional" discernible reason for why people should prefer breathing over being stabbed; they just evolved that way and will behave in accordance with those primal drives. You might as well ask why water "ought" to collect in puddles.
EBC|Fucking Metal|Artist|Androgynous Sexfiend|Gozer Kvltist|
Listen to my music! http://www.soundclick.com/nihilanth
"America is, now, the most powerful and economically prosperous nation in the country." - Master of Ossus
Alyrium Denryle
Minister of Sin
Posts: 22224
Joined: 2002-07-11 08:34pm
Location: The Deep Desert

Re: Morality is relative

Post by Alyrium Denryle »

Darth Wong wrote: You did not deal with it at all. You just said that you have no intention of trying to justify anything. Your response to the question of how we justify an ethical system is to say "I won't justify anything. Instead, I'll say that this is where I say that ethics came from, based on a few premises which I won't bother justifying either".
Replace "I won't" with "I can't". You cannot philosophically justify an absolutist ethical system. It does not work. Relativism implies that morality is arbitrary. If that were true, there would be no common principles across cultures, because morality would be subject essentially to the memetic equivalent of neutral theory in evolution... the null hypothesis of mutation and drift with no selection.
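The neutral-theory point can be sketched: if norms changed only by the memetic equivalent of mutation and drift, with no selection, two isolated "cultures" starting from identical norms should end up agreeing only at chance level. A toy simulation (all parameters arbitrary):

```python
import random

def drift(n_norms=1000, generations=200, mutation=0.05, rng=None):
    # Every culture starts with the same norms; each generation, each norm
    # has a small chance of being replaced by a random coin-flip value.
    rng = rng or random.Random(0)
    norms = [0] * n_norms
    for _ in range(generations):
        norms = [rng.choice((0, 1)) if rng.random() < mutation else v
                 for v in norms]
    return norms

# Two isolated cultures drifting independently from the same starting point:
a = drift(rng=random.Random(1))
b = drift(rng=random.Random(2))
overlap = sum(x == y for x, y in zip(a, b)) / len(a)
print(round(overlap, 2))  # close to 0.5: chance-level agreement
```

Under drift alone the cultures share nothing beyond coincidence; widespread common principles are therefore evidence that something other than drift (i.e. selection) is acting on moral norms.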

As for consciousness and rational thought...
Volition and consciousness

Voluntary actions are accompanied by specific subjective experiences. Indeed, the relation between these experiences and the brain activity that occurs before and during actions has been a key focus in the neuroscience of volition. The phenomenal experience of our own action is often not strong: psychologists often comment that motor control is 'automatic' and unconscious. Nevertheless, the experience of making a voluntary action is clearly different from that of an equivalent passive movement that is applied to the body: as Wittgenstein asked, "What is left over if I subtract the fact that my arm goes up from the fact that I raise my arm?" (Ref. 76). More importantly, the conscious intention to make an action seems to cause the action itself: we feel we have 'free will'. Most neuroscientists are suspicious of this idea, because it implies a "ghost in the machine" (Ref. 2). Rather, both conscious intention and physical movement might be consequences of brain activity. Wegner77, for example, has proposed that the human mind assumes a causal path from conscious intention to action in order to explain the correlation between them. In fact, the correlation occurs because both conscious intention and action are driven by a common cause, namely the neural preparation for action. A more radical view78 suggests that conscious intention is not a bona fide mental state at all, but rather an inference that is retrospectively inserted into the stream of consciousness as the hypothetical cause of the physical movement of our bodies. This view receives support from studies of psychosis, in which experiences of intention are associated with unusual causal explanations of connections between events79,80. Even in the healthy brain, the consequences of an action can strongly influence the experience of the action itself81,82. This influence is particularly strong in cases of action errors, in which feedback carries information about unexpected consequences of action83 (FIG. 4)....

Libet himself suggested that the interval between conscious intention and movement onset was sufficient to allow a process of conscious veto, which would inhibit an impending action before execution8. Such 'free won't' would have the philosophical advantage of salvaging traditional concepts of moral responsibility. However, dualism about conscious veto is just as problematic as dualism about conscious initiation of action. It is unclear how a conscious veto might influence brain activity. Moreover, the veto, like the conscious intention, could itself be a consequence of some preceding unconscious neural activity94. The processes of voluntary inhibition of voluntary action are demonstrated in recent neuroscientific studies of the prefrontal cortex7,95. These processes might provide the final check, or 'late whether decision', before voluntary action (see above). They might be associated with specific conscious experiences, both of the impending action and of the decision to abandon it, but they do not imply any unusual or dualistic form of mind-brain causation.

Box 2 | The preSMA: a key structure for voluntary action

The pre-supplementary motor area (preSMA) is located between the 'cognitive' areas of the frontal lobes and motor-execution areas, such as the SMA proper and the primary motor cortex99. It occupies a key position in the frontal network that transforms thoughts into actions (see figure). Direct stimulation of the preSMA through electrodes (red circle in part a of the figure) produces both a feeling of a conscious 'urge to move' and, at higher current, movement of the corresponding limb91,100. However, many neurological studies suggest that the main function of the preSMA is to inhibit actions rather than cause them. Lesions in this area can produce automatic execution of actions in response to environmental triggers. For example, when the patient sees a cup, they will reach for it and attempt to drink even if they do not wish to (for another example, see figure, part b)44,49. Studies in animals and humans suggest two further roles for the preSMA, which seem to be quite different from its role in voluntary action. Single neurons in the monkey preSMA code for the preparation of entire complex sequences of movements (see figure, part c, which shows preSMA activation during a 'turn' movement that is followed by a 'pull' movement but not when the 'turn' movement is followed by a 'push' movement), and also code for transitions between movements within the sequences101. Transcranial magnetic stimulation studies in humans also indicated a role for the preSMA in the preparation of entire sequences, but not in transitions within sequences102. The brain areas that allow voluntary control of action also seem to combine individual actions into more complex ones. Volition may have arisen as a result of this capacity for increasingly complex action. This suggestion receives support from the classic finding that monkeys with lesions of Brodmann's area 6, including the preSMA, are unable to flexibly adjust the pattern of their movements when they fail to achieve a goal103. Instead, the monkeys repeatedly make the same stereotyped and unsuccessful action. Thus, voluntary action is largely a matter of finding convenient ways to get actions to fulfil current goals. Part a of the figure reproduced, with permission, from Ref. 91 (1991) Society for Neuroscience. Text in part b of the figure from Ref. 43. Part c of the figure reproduced, with permission, from Ref. 101 (1994) Macmillan Publishers Ltd.
Snipped for brevity (it is a 13 page review) from:

Haggard, P., 2008. Human volition: towards a neuroscience of will. Nature Reviews Neuroscience, 9:934–946

I snipped out the stuff on neural pathways. But what this means is that the general consensus in neuroscience is that consciousness (and thus what we perceive as rational thought) is a perception and result of decision making, not a cause. I did not insert the section on future decision making, but the gist of that is that it is basically decision making that bypasses motor functions and accesses parts of the brain responsible for "prospective memory" (forethought), but otherwise operates in the same way.

The idea that rational thought causes these decision making processes (rather than being caused by them) is nothing less than Mind-Body dualism, and that is also addressed in this paper.

Moreover the evolution of the ability to override evolutionarily beneficial decisions would be selected against, very very strongly. And evolution is not narrowly defined here. We have an adaptive social consciousness for a reason, and culture as well as the
This Review has resisted the traditional, philosophical idea that conscious thoughts cause voluntary actions, in favour of
a neuroscientific model of decisions about action in relatively unconstrained situations. How might this model relate to
responsibility? The initial ‘whether decision’, based on reasons and motivations for action, and the final check before
action are both highly relevant to responsibility. By contrast, decisions regarding how and when an action is performed are
less crucial. Responsibility might depend on the reason that triggered a neural process culminating in action, and on
whether a final check should have stopped the action. Interestingly, both decisions have a strong normative element:
although a person’s brain decides the actions that they carry out, culture and education teach people what are acceptable
reasons for action, what are not, and when a final predictive check should recommend withholding action. Culture and
education therefore represent powerful learning signals for the brain’s cognitive–motor circuits. A neuroscientific
approach to responsibility may depend not only on the neural processes that underlie volition, but also on the brain
systems that give an individual the general cognitive capacity to understand how society constrains volition, and how to
adapt appropriately to those constraints.
So there we go. Justification for my premises.
GALE Force Biological Agent/
BOTM/Great Dolphin Conspiracy/
Entomology and Evolutionary Biology Subdirector:SD.net Dept. of Biological Sciences


There is Grandeur in the View of Life; it fills me with a Deep Wonder, and Intense Cynicism.

Factio republicanum delenda est
Simon_Jester
Emperor's Hand
Posts: 30165
Joined: 2009-05-23 07:29pm

Re: Morality is relative

Post by Simon_Jester »

Broomstick wrote:That's a very simplistic viewpoint, really. Yes, infanticide occurs in nature. It occurs today in civilization, too. The problem with infanticide, though, is that resources have already gone into a baby that's killed - better not to invest in what will be destroyed. Thus, "intimidation by alphas" is arguably much more sensible from an evolutionary standpoint, as resources are not diverted into blind ends but can be used to sustain adult members of the population.
The flip side is that my suggested viewpoint is simplistic, but (in my opinion) less so than "breeding for birth control" is. Such a gene has to be extremely advantageous in terms of the survival rate of the group's young to spread, or to be specifically advantageous to one's own offspring, which is difficult. Having one less child benefits all the other children in the population, including the ones that don't carry the gene for having fewer children.

I overstated my case, but I think it's worth remembering that there are multiple evolvable solutions to the "scarce resources to raise offspring" problem, and that many of them are not ones we would normally favor.
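The gene-level accounting here can be made concrete. Below is a minimal, deterministic sketch (all parameter values are invented for illustration, not taken from any study) of why an allele for "having one less child" struggles to spread when the survival benefit is shared by the whole cohort: density-dependent juvenile survival multiplies carriers and non-carriers alike, so it cancels out of the frequency change, and only the individual fecundity cost remains.

```python
def next_freq(p, b_restraint=2.0, b_default=4.0):
    """Allele frequency after one generation of selection.

    Carriers of the 'restraint' allele produce b_restraint offspring;
    everyone else produces b_default. Any shared, density-dependent
    juvenile survival factor would multiply BOTH terms below equally
    (the benefit of restraint is spread across the whole cohort), so
    it cancels out of the ratio, leaving only the fecundity cost.
    """
    carriers = p * b_restraint
    others = (1 - p) * b_default
    return carriers / (carriers + others)

p = 0.5
for _ in range(10):
    p = next_freq(p)
print(round(p, 4))  # the restraint allele collapses toward zero
```

Only if the survival benefit accrues preferentially to the carriers' own surviving offspring (i.e., kin selection) does the ratio tilt in the allele's favour, which is exactly the "specifically advantageous to one's own offspring" caveat above.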
This space dedicated to Vasily Arkhipov
User avatar
Alyrium Denryle
Minister of Sin
Posts: 22224
Joined: 2002-07-11 08:34pm
Location: The Deep Desert
Contact:

Re: Morality is relative

Post by Alyrium Denryle »

Simon_Jester wrote:
Broomstick wrote:That's a very simplistic viewpoint, really. Yes, infanticide occurs in nature. It occurs today in civilization, too. The problem with infanticide, though, is that resources have already gone into a baby that's killed - better not to invest in what will be destroyed. Thus, "intimidation by alphas" is arguably much more sensible from an evolutionary standpoint, as resources are not diverted into blind ends but can be used to sustain adult members of the population.
The flip side is that my suggested viewpoint is simplistic, but (in my opinion) less so than "breeding for birth control" is. Such a gene has to be extremely advantageous in terms of the survival rate of the group's young to spread, or to be specifically advantageous to one's own offspring, which is difficult. Having one less child benefits all the other children in the population, including the ones that don't carry the gene for having fewer children.

I overstated my case, but I think it's worth remembering that there are multiple evolvable solutions to the "scarce resources to raise offspring" problem, and that many of them are not ones we would normally favor.
To weigh in:

Any reproductive control that a female can exert over the number, timing, and genetic composition of her offspring is a massive evolutionary boon. Think about this for a moment. In a society, food is not infinite. It is also not split up equally; most of the time there is some sort of hierarchy, be it in terms of wealth, social status, etc., that determines the split. This means that each parent or mated pair has a finite amount of food that must be divided among themselves and their offspring, and therefore that there is an optimum number of offspring a given set of parents should have to maximize their fitness given their resources.

If they have more than that number, they lose fitness because their offspring do not do well, and end up weak or die from disease. If they have fewer, they won't have as many grandkids, due to intrinsic limitations on breeding rates.

Then there is timing. There is temporal variation in resource availability, and the most vulnerable time for offspring is infancy. If there is not enough food, they die, due to starvation, parasites, or infection. This represents a waste of investment.

Then genetic composition. This should be obvious.

A good and relatively efficient way to control all three of these is infanticide. The female has to eat the cost of the resources invested during pregnancy, but that is better than wasting more resources and risking her other offspring, present and future.

It is inferior to proactive birth control or mate selection... but those do not always work (rape, the need of females to maintain social status by latching onto a powerful male and giving him mating opportunities, etc.), particularly because males (who do not often share the full costs of reproducing and can have MANY more offspring than females) don't care nearly as much about the quality of individual offspring...

Any questions?
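The "optimum number of offspring" argument above is essentially Lack's clutch-size logic, and it is easy to sketch numerically. In the toy model below, per-offspring survival is assumed to fall off linearly as a fixed parental food supply is split more ways (the linear form and the slope are assumptions for illustration only); fitness is clutch size times per-offspring survival, and overshooting the optimum costs grandkids just as undershooting does.

```python
def expected_recruits(n, decline=0.05):
    """Expected offspring surviving to breed, for clutch size n.

    Toy assumption: per-offspring survival = 1 - decline * n, floored
    at zero, so fitness n * survival is a downward-opening parabola.
    """
    survival = max(0.0, 1.0 - decline * n)
    return n * survival

# Fitness peaks at an intermediate clutch size; clutches above the
# optimum do worse, which is why culling back toward it can pay.
best = max(range(31), key=expected_recruits)
```

With these made-up numbers the peak sits at a clutch of 10; both a clutch of 4 and a clutch of 16 leave fewer surviving offspring than the optimum does.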
GALE Force Biological Agent/
BOTM/Great Dolphin Conspiracy/
Entomology and Evolutionary Biology Subdirector:SD.net Dept. of Biological Sciences


There is Grandeur in the View of Life; it fills me with a Deep Wonder, and Intense Cynicism.

Factio republicanum delenda est
User avatar
Surlethe
HATES GRADING
Posts: 12267
Joined: 2004-12-29 03:41pm

Re: Morality is relative

Post by Surlethe »

Darth Wong wrote:I would argue that some kind of group-oriented goal is intrinsic to the definition of morality, since ethics and morality are meaningless for a hypothetical person who is completely alone in the universe.
I'm not sure I agree with that. Perhaps I'm working from a broader (incorrect?) definition of ethics, as a system that recommends a "best" course of action for a person relative to a set of goals or rules; this would apply equally well to our hypothetical alone person.
I think the real question you're asking is: "can you give me some kind of philosophically absolute reason to choose one set of goals over another", hence your dissatisfaction with my proposed answer.
What I'd like is a way to rationally persuade a person who doesn't share my goal system that he should share it. If my goals are X and your goals are Y, ultimately we'd have to literally agree to disagree. (We'd have to do that at an absolute, literal level, not with the usual meaning of "you're not going to persuade me, I'm not going to persuade you, and we're getting frustrated, so let's just call it quits for now.") Perhaps I'm just dissatisfied with the prospect of a philosophical argument that in principle has no winner. Parallels to solipsism spring to mind.
Only if you define "philosophically satisfactory" as something other than "practical". I don't see why you should do that. Of course, I'm an engineer so perhaps I am just naturally inclined to pragmatism, but you admit yourself that your attempt at philosophical absolutism goes nowhere, so why not employ pragmatism?
I'm in training to be a mathematician, so I guess I'm equally inclined toward philosophical absolutism. Regardless, in reality I'll employ the practical solution, dissatisfactory as it may be.
Hardly. Any attempt to derive social goals must have some kind of reference to the collective desires of the human beings who make up society, since a goal follows naturally from a desire. My solution of employing consensus-based determination of collective goals is a pragmatic attempt to produce goals which optimize the wish-fulfillment of members of society. How is that as arbitrary as completely random or sociopathic solutions, when the whole point of goal selection is wish fulfillment? Do you personally select goals which are completely contrary to your desires? Is a goal in fact not merely a different way of describing desires and wishes?
This seems circular. Your initial assumption is that an attempt to derive goals must reference the collective desires of the people in the society - why is this the case? Why must the overall goals reflect the desires of the individuals in society, unless you're already tacitly assuming that the job of our hypothetical social designer is to optimize the peoples' happiness and minimize their harm? When we're talking about goal selection for a moral code as wish fulfillment, the only relevant wishes are the designer's; why should he give any weight to other peoples' desires?

On the relationship between goals and desires, I'd argue that all goals can be thought of as desires, but not vice-versa - for example, I might have an impulsive desire to have sex with someone not my wife, but I certainly don't make that a goal. Similarly, there's no reason in principle an arbitrary person couldn't have an arbitrary set of desires (see: Go example, borrowed from Robots Learn to Lie thread). In practice, people have similar ones, so one can generally find common ground, but that's not going to persuade someone who has an alien or arbitrary system.
A Government founded upon justice, and recognizing the equal rights of all men; claiming higher authority for existence, or sanction for its laws, that nature, reason, and the regularly ascertained will of the people; steadily refusing to put its sword and purse in the service of any religious creed or family is a standing offense to most of the Governments of the world, and to some narrow and bigoted people among ourselves.
F. Douglass
User avatar
Darth Wong
Sith Lord
Sith Lord
Posts: 70028
Joined: 2002-07-03 12:25am
Location: Toronto, Canada
Contact:

Re: Morality is relative

Post by Darth Wong »

Alyrium Denryle wrote:
Darth Wong wrote:You did not deal with it at all. You just said that you have no intention of trying to justify anything. Your response to the question of how we justify an ethical system is to say "I won't justify anything. Instead, I'll say that this is where I say that ethics came from, based on a few premises which I won't bother justifying either".
Replace "I won't" with "I can't". You cannot philosophically justify an absolutist ethical system. It does not work. Relativism implies that morality is arbitrary. If that were true, there would be no common principles across cultures, because morality would be subject essentially to the memetic equivalent of neutral theory in evolution... the null hypothesis of mutation and drift with no selection.
Relativism does not imply that morality is arbitrary. The fact that you can't justify something absolutely does not mean you cannot justify it at all. Tell me, did Einstein's theory of relativity make physics arbitrary?
As for consciousness and rational thought...
Volition and consciousness

Voluntary actions are accompanied by specific subjective experiences. Indeed, the relation between these experiences and the brain activity that occurs before and during actions has been a key focus in the neuroscience of volition. The phenomenal experience of our own action is often not strong: psychologists often comment that motor control is ‘automatic’ and unconscious. Nevertheless, the experience of making a voluntary action is clearly different from that of an equivalent passive movement that is applied to the body: as Wittgenstein asked, “What is left over if I subtract the fact that my arm goes up from the fact that I raise my arm?” (ReF. 76). More importantly, the conscious intention to make an action seems to cause the action itself: we feel we have ‘free will’. Most neuroscientists are suspicious of this idea, because it implies a “ghost in the machine” (ReF. 2). Rather, both conscious intention and physical movement might be consequences of brain activity. Wegner77, for example, has proposed that the human mind assumes a causal path from conscious intention to action in order to explain the correlation between them. In fact, the correlation occurs because both conscious intention and action are driven by a common cause, namely the neural preparation for action. A more radical view78 suggests that conscious intention is not a bona fide mental state at all, but rather an inference that is retrospectively inserted into the stream of consciousness as the hypothetical cause of the physical movement of our bodies. This view receives support from studies of psychosis, in which experiences of intention are associated with unusual causal explanations of connections between events79,80. Even in the healthy brain, the consequences of an action can strongly influence the experience of the action itself81,82.
This influence is particularly strong in cases of action errors, in which feedback carries information about unexpected consequences of action83 (FIG. 4)....

Libet himself suggested that the interval between conscious intention and movement onset was sufficient to allow a process of conscious veto, which would inhibit an impending action before execution8. Such ‘free won’t’ would have the philosophical advantage of salvaging traditional concepts of moral responsibility. However, dualism about conscious veto is just as problematic as dualism about conscious initiation of action. It is unclear how a conscious veto might influence brain activity. Moreover, the veto, like the conscious intention, could itself be a consequence of some preceding unconscious neural activity94. The processes of voluntary inhibition of voluntary action are demonstrated in recent neuroscientific studies of the prefrontal cortex7,95. These processes might provide the final check, or ‘late whether decision’, before voluntary action (see above). They might be associated with specific conscious experiences, both of the impending action and of the decision to abandon it, but they do not imply any unusual or dualistic form of mind–brain causation.

Box 2 | The preSMA: a key structure for voluntary action
The pre-supplementary motor area (preSMA) is located between the ‘cognitive’ areas of the frontal lobes and motor-execution areas, such as the SMA proper and the primary motor cortex99. It occupies a key position in the frontal network that transforms thoughts into actions (see figure). Direct stimulation of the preSMA through electrodes (red circle in part a of the figure) produces both a feeling of a conscious ‘urge to move’ and, at higher current, movement of the corresponding limb91,100. However, many neurological studies suggest that the main function of the preSMA is to inhibit actions rather than cause them. Lesions in this area can produce automatic execution of actions in response to environmental triggers. For example, when the patient sees a cup, they will reach for it and attempt to drink even if they do not wish to (for another example, see figure, part b)44,49. Studies in animals and humans suggest two further roles for the preSMA, which seem to be quite different from its role in voluntary action. Single neurons in the monkey preSMA code for the preparation of entire complex sequences of movements (see figure, part c, which shows preSMA activation during a turn movement that is followed by a ‘pull’ movement but not when the turn movement is followed by a ‘push’ movement), and also code for transitions between movements within the sequences101. Transcranial magnetic stimulation studies in humans also indicated a role for the preSMA in the preparation of entire sequences, but not in transitions within sequences102. The brain areas that allow voluntary control of action also seem to combine individual actions into more complex ones. Volition may have arisen as a result of this capacity for increasingly complex action. This suggestion receives support from the classic finding that monkeys with lesions of Brodmann’s area 6, including the preSMA, are unable to flexibly adjust the pattern of their movements when they fail to achieve a goal103. Instead, the monkeys repeatedly make the same stereotyped and unsuccessful action. Thus, voluntary action is largely a matter of finding convenient ways to get actions to fulfil current goals. Part a of the figure reproduced, with permission, from ReF. 91 (1991) Society for Neuroscience. Text in part b of the figure from ReF. 43. Part c of the figure reproduced, with permission, from ReF. 101 (1994) Macmillan Publishers Ltd.
Snipped for brevity (it is a 13 page review) from:

Haggard, P., 2008. Human volition: towards a neuroscience of will. Nature Reviews Neuroscience, 9:934–946

I snipped out the stuff on neural pathways. But what this means is that the general consensus in neuroscience is that consciousness (and thus what we perceive as rational thought) is a perception and result of decision making, not a cause. I did not insert the section on future decision making, but the gist of that is that it is basically decision making that bypasses motor functions and accesses parts of the brain responsible for "prospective memory" (forethought), but otherwise operates in the same way.

The idea that rational thought causes these decision making processes (rather than being caused by them) is nothing less than Mind-Body dualism, and that is also addressed in this paper.

Moreover the evolution of the ability to override evolutionarily beneficial decisions would be selected against, very very strongly. And evolution is not narrowly defined here. We have an adaptive social consciousness for a reason, and culture as well as the
This Review has resisted the traditional, philosophical idea that conscious thoughts cause voluntary actions, in favour of
a neuroscientific model of decisions about action in relatively unconstrained situations. How might this model relate to
responsibility? The initial ‘whether decision’, based on reasons and motivations for action, and the final check before
action are both highly relevant to responsibility. By contrast, decisions regarding how and when an action is performed are
less crucial. Responsibility might depend on the reason that triggered a neural process culminating in action, and on
whether a final check should have stopped the action. Interestingly, both decisions have a strong normative element:
although a person’s brain decides the actions that they carry out, culture and education teach people what are acceptable
reasons for action, what are not, and when a final predictive check should recommend withholding action. Culture and
education therefore represent powerful learning signals for the brain’s cognitive–motor circuits. A neuroscientific
approach to responsibility may depend not only on the neural processes that underlie volition, but also on the brain
systems that give an individual the general cognitive capacity to understand how society constrains volition, and how to
adapt appropriately to those constraints.
So there we go. Justification for my premises.
No it isn't. It's just your habit of marginally relevant long-winded quoting yet again. Nowhere does it state that all human actions and decisions are necessarily made at a subconscious level; it merely shows that some actions are made before conscious thought. The cases listed refer specifically to very ordinary motor functions like picking up a cup to drink water, etc. The idea that this can necessarily be extended to all human decisions is in no way proven by your references, particularly with reference to complex decisions such as (for example) drawing up a living will or deciding whether it is right or wrong to discriminate against homosexuals.

Moreover, this is all still entirely irrelevant to the question posed in the OP. But of course, according to you, you had no choice but to post a red herring, since nothing you've posted in this thread involved any conscious thought.
Image
"It's not evil for God to do it. Or for someone to do it at God's command."- Jonathan Boyd on baby-killing

"you guys are fascinated with the use of those "rules of logic" to the extent that you don't really want to discussus anything."- GC

"I do not believe Russian Roulette is a stupid act" - Embracer of Darkness

"Viagra commercials appear to save lives" - tharkûn on US health care.

http://www.stardestroyer.net/Mike/RantMode/Blurbs.html
User avatar
Darth Wong
Sith Lord
Sith Lord
Posts: 70028
Joined: 2002-07-03 12:25am
Location: Toronto, Canada
Contact:

Re: Morality is relative

Post by Darth Wong »

Surlethe wrote:
Darth Wong wrote:I would argue that some kind of group-oriented goal is intrinsic to the definition of morality, since ethics and morality are meaningless for a hypothetical person who is completely alone in the universe.
I'm not sure I agree with that. Perhaps I'm working from a broader (incorrect?) definition of ethics, as a system that recommends a "best" course of action for a person relative to a set of goals or rules; this would apply equally well to our hypothetical alone person.
By this extraordinarily broad definition of ethics, it is unethical to use a poor strategy when playing Call of Duty 5: World At War on your X-Box, because it is not a "best" course of action relative to a set of particular goals (those goals being to get a good score at the game). So yes, I'd say that your definition of ethics is over-broad, by a pretty large margin.
I think the real question you're asking is: "can you give me some kind of philosophically absolute reason to choose one set of goals over another", hence your dissatisfaction with my proposed answer.
What I'd like is a way to rationally persuade a person who doesn't share my goal system that he should share it. If my goals are X and your goals are Y, ultimately we'd have to literally agree to disagree. (We'd have to do that at an absolute, literal level, not with the usual meaning of "you're not going to persuade me, I'm not going to persuade you, and we're getting frustrated, so let's just call it quits for now.") Perhaps I'm just dissatisfied with the prospect of a philosophical argument that in principle has no winner. Parallels to solipsism spring to mind.
The best way to do that is to work back to the most elementary goals that the two of you can agree on, not to ask for some mythical argument through which you could transform someone's goals. If you're arguing with someone who vehemently disagrees that it is good to reduce human suffering, then it is pretty much impossible to find common ground and there's not much point arguing. The guy is probably a sociopath.

Sometimes (actually most of the time), what someone thinks of as socio-political goals are actually conclusions, derived from a philosophical system which he has accepted without question. Getting him to discard the higher-order socio-political arguments and start from more basic elemental goals can short-circuit that process.
Only if you define "philosophically satisfactory" as something other than "practical". I don't see why you should do that. Of course, I'm an engineer so perhaps I am just naturally inclined to pragmatism, but you admit yourself that your attempt at philosophical absolutism goes nowhere, so why not employ pragmatism?
I'm in training to be a mathematician, so I guess I'm equally inclined toward philosophical absolutism. Regardless, in reality I'll employ the practical solution, dissatisfactory as it may be.
Mathematics is a useful tool, but it's a shitty world-view because it always calls for idealizing things. Reality is a problem for a mathematician.
Hardly. Any attempt to derive social goals must have some kind of reference to the collective desires of the human beings who make up society, since a goal follows naturally from a desire. My solution of employing consensus-based determination of collective goals is a pragmatic attempt to produce goals which optimize the wish-fulfillment of members of society. How is that as arbitrary as completely random or sociopathic solutions, when the whole point of goal selection is wish fulfillment? Do you personally select goals which are completely contrary to your desires? Is a goal in fact not merely a different way of describing desires and wishes?
This seems circular. Your initial assumption is that an attempt to derive goals must reference the collective desires of the people in the society - why is this the case?
Because goals are merely attempts to satisfy desires. This is like asking why you should eat when you're hungry.
Why must the overall goals reflect the desires of the individuals in society, unless you're already tacitly assuming that the job of our hypothetical social designer is to optimize the peoples' happiness and minimize their harm? When we're talking about goal selection for a moral code as wish fulfillment, the only relevant wishes are the designer's; why should he give any weight to other peoples' desires?
I'm not assuming anything other than the idea that society is a collective organism. The construction of collective social goals from the goals of the individual components of that society follows naturally from that idea.
On the relationship between goals and desires, I'd argue that all goals can be thought of as desires, but not vice-versa - for example, I might have an impulsive desire to have sex with someone not my wife, but I certainly don't make that a goal.
That's because you have other competing desires which override that impulse, not because your goals are unrelated to your desires.
Similarly, there's no reason in principle an arbitrary person couldn't have an arbitrary set of desires (see: Go example, borrowed from Robots Learn to Lie thread).
Correct. That's why there is no such thing as morality for an individual who is disconnected from society. Morality regulates the conflict between the desires of the individual and the collective desires of society.
In practice, people have similar ones, so one can generally find common ground, but that's not going to persuade someone who has an alien or arbitrary system.
Correct. Nothing can persuade a sufficiently sociopathic individual. However, the vast majority of people are not psychopathic; they are simply brainwashed into accepting pre-fab social rule sets and it is difficult to persuade them to really give serious thought to ethics as a philosophical construct.
Image
"It's not evil for God to do it. Or for someone to do it at God's command."- Jonathan Boyd on baby-killing

"you guys are fascinated with the use of those "rules of logic" to the extent that you don't really want to discussus anything."- GC

"I do not believe Russian Roulette is a stupid act" - Embracer of Darkness

"Viagra commercials appear to save lives" - tharkûn on US health care.

http://www.stardestroyer.net/Mike/RantMode/Blurbs.html
Post Reply