Kuroneko wrote: That's just a platitude that does nothing but, ironically enough, paralyze one's ability to act. You're focusing on the psychological details of the sadists' world because you're missing the point of the general type of problem, and that is not nearly as easily dismissible. The hypothetical 'sadists' do not have to take pleasure in just the brute fact that their victims are in pain, merely derive some benefit from it.
That's because I think of this from a pragmatic viewpoint: morality is a set of strategies and tools that humans use to achieve certain collective goals and satisfy certain needs. Our positive and negative emotions are rewards and warnings (respectively) that keep us alive, keep us working towards those goals, and keep those needs stable. The fact that we all have them and we all think of them as good and bad is what makes them good and bad. We get to define the concept, after all.
Before I continue, I should apologize for misleading you into thinking I was using preference utilitarianism: I don't think there is a name for the kind of utilitarianism I'm using. Yet.
The key here is that the goals and needs are collective-- not just in the sense that many of them are goals of societies, but also in the sense that they are universal to humans (and other animals as well). That is what makes them intrinsically good-- they are intrinsic to us as living things. But on the other hand, there are also relative goods: things that the individual wants but may not share with the rest of the species *. For any given desire (and by extension, the pain/pleasure associated with it) we can evaluate how relative or universal it is among humans based on the percentage of the human species that shares it. The utility of an action is therefore based not just on the amount or intensity of the positive/negative emotions experienced, but on how relative/universal the desires beneath those emotions are. If an action causes lots of negative emotions, but they are highly relative (say you have someone who is easily offended), then its negative utility is much lower than that of something that causes lots of negative emotions AND is highly universal in nature (like murdering someone).
Of course, many desires are tied to false or untenable beliefs about the world, and some beliefs can trigger emotions directly. In these cases we can substitute the relative/universal value with a truth value of the belief instead, since belief systems should be rational. Science and logic, those are the ethics of belief. Agreed?
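To make that weighting concrete, here is a rough sketch of the arithmetic in Python; the names, the 0-to-1 scale, and the multiplication itself are just my own illustration of the idea, not a formal definition:

    def weighted_utility(intensity, universality):
        # intensity: signed strength of the emotion (positive = pleasure, negative = pain)
        # universality: fraction of the species sharing the underlying desire, from 0.0 to 1.0
        # (for desires resting on beliefs, substitute the belief's truth value here instead)
        return intensity * universality

    def action_utility(experiences):
        # net utility of an action: sum over every (intensity, universality) pair it causes
        return sum(weighted_utility(i, u) for i, u in experiences)

The point is that the same raw intensity of pleasure or pain counts for more or less depending on how widely the desire behind it is shared.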
So in the case that someone does harm to others to benefit only themselves, the happiness they experience is highly relative, while the pain and suffering others experience is highly universal, with utility values to match. Indeed, the simple fact that not even the sadist, sociopath, or greedy bastard would want to experience what their victims experience should speak for itself as to how universal that suffering is.
To take a more extreme example, say the human species confronted a sentient alien probe capable of feeling pain and pleasure. However, it's a berserker programmed to destroy all life it comes across, and that's what makes it happy. The pleasure it gets from wiping us out is just about 100% relative on this scale, and the risk we face is 100% universal negative utility. The only option we have is to destroy it before it destroys us, because its goal system and personality are completely incompatible with our existence.
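Plugging rough numbers into the sketch above (again, purely illustrative figures):

    probe_case = [(+10, 0.0000001),  # the probe's satisfaction, shared by essentially nothing else alive
                  (-10, 1.0)]        # our destruction, as universal a harm as there is
    action_utility(probe_case)       # comes out overwhelmingly negative

The math just restates the conclusion.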
Sometimes, one can choose to undergo some amount of discomfort so that other people's lives are improved. Most times, we consider such behavior laudable. Is it ever permissible to make others do such things by force? Is it obligatory?
Say we are forced to relocate a small community living near the aquifer of a major river. The mere presence of this community is using up or polluting all the water, causing the desertification of the area downstream. To make things more interesting, let's say that water has become very scarce downstream due to a decades-long drought, and violence has broken out before over water rights and rationing disputes. Now, the people who live upriver do not live there by choice, and so could be considered blameless for the impact their life has on the people and ecosystem living downriver from them. Understandably, they don't want to leave. They are comfortable with the way things are, and don't want that kind of stress. Some are nostalgic for the place and have formed attachments to it. Some of them are leery of the government asking such a thing of them. Some aren't convinced that they are causing a problem in the first place. Some have prejudices against the people downriver. Wait, what?
We have to remember that in cases such as this most of the people won't leave willingly, even though by any measure their continued presence has become unethical. Many of those reasons sound plausible, but they aren't very noble either. To use the ethics I described above: the stress of moving is not nearly as bad as experiencing violence, as some of the people downriver have. Being comfortable is understandable, but again not something that should sway our hand when weighed against the suffering others are experiencing. The nostalgia and attachments they have formed are also understandable, but the people living downriver don't care-- clearly not a very universal sentiment. And the ones who are paranoid about the government, don't believe in the measurable harm being done downriver, or are prejudiced against the people living there are clearly not being very rational. And besides, the community is fairly small, but disproportionately problematic. Therefore, the utility of moving them against their will is higher than the utility of letting them stay, by several measures. If it helps, we can offer them money in compensation.
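For what it's worth, the crude sketch from earlier would tally the comparison something like this; every number here is invented purely to illustrate the shape of the argument:

    stay = [(+2, 0.3),   # the community's comfort, attachments, and nostalgia
            (-8, 0.95)]  # continued scarcity and violence downriver
    move = [(-3, 0.3),   # stress of relocation (partly offset by compensation)
            (+8, 0.95)]  # relief and safety downriver
    action_utility(move) > action_utility(stay)  # True, by a wide margin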
Now here's a question for you: if compensation can be provided to the sacrificed minority after the threat has passed, does that mitigate things in your view?
This isn't some pie-in-the-sky situation. If you have a position of power in one nation that's in a constant state of cold war with another, and an act of terrorism against them threatens to turn it into a very hot war unless the perpetrators are brought to justice immediately, what do you do if you've no idea who actually did it? Suppose further that the only scapegoat you can arrange on short notice without too large a chance of your enemy discovering you're hoodwinking them is innocent of anything heinous.
I'm not a politician or a diplomat, so take my ideas with a grain of salt. With that disclaimer out of the way, first I would reassure them that the attack was not carried out by my government and that we are currently searching for the real perpetrators. Even if they don't buy my honesty, hopefully that should buy time with which to either identify or catch the real perpetrators. In the meantime, the scapegoat option shouldn't go on the table unless I am sure the risk of nuclear war is too great-- the sheer negative utility of that risk should be obvious. If I have to resort to a scapegoat, I might as well lie twice and give the scapegoat a new name, life, and government-paid permanent vacation. Then the worst I have to put up with is conspiracy theorists who get too curious.
Or: you're the leader of nation C in the previous scenario, and you're watching as the A's commit atrocities against the B's. Your nation has a more powerful military, but your people are generally apathetic, and some even agree with the A's. However, you know that (1) if you give aid to the B's, the A's will likely attack you, (2) your intelligence network is good enough to predict where they will attack you, and (3) if you allow the attack to happen without mobilizing beforehand, your people will be much more energized against the A's, because you can paint the incident as an atrocity, even though it will sacrifice many innocents among your own troops. Does utilitarianism predict that you're obligated to do this?
Again, remembering my disclaimer about politics: first, because the belief system of the A's is of suspect epistemology (to say the least), and because they quite possibly pose a serious threat to my country and others in the future, I most certainly have a duty to act according to my utilitarian scheme. In this case, I would prefer to push sanctions against the government of the A's through whatever international body I can (hopefully with me in office it will have some real power, unlike the UN). That takes the burden off of me, and hopefully places the behavior of the A's under the scrutiny of the international community.
Barring that, I secretly help set up and fund a resistance movement among the B's while using the media channels of my own country to generate attention and public scrutiny from my own people (hey, I'm their leader; I personally can't imagine accepting such a position unless I was elected legitimately, so they must care a little about the things I care about, right?). This way, if I did it right, there will be no evidence to tie me to the B's resistance movement, so the A's will look like the aggressors (and will certainly be guilty of escalation) if they attack my country.
But again, I'm no statesman, so take my ideas with a grain of salt.
Holistic utilitarianists, for example. I even alluded to Jack Smart earlier, who is but the most famous example. If you're curious, his answers would be that yes, you can be obligated to frame an innocent person as a scapegoat, and yes, intentionally sacrificing some innocents can similarly be morally obligatory. That's what I meant by 'bite the bullet' in my first post in this thread.
Holistic utilitarianism? Never heard of it. Mind explaining?
The former. Character is a kind of internal disposition, or reaction over all possible circumstances (depending on whether or not you're a behaviorist, which is not something I'd like to get into at this point). You become virtuous by doing good--but this isn't by the mere fact that the good actions were performed, but rather just a reflection of the fact that people are ultimately trainable animals. Thus, if you do good actions only because, purely by chance, you never find yourself in circumstances where you realize the opportunity to get away with doing otherwise, you're still not virtuous.
And there seems to me to be something wrong with that. We can't get into that person's head, so how are we supposed to say that they don't have a virtuous character? That seems like an epistemological flaw in your theory. Also, I understand behaviorism, but there is also a lot of evidence for cognitive and evolutionary theories that should be considered, I think. That's why I base my ethics on the idea of common goals: I think they are more universal, easier to explain, and better account for why, although most people harp on the differences, human cultures came to such similar morals. It can't all be training (as behaviorism suggests), or we should see more differences between groups the farther apart they are than we actually do. At least, that's my understanding. Of course, another advantage is that my system was designed explicitly to be able to deal with entities other than humans, such as animals or artificial intelligences (hey, if the British Parliament thinks we'll have to consider the rights of robots someday, that's good enough for me!). We may have much less in common with them, but they should have some kind of moral consideration!
Also, it just strikes me as somehow condescending (and selfish) to say "I don't think animals have value, but I won't be cruel to them because it would be wrong of me to get my hands dirty with their blood." Even if it is a valid judgment according to that ethical system, let alone my own.
Given that deontologists focus on behavior proscription much more directly than other flavors of ethicists, including even utilitarianists, what in the world are you talking about?
I think you misunderstand-- I know what they are trying to accomplish. The problem is that, IMO, they don't accomplish that task very well. For example, such systems often lack the ability to adapt to the situation at hand, and have difficulty dealing with forced-choice scenarios in which different rights or duties conflict. A classic example would be going down a hill in a car whose brakes have been cut, where all paths lead to someone getting hurt. Such a dilemma could actually happen, and we don't need Heath Ledger's Joker to set it up. What is a deontologist supposed to do? Blame himself? And perhaps most damning to me, does he take responsibility for any unintentional carelessness on his part *? Near as I can tell, no, because the chain of cause and effect is not considered in deontology. And that seems to me to be nothing short of reprehensible.
* As a side note, would you consider intent to be an agent-centric evaluation? It never made much metaphysical sense to me to treat intent as a characteristic of an action when actions only have physical characteristics.