Formless wrote:What it means is that even if Nigerians are happy with what they have, it does not necessarily mean our society can emulate Nigeria.
It means that we don't have a good basis for wanting Nigeria to better emulate, say, a classically Liberal Democracy or Welfare State, and conversely no good basis for making our goal the construction of such a society, when it is apparent that such a State is not what gives its citizens the most happiness. Not that emulating Nigeria would, but once we can't say what specifically would, what use is the system?
I never said it was our genes that grant us value. Where did you get that impression? Most ethical questions pertain to humans. If you want to talk about animals, fine, then we can talk about animals and how answers pertaining to them differ from answers pertaining to us. How does that change the ethics we use to address human problems? I don't think it does, but go ahead and explain that one to me.
Goddamn it, no. I brought this up in my first post because you haven't given a good reason to be talking specifically about humans or animals at all. Under Utilitarianism, it's just the general happiness of some number of beings. What sort of beings, and what sort of happiness, then? The point is that you need a consistent yardstick for what constitutes a being that should be judged and what its worth is as a being, and even once you have one, we still won't be talking about humans, because that still covers an indefinite number of possible beings, and the fact that you can intuitively understand a few of them still won't make Utilitarianism valid.
You are calling out a fallacy where none exists. I never stated that we must evaluate ethical questions pertaining to humans based on our social instincts alone (though we certainly do every day); I used that as a counterargument against the assumption that other human minds are so alien to the individual that he or she cannot have knowledge of them or of how consequences affect them. Whether that knowledge is scientific or intuitive or just practical is beside the point, though I prefer scientific knowledge where I can get it. Is it not that supposed lack of knowledge that underlies why you think Utilitarianism and other Consequentialist systems are infeasible or impossible to work out?
Oh, I was confused. I thought you were actually proposing an argument against the epistemic problems with Utilitarianism, but it seems you just wanted to counter the point that it is difficult or impossible to judge others' mental states well enough to make the best decision for their well-being with 'lol no i bet we can'.
In fact, if we cannot have such knowledge, then deontological ethics are also infeasible. But I digress.
There are epistemic barriers for deontological ethics too, but it isn't concerned specifically with A. producing the best possible consequences of any single action you take, or B. requiring knowledge of all your subjects' mental states, so it comes out of this critique pretty clean.
How, exactly, is it so hard to understand? Your argument is that the world is too complex to make accurate predictions about the future, and that other humans are too alien from the individual to assign utility values properly. But although we aren't perfect at it, we do it all the time, accurately enough to make your criticism seem... weird. Heck, at least two methods of making these predictions easier have been mentioned in this thread: Rule Utilitarianism and Virtue Ethics. What do you really think we need before such ethics are possible? And I do hope you manage to come up with a criticism that doesn't hamper all ethics pretty equally, though given your username I guess I can't be disappointed if you don't.
To sum up:
1. The fact that we are good at intuiting other human beings and working together for our shared survival is not helpful to consequentialism, since making it a universal system would require that it be equally applicable to beings who are not intelligible to us;
2. 'We are good at using instincts to get along with humans' /= 'We have enough information to make the choice with the most possible happiness for the greatest number of beings', where 'beings' and 'happiness' have yet to be defined.
How are you not getting this?
And as for your last few sentences there... I don't even... No.