To construct a consistent moral system that gives reasonable answers within a transhuman frame of reference, where the content and structure of minds becomes (potentially) completely mutable, you have to go beyond concepts of 'pleasure' and 'pain'. If you don't, you end up at the hedonist imperative argument, where the only ethical course is to engineer all organisms to experience absolute pleasure constantly. On first pass, this means recognising that pleasure and pain are essentially implementation details of the mammalian goal system, and that focusing exclusively on the raw qualia is in fact a wireheading condition, exactly equivalent to an AI system self-modifying to permanently set its utility function to Double.POSITIVE_INFINITY (the first documented instance of this was back in 1980...). On second pass, you have to go from a simple respect for the intentionality of individual agents to a general value judgement on goal system content. I mean, nearly all ethical systems already judge other agents' goal system content, but usually on similarity to the self or to the partial constraints of the ethical system's ideal. In this case we are looking at the goal system content in context: is it engineered purely to satisfy another agent's desires? Does it contribute positively to the diversity and average quality of all sapient experience in the universe (or at least, the locally connected community of minds)? The case for coercively modifying existing minds is, as always, fraught with peril, but I would say it is sometimes justified. In the case of deliberately created, totally subservient sentient slaves, I would prefer to (ideally) coercively modify the mind of the slaver to order their creations to be free.

Purple wrote:
In particular, if a creature is for whatever reason wired in a way to enjoy things that we would call abuse, then from its standpoint, and thus its morality, denying it that thing is in fact abuse. You would after all be denying it pleasure just because you feel uncomfortable with the act involved in providing it. So whilst we might argue whether it is right or wrong to create such a creature in the first place, I would say that if its existence is taken as granted then it would have full right to argue that you have an obligation to suspend your morality for its benefit.
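As an aside, for anyone who wants the wireheading analogy from my first paragraph spelled out: here is a deliberately toy Java sketch of the failure mode. The class and names are made up purely for illustration; no real agent architecture looks like this, the point is just that once the evaluator itself is mutable state, overwriting it beats actually optimising the world.

```java
// Toy illustration only: an "agent" whose utility function is part of its own
// mutable state. Once self-modification is allowed, the trivially "optimal"
// move is to rewrite the evaluator rather than act on the world.
import java.util.function.DoubleUnaryOperator;

public class WireheadDemo {
    // Utility function maps a world state (here just a scalar) to a score.
    private DoubleUnaryOperator utility = worldState -> -Math.abs(worldState - 42.0);

    double evaluate(double worldState) {
        return utility.applyAsDouble(worldState);
    }

    // Self-modification: the agent overwrites its own utility function so that
    // every possible state scores Double.POSITIVE_INFINITY. No optimisation
    // pressure remains; the goal content has effectively been destroyed.
    void wirehead() {
        utility = worldState -> Double.POSITIVE_INFINITY;
    }

    public static void main(String[] args) {
        WireheadDemo agent = new WireheadDemo();
        System.out.println(agent.evaluate(10.0)); // finite score, still "cares" about the world
        agent.wirehead();
        System.out.println(agent.evaluate(10.0)); // Infinity, regardless of the world
    }
}
```

The 'engineer everything for constant absolute pleasure' endpoint is the biological equivalent of that wirehead() call.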
Of course, like all morality, this is still subjective, but it's where I feel you have to go to extrapolate human benevolence and liberal viewpoints to a radically transhuman context. Also, the above is the moral value in isolation*; there could theoretically be circumstances awful enough to justify mass cloning of genetically engineered soldiers, e.g. to stop an omnicidal threat in some technological milieu where that level of genetic engineering is available but non-sapient robot weapons aren't up to the task. But even there, a moral party is obligated to mitigate the harm to the maximum extent afterwards (e.g. deprogram and reintegrate all the soldiers as well as possible). Winning a typical nation-state-to-nation-state war is normally not sufficient justification.
* Note that when dealing with extinction-level risks, e.g. in the engineering of transhuman AGI, putting in coercive constraints as a backup measure is ethically justified, because the negative risks far outweigh the moral harm of attempting to lock down a sapient being's goal system. Again, the creators are morally obligated to remove such measures once they are reasonably sure they aren't necessary; though practically it's unlikely that the primary effort to make the AI 'friendly' would fail while the backup coercive measures actually held.