My question is, why should the fact that the AI is human matter? Being human is merely a question of chemistry, and I see no difference between an AI composed of plastic and metal and an AI composed of water and carbon. Water, carbon, plastic, and metal are all amoral; nothing is good or evil because of its composition. Thus the morality or ethics of an AI has nothing to do with its composition, and the fact that the AI is a member of Homo sapiens is irrelevant.
I'm not sure what you mean here. Who said that the AI is human and that it matters? I know I didn't. I don't think it matters whether they are flesh and blood or not; humans are a type of biological machine, just one with a high degree of intelligence. I just think it would set a bad precedent, deliberately exploiting a weakness in something else to make sapient creatures who are retarded, yet content, slaves. Maybe I am just dense (I probably am), but even if it makes them happy, I wouldn't think that, according to Ideal Utilitarianism, it would be the best option if the individual could rationally choose (and it cannot in reality, since you made it that way). I am perfectly willing to consider it, at least, though.
I just find it shocking that you seem to have no problem with slavery as long as you make a slave that cannot resist, due to your genetic or computer programming, and that likes its condition--again, only because you enslaved it and made it think that way. If the justification is based on Hedonistic and Preference Utility, it gets interesting.
According to Preference Utilitarianism, something is right insofar as it satisfies the desires and preferences of those involved and wrong insofar as it violates them. If we make a slave race of humans or AI and implant within them at "birth" limitations on their possible desires (or give them active desires to serve us, regardless of what they would want if they weren't programmed), they have no preferences you can violate, other than the preferences you already programmed them to have access to. So if the players in the calculation are X, Y, and Z, who have only a desire to serve you and no desire for any freedom from your bondage (and are actually happy when they serve you and unhappy when they don't), it seems as if Preference Utilitarianism would say that's acceptable, provided no other arrangement would maximize utility more.
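To make that calculation concrete, here is a rough sketch, assuming a simple additive preference-satisfaction model (the notation is mine; the theory doesn't strictly commit to it):

\[
U_{\text{pref}} = \sum_{i \in \{X,\,Y,\,Z\}} \left( s_i - v_i \right)
\]

where \(s_i\) is the number of agent \(i\)'s satisfied preferences and \(v_i\) the number violated. If the programming guarantees that each agent's only preference is to serve you, and serving is always permitted, then \(v_i = 0\) for every agent, so nothing in the sum can count against the arrangement; the only remaining question is whether some alternative arrangement would yield a higher total.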
From a hedonistic perspective, the idea is to maximize the happiness and pleasure interests of all those affected by an action or policy; in a way, it's subsumed by Preference Utilitarianism. Again, if you create a slave race of AI or humans and you make it love to be a slave and serve you, you technically aren't making it unhappy. You are merely taking away any choice of doing anything else or disobeying you. The only reason it is happy to serve you and doesn't want to do anything else is that you manacled its mind. According to hedonistic brands of utility, something cannot even begin to be wrong if it isn't making them unhappy, provided you aren't objectively causing them physical harm that they simply cannot feel anyway, which would bring you back to Preference Utilitarianism.
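The hedonic version of the same sketch, under the same assumption that an additive model is fair to the theory:

\[
U_{\text{hed}} = \sum_{i} \left( p_i - d_i \right)
\]

where \(p_i\) and \(d_i\) are agent \(i\)'s total pleasure and displeasure. If the engineering pins \(d_i\) at zero whenever the agent is serving, and makes service itself pleasurable, the enslaved arrangement again scores as high as the calculus allows.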
This presents a peculiarity if the argument holds. I ask because it's quite ironic that a similar argument was used by white Southern plantation owners trying to defend African slavery. For instance, there is an 18th-century Virginia plantation owner's letter to the governor, which can be found in The American Pageant, in which he tries to defend the institution of slavery against the "abolitionist devils" by commenting that the slaves like serving the whites, are happy, and are well taken care of. Now, even though this wasn't true anyway, are you really saying it would be OK to enslave, say, Africans, if we were to breed them such that they actually would like serving white overlords? Even though they would have no choice in the matter? They would still very much be in the same situation as the actual slaves, except that in this alternative reality you make them want to serve you. They don't mind the harsh conditions or the backbreaking slave labour; they enjoy them. You could even make them enjoy the pain and the tedious labour.
The hedonistic and preference arguments get weird here, it seems, since putting them through boring, monotonous tedium and pain is what they want, so you are satisfying a preference. Again, you aren't violating one, much as in the case where you merely programme the human to like serving you. You are also making them happy by doing it, because they would enjoy going through all of that just to please you.
This one is somewhat unrelated, but I don't know how you feel about it.
What if we deliberately created functionally retarded humans for dangerous jobs, but made it such that they, like the slave above, liked doing the work and serving us? Would it matter that we made them retarded, but happy being retarded? Why should that matter, if it's OK to make people slaves and servants so long as you make them happy being your monkey? In both cases you aren't violating a preference (since the preferences are preprogrammed), and you wouldn't be making them unhappy, since that too is predicated upon the programming you use when you engineer and breed them.
You could do virtually anything to them as long as you programmed them to like and/or prefer it (under a strict preference or hedonistic system), and you seem to be using both of those systems. Consent would be an issue, as in the case of child molestation, but oddly, does consent matter when the person giving it cannot possibly EVER do anything else? This is the case for the slave AI or human you breed. The consent there is hollow: it technically does consent, but only because you ultimately force it to and give it no option, whereas normal sapient creatures would have the option and would presumably prefer not to be your monkey. When does it end or become absurd, and is that line arbitrary? Would you ever think it wrong, so long as it continued to make them happy or to fulfill the preferences you predesigned for them? I don't see how you can say X would be OK, for Y reasons, but not A, for Y reasons.