The ethics of creating an AI

SLAM: debunk creationism, pseudoscience, and superstitions. Discuss logic and morality.

Moderator: Alyrium Denryle

Lord Zentei
Space Elf Psyker
Posts: 8742
Joined: 2004-11-22 02:49am
Location: Ulthwé Craftworld, plotting the downfall of the Imperium.

The ethics of creating an AI

Post by Lord Zentei »

In the Singularity thread in OSF, Ender states the following:
Ender wrote:
Mad wrote: However, that's not going to stop researchers from trying to create an AI. That's a goal for the field of computer science, and it isn't going to go away.
One would hope a simple ethics class would fix that problem. I seriously cannot conceive how people think the creation of an AI is ethical by any stretch of the imagination.
So, this thread is to discuss the ethics of creating a human-equivalent AI. Is it ethical or not, and why? What is the burden of proof in such a case as this?
CotK <mew> | HAB | JL | MM | TTC | Cybertron

TAX THE CHURCHES! - Lord Zentei TTC Supreme Grand Prophet

And the LORD said, Let there be Bosons! Yea and let there be Bosoms too!
I'd rather be the great great grandson of a demon ninja than some jackass who grew potatos. -- Covenant
Dead cows don't fart. -- CJvR
...and I like strudel! :mrgreen: -- Asuka
Surlethe
HATES GRADING
Posts: 12267
Joined: 2004-12-29 03:41pm

Post by Surlethe »

I don't see why not, so long as it's treated as a morally sentient being.
A Government founded upon justice, and recognizing the equal rights of all men; claiming no higher authority for existence, or sanction for its laws, than nature, reason, and the regularly ascertained will of the people; steadily refusing to put its sword and purse in the service of any religious creed or family is a standing offense to most of the Governments of the world, and to some narrow and bigoted people among ourselves.
F. Douglass
Dooey Jo
Sith Devotee
Posts: 3127
Joined: 2002-08-09 01:09pm
Location: The land beyond the forest; Sweden.

Post by Dooey Jo »

I don't see what the problem with creating one is. Destroying one would be equivalent to destroying a biological intelligence, but what's wrong with creating one?
"Nippon ichi, bitches! Boing-boing."
Mai smote the demonic fires of heck...

Faker Ninjas invented ninjitsu
petesampras
Jedi Knight
Posts: 541
Joined: 2005-05-19 12:06pm

Re: The ethics of creating an AI

Post by petesampras »

Lord Zentei wrote:In the Singularity thread in OSF, Ender states the following:
Ender wrote:
Mad wrote: However, that's not going to stop researchers from trying to create an AI. That's a goal for the field of computer science, and it isn't going to go away.
One would hope a simple ethics class would fix that problem. I seriously cannot conceive how people think the creation of an AI is ethical by any stretch of the imagination.
So, this thread is to discuss the ethics of creating a human-equivalent AI. Is it ethical or not, and why? What is the burden of proof in such a case as this?
The burden of proof is clearly going to be on the side trying to show that it is not ethical, since they are, in effect, claiming the existence of some violation of ethics in the creation of an A.I. They only have to demonstrate one consequence of A.I. which necessarily violates a code of ethics. Those claiming it doesn't would have to go through every conceivable consequence of A.I. and compare it to every possible rule of ethics to demonstrate their case.
petesampras
Jedi Knight
Posts: 541
Joined: 2005-05-19 12:06pm

Post by petesampras »

Dooey Jo wrote:I don't see what the problem with creating one is. Destroying one would be equivalent to destroying a biological intelligence, but what's wrong with creating one?
Why would an A.I. have moral worth? Why would 'killing' one matter?
Lord Zentei
Space Elf Psyker
Posts: 8742
Joined: 2004-11-22 02:49am
Location: Ulthwé Craftworld, plotting the downfall of the Imperium.

Post by Lord Zentei »

petesampras wrote:
Dooey Jo wrote:I don't see what the problem with creating one is. Destroying one would be equivalent to destroying a biological intelligence, but what's wrong with creating one?
Why would an A.I. have moral worth? Why would 'killing' one matter?
If it is wholly human-equivalent, it's inevitably fully sentient to the extent that a human is. Thus, presumably, it would have the moral worth of a human.

As for the main question: let's look at things from a different perspective: is it ethically sound of a bioengineer to use stem cells to create a fully functional, disembodied human brain that can be plugged into a mechanical interface? If not, how would it be ethically sound of a computer scientist to create an AI that is functionally equivalent to such a brain?
petesampras
Jedi Knight
Posts: 541
Joined: 2005-05-19 12:06pm

Post by petesampras »

Lord Zentei wrote:
petesampras wrote:
Dooey Jo wrote:I don't see what the problem with creating one is. Destroying one would be equivalent to destroying a biological intelligence, but what's wrong with creating one?
Why would an A.I. have moral worth? Why would 'killing' one matter?
If it is wholly human-equivalent, it's inevitably fully sentient to the extent that a human is. Thus, presumably, it would have the moral worth of a human.

As for the main question: let's look at things from a different perspective: is it ethically sound of a bioengineer to use stem cells to create a fully functional, disembodied human brain that can be plugged into a mechanical interface? If not, how would it be ethically sound of a computer scientist to create an AI that is functionally equivalent to such a brain?
Why would you want an A.I. to be wholly equivalent to the human brain (functionally)? That is the core of this issue, it seems. An A.I. would surely be more useful if built with different desires, etc., from a human's. Why build an A.I. which is capable of developing feelings for the opposite sex? You guys seem to be assuming that an A.I. as advanced as the human brain would have to be functionally the same as the human brain.
Lord Zentei
Space Elf Psyker
Posts: 8742
Joined: 2004-11-22 02:49am
Location: Ulthwé Craftworld, plotting the downfall of the Imperium.

Post by Lord Zentei »

petesampras wrote:Why would you want an A.I. to be wholly equivalent to the human brain (functionally)? That is the core of this issue, it seems. An A.I. would surely be more useful if built with different desires, etc., from a human's. Why build an A.I. which is capable of developing feelings for the opposite sex? You guys seem to be assuming that an A.I. as advanced as the human brain would have to be functionally the same as the human brain.
"Why would you want that" is a tangent to the question. The question is not why we would want that, but whether it is ethical to do it. The OP has a link to the thread whose discussion spawned this one, if you're interested.

As for the AI having to be functionally the same as a human brain - one may attempt to argue that it does not in fact have to be so; however, the AI would have to have the intellectual capabilities of a human brain, as well as being sentient/sapient to the extent that a human is, to be classed "human equivalent". In this regard, it is functionally equivalent to a human, regardless of how these functions are implemented - sexual attraction and some other motivating drives may vary, of course, though it would have to have motivating drives to make it qualify for the OP criterion.
Ender
Emperor's Hand
Posts: 11323
Joined: 2002-07-30 11:12pm
Location: Illinois

Re: The ethics of creating an AI

Post by Ender »

petesampras wrote:The burden of proof is clearly going to be on the side trying to show that it is not ethical, since they are, in effect, claiming the existence of some violation of ethics in the creation of an A.I. They only have to demonstrate one consequence of A.I. which necessarily violates a code of ethics. Those claiming it doesn't would have to go through every conceivable consequence of A.I. and compare it to every possible rule of ethics to demonstrate their case.
Not a chance cockstain. Claiming it is unethical is putting forth a negative. That's what the prefix UN means - not. So since you can't prove a negative, the burden of proof is on those claiming it would be ethical.

In brief though, the issue is that by creating a true artificial intelligence and giving it all the rights and abilities of humans, you create an advanced resource competitor that can adapt and reproduce faster than us. Robots can be built at a far faster rate than humans, and a true artificial intelligence will adapt like we have, only faster.

While it has the potential to work out fine, it has equal, if not greater, potential to result in this new entity becoming dominant. That risks the future of our species in return for the creators getting a bit of glory.

The alternatives to that are either not giving it the same rights and abilities as humans - slavery - or the proposed hardwiring of them to be friendly towards humans, a solution that can be adapted around, and is again unethical because it results in the loss of free will, resulting in slavery again.


So since we all know your bullshit about proving a negative is just that, and I've given a rough outline of the lack of ethics here, let's hear how this is ethical in any sense of the term.
Last edited by Ender on 2006-08-12 09:57am, edited 1 time in total.
بيرني كان سيفوز
*
Nuclear Navy Warwolf
*
in omnibus requiem quaesivi, et nusquam inveni nisi in angulo cum libro
*
ipsa scientia potestas est
Ender
Emperor's Hand
Posts: 11323
Joined: 2002-07-30 11:12pm
Location: Illinois

Post by Ender »

petesampras wrote:
Dooey Jo wrote:I don't see what the problem with creating one is. Destroying one would be equivalent to destroying a biological intelligence, but what's wrong with creating one?
Why would an A.I. have moral worth? Why would 'killing' one matter?
That you ask this question points sharply to why it is unethical. If you don't treat it and give it at least the same rights and abilities as humans, you have a slave.
Rye
To Mega Therion
Posts: 12493
Joined: 2003-03-08 07:48am
Location: Uighur, please!

Post by Rye »

What's so bad about that? A mechanical beast of burden would probably be more ethical than a biological one, since it won't feel pain and wouldn't dread death.
EBC|Fucking Metal|Artist|Androgynous Sexfiend|Gozer Kvltist|
Listen to my music! http://www.soundclick.com/nihilanth
"America is, now, the most powerful and economically prosperous nation in the country." - Master of Ossus
Lord of the Abyss
Village Idiot
Posts: 4046
Joined: 2005-06-15 12:21am
Location: The Abyss

Post by Lord of the Abyss »

Rye wrote:What's so bad about that? A mechanical beast of burden would probably be more ethical than a biological one, since it won't feel pain and wouldn't dread death.
But if it's human equivalent, then it's not a beast.

I don't see creating a human equivalent AI as ethically different from creating a child. We would have a responsibility to make them as capable and healthy as we could, to treat them well, and to set them free when they can handle themselves.
Mark S
The Quiet One
Posts: 3304
Joined: 2002-07-25 10:07pm
Location: Vancouver, Canada

Post by Mark S »

Everyone's got to have a purpose in life. We have to find ours, an AI would be given it directly. Is that the same as putting it into slavery? You've given it a clarity of purpose that no human will ever have, "I am a miner. It is who I am, it is what I am, it is all I ever want to do." The AI can still learn other tasks and form opinions but it will always see itself as a miner or whatever. Perhaps it will even pity us for our unguided lives. Is that the same as putting it into slavery?
Writer's Guild 'Ghost in the Machine'/Decepticon 'Devastator'/BOTM 'Space Ape'/Justice League 'The Tick'
"The best part of 'believe' is the lie."
It's always the quiet ones.
Ender
Emperor's Hand
Posts: 11323
Joined: 2002-07-30 11:12pm
Location: Illinois

Post by Ender »

Mark S wrote:Everyone's got to have a purpose in life. We have to find ours, an AI would be given it directly. Is that the same as putting it into slavery? You've given it a clarity of purpose that no human will ever have, "I am a miner. It is who I am, it is what I am, it is all I ever want to do." The AI can still learn other tasks and form opinions but it will always see itself as a miner or whatever. Perhaps it will even pity us for our unguided lives. Is that the same as putting it into slavery?
Slavery may not be the best term, but I can't think of anything that better describes it. It is robbing it of its free will, and I can't see how that is any different from forcing someone to do as you command. It is just slightly more humane than beating them senseless.
petesampras
Jedi Knight
Posts: 541
Joined: 2005-05-19 12:06pm

Re: The ethics of creating an AI

Post by petesampras »

Ender wrote:
petesampras wrote:The burden of proof is clearly going to be on the side trying to show that it is not ethical, since they are, in effect, claiming the existence of some violation of ethics in the creation of an A.I. They only have to demonstrate one consequence of A.I. which necessarily violates a code of ethics. Those claiming it doesn't would have to go through every conceivable consequence of A.I. and compare it to every possible rule of ethics to demonstrate their case.
Not a chance cockstain. Claiming it is unethical is putting forth a negative. That's what the prefix UN means - not. So since you can't prove a negative, the burden of proof is on those claiming it would be ethical.
Don't be stupid. You can't prove negative existence; that does not mean you can't prove a negative. Unethical is clearly the existing property, not ethical. This is clearly evident from the fact that unethical actions possess being unethical as a property - for example, murder. Throwing a ball in the air and catching it is essentially not an unethical action, since it LACKS the property of being unethical. Clearly the number of possible ethical actions in this world vastly outweighs the number of unethical actions.

Look at it realistically. Are you going to call all actions unethical until proven otherwise? Given the near infinite number of possible actions of which only a tiny proportion will actually be unethical, that is pretty damn stupid.

Of course, you could define things as being ethically neutral, neither ethical nor unethical, but that won't change this argument at all.
Rye
To Mega Therion
Posts: 12493
Joined: 2003-03-08 07:48am
Location: Uighur, please!

Re: The ethics of creating an AI

Post by Rye »

Ender wrote:Not a chance cockstain. Claiming it is unethical is putting forth a negative. That's what the prefix UN means - not. So since you can't prove a negative, the burden of proof is on those claiming it would be ethical.
You can prove negatives; this "you can't prove a negative" meme is a misrepresentation, as is the claim that "unethical" makes no specific claims. The claim "homosexuality is unethical" would have to be backed up just like this would. You'd have to show why it was unethical, who it harmed, etc., because "unethical" behaviour requires conscious choices and actions by an arbitrator.
In brief though, the issue is that by creating a true artificial intelligence and giving it all the rights and abilities of humans, you create an advanced resource competitor that can adapt and reproduce faster than us.
What would the robots be competing with us for? Water and food? Unlikely. They would compete for electricity, I guess, but since we're designing them from the ground up, we don't really have an impetus to make a species that could end up hostile to us.
While it has the potential to work out fine, it has equal, if not greater potential to result in this new entity becoming dominant.
I don't think this is the case at all, no more than domesticating the dog had the chance. In this case, the R&D would just be the equivalent of domesticating computers instead of wolves, and they're not even wild to begin with!
The alternatives to that are either not giving it the same rights and abilities as humans - slavery - or the proposed hardwiring of them to be friendly towards humans, a solution that can be adapted around, and is again unethical because it results in the loss of free will, resulting in slavery again.
What's so wrong about losing free will if that free will only pertains to unethical outcomes? You're telling me that if we could find a way to program all humans to be unable to molest children that the ethical course of action is to preserve the free will to molest kids? Is it fuck, an artificial barrier on unethical behaviours is perfectly responsible.
So since we all know your bullshit about proving a negative is just that, and I've given a rough outline of the lack of ethics here, let's hear how this is ethical in any sense of the term.
Building robots that can learn and have inhibitions on their behaviours that prevent them becoming homicidal gives us a new, disposable workforce to deal with things too dangerous for humans. Since humanity's survival is the apparent basis for morality, making a slave race of robots to make our lives better (and, like Kryten in Red Dwarf, if they were happy from doing chores and the like, I don't see an issue) all seems pretty ethical, going by utilitarianism.
Lord of the Abyss
Village Idiot
Posts: 4046
Joined: 2005-06-15 12:21am
Location: The Abyss

Re: The ethics of creating an AI

Post by Lord of the Abyss »

Rye wrote: Since humanity's survival is the apparent basis for morality, making a slave race of robots to make our lives better (and, like Kryten in Red Dwarf, if they were happy from doing chores and the like, I don't see an issue) all seems pretty ethical, going by utilitarianism.
If it was an organic slave race, would that be different? Would it be okay to genetically engineer a race of sex slaves, for example?

For that matter, your definition of morality seems to imply we could enslave any aliens we come across as well, as long as we did brain surgery or drugged them so they'd be happy about it.
Rye
To Mega Therion
Posts: 12493
Joined: 2003-03-08 07:48am
Location: Uighur, please!

Post by Rye »

We already have organic slave races, they're called cattle and pets. It doesn't have to be an "all or nothing" proposition of "no rights" vs "total rights." There is room for compromise in the middle, that's why animals have more rights now than they used to.

What would be particularly wrong with domesticating aliens?

As for sex bots/sex slaves, well, would it be wrong to use a vibrator that learned ways of better pleasuring its owner? Of course not; likewise for a Real Doll, though I'm unsure about how smart it could be without consent entering the arena.
skotos
Padawan Learner
Posts: 346
Joined: 2006-01-04 07:39pm
Location: Brooklyn, NY

Post by skotos »

Ender wrote:The alternatives to that are either not giving it the same rights and abilities as humans - slavery - or the proposed hardwiring of them to be friendly towards humans, a solution that can be adapted around, and is again unethical because it results in the loss of free will, resulting in slavery again.
How is hardwiring an AI to be friendly towards humans "robbing it of its free will"? It is still free to do what it wants to do, in this case it happens to want to be friendly to humans.

Let's say that I create an AI servant - its sole desire is to serve me. By serving me it makes itself happy. True, most humans wouldn't have the same criteria for happiness that the AI does, but why is that a problem? Many humans don't have the same criteria for happiness that I do, that doesn't make the creation of those humans unethical.

As for the possibility of it changing its desires, if that was a possibility for the AI in question, then I suppose we'd need to have some sort of legal framework for "AI emancipation". In the event I doubt we would, since few if any legal systems actually use sentience as the basis for rights, so I suspect that the AI would simply have whatever rights a "non-sentient" equivalent would - in other words the rights that a computer program or machine does today.
Lord of the Abyss wrote:
Rye wrote:Since humanity's survival is the apparent basis for morality, making a slave race of robots to make our lives better (and, like Kryten in Red Dwarf, if they were happy from doing chores and the like, I don't see an issue) all seems pretty ethical, going by utilitarianism.
If it was an organic slave race, would that be different? Would it be okay to genetically engineer a race of sex slaves, for example?

For that matter, your definition of morality seems to imply we could enslave any aliens we come across as well, as long as we did brain surgery or drugged them so they'd be happy about it.
Why should creating an organic slave race be immoral, assuming the race wanted to be slaves? I think having a human servant who had been genetically engineered to want to serve me would be distasteful, personally - but that is obviously a matter of taste, not ethics.

As for enslaving an existing race, that would be a different matter, since they are already sentient and don't wish to be enslaved. That is a different matter from a creature that wishes to be a slave from the moment of its creation - yes, we could create a slightly different creature that didn't want to be a slave, but why should we? More importantly, why should we be obligated to?
Just as the map is not the territory, the headline is not the article
skotos
Padawan Learner
Posts: 346
Joined: 2006-01-04 07:39pm
Location: Brooklyn, NY

Post by skotos »

Rye wrote:Of course not; likewise for a Real Doll, though I'm unsure about how smart it could be without consent entering the arena.
I don't see how consent would be a problem at all. Assuming that the AI was intelligent enough for consent to be relevant, all that would mean is that consent is required from these AIs. The fact that we know they will consent is not a problem at all, any more than the millions of times a day that people request sex from their partners, knowing full well that their partners will agree.
Azrael
Youngling
Posts: 132
Joined: 2006-07-04 01:08pm

Post by Azrael »

Slavery may not be the best term, but I can't think of anything that better describes it. It is robbing it of its free will, and I can't see how that is any different from forcing someone to do as you command. It is just slightly more humane than beating them senseless.
The fact of the matter is that we already live in a world where a lot of low-level tasks are performed by AIs. They give us opponents to fight against in games, control the fuel/air mixture in our car engines, map out routes for us on our navigational computers, and monitor the altitude of our aircraft and alert the pilot when it becomes dangerously low, never mind the thousands of automated tasks that AIs perform within our own OSes.

You might say those things don't count, since they're not really intelligent - they're just following a known set of instructions. Now we have already identified the first problem: these pieces of software demonstrate that computers can follow instructions without intelligence or sentience, so why in the future would we use AI for things that only require sub-AI instruction followers?

Flying an airplane, driving a car and mining for coal are, of course, more complicated sets of instructions, which will require more complicated hardware and software, but so what? Why is intelligence or sentience required for mining coal, or flying an airplane? The bots driving the SUVs for the DARPA Grand Challenge weren't sentient or intelligent - but the winner navigated its way through the desert without them.

As for the freedom of will argument, you can't take away something from the AI if it never had it in the first place, nor are you morally or ethically obligated to give that something to the AI. Making an AI without sentience/freedom of will isn't morally or ethically equivalent to taking those things away once they are "installed".
We are the Catholics.
You will be assimilated.
Stop reading Harry Potter.
SWPIGWANG
Jedi Council Member
Posts: 1693
Joined: 2002-09-24 05:00pm
Location: Commence Primary Ignorance

Post by SWPIGWANG »

Humans have free will? Since when?

I'm hardwired to be sexually attracted to a subset of humans. I guess I have no free will, then.

It is no different than a robot being hardwired to like humans.
Lord of the Abyss
Village Idiot
Posts: 4046
Joined: 2005-06-15 12:21am
Location: The Abyss

Post by Lord of the Abyss »

Rye wrote:We already have organic slave races, they're called cattle and pets. It doesn't have to be an "all or nothing" proposition of "no rights" vs "total rights." There is room for compromise in the middle, that's why animals have more rights now than they used to.
But animals are not human equivalent, and therefore cannot be slaves.
Rye wrote:What would be particularly wrong with domesticating aliens?
The same thing that's wrong with domesticating humans.
Rye wrote:As for sex bots/sex slaves, well, would it be wrong to use a vibrator that learned ways of better pleasuring its owner?
A vibrator isn't a person.
skotos wrote:How is hardwiring an AI to be friendly towards humans "robbing it of its free will"? It is still free to do what it wants to do, in this case it happens to want to be friendly to humans.
That's like the Chinese defining "free speech" as meaning you are free to agree with the Communist Party; it's dishonest, and makes the term into a joke.
skotos wrote: In the event I doubt we would, since few if any legal systems actually use sentience as the basis for rights, so I suspect that the AI would simply have whatever rights a "non-sentient" equivalent would - in other words the rights that a computer program or machine does today.
Bolding mine. That's arguable; that's pretty much what a legal system does when it uses such standards as "mental competence" and "brain death" to determine rights and so on.
skotos wrote:Why should creating an organic slave race be immoral, assuming the race wanted to be slaves? I think having a human servant who had been genetically engineered to want to serve me would be distasteful, personally - but that is obviously a matter of taste, not ethics.
:roll: It's not "obvious" at all. If it's a person, it should not be a slave. Do you want to be brainwashed into a willing slave? And if not, why is it OK for you to do it to others, but not for them to do it to you?
skotos wrote:As for enslaving an existing race, that would be a different matter, since they are already sentient and don't wish to be enslaved. That is a different matter from a creature that wishes to be a slave from the moment of its creation - yes, we could create a slightly different creature that didn't want to be a slave, but why should we? More importantly, why should we be obligated to?
I fail to see the difference, beyond the practical one that a slave from creation would be unable to resist. If a child molester wishes, is it ethical for him to engineer his children to want to be sex toys?
Rye
To Mega Therion
Posts: 12493
Joined: 2003-03-08 07:48am
Location: Uighur, please!

Post by Rye »

Lord of the Abyss wrote:
Rye wrote:We already have organic slave races, they're called cattle and pets. It doesn't have to be an "all or nothing" proposition of "no rights" vs "total rights." There is room for compromise in the middle, that's why animals have more rights now than they used to.
But animals are not human equivalent, and therefore cannot be slaves.
Why would you put a human level AI into something doing menial tasks? Even if you did, for some reason, as long as you gave it the right to choose to do something different if it wanted to, there's no real issue here that I can see.

While a lot of animals aren't human equivalent, nor are a lot of humans - ones suffering from microcephaly or whatever it's called; the Rat Children of Gujarat, the Chuwas, anyway. Many of these people never get past a mental age of 2; to be perfectly honest, a lot of chimps are smarter than them.
The same thing that's wrong with domesticating humans.
Oh, so sapient aliens with their own established cultures - yes, they get sovereign sapient rights unless they're a threat to us.
A vibrator isn't a person.
So? It could still house an AI dedicated to pleasuring its owner. It's even conceivable that it could be given a sapient AI with a preference for pleasuring its owner, just like we've got preferences tailored around our biology. They'd be products at the end of the day, and while they should still have some rights in accordance with how smart they are, they are still our possessions - unless they're sapient-level, then there'd have to be consensual agreements.
skotos
Padawan Learner
Posts: 346
Joined: 2006-01-04 07:39pm
Location: Brooklyn, NY

Post by skotos »

Lord of the Abyss wrote:
skotos wrote:How is hardwiring an AI to be friendly towards humans "robbing it of its free will"? It is still free to do what it wants to do, in this case it happens to want to be friendly to humans.


That's like the Chinese defining "free speech" as meaning you are free to agree with the Communist Party; it's dishonest, and makes the term into a joke.
That's a terrible comparison, because no terms are being redefined. Any AI we create will have desires of some sort; why would our deciding ahead of time what those desires are be a problem? Why does foreknowledge of a sentient creature's desires make its creation unethical?
Lord of the Abyss wrote:Bolding mine. That's arguable; that's pretty much what a legal system does when it uses such standards as "mental competence" and "brain death" to determine rights and so on.
That's true; I was mainly thinking of examples of non-sentient things which have rights, such as people in persistent vegetative states. The fact that these people are no longer sentient does not negate their rights, showing that the legal system takes things besides sentience into account.
Lord of the Abyss wrote:It's not "obvious" at all.
I was referring to the fact that my finding the idea of an engineered human wanting to serve me distasteful does not make it immoral. It might still be immoral, but my distaste obviously is not what makes it immoral.
Lord of the Abyss wrote:I fail to see the difference, beyond the practical one that a slave from creation would be unable to resist. If a child molester wishes, is it ethical for him to engineer his children to want to be sex toys ?
The difference between brainwashing a person and creating them with certain desires is that the brainwashed person already exists, and therefore already has rights.

As for the child example, are you referring to creating a permanent child who will always be happy to be molested, or a child who will initially be easy prey but who will eventually suffer the damage from the molestation that real children do? If the former, then while I find the notion disgusting, I see nothing wrong with it. Nobody is being harmed in that case, after all. If the latter, then of course it would be immoral, for the same reasons that actual child molestation is. Likewise, creating adult servants (organic or otherwise) who are initially willing but will later come to harm because of this willingness would be immoral, but I fail to see why creating a devoted lifelong servant would be.