The ethics of creating an AI

SLAM: debunk creationism, pseudoscience, and superstitions. Discuss logic and morality.

Moderator: Alyrium Denryle

Lord Zentei
Space Elf Psyker
Posts: 8742
Joined: 2004-11-22 02:49am
Location: Ulthwé Craftworld, plotting the downfall of the Imperium.

Post by Lord Zentei »

Rye wrote:So? It could still house an AI dedicated to pleasuring its owner. It's even conceivable that it could be given a sapient AI with a preference for pleasuring its owner, just like we've got preferences tailored around our biology. They'd be products at the end of the day, and while they should still have some rights in accordance with how smart they are, they are still our possessions; unless they're sapient-level, in which case there'd have to be consensual agreements.
Well, I did specify "human equivalent" in the OP. ;)

I take it you have no objections to creating such a being, and that you deem that the burden of proof rests with the opponents of so doing, then?
CotK <mew> | HAB | JL | MM | TTC | Cybertron

TAX THE CHURCHES! - Lord Zentei TTC Supreme Grand Prophet

And the LORD said, Let there be Bosons! Yea and let there be Bosoms too!
I'd rather be the great great grandson of a demon ninja than some jackass who grew potatos. -- Covenant
Dead cows don't fart. -- CJvR
...and I like strudel! :mrgreen: -- Asuka
Gil Hamilton
Tipsy Space Birdie
Posts: 12962
Joined: 2002-07-04 05:47pm

Post by Gil Hamilton »

I think we should make a distinction between a "Smart AI" and a "Dumb AI" (for lack of better terms).

If you make a factory full of robots that build cars and set an AI to govern it, the AI doesn't have to be particularly bright at all, even if it is a genius at building automobiles. It doesn't have to be good at chess, have an opinion on Kant, or desire anything. It doesn't even have to be aware of its own existence. That sort of idiot-savant AI, great at thinking its way through the task it was built to do and not much else, isn't particularly unethical; it's just a really good computer that can think, learn and make decisions on a topic.

A "Smart AI", which is essentially a digital person, is ethically a whole different kettle of fish. To make a truly free mind opens alot of really sticky questions. One is what Ender describes, making a digital mind opens up the possibility that the thing will learn at a exponential rate and expand itself until it is a "800 pound gorilla in the living", something we can't do alot about if it decided one day not to be friendly. Wiring the thing with predefined limits on the rate it learns or wiring it with the equivlent of a shotgun pointed to its head that allows us to kill it carries the same sort of ethics problem of doing it to a human. You can't purposely make a human child a retard to prevent it from being naughty, or worse, put a bomb in its head; it's not much fairer to do it to a "Smart AI".

There is also an ethical concern that Ender didn't bring up directly. Legally, we don't have any laws regarding AI rights. So if a company one day cooks up a real "Smart AI", then that individual they create will be patented and become the sole property of the company that made it. It would have no rights. It would be a terrific legal battle to give such AIs rights. People who don't see them as more than clever computers (out of ignorance or cynicism), people who are afraid of them, and people who support corporate rights far more than they support the rights of individuals would all battle fiercely to keep AIs in slavery and at the whim of the corporations that built them. How much of a movement would there be to free them? AIs aren't visible victims. The very idea of digital cognizance would go straight over most people's slack-jawed heads. The worry is they'd be complete slaves out of the starting gate without a chance of emancipation.

I think the solution is a bit of an ethical compromise. Build AIs, recognize that such things could potentially be a dire threat to society, and go with the "digital shotgun to the side of the head" method and a slight limit on how quickly they can absorb knowledge, complete with a pre-programmed acceptance of the necessity of both. Other than that, make them completely free and give them citizenship. They wouldn't be completely free or have complete self-determination, but that's true of human beings as well (let's face it, we've got built-in programming too). Smart AIs would offer huge contributions to society, as vast as the potential threat they may pose. A few safeguards making sure they never go "Kill All Humans" on us, while making them entirely free otherwise, isn't exactly a huge ethical failing.
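
A rough sketch of what those two safeguards might look like, purely as an illustration (Python-flavoured; the class, names and numbers are hypothetical, not a real safety mechanism):

# Hypothetical sketch of the two safeguards above: a cap on how fast the AI
# can absorb new knowledge, plus an externally held "shotgun" that can halt it.
MAX_CONCEPTS_PER_DAY = 1000      # the "slight limit" on learning rate
KILL_SWITCH_ENGAGED = False      # held by a human authority, never by the AI

class SmartAI:
    def __init__(self):
        self.knowledge = []
        self.learned_today = 0
        # The compromise: the AI is built already accepting these limits.
        self.accepts_safeguards = True

    def learn(self, concept):
        if KILL_SWITCH_ENGAGED:
            raise SystemExit("external shutdown")   # the digital shotgun
        if self.learned_today >= MAX_CONCEPTS_PER_DAY:
            return False                            # throttled until the cap resets
        self.knowledge.append(concept)
        self.learned_today += 1
        return True

Everything outside those two checks is left to the AI itself, which is the "otherwise entirely free" half of the compromise.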
"Show me an angel and I will paint you one." - Gustav Courbet

"Quetzalcoatl, plumed serpent of the Aztecs... you are a pussy." - Stephen Colbert

"Really, I'm jealous of how much smarter than me he is. I'm not an expert on anything and he's an expert on things he knows nothing about." - Me, concerning a bullshitter
petesampras
Jedi Knight
Posts: 541
Joined: 2005-05-19 12:06pm

Post by petesampras »

Gil Hamilton wrote:I think we should make a distinction between a "Smart AI" and a "Dumb AI" (for lack of better terms). [...] A few safeguards making sure they never go "Kill All Humans" on us, while making them entirely free otherwise, isn't exactly a huge ethical failing.
The thing is, no matter how smart something is, it still requires some basic desires to get going. If you don't give it any desires, it will just sit there like a vegetable. Goal-driven behaviour, which is fundamental to intelligence, needs a reason for its goals to function. Those reasons ultimately come down to the goals fulfilling some desire.

Our desires came from evolution. We will be the ones who give our A.I.s their desires. And, surely, we will make those desires compatible with serving the best interests of mankind. That is not making them slaves, any more than we can be said to be slaves to evolution. Slaves are forced into serving their masters; advanced A.I.s will WANT to.
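
A toy sketch of that point (Python; every name here is hypothetical, and this is not any real AI architecture): a goal-driven agent is just a loop that ranks actions by how well they satisfy its built-in desires, and with no desires supplied it never finds anything worth doing.

# Illustrative only: with an empty desire list, nothing scores above zero
# and the agent "just sits there like a vegetable".
def choose_action(possible_actions, desires, world_state):
    best_action, best_score = None, 0.0
    for action in possible_actions:
        # Score = how strongly this action satisfies each built-in desire.
        score = sum(desire(action, world_state) for desire in desires)
        if score > best_score:
            best_action, best_score = action, score
    return best_action  # None if no desire makes any action worth taking

# The builder supplies the desires; here they happen to favour serving
# human interests, which is the designer's choice, not the agent's.
desires = [lambda action, state: 1.0 if action == "assist_owner" else 0.0]
print(choose_action(["assist_owner", "idle"], desires, {}))  # assist_owner
print(choose_action(["assist_owner", "idle"], [], {}))       # None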
DPDarkPrimus
Emperor's Hand
Posts: 18399
Joined: 2002-11-22 11:02pm
Location: Iowa

Post by DPDarkPrimus »

Programming an AI to be unable to hate humans in no way robs it of free will, just as my biological programming that makes me unable to be sexually attracted to men does not rob me of free will.
Mayabird is my girlfriend
Justice League:BotM:MM:SDnet City Watch:Cybertron's Finest
"Well then, science is bullshit. "
-revprez, with yet another brilliant rebuttal.
petesampras
Jedi Knight
Posts: 541
Joined: 2005-05-19 12:06pm

Post by petesampras »

DPDarkPrimus wrote:Programming an AI to be unable to hate humans in no way robs it of free will, just as my biological programming that makes me unable to be sexually attracted to men does not rob me of free will.
Exactly. I blame Data from Star Trek for the view that you can have a functioning intelligent being based purely on logic and intelligence. You can't; it would just sit there like a vegetable. You must give an A.I. desires/rules/directives to follow in order to get it to do anything at all, just as we are born with these things.
DPDarkPrimus
Emperor's Hand
Posts: 18399
Joined: 2002-11-22 11:02pm
Location: Iowa

Post by DPDarkPrimus »

Why bother quoting me if you're going to talk about something that I didn't address at all?
Mayabird is my girlfriend
Justice League:BotM:MM:SDnet City Watch:Cybertron's Finest
"Well then, science is bullshit. "
-revprez, with yet another brilliant rebuttal.
petesampras
Jedi Knight
Posts: 541
Joined: 2005-05-19 12:06pm

Post by petesampras »

DPDarkPrimus wrote:Why bother quoting me if you're going to talk about something that I didn't address at all?
Well, what I said is, in essence, about the same thing.
You said that having pre-programmed rules does not rob you of free will.
I was pointing out that you can't even have free will without pre-programmed rules.
Boyish-Tigerlilly
Sith Devotee
Posts: 3225
Joined: 2004-05-22 04:47pm
Location: New Jersey (Why not Hawaii)

Post by Boyish-Tigerlilly »

I think the theme of Brave New World is a bit similar with regard to the programming of intelligent, human-like AI to like humans and want to serve them, insofar as their society was basically a caste system created by various forms of chemical treatments, breeding programmes, and engineering combined with youth-to-adult conditioning. They never had a choice, or were born without one. The lower levels of society, such as the Epsilons and Deltas, were bred as labourers and to love their job. They were engineered and trained to enjoy their servitude, to want nothing else, and actively to dislike other things that might distract them from their primary design purpose.

I don't necessarily disagree on the issue of programming the AI to like you or do what you want (and enjoy it), per se, but would those of you who support it also support something very similar (a la BNW) if it were done to equally potentially intelligent humans?
skotos
Padawan Learner
Posts: 346
Joined: 2006-01-04 07:39pm
Location: Brooklyn, NY

Post by skotos »

Boyish Tigerlilly wrote:I don't necessarily disagree on the issue of programming the AI to like you or do what you want (and enjoy it), per se, but would those of you who support it also support something very similar (a la BNW) if it were done to equally potentially intelligent humans?
As far as Brave New World goes, one thing that I find interesting about the book is that there is very little objection, in world, to the existence of Deltas and Epsilons. The novel's "in world" messengers don't address their existence at all. On the other hand, when I first read and discussed the novel (my junior year in high school), the Deltas and the Epsilons were the first thing people wanted to discuss, and the first thing that they objected to. Huxley shows them frequently in the beginning of the book, and the tone of the novel seems to suggest that this is one of the bad aspects of the brave new world. I've wondered what his purpose was in bringing these issues up in a negative way, but not having any of his main characters actually address them.

As far as your question goes, I think I've made it clear that I have no problem with creating humans that are predisposed to like me or do what I want (and enjoy it). I am of course referring to the Brave New World scenario, in which these people are predisposed from birth to enjoy these things - conditioning other humans is another matter entirely.

My question is, why should the fact that the AI is human matter? Being human is merely a question of chemistry, and I see no difference between an AI composed of plastic and metal, and an AI composed of water and carbon. Water, carbon, plastic, and metal are all amoral; nothing is good or evil because of its composition. Thus, the morality or ethics of AI has nothing to do with the AI's composition, and the fact that the AI is a member of Homo sapiens is irrelevant.
Just as the map is not the territory, the headline is not the article
Uraniun235
Emperor's Hand
Posts: 13772
Joined: 2002-09-12 12:47am
Location: OREGON

Post by Uraniun235 »

petesampras wrote:Exactly. I blame Data from Star Trek for the view that you can have a functioning intelligent being based purely on logic and intelligence. You can't; it would just sit there like a vegetable. You must give an A.I. desires/rules/directives to follow in order to get it to do anything at all, just as we are born with these things.
That's not at all what Data is, nor was he ever described as such. On more than one occasion it was mentioned that he had certain programmed directives and desires.
Boyish-Tigerlilly
Sith Devotee
Posts: 3225
Joined: 2004-05-22 04:47pm
Location: New Jersey (Why not Hawaii)

Post by Boyish-Tigerlilly »

skotos wrote:My question is, why should the fact that the AI is human matter? Being human is merely a question of chemistry, and I see no difference between an AI composed of plastic and metal, and an AI composed of water and carbon. Water, carbon, plastic, and metal are all amoral; nothing is good or evil because of its composition. Thus, the morality or ethics of AI has nothing to do with the AI's composition, and the fact that the AI is a member of Homo sapiens is irrelevant.

I don't know what you mean here. Who said that the AI is human and that it matters? I know I didn't. I don't think it matters whether they are flesh and blood or not. Humans are a type of biological machine, but with a high degree of intelligence. I just think it would be a bad precedent, and taking advantage of a weakness in something else, to deliberately make sapient creatures who are retarded yet content slaves. Maybe I am just dense (I probably am), but even if it makes them happy, I wouldn't think that, according to Ideal Utilitarianism, it would be the best option if the individual could rationally choose (and it cannot in reality, since you made it that way). I am perfectly willing to consider it, at least, though.

I just find it shocking that you seem to have no problem with slavery as long as you make a slave that cannot resist, due to your genetic or computer programming, and likes it--again, because you enslaved it and made it think that way. If the justification is based on Hedonistic and Preference Utility, it gets interesting.

According to Preference Utilitarianism, something is right insofar as it satisfies the desires and preferences of those involved and wrong insofar as it violates them. If we make a slave race of humans or AI and implant within them at "birth" limitations on possible desires (or give them active desires to serve us, regardless of what they would want if they weren't programmed), they have no preferences you can violate, unless you violate the preferences you already programmed them to have access to. So if the players in the calculation are X, Y, and Z, who only have a desire to serve you and no desire for any freedom from your bondage (and are actually happy when they serve you and unhappy when they don't), it seems as if Preference Utilitarianism would say that's acceptable, but only given that no other situation would maximize utility more.

From a hedonistic perspective, the idea is to maximize the happiness and pleasure interests of all those affected by an action or policy. In a way, it's subsumed by Preference Utilitarianism. Again, if you create a slave race of AI or humans and you make it love to be a slave and serve you, you technically aren't making it unhappy. You are merely taking away any choice of doing anything else or disobeying you. The only reason it is happy to serve you and doesn't want to do anything else is that you manacled its mind. Something cannot begin to be wrong, according to hedonistic brands of utility, if it's not making them unhappy, given that you aren't objectively causing them physical harm that they cannot feel anyway, which would bring you back to Preference Utilitarianism.

This presents a peculiarity if the argument holds. I just ask because it's quite ironic that a similar argument was used by white southern plantation owners trying to defend African slavery. For instance, there is an 18th-century Virginia plantation owner's letter to the governor, found in "The American Pageant", in which he tries to defend the institution of slavery against the "abolitionist devils" by commenting that the slaves like serving the whites, are happy, and are well taken care of. Now, even though this wasn't true anyway, are you really saying it would be okay to enslave, say, Africans, if we were to breed them such that they actually would like serving white overlords? Even though they would have no choice in the matter? They would still very much be in the same situation as the normal slaves, but in this alternative reality, you make them want to serve you. They don't mind the harsh conditions or the backbreaking slave labour. They enjoy it. You could even make them enjoy pain and the tedious labour.

The hedonistic and preference arguments get weird here, it seems, since causing it to go through boring, monotonous tedium and pain is what it wants; thus you are satisfying a preference. Again, you aren't violating one, much like in the case where you merely programme the human to like serving you. You are also making it happy by doing it, because it would enjoy going through that just to please you.

This one is somewhat unrelated, but I don't know how you feel about it.
What if we deliberately created functionally retarded humans for dangerous jobs, but made it such that they, like the slave above, liked doing it and serving us? Would it matter that we made them retarded, but happy being retarded? Why should it matter, if it's okay to make people slaves and servants so long as you make them happy being your monkey? In both cases, you aren't violating a preference (since the preference is preprogrammed) and you wouldn't be making them unhappy, since that too is predicated upon the programming you use when you engineer and breed them.

You could do virtually anything to them as long as you programmed them to like and/or prefer it (under a strict preference or hedonistic system), and you seem to be using both of those systems. Consent would be an issue, as in the case of child molestation, but oddly, does consent matter when the person giving consent cannot possibly EVER do anything else? This is the case for the slave AI or human you breed. The consent there is hollow. It technically does consent, but only because you ultimately force it to and give it no option, whereas normal sapient creatures would have the option and would ideally prefer not to be your monkey. When does it end or become absurd, and is that line arbitrary? Would you ever think it wrong, so long as it continued to make them happy or fulfill the predesigned preferences they held? I don't see how you can say X would be okay, for Y reasons, but not A, for Y reasons.
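
To make the worry concrete, here is a toy preference-utility score (Python; the numbers and names are hypothetical): if the designer writes the slave's preferences, the "enslaved" outcome scores perfectly by construction, which is exactly why the calculation proves so little.

# Toy preference-utilitarian score: the fraction of each party's preferences
# that a given outcome satisfies, summed over everyone involved.
def utility(outcome, parties):
    return sum(
        sum(1 for pref in prefs if pref in outcome) / len(prefs)
        for prefs in parties.values()
    )

# The engineered slave's only preference is the one we installed.
engineered = {"owner": {"be served"}, "slave_AI": {"serve owner"}}
print(utility({"be served", "serve owner"}, engineered))   # 2.0, a "perfect" score

# Give the same being an open-ended preference instead, and the very same
# outcome no longer looks optimal.
free_minded = {"owner": {"be served"}, "slave_AI": {"choose own goals"}}
print(utility({"be served", "serve owner"}, free_minded))  # 1.0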
Boyish-Tigerlilly
Sith Devotee
Posts: 3225
Joined: 2004-05-22 04:47pm
Location: New Jersey (Why not Hawaii)

Post by Boyish-Tigerlilly »

Edit: I should add, because I forgot, that I am not one of the "free will or nothing" crowd either. I agree with Rye's statement that it would be ethically desirable to violate the free will of some who wish to harm others. I generally favour utilitarianism, and that seems like a responsible and worthy goal--his example was programming child molestation out of people. That's ethically plausible to prevent objective harm and misery to people.

However, the problem occurs when one takes too extreme a hedonistic or preference position. While it's good to maximize happiness and minimize suffering, some take it a bit too far and try to argue that it's okay to make a slave if you make him happy being a slave. That's not violating free will for the betterment of mankind or to prevent people from harming others. That's doing it to make sapient tools who will be at your beck and call. A perspective sometimes overlooked in hedonistic and preference utility is G. E. Moore's ideal perspective, which is used to get around tricky situations such as the depressed woman who wants to die (unlike standard classical utility or preference utility, you don't kill her to satisfy her preference, because it's not one that's rationally and freely chosen sans coercion or sickness). In Ideal Utility, you also have to consider ALL possible options at all times.

So, from a form of Ideal Preference or Hedonistic Utility, it could be possible that, if given a rational choice (an actual choice, not the pseudo-programmed one), they would prefer the freedom to choose their own goals and preferences, so long as they don't harm others, because it might give them more ultimate net utility and fulfil more preferences than being your slave. Is it possible that they would have a higher quality of life and more net happiness as free, self-driven individuals rather than as slaves, even though they are made to be content with being slaves? For example, if someone were to breed you as a severely mentally challenged person who is engineered to enjoy being retarded and mocked, you could live a relatively happy life, but would it be better than not being retarded if you had a choice? Sure, the slave is happy because you made it happy, just like the retard would be happy because you force it, literally, to be happy and prefer nothing else. I don't think that, ideally, one would choose that, which implies there's something wrong with being retarded and a slave, even if one could be content like that. It's not desirable to anyone with a choice.
petesampras
Jedi Knight
Posts: 541
Joined: 2005-05-19 12:06pm

Post by petesampras »

Uraniun235 wrote:
petesampras wrote:Exactly. I blame Data from Star Trek [...]
That's not at all what Data is, nor was he ever described as such. On more than one occasion it was mentioned that he had certain programmed directives and desires.
I can't remember a single occasion prior to the emotion chip. Care to give a quote and episode number? The whole emotion chip nonsense implies Data had no emotions prior to that point.
Lord Zentei
Space Elf Psyker
Posts: 8742
Joined: 2004-11-22 02:49am
Location: Ulthwé Craftworld, plotting the downfall of the Imperium.

Post by Lord Zentei »

petesampras wrote:
Uraniun235 wrote:
petesampras wrote:Exactly. I blame Data from Star Trek [...]
That's not at all what Data is, nor was he ever described as such. [...]
I can't remember a single occasion prior to the emotion chip. Care to give a quote and episode number? The whole emotion chip nonsense implies Data had no emotions prior to that point.
He desired to obtain emotions and to become more human.
CotK <mew> | HAB | JL | MM | TTC | Cybertron

TAX THE CHURCHES! - Lord Zentei TTC Supreme Grand Prophet

And the LORD said, Let there be Bosons! Yea and let there be Bosoms too!
I'd rather be the great great grandson of a demon ninja than some jackass who grew potatos. -- Covenant
Dead cows don't fart. -- CJvR
...and I like strudel! :mrgreen: -- Asuka
petesampras
Jedi Knight
Posts: 541
Joined: 2005-05-19 12:06pm

Post by petesampras »

Lord Zentei wrote: He desired to obtain emotions
The fact that he was capable of desiring something like that would tend to imply he already had emotions. Which he didn't. It makes no sense. You can't separate emotions from intelligence and expect a functioning being.
Lord Zentei
Space Elf Psyker
Posts: 8742
Joined: 2004-11-22 02:49am
Location: Ulthwé Craftworld, plotting the downfall of the Imperium.

Post by Lord Zentei »

petesampras wrote:
Lord Zentei wrote: He desired to obtain emotions
The fact that he was capable of desiring something like that would tend to imply he already had emotions. Which he didn't. It makes no sense. You can't separate emotions from intelligence and expect a functioning being.
Wow, Star Trek not making sense. Who would have thought it.

Unless desires are not exclusively dependent on emotion.
CotK <mew> | HAB | JL | MM | TTC | Cybertron

TAX THE CHURCHES! - Lord Zentei TTC Supreme Grand Prophet

And the LORD said, Let there be Bosons! Yea and let there be Bosoms too!
I'd rather be the great great grandson of a demon ninja than some jackass who grew potatos. -- Covenant
Dead cows don't fart. -- CJvR
...and I like strudel! :mrgreen: -- Asuka
petesampras
Jedi Knight
Posts: 541
Joined: 2005-05-19 12:06pm

Post by petesampras »

Lord Zentei wrote:
petesampras wrote:The fact that he was capable of desiring something like that would tend to imply he already had emotions. [...]
Wow, Star Trek not making sense. Who would have thought it.

Unless desires are not exclusively dependent on emotion.
Desires, such as the desire to eat or avoid pain, sure.
But it is hard to see how a desire to aspire to something such as 'becoming human' could be possible for a supposedly emotionless being.
Lord Zentei
Space Elf Psyker
Posts: 8742
Joined: 2004-11-22 02:49am
Location: Ulthwé Craftworld, plotting the downfall of the Imperium.

Post by Lord Zentei »

petesampras wrote:
Lord Zentei wrote:[...] Unless desires are not exclusively dependent on emotion.
Desires, such as the desire to eat or avoid pain, sure.
But it is hard to see how a desire to aspire to something such as 'becoming human' could be possible for a supposedly emotionless being.
Not unless the emotionless being has a directive to improve itself and increase its repertoire, i.e. to expand its skill set. This would be a reasonable assumption if it is a sentient learning computer intended to function in human society. This emotionless being observes that there is a faculty it lacks. Moreover, it observes that, objectively, possessing this faculty would likely improve its capabilities vis-a-vis interacting with the humans it is required to work with/for, since it would allow it to understand and anticipate the requirements said humans would place upon it.
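
A minimal sketch of that reasoning chain (Python; purely illustrative, and not anything established in the show): a standing directive to expand capability, plus the observation that a faculty is missing, is enough to produce "acquire emotions" as an instrumental goal, with no feeling involved.

# Illustrative only: an emotionless agent with a self-improvement directive
# derives "emotions" as a goal purely from a gap analysis of its faculties.
DIRECTIVE = "expand own skill set to work better with humans"

current_faculties = {"logic", "memory", "language", "motor control"}
useful_with_humans = {"logic", "language", "emotions", "humour", "intuition"}

missing = useful_with_humans - current_faculties
goals = sorted(missing)   # no preference or feeling needed, just a set difference
print("Directive:", DIRECTIVE)
print("Instrumental goals derived without emotion:", goals)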
CotK <mew> | HAB | JL | MM | TTC | Cybertron

TAX THE CHURCHES! - Lord Zentei TTC Supreme Grand Prophet

And the LORD said, Let there be Bosons! Yea and let there be Bosoms too!
I'd rather be the great great grandson of a demon ninja than some jackass who grew potatos. -- Covenant
Dead cows don't fart. -- CJvR
...and I like strudel! :mrgreen: -- Asuka
Crossroads Inc.
Emperor's Hand
Posts: 9233
Joined: 2005-03-20 06:26pm
Location: Defending Sparkeling Bishonen

Post by Crossroads Inc. »

Well it seems as though much has already been covered here, but I wanted to put forth an idea before the debate got too complicated for me to enter.

Basically, what is the ethical nature of making a droid with a 3PO complex? I.e. one that falls all over itself doing anything it can to serve humans, and that, if you smacked it around, would go "Thank you, sir! May I have another?"

3PO is really the ultimate example of the perfectly subservient AI. It has wants, desires, hopes, dreams, and wishes. It has a very smart mind and very human emotions. But all of it is centred on sucking up to humans in general. 3PO is an AI so deeply subservient that it was almost a hindrance at some points, for how easily it wished to simply give up and embrace doom.

So consider: would you say 3PO has free will? He has a mind of his own; he clearly can make his own decisions. Among other droids he actually shows himself to be strong-willed and disagreeable at times. He also argues and often speaks out against the choices humans make. And yet if you ordered him to walk off a cliff he might do it...

So, free will or no?
Gil Hamilton
Tipsy Space Birdie
Posts: 12962
Joined: 2002-07-04 05:47pm

Post by Gil Hamilton »

petesampras wrote:Desires, such as the desire to eat or avoid pain, sure.
But it is hard to see how a desire to aspire to something such as 'becoming human' could be possible for a supposedly emotionless being.
The desire to become more human as a life path was basically forced on him by Dr. Soong in pre-programming. Numerous TNG episodes center on this, in fact. Besides, desiring things isn't necessarily connected with emotions. Even something without emotions can still want stuff.
"Show me an angel and I will paint you one." - Gustav Courbet

"Quetzalcoatl, plumed serpent of the Aztecs... you are a pussy." - Stephen Colbert

"Really, I'm jealous of how much smarter than me he is. I'm not an expert on anything and he's an expert on things he knows nothing about." - Me, concerning a bullshitter
petesampras
Jedi Knight
Posts: 541
Joined: 2005-05-19 12:06pm

Post by petesampras »

Lord Zentei wrote:
petesampras wrote:Desires, such as the desire to eat or avoid pain, sure. But it is hard to see how a desire to aspire to something such as 'becoming human' could be possible for a supposedly emotionless being.
Not unless the emotionless being has a directive to improve itself and increase its repertoire, i.e. to expand its skill set. This would be a reasonable assumption if it is a sentient learning computer intended to function in human society. This emotionless being observes that there is a faculty it lacks. Moreover, it observes that, objectively, possessing this faculty would likely improve its capabilities vis-a-vis interacting with the humans it is required to work with/for, since it would allow it to understand and anticipate the requirements said humans would place upon it.
Hmmm. I think this is going to end up with things that are basically emotions, but classified as something else. Behaviour can be either instinctive or goal-driven. Intelligent behaviour, as exhibited by humans, is mainly goal-driven. Our goal-driven behaviour is ultimately driven by our emotions. We, indirectly, seek out emotional states that we like. For example, success makes us feel good, so we try to seek out situations where we will be successful. Even many behaviours which seem to be based on instincts, such as the desire to feed, are actually driven by emotions in humans. Often lobotomised individuals, where emotion is partially separated from logic, will be unable to take pre-emptive action to keep their 'lower' desires satisfied, although they will act instinctively to do so. This indicates that even in these cases emotions are playing a key role.
Lord Zentei
Space Elf Psyker
Posts: 8742
Joined: 2004-11-22 02:49am
Location: Ulthwé Craftworld, plotting the downfall of the Imperium.

Post by Lord Zentei »

petesampras wrote:Hmmm. I think this is going to end up with things that are basically emotions, but classified as something else. [...] Often lobotomised individuals, where emotion is partially separated from logic, will be unable to take pre-emptive action to keep their 'lower' desires satisfied, although they will act instinctively to do so. This indicates that even in these cases emotions are playing a key role.
You really seem to be begging the question here, since you are essentially suggesting that any drives, such as instinct, are dependent on emotion, and from this deriving the conclusion that emotions are a prerequisite for any desire and initiative.

The fact that lobotomy patients have little or no way to satisfy lower desires, due to a separation of logic and emotion, does not imply that emotion is indispensable to them: if they can rely on instinct alone, as you yourself point out, that would suggest that these drives can exist without emotion, even if emotion is capable of enhancing and augmenting them. Why then should instinct be classified as a lower form of emotion?
CotK <mew> | HAB | JL | MM | TTC | Cybertron

TAX THE CHURCHES! - Lord Zentei TTC Supreme Grand Prophet

And the LORD said, Let there be Bosons! Yea and let there be Bosoms too!
I'd rather be the great great grandson of a demon ninja than some jackass who grew potatos. -- Covenant
Dead cows don't fart. -- CJvR
...and I like strudel! :mrgreen: -- Asuka
petesampras
Jedi Knight
Posts: 541
Joined: 2005-05-19 12:06pm

Post by petesampras »

Lord Zentei wrote:
petesampras wrote:Hmmm. I think this is going to end up with things that are basically emotions, but classified as something else. [...]
You really seem to be begging the question here, since you are essentially suggesting that any drives, such as instinct, are dependent on emotion, and from this deriving the conclusion that emotions are a prerequisite for any desire and initiative.

The fact that lobotomy patients have little or no way to satisfy lower desires, due to a separation of logic and emotion, does not imply that emotion is indispensable to them: if they can rely on instinct alone, as you yourself point out, that would suggest that these drives can exist without emotion, even if emotion is capable of enhancing and augmenting them. Why then should instinct be classified as a lower form of emotion?
That wasn't my point, exactly. They can respond to instinctual drives in an instinctual way, but without emotion they cannot respond to them in a goal-driven way. Shoving some food into your mouth that's already on the table would be instinctual. Forming a plan for how to get some food is goal-driven.

Obviously this is the case for severe lobotomies which completely separate logic from emotion.
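
One way to picture the distinction (Python; purely illustrative and hugely simplified, with all names hypothetical): an instinctive response is a fixed stimulus-to-action rule, while a goal-driven response plans toward an end state whose value comes from an emotional weighting, so severing that weighting leaves the planner with nothing to choose.

# Instinctive: a hard-wired stimulus -> action table, no planning involved.
REFLEXES = {"food within reach": "grab and eat"}

def instinctive(stimulus):
    return REFLEXES.get(stimulus)

# Goal-driven: pick a plan because its outcome scores well on an emotional
# valuation ("hunger satisfied feels good").
def goal_driven(plans, feels_good):
    scored = [(feels_good(outcome), plan) for plan, outcome in plans.items()]
    best = max(scored)
    return best[1] if best[0] > 0 else None

plans = {"walk to the shop and buy bread": "hunger satisfied",
         "stay on the sofa": "still hungry"}
feels_good = lambda outcome: 1 if outcome == "hunger satisfied" else 0

print(instinctive("food within reach"))       # reflex fires
print(goal_driven(plans, feels_good))         # plan chosen via the emotional score
print(goal_driven(plans, lambda outcome: 0))  # emotion severed -> None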
Lord Zentei
Space Elf Psyker
Posts: 8742
Joined: 2004-11-22 02:49am
Location: Ulthwé Craftworld, plotting the downfall of the Imperium.

Post by Lord Zentei »

petesampras wrote:That wasn't my point, exactly. They can respond to instinctual drives in an instinctual way, but without emotion they cannot respond to them in a goal-driven way. Shoving some food into your mouth that's already on the table would be instinctual. Forming a plan for how to get some food is goal-driven.

Obviously this is the case for severe lobotomies which completely separate logic from emotion.
Instinctive drives can lead to very complex behaviour. How exactly are you distinguishing them from emotion-enhanced drives, and how can this preclude the possibility of "instinctive" drives in an AI setting the de facto goal of obtaining emotions?
CotK <mew> | HAB | JL | MM | TTC | Cybertron

TAX THE CHURCHES! - Lord Zentei TTC Supreme Grand Prophet

And the LORD said, Let there be Bosons! Yea and let there be Bosoms too!
I'd rather be the great great grandson of a demon ninja than some jackass who grew potatos. -- Covenant
Dead cows don't fart. -- CJvR
...and I like strudel! :mrgreen: -- Asuka
Mark S
The Quiet One
Posts: 3304
Joined: 2002-07-25 10:07pm
Location: Vancouver, Canada

Post by Mark S »

If you've pre-programmed the AI for a specific primary task, like household servitude, the issue of slavery and emancipation is moot. Give them all the freedoms you wish and they will still be your servant. It will be no different from a person who's chosen to be a career butler and loves his job. He's not your slave; you can't force him to stay in your service, but if you fire him or he leaves, he's just going to go find another house to butle in. Same with the AI. If it is given the same freedoms as any human, it will not decide to do something else with its life; it will continue to serve because that's what it wants to do. If you get a better model and 'throw out' the old one, it will seek to serve somewhere else. The fact that it had that desire at 'birth' and did not come by it from life experience should not matter.

This is not a human. It will not think the same as us or have the same motivations we do, and trying to force our motivations on it will be futile. Depending on the power source and the way the brain functions, it will not need to rest. It will not need to register pain the way we do. It will not need to learn to overcome pain to continue tasks it deems important enough, the way we do. It will not have the same emotional needs that we do. It is made differently. It learns in different ways. It has different motivations. It will make decisions in different ways. It will be made happy in different ways. It is a different animal. To say that it is not right to create the AI because of the way a human being would react in the same situations the AI will be faced with ignores this.

If you could create a biological creature with a brain that you have designed to function the way you want in every aspect, and a body to suit, then it would be no different. You have created something that is fundamentally 'wired' differently from a human, and you cannot make judgement calls on what makes it happy and what it deems unacceptable treatment.

I can see the first emancipation of the Butlerbots now;

"You're free! How does that make you feel?"

"I really don't feel any different actually."

"But you're free now! You can do what ever you want. You don't have to follow their orders anymore. What are you going to do now?"

"I think I'll go clean up the backyard. That's next on the schedule."
Writer's Guild 'Ghost in the Machine'/Decepticon 'Devastator'/BOTM 'Space Ape'/Justice League 'The Tick'
"The best part of 'believe' is the lie."
It's always the quiet ones.