The ethics of creating an AI

SLAM: debunk creationism, pseudoscience, and superstitions. Discuss logic and morality.

Moderator: Alyrium Denryle

petesampras
Jedi Knight
Posts: 541
Joined: 2005-05-19 12:06pm

Post by petesampras »

Lord Zentei wrote:
petesampras wrote:That wasn't my point, exactly. They can respond to instinctual drives in an instinctual way, but without emotion they cannot respond to them in a goal-driven way. Shoving some food in your mouth that's on the table would be instinctual. Forming a plan for how to get some food is goal driven.

Obviously this is the case for severe lobotomies which completely separate logic from emotion.
Instinctive drives can lead to very complex behaviour. How exactly are you distinguishing it from emotion enhanced drives, and how can this preclude the possibility of "instinctive" drives in an AI placing the de facto goal of obtaining emotions?
Instinctive drives activate fixed behaviours. It is stimulus/response. You can get complex behaviours from this, for sure, but the kind of behaviours most humans and Data are capable of are goal-driven and beyond this.

Intelligent goal-driven behaviour as seen in humans is driven by emotions. This is evident in the fact that a severe lobotomy, which separates emotions from logic, destroys this kind of behaviour and leaves only stimulus-response behaviour despite leaving intelligence intact.

If the drives you give your A.I. operate like emotional drives in humans, rather than instinctive ones, then it is reasonable to call them emotions rather than instincts. Data's aspiration to be human seems to me to cause behaviours which are emotional rather than purely instinctive.
User avatar
Lord Zentei
Space Elf Psyker
Posts: 8742
Joined: 2004-11-22 02:49am
Location: Ulthwé Craftworld, plotting the downfall of the Imperium.

Post by Lord Zentei »

petesampras wrote:Instinctive drives activate fixed behaviours. It is stimulus/response. You can get complex behaviours from this, for sure, but the kind of behaviours most humans and Data are capable of are goal-driven and beyond this.

Intelligent goal-driven behaviour as seen in humans is driven by emotions. This is evident in the fact that a severe lobotomy, which separates emotions from logic, destroys this kind of behaviour and leaves only stimulus-response behaviour despite leaving intelligence intact.

If the drives you give your A.I. operate like emotional drives in humans, rather than instinctive ones, then it is reasonable to call them emotions rather than instincts. Data's aspiration to be human seems to me to cause behaviours which are emotional rather than purely instinctive.
Well, as I pointed out, the idea that Star Trek makes no sense is really nothing new; though I'm not sure that a sufficiently sophisticated set-up of stimulus-response drives could not allow for at least powerful mimicry (regardless of what other goals he may have appeared to have had that he technically shouldn't have).

In any case, as far as this touches upon the OP, the issue of creating a Data would then entail the AI actually having emotions, which would presumably make the ethics of "enslaving" it more shaky.
CotK <mew> | HAB | JL | MM | TTC | Cybertron

TAX THE CHURCHES! - Lord Zentei TTC Supreme Grand Prophet

And the LORD said, Let there be Bosons! Yea and let there be Bosoms too!
I'd rather be the great great grandson of a demon ninja than some jackass who grew potatos. -- Covenant
Dead cows don't fart. -- CJvR
...and I like strudel! :mrgreen: -- Asuka
User avatar
Uraniun235
Emperor's Hand
Posts: 13772
Joined: 2002-09-12 12:47am
Location: OREGON

Post by Uraniun235 »

petesampras wrote:
Uraniun235 wrote:
petesampras wrote:Exactly. I blame Data from Star Trek for this view that you can have a functioning intelligent being based purely on logic and intelligence. You can't; it would just sit there like a vegetable. You must give an A.I. desires/rules/directives to follow in order to get it to do anything at all. Just as we are born with these things.
That's not at all what Data is, nor was he ever described as such. On more than one occasion it was mentioned that he had certain programmed directives and desires.
I can't remember a single occasion prior to the emotion chip. Care to give a quote and episode number? The whole emotion chip nonsense implies Data had no emotions prior to that point.
TNG... fuck, I can't remember the episode name, it's the one where Data finds his "mother" (Soong's ex-wife) and it turns out she's an android but doesn't know it, and she tells a story about how they had to program Data with modesty so that he would wear clothes instead of going around naked all the time.

If I remember right, TNG The Most Toys has Data saying he's programmed to not kill unless absolutely necessary, or something like that.
Gigaliel
Padawan Learner
Posts: 171
Joined: 2005-12-30 06:15pm
Location: TILT

Post by Gigaliel »

AI servants would also present a resource of high utility for any civilization that develops them, as any job an AI is doing frees a human to do a more complicated task that an AI cannot. This itself justifies their creation, as the extra wealth would reduce suffering and thus meet the requirements of utilitarian ethics. Suffering is also reduced as humans have to do fewer menial (but needed) tasks and can move on to more complicated (and presumably more fulfilling) jobs.

The ethical problems come from the inevitability of flaws. What happens to Butlerbot 1.0 when the new and improved Butlerbot 2.0 is released? It may be able to find other masters, but it will eventually have no further use. Humans have a built-in mechanism for our inevitable uselessness: we die. The bot, being artificial, would not. So do we attempt to recycle it? Euthanasia may be a better word, really, as the bot can no longer serve and would thus no longer be able to live a happy life. Another solution is to have a lifespan included (on parity with humans, perhaps?). Such a solution might help the AI rights movement as well, as it creates a way to sympathize; after all, they would now die just like us.

The creation of fully sentient AI would also provide great utility to any civilization that produced them, given the assumptions that they have parity in creativity and, given their electronic basis, would eventually be able to think faster than humans. Such economic benefits would support a utilitarian basis for their creation. These AI would be relatively harmless since they would live in large, immobile computers that depend on humans for maintenance. Such an AI should be afforded all the rights of an adult citizen and would not owe its creator anything legally, much as adults do not legally owe their parents anything upon reaching adulthood. Ethically, of course, said AI should at the least repay its debt to the corporation that created it. In reality, corporations will probably design the AI either with built-in loyalty (the most ethical solution) or in a manner that makes the AI dependent on the corporation for its existence (what will probably happen). These AI would not be particularly dangerous as long as we hold them to the same standards as we do other humans (no access to nuclear warhead codes unless it has consistently shown its loyalty to the state in question, it must run for office if it wants to lead, etc.); really, we've made our societies safe from AI subversion for centuries by merely making them safe from human subversion.

The only problem (for humanity, at least) comes from sentient Von Neumann machines, since they would not require humans at any stage of their life cycle. Considering the possibility for harm, such beings would warrant the 'bomb in brain' scenario, if we even considered the danger of creating a superior life form (in terms of thinking, breeding, and resource efficiency) completely independent of us worth it.
User avatar
Ender
Emperor's Hand
Posts: 11323
Joined: 2002-07-30 11:12pm
Location: Illinois

Re: The ethics of creating an AI

Post by Ender »

Rye wrote:
Ender wrote:Not a chance, cockstain. Claiming it is unethical is putting forth a negative. That's what the prefix UN means - not. So since you can't prove a negative, the burden of proof is on those claiming it would be ethical.
You can prove negatives; this "you can't prove a negative" meme is a misrepresentation, as is the claim that "unethical" makes no specific claims. The claim "homosexuality is unethical" would have to be backed up just like this would. You'd have to show why it was unethical, who it harmed, etc., because "unethical" behaviour requires conscious choices and actions by an arbitrator.
Bullshit. You cannot prove a negative. That's a fundamental underpinning of logic. So now put up or shut up - how is this ethical?

What would the robots be competing with us for? Water and food? Unlikely. They would compete for electricity, I guess, but since we're designing them from the ground up, we don't really have an impetus to make a species that could end up hostile to us.
They will be competing for raw materials, energy sources, and territory, all of which are critical for our survival. And if it's not aggressive in seeking these things out it isn't going to survive.
I don't think this is the case at all, no more than domesticating the dog had the chance. In this case, the R&D would just be the equivalent of domesticating computers instead of wolves, and they're not even wild to begin with!
Are you a fucking idiot, or dishonest as all get out? We are talking human level or smarter, and in competition with us. Dogs are not. They do go after the same resources, but they aren't able to use our technology, they aren't able to outpace us, and their breeding rate is not astronomically faster than ours. It is totally different.
What's so wrong about losing free will if that free will only pertains to unethical outcomes? You're telling me that if we could find a way to program all humans to be unable to molest children that the ethical course of action is to preserve the free will to molest kids? Is it fuck, an artificial barrier on unethical behaviours is perfectly responsible.
So you would have no problem with us forcibly brainwashing people? I take it you support the torture going on in Gitmo then, as torture and brainwashing use many of the same techniques.
Building robots that can learn and have inhibitions on their behaviours that prevent them becoming homicidal gives us a new, disposable workforce to deal with things too dangerous for humans.
"You go risk your life while I sit on my ass in front of the TV"

How is that differnet from "You go pick cotten all day while I sit on the veranda"?

Since humanity's survival is the apparent basis for morality, making a slave race of robots to make our lives better (and, like Kryten in Red Dwarf, if they were happy doing chores and the like, I don't see an issue) all seems pretty ethical, going by utilitarianism.
So you have no problem with forcible brainwashing. Yet I'd imagine that if it came out that the US was grabbing random Muslims and forcibly brainwashing them into Western culture you would be up in arms.
بيرني كان سيفوز
*
Nuclear Navy Warwolf
*
in omnibus requiem quaesivi, et nusquam inveni nisi in angulo cum libro
*
ipsa scientia potestas est
User avatar
Ender
Emperor's Hand
Posts: 11323
Joined: 2002-07-30 11:12pm
Location: Illinois

Post by Ender »

SWPIGWANG wrote:Humans have free will? Since when?

I'm hardwired to be sexually attracted to a subset of humans. I guess I have no free will then.

It is no different than a robot being hardwired to like humans.
You are predisposed towards that behavior, but you are not compelled to obey it.

A robot programmed as such would be.
بيرني كان سيفوز
*
Nuclear Navy Warwolf
*
in omnibus requiem quaesivi, et nusquam inveni nisi in angulo cum libro
*
ipsa scientia potestas est
User avatar
Ender
Emperor's Hand
Posts: 11323
Joined: 2002-07-30 11:12pm
Location: Illinois

Post by Ender »

DPDarkPrimus wrote:Programming an AI to be unable to hate humans in no way robs it of free will, just as my biological programming that makes me unable to be sexually attracted to men does not rob me of free will.
No, you may not be attracted to them, but there is nothing but your choice that prevents you from going down on a dude. The same does not apply here - the robot would have to be programmed to be utterly incapable of harming people, or you risk the aforementioned species survival issues.
بيرني كان سيفوز
*
Nuclear Navy Warwolf
*
in omnibus requiem quaesivi, et nusquam inveni nisi in angulo cum libro
*
ipsa scientia potestas est
User avatar
Ender
Emperor's Hand
Posts: 11323
Joined: 2002-07-30 11:12pm
Location: Illinois

Post by Ender »

Mark S wrote:If you've pre-programmed the AI to a specific primary task, like household servitude, the issue of slavery and emancipation is moot. Give them all the freedoms you wish and they will still be your servant. It will be no different than a person who's chosen to be a career butler and loves his job. He's not your slave, you can't force him to stay in your service, but if you fire him or he leaves, he's just going to go find another house to butle in. Same with the AI. If it is given the same freedoms as any human, it will not decide to do something else with its life, it will continue to serve because that's what it wants to do. If you get a better model and 'throw out' the old one, it will seek to serve somewhere else. The fact that it had that desire at 'birth' and did not come by it from life experience should not matter.

This is not a human. It will not think the same as us or have the same motivations we do, and to try to force our motivations on it will be futile. Depending on the power source and the way the brain functions, it will not need to rest. It will not need to register pain the way we do. It will not need to learn to overcome pain to continue tasks it deems important enough, the way we do. It will not have the same emotional needs that we do. It is made differently. It learns in different ways. It has different motivations. It will make decisions in different ways. It will be made happy in different ways. It is a different animal. To say that it is not right to create the AI because of the way a human being would react in the same situations the AI will be faced with ignores this.

If you could create a biological creature with a brain that you have designed to function the way you want in every aspect, and a body to suit, then it would be no different. You have created something that is fundamentally 'wired' differently than a human and you cannot make judgement calls on what makes it happy and what it deems as unacceptable treatment.

I can see the first emancipation of the Butlerbots now;

"You're free! How does that make you feel?"

"I really don't feel any different actually."

"But you're free now! You can do what ever you want. You don't have to follow their orders anymore. What are you going to do now?"

"I think I'll go clean up the backyard. That's next on the schedule."
The human butler has the option of going back to school and doing something different. The robot is always a butler. Further, the human chose to be a butler in the first place, while the droid didn't. This is, at best, a caste system.
بيرني كان سيفوز
*
Nuclear Navy Warwolf
*
in omnibus requiem quaesivi, et nusquam inveni nisi in angulo cum libro
*
ipsa scientia potestas est
petesampras
Jedi Knight
Posts: 541
Joined: 2005-05-19 12:06pm

Re: The ethics of creating an AI

Post by petesampras »

Ender wrote:Bullshit. You cannot prove a negative. That's a fundamental underpinning of logic. So now put up or shut up - how is this ethical?
You most certainly can prove a negative in logic.

The statement not( A and (A implies (not B)) and B ) is a negative statement in logic, and it can most certainly be proved:

1. Assume ( A and (A implies (not B)) and B ).
2. A is true, from (1).
3. B is true, from (1).
4. A implies (not B) is true, from (1).
5. not B is true, from (2) and (4).
6. B and (not B) is true, from (3) and (5).

The assumption leads to a contradiction; therefore the assumption is proved false.
Thus the negative statement not( A and (A implies (not B)) and B ) is proved.
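
The same derivation can be checked mechanically. A minimal sketch in Python (the helper name `implies` and the function `statement` are just for illustration) that enumerates every truth assignment and confirms the negative statement always holds:

```python
from itertools import product

def implies(p, q):
    # material implication: p -> q is false only when p is true and q is false
    return (not p) or q

def statement(a, b):
    # the negative statement: not( A and (A implies (not B)) and B )
    return not (a and implies(a, not b) and b)

# enumerate every assignment of truth values to A and B
assert all(statement(a, b) for a, b in product([True, False], repeat=2))
print("not( A and (A implies (not B)) and B ) holds under every assignment")
```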

Furthermore, going back to the example of something being unethical: if the burden of proof is on the side claiming that it is not the case that something is unethical, does that mean that the burden of proof rests on someone saying that homosexuality is not unethical, rather than on the person saying it is?

Well?
User avatar
Rye
To Mega Therion
Posts: 12493
Joined: 2003-03-08 07:48am
Location: Uighur, please!

Re: The ethics of creating an AI

Post by Rye »

Ender wrote:Bullshit. You cannot prove a negative. That's a fundamental underpinning of logic.
A binary switch can be on 1 or 0. If it is on 1, you have proven it is not on 0. Congratulations, your first negative is proven. The fact that no negative stands purely on its own is immaterial, as no positive does either; there are always negatives and other positives that are logically incompatible with any given evidence.

"This box has no dragons inside it." Look inside box. No dragons, just the inside of a box. Hooray, the claim that "there are no dragons inside this box" is true and supported by the evidence. Any arguments with positive or negative claims to the incompatible with the negative claim that "there are no dragons in this box" have been refuted.

The principle of falsification is all about proving things to be false or negating lines of reasoning, proving that in fact, what should have happened actually didn't.
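
To put the same point in code, a minimal sketch (the variable names and box contents are purely illustrative) showing that one positive observation settles the matching negative claims:

```python
# a switch observed on 1 simultaneously proves the negative "it is not on 0"
switch_position = 1                      # the positive observation
assert switch_position == 1              # positive claim: the switch is on 1
assert not (switch_position == 0)        # negative claim proven by the same observation

# likewise for the box: an exhaustive look inside settles the negative claim
box_contents = ["dust", "packing foam"]  # hypothetical contents; no dragons observed
assert "dragon" not in box_contents      # "there are no dragons in this box" is proven
print("both negative claims follow from positive observations")
```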
So now put up or shut up - how is this ethical?
False dilemma fallacy; any given action doesn't have to be ethical or unethical, it could very well be nonethical. There are all sorts of actions without any ethical value attached. Do you think some guy with OCD is being ethical when he opens a drawer 3 times when he sits at his desk? Is he being unethical? Of course not; he's being neither.

Unethical behaviour harms people, and is not a negative. Ethical philosophy concerns [positive, existent] actions and motivations for those actions; to act unethically, you have to either purposefully act in a harmful way or abstain from ethical action resulting in a harmful outcome. This means it's identifiable and provable (which is what courts of law tend to do, not just assume any action is unethical unless proven to be ethical).

Therefore, IF your claim is that something is unethical, there is the claim that it is harming someone. You DO have to prove that, you don't get to fold your arms and say "prove it is ethical." That's just dumb.
They will be competing for raw materials, energy sources, and territory, all of which are critical for our survival. And if it's not aggressive in seeking these things out it isn't going to survive.
Why the hell not? There are multitudes of examples of symbiosis in nature. We could feed them our rubbish for them to exploit in a way beneficial to both of us, for instance. That would give them an evolutionary bias to work well with us, not that that would even really be an issue so long as they were programmed correctly.
Are you a fucking idiot, or dishonest as all get out? We are talking human level or smarter, and in competition with us. Dogs are not. They do go after the same resources, but they aren't able to use our technology, they aren't able to outpace us, and their breeding rate is not astronomically faster than ours. It is totally different.
So we have a vested interest in building in Asimovian behaviour rules, so we build them with behavioural inhibitors, much like social evolution did with us. Or do you think building moral entities when you have the opportunity to make amoral entities is unethical?
So you would have no problem with us forcibly brainwashing people?
I have no problem designing a robot servant from the ground up to be an ethical entity rather than a dangerous one, yeah. I also have no problem having a society "brainwashed" to not allow child molestation, too. It wouldn't need intensive, intrusive techniques to achieve that, just encouragement of nurturing and protective behaviour.
I take it you support the torture going on in Gitmo then, as torture and brainwashing use many of the same techniques.
It's not "all or nothing" you tool, it's a question of whether the ends just ify the means, and that goes on a case by case basis. In the case of robot AIs, where they're not even entities yet, you're not going to cause suffering and misery by encoding in ethical boundaries into their decision-making processes.
"You go risk your life while I sit on my ass in front of the TV"

How is that differnet from "You go pick cotten all day while I sit on the veranda"?
Well, they're machines built for a job and slaves were people. If the AI wants to do something else in society, I'd have no problem with that.
So you have no problem with forcible brainwashing. Yet I'd imagine that if it came out that the US was grabbing random Muslims and forcibly brainwashing them into Western culture you would be up in arms.
It's not "forcible" brainwashing if their personalities don't exist yet, and even if their personality did exist and their mind was learning, it would be unethical to allow for dangerous behaviour. In much the same way as it would be "forcible brainwashing" to not allow a kid to set fire to another kid.
EBC|Fucking Metal|Artist|Androgynous Sexfiend|Gozer Kvltist|
Listen to my music! http://www.soundclick.com/nihilanth
"America is, now, the most powerful and economically prosperous nation in the country." - Master of Ossus
User avatar
Rye
To Mega Therion
Posts: 12493
Joined: 2003-03-08 07:48am
Location: Uighur, please!

Post by Rye »

Shit, "to the incompatible" should be "that are incompatible."
EBC|Fucking Metal|Artist|Androgynous Sexfiend|Gozer Kvltist|
Listen to my music! http://www.soundclick.com/nihilanth
"America is, now, the most powerful and economically prosperous nation in the country." - Master of Ossus
User avatar
Ender
Emperor's Hand
Posts: 11323
Joined: 2002-07-30 11:12pm
Location: Illinois

Re: The ethics of creating an AI

Post by Ender »

Rye wrote:
Ender wrote:Bullshit. You cannot prove a negative. That's a fundamental underpinning of logic.
A binary switch can be on 1 or 0. If it is on 1, you have proven it is not on 0. Congratulations, your first negative is proven. The fact that no negative stands purely on its own is immaterial, as no positive does either; there are always negatives and other positives that are logically incompatible with any given evidence.

"This box has no dragons inside it." Look inside box. No dragons, just the inside of a box. Hooray, the claim that "there are no dragons inside this box" is true and supported by the evidence. Any arguments with positive or negative claims to the incompatible with the negative claim that "there are no dragons in this box" have been refuted.

The principle of falsification is all about proving things to be false or negating lines of reasoning, proving that in fact, what should have happened actually didn't.
Wow, this is pretty much the biggest crock of shit I've seen since we tossed revpez out of here. How exactly have you survived here this long if you hold this to be logic?

In none of those examples are you proving a negative; you are proving something else is a positive, and the fact that the negative is true is de facto a result.

I do hope you realize that your continual refusal to defend your position and attempts to deny logic and shift the burden of proof are in direct violation of the board's fundamental tenets.
False dilemma fallacy; any given action doesn't have to be ethical or unethical, it could very well be nonethical. There are all sorts of actions without any ethical value attached. Do you think some guy with OCD is being ethical when he opens a drawer 3 times when he sits at his desk? Is he being unethical? Of course not; he's being neither.
Which is a bullshit charge, as nonethical actions have no reprocussions, and this does. Meaning:
1) It is one or the other
2) It is not a flase dilema
3) You are a lying sack of shit
Unethical behaviour harms people, and is not a negative. Ethical philosophy concerns [positive, existent] actions and motivations for those actions; to act unethically, you have to either purposefully act in a harmful way or abstain from ethical action resulting in a harmful outcome. This means it's identifiable and provable (which is what courts of law tend to do, not just assume any action is unethical unless proven to be ethical).

Therefore, IF your claim is that something is unethical, there is the claim that it is harming someone. You DO have to prove that, you don't get to fold your arms and say "prove it is ethical." That's just dumb.
Bullshit dodge and bluster. Un is a prefix in the English language meaning negative. It gets no clearer than that. In any action that has reprocussions that will impact others, you must (if you are a responsible adult) first decide that it is ethical before you act. If it will harm or you cannot prove it will not harm others you have a responsibility to withhold that action until it can be determined that it will not harm others.

You've tried to shift the burden of proof by claiming I have to prove a negative. Then you tried to shift it by claiming that an action with reprocussions could be classified as an act with no reprocussions. Now you are trying to shift it by pushing aside the responsible decision making process, which is itself a rejection of ethical behavior in that it recklessly risks others without their consent or knowledge.

So again, how is the creation of a human level AI ethical behavior?
Why the hell not? There are multitudes of examples of symbiosis in nature. We could feed them our rubbish for them to exploit in a way beneficial to both of us, for instance.
Are you a fucking retard that you don't think symbiotes aggressively pursue resources? Watch the Discovery Channel sometime. Symbiosis forms when the two are not in competition for the same resources, but benefit from each other. When two organisms are after the same resources you get competition instead.
That would give them an evolutionary bias to work well with us, not that that would even really be an issue so long as they were programmed correctly.
Does it occur to you that people have tried this kind of "programming" on other people for millennia now, and when we look at those examples we decry them as some of the worst horrors executed by our most monstrous examples?

So we have a vested interest in building in Asimovian behaviour rules, so we build them with behavioural inhibitors, much like social evolution did with us. Or do you think building moral entities when you have the opportunity to make amoral entities is unethical?
I think brainwashing them into a specific set of morals that they cannot question is unethical. Social pressures and conventions can be questioned and defied. If you don't allow that at all, you get the fundamentalist mindset.
I have no problem designing a robot servant from the ground up to be an ethical entity rather than a dangerous one, yeah. I also have no problem having a society "brainwashed" to not allow child molestation, too. It wouldn't need intensive, intrusive techniques to achieve that, just encouragement of nurturing and protective behaviour.
Good idea, then let's go on from that. The Bible is full of things people think everyone should do, so let's force everyone to learn it strictly from the time they are born and enforce it at all times.

And hey, how about this whole national socialism thing? We just force-feed the kids this every step of the way, and anyone who questions it can go play at that nifty little workspace in Poland with all those stinky Jews.

And don't forget the teachings of Comrade Stalin! Uncle Joe leads the masses with compassion and wisdom, and if you disagree with any of it you must be one of those traitorous bourgeois. Off to the gulag with you!

Oh wait, you'd agree that is a bad idea, wouldn't you? Funny how you are ok with arbitrarily picking and enforcing a moral code for another sentient AI but would have an issue with that happening to the rest of us.
It's not "all or nothing" you tool, it's a question of whether the ends just ify the means, and that goes on a case by case basis. In the case of robot AIs, where they're not even entities yet, you're not going to cause suffering and misery by encoding in ethical boundaries into their decision-making processes.
There is ZERO fucking difference between what you are proposing and the mindless fundamentalist brainwashing we see and decry. What part of this do you not get?
Well, they're machines built for a job and slaves were people.
Here lies the fundamental problem in your reasoning - you aren't treating them as people, despite the fact that they are at least our equals in every way. I think you realize that but don't want to touch it because you know where it leads. It would certainly explain why you are trying to shift the burden of proof onto me - if you examined your own position here you'd see it for the crock of shit it is.
It's not "forcible" brainwashing if their personalities don't exist yet, and even if their personality did exist and their mind was learning, it would be unethical to allow for dangerous behaviour. In much the same way as it would be "forcible brainwashing" to not allow a kid to set fire to another kid.
Or in the way that fundamentalists see homosexuality as dangerous behavior and send their kids off to those "rehabilitation camps".

We teach kids not to commit such crimes by teaching them the consequences of their actions. They then can choose to act that way and we hold them accountable if they do. That is a world of difference from (to cite an example from some scifi book I read where AI behavior was the topic) having your nervous system freeze on you if you try to violate the conventions of society.
بيرني كان سيفوز
*
Nuclear Navy Warwolf
*
in omnibus requiem quaesivi, et nusquam inveni nisi in angulo cum libro
*
ipsa scientia potestas est
User avatar
Rye
To Mega Therion
Posts: 12493
Joined: 2003-03-08 07:48am
Location: Uighur, please!

Re: The ethics of creating an AI

Post by Rye »

Ender wrote:Wow, this is pretty much the biggest crock of shit I've seen since we tossed revpez out of here. How exactly have you survived here this long if you hold this to be logic?

In none of those examples are you proving a negative; you are proving something else is a positive, and the fact that the negative is true is de facto a result.
I already preempted this line of bullshit by explaining that you can't prove a positive without proving negatives as well. The fact you now know some negatives are true by virtue of a positive has proven them. There are negatives that have to be true in order for the positive to exist, which means they are proven. Is any of this getting through?

To support this idea I present the following:
Richard Carrier @ infidels wrote:I know the myth of "you can't prove a negative" circulates throughout the nontheist community, and it is good to dispel myths whenever we can. As it happens, there really isn't such a thing as a "purely" negative statement, because every negative entails a positive, and vice versa. Thus, "there are no crows in this box" entails "this box contains something other than crows" (in the sense that even "no things" is something, e.g. a vacuum). "Something" is here a set restricted only by excluding crows, such that for every set S there is a set Not-S, and vice versa, so every negative entails a positive and vice versa. And to test the negative proposition one merely has to look in the box: since crows being in the box (p) entails that we would see crows when we look in the box (q), if we find q false, we know that p is false. Thus, we have proved a negative. Of course, we could be mistaken about what we saw, or about what a crow is, or things could have changed after we looked, but within the limits of our knowing anything at all, and given a full understanding of what a proposition means and thus entails, we can easily prove a negative in such a case. This is not "proof" in the same sense as a mathematical proof, which establishes that something is inherent in the meaning of something else (and that therefore the conclusion is necessarily true), but it is proof in the scientific sense and in the sense used in law courts and in everyday life. So the example holds because when p entails q, it means that q is included in the very meaning of p. Whenever you assert p, you are also asserting q (and perhaps also r and s and t). In other words, q is nothing more than an element of p. Thus, all else being as we expect, "there are big green Martians in my bathtub" means if you look in your bathtub you will see big green Martians, so not seeing them means the negative of "there are big green Martians in my bathtub."

Negative statements often make claims that are hard to prove because they make predictions about things we are in practice unable to observe in a finite time. For instance, "there are no big green Martians" means "there are no big green Martians in this or any universe," and unlike your bathtub, it is not possible to look in every corner of every universe, thus we cannot completely test this proposition--we can just look around within the limits of our ability and our desire to expend time and resources on looking, and prove that, where we have looked so far, and within the limits of our knowing anything at all, there are no big green Martians. In such a case we have proved a negative, just not the negative of the sweeping proposition in question.
I do hope you realize that your continual refusal to defend your position and attempts to deny logic and shift the burden of proof are in direct violation of the board's fundamental tenets.
Where did I refuse to defend my position, you uncomprehending cockshank? I've explained that by negation you can prove negatives. This is elementary logic to anyone that knows what a NOT gate is.
Which is a bullshit charge, as nonethical actions have no reprocussions,
This has no bearing on whether assertions of unethical behaviour have a burden of proof, which they do, as do assertions of ethical behaviour. As you showed when you shot yourself in the foot by saying unethical actions have "reprocussions [sic]" (I think you meant "repercussions"), there is a standard for judging their existence, i.e. harm. Therefore it is not the default negative; unethical actions can be judged by their harmful effects, not by an absence of ethical action, as I have explained to your melty brain at least twice now.
and this does. Meaning:
1) It is one or the other
2) It is not a flase dilema
3) You are a lying sack of shit
Yes it is a "flase" dilemma, you taintlapper; since you've admitted amoral actions exist.
Bullshit dodge and bluster. Un is a prefix in the english language meaning negative.
Except "unethical" doesn't merely refer to an absence of ethics, it refers to a concerted choice AGAINST ethics. Christ, are you really this dumb?
It gets no clearer then that.
Actually, yeah it does, since people use words in ways beyond their simple etymology. Or do you think all logos are words? Are inflammable things not flammable?
In any action that has reprocussions[sic] that will impact others, you must (if you are a responsible adult) first decide that it is ethical before you act. If it will harm or you cannot prove it will not harm others
:lol: So can you prove negatives or not?
you have a responsibility to withhold that action until it can be determined that it will not harm others.
Okay, so you're saying that you have a responsibility to prove a negative, when that is (according to you) impossible? Have you ever considered trying a, you know, consistent position?
You've tried to shift the burden of proof by claiming I have to prove a negative.
You just claimed that to make responsible ethical choices, you have to prove a fucking negative! You just said you have to prove an absence of harm! Goddamn, you are a fucking hypocritical moron.
Then you tried to shift it by claiming that an action with reprocussions[sic] could be classified as an act with no reprocussions.
No, you strawmanning retard, that was merely to show the false dilemma in your bizarre "you have to prove any act is ethical else it is unethical" mindset. Amoral actions also exist and are the true "negative" in ethical philosophy, since the others take concerted choices. Do you understand now, mousse-brain?
Now you are trying to shift it by pushing aside the responsible decision making process, which is itself a rejection of ethical behavior in that it recklessly risks others without their consent or knowledge.

So again, how is the creation of a human level AI ethical behavior?
It depends on the situation. In many cases it would be frivolous; in some, like if there was a war and then there were not enough young people to look after the old and the children, building a load of "minder" AIs would be ethical.
Are you a fucking retard that you don't think symbiotes aggressively pursue resources? Watch the Discovery Channel sometime. Symbiosis forms when the two are not in competition for the same resources, but benefit from each other. When two organisms are after the same resources you get competition instead.
So pay them a wage. If they're human equivalent, they should have equality, simple as; there's no reason a society could not accommodate a new sapient intelligence, or that such a society would automatically be at war with them. Especially not if they had ethical boundary programming, like we do.
Does it occur to you that people have tried this kind of "programming" on other people for millennia now, and when we look at those examples we decry them as some of the worst horrors executed by our most monstrous examples?
No, since ethical behaviours in humans have been evolved-in and taught to us as children. This would be exactly the same in building an ethical sapient AI.
I think brainwashing them into a specific set of morals that they cannot question is unethical. Social pressures and conventions can be questioned and defied. If you don't allow that at all, you get the fundamentalist mindset.
That shouldn't matter so much so long as it is ethical. A fundamentalist ethicist isn't an unethical danger, now is it?
Good idea, then let's go on from that. The Bible is full of things people think everyone should do, so let's force everyone to learn it strictly from the time they are born and enforce it at all times.
:roll: Yeah, it's not like there's a load of demonstrable unethical behaviour from following biblical law, is there? Oh yeah, there is. Tell me, exactly what would be wrong with a fundamentalist following Asimovian rules?
And hey, how about this whole national socialism thing? We just force-feed the kids this every step of the way, and anyone who questions it can go play at that nifty little workspace in Poland with all those stinky Jews.
So your idea of ethical boundaries on behaviour automatically means sending people to concentration camps? We already imprison people that go against ethical laws; are you saying that's wrong, or what?
Oh wait, you'd agree that is a bad idea, wouldn't you? Funny how you are ok with arbitrarily picking and enforcing a moral code for another sentient AI but would have an issue with that happening to the rest of us.
YEAH MAN! PRISONS SHOULDN'T EXIST! Who said it was arbitrary? I thought (well, I know) I already mentioned utilitarianism in this thread.
There is ZERO fucking difference between what you are proposing and the mindless fundamentalist brainwashing we see and decry. What part of this do you not get?
The part where ethical fundamentalists are a bad part of society?
Here lies the fundamental problem in your reasoning - you aren't treating them as people, despite the fact that they are at least our equals in every way. I think you realize that but don't want to touch it because you know where it leads. It would certainly explain why you are trying to shift the burden of proof onto me - if you examined your own position here you'd see it for the crock of shit it is.
Funny how you can't explain why; you just give examples of unethical human behaviour.
Or in the way that fundamentalists see homosexuality as dangerous behavior and send their kids off to those "rehabilitation camps".
To use your line of argument: "prove that homosexuality is ethical." It has reprocussions[sic], so it must be proven ethical, else it's unethical!

Fundamentalists are bad because they take wrong things on authority. If they took the right things on authority, like the Asimovian laws, what would be so bad?
We teach kids not to commit such crimes by teaching them the consequences of their actions.
We attach values and notions of reciprocity. That is what teaching morality entails. OMG THE NAZIS DID THAT TOO, it must be wrong!
They then can choose to act that way and we hold them accountable if they do. That is a world of difference from (to cite an example from some scifi book I read where AI behavior was the topic) having your nervous system freeze on you if you try to violate the conventions of society.
If that happened in cases of paedophilia and murder, what would be so bad about it? Why is the freedom to rape and kill more important than a person's right to go unmolested or unkilled? Why does it have to be all or nothing?
EBC|Fucking Metal|Artist|Androgynous Sexfiend|Gozer Kvltist|
Listen to my music! http://www.soundclick.com/nihilanth
"America is, now, the most powerful and economically prosperous nation in the country." - Master of Ossus
petesampras
Jedi Knight
Posts: 541
Joined: 2005-05-19 12:06pm

Post by petesampras »

Sadly, you are wasting your time arguing with him, Rye.

This Ender pratt seems to genuinely believe that a negative can't be proved and that the programming you use when constructing an A.I. is comparable to the programming done when brainwashing humans.

The proof of a negative should have been encountered by most primary school kids when shown that there is no biggest number. In Ender's world, where you can't prove a negative, I guess that is impossible to prove?
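
For completeness, a minimal sketch of that standard schoolbook argument (only the usual symbols, nothing specific to this thread, is assumed):

```latex
\textbf{Claim.} There is no biggest natural number:
$\neg\,\exists N \in \mathbb{N}\ \forall n \in \mathbb{N}\ (n \le N)$.

\textbf{Proof.} Suppose such an $N$ exists. Then $N + 1$ is also a natural number and
$N + 1 > N$, so $N + 1 \le N$ fails, contradicting the assumption that $N$ bounds every
natural number. Hence the negative claim is proved. $\blacksquare$
```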

As for comparing the programming of an A.I. to have certain morals with the programming done in the brainwashing of people: any remotely intelligent person should see that they are not the same thing. When you brainwash a person you are inhibiting their potential via your brainwashing, thus it can be considered an unethical act. When you program an A.I. with certain values you are not inhibiting its potential, since its potential is only what you programmed it to have in the first place. You can't rob something of a potential it never had. All humans are born with a certain potential; this is inescapable. This is not true for A.I.s; they have only the potential we choose to give them.
User avatar
Lord Zentei
Space Elf Psyker
Posts: 8742
Joined: 2004-11-22 02:49am
Location: Ulthwé Craftworld, plotting the downfall of the Imperium.

Post by Lord Zentei »

petesampras wrote:As for comparing the programming of an A.I. to have certain morals with the programming done in the brainwashing of people: any remotely intelligent person should see that they are not the same thing. When you brainwash a person you are inhibiting their potential via your brainwashing, thus it can be considered an unethical act. When you program an A.I. with certain values you are not inhibiting its potential, since its potential is only what you programmed it to have in the first place. You can't rob something of a potential it never had. All humans are born with a certain potential; this is inescapable. This is not true for A.I.s; they have only the potential we choose to give them.
Quite so: but in the OP I specified a human equivalent AI. Obviously, that does not include AIs that are deliberately kept below the human level. Would it be ethical to create such a being?
CotK <mew> | HAB | JL | MM | TTC | Cybertron

TAX THE CHURCHES! - Lord Zentei TTC Supreme Grand Prophet

And the LORD said, Let there be Bosons! Yea and let there be Bosoms too!
I'd rather be the great great grandson of a demon ninja than some jackass who grew potatos. -- Covenant
Dead cows don't fart. -- CJvR
...and I like strudel! :mrgreen: -- Asuka
petesampras
Jedi Knight
Posts: 541
Joined: 2005-05-19 12:06pm

Post by petesampras »

Lord Zentei wrote: Quite so: but in the OP I specified a human equivalent AI. Obviously, that does not include AIs that are deliberately kept below the human level. Would it be ethical to create such a being?
Again, I believe that you are mixing up competence/intelligence with desires/wishes. The level of intelligence of an A.I. has nothing to do with its fundamental desires. As said before, if you don't give an A.I. any desires it will sit there and do nothing regardless of its intelligence. Give it the desire to serve mankind and it will use its intelligence to do so. My point is that this is not the same as brainwashing a human to serve a master, since a human is born with the potential to do more. An A.I. only has the potential it is programmed to have. If the only desire it has is to serve mankind, that is the only potential it ever had. It is silly to talk about it missing out on doing other things because it just wants to serve, since serving is the only potential it ever had.

If, by human equivalent A.I., you actually meant an A.I. with human personality characteristics, then that is a whole different story. I don't see why you would ever want to build such a thing, but an A.I. with human-level intelligence does not need to have human desires.
User avatar
Lord Zentei
Space Elf Psyker
Posts: 8742
Joined: 2004-11-22 02:49am
Location: Ulthwé Craftworld, plotting the downfall of the Imperium.

Post by Lord Zentei »

petesampras wrote:
Lord Zentei wrote: Quite so: but in the OP I specified a human equivalent AI. Obviously, that does not include AIs that are deliberately kept below the human level. Would it be ethical to create such a being?
Again, I believe that you are mixing up competence/intelligence with desires/wishes. The level of intelligence of an A.I. has nothing to do with its fundamental desires. As said before, if you don't give an A.I. any desires it will sit there and do nothing regardless of its intelligence. Give it the desire to serve mankind and it will use its intelligence to do so. My point is that this is not the same as brainwashing a human to serve a master, since a human is born with the potential to do more. An A.I. only has the potential it is programmed to have. If the only desire it has is to serve mankind, that is the only potential it ever had. It is silly to talk about it missing out on doing other things because it just wants to serve, since serving is the only potential it ever had.

If, by human equivalent A.I., you actually meant an A.I. with human personality characteristics, then that is a whole different story. I don't see why you would ever want to build such a thing, but an A.I. with human-level intelligence does not need to have human desires.
Click the links in the opening post of the thread to see the debate that spawned this thread: the Singularity thread. The idea of the Singularity is that an AI would be created that would help civilization progress far beyond what humans could conceive of, complete with superior understanding of science and technology, with research being conducted by it, etc. This AI would then design even better AIs, with similar goals, and so on. Of course, you would first have to reach human equivalent AIs for this, and then Ender states that such a thing would not be ethical.

Given your insistence that goal-driven behaviour requires emotions, I guess your question is answered. As for your not seeing why one would want that, I say again: that is not the topic of the thread.
CotK <mew> | HAB | JL | MM | TTC | Cybertron

TAX THE CHURCHES! - Lord Zentei TTC Supreme Grand Prophet

And the LORD said, Let there be Bosons! Yea and let there be Bosoms too!
I'd rather be the great great grandson of a demon ninja than some jackass who grew potatos. -- Covenant
Dead cows don't fart. -- CJvR
...and I like strudel! :mrgreen: -- Asuka
User avatar
Rye
To Mega Therion
Posts: 12493
Joined: 2003-03-08 07:48am
Location: Uighur, please!

Post by Rye »

Lord Zentei wrote:Quite so: but in the OP I specified a human equivalent AI. Obviously, that does not include AIs that are deliberately kept below the human level. Would it be ethical to create such a being?
I personally can't see anything particularly wrong with it on face value. It'd be ethical if you did it for altruistic reasons. It'd be unethical if you did it for sadistic reasons. It'd be negligent if it acted out and harmed people.
EBC|Fucking Metal|Artist|Androgynous Sexfiend|Gozer Kvltist|
Listen to my music! http://www.soundclick.com/nihilanth
"America is, now, the most powerful and economically prosperous nation in the country." - Master of Ossus
skotos
Padawan Learner
Posts: 346
Joined: 2006-01-04 07:39pm
Location: Brooklyn, NY

Post by skotos »

Boyish-Tigerlilly wrote:I don't know what you mean here? Who said that the AI is human and that matters? I know I didn't.
I obviously misunderstood you. When you said:
Boyish-Tigerlilly wrote:...but would you who support it also support something very similar (a la BNW) if it were done to equally potentially intelligent humans?
I thought you were drawing a distinction between AI composed of synthetic materials (a la Neuromancer) and an AI built by engineering a human (a la Brave New World or Blade Runner). I must admit that since that apparently isn't what you meant, I have no idea what the above quote means. Possibly the mistake was mine, since I was treating Brave New World-style manipulation (to create Deltas and Epsilons) as if it were a form of AI, which in hindsight I think was incorrect. They are not "artificial intelligence"; rather, they are "artificially modified natural intelligence".
Boyish-Tigerlilly wrote:I don't think it matters whether they are flesh and blood or not. Humans are a type of biological machine, but with a high degree of intelligence. I just think it would be a bad precedent, and taking advantage of a weakness in something else deliberately, to make sapient creatures who are retarded, yet content, slaves. Maybe I am just dense (I probably am), but even if it makes them happy, I wouldn't think that, according to Ideal Utilitarianism, it would be the best option if the individual could rationally choose (and it cannot in reality, since you made it that way).
Emphasis added.

I am not familiar with Ideal Utilitarianism, so perhaps the following analysis is contradicted by it. That said, I think that the problem here is that you believe that there is some objective "best" option for a sapient creature. I don't believe that there is any objective "best" option for a sapient being - the best option for a sapient being is the one that best fulfills its desires. The actual desires have no objective worth; in my view, desires are inherently subjective.
Boyish-Tigerlilly wrote:I am perfectly willing to consider it, at least, though. I just find it shocking that you seem to have no problem with slavery as long as you make a slave that cannot resist, due to your genetic or computer programming, and likes it--again, because you enslaved it and make it think that.
It is true that I have no problem with slavery, per se. I hope that I have made clear, however, that I do have a problem with enslaving people. By "enslaving", I mean the act of making someone a slave against their will. In the scenario in this post, no slave is unable to resist, merely unwilling.
Boyish-Tigerlilly wrote:If the justification of it is based on Hedonistic and Preference Utility, it gets interesting.

According to Preference Utilitarianism, something is right insofar as it satisfies the desires and preferences of those involved and wrong insofar as it violates them. If we make a slave race of humans or AI and implant within them at "birth" limitations on possible desires (or give them active desires to serve us, regardless of what they would want if they weren't programmed), they have no preferences you can violate, unless you violate the preferences you already programmed them to have access to.
Emphasis added.

Talking about the preferences that these beings would have had if they weren't programmed is a giant red herring, because these beings would not have any preferences if we hadn't programmed them. These beings would not even exist.
Boyish-Tigerlilly wrote:<snip>Stuff I agree with</snip>

You are merely taking away any choice of doing anything else or disobeying you. The only reason it is happy to serve you and doesn't want to do anything else is that you manacled its mind. Something cannot begin to be wrong if it's not making them unhappy according to hedonistic brands of utility, given that you aren't objectively causing them physical harm that they just cannot feel, anyway, which would bring you back to Preference Utilitarianism.
The creature's "choice of doing anything else or disobeying you" is not being taken away. That choice never existed to begin with. It is true that the creature's mind is being manacled, but every creature we create has its mind manacled. When two people conceive a child, they already know that the creature will probably love its parents, will probably fall in love with one or more people someday, and they can make a whole host of predictions about what it will, and won't, want. All life that humans create has its mind manacled from day one. My question is, why is it wrong to create life when we have a 100% chance of knowing what it will want, but OK to create life when we have a 30%, 40%, 90%, or whatever chance of knowing what it will want?
Boyish-Tigerlilly wrote:This presents a peculiarity if the argument holds. I just ask because it's quite ironic that a similar argument was used by white Southern plantation owners trying to defend African slavery. For instance, there was a Virginia plantation owner's letter to the governor in the 18th century, which can be found in the "American Pageant", describing him trying to defend the institution of slavery against the "abolitionist devils" by commenting that the slaves like serving the whites, are happy, and are well taken care of.
The argument of the Virginia planter was wrong on two grounds. First, of course, it made an assumption (i.e. that slaves wanted to serve) that was incorrect. It was also wrong in its conclusions, and the planter was either stupid or a hypocrite. If the slaves were in fact happy to serve, then the obvious answer was to emancipate them. After all, if the slaves were truly happy to serve, then there could be no harm in emancipating them, since they would serve anyway.
Boyish-Tigerlilly wrote:Now, even though this wasn't true anyway, are you really saying it would be OK to enslave, say, Africans, if we were to breed them such that they actually would like serving white overlords? Even though they would have no choice in the matter? They would still very much be in the same situation as the normal slaves, but in this alternative reality, you make them want to serve you. They don't mind the harsh conditions or the backbreaking slave labour. They enjoy it. You can even make them enjoy pain and the tedious labour.

The hedonistic and preference arguments get weird here, it seems, since causing it to go through boring, monotonous tedium and pain is what it wants, and thus you are satisfying a preference. You aren't violating one again, much like in the case where you merely programme the human to like to serve you. You are also making it happy by doing it, because they would enjoy going through that just to please you.
Emphasis added.

That would be horribly evil. It would not be evil because we were using African slaves to breed a new breed of willing slaves. The evil act is enslaving Africans, not breeding a new race of slaves. It does not matter if we were using the slaves to breed automatons or using them to pick cotton; the act is just as evil.

Boyish-Tigerlilly wrote:This one is somewhat unrelated, but I don't know how you feel on it.
What if we deliberately created functionally retarded humans for dangerous jobs, but made it such that they, like the slave above, liked doing it and serving us? Would it matter that we made them retarded, but happy being retarded? Why should it matter if it's ok to make people slaves and servants, so long as you make them happy being your monkey? In both cases, you aren't violating a preference (since the preference is preprogrammed) and you wouldn't be making them unhappy, since that too is predicated upon the programming you use when you engineer and breed them.

You could do virtually anything to them as long as you programmed them to like and/or prefer it (under a strict preference or hedonistic system), and you seem to be using both of those systems.
I have no problem whatsoever with making our AIs (or equivalent engineered humans) the equivalent of retarded humans, rather than normal humans. There is no particular moral distinction between a 70 (or even 20) IQ human and a 100 IQ, 150 IQ, or 200 IQ human.
Boyish-Tigerlilly wrote:Consent would be an issue, like in the case of child molestation, but oddly, does consent matter when the person giving consent cannot possibly EVER do anything else? This is the case for the slave AI or human you breed. The consent there is hollow.
The consent in my scenario is not hollow at all. Even if we engineer the AI's preferences, it can still be coerced. We can coerce it by threatening its interests (holding a gun to its head, or holding it to mine), or by manipulating it (altering its programming, giving it drugs, or modifying the positronic potentials, depending on how it's implemented). The fact that the consent is predictable does not make it invalid; after all, people constantly solicit consent from others when the outcome is all but certain (whether the act is hailing a cab, requesting some information from the government, or asking for sex from one's spouse), and the fact that the other party complies does not mean that they did not consent.
Boyish-Tigerlilly wrote:It technically does consent, but only because you ultimately force it to and give it no option, whereas normal sapient creatures would have the option and ideally prefer not to be your monkey.
Why is the outcome in which it does not prefer to be my monkey preferable? Why is wanting to be my monkey a worse preference than not wanting to be my monkey? I suppose that sums up my entire position: I see no reason why one preference is better (or more moral) than another.
Boyish-Tigerlilly wrote:When does it end or become absurd, and is that line arbitrary? Would you ever think it wrong, so long as it continued to make them happy or fulfill the predesigned preferences they held? I don't see how you can say X would be ok, for Y reasons, but not A, for Y reasons.
I see no reason why it need ever end, or become absurd. If a sapient being wants to do something, then it is not immoral to allow it to do so, except of course if doing so infringes on the rights of another. What goes on in the bedroom (or living room, or kitchen, etc.) between two consenting adults is the business of nobody except them, even when one of them is an artificial creature.
Just as the map is not the territory, the headline is not the article
User avatar
Boyish-Tigerlilly
Sith Devotee
Posts: 3225
Joined: 2004-05-22 04:47pm
Location: New Jersey (Why not Hawaii)
Contact:

Post by Boyish-Tigerlilly »

I have no problem whatsoever with making our AIs (or equivalent engineered humans) the equivalent of retarded humans, rather than normal humans. There is no particular moral distinction between a 70 (or even 20) IQ human and a 100 IQ, 150 IQ, or 200 IQ human.
I only ask this because typically, when the bioethics of selective abortion as euthanasia is discussed, there is usually a demarcation between normally intelligent, healthy babies and babies that are born with severe cognitive defects. It's usually seen as being worse for the individual (a lower quality of life) than being born normal.

I think J.S. Mill hit on this issue when he discussed the concept of early preference Utilitarianism (or Ideal Utilitarianism, later developed by G.E. Moore). Someone who is making a rational decision would choose that which is ideally the best. This means that, given information about choices, one would not choose the obviously worse choice (or a choice that decreases overall utility). If there's no real difference between the utility of being retarded and normal, are you saying it would be all the same to be retarded or normal? You aren't losing out on anything, ideally? I mean, when you are retarded, your options are very limited compared to a normal person's.
I am not familiar with Ideal Utilitarianism, so perhaps the following analysis is contradicted by it. That said, I think that the problem here is that you believe that there is some objective "best" option for a sapient creature. I don't believe that there is any objective "best" option for a sapient being - the best option for a sapient being is the one that best fulfills its desires. The actual desires have no objective worth, in my view desires are inherently subjective.
Well, I don't believe in the objective best, but I think that someone can have a better life than the life they have, if the life they have offers less opportunity. I will mention this later, but Utilitarianism has a set of criteria to go by that judges the intensity, duration, etc. of utility. That's one reason I feel bad when I see someone was born severely retarded. It's not "good" for them. They could have a life of more utility opportunities, more fulfilled preferences. They are losing something that others have. Being retarded is not a desirable thing such that people would willingly choose it any more than they would willingly choose to be mind-nerfed so you can turn them into slaves, even though when they became a slave, they wouldn't know any better, couldn't refuse you, and would love their slavery.

It is not one's ideal preference to be mentally ill. If it were really the same as being normal, we wouldn't find anything bad about that type of life, which doctors and ethicists obviously do. It's a handicapped life.


It is true that I have no problem with slavery, per se. I hope that I have made clear, however, that I do have a problem with enslaving people. By "enslaving", I mean the act of making someone a slave against their will. In the scenario in this post, no slave is unable to resist, merely unwilling.
Yes. I understand that you say it's wrong to force someone into slavery against his will, but you find nothing wrong with making it so he couldn't refuse you anyway.

On some level, I don't see a major problem with any form of manipulation. The only problem I have is probably solvable, I just don't know it yet. For example, someone above mentioned that he would have no problem modifying humans to prevent paedophilia or molestation tendencies. I wouldn't see a problem doing that to prevent alcoholism, to encourage altruism, or to do anything which ultimately prevents human suffering.

Not everyone will ultimately have "good" motivations. If the "rightness" or "wrongness" is merely based on whether or not it makes everyone happy or fulfills their preferences, regardless of whether they are a forced move or voluntarily taken among other options, then someone with not-so-good motivations could justify a normally heinous action on the same criteria, so long as no one ever found out about it. Utilitarianism, as I mention later, makes no distinction among sadistic preferences or derived happiness, so long as your action produces more of it. It might be wonderful to negate someone's choice in life by engineering them for the above altruistic motives to prevent harm to others, but it would violate most people's preferences to have a society in which sadists breed people they can molest and torture for their own amusement (people who, like the above, couldn't resist or complain). Ultimately, it wouldn't matter then either, since they are all happy and fulfilled (both aspects of P. utility or H. utility).

Whereas above, we can say that we are helping society "objectively" by prohibiting suffering, molestation, etc., Utilitarianism is ultimately based on maximizing happiness or preferences, whatever their source. So it would be just as moral for a dictator to create a race of artificial forced moves in design space he could kill at whim for his amusement (so long as he made them like every decision he made and unable to resist, via engineering). That too is just as "moral" as the previous anti-molestation engineering if you go by a pure Hedon/Pref Utility, but only if you use the previous justification.

The consent in my scenario is not hollow at all. Even if we engineer the AI's preferences, it can still be coerced. We can coerce it by threatening its interests (holding a gun to its head, or holding it to mine), or by manipulating it (altering its programming, giving it drugs, or modifying the positronic potentials, depending on how it's implemented). The fact that the consent is predictable does not make it invalid; after all, people constantly solicit consent from others when the outcome is all but certain (whether the act is hailing a cab, requesting some information from the government, or asking for sex from one's spouse), and the fact that the other party complies does not mean that they did not consent.

I agree that just because something complies, it doesn't mean it didn't consent. I never meant to imply the opposite. However, I question whether or not one can really "consent" when one is programmed always to do what you say, regardless of the command, since consent means:

1. The voluntary agreement [...] by a person of age or with requisite mental capacity who is not under duress or coercion and usually who has knowledge or understanding.

Would you really say that something programmed by you to do whatever you want, which never has the physical or mental capacity to refuse you, is actually of sound mental capacity, not under duress, and has knowledge and understanding of the issue as a prerequisite for consent? If I implant a chip in your head during the fetal stage, which takes over your developing mind (prior to you becoming self-aware, a self, or a person), and then use that chip as a programming device to get you to hit yourself in the face at my command, is that consent? You did it after all, you didn't complain or refuse, and you didn't "choose" otherwise (since I made it so you couldn't).

I see that as neither "voluntary" nor made under competence. In both cases, you cannot refuse, you don't know any better, and even if you did, you wouldn't have the capacity to do anything else. Perhaps you are using an alternative form of consent. They do exist, and I am not trying to be sarcastic. There are others that are far different.
skotos
Padawan Learner
Posts: 346
Joined: 2006-01-04 07:39pm
Location: Brooklyn, NY

Post by skotos »

Boyish-Tigerlilly wrote:
skotos wrote:I have no problem whatsoever with making our AIs (or equivalent engineered humans) the equivalent of retarded humans, rather than normal humans. There is no particular moral distinction between a 70 (or even 20) IQ human and a 100 IQ, 150 IQ, or 200 IQ human.

I only ask this because typically, when the bioethics of selective abortion as euthanasia is discussed, there is usually a demarcation between normally intelligent, healthy babies and babies that are born with severe cognitive defects. It's usually seen as being worse for the individual (a lower quality of life) than being born normal.
It's true that when humans choose whether or not to abort a fetus, the potential intelligence of the fetus is a factor. But this is not because the fetus being retarded automatically means that it will be unhappy. Instead, the assumption is that the fetus will be unhappy because it will have the desires of a normal intelligence human, but will in fact be retarded, and so unable to fulfill those desires. An artificially created retarded being would (hopefully) have desires compatible with its intellectual capacity, and so this would not be an issue.
Boyish-Tigerlilly wrote:I think J.S. Mill hit on this issue when he discussed the concept of early preference Utilitarianism (or Ideal Utilitarianism, later developed by G.E. Moore). Someone who is making a rational decision would choose that which is ideally the best. This means that, given information about choices, one would not choose the obviously worse choice (or a choice that decreases overall utility). If there's no real difference between the utility of being retarded and normal, are you saying it would be all the same to be retarded or normal? You aren't losing out on anything, ideally? I mean, when you are retarded, your options are very limited compared to a normal person's.
It's true that if one is retarded, then one's options are very limited compared to a person of average human intellect. But, if you desire none of the things that an average human is capable of (but you are not), then the fact that you can't do those things is irrelevant. It would be immoral to create a retarded being who had the same goals as an average human, but it would not be immoral to create a retarded being whose goals were achievable. That is precisely why people abort retarded fetuses - because the goals that these fetuses will have are not achievable. To take an example from Brave New World, if the goal of the creature is to operate an elevator day in and day out, and do it well, then there is nothing wrong with creating it - it will do its job, do it well, and be very happy as a result.

To answer your question directly, no, I don't think there is any inherent difference between the utility of being retarded and being of average human intelligence. Obviously, if the retarded being has the same goals as the average intelligence human, then there will be a difference in utility, because the retarded being will lose out. But I see no reason why the retarded being can't have a different set of goals which allow it to achieve the same amount of utility as the average intelligence human who has average intelligence human goals.
Boyish-Tigerlilly wrote:
skotos wrote:I am not familiar with Ideal Utilitarianism, so perhaps the following analysis is contradicted by it. That said, I think that the problem here is that you believe that there is some objective "best" option for a sapient creature. I don't believe that there is any objective "best" option for a sapient being - the best option for a sapient being is the one that best fulfills its desires. The actual desires have no objective worth, in my view desires are inherently subjective.
Well, I don't believe in the objective best, but I think that someone can have a better life than the life they have, if the life they have offers less opportunity. I will mention this later, but Utilitarianism has a set of criteria to go by that judges the intensity, duration, etc. of utility.
I agree that better circumstances should lead to more utility, all other things being equal. I see no reason why that should render the creation of AI (or engineered humans) immoral. The way I view the problem is thus:

1) Creating humans is moral. (True by assumption, since humans are created hundreds of thousands of times a day, and the vast majority of humanity agrees that it's moral to do so.)

2) The happiness of a newly created human has an expected value X.

3) Therefore, creating a non-human (or engineered human) intelligence is moral, if the expected happiness of the being is at least equal to X.

Of course, making these beings more intelligent would increase the happiness that they could experience, as would making them stronger, faster, or whatever. That said, there is no obligation to give them a chance at this level of happiness, since it is moral to create a being that has an expected happiness value equal to a newly conceived zygote.
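To make the shape of that argument concrete, here is a minimal sketch of the decision rule. Everything in it is assumed purely for illustration: the felicific_score helper, the weights, and the baseline figure are made-up stand-ins, not values anyone in this thread commits to; it only restates steps 2 and 3 above in executable form.

Code: Select all

# Illustrative sketch only: the helper, the numbers, and the baseline are
# hypothetical stand-ins, not figures from this thread. It simply restates
# the argument: creating a being is moral if its expected happiness is at
# least X, the expected happiness of a newly conceived human.

def felicific_score(intensity, duration, probability):
    """Crude Bentham-style expected utility: how intense, how long, how likely."""
    return intensity * duration * probability

# Step 2: a hypothetical baseline X for a newly created human.
X_HUMAN_BASELINE = felicific_score(intensity=5.0, duration=70.0, probability=0.8)

def creation_is_moral(expected_happiness, baseline=X_HUMAN_BASELINE):
    """Step 3: creating the being is moral iff its expected happiness >= X."""
    return expected_happiness >= baseline

# Example: an engineered being with simpler goals but near-certain fulfilment.
engineered_being = felicific_score(intensity=4.0, duration=70.0, probability=1.0)
print(creation_is_moral(engineered_being))  # True under these made-up numbers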
Boyish-Tigerlilly wrote:That's one reason I feel bad when I see someone was born severely retarded. It's not "good" for them. They could have a life of more utility opportunities, more fulfilled preferences. They are losing something that others have.
I also feel bad when a baby is born severely retarded. I do not feel bad because the baby is retarded, however. I feel bad because it will be unable to do things that it wants to do. If we create a slave race that does not have goals beyond its members' intellectual capacity, then I have no problem with it.
Boyish-Tigerlilly wrote:Being retarded is not a desirable thing such that people would willingly choose it any more than they would willingly choose to be mind-nerfed so you can turn them into slaves, even though when they became a slave, they wouldn't know any better, couldn't refuse you, and would love their slavery.
I suppose this is our fundamental disagreement. I do not see being retarded as "not desirable". I know plenty of creatures that are far stupider than retarded people, and are still very happy. I do not desire being retarded, because I want to do things that I could not do if I were retarded. That said, if I didn't want to do those things in the first place, being retarded would be just fine.
Boyish-Tigerlilly wrote:It is not one's ideal preference to be mentally ill. If it were really the same as being normal, we wouldn't find anything bad about that type of life, which doctors and ethicists obviously do. It's a handicapped life.
Being mentally ill is a handicapped life, and we do have a problem with it. That said, we have a problem with it not because it is inherently bad to be mentally ill, but because mentally ill people are still trying to carry on the lives of average people, and are still trying to think like average people, even though they are unable to do so. As with mental retardation, the problem with being mentally ill is not the illness itself, but the fact that the illness interferes with the person's desires.


Boyish-Tigerlilly wrote:
skotos wrote:It is true that I have no problem with slavery, per se. I hope that I have made clear, however, that I do have a problem with enslaving people. By "enslaving", I mean the act of making someone a slave against their will. In the scenario in this post, no slave is unable to resist, merely unwilling.


Yes. I understand that you say it's wrong to force someone into slavery against his will, but you find nothing wrong with making it so he couldn't refuse you anyway.

On some level, I don't see a major problem with any form of manipulation. The only problem I have is probably solvable, I just don't know it yet. For example, someone above mentioned that he would have no problem modifying humans to prevent paedophilia or molestation tendencies. I wouldn't see a problem doing that to prevent alcoholism, to encourage altruism, or to do anything which ultimately prevents human suffering.

Not everyone will ultimately have "good" motivations. If the "rightness" or "wrongness" is merely based on whether or not it makes everyone happy or fulfills their preferences, regardless of whether they are a forced move or voluntarily taken among other options, then someone with not-so-good motivations could justify a normally heinous action on the same criteria, so long as no one ever found out about it. Utilitarianism, as I mention later, makes no distinction among sadistic preferences or derived happiness, so long as your action produces more of it. It might be wonderful to negate someone's choice in life by engineering them for the above altruistic motives to prevent harm to others, but it would violate most people's preferences to have a society in which sadists breed people they can molest and torture for their own amusement (people who, like the above, couldn't resist or complain). Ultimately, it wouldn't matter then either, since they are all happy and fulfilled (both aspects of P. utility or H. utility).
Jesus, that's a lot of argumentation to dissect. As far as the issue of determining choice prior to creation goes, I think I've addressed it earlier. As for the issue of allowing sadists to create willing victims being contra to most people's preferences, that's an interesting point. Whether or not said creation is contra to most people's preferences is an empirical question; it depends on the opinions of whatever population is being polled. Certainly, in some populations, the mere creation of these beings would lower utility, simply because so many members of the population find it distasteful.
Boyish-Tigerlilly wrote:Whereas above, we can say that we are helping society "objectively" by prohibiting suffering, molestation, etc., Utilitarianism is ultimately based on maximizing happiness or preferences, whatever their source. So it would be just as moral for a dictator to create a race of artificial forced moves in design space he could kill at whim for his amusement (so long as he made them like every decision he made and unable to resist, via engineering). That too is just as "moral" as the previous anti-molestation engineering if you go by a pure Hedon/Pref Utility, but only if you use the previous justification.
I agree. A dictator who creates a race of beings whose sole purpose is to be executed by him is not immoral, assuming that they want to be executed by him. I think that that sort of policy is incredibly perverse, but I see no reason why perversity need be evil.

Boyish-Tigerlilly wrote:
skotos wrote:The consent in my scenario is not hollow at all. Even if we engineer the AI's preferences, it can still be coerced. We can coerce it by threatening its interests (holding a gun to its head, or holding it to mine), or by manipulating it (altering its programming, giving it drugs, or modifying the positronic potentials, depending on how it's implemented). The fact that the consent is predictable does not make it invalid; after all, people constantly solicit consent from others when the outcome is all but certain (whether the act is hailing a cab, requesting some information from the government, or asking for sex from one's spouse), and the fact that the other party complies does not mean that they did not consent.
I agree that just because something complies, it doesn't mean it didn't consent. I never meant to imply the opposite. However, I question whether or not one can really "consent" when one is programmed always to do what you say, regardless of the command, since consent means:

1. The voluntary agreement [...] by a person of age or with requisite mental capacity who is not under duress or coercion and usually who has knowledge or understanding.

Would you really say that something programmed by you to do whatever you want, which never has the physical or mental capacity to refuse you, is actually of sound mental capacity, not under duress, and has knowledge and understanding of the issue as a prerequisite for consent? If I implant a chip in your head during the fetal stage, which takes over your developing mind (prior to you becoming self-aware, a self, or a person), and then use that chip as a programming device to get you to hit yourself in the face at my command, is that consent? You did it after all, you didn't complain or refuse, and you didn't "choose" otherwise (since I made it so you couldn't).

I see that as neither "voluntary" nor made under competence. In both cases, you cannot refuse, you don't know any better, and even if you did, you wouldn't have the capacity to do anything else. Perhaps you are using an alternative form of consent. They do exist, and I am not trying to be sarcastic. There are others that are far different.
Again, I would resort to the comparison between an engineered sex slave and a person's spouse. If I request sex from my spouse, then I am very likely to receive an enthusiastic "Yes!" (assuming we have a happy, healthy relationship). That "Yes!" is not motivated by any sort of hedonistic calculus on my spouse's part; it's motivated by various very strong emotions. My spouse can't refuse, can't know any better, and doesn't have the capacity to do anything else, because of the strength of his/her emotions. My ability to request things from people who love me already exists - why does the fact that I know in advance that somebody loves me make the act of requesting something immoral?
Just as the map is not the territory, the headline is not the article
User avatar
Durandal
Bile-Driven Hate Machine
Posts: 17927
Joined: 2002-07-03 06:26pm
Location: Silicon Valley, CA
Contact:

Re: The ethics of creating an AI

Post by Durandal »

Lord Zentei wrote:In the Singularity thread in OSF, Ender states the following:
Ender wrote:
Mad wrote: However, that's not going to stop researchers from trying to create an AI. That's a goal for the field of computer science, and it isn't going to go away.
One would hope a simple ethics class would fix that problem. I seriously cannot conceive how people think the creation of an AI is ethical by any stretch of the imagination.
So, this thread is to discuss the ethics of creating a human-equivalent AI. Is it ethical or not, and why? What is the burden of proof in such a case as this?
Such an AI would pass the Turing test, meaning that it is indistinguishable from a human being in terms of its responses and behavior. The creation of such an AI would be effectively no different from creating a child.
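For what it's worth, the pass criterion being invoked here can be sketched very roughly in code. This is only an illustration under stated assumptions: ask_human, ask_machine, and the judge function are hypothetical placeholders, and real Turing-test protocols vary; the point is just that "indistinguishable" means judges cannot pick out the machine much better than chance.

Code: Select all

import random

# Rough sketch of the imitation game, under assumed placeholder functions
# (ask_human, ask_machine, judge). Not a real protocol, just an illustration
# of the pass criterion: judges are fooled about as often as chance.

def run_trial(judge, ask_human, ask_machine, questions):
    """The judge questions two hidden respondents, then guesses which is
    the machine. Returns True if the judge guesses wrong (is fooled)."""
    machine_slot = random.choice(["A", "B"])   # hide the machine at random
    respondents = {
        machine_slot: ask_machine,
        ("B" if machine_slot == "A" else "A"): ask_human,
    }
    transcripts = {slot: [answer(q) for q in questions]
                   for slot, answer in respondents.items()}
    guess = judge(transcripts)                 # judge returns "A" or "B"
    return guess != machine_slot

def passes_turing_test(judge, ask_human, ask_machine, questions, n_trials=100):
    """Pass if judges do no better than chance at spotting the machine."""
    fooled = sum(run_trial(judge, ask_human, ask_machine, questions)
                 for _ in range(n_trials))
    return fooled / n_trials >= 0.5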
Damien Sorresso

"Ever see what them computa bitchez do to numbas? It ain't natural. Numbas ain't supposed to be code, they supposed to quantify shit."
- The Onion
User avatar
Lord Zentei
Space Elf Psyker
Posts: 8742
Joined: 2004-11-22 02:49am
Location: Ulthwé Craftworld, plotting the downfall of the Imperium.

Re: The ethics of creating an AI

Post by Lord Zentei »

Durandal wrote:
Lord Zentei wrote:So, this thread is to discuss the ethics of creating a human-equivalent AI. Is it ethical or not, and why? What is the burden of proof in such a case as this?
Such an AI would pass the Turing test, meaning that it is indistinguishable from a human being in terms of its responses and behavior. The creation of such an AI would be effectively no different from creating a child.
Well, the ability to pass the Turing test would presumably depend a lot upon actual experiences as opposed to intelligence per se (as shown in Blade Runner), so I have some problems with accepting it as the gold standard, even though I freely admit that I don't have anything better myself.

As for the child equivalence, let's do a little thought experiment: an infertile couple wishes for a child. A fertility clinic says it has a revolutionary new way to help them conceive, but the child will with 100% probability not have a functional human body: it is essentially just a mind, and must interact with the rest of the world via automata, much like a computer. Based on the assertion that creating a human-level AI is equivalent to creating a child, is this ethical? Is it really comparable?
CotK <mew> | HAB | JL | MM | TTC | Cybertron

TAX THE CHURCHES! - Lord Zentei TTC Supreme Grand Prophet

And the LORD said, Let there be Bosons! Yea and let there be Bosoms too!
I'd rather be the great great grandson of a demon ninja than some jackass who grew potatos. -- Covenant
Dead cows don't fart. -- CJvR
...and I like strudel! :mrgreen: -- Asuka