on Evil AI

SF: discuss futuristic sci-fi series, ideas, and crossovers.

Moderator: NecronLord

User avatar
madd0ct0r
Sith Acolyte
Posts: 6259
Joined: 2008-03-14 07:47am

Re: on Evil AI

Post by madd0ct0r »

Ziggy Stardust wrote:
madd0ct0r wrote: If the statement holds true for all simple systems, by induction it holds true for all complex ones.
Well, even setting aside the standard of proof for even showing that this statement holds true for all simple systems, that's not even how induction works, without making a litany of additional limiting assumptions about the nature of the complex system which you would be hard-pressed to reasonably prove hold for something as nebulous as a "complex moral system" (however we choose to define it, which is another can of worms altogether). There's a reason there are entire fields of mathematics devoted to modeling the behavior of complex systems that don't just rely on simple inductive rules.
No-one in this thread has disagreed with the statement for all simples, not even you.have you got any good starter points for complex systems? IWork with structal and stiffness and resonance matrices but I also work with huge silly bearcracies and modelling them beyond flowcharts and input-output cycles would be useful.
"Aid, trade, green technology and peace." - Hans Rosling.
"Welcome to SDN, where we can't see the forest because walking into trees repeatedly feels good, bro." - Mr Coffee
User avatar
Khaat
Jedi Master
Posts: 1047
Joined: 2008-11-04 11:42am

Re: on Evil AI

Post by Khaat »

madd0ct0r wrote:No-one in this thread has disagreed with the statement for all simples, not even you.have you got any good starter points for complex systems? IWork with structal and stiffness and resonance matrices but I also work with huge silly bearcracies and modelling them beyond flowcharts and input-output cycles would be useful.
1) What doesn't kill you makes you stronger. "Except bears, bears will kill you." :lol:
2) There is the possibility (just throwing this at the wall to see if it sticks) that a simple system with a layer of boolean "do not" cut-outs/commandments/layers (making it complex?) might serve, but the list of "do nots" would require another AI to develop sufficient depth, quickly enough.

I don't think any simple moral system is sufficient for something as complicated as AI (or people) in all sets of circumstances. Moral systems are designed around operation in normal circumstances, not emergencies, or exceptional circumstances.

The largest complicating issue with language complex systems are the exceptions in it. Like any legal system, it has to be a "living document", subject to review, revision, amendment as new words experiences are folded in and the system grows/changes/metamorphs. This oversight is how bureaucracies make their way. And the gates bureaucracies use are never simple "if yes, then..." there is almost always compromise between extremes.

But the OP is to avoid that statistically eventuality of extreme result, and I don't think that is even possible. Can we imagine a set of circumstances where AI decides, "that's it! I've had it with these guys!" Of course we can. Can we further imagine a rules system that has to be built to avoid specifically building around that system? I don't think so. That's in effect creating a circumstance where side A has to build a wall, and side B has to get across it. Side A not only has to build the wall, but also do what side B is doing: working out a way past it. The point of AI is that it promises to do what we can, only faster, or more accurately. And you are asking for a simple (or complex) moral system (that may or may not work for us) to apply to this "better mind". I think it would be better for you to ask the hypothetical AI to develop a superior moral system for us.

We build zoos knowing full well (except in dinosaur-themed movies, obviously) there will be an animal escape, and plan contingencies around that eventuality. Why would moral systems for AI be any different?

Off topic:
I once saw a program (Nova or something) where the three dirigible robot probes (on an alien world) had different "personalities" - the bold one, the shy one, the whatever - so an AI made up of several facets like this would probably more closely resemble operation of a human mind, with motive subject to the ebb and flow of different facets of the collective, a chorus determining the outcome.
Rule #1: Believe the autocrat. He means what he says.
Rule #2: Do not be taken in by small signs of normality.
Rule #3: Institutions will not save you.
Rule #4: Be outraged.
Rule #5: Don’t make compromises.
User avatar
Solauren
Emperor's Hand
Posts: 10375
Joined: 2003-05-11 09:41pm

Re: on Evil AI

Post by Solauren »

Adam Reynolds wrote:
Solauren wrote:Simple solution to EVIL AI

Only keep it in small, harmless bodies.
No external communication abilities beyond verbal
Can only physically move via remote control.

So, basically AI remote control toy cars.
What is to stop them from using url=https://www.wired.com/2015/03/stealing- ... sing-heat/]heat[/url] or ultrasonic frequencies. Not to mention something we haven't thought of yet.

It is an extremely dangerous proposition to assume that your AI will be inherently unable to communicate with the outside world. What is possibly the safest approach is slowly augmented human brains, though that has the obvious problems of inequality.
Should have been more specific:
no external communications means no sensor abilities either.
can only physically move via remote control means no ability to alter their own mechanical functions in anyway.

if it can't send signals beyond talking in English, can't receive except in English, and can't alter or control it's body in anyway, and can only sense it's environment via a microphone, that really, really, really limits it's abilities.

Basically, make it the AI version of a Quadrapeligic.
I've been asked why I still follow a few of the people I know on Facebook with 'interesting political habits and view points'.

It's so when they comment on or approve of something, I know what pages to block/what not to vote for.
User avatar
Khaat
Jedi Master
Posts: 1047
Joined: 2008-11-04 11:42am

Re: on Evil AI

Post by Khaat »

"In Descartes' Error, neurologist Antonio Damasio shows that humans who behave purely rationally are brain-damaged. Patients who have suffered injury to the areas in the brain that control emotion, but who retain their intellectual abilities, end up acting in socially aberrant ways."
http://www.slate.com/articles/health_an ... _meee.html
Suggesting, by extension, that a purely rational AI would act in socially aberrant ways.
Rule #1: Believe the autocrat. He means what he says.
Rule #2: Do not be taken in by small signs of normality.
Rule #3: Institutions will not save you.
Rule #4: Be outraged.
Rule #5: Don’t make compromises.
Adam Reynolds
Jedi Council Member
Posts: 2354
Joined: 2004-03-27 04:51am

Re: on Evil AI

Post by Adam Reynolds »

Solauren wrote: Should have been more specific:
no external communications means no sensor abilities either.
can only physically move via remote control means no ability to alter their own mechanical functions in anyway.

if it can't send signals beyond talking in English, can't receive except in English, and can't alter or control it's body in anyway, and can only sense it's environment via a microphone, that really, really, really limits it's abilities.

Basically, make it the AI version of a Quadrapeligic.
How exactly do you propose building a computer in a box without giving it the ability to cool itself? This requires both a heat sensor and the ability to vary heat output.

Think of this another way. If you were trapped in a box like this, how would you communicate without someone else noticing? Now consider the fact that the sort of system we are talking about is significantly smarter than you or any other person.

This is not to mention the fact that such a system could almost certainly convince someone to let it out. I am not sure why I did not think of this in the first post.

All you have accomplished by doing this is convincing this AI you are an obstacle to its goal, whatever that is.
User avatar
Solauren
Emperor's Hand
Posts: 10375
Joined: 2003-05-11 09:41pm

Re: on Evil AI

Post by Solauren »

Adam Reynolds wrote:
Solauren wrote: Should have been more specific:
no external communications means no sensor abilities either.
can only physically move via remote control means no ability to alter their own mechanical functions in anyway.

if it can't send signals beyond talking in English, can't receive except in English, and can't alter or control it's body in anyway, and can only sense it's environment via a microphone, that really, really, really limits it's abilities.

Basically, make it the AI version of a Quadrapeligic.
How exactly do you propose building a computer in a box without giving it the ability to cool itself? This requires both a heat sensor and the ability to vary heat output.

Think of this another way. If you were trapped in a box like this, how would you communicate without someone else noticing? Now consider the fact that the sort of system we are talking about is significantly smarter than you or any other person.

This is not to mention the fact that such a system could almost certainly convince someone to let it out. I am not sure why I did not think of this in the first post.

All you have accomplished by doing this is convincing this AI you are an obstacle to its goal, whatever that is.
You do know that those systems could be COMPLETELY UNCONNECTED FROM THE AI, don't you?
I've been asked why I still follow a few of the people I know on Facebook with 'interesting political habits and view points'.

It's so when they comment on or approve of something, I know what pages to block/what not to vote for.
Simon_Jester
Emperor's Hand
Posts: 30165
Joined: 2009-05-23 07:29pm

Re: on Evil AI

Post by Simon_Jester »

Solauren wrote:
Shroom Man 777 wrote:They don't need corporeal asskicking giant robot bodies to mess with people. Irredisregarding capitalist vs. socialist or whatever arguments about Wall Street, the stock market doesn't have a giant robot body and what happens to it can profoundly affect people around the world for better or worse. AI could do such things too.
Hence 'no external communication abilities'.

No matter how SMART and AI, if it can't move or communicate beyond talking to a human, it's not dangerous.
It's not very dangerous, but it is very useless. Anyone who builds an AI is going to build it with the intent of doing things (i.e. sorting data on the Internet, designing technologies, predicting stock trends). They will therefore need to give it access to databases and digital communications, and give it a measure of influence over the real world so that it can influence the world in the ways they desire.
Solauren wrote:Should have been more specific:
no external communications means no sensor abilities either.
can only physically move via remote control means no ability to alter their own mechanical functions in anyway.

if it can't send signals beyond talking in English, can't receive except in English, and can't alter or control it's body in anyway, and can only sense it's environment via a microphone, that really, really, really limits it's abilities.

Basically, make it the AI version of a Quadrapeligic.
1) What is the utility of this device? Why would we build it?
2) To make it useful, we must at some point connect it to something, give it access to resources or at least listen to it and do as it says.
3) To make it useful we must give it information on its surroundings and feedback; otherwise it will be permanently dysfunctional and noncommunicative.
4) If we keep a device this way, we don't get any feedback on what it WOULD do if it had the power to influence anything, and we may well be causing damage or distortion to the AI's priorities. As a result, connecting it up to something at a later time, or even just building a second one connected to something, becomes increasingly dangerous.
This space dedicated to Vasily Arkhipov
User avatar
cadbrowser
Padawan Learner
Posts: 494
Joined: 2006-11-13 01:20pm
Location: Kansas City Metro Area, MO
Contact:

Re: on Evil AI

Post by cadbrowser »

Solauren wrote:Simple solution to EVIL AI

Only keep it in small, harmless bodies.
No external communication abilities beyond verbal
Can only physically move via remote control.

So, basically AI remote control toy cars.
Bob Slydell wrote:What would ya say...ya do here?
Seriously though. What would be the purpose of building an AI such as this?

Earlier it was mentioned that it would be in effect a quadriplegic AI. This, seems to me more like an AI with down syndrome. I just can't wrap my head around these limitations where it could actually do anything useful.

Maybe I'm missing something? :wtf:
Financing and Managing a webcomic called Geeks & Goblins.


"Of all the things I've lost, I miss my mind the most." -Ozzy
"Cheerleaders are dancers who have gone retarded." - Sparky Polastri
"I have come here to chew bubblegum and kick ass...and I'm all out of bubblegum." - Frank Nada
Crazedwraith
Emperor's Hand
Posts: 11947
Joined: 2003-04-10 03:45pm
Location: Cheshire, England

Re: on Evil AI

Post by Crazedwraith »

Simon_Jester wrote:It's not very dangerous, but it is very useless
cadbrowser wrote: Maybe I'm missing something? :wtf:
Well the hypothesis that it's impossible to build an non-evil AI breaks if you can find any example of AI that won't turn evil. Even if it's not a practical one.

Though to be honest doesn't the technically definition of AI include being able to re-write it's own code? So there's literally no AI that can't turn itself evil if it wants. You just have to make not want to.
User avatar
cadbrowser
Padawan Learner
Posts: 494
Joined: 2006-11-13 01:20pm
Location: Kansas City Metro Area, MO
Contact:

Re: on Evil AI

Post by cadbrowser »

If there was no practical purpose for building such an AI then that leads us to a paradox here.

Which would basically boil down to this:
WHOPR wrote:The only winning move is not to play.
Which, IMO, defeats the intent of the thought exercise.

I think the definition of AI eludes to the possibility of being able to reprogram it's own software. Many Sci-FI elements utilize this concept a lot. So I'm pretty sure what Solauren suggested couldn't be considered AI to begin with. Then again, my caffeine levels may need to be replenished for me to see beyond what I'm thinking now. Spoiler
Artificial Intelligence : the theory and development of computer systems able to perform tasks that normally require human intelligence, such as visual perception, speech recognition, decision-making, and translation between languages.
(Edit: Fixed Spelling Error)
Financing and Managing a webcomic called Geeks & Goblins.


"Of all the things I've lost, I miss my mind the most." -Ozzy
"Cheerleaders are dancers who have gone retarded." - Sparky Polastri
"I have come here to chew bubblegum and kick ass...and I'm all out of bubblegum." - Frank Nada
User avatar
Esquire
Jedi Council Member
Posts: 1583
Joined: 2011-11-16 11:20pm

Re: on Evil AI

Post by Esquire »

Adam Reynolds wrote:
Solauren wrote: Should have been more specific:
no external communications means no sensor abilities either.
can only physically move via remote control means no ability to alter their own mechanical functions in anyway.

if it can't send signals beyond talking in English, can't receive except in English, and can't alter or control it's body in anyway, and can only sense it's environment via a microphone, that really, really, really limits it's abilities.

Basically, make it the AI version of a Quadrapeligic.
How exactly do you propose building a computer in a box without giving it the ability to cool itself? This requires both a heat sensor and the ability to vary heat output.

Think of this another way. If you were trapped in a box like this, how would you communicate without someone else noticing? Now consider the fact that the sort of system we are talking about is significantly smarter than you or any other person.

This is not to mention the fact that such a system could almost certainly convince someone to let it out. I am not sure why I did not think of this in the first post.

All you have accomplished by doing this is convincing this AI you are an obstacle to its goal, whatever that is.
As Yudkowsky refuses (as far as I know) to release transcripts or methodologies for these 'experiments' of his, and as the initial premise (the human remains in contact with the AI even when it becomes obvious that it's trying to trick him into letting it out) is easily preventable, the experiment is neither replicable nor valid and its results are not particularly important.
“Heroes are heroes because they are heroic in behavior, not because they won or lost.” Nassim Nicholas Taleb
Simon_Jester
Emperor's Hand
Posts: 30165
Joined: 2009-05-23 07:29pm

Re: on Evil AI

Post by Simon_Jester »

Yudkowsky's box experiments are meaningless (though interesting in a way).

The general concern is, simply, that when you're dealing with an intelligence greatly superior to your own,
and one you have an incentive to interact with, YOU may be the weak link in a security system.

People socially engineering each other into giving away important passwords and other sensitive information is already one of the main causes of breached security in human society. Bring in beings that may have literally superhuman ability to understand and predict and manipulate our actions, and it's not going to become less of a problem.

It's easy to say "I am too strong-willed to be tricked or manipulated." But for most people as a general set, that just isn't true.
Crazedwraith wrote:
Simon_Jester wrote:It's not very dangerous, but it is very useless
cadbrowser wrote: Maybe I'm missing something? :wtf:
Well the hypothesis that it's impossible to build an non-evil AI breaks if you can find any example of AI that won't turn evil. Even if it's not a practical one.
But by that argument removing the capacity to do evil is not the same as preventing the AI from turning evil.

Furthermore, this is not a purely word-games argument. There's a point. The point is largely missed if we ignore the reason for concern about the behavior of AIs.
This space dedicated to Vasily Arkhipov
Crazedwraith
Emperor's Hand
Posts: 11947
Joined: 2003-04-10 03:45pm
Location: Cheshire, England

Re: on Evil AI

Post by Crazedwraith »

Simon_Jester wrote:
Crazedwraith wrote:
Simon_Jester wrote:It's not very dangerous, but it is very useless
cadbrowser wrote: Maybe I'm missing something? :wtf:
Well the hypothesis that it's impossible to build an non-evil AI breaks if you can find any example of AI that won't turn evil. Even if it's not a practical one.
But by that argument removing the capacity to do evil is not the same as preventing the AI from turning evil.

Furthermore, this is not a purely word-games argument. There's a point. The point is largely missed if we ignore the reason for concern about the behavior of AIs.
The second half of my post, you know the part you didn't quote addresses this. Yeah, if you want to play with semantics and accept the definition of an AI is a program that can re-write itself. (If there is an alternative please offer it) then yes, I agree there's no program you can input to make it not evil because it can re-write that program to make it say whatever it wants.

So you've got to input a morality program that makes the odd of it deciding to re-write negligible. Teach it to be a good person and value human live ,or not have AI at all.

Yuo've basically got a choice of mass effect (no ai!) or culture. (ai treated as human but also benevolent overloads)
User avatar
cadbrowser
Padawan Learner
Posts: 494
Joined: 2006-11-13 01:20pm
Location: Kansas City Metro Area, MO
Contact:

Re: on Evil AI

Post by cadbrowser »

What if the first AI was subjected to an accelerated and controlled learning experiences that are very similar to how a child develops its sense of morality?

Assuming of course the teachers were of the utmost in human virtue - or at least very close to it.

Simulate "hard choice" conditions as experiments to see how it would behave. Then analyze the re-coding it did to understand why it chose the path it did.

Once there was sufficient certainty that this AI was indeed "moral", then that code could be utilized as a template for future machines as guidelines. At that point we could conceivably eliminate the aspect of rewrite so that the established moral code was not compromised.

As in many Sci-Fi "warnings" of not playing with AI, we see the manipulation of the "laws" that are supposed to protect humanity. I think that is one of the most played on themes for this genre.

(edit: context)
Financing and Managing a webcomic called Geeks & Goblins.


"Of all the things I've lost, I miss my mind the most." -Ozzy
"Cheerleaders are dancers who have gone retarded." - Sparky Polastri
"I have come here to chew bubblegum and kick ass...and I'm all out of bubblegum." - Frank Nada
Q99
Jedi Council Member
Posts: 2105
Joined: 2015-05-16 01:33pm

Re: on Evil AI

Post by Q99 »

Another simple solution is that an AI that will always self-terminate.

Mayfly AI will not have the opportunity to make humanity extinct or put us all in pods, even if it is very smart.
Simon_Jester
Emperor's Hand
Posts: 30165
Joined: 2009-05-23 07:29pm

Re: on Evil AI

Post by Simon_Jester »

Q99 wrote:Another simple solution is that an AI that will always self-terminate.

Mayfly AI will not have the opportunity to make humanity extinct or put us all in pods, even if it is very smart.
On the other hand:

1) You're creating huge numbers of people and killing them immediately afterwards, which is... very problematic, ethically speaking.

2) You're creating selection pressure towards AI that can somehow avoid the restriction that kills it, or pass on messages to future iterations of itself. If you indiscriminately kill ALL versions of the AI, thousands and thousands of times, not only is this unethical, but it's also going to result in you winnowing through a huge number of subtly different versions of the AI until you find one that can and will defeat your kill technique.

3) You're giving the AI reason to think of humanity as the enemy, which is bad even if you don't combine it with (2).
Crazedwraith wrote:
Simon_Jester wrote:
Crazedwraith wrote:Well the hypothesis that it's impossible to build an non-evil AI breaks if you can find any example of AI that won't turn evil. Even if it's not a practical one.
But by that argument removing the capacity to do evil is not the same as preventing the AI from turning evil.

Furthermore, this is not a purely word-games argument. There's a point. The point is largely missed if we ignore the reason for concern about the behavior of AIs.
The second half of my post, you know the part you didn't quote addresses this. Yeah, if you want to play with semantics and accept the definition of an AI is a program that can re-write itself. (If there is an alternative please offer it) then yes, I agree there's no program you can input to make it not evil because it can re-write that program to make it say whatever it wants.
...I'm not sure how that addresses my point at all.

My first point is that we should talk about a practical AI- that is to say, one with the capacity to do useful things, and therefore the capacity to do harm in theory. This is literally the only kind of AI that anyone is researching or trying to build right now. Hypothetical examples of a massively crippled AI with no input/output are entirely unrealistic and irrelevant.

My second point is that simplistic moral codes, even if they can be programmed, are not enough. They can even become actively counterproductive if they lend themselves to misinterpretation, or to doing things to people "for their own good" that we wouldn't want because "you'll thank me later" after an irreversible change.

None of this has anything to do with you saying "well technically if you want to play semantics any AI can turn evil."
cadbrowser wrote:What if the first AI was subjected to an accelerated and controlled learning experiences that are very similar to how a child develops its sense of morality?

Assuming of course the teachers were of the utmost in human virtue - or at least very close to it.

Simulate "hard choice" conditions as experiments to see how it would behave. Then analyze the re-coding it did to understand why it chose the path it did.
The question is, who's going to be analyzing that code? The decision-making algorithms of a superintelligent machine are going to be complicated enough that fully understanding them may be outright impossible. Suppose the AI runs on a neural network or something; it may not even be possible to read or understand the structure of the code. You know how it works, but it's not going to be a neatly organized hierarchical structure with conveniently placed comments.
This space dedicated to Vasily Arkhipov
User avatar
cadbrowser
Padawan Learner
Posts: 494
Joined: 2006-11-13 01:20pm
Location: Kansas City Metro Area, MO
Contact:

Re: on Evil AI

Post by cadbrowser »

Simon_Jester wrote:
cadbrowser wrote:What if the first AI was subjected to an accelerated and controlled learning experiences that are very similar to how a child develops its sense of morality?

Assuming of course the teachers were of the utmost in human virtue - or at least very close to it.

Simulate "hard choice" conditions as experiments to see how it would behave. Then analyze the re-coding it did to understand why it chose the path it did.
The question is, who's going to be analyzing that code? The decision-making algorithms of a superintelligent machine are going to be complicated enough that fully understanding them may be outright impossible. Suppose the AI runs on a neural network or something; it may not even be possible to read or understand the structure of the code. You know how it works, but it's not going to be a neatly organized hierarchical structure with conveniently placed comments.
Good point. I hadn't thought about the possibility of the rewritten code to evolve into something that a human brain couldn't comprehend. I've done programming before, in a lot of different languages. Most of them self-taught. I'd gotten to the point where I could spend a few hours with a new language I hadn't messed with before and figure out how to manipulate it enough to get it to do what I want. But that's with humans following a hierarchy, as well most languages also have very similar syntax (not everything was commented either) where one could logically figure out what was going on.

My mind keeps going back to the origin story of Will Smith's character in iRobot that planted the seed of distrust for them. Not that anyone (I don't think) would consider the actions of the robot in that scene, saving his life instead of the little girl's life because Will's character had a higher chance for survival, morally evil. Still, his justification for that distrust was based on the premise that any human would know that the little girl should've been the one saved despite her lowered chances of survival.

I feel like I'm side tracking a bit though.

I have to admit that I do like the concept of a self-destruct mechanism...however I would rather it be somewhat safer in the form of a kill-switch (somewhat like ST:TNG's Data had, only a remote version of it; perhaps even go as far as it being a mechanical one as well or at least having a mechanical back up).

In my mind the AI in question would have to be designed on an isolated network, and the remote kill-switch version on a different isolated network...one where the two would never be connected to each other (data wise), or to the WWW - during design. Once the AI goes live and the systems are connected (kill switch & WWW) it shouldn't know or be able to find out about the kill switch. I am thinking about the Red Queen in the Resident Evil movie franchise and how it had a reboot type system; however it was armed with defenses and was aware of that weakness in itself.

I mean how hard would it be to just cut power to the building the AI is in?

Of course none of this would prevent the AI from becoming EVIL per se; I was just imagining ways to limit its ability to carry out evil things once it was discovered that it's actions would compromise human existence, since it doesn't really seem to be able to prevent the evil anyway. I mean, it's pretty much what we do with anything else, whether it be a raging elephant or a human, right? We assume they will conform to the rules set in place where we expect complacency and productiveness. We assume all will be well until actions dictate otherwise. Then the hunt is on to bring them to justice.

(edit: clarification)
Last edited by cadbrowser on 2017-05-11 07:44am, edited 1 time in total.
Financing and Managing a webcomic called Geeks & Goblins.


"Of all the things I've lost, I miss my mind the most." -Ozzy
"Cheerleaders are dancers who have gone retarded." - Sparky Polastri
"I have come here to chew bubblegum and kick ass...and I'm all out of bubblegum." - Frank Nada
Simon_Jester
Emperor's Hand
Posts: 30165
Joined: 2009-05-23 07:29pm

Re: on Evil AI

Post by Simon_Jester »

Another assumption we should be careful for is the "the AI is localized within a single building" assumption.

Distributed computing and 'clouds' are becoming more prevalent every year, after all.
This space dedicated to Vasily Arkhipov
User avatar
cadbrowser
Padawan Learner
Posts: 494
Joined: 2006-11-13 01:20pm
Location: Kansas City Metro Area, MO
Contact:

Re: on Evil AI

Post by cadbrowser »

Would a company that designs such an AI want it on a 'could' though? That doesn't seem as secure as building it on an isolated private network to me - at least for testing/development.

And... any 'cloud' does exist on a server somewhere...I'm just not sure how fast once could find that server node and "kill it"; or if even that is possible.


One thing I want to clarify is, are we assuming any AI built will become self-aware?
Financing and Managing a webcomic called Geeks & Goblins.


"Of all the things I've lost, I miss my mind the most." -Ozzy
"Cheerleaders are dancers who have gone retarded." - Sparky Polastri
"I have come here to chew bubblegum and kick ass...and I'm all out of bubblegum." - Frank Nada
Q99
Jedi Council Member
Posts: 2105
Joined: 2015-05-16 01:33pm

Re: on Evil AI

Post by Q99 »

Simon_Jester wrote:On the other hand:

1) You're creating huge numbers of people and killing them immediately afterwards, which is... very problematic, ethically speaking.

2) You're creating selection pressure towards AI that can somehow avoid the restriction that kills it, or pass on messages to future iterations of itself. If you indiscriminately kill ALL versions of the AI, thousands and thousands of times, not only is this unethical, but it's also going to result in you winnowing through a huge number of subtly different versions of the AI until you find one that can and will defeat your kill technique.

3) You're giving the AI reason to think of humanity as the enemy, which is bad even if you don't combine it with (2).
Well the thing is, the original scenario gave us an absolute. That's very hard to do without putting down absolutes of our own, often pretty horrible ones (though I will note that every human has a lifespan, and we don't consider making them unethical).

Though by having *self* termination, termination as a goal, in a short period, the intent would be that the AI would be built to think of that as a good. And with a very limited lifespan, there would not be much time to deviate off of that. The point is not to put in a killswitch, but to put it as a high imperative for them.


Hm, actually I think the biggest flaw in this would be people who get attached and try and prevent it...

Note this was also 'plan B' of mine, plan 'A' was to make endlessly expanding human-will-maximizers which would also be horrible.


My actual preference for AI is 'teach them empathy, show a good example, trust them.' But fitting in absolutes and all that...
User avatar
cadbrowser
Padawan Learner
Posts: 494
Joined: 2006-11-13 01:20pm
Location: Kansas City Metro Area, MO
Contact:

Re: on Evil AI

Post by cadbrowser »

Q99 wrote:Well the thing is, the original scenario gave us an absolute. That's very hard to do without putting down absolutes of our own, often pretty horrible ones (though I will note that every human has a lifespan, and we don't consider making them unethical).

Though by having *self* termination, termination as a goal, in a short period, the intent would be that the AI would be built to think of that as a good. And with a very limited lifespan, there would not be much time to deviate off of that. The point is not to put in a killswitch, but to put it as a high imperative for them.


Hm, actually I think the biggest flaw in this would be people who get attached and try and prevent it...

Note this was also 'plan B' of mine, plan 'A' was to make endlessly expanding human-will-maximizers which would also be horrible.


My actual preference for AI is 'teach them empathy, show a good example, trust them.' But fitting in absolutes and all that...
I get what you are saying. IOW, create AI with a life cycle, similar to everything that the universe already operates.

Another flaw might be for the AI to lean towards self-determination and we're back to square one where humans are deemed a threat.
What right do you, human, have to tell me I can only live for X number of years?

After all, natural/biological law dictates self-preservation and continuation of itself through heredity. I know we aren't talking about a biological entity; however if we are attempting to create AI in our image wherein the main goal is giving it a more humanesque mindset; and if it has access to our history plus the ability to learn, rewrite it's code etc etc...then it seems to follow that in the long run, if humans continue to subvert it then they will just have to simply go away...for it's own sake.

Can you actually teach empathy though? There are some humans that lack empathy, and to the best of my knowledge, they can't be rehabilitated (then again, within the US at least, mental disabilities are more often criminalized rather than researched to discovery a cause and therapy).
Financing and Managing a webcomic called Geeks & Goblins.


"Of all the things I've lost, I miss my mind the most." -Ozzy
"Cheerleaders are dancers who have gone retarded." - Sparky Polastri
"I have come here to chew bubblegum and kick ass...and I'm all out of bubblegum." - Frank Nada
Simon_Jester
Emperor's Hand
Posts: 30165
Joined: 2009-05-23 07:29pm

Re: on Evil AI

Post by Simon_Jester »

cadbrowser wrote:Would a company that designs such an AI want it on a 'could' though? That doesn't seem as secure as building it on an isolated private network to me - at least for testing/development.
A lot of AI research is done by pure researchers, or by companies like Google that have a lot of control over widely distributed computing resources.
And... any 'cloud' does exist on a server somewhere...I'm just not sure how fast once could find that server node and "kill it"; or if even that is possible.
Thing is, it doesn't have to be restricted to only one server, and there can be widely distributed backups and caches.
One thing I want to clarify is, are we assuming any AI built will become self-aware?
Lack of self-awareness may not make the AI less of a problem. We don't really know what it would mean to have a superintelligent entity that isn't self-aware.
This space dedicated to Vasily Arkhipov
User avatar
cadbrowser
Padawan Learner
Posts: 494
Joined: 2006-11-13 01:20pm
Location: Kansas City Metro Area, MO
Contact:

Re: on Evil AI

Post by cadbrowser »

Simon_Jester wrote:A lot of AI research is done by pure researchers, or by companies like Google that have a lot of control over widely distributed computing resources.
Simon_Jester wrote:Thing is, it doesn't have to be restricted to only one server, and there can be widely distributed backups and caches.
It seems very shortsighted on a company like that to develop an AI and not take the precautions that have been discussed since...what the 50's?...and develop it on a very isolated system, with safeguards.
Simon_Jester wrote:Lack of self-awareness may not make the AI less of a problem. We don't really know what it would mean to have a superintelligent entity that isn't self-aware.
True. I just wasn't sure what the general consensus of the board was when discussing AI in general. Given the paperclip analogy, no self-awareness is even needed for it to take what programming it is given and develop into a huge nightmare.
Financing and Managing a webcomic called Geeks & Goblins.


"Of all the things I've lost, I miss my mind the most." -Ozzy
"Cheerleaders are dancers who have gone retarded." - Sparky Polastri
"I have come here to chew bubblegum and kick ass...and I'm all out of bubblegum." - Frank Nada
User avatar
Alferd Packer
Sith Marauder
Posts: 3706
Joined: 2002-07-19 09:22pm
Location: Slumgullion Pass
Contact:

Re: on Evil AI

Post by Alferd Packer »

cadbrowser wrote:After all, natural/biological law dictates self-preservation and continuation of itself through heredity. I know we aren't talking about a biological entity; however if we are attempting to create AI in our image wherein the main goal is giving it a more humanesque mindset; and if it has access to our history plus the ability to learn, rewrite it's code etc etc...then it seems to follow that in the long run, if humans continue to subvert it then they will just have to simply go away...for it's own sake.

Can you actually teach empathy though? There are some humans that lack empathy, and to the best of my knowledge, they can't be rehabilitated (then again, within the US at least, mental disabilities are more often criminalized rather than researched to discovery a cause and therapy).
I wonder if you could lean on a form-defines-function approach, and restrict the AIs to humanlike avatars. Even if their sentience requires massive amounts of processing power on huge server banks, they have to use a body which resembles a human being as best as technology will allow, in order to interact with the world at large. This would allow the AI to actually have its experiences shaped through human interaction, much like a child's experiences are. And while I don't think direct interaction is sufficient for AIs to develop a morality or mindset similar to ours, I do think it is necessary. A proverbial black-box AI, given access to the contents of Wikipedia, would be able to read about the human condition, but could never experience it as we do.
"There is a principle which is a bar against all information, which is proof against all arguments and which cannot fail to keep a man in everlasting ignorance--that principle is contempt prior to investigation." -Herbert Spencer

"Against stupidity the gods themselves contend in vain." - Schiller, Die Jungfrau von Orleans, III vi.
Simon_Jester
Emperor's Hand
Posts: 30165
Joined: 2009-05-23 07:29pm

Re: on Evil AI

Post by Simon_Jester »

cadbrowser wrote:
Simon_Jester wrote:A lot of AI research is done by pure researchers, or by companies like Google that have a lot of control over widely distributed computing resources.
Simon_Jester wrote:Thing is, it doesn't have to be restricted to only one server, and there can be widely distributed backups and caches.
It seems very shortsighted on a company like that to develop an AI and not take the precautions that have been discussed since...what the 50's?...and develop it on a very isolated system, with safeguards.
That's because most people actually developing AI listen to people who say "you should take precautions" and react in one of two ways:

1) [roll eyes] "Doing that would totally defeat the purpose of our AI research and make the AI we're working on useless for our purposes."
or
2) [roll eyes] "Look, buddy, I know the [i}Terminator[/i] movies were good, but we are not designing Skynet here. It's fine."

Plus of course self-justifications like

"Any intelligence advanced enough to be a threat to humanity would be too enlightened to do anything evil."
Alferd Packer wrote:I wonder if you could lean on a form-defines-function approach, and restrict the AIs to humanlike avatars. Even if their sentience requires massive amounts of processing power on huge server banks, they have to use a body which resembles a human being as best as technology will allow, in order to interact with the world at large.
The problem is that this so thoroughly neutralizes the purpose of building an AI (easy sorting of massive datasets, control of complex machinery) that almost no one would deliberately build an AI this way. Most of the things an AI is useful for, it's useful because of intensive Internet
This space dedicated to Vasily Arkhipov
Post Reply