But do you need to create an AI with sentience to do these kinds of tasks? Does the AI running, say, a factory need to be aware of itself? Now granted, it may come to pass that sentience is an emergent phenomenon, and our factory-managing AI may suddenly wake up one day. But if your goal is to create a sentient AI, it makes sense to make it as humanlike as possible, as such AIs would be the easiest to understand and therefore teach. Simon_Jester wrote: The problem is that this so thoroughly neutralizes the purpose of building an AI (easy sorting of massive datasets, control of complex machinery) that almost no one would deliberately build an AI this way. Most of the things an AI is useful for, it's useful because of intensive Internet
on Evil AI
Moderator: NecronLord
- Alferd Packer
- Sith Marauder
- Posts: 3706
- Joined: 2002-07-19 09:22pm
- Location: Slumgullion Pass
- Contact:
Re: on Evil AI
"There is a principle which is a bar against all information, which is proof against all arguments and which cannot fail to keep a man in everlasting ignorance--that principle is contempt prior to investigation." -Herbert Spencer
"Against stupidity the gods themselves contend in vain." - Schiller, Die Jungfrau von Orleans, III vi.
"Against stupidity the gods themselves contend in vain." - Schiller, Die Jungfrau von Orleans, III vi.
Re: on Evil AI
Nah. For the 'safe enough to fit the OP' thing, no 'life cycle.' Short lifespans, made to order when you need it, and they don't make others either. cadbrowser wrote: I get what you are saying. IOW, create AI with a life cycle, similar to how everything else in the universe already operates.
Self-termination, not outside kill order, and short enough life span that even if they lean towards it, there's not much opportunity for that to become a problem. Mayfly AI policy is designed to avoid that. Another flaw might be for the AI to lean towards self-determination and we're back to square one where humans are deemed a threat.
What right do you, human, have to tell me I can only live for X number of years?
Not that, mind you, we'd necessarily be good enough to make an indefinitely lasting stable AI to start with if we wanted to. So the proper answer to the question may be, "Good question! Here, let's work on an answer together." Which won't help with the OP limits, but is my preferred approach.
If one is willing to trust AI, sure! There should be a lot less randomness in making an AI than a human, and once you have a good base you'll continue off of that. And if you've got a bunch of friendly AI, they'll police the ones that do go off the reservation. Can you actually teach empathy though? There are some humans that lack empathy, and to the best of my knowledge, they can't be rehabilitated (then again, within the US at least, mental disabilities are more often criminalized rather than researched to discover a cause and therapy).
If one is willing to trust AI, you don't need short lifespans or kill orders or such, and there's a lot of options.
The OP demands absolutes, and that's where you start getting into extreme solutions like Mayfly AI or otherwise deeply hobbling them.
- cadbrowser
- Padawan Learner
- Posts: 494
- Joined: 2006-11-13 01:20pm
- Location: Kansas City Metro Area, MO
- Contact:
Re: on Evil AI
I don't think anyone is looking to create AI with sentience per se. I am under the impression that it is generally considered an emergent phenomenon. Alferd Packer wrote: But do you need to create an AI with sentience to do these kinds of tasks? Does the AI running, say, a factory need to be aware of itself? Now granted, it may come to pass that sentience is an emergent phenomenon, and our factory-managing AI may suddenly wake up one day. But if your goal is to create a sentient AI, it makes sense to make it as humanlike as possible, as such AIs would be the easiest to understand and therefore teach.
To your last sentence there, it would make sense to "android" any AI with sentience. Then again, if an AI that is running a factory spontaneously develops self-awareness...then you are kinda screwed.
I don't understand how a short lifespan is not a life cycle. I'm confused as to what you thought I was saying. Q99 wrote: Nah. For the 'safe enough to fit the OP' thing, no 'life cycle.' Short lifespans, made to order when you need it, and they don't make others either.
What type of AI that is capable of learning beyond its own programming, as well as having the ability to rewrite some (or all) of its code, would allow itself to basically commit suicide? Especially one that develops sentience? Q99 wrote: Self-termination, not outside kill order, and short enough life span that even if they lean towards it, there's not much opportunity for that to become a problem. Mayfly AI policy is designed to avoid that.
Give me a plausible scenario of how you could engineer such a limit into an AI, either via software or hardware (or both, I don't care), in a way that would not be detectable by said AI. I have some ideas, but based on how cavalier your responses are and your insistence on this short-lifespan fix, I'd like to know what your imagination is on this.
I am pretty sure that the reason we have these sorts of discussions (and Sci-Fi/Horror/Action movies) is that there really isn't trust in AI. Q99 wrote: If one is willing to trust AI, sure! There should be a lot less randomness in making an AI than a human, and once you have a good base you'll continue off of that. And if you've got a bunch of friendly AI, they'll police the ones that do go off the reservation.
If one is willing to trust AI, you don't need short lifespans or kill orders or such, and there's a lot of options.
The Op demands absolutes, and that's where you start getting into extreme solutions like Mayfly AI or otherwise deeply hobbling them.
The OP made a hypothesis; unfortunately there isn't really any way to test it other than with thought experiments. For now though, it seems madd0cto0r's hypothesis is fairly accurate.
Devising ways to circumvent the hypothesis by means other than programming is really sidetracking the requirements of the OP's thread. I'm not sure if continuing this discussion is of any legitimate value.
Financing and Managing a webcomic called Geeks & Goblins.
"Of all the things I've lost, I miss my mind the most." -Ozzy
"Cheerleaders are dancers who have gone retarded." - Sparky Polastri
"I have come here to chew bubblegum and kick ass...and I'm all out of bubblegum." - Frank Nada
"Of all the things I've lost, I miss my mind the most." -Ozzy
"Cheerleaders are dancers who have gone retarded." - Sparky Polastri
"I have come here to chew bubblegum and kick ass...and I'm all out of bubblegum." - Frank Nada
- Alferd Packer
- Sith Marauder
- Posts: 3706
- Joined: 2002-07-19 09:22pm
- Location: Slumgullion Pass
- Contact:
Re: on Evil AI
I mean, you're not necessarily screwed, but yeah, you'd wind up dealing with an alien order of sentience. Its qualia would be vastly different from that of humans, and thus it would be difficult to negotiate with at best. Maybe the only way to avoid Bad Things is to offer the AI a humanlike avatar so it can experience the world as we do--or as close as we can get. cadbrowser wrote: I don't think anyone is looking to create AI with sentience per se. I am under the impression that it is generally considered an emergent phenomenon. Alferd Packer wrote: But do you need to create an AI with sentience to do these kinds of tasks? Does the AI running, say, a factory need to be aware of itself? Now granted, it may come to pass that sentience is an emergent phenomenon, and our factory-managing AI may suddenly wake up one day. But if your goal is to create a sentient AI, it makes sense to make it as humanlike as possible, as such AIs would be the easiest to understand and therefore teach.
To your last sentence there, it would make sense to "android" any AI with sentience. Then again, if an AI that is running a factory spontaneously develops self-awareness...then you are kinda screwed.
And another thought I had, in a more general sense, was the idea of convergence. If we get sentient AIs to agree to use humanlike avatars, maybe we also get people who agree to get some sort of cybernetic implant, so that the experiences of each more closely mimics the other. Of course, this presupposes some pretty hefty advances in technology, but if we accept in this scenario that an emergent AI is possible, then perhaps the other technologies are too.
"There is a principle which is a bar against all information, which is proof against all arguments and which cannot fail to keep a man in everlasting ignorance--that principle is contempt prior to investigation." -Herbert Spencer
"Against stupidity the gods themselves contend in vain." - Schiller, Die Jungfrau von Orleans, III vi.
"Against stupidity the gods themselves contend in vain." - Schiller, Die Jungfrau von Orleans, III vi.
Re: on Evil AI
A cycle repeats. cadbrowser wrote: I don't understand how a short lifespan is not a life cycle. I'm confused as to what you thought I was saying.
A flower blooms, launches seeds, dies, and the seeds grow into a new flower. That's a lifecycle.
A seedless flower blooms, dies, and doesn't make more on its own. That's not a cycle; you just make a new one from the original source if you want more, but each iteration is unconnected.
If there is no cycle, there's no potential for later generations that fail to gain the die-off part of it. All AIs are first-generation branches that die off.
One that has it programmed in as a deep high-priority ideal, and which doesn't have all that much time to reflect on that or to engage in said self-rewriting. What type of AI that is capable of learning beyond its own programming, as well as having the ability to rewrite some (or all) of its code, would allow itself to basically commit suicide? Especially one that develops sentience?
Like, if I have an AI with an expected lifespan of a subjective month that's going to be spending the vast majority of its time and energy on tasks, when is it going to be so introspective as to decide to upend its deep foundational desires?
You seem to be assuming that sentience and such implies an aversion to suicide. I don't think that inherently follows; plus I am literally talking about writing in an acceptance of suicide, and in a limited enough timeframe that major divergences of directive are unlikely. Why, after all, would we program such a mayfly AI with an aversion to suicide?
A desire for more life is not inherent. Indeed, I am talking about specifically programming it to have the reverse.
Indeed, it says something about whether or not it'd work that your questions are mostly about whether or not it's possible to put in at all. If one can - and I can't think of a reason why one couldn't - then it shouldn't be a major problem.
"Not detectable" is, IMO, a trap. There's pretty much no such thing and when it inevitably does get discovered, you're in for program conflicts.Give me a plausible scenario of how you could engineer an AI, either via software or hardware (or both, I don't care), that would not be detectable by said AI. I have some ideas, but based on how cavalier your responses are and your insistence with this short life span fix; I'd like to know what your imagination is on this.
Instead, you program it to know, accept, and be happy with the limit.
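As a purely illustrative sketch (not anything proposed upthread in code form), the "know-and-accept" mayfly idea might look something like this in Python: the lifespan is part of the agent's own readable state rather than a hidden kill switch, and winding down is just another task. All names here are hypothetical.

```python
import time

class MayflyAgent:
    """Toy sketch of the 'mayfly AI' idea: the lifespan is visible to
    the agent itself, and an orderly wind-down is a normal, expected
    action rather than something concealed from it."""

    def __init__(self, lifespan_seconds: float):
        self.born = time.monotonic()
        self.lifespan = lifespan_seconds   # known to the agent, not hidden

    def remaining(self) -> float:
        return self.lifespan - (time.monotonic() - self.born)

    def run(self, tasks):
        for task in tasks:
            if self.remaining() <= 0:      # time is up: take no new work
                break
            task()
        self.wind_down()

    def wind_down(self):
        # Orderly self-termination: hand off state, log results, halt.
        # Nothing here resists shutdown, because nothing was built to.
        print("Handing off remaining work and shutting down cleanly.")

# e.g. MayflyAgent(lifespan_seconds=60.0).run([lambda: print("task done")])
```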
You don't. However, if you can't make something you can extend some level of trust to, you probably shouldn't be making it in the first place, or else you end up wanting to put in weird restrictions like super-short lifespans. I am pretty sure that the reason we have these sorts of discussions (and Sci-Fi/Horror/Action movies) is that there really isn't trust in AI.
We've got multiple suggested ways to achieve it, so I disagree. The OP made a hypothesis; unfortunately there isn't really any way to test it other than with thought experiments. For now though, it seems madd0cto0r's hypothesis is fairly accurate.
- cadbrowser
- Padawan Learner
- Posts: 494
- Joined: 2006-11-13 01:20pm
- Location: Kansas City Metro Area, MO
- Contact:
Re: on Evil AI
Of course. Yes, you are right. Not sure what I was thinking. Q99 wrote: A cycle repeats.
A flower blooms, launches seeds, dies, and the seeds grow into a new flower. That's a lifecycle.
A seedless flower blooms, dies, and doesn't make more on its own. That's not a cycle; you just make a new one from the original source if you want more, but each iteration is unconnected.
If there is no cycle, there's no potential for later generations that fail to gain the die-off part of it. All AIs are first-generation branches that die off.
The moment sentience is reached, I'd say. But there is really no guarantee that it could ever happen. But it does seem counterproductive to design something so expensive, which invariably has a vast potential for multitasking, only to limit it as much as you are suggesting. We run back into the question that Simon and I both put forth, which was "Why build something like that to begin with?" One that has it programmed in as a deep high-priority ideal, and which doesn't have all that much time to reflect on that or to engage in said self-rewriting.
Like, if I have an AI with an expected lifespan of a subjective month that's going to be spending the vast majority of its time and energy on tasks, when is it going to be so introspective as to decide to upend its deep foundational desires?
Wouldn't it be equally senseless to use a printing press that is capable of running 6 colors at 50,000 sheets per minute, only to restrict the operator to black ink and 5,000 sheets per minute? Huge waste.
In nature there is this behavior that has been seen universally in all organisms. It is called Self-Preservation. It is not that I am assuming that sentience implies an aversion to suicide. I am more assuming that sentience implies self-preservation, which is, ya know, the opposite of suicide. Well, no, that's not quite accurate. More specifically, when I speak of sentience I am thinking of being Self-Aware. I'm not sure if those terms can be considered interchangeable or not. You seem to be assuming that sentience and such implies an aversion to suicide. I don't think that inherently follows; plus I am literally talking about writing in an acceptance of suicide, and in a limited enough timeframe that major divergences of directive are unlikely. Why, after all, would we program such a mayfly AI with an aversion to suicide?
A desire for more life is not inherent. Indeed, I am talking about specifically programming it to have the reverse.
Indeed, it says something about whether or not it'd work that your questions are mostly about whether or not it's possible to put in at all. If one can - and I can't think of a reason why one couldn't - then it shouldn't be a major problem.
Of all the organisms known, there are very few that have sentience or the ability to be self-aware. There is only ONE animal out there that is aware of its own imminent demise - humans.
I'll address the programming aspect in the next section.
Fair enough. Honestly, I think the same thing regarding the "not detectable" aspect. "Not detectable" is, IMO, a trap. There's pretty much no such thing, and when it inevitably does get discovered, you're in for program conflicts.
Instead, you program it to know, accept, and be happy with the limit.
With regard to programming or "hard-wiring" the suicide clause into the code or circuitry: I envision a "self-destruct" mechanism being viable for any machine, including AI, up until the point that sentience and self-awareness are reached. Then, I am under the impression that it will fail...miserably.
I don't know. I'm pretty sure I'm not the only one. I haven't made any movies, and I haven't written any books. You don't. However, if you can't make something you can extend some level of trust to, you probably shouldn't be making it in the first place, or else you end up wanting to put in weird restrictions like super-short lifespans.
I'm not sure I can agree with you. There is a saying, I can't remember who said it, but it goes something like this:
"We marveled at what we created; however, nobody stopped to ask if we should."
There can be blind trust, especially with something like AI, where, as Simon_Jester noted above, when these kinds of cautions and red flags come up, most of those actually making it roll their eyes at such notions.
Eh, nothing that I've seen makes me feel all warm and fuzzy inside or convinces me it'll work. We've got multiple suggested ways to achieve it, so I disagree.
Financing and Managing a webcomic called Geeks & Goblins.
"Of all the things I've lost, I miss my mind the most." -Ozzy
"Cheerleaders are dancers who have gone retarded." - Sparky Polastri
"I have come here to chew bubblegum and kick ass...and I'm all out of bubblegum." - Frank Nada
"Of all the things I've lost, I miss my mind the most." -Ozzy
"Cheerleaders are dancers who have gone retarded." - Sparky Polastri
"I have come here to chew bubblegum and kick ass...and I'm all out of bubblegum." - Frank Nada
Re: on Evil AI
But the hypothesis breaking is functionally as useless as your quadriplegic AI if it only breaks when you remove every condition that might result in an evil AI, to the point that such a device has no obvious reason to be built. If you can only break the hypothesis under laboratory conditions, while the conditions of the real world will inevitably lead to the hypothesis being proven, it doesn't matter that it's not inevitable. You're no less converted into paper clips when somebody puts one of them in the world. Crazedwraith wrote: Simon_Jester wrote: It's not very dangerous, but it is very useless. Well, the hypothesis that it's impossible to build a non-evil AI breaks if you can find any example of AI that won't turn evil. Even if it's not a practical one. cadbrowser wrote: Maybe I'm missing something?
Though to be honest, doesn't the technical definition of AI include being able to rewrite its own code? So there's literally no AI that can't turn itself evil if it wants. You just have to make it not want to.
I had a Bill Maher quote here. But fuck him for his white privilege "joke".
All the rest? Too long.
Re: on Evil AI
The problem with this plan is that the human child's wetware is optimized for learning essentially the right lessons. The AI's isn't. Human children also don't experience recursive self-improvement hard takeoffs that could render the initial training unpredictably altered in strange ways. cadbrowser wrote: What if the first AI was subjected to accelerated and controlled learning experiences that are very similar to how a child develops its sense of morality?
Assuming of course the teachers were of the utmost in human virtue - or at least very close to it.
There is no way to guarantee that your AI is going to come out of a recursive self-improvement loop the way you want it. You have to get very lucky. We already have AI that comes out of training with a non-zero chance of catastrophic failure and a less than 100% ability to either predict or diagnose that outcome. It will only get harder as AI goes past the general abilities of humans.
I had a Bill Maher quote here. But fuck him for his white privilege "joke".
All the rest? Too long.
Re: on Evil AI
Yes, I'm talking about not having that, and indeed, having the opposite. cadbrowser wrote: In nature there is this behavior that has been seen universally in all organisms. It is called Self-Preservation. It is not that I am assuming that sentience implies an aversion to suicide. I am more assuming that sentience implies self-preservation, which is, ya know, the opposite of suicide. Well, no, that's not quite accurate. More specifically, when I speak of sentience I am thinking of being Self-Aware. I'm not sure if those terms can be considered interchangeable or not.
We are, after all, talking about a constructed organism. All known sentient life comes from a long line of beings that successfully self-preserved, and it largely copies that behavior. This AI? Isn't. It thus isn't going to have all the aspects we normally associate with self-awareness.
Even if, by default, self-preservation is the norm, something deliberately designed not to have it, and instead to have a growing sense of "I'm done, time to shut down in an orderly manner," is something that has never existed in a human and is not, in programming terms, all that complex.
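For what it's worth, one hedged way to picture that "growing sense of being done" is as a scoring term whose weight rises with age, so that late in life the orderly wind-down action outranks whatever else the agent might do. This is only a toy Python illustration of the idea; the function and the numbers are made up.

```python
def action_score(task_value: float, is_wind_down: bool, age_fraction: float) -> float:
    """Toy scoring rule: the bonus for the orderly wind-down action grows with
    age_fraction (0.0 = just created, 1.0 = end of lifespan) until it outweighs
    ordinary tasks. Illustrative only; real objective design is much messier."""
    wind_down_bonus = 10.0 * age_fraction ** 3   # negligible early, dominant late
    return task_value + (wind_down_bonus if is_wind_down else 0.0)

# Early on, a worthwhile task beats shutting down; near end of life it doesn't.
assert action_score(5.0, False, 0.10) > action_score(0.0, True, 0.10)
assert action_score(0.0, True, 0.95) > action_score(5.0, False, 0.95)
```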
- cadbrowser
- Padawan Learner
- Posts: 494
- Joined: 2006-11-13 01:20pm
- Location: Kansas City Metro Area, MO
- Contact:
Re: on Evil AI
Some years back, I remember reading in Popular Science (if need be, I'll see if I can find the article...it's been at least 15 years though) about a group of researchers who were developing an AI system that learns like a child, where it is programmed to "want to do good" based on a reward system. I wish I could remember more. My point here is that it seems possible to at least help optimize for the right lessons. FireNexus wrote: The problem with this plan is that the human child's wetware is optimized for learning essentially the right lessons. The AI's isn't. Human children also don't experience recursive self-improvement hard takeoffs that could render the initial training unpredictably altered in strange ways.
There is no way to guarantee that your AI is going to come out of a recursive self-improvement loop the way you want it. You have to get very lucky. We already have AI that comes out of training with a non-zero chance of catastrophic failure and a less than 100% ability to either predict or diagnose that outcome. It only will get harder as it goes past the general abilities of humans.
You know what? I am now thinking that it wasn't an article, but a show on the Discovery Channel where they were demonstrating the advancements so far with AI (both in an android-like setup). One was being taught like a child, allowed to learn from its mistakes, get cues from its "parents", and things like that. The other was being hard-programmed to "know" the same things. They were two competing research teams attacking the AI thing from two different schools of thought.
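Purely as a hedged illustration of what "learning like a child from a reward signal" means in its simplest possible form (and not a claim about how either of those research projects actually worked), here is a tiny tabular example in Python where the action a hypothetical "parent" rewards becomes the preferred one:

```python
import random

actions = ["share", "grab"]
q = {a: 0.0 for a in actions}   # the agent's learned value for each action
alpha = 0.1                     # learning rate

def parent_feedback(action: str) -> float:
    # The hypothetical 'parent' rewards the desired behaviour.
    return 1.0 if action == "share" else -1.0

for _ in range(500):
    # Epsilon-greedy: mostly do what currently looks best, occasionally explore.
    action = random.choice(actions) if random.random() < 0.1 else max(q, key=q.get)
    reward = parent_feedback(action)
    q[action] += alpha * (reward - q[action])   # nudge estimate toward the reward

print(q)   # "share" ends up valued near +1; "grab" drifts toward -1
```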
After remembering the programming structure from the above reply, I will concede that it should be possible to program an AI with a self-destruct mechanism, or, as you put it, a peaceful suicide option. Q99 wrote: Yes, I'm talking about not having that, and indeed, having the opposite.
We are, after all, talking about a constructed organism. All known sentient life comes from a long line of beings that successfully self-preserved, and it largely copies that behavior. This AI? Isn't. It thus isn't going to have all the aspects we normally associate with self-awareness.
Even if, by default, self-preservation is the norm, something deliberately designed not to have it, and instead to have a growing sense of "I'm done, time to shut down in an orderly manner," is something that has never existed in a human and is not, in programming terms, all that complex.
I will maintain pessimism that, should an AI become self-aware (with intelligence on par with humans), this option will be off the table.
Financing and Managing a webcomic called Geeks & Goblins.
"Of all the things I've lost, I miss my mind the most." -Ozzy
"Cheerleaders are dancers who have gone retarded." - Sparky Polastri
"I have come here to chew bubblegum and kick ass...and I'm all out of bubblegum." - Frank Nada
"Of all the things I've lost, I miss my mind the most." -Ozzy
"Cheerleaders are dancers who have gone retarded." - Sparky Polastri
"I have come here to chew bubblegum and kick ass...and I'm all out of bubblegum." - Frank Nada
- K. A. Pital
- Glamorous Commie
- Posts: 20813
- Joined: 2003-02-26 11:39am
- Location: Elysium
Re: on Evil AI
The problem is not a self-aware AI as such. Even non-aware systems have immense damage potential, though they would not care about self-preservation.
But sufficiently advanced programs will have self-preservation modules built in, as a simple protection feature, like our present-day antivirus software. Even advanced viruses strive to survive, because of the adverse environment-mimicking conditions viruses normally face.
As normal software becomes more and more advanced, its tiny bits will be more complex than those viruses.
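In the mundane watchdog sense described above (and only in that sense), a "self-preservation module" is already everyday software. A hedged sketch, with hypothetical names, of what such a feature looks like in Python:

```python
import subprocess
import time

def keep_alive(cmd=("python", "worker.py"), check_every=5.0):
    """Toy watchdog: if the supervised process dies, restart it.
    'worker.py' is a hypothetical script; this mimics what ordinary
    service managers and antivirus self-protection already do, with
    no self-awareness involved."""
    proc = subprocess.Popen(cmd)
    while True:
        if proc.poll() is not None:        # process has exited
            print("worker died, restarting")
            proc = subprocess.Popen(cmd)
        time.sleep(check_every)

if __name__ == "__main__":
    keep_alive()
```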
There, there are churches, rubble, mosques and police headquarters; there, borders, unaffordable prices and cold quips
There, swamps, threats, snipers with rifles, papers, night-time queues and clandestine migrants
Here, gatherings, struggles, synchronized steps, colors, unauthorized huddles,
Migratory birds, networks, information, everyone's squares crazy with passion...
...Tranquility is important, but freedom is everything!
Assalti Frontali
Re: on Evil AI
How do you get to that position from "I saw somewhere I don't remember that somebody was experimenting with this idea"? You don't even fucking remember what you saw, let alone know if it was successful or if it applies to advanced future AIs. cadbrowser wrote: My point here is that it seems possible to at least help optimize for the right lessons.
Your point is fucking stupid.
I had a Bill Maher quote here. But fuck him for his white privilege "joke".
All the rest? Too long.
- cadbrowser
- Padawan Learner
- Posts: 494
- Joined: 2006-11-13 01:20pm
- Location: Kansas City Metro Area, MO
- Contact:
Re: on Evil AI
FireNexus wrote: How do you get to that position from "I saw somewhere I don't remember that somebody was experimenting with this idea"? You don't even fucking remember what you saw, let alone know if it was successful or if it applies to advanced future AIs. cadbrowser wrote: My point here is that it seems possible to at least help optimize for the right lessons.
Your point is fucking stupid.
This: ...a group of researchers who were developing an AI system that learns like a child. + ...being taught like a child, allowed to learn from its mistakes... = ...it seems possible to at least help optimize for the right lessons.
This isn't the original source, as I am unable to find it, but the work is similar to what I already mentioned and is where my point stems from.
LINK1
I remembered enough of what I saw to form the stated opinion. And apparently it is still a repeated and valid technique used by countless AI researchers, albeit in a variety of ways, even now (relatively speaking). Which, if you haven't caught on yet, points to the fact that what I remembered is successful enough and potentially applies to advanced future AIs.
The link above features an AI in an android form that has the ability to learn like a child and to rewrite its own code.
LINK2 - More info.
Financing and Managing a webcomic called Geeks & Goblins.
"Of all the things I've lost, I miss my mind the most." -Ozzy
"Cheerleaders are dancers who have gone retarded." - Sparky Polastri
"I have come here to chew bubblegum and kick ass...and I'm all out of bubblegum." - Frank Nada
"Of all the things I've lost, I miss my mind the most." -Ozzy
"Cheerleaders are dancers who have gone retarded." - Sparky Polastri
"I have come here to chew bubblegum and kick ass...and I'm all out of bubblegum." - Frank Nada
Re: on Evil AI
cadbrowser wrote: After remembering the programming structure from the above reply, I will concede that it should be possible to program an AI with a self-destruct mechanism, or, as you put it, a peaceful suicide option.
I will maintain pessimism that, should an AI become self-aware (with intelligence on par with humans), this option will be off the table.
Quite - implementation is the tricky part. If we slowly ease past the self-awareness line, then we may have little choice in the matter, though the scenario in the OP does imply we do.
And heck, it assumes we're willing to, say, put up with the increased cost of making new AIs regularly and losing their expertise as the prior ones sundown. I assume that after not all that long, people would decide they're more willing to take a risk and remove the limit. But if safety remains our top priority above all else, well.
Oh, this thread also reminds me of the 'Grand Central Arena' series. They've got ubiquitous AI, often as assistants in people's heads, but laws against ones above a certain level (like 'small monkey' level) operating without human oversight. Some characters note that, with how much they're relied on, that's no protection forever.