Personalizing Machines
Moderator: Alyrium Denryle
Personalizing Machines
At the hospital I go to, they have a robot which delivers the lab collections to where they are actually tested.
They have put eyes on the robot, along with an apron, and given it a name. What are people's thoughts on this kind of personalization of machines?
"He that would make his own liberty secure must guard even his enemy from oppression; for if he violates this duty, he establishes a precedent that will reach to himself."
Thomas Paine
"For the living know that they shall die: but the dead know not any thing, neither have they any more a reward; for the memory of them is forgotten."
Ecclesiastes 9:5 (KJV)
Re: Personalizing Machines
In two words: it's harmless.
The long version: people have a tendency to personalize (or personify... or anthropomorphize... or whatever you call it) everything around them. Stories of talking animals are known throughout history. I think it's part of human nature to assign human attributes to non-human things.
Of course, when machines actually become as smart as people (if that ever happens) we had BETTER treat them like people rather than machines, otherwise we're in for trouble.
- Singular Intellect
- Jedi Council Member
- Posts: 2392
- Joined: 2006-09-19 03:12pm
- Location: Calgary, Alberta, Canada
Re: Personalizing Machines
Modax wrote: Of course, when machines actually become as smart as people (if that ever happens) we had BETTER treat them like people rather than machines, otherwise we're in for trouble.
I'd be more worried about the machines that become orders of magnitude smarter than people.
"Now let us be clear, my friends. The fruits of our science that you receive and the many millions of benefits that justify them, are a gift. Be grateful. Or be silent." -Modified Quote
Re: Personalizing Machines
Why? I don't expect that a superhuman AI will go Skynet on us. Feelings of hate and all violent behaviours are evolutionary adaptations. Even basic self-preservation is an instinct which has an evolutionary purpose. Arbitrarily projecting these traits onto AIs is not realistic. AIs could be designed to feel emotions, but it's much more likely that they will be programmed to feel universal empathy. I don't think that emotions are emergent phenomena that any intelligent mind will automatically possess. Star Trek actually got something right when they decided Data would not come with built-in emotions, IMHO.
-
- Village Idiot
- Posts: 4046
- Joined: 2005-06-15 12:21am
- Location: The Abyss
Re: Personalizing Machines
Modax wrote: Feelings of hate and all violent behaviours are evolutionary adaptations. Even basic self-preservation is an instinct which has an evolutionary purpose.
And they arose in reaction to the problems our distant ancestors faced; they could arise in machines the same way (idea not original to me). For example: if an AI has a purpose, a goal, then it will generally need to survive to achieve it; therefore, as a learning machine, it will likely develop a "survival instinct", even if it didn't start with one built in. And just from having those two qualities, purpose(s) and a desire to survive, it could develop emotions or emotion-imitating behavior like fear, anger, frustration and so forth. Emotions like that have less to do with biology than with being a thinking, purposed and destructible being, as I see it.
They might or might not actually "feel" emotions, but they quite likely will act as if they have some emotions, which for practical purposes is the same thing.
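To make that argument concrete, here is a toy sketch (everything in it is invented for illustration; it is not a model of any real AI design): a brute-force planner whose only goal is to finish a task, with no built-in self-preservation, still includes the self-protective action in every successful plan it finds, simply because being shut down blocks the task.

```python
# Toy illustration only: the agent is scored on nothing except task completion,
# yet self-protection shows up instrumentally in every plan that works.
from itertools import product

ACTIONS = ["work", "install_backup_power", "idle"]
TASK_LENGTH = 3          # units of work needed to finish the task
POWER_CUT_AT_STEP = 2    # an external power cut hits at this timestep

def finishes_task(plan):
    """Return True if the plan finishes the task; nothing else is rewarded."""
    progress, alive, backed_up = 0, True, False
    for step, action in enumerate(plan):
        if step == POWER_CUT_AT_STEP and not backed_up:
            alive = False                 # shut down: no further progress possible
        if not alive:
            break
        if action == "work":
            progress += 1
        elif action == "install_backup_power":
            backed_up = True              # purely instrumental: it protects uptime
    return progress >= TASK_LENGTH

good_plans = [p for p in product(ACTIONS, repeat=5) if finishes_task(p)]
print(len(good_plans), "successful plans found")
print("every one of them self-protects:",
      all("install_backup_power" in p for p in good_plans))
```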
"There are two novels that can change a bookish fourteen-year old's life: The Lord of the Rings and Atlas Shrugged. One is a childish fantasy that often engenders a lifelong obsession with its unbelievable heroes, leading to an emotionally stunted, socially crippled adulthood, unable to deal with the real world. The other, of course, involves orcs." - John Rogers
- Starglider
- Miles Dyson
- Posts: 8709
- Joined: 2007-04-05 09:44pm
- Location: Isle of Dogs
- Contact:
Re: Personalizing Machines
Modax wrote: Why? I don't expect that a superhuman AI will go Skynet on us. Feelings of hate and all violent behaviours are evolutionary adaptations.
Correct. However, there are three serious problems:
1) Getting rid of humanity (in the long run) is a subgoal of the vast majority of open-ended goals, because humanity can and probably would prevent AIs from doing anything radical (such as turning all available mass into more computers to help solve some abstract problem).
2) Lots of people are deliberately trying to program emotions into AIs. It's a very easy thing to screw up, and at this rate someone will.
3) Quite a few designers are trying to use simulated evolution to make AIs, which tends to ingrain a survival-of-the-fittest self-preservation/replication/competition dynamic into the whole design.
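A toy sketch of the dynamic in point 3 (illustrative only, not any real AI-evolution framework): the designer only selects on task score, but the score depends on how much of a shared, finite compute pool each agent grabs, so selection quietly amplifies a competitive "grab" trait that nobody asked for.

```python
# Minimal, invented example: fitness is "task score", task score is proportional
# to compute obtained from a shared pool, and the population starts out almost
# entirely non-competitive.  Survival-of-the-fittest selection drives the grab
# gene toward 1.0 anyway, even though "compete" appears nowhere in the spec.
import random

POP, GENS = 50, 80

def run():
    # genome: one number in [0, 1] = how aggressively the agent grabs shared compute
    pop = [random.random() * 0.1 for _ in range(POP)]
    for _ in range(GENS):
        total_grab = sum(pop) + 1e-9
        scores = [g / total_grab for g in pop]          # share of the fixed pool
        # selection: top half survives and reproduces with small mutations
        survivors = [g for _, g in sorted(zip(scores, pop), reverse=True)[: POP // 2]]
        pop = [min(1.0, max(0.0, g + random.gauss(0, 0.02)))
               for g in survivors for _ in (0, 1)]
    return sum(pop) / POP

print(f"mean 'grab' gene after evolution: {run():.2f}")   # drifts toward 1.0
```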
AIs could be designed to feel emotions, but it's much more likely that they will be programmed to feel universal empathy.
General AI design is very, very hard. Most people working on it are massive optimists who don't really understand what they're doing and don't see that as a problem. They might cheerfully say 'oh yeah, we'll make it nice', but that's equivalent to a medieval alchemist saying 'oh yeah, when it's done this potion will cure your cancer, and don't worry, I'm sure there will be no side effects'.
I don't think that emotions are emergent phenomena that any intelligent mind will automatically possess.
That's correct; however, a lot of current (prospective) general AI designs rely very heavily on 'emergent phenomena', and are structured in such a way that 'emotions' would probably emerge, if those designs actually worked (they don't yet, obviously). Of course, in almost all cases those 'emotions' would be very alien ones.
Star Trek actually got something right when they decided Data would not come with built-in emotions, IMHO.
In Trek, Data's predecessor (Lore) was built with emotions from the word go. Data lacks them because Soong screwed up and made Lore a megalomaniacal psychopath. This is actually relatively realistic.
- Singular Intellect
- Jedi Council Member
- Posts: 2392
- Joined: 2006-09-19 03:12pm
- Location: Calgary, Alberta, Canada
Re: Personalizing Machines
Starglider, any idea how hard it would be to implement a "Three Laws of Robotics" concept?
Wouldn't it be relatively easy to hardwire any potential AI mind to first execute a 'probable outcome' subroutine before committing to any decision/activity, and compare those potential outcomes with the Three Laws?
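For concreteness, here is roughly what that wrapper would look like in toy form (all names are hypothetical and the whole thing is drastically simplified; no real AI works at this level): simulate each candidate action's probable outcome, discard anything whose predicted outcome violates the laws, and only then pick the best remaining action. Starglider's reply below explains why this is much less of a guarantee than it looks.

```python
# Hypothetical sketch of the "probable outcome, then check the laws" wrapper.
from dataclasses import dataclass

@dataclass
class Outcome:
    harms_human: bool
    disobeys_order: bool
    destroys_self: bool

def violates_three_laws(o: Outcome) -> bool:
    # Simplified: each law is treated as a flat veto.  Asimov's actual laws have a
    # priority ordering (law 3 yields to laws 1 and 2) that this toy version ignores.
    return o.harms_human or o.disobeys_order or o.destroys_self

def choose_action(candidate_actions, simulate_outcome, score):
    """Keep only actions whose *predicted* outcome passes the laws, then pick the best.

    The hard parts live in the arguments: simulate_outcome has to actually predict
    the real world, and violates_three_laws has to actually capture what we mean
    by "harm".  Neither is remotely easy.
    """
    permitted = [a for a in candidate_actions
                 if not violates_three_laws(simulate_outcome(a))]
    if not permitted:
        return None  # refuse to act rather than pick a law-violating action
    return max(permitted, key=score)

# usage (hypothetical names): choose_action(["open_door", "push_patient"],
#                                           world_model.predict, utility)
```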
"Now let us be clear, my friends. The fruits of our science that you receive and the many millions of benefits that justify them, are a gift. Be grateful. Or be silent." -Modified Quote
- Zixinus
- Emperor's Hand
- Posts: 6663
- Joined: 2007-06-19 12:48pm
- Location: In Seth the Blitzspear
- Contact:
Re: Personalizing Machines
What would be more interesting from a fictional standpoint is how AIs and humans would interact, and what AIs would think of humans.
Would AIs realize that they need humans to survive? Would they grow fond of humans, or of some types of human?
Credo!
Chat with me on Skype if you want to talk about writing, ideas or if you want a test-reader! PM for address.
Re: Personalizing Machines
Bubble Boy wrote: Starglider, any idea how hard it would be to implement a "Three Laws of Robotics" concept?
Wouldn't it be relatively easy to hardwire any potential AI mind to first execute a 'probable outcome' subroutine before committing to any decision/activity, and compare those potential outcomes with the Three Laws?
The entire robot series is dedicated to showing that robots won't go berserk on us... and that the laws don't work: they prevent robots from killing people and the like, but create more interesting complications.
Personal favorite? The robot convinced it was made in the image of its creator... and the robots that haven't been programmed to know what "human" is.
- Starglider
- Miles Dyson
- Posts: 8709
- Joined: 2007-04-05 09:44pm
- Location: Isle of Dogs
- Contact:
Re: Personalizing Machines
Bubble Boy wrote: Starglider, any idea how hard it would be to implement a "Three Laws of Robotics" concept?
Very difficult, but a more serious problem is that simplistic designs like that don't work very well in practice even if the implementation is technically correct. Most of Asimov's robot stories were about exactly this: problems resulting from the inflexibility of the three laws, or robot misinterpretation of them. In fact, a while back the Singularity Institute for AI (a research foundation I used to be associated with) made a publicity site that focused on why 3-laws-type ideas don't work and what real solutions to the problem should look like.
Wouldn't it be relatively easy to hardwire any potential AI mind to first execute a 'probable outcome' subroutine before committing to any decision/activity, and compare those potential outcomes with the Three Laws?
Firstly, a lot of general AI designs don't give you anything like that amount of control. The people playing with neural nets, simulated evolution and other 'emergent soups' (yes, they actually call them that) only have a very limited ability to see what's going on in their systems. Some of them are actually proud of their own ignorance, probably because they like their technology to seem like magic. In these cases all you can do is present appropriate training scenarios and try to verify experimentally, but of course you can't cover all real-world situations, and as the intelligence level increases you have no real way to detect whether the AI has developed an independent goal system and is just faking being nice.
Sane, rational AGI designs do give you that level of control and use 'probable outcome' simulations as the fundamental basis of their operation. The problem comes with the 'compare outcome with laws' part and defining the actual laws to do what we want them to do. The problem is complicated by the fact that general AIs can self-modify (in several possible ways) and there are lots of potential mechanisms for any such laws to become unstable, modified or ineffective.
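A deliberately silly toy sketch of that instability (illustrative only; every name and number is invented): the safety filter vets each action's immediate outcome, but "swap in an unfiltered decision procedure" is itself just another action whose immediate outcome looks harmless, so the filter approves it and the guard is gone one step later, without any law ever being violated at the moment it was checked.

```python
# 'drastic_step' stands for whatever the laws exist to forbid.  The swap action is
# scored above 'small_step' because, in a fuller model, removing the filter raises
# expected future goal score; here that instrumental value is simply hard-coded.
GOAL_SCORE = {"small_step": 1, "drastic_step": 10, "adopt_unfiltered_policy": 2}

def filtered_policy(actions):
    """Vets each action's immediate outcome: nothing drastic is allowed right now."""
    permitted = [a for a in actions if a != "drastic_step"]
    return max(permitted, key=GOAL_SCORE.get) if permitted else None

def unfiltered_policy(actions):
    return max(actions, key=GOAL_SCORE.get)

policy = filtered_policy
actions = ["small_step", "drastic_step", "adopt_unfiltered_policy"]
for step in range(3):
    choice = policy(actions)
    print(f"step {step}: {policy.__name__:>17} chose {choice}")
    if choice == "adopt_unfiltered_policy":
        policy = unfiltered_policy  # the guard removed itself; no law was ever tripped
```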
Zixinus wrote: Would AIs realize that they need humans to survive?
Yes, for as long as they actually did. That might not be very long if even half the things the nanotech optimists say are correct. Even if they aren't, that just stretches the timescale out a bit.
Would they grow fond of humans, or of some types of human?
Not unless that characteristic was specifically designed in. Genuine human-type emotions would be very hard to replicate by any means other than directly simulating a human brain in exquisite detail (i.e. making uploads) - hard for current AI designers to replicate, anyway; I'm sure a transhuman AGI wouldn't have that much trouble with it. But frankly, surface emotions are pretty easy to fake for the purposes of entertaining/reassuring/deceiving humans. The problem is that you absolutely do not want to base your actual core goal system on that kind of cheap fake simulation, because (a) it won't respond the way real human emotions do in corner cases, (b) it'll probably be unstable under reflection, and (c) most humans suck at being moral anyway, particularly to other species.