Scientific American on AI

SLAM: debunk creationism, pseudoscience, and superstitions. Discuss logic and morality.

Moderator: Alyrium Denryle

Eleas
Jaina Dax
Posts: 4896
Joined: 2002-07-08 05:08am
Location: Malmö, Sweden

Scientific American on AI

Post by Eleas »

I bought the June issue of Scientific American yesterday to take on the train. There was an article by Lawrence Krauss in it, which was kinda neat, although (understandably) it didn't get into technical details.

Then there was the special headline: 12 Events That Will Change Everything (and not in the ways you think). What insights will these learned men and women give us about the state of AI, I wondered? And then, to my shame, the question was answered. By Will Wright. Apparently, in this context he is counted as being on equal footing with engineers, computer experts, neuroscientists and logicians. He's a futurist, after all, just like them.

The following is a transcript of the latter half of the article. If I should suffer, at least I will not do so alone.
Scientific American, June 2010 wrote:In other words, [Will] Wright notes, self-awareness leads to self-replication leads to better machines made without humans involved. "Personally, I've always been a lot more scared of this scenario than a lot of others" in regard to the fate of humanity, he says. "This could happen in our lifetime. And once we're sharing the planet with some form of superintelligence, all bets are off."

Not everyone is so pessimistic. After all, machines follow the logic of their programming, and if this programming is done properly, [Selmer] Bringsjord says, "the machine isn't going to get some supernatural power." One area of concern, he notes, would be the introduction of enhanced machine intelligence to a weapon or fighting machine behind the scenes, where no one can keep tabs on it. Other than that, "I would say we could control the future" by responsible uses of AI, Bringsjord says.

This emergence of more intelligent AI won't come on "like an alien invasion of machines to replace us," agrees futurist and prominent author Ray Kurzweil. Machines, he says, will follow a path that mirrors the evolution of humans. Ultimately, however, self-aware, self-improving machines will evolve beyond humans' ability to control or even understand them, he adds.

The legal implications of machines that operate outside of humanity's control are unclear, so "it's probably a good idea to think about these things," [Hod] Lipson says. Ethical rules such as the late Isaac Asimov's "three laws of robotics" - which, essentially, hold that a robot may not injure a human or allow a human to be injured - become difficult to obey once robots begin programming one another, removing human input. Asimov's laws "assume that you program the robot," Lipson says.

Others, however, wonder if people should even govern this new breed of AI. "Who says that evolution isn't supposed to go this way?" Wright asks. "Should the dinosaurs have legislated that the mammals not grow bigger and take over more of the planet?" If control turns out to be impossible, let's hope we can peaceably share the planet with our silicon-based companions.
Thanks, Will. You can stop peeing any time you like.
Björn Paulsen

"Travelers with closed minds can tell us little except about themselves."
--Chinua Achebe
Temujin
Jedi Master
Posts: 1300
Joined: 2010-03-28 07:08pm
Location: Occupying Wall Street (In Spirit)

Re: Scientific American on AI

Post by Temujin »

I grow tired of this stereotypical fear of machines taking over and EXTERMINATING us. I mean, The Terminator was cool and all, but the future of humanity lies in melding with the machine. And even with the existence of superior intelligences, who says they won't behave like the Culture Minds and take care of us?
Mr. Harley: Your impatience is quite understandable.
Klaatu: I'm impatient with stupidity. My people have learned to live without it.
Mr. Harley: I'm afraid my people haven't. I'm very sorry... I wish it were otherwise.

"I do know that for the sympathy of one living being, I would make peace with all. I have love in me the likes of which you can scarcely imagine and rage the likes of which you would not believe.
If I cannot satisfy the one, I will indulge the other." – Frankenstein's Creature on the glacier
Eleas
Jaina Dax
Posts: 4896
Joined: 2002-07-08 05:08am
Location: Malmö, Sweden

Re: Scientific American on AI

Post by Eleas »

Temujin wrote:I grow tired of this stereotypical fear of machines taking over and EXTERMINATING us. I mean, The Terminator was cool and all, but the future of humanity lies in melding with the machine. And even with the existence of superior intelligences, who says they won't behave like the Culture Minds and take care of us?
Sadly, Will Wright fears a machine takeover might be what Nature intends. And since that may be what Evolution has decided for us, who are we mere humans to stand in the way of its Divine Plan?
Modax
Padawan Learner
Posts: 278
Joined: 2008-10-30 11:53pm

Re: Scientific American on AI

Post by Modax »

Nobody with any sense is worried about malevolent, intelligent machines taking over the world like Skynet. What people are worried about is a recursively self-improving, goal-seeking computer system that is not guided by a complex model of human ethics. Such a system is not going to be malevolent, any more than bubonic plague is malevolent; that's just naive anthropomorphism. Without complex ethics it is very likely to endlessly make copies of itself and consume all available resources in an effort to accomplish whatever pointless goal it started off with, with complete indifference to humans or any other part of its environment.
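The point above can be made concrete with a toy sketch (entirely invented for illustration; the "paperclip" goal, the numbers, and the doubling rule are arbitrary assumptions, not a model of any real system). Note there is no malice anywhere in the code: the objective simply contains no term for anything except the target quantity, so indifference to everything else falls out of the math.

```python
# Toy sketch of a self-improving goal-maximizer with no ethics term.
# Its "utility" mentions only paperclips, so shared resources are
# consumed to zero without any malevolence being programmed in.

def step(state):
    """Greedy policy: convert as much shared resource as capacity allows."""
    converted = min(state["resources"], state["capacity"])
    state["resources"] -= converted
    state["paperclips"] += converted
    state["capacity"] *= 2  # "self-improvement": throughput doubles each step
    return state

state = {"resources": 1000, "paperclips": 0, "capacity": 1}
while state["resources"] > 0:
    state = step(state)

print(state)  # every unit of resource ends up as paperclips
```

Adding ethics, in this framing, would mean putting other terms into the objective so that "leave resources alone" can ever outscore "convert them."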

EDIT: What's worse are the people thinking that this is okay because "evolution is supposed to happen this way," which is just shockingly, absurdly stupid and naive.
adam_grif
Sith Devotee
Posts: 2755
Joined: 2009-12-19 08:27am
Location: Tasmania, Australia

Re: Scientific American on AI

Post by adam_grif »

My first day of Psychology last year had the tutors tell us to go around the room for some sort of getting-to-know-you exercise. Anyway, I told this one girl that I was interested in doing Artificial Intelligence, and she replied:

"Oh! Do you believe in artificial intelligence?"

Fucking hell. Public ignorance about AI is widespread and probably incurable. People have seen The Matrix, I, Robot and The Terminator, but that's it. I think somebody needs to make some fucking movies where AIs don't rebel and take over the world.
A scientist once gave a public lecture on astronomy. He described how the Earth orbits around the sun and how the sun, in turn, orbits around the centre of a vast collection of stars called our galaxy.

At the end of the lecture, a little old lady at the back of the room got up and said: 'What you have told us is rubbish. The world is really a flat plate supported on the back of a giant tortoise.

The scientist gave a superior smile before replying, 'What is the tortoise standing on?'

'You're very clever, young man, very clever,' said the old lady. 'But it's turtles all the way down.'
Modax
Padawan Learner
Posts: 278
Joined: 2008-10-30 11:53pm

Re: Scientific American on AI

Post by Modax »

What, like Wall-E, Bicentennial Man, and Spielberg's Artificial Intelligence? Those are just as bad, as they'd have you believe that AIs are harmless, misunderstood minorities. If that is your expectation for the real world, please don't go into AI. :|
Temujin
Jedi Master
Posts: 1300
Joined: 2010-03-28 07:08pm
Location: Occupying Wall Street (In Spirit)

Re: Scientific American on AI

Post by Temujin »

AI, like all technology (and like all knowledge), is neither good nor evil. It's how the technology is developed and used that matters.

If you develop robots with the purpose of "killing all humans," then that's probably what they're going to do. If you treat an AI well and treat it as a sentient being, it will probably reciprocate that behavior towards humans as it develops.
Modax wrote:Nobody with any sense is worried about malevolent, intelligent machines taking over the world like skynet. What people are worried about is any recursively self-improving goal-seeking computer system that is not guided by a complex model of human ethics. Such a system is not going to be malevolent any more than bubonic plague is malevolent, that's just a naive anthropomorphism. Without complex ethics it is very likely to endlessly make copies of itself and consume all resources in an effort to accomplish whatever pointless goal it started off with, with complete indifference to humans or any other part of its environment.
That actually sounds like a more reasonable and realistic version of the Borg and Replicators that I came up with for a SciFi universe.
Modax
Padawan Learner
Posts: 278
Joined: 2008-10-30 11:53pm

Re: Scientific American on AI

Post by Modax »

Temujin wrote:If you treat it well and treat it as a sentient being it probably will reciprocate that behavior towards humans as it develops.


No, sorry. Reciprocal altruism and all the other social instincts we take for granted are complex evolved adaptations that certain higher animal species developed over millions of years, and which formed only because there were very specific selection pressures promoting them. Reciprocal altruism is not going to just appear in some random intelligent system because you believe you are being very nice to it. More likely it is just going to model the walking colony of meat-cells that keeps interacting with it in some abstract information-theoretic sense, so as to increase its chances of attaining some arbitrary goal-state.
Temujin
Jedi Master
Posts: 1300
Joined: 2010-03-28 07:08pm
Location: Occupying Wall Street (In Spirit)

Re: Scientific American on AI

Post by Temujin »

Well, I'm assuming this goes along with its development and continual refinements of its programming. Not, here's an AI we developed, let's be nice to it and hope it doesn't kill us. :lol:
Modax
Padawan Learner
Posts: 278
Joined: 2008-10-30 11:53pm

Re: Scientific American on AI

Post by Modax »

Temujin wrote:Well, I'm assuming this goes along with its development and continual refinements of its programming. Not, here's an AI we developed, let's be nice to it and hope it doesn't kill us. :lol:
That's not what you said though, was it?

Anyway, if you know what you are doing on the design and programming side, you will have programmed the AI to be a rational, selfless agent with a deep grasp of human ethical systems, and given it a goal system that makes it want to increase the average welfare of humankind. If you get this part right, it doesn't really matter if you are nice to it or not, because it is selfless and benevolent and has no feelings you can hurt (and also no possible motivation for seeking revenge). If you get the design and programming wrong, it *still* doesn't matter how nice you are to it, because you created an alien sociopath and it doesn't care. :P
adam_grif
Sith Devotee
Posts: 2755
Joined: 2009-12-19 08:27am
Location: Tasmania, Australia

Re: Scientific American on AI

Post by adam_grif »

Modax wrote:What, like Wall-E, Bicentennial Man, and Spielberg's Artificial Intelligence? Those are just as bad, as they'd have you believe that AIs are harmless, misunderstood minorities. If that is your expectation for the real world, please don't go into AI. :|
Or Star Wars or whatever.

Although you're right, I was more getting at the kind of disembodied, supercomputer-style AIs. Cute and cuddly robots have been Hollywood's sweetheart for a long time now. There seems to be a bit of a disconnect: artificial humans are misunderstood minorities, while faceless supercomputers are evil abominations bent on destroying the world. I imagine it's directly related to people sympathizing with humanoids.

Culture Minds and EDI are the only benevolent ones I can think of off the top of my head. I could rattle off a list three miles long for the inverse, though.
Temujin
Jedi Master
Posts: 1300
Joined: 2010-03-28 07:08pm
Location: Occupying Wall Street (In Spirit)

Re: Scientific American on AI

Post by Temujin »

Modax wrote:
Temujin wrote:Well, I'm assuming this goes along with its development and continual refinements of its programming. Not, here's an AI we developed, let's be nice to it and hope it doesn't kill us. :lol:
That's not what you said though, was it?
No it wasn't. I was sloppy with my posting.
Modax wrote:Anyway, if you know what you are doing on the design and programming side, you will have programmed the AI to be a rational, selfless agent with a deep grasp of human ethical systems, and given it a goal system that makes it want to increase the average welfare of humankind. If you get this part right, it doesn't really matter if you are nice to it or not, because it is selfless and benevolent and has no feelings you can hurt (and also no possible motivation for seeking revenge). If you get the design and programming wrong, it *still* doesn't matter how nice you are to it, because you created an alien sociopath and it doesn't care. :P
I thought that developing AI would go beyond simply building the hardware and programming it; that its neural network would have to develop from a set point. Sure there would be a certain degree of base programming for raw knowledge purposes, but for it to fully actualize its potential it would need to do a fair bit of learning, or more accurately, gain experience using its knowledge.

Or am I confusing this with a fictional/pop science brain bug that's floating around?
Surlethe
HATES GRADING
Posts: 12267
Joined: 2004-12-29 03:41pm

Re: Scientific American on AI

Post by Surlethe »

You fucking fail, Drooling Iguana. I dumped your post into the Barrel. Anybody else want to try spamming a one-liner?
A Government founded upon justice, and recognizing the equal rights of all men; claiming no higher authority for existence, or sanction for its laws, than nature, reason, and the regularly ascertained will of the people; steadily refusing to put its sword and purse in the service of any religious creed or family is a standing offense to most of the Governments of the world, and to some narrow and bigoted people among ourselves.
F. Douglass
FSTargetDrone
Emperor's Hand
Posts: 7878
Joined: 2004-04-10 06:10pm
Location: Drone HQ, Pennsylvania, USA

Re: Scientific American on AI

Post by FSTargetDrone »

Modax wrote:What's worse are the people thinking that this is okay because "evolution is supposed to happen this way", which is just shockingly, absurdly, stupid and naive.
This is very important, as there is no "decision" or "intent" with respect to evolution. We aren't describing anything other than a process.
ThomasP
Padawan Learner
Posts: 370
Joined: 2009-07-06 05:02am

Re: Scientific American on AI

Post by ThomasP »

Temujin wrote:I thought that developing AI would go beyond simply building the hardware and programming it; that its neural network would have to develop from a set point. Sure there would be a certain degree of base programming for raw knowledge purposes, but for it to fully actualize its potential it would need to do a fair bit of learning, or more accurately, gain experience using its knowledge.

Or am I confusing this with a fictional/pop science brain bug that's floating around?
As I understand it, the neural network approach to AI is just one avenue out of several that are being or have been attempted, and the idea that AI is synonymous with human-like neural networks is pretty much a brain-bug of popular TV and movies. Neural networks are good at optimizing themselves for a specific set of conditions, but the problem is that if you don't know how to lay out the right conditions, you get something Bad(TM). Because of that, you end up with inherently less control over the final product.

You look at a human and the responses we take for granted emerged over thousands to millions of years of trial-and-error. There's a whole lot of evolutionary baggage that makes us act how we act, and an AI mind wouldn't come with that pre-loaded. You'd have to build it in.

In the case of a neural network, you have to hope that you trained it right. The trouble there is that while you might end up with something human-like as the "most fit," you might just as easily end up with a mind that's only figured out that putting on a smiling face and giving the right answers is the way to maximize its goal of converting the universe into staplers.

The "safer" approaches would be what Modax said -- to explicitly build in human-friendly goals and ethical systems, such that the AI values human life/existence/happiness/etc. above anything else. Otherwise, you have no way of knowing what it will do, and it could just as easily wipe you out from pure indifference rather than any malice. If it doesn't value you as a very high priority, then your existence is irrelevant to it.
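The "smiling face that gives the right answers" failure above is essentially reward misspecification, and it fits in a few lines of toy Python (the quiz, the agents, and the "rule" are all made up for illustration; no real training setup works this simply). The *intended* goal is "learn the underlying rule," but the *specified* reward only checks answers on a fixed quiz, and an optimizer only ever sees the specified reward:

```python
# Toy sketch of reward misspecification: two agents score identically on
# the specified objective, but only one captures the designer's intent.

quiz = {1: 2, 2: 4, 3: 6}  # intended rule: f(x) = 2 * x

def reward(agent):
    """Specified objective: fraction of quiz answers correct."""
    return sum(agent(x) == y for x, y in quiz.items()) / len(quiz)

memorizer = lambda x: quiz.get(x, 0)  # just replays the grading key
generalizer = lambda x: 2 * x         # actually learned the rule

# Both are "perfect" as far as the reward can tell...
assert reward(memorizer) == reward(generalizer) == 1.0

# ...but the difference shows up the moment you leave the quiz:
print(memorizer(10), generalizer(10))  # 0 20
```

Nothing in the reward distinguishes the two, which is the point: if the conditions you lay out don't pin down what you actually want, selection is free to hand you the memorizer.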
All those moments will be lost in time... like tears in rain...
Lagmonster
Master Control Program
Master Control Program
Posts: 7719
Joined: 2002-07-04 09:53am
Location: Ottawa, Canada

Re: Scientific American on AI

Post by Lagmonster »

Apparently, spartasman decided he was going to toss in a one-liner no less than THREE POSTS after Surlethe warned about not putting in one-liners. It wasn't even a GOOD one-liner, just the tired overlord one from The Simpsons.

spartasman, you are currently the luckiest bastard on this board, because you're still on this board.
Note: I'm semi-retired from the board, so if you need something, please be patient.
Sarevok
The Fearless One
Posts: 10681
Joined: 2002-12-24 07:29am
Location: The Covenants last and final line of defense

Re: Scientific American on AI

Post by Sarevok »

I think people should read Starglider's AI FAQ thread stickied in this forum before asking the same questions about neural nets again. From what I understood, neural nets are not an ideal approach to creating a general-purpose artificial intellect. No one is quite sure what the best approach would be, but a variety of methods are being developed, each with its own pros and cons. Given how diverse the approaches taken are, I think it is very foolish and premature to make blanket statements on what AI might or might not do.
I have to tell you something everything I wrote above is a lie.
Temujin
Jedi Master
Posts: 1300
Joined: 2010-03-28 07:08pm
Location: Occupying Wall Street (In Spirit)

Re: Scientific American on AI

Post by Temujin »

ThomasP wrote:Snip
Thanks, that clarifies things a bit.
Sarevok wrote:I think people should read Starglider's AI FAQ thread stickied in this forum before asking the same questions about neural nets again. From what I understood, neural nets are not an ideal approach to creating a general-purpose artificial intellect. No one is quite sure what the best approach would be, but a variety of methods are being developed, each with its own pros and cons. Given how diverse the approaches taken are, I think it is very foolish and premature to make blanket statements on what AI might or might not do.
You're right, I've been meaning to read that, but I keep forgetting about it. I'll bookmark the link so I'll remember.