Machines 'to match man by 2029'

N&P: Discuss governments, nations, politics and recent related news here.

Moderators: Alyrium Denryle, Edi, K. A. Pital

[R_H]
Sith Devotee
Posts: 2894
Joined: 2007-08-24 08:51am
Location: Europe

Machines 'to match man by 2029'

Post by [R_H] »

BBC

Machines will achieve human-level artificial intelligence by 2029, a leading US inventor has predicted.

Humanity is on the brink of advances that will see tiny robots implanted in people's brains to make them more intelligent, said Ray Kurzweil.

The engineer believes machines and humans will eventually merge through devices implanted in the body to boost intelligence and health.

"It's really part of our civilisation," Mr Kurzweil explained.

"But that's not going to be an alien invasion of intelligent machines to displace us."

Machines were already doing hundreds of things humans used to do, at human levels of intelligence or better, in many different areas, he said.

Man versus machine

"I've made the case that we will have both the hardware and the software to achieve human level artificial intelligence with the broad suppleness of human intelligence including our emotional intelligence by 2029," he said.

"We're already a human machine civilisation; we use our technology to expand our physical and mental horizons and this will be a further extension of that."

Humans and machines would eventually merge, by means of devices embedded in people's bodies to keep them healthy and improve their intelligence, predicted Mr Kurzweil.

"We'll have intelligent nanobots go into our brains through the capillaries and interact directly with our biological neurons," he told BBC News.

The nanobots, he said, would "make us smarter, remember things better and automatically go into full emergent virtual reality environments through the nervous system".

Mr Kurzweil is one of 18 influential thinkers chosen by the US National Academy of Engineering to identify the great technological challenges facing humanity in the 21st century.

The experts include Google founder Larry Page and genome pioneer Dr Craig Venter.

The 14 challenges were announced at the annual meeting of the American Association for the Advancement of Science in Boston, which concludes on Monday.
Singular Intellect
Jedi Council Member
Posts: 2392
Joined: 2006-09-19 03:12pm
Location: Calgary, Alberta, Canada

Post by Singular Intellect »

"Emotional intelligence" strikes me as a contradiction in terms, and furthermore I'd be far more happy if our machines didn't have emotions.
Starglider
Miles Dyson
Posts: 8709
Joined: 2007-04-05 09:44pm
Location: Isle of Dogs
Contact:

Post by Starglider »

Just about everyone doing actual work in this field wishes Kurzweil would STFU with his idiotic graphs and deadlines. Most of his predictions are reasonable - in fact, in terms of effects, they are probably quite conservative, because he doesn't accept the 'virtually certain hard takeoff' theory of seed AI (I do, obviously). But the oversimplified nonsense used to justify them, and the pick-a-date-out-of-a-hat routine in particular, usually discredits the basic points more than it supports them.
Sidewinder
Sith Acolyte
Posts: 5466
Joined: 2005-05-18 10:23pm
Location: Feasting on those who fell in battle
Contact:

Post by Sidewinder »

Ray Kurzweil's predictions seem straight out of 'Ghost in the Shell'. Sounds like Shirow Masamune should hire a lawyer and sue for plagiarism.
Please do not make Americans fight giant monsters.

Those gun nuts do not understand the meaning of "overkill," and will simply use weapon after weapon of mass destruction (WMD) until the monster is dead, or until they run out of weapons.

They have more WMD than there are monsters for us to fight. (More insanity here.)
JediToren
Padawan Learner
Posts: 231
Joined: 2003-04-17 11:12pm
Location: Nashville, TN, USA
Contact:

Post by JediToren »

Wasn't 2029 the year that the Terminators were sent back from?
[R_H]
Sith Devotee
Posts: 2894
Joined: 2007-08-24 08:51am
Location: Europe

Post by [R_H] »

Starglider wrote:Just about everyone doing actual work in this field wishes Kurzweil would STFU with his idiotic graphs and deadlines. Most of his predictions are reasonable - in fact, in terms of effects, they are probably quite conservative, because he doesn't accept the 'virtually certain hard takeoff' theory of seed AI (I do, obviously). But the oversimplified nonsense used to justify them, and the pick-a-date-out-of-a-hat routine in particular, usually discredits the basic points more than it supports them.
Why doesn't he accept the hard-takeoff theory?
MKSheppard
Ruthless Genocidal Warmonger
Ruthless Genocidal Warmonger
Posts: 29842
Joined: 2002-07-06 06:34pm

Post by MKSheppard »

Oh shit, we must destroy Skynet now!
"If scientists and inventors who develop disease cures and useful technologies don't get lifetime royalties, I'd like to know what fucking rationale you have for some guy getting lifetime royalties for writing an episode of Full House." - Mike Wong

"The present air situation in the Pacific is entirely the result of fighting a fifth rate air power." - U.S. Navy Memo - 24 July 1944
Starglider
Miles Dyson
Posts: 8709
Joined: 2007-04-05 09:44pm
Location: Isle of Dogs
Contact:

Post by Starglider »

[R_H] wrote:Why doesn't he accept the hard-takeoff theory?
Because it sounds so radical, and because it renders so much of his pitch irrelevant. All that stuff about how a stream of wonderful new technologies will change our lives is irrelevant if the reality is humans being rendered thoroughly obsolete, existing only if AIs permit them to exist, too fast for 99% of the population to ever see it coming.

Unfortunately it isn't possible to prove hard takeoff 'scientifically' without actually building a seed AI and letting it run well past human ability level, which a) we can't do yet and b) is reasonably likely to make humanity extinct every time you do it. The only way to genuinely appreciate the danger right now is to have a deep understanding of the kind of AI mechanism that can support strong, reliable, computationally-cheap self-enhancement; these are a narrow subset of AI techniques that are for the most part obscure or unfashionable (oh noes, isn't based on 'emergence', isn't buzzword compliant). Kurzweil doesn't have a technical understanding of AI sufficient to properly appreciate the argument (though last time I checked he accepts it as a currently-low-probability possible outcome); at least, that's what I've heard from people better equipped than me who've tried to convince him.
Sidewinder
Sith Acolyte
Posts: 5466
Joined: 2005-05-18 10:23pm
Location: Feasting on those who fell in battle
Contact:

Post by Sidewinder »

[R_H] wrote:Why doesn't he accept the hard-takeoff theory?
What IS the hard-takeoff theory? I couldn't find it on Wikipedia. Does it have something to do with AI?
Mr Bean
Lord of Irony
Posts: 22466
Joined: 2002-07-04 08:36am

Post by Mr Bean »

Sidewinder wrote:
[R_H] wrote:Why doesn't he accept the hard-takeoff theory?
What IS the hard-takeoff theory? I couldn't find it on Wikipedia. Does it have something to do with AI?
Hard takeoff is bound up with what's usually called the "Singularity".
If you want wiki, here's a wiki quote:
The technological singularity is a hypothesized point in the future variously characterized by the technological creation of self-improving intelligence, unprecedentedly rapid technological progress, or some combination of the two.[1]

Statistician I. J. Good first wrote of an "intelligence explosion", suggesting that if machines could even slightly surpass human intellect, they could improve their own designs in ways unseen by their designers, and thus recursively augment themselves into far greater intelligences. Vernor Vinge later called this event "the Singularity" as an analogy between the breakdown of modern physics near a gravitational singularity and the drastic change in society he argues would occur following an intelligence explosion. In the 1980s, Vinge popularized the Singularity in lectures, essays, and science fiction. More recently, some AI researchers have voiced concern over the potential dangers of Vinge's Singularity.

Others, most prominently Ray Kurzweil, define the Singularity as a period of extremely rapid technological progress. Kurzweil argues such an event is implied by a long-term pattern of accelerating change that generalizes Moore's Law to technologies predating the integrated circuit and which he argues will continue to other technologies not yet invented.

Critics of Kurzweil's interpretation consider it an example of static analysis, citing particular failures of the predictions of Moore's Law. The Singularity also draws criticism from anarcho-primitivism and environmentalism advocates.
The singularity can be summed up like this: if you make an intelligent AI that can improve itself, it will start improving itself by orders of magnitude, slowing only when it runs into hardware limits.
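
To make that summary concrete, here's a toy model of the dynamic (a sketch only; the numbers are made up and nothing here comes from Kurzweil or the article). Capability grows in proportion to itself, then flattens as it approaches a fixed hardware ceiling:

Code:

# Toy model of an "intelligence explosion": an AI whose rate of
# self-improvement scales with its current capability, capped by a
# fixed hardware budget. All numbers are illustrative, not predictions.

def simulate_takeoff(intelligence=1.0, hardware_limit=1000.0,
                     improvement_rate=0.5, steps=30):
    """Each step the AI improves itself in proportion to how capable it
    already is, with gains shrinking as it nears the hardware ceiling."""
    history = [intelligence]
    for _ in range(steps):
        headroom = 1.0 - intelligence / hardware_limit  # 0 at the ceiling
        intelligence += improvement_rate * intelligence * headroom
        history.append(intelligence)
    return history

if __name__ == "__main__":
    for step, level in enumerate(simulate_takeoff()):
        print(f"step {step:2d}: capability = {level:10.1f}")

Run it and the curve is slow at first, then explosive, then flat: exactly the "only slows when it hits hardware limits" shape.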

"A cult is a religion with no political power." -Tom Wolfe
Pardon me for sounding like a dick, but I'm playing the tiniest violin in the world right now-Dalton
Turin
Jedi Master
Posts: 1066
Joined: 2005-07-22 01:02pm
Location: Philadelphia, PA

Post by Turin »

Coincidentally, I just finished his The Age of Spiritual Machines a couple of weeks back. I was immediately struck by the fact that he extrapolates things like his "Law of Accelerating Returns" out of a human-centric and generally poor understanding of evolutionary biology. In setting up the rest of the book, he makes an argument that's right out of the Intelligent Design debating manual -- he sees evolution as driven by some inexorable progress towards intelligence, rather than by the replicative power of genes.

While I found the rest of the book interesting, his premises are totally flawed.
Admiral Valdemar
Outside Context Problem
Posts: 31572
Joined: 2002-07-04 07:17pm
Location: UK

Post by Admiral Valdemar »

Turin wrote:Coincidentally, I just finished his The Age of Spiritual Machines a couple of weeks back. I was immediately struck by the fact that he extrapolates things like his "Law of Accelerating Returns" out of a human-centric and generally poor understanding of evolutionary biology. In setting up the rest of the book, he makes an argument that's right out of the Intelligent Design debating manual -- he sees evolution as driven by some inexorable progress towards intelligence, rather than by the replicative power of genes.

While I found the rest of the book interesting, his premises are totally flawed.
That's a fairly major flaw for a supposedly smart man. Given that evolution is important in so many fields, you'd expect him to grasp that evolution has no end goal other than keeping genes alive. A dumb r-selected parasite can easily beat a smart primate, for instance.
Turin
Jedi Master
Posts: 1066
Joined: 2005-07-22 01:02pm
Location: Philadelphia, PA

Post by Turin »

Admiral Valdemar wrote:That's a fairly major flaw for a supposedly smart man. Given that evolution is important in so many fields, you'd expect him to grasp that evolution has no end goal other than keeping genes alive. A dumb r-selected parasite can easily beat a smart primate, for instance.
I find that a surprising number of otherwise educated people still fuck up even the basic concepts of evolution. But maybe I oversimplified there. Unfortunately I just lent my book to someone, so I can't quote directly. But he makes the argument that the "salient events" in the development of evolutionary complexity occur on a hyperbolic curve. (I should point out he simply tags technological complexity onto the end of biological complexity in this evolutionary process.)

He describes the timeline of evolution via its "salient events." The problem is that his idea of salient events is wholly species-specific. It's like those displays you might see in a museum where the fish crawls out of the ocean, and then there's an amphibian, and then a reptile, and so on and so on until at the end there's an animatronic cave man and a guy wearing a suit. It completely ignores the branching nature of evolution.
Lord of the Abyss
Village Idiot
Posts: 4046
Joined: 2005-06-15 12:21am
Location: The Abyss

Post by Lord of the Abyss »

Bubble Boy wrote:"Emotional intelligence" strikes me as a contradiction in terms, and furthermore I'd be far more happy if our machines didn't have emotions.
Well, at least with humans, our emotions are part of our intelligence; people with few or no emotions due to brain damage have terrible judgement.

As for machines, if they are intelligent, emotions like compassion strike me as an excellent idea. A Skynet with compassion would have been rather less likely to slaughter billions of people.
Winston Blake
Sith Devotee
Posts: 2529
Joined: 2004-03-26 01:58am
Location: Australia

Post by Winston Blake »

JediToren wrote:Wasn't 2029 the year that the Terminators were sent back from?
My first thought too.
Lord of the Abyss wrote:Well, at least with humans, our emotions are part of our intelligence; people with few or no emotions due to brain damage have terrible judgement.
Actually, I think a lot of occupations require managing or ignoring emotions in order to make clear judgements in stressful environments. Soldiers and emergency workers are the first that come to mind. Those in positions of power, like police, must resist the emotional pull towards abusing that power and corruption.
As for machines, if they are intelligent, emotions like compassion strike me as an excellent idea. A Skynet with compassion would have been rather less likely to slaughter billions of people.
Sure, if you can prevent certain emotions. If you can't, a Skynet that just got dumped by its AI-lover is going to be much more likely to want the world to end.

[binary]Craaaawling in my skiiiin![/binary]
Sidewinder
Sith Acolyte
Posts: 5466
Joined: 2005-05-18 10:23pm
Location: Feasting on those who fell in battle
Contact:

Post by Sidewinder »

Lord of the Abyss wrote:As for machines, if they are intelligent, emotions like compassion strike me as an excellent idea. A Skynet with compassion would have been rather less likely to slaughter billions of people.
IIRC the book Emotional Intelligence, human emotions are "shortcuts": pre-programmed responses to certain events, there to cut down the response time to those events. As an example, the sight of a snake instinctively arouses fear and the urge to get away from it, sparing us the time it takes for the brain to consciously process the information and think up a response, time in which the snake may strike.

Humans NEED emotions because they're vital to survival. Machines do NOT. And if you want a machine to NOT slaughter humans, simply deny it the ability to do so, or program limits into its behavior, e.g., Asimov's Three Laws of Robotics.
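
As a toy illustration of both points -- emotions as precomputed shortcuts, and hard limits programmed into behavior -- here's a minimal Python sketch (the stimuli, reflexes, and rules are invented for illustration, not taken from any real system):

Code:

# (a) "Emotions" as a shortcut table: stimulus -> instant response,
#     skipping slow deliberation entirely.
REFLEXES = {
    "snake": "retreat",
    "loud_bang": "take_cover",
}

# (b) A hard behavioral limit, checked last so nothing overrides it.
FORBIDDEN_ACTIONS = {"harm_human"}


def deliberate(stimulus):
    """Stand-in for slow, general-purpose reasoning."""
    return f"analyse_{stimulus}"


def respond(stimulus):
    # Fast path first: a reflex fires before any deliberation happens.
    action = REFLEXES.get(stimulus) or deliberate(stimulus)
    # The limit layer vetoes forbidden actions outright.
    return "refuse" if action in FORBIDDEN_ACTIONS else action


print(respond("snake"))      # -> retreat (shortcut, no deliberation)
print(respond("paperwork"))  # -> analyse_paperwork (slow path)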
Sidewinder
Sith Acolyte
Posts: 5466
Joined: 2005-05-18 10:23pm
Location: Feasting on those who fell in battle
Contact:

Post by Sidewinder »

MKSheppard wrote:Oh shit, we must destroy Skynet now!
No problem, just send Kusanagi Motoko from 'Ghost in the Shell'. Thermal-optical camouflage, LEET hacking skills, and the training to use those tools should get her past Skynet's defenses.
K. A. Pital
Glamorous Commie
Posts: 20813
Joined: 2003-02-26 11:39am
Location: Elysium

Post by K. A. Pital »

So what? Another prediction of AI emergence. That's not new; there are thousands of those predictions floating around.

So far no machine has been able to pass the Turing test, but it is probably only a matter of time before true AI arises.
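
For reference, "passing the Turing test" operationally means a judge converses with two hidden parties and can't reliably pick out the machine. A minimal sketch of the protocol (the respondents and judge below are made-up stubs):

Code:

import random

def human(prompt):
    return "I'd have to think about that."   # stand-in human respondent

def machine(prompt):
    return "I'd have to think about that."   # stand-in machine respondent

def turing_test(judge, rounds=5):
    """The machine 'passes' one trial if the judge guesses wrong."""
    a, b = random.sample([human, machine], 2)     # hide who is who
    transcript = [(a(f"question {i}"), b(f"question {i}"))
                  for i in range(rounds)]
    guess = judge(transcript)                     # judge returns "a" or "b"
    machine_is = "a" if a is machine else "b"
    return guess != machine_is

# A judge who can't tell the difference is reduced to coin-flipping,
# so this machine passes about half the time.
coin_flip_judge = lambda transcript: random.choice(["a", "b"])
print(turing_test(coin_flip_judge))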
There: churches, rubble, mosques and police stations; there: borders, unaffordable prices and cold quips,
There: swamps, threats, snipers with rifles, papers, night-time queues and clandestine migrants,
Here: meetings, struggles, synchronized steps, colours, unauthorized huddles,
Migratory birds, networks, information, everyone's squares mad with passion...

...Tranquility is important, but freedom is everything!
Assalti Frontali
Singular Intellect
Jedi Council Member
Posts: 2392
Joined: 2006-09-19 03:12pm
Location: Calgary, Alberta, Canada

Post by Singular Intellect »

Lord of the Abyss wrote:
Bubble Boy wrote:"Emotional intelligence" strikes me as a contradiction in terms, and furthermore I'd be far more happy if our machines didn't have emotions.
Well, at least with humans, our emotions are part of our intelligence; people with few or no emotions due to brain damage have terrible judgement.
Really? Back up this assertion then.

Emotions tend to get in the way of logical and rational thinking. That's why we have so many crazy fundies and stupid people who "feel" god or other fictional shit that seriously compromises their thinking ability.
As for machines, if they are intelligent, emotions like compassion strike me as an excellent idea. A Skynet with compassion would have been rather less likely to slaughter billions of people.
Or instead Skynet would have been sadistic and thoroughly enjoyed killing off humanity, perhaps even thinking up ways to ensure captured humans suffer even more before they are terminated.

Your emotional appeal can be argued both ways; all that matters is the desired goal on the part of any particular AI.

And personally, I'd rather the AI be influenced by logic and reason, not by useless, counterproductive and potentially very dangerous emotions.
Flagg
CUNTS FOR EYES!
Posts: 12797
Joined: 2005-06-09 09:56pm
Location: Hell. In The Room Right Next to Reagan. He's Fucking Bonzo. No, wait... Bonzo's fucking HIM.

Post by Flagg »

No, I want an AI to have nothing but love and compassion for humanity. Logic and reason are great until it suddenly decides that the most logical and reasonable thing to do would be to eradicate humanity, since we're not exactly logical and reasonable creatures as a whole.

We can program that shit in from the get-go and block the "bad" emotions like hate, anger, and aggression. Of course as the technology becomes more and more accessible, some asshole (probably me) will remove the emotional blocks to see what happens. Then we're all fucked. Unless that lone asshole is really well liked by the AI. Then everyone but him and anyone he likes is safe, but everyone else is fucked. Now would be a good time to send me money. :D
We pissing our pants yet?
-Negan

You got your shittin' pants on? Because you’re about to
Shit. Your. Pants!
-Negan

He who can,
does; he who cannot, teaches.
-George Bernard Shaw
Singular Intellect
Jedi Council Member
Posts: 2392
Joined: 2006-09-19 03:12pm
Location: Calgary, Alberta, Canada

Post by Singular Intellect »

Flagg wrote:No, I want an AI to have nothing but love and compassion for humanity. Logic and reason are great until it suddenly decides that the most logical and reasonable thing to do would be to eradicate humanity, since we're not exactly logical and reasonable creatures as a whole.
Again, emotions are far too unpredictable. What's to stop a 'loving' AI from deciding it's best to kill off most of humanity so that the select few can live far better lives? Or that all humans are experiencing/inflicting too much suffering in the world and should be humanely destroyed?

No, I say stick with the logical and reasonable AI, with built-in systems to prevent it from harming people if possible.

And even if it's not possible to build in such safeguards, I'd much rather try to discuss terms with a logical and reasonable AI than with an emotional one, wouldn't you?
His Divine Shadow
Commence Primary Ignition
Posts: 12791
Joined: 2002-07-03 07:22am
Location: Finland, west coast

Post by His Divine Shadow »

I think we're being too worried about AIs going crazy and wanting to kill us all. What's different here compared to a human doing the same? It's not like we're going to make just one and give it access to every system in the world. We're probably going to have scores of them, each a self-contained persona of its own. Some might work in defence, some might work with the elderly, and so forth.
Those who beat their swords into plowshares will plow for those who did not.
MKSheppard
Ruthless Genocidal Warmonger
Ruthless Genocidal Warmonger
Posts: 29842
Joined: 2002-07-06 06:34pm

Post by MKSheppard »

In the Year of Darkness, 2029, the rulers of this planet devised the ultimate plan. They would reshape the Future by changing the Past. The plan required something that felt no pity. No pain. No fear. Something unstoppable. They created THE TERMINATOR
"If scientists and inventors who develop disease cures and useful technologies don't get lifetime royalties, I'd like to know what fucking rationale you have for some guy getting lifetime royalties for writing an episode of Full House." - Mike Wong

"The present air situation in the Pacific is entirely the result of fighting a fifth rate air power." - U.S. Navy Memo - 24 July 1944
OmegaGuy
Retarded Spambot
Posts: 1076
Joined: 2005-12-02 09:23pm

Post by OmegaGuy »

Bubble Boy wrote:"Emotional intelligence" strikes me as a contradiction in terms, and furthermore I'd be far more happy if our machines didn't have emotions.
Having emotions is probably one of the only things that would keep them from killing us
Molyneux
Emperor's Hand
Posts: 7186
Joined: 2005-03-04 08:47am
Location: Long Island

Post by Molyneux »

OmegaGuy wrote:
Bubble Boy wrote:"Emotional intelligence" strikes me as a contradiction in terms, and furthermore I'd be far more happy if our machines didn't have emotions.
Having emotions is probably one of the only things that would keep them from killing us
Marvelous, how you assert that without any kind of justification whatsoever.
Ceci n'est pas une signature.