Machines will achieve human-level artificial intelligence by 2029, a leading US inventor has predicted.
Humanity is on the brink of advances that will see tiny robots implanted in people's brains to make them more intelligent, said Ray Kurzweil.
The engineer believes machines and humans will eventually merge through devices implanted in the body to boost intelligence and health.
"It's really part of our civilisation," Mr Kurzweil explained.
"But that's not going to be an alien invasion of intelligent machines to displace us."
Machines were already doing hundreds of things humans used to do, at human levels of intelligence or better, in many different areas, he said.
Man versus machine
"I've made the case that we will have both the hardware and the software to achieve human level artificial intelligence with the broad suppleness of human intelligence including our emotional intelligence by 2029," he said.
"We're already a human machine civilisation; we use our technology to expand our physical and mental horizons and this will be a further extension of that."
Humans and machines would eventually merge, by means of devices embedded in people's bodies to keep them healthy and improve their intelligence, predicted Mr Kurzweil.
"We'll have intelligent nanobots go into our brains through the capillaries and interact directly with our biological neurons," he told BBC News.
The nanobots, he said, would "make us smarter, remember things better and automatically go into full emergent virtual reality environments through the nervous system".
Mr Kurzweil is one of 18 influential thinkers chosen by the US National Academy of Engineering to identify the great technological challenges facing humanity in the 21st century.
The experts include Google founder Larry Page and genome pioneer Dr Craig Venter.
The 14 challenges were announced at the annual meeting of the American Association for the Advancement of Science in Boston, which concludes on Monday.
Machines 'to match man by 2029'
Moderators: Alyrium Denryle, Edi, K. A. Pital
BBC
- Singular Intellect
- Jedi Council Member
- Posts: 2392
- Joined: 2006-09-19 03:12pm
- Location: Calgary, Alberta, Canada
- Starglider
- Miles Dyson
- Posts: 8709
- Joined: 2007-04-05 09:44pm
- Location: Isle of Dogs
- Contact:
Just about everyone doing actual work in this field wishes Kurzweil would STFU with his idiotic graphs and deadlines. Most of his predictions are reasonable - in fact, in terms of effects they are probably quite conservative, because he doesn't accept the 'virtually certain hard takeoff' theory of seed AI (I do, obviously). But the oversimplified nonsense used to justify them, and the pick-a-date-out-of-a-hat routine in particular, usually discredits the basic points more than it supports them.
- Sidewinder
- Sith Acolyte
- Posts: 5466
- Joined: 2005-05-18 10:23pm
- Location: Feasting on those who fell in battle
- Contact:
Ray Kurzweil's predictions seem straight out of 'Ghost in the Shell'. Sounds like Shirow Masamune should hire a lawyer and sue for plagiarism.
Please do not make Americans fight giant monsters.
Those gun nuts do not understand the meaning of "overkill," and will simply use weapon after weapon of mass destruction (WMD) until the monster is dead, or until they run out of weapons.
They have more WMD than there are monsters for us to fight. (More insanity here.)
Starglider wrote:Just about everyone doing actual work in this field wishes Kurzweil would STFU with his idiotic graphs and deadlines. Most of his predictions are reasonable - in fact, in terms of effects they are probably quite conservative, because he doesn't accept the 'virtually certain hard takeoff' theory of seed AI (I do, obviously). But the oversimplified nonsense used to justify them, and the pick-a-date-out-of-a-hat routine in particular, usually discredits the basic points more than it supports them.
Why doesn't he accept the hard-takeoff theory?
- MKSheppard
- Ruthless Genocidal Warmonger
- Posts: 29842
- Joined: 2002-07-06 06:34pm
Oh shit, we must destroy Skynet now!
"If scientists and inventors who develop disease cures and useful technologies don't get lifetime royalties, I'd like to know what fucking rationale you have for some guy getting lifetime royalties for writing an episode of Full House." - Mike Wong
"The present air situation in the Pacific is entirely the result of fighting a fifth rate air power." - U.S. Navy Memo - 24 July 1944
"The present air situation in the Pacific is entirely the result of fighting a fifth rate air power." - U.S. Navy Memo - 24 July 1944
- Starglider
- Miles Dyson
- Posts: 8709
- Joined: 2007-04-05 09:44pm
- Location: Isle of Dogs
- Contact:
[R_H] wrote:Why doesn't he accept the hard-takeoff theory?
Because it sounds so radical, and because it renders so much of his pitch irrelevant. All that stuff about how a stream of wonderful new technologies will change our lives is moot if the reality is humans being rendered thoroughly obsolete, existing only if AIs permit them to exist, too fast for 99% of the population to ever see it coming.
Unfortunately it isn't possible to prove hard takeoff 'scientifically' without actually building a seed AI and letting it run well past human ability level, which a) we can't do yet and b) is reasonably likely to make humanity extinct every time you do it. The only way to genuinely appreciate the danger right now is to have a deep understanding of the kind of AI mechanism that can support strong, reliable, computationally-cheap self-enhancement; these are a narrow subset of AI techniques that are for the most part obscure or unfashionable (oh noes, it isn't based on 'emergence', it isn't buzzword-compliant). Kurzweil doesn't have a technical understanding of AI sufficient to properly appreciate the argument (though last time I checked he accepts it as a currently-low-probability possible outcome); at least, that's what people better equipped than me who've tried to convince him have told me.
- Sidewinder
- Sith Acolyte
- Posts: 5466
- Joined: 2005-05-18 10:23pm
- Location: Feasting on those who fell in battle
- Contact:
[R_H] wrote:Why doesn't he accept the hard-takeoff theory?
What IS the hard-takeoff theory? I couldn't find it on Wikipedia. Does it have something to do with AI?
Please do not make Americans fight giant monsters.
Those gun nuts do not understand the meaning of "overkill," and will simply use weapon after weapon of mass destruction (WMD) until the monster is dead, or until they run out of weapons.
They have more WMD than there are monsters for us to fight. (More insanity here.)
[R_H] wrote:Why doesn't he accept the hard-takeoff theory?
Sidewinder wrote:What IS the hard-takeoff theory? I couldn't find it on Wikipedia. Does it have something to do with AI?
Hard takeoff is also referred to as a "Singularity". The singularity can be summed up as this: if you make an intelligent AI that can improve itself, it will start improving itself by orders of magnitude, only slowing when it runs into hardware limits.
If you want wiki, here's a wiki quote:
The technological singularity is a hypothesized point in the future variously characterized by the technological creation of self-improving intelligence, unprecedentedly rapid technological progress, or some combination of the two.[1]
Statistician I. J. Good first wrote of an "intelligence explosion", suggesting that if machines could even slightly surpass human intellect, they could improve their own designs in ways unseen by their designers, and thus recursively augment themselves into far greater intelligences. Vernor Vinge later called this event "the Singularity" as an analogy between the breakdown of modern physics near a gravitational singularity and the drastic change in society he argues would occur following an intelligence explosion. In the 1980s, Vinge popularized the Singularity in lectures, essays, and science fiction. More recently, some AI researchers have voiced concern over the potential dangers of Vinge's Singularity.
Others, most prominently Ray Kurzweil, define the Singularity as a period of extremely rapid technological progress. Kurzweil argues such an event is implied by a long-term pattern of accelerating change that generalizes Moore's Law to technologies predating the integrated circuit and which he argues will continue to other technologies not yet invented.
Critics of Kurzweil's interpretation consider it an example of static analysis, citing particular failures of the predictions of Moore's Law. The Singularity also draws criticism from anarcho-primitivism and environmentalism advocates.
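To make that feedback loop concrete, here's a purely illustrative toy model (my own sketch with made-up numbers, not anything from Good, Vinge or Kurzweil): improvement per cycle scales with the improver's current ability, so growth is faster than exponential until a hardware ceiling caps it.
[code]
# Toy 'intelligence explosion' model: a hypothetical sketch in
# arbitrary units, not a real forecast.

def takeoff(start=1.0, gain=0.1, hardware_limit=1e6, cycles=20):
    """Return the capability level after each self-improvement cycle."""
    level, history = start, [start]
    for _ in range(cycles):
        # dI = gain * I^2: smarter systems improve themselves faster,
        # giving faster-than-exponential growth, until the hardware
        # ceiling flattens the curve.
        level = min(level + gain * level * level, hardware_limit)
        history.append(level)
    return history

for cycle, ability in enumerate(takeoff()):
    print(f"cycle {cycle:2d}: {ability:14.1f}")
[/code]
The output crawls along for a dozen cycles and then blows through every intermediate level in two or three more, which is the 'too fast for anyone to see it coming' part of the hard-takeoff argument.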
"A cult is a religion with no political power." -Tom Wolfe
Pardon me for sounding like a dick, but I'm playing the tiniest violin in the world right now-Dalton
Coincidentally, I just finished his The Age of Spiritual Machines a couple of weeks back. I was immediately struck by the fact that he extrapolates things like his "Law of Accelerating Returns" from a human-centric and generally poor understanding of evolutionary biology. In setting up the rest of the book, he makes an argument that's right out of the Intelligent Design debating manual -- he sees evolution as driven by some inexorable progress towards intelligence, rather than by the replicator power of genes.
While I found the rest of the book interesting, his premises are totally flawed.
- Admiral Valdemar
- Outside Context Problem
- Posts: 31572
- Joined: 2002-07-04 07:17pm
- Location: UK
Turin wrote:Coincidentally, I just finished his The Age of Spiritual Machines a couple of weeks back. I was immediately struck by the fact that he extrapolates things like his "Law of Accelerating Returns" from a human-centric and generally poor understanding of evolutionary biology. In setting up the rest of the book, he makes an argument that's right out of the Intelligent Design debating manual -- he sees evolution as driven by some inexorable progress towards intelligence, rather than by the replicator power of genes.
While I found the rest of the book interesting, his premises are totally flawed.
That's a fairly major flaw for a supposedly smart man. Given that evolution is important in so many fields, you'd expect he'd grasp that evolution has no end goal other than keeping genes alive. A dumb r-selected parasite can easily beat a smart primate, for instance.
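Valdemar's point is easy to demonstrate with a throwaway simulation (hypothetical numbers, my own, not his): under a fixed carrying capacity, selection rewards nothing but offspring counts, so a fast-breeding 'dumb' lineage swamps a slow-breeding 'smart' one.
[code]
# Toy selection model: evolution rewards replication rate, not smarts.
# Two lineages share a fixed carrying capacity; each generation the
# population shifts toward whichever leaves more offspring. All the
# numbers are invented for illustration.

def compete(generations=20, capacity=1_000_000):
    offspring_per_gen = {"dumb parasite": 3.0, "smart primate": 1.2}
    counts = {name: capacity / 2 for name in offspring_per_gen}
    for _ in range(generations):
        counts = {n: c * offspring_per_gen[n] for n, c in counts.items()}
        total = sum(counts.values())
        # Finite resources: rescale back to the carrying capacity.
        counts = {n: capacity * c / total for n, c in counts.items()}
    return counts

for name, population in compete().items():
    print(f"{name}: {population:,.0f}")
[/code]
After twenty generations the 'smart primate' lineage rounds to zero. Intelligence only wins if it cashes out as more surviving offspring, which is exactly the end-goal-free view of evolution that the 'inexorable progress towards intelligence' framing misses.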
Admiral Valdemar wrote:That's a fairly major flaw for a supposedly smart man. Given that evolution is important in so many fields, you'd expect he'd grasp that evolution has no end goal other than keeping genes alive. A dumb r-selected parasite can easily beat a smart primate, for instance.
I find that a surprising number of otherwise educated people still fuck up even the basic concepts of evolution. But maybe I oversimplified there. Unfortunately I just lent my book to someone, so I can't quote directly. But he makes the argument that the "salient events" in the development of evolutionary complexity occur on a hyperbolic curve. (I should point out he simply tags technological complexity onto the end of biological complexity in this evolutionary process.)
He describes the timeline of evolution via its "salient events." The problem is that his idea of salient events is wholly species-specific. It's like those displays you might see in a museum where the fish crawls out of the ocean, and then there's an amphibian, and then a reptile, and so on until at the end there's an animatronic caveman and a guy wearing a suit. It completely ignores the branching nature of evolution.
- Lord of the Abyss
- Village Idiot
- Posts: 4046
- Joined: 2005-06-15 12:21am
- Location: The Abyss
Bubble Boy wrote:"Emotional intelligence" strikes me as a contradiction in terms, and furthermore I'd be far more happy if our machines didn't have emotions.
Well, at least with humans, our emotions are part of our intelligence; people with few or no emotions due to brain damage have terrible judgement.
As for machines, if they are intelligent, emotions like compassion strike me as an excellent idea. A Skynet with compassion would have been rather less likely to slaughter billions of people.
- Winston Blake
- Sith Devotee
- Posts: 2529
- Joined: 2004-03-26 01:58am
- Location: Australia
JediToren wrote:Wasn't 2029 the year that the Terminators were sent back from?
My first thought too.
Lord of the Abyss wrote:Well, at least with humans, our emotions are part of our intelligence; people with few or no emotions due to brain damage have terrible judgement.
Actually, I think a lot of occupations require managing or ignoring emotions in order to make clear judgements in stressful environments. Soldiers and emergency workers are the first that come to mind. Powerful figures like police must ignore the emotional pull of power abuse and corruption.
Lord of the Abyss wrote:As for machines, if they are intelligent, emotions like compassion strike me as an excellent idea. A Skynet with compassion would have been rather less likely to slaughter billions of people.
Sure, if you can prevent certain emotions. If you can't, a Skynet that just got dumped by its AI lover is going to be much more likely to want the world to end.
[binary]Craaaawling in my skiiiin![/binary]
- Sidewinder
- Sith Acolyte
- Posts: 5466
- Joined: 2005-05-18 10:23pm
- Location: Feasting on those who fell in battle
- Contact:
Lord of the Abyss wrote:As for machines, if they are intelligent, emotions like compassion strike me as an excellent idea. A Skynet with compassion would have been rather less likely to slaughter billions of people.
IIRC, according to the book Emotional Intelligence, human emotions are "shortcuts," pre-programmed responses to certain events, there to cut down the response time to those events. As an example, the sight of a snake instinctively arouses fear and the urge to get away from it, sparing us the time it takes for the brain to consciously process the information and think up a response, time in which the snake may strike.
Humans NEED emotions because they're vital to survival. Machines do NOT. And if you want a machine to NOT slaughter humans, simply deny it the ability to do so, or program limits into its behavior, e.g., Asimov's Three Laws of Robotics.
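Both halves of that post fit in a few lines of hypothetical agent code (my illustration, not a real architecture): a reflex table that answers instantly where deliberation would be too slow, and a hard limit that vetoes forbidden actions no matter which path proposed them.
[code]
# Hypothetical agent sketch: emotion-style 'shortcuts' plus hard-coded
# behavioural limits. Purely illustrative, not a real robot design.

FORBIDDEN = {"harm_human"}        # crude stand-in for Asimov-style laws

REFLEXES = {                      # canned responses, no deliberation
    "snake": "retreat",
    "fire": "retreat",
}

def deliberate(stimulus):
    # Stand-in for slow, expensive reasoning (search, planning, etc.).
    return "investigate"

def act(stimulus):
    # Fast path first: a reflex trades flexibility for response time,
    # just like the snake example above.
    action = REFLEXES.get(stimulus) or deliberate(stimulus)
    # Hard limit: veto forbidden actions regardless of their origin.
    return "refuse" if action in FORBIDDEN else action

print(act("snake"))   # retreat: reflex shortcut, no thinking needed
print(act("noise"))   # investigate: falls back to deliberation
[/code]
The lookup table is the 'shortcut': it buys response time at the cost of flexibility, which is exactly the trade-off described above.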
Please do not make Americans fight giant monsters.
Those gun nuts do not understand the meaning of "overkill," and will simply use weapon after weapon of mass destruction (WMD) until the monster is dead, or until they run out of weapons.
They have more WMD than there are monsters for us to fight. (More insanity here.)
- Sidewinder
- Sith Acolyte
- Posts: 5466
- Joined: 2005-05-18 10:23pm
- Location: Feasting on those who fell in battle
- Contact:
MKSheppard wrote:Oh shit, we must destroy Skynet now!
No problem, just send Kusanagi Motoko from 'Ghost in the Shell'. Thermal-optical camouflage, LEET hacking skills, and the training to use those tools should get her past Skynet's defenses.
Please do not make Americans fight giant monsters.
Those gun nuts do not understand the meaning of "overkill," and will simply use weapon after weapon of mass destruction (WMD) until the monster is dead, or until they run out of weapons.
They have more WMD than there are monsters for us to fight. (More insanity here.)
- K. A. Pital
- Glamorous Commie
- Posts: 20813
- Joined: 2003-02-26 11:39am
- Location: Elysium
So what? Another prediction of AI emergence. That's not new; there are thousands of those predictions floating around.
So far none have been able to pass the Turing test, but it is probably only a matter of time before true AI arises.
There, there are churches, rubble, mosques and police stations; there, borders, unaffordable prices and cold quips
There, swamps, threats, snipers with rifles, papers, night-time queues and illegal migrants
Here, meetings, struggles, synchronised steps, colours, unauthorised gatherings,
Migratory birds, networks, information, everyone's squares crazy with passions...
...Tranquility is important, but freedom is everything!
Assalti Frontali
- Singular Intellect
- Jedi Council Member
- Posts: 2392
- Joined: 2006-09-19 03:12pm
- Location: Calgary, Alberta, Canada
Bubble Boy wrote:"Emotional intelligence" strikes me as a contradiction in terms, and furthermore I'd be far more happy if our machines didn't have emotions.
Lord of the Abyss wrote:Well, at least with humans, our emotions are part of our intelligence; people with few or no emotions due to brain damage have terrible judgement.
Really? Back up this assertion then.
Emotions tend to get in the way of logical and rational thinking. That's why we have so many crazy fundies and stupid people who "feel" god or other fictional shit that seriously compromises their thinking ability.
Lord of the Abyss wrote:As for machines, if they are intelligent, emotions like compassion strike me as an excellent idea. A Skynet with compassion would have been rather less likely to slaughter billions of people.
Or instead Skynet would have been sadistic and thoroughly enjoyed killing off humanity, perhaps even thinking up ways to ensure captured humans suffer even more before they are terminated.
Your emotion plea can be argued both ways; all that matters is the desired goal on the part of any particular AI.
And personally, I'd rather the AI be influenced by logic and reason, not useless, counterproductive and potentially very dangerous emotions.
- Flagg
- CUNTS FOR EYES!
- Posts: 12797
- Joined: 2005-06-09 09:56pm
- Location: Hell. In The Room Right Next to Reagan. He's Fucking Bonzo. No, wait... Bonzo's fucking HIM.
No, I want an AI to have nothing but love and compassion for humanity. Logic and reason are great until it suddenly decides that the most logical and reasonable thing to do would be to eradicate humanity, since we're not exactly logical and reasonable creatures as a whole.
We can program that shit in from the get-go and block the "bad" emotions like hate, anger, and aggression. Of course as the technology becomes more and more accessible, some asshole (probably me) will remove the emotional blocks to see what happens. Then we're all fucked. Unless that lone asshole is really well liked by the AI. Then everyone but him and anyone he likes is safe, but everyone else is fucked. Now would be a good time to send me money.

We pissing our pants yet?
-Negan
You got your shittin' pants on? Because you’re about to Shit. Your. Pants!
-Negan
He who can, does; he who cannot, teaches.
-George Bernard Shaw
- Singular Intellect
- Jedi Council Member
- Posts: 2392
- Joined: 2006-09-19 03:12pm
- Location: Calgary, Alberta, Canada
Flagg wrote:No, I want an AI to have nothing but love and compassion for humanity. Logic and reason are great until it suddenly decides that the most logical and reasonable thing to do would be to eradicate humanity, since we're not exactly logical and reasonable creatures as a whole.
Again, emotions are far too unpredictable. What's to stop a 'loving' AI from deciding it's best to kill off most of humanity so that the select few can live far better lives? Or deciding that all humans are experiencing/inflicting too much suffering in the world and should be humanely destroyed?
No, I say stick with the logical and reasonable AI, with built-in safeguards to prevent it from harming people if possible.
And even if it's not possible to build in such safeguards, I'd much rather try to discuss terms with a logical and reasonable AI than with an emotional one, wouldn't you?
- His Divine Shadow
- Commence Primary Ignition
- Posts: 12791
- Joined: 2002-07-03 07:22am
- Location: Finland, west coast
I think we're too worried about AIs going crazy and wanting to kill us all. What's different here compared to a human doing the same? It's not like we're going to make just one and give it access to every system in the world. We're probably going to have scores of them, each being its own self-contained persona. Some might work in defence, some might work with the elderly, and so forth.
Those who beat their swords into plowshares will plow for those who did not.
- MKSheppard
- Ruthless Genocidal Warmonger
- Posts: 29842
- Joined: 2002-07-06 06:34pm
In the Year of Darkness, 2029, the rulers of this planet devised the ultimate plan. They would reshape the Future by changing the Past. The plan required something that felt no pity. No pain. No fear. Something unstoppable. They created THE TERMINATOR
"If scientists and inventors who develop disease cures and useful technologies don't get lifetime royalties, I'd like to know what fucking rationale you have for some guy getting lifetime royalties for writing an episode of Full House." - Mike Wong
"The present air situation in the Pacific is entirely the result of fighting a fifth rate air power." - U.S. Navy Memo - 24 July 1944
"The present air situation in the Pacific is entirely the result of fighting a fifth rate air power." - U.S. Navy Memo - 24 July 1944
Bubble Boy wrote:"Emotional intelligence" strikes me as a contradiction in terms, and furthermore I'd be far more happy if our machines didn't have emotions.
OmegaGuy wrote:Having emotions is probably one of the only things that would keep them from killing us
Marvelous, how you assert that without any kind of justification whatsoever.
Ceci n'est pas une signature.