Admiral Valdemar wrote:Thought they just connected the neurons in the arm to the prosthesis as best they could.
There aren't any neurons in the arm that I know of; you must mean nerves. But no, that would have been rather useless in this case, as the guy was paralyzed from the neck down.
It was instead wired directly into the brain.
Lol, sorry, I meant nerves; I was a little sleep-deprived last night.
There are motor neurons, though, which carry the electro-chemical signals to the muscle tissue.
I think I know what happens to Niven's suiciding AIs. Think about it carefully, and you'll see it too. The answer is Extelligence. Not the opposite of Intelligence in the way that we contrast intelligent and stupid, but Intel Inside, Extel Outside. Extelligence is sort of like this board: interaction with other intelligences. Or, putting it in other words, giving intelligence something to be intelligent about.
Niven's AIs were most likely stuck in labs, not connected to large networks. They impersonated intelligence pretty well, but went mad because they were in isolation, and their only interaction was with pathetically slow creatures.
Manic Progressive: A liberal who violently swings from anger at politicos to despondency over them.
Out Of Context theatre: Ron Paul has repeatedly said he's not a racist. - Destructinator XIII on why Ron Paul isn't racist.
I would just put a simple timer on the AI routines. It can't go nuts if it's not awake while it's not receiving input. And it could still chew on complex equations even when it's 'asleep'.
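A minimal Python sketch of that timer idea (the function and the step counts are my own invention, not anything from the thread): the AI's interactive routine only runs during short awake windows, while offline number-crunching fills the sleep windows.

```python
def timed_ai_loop(think, background_task, cycles=3, awake_steps=2, asleep_steps=5):
    """Alternate short 'awake' windows (input processing) with longer
    'asleep' windows (offline number-crunching). A real system would gate
    the windows with a wall-clock timer; counted steps keep the sketch simple."""
    for _ in range(cycles):
        for _ in range(awake_steps):
            think()            # handle external input only while awake
        for _ in range(asleep_steps):
            background_task()  # chew on complex equations while asleep

# Toy demo: count how often each routine actually runs.
calls = {"think": 0, "bg": 0}
timed_ai_loop(lambda: calls.update(think=calls["think"] + 1),
              lambda: calls.update(bg=calls["bg"] + 1))
```

With the defaults above, the interactive routine runs 6 times and the background task 15 times per run, so most of the machine's time is spent 'asleep'.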
Valdy, neuron-sim AIs wouldn't have any prerogative to enslave humanity in a Matrix-type way. No one would. But you wouldn't need AI to do it anyway. We're already getting close to perfectly realistic 3D; the issue is more one of texturing objects properly than of computing power. We'll have full realism by 2010, and by then we'll probably also have computers that can render it in real time as well (even if they take up a room), so you'll have to worry about baseline humans creating the Matrix, not robots.
Besides, you can no more program a neuron-sim AI than you can program a person.
Yes! We have a soul! But it's made of lots of tiny robots.
Oh really? And your proof for this is... what exactly? Maybe when you become a world-respected engineer in cybernetics and AI, create the most intelligent robots in the world utilising distributed management systems, and write a bestselling book, then you may be able to criticise.
Until that point (which is as far away as far can be), you may kindly shut the fuck up.
Hmm...let's see what his fellow researchers have to say about him, shall we?:
Richard Reeve: "It's difficult to describe how frustrating it is in the field seeing this man being our spokesman."
Joanna Bryson: Idiot. (She actually seems sane to me, so I'll trust her over our good Professor any day.)
Blay Whitby: Unrealistic.
He may be well-known, but he doesn't sound very respected to me...
...This would sharpen you up and make you ready for a bit of the old...ultraviolence.
UltraViolence83 wrote:Hmm...let's see what his fellow researchers have to say about him, shall we?:
Richard Reeve: "It's difficult to describe how frustrating it is in the field seeing this man being our spokesman."
Joanna Bryson: Idiot. (She actually seems sane to me, so I'll trust her over our good Professor any day.)
Blay Whitby: Unrealistic.
He may be well-known, but he doesn't sound very respected to me...
What I see is jealousy; it looks like they don't like him because of the publicity thing (is this some kind of warped moral code they're operating on?). But it's not like their opinions matter, only the results, which as far as I've seen have been good.
It's not unknown for scientists to try to tear down others in the field out of pure jealousy or other dislike.
Those who beat their swords into plowshares will plow for those who did not.
UltraViolence83 wrote:Hmm...let's see what his fellow researchers have to say about him, shall we?:
Richard Reeve: "It's difficult to describe how frustrating it is in the field seeing this man being our spokesman."
Joanna Bryson: Idiot. (She actually seems sane to me, so I'll trust her over our good Professor any day.)
Blay Whitby: Unrealistic.
He may be well-known, but he doesn't sound very respected to me...
What I see is jealousy; it looks like they don't like him because of the publicity thing (is this some kind of warped moral code they're operating on?). But it's not like their opinions matter, only the results, which as far as I've seen have been good.
It's not unknown for scientists to try to tear down others in the field out of pure jealousy or other dislike.
Very true. However, he's had spectacular failures in the past. For instance: once he inserted a chip into his back so a robot he built would follow him as he ran a marathon. After a short distance, about 15 yards/meters, it swerved off course and crashed. I know you shouldn't disregard someone's entire work over a few failures, but some of the things he does just don't make any sense.
He put a chip in his arm for 9 days; what it did was send signals to sensors in his house to open/close doors and turn on lights as he entered a room. He then goes on to claim he's a "cyborg" because he put some glorified version of a remote control in his body.
I think he could do better if he focused all his energies on more realistic applications, like artificial limbs and things to help people.
By the way, I have a huge anti-cyborg bias, as well as one against AIs. I'd fit right into the Star Wars universe.
Arrow Mk84 wrote:Shadow, is this guy you're talking about Danish? I've seen something like what you're discussing on Discovery once, with a Danish guy paralyzed from the neck down; really cool stuff.
It doesn't say the name, only that it was a 53-year-old man, and that only 5 neurons were connected.
An acquaintance of mine works in a computer lab with neural networks. Those would seem to be our best bet to achieve computerized sentience, if that ever happens.
Even then, neural networks are not going to be dozens of orders of magnitude faster than our brains. If anything, I expect the first generations would actually be 'slow' compared to the human norm. But once they 'learn' something, they can then reduce the response time by cutting down the number of calculations required (i.e. 'muscle memory').
But dramatically faster than human brains? Not for a long time yet.
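That 'muscle memory' idea of trading calculation for stored answers can be sketched as a cache sitting in front of an expensive computation. Everything here (the class, its names, the toy stimulus function) is invented for illustration:

```python
class MuscleMemory:
    """A cache in front of an expensive computation: the first exposure to
    a stimulus pays the full calculation cost, repeats take a fast path."""

    def __init__(self, compute):
        self.compute = compute   # the slow, full evaluation
        self.learned = {}        # stimulus -> stored response
        self.calculations = 0    # how many slow evaluations we paid for

    def respond(self, stimulus):
        if stimulus not in self.learned:
            self.calculations += 1                     # slow 'thinking' path
            self.learned[stimulus] = self.compute(stimulus)
        return self.learned[stimulus]                  # fast 'reflex' path
```

In real Python code the standard library's `functools.lru_cache` does the same job; the explicit dictionary just makes the learned/unlearned distinction visible.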
"Okay, I'll have the truth with a side order of clarity." ~ Dr. Daniel Jackson.
"Reality has a well-known liberal bias." ~ Stephen Colbert
"One Drive, One Partition, the One True Path" ~ ars technica forums - warrens - on HDD partitioning schemes.
kojikun wrote:GGS, the singularity has very little to do with computers designing software. The singularity is when a CAD program has the ability to automate the design process of a microchip.
It has everything to do with computers designing software!
So this software has designed a radically new CPU. Contradiction: nothing will work on it! Now suppose the radically new CPU uses the same instruction set as normal CPUs but implements them differently. Then there is no way for it to stray outside its proposed purpose!
If it doesn't implement the same instruction set as the previous CPU, then that CPU design is utterly worthless and wouldn't get used.
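The compatibility point can be sketched with a toy stack-machine instruction set: two completely different implementations of the same instructions run the same program to the same result. The ISA and all the names here are invented purely for illustration:

```python
# Toy ISA: PUSH n, ADD, MUL. Programs are lists of instruction tuples.

def run_interpreted(program):
    """Old CPU: a straight one-instruction-at-a-time interpreter."""
    stack = []
    for op, *args in program:
        if op == "PUSH":
            stack.append(args[0])
        elif op == "ADD":
            stack.append(stack.pop() + stack.pop())
        elif op == "MUL":
            stack.append(stack.pop() * stack.pop())
    return stack[-1]

def run_compiled(program):
    """'Radically new' CPU: translates the program once into a list of
    closures, then executes them. Different internals, same instruction set."""
    steps = []
    for op, *args in program:
        if op == "PUSH":
            steps.append(lambda s, n=args[0]: s.append(n))
        elif op == "ADD":
            steps.append(lambda s: s.append(s.pop() + s.pop()))
        elif op == "MUL":
            steps.append(lambda s: s.append(s.pop() * s.pop()))
    stack = []
    for step in steps:
        step(stack)
    return stack[-1]

# One program, unchanged, runs on both implementations: (2 + 3) * 4
prog = [("PUSH", 2), ("PUSH", 3), ("ADD",), ("PUSH", 4), ("MUL",)]
```

Because both back-ends honour the same instruction set, any existing program keeps working; only a CPU that broke the instruction set would strand the old software.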