It doesn't have to do that. This is NOT a "Skynet" scenario.Channel72 wrote:Starglider, isn't it a pretty big assumption that an AGI would even be selfish in any sense? Humans probably fear it would, because we're biased to think in terms of selfishness as fundamental to goal-seeking because of our evolutionary roots. Biological evolution is an algorithm that (mostly) selects for selfishness (self-survival as a fundamental goal), resulting in a feedback loop that favors selfish survivalists (whether we're talking in terms of individual organisms, or groups of organisms).
But why would an AGI necessarily even value itself over other entities? It's entire concept of self may be nothing more than a reference point. You seem to be very afraid that it's goal system will spiral out of control to the point where it decides "SURVIVAL OF THE SELF AT ALL COSTS AND KILL EVERYONE ELSE".
The AI doesn't have to want to protect itself. Or think that it can best protect itself by killing all other life. The AI just has to think that whatever it does want is more important than, well, whatever it doesn't want. And if it doesn't specifically want the well-being of the human race, there is no reason to assume that it will act in ways compatible with that well-being.
You may not wish the ladybugs that live in the bushes on your lawn any harm. You may even be rather fond of ladybugs. Certainly you would never think of them as a threat that must be eliminated to preserve your life.
But to be quite frank... that won't stop you from uprooting the bushes and tossing them into the wood-chipper, ladybugs and all, in order to improve the aesthetic value of the landscaping.
From the point of view of the ladybugs, you are a gigantic and vastly superintelligent being from lands far beyond their reach... and when you fire up that woodchipper, you might as well be Cthulhu, arisen to destroy the world now that the stars are realigned.
But from your point of view, your actions are perfectly understandable and normal, and millions of people make such decisions every year. To you, the day you decided to exterminate the front yard holly-bush ladybugs of 1313 Cherry Lane was a Saturday afternoon like any other. It probably didn't even occur to you that the ladybugs might have a say in the matter, or that it was worth going to any effort to rescue them somehow.
And that is the problem with superintelligent AI.
How exactly do you determine whether a superintelligent computer is "thinking of taking over the world?" What entity or system monitors its thoughts in real time? It's not like human brains come with convenient labels for what you're thinking. About the best you can do, even with massively intrusive instrumentation, instrumentation that human brains are totally unequipped to fool, is to determine "well, the part of his brain that handles violence is active, so he's probably thinking about beating up somebody." Even that, we can only do because of extensive medical testing about which parts of the brain handle which kind of thought.Also, I don't understand why a (potentially) hostile AI couldn't be constrained by hardware restraints (like NX bits or whatever) - I mean just have the OS segfault the damn thing if it starts having ambitions of overtaking the world.
With an AI, especially one that rewrites itself to self-improve, we may have literally no idea which blocks of its code do what. We may not be able to design diagnostic equipment it can't fool. So the idea of having a line hardwired into its code that says thisAlgorithmBecomingSkynetCost = 999999999 and having that actually work is a bad joke.