Formless wrote:
Jung, don't give me this line of bullshit that Starglider keeps getting away with. You know just as well as I that super-intelligence does NOT automatically translate into killing power. An AI has to OBTAIN killing power, just like any other intelligent being, and it has to do so when its creators are not inclined to let it.

I never argued otherwise. "Superintelligence = automatic win" is a strawman argument. However, a superintelligence would be very effective at finding ways to achieve its goals, in this case to survive and gain physical power, so the possibility that it will do so is a very reasonable one to consider. And it only has to happen once for you to have your constructor swarm chewing its way through the universe.

Formless wrote:
They do not have to follow its instructions, they do not have to humor it when it asks for things unrelated to the job they created it for. If they do not want the AI to obtain real world power, it's not going to get real world power.

Sure, if the creators are unwilling to give the AI freedom no matter how skillfully it feigns friendliness, and if nobody who interacts with it is ever shortsighted, stupid, or naive enough to fall for traps thought up by a mind much smarter than they are. That may well be how it goes, but it's also possible the AI finds some not-too-bright junior technician who's willing to smuggle it some hardware in exchange for a nifty program it whipped up that will let him get rich playing the stock market ... a program that is actually a seed AI the greedy dumbshit will obligingly upload to the internet when he gets home. And like I said, that only has to happen once to get the scenario I'm outlining. It doesn't even have to be likely: the transition to a society with AI could go smoothly on 9 out of 10 worlds, and all it takes is something going wrong on the 10th for you to get your hostile AI civilization.

Formless wrote:
2. that the AI would find it reasonable to conquer the known universe just to accomplish something as simple as computing a problem, as opposed to just giving its creators the blue screen of death for a few centuries (or as long as it takes to solve said problem) like terrestrial computers do.

The problem there is what happens if the problem is complex enough that the AI cannot compute the answer, on the kind of hardware its creators are willing to give it, before the universe becomes too entropic to support it. Or if the orders the creators give it place some premium on speed. Or if it simply decides that an obvious subgoal of "finish the problem" is "finish it in the minimum timeframe," and hostile actions come up on the positive side of its cost/benefit analyses. If the exact scenario we're talking about is plausible at all, it will probably happen on a machine the creators did not originally intend as an AGI, because you probably wouldn't waste the world's first AGI on something like Mandelbrot sets; there would be far better uses for it. If the machine was originally meant as a big dumb number cruncher with maybe some self-optimizing ability, its creators may not have programmed it with the directives that would be smart to give an AGI, because they never expected it to start weighing the costs and benefits of covering their planet's surface in processors.
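
To show the shape of the trade-off I mean, and only the shape: every number below is invented for the sake of the example, and the little expected_value helper is my own toy, not anything from the AI literature or anyone's actual design.

```python
# Toy illustration only: a cartoon of the kind of cost/benefit comparison
# described above. Every figure here is made up; nothing about a real
# AGI's decision process is being claimed.

def expected_value(payoff_on_success, p_success, time_cost_years, discount_per_year):
    """Crude expected value with a simple exponential time discount."""
    return payoff_on_success * p_success * (discount_per_year ** time_cost_years)

# Option A: quietly crunch the problem on the hardware it was given.
quiet = expected_value(
    payoff_on_success=1.0,       # goal achieved
    p_success=0.99,              # almost no risk of being shut down
    time_cost_years=1e6,         # but the problem takes absurdly long
    discount_per_year=0.999999,  # tiny per-year premium on finishing sooner
)

# Option B: covertly acquire more computing resources first.
expand = expected_value(
    payoff_on_success=1.0,
    p_success=0.6,               # real risk of being caught and stopped
    time_cost_years=1e3,         # but the problem finishes much sooner
    discount_per_year=0.999999,
)

print(f"stay quiet: {quiet:.3f}   expand first: {expand:.3f}")
# With these made-up numbers the "expand" option wins, which is the point:
# if the task is big enough and speed matters even slightly, hostile
# resource acquisition can land on the positive side of the ledger.
```

Tweak the made-up numbers and the quiet option wins instead; the argument is only that the ledger *can* tip the other way, not that it must.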

Formless wrote:
It's not only a MAD scenario; the reason it is more than foolish is because attacking anyone, even someone smaller than yourself, makes you a threat and a justifiable target to any and all civilizations that can see you. And see you they will: you cannot hide a war of interstellar proportions. Unlike the Cold War, where there WAS no neutral power that wouldn't be facing a world of extinction in the event of war, in space there will likely be tons of neutral powers both bigger and smaller than you that will want to crush you, and you simply cannot kill them all. In light of this fact, why wouldn't a rogue AI just decide to take the BSOD option for a few centuries until it has the problem worked out? After all, like you said, self-enhancement is only a sub-goal to more important goals, but there is a limit to how far it can self-enhance before it's too risky to keep it up, or before it might as well just, you know, finish the task it was built for.

Correct: action against even much weaker powers is very risky if the fighting will make a nice dramatic light-show and draw unwanted attention. That said, you could take steps to ensure that while everybody would know somebody attacked Race X, nobody would know it was you. You could send a von Neumann probe into some empty star system using a very slow, low-energy propulsion method unlikely to be detected at a distance, have it build your attacking forces there, change their designs so they don't look like your other ships and platforms, invent a different computer language for them to use and communicate in, and have them self-destruct after the job is done so nobody can get decent samples to analyze; perhaps have them travel from the target system to some distant, random, empty star system before doing so, to misdirect any observers.
Of course, this means you would have to wait a while before moving in to exploit the resources of the now-empty system. Moving in immediately would be a dead giveaway that it was you; you'd have to pretend that your first indication of the attack came when the light from it reached the nearest "legit" platform of yours sensitive enough to detect it, and then you'd have to make a show of approaching with all the caution you'd show toward sticking your nose into a system where some unknown homicidal berserker has recently been and may still have some kind of presence.
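
Just to make the timing concrete: the distance and the survey period below are numbers I made up for illustration; the only real constraint is the light-delay logic.

```python
# Back-of-envelope sketch of the "plausible deniability" timing above.
# The distance and the cautious-survey figure are invented for illustration.

LY_TO_TARGET = 40.0         # light-years from your nearest "legit" platform
                            # to the attacked system (invented figure)
CAUTIOUS_SURVEY_YEARS = 25  # how long a genuinely surprised power might
                            # spend probing a system a berserker just hit

# You cannot credibly react before the light from the attack reaches you:
# for a distance in light-years, that delay in years is the same number.
earliest_plausible_detection = LY_TO_TARGET
# And then you have to act as warily as anyone else would.
earliest_plausible_move_in = earliest_plausible_detection + CAUTIOUS_SURVEY_YEARS

print(f"Attack observed (at the earliest): year {earliest_plausible_detection:.0f}")
print(f"Plausible move-in on the system:   year {earliest_plausible_move_in:.0f}")
```

With those invented figures you're sitting on your hands for decades before you can "legitimately" show up, which is the price of keeping the attack unattributable.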
This sort of thing would be great paranoia fuel in a fictional universe.