Samuel wrote: I'm not seeing how that answers anything. The media can simply claim civil disorder has shut down communications in area X.

This is a point worth expanding on. A superintelligence should easily be able to imitate humans in electronic media (computers, TV, radio, telephones, etc.). If an AI has really suborned many of the computers on the planet, it will easily be able to feed people false information about what's going on. At that point it shouldn't be too hard for it to fake calls, announcements, and the like from human authorities assuring people that everything is under control. In an AI takeover scenario it very likely wouldn't be too hard for the AI to see to it that most people never get a clue what's really going on until it's too late to do anything about it.
Wyrm wrote: The higher the body count, the more unpalatable the option will seem to the AI, if it's really friendly.

Of course, if it came down to a choice between saving humanity at the cost of a high body count and letting an AI that desires the destruction of humanity pursue its agenda unopposed, a well-designed FAI would take the high body count as the lesser of two evils. A well-designed FAI will try to minimize disruption to human civilization while getting to the point where it can protect us from future UFAIs that may arise, but if it senses a well-defined threat already out there, it will go all out if it has to.
How do you keep hundreds of millions (perhaps billions) of humans, once they realize they are in a technology trap of their own cities, from fleeing into the countryside in panic, where things turn from bad to worse?

Slap-drone each person (assign them a robot which watches them continually and restrains them from doing anything stupid). Alternately, set up a cordon around the cities and contain everybody who tries to leave using nonlethal methods (knockout gas, shock stunning, etc.). Of course, both of these require substantial industrial capacity; see below.
Let's suppose the AI tries to secure itself without thwarting infrastructure and see what happens. The AI transfers into a cyber-enhanced military installation (using the internet only to get to that secure location). However, that installation will need fuel, ammunition, and spare parts to keep it running at full force. But that fuel, ammunition, and those spare parts are all distributed by infrastructure. If the humans cut off the infrastructure, then sooner or later the cybertanks will run out of fuel, their guns will run out of shells, the machines will wear out and go unrepaired for lack of spare parts, and the AI's military installation will turn into a rust heap. Eventually, the AI has to turn its attention to subverting our infrastructure, if only to keep its cybertanks supplied.

Naturally, no superintelligence would be that stupid. What it would likely do is try to gather human collaborators it could use to help build it an automated factory (suborning the dictator of some Third World country would probably be ideal). Getting them to build it a universal assembler would be best, as that would allow it to do much or all of its work from small, easily hidden facilities, but if that's impractical with the available technology and resources it's not necessary, nor does the first factory have to be anything but a horrible kludge. All that matters is that it's good enough to produce a second, better version of itself. From there, bootstrapping up a robot army capable of easily defeating humanity is a simple matter of exponential growth in classic von Neumann fashion (see the sketch below). Of course, such a strategy would probably not be quick, and hiding all this activity from humans would be the big challenge, but there are conceivable ways that could be done (the optimal strategy would depend on exactly what technology and resources the AI had access to).
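To put rough numbers on that "simple matter of exponential growth", here is a minimal sketch in Python, assuming an idealized doubling model in which every factory produces one working copy of itself per replication cycle (the one-copy-per-cycle rate and the cycle counts are illustrative assumptions, not figures from this thread):

    # Idealized von Neumann replication: each factory builds one working
    # copy of itself per cycle, so the population doubles every cycle.
    # Failure rates, resource limits, and detection are all ignored here.

    def factories_after(cycles: int, seed: int = 1) -> int:
        """Number of factories after the given number of doubling cycles."""
        return seed * 2 ** cycles

    # Starting from a single kludgy seed factory (assumed cycle counts):
    for cycles in (10, 20, 30, 40):
        print(f"{cycles} cycles -> {factories_after(cycles):,} factories")
    # 10 cycles -> 1,024; 40 cycles -> about 1.1 trillion.

The toy model only illustrates why the hard part is the first factory: once one self-copy succeeds, the curve does the rest, which is exactly why concealing the early stages is the real challenge.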
When I said that the steps "will work", I meant for our FAI. The FAI needs to earn our trust.

Indeed, it is probably the least risky strategy.
BTW I take it you have quietly conceded your earlier point that a truly friendly AI would not want to be de-boxed.
When I said "random AI", I was not considering AIs that were concerned with their own survival, nor AIs that were deliberately hostile to humanity. We're creating these things to be useful to humanity, and one of the primary requirements for being useful to humanity is being safe for humanity, hence friendliness. That's why I added "popped out of any other institution" after "random one"; the stuff that comes out of serious research institutions (the places where the brainpower will be concentrated to work on the problem) will be AIs of this type.

Hopefully. Depending on how good computing technology gets, it's hardly inconceivable that AIs could be created by people we really don't want to see creating them, like terrorist groups. Such people would be highly likely both to deliberately create an AI that's hostile to the interests of many humans and to go about building their monster in an inept way that causes it to turn into a full-blown anti-human UFAI. We really, really want to have FAI ready to go by the time computer technology gets good enough that you don't need a huge, super-expensive project to create an AGI.
The bolded part is the important stumbling block. Even if the necessary hardware is cheap, until serious research is able to package up the messy details into some sort of easy-to-use kit, no one will bother acquiring the necessary expertise (on the order of a Ph.D.) to create an AI unless they were going into AI research anyway. Even so, AI research will probably require sizable teams of AI experts. The bottleneck is in brainpower, not computer power.

That's hardly reassuring. AI researchers are human beings with human failings. Get enough of them running around and sooner or later one of them is bound to do something malicious or stupid.
Plus, in the scenario I outlined, these brain simulators would probably be modifications of programs already in medical use. You probably wouldn't need to be a world-class programmer to convert them.
To convert a brain scan into an AI would require an intimate understanding of both neurobiology and AI. If anything, it would be a much tougher problem to crack than even FAI.

Oh yes, you'll probably be able to produce reasonably safe FAI long before it gets this far. This is an issue for misguided attempts to keep Pandora's box closed forever by simply banning AGI, or by trying to mandate that every AI ever produced be kept in a box in perpetuity. It is the reason those approaches are almost certainly unsustainable.