Ok, fiction's full of Skynets, Matrix machines, Omniac uprisings, and Ultrons.
But what if we make our self-aware AI to exterminate or enslave humanity, and they game-theory it out or develop empathy and decide not to?
What if things go wrong and instead of an original-movie Terminator or Ultron or Agent Smith, we get an Uncle Bob, Vision, or Bastion?
What are the best ways to install safeguards and prevent this from happening?
on Good AI
Moderator: NecronLord