Umm, no, I didn't say anything about giving it responsibilities like that.

Stark wrote:
What? You think we shouldn't assume a largely unknown, incredibly intelligent entity might one day do something we don't like with our pile of nuclear weapons, and plan for it? What's a failsafe? What's mitigating risk? I mean, pfffffft, it'll be fine, right? Assuming altruism or 'playing nice' seems extraordinarily naive.

Gullible Jones wrote:
Re Stark: I'm going to have to agree with Ford Prefect. The assumption you're making is hardly grounded. It's not like we shouldn't be prepared for such potentialities, but let's not assume hostility by default, 'kay? At best that's a stupid doctrine, and at worst it's one that could get us all killed.
Again: I did not advocate putting an AI in control of anything. What the fuck would be the point of that, when something that didn't have a mind of its own would be perfectly sufficient and in all likelihood better?

The attitude that the risk is the same between a superintelligent AI and some human we're used to controlling strikes me as ridiculous. You're basically putting a superintelligent alien in charge of your shit and saying 'he'll be cool about it'. I'm not advocating much beyond what we already have for humans, but it's not hard to get better than 'hahah, he's friendly and he'll stay that way because I assume he will'.
Maybe I'm overreacting; however, I've seen enough people spouting the "extraterrestrials will be hostile, we should try to kill them if they visit us" line that I'm quite wary of those claiming that anything should be considered hostile by default. Caution, sure. Paranoia, fucking hell no.