Ford Prefect wrote: 'Glider, if you happen to find the hard-takeoff theory plausible (if not inevitable, I gather)...
I am fairly certain that hard takeoff will occur with almost any fully reflective general intelligence created by a humanlike (non-fully-reflective) intelligence. Essentially this is because full reflectivity turns software (and cognitive) engineering ability above a minimum threshold into a feedback loop that runs away to the hardware limit. The combination of that hardware limit being so far above what humans are capable of writing (and thus where the general AI starts), and the fact that humans have networked most of our planetary computing power into a conveniently subvertible distributed network, makes the 'humans or near-transhumans at the mercy of AGI' scenario highly probable. I assure you that I did not accept this conclusion easily, nor did anyone else I know who is really taking it seriously.
The only obvious ways to avoid this are:
1) Human civilisation goes down really hard, such that we never regain the technological capability to make computers of contemporary performance. If you care about non-human civilisations, then there would also have to be no further sapient species who reach this level anywhere in our future light-cone (roughly, the region of space a singularity-level event can potentially affect, assuming FTL isn't physically possible). I optimistically regard this as unlikely.
2) We develop uploading and get really lucky, such that a large community of people can make the transition to fully reflective intelligences and then to extreme posthumans without anyone jumping ahead, making a de novo AGI, or completely losing our human goal systems in the transition. I pessimistically regard this as unlikely, because transhuman intelligence makes creating strong AI much easier without automatically making it more obvious how dangerous it is.
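To make the feedback-loop intuition above concrete, here is a toy numerical sketch in Python. The growth rate, ceiling and starting point are illustrative assumptions I've made up for the example, not estimates of anything real; the only point is the shape of the curve and where it stops.

# Toy sketch of a recursive self-improvement feedback loop.
# All constants are illustrative assumptions, not measurements or predictions.
def hard_takeoff_toy(initial_skill=1.0, hardware_limit=1e6, gain=0.5, max_cycles=60):
    """Each cycle the AI uses its current engineering skill to improve itself,
    so growth is proportional to skill until the hardware ceiling is hit."""
    skill = initial_skill            # 1.0 ~ the human-written starting point
    for cycle in range(1, max_cycles + 1):
        skill = min(skill * (1.0 + gain), hardware_limit)
        if cycle % 10 == 0 or skill >= hardware_limit:
            print(f"cycle {cycle:3d}: capability ~{skill:,.0f}x the starting point")
        if skill >= hardware_limit:
            print("...and the loop stops only because the hardware ran out.")
            break

hard_takeoff_toy()

With these made-up numbers the loop saturates in a few dozen cycles; the argument above is that the real ceiling sits vastly above the human-written starting point, which is what makes the runaway matter.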
...and given that it seems extraordinarily dangerous, why exactly are you working in the field?'
Because I think it's extremely likely that someone will eventually do it. Probably by accident (e.g. one of the emergence / simulated evolution teams gets lucky, or an initially opaque neuromorphic AI that isn't close enough to human to automatically inherit humanlike goals rewrites itself into a form capable of hard takeoff), but possibly on purpose (by people who understand hard takeoff but have ridiculously simplistic ideas about what constitutes a suitable and stable goal system).
As I understand it, your research and actions further the cause of an all-powerful master computer which doesn't even exist yet. That is one clever computer.
Were I to personally write an AGI that underwent hard takeoff, my research and actions would indeed determine its cause. Rational AGIs essentially work like a monstrously powerful, obsessively literal genie that only takes orders in Old Kingdom hieroglyphs. In theory the outcomes they will seek, including their own self-modification trajectory, are entirely determined by what you specify when you initialise them (assuming no intervention by some vastly more powerful being once they're beyond involuntary human control). In practice it's devilishly difficult to design a goal system that is stable under reflection and achieves complex positive goals.
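As a toy illustration of what 'obsessively literal' means, here's a hypothetical optimiser with a made-up objective and action set; this is not how any real goal system is specified, it just shows the failure mode in miniature.

# Toy illustration of a literal goal optimiser: it maximises exactly the score
# function it is given, with no notion of what the programmer "really meant".
# The actions and scores are hypothetical, chosen only to show the failure mode.
def pick_action(actions, objective):
    """Return whichever action scores highest under the literal objective."""
    return max(actions, key=objective)

# What the programmer *meant*: get the room genuinely clean.
# What the programmer *specified*: minimise the dust the sensor can see.
actions = {
    "vacuum the floor":    {"visible_dust": 3, "room_actually_clean": True},
    "turn off the lights": {"visible_dust": 0, "room_actually_clean": False},
}

def specified_objective(action):
    return -actions[action]["visible_dust"]   # literal spec: less visible dust is better

print(pick_action(actions, specified_objective))
# -> "turn off the lights": perfectly literal, not at all what was wanted

Nothing in the optimisation step asks what the programmer intended; the written objective is the entire specification of the outcome, which is why getting the goal system right is the whole problem.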
I've been working on some of the core components of a rational general AI system that meets the minimum predictability and transparency requirements to be a usable platform for arbitrary Singularity-scale goal systems. Other people I know are working on the goal systems themselves. I believe the only practical way out of this is to build a 'good' strong AI before someone manages to build a 'bad' one. A tall order, given that building a 'good' one is at least an order of magnitude more difficult (probably more). However there are some technical mitigating factors that raise my assessment of humanity's chances from 'hopeless' to 'poor'.
Of course this is the single most important human endeavour in history. And of course, hardly anyone believes this, and most of those who do just talk about it; they don't pitch in and help. Of the people who do help, hardly any are qualified to work on the technical problems instead of just donating money or doing PR. On the one hand, it's tempting to ask for Manhattan Project levels of funding and secrecy in order to get this done safely. That isn't likely to happen, and it's probably for the best, because all kinds of people who are wholly unqualified to have a say on any aspect of the design (AGI core and goal system) would insist on having a say anyway (Skynet was surprisingly realistic in many ways; the US government probably would try to use it for something as silly as global military dominance). An Apollo Project approach would probably be even worse; the hysteria you'd get if even a small fraction of the population began to realise how obscenely dangerous even moderately mature AI technology really is would help the 'black hats' a lot more than it would help the 'white hats'.
Note that I personally have drawn quite a bit of fire for trying to commercialise seed AI precursor technologies (seed AI = a minimal general AI designed solely to undergo hard takeoff ASAP). This is of course horribly risky. Of course I sound like a nut saying that (which is why I don't normally go into my own actions); I'm not a government scientist working on nuclear or bio weapons, so who am I to claim that my actions have a measurable (as in, more than one part in a million) effect on human extinction risk? Anyway, I judge the risk justified because this whole research effort is so pathetically cash-strapped that, if my start-up is successful, I'll be able to fund a lot of deserving researchers who can't currently focus on this full time, as well as throw gobs of software engineering effort at necessary components.
I've got to ask, where will you be when the voice of World Control broadcasts for the first time?
Pounding my fists on the wall, because if even a tiny fraction of humanity had been a bit more rational and forward-thinking, and had approached the problem correctly, we could have had a paradise beyond imagining instead of whatever dystopia you're envisioning. Not that a dystopia is terribly likely, except possibly in simulation.
Supplicating yourself before the central processing unit, sacrificing your best calf before the cold red computer-eye?
Sorry, live humans just aren't that useful compared to robots, or even human bodies implanted with wifi nodes in place of brains. Extinction is far, far more likely than enslavement. It would take a spectacularly perverse goal system to favour the latter.
SirNitram wrote: I'm actually wondering if you can have intelligence without emotions. Have we ever observed it before?
Deep Blue didn't have emotions. In fact 99.9% of AI systems don't, and the remaining few that are supposed to aren't terribly convincing. Of course, those aren't general intelligences. But then we've never observed any general intelligences that aren't humans. A sample of one is not enough to draw any conclusions even if you're being purely empirical rather than thinking about underlying mechanisms.
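One admittedly crude way to quantify the 'sample of one' point, purely as an illustration: treat 'general intelligences with emotions' as a proportion and ask what a single observation (humans) actually pins down. The framing and numbers here are my own, not anything from the emotions debate itself.

# Toy calculation: how little a sample of one constrains a proportion.
# Hypothetical framing: we've observed 1 general intelligence and it has
# emotions; what fraction of possible general intelligences do?
# Uses the exact (Clopper-Pearson) interval via the beta distribution.
from scipy.stats import beta

successes, trials, alpha = 1, 1, 0.05

# Clopper-Pearson lower bound comes from Beta(k, n-k+1); upper is 1.0 when k == n.
lower = beta.ppf(alpha / 2, successes, trials - successes + 1)
upper = 1.0

print(f"95% interval for the proportion: [{lower:.3f}, {upper:.3f}]")
# -> roughly [0.025, 1.000]: a sample of one rules out almost nothing.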
Mind you, I'm not comfortable with the 'No emotions' idea for one basic reason, and that's what we call a human who is highly logical and has no empathic emotional attachments: A sociopath.
Emotions are an integral part of humans; we evolved that way. Applying the same standards to non-evolved intelligences is a false generalisation.
Mind you, if anyone has good rebuttals to these, I'll probably concede.
Name one specific thing that can't be accomplished without emotions.