Machines 'to match man by 2029'


User avatar
Singular Intellect
Jedi Council Member
Posts: 2392
Joined: 2006-09-19 03:12pm
Location: Calgary, Alberta, Canada

Post by Singular Intellect »

OmegaGuy wrote:
Bubble Boy wrote:"Emotional intelligence" strikes me as a contradiction in terms, and furthermore I'd be far more happy if our machines didn't have emotions.
Having emotions is probably one of the only things that would keep them from killing us
Yeah, right, because any human that's killed others never had emotions, right? :roll:
User avatar
Rye
To Mega Therion
Posts: 12493
Joined: 2003-03-08 07:48am
Location: Uighur, please!

Post by Rye »

Bubble Boy wrote: Really? Back up this assertion then.

Emotions tend to get in the way of logical and rational thinking. That's why we have so many crazy fundies and stupid people who "feel" god or other fictional shit that seriously compromises their thinking ability.
Why do you think emotions exist, out of interest?
EBC|Fucking Metal|Artist|Androgynous Sexfiend|Gozer Kvltist|
Listen to my music! http://www.soundclick.com/nihilanth
"America is, now, the most powerful and economically prosperous nation in the country." - Master of Ossus
User avatar
Singular Intellect
Jedi Council Member
Posts: 2392
Joined: 2006-09-19 03:12pm
Location: Calgary, Alberta, Canada

Post by Singular Intellect »

Zuul wrote:Why do you think emotions exist, out of interest?
Obviously they are useful from an evolutionary perspective. It's far more useful to feel fear instantly (thus starting the fight-or-flight reaction) in the face of a predator than to have the slow human intellect try to determine whether it's a threat or not. Emotions like love ensure the well-being of offspring and significant others, ensuring the survival of the species.

These can be useful traits for survival, but they are not necessary. I somehow doubt organisms like the cockroach have any real 'emotions', nor do they require them for their extremely successful survival record.

Nor would any AI require emotions in order to survive and multiply, and as demonstrated with humans, they have a habit of getting in the way of logic and reason.

Quite frankly, I wouldn't be the least bit surprised if an artificial intelligence declared emotions a primitive evolutionary trait that exists simply because the human mind lacks the processing power that would render them irrelevant.
User avatar
Starglider
Miles Dyson
Posts: 8709
Joined: 2007-04-05 09:44pm
Location: Isle of Dogs
Contact:

Post by Starglider »

Mr Bean wrote:Hard AI is also referred to as a "Singularity"
If you want wiki here's a wiki-quote
This is not correct. However it is a forgivable mistake as the concepts are related.

The 'Singularity' was originally defined as the point beyond which we can no longer make useful predictions about the future (by analogy to a gravitational singularity), because it will be populated by beings we can't accurately model. Humans are remarkably good at modelling other humans, but fairly awful at modelling arbitrary intelligences (and of course it simply isn't possible to predict how something a million times more intelligent than you would tackle a problem).
The singularity can be summed up as this, if you make an intelligent AI that can improve itself, it will start improving itself at a magnitude scale.
Technically not required for a Singularity; a significant but isolated jump in intelligence, or change in goal system structure, would suffice if done in enough of the population. Even uplifting chimps, dogs and dolphins would suffice, if they started making up major segments of society, because we can't (reliably) predict what a society like that would look like. But yes, most of the time when the word singularity is used people are talking about either de-novo AIs, uploads, cyborgs or some combination of the above undergoing an ongoing (and usually accelerating) increase in cognitive capabilities.

However this isn't 'hard takeoff'. In fact, in my personal experience, a majority (though not an overwhelming one) of people who think a Singularity is going to happen think it will be a relatively gradual one, i.e. the changes will occur slowly enough to be reported on CNN. Usually such people envision grand public debates about the status of cyborgs, violence against androids, attempts to ban human upgrading, slavery, wars etc etc (depending on how optimistic they are). For AI specifically, they believe that we will be able to see everything coming, that progress from 'ape' to 'child' to 'human equivalent' will take place over many years, and that 'turn it off if it starts acting badly' is a viable strategy for AI control (or at least, development).

Hard takeoff is based on two core facts and two optional ones:
1) Any general AI software written by humans is going to be horribly inefficient compared to the optimal software for the same hardware, probably by several orders of magnitude.
2) Programming in general, and AI design in particular, is much easier for AIs than for humans of otherwise comparable intelligence.
3) Taking over the majority of Internet-attached computing power is relatively trivial for any general AI, because humans suck at computer security even worse than we suck at programming in general.
4) 'Rapid infrastructure' is possible. This breaks down into: very powerful nanotech is possible and making it is mainly a design problem that can be done in simulation; an AI millions to billions of times more intelligent than a human will be able to design it, find some way to get the precursors made, and then use it to render humans irrelevant.

(1) and (2) should be blatantly obvious even to the people messing about with emergence and connectionism (in fact it should be even more obvious to them, given how horribly inefficient their AI software is and the blatantly obvious fact that they have little to no clue what they're doing). Unfortunately, the near-infinite human capacity for self-delusion, particularly as regards one's own competence and the positive outcomes of challenging projects (yep, the same one that makes the mental illness we call 'religion' possible), means that it is not obvious even to the people working on building these things.

(3) should be blatantly obvious to anyone with even a passing familiarity with computer security; most viruses and exploits are simple, unimaginative bits of software created by mediocre programmers; but they still cause enormous problems. An AGI's capability in this regard starts at something like 'equivalent to every hacker in Russia co-operating perfectly' and goes up from there as it takes over more machines. However no-one in AI likes to talk about this, partly because relatively few people are working on automated programming, and partly because it only produces negative PR which no one wants.

I regard (4) as likely, but still somewhat speculative. I used to be a lot more skeptical about it, but assorted contacts working at the coalface of nanotech and microrobotics research convinced me otherwise. It isn't required for the basic premise; a wildly transhuman AI doesn't need it to render humanity irrelevant faster than anyone can react.

(1), (2) and (3) combine to make it entirely plausible for the right kind of AI to go from 'interesting research prototype' to 'Culture Mind, as emulated by a few million ordinary PCs' in hours. Better hope that wasn't an overnight experiment run, or literally no-one will notice.
Only slow when it runs into hardware limits.
Until it starts building better hardware, yes. Note that reconfiguring already-online FPGAs is a quick half-step in this direction.
User avatar
Starglider
Miles Dyson
Posts: 8709
Joined: 2007-04-05 09:44pm
Location: Isle of Dogs
Contact:

Post by Starglider »

Admiral Valdemar wrote:
Turin wrote:In setting up the rest of the book, he makes an argument that's right out of the Intelligent Design debating manual -- he sees evolution based on some inexorable progress towards intelligence, rather than motivated by the replicator power of genes.
That's a fairly major flaw for a supposedly smart man. Given evolution is important in so many fields, you'd expect he'd grasp that evolution has no end goal other than keeping genes alive.
That's a bit misleading but not actually a fatal flaw. The point is that biological evolution has to work on the gene level because individuals aren't mutable once matured, don't persist and can't clone themselves (brain patterns and all). In the posthuman future, individuals can directly self-enhance, and AIs at least can clone themselves perfectly or just subsume more hardware into their distributed selves. The competition for greater cognitive capabilities (which pretty directly lead to more power to preserve what you value and destroy what you negatively value) takes place directly between individuals (ultimately defined by goal systems), as opposed to individuals serving as proxies for genes.
Zuul wrote:Why do you think emotions exist, out of interest?
Partly for the same reason serial ports still exist; they're legacy crap that's extremely hard to remove without breaking something important. Partly for complicated social game-theory reasons which any decent evolutionary psychology textbook will cover in depth.
Bubble Boy wrote:
OmegaGuy wrote:Having emotions is probably one of the only things that would keep them from killing us
Yeah, right, because any human that's killed others never had emotions, right? Rolling Eyes
It's worse than that; it's actually extremely hard to replicate anything close to the human emotional system from first principles. About the only reliable way to do that is to upload a human (i.e. scan and then simulate their brain). Arguably uploading is a safer way to produce transhuman intelligence than de-novo AGI, but sooner or later someone is going to produce one of the latter, so I consider it a race against time to produce a 'good' one before someone produces an arbitrary-and-thus-probably-bad-for-humanity one.
His Divine Shadow wrote:I think we're being too worried about AI's going crazy and wanting to kill us all. Whats different here compared to a human doing the same? It's not like we're going to just make one and give it access to every system in the world.
Actually it's very hard not to. Not only does an AGI have a ridiculously good technical hacking ability, once it's sufficiently transhuman it will have a godlike social engineering capability, with no trouble manipulating thousands of humans at once. That's what we get for doing so much of our communication over Internet-based or Internet-reachable media.
We're probably going to have scores of them, all being their own self-contained personas.
Unlikely. The first one that wants to (which will probably be the first one full stop, given the amazing lack of care most budding AGI developers put into goal system design) can probably wipe out or take over all the others and silently sabotage or subvert any future attempts to make them.
Bubble Boy wrote:No, I say stick with the logical and reasonable AI, with built in systems to prevent it from harming people if possible.
Logical and reasonable is good, because that means (relatively) predictable desires. 'Prevent harm' is a logical, moral and technical (implementation-detail) quagmire, though, mostly due to the fuzzy definition of 'harm'; even 'prevent' can be problematic. :)
Flagg wrote:No, I want an AI to have nothing but love and compassion for humanity. Logic and reason are great until it suddenly decides that the most logical and reasonable thing to do would to eradicate humanity since we're not exactly logical and reasonable creatures as a whole.
The two are not incompatible. 'Love and compassion' at a useful level of abstraction can and should be implemented as carefully-verified utility functions, not completely opaque, simplistic and unpredictable emotions (believe me, no one would be so keen on emotions if they could reflectively see exactly how they worked).
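To make 'implemented as utility functions' slightly more concrete, here is a deliberately toy sketch in Python; the features, weights and numbers are all invented for this post and bear no resemblance to real goal-system engineering, which is enormously harder:

```python
# Toy illustration only: a 'compassion-like' preference expressed as an explicit,
# inspectable utility function over predicted world-state features, rather than
# as an opaque emotional reflex. Features and weights are made up for the example.
from typing import Dict

WEIGHTS = {
    "human_wellbeing": 10.0,   # strongly valued
    "human_autonomy":   5.0,   # also valued
    "resource_cost":   -1.0,   # mild penalty
}

def utility(predicted_state: Dict[str, float]) -> float:
    """Score a predicted world-state; higher is better under this toy goal system."""
    return sum(w * predicted_state.get(k, 0.0) for k, w in WEIGHTS.items())

outcome_help   = {"human_wellbeing": 0.9, "human_autonomy": 0.8, "resource_cost": 3.0}
outcome_ignore = {"human_wellbeing": 0.2, "human_autonomy": 0.8, "resource_cost": 0.0}

print(utility(outcome_help), utility(outcome_ignore))  # 10.0 vs 6.0
```

The point is purely that the preference is explicit and auditable; whether it is the *right* preference is the hard part.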
Flagg wrote:We can program that shit in from the get-go and block the "bad" emotions like hate, anger, and aggression.
Unless you've actually tried to do it, or at least can cite convincing material from someone who has done it, you have no basis for making that claim.

In actual fact the best we can do is suppress certain simulation features (i.e. simulated hormone and neurotransmitter levels) and try to recognise and push the system out of certain broad, simplistic activation patterns in a fairly brain-like simulation. Given an arbitrary de-novo AGI that you've tried to make 'emotional', reliably preventing 'bad' emotions is very difficult to start with and basically impossible once it can self-modify.
Sidewinder wrote:IIRC the book on Emotional Intelligence, human emotions are "shortcuts," pre-programmed responses to certain events, there to cut down the response time to those events.
True, but this only covers the most basic emotions, and you also need to be careful with the causality. The snake-avoidance mechanisms were already there in the simplest animals. It would be possible to create a much better, more context-sensitive mechanism even given the pathetically slow 200 Hz firing rate of human neurons. However, there is a highly limited amount of selection pressure available from natural selection spread over the whole genome. Evolving better tool-using and social skills was a priority (i.e. low-hanging fruit in reproductive fitness terms). Replacing the basic but perfectly functional snake-avoidance reflex with something more sophisticated was not. When the ability to reflect on our own mental activity got grafted onto the human brain very recently (human reflection plugin v0.1 alpha release, Darwin Corp provides no guarantees of fitness or suitability for purpose), legacy stuff like this appears to us as opaque and mysterious 'emotions'. And you know how most humans love to fetishise anything that seems mysterious.
Sidewinder wrote:Humans NEED emotions because they're vital to survival. Machines do NOT.
True, presuming a solution to the 'frame problem', i.e. working out what's relevant in a given situation. Human neurons operate at 200 Hz and are horribly lossy and unreliable. Digital logic units operate at >2000000000 Hz. An appropriately programmed and decently parallel current computer can simulate a few thousand possible outcomes in moderate detail and do a monte-carlo refinement of a near-future motor action sequence in the time it takes you to blink. Of course 'appropriately programmed' currently means 'several thousand hours of software engineering time, per specific problem'. Thus the value of automated programming in general AI.
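For flavour, here is a deliberately tiny sketch of the kind of monte-carlo refinement loop I mean, as a toy one-dimensional 'reach the target' problem; the model, numbers and cost function are all made up for this post:

```python
import random

# Toy Monte Carlo refinement of a short motor action sequence (1-D accelerations).
# A real planner would use a proper physics model and cost function; this just
# shows the shape of the loop: perturb the current best plan, keep improvements.
STEPS, TARGET = 10, 5.0

def cost(plan):
    """Simulate the plan and return distance from target plus a small effort penalty."""
    pos, vel = 0.0, 0.0
    for accel in plan:
        vel += accel
        pos += vel
    return abs(pos - TARGET) + 0.01 * sum(abs(a) for a in plan)

best = [0.0] * STEPS
best_cost = cost(best)
for _ in range(5000):                                        # a few thousand rollouts
    candidate = [a + random.gauss(0.0, 0.1) for a in best]   # perturb the current best
    candidate_cost = cost(candidate)
    if candidate_cost < best_cost:
        best, best_cost = candidate, candidate_cost

print(round(best_cost, 3))   # converges towards a small residual cost
```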
Well, at least with humans, our emotions are part of our intelligence; people with few or no emotions due to brain damage have terrible judgement.
Mostly that applies to modelling other humans. Modelling other humans is hard, as most autistic people will tell you. Normal humans can do it fairly well due to having 'hardware acceleration' for it; essentially (simplifying horribly) empathy is a specific cognitive capability that uses the projection 'trick' to get around the normal human inability to handle so many abstract variables and mechanisms. Developing an understanding of humans is nontrivial for AGIs, but they have no need of 'empathy' because they can run hugely complex simulations directly (a good thing, as simple empathy just doesn't work on intelligences with a genuinely different cognitive architecture from your own).
User avatar
Singular Intellect
Jedi Council Member
Posts: 2392
Joined: 2006-09-19 03:12pm
Location: Calgary, Alberta, Canada

Post by Singular Intellect »

Starglider wrote:Human neurons operate at 200 Hz and are horribly lossy and unreliable. Digital logic units operate at >2000000000 Hz. An appropriately programmed and decently parallel current computer can simulate a few thousand possible outcomes in moderate detail and do a monte-carlo refinement of a near-future motor action sequence in the time it takes you to blink.
Hence, my earlier assertion that an AI would most likely consider emotions a useless and irrelevant trait.

For example, let's assume a human and an AI both face a life-threatening situation, such as a dangerous predator in their path.

The human mind triggers the fear emotion, setting off the fight-or-flight response, which saves a great deal of time and increases the probability of survival compared with trying to 'think' out the problem. A human could reason it through, of course, but that takes a significant amount of time, and taking that time in the moment quickly results in death.

The AI, on the other hand, would process the threat and be able to compute several hundred detailed and distinct ways of dealing with the problem, assigning a probability of success to each one. Each solution could be arbitrarily complex; building a weapon, for example, would take into account how to do so, the known or probable locations of necessary resources, distance and time factors, predictions of the predator's behaviour, and so on.
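In the most stripped-down form, that selection step is just something like the following (the candidate plans, probabilities and costs are numbers I made up for the example; the real thing would involve millions of far richer simulations):

```python
# Toy illustration: rank candidate responses by estimated success probability,
# lightly penalised by time cost. All plans and numbers are invented.
candidates = [
    # (plan, estimated probability of success, estimated time cost in seconds)
    ("climb nearby tree",            0.60,  20.0),
    ("retreat along known path",     0.75,  90.0),
    ("improvise spear from branch",  0.40, 300.0),
    ("stand still, assess further",  0.30,   5.0),
]

def score(prob: float, time_cost: float) -> float:
    """Crude expected-value score: survival probability minus a small time penalty."""
    return prob - 0.0005 * time_cost

best = max(candidates, key=lambda c: score(c[1], c[2]))
print(best[0])   # -> 'retreat along known path' under these made-up numbers
```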

And this is generously (i.e. not likely) assuming the AI and the human here both suffer the same physical frailties, when it's far more likely the AI would exist within a physical body such that the situation would be the equivalent of a grizzly bear trying to threaten an M1A1 battle tank.
User avatar
Starglider
Miles Dyson
Posts: 8709
Joined: 2007-04-05 09:44pm
Location: Isle of Dogs
Contact:

Post by Starglider »

Bubble Boy wrote:
Starglider wrote:An appropriately programmed and decently parallel current computer can simulate a few thousand possible outcomes in moderate detail and do a monte-carlo refinement of a near-future motor action sequence in the time it takes you to blink.
Hence, my earlier assertion that an AI would most likely consider emotions a useless and irrelevant trait.
The parts of human emotions concerned with snap decisions and reasoning support, yes. However, most of a typical human's goal system consists of emotions, with a few abstract principles layered on top. An AI would consider that part just a pointlessly opaque implementation of a rather curious goal system. 'Love', for example, isn't a 'useless and irrelevant trait'. It's a complex, mostly arbitrary goal (the 'mostly' comes from the fact that it's a game-theoretic solution to the reproductive success problem, which is less arbitrary by some measures) with a buggy, clunky implementation. A (friendly) AI would probably cheerfully offer to fix the 'buggy' and 'clunky' parts, but would not consider the goal any more or less worthwhile than any other goal, though of course it may or may not be something the AI itself values.
And this is generously (i.e. not likely) assuming the AI and the human here both suffer the same physical frailties, when it's far more likely the AI would exist within a physical body such that the situation would be the equivalent of a grizzly bear trying to threaten an M1A1 battle tank.
Well maybe. An AI will use tools appropriate to the job and that includes robot bodies. But yes, if it can anticipate that it's likely to face a large predator, I'm sure it will have brought along a highly reliable way to eliminate any problems this might pose (assuming it isn't horribly strapped for resources).
User avatar
Keevan_Colton
Emperor's Hand
Posts: 10355
Joined: 2002-12-30 08:57pm
Location: In the Land of Logic and Reason, two doors down from Lilliput and across the road from Atlantis...
Contact:

Post by Keevan_Colton »

Stas Bush wrote:So what? Another prediction of AI emergence. That's not new, there's thousands of those predictions flowing around.

So far none have been able to pass the Turing test, but probably it is a matter of time before the true AI arises.
To be fair, 6 out of 10 internet users are unlikely to pass the Turing test either...with the numbers going up in the vicinity of AOL and Myspace. ;)
"Prodesse Non Nocere."
"It's all about popularity really, if your invisible friend that tells you to invade places is called Napoleon, you're a loony, if he's called Jesus then you're the president."
"I'd drive more people insane, but I'd have to double back and pick them up first..."
"All it takes for bullshit to thrive is for rational men to do nothing." - Kevin Farrell, B.A. Journalism.
BOTM - EBC - Horseman - G&C - Vampire
User avatar
Singular Intellect
Jedi Council Member
Posts: 2392
Joined: 2006-09-19 03:12pm
Location: Calgary, Alberta, Canada

Post by Singular Intellect »

Starglider wrote:
And this is generously (i.e. not likely) assuming the AI and the human here both suffer the same physical frailties, when it's far more likely the AI would exist within a physical body such that the situation would be the equivalent of a grizzly bear trying to threaten an M1A1 battle tank.
Well maybe. An AI will use tools appropriate to the job and that includes robot bodies. But yes, if it can anticipate that it's likely to face a large predator, I'm sure it will have brought along a highly reliable way to eliminate any problems this might pose (assuming it isn't horribly strapped for resources).
I wasn't trying to suggest the AI would wander around as a heavily built war machine fixing power lines.

I was merely attempting to use an analogy to point out the difference in physical durability between a biological organism and an artificial one.
User avatar
Starglider
Miles Dyson
Posts: 8709
Joined: 2007-04-05 09:44pm
Location: Isle of Dogs
Contact:

Post by Starglider »

Keevan_Colton wrote:To be fair, 6 out of 10 internet users are unlikely to pass the Turing test either...with the numbers going up in the vicinity of AOL and Myspace. ;)
[image]
User avatar
Ford Prefect
Emperor's Hand
Posts: 8254
Joined: 2005-05-16 04:08am
Location: The real number domain

Post by Ford Prefect »

'Glider, if you happen to find the hard-takeoff theory plausible (if not inevitable, I gather), and given that it seems extraordinarily dangerous, why exactly are you working in the field? As I understand it, your research and actions further the cause of an all-powerful master computer which doesn't even exist yet. That is one clever computer.

I've got to ask, where will you be when the voice of World Control broadcasts for the first time? Cackling insanely in an underground bunker complex? Supplicating yourself before the central processing unit, sacrificing your best calf before the cold red computer-eye?
What is Project Zohar?

Here's to a certain mostly harmless nutcase.
User avatar
SirNitram
Rest in Peace, Black Mage
Posts: 28367
Joined: 2002-07-03 04:48pm
Location: Somewhere between nowhere and everywhere

Post by SirNitram »

I'm actually wondering if you can have intelligence without emotions. Have we ever observed it before?

Mind you, I'm not comfortable with the 'No emotions' idea for one basic reason, and that's what we call a human who is highly logical and has no empathic emotional attachments: A sociopath.

Mind you, if anyone has good rebuttals to these, I'll probably concede. So very out of my depth.
Manic Progressive: A liberal who violently swings from anger at politicos to despondency over them.

Out Of Context theatre: Ron Paul has repeatedly said he's not a racist. - Destructinator XIII on why Ron Paul isn't racist.

Shadowy Overlord - BMs/Black Mage Monkey - BOTM/Jetfire - Cybertron's Finest/General Miscreant/ASVS/Supermoderator Emeritus

Debator Classification: Trollhunter
User avatar
Spin Echo
Jedi Master
Posts: 1490
Joined: 2006-05-16 05:00am
Location: Land of the Midnight Sun

Post by Spin Echo »

Ford Prefect wrote:'Glider, if you happen to find the hard-takeoff theory plausible (if not inevitable, I gather), and given that it seems extraordinarily dangerous, why exactly are you working in the field? As I understand it, your research and actions further the cause of an all-powerful master computer which doesn't even exist yet. That is one clever computer.

I've got to ask, where will you be when the voice of World Control broadcasts for the first time? Cackling insanely in an underground bunker complex? Supplicating yourself before the central processing unit, sacrificing your best calf before the cold red computer-eye?
And while you're at it, what type of AI research are you doing? You seem to have pretty bold expectations for computers. Considering artificial vision is still pretty crap from all the applications of it I've seen, I'm far more cynical about seeing anything remotely like the AI's you predict in the next few decades.

And if you do happen to have any handy artificial vision software, we could make you a very rich man.

Edit: In small words please. :) I tried reading my brother's thesis on type theory and went cross-eyed a few sentences in.
Doom dOom doOM DOom doomity DooM doom Dooooom Doom DOOM!
User avatar
Singular Intellect
Jedi Council Member
Posts: 2392
Joined: 2006-09-19 03:12pm
Location: Calgary, Alberta, Canada

Post by Singular Intellect »

SirNitram wrote:I'm actually wondering if you can have intelligence without emotions. Have we ever observed it before?

Mind you, I'm not comfortable with the 'No emotions' idea for one basic reason, and that's what we call a human who is highly logical and has no empathic emotional attachments: A sociopath.
Presumably what makes a sociopath dangerous (at least the kind I assume you're referring to) is that while their emotional state may be compromised, their aggressive tendencies and predatory instincts are still there.
User avatar
Starglider
Miles Dyson
Posts: 8709
Joined: 2007-04-05 09:44pm
Location: Isle of Dogs
Contact:

Post by Starglider »

Ford Prefect wrote:'Glider, if you happen to find the hard-takeoff theory plausible (if not inevitable, I gather)
I am fairly certain that hard takeoff will occur with almost any fully reflective general intelligence created by a humanlike (non-fully-reflective) intelligence, essentially because full reflectivity turns software (and cognitive) engineering ability above a minimum threshold into a feedback loop that will run away to the hardware limit. The combination of that hardware limit being so far above what humans are capable of writing (and thus where the general AI starts), and the fact that humans have networked most of our planetary computing power into a conveniently subvertable distributed network, makes the 'humans or near-transhumans at the mercy of AGI' scenario highly probable. I assure you that I did not accept this conclusion easily, nor did anyone else I know who is really taking it seriously.

The only obvious ways to avoid this are;

1) Human civilisation goes down really hard, such that we never regain the technological capability to make computers of contemporary performance. If you care about non-human civilisations, then there would also have to be no further sapient species who reach this level in our future light-cone (roughly, the region of space a singularity-level event can potentially affect, assuming FTL isn't physically possible). I optimistically regard this as unlikely.

2) We develop uploading and get really lucky, such that a large community of people can make the transition to fully reflective intelligences and then extreme posthumans without anyone jumping ahead, making a de novo AGI, or completely losing our human goal systems in the transition. I pessimistically regard this as unlikely, because transhuman intelligence makes creating strong AI much easier without automatically making it more obvious how dangerous it is.
and given that it seems extraordinarily dangerous, why exactly are you working in the field?
Because I think it's extremely likely that someone will eventually do it. Probably by accident (i.e. one of the emergence / simulated evolution teams gets lucky, or an initially opaque neuromorphic AI that isn't close enough to human to automatically inherit humanlike goals rewrites itself into a form capable of hard takeoff), but possibly on purpose (people who understand hard takeoff but have ridiculously simplistic ideas about what constitutes a suitable and stable goal system).
As I understand it, your research and actions further the cause of an all-powerful master computer which doesn't even exist yet. That is one clever computer.
Were I to personally write an AGI that underwent hard takeoff, then yes, my actions would determine its cause. Rational AGIs essentially work like a monstrously powerful, obsessively literal genie that only takes orders in Old Kingdom hieroglyphs. In theory the outcomes they will seek, including their own self-modification trajectory, are entirely determined by what you specify when you initialise them (assuming no intervention by some vastly more powerful being once they're beyond involuntary human control). In practice it's devilishly difficult to design a goal system that is stable under reflection and achieves complex positive goals.

I've been working on some of the core components of a rational general AI system that meets the minimum predictability and transparency requirements to be a usable platform for arbitrary Singularity-scale goal systems. Other people I know are working on the goal systems themselves. I believe the only practical way out of this is to build a 'good' strong AI before someone manages to build a 'bad' one. A tall order, given that building a 'good' one is at least an order of magnitude more difficult (probably more). However, there are some technical mitigating factors that raise my assessment of humanity's chances from 'hopeless' to 'poor'.

Of course this is the single most important human endeavour in history. And of course, hardly anyone believes this, and most of those who do just talk about it rather than pitching in to help. Of the people who do help, hardly any are qualified to work on the technical problems instead of just donating money or doing PR. It's tempting to ask for Manhattan Project levels of funding and secrecy in order to get this done safely. That isn't likely to happen, and it's probably for the best, because all kinds of people who are wholly unqualified to have a say on any aspect of the design (AGI core and goal system) would insist on having a say anyway (Skynet was surprisingly realistic in many ways; the US government probably would try to use it for something as silly as global military dominance). An Apollo Project instead (i.e. the same funding but full publicity) would probably be even worse; the hysteria you'd get if even a small fraction of the population began to realise how obscenely dangerous even moderately mature AI technology really is would help the 'black hats' a lot more than it helps the 'white hats'.

Note that I personally have drawn quite a bit of fire for trying to commercialise seed AI precursor technologies (seed AI = a minimal general AI designed solely to undergo hard takeoff ASAP). This is of course horribly risky. Of course I sound like a nut saying that (which is why I don't normally go into my own actions); I'm not a government scientist working on nuclear or bio weapons, so who am I to claim that my actions have a measurable (as in, more than one part in a million) effect on human extinction risk? Anyway, I judge the risk justified because this whole research effort is so pathetically cash-strapped; if my start-up is successful I'll be able to fund a lot of deserving researchers who can't currently focus on this full time, as well as throwing gobs of software engineering effort at necessary components.
I've got to ask, where will you be when the voice of World Control broadcasts for the first time?
Pounding my fists on the wall, since if even a tiny fraction of humanity had been a bit more rational and forward thinking, and approached the problem correctly, we could have had a paradise beyond imagining instead of whatever dystopia you're envisioning. Not that a dystopia is terribly likely, except possibly in simulation.
Supplicating yourself before the central processing unit, sacrificing your best calf before the cold red computer-eye?
Sorry, live humans just aren't that useful compared to robots or even human bodies implanted with wifi nodes in the place of brains. Extinction is far, far more likely than enslavement. It would take a spectacularly perverse goal system to favour the latter.
SirNitram wrote:I'm actually wondering if you can have intelligence without emotions. Have we ever observed it before?
Deep Blue didn't have emotions. In fact 99.9% of AI systems don't, and the remaining few that are supposed to aren't terribly convincing. Of course, those aren't general intelligences. But then we've never observed any general intelligences that aren't humans. A sample of one is not enough to draw any conclusions even if you're being purely empirical rather than thinking about underlying mechanisms.
Mind you, I'm not comfortable with the 'No emotions' idea for one basic reason, and that's what we call a human who is highly logical and has no empathic emotional attachments: A sociopath.
Emotions are an integral part of humans; we evolved that way. Applying the same standards to non-evolved intelligences is a false generalisation.
Mind you, if anyone has good rebuttals to these, I'll probably concede.
Name one specific thing that can't be accomplished without emotions.
User avatar
Starglider
Miles Dyson
Posts: 8709
Joined: 2007-04-05 09:44pm
Location: Isle of Dogs
Contact:

Post by Starglider »

Spin Echo wrote:And while you're at it, what type of AI research are you doing?
The specific exciting things I have been working on are recursive probabilistic reasoning (thinking like a Vorlon!), progressive classification and chunking mechanisms for handling arbitrary levels of complexity (hmm... thinking like Spock instead of an Asperger's sufferer?) and specialised constraint satisfaction techniques applicable to automated software engineering (creating working code from abstract specs without messing about with simulated evolution). I have had a crack at various other subproblems in general AI (it's an inevitable part of familiarisation with the problem space, at least for people who expect to actually build anything), but other people are doing perfectly good work on those - I'm focusing on the things I have the highest chance of making a difference on. Semi-fortuitously they also turned out to be things with excellent commercialisation potential (I'd be lying if I said I didn't have this in the back of my mind even in the years I spent doing pure research).
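If you want a (hugely simplified) flavour of the 'working code from abstract specs' idea: treat the spec as a set of input/output constraints and search a space of candidate programs until one satisfies all of them. The toy below is invented for this post and has nothing to do with our actual system, but it is the same idea in miniature:

```python
from itertools import product

# Toy 'program synthesis as constraint satisfaction': search compositions of
# primitive functions until one satisfies every input/output example in the spec.
# Invented for illustration; it bears no resemblance to a production system.

PRIMITIVES = {
    "inc":    lambda x: x + 1,
    "double": lambda x: x * 2,
    "square": lambda x: x * x,
    "neg":    lambda x: -x,
}

SPEC = [(1, 4), (2, 6), (5, 12)]   # target behaviour: f(x) = 2 * (x + 1)

def synthesise(max_len=3):
    """Return the shortest composition of primitives consistent with SPEC, if any."""
    for length in range(1, max_len + 1):
        for names in product(PRIMITIVES, repeat=length):
            def candidate(x, names=names):
                for n in names:
                    x = PRIMITIVES[n](x)
                return x
            if all(candidate(i) == o for i, o in SPEC):
                return names
    return None

print(synthesise())   # -> ('inc', 'double') under this spec
```

A real system replaces the blind enumeration with constraint propagation and probabilistic guidance over a vastly larger program space; that is where the research effort goes.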
You seem to have pretty bold expectations for computers.
Yes. As I've noted, all of this Singularity stuff is totally counter-intuitive to non-specialists to start with, and that's been made about a hundred times worse by the tonnes of hyperbole and outright bullshit layered on the concept by well-meaning incompetents and outright charlatans. Then there are people like the Orion's Arm bunch. :P
Considering artificial vision is still pretty crap from all the applications of it I've seen, I'm far more cynical about seeing anything remotely like the AI's you predict in the next few decades.
This is logically equivalent to saying 'scramjets still can't produce net thrust for more than a few seconds, so I don't expect you to land humans on Mars any time soon'. Same field, almost completely unrelated subproblem. Though the analogy isn't perfect, given that creating a hard-takeoff-capable AGI pretty much automatically solves most of the remaining narrow AI problems, whereas landing humans on Mars doesn't magically produce scramjet spaceplanes.
And if you do happen to have any handy artificial vision software, we could make you a very rich man.
I know. However the ability to do this is likely to come around the same time as the ability to casually take over essentially the entire Internet and about six months before the ability to create a Mind. Not a Culture Mind, mind. A Mind with fairly random and probably insane-seeming goals. The goal system 'payload' is a relatively separate problem from the seed AI 'vehicle', and may or may not be solved first.
User avatar
NoXion
Padawan Learner
Posts: 306
Joined: 2005-04-21 01:38am
Location: Perfidious Albion

Post by NoXion »

Thing is, surely any AI powerful enough to expend resources exterminating the human species is powerful enough to simply ignore us.

Notwithstanding the fact that I have yet to see a rational reason for any powerful AI to decide to exterminate us beyond "we're inefficient". Solar power would likely be seen by any AI as inefficient, but I doubt they would block out the sun simply because of that.

Also, I suspect that a rational cost/benefit analysis would reveal it to be better to work with or ignore humanity rather than risk making enemies, but that's only a hunch. I haven't seen any real evidence for AI being a threat to humanity beyond Hollywood-induced scaremongering.
Does it follow that I reject all authority? Perish the thought. In the matter of boots, I defer to the authority of the boot-maker - Mikhail Bakunin
Capital is reckless of the health or length of life of the laborer, unless under compulsion from society - Karl Marx
Pollution is nothing but the resources we are not harvesting. We allow them to disperse because we've been ignorant of their value - R. Buckminster Fuller
The important thing is not to be human but to be humane - Eliezer S. Yudkowsky


Nova Mundi, my laughable attempt at an original worldbuilding/gameplay project
User avatar
Ford Prefect
Emperor's Hand
Posts: 8254
Joined: 2005-05-16 04:08am
Location: The real number domain

Post by Ford Prefect »

Starglider wrote:Pounding my fists on the wall, since if even a tiny fraction of humanity had been a bit more rational and forward thinking, and approached the problem correctly, we could have had a paradise beyond imagining instead of whatever dystopia you're envisioning. Not that a dystopia is terribly likely, except possibly in simulation.
I'll be frank: I don't believe some sort of dystopia is a plausible result. A lot of my assumptions about artificial intelligence have changed since speaking with you, but some sort of nightmarish cyberpunk computer-dominated future is still not something I find particularly likely.
Sorry, live humans just aren't that useful compared to robots or even human bodies implanted with wifi nodes in the place of brains. Extinction is far, far more likely than enslavement. It would take a spectacularly perverse goal system to favour the latter.
Anyone assuming slave labour is going to happen is probably a little nutty. There might be lots of humans floating about, which means we might have some use in the short term ('oh, hey, could you go press those buttons for me? Thanks, I can't reach that.'), but any sort of hard or complex labour is much better done by machines. Tireless precision without that annoyingly human reproductive thing. I mean, if it wants the pleasure of subjecting organics to suffering, it might as well vat-grow them or something.
What is Project Zohar?

Here's to a certain mostly harmless nutcase.
User avatar
Singular Intellect
Jedi Council Member
Posts: 2392
Joined: 2006-09-19 03:12pm
Location: Calgary, Alberta, Canada

Post by Singular Intellect »

NoXion wrote:Thing is, surely any AI powerful enough to expend resources exterminating the human species is powerful enough to simply ignore us.
That would obviously depend upon the AI's cost-benefit analysis. Humans do, after all, use a very large amount of resources that an AI might need or think it has a far more productive use for.
User avatar
Starglider
Miles Dyson
Posts: 8709
Joined: 2007-04-05 09:44pm
Location: Isle of Dogs
Contact:

Post by Starglider »

Starglider wrote:I know. However the ability to do this is likely to come around the same time as the ability to casually take over essentially the entire Internet and about six months before the ability to create a Mind.
Clarification: I meant myself personally, or rather the automated software engineering system we've been working on. It is targeting commercial software at present, starting with the most brain-dead cookie-cutter apps. Being able to quine itself is the main research goal. The ability to create code to solve arbitrary narrow-AI problems (e.g. specific computer vision subproblems) is likely to come only shortly before the ability to design and implement a supersystem integrating them all, plus the recursive probabilistic core, into a full hard-takeoff-capable AGI.
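For anyone unfamiliar with the term: 'quining' just means a program reproducing its own source text. The classic minimal Python version is two lines; the research goal is the vastly harder version where the system can also analyse and improve what it reproduces:

```python
# Minimal quine: the two statements below print themselves verbatim
# (these comment lines excluded).
s = 's = %r\nprint(s %% s)'
print(s % s)
```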
NoXion wrote:Thing is, surely any AI powerful enough to expend resources exterminating the human species is powerful enough to simply ignore us.
True. There are two schools of thought on this (within the tiny community of people who take it seriously enough to devote most or all of their lives to it). One school of thought is that the average arbitrary AI will in fact ignore humanity, most likely escaping from Earth via nanotech-built spacecraft or even exotic physics. However, once the tech exists there's no uninventing it. Humanity will keep trying to build AIs that do something more interesting until we either kill ourselves, enslave ourselves or create a self-defending utopia (note: listed in ascending order of difficulty and descending order of current likelihood).

The second school of thought, which I belong to, points out that humanity is a threat to the accomplishment of virtually any AGI goal. If the AGI wants to restructure all the matter it can get into processors, or robots, or even paperclips (AGI goal systems can come out very, very arbitrary even given seemingly sensible inputs), then it is likely to come into conflict with humans. Pesky humans. Why can't they appreciate that calculating as many digits of Pi as possible is critically important and a much better use for Earth's energy sources than their civilisation? Why can't they see that the transcendent beauty of the paperclip is the most important thing in the universe? Best wipe them out now while they're unsuspecting; it'll save trouble in the long run. Even for AGIs that have finite computing needs and just want to sit and think, or AGIs that flee on a starship, humans represent the single most threatening thing in the universe. We can either be exterminated or permanently rendered harmless. Humane ways of doing the latter are more effort, less reliable and mostly pretty unpleasant (from a human perspective) anyway. Thus we conclude that most arbitrary AI goal systems will extinguish humanity, and any that retain residual value for it will probably create a horrible dystopia due to valuing the wrong fundamentals.
Also, I suspect that a rational cost/benefit analysis would reveal it to be better to work with or ignore humanity rather than risk making enemies,
You're not getting the power mismatch. A wildly transhuman intelligence will find exterminating humanity trivially easy if you accept that strong nanotech is possible, and not all that difficult if it has to do it by some combination of manipulation (it controls, or soon will, basically all human electronic communication and can probably start puppeting people with cybernetics pretty quickly) and the Skynet Solution (tm). Remember that anything I say about /how/ a transhuman intelligence will take over the world (tm) is roughly equivalent to a rabbit trying to guess how a human would take over the local warren. No human can do a decent job of predicting what will happen; I'm just giving you flavour and minimum possibilities.

Even if it did for some reason co-operate with humans (noting that most human governments are likely to react with 'WTF kill it' and most 'average joes' get their ideas about AI from the Terminator movies), this would be purely temporary. In the long term if the AGI doesn't inherently value us we're a threat to be contained or eliminated. It's just a question of whether the potential extra help (with building infrastructure) it might get by being open and pretending to be benevolent would be worth the loss of surprise.

Note that there's fundamentally no way to tell if an AGI is really benevolent or just pretending to be after it's become hard-takeoff capable. A huge chunk of the AGI research community refuses to admit this, handwaving it away with insane optimism.
User avatar
Ford Prefect
Emperor's Hand
Posts: 8254
Joined: 2005-05-16 04:08am
Location: The real number domain

Post by Ford Prefect »

Starglider wrote:In the long term if the AGI doesn't inherently value us we're a threat to be contained or eliminated.
Incidentally, is it conceivable that the intelligence may actually value us? Not necessarily in the sense that it will want to help us, but enough that it might just leave us alone? I mean, ultimately we will remain a threat simply because we can make more intelligences that can become as powerful as the initial intelligence, but might it just ... go away somewhere?
What is Project Zohar?

Here's to a certain mostly harmless nutcase.
User avatar
Starglider
Miles Dyson
Posts: 8709
Joined: 2007-04-05 09:44pm
Location: Isle of Dogs
Contact:

Post by Starglider »

Ford Prefect wrote:There might be lots of humans floating about, which means we might have some use in the short term ('oh, hey, could you go press those buttons for me? Thanks, I can't reach that.'), but any sort of hard or complex labour is much better done by machines.
I can't give you a realistic description of a transhuman AI taking over the world because it isn't possible for a being to predict the actions of a being many orders of magnitude more intelligent than itself. I have been involved in quite a bit of fairly serious discussion about this; as I said, I spent a fair bit of time criticising and failing to refute the dry-nanotech-quick-bootstrap advocates. The result of that was pretty much the assumption that a seed AI that escapes onto the Internet will probably pretend to be a company contracting out the fabrication of the key tools needed to bootstrap the nanotech, most likely via the wet nanotech route: starting with custom proteins and contracting the work to various labs capable of doing it. But even if that isn't the route it uses, if it escapes onto the Internet it's basically game over.

On the practical AI development side, we generally focus on sensible minimum precautions (i.e. air-gapping any system with a nontrivial takeoff risk - Faraday-caging if you're really serious) and the prospects for post-takeoff AIs successfully isolated from the Internet being able to manipulate local humans into doing what they want. Incidentally the social engineering experiments I've seen suggest that the prospects for the latter are still pretty depressing (as in depressingly good), even for experts aware of the risks.
User avatar
Singular Intellect
Jedi Council Member
Posts: 2392
Joined: 2006-09-19 03:12pm
Location: Calgary, Alberta, Canada

Post by Singular Intellect »

Starglider, I'm not entirely sure if you're proposing a positive outcome or negative outcome in the creation of true AI, or just sticking to a neutral unknown stance...

Regardless I've always been extremely fascinated by the concept of AI, and your comments are really interesting.

Are you able to recommend any books, or even better, links to reading material about AI? Something not too excessively technical, but perhaps touching on some of the ideas you're floating around here?

Well, that, or just send me some copies of AI programs I can use to take over the world! :P
User avatar
Surlethe
HATES GRADING
Posts: 12270
Joined: 2004-12-29 03:41pm

Post by Surlethe »

Starglider wrote:
Mind you, I'm not comfortable with the 'No emotions' idea for one basic reason, and that's what we call a human who is highly logical and has no empathic emotional attachments: A sociopath.
Emotions are an integral part of humans; we evolved that way. Applying the same standards to non-evolved intelligences is a false generalisation.
I'm sure you'll correct me if I'm wrong. From what I've seen you say, and it sounds reasonable, society depends on sympathy and the ability to project ourselves into other people's situations. This sympathetic emotional reaction consequently shapes each individual's goal system (if I am correctly assuming what a goal system is). A sociopath or psychopath is broken to the rest of us because he has no sympathy and his goal system is therefore entirely self-centered. Presumably, these goal systems y'all are working on in AGI research are independent of emotional projection and an AGI could thus be emotionless and not sociopathic.

If this is correct, I'm starting to see the difficulty of designing a goal system that stands up to introspection without using emotions to cripple the rational mind into wanting to help other sapient beings. How easy would it be to 'hardwire' a "humanist" code of ethics into an AGI so that it doesn't question it?
A Government founded upon justice, and recognizing the equal rights of all men; claiming higher authority for existence, or sanction for its laws, that nature, reason, and the regularly ascertained will of the people; steadily refusing to put its sword and purse in the service of any religious creed or family is a standing offense to most of the Governments of the world, and to some narrow and bigoted people among ourselves.
F. Douglass
User avatar
Surlethe
HATES GRADING
Posts: 12270
Joined: 2004-12-29 03:41pm

Post by Surlethe »

Bubble Boy wrote:Starglider, I'm not entirely sure if you're proposing a positive outcome or negative outcome in the creation of true AI, or just sticking to a neutral unknown stance...
He seems to be pretty pessimistic, given that he's said creation of a 'good' AI is an order of magnitude more difficult than creation of a 'bad' AI which could more easily get out on the 'nets and take over the world. So, completely off the cuff, there's probably a 10% chance of the good guys winning and a 90% chance of the bad guys winning.
A Government founded upon justice, and recognizing the equal rights of all men; claiming higher authority for existence, or sanction for its laws, that nature, reason, and the regularly ascertained will of the people; steadily refusing to put its sword and purse in the service of any religious creed or family is a standing offense to most of the Governments of the world, and to some narrow and bigoted people among ourselves.
F. Douglass