Technological Future: Man IS the machine

SLAM: debunk creationism, pseudoscience, and superstitions. Discuss logic and morality.

Moderator: Alyrium Denryle

Starglider
Miles Dyson
Posts: 8709
Joined: 2007-04-05 09:44pm
Location: Isle of Dogs

Post by Starglider »

I just got back from Oxford. As far as I can tell (I wasn't at the conference, but I met up with people who were) there wasn't any content that the Malthusians would approve of (resource depletion, climate change); I get the impression those researchers have their own conferences. This was all long-range, mostly technological stuff. Typically for an academic conference it mostly consisted of people reading out their papers - my partner tells me the valuable part of these events for academics is the networking that nets them cites and (if they're lucky) grants.
Sarevok wrote:I always thought creating what you academic types call an AGI by mimicking the brain is similar to 19th century dreams of building a flying contraption by using flapping wings like a bird.
This example is used so often in the literature it has long since passed the point of being a dead horse. Generally by symbolic AI people bashing connectionist approaches.
Would you not be getting better results by using algorithms instead of simulating via spaghetti coding a neural net that only exists because gigahertz chips did not exist when it evolved?
I would say 'yes, obviously' - I'm at the 'the brain sucks' end of the 'human brain is a kludgy mess / human brain is amazing and optimal' spectrum of opinion, but even the biomorphism fans tend to admit that the design is a poor match with currently available hardware (of course they also tend to advocate implementing NNs in FPGAs and custom silicon).
If you can have a master algorithm that rewrites itself you can eliminate this problem and potentially create a thinking machine that can do anything humans can and do it better.
That is pretty much what I'm working on, but automated code generation is a pretty lonely niche at the moment. 90% of machine learning research is narrow-case function optimisation (SVMs, classic NNs, almost all genetic programming) and nearly all the rest is some brand of 'emergence'. There are still plenty of symbolic groups around of course (the entire 'semantic web' concept is essentially a funding refuge created for the 80s symbolic AI academics to flee to) but they focus on inference not learning. Then there's the 'situated intelligence' people, who are still somewhere between the mouse and cockroach level.
PeZook wrote:Or serve as soldiers. At the end of their tour they may even get their old personality back, after having it replaced by one optimized for combat :D
Having personal medical nanotech installed as a benefit of military service is plausible, within a certain band of cost. Of course with that kind of technology base the military personnel profile will look rather different.
Dooey Jo wrote:Creating AI by emulating biological neurons seems pretty inefficient anyway.
The aerodynamics of the first aircraft (see how insidious these airplane analogies are?) were awful by modern standards. But that was all they knew how to build at the time. Neuromorphic general AI is a kind of lowest common denominator in the sense that you can do it with a lot of conventional science (mapping the anatomy and biochemistry of the brain in enough depth) and a lot of computing power (getting cheaper every year). I don't think it's a good way to make AI either, but the other ways are far more speculative and require unknown (but large) quantities of original insight, so a lot of people see it as a safe bet.
Plus, we'd need to know pretty much exactly how everything in a brain works:
Not necessarily. You can substitute computing power for knowledge. You just run lots of copies with slightly different layouts and settings. The search can of course be automated. In theory this has ethical issues for sentient AIs, but to be honest very few AI researchers seem bothered ATM.
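For illustration, the dumbest version of that automated search is just 'generate variants, score them, keep the best'. A toy Python sketch - the parameter names and the scoring function are invented for illustration, and a real evaluate() would be an expensive simulation run:

Code:

import random

# Toy stand-in for "run a brain-like simulation with these settings and
# score how capable the result is". Invented for illustration; a real
# evaluation would be a full simulation run.
def evaluate(params):
    return (-abs(params["neuron_count"] - 5e9) / 1e9
            - abs(params["synapse_gain"] - 0.3))

# Slightly different layouts and settings, sampled blindly.
def random_variant():
    return {
        "neuron_count": random.uniform(1e9, 1e10),
        "synapse_gain": random.uniform(0.0, 1.0),
    }

# Substituting computing power for knowledge: try lots of copies,
# keep whichever one scores best. No understanding required.
best = max((random_variant() for _ in range(10000)), key=evaluate)
print(best)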
That knowledge itself is probably decades away.
Yes. This is one place where I'd actually go with Kurzweil's estimate: three decades, plus or minus one, for enough knowledge to make an AGI - assuming computing power continues to grow, to bridge the gap with some automated search. Progress in neuroscience has really picked up over the last 15 years, and it's a lot less dependent on fads and individual breakthroughs than de novo AGI.
Fire Fly wrote:We still don't even know how higher order integration works in something as simple as a nematode.
The allure of connectionist (and more generally, 'emergence-based') approaches is that you don't have to know how it works. In fact many researchers are perversely proud of not knowing how it works. To be fair, it does prove that their system is capable of a certain amount of independent learning, but I still don't like any attitude that considers ignorance of functional mechanisms acceptable and even desirable.

Anyway, brain simulation does not rely on knowing how the brain works. When Kurzweil or the Blue Brain people or any of the other groups I'm aware of working on this say 'we're going to build a human-equivalent simulation in 20XX', they mean 'we're going to run a simulation of 100 billion elements we've made as much like real neurons as we can, in as much like the real connectivity pattern as we can, and fiddle with it until it does interesting stuff'.
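To show the shape of that recipe, here's a massively scaled-down toy in Python - leaky integrate-and-fire dynamics with sparse random wiring. Every constant and the wiring scheme here are illustrative, not taken from any real project:

Code:

import random

N = 1000                        # stand-in for ~100 billion real neurons
THRESHOLD, LEAK, WEIGHT = 1.0, 0.9, 0.1

# "As much like the real connectivity pattern as we can" - here reduced
# to sparse random wiring, ~100 outgoing synapses per neuron.
synapses = [[random.randrange(N) for _ in range(100)] for _ in range(N)]
potential = [0.0] * N

for step in range(100):
    fired = []
    for i in range(N):
        # Leaky integration plus a little random external drive.
        potential[i] = potential[i] * LEAK + random.uniform(0.0, 0.2)
        if potential[i] >= THRESHOLD:
            fired.append(i)
            potential[i] = 0.0
    # Propagate this step's spikes along the synapses.
    for i in fired:
        for j in synapses[i]:
            potential[j] += WEIGHT
    print(step, len(fired))    # 'fiddle with it until it does interesting stuff'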

I've personally only worked with classic ANNs (standard backprop and spiking with basic Hebbian), not the elaborately biomorphic stuff (which AFAIK isn't useful for anything outside that specific research area yet). As such, take my opinion on this with a grain of salt, but it looks to me as if getting the connectivity plasticity model right will be by far the hardest part. Current simulation work focuses more heavily on getting the pulse propagation and synapse-level plasticity right. That's why I'd go with three-ish decades rather than the two-ish decade estimates; replicating the general connectivity pattern of the brain is a start, but I doubt learning is going to work well beyond basic pattern recognition without capturing the mechanism that evolves that connectivity. There are brain-like theories that try to get around the need for such a mechanism (e.g. Jeff Hawkins's Numenta concept), but once you deviate from the biological template that much you've lost most of the benefits of neuromorphic AGI (in terms of risk reduction) in the first place. So I class those as the kind of 'slavish adoration of biology' that Sarevok mentioned.
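For reference, the 'basic Hebbian' part is nearly a one-liner at the synapse level - which is rather the point, since it's the connectivity-evolution machinery above it that nobody has a decent model of. A sketch with made-up constants:

Code:

# Basic Hebbian rule: 'neurons that fire together wire together'. The
# weight strengthens when pre- and post-synaptic activity coincide; the
# decay term stops unbounded growth. Constants are illustrative.
LEARNING_RATE, DECAY = 0.01, 0.001

def hebbian_update(weight, pre_activity, post_activity):
    return (weight
            + LEARNING_RATE * pre_activity * post_activity
            - DECAY * weight)

w = 0.5
for pre, post in [(1, 1), (1, 0), (0, 1), (1, 1)]:
    w = hebbian_update(w, pre, post)
    print(round(w, 4))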
Kwizard
Padawan Learner
Posts: 168
Joined: 2005-11-20 11:44am

Post by Kwizard »

Starglider wrote: There are brain-like theories that try to get around the need for such a mechanism (e.g. Jeff Hawkins's Numenta concept), but once you deviate from the biological template that much you've lost most of the benefits of neuromorphic AGI (in terms of risk reduction) in the first place.
If we're looking to reduce risk, then would it help to develop full-brain neuromorphic simulation technologies and run the uploaded minds of AI researchers, taking advantage of much faster thinking speeds? Would that level of technology be too far off to matter or have nasty consequences I'm not seeing?
Starglider
Miles Dyson
Posts: 8709
Joined: 2007-04-05 09:44pm
Location: Isle of Dogs

Post by Starglider »

Kwizard wrote:If we're looking to reduce risk, then would it help to develop full-brain neuromorphic simulation technologies and run the uploaded minds of AI researchers, taking advantage of much faster thinking speeds?
I was actually referring to development risk (i.e. the chance of it working at all), but yes being slavishly neuromorphic probably reduces Friendliness risk too. There are counterarguments to that - getting a near-miss on humanlike morality may create intelligences with a malign interest in humanity as opposed to ones with no intrinsic interest in humanity - but they're highly speculative.

Anyway the answer to your question is yes, but...
Would that level of technology be too far off to matter or have nasty consequences I'm not seeing?
The problem is, the ability to make near-human brain simulations by trial and error (and lots of test runs with automated tuning) will most likely come decades before the technology to do uploading properly. Since hardly anyone is prepared to wait, the solution you mention isn't practical as a means of averting 'Unfriendly' general seed AI.
Sarevok
The Fearless One
Posts: 10681
Joined: 2002-12-24 07:29am
Location: The Covenants last and final line of defense

Post by Sarevok »

Starglider wrote:That is pretty much what I'm working on, but automated code generation is a pretty lonely niche at the moment. 90% of machine learning research is narrow-case function optimisation (SVMs, classic NNs, almost all genetic programming) and nearly all the rest is some brand of 'emergence'. There are still plenty of symbolic groups around of course (the entire 'semantic web' concept is essentially a funding refuge created for the 80s symbolic AI academics to flee to) but they focus on inference not learning. Then there's the 'situated intelligence' people, who are still somewhere between the mouse and cockroach level.
How would the machine learn, though? Unleashing it in a physical environment via a robotic avatar would be rather slow, so I was wondering what else might work. How about some kind of virtual world where an agent can bang its head against bounding boxes a thousand times a second till it figures out what's solid and what's not? Once it gets the hang of the world of textures and 3D meshes it can be slowly given real-world sensory input. Would that be effective in rapidly evolving a working AGI?
I have to tell you something: everything I wrote above is a lie.
Starglider
Miles Dyson
Posts: 8709
Joined: 2007-04-05 09:44pm
Location: Isle of Dogs

Post by Starglider »

Sarevok wrote:So I was wondering what else might work. How about some kind of virtual world where an agent can bang its head against bounding boxes a thousand times a second till it figures out what's solid and what's not?
This is a popular approach. Here's one specifically designed for general AIs. Really it's a linear development of the microworld approaches of the 1970s (the canonical example being SHRDLU). I recall seeing the idea of using Second Life as an AI training environment in serious papers, though I don't remember where.
Sarevok wrote:How would the machine learn, though? Unleashing it in a physical environment via a robotic avatar would be rather slow.
There are still plenty of 'situated intelligence' people who claim that simulations just don't cut it. Personally I think a lot of them are just trying to defend their robotics grants (particularly the ones who aren't pushing the state of the art on the mechanical and control side). Intelligently applied noise injection should make the simulations challenging enough 99% of the time.
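By 'intelligently applied noise injection' I mean things like re-sampling the world's physics and sensor characteristics every episode, so the agent can't overfit to one idealised simulation. A minimal Python sketch - all the parameter names are hypothetical:

Code:

import random

def make_episode_params():
    # Re-sample the world each episode so a policy tuned to one exact
    # simulation is useless; it has to cope with the whole range.
    return {
        "friction":     random.gauss(0.5, 0.1),
        "sensor_noise": random.uniform(0.0, 0.05),
        "latency_ms":   random.choice([0, 10, 20, 50]),
    }

def observe(true_state, params):
    # Corrupt the agent's observations with this episode's sensor noise.
    return [x + random.gauss(0.0, params["sensor_noise"])
            for x in true_state]

params = make_episode_params()
print(params)
print(observe([1.0, 2.0, 3.0], params))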

One problem with this is that the simulations themselves take a surprisingly long time to write, even with an existing toolset (e.g. a 3D engine) - particularly if you want to teach something specific, as opposed to doing artificial-life-style 'throw a load of agents into a virtual environment and see what happens'. One thing I would eventually like to study is using the code generation system (and a heuristic description of what makes a problem interesting) to get the AI to set itself challenges to solve. That's a very hard problem though, and I haven't seen anyone else tackle it (though of course simple randomisation of simulation parameters and escalation of difficulty is fairly common - see the sketch below).
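The simple end of that spectrum - randomise the parameters and escalate the difficulty once the agent reliably succeeds - is just a loop. A toy sketch, with the actual agent and task stubbed out:

Code:

import random

def run_task(difficulty):
    # Stub for 'drop the agent into a generated simulation and report
    # whether it solved the task'; here success just gets rarer as the
    # difficulty rises.
    return random.random() < max(0.05, 1.0 - 0.1 * difficulty)

difficulty, streak = 1, 0
for episode in range(200):
    if run_task(difficulty):
        streak += 1
        if streak >= 5:        # mastered this level - escalate
            difficulty += 1
            streak = 0
    else:
        streak = 0
print("reached difficulty", difficulty)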
Once it gets the hang of the world of textures and 3D meshes it can be slowly given real-world sensory input. Would that be effective in rapidly evolving a working AGI?
I think so. Actually, IMHO video footage should be extremely useful: simply taking YouTube videos and trying to guess what will happen in the next second (at one-second intervals) would be a very rich visual comprehension learning source. Humans can't develop hand-eye co-ordination (and many other skills) by passive observation alone, but some classes of AI can (at least in principle).
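The training setup for that is pure self-supervised prediction: the next second of footage is the target, so no human labelling is needed. A sketch of the loop in Python, with the predictive model and frame decoding stubbed out - the names are hypothetical:

Code:

# Self-supervised learning from video: predict the frame one second
# ahead, score against what actually happened. No labels required.

def predict_next(model, frame):
    # Stub: a real system would run a learned predictive model here.
    return frame

def error(predicted, actual):
    # Squared error between two frames (flat lists of pixel values).
    return sum((p - a) ** 2 for p, a in zip(predicted, actual))

def score_on_clip(model, frames, fps=25):
    # Pair each frame with the frame one second later and accumulate
    # the prediction error over the whole clip.
    total = 0.0
    for t in range(len(frames) - fps):
        total += error(predict_next(model, frames[t]), frames[t + fps])
    return total / (len(frames) - fps)

# Fake ten-second 'clip' of four-pixel frames, purely for illustration.
clip = [[float(t + p) for p in range(4)] for t in range(250)]
print(score_on_clip(None, clip))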
Darmalus
Jedi Master
Posts: 1131
Joined: 2007-06-16 09:28am
Location: Mountain View, California

Post by Darmalus »

That's hilarious. Teaching AIs by having them play video games and browse the internet - you're going to make a bunch of good-for-nothing slacker AIs! Why spend all this money on this research when we have plenty of teenagers lying around who would leap at the chance to slack?

On a more serious note, the idea of using a virtual world (like the Second Life example) that humans also use to teach AIs sounds cool. Or were they talking about just the VR world, minus the random element of humanity?
Starglider
Miles Dyson
Posts: 8709
Joined: 2007-04-05 09:44pm
Location: Isle of Dogs

Post by Starglider »

Darmalus wrote:That's hilarious. Teaching AIs by having them play video games and browse the internet - you're going to make a bunch of good-for-nothing slacker AIs!
:) Actually there's serious money in this; just about everyone would like to see smarter AI in video games (not necessarily harder to beat - ideally we'd like AI smart enough to understand 'maximise the player's enjoyment without breaking suspension of disbelief' - but in most games even limited improvements would be much appreciated). There's even more money in making sense of the vast amounts of material available online - there are hordes of start-ups in this field, and I've personally accumulated at least a year of billable hours on projects applying machine learning techniques to online search and filtering.
On a more serious note, the idea of using a virtual world (like the 2nd Life example) that humans also use to teach AIs sounds cool. Or were they talking about just the VR world, minus the random element of humanity?
Generally the serious researchers are just talking about the VR environment and carefully controlled conditions; the online part is good for collaboration and decoupling, but that's it. The 'let's just let it wander around Second Life and see what happens!' remarks tend to come from cranks, philosophers and chatbot authors...