Sarevok wrote:
I always thought creating what you academic types call an AGI by mimicking the brain is similar to 19th century dreams of building a flying contraption by using flapping wings like a bird.

This example is used so often in the literature it has long since passed the point of being a dead horse. Generally by symbolic AI people bashing connectionist approaches.
Sarevok wrote:
Would you not be getting better results by using algorithms instead of simulating via spaghetti coding a neural net that only exists because gigahertz chips did not exist when it evolved?

I would say 'yes, obviously' - I'm at the 'the brain sucks' end of the 'human brain is a kludgy mess / human brain is amazing and optimal' spectrum of opinion, but even the biomorphism fans tend to admit that the design is a poor match with currently available hardware (of course they also tend to advocate implementing NNs in FPGAs and custom silicon).
Sarevok wrote:
If you can have a master algorithm that rewrites itself you can eliminate this problem and potentially create a thinking machine that can do anything humans can and do it better.

That is pretty much what I'm working on, but automated code generation is a pretty lonely niche at the moment. 90% of machine learning research is narrow-case function optimisation (SVMs, classic NNs, almost all genetic programming) and nearly all the rest is some brand of 'emergence'. There are still plenty of symbolic groups around of course (the entire 'semantic web' concept is essentially a funding refuge created for the 80s symbolic AI academics to flee to), but they focus on inference, not learning. Then there's the 'situated intelligence' people, who are still somewhere between the mouse and cockroach level.
PeZook wrote:
Or serve as soldiers. At the end of their tour they may even get their old personality back, after having it replaced by one optimized for combat :D

Having personal medical nanotech installed as a benefit of military service is plausible, within a certain band of cost. Of course with that kind of technology base the military personnel profile will look rather different.
Dooey Jo wrote:
Creating AI by emulating biological neurons seems pretty inefficient anyway.

The aerodynamics of the first aircraft (see how insidious these airplane analogies are?) were awful by modern standards. But that was all they knew how to build at the time. Neuromorphic general AI is a kind of lowest common denominator in the sense that you can do it with a lot of conventional science (mapping the anatomy and biochemistry of the brain in enough depth) and a lot of computing power (getting cheaper every year). I don't think it's a good way to make AI either, but the other ways are far more speculative and require unknown (but large) quantities of original insight, so a lot of people see it as a safe bet.
Dooey Jo wrote:
Plus, we'd need to know pretty much exactly how everything in a brain works:

Not necessarily. You can substitute computing power for knowledge. You just run lots of copies with slightly different layouts and settings. The search can of course be automated. In theory this has ethical issues for sentient AIs, but to be honest very few AI researchers seem bothered ATM.
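To make the 'automated search' part concrete, here's a minimal Python sketch of brute-forcing layouts and settings. The parameter names, their ranges, and the scoring function are all invented placeholders, not anything from a real simulator:

import random

# Brute-force search over simulation 'layouts and settings': generate many
# candidate configurations, score each one, keep the best.
def random_config():
    return {
        "neuron_count": random.choice([10_000, 100_000, 1_000_000]),
        "synapses_per_neuron": random.randint(100, 10_000),
        "plasticity_rate": 10 ** random.uniform(-4, -1),
        "noise_level": random.uniform(0.0, 0.1),
    }

def evaluate(config):
    # Placeholder fitness so the sketch runs; a real version would build and
    # run a simulation with this config and score how 'interesting' its
    # behaviour is.
    return (config["plasticity_rate"] * config["synapses_per_neuron"]
            - config["noise_level"] * config["neuron_count"] * 1e-6)

best_config, best_score = None, float("-inf")
for _ in range(1000):   # substitute computing power for knowledge
    cfg = random_config()
    score = evaluate(cfg)
    if score > best_score:
        best_config, best_score = cfg, score

print(best_score, best_config)

In practice you'd use something smarter than pure random sampling (evolutionary or Bayesian search), but the principle is the same: compute stands in for understanding.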
Dooey Jo wrote:
That knowledge itself is probably decades away.

Yes. This is one place where I'd actually go with Kurzweil's estimate: three plus or minus one decades for enough knowledge to make an AGI, assuming computing power continues to grow so that the remaining gaps can be bridged with some automated search. Progress in neuroscience has really picked up over the last 15 years and it's a lot less dependent on fads and individual breakthroughs than de novo AGI.
Fire Fly wrote:
We still don't even know how higher order integration works in something as simple as a nematode.

The allure of connectionist (and more generally, 'emergence-based') approaches is that you don't have to know how it works. In fact many researchers are perversely proud of not knowing how it works. To be fair it does prove that their system is capable of a certain amount of independent learning, but I still don't like any attitude that considers ignorance of functional mechanisms acceptable and even desirable.
Anyway, brain simulation does not rely on knowing how the brain works. When Kurzweil or the Blue Gene people or any of the other groups I'm aware of working on this say 'we're going to build a human-equivalent simulation in 20XX', they mean 'we're going to run a simulation of 100 billion elements we've made as much like real neurons as we can, in as much like the real connectivity pattern as we can, and fiddle with it until it does interesting stuff'.
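In toy form, that recipe looks something like the Python below: a few hundred leaky integrate-and-fire neurons with random wiring instead of 10^11 detailed ones, and every constant is an illustrative guess rather than a measured value:

import random

N = 500                 # neurons (the real target is ~10^11)
P_CONNECT = 0.02        # probability of a synapse between any two neurons
THRESHOLD = 1.0         # firing threshold
LEAK = 0.9              # fraction of membrane potential kept each step
WEIGHT = 0.15           # synaptic weight (all excitatory, for simplicity)
STEPS = 200

# random wiring: synapses[i] lists the neurons that neuron i projects to
synapses = [[j for j in range(N) if j != i and random.random() < P_CONNECT]
            for i in range(N)]
potential = [random.random() for _ in range(N)]

for t in range(STEPS):
    fired = [i for i, v in enumerate(potential) if v >= THRESHOLD]
    for i in fired:
        potential[i] = 0.0                      # reset after a spike
    potential = [v * LEAK + random.uniform(0.0, 0.05) for v in potential]  # leak + background input
    for i in fired:                             # propagate spikes to downstream neurons
        for j in synapses[i]:
            potential[j] += WEIGHT
    print(t, len(fired))                        # crude readout: spikes per step

The real projects use multi-compartment neuron models and biologically derived connectivity instead of these placeholders, but the 'run it and fiddle until it does interesting stuff' loop is essentially a scaled-up version of this.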
I've personally only worked with classic ANNs (standard backprop and spiking with basic Hebbian), not the elaborately biomorphic stuff (which AFAIK isn't useful for anything outside that specific research area yet). As such, take my opinion on this with a grain of salt, but it looks to me as if getting the connectivity plasticity model right will be by far the hardest part. Current simulation work focuses more heavily on getting the pulse propagation and synapse-level plasticity right. That's why I'd go with three-ish decades rather than the two-ish decade estimates; replicating the general connectivity pattern of the brain is a start, but I doubt learning is going to work well beyond basic pattern recognition without capturing the mechanism that evolves that connectivity. There are brain-like theories that try to get around the need for such a mechanism (e.g. Jeff Hawkins' Numenta concept), but once you deviate from the biological template that much you've lost most of the benefits of neuromorphic AGI (in terms of risk reduction) in the first place. So I class those as the kind of 'slavish adoration of biology' that Sarevok mentioned.
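For reference, the synapse-level plasticity that current simulations do capture is roughly the classic Hebbian rule - something like this toy Python update, with constants invented purely for illustration:

# Hebbian synapse-level plasticity: strengthen a weight when the presynaptic
# and postsynaptic neurons are active together, with a slow decay.
LEARNING_RATE = 0.01
DECAY = 0.0005

def hebbian_update(weights, pre_activity, post_activity):
    # weights[i][j] is the synapse from presynaptic neuron i to postsynaptic neuron j
    for i, pre in enumerate(pre_activity):
        for j, post in enumerate(post_activity):
            weights[i][j] += LEARNING_RATE * pre * post - DECAY * weights[i][j]
    return weights

# tiny example: 3 presynaptic and 2 postsynaptic neurons
w = [[0.1, 0.1], [0.1, 0.1], [0.1, 0.1]]
w = hebbian_update(w, pre_activity=[1.0, 0.0, 1.0], post_activity=[1.0, 1.0])

Note that a rule like this only adjusts the strengths of existing synapses; it never grows or prunes connections, which is exactly the structural plasticity I'm saying will be the hard part.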