Limits of Genetic Engineering?

SF: discuss futuristic sci-fi series, ideas, and crossovers.

Moderator: NecronLord

User avatar
Starglider
Miles Dyson
Posts: 8709
Joined: 2007-04-05 09:44pm
Location: Isle of Dogs
Contact:

Post by Starglider »

Sikon wrote:In contrast, a lot of sci-fi depicts the first sapient AI as more or less the equivalent of a brain in a box: a computer without any avatar body, imagined to have sapience just through pre-programming rather than gradually obtaining such through learning.
Brain in a box, yes. Completely pre-programmed with no learning is quite uncommon though. If anything sci-fi authors err on the 'just like a human brain' side far more often than they err on the 'just like a normal computer' side - at least in the last two decades, partly because of the failure of high-profile 'symbolic AI' projects, partly because they think 'it uses neural nets!' lets them get away with writing the AI like a human and not expending significant effort on how a truly alien entity thinks. Of course, once you've trained one AI, you can make as many perfect copies as you want, given sufficient hardware.
Without interaction with the environment prior to full-fledged sapience and without gradual learning like that of a human baby and child,
This is the embodiment argument and it is not equivalent to the learning issue. No one with a clue (i.e. everyone except the Cyc people) really expects to make an AGI by spoon-feeding knowledge only with no learning - and even if you could, it needs to be able to learn to actually be useful anyway, so this would imply a bizarre must-be-adult-human-level-to-work-at-all learning system or extreme stupidity on the part of the designers.

However a great deal of learning is possible without needing any sort of embodiment, physical or even VR-type simulation, via scanning of existing electronic material and use of internal modelling.
such implicitly is assuming human programmers somehow manage to encode "from scratch" the equivalent of the ~ 1E15 bit complexity of a sapient human brain.
You can't compare information encoded in neuron structure to lines of code. NNs have okay (though highly lossy) compression for media information, but sucky compression of causal complexity, and have to use massive redundancy to implement reasonably reliable/lossless storage.
However, a far more plausible technique would be just to program vastly less, like some gigabytes of seed code,
'Gigabytes of code' is a huge amount of complexity; the entire source code for a typical Linux distribution (with all bundled tools and apps) is about a gigabyte. Very few AGI designers envision actually /programming/ that much complexity - it would only be plausible for an AI built incrementally by a huge army of programmers, probably a whole industry. But of course the vast majority of complexity in a human brain is not code-equivalent (or at least, not the kind of high level structure a human would write). The DNA specifying brain function (at a rough guess, probably around 30 megabytes) is a much closer analogy to AI code, and that's still a lot bigger than most envisioned AGI code (but not knowledge) bases.
while having the first sapient AI have the complexity of its "programming" subsequently increase by orders of magnitude and become sapient through having an avatar body which interacts with the environment and learns over time.
Interacting with the environment is actually pretty worthless for the 'sapience' part. All it will do is improve motor skills (which are a conventional engineering problem more than an AI problem at this point) and develop the self-environment embedding model, which is something evolutionary methods have a big problem with but is actually pretty straightforward to design in de novo.

For the 'sapience' part you'd need lots of social interaction and language use. Having a physical body isn't really necessary or even useful for this. In particular you wouldn't want to lumber an AI with the ridiculously narrow bandwidth of a single sensorium when with adequate CPU it can analyse thousands of video streams and conversations, play competitive-negotiation games with subinstances of itself, hold hundreds of IM conversations with humans etc.
(Possibly, there might be additional similarities to biological brain development for the first sapient AI if its seed code was developed by understanding and partially copying that of progressively more complex biological organisms, from those with simple nervous systems to those with complex brains
Well yes, if you mimic biology closely, then your learning process /will/ resemble biological learning. I don't personally like the biomorphic approach, but I admit that the consequences of a /badly written/ biomorphic AGI have a significantly smaller chance of being disastrous than those of a /badly written/ de novo AGI. They will still probably be very bad.
Of course, after that, things get more complicated, as potentially sapient AIs may become more and more alien.
Of course they will, but if they're transhuman and they're actually interested in communicating with you then you may not notice, as emulating a human personality at the interface layer will rapidly become a trivial task.
If viewed acceptable and if desired, subsequent AIs might not need to reproduce the time-consuming process of learning,
Even without direct copy, any minimally sane level of modularisation will allow import/export of chunks of the KB. NNs aren't /that/ opaque, in that there are already analysis tools that serve as a proof of concept for snipping out and pasting in big chunks of knowledge while preserving referential meaning (though a lot more work would have to be done before it would work on something as complex as an upload).
That's when "brain in a box" type sapient AI computers without physical avatar bodies are technically possible ...
While a large minority of AGI researchers believe that embodiment makes things much easier regardless of whether the AI is closely biomorphic (obviously I am not in this group), only a small minority claim that embodiment is absolutely necessary in the development of a new AI.
At this point, if the AIs become recursively self-improving, if they modify the "programming" of their brains, it's hard to say what would become the desired form of the equipment used by them to interact with the environment.
'Hard to say' is an understatement. Try 'impossible except in very constrained and unusual circumstances'. This is part of the definition of 'Singularity' (in the 'predictive horizon resulting from transhuman intelligence' sense).
Perhaps none of it would resemble humanoid bodies at all, maybe.
I would imagine very little of it for the same reason that commuters do not ride to work on mechanical horses.

Post by Starglider »

Admiral Valdemar wrote:Blood, sweat and iron got us where we are now
The blood part at least is romantic to write about but sucks to actually experience.
Sometimes the human manufactured universe is just less interesting and fascinating. How dull would science lessons be if "Goddidit" was the answer?
I certainly find big complex engineering projects fascinating, at least as much as biology. The problem with 'goddidit' in that sense is that there's no interesting detail; it's just 'oh, we cannot comprehend/man was not meant to know' etc. Of course the religious nuts are pathetic at inventing plausible detail - every time they try making something up it just gets immediately shot down by scientists (not that the real fundies pay any attention).
It's fairly cost effective these days actually, a moderate life insurance policy will cover it.
For your head, at least.
Freezing your whole body is about four times as expensive, but the 'if they have the tech to revive you they'll have the tech to upload and/or clone you a new body' argument is a pretty convincing one.
Personally, I'd rather upload myself into a sufficiently advanced computer and screw virtual catgirls until I can tailor make my own body again.
Well I know plenty of people who are working on the uploading... of course I emphatically deny that I am in fact working on the horny virtual catgirls.
User avatar
Covenant
Sith Marauder
Posts: 4451
Joined: 2006-04-11 07:43am

Post by Covenant »

Well yes, which is an accomplishment (and yes pissing off fundies who believe 'man should not play god' is a minor bonus).
Using a combination of genetic engineering and cyborging to create a massive organic computer "Mother Brain" would be glorious. Man making God in his image. Which it sounds like they may be, compared to us, given enough time to reach their potential. A disturbing, but amusing result.

There's a degree of disappointment seeing that we're all destined to end up as either irrelevancies, or as brains in robot spider bodies. I've always found the idea of a massively advanced society interesting fiction, but it seems that unless you invent some kind of Magic (like psyker powers, or Spice), people just get obsolete, and are either replaced or sidelined. Not EVERY civ can end up being the Culture though, so it would be pleasant if there was still some variety of use that meatmen brought to the table. Just doesn't seem like there is.

A good followup question to this would be on the limits of AIs. If it turns out that we simply reach the limits of AIs, and that they're generally too expensive to put in every single robot soldier or menial worker, then it ends up adding some purpose to meat people. But unless humans (or whatever species it is we're speaking of; theoretically non-humans would behave the same way) can rank somewhere in the cost-to-benefit analysis, we're going to see them as nothing but an unnecessary drain on resources--or living in simulation inside of a server somewhere as a curiosity.

Post by Starglider »

Covenant wrote:Using a combination of genetic engineering and cyborging to create a massive organic computer "Mother Brain" would be glorious.
'Because it looked cool in anime' is never a good reason to embark on an engineering project. Do not go down this route or you will end up with an (ineffective) army of mecha :)
Man making God in his image.
Is a really bad idea, because of the many and varied ways in which humans suck. Don't set your sights so low. We can do better; we can pass on only the best and most desirable bits of humanity.

'Fear, anger... they can all be programmed out' - yes, I suppose Dr Korby was a personal hero of mine. ;)
There's a degree of disappointment seeing that we're all destined to end up as either irrelevancies, or as brains in robot spider bodies.
Disappointment? Hey, my robot spider will be tricked out with rocket launchers and flamethrowers. :)
people just get obsolete, and are either replaced or sidelined.
We can upgrade them. We (will) have the technology. We just need the legal authority to ignore their bleating 'no no, you can't replace my cells with superior nanoengineered substitutes, you can't correct my brain to rational norms, I'll lose my faith in god!' and act in their own ultimate best interests. Or failing that, the orbital mind control satellites. :)

Seriously, you don't have to be coercive to do this in a utopian society; everyone could in principle advance at their own pace. But basically the only chance of implementing that is having one or more benevolent superintelligences take over the world.
A good followup question to this would be on the limits of AIs.
Physically, see here, a conservative extrapolation based on known physics. Mentally, neither of us can imagine it.
If it turns out that we simply reach the limits of AIs, and that they're generally too expensive to put in every single robot soldier or menial worker,
Moore's law says no to that. And don't give me that 'but it isn't really a law' crap. People have been saying it will fail and then immediately been proved wrong for the last 20 years. Both the physics and the various plausible post-semiconductor manufacturing technologies support a cost-effective extension of this way past the compute density and power efficiency of human neural tissue - and of course well-designed AIs (I admit that this is really really hard) will make much more efficient use of available computing power than wetware anyway.
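To put a rough number on the kind of compounding being invoked here, a back-of-envelope sketch (the 18-month doubling period is an assumed, illustrative parameter, not a figure from this post):

```python
# Back-of-envelope Moore's-law-style compounding.
# Assumption (illustrative only): compute per unit cost doubles
# every 18 months (1.5 years).
def growth_factor(years, doubling_time_years=1.5):
    """Multiplicative gain in compute per unit cost after `years`."""
    return 2 ** (years / doubling_time_years)

# Twenty years of 18-month doublings is ~2^13.3, i.e. a roughly
# ten-thousand-fold gain; thirty years is 2^20, about a million-fold.
print(round(growth_factor(20)))
print(round(growth_factor(30)))
```

On those assumed numbers, "too expensive to put in every robot" is a condition that decays by orders of magnitude per decade, which is the substance of the reply above.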
then it ends up adding some purpose to meat people.
A marginal economic utility to your civilisation as a whole is not a good reason to keep living.
we're going to see them as nothing but an unnecessary drain on resources
Unnecessary compared to what? Keeping humanlike intelligences around isn't really that arbitrary a goal compared to any other goal - it's just quite unlikely to appear spontaneously (spontaneous goal system creation scenarios strongly favour things like 'convert as much of the universe as the AI can reach into computronium to run copies of itself').
or living in simulation inside of a server somewhere as a curiosity.
Is being implemented directly in reality that important to you? Unless you're doing something that actually requires it, e.g. science, does it make that much difference?

Post by Covenant »

Argh! I just liked the symbolism! I wasn't advocating it!

The idea of non-implementation in the universe doesn't bother me personally, but then again, I'm also happy with the idea there's no overarching purpose to life and that when we die we simply cease. It's just that it is a very bizarre fictional world, where a species has decided it will happily live out its life in simulation while the rest of everything whirrs along of its own accord.

Not that it's wrong, or improbable, or anything. It just doesn't make for exciting blockbuster movies.

"IN A WORLD WHERE EVERYONE IS PERFECTLY CONTENT FOREVER..."

...and it's downhill from there. So, let it be known, I'm not disagreeing, just disappointed! It seems to be a rather unexciting way for a civilization of infinite cosmic power to end up. Even with flamethrowers and rocket launchers on our Robot Spider bodies, it does suck the uniqueness out of civilizations once their weird and varied forms are all turned into something far more mundane.

Know what I mean? A species with the power to craft their own universes might seem more realistic as a computer race, but it's certainly not the most sexy idea I've ever heard.

Post by Covenant »

What I should say is that they make rather poor protagonists, as there is just about no way you can really inconvenience a simulated human in any way he's able to do anything about. Even if confronted by an equally powerful civilization, I can't imagine that the conflict would be terribly INTERESTING.

Post by Starglider »

Covenant wrote:then again, I'm also happy with the idea there's no overarching purpose to life and that when we die we simply cease.
Personally I am not happy with the latter; I am just not stupid enough to think that I can make the universe a nicer place by convincing myself that it must be so (i.e. I am not religious). As for the former, I am fairly sure that it is a badly posed question (as in, the whole notion of 'overarching purpose' is an incoherent one resulting from trying to rationalise the broken intuitions that come out of directly reflecting on the rather crude basis for human motivation).
Not that it's wrong, or improbable, or anything. It just doesn't make for exciting blockbuster movies.

"IN A WORLD WHERE EVERYONE IS PERFECTLY CONTENT FOREVER..."

...and it's downhill from there.
Iain Banks still manages to make the Culture novels interesting.
Even with flamethrowers and rocket launchers on our Robot Spider bodies, it does suck the uniqueness out of civilizations once their weird and varied forms are all turned into something far more mundane.
Say what? Currently we've got a planet overrun with incredibly boring monkeys, all of whom look almost the same and think almost the same. Transhumanism delivers vastly more variety, not less, both in sapients (mental and physical architecture) and non-sapients (pets and constructs). It's possible that all the matter will get turned into computronium and the interesting stuff will all be virtual, but that's just an abstraction layer, if you're not sentimental about direct embodiment it shouldn't make a difference.
A species with the power to craft their own universes might seem more realistic as a computer race, but it's certainly not the most sexy idea I've ever heard.
Don't get me started on the generally dire predictions people make about transhuman sex. Well, distant-transhuman sex anyway. In the short term, harem of virtual catgirls ho!

Post by Covenant »

I blame the vastly enjoyable Banks novels on the Minds, as well as the fact that despite the Culture's extreme levels of power, it does not seem immune to the excessive violence of more unenlightened neighbors. If they were simply giant computational engines, the whole series would really lose a lot of its soul, but the fact that they have such great personalities is a nice touch. Ships, droids, etc., all pleasant characters due to the fact that they've got some sass.

I would, however, be extremely interested in the civilization behind the Excession. That type of 'Uberciv' of arbitrarily advanced tech is closer to what I'm getting at: trying to figure out what your options are once you've basically opened up the entire playing field.

One good question would be: what drives a civilization at that level? The Culture is a very internally consistent entity, so it makes sense that they let the humans crawl around and have fun, while also dickering around in other people's affairs. A species that was pushed harder to become more efficient might not have the luxury of letting such a large drain on its resources go, and by and large, I'm thinking of how the civ would interact with other people more than how they interact amongst themselves.

That's where a lot of the entertainment value is, after all, without it becoming too Matrixy and dealing merely within the realm of the simulated. Seems like once you get to the point where you can simulate your existence, you might as well do so, and send yourself off into space in a c-fractional ship so that nothing evil is ever going to get a chance to bother with you. Essentially perfect security in perpetuity, allowing you to live out your fantasies forever.

Once you get to that point, after all, what do you really need colonies and starships and legions of robotic assembler fleets for? Wouldn't it be possible just to exist, free of disturbance, in your own Dyson Brainsphere, and mostly ignore the rest of the universe? What would be the impetus not to? If presented with the option of actually struggling with real life, or just living in a fantasy land of dinosaur-riding lusty catgirls, I can't imagine that the few people who decide to concern themselves with the real world actually have much to offer to the giant brainsphere that's simulating them.

Wouldn't that kind of a civilization basically 'retire' from the universe? They wouldn't really have any material assets worth stealing, and they wouldn't need them from anyone else either. I'm not even sure what they'd have to fight over.

Post by Starglider »

Covenant wrote:I blame the vastly enjoyable Banks novels on the Minds, as well as the fact that despite the Culture's extreme levels of power, it does not seem immune to the excessive violence of more unenlightened neighbors.
I take your point, but don't conflate what would actually be nice in reality with what makes a good story.
I would, however, be extremely interested in the civilization behind the Excession.
I doubt it could be explained in any meaningful sense and remain impressive and intriguing. Besides, an entire book written in 'Excession's statement style' would be worse than Feersum Endjinn :)
Once you get to that point, after all, what do you really need colonies and starships and legions of robotic assembler fleets for? Wouldn't it be possible just to exist, free of disturbance, in your own Dyson Brainsphere, and mostly ignore the rest of the universe?
Expansion is not evil. Creating new sapients has a lower priority than protecting existing ones, but where possible it's good; it increases the amount of fun/satisfaction/diversity/etc in the universe. It's better than leaving it as dead matter. Plus I would say that sufficiently advanced civilisations have a moral obligation not to allow natural selection to produce sapient species who have to suffer their way through all of pretechnological history. The very least they should do is seed all planets likely to develop life with brain-state-recording-nanobots and resurrect everyone who dies once the civ has made it to transcendence/interstellar travel (plus probably prevent civs being destroyed by cataclysms).

If presented with the option of actually struggling with real life, or just living in a fantasy land of dinosaur-riding lusty catgirls, I can't imagine that the few people who decide to concern themselves with the real world actually have much to offer to the giant brainsphere that's simulating them.

What do you mean by 'much to offer'? If there's only a few doing it, surely that means less contention for real-world-interface resources? Besides, lusty catgirls get boring after a while; I'm hoping that basically all the intelligences are constantly growing and improving themselves, which implies growing the amount of computing substrate available anyway.
They wouldn't really have any material assets worth stealing, and they wouldn't need them from anyone else either. I'm not even sure what they'd have to fight over.
Computronium might be a fairly precious commodity. Ultimately stars become precious commodities (barring radically new physics).

Post by Covenant »

No need to worry, I'm not confusing the two. I'm employing some wishful thinking to ask what might be possible, is all, but I recognize that for what it is.

I mean 'much to offer' in the sense that if someone wanted to be a scientist, I doubt their slim contribution of a simulated human mind would mean all that much to the super-AI. I would think that they'd be better off letting the computer use the processing power to do the thinking itself. And I can't see human pilots, or human janitors, or farmers having much value either, compared to the least complicated software package that can also do their job. I'm not sure if there's a job that's better done by a normal person, so I don't see why an AI would desire to keep the simulated brains of billions of humans running. Outside of some programming that they obey us, and we state that we want to be protected, I don't see what net gain a purely efficiency-driven system benefits from by having simulated humans around when they could be AIs.

Post by Starglider »

Covenant wrote:I mean 'much to offer' in the sense that if someone wanted to be a scientist, I doubt their slim contribution of a simulated human mind would mean all that much to the super-AI.
No, but that serves you right for remaining a simulated human for dog knows how long. If you want to be on the cutting edge, turn yourself into a super AI. We'll probably run out of physics to investigate relatively quickly but maths is pretty much a bottomless pit of things to prove and disprove, and there's always exploring the universe (including interpreting what you find), simulated worlds and all sorts of art to create.
I would think that they'd be better off letting the computer use the processing power to do the thinking itself.
Concentrating all computing power into a single completely rational (i.e. extremely inhuman) intelligence is probably the most efficient way, but what's the rush? In an ideal scenario it should not be a race. The upper edge of cognitive capabilities should not advance so fast that you have to struggle to keep up (yes, this implies enforcing a global but receding limit). Furthermore, even if other intelligences have discovered scientific principles, does this make it useless for you to do so personally? Is being first the only thing that matters? Discovering how the universe works isn't necessarily a game we all have to play together and only once.
And I can't see human pilots, or human janitors, or farmers having much value either, compared to the least complicated software package that can also do their job.
No. You have to redefine your notions of 'value', when essentially everything you do is being done for fun.
I'm not sure if there's a job that's better done by a normal person,
Only social/service ones, and even there only when the knowledge that it isn't a real person would make it impractical to use individually nonsapient but very convincing simulations run by a transhuman AGI (i.e. a transhuman AGI acting the part of various humans the way it would in a VR scenario/game).
so I don't see why an AI would desire to keep the simulated brains of billions of humans running.
Because we gave it that desire. The only other reason is curiosity about what humans would do in certain situations, which is not something you want to get caught up in.
Outside of some programming that they obey us,
That's one way to do it but probably not the best way. There's the question of who to obey when orders conflict and how to protect us from our own bad wishes without stifling us. Better to design a proper generally benevolent but not explicitly human-slaved goal system.
I don't see what net gain a purely efficiency-driven system benefits from by having simulated humans around when they could be AIs.
It doesn't, but 'efficiency' only has meaning in the context of achieving a supergoal(s) of some sort, and all supergoals are essentially arbitrary. There's no reason why that supergoal can't be general benevolence to all sapient intelligences, other than that we have to have the desire and the capability to engineer such a thing before a homicidal transhuman intelligence (or any other existential risk) comes along and kills us first.
User avatar
Sikon
Jedi Knight
Posts: 705
Joined: 2006-10-08 01:22am

Post by Sikon »

Starglider wrote:
Sikon wrote:In contrast, a lot of sci-fi depicts the first sapient AI as more or less the equivalent of a brain in a box: a computer without any avatar body, imagined to have sapience just through pre-programming rather than gradually obtaining such through learning.
Brain in a box, yes. Completely pre-programmed with no learning is quite uncommon though.
Sapient computers in popular sci-fi are depicted as capable of learning after creation. However, they tend to be shown as sapient from the moment of activation, just pre-programmed into sapience rather than gaining sapience through learning, even if the first of their kind. For example, HAL and M-5 weren't shown as gaining sapience from learning over time like the progression from a human baby to an adult but rather were shown as full-fledged sapients from the moment of their introduction in the movie 2001: A Space Odyssey and the Star Trek episode The Ultimate Computer respectively. (I enjoyed watching both, but that doesn't affect my point about the prevailing depiction of sapient AIs).

No offense, but you seem to be nitpicking too much here.
Starglider wrote:However a great deal of learning is possible without needing any sort of embodiment, physical or even VR-type simulation, via scanning of existing electronic material and use of internal modelling.

[...]

Interacting with the environment is actually pretty worthless for the 'sapience' part. All it will do is improve motor skills (which are a conventional engineering problem more than an AI problem at this point) and develop the self-environment embedding model, which is something evolutionary methods have a big problem with but is actually pretty straightforward to design in de novo.
A "great deal of learning" being possible through that alone is technically true. However, obtaining sapience would be challenging enough without the unnecessary constraint of denying a learning AI a physical embodiment with which to interact with the real-world environment. If the goal is to do it as fast and easily as possible, one is likely better off with a physical avatar.

One method is suggested by analogy with a human baby:

For example, a baby's brain may try various actions, such as making various speech sounds after hearing them. In a manner a little analogous to genetic programming, when a technique results in success as measured by the internal goal system, that method is reinforced and becomes more probable to be tried again. In this example, the baby pronouncing various sounds may eventually pronounce "mommy," which may be rewarded by attention, physical contact, or food, leading to learning, where the baby is starting to properly pronounce a word while associating it with a result. Hour by hour, day by day, year by year, the complexity of useful "programming" in the brain increases ... not just motor skills but general knowledge on every random topic. There are all sorts of little things learned over time, too many for any human to consciously remember and list them all, let alone directly program all aspects of them into electronic media in a reasonable period of time.
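The trial-and-reinforcement loop just described can be sketched in a few lines. Everything below (the candidate sounds, the reward rule, the weights) is an invented toy stand-in to illustrate the mechanism, not a model of real infant learning:

```python
import random

random.seed(0)  # deterministic run, for illustration only

# Selection weights over candidate "babbling" actions (toy values).
actions = {"ba": 1.0, "ga": 1.0, "mommy": 1.0}

def reward(sound):
    # Toy stand-in for the internal goal system: only one sound
    # happens to earn attention/contact/food in this environment.
    return 1.0 if sound == "mommy" else 0.0

for _ in range(1000):
    sounds = list(actions)
    # Try an action, with probability proportional to its weight...
    choice = random.choices(sounds, weights=[actions[s] for s in sounds])[0]
    # ...and reinforce it when the goal system registers success.
    actions[choice] += reward(choice)

# The rewarded sound comes to dominate the selection weights.
best = max(actions, key=actions.get)
```

The point of the sketch is the compounding: each success makes the successful action more likely to be tried again, so the useful "programming" accumulates from interaction rather than being hand-coded.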

Of course, an AI doesn't have to learn in exactly the same manner, but the general idea of learning with a physical avatar can have greater practical potential than trying to encode "common sense" about the real world into electronic material directly. Implementing the latter with software has historically been rather limited. For example, recent computer game software sometimes tries to have more of a "physics model" system, but even that created with thousands of manhours of coding is lousy compared to the complexity of the real world.

There's limits even to the information available from all electronic media and from all the books in all the world's libraries combined. A million manhours of effort could still overlook much, when so many of the thought patterns in human sapience didn't come from any publication's instructions.

Given an astronomically complex and sufficiently perfect virtual reality simulation, an AI could theoretically learn enough to reach sapience without a physical avatar interacting with the real world. But that has little to do with what is most reasonable for the first sapient AI's development succeeding in the easiest manner. Virtual simulations of today are at the level illustrated by computer games, not even remotely close to suitable. For example, the virtual environment of a few-gigabyte computer game is nothing in comparison to the complexity of the real world. While of course such simulations will improve in the future, it would tend to take far more manhours to encode a sufficiently complete virtual reality simulation (if that is even possible in any reasonable amount of time) than just to build a physical avatar body for the learning AI.
Starglider wrote:
Sikon wrote:Without interaction with the environment prior to full-fledged sapience and without gradual learning like that of a human baby and child, such implicitly is assuming human programmers somehow manage to encode "from scratch" the equivalent of the ~ 1E15 bit complexity of a sapient human brain.
You can't compare information encoded in neuron structure to lines of code. NNs have ok though highly lossy compression for media information, but sucky compression on causal complexity and have to use massive redundancy to implement reasonably reliable/lossless storage.
The estimate isn't exact to multiple significant figures or anything silly like that, but one certainly can tell that there's an astronomical amount of data involved in the sapient human brain. That is known to a degree sufficient for my point about the orders-of-magnitude difference between it and the relatively small amount of brain-structure information encoded in DNA.
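As a quick sanity check on that gap, a few lines of Python suffice. The ~1E15 bit brain figure and the ~30 megabyte DNA figure are the rough estimates from this discussion, not precise measurements:

```python
import math

# Rough figures from this discussion, not precise measurements:
brain_bits = 1e15              # ~1E15 bit complexity of a sapient human brain
dna_brain_bytes = 30e6         # ~30 MB guess for brain-specifying DNA
dna_brain_bits = dna_brain_bytes * 8

ratio = brain_bits / dna_brain_bits
print(f"gap: ~{ratio:.1e}x ({math.log10(ratio):.1f} orders of magnitude)")
# prints: gap: ~4.2e+06x (6.6 orders of magnitude)
```

Even with generous error bars on both inputs, the conclusion that the gap spans many orders of magnitude is robust.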

A 1998 article by Moravec is relevant here:
When Will Computer Hardware Match the Human Brain, Journal of Evolution and Technology wrote:Computers have far to go to match human strengths, and our estimates will depend on analogy and extrapolation. Fortunately, these are grounded in the first bit of the journey, now behind us. Thirty years of computer vision reveals that 1 MIPS can extract simple features from real-time imagery--tracking a white line or a white spot on a mottled background. 10 MIPS can follow complex gray-scale patches--as smart bombs, cruise missiles and early self-driving vans attest. 100 MIPS can follow moderately unpredictable features like roads--as recent long NAVLAB trips demonstrate. 1,000 MIPS will be adequate for coarse-grained three-dimensional spatial awareness--illustrated by several mid-resolution stereoscopic vision programs, including my own. 10,000 MIPS can find three-dimensional objects in clutter--suggested by several "bin-picking" and high-resolution stereo-vision demonstrations, which accomplish the task in an hour or so at 10 MIPS. The data fades there--research careers are too short, and computer memories too small, for significantly more elaborate experiments.

There are considerations other than sheer scale. At 1 MIPS the best results come from finely hand-crafted programs that distill sensor data with utmost efficiency. 100-MIPS processes weigh their inputs against a wide range of hypotheses, with many parameters, that learning programs adjust better than the overburdened programmers. Learning of all sorts will be increasingly important as computer power and robot programs grow. This effect is evident in related areas. At the close of the 1980s, as widely available computers reached 10 MIPS, good optical character reading (OCR) programs, able to read most printed and typewritten text, began to appear. They used hand-constructed "feature detectors" for parts of letter shapes, with very little learning. As computer power passed 100 MIPS, trainable OCR programs appeared that could learn unusual typestyles from examples, and the latest and best programs learn their entire data sets. Handwriting recognizers, used by the Post Office to sort mail, and in computers, notably Apple's Newton, have followed a similar path. Speech recognition also fits the model. Under the direction of Raj Reddy, who began his research at Stanford in the 1960s, Carnegie Mellon has led in computer transcription of continuous spoken speech. In 1992 Reddy's group demonstrated a program called Sphinx II on a 15-MIPS workstation with 100 MIPS of specialized signal-processing circuitry. Sphinx II was able to deal with arbitrary English speakers using a several-thousand-word vocabulary. The system's word detectors, encoded in statistical structures known as Markov tables, were shaped by an automatic learning process that digested hundreds of hours of spoken examples from thousands of Carnegie Mellon volunteers enticed by rewards of pizza and ice cream. Several practical voice-control and dictation systems are sold for personal computers today, and some heavy users are substituting larynx for wrist damage.

More computer power is needed to reach human performance, but how much? Human and animal brain sizes imply an answer, if we can relate nerve volume to computation. Structurally and functionally, one of the best understood neural assemblies is the retina of the vertebrate eye. Happily, similar operations have been developed for robot vision, handing us a rough conversion factor.

[...]The retina is a transparent, paper-thin layer of nerve tissue at the back of the eyeball on which the eye's lens projects an image of the world. It is connected by the optic nerve, a million-fiber cable, to regions deep in the brain. It is a part of the brain convenient for study, even in living animals because of its peripheral location and because its function is straightforward compared with the brain's other mysteries. A human retina is less than a centimeter square and a half-millimeter thick. It has about 100 million neurons, of five distinct kinds. Light-sensitive cells feed wide spanning horizontal cells and narrower bipolar cells, which are interconnected by amacrine cells, and finally ganglion cells, whose outgoing fibers bundle to form the optic nerve. Each of the million ganglion-cell axons carries signals from a particular patch of image, indicating light intensity differences over space or time: a million edge and motion detections. Overall, the retina seems to process about ten one-million-point images per second.

It takes robot vision programs about 100 computer instructions to derive single edge or motion detections from comparable video images. 100 million instructions are needed to do a million detections, and 1,000 MIPS to repeat them ten times per second to match the retina.

The 1,500 cubic centimeter human brain is about 100,000 times as large as the retina, suggesting that matching overall human behavior will take about 100 million MIPS of computer power.

[...]If 100 million MIPS could do the job of the human brain's 100 billion neurons, then one neuron is worth about 1/1,000 MIPS, i.e., 1,000 instructions per second. That's probably not enough to simulate an actual neuron, which can produce 1,000 finely timed pulses per second. Our estimate is for very efficient programs that imitate the aggregate function of thousand-neuron assemblies. Almost all nervous systems contain subassemblies that big.

The small nervous systems of insects and other invertebrates seem to be hardwired from birth, each neuron having its own special predetermined links and function. The few-hundred-million-bit insect genome is enough to specify connections of each of their hundred thousand neurons. Humans, on the other hand, have 100 billion neurons, but only a few billion bits of genome. The human brain seems to consist largely of regular structures whose neurons are trimmed away as skills are learned, like featureless marble blocks chiseled into individual sculptures. Analogously, robot programs were precisely hand-coded when they occupied only a few hundred thousand bytes of memory. Now that they've grown to tens of millions of bytes, most of their content is learned from example.

[...]Programs need memory as well as processing speed to do their work. The ratio of memory to speed has remained constant during computing history. The earliest electronic computers had a few thousand bytes of memory and could do a few thousand calculations per second. Medium computers of 1980 had a million bytes of memory and did a million calculations per second. Supercomputers in 1990 did a billion calculations per second and had a billion bytes of memory. The latest, greatest supercomputers can do a trillion calculations per second and can have a trillion bytes of memory. Dividing memory by speed defines a "time constant," roughly how long it takes the computer to run once through its memory. One megabyte per MIPS gives one second, a nice human interval.

[...]The best evidence about nervous system memory puts most of it in the synapses connecting the neurons. Molecular adjustments allow synapses to be in a number of distinguishable states, let's say one byte's worth. Then the 100-trillion-synapse brain would hold the equivalent of 100 million megabytes. This agrees with our earlier estimate that it would take 100 million MIPS to mimic the brain's function. The megabyte/MIPS ratio seems to hold for nervous systems too! The contingency is the other way around: computers are configured to interact at human time scales, and robots interacting with humans seem also to be best at that ratio.

[...]With our conversions, a 100-MIPS robot, for instance Navlab, has mental power similar to a 100,000-neuron housefly. The following figure rates various entities.
From here.

Estimates vary, perhaps by plus or minus one or two orders of magnitude between this article and others, but the important thing is that the figure is easily millions of megabytes rather than anything small like a few megabytes. Indeed, the difference between sapient human brain complexity and the starting DNA data is definitely large enough even with your estimate of 30 megabytes for the brain-structure encoding in human DNA.
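Moravec's back-of-envelope scaling above can be reproduced in a few lines; every figure below comes from the quoted article:

```python
# Figures from the quoted Moravec article:
instr_per_detection = 100           # robot-vision cost of one edge/motion detection
detections_per_image = 1_000_000    # a million-point retinal image
images_per_second = 10              # retina processes ~10 images per second

retina_mips = instr_per_detection * detections_per_image * images_per_second / 1e6
brain_to_retina_ratio = 100_000     # brain volume ~100,000x the retina
brain_mips = retina_mips * brain_to_retina_ratio

synapses = 100e12                   # ~100 trillion synapses
bytes_per_synapse = 1               # "let's say one byte's worth"
brain_megabytes = synapses * bytes_per_synapse / 1e6

print(f"retina: {retina_mips:,.0f} MIPS")           # prints: retina: 1,000 MIPS
print(f"brain:  {brain_mips:,.0f} MIPS")            # prints: brain:  100,000,000 MIPS
print(f"memory: {brain_megabytes:,.0f} megabytes")  # prints: memory: 100,000,000 megabytes
```

The equal 100-million figures for MIPS and megabytes are exactly the one-megabyte-per-MIPS "time constant" the article remarks on.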

[Image: charts from Moravec's article comparing the processing power and memory of various computers against the estimated brainpower of animals and humans]

The above is a good illustration that insufficient hardware remains one limit today, but a point may be fast approaching where software progress lagging behind hardware progress becomes the primary issue.
Starglider wrote:it would only be plausible for an AI built incrementally by a huge army of programmers, probably a whole industry
If human-level AI is eventually developed in a given timeframe, the chances of success would tend to be better with $10 billion, $100 billion, or $1 trillion spent over the years than with a project employing a small number of people and a millionth as much. Success within a given timeframe is hard to guarantee even with large funds, but the odds would still favor the well-funded effort over the small project.

For example, a small number of programmers may, within several years of effort, code a computer game of today including its very primitive AI, but human-level AI is so many orders of magnitude beyond that, amounting to what would be the greatest engineering accomplishment in history.

Of course, to some degree, many small groups of researchers may in aggregate amount to the equivalent of a large army of programmers over the decades, sharing information and building upon each other's work, perhaps eventually obtaining the equivalent of a vast project despite individually small efforts. There should be incremental progress beyond current robots, which tend to be around insect level.
Starglider wrote:But of course the vast majority of complexity in a human brain is not code-equivalent (or at least, not the kind of high level structure a human would write). The DNA specifying brain function (at a rough guess, probably around 30 megabytes) is a much closer analogy to AI code, and that's still a lot bigger than most envisioned AGI code (but not knowledge) bases.
That may be very true for what's theoretically possible. Given sufficiently optimized, near-perfect programming like that which superhuman AIs could manage, equal or greater efficiency than the biological seed code is undoubtedly possible. However, one should hesitate to assume the human-written seed code for the first sapient AI would be that optimized and efficient. DNA's coding is shockingly efficient for what it accomplishes relative to human programmers, considering that 30 megabytes is what just AcroRd32.exe (the Acrobat PDF reader) takes in memory.
Starglider wrote:For the 'sapience' part you'd need lots of social interaction and language use. Having a physical body isn't really necessary or even useful for this. In particular you wouldn't want to lumber an AI with the ridiculously narrow bandwidth of a single sensorium when with adequate CPU it can analyse thousands of video streams and conversations, play competitive-negotiation games with subinstances of itself, hold hundreds of IM conversations with humans etc.
A sufficiently powerful and advanced AI could handle millions, billions, or more avatar bodies or other sources and recipients of data at once.

However, it may be easier to first develop an AI that can handle one body with sapient-level intelligence, following the general KISS principle of engineering. (Actually, it might be easiest to proceed very incrementally, such as first managing the equivalent of a lizard's general intelligence and then progressively greater challenges, breaking a hard problem down into smaller steps to be mastered one at a time before focusing on the next.) When the AI becomes powerful enough in time, it can always control more avatars and take in more sources of data later.
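A minimal sketch of that master-one-stage-before-the-next idea, with an entirely hypothetical agent interface (the `train_on` method, the stage names, and the mastery threshold are illustrative assumptions, not any real system):

```python
def train_incrementally(agent, stages, mastery_threshold=0.95, max_epochs=1000):
    """Advance to the next stage only once the current one is mastered."""
    for stage in stages:
        for _ in range(max_epochs):
            score = agent.train_on(stage)   # one training pass, returns 0.0-1.0
            if score >= mastery_threshold:  # stage mastered; move on
                break
        else:
            raise RuntimeError(f"stage {stage!r} never mastered")

# Toy usage with a dummy agent that improves steadily on each stage:
class DummyAgent:
    def __init__(self):
        self.skill = {}

    def train_on(self, stage):
        self.skill[stage] = min(1.0, self.skill.get(stage, 0.0) + 0.2)
        return self.skill[stage]

train_incrementally(DummyAgent(), ["lizard-level", "mammal-level", "human-level"])
```

The point of the structure is just the gating: later, harder stages are never attempted until the easier ones are solidly in hand.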
Starglider wrote:
Sikon wrote:Possibly, there might be additional similarities to biological brain development for the first sapient AI if its seed code was developed by understanding and partially copying that of progressively more complex biological organisms, from those with simple nervous systems to those with complex brains
Well yes, if you mimic biology closely, then your learning process /will/ resemble biological learning. I don't personally like the biomorphic approach, but I admit that the consequences of a /badly written/ biomorphic AGI have a significantly smaller chance of being disastrous than a /badly written/ de novo AGI.
That's a good point. In fact, if the first sapient AI was simply raised to human-level brainpower and operating speed before proceeding further, while resembling biological intelligence enough to be (relatively) understood by humans, that could be a powerful safeguard. In that case, only upon passing a safety evaluation would it be subsequently cleared to receive continuing upgrades taking it to the less understandable, harder to predict, more alien state of superhuman intelligence. A primitive analogy is that those in sensitive military positions like personnel handling nuclear weapons are given psych screening first.
Starglider wrote:
Sikon wrote:Of course, after that, things get more complicated, as potentially sapient AIs may become more and more alien.
Of course they will, but if they're transhuman and they're actually interested in communicating with you then you may not notice, as emulating a human personality at the interface layer will rapidly become a trivial task.
True ... though the technology to upgrade one's own intelligence might also be obtained, reducing the gap.
Starglider wrote:While a large minority of AGI researchers believe that embodiment makes things much easier regardless of whether the AI is closely biomorphic (obviously I am not in this group), only a small minority claim that embodiment is absolutely necessary in the development of a new AI.
When current robots don't compare well in versatile intelligence even to lizards, with progress slowed by software limitations even as hardware capabilities improve, any technique that makes it much easier to traverse the vast gap to human and superhuman intelligence in a reasonable period of time is practically a necessity. Obtaining such would be hard enough without unnecessary constraints on the solution, like forgoing the benefit of a physical avatar body through which the AI learns from interaction with the real world.
Earth is the cradle of humanity, but one cannot live in the cradle forever.

― Konstantin Tsiolkovsky
User avatar
Sikon
Jedi Knight
Posts: 705
Joined: 2006-10-08 01:22am

Post by Sikon »

On another topic, here's some additional comments adding to my past posts in this thread:

While requiring advanced genetic engineering, high human radiation resistance could have much civilian benefit beyond military applications alone. For example, radiation shielding tends to require more mass than anything else in space habitats. (There are some possibilities for superconducting magnetic shielding instead of thick mass shielding, but some disadvantages leave that uncertain.) Radiation shielding isn't an excessive problem, as affordably having enough is possible and plausible, but it would be rather convenient to be able to skip it without health concerns about the background cosmic radiation.

Also convenient could be combining radiation resistance with widespread use of long-lasting radioisotope batteries in portable electronics, such as the equivalent of laptops that run practically forever.

Of course, sufficiently advanced artificial bodies could have as much radiation resistance while being superior to genetic engineering of biological cells alone.

*********

Perhaps the ultimate means by which the power of intelligence is expressed is through technological development. For example, if the efforts of human scientists and engineers at challenges like life extension or self-replicating technology take too long, one counter might be to develop superhuman AIs to succeed faster. Conversely, if developing general AI took longer than expected while the life sciences advanced enough to allow sufficient genetic engineering, progress toward AI might be accelerated by researchers with brains beyond the greatest homo sapiens geniuses.

Even in regard to military applications, true power is self-replicating factories able to turn the almost countless quadrillions of tons of available extraterrestrial material into a more or less unstoppable horde of many quadrillions of drones, microbots, missiles, or other weapons, nuclear or otherwise. Such raw industrial power can be obtained through technological advancement, through the application of intelligence, possibly human intelligence but possibly faster through superhuman intelligence. It is uncertain whether there would be war in such a future at all, but, if war still occurred, the preceding could be the military result of superhuman intelligence.

A side with sufficiently superior technology developed by superior intellect may not just have smarter soldiers than the other side, but rather such an astronomical, orders-of-magnitude edge in raw power as to be capable of squashing the enemy like a bug.