Machines 'to match man by 2029'

N&P: Discuss governments, nations, politics and recent related news here.

Moderators: Alyrium Denryle, Edi, K. A. Pital

User avatar
phongn
Rebel Leader
Posts: 18487
Joined: 2002-07-03 11:11pm

Post by phongn »

Stark wrote:I remember a thread on this forum (fiction related) that involved an AI emerging in some lab somewhere by accident and building impregnable energy shields out ... stuff lying around... using ... intelligence. It was just fiction, but he believed it was plausible, ie that 'super intelligent' = 'freed from physical limitations'.
If you're referring to The Metamorphosis of Prime Intellect, the superpowers of that system come from the design of its processors (the so-called "Correlation Effect"), not because it is a super-intelligent being.
User avatar
Stark
Emperor's Hand
Posts: 36169
Joined: 2002-07-03 09:56pm
Location: Brisbane, Australia

Post by Stark »

phongn wrote:If you're referring to The Metamorphosis of Prime Intellect, the superpowers of that system come from the design of its processors (the so-called "Correlation Effect"), not because it is a super-intelligent being.
Nah I don't think that's it, it was just some worldbuilding silliness I think.
User avatar
SirNitram
Rest in Peace, Black Mage
Posts: 28367
Joined: 2002-07-03 04:48pm
Location: Somewhere between nowhere and everywhere

Post by SirNitram »

Gullible Jones wrote:Supposedly, there's the issue that a sufficiently smart AGI that can talk to us can "hack" our brains - that is, in non-wankspeak, it could get very good at manipulating people, and persuade its tenders to plug it into the web (or a tank, or whatever).

I'd hazard that this, like everything else about AGIs, is wanked completely out of proportion - although I sometimes wonder, seeing how easy people are to fool. It is something to watch out for though, if for whatever retarded reason we actually wanted to build a self-improving AGI.
How about we use the real word for it, 'Social Engineering'? Gods, people and their buzzwords.

Social engineering is a real thing, a real skill. It's the basis of con jobs, and many forms of crime. If you've ever had to sit through a lecture or presentation on workplace security, you've become aware of it. And the simple, blunt fact is, it's real, it works, and people buy it time and again.
Manic Progressive: A liberal who violently swings from anger at politicos to despondency over them.

Out Of Context theatre: Ron Paul has repeatedly said he's not a racist. - Destructinator XIII on why Ron Paul isn't racist.

Shadowy Overlord - BMs/Black Mage Monkey - BOTM/Jetfire - Cybertron's Finest/General Miscreant/ASVS/Supermoderator Emeritus

Debator Classification: Trollhunter
User avatar
Gullible Jones
Jedi Knight
Posts: 674
Joined: 2007-10-17 12:18am

Post by Gullible Jones »

Sir Nitram wrote: How about we use the real word for it, 'Social Engineering'? Gods, people and their buzzwords.
But... but... Social engineering sounds, like, all mundane and stuff! We can't use mundane terms when we're talking about the Holy Singularity! :lol:
User avatar
SirNitram
Rest in Peace, Black Mage
Posts: 28367
Joined: 2002-07-03 04:48pm
Location: Somewhere between nowhere and everywhere

Post by SirNitram »

Gullible Jones wrote:
Sir Nitram wrote: How about we use the real word for it, 'Social Engineering'? Gods, people and their buzzwords.
But... but... Social engineering sounds, like, all mundane and stuff! We can't use mundane terms when we're talking about the Holy Singularity! :lol:
If you say memetic warfare, I will wedgie you right through the network of tubes!
Manic Progressive: A liberal who violently swings from anger at politicos to despondency over them.

Out Of Context theatre: Ron Paul has repeatedly said he's not a racist. - Destructinator XIII on why Ron Paul isn't racist.

Shadowy Overlord - BMs/Black Mage Monkey - BOTM/Jetfire - Cybertron's Finest/General Miscreant/ASVS/Supermoderator Emeritus

Debator Classification: Trollhunter
User avatar
Gullible Jones
Jedi Knight
Posts: 674
Joined: 2007-10-17 12:18am

Post by Gullible Jones »

Funnily enough, I was going to use that as another example of buzzword abuse, but thought better of it...
User avatar
Xon
Sith Acolyte
Posts: 6206
Joined: 2002-07-16 06:12am
Location: Western Australia

Post by Xon »

As a programmer who has to deal with stupidly fragile and undocumented systems which sit behind barriers that are physically impossible to cross barring code exploits in the underlying libraries (good luck finding out the versions we use), and with social engineering strongly hampered by the devs not talking to customers (they go through tech support, who don't have access to the guts of the backend, and none of us want to speak to them), it is ludicrous to assume some deus ex machina, err, AGI will know exactly where to look, exactly who to con, and exactly how to con them to magically get stuff done.

Understanding a software system by brute-force inspection is a hopelessly complex task, and it is only going to get worse. Then your mystically powerful AI will somehow need to tell the difference between intended functionality, unintended design flaws, deliberate design flaws and bugs. All without having the designers of the system, or access to the documentation (often stored in separate systems), on call.
"Okay, I'll have the truth with a side order of clarity." ~ Dr. Daniel Jackson.
"Reality has a well-known liberal bias." ~ Stephen Colbert
"One Drive, One Partition, the One True Path" ~ ars technica forums - warrens - on hhd partitioning schemes.
User avatar
Starglider
Miles Dyson
Posts: 8709
Joined: 2007-04-05 09:44pm
Location: Isle of Dogs
Contact:

Post by Starglider »

Sikon wrote:This discussion has been skipping over mention of hardware limits too much. The hardware capability of the average desktop computer today is around insect-level to (maybe) lizard-level.
The average human has around 100 trillion synapses operating at a maximum sustained rate of 200 Hz. The average desktop computer has around half a billion transistors in the CPU, operating at about 2 GHz. Obviously this gives the desktop computer a hundred times the computing power of the human brain before you even start considering computing elements in the motherboard, graphics card, memory cells etc.

Of course this comparison is nonsensical, because the work done by a switching transistor isn't directly comparable to the work done by a synapse (plus the duty cycle is different - though usually in favour of the transistor). But your comparison is probably also nonsensical; in fact, if you're comparing synapse firings to CPU instructions, it's considerably worse, because you're willfully comparing completely different levels of organisation.

That said, feel free to give the calculations behind your comparison.
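For concreteness, the raw event-count arithmetic behind the 'hundred times' figure works out roughly as follows. This is only a back-of-the-envelope sketch using the round numbers quoted above, and, as both posters note, transistor switchings and synapse firings are not really commensurable units:

```python
# Rough switching-event arithmetic behind the comparison above.
# These are the round figures quoted in the post, not measurements.
SYNAPSES = 100e12          # ~100 trillion synapses in a human brain
SYNAPSE_RATE_HZ = 200      # maximum sustained firing rate quoted

CPU_TRANSISTORS = 0.5e9    # ~half a billion transistors in a desktop CPU
CPU_CLOCK_HZ = 2e9         # ~2 GHz clock

brain_events_per_s = SYNAPSES * SYNAPSE_RATE_HZ      # ~2e16
cpu_events_per_s = CPU_TRANSISTORS * CPU_CLOCK_HZ    # ~1e18

print(f"brain: {brain_events_per_s:.1e} synapse firings/s")
print(f"CPU:   {cpu_events_per_s:.1e} transistor switchings/s")
print(f"ratio: ~{cpu_events_per_s / brain_events_per_s:.0f}x in favour of the CPU")
# ~50x from the CPU alone; 'a hundred times' once motherboard, GPU and memory
# are counted. As the post itself stresses, the units are not comparable.
```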
Of course, some software can be better than others,
These kinds of comparisons usually talk about the (rough) amount of CPU-level FP power required to simulate biological brains in high fidelity software. Neuromorphic AGIs are bad enough, with a structure completely incompatible with the von Neumann machine architecture, but uploads are even less efficient. 'Can be better than others' doesn't begin to cover it; the difference between an upload and a normative rational AGI implemented directly in code is (extrapolating from the most relevant experiments I know of) somewhere between five and ten orders of magnitude. On top of that there's the fact that the single-clock-cycle maximum neuron-to-neuron latency and extremely high average bisection bandwidth of the brain make it very difficult to distribute over a supercomputing cluster, never mind a wide-area cluster. By comparison, direct-code rational AGIs are almost trivial to scale, even over WAN links.
But uber powerful AI requires hardware advancement beyond today.
Only if we're simulating brains. This is actually good news, in that the don't-really-know-what-we're-doing neuromorphic and/or evolved-AGI projects haven't got enough brute force to make real progress yet. If they did, I'd be considerably more pessimistic about (directly) developing a rational/normative seed AI in time.
For example, for nanotech, given the complexity of a self-replicating nanorobot compared to the tiny number of atoms that can be precisely manipulated with existing electron microscopes per hour, per month, even if one knew exactly how to build it down to full blueprints, the process of building all the precursor industrial infrastructure would take some time.
I'm not a nanotechnology expert; I just know a few. However, when I've asked this question, they've laid out various paths that build up the tools incrementally, either via fully 'dry' processes or by starting with biotech and progressing through 'wet' nanotech. Going straight to general assemblers is apparently still an option, but not a favoured one. Of course, they're just guessing at how a superintelligence would solve the problem, as much as anyone else.

That said, note that the idea of self-replicating nanobots is widely derided as a sci-fi brainbug these days. The focus is on microbots and macroscale static arrays of specialised nanoscale assemblers. Self-replication at the nanoscale is in no way required for a 'rapid self-assembling infrastructure'; nanoscale manipulation, micrometre motility and macroscale self-sufficiency are quite adequate.
Still, while a concern under some possible future circumstances, it is a ways off at a minimum.
Only for AGIs that are nothing more than biological brain simulations. Your comparison does not apply to any other kind of AGI. You're just comparing the most obvious numbers without checking for comparability. Structure matters.
User avatar
Starglider
Miles Dyson
Posts: 8709
Joined: 2007-04-05 09:44pm
Location: Isle of Dogs
Contact:

Post by Starglider »

His Divine Shadow wrote:I just do simple programming for web applications and such.
As it happens, that is exactly what my company is currently targeting as the 'low-hanging fruit' for a major advance in software development automation (sufficient to replace some of the lower-level code grinders).
I'm just going on my observations of the increase of complexity and sophistication in the industry. It seems like we're coming to a point where we will simply need more AI in our programs, as things are getting too complex for humans to keep track of reliably, or even to use.
Absolutely true. I've made several presentations to venture capitalists recently where I've said exactly this. In some areas (e.g. data mining), existing narrow AI approaches are progressing to take up the slack. The problem is, no one seems to know how to build genuinely better software engineering tools. Real progress in abstraction of program specification ground to a halt with the failure of 'fourth generation languages' in the 1980s. Since then we've just been making incremental improvements to third generation languages, mostly playing with the syntax and complexity hiding mechanisms.
So I am quite prepared to believe a steady AI development will occur over the decades to keep pace with increasing hardware capabilities and software complexity.
AI development hasn't been limited by demand. The basic demand for humanlike AI has been there since before digital computers were even invented, certainly since the notion of humanlike AI was popularised in the mid-20th century. We have plenty of demand. Rather, AI development has been limited by the available hardware and theoretical basis.
Neural-network-based AIs seem the most likely to be used; we're doing a lot with those already.
They're good for some applications, but not for software engineering or anything else involving a lot of symbolic logic. Unsurprisingly there have been a lot of attempts to hybridise classic symbolic AI and NNs (the popularity of this peaked around 1992), with mostly dismal results. Actually I'd say simple ANNs (the sort you actually find in industrial use) have peaked; support vector machines and Bayesian networks are replacing them in a lot of applications (for various practical reasons). Academic ANN research is really converging on the detailed brain simulation approach, though there are some people persevering with non-slavishly-biomorphic spiking ANN approaches.
I think the military have got some piloting planes. I think any true AI is likely to spring from a world filled with very advanced AI's already.
Most narrow AI research is quite distinct from AGI research (surprisingly so, if you're not very familiar with the field). That said, narrow AI making more money and getting a higher public profile (and business acceptance) probably does make it easier to raise money to research AGI.
I've pretty much figured the best way is ever more advanced neural networks, mimicking the human mind in other words. Of course that's something completely different from what you are after.
Where is 'best' coming from? I would say that this is the path with the lowest purely technological risk, if by 'advanced' you mean 'slavishly biomorphic', because it's eventually solvable by throwing man-hours at the problem. That completely ignores the takeoff and 'friendly superintelligence' issues though. It's certainly not the best in performance terms, as hinted at even by the fact that SVMs and GAs perform much better than NNs on many narrow AI tasks (as occurred in several contracts I've worked on where multiple narrow AI approaches were tried).
User avatar
Sikon
Jedi Knight
Posts: 705
Joined: 2006-10-08 01:22am

Post by Sikon »

Starglider wrote:
Sikon wrote:This discussion has been skipping over mention of hardware limits too much. The hardware capability of the average desktop computer today is around insect-level to (maybe) lizard-level.
The average human has around 100 trillion synapses operating at a maximum sustained rate of 200 Hz. The average desktop computer has around half a billion transistors in the CPU, operating at about 2 GHz. Obviously this gives the desktop computer a hundred times the computing power of the human brain before you even start considering computing elements in the motherboard, graphics card, memory cells etc.
(Bolding added)

You seriously don't see that your method of estimating computing power coming to such an unbelievable conclusion is an indication of making wrong assumptions?

Windows mspaint.exe can use 1% or more of my laptop's computing power. By your estimation method, that software program is using around as much computing power as all of my thoughts, and, if the programmers wrote it with different code, it could do the equivalent of my brain.

Not plausible...


Modern computer processors are 32-bit or 64-bit. One common package has a total of 478 pins (the equivalent of wires) connecting it to the socket on the motherboard, through which all input and output for the CPU goes. (Sure, there's a far greater number of transistors inside; even an 8-bit ALU uses a lot of transistors, let alone a Pentium chip). It operates very quickly but doesn't perform many operations at the same time.

The human brain, meanwhile, not only has 100 billion neurons, but each neuron itself connects to thousands of other neurons. There are trillions of connections, like having trillions of wires, and it is way beyond a 64-bit Pentium 4 processor. The Pentium CPU has a far greater operating speed, GHz versus no more than hundreds of hertz, but, even after consideration of that, the net result is still far less.

More precisely, the net result is on the order of 1000 MIPS for the desktop computer versus 100,000,000 MIPS for the human brain (million instructions per second).
Starglider wrote:Of course this comparison is nonsensical, because the work done by a switching transistor isn't directly comparable to the work done by a synapse (plus the duty cycle is different - though usually in favour of the transistor). But your comparison is probably also nonsencial; in fact if you're comparing synapse firings to CPU instructions, it's considerably worse, because you're willfully comparing completely different levels of organisation.

That said, feel free to give the calculations behind your comparison.
Moravec describes well how the preceding processing power difference can be estimated:
When Will Computer Hardware Match the Human Brain, Journal of Evolution and Technology wrote: Computers have far to go to match human strengths, and our estimates will depend on analogy and extrapolation. Fortunately, these are grounded in the first bit of the journey, now behind us. Thirty years of computer vision reveals that 1 MIPS can extract simple features from real-time imagery--tracking a white line or a white spot on a mottled background. 10 MIPS can follow complex gray-scale patches--as smart bombs, cruise missiles and early self-driving vans attest. 100 MIPS can follow moderately unpredictable features like roads--as recent long NAVLAB trips demonstrate. 1,000 MIPS will be adequate for coarse-grained three-dimensional spatial awareness--illustrated by several mid-resolution stereoscopic vision programs, including my own. 10,000 MIPS can find three-dimensional objects in clutter--suggested by several "bin-picking" and high-resolution stereo-vision demonstrations, which accomplish the task in an hour or so at 10 MIPS. The data fades there--research careers are too short, and computer memories too small, for significantly more elaborate experiments. [...]

At the close of the 1980s, as widely available computers reached 10 MIPS, good optical character reading (OCR) programs, able to read most printed and typewritten text, began to appear. They used hand-constructed "feature detectors" for parts of letter shapes, with very little learning. As computer power passed 100 MIPS, trainable OCR programs appeared that could learn unusual typestyles from examples, and the latest and best programs learn their entire data sets. [...]

More computer power is needed to reach human performance, but how much? Human and animal brain sizes imply an answer, if we can relate nerve volume to computation. Structurally and functionally, one of the best understood neural assemblies is the retina of the vertebrate eye. Happily, similar operations have been developed for robot vision, handing us a rough conversion factor. [...]

The retina is a transparent, paper-thin layer of nerve tissue at the back of the eyeball on which the eye's lens projects an image of the world. It is connected by the optic nerve, a million-fiber cable, to regions deep in the brain. [...]

Each of the million ganglion-cell axons carries signals from a particular patch of image, indicating light intensity differences over space or time: a million edge and motion detections. Overall, the retina seems to process about ten one-million-point images per second. [...]

It takes robot vision programs about 100 computer instructions to derive single edge or motion detections from comparable video images. 100 million instructions are needed to do a million detections, and 1,000 MIPS to repeat them ten times per second to match the retina.

[There is no many-order-of-magnitude discrepancy here, such as more optimized robot vision programs being able to derive single edge or motion detections with 1/100th of an instruction each instead of 100 instructions each; the overall hardware requirement described applies whatever the software.]


The 1,500 cubic centimeter human brain is about 100,000 times as large as the retina, suggesting that matching overall human behavior will take about 100 million MIPS of computer power. [...]

If 100 million MIPS could do the job of the human brain's 100 billion neurons, then one neuron is worth about 1/1,000 MIPS, i.e., 1,000 instructions per second. That's probably not enough to simulate an actual neuron, which can produce 1,000 finely timed pulses per second. Our estimate is for very efficient programs that imitate the aggregate function of thousand-neuron assemblies. [...]

The best evidence about nervous system memory puts most of it in the synapses connecting the neurons. Molecular adjustments allow synapses to be in a number of distinguishable states, let's say one byte's worth. Then the 100-trillion-synapse brain would hold the equivalent of 100 million megabytes. This agrees with our earlier estimate that it would take 100 million MIPS to mimic the brain's function. [...]

With our conversions, a 100-MIPS robot, for instance Navlab, has mental power similar to a 100,000-neuron housefly. [...]
From here

(Bolding and comment in brackets added).

His estimates are well-supported and make perfect sense, including the actual real-world performance of the best robots today being around insect-level, precisely as would be expected from the MIPS performance of common computers of today.
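For reference, the retina-scaling arithmetic in the quoted passage can be reproduced directly from the figures Moravec gives; a minimal sketch:

```python
# Moravec's retina-scaling estimate, using only the figures quoted above.
INSTRUCTIONS_PER_DETECTION = 100   # robot-vision cost of one edge/motion detection
DETECTIONS_PER_IMAGE = 1e6         # ~a million ganglion-cell outputs per image
IMAGES_PER_SECOND = 10             # the retina processes ~10 images per second
BRAIN_TO_RETINA_RATIO = 100_000    # the brain is ~100,000x the retina by volume

retina_mips = INSTRUCTIONS_PER_DETECTION * DETECTIONS_PER_IMAGE * IMAGES_PER_SECOND / 1e6
brain_mips = retina_mips * BRAIN_TO_RETINA_RATIO

print(f"retina equivalent: {retina_mips:,.0f} MIPS")   # 1,000 MIPS
print(f"brain equivalent:  {brain_mips:.1e} MIPS")     # ~1e8 MIPS (100 million MIPS)

# Memory cross-check from the same passage: 100 trillion synapses at ~1 byte each.
synapse_bytes = 100e12
print(f"synaptic memory:   ~{synapse_bytes / 1e6:,.0f} megabytes")  # ~100 million MB
```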
Starglider wrote:That said, note that the idea of self-replicating nanobots is widely derrided as a sci-fi brainbug these days. The focus is on microbots and macroscale static arrays of specialised nanoscale assemblers. Self-replication at the nanoscale is in no way required for a 'rapid self-assembling infrastructure'; nanoscale manipulation, micrometre motility and macroscale self-sufficiency are quite adequate.
Self-replication is among the greatest of possible technologies, allowing almost unlimited capability and wealth, since every ten doubling cycles mean a factor-of-1000 increase without a corresponding increase in human labor cost.

Of course, such a potential is not exclusive to nanorobots, as self-replicating macroscale factories are also a possibility, indeed one with some advantages. Replication doesn't have to be atomic-level perfect, only good enough to last up to tens of generations.

Naturally, the challenge and complexity place it out of development range today, though, unlike much in sci-fi such as FTL, it is actually possible within the laws of physics, and mankind or its posthuman successors should develop it someday.
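The 'ten doublings' point is plain compound growth; a trivial sketch, with the 30-day replication cycle time being purely an illustrative assumption:

```python
# Compound growth of a self-replicating factory seed.
# The 30-day cycle time is a placeholder assumption for illustration only.
CYCLE_DAYS = 30

units = 1
for cycle in range(1, 21):
    units *= 2
    if cycle % 10 == 0:
        print(f"after {cycle} doublings (~{cycle * CYCLE_DAYS} days): {units:,} units")
# after 10 doublings: 1,024 units   (~a factor of 1000)
# after 20 doublings: 1,048,576 units (~a factor of a million)
```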
Starglider wrote:
Sikon wrote:Still, while a concern under some possible future circumstances, it is a ways off at a minimum.
Only for AGIs that are nothing more than biological brain simulations. Your comparison does not apply to any other kind of AGI.
The preceding discussion by Moravec considered the performance of the best software developed in the real world at the time of the article, such as robotic vision software frequently not based on biological brain simulation.

If a desktop computer of today really had 100 times the computing power of a human brain, with supercomputers at up to millions of times the power, current robotics being quite this limited would make no sense. But the simplest explanation which makes everything logical is that you're wrong, and Moravec's right.

As hardware performance increases and software is written making use of the new performance, today's insect-level robots will be outperformed by future lizard-level robots, then mouse-level robots, etc.

In the process, perhaps a good enough understanding will be obtained, from extensive experience, before the hardware capability exists for human-level AI, let alone orders-of-magnitude-greater superintelligence. And researchers may know fairly well by then how to minimize the risk of a nasty surprise with each new AI.

When a superintelligence is finally developed, it might be very good indeed at innovating technology, gaining money, and providing enough benefit for many humans to have an incentive to cooperate, a little like a smart 21st-century engineer dropped into the 19th century figuring out how to gain power, only far more intelligent. In general, in principle, it has a great competitive advantage over lesser intelligences. But there is an excellent chance of being good at making benevolent AIs before that point is reached, in addition to possibilities such as human intelligence augmentation (IA).
Earth is the cradle of humanity, but one cannot live in the cradle forever.

― Konstantin Tsiolkovsky
User avatar
Admiral Valdemar
Outside Context Problem
Posts: 31572
Joined: 2002-07-04 07:17pm
Location: UK

Post by Admiral Valdemar »

I have always taken it that the emergence of a human level artificial intelligence that can readily evolve would be the seed point for the Singularity. From there, you essentially have a human mind that can duplicate or modify itself perfectly and indefinitely and so further its own processes by enhancing what we gave it as a base.

I do believe Prof. Kevin Warwick's prediction of human-level AI by around 2050 to be fairly accurate, give or take a decade and assuming no unforeseen technological breakthroughs. Right now, as Sikon says, even the marvelled-at Seven Dwarves and their descendants are mere insects in their ability to interact with the physical world, simply because of hardware limitations, from processing to CCDs and other sensors or servos etc. Once we reach human-level AI, we could probably make humanoid robots with ease and go about improving everything else from there on.

One can only wonder how fast productivity, all other things being equal, would increase if we had just one above-human-level AI running certain social and scientific programmes.
User avatar
CaptainChewbacca
Browncoat Wookiee
Posts: 15746
Joined: 2003-05-06 02:36am
Location: Deep beneath Boatmurdered.

Post by CaptainChewbacca »

This may be off-topic, but is it possible for AI to display human capacities for intuition? Heck, is human intuition quantifiable?
Stuart: The only problem is, I'm losing track of which universe I'm in.
You kinda look like Jesus. With a lightsaber.- Peregrin Toker
User avatar
Admiral Valdemar
Outside Context Problem
Posts: 31572
Joined: 2002-07-04 07:17pm
Location: UK

Post by Admiral Valdemar »

CaptainChewbacca wrote:This may be off-topic, but is it possible for AI to display human capacities for intuition? Heck, is human intuition quantifiable?
Unless you believe in the soul, I don't see why not. The "second brain" nerve cluster in the abdomen is commonly referred to as the source of certain intuitive hunches. A more primal instinct simply means our evolutionary programming may buzz in and conflict with our learned logic and reason, for better or worse.

It's still all neurological and still able to be reproduced unless you feel there's a metaphysical reason, which is quite ridiculous.
User avatar
Surlethe
HATES GRADING
Posts: 12270
Joined: 2004-12-29 03:41pm

Post by Surlethe »

CaptainChewbacca wrote:This may be off-topic, but is it possible for AI to display human capacities for intuition? Heck, is human intuition quantifiable?
As I understand intuition, it's basically a guess a person makes extrapolating from the model he has in his head of the world*. For an expert, who has spent years refining this model, an intuitive guess can be preliminarily trusted. For your average person, the intuitive guess is based on vague ideas and half-misunderstandings, so it's probably shit anyway.

*At least it is in my case. Assumption: everybody else works something like I do. :wink:
A Government founded upon justice, and recognizing the equal rights of all men; claiming no higher authority for existence, or sanction for its laws, than nature, reason, and the regularly ascertained will of the people; steadily refusing to put its sword and purse in the service of any religious creed or family, is a standing offense to most of the Governments of the world, and to some narrow and bigoted people among ourselves.
F. Douglass
User avatar
Darth Wong
Sith Lord
Sith Lord
Posts: 70028
Joined: 2002-07-03 12:25am
Location: Toronto, Canada
Contact:

Post by Darth Wong »

CaptainChewbacca wrote:This may be off-topic, but is it possible for AI to display human capacities for intuition? Heck, is human intuition quantifiable?
:lol: Human intuition is probably the most easily simulated part of human intelligence. All you need is an algorithm that takes accumulated historical data, fits the current data to the historical data with a very low precision comparison, and then picks the historical event which most closely matches this one. If multiple events match equally, then just pick randomly. If no event matches at all, then either loosen the accuracy of the comparison and run it again, or if multiple iterations using this technique fail, just pick randomly.

There. Human intuition in a nutshell.
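For what it's worth, that recipe translates almost directly into code. A toy sketch (the function name, the feature-vector representation and the tolerance scheme are illustrative choices, not anything from the post):

```python
import random

def intuit(current, history, tolerance=0.1, max_iterations=4):
    """Pick the historical event that loosely matches the current situation.

    `current` and each history entry are feature vectors (lists of floats);
    `tolerance` is the allowed per-feature mismatch, loosened on each failed pass.
    """
    for _ in range(max_iterations):
        matches = [
            event for features, event in history
            if all(abs(a - b) <= tolerance for a, b in zip(current, features))
        ]
        if matches:
            return random.choice(matches)   # ties broken randomly, as described
        tolerance *= 2                      # no match: loosen the comparison and retry
    # repeated failures: just pick randomly, as the post suggests
    return random.choice([event for _, event in history])

history = [([0.9, 0.1], "it worked last time"), ([0.2, 0.8], "it blew up last time")]
print(intuit([0.85, 0.15], history))
```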
"It's not evil for God to do it. Or for someone to do it at God's command."- Jonathan Boyd on baby-killing

"you guys are fascinated with the use of those "rules of logic" to the extent that you don't really want to discussus anything."- GC

"I do not believe Russian Roulette is a stupid act" - Embracer of Darkness

"Viagra commercials appear to save lives" - tharkûn on US health care.

http://www.stardestroyer.net/Mike/RantMode/Blurbs.html
User avatar
Starglider
Miles Dyson
Posts: 8709
Joined: 2007-04-05 09:44pm
Location: Isle of Dogs
Contact:

Post by Starglider »

Ar-Adunakhor wrote:Starglider, it stands to (my :P ) reason that your chosen AI design would only exacerbate the already hideous hardware/power consumption problem, does it not?
Actually the reverse is true. This design requires vastly less computing power than a brain simulation. The kind of prototypes I've been working on, which are general probabilistic reasoners that attempt to quine themselves from first principles, complete that task in a few seconds on my current eight-core desktop, and they aren't highly optimised yet and still have to use a fair bit of brute force search. However they're also 'cheating' quite badly and obviously implement only a small fraction of the functionality of a real seed AI.
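For readers unfamiliar with the term: a quine is simply a program that reproduces its own source code. A minimal Python example, which of course has nothing to do with the actual reasoners described beyond illustrating the word:

```python
# A minimal quine: a program whose output is exactly its own source code.
s = 's = %r\nprint(s %% s)'
print(s % s)
```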
An expanding microworld simulator governed by a utility analyser (+filter) would appear to require an exponential growth in hardware until it reaches the "critical mass" needed to begin a true self-improvement loop.
There isn't any particular 'critical mass'. You seem to be importing this idea from either sci-fi or hopeless optimists like CycCorp. It's quite common to see people who don't really have a clue how to make their AGI ideas scale claim 'and if we give it enough hardware/knowledge/whatever it will reach a 'critical mass' and start working'. They can't make a strong prediction of what the threshold is and they can't really describe the mechanism.

If you have a clear idea of how to implement the whole self-improvement loop, then you can do so from the beginning. It doesn't actually take that much code, because the axioms of rational intelligence, while complex and subtle, are relatively concise. The problem with this kind of minimal implementation is that you start very, very far back on the exponential improvement curve, such that it would take billions of years of compute time to start snowballing into something that can tackle challenges such as 'build a visual modality (i.e. humanlike visual capability) from scratch'. The solution is simply to directly engineer more software engineering capability: populating the database with patterns, heuristics, fuzzy priors, metapriors, direct transformative generators, focused search strategies and all the other elements that work together to turn perceived problems into functional specs and functional specs into code.
We almost by defintion would have no idea how efficient it becomes after attaining it,
We know that for any given piece of functionality it will be better than the best program humans could write to solve the problem, given a crack team of world-class programmers and a century to work on it.
so let's just say that afterwards it also has an exponential decay in resources used.
This might be a plausible (though shaky) assumption if there were a 'critical mass' in the first place. To some extent, for neuromorphic and other initially-self-opaque AGIs, there is, though those designs all have their own subtleties and I can't think of any cases that are as simple as your model (specifically, I can't think of any where gaining adequate self-understanding and software engineering ability requires exponential growth over the pre-existing complexity in the system).

In fact your model fits more closely the properties of fully evolved systems, which do require exponential increases (actually worse, in many cases) in computing power to create more complexity, up to the threshold at which the intelligence can directly self-modify and the evolution system (be it internalised or externalised - it's usually external) becomes irrelevant.
Even after attaining recursive improvement, though, it is reasonable to say that there is an arbitrary physical limit for the abilities of any given amount of hardware.
Of course.
Now, with your heavy investment in this I am guessing you have already worked out the amount of hardware and power required before several of these points can be reached. (Or at least are in the process of maybe getting a good idea of it. ;) )
The latter. As I've said, there isn't a sudden threshold. Personally I suspect a contemporary desktop PC could run an AGI capable of passing the Turing test against any normal human. Only questions specifically designed to expose its weakness in the few things the human's ridiculously-parallel array of very-slow, very-noisy, not-easily-reprogrammable processors actually does better would have a chance of catching it out. Progress in integrating lots of reasonably-parallel general computing power into standard computers (i.e. the near-term successors of GPGPU) is rapidly removing that possibility. The relevant bottleneck for a fully optimised AGI is more likely to be various forms of bandwidth than raw computing power anyway, but remember that computer software has vastly more data compression options open to it than wetware.
So bearing that in mind, my question is thus: Why do you find it likely that the ability to supply the hardware needs of such a system would outpace our ability to monitor the development of those systems?
Because our ability to monitor the development of such systems is currently near zero and very, very few people are working on improving it. The majority of AGI researchers consider producing something, anything that displays humanlike capabilities much more important than understanding exactly how it does it or what its long-term behaviour will be. I would consider this a lack of foresight (deliberate in some cases); if AGIs were perfectly safe this would be a relatively sane strategy, though I would probably still be advocating reasoned design over iterated trial-and-error.
Is your microworld loop model so very intensive in logic but superficial in hardware demands?
To be honest I've almost certainly given you a misleading impression, judging by the emphasis you're placing on 'microworld loops'. You'll have to forgive me though, because giving a useful constructive description of an AGI design in two paragraphs to non-experts without using (much) specialist terminology is literally impossible. I say this because AGI researchers are constantly confusing and misunderstanding each other even under ideal conditions (i.e. conference papers with a follow-up question session). Some of it is due to inexcusable vagueness in the actual designs, some of it is due to failure to agree on common terminology for many relevant concepts, but a lot of it is just due to the non-segmentable complexity and general counterintuitiveness of the problem space.

But anyway, the system is not fundamentally based on a large number of isolated microworlds. What it does do is create variable level of detail simulations by dynamically connecting code blocks together. I thought that a good way to visualise this would be a nested system of microworlds where, for each part of a given model that needs extra detail, the higher level model is expanded and then bidirectionally connected to a lower level model. 'Microworld' is a bit of a loaded term (associated with a large class of late-1970s AI failures) so in retrospect I probably should've phrased it differently.

Certain kinds of self-modeling and essentially all kinds of recursive resource allocation do produce a reflective regress. I can see how you could latch onto this as an 'exponentially expanding series', but I'm not aware of any way such a sequence could suddenly cross a 'magic' threshold and start collapsing. In practical systems, the sequence has to quickly converge from the beginning or it will sit there and do nothing (or rather, it will sit there for a while, then run out of memory and crash). For example, for the kind of prototypes I have been working on, these kinds of decisions very rarely go over five frames in depth (referring to logical reflection operations rather than low-level stack frames or inferential steps in the probability or utility support networks).
Or are you just hoping you get an awesome filter/analyzer combo that perpetrates some serious paring-down upon all that helpless information before the need to stick it in a model arrives?
Vast gobs of incoming data (be it a real or simulated sensory feed or a database being scanned) don't cause the kind of problems you seem to be talking about. This kind of data gets shoved through non-reflective processing code to produce relatively simple abstract models. Elements of these simple models are expanded into more detailed models by reanalysing sections of the raw data with less compression as required (by an application of EU that more neuromorphic AGIs would call an 'interest assignment mechanism'). Exceptions of various sorts (both the programming type and higher-level 'is this a sane input' checks) will cause a wider-context analysis to kick in that overrules the basic processing code and does use reflective regress control mechanisms, but these events are by definition exceptional and don't occur very often compared to routine processing. This goes for most processing in general and is the rational-AGI equivalent of the human perception that almost everything our brains do is done subconsciously. Of course, unlike humans, an AGI of this type has the option of examining any part of itself in minute detail (right down to kicking in the debugger and tracing machine instructions and register bits) any time it needs to.
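A toy sketch of that expand-on-demand pattern: a cheap first pass over every segment, with detailed re-analysis only of segments flagged as interesting. All names and thresholds here are placeholder assumptions, not anything from the system being described:

```python
def cheap_summary(segment):
    """Fast, lossy first pass: reduce a block of samples to a couple of statistics."""
    return {"mean": sum(segment) / len(segment), "peak": max(segment)}

def detailed_analysis(segment):
    """Expensive pass, run only on segments the cheap pass flags as interesting."""
    return sorted(segment, reverse=True)[:3]   # stand-in for real detailed modelling

def process(stream, chunk=4, interest_threshold=0.8):
    models = []
    for i in range(0, len(stream), chunk):
        segment = stream[i:i + chunk]
        summary = cheap_summary(segment)
        # 'Interest assignment': only unusual segments earn the expensive re-analysis.
        if summary["peak"] > interest_threshold:
            summary["detail"] = detailed_analysis(segment)
        models.append(summary)
    return models

print(process([0.1, 0.2, 0.1, 0.3, 0.9, 0.95, 0.2, 0.1]))
```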
Of course, this all ignores what would happen after it hits the absolute maximum computing power that can be squeezed out of any arbitrary chunk of matter but still needs to do more calculations.
If you use reversible computing, the theoretical limits of currently-designable-by-humans computronium are pretty scary. AGIs may or may not be able to do better, and quantum computing is a whole different ball game. Frankly though, this is practically irrelevant; there's already enough computing power online or in the average supercomputer for this class of AI to be wildly transhuman.
I think we all know a good (for us) goal system is mission-critical should that comes to pass, if not much sooner.
Sadly, not only does most of humanity not know this, most AI researchers don't know it either, and most AGI researchers have convinced themselves that it can be solved by 'bringing up the AI right' or just 'later... once we've got it working at all'.