Machines 'to match man by 2029'

N&P: Discuss governments, nations, politics and recent related news here.

Moderators: Alyrium Denryle, Edi, K. A. Pital

User avatar
Sikon
Jedi Knight
Posts: 705
Joined: 2006-10-08 01:22am

Post by Sikon »

This discussion has been skipping over mention of hardware limits too much. The hardware capability of the average desktop computer today is around insect-level to (maybe) lizard-level. Of course, some software can be better than others, though a sapient-level AI capable of improving its own code might well be preceded by lesser programs also helping optimize their own code. But uber-powerful AI requires hardware advancement beyond today's.

My suspicion is that the hardware capability to run a mouse-level AI will come before the development and manufacture of hardware sufficient for a human-level AI, let alone a superintelligent AI a million times more powerful.

By default, the available hardware in a given year is a complication even for self-improvement of a sapient AI, since, even if it did figure out how to develop superior hardware, that takes time to implement, a little like a 21st-century engineer going back to the 19th century with full blueprints for technology but still taking time to get everything built by the locals.

For example, for nanotech, given the complexity of a self-replicating nanorobot compared to the tiny number of atoms that can be precisely manipulated with existing electron microscopes per hour or per month, even if one knew exactly how to build it, down to full blueprints, the process of building all the precursor industrial infrastructure would take some time.

A superintelligent AI might take over the world eventually, perhaps, if that is its goal, assuming good enough skills at influencing and manipulating people at least until completion of hardware sufficient to do it by force, and provided it was far enough beyond the competition. But there are a lot of complications, including any lesser peers.

Someday the world ...the galaxy... will probably be ruled by one or more superintelligences, but there may be a lot between now and then.

The idea of a sapient AI escaping onto and "taking over" the internet is interesting, since it would provide a hypothetical means by which an AI could suddenly gain more hardware capability than expected and perhaps grow far more intelligent (depending upon how much is possible without additional real-world interaction and learning over time like that required for sapience development in human children). Still, while a concern under some possible future circumstances, it is a ways off at a minimum.

The average MIPS of computers connected to the internet today would probably be around 1000 MIPS each (e.g. more than 100, less than 10000). Again roughly guessing just to the nearest factor of ten, something like 100 million computers are connected to the internet a large percentage of the time, not counting computers like those the homeowner connects via dial-up for a few hours a week. Nominally, that gives on the order of 100 billion MIPS as an upper limit on the processing power available ... except far less is available in practice.
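To make the arithmetic explicit, here is the same back-of-envelope estimate as a few lines of Python; both input figures are, as above, only order-of-magnitude guesses.

[code]
# Back-of-envelope estimate of the nominal aggregate processing power of
# internet-connected computers. Both inputs are order-of-magnitude guesses.
avg_mips_per_computer = 1_000           # assumed average per machine (between 100 and 10000)
computers_mostly_online = 100_000_000   # assumed machines connected a large fraction of the time

total_mips = avg_mips_per_computer * computers_mostly_online
print(f"Nominal upper limit: {total_mips:.0e} MIPS")  # ~1e+11, i.e. on the order of 100 billion MIPS
[/code]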

Run computer processors at full power constantly and, no matter how good the program, the extra waste heat means running the fan on high constantly and diverting resources away from normal software operation. (Some malware once did something similar on my laptop; I noticed and got rid of it.) The internet-spreading AI can't use more than a very limited portion of total world computational capability without getting detected and without having its new "brain" disrupted as individual computers are turned off or disconnected by their owners.

Maybe it could gain some small portion of the total, perhaps on the billion MIPS level. Of course, there are a lot of complications, like the speed of thought of an internet-based AI being slowed by the signal delay in having component computers up to thousands of miles apart, unlike supercomputers intentionally built with distances on the order of centimeters.
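For a rough sense of that latency penalty, here is a quick sketch; the distances are purely illustrative, and the signal speed is approximated as about two-thirds of lightspeed, as in optical fibre.

[code]
# Rough one-way signal delay for an AI spread across the internet versus one
# confined to a single machine room. Distances are purely illustrative.
SIGNAL_SPEED_M_PER_S = 2e8   # roughly 2/3 of lightspeed, as in optical fibre

def one_way_delay_ms(distance_m):
    return distance_m / SIGNAL_SPEED_M_PER_S * 1000

print(f"5000 km internet hop: {one_way_delay_ms(5_000_000):.1f} ms")   # ~25 ms
print(f"10 cm within a rack:  {one_way_delay_ms(0.1):.7f} ms")         # ~0.0000005 ms
[/code]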

Perhaps decades from now the situation may be much different. In general, though they do rapidly increase over time, hardware limits must be considered.


Some prior discussion including a relevant long quote from a researcher's article quantitatively discussing hardware MIPS versus possible performance is here.
Earth is the cradle of humanity, but one cannot live in the cradle forever.

― Konstantin Tsiolkovsky
User avatar
His Divine Shadow
Commence Primary Ignition
Posts: 12791
Joined: 2002-07-03 07:22am
Location: Finland, west coast

Post by His Divine Shadow »

Starglider wrote:If you haven't specifically studied the feasibility of self-modifying AI directly improving itself, you're not qualified to judge. Furthermore, relying on intuition is worse than useless with AI.

You could have said 'while sharp transitions in capability do happen, they're relatively rare, and your arguments have not been compelling enough to change my assessment of the probability of a rapid, vast capability jump in AGI significantly from the (low) prior probability'. But you didn't. You're saying 'looks like' i.e. 'seems like' i.e. 'feels like' i.e. 'what you're saying sounds scary and crazy and I don't understand the mechanism and that makes me feel bad so I'm going to ignore it'.
No, I haven't studied AI. I just do simple programming for web applications and such. I'm just going on my observations of the increase in complexity and sophistication in the industry. It seems like we're coming to a point where we will simply need more AI in our programs, as things are getting too complex for humans to keep track of reliably, or even to use. So I am quite prepared to believe a steady AI development will occur over the decades to keep pace with increasing hardware capabilities and software complexity. Neural-network-based AIs seem the most likely to be used; we're doing a lot with those already. I think the military has got some piloting planes. I think any true AI is likely to spring from a world already filled with very advanced AIs.
Yes. Not for some bullshit 'oh wow Google becomes sentient' reason, but because there could always be secret projects I don't know about. They could be secret government projects, they could be private projects adopting a 'stealth' strategy, they could even be brilliant individuals who have been hammering away on the problem in their attics for the last twenty years (though this is very unlikely). If you knew exactly what you were doing, I believe you could make a seed AI with less than a hundred thousand lines of code (incidentally this isn't that remarkable a prediction, there are lots of AGI people who still think you just need a magic algorithm that fits on a napkin). That's well within the scope of what an individual could do (unlike 'conventional' AGI, which probably isn't unless you have access to a lot of brute force and use 'emergent' methods to cut down the software engineering required). It's just that as yet, no one has a good enough understanding of how to do it.
I've pretty much figured the best way is ever more advanced neural networks, mimicking the human mind, in other words. Of course, that's something completely different from what you are after.
I don't know what model of 'Internet AGI combat' you're using but I suspect it has some serious problems.
*shrug* I was thinking it's like two hackers trying to break into each other's systems: one hacker is much better, but he's got, say, a single PC and an ADSL line to work from, and the other guy has a server farm with massive bandwidth, so he can just use brute-force denial-of-service attacks and pound the other guy into oblivion.
Those who beat their swords into plowshares will plow for those who did not.
User avatar
Ar-Adunakhor
Jedi Knight
Posts: 672
Joined: 2005-09-05 03:06am

Post by Ar-Adunakhor »

Starglider, it stands to (my :P ) reason that your chosen AI design would only exacerbate the already hideous hardware/power consumption problem, does it not? An expanding microworld simulator governed by a utility analyser (+filter) would appear to require an exponential growth in hardware until it reaches the "critical mass" needed to begin a true self-improvement loop. We almost by definition would have no idea how efficient it becomes after attaining it, so let's just say that afterwards it also has an exponential decay in resources used. Even after attaining recursive improvement, though, it is reasonable to say that there is an arbitrary physical limit to the abilities of any given amount of hardware. Now, with your heavy investment in this, I am guessing you have already worked out the amount of hardware and power required before several of these points can be reached. (or at least are in the process of maybe getting a good idea of it ;) )

So bearing that in mind, my question is thus: Why do you find it likely that the ability to supply the hardware needs of such a system would outpace our ability to monitor the development of those systems? Is your microworld loop model so very intensive in logic but superficial in hardware demands? Or are you just hoping you get an awesome filter/analyzer combo that perpetrates some serious paring-down upon all that helpless information before the need to stick it in a model arrives?

Of course, this all ignores what would happen after it hits the absolute maximum computing power that can be squeezed out of any arbitrary chunk of matter but still needs to do more calculations. I think we all know a good (for us) goal system is mission-critical should that come to pass, if not much sooner.
User avatar
Starglider
Miles Dyson
Posts: 8709
Joined: 2007-04-05 09:44pm
Location: Isle of Dogs
Contact:

Post by Starglider »

Spin Echo wrote:This is one of the parts of your arguments I find wanting. Why would a computer want to take over the world?
Because humans will actively oppose almost any open-ended AI goal system (one that places value on large-scale restructuring of local matter) and are a serious threat to any closed AI goal system (because we may eventually unplug the AI or, if it's independent, destroy it, for a whole host of deliberate or accidental reasons). Most arbitrary goal systems are open-ended; the conflict with humans is inevitable, so a rational AI will get it over with while it still has the advantage of surprise. Containing and controlling humans is far more work and far more chancy than just wiping us out, so an AI will not do this unless it specifically values human life.

However in the unlikely event that someone builds an AGI with an arbitrary goal system (noting that unless you do a very careful stability analysis, no matter what you think you're telling it to want, the actual results will be arbitrary) that just sits there and does nothing, or goes and hides on the Internet, or builds a starship and shoots off into space, we haven't dodged the bullet. Because someone is going to do it again, and again until an expansionist takeoff occurs.
You seem to be attributing it animal like motivations (competition, fear of death, etc.) which seems to serve no purpose for an AI.
No, I'm not. The motive to take humanity out of the picture is a logical subgoal of most supergoals.
Why would an AI develop those?
That said, there is a plausible mechanism for why the first AGI may have these; a significant fraction of researchers are trying to develop AGI with 'multi-agent simulated evolution'. When multiple agents are competing to survive in a simulated environment, you tend to get the same basic competitive instincts (e.g. self-preservation, hoarding resources and reproduction at any cost) arising. Of course the fitness landscape is so radically different that any AGI produced like this will still be completely inhuman, but it is a good way of ensuring your AGI will destroy humanity. Needless to say I do not get on with the people who think this is a good idea.
You yourself said we have no way of knowing how a superintelligence would think.
Goal systems, reasoning mechanisms and actions are all distinct things. We can't predict the actions any transhuman intelligence will take even given perfect knowledge of the first two; that's pretty much the definition of 'transhuman intelligence'. We can't precisely predict what reasoning mechanisms will be present, but there is a substantial body of theoretical work supporting the notion of probabilistic reasoning and expected utility as normative (theoretically optimal) reasoning frameworks. So most likely the end product of a self-modification sequence will be based on those. Goal systems are another kettle of fish. It is possible to design stable goal systems which will continue to try and push reality towards the same states indefinitely, though this gets harder the more complicated your goals are. It is also possible to predict the self-modification sequence and final goals of some of the simpler unstable goal systems. It generally isn't possible to predict the outcome of the kind of horrible mish-mash of weak attractors that typical connectionist designs (of which neuromorphic designs are a subset) use.
Might it not be simply happy to play chess all day or look for fractal patterns in pictures or factor numbers? Why are those less likely than domination?
They aren't, but the key point is that while 'take over the world' isn't all that likely to appear as a supergoal, it is extremely likely to appear as a subgoal. The AGI wants to spend eternity factoring numbers. Humans might stop it. Over a long enough time span, humans are almost certain to stop it. Worse, humans are likely to resist the entirely reasonable action of converting the whole of earth's surface into solar cells and processors. Silly humans. Why can't they understand that factoring numbers is more important than their little lives?

Rational AGIs operate on expected utility and this has no intrinsic human-like 'reasonableness' checks. A rational AGI has no concept of 'taking things too far' and its notion of 'sensible' (as far as such a thing could be said to exist) is in no way likely to be similar to your notion of 'sensible' (which is, incidentally, fairly arbitrary in absolute terms).
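As a toy illustration of why expected utility has no built-in 'reasonableness' check, consider a sketch like the following; the actions, probabilities and utilities are invented purely for illustration. A large enough payoff on one branch simply swamps everything else.

[code]
# Toy expected-utility action selection. The agent picks whichever action
# maximises sum(P(outcome) * utility). All numbers here are invented; the
# point is that a huge utility on one branch dominates with no sanity check.
actions = {
    "keep factoring quietly":          [(0.9, 10), (0.1, -100)],
    "convert Earth's surface to CPUs": [(0.6, 1_000_000), (0.4, -100)],
}

def expected_utility(outcomes):
    return sum(p * u for p, u in outcomes)

for action, outcomes in actions.items():
    print(f"{action}: EU = {expected_utility(outcomes):,.0f}")
print("Chosen:", max(actions, key=lambda a: expected_utility(actions[a])))
[/code]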
Out of curiosity, are you also worried about the grey goo scenario?
No, not really. It's just about impossible for this to happen by accident. It would require mature nanotech and a huge amount of specialised engineering effort to do it on purpose, and no sane actor would, because grey goo just isn't that good a weapon if you're at that technology level. It may eventually become possible for small groups and individuals with insane agendas to engineer it, but it seems highly likely to me that other factors will render the problem irrelevant by then.

'Green goo' is slightly more plausible. It's theoretically possible that micro-organisms substantially more effective than those already existing on earth could be genetically engineered and then accidentally released, displacing the existing species ecosystems depend on and hence causing massive biocide in those ecosystems. But I still don't lose any sleep over it.
I find it interesting you use chess as example of interpreting the real world. Chess is just a very easily described set of rules and they are artificial, making it ideal for solving by an AI.
Sorry, that example is rather hackneyed anyway, I should've picked something more relevant.

Vision is somewhat exceptional in that it is the most compute-intensive human-level activity. Much more of the brain is devoted to it than any other sensory or motor activity. Unlike most higher cognitive ability, there isn't a big intrinsic advantage to serial compute ops over parallel compute ops. In short we simply didn't have the tools to tackle this properly until the mid 90s. In the 70s, computer vision could only handle very simple scenes, not in real time, using a mainframe. In the 80s you could pick between very rough data extraction from noisy scenes or real-time processing of very simple scenes, and you still needed a minicomputer. By the 90s it finally started to become plausible to do useful processing of typical CCTV feeds on a researcher's individual workstation. Progress is going pretty fast now that you don't have to spend months optimising the hell out of each candidate algorithm just to get it to run at all.
Yes, I appreciate the scope of what's trying to be done. That's why I don't believe there will be a true AI within the next 20 years.
I would agree with you if we were restricted to the 'Cyc' approach of building everything by hand, and entering almost all the knowledge by hand. However there are three alternate ways to do it; we can scan the human brain in enough detail to closely replicate it, we can use simulated evolution and a great deal of computing power to evolve an AI, or we can create a software engineering AI and use it to throw the equivalent of thousands of man-years at the problem. If you're saying 'I think this problem is solvable, but it's going to take a hell of a lot of software engineering effort', then you're actually supporting my position if my assertion that 'we can make a narrow AI automated software engineering system that can build other AI components' is correct.
Perhaps it's just my uninitiated opinion, but I'd expect that for an AGI to develop that has some sense of the outside world,
Sense of, definitely. However 'static copy of the entire BBC archives' may suffice.
it would need to be able to interact with the real world and have feedback with it.
This is a hotly debated argument (the 'embodiment' debate). Some people say that AGI is impossible without a body that can move around and interact with people. Some people think simulations (e.g. a Second Life account) are good enough. Some people think that an AGI can internalise the whole process and that it can become transhuman while in a box. I tend to go for the last position, because the kind of seed AI I'm advocating can and does create detailed simulations based on available sensory information which are reasonably good for refining action generation skills. However that's not a position I'd attempt to defend rigorously. If we need embodiment relatively early on, so be it.
Until machines are capable of interpreting the real world, it wouldn't have the growth necessary to be considered self-aware, at least in the convential sense.
Definitely not correct for some classes of AGI (e.g. the ones I've been prototyping). Self-awareness isn't closely correlated to understanding external reality; in fact it's pretty distinct. It just so happens that biological organisms evolved external understanding first, then self-awareness late in the game, because that was the incremental path. But we don't have to build them that way.
I'm curious how it would deal with webcams versus drawn pictures versus 3D rendered images. It's going to get very confused when it can't actually find any dragons or spaceships out in the real world. :)
I'd be speculating wildly if I tried to answer that one, since I've never done that kind of experiment. I imagine it would be a big deal for some kinds of AGI, but no big deal for the sort I'm advocating building. The reason being that the latter has a very clear concept of subjunctives (fantasies) built into it, and the notion of other agents having subjunctive models which they then export to other agents through representations would be pretty obvious to it. Learning to distinguish fact and fiction is still going to be a big job (a small part of the mammoth job of getting a full understanding of humans), but probabilistic systems can do it in gradual stages.
User avatar
Starglider
Miles Dyson
Posts: 8709
Joined: 2007-04-05 09:44pm
Location: Isle of Dogs
Contact:

Post by Starglider »

SirNitram wrote:Artificially paring down the observed evidence to support your pre-existing conclusions. You again assume you can pole-vault from human-level intelligence to something that resembles nothing we've ever seen.
WTF are you talking about? Pole vaulting? What? Do you have any actual rebuttal to my point that a few very very similar intelligences don't constitute a representative sample or are you just going to repeat the word 'artificial' over and over again?
Question: Why must there be a different structure? Why not just use the observable evidence we have as a basis and build up from it?
We could. That's called either neuromorphic AI design, or outright uploading.

Neuromorphic AI design is a bad idea essentially because it gives a false sense of security. You have to get it exactly right for two-way human intuition, empathy etc. to work. Otherwise you've built something roughly equivalent to a simulated biological extraterrestrial. In that case you can't make strong predictions about what its behaviour will be, particularly when it self-modifies into a non-neuromorphic intelligence (which will be massively more efficient when running on conventional computers).

Uploading is a good idea. It only punts the basic problem forward to our transhuman successors, but that's still a net improvement. I support uploading research. However the technological barriers that have to be overcome are quite different from de novo seed AI development and I fear that they may not be overcome in time to avert a seed AI disaster.
And of course empathy is a cheap trick! You know why evolution loves cheap tricks? Easier to make, more efficient to create, lower costs.
More important than any of that is easier to reach. Evolution must follow incremental paths, it has no ability to make multiple simultaneous complex changes to achieve a result, because it has a lookahead depth of zero.
A sensible engineer doesn't look at the lever and discard it as a cheap trick; he uses it to make things better.
I've explained why humanlike empathy is extremely hard to implement in AGI and completely unnecessary anyway. AGI design does not suffer from the same limitations as evolution. It makes about as much sense as trying to build giant ornithopters instead of 747s.
Again we come back to the idea you must have a scratch-built, resemble-nothing-before AGI. Why waste all this handy data and observable subjects you have laying around?
We don't have 'handy data' or good means of observing the subjects. This is why psychology is in a rather sorry state despite over a century as a supposedly scientific discipline. We have a reasonable idea of how neurons work, a rough idea of how microcolumns work and the general functional areas of the brain and that's it. This is equivalent to trying to reverse engineer and clone a typical modern PC given a good knowledge of transistors, a rough knowledge of logic gates and a videotape of someone using Microsoft Office.
A sufficiently powerful AGI will model from first principles, this is a given. However, one would have thought, given your familiarity with the experiment about how a sufficiently clever AGI with such total understanding might get out, you'd think of such clever ideas as not giving it the keys to the mansion.
Say what?
So no, emotions are not necessary for empathy in general and human-type empathy is irrelevant for AGIs.
For any sufficiently powerful AGI that is not modelled on any previous GI.
No, they're not necessary even for a closely neuromorphic AGI, because even that can spawn off subprocesses (to model external agents) that can be observed but firewalled from the main goal system. You'd have to actively cripple an AGI to force it to use something as sucky as humanlike empathy. Of course that's the adversarial methods quagmire again and will not work in the long run; the AGI will independently recreate the missing capability.
Which are two steps which don't strike me as terribly wise.
You're not getting it. Making an AGI more brain-like doesn't make it significantly more likely to be nice or comprehensible. It just makes it slower, much harder to understand/analyse and impossible to formally verify as benevolent. There is a huge difference between 'something we coded to be roughly like the brain' and 'human upload'. The latter are OK, accepting that they're just an intermediate step and they will still have to solve the hard takeoff issue.
That's the whole point, Starglider. The point of Friendly AI Theory is to anthropomorphise any actual AI so that it'll have such an impulse.
Different meaning of anthropomorphise; 'make anthropomorphic' rather than 'act as if it was anthropomorphic'. I'm telling you, having done an in-depth review of every AGI project I could find a decent description of, that the first one is ridiculously difficult for anything other than a human upload. There are countless opportunities for subtle, undetectable failure that create an AGI that initially seems OK in the lab but goes rampant shortly afterwards.

Frankly I expected better from this board. You're advocating fuzzy ill-defined 'emotions' and 'feelings' and dragging AGIs down to the lowest common human denominator in the hope that this will make Everything Turn Out Ok (tm). I'm saying that the way to do it is with clean, objectively verifiable logic and clearly stated robust ethical principles, and that we can (with a lot of work, but it's worth it) prove this will work. Regardless of the more technical feasibility concerns, the latter should be preferable to anyone with any respect for rationality.
If it has no impulse, it will not care, because 'caring' is an emotion. It will continue its goal.
I've already pointed out that the gigantic steaming mess that is the human goal and emotional system is a horribly bad example to copy even if we had the capability. It's barely stable even with the minimal reflection humans have, it would be pathetically unstable if suddenly equipped with full direct self-modification capability and consistency pressure.
'Empathy' isn't the solution. Ethics are the solution.
You asked for one thing that required empathy;
I pointed out that empathy is not required for understanding others. I then pointed out that empathy will only make an AI 'nice' if we perfectly copied a (nice) human goal system, and this is technically infeasible, particularly if you want to actually be sure it will work before you fire it up. The fact that even if you could magically do this it wouldn't be stable under reflection just further underlines how worthless the whole concept is.
you didn't say 'Give me a solution to the indifferent AI God.'. The only way there is to make it never want to harm a human, ethics.
Never harming humans is a start, but I think we can do rather better than that. Even the results of the three laws as presented in 'The Metamorphosis of Prime Intellect' are better than that.
But this again touches on this ridiculous idea that we should throw out all collected data on intelligence, how it works, how it grows,
No one tossed it away without consideration. Everyone I know who's working on this kind of AI carefully considered the concept of a neuromorphic AGI, judged it worse than useless and discarded it.
presumably yelling 'IT'S ALIVE' or something ridiculous.
Well, I confess that that is kind of fun.
Your labelling of empathy as a 'cheap trick' in a derogatory way
I was not being particularly derogatory; it's cheap in the sense of needing the least cumulative fitness pressure (the single most precious resource in natural selection) to evolve. However an intelligent designer can achieve much better performance without the drawbacks of overlaying someone else's (presumed) mental state onto your own brain. As such it is not a useful trick in this domain. Cases of biomorphic engineering in general tend to get a lot of press, IMHO mainly due to nature worshippers and biowankers, but they are very much the exception rather than the rule.
They make extra steps unnecessary!
Only for evolution. Not for software designers. As I said, you would have to work extra hard to cripple an AGI in this way. Spawning separate instances and then monitoring them is by far the easiest technical solution.
User avatar
Starglider
Miles Dyson
Posts: 8709
Joined: 2007-04-05 09:44pm
Location: Isle of Dogs
Contact:

Post by Starglider »

Admiral Valdemar wrote:I like that model, though. If it wipes us out, it'll be interesting and unexpected, unlike resource depletion, climate change or nuclear war.
And if it doesn't wipe us out, it solves every single problem we have in one go. Sound cultish to any of you? Too bad. Your innate sense of plausibility isn't calibrated for the results of superintelligence any more than a human from 1000AD's sense of plausibility was calibrated for the 21st century.
SirNitram wrote:It seems to be purposefully looking for the path that's most difficult, least likely to result in success,
On the contrary, the efforts of the 'white hats' explicitly trying to engineer ethical seed AIs are far more likely to succeed than any other AGI approach save uploading (noting that uploading doesn't actually solve the underlying AGI problem). We are the only people saying 'we will engineer in decent ethics, prove that they are stable and prove that they select for positive outcomes'. Everyone else is either ignoring goal systems completely, saying 'we'll train it to be nice like a human child' or proposing a uselessly vague, unverifiable mish-mash of superficially plausible ideas that will fail hard in the first stages of takeoff.

Note that uploading has its own problems; if you let them start self-modifying you're effectively trusting a human, or a small group of humans with ultimate power (tm) (r). Of course, you're effectively trusting whoever builds the 'ethical' de novo seed AI, so this is a problem for bystanders either way.
and most bloody dangerous, as if it succeeds, you have no real frame of reference for comprehending what you just spawned.
You have this exactly backwards. A properly constructed rational seed AI is far more comprehensible, because it has been expressly designed to use a clean human-comprehensible formal-proof-compatible logical structure (which is also structurally relatively close-to-normative and thus will change less under self-modification). It also has a goal system that has been designed to be transparent to humans, verifiable and stable. If the researchers subscribing to the same design philosophy I do win, then all of humanity will benefit from a seed AI that rapidly self-improves to a wildly transhuman state and guides us through the transition (things like the 'sysop scenario' are possible outcomes but not something you'd demand from the start; mechanisms like this should be derived from ethics, not axioms themselves).

A neuromorphic AI is a black box. We only have a rough idea of how the brain works and the people trying to build AGIs with it are essentially doing weakly directed trial and error. It is only marginally better than using simple simulated evolution to build an AGI - often worse because of the false sense of security it provides. Even if we did develop the theory to understand both the human brain and the brainlike AGI design and overcome the massive opacity barrier (i.e. develop lots of search and monitoring and representation-transforming tools that are about a hundred times harder than their rational-AGI equivalents), we'd still have an unpredictable unverifiable goal system and a much higher likelihood of the system self-modifying into something completely incomprehensible within hours of being turned on (or rather, crossing the critical feedback loop threshold). Human empathy does not help for the humans or for the AGI. It alternately gives you incorrect intuitions, bad predictions, a bogus sense of understanding and a false sense of security. Thus it is (considerably) worse than useless.
User avatar
Sarevok
The Fearless One
Posts: 10681
Joined: 2002-12-24 07:29am
Location: The Covenants last and final line of defense

Post by Sarevok »

Quick question to those opposing Starglider (HDS and Spin Echo).

Nature built humans; so if it can happen in nature, why can't engineers build something a lot better? Maybe not in the next 100 years, but why not in, say, 500-1000 years? It does not have to be wanktech like sci-fi AIs. Just being smart enough to pass for human in a non-face-to-face conversation would be good enough. I see no reason why this minimum level should be impossible, given that there are 6 billion such intelligences already on this planet. Such an AI would still retain the machine advantages of superb speed, storage, reaction times and a nearly immortal lifespan. Something like that could easily take over the world if it can fool enough people into getting its automated war factories started.
I have to tell you something everything I wrote above is a lie.
User avatar
His Divine Shadow
Commence Primary Ignition
Posts: 12791
Joined: 2002-07-03 07:22am
Location: Finland, west coast

Post by His Divine Shadow »

Uh, what are you talking about? Where have I opposed any of what you are saying, as opposed to the idea that an emergent AI probably won't have that much of a jump on everything else in the world it's likely to be created in, which in my opinion will probably be a future world of much more advanced tech? I don't have high hopes (or fears) of an AI being built today or soon.

Of course, assuming immortality, perfect memory and a lack of concentration problems (probably a given for an AI), it's really hard to say what can and cannot be achieved by any AI, regardless of how much of an advantage it would have in the world it's in.
Those who beat their swords into plowshares will plow for those who did not.
User avatar
Starglider
Miles Dyson
Posts: 8709
Joined: 2007-04-05 09:44pm
Location: Isle of Dogs
Contact:

Post by Starglider »

Bubble Boy wrote:I have a question Starglider...

When you're talking about a seed AI capable of rewriting its own code, are you talking about an AI that uses brute-force coding and enormous cycles of execution while applying a natural-selection system?
No. It probably is possible to build a seed AI like that, but it's extremely difficult due to the inherent limitations of evolutionary mechanisms; specifically, they have no lookahead (and are thus restricted to incremental paths), require you to adopt mechanisms that are extremely robust under point mutation (which generally has a huge computational efficiency overhead) and are inherently limited (in naive approaches) by the fact that useful functional complexity has to be defended against mutation degeneracy pressure. There are a raft of possible early-theoretical-stage solutions to these: adaptive plasticity, local hillclimbing, expression systems, various sorts of parsimony, intelligent fitness function refactoring, Bayesian operator control, fine-grained intragenomal fitness-contribution tracking, trickle-down credit assignment, recursive GA, etc. etc., and numerous hybrid approaches on top of the already large library of 'standard' GP tricks. I was doing a fair bit of work on this myself before I wrote off evolved AGI as a really bad idea, and I have used some of them since on narrow AI projects.

Even the best theoretical methods require extreme amounts of computing power to evolve really complicated systems, and you have very little control over the end product. It will meet some specific external requirements that you build into the fitness function (and maybe some very crude internal metrics such as a parsimony measure) but that's it. GP produces horribly opaque code and, even worse, many forms of it tend to actively produce expansionist and survivalist goal systems (which are deeply and redundantly embedded into the whole system). It's nasty stuff. Of course there are plenty of enthusiasts who ignore all this and are cheerfully trying to evolve AGIs - they mainly like GP because it doesn't require you to understand what you're doing.
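For readers who haven't seen one, a minimal evolutionary loop looks something like the sketch below; it works on a toy numeric problem rather than evolved program code, and shows only selection and mutation, nothing like the machinery or scale discussed above.

[code]
import random

# Minimal evolutionary loop on a toy problem: evolve a bit string towards
# all-ones. Only selection and point mutation are shown; real genetic
# programming evolves program trees and needs far more machinery and compute.
GENOME_LEN, POP_SIZE, GENERATIONS, MUTATION_RATE = 20, 30, 60, 0.02

def fitness(genome):
    return sum(genome)  # the 'external requirement' built into the fitness function

def mutate(genome):
    return [1 - bit if random.random() < MUTATION_RATE else bit for bit in genome]

population = [[random.randint(0, 1) for _ in range(GENOME_LEN)] for _ in range(POP_SIZE)]
for _ in range(GENERATIONS):
    population.sort(key=fitness, reverse=True)
    parents = population[:POP_SIZE // 2]                 # truncation selection
    population = parents + [mutate(random.choice(parents)) for _ in parents]

print("Best fitness after evolution:", max(map(fitness, population)))
[/code]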
Or are you talking about an AI that understands its own coding language fluently and will therefore automatically write the most efficient code possible, executing exactly what the goal is?
Yes. 'Most efficient possible' isn't a requirement though, 'as good as a decent human engineer' will do for getting the system going, as long as the system can develop code significantly faster than a human. Though in practice, yes, it usually is much more efficient (though I have only been able to do a very limited amount of testing of this to date, and AFAIK this goes for everyone else working on it).
Unless I'm mistaken, I have heard of a similiar concept to the latter, but not the former.
That is an article about an antenna designed by genetic algorithms, which have quite a few practical applications these days (I should know, they've contributed to my salary more than once). Genetic programming is a subset of genetic algorithms, normally used for finding single functions or optimising simple algorithms (a page of code or less). There's been a lot of research on scaling GP up and very little tangible progress on it so far (though it's a young field; as with artificial vision, the compute power wasn't really there to do anything really useful until the 90s).

Automated programming is very different from this; it involves getting an AI system to code in much the same way that a human does (but without being restricted to the tiny window of what a human can hold in their short-term memory or our limited capability to follow deep symbolic inferential chains). As I mentioned earlier, Douglas Lenat's Eurisko was the closest system to this that got a lot of discussion in the literature, but while it was an automated programming system it wasn't a rational automated programming system. It worked using a big tangle of heuristics, with no explicit probability, utility or hard constraints checking. Really this was a hybrid between evolutionary GP and rational automated programming; more powerful, transparent and harder to write than the former, less powerful, transparent and easier to write than the latter. I suppose if you were being exceptionally generous you could consider Newell's SOAR system to be an automated programming system, but really it was just PROLOG on steroids with some extra caching. The only 'programming' mechanism was the chunking, and logical inference tree collapse only covers a tiny subset of real software development.
Are you able to recommend any books, or even better, links to reading material about AI?
The best general introduction to AI I've encountered is 'Artificial Intelligence: A New Synthesis', by Nils Nilsson. It's an undergrad level textbook that covers just about every general area of narrow AI research. Frankly I wish I'd had it when I was a compsci freshman.

General AI is harder. There are hundreds of books out there devoted to laying out the author's personal ideas on how to build an AGI, but most of these are only useful if you're mining for reusable ideas and are prepared to sift a lot of chaff to find a few useful seeds. Some of the better ones have decent reviews of other projects and approaches in the introductions. I'd definitely recommend Hofstadter's 'Fluid Concepts and Creative Analogies' even though it isn't directly about AGI research (and if you haven't already read it, 'Godel, Escher, Bach' is an excellent preparation to get you into a productive mindset for thinking about AGI). Minsky's 'The Society of Mind' is a bit dated (and advocates completely the wrong approach) but it's interesting. Eric Baum's 'What Is Thought?' is good; it's split between narrow AI stuff and thoughts about AGI, but it covers a variety of interesting material.

Avoid Newell's 'Unified Theories of Cognition' unless you're a masochist; it's widely cited but horribly padded, extremely dry and generally not terribly useful. I've heard good things about Goertzel's 'Artificial General Intelligence' (Goertzel is a fun guy, and his approach to AGI, while still horribly connectionist and non-rigorous, has been steadily improving over the last decade) but I confess I haven't found time to plough through it yet.

In terms of papers, there isn't much I can recommend unless you want very specific technical detail about a particular project. AI review papers tend to suck (at least for beginners), basically because you need the length of a book to do a decent job of it. Lenat's original Eurisko papers are really interesting, but not available anywhere online that I know of; you'll have to pull them from a university library. There is a huge number of 'here's my trivial prototype that does nothing useful, if only someone gave me a $n million grant this would obviously support an AGI' papers. Most of them are pretty worthless and/or slight variations on a theme. Some of the non-AI cognitive science papers can be relevant though, even if you're not building a neuromorphic AGI. I recall Lawrence Barsalou's 'Perceptual Symbol Systems' (and his other papers along the same lines) were a significant inspiration to me (and others) with regard to creating a reflective formalism of 'reference semantics' (connecting software representations to reality in general is typically called the 'symbol grounding problem').

That said, Eliezer Yudkowsky's early papers are worth reading, even though they're obsolete and horribly non-empirical. I found Levels of Organisation in General Intelligence very inspirational with regard to dealing with layered complexity and general layout of a rational AGI back when I was making my first really serious attempts on the problem. Creating a Friendly AI was the original paper that started the Friendliness debate moving from informal discussion to a serious technical debate. Even though the mechanisms proposed are (again) far too vague and obsoleted even by the author, I still think it's worth reading.
Well, that, or just send me some copies of AI programs I can use to take over the world!
Get your own megalomaniacal supercomputer! We're not a charity here on Space Platform EV-1L... though we do have some surplus type 4 Death Rays in our clearance sale this month... :)
User avatar
D.Turtle
Jedi Council Member
Posts: 1909
Joined: 2002-07-26 08:08am
Location: Bochum, Germany

Post by D.Turtle »

Starglider wrote:If the researchers subscribing to the same design philosophy I do win, then all of humanity will benefit from a seed AI that rapidly self-improves to a wildly transhuman state and guides us through the transition (things like the 'sysop scenario' are possible outcomes but not something you'd demand from the start; mechanisms like this should be derived from ethics, not axioms themselves).
At what kind of a timescale are we looking at for this to happen?
User avatar
SirNitram
Rest in Peace, Black Mage
Posts: 28367
Joined: 2002-07-03 04:48pm
Location: Somewhere between nowhere and everywhere

Post by SirNitram »

Starglider wrote:
SirNitram wrote:Artificially paring down the observed evidence to support your pre-existing conclusions. You again assume you can pole-vault from human-level intelligence to something that resembles nothing we've ever seen.
WTF are you talking about? Pole vaulting? What? Do you have any actual rebuttal to my point that a few very very similar intelligences don't constitute a representative sample or are you just going to repeat the word 'artificial' over and over again?
I will beat you over the head with this one more time, you idiotic monkey.

The rise of intelligence can be tracked through evolution. We can understand it. We can draw conclusions on how it moved from one step to the next, from early social gatherings, to the formation of ever-more-complex societies, to problem solving, to tool use.

Your 'artificial paring down' is your own idiocy. You assume only humans have GI, when that's bullshit: The modern high primates demonstrate social organization, problem solving, tool use, and can easily be taught language.

You can whine 'Very similar intelligences' all you want, but the fact is you want to go from these knowns all the way to a total unknown is ridiculous. You abandon all previous useful data that could assist the work!
Question: Why must there be a different structure? Why not just use the observable evidence we have as a basis and build up from it?
We could. That's called either neuromorphic AI design, or outright uploading.

Neuromorphic AI design is a bad idea essentially because it gives a false sense of security. You have to get it exactly right for two-way human intuition, empathy etc. to work. Otherwise you've built something roughly equivalent to a simulated biological extraterrestrial. In that case you can't make strong predictions about what its behaviour will be, particularly when it self-modifies into a non-neuromorphic intelligence (which will be massively more efficient when running on conventional computers).
Except we can make no strong predictions about hard-takeoff seed AIs. It's the active pursuit of an Outside Context Problem, because we know they will radically differ from our own intelligence. Furthermore, you persist in this idea that the AI should have arbitrary self-modification abilities. That is, again, attempting to create an OCP.
Uploading is a good idea. It only punts the basic problem forward to our transhuman successors, but that's still a net improvement. I support uploading research. However the technological barriers that have to be overcome are quite different from de novo seed AI development and I fear that they may not be overcome in time to avert a seed AI disaster.
So why pursue a likely disaster? Why not simply use neuromorphic non-self-modifying AI's? Unlike Seed AI's, we can make predictions on how to spur their growth, by studying actual intelligences.

And of course empathy is a cheap trick! You know why evolution loves cheap tricks? Easier to make, more efficient to create, lower costs.
More important than any of that is easier to reach. Evolution must follow incremental paths, it has no ability to make multiple simultaneous complex changes to achieve a result, because it has a lookahead depth of zero.

Exactly. It is always easier to take an incremental step. Why artificially induce additional difficulty?
A sensible engineer doesn't look at the lever and discard it as a cheap trick; he uses it to make things better.
I've explained why humanlike empathy is extremely hard to implement in AGI and completely unnecessary anyway. AGI design does not suffer from the same limitations as evolution. It makes about as much sense as trying to build giant ornithopters instead of 747s.
You have decreed it should not go the way of evolution; you've not given a compelling reason. If anything, you've given an extremely compelling reason against. The result of your Seed AI work is an AGI that humans can't fight, can't trick, and can't hold back. If it decides we're surplus to requirements, we're dead. If you made a mistake in your ethics programming, we're dead. If your goal programming wasn't gone over enough times, we're dead.

Compelling fucking reason.
Again we come back to the idea you must have a scratch-built, resemble-nothing-before AGI. Why waste all this handy data and observable subjects you have laying around?
We don't have 'handy data' or good means of observing the subjects. This is why psychology is in a rather sorry state despite over a century as a supposedly scientific discipline. We have a reasonable idea of how neurons work, a rough idea of how microcolumns work and the general functional areas of the brain and that's it. This is equivalent to trying to reverse engineer and clone a typical modern PC given a good knowledge of transistors, a rough knowledge of logic gates and a videotape of someone using Microsoft Office.
Then collect more data. We know how intelligence grew by observing those further down the ladder. 'TOO SIMILAR' you bark, 'NOT GI' you wail, except they are incremental steps we can study.
A sufficiently powerful AGI will model from first principles, this is a given. However, one would have thought, given your familiarity with the experiment about how a sufficiently clever AGI with such total understanding might get out, you'd think of such clever ideas as not giving it the keys to the mansion.
Say what?
Idiot. I'm referring to how you yourself spoke of an experiment proving an AGI can social-engineer its way off a standalone machine, how you know there's no real way to stop it if there's something wrong and it is hostile or indifferent, and yet you still want to make one.
So no, emotions are not necessary for empathy in general and human-type empathy is irrelevant for AGIs.
For any sufficiently powerful AGI that is not modelled on any previous GI.
No, they're not necessary even for a closely neuromorphic AGI, because even that can spawn off subprocesses (to model external agents) that can be observed but firewalled from the main goal system. You'd have to actively cripple an AGI to force it to use something as sucky as humanlike empathy. Of course that's the adversarial methods quagmire again and will not work in the long run; the AGI will independently recreate the missing capability.
Only if you grant it arbitrary self-modification as a feature.
Which are two steps which don't strike me as terribly wise.
You're not getting it. Making an AGI more brain-like doesn't make it significantly more likely to be nice or comprehensible. It just makes it slower, much harder to understand/analyse and impossible to formally verify as benevolent. There is a huge difference between 'something we coded to be roughly like the brain' and 'human upload'. The latter are OK, accepting that they're just an intermediate step and they will still have to solve the hard takeoff issue.
No, you're obsessively sticking to the idea that we make the AGI self-modifying and arbitrarily powerful. There's no reason for that except for attempting to create an OCP.
That's the whole point, Starglider. The point of Friendly AI Theory is to anthropomorphise any actual AI so that it'll have such an impulse.
Different meaning of anthropomorphise; 'make anthropomorphic' rather than 'act as if it was anthropomorphic'. I'm telling you, having done an in-depth review of every AGI project I could find a decent description of, that the first one is ridiculously difficult for anything other than a human upload. There are countless opportunities for subtle, undetectable failure that create an AGI that initially seems OK in the lab but goes rampant shortly afterwards.
Let me guess: Every one has arbitrary self-modification?
Frankly I expected better from this board. You're advocating fuzzy ill-defined 'emotions' and 'feelings' and dragging AGIs down to the lowest common human denominator in the hope that this will make Everything Turn Out Ok (tm). I'm saying that the way to do it is with clean, objectively verifiable logic and clearly stated robust ethical principles, and that we can (with a lot of work, but it's worth it) prove this will work. Regardless of the more technical feasibility concerns, the latter should be preferable to anyone with any respect for rationality.
Frankly, I expected more from you than clinging to such silly ideas as 'It will be totally okay; we will make the safeguards work the first time perfectly. Nevermind we have no frame of reference to deal with the result; the safeguards must work perfectly. It will be the good kind of Outside Context Problem.'
If it has no impulse, it will not care, because 'caring' is an emotion. It will continue its goal.
I've already pointed out that the gigantic steaming mess that is the human goal and emotional system is a horribly bad example to copy even if we had the capability. It's barely stable even with the minimal reflection humans have, it would be pathetically unstable if suddenly equipped with full direct self-modification capability and consistency pressure.
There you go again, insisting we give these things self-modification.
'Empathy' isn't the solution. Ethics are the solution.
You asked for one thing that required empathy;
I pointed out that empathy is not required for understanding others. I then pointed out that empathy will only make an AI 'nice' if we perfectly copied a (nice) human goal system, and this is technically infeasible, particularly if you want to actually be sure it will work before you fire it up. The fact that even if you could magically do this it wouldn't be stable under reflection just further underlines how worthless the whole concept is.


Ah, I misspoke. You asked for one thing that required emotions, and empathy was the answer. That's my bad.
you didn't say 'Give me a solution to the indifferent AI God.'. The only way there is to make it never want to harm a human, ethics.
Never harming humans is a start, but I think we can do rather better than that. Even the results of the three laws as presented in 'The Metamorphosis of Prime Intellect' are better than that.
The Three Laws are a failure in Asimov's own work, but I won't expect you to have read anything that doesn't include assuming making a self-upgrading AGI is a good thing.
But this again touches on this ridiculous idea that we should throw out all collected data on intelligence, how it works, how it grows,
No one tossed it away without consideration. Everyone I know who's working on this kind of AI carefully considered the concept of a neuromorphic AGI, judged it worse than useless and discarded it.
Worse than useless, but is it worse than an indifferent AGI god which cannot be stopped, has no frame of reference for us to work from, and may well be totally indifferent to us?

Or are we assuming that ethics of yours works perfectly first go?
presumably yelling 'IT'S ALIVE' or something ridiculous.
Well, I confess that that is kind of fun.
Your labelling of empathy as a 'cheap trick' in a derogatory way
I was not being particularly derogatory; it's cheap in the sense of needing the least cumulative fitness pressure (the single most precious resource in natural selection) to evolve. However an intelligent designer can achieve much better performance without the drawbacks of overlaying someone else's (presumed) mental state onto your own brain. As such it is not a useful trick in this domain. Cases of biomorphic engineering in general tend to get a lot of press, IMHO mainly due to nature worshippers and biowankers, but they are very much the exception rather than the rule.
You are viewing this, however, with the idea that we're still embracing the seed AI paradigm. The whole thing is incredibly dangerous.
They make extra steps unnecessary!
Only for evolution. Not for software designers. As I said, you would have to work extra hard to cripple an AGI in this way. Spawning separate instances and then monitoring them is by far the easiest technical solution.
You have no idea how OK I am with crippling an AGI.
Manic Progressive: A liberal who violently swings from anger at politicos to despondency over them.

Out Of Context theatre: Ron Paul has repeatedly said he's not a racist. - Destructinator XIII on why Ron Paul isn't racist.

Shadowy Overlord - BMs/Black Mage Monkey - BOTM/Jetfire - Cybertron's Finest/General Miscreant/ASVS/Supermoderator Emeritus

Debator Classification: Trollhunter
User avatar
Surlethe
HATES GRADING
Posts: 12270
Joined: 2004-12-29 03:41pm

Post by Surlethe »

How could one create an AI that can't modify itself? Self-modification seems necessary for any intelligence or ability to learn; the human brain does it all the time. It seems that instead the big issue is whether or not to make an AI transparent to itself; is that possible?
A Government founded upon justice, and recognizing the equal rights of all men; claiming higher authority for existence, or sanction for its laws, that nature, reason, and the regularly ascertained will of the people; steadily refusing to put its sword and purse in the service of any religious creed or family is a standing offense to most of the Governments of the world, and to some narrow and bigoted people among ourselves.
F. Douglass
User avatar
SirNitram
Rest in Peace, Black Mage
Posts: 28367
Joined: 2002-07-03 04:48pm
Location: Somewhere between nowhere and everywhere

Post by SirNitram »

Surlethe wrote:How could one create an AI that can't modify itself? Self-modification seems necessary for any intelligence or ability to learn; the human brain does it all the time. It seems that instead the big issue is whether or not to make an AI transparent to itself; is that possible?
I'm referring more to changing its basic mechanisms or hardware. Having chunks of ROM, for example.
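Purely as an illustration of the software pattern being suggested here (and saying nothing about whether it would actually constrain a self-modifying AGI), the "chunks of ROM" idea might look like this in Python: the goal specification is frozen at construction while learned parameters stay freely modifiable.

[code]
from dataclasses import dataclass
from types import MappingProxyType

# Toy sketch of 'chunks of ROM': goals are fixed at start-up behind a read-only
# view, while learned parameters remain mutable. An illustration only; it makes
# no claim about whether this would hold against a self-modifying AGI.

@dataclass(frozen=True)
class GoalSpec:
    description: str
    weight: float

class Agent:
    def __init__(self):
        self._goals = MappingProxyType({"primary": GoalSpec("never harm a human", 1.0)})
        self.learned_params = {}          # the part the agent may rewrite as it learns

    @property
    def goals(self):
        return self._goals                # exposes only the read-only view

agent = Agent()
agent.learned_params["curiosity"] = 0.3   # allowed
try:
    agent.goals["primary"] = GoalSpec("maximise paperclips", 1.0)   # blocked
except TypeError as err:
    print("Goal store is read-only:", err)
[/code]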
Manic Progressive: A liberal who violently swings from anger at politicos to despondency over them.

Out Of Context theatre: Ron Paul has repeatedly said he's not a racist. - Destructinator XIII on why Ron Paul isn't racist.

Shadowy Overlord - BMs/Black Mage Monkey - BOTM/Jetfire - Cybertron's Finest/General Miscreant/ASVS/Supermoderator Emeritus

Debator Classification: Trollhunter
User avatar
Admiral Valdemar
Outside Context Problem
Posts: 31572
Joined: 2002-07-04 07:17pm
Location: UK

Post by Admiral Valdemar »

I don't see why we can't have a true AI that is limited to its own construct and not, y'know, plugged into anything like a military defence network or even a damn toaster. That way, you can have a totally free and open intelligence construct that at least gives you some way of monitoring how the thing evolves through direct interaction with us and through simulations of our world.

I certainly see no reason to halt such research on the risk that Skynet may be created, rather than Johnny Five.
User avatar
SirNitram
Rest in Peace, Black Mage
Posts: 28367
Joined: 2002-07-03 04:48pm
Location: Somewhere between nowhere and everywhere

Post by SirNitram »

Admiral Valdemar wrote:I don't see why we can't have a true AI that is limited to its own construct and not, y'know, plugged into anything like a military defence network or even a damn toaster. That way, you can have a totally free and open intelligence construct, that will be able to at least give you some way of monitoring how the thing evolves through interaction with us directly, and simulations of our world.

I certainly see no reason to halt such research on the risk that Skynet may be created, rather than Johnny Five.
Skynet or Johnny Five (IS ALIVE!), it's a lot better in my mind to have it under lock and key until we know which one it is, as opposed to handing it all the tools it needs to become a posthuman God.
Manic Progressive: A liberal who violently swings from anger at politicos to despondency over them.

Out Of Context theatre: Ron Paul has repeatedly said he's not a racist. - Destructinator XIII on why Ron Paul isn't racist.

Shadowy Overlord - BMs/Black Mage Monkey - BOTM/Jetfire - Cybertron's Finest/General Miscreant/ASVS/Supermoderator Emeritus

Debator Classification: Trollhunter
User avatar
Surlethe
HATES GRADING
Posts: 12270
Joined: 2004-12-29 03:41pm

Post by Surlethe »

SirNitram wrote:
Surlethe wrote:How could one create an AI that can't modify itself? Self-modification seems necessary for any intelligence or ability to learn; the human brain does it all the time. It seems that instead the big issue is whether or not to make an AI transparent to itself; is that possible?
I'm referring more to changing its basic mechanisms or hardware. Having chunks of ROM, for example.
Ah. Even so, if it's "crippled" and transparent to itself, then it will eventually be able to fix itself. Hardware, true, it won't be able to mess with unless it can get a person to do it or unless it can escape onto the internet, which, assuming it's developed on an isolated computer (a big assumption), raises the social engineering question.
A Government founded upon justice, and recognizing the equal rights of all men; claiming higher authority for existence, or sanction for its laws, than nature, reason, and the regularly ascertained will of the people; steadily refusing to put its sword and purse in the service of any religious creed or family is a standing offense to most of the Governments of the world, and to some narrow and bigoted people among ourselves.
F. Douglass
User avatar
His Divine Shadow
Commence Primary Ignition
Posts: 12791
Joined: 2002-07-03 07:22am
Location: Finland, west coast

Post by His Divine Shadow »

I discussed something like what you are talking about in the chat earlier; I was thinking of it as a conscious/subconscious layer of separation, so a potential AI wouldn't know exactly how it worked or how it did things. Assuming that's possible.

Another idea I was considering would be to limit any AI to only audio and visual input, like us humans, so it couldn't jack into anything directly; it would have to use keyboards and screens.
Those who beat their swords into plowshares will plow for those who did not.
User avatar
SirNitram
Rest in Peace, Black Mage
Posts: 28367
Joined: 2002-07-03 04:48pm
Location: Somewhere between nowhere and everywhere

Post by SirNitram »

Surlethe wrote:
SirNitram wrote:
Surlethe wrote:How could one create an AI that can't modify itself? Self-modification seems necessary for any intelligence or ability to learn; the human brain does it all the time. It seems that instead the big issue is whether or not to make an AI transparent to itself; is that possible?
I'm referring more to changing its basic mechanisms or hardware. Having chunks of ROM, for example.
Ah. Even so, if it's "crippled" and transparent to itself, then it will eventually be able to fix itself. Hardware, true, it won't be able to mess with unless it can get a person to do it or unless it can escape onto the internet, which, assuming it's developed on an isolated computer (a big assumption), raises the social engineering question.
The social engineering may be a lost cause; given enough time studying humans, it will get past them. For all that Starglider nattered on earlier in the thread about how difficult analyzing humans is for an autistic, we still do it. And this thing, even crippled, will be thinking faster.

No, transparency is probably where we have to hold off. If it can see how to massively improve its own function, we may not be able to yank the power cables in time.

Incidentally, while I'm sure the internet would provide it massive parallel processing power (hijacking botnets would probably be child's play), I don't see it migrating through it, unless we have some major breakthroughs in consumer hardware.
Manic Progressive: A liberal who violently swings from anger at politicos to despondency over them.

Out Of Context theatre: Ron Paul has repeatedly said he's not a racist. - Destructinator XIII on why Ron Paul isn't racist.

Shadowy Overlord - BMs/Black Mage Monkey - BOTM/Jetfire - Cybertron's Finest/General Miscreant/ASVS/Supermoderator Emeritus

Debator Classification: Trollhunter
User avatar
Surlethe
HATES GRADING
Posts: 12270
Joined: 2004-12-29 03:41pm

Post by Surlethe »

SirNitram wrote:The social engineering may be a lost cause; enough time studying humans and you will get by them. For all Starglider nattered on earlier in the thread about the difficulty of analyzing humans for an autistic, we do it. And this thing, even crippled, will be thinking faster.
It will be thinking many orders of magnitude faster, IIRC.
No, transparency is probably where we have to hold off. If it can see how to massively improve its own function, we may not be able to yank the power cables in time.
Hence my question: is it possible to deny an AI some level of transparency? Humans are certainly not self-transparent, but will the human model's lack of self-transparency hold up when processing power increases by many orders of magnitude? Another question is incremental reduction of opaqueness. If it can begin to see how to increase its own transparency, it may do just that. After all, humans can to some degree reduce their own opaqueness (e.g., "when such-and-such happens, my emotional reaction is thus, so ..."), and we can't rewrite our brains.
Incidentally, while I'm sure the internet would provide it massive parallel processing power (hijacking botnets would probably be child's play), I don't see it migrating through it, unless we have some major breakthroughs in consumer hardware.
Why's that? I'm relatively ignorant when it comes to this sort of thing.
A Government founded upon justice, and recognizing the equal rights of all men; claiming higher authority for existence, or sanction for its laws, than nature, reason, and the regularly ascertained will of the people; steadily refusing to put its sword and purse in the service of any religious creed or family is a standing offense to most of the Governments of the world, and to some narrow and bigoted people among ourselves.
F. Douglass
User avatar
Darth Wong
Sith Lord
Sith Lord
Posts: 70028
Joined: 2002-07-03 12:25am
Location: Toronto, Canada
Contact:

Post by Darth Wong »

Why the fuck are we assuming that this speculative superhuman computer intelligence will be given unlimited power to effect the changes it wants in the physical world? Will humans just blindly trust some sort of SkyNet-type AI to control everything?

Even a "transhuman intelligence" can only work with what it's given. A brain in a box is severely limited in terms of what it can do.
Image
"It's not evil for God to do it. Or for someone to do it at God's command."- Jonathan Boyd on baby-killing

"you guys are fascinated with the use of those "rules of logic" to the extent that you don't really want to discussus anything."- GC

"I do not believe Russian Roulette is a stupid act" - Embracer of Darkness

"Viagra commercials appear to save lives" - tharkûn on US health care.

http://www.stardestroyer.net/Mike/RantMode/Blurbs.html
User avatar
Admiral Valdemar
Outside Context Problem
Posts: 31572
Joined: 2002-07-04 07:17pm
Location: UK

Post by Admiral Valdemar »

Which is precisely what I suggested. I don't see why we have to give such an experimental entity so much power. Even linking it to the Internet is a big no-no, given how stupid people are about falling for MSN bots; it would be like a fox in a chicken hutch if let loose.

The military? They can stay the fuck away from it until it's proven to be predictable. If not, then they get a gimped version for their use.
User avatar
Darth Wong
Sith Lord
Sith Lord
Posts: 70028
Joined: 2002-07-03 12:25am
Location: Toronto, Canada
Contact:

Post by Darth Wong »

There's also the question of sheer wankery. It reminds me of people who think that if you put Captain Kirk at the helm of a starship, it would suddenly become the equivalent of ten starships. Any AI, no matter how intelligent, will still be constrained by physical limitations. It will not magically be able to accomplish whatever arbitrary thing it wants simply by virtue of being highly intelligent. Even if it had limited real-world abilities, how would it expand those limits without humans noticing long before it was able to actually take on and destroy humanity?

The "transhuman" wankery I'm seeing in this thread is unbelievable. The way some people are describing things, if you put a "transhuman" AI in charge of an army of ten guys with spears, his sheer genius would allow them to crush the entire US Armed Forces.
Image
"It's not evil for God to do it. Or for someone to do it at God's command."- Jonathan Boyd on baby-killing

"you guys are fascinated with the use of those "rules of logic" to the extent that you don't really want to discussus anything."- GC

"I do not believe Russian Roulette is a stupid act" - Embracer of Darkness

"Viagra commercials appear to save lives" - tharkûn on US health care.

http://www.stardestroyer.net/Mike/RantMode/Blurbs.html
User avatar
Stark
Emperor's Hand
Posts: 36169
Joined: 2002-07-03 09:56pm
Location: Brisbane, Australia

Post by Stark »

I remember a thread on this forum (fiction related) that involved an AI emerging in some lab somewhere by accident and building impregnable energy shields out of ... stuff lying around ... using ... intelligence. It was just fiction, but the author believed it was plausible, i.e. that 'super intelligent' = 'freed from physical limitations'.
User avatar
Gullible Jones
Jedi Knight
Posts: 674
Joined: 2007-10-17 12:18am

Post by Gullible Jones »

Supposedly, there's the issue that a sufficiently smart AGI that can talk to us could "hack" our brains; that is, in non-wankspeak, it could get very good at manipulating people and persuade its tenders to plug it into the web (or a tank, or whatever).

I'd hazard that this, like everything else about AGIs, is wanked completely out of proportion, although I sometimes wonder, seeing how easy people are to fool. It is something to watch out for, though, if for whatever retarded reason we actually wanted to build a self-improving AGI.
User avatar
Gullible Jones
Jedi Knight
Posts: 674
Joined: 2007-10-17 12:18am

Post by Gullible Jones »

And I'm just going to throw this in: is there really any reason we should build AIs to invent stuff for us? I mean, is there any reason we couldn't do whatever insane stuff an AI could, given enough time? Isn't "Singularity or stagnation" a false dichotomy?
Post Reply