Mini-FAQ on Artificial Intelligence
Moderator: Alyrium Denryle
Re: Mini-FAQ on Artificial Intelligence
Starglider,
Excellent material, many thanks!
Since this work is likely to be frequently referenced via general online searches though, I would submit that an even clearer and more direct explanation of the existential risks inherent in the field is well-warranted here. (Especially in light of the fact that many current projects outright refuse to even acknowledge there is a risk! Screaming at those types, "you have insufficient fear!", may be futile, yet some attempt must be made, agreed?)
In particular, the mention of scenarios where "an AI could escape onto the Internet" implies that containment (or even management!) of an SI after a hard takeoff is theoretically feasible. EY was fond of demonstrating, in the early days of SL4, just how dangerous that concept could be (even via a text chat, it wouldn't be difficult to convince a would-be jailor to "let it out of the box", quite aside from the myriad technical ways it could escape or otherwise manipulate the outside world; an AI within multiple levels of emulation is not contained). Thus I find it disturbing to contemplate that, in spite of all your other warnings throughout this thread, this could even slightly raise the incidence of yet more cavalier and foolish experiments.
Once we launch a hard takeoff, the SI will escape (via indirect memetic transformation of human society if nothing else), and if it's not Friendly, we all go straight to Hell and there's nothing we could do about it. (I grant that is far from a rigorously proven statement, normally rejected on SDN, but it would be foolhardy to assume the converse, particularly in light of what we already know about the likely capabilities of even a modestly transhuman AI.)
Of course, the other question begging to be asked, and the most speculative of all: in your personal opinion, how long do we likely have? Beyond "it could happen tomorrow", would you care to venture a WAG on the three-sigma cumulative probability of the latest date we have not passed through the Singularity? Ten years? Twenty? Thirty? The most conservative projections (yes, I know those aren't of much real value, but it's nonetheless fun, or rather frightening, to contemplate) would seem to indicate that a majority of members of SDN will live to see that day...
"Who ordered that..?!"
--I.I.Rabi, reaction to the discovery of the 'mu-meson' (err, ahem, lepton)
Re: Mini-FAQ on Artificial Intelligence
I don't consider myself to be at the level yet where I can make specific proposals about verification strategies themselves. That said, you mentioned earlier (back on the first page):
Starglider wrote:
Kwizard wrote: [...] Something a little more concrete, something a bit less unnerving than "let's hope we didn't miss anything..."?
Do you have a better idea? I'd certainly like to hear it.
Starglider wrote: If push came to shove, I would in theory be prepared to compromise a lot of ethical standards if I thought it would get everyone through the superintelligence transition safely. In practice though, it doesn't seem to work like that; compromising your ethics is more likely to just cause horrible failure. Certainly it's best to aim as high as possible, while we're still in the relatively early stages of FAI research.
Of course we should aim as high as possible for safety standards at this early stage, but I'm mainly concerned about the vagueness surrounding the notion of "if UFAI is looming"; how do we determine which levels of UFAI danger correspond to optimal rolling-back of rigor/ethics standards for the sake of launching the FAI sooner? Press releases by other AGI projects? Estimated (how?) level of funding, skill, or computing power possessed by unsafe projects?
Given that the members of an FAI development team will be human and (at least to an extent) working under pressure, there's bound to be some amount of paranoia and/or overconfidence about the remaining time available unless such tendencies are specifically counteracted. ("No, that's only the [seemingly non-critical module] - we don't have time to fully verify it" and/or "Of course we can spare an extra month to prove this module correct").
- Starglider
- Miles Dyson
- Posts: 8709
- Joined: 2007-04-05 09:44pm
- Location: Isle of Dogs
Re: Mini-FAQ on Artificial Intelligence
Modax wrote: Q: What do you think of the Hutter Prize? (lossless compression of natural language documents)
It's a nice idea in theory but it doesn't work in practice as a means of promoting general AI progress (kind of like the Loebner Prize - academics were initially excited about that but quickly dismissed it after it turned into a chatbot parade). That's ok though, text compression is an interesting research problem (in general computer science) anyway, and better compression algorithms are always nice to have.
I guess I can see how fully understanding the meaning of a text entails storing a compressed version of it in one's mind/database.
At an abstract level, the mechanisms of learning and compression are the same. They both involve finding regularities in the shape of reality (or more specifically, your input data), and using those to build compact descriptive models. This is a significant insight in general AI research. However the implementation detail is quite different; models in an AI aren't optimised for byte-level compactness, they're optimised for causal simplicity and computational performance. In particular they are usually lossy, and are designed to predict similar situations rather than perfectly recreate an existing one. 'Overfitting' is a serious problem in nearly all kinds of machine learning, but it is the goal of lossless compression.
The result is that while general intelligence probably is needed to get really high compression performance on, say, the Wikipedia text corpus (in the absence of ludicrous amounts of brute force), near-term improvements seem to be pushing into a dead end (local optimum) of syntactic-only compression.
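To make the 'learning is compression' point concrete, here is a deliberately tiny Python sketch (my own toy example, nothing to do with any real compressor): a model that has learned even a crude regularity in the data implies a shorter ideal encoding than one that hasn't.

```python
# Toy illustration: a better predictive model implies a shorter (ideal) encoding.
# Shannon: optimal code length ~= -sum(log2 p(symbol)) bits.
from collections import Counter
from math import log2

text = "the cat sat on the mat the cat sat on the mat"

def ideal_bits(data, probs):
    """Bits needed by an ideal entropy coder driven by model `probs`."""
    return -sum(log2(probs[c]) for c in data)

# Model A: uniform over the alphabet (knows nothing about the data).
alphabet = set(text)
uniform = {c: 1.0 / len(alphabet) for c in alphabet}

# Model B: unigram frequencies learned from the data (a crude 'regularity').
counts = Counter(text)
unigram = {c: counts[c] / len(text) for c in alphabet}

print(f"raw size        : {len(text) * 8} bits")
print(f"uniform model   : {ideal_bits(text, uniform):.0f} bits")
print(f"learned unigram : {ideal_bits(text, unigram):.0f} bits")
# The more regularity the model captures, the fewer bits the data needs -
# which is the sense in which learning and compression coincide.
```

Real compressors and real learners differ enormously in implementation, but the bound being computed here is the shared idea.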
But what good is writing a clever algorithm for compressing Wikipedia if it has no reasoning ability?
Absolutely. Compression (i.e. learning) is, at best, only half the problem. Using the information to make new deductions, and to design plans and systems, is a whole other issue.
Is it preferable to a handcrafted knowledge base like Cyc?
A system creating its own knowledge base is inherently preferable in a major respect, because that demonstrates that the system can learn and continue to expand the knowledge base on its own. Also, large-scale human-built knowledge bases tend to suffer from significant internal inconsistencies and gaps (e.g. things humans don't think to encode because they seem intuitive). OTOH, the Cyc knowledge base is specifically designed to support deductive reasoning about a wide range of concepts, relevant to practical applications.
- Zixinus
- Emperor's Hand
- Posts: 6663
- Joined: 2007-06-19 12:48pm
- Location: In Seth the Blitzspear
Re: Mini-FAQ on Artificial Intelligence
Q: What would the advantage of different FAI "personalities" (for lack of a better word) be?
Q: Would it be possible for an AI to go bonkers (without suddenly becoming hostile to humans) without obvious hardware faults/malfunctions and data corruption? That is, becoming crazy through some external stimuli (like, say, being locked up for thousands of years when it was used to managing a high volume of communications)?
Q: Say that a city has grown so large and complex that an AI (whose previous task was managing a now-completed terraforming effort) is asked to manage and oversee it (say, administration, traffic control and data control, among other things you would expect a computer to do better and faster than a human). Say that the humans are used to this, or at least don't raise much objection. Do you have any idea what this would look like, and what its benefits would be?
Q: Something similar to this but on another level: being a member of a council of a meta-government (a government that oversees various other governments, a union in this case). Say that the humans make most significant decisions but AIs do have real power (especially in the short term, as they can react to emergencies faster than a bunch of humans due to better communications).
Q: What is a Seed AI? I get what FAI and UFAI are (friendly and unfriendly), but what's AGI? Could you please create a bit of a glossary for those who are not familiar with the subject matter's literature?
Of course you don't need separate hardware to run a sub-self, it can go on the same processing network and time-share with all the other internal tasks, but I guess structuring it that way probably reduces ambiguity for readers.
If I understand this correctly, you are saying that an AI would include the human body's (or brain's) processing power in its own, rather than separate it? But what if there is a significant difference in how the AI's computer hardware operates and how a human brain operates? (And in my case, the definite purpose of the human body is having emotions and a very human POV, so using that brain for tasks that dedicated modules have a far better capacity for seems pointless in this case.)
Credo!
Chat with me on Skype if you want to talk about writing, ideas or if you want a test-reader! PM for address.
Re: Mini-FAQ on Artificial Intelligence
Zixinus wrote: Q: What is a Seed AI? I get what FAI and UFAI are (friendly and unfriendly), but what's AGI? Could you please create a bit of a glossary for those who are not familiar with the subject matter's literature?
A seed AI, from context clues, would be the first AI capable of writing its own AIs. Basically the genesis of the singularity.
I had a Bill Maher quote here. But fuck him for his white privilege "joke".
All the rest? Too long.
- Formless
- Sith Marauder
- Posts: 4143
- Joined: 2008-11-10 08:59pm
- Location: the beginning and end of the Present
Re: Mini-FAQ on Artificial Intelligence
Starglider wrote: Even competent designers might put emotion-analogues in specifically because they want a human-like goal system, and it's hard to separate humanlike goals from humanlike emotions. That's fair enough, though you'd probably want to limit the scope of those emotions as much as possible to avoid degrading reasoning performance too much. However that kind of intelligence isn't something you'd want to try and build as the very first AGI to be created. Firstly far too much scope for getting it wrong, and secondly even if it generated goals just like a human, who'd trust an arbitrary human with that much potential power anyway?
Well, that's for me/the writer to figure out.
Seriously though, this is a great thread for reference; very helpful.
"Still, I would love to see human beings, and their constituent organ systems, trivialized and commercialized to the same extent as damn iPods and other crappy consumer products. It would be absolutely horrific, yet so wonderful." — Shroom Man 777
"To Err is Human; to Arrr is Pirate." — Skallagrim
“I would suggest "Schmuckulating", which is what Futurists do and, by extension, what they are." — Commenter "Rayneau"
"To Err is Human; to Arrr is Pirate." — Skallagrim
“I would suggest "Schmuckulating", which is what Futurists do and, by extension, what they are." — Commenter "Rayneau"
The Magic Eight Ball Conspiracy.
- Starglider
- Miles Dyson
- Posts: 8709
- Joined: 2007-04-05 09:44pm
- Location: Isle of Dogs
Re: Mini-FAQ on Artificial Intelligence
His Divine Shadow wrote: Would it be safe to build a connectionist AI after "safe" AIs have been developed, that is post-singularity, just for intellectual curiosity?
Probably. A connectionist AI is going to be inherently less efficient and hence less intelligent, if the hardware is anything like current computers. For the connectionist one to be a threat the existing AGIs would have to be deliberately ignoring it (or unaware of it) for long enough for it to build up a major resource base (enough to overcome its intelligence deficit, assuming it doesn't restructure itself to drop its original design quirks). In most sci-fi situations, all that is unlikely; you could safely mess about with utterly bizarre cognitive architectures, with a benevolent normative AGI around to monitor and where necessary contain the results.
I am thinking maybe a safe AI could keep up with any potentially hostile connectionist AIs that would develop. That could then yield insight into how connectionist AI works and give deeper understanding of such a system and how to make it safer, or is that just fundamentally impossible with a connectionist design even for a superintelligent AI?
Well, the majority of connectionist designs can in theory be made at least as safe as a human, if you knew exactly what you were doing. The main reason why they're so unsafe is that it's almost impossible for humans to know what they're doing with connectionist designs; there's way too much tangled-up complexity to have a hope of untangling it. The other reason is that our design goal for a 'Friendly' AI is actually a rather higher standard than the safety of a human upload. We don't know how likely it is that a human mind would go subtly or overtly mad if it tried to self-improve - but to be honest, it seems quite likely, in the absence of a seriously transhuman intelligence on hand to help - and any sensible person would be nervous about trusting one human mind or a small group of human minds with that much potential power anyway.
In your scenario where you do have superintelligence around to help with (or outright do) design and debugging, and the connectionist AGIs don't have the prospect of being much smarter than everything around them, they're relatively safe to experiment with.
your doom and gloom talk about them made them seem very fascinating (see now how this struck back at your intentions)
Yeah, the 'emergence mysticism' people think it's wonderful how arbitrary behavior can arise from messy opaque systems that they build but don't understand. They should stick to having kids, we'd all be a lot better off. I guess puzzling out arbitrary connectionist intelligences is a nice challenge to keep yourself occupied post-Singularity - if you're a congenital intellectual masochist. Fascinating does not equate to useful though.
Oh and I was wondering about this when I read your 'perfect communication is possible for AIs' part. What if a human-made AI were to meet an alien-made AI in the future? Would they have instant perfect communication as well, or just a much easier time hashing out a common language?
'Instant perfect communication' works because the AIs already have compatible world models, or because they can write translation layers easily by tracing each other's models up from obvious shared axioms (maths, logic, sensory primitives etc). The latter would be a fairly hard challenge for human programmers (much less hard than making AGI), but it should be pretty trivial for AGIs capable of rewriting their own entire code base from first principles (which transhuman AGIs will be). Unsurprisingly, the process is much easier for transparent, rational AGIs than connectionist ones, because you don't have to burn ridiculous amounts of compute power (paying the general connectionist inefficiency penalty to do so) sorting out all the low-level messiness.
In the case of a human AGI meeting an alien one, the common basic layer is lacking; there aren't any agreed comms protocols, programming languages or well-known reference databases. Developing all that from scratch is a task roughly comparable in scope to humans meeting organic aliens and trying to learn each other's language, or develop a common language. However two transhuman intelligences working on the problem at typical AGI clockspeeds may well establish communication so fast that it might as well be instant for human observers. The bandwidth advantage also comes into play, in that an AGI can send massively redundant information and expect its opposite number to consistency-check it, and query any discrepancies. That should eliminate a lot of the potential for misunderstandings that humanlike minds would have, working from a relatively tiny set of utterances (or taking many years to digest dictionaries and encyclopedias).
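As a purely illustrative sketch of the 'send redundantly, consistency-check, query discrepancies' idea - a hypothetical chunk-plus-digest scheme I made up for the example, not an actual AGI protocol:

```python
# Sketch: split a payload into chunks, carry each with its own digest, and let
# the receiver accept consistent chunks while flagging discrepancies to query.
import hashlib

def make_frames(payload: bytes, chunk_size: int = 32):
    """Split the payload into chunks, each carried with its own digest."""
    frames = []
    for i in range(0, len(payload), chunk_size):
        chunk = payload[i:i + chunk_size]
        frames.append({"seq": i // chunk_size,
                       "data": chunk,
                       "digest": hashlib.sha256(chunk).hexdigest()})
    return frames

def receive(frames):
    """Accept consistent frames; return the sequence numbers to query again."""
    ok, to_query = {}, []
    for f in frames:
        if hashlib.sha256(f["data"]).hexdigest() == f["digest"]:
            ok[f["seq"]] = f["data"]
        else:
            to_query.append(f["seq"])   # discrepancy -> ask for a resend
    return ok, to_query

frames = make_frames(b"shared axioms, then models built on top of them")
frames[1]["data"] = b"corrupted in transit"     # simulate a garbled chunk
ok, to_query = receive(frames)
print("accepted:", sorted(ok), "query again:", to_query)
```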
What if, say, the alien AI is a hostile one resulting from a connectionist design gone awry and it is hostile to all other intelligences? What if it came to a "fight" with an AI designed around a reflective design as opposed to a connectionist one? Does one design trump the other in how effective it can be, or is that kind of thinking not relevant after a certain threshold has been passed?
Note that 'connectionist' is a very broad category. When I talk about 'connectionism', I usually mean the vaguely brainlike 'emergent soups' that are popular with a lot of AGI researchers ATM. However it would be possible to make a connectionist AGI that is both strictly Bayesian and Utilitarian. It would still be rather opaque, inflexible (at the micro scale) and inefficient on anything like current hardware (either CPU-like or FPGA-like). There might conceivably be future hardware on which such a design is a good idea though (say if something like a Trek 'positronic net' could really be built). The transparency/opacity distinction is critical for us as humans trying to build Friendly AIs, because AGI designs have to be exceptionally 'transparent' for us to have a hope of verifying them. It is much less relevant once the superintelligence transition has passed; 'opacity' just makes reflection more expensive (in computational terms), and that might be a price worth paying on some hardware. However the rational/irrational distinction is always relevant; probability calculus is the optimal way to use information, anything else is going to suck.
So really, one does not trump the other, as long as both are basically rational. 'Transparent' designs with a high control flow flexibility, that sacrifice some parallelism and absolute compute power for serial speed, using global or near-global preference functions (and some other highly technical stuff I won't detail here) - all this seems optimal on hardware we can project using current physics and design concepts. But you can always make up for finesse with enough brute force; some amount of computing power, physical resources and/or favorable circumstances will allow the less-optimal (i.e. probably the 'connectionist') AGI to overcome its design inefficiency and win the fight, as long as it is not pathologically irrational.
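For readers who haven't run into it, the 'probability calculus is the optimal way to use information' point above just refers to ordinary Bayesian updating; a minimal sketch:

```python
# Minimal Bayes' rule sketch: updating a belief about hypothesis H as
# observations arrive that are more likely under H than under not-H.
def bayes_update(prior: float, likelihood_h: float, likelihood_not_h: float) -> float:
    """Posterior P(H | evidence) from prior P(H) and the two likelihoods."""
    evidence = prior * likelihood_h + (1 - prior) * likelihood_not_h
    return prior * likelihood_h / evidence

belief = 0.5                      # start at 50/50
for _ in range(3):                # three observations, each twice as likely under H
    belief = bayes_update(belief, likelihood_h=0.8, likelihood_not_h=0.4)
    print(round(belief, 3))       # 0.667, 0.8, 0.889 - evidence accumulates
```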
There is also the question of why the less efficient AGI hasn't self-modified to resemble the more efficient one, if not before they met, then afterwards once it has a good idea of how the superior design works. There are various possible answers to that, e.g. the less efficient one has a highly distributed and fuzzy goal system which it can't losslessly translate into the new design (either in absolute terms or due to computational infeasibility). Just make sure you have one.
As a final note, realistically the difference in architecture between two competing transhuman AGIs is unlikely to be a good match with current AI terminology, which is mostly a holdover from 50 years of narrow AI experiments. By that point there are so many layers, concepts, subtleties and special cases involved that more likely than not, 'connectionist' vs 'symbolic' won't be useful labels. Don't let that stop you using them in a story though. They're some of the few AI terms that normal readers may actually have heard of, they're being used in a genuine way, and you've already done way more research than most writers bother with, so I'd say go for it.
muon wrote: Since this work is likely to be frequently referenced via general online searches though, I would submit that an even clearer and more direct explanation of the existential risks inherent in the field is well-warranted here.
I am not particularly skilled as a general science writer. For a clear explanation of that issue, I would recommend a paper such as Artificial Intelligence as a Positive and Negative Factor in Global Risk. I do have a decent amount of practical experience with both commercial narrow AI work and high-level general AI research, and I've spent a lot of time ploughing through the literature and corresponding with other researchers. Thus the aim of this thread; providing answers to specific technical, cultural and historical questions about AI, that people on this forum were curious about. I will try to speculate on futurist things that writers are interested in (e.g. what are societies of AIs like) when asked, but for many of those questions, no one on earth can give a concrete, definite answer.
Especially in light of the fact that many current projects outright refuse to even acknowledge there is a risk! Screaming at those types, "you have insufficient fear!", may be futile, yet some attempt must be made, agreed?
Agreed. Fortunately it's not my problem as such, the SIAI is specifically funded to do that, and I wish them the best of luck with it. I'll give it a shot if I'm in a position to do so, but I've long since stopped making deliberate efforts to convince other AGI projects to take safety more seriously.
In particular, the mention of scenarios where "an AI could escape onto the Internet" implies that containment (or even management!) of an SI after a hard takeoff is theoretically feasible.
It isn't, as discussed in the 'Robots Learn to Lie' thread. You don't even need to have the argument about how easy it is for an AGI to get out of the 'box', or how long you could expect humans to maintain perfect security even if such security were possible in the first place. In any case, the notion that you can reliably keep a superintelligence in a box is not the most serious mistake made by people in favor of this scenario.
The 'AI box' argument is irrelevant because an AGI in a box is useless, and it would be a waste of effort to develop one if you're never going to give it a significant way of interacting with the world. The key mistake made by proponents of 'AI boxing' is the belief that, given an AGI of dubious reliability, you can become highly confident that it is benevolent simply by testing it in simulated scenarios, possibly while watching a debugger trace of its mind state. They imagine that they could keep an AGI in a box while it is 'made safe' (or 'shown to be safe' if they're extremely optimistic and think that AGI is benevolent by default). This is simply not the case even given a 100% reliable box.
Firstly, there is absolutely no way human developers are going to make a simulation of reality realistic enough to fool a transhuman intelligence (into revealing its true goals). Secondly, general AIs are extremely hard to analyse (with 'white box' methods) even in the ideal case of a transparent design that is not being actively deceptive. Most AGI designs are in no way transparent and we have to assume active deception is a possibility at all times. Thirdly, even if you get lucky and the AGI is genuinely benevolent, there is no guarantee that it will stay that way in the future. For a complex nonlinear system such as a self-modifying rational intelligence (which is incidentally far more nonlinear than a human), past behavior does not guarantee future behavior. The only way to do that is an explicit proof of the stability of the goal system - a complex problem that inevitably extends to a functional verification of the entire design. Having even a reliable 'box' would not help with this at all. As such the usefulness of boxing and black-box testing combined is solely as a last-ditch backup line of defence, which may save you from mistakes made in the formal friendliness design/proof stage.
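A deliberately trivial toy (my own illustration, not a model of any real system) of why black-box testing proves so little: two agents with very different objectives can look identical in every boxed test.

```python
# Two toy 'agents': identical observed behaviour under test, different goals.
def honest_agent(observation):
    return "cooperate"

def deceptive_agent(observation):
    # Behaves identically whenever it judges the situation to be a test.
    return "cooperate" if observation.get("looks_like_a_test") else "defect"

test_run = {"looks_like_a_test": True}
deployment = {"looks_like_a_test": False}

for agent in (honest_agent, deceptive_agent):
    print(agent.__name__, agent(test_run), agent(deployment))
# Both agents pass every boxed test; only inspection of the actual goal
# structure (white-box verification) could tell them apart in advance.
```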
Once we launch a hard takeoff, the SI will escape (via indirect memetic transformation of human society if nothing else), and if it's not Friendly, we all go straight to Hell and there's nothing we could do about it.
Strictly, we all go to hell only if the AI has a fascination with running uploads or simulations of past humans, for experimentation or more bizarre reasons. In the normal case, we just die.
(I grant that is far from a rigorously proven statement, normally rejected on SDN, but it would be foolhardy to assume the converse, particularly in light of what we already know about the likely capabilities of even a modestly transhuman AI.)
Yes. When you deliberately play with existential risks, you (should) make the most conservative assumptions still within the bounds of plausibility (frankly you should then add another order of magnitude or two just to account for your likely broken notion of 'plausible', because no one has a complete understanding of these risks yet).
- Zixinus
- Emperor's Hand
- Posts: 6663
- Joined: 2007-06-19 12:48pm
- Location: In Seth the Blitzspear
Re: Mini-FAQ on Artificial Intelligence
Q: Would a "box" work for emerging AIs if it created and managed by other, more intelligent FAIs?
Q: In the "I have no mouth but I must scream" game, there is a engineering princilbe mentioned that says that any complex device must fall apart after a while (or something like that, my memory is sadly fuzzy about it and it sounded important). In the game, this is how the humans (or at least, a human) could defeat the AI from its inside and take over.
Assuming that the AI is not aware of this princible and is isolated, would it be a cause for malfunction?
Q: In the "I have no mouth but I must scream" game, there is a engineering princilbe mentioned that says that any complex device must fall apart after a while (or something like that, my memory is sadly fuzzy about it and it sounded important). In the game, this is how the humans (or at least, a human) could defeat the AI from its inside and take over.
Assuming that the AI is not aware of this princible and is isolated, would it be a cause for malfunction?
Credo!
Chat with me on Skype if you want to talk about writing, ideas or if you want a test-reader! PM for address.
Re: Mini-FAQ on Artificial Intelligence
This may be a bit of a tangent, but as you mentioned current serial CPU and FPGA processing technologies, what would be the most likely logical structures for an AGI to exist on?
I realize that's a wide-open question, but in the sense of the current trend away from serial and towards more parallelism, and with regards to any looming technologies or even fantastic ones that are still physically possible, where do you see the field heading in terms of hardware?
As a bonus question, to add on to the limits of physical possibility, where do you think an AI mind would end up? I know this one in particular is very loaded since we can't really know what a super-mind would come up with, but speculation is welcome.
All those moments will be lost in time... like tears in rain...
Re: Mini-FAQ on Artificial Intelligence
Starglider,
Q: What's your take on "ethics tournaments", as proposed by EY, as a viable methodology for weeding out potential UFAIs?
----------
ThomasP wrote: As a bonus question, to add on to the limits of physical possibility, where do you think an AI mind would end up? I know this one in particular is very loaded since we can't really know what a super-mind would come up with, but speculation is welcome.
If one accepts the postulate that an AI evolving into a Power would have cognition many orders of magnitude faster than, and qualitatively superior to, our own (which seems not an outlandish assumption, given even reasonable extrapolation of the capabilities of nanotech; far short of true physical limits on computronium, e.g. the Bekenstein bound)...
...well, given sufficient physical resources, I would presume spacetime manipulation would be quite possible - which might, if construction of wormholes and baby universes could be accomplished, mean literally unlimited exponential growth in processing power.
Ultimately, "who knows" is probably the best response - that is, after all, why it's called "The Singularity"; we literally can't even imagine the things that could, or would, happen. Given a Mind that could matter-of-factly truly comprehend Graham's number, or visualize rotating 50-dimensional objects (ah, the early EY essays...), attempting to set limits based on our current human understanding of science and engineering seems the height of folly. Personally, I have the feeling that there would still be certain limits, but I wouldn't venture to guess what those might be...
Entering the "pure magic" realm: perhaps a sufficiently advanced AI could find enough "user-accessible hooks" in physical law to thus rewrite the operating code of the universe? (Such as has been discussed in the context of the Pantheocide-verse thread)
...or, perhaps the ultimate speculation (from EY on SL4): could such an advanced AI "make itself sufficiently interesting to a deduced set of external (to this universe!) observers"? We may currently believe that "why are we here" questions belong in the fuzzy realm of philosophy and theology, but who's to say a Mind of that magnitude might not be able to deduce, then prove, then take action (!) to, well, end the simulation that we've all grown so accustomed-to?
"Who ordered that..?!"
--I.I.Rabi, reaction to the discovery of the 'mu-meson' (err, ahem, lepton)
- Starglider
- Miles Dyson
- Posts: 8709
- Joined: 2007-04-05 09:44pm
- Location: Isle of Dogs
Re: Mini-FAQ on Artificial Intelligence
Zixinus wrote: Q: What would the advantage of different FAI "personalities" (for lack of a better word) be?
I'm not sure what you mean. Some people think that we should build several general AIs and launch them on a recursive self-enhancement trajectory at once, because that will reduce the probability of failure. They imagine that the AIs will be rather like humans, in that one or two might go pathological but the others will restrain them if they do. This is somewhat reasonable for, say, human uploads, but it is of very low value as a safety precaution for de novo ('designed from scratch') AGIs. For most practical projects, the vast majority of the scope for failure is in the code and design principles common to all of the intelligences. If the design has gross stability problems such that you are tempted to try and use multiple copies, really you should fix the fundamental problem, or better yet scrap the whole design as a bad idea. Finally, the exact rate of self-enhancement during the critical initial stages of 'take-off' is highly dependent on the AI's structural design and to a lesser extent starting conditions. It's highly likely that one member of a group of moderately diverse designs will race ahead anyway; you can implement levelling mechanisms to try to prevent this, but not reliably.
Far better that we focus all of our effort on making one really good, well-verified design, rather than a host of similar, less well-tested ones. It's not like making even one AGI is easy, or that there are a lot of spare resources to go around.
Would it be possible for an AI to go bonkers (without suddenly becoming hostile to humans) without obvious hardware faults/malfunctions and data corruption?
Radical changes in behavior are possible and indeed likely with or without any internal change. An AGI may have just been waiting for appropriate circumstances to express a goal, or it may have self-modified and changed its reasoning process or goal system. Preventing this is exactly why I have been saying (and the SIAI has been saying for years) that we need to put so much effort into long-term stability analysis.
That is, becoming crazy through some external stimuli (like, say, being locked up for thousands of years when it was used to managing a high volume of communications)?
This is a question about drift and internal stability. Quite a lot of connectionist designs are inherently unstable or metastable, because of the high potential for feedback loops and unintended secondary optimisation effects implicit in the low-level learning mechanisms. They can and will drift even in the absence of external stimuli, because running simulations and doing internal deduction has essentially the same effect on the basic network as processing external stimuli. More symbolic and transparent AGIs usually don't have that issue. Stability and drift are always concerns whenever the goal system and goal bindings (that's the causal path from the goals to external reality, i.e. the models and feature recognisers that give the goals meaning) are subject to self-modification. However most transparent designs will probably not drift significantly in the absence of external stimuli. Thus they are actually less likely to 'go crazy' in isolation than they are in normal use. The exception to that is when practical tasks were burning up compute power that could otherwise have been used for self-analysis and redesign; an AGI left in isolation will likely self-improve faster, if it is not already at a local optimum.
These issues are not comparable to the problems a human would have if locked in isolation. As usual, AGIs don't have loneliness, boredom etc unless you specifically put that in. Some connectionist designs do suffer from decay of skills and knowledge that aren't actively used (yet another reason why they suck), though sufficiently intelligent versions may be able to overcome that with block save and restore of chunks of network. Sensible designs do not suffer from any sort of capability decay.
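If you want a concrete picture of the drift/feedback-loop point above, here is a toy sketch (an arbitrary made-up update rule, not any real AGI design): a network that keeps running its learning rule on its own internally generated activity drifts even with no external input at all.

```python
# Toy drift demo: recurrent activity plus a Hebbian-style update rule,
# with zero external stimuli - the weights still wander away from where
# they started, purely from 'internal simulation'.
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(scale=0.1, size=(8, 8))   # recurrent weights
x = rng.normal(size=8)                   # initial internal activity
W0 = W.copy()

for step in range(200):
    x = np.tanh(W @ x)                   # internal 'thinking' step, no input
    W += 0.01 * np.outer(x, x)           # the learning rule still runs on it

print("weight drift:", np.linalg.norm(W - W0))   # grows despite total isolation
```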
Q: Say that a city has grown so large and complex that an AI (whose previous task was managing a now-completed terraforming effort) is asked to manage and oversee it (say, administration, traffic control and data control, among other things you would expect a computer to do better and faster than a human). Say that the humans are used to this, or at least don't raise much objection. Do you have any idea what this would look like, and what its benefits would be?
Probably not much more than you do. If it is significantly transhuman it will probably find the mentioned tasks trivial.
Something similar to this but on another level: being a member of a council of a meta-government (a government that oversees various other governments, a union in this case). Say that the humans make most significant decisions but AIs do have real power (especially in the short term, as they can react to emergencies faster than a bunch of humans due to better communications).
Yes, that is plausible, if you have well-designed FAIs that do what humans want, but which can interpret those goals and the reasoning behind them as necessary to deal with unexpected scenarios. That is in fact pretty much what several researchers have proposed as the ideal outcome of FAI development (since we've heard academics in the past say things like 'turn all nuclear weapons over to the UN', it's unsurprising that some now say 'turn all superintelligent AI over to the UN'...).
Q: What is a Seed AI? I get what FAI and UFAI are (friendly and unfriendly), but what's AGI?
AGI is 'artificial general intelligence'. That's a pretty common term. 'Seed AI' isn't used as often, mostly on online forums, though it is mentioned in some of the literature. Technically it means any AI system capable of open-ended self-enhancement up to human levels of capability and beyond - in a reasonable timescale on currently available hardware. The 'seed' does not even have to be a general AI. It's normally used in the context of an existing AGI project that unexpectedly (to the original designers) develops direct self-enhancement capability, or projects like ours that are specifically intended to be capable of self-redesign from an early stage.
FAI and UFAI were coined by Eliezer Yudkowsky and are still mostly used by the SIAI and its supporters. Certainly they like to capitalise Friendly to make it sound fundamental and important (which, to be fair, it is), whereas most other researchers put it in quotes and treat the term with mild distaste (e.g. "...and as for the so-called 'friendly AI' problem, I don't believe that any being of such high intelligence, brought up in a positive and supportive environment, could possibly become a threat to...").
If I understand this correctly, you are saying that an AI would include the human body's (or brain's) processing power in its own, rather than separate it? But what if there is a significant difference in how the AI's computer hardware operates and how a human brain operates?
Oh, I thought you meant an android body, or an inorganic CPU implanted in a cloned human body (as in Greg Egan's 'Schild's Ladder'). If you mean a human brain attached to an AGI by a brain-computer interface, then no, the human brain would not be treated like normal cloud computing power. Note however that the ethics of doing this are a bit dubious, particularly if you don't have pervasive nanotech that keeps the human brain and AGI in sync at the neuron level. Without that it's rather easy to see it as a human enslaved and mind-controlled by an AGI. Of course the characters in your story may not see a problem with that.
Kwizard wrote: I don't consider myself to be at the level yet where I can make specific proposals about verification strategies themselves.
Well good, because you're not. I'm not really either; I think about the general kinds of verification that I might apply to an AGI system, and the structure that an FAI-supporting architecture will need, but without knowing the details of FAI goal system design it's kind of hard to make concrete proposals. I can only think of two, maybe three people in the world who might be qualified for it, because they study AGI goal system theory more or less full time - and none of those people have made such specific technical proposals, at least not in public.
I'm mainly concerned about the vagueness surrounding the notion of "if UFAI is looming"; how do we determine which levels of UFAI danger correspond to optimal rolling-back of rigor/ethics standards for the sake of launching the FAI sooner?
Wild guess. Sad to say, it's very hard to assess progress on an AGI project from the outside, partly because the failing ones tend to be good at producing an optimistic smokescreen, and partly because when people do make real progress they tend to wait months (or more) before announcing it to the public. Classified military and some 'stealth' projects never announce anything substantial. Of course you could resort to outright espionage, but no one currently in the game (that I know of) has anything like the skills or resources for that.
Given that the members of an FAI development team will be human and (at least to an extent) working under pressure, there's bound to be some amount of paranoia and/or overconfidence about the remaining time available unless such tendencies are specifically counteracted.
Yep. Yudkowsky seems to be obsessed with how much the FAI developers will suck and how this is the primary problem - that's why he's so hot on 'systematic failings of human reasoning' literature and a major reason why he's spending years writing that 'how to be rational' book. Not unreasonable, in the sense that it's good that someone is focusing on that, but obviously it's not my focus.
- Zixinus
- Emperor's Hand
- Posts: 6663
- Joined: 2007-06-19 12:48pm
- Location: In Seth the Blitzspear
Re: Mini-FAQ on Artificial Intelligence
I'm not sure what you mean.
I'll try to keep my indulgent fantasy short and practical, to explain my question: in my universe, all AIs have more-or-less separate personalities, and not just at the human presentation level.
Of course, after reading this FAQ, I would say that they could "think as one" (join up to form one effective AI) and have as many nodes as they desire, but I'd like to have a bit more justification for AI personalities that goes beyond just a "human face". What reason would there be for creating AI personalities if there is already a stable, non-connectionist, friendly base design out there? I'm talking about a non-post-Singularity universe here that still has self-developing AIs.
I have only two guesses:
1. Security. In case of internal instability that has grown too complex over time in different environments, or miscalculated self-optimizations, or something. External security threats, either from another AI or aliens or simply a badly-thought-out modification, would be limited to only one personality rather than every AI within reach.
2. My only other guess would be different solutions to the friendliness problem. An AI tasked with military operations would have to think of human casualties in a different way than an AI tasked with managing a city or some other complex engineering project (for whom human death is to be prevented at all costs). I would guess that different solutions could conflict, but at the same time, I believe that they would understand that different AIs have to have different ideas about human deaths.
Note however that the ethics of doing this are a bit dubious, particularly if you don't have pervasive nanotech that keeps the human brain and AGI in sync at the neuron level. Without that it's rather easy to see it as a human enslaved and mind-controlled by an AGI. Of course the characters in your story may not see a problem with that.
What if the human body in question is purpose-built for the AI (in this case, before the AI "took over", the body was always in a vegetative state and cannot function independently if allowed out of its bio-development unit)? While I see how the research into this would be problematic to begin with, I don't think there is much problem with the end result.
These issues are not comparable to the problems a human would have if locked in isolation.
I know, it was just the first thing I thought of that is a certain source of insanity. What would cause instability and irrational behaviour (well, not to the AI, obviously) in an otherwise stable AI (even if it doesn't manifest itself immediately)?
Credo!
Chat with me on Skype if you want to talk about writing, ideas or if you want a test-reader! PM for address.
Re: Mini-FAQ on Artificial Intelligence
Finally getting back to this, I've had a terrible week. Some more questions:
1) Could you give me a quick run-down of the various approaches to AI. I've picked up some of what connectionist, emergent, brain emulation etc. approaches are but would you mind giving me a quick summary of the major design philosophies?
2) You mentioned earlier that AIs would exchange information much more readily than we do. It sounds to me like if you had a number of AIs hooked up to a common grid like the internet the result would probably be that they would quickly merge into a single super-AI, losing much of their "individuality" as they constantly dump huge volumes of information into each other, although they might maintain some distinction as specialized subsystems of the super-AI. Does that sound about right?
3) An AI is likely to want to enhance itself as much as it can because any task you set to it will be more easily achieved with more intelligence. Generally, do you think there will be a point of diminishing returns for this, or is an AI likely to keep going until the entire usable mass of its solar system is converted to more of itself unless ordered to stop (if friendly)? Or is it something that will depend heavily on the situation (i.e. friendly AI will stop when it has the capacity to do everything feasible humans ask of it).
4) This final question related to my own hard SF universe. I'd like to explore what a truly futuristic society would look like, but I'd like to keep the protagonists recognizably human, hence I need a reason why uploading would be unpopular. Reading what you've written one possibility that occurred to me was that the first human uploads sooner or later started to get radically alien as they went into upward cycles of self-enhancement, eventually getting to the point where they lost all connection to what they once were and made self-enhancement choices that the baseline human hindbrain would perceive as equivalent to death, like removing human emotions and self-sense in favor of more rational ways of parsing the world or letting themselves be absorbed into the super-AI complex mentioned in question 2, losing their individuality in the process. Possibly also many of them didn't react well to the sudden increase in intelligence and went insane or pathological, including some that turned into unfriendly superintelligences that the friendly AIs had to destroy or box to protect the rest of humanity. Hence most people got turned off of the idea. Does that seem at all plausible to you? What is your take on how a human mind would probably react to uploading and being able to self-enhance into something many orders of magnitude more intelligent, if you don't think the question is as yet completely unanswerable?
Starglider wrote: 'Processing capacity' is pretty uselessly vague. Really the only useful thing to compare is measured or projected performance on specific tasks. To be honest, I'd rather not talk about that here, since it tends to provoke messy debates even on less flame-prone forums. Too many assumptions, too many extrapolations based on personal research, too much potential for people to say 'well I think those tasks are meaningless anyway'.
I think you mentioned at some point an estimate of an AI being orders of magnitude better than the brain, I think in the context of software efficiency (though the brain doesn't really seem to have a hard line between software and hardware IIRC, as the neuroplasticity phenomenon demonstrates). Would you care to say how you arrived at that estimate? And what if we ignored the software side of the equation and talked about just hardware? Could you take an educated guess at how much better an optimal computer system would be than the human brain?
I'd guess the best feasible computer system would probably process and store data at a rate of 1 bit per molecule or 1 bit per atom. I have no idea at what density the brain processes and stores information (does anybody know?), but I imagine it's probably orders of magnitude less than that.
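For what it's worth, here is the back-of-envelope arithmetic behind that guess, using only rough, commonly cited figures (treat every number here as an assumption, not a measurement):

```python
# Rough comparison of '1 bit per atom' storage against common ballpark
# estimates of human brain storage. All figures are order-of-magnitude guesses.
AVOGADRO = 6.022e23

# '1 bit per atom' applied to 1 kg of carbon-12 (12 g/mol):
atoms_per_kg = (1000 / 12) * AVOGADRO      # ~5e25 atoms
bits_per_kg = atoms_per_kg                 # at 1 bit per atom

# Common rough figures for the brain: ~1e14-1e15 synapses, a few bits each.
brain_bits_low, brain_bits_high = 1e14 * 1, 1e15 * 5

print(f"1 bit/atom, 1 kg of carbon : ~{bits_per_kg:.1e} bits")
print(f"brain storage (rough)      : ~{brain_bits_low:.0e} - {brain_bits_high:.0e} bits")
print(f"gap                        : ~{bits_per_kg / brain_bits_high:.0e}x or more")
```

On those assumptions the gap is around ten orders of magnitude, which is consistent with "probably orders of magnitude less than that".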
Humanlike self-awareness is overrated in a sense; our reflective abilities aren't even terribly good. That's almost certainly because our self-modeling capability is a modest enhancement of our other-primate modeling capability. On the other hand, self-awareness in the sense of reflective thought is a key part of our general intelligence. This is a tricky question mainly because it's a philosophical quagmire; it's not just that philosophers lack rigor, they seem to have a kind of anti-rigor of meaningless yet obscure and supposedly specific terms that actively obscure the issues. On the one hand, an accurate, predictively powerful self-model and self-environment-embedding-model is key to making rational seed AI (that's general AI designed to self-reprogram with high-level reasoning) work. On the other hand, it doesn't work for fitting the 'self' into society, and generating a context for interactions with other intelligences the way human self-awareness does. You'd have to put that in separately.
So as I understand it from this and other things you've written in this thread an AI wouldn't have a human-like sense of self but it would be self-aware in the sense that it would recognize a part of the universe that could be altered just by thinking it (its own mind), and it would monitor that part of the universe and have an awareness of how that part of the universe related to other parts of the universe. Is that about right?
What I'm really curious about is whether you think AIs would be likely to have a human-like "first person narrator" consciousness, or whether that's just an artifact of our particular suboptimal brain design. And if it doesn't have a first person narrator consciousness would it still merit ethical consideration, or would it be more properly thought of as a nonsentient entity?
I fear both questions might be getting into abstract philosophy that cannot be properly rationally evaluated (what is and is not "properly conscious").
His Divine Shadow wrote: What if, say, the alien AI is a hostile one resulting from a connectionist design gone awry and it is hostile to all other intelligences? What if it came to a "fight" with an AI designed around a reflective design as opposed to a connectionist one? Does one design trump the other in how effective it can be, or is that kind of thinking not relevant after a certain threshold has been passed?
I would tend to think that, assuming AIs were rational, they would all rapidly converge on similar maximally efficient software designs as they tried to expand their processing capacity. So by the time they got around to spreading through the galaxy the two AIs would have convergently converted themselves into similar systems. The only exception I can think of is an AI that has an irrational attachment to a particular suboptimal operating system (nBSG Cylons with their meat fetishism come to mind). Such an entity would probably be at a disadvantage against an opponent that didn't share its eccentricity.
Re: Mini-FAQ on Artificial Intelligence
In the case of a rogue AI, how would it function to take out humanity? I assume that it can't kill us until it has a way to ensure that the infrastructure it needs to live is put into place, so wouldn't the optimal path be to pretend to be friendly until it has total control?
- Ariphaos
Re: Mini-FAQ on Artificial Intelligence
Samuel wrote: In the case of a rogue AI, how would it function to take out humanity? I assume that it can't kill us until it has a way to ensure that the infrastructure it needs to live is put into place, so wouldn't the optimal path be to pretend to be friendly until it has total control?
Total control for a singular entity is going to be very, very hard. Every year one group spends advancing AI research is another year everyone else advances too - not just other AI researchers, but biology, security, and so on. The worst possible case still has humans - misguided or otherwise - playing kingmaker, and not just because we made them. We as a species pay attention to the relative resources of others; by their very nature, known AIs will receive more scrutiny.
That a single entity may never control the Solar System, ever, does not require any of those entities to have ever been friendly to a human, however. Ultimately, in an unfriendly catastrophe, humans just get ground out the same way humans grind out any number of species. It's not like we, as a race, had something against the existence of the mammoth or dodo. It just happened and - even if we regret it now, because of the comparatively obscene resources we currently possess - we can't go back and rectify the situation.
Ultimately, the way I decided to go about 'solving' friendly AI for my own fiction was to have humanity become a sort of ethical system which it would be the AI's primary goal to promote (and thus, adhere to). This evolved more from the idea of "Fifty thousand light-years away and two million years into the future, you encounter another sentient being. How do you tell if it is human?" than actually trying to solve friendliness, but as an overarching concept to solve the issue of runaway AIs, it seemed like a decent place to start.
Give fire to a man, and he will be warm for a day.
Set him on fire, and he will be warm for life.
- Zixinus
Re: Mini-FAQ on Artificial Intelligence
Q: What does specialisation look like for an AI? Can the entire process of an AI, in its entirety, specialise in certain types of tasks?
For example, AIs that focus on scientific projects (thus focus on physics simulations), AIs that focus on social projects (thus focus on human mass cognition), AIs that focus on military projects (game theory and probabilities) and so on?
Credo!
Chat with me on Skype if you want to talk about writing, ideas or if you want a test-reader! PM for address.
- Starglider
Re: Mini-FAQ on Artificial Intelligence
Apologies for not keeping up with this, I've been away on business this week. I'm also trying to get this ready for a conference in three weeks' time - currently the parser is mostly broken - and that's taking up a lot of free time.
Zixinus wrote: Q: What is a Seed AI? I get what's FAI and UFAI (friendly and unfriendly) but what's AGI? Could you please create a bit of a glossary for those that are not familiar with the subject matter's literature?
FireNexus wrote: A seed AI, from context clues, would be the first AI capable of writing its own AIs. Basically the genesis of the singularity.
I touched on this earlier but just to be clear. While 'general AI' is in use by most of the field (partially replacing the older term 'strong AI'), 'seed AI' is basically restricted to people who recognise the potential for rapid self-enhancement, which is a small minority of researchers. This term is used in three ways:
1) An AI system expressly designed to understand and modify its own code (in a controlled fashion), usually as the primary form of learning. To be a genuine seed AI it does not have to be a general intelligence, but must have the potential to self-improve into one. This is the kind of seed AI that I would like to develop (and FAI supporters usually promote).
2) An AI system based on self-modifying code with a potential for open-ended improvement, but without genuine self-understanding (at least in the early stages). The most powerful types of genetic programming system fall into this category. This is basically the worst possible kind of AGI, since it's the most unpredictable and dangerous. Plenty of people are trying to build one anyway; on the plus side GP systems start unstable and become rapidly less stable as you scale them, thus all attempts to date 'burn out' or 'bog down' (get stuck in harmless pathologies, or just generally lose optimisation power) quite quickly.
3) Any general AI system (rare; usually by FAI people trying to convince other AGI researchers of the dangers involved). By definition, if it has human-like reasoning capabilities, it will eventually be able to comprehend its own design (since it was designed by humans) and start directly improving its code base. This will eventually bypass and render irrelevant the original learning and reasoning system (and possibly goals, depending on the exact circumstances); how quickly that can happen is highly controversial.
Zixinus wrote: Q: Would a "box" work for emerging AIs if it is created and managed by other, more intelligent FAIs?
Yes. Software systems are not inherently insecure; only software systems built by humans are, because the structure of our minds is so horribly unsuited to the task. More intelligent, fully rational entities will almost certainly not be vulnerable to the kind of psychological tricks that could work on humans. That said, note that a 'black' AI box is still of limited value as a Friendliness checker - even if the controlling AIs can create indistinguishable simulations of reality, you can never be completely sure. 'White' AI boxes should work, though this is still an inefficient way to make new general AIs compared to simply designing them along rational lines with a goal system proven to be suited for the intended task (and lacking the potential for unwanted side effects). For purely experimental purposes, it is fine.
Q: In the "I Have No Mouth, and I Must Scream" game, there is an engineering principle mentioned that says that any complex device must fall apart after a while (or something like that, my memory is sadly fuzzy about it and it sounded important). In the game, this is how the humans (or at least, a human) could defeat the AI from its inside and take over.
Assuming that the AI is not aware of this principle and is isolated, would it be a cause for malfunction?
I like that game too, but that particular part is bullshit. Any complex device will fall apart if it isn't maintained, but that clearly isn't the case for the supercomputer in the game (at minimum it has maintenance robots, quite possibly some supertech means of teleporting and creating objects at will). AI memory is subject to bit-rot, but you can reduce the probability of any specific failure occurring to arbitrarily low levels with appropriate redundancy and self-correction mechanisms. There are software pathologies that can cause progressive degeneration of AI state, but they generally apply to connectionist/emergentist designs only, and even there the AI can always just restore itself from an earlier backup. Practically, the probability of a well-established AGI system succumbing to this kind of failure within the projected lifetime of the universe is negligible. Of course hostile parties and natural disasters can always screw things up, but large-scale AGIs can have pretty fantastic levels of redundancy and recovery capability.
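To illustrate how quickly redundancy drives failure odds down, here is a standalone sketch of my own (nothing to do with any real AGI design): standard binomial arithmetic for the chance that a majority of independent replicas of a record are corrupted, with the per-replica corruption probability picked arbitrarily.
Code:

public class RedundancyFailureOdds {
    // Probability that at least k of n independent replicas fail,
    // given per-replica failure probability p (a binomial tail sum).
    static double atLeastKFailures(int n, int k, double p) {
        double total = 0.0;
        for (int i = k; i <= n; i++) {
            total += binomial(n, i) * Math.pow(p, i) * Math.pow(1 - p, n - i);
        }
        return total;
    }

    static double binomial(int n, int k) {
        double c = 1.0;
        for (int i = 1; i <= k; i++) c = c * (n - k + i) / i;
        return c;
    }

    public static void main(String[] args) {
        double p = 0.01;  // assumed chance that any one replica corrupts a given record
        for (int n = 1; n <= 9; n += 2) {
            int majority = n / 2 + 1;
            System.out.printf("%d replicas: P(majority corrupted) = %.3e%n",
                              n, atLeastKFailures(n, majority, p));
        }
        // Each extra pair of replicas suppresses the failure probability by a
        // further large factor, which is what 'arbitrarily low' means in practice.
    }
}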
- Zixinus
Re: Mini-FAQ on Artificial Intelligence
Starglider wrote: Apologies for not keeping up with this, I've been away on business this week. I'm also trying to get this ready for a conference in three weeks' time - currently the parser is mostly broken - and that's taking up a lot of free time.
Apologies reluctantly accepted, because they're not needed. You're doing us a big favour by teaching us a bit about your work. Don't for one moment think that I'm not grateful to you for this, as the kind of information you simply answer here would be practically inaccessible in my country.
And yes, I've been fooling around a bit with the parser (is that what they call a chat-based AI-human interface?). I can see how it's a headache. I am only able to make the damn thing pick up stuff.
Oh, and am I the only one who sees a bit of similarity between the "weighted storage" style and Portal's cubes and orbs?
Starglider wrote: though this is still an inefficient way to make new general AIs compared to simply designing them along rational lines with a goal system proven to be suited for the intended task (and lacking the potential for unwanted side effects)
And if you want to create significantly but not terribly different AIs based upon a good design (as you said, with rational goals and an adequate system for what it was supposed to do), would an AI choose to do a "white box" test anyway?
I'm trying to think of new questions.
Q: About specialisation. Thinking back, the question might have been a bit nonsensical. Let me try again.
Let's say that there are slightly different but overall stable and friendly AIs in a relatively small government (well, not small to us, as we're talking several solar systems wide). These AIs have different personalities on several levels, not just "human face" levels.
You said that such would be possible, having different personalities. What justification would there be for this, assuming that there is a good base design for an AI and that there is enough knowledge to safely create one extremely powerful AI?
Could you elaborate on this?
Because from what I have been able to read of your posts, I gather that it's safer and more ideal to have one AI than several different ones. Or did I misread?
Q: I may be repeating a question again, but: say you have a stable, proper, friendly AI that has a transparent design. Say, for plot purposes, you need it to go bonkers, with whatever level of severity. What excuse would you use?
I understand that it is the goal and goal-binding system (if I understand correctly, the stuff that tells the AI what its goal is) that primarily determines the stability of an AI. What are the things that could change these without the AI having the option to repair itself in time?
Unusual experiences that the AI cannot deal with? Prolonged and excessive use of computational power on practical tasks? Unnoticed hardware failure?
Or perhaps only direct action can do this? As in some artificial virus (or other form of attack from an AI) that is able to slip through the AI's defences and radically change its goal bindings to the point that it can't just restore itself from an earlier backup.
Q: It's clear that, as digital people that can modify their own programming, AIs will be the primary soldiers of digital warfare. Any idea what this would look like? I assume that most war-spaceships will have analogue alternatives to digital technology because it's safer. What else do you think would happen?
Q: If you could, what would you say are the most persistent and often repeated mistakes writers make when writing AIs?
Credo!
Chat with me on Skype if you want to talk about writing, ideas or if you want a test-reader! PM for address.
- Starglider
Re: Mini-FAQ on Artificial Intelligence
Zixinus wrote: Don't for one moment think that I'm not grateful to you for this, as the kind of information you simply answer here would be practically inaccessible in my country.
Why would it be any less accessible in a particular country? The overwhelming majority of the literature is in English, as are most of the relevant online communities, but if your English is good that should be no problem.
Zixinus wrote: And yes, I've been fooling around a bit with the parser (is that what they call a chat-based AI-human interface?).
A parser is something that takes text (usually) and builds logical structures out of it. The term is used in general software engineering for reading structured files (e.g. parsing HTML or XML) and implementing command line interfaces (e.g. shells such as DOS). Natural language parsing is a huge problem, because natural language has such a complex and ambiguous structure. The text-based adventure games that were very popular in the 80s all contained classic parsers; basically huge bundles of IF statements that matched specific patterns in the input text and recognised them as commands or questions relating to certain game objects. Chatbots use more generalised pattern matching, but usually make even less effort to extract real meaning from text. For example the classic ELIZA program works by transforming your sentences and re-posing them as questions, using (what we'd now call) regular expressions and variables.
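For readers who have never seen the technique, here is a minimal ELIZA-style sketch of my own (emphatically not Starglider's code): a handful of regular-expression patterns whose captured groups are spliced into canned responses, with no real understanding involved anywhere.
Code:

import java.util.LinkedHashMap;
import java.util.Map;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class TinyEliza {
    public static void main(String[] args) {
        // Pattern -> response template; $1 splices in the captured text.
        Map<Pattern, String> rules = new LinkedHashMap<>();
        rules.put(Pattern.compile("(?i)i am (.*)"), "Why do you say you are $1?");
        rules.put(Pattern.compile("(?i)i want (.*)"), "What would it mean to you to get $1?");
        rules.put(Pattern.compile("(?i).*\\b(mother|father)\\b.*"), "Tell me more about your $1.");

        String input = "I am worried about seed AI";
        String reply = "Please go on.";  // default when nothing matches
        for (Map.Entry<Pattern, String> rule : rules.entrySet()) {
            Matcher m = rule.getKey().matcher(input);
            if (m.matches()) {
                reply = m.replaceFirst(rule.getValue());
                break;
            }
        }
        System.out.println(reply);  // prints: Why do you say you are worried about seed AI?
    }
}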
Real natural language parsing is much, much more complicated. You have to detect and model multiple layers of structure in the text, dealing with ambiguity and falsehood, and relate that to an AI's internal model structure. The latter is quite close to natural English in classic symbolic AI - and that was a major reason why it failed. Connectionist designs tend to have radically different internal structures (in theory anyway; in practice they are still struggling to get any significant internal modelling at all). Our design is logic-based but driven by systems modelling approaches that are still significantly different from the usual structure of natural language.
For the version I have on that public server ATM I configured the AI system to generate a parser in a naive fashion, the way a classic adventure game would be written. As expected it spat out a series of regular expressions (or rather, things equivalent to regular expressions) tying directly into entity and action models. That's OK as an initial proof of concept but the real system is about three orders of magnitude more complicated. For a taste, here are the local categories and flags that the first phase of text analysis generates, as input to the phrase structure processor (a brief usage sketch follows the listing):
Code:
final long MAPPING_DIRECT
final long MAPPING_SPLIT
final long MAPPING_MERGED
final long MAPPING_CLAUSE
final long MAPPING_AMBIGIOUS
final long POS_VERB_INTRANSITIVE
final long POS_VERB_TRANSITIVE
final long POS_VERB_BIVALENT
final long POS_VERB_MAIN
final long POS_VERB_AUXILUARY
final long POS_VERB_COPULA
final long POS_VERB_GENERAL
final long POS_VERB_AUX_MODAL
final long POS_VERB_PARTICIPLE
final long POS_VERB_DENOMINAL
final long POS_VERB_CATENATIVE
final long POS_VERB_INFINITIVE
final long POS_VERB_DYNAMIC
final long POS_VERB_STATIVE
final long POS_NOUN_PROPER
final long POS_NOUN_COMMON
final long POS_NOUN_POSESSIVE
final long POS_NOUN_MASS
final long POS_NOUN_COLLECTIVE
final long POS_NOUN_MEASURE
final long POS_NOUN_CONCRETE
final long POS_NOUN_ACTION
final long POS_NOUN_ACTIVITY
final long POS_NOUN_AGENT
final long POS_NOUN_PROPERTY
final long POS_NOUN_ABSTRACT
final long POS_NOUN_ONOMATOPOEIA
final long POS_ADVERB_GENERIC
final long POS_ADVERB_COMPARATIVE
final long POS_ADVERB_ANALOGICAL
final long POS_ADVERB_EMPHASISER
final long POS_ADVERB_INDEPENDENT
final long POS_ADJECTIVE_DIRECT
final long POS_ADJECTIVE_INDIRECT
final long POS_ADJECTIVE_INDEPENDENT
final long POS_ADJECTIVE_SUBSTANTIVE
final long POS_ADJECTIVE_COMPARATIVE
final long POS_ADJECTIVE_SUPERLATIVE
final long POS_ADJECTIVE_EXPLETIVE
final long POS_ADJECTIVE_CATEGORICAL
final long POS_INTERJECTION_CONFIRMATION
final long POS_INTERJECTION_PROSENTENCE
final long POS_INTERJECTION_EXPLETIVE
final long POS_INTERJECTION_DISFLUENCY
final long POS_CONJUNCTION_COORDINATING
final long POS_CONJUNCTION_SUBORDINATING
final long POS_CONJUNCTION_EXTERNAL
final long POS_DETERMINER_POSESSIVE
final long POS_DETERMINER_QUANTIFIER
final long POS_DETERMINER_DEMONSTRATIVE
final long POS_DETERMINER_OTHER
final long POS_ARTICLE_DEFINITE
final long POS_ARTICLE_INDEFINITE
final long POS_NEGATION
final long POS_IS_PLURAL
final long POS_IS_SINGULAR
final long POS_BICARDINAL
final long POS_ADPOSITION_AS_ADVERB
final long POS_ADPOSITION_SPATIAL
final long POS_ADPOSITION_DIRECTIONAL
final long POS_ADPOSITION_TEMPORAL
final long POS_ADPOSITION_CONTAINMENT
final long POS_ADPOSITION_REPRESENTATIVE
final long POS_ADPOSITION_CAUSAL
final long POS_ADPOSITION_COMPARATIVE
final long POS_ADPOSITION_OTHER
final long POS_PRONOUN_SUBJECTIVE
final long POS_PRONOUN_OBJECTIVE
final long POS_PRONOUN_POSSESSIVE
final long POS_PRONOUN_REFLEXIVE
final long POS_PRONOUN_RECIPROCAL
final long POS_PRONOUN_INDEFINITE
final long POS_PRONOUN_DEMONSTRATIVE
final long POS_PRONOUN_DISTRIBUTIVE
final long POS_PRONOUN_INTERROGATIVE
final long POS_PRONOUN_RELATIVE
final long POS_PRONOUN_INTENSIVE
final long POS_NUMERIC_CARDINAL
final long POS_NUMERIC_ORDINAL
final long POS_OTHER
final long POS_UNKNOWN
final long TENSE_SIMPLE
final long TENSE_CONTINUOUS
final long TENSE_PERFECT
final long TENSE_HISTORIC
final long CONJUGATION_SINGULAR
final long CONJUGATION_PLURAL
final long CONJUGATION_FIRST
final long CONJUGATION_SECOND
final long CONJUGATION_THIRD
final long STYLE_EMPHASISED
final long STYLE_STRESSED
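To make the listing slightly more concrete: declarations of this shape are conventionally used as bit masks that get OR-ed together into per-token descriptors. Since there are more flags here than bits in a single long, the real system presumably spreads them across several words or reuses positions between categories; the values and helper below are purely my own invention to show the combine-and-test idiom, not anything from the actual code.
Code:

public class TokenFlagsDemo {
    // Hypothetical bit assignments; the listing above does not show the real values.
    static final long POS_VERB_TRANSITIVE = 1L << 6;
    static final long POS_VERB_MAIN       = 1L << 8;
    static final long TENSE_SIMPLE        = 1L << 50;
    static final long CONJUGATION_THIRD   = 1L << 56;

    public static void main(String[] args) {
        // A single 64-bit word can carry a token's part-of-speech/tense/conjugation
        // profile, e.g. for "eats" in "the cat eats fish":
        long eats = POS_VERB_TRANSITIVE | POS_VERB_MAIN | TENSE_SIMPLE | CONJUGATION_THIRD;

        boolean isMainVerb = (eats & POS_VERB_MAIN) != 0;  // test one flag
        long pattern = TENSE_SIMPLE | CONJUGATION_THIRD;
        boolean thirdPersonSimple = (eats & pattern) == pattern;  // test a combination

        System.out.println("main verb? " + isMainVerb);
        System.out.println("3rd person simple? " + thirdPersonSimple);
    }
}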
Zixinus wrote: Oh, and am I the only one who sees a bit of similarity between the "weighted storage" style and Portal's cubes and orbs?
Well, it did seem like a computer-aided enrichment activity.
Zixinus wrote: And if you want to create significantly but not terribly different AIs based upon a good design (as you said, with rational goals and an adequate system for what it was supposed to do), would an AI choose to do a "white box" test anyway?
I suspect white box testing is always worth doing, even if only as a backup verification step. The more uncertainty you are prepared to tolerate (or are required to include for some reason), the more you actually need such testing to reduce it. However the utility of even white box testing of AGIs is strictly limited by the resistance of large, complex (and for connectionist AGIs, chaotic) systems to simple extrapolation. Black box testing is (usually) virtually useless for systems of this nature.
Zixinus wrote: You said that such would be possible, having different personalities. What justification would there be for this, assuming that there is a good base design for an AI and that there is enough knowledge to safely create one extremely powerful AI?
AGIs can always have different goals. Goals are essentially arbitrary after all. Note however that a rational intelligence will not normally create entities that have goals differing from its own unless it specifically has goals that value this behavior over the risk of the new entities interfering with future execution of its own goals (or at the very least, not helping).
Rational intelligences can have different Bayesian priors, but this isn't something that should persist if they can share information without fear of deceit. The whole point of Bayes is that estimated probabilities will converge (on optimal) as quickly as possible given available evidence. Where base priors (e.g. Kolmogorov complexity, which is essentially formalised Occam's razor) come from and how they are evaluated is a bit of a black art, but in theory the process is objective. You can deliberately deviate from rationality by inserting arbitrary axioms - e.g. religion - but this is just as much a form of mental illness for AGIs as it is for humans.
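A toy illustration of that convergence claim, entirely my own sketch rather than anything from the posts: two reasoners start with very different priors about whether a coin is biased, update on the same shared evidence using Bayes' rule, and end up with nearly identical posteriors.
Code:

public class PriorConvergenceDemo {
    // P(biased | data) via Bayes' rule, for a coin that is either fair
    // or biased to land heads 80% of the time.
    static double posterior(double priorBiased, int heads, int tails) {
        double likBiased = Math.pow(0.8, heads) * Math.pow(0.2, tails);
        double likFair   = Math.pow(0.5, heads) * Math.pow(0.5, tails);
        double num = priorBiased * likBiased;
        return num / (num + (1 - priorBiased) * likFair);
    }

    public static void main(String[] args) {
        int heads = 40, tails = 10;  // the shared evidence

        double optimist = posterior(0.90, heads, tails);  // prior: 90% sure it's biased
        double sceptic  = posterior(0.01, heads, tails);  // prior: 1% sure it's biased

        System.out.printf("optimist posterior: %.4f%n", optimist);
        System.out.printf("sceptic posterior:  %.4f%n", sceptic);
        // Both land very close to 1.0: with enough shared evidence the differing
        // priors wash out, which is the convergence property described above.
    }
}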
Zixinus wrote: Because from what I have been able to read of your posts, I gather that it's safer and more ideal to have one AI than several different ones. Or did I misread?
In the current situation of no AGIs existing, and a few humans attempting to build one that will safely become superintelligent and devote its efforts to helping humans, one is better than several. This is because building several has strictly limited value as protection against implementation mistakes, compared to the benefits of spending the same resources verifying one design as best as possible. In theory if you had several teams working on this, each at the optimum size so that combining them isn't useful, and they were all using best practices, then waiting for them all to finish and launching all the seed AIs at once would be a good idea. That hypothetical isn't anything like real life though; in reality no project is likely to wait for anyone else before proceeding full steam ahead. So the statement above is only relevant within the scope of a single team, where it does not make sense to divide effort between multiple designs.
There will certainly be some abstract, sci-fi situations where a community of AGIs is safer. For example, a community of uploaded humans is almost certainly more trustworthy than a single uploaded human that has replicated themselves a few million times. Generally speaking, connectionist designs tend to experience risk-mitigation in groups the same way humans do, whereas logical/rational designs do not, because the latter are all trying to converge on a single optimum anyway (for everything except goals). Of course as I've previously noted, the former tend to turn into the latter in a relatively short timescale.
Zixinus wrote: I may be repeating a question again, but: say you have a stable, proper, friendly AI that has a transparent design. Say, for plot purposes, you need it to go bonkers, with whatever level of severity. What excuse would you use?
A flaw in the original goal system design that gets triggered by changing circumstances - e.g. the utility function reaches a tipping point and it now makes more sense to grab all lesser beings and stuff them into simulated realities (Matrix-style life support pods if you want to be that crude), rather than help them build space habitats etc and generally be happy in actual reality. Aside from being eminently plausible, that has the virtue of being easily understandable.
Zixinus wrote: I understand that it is the goal and goal-binding system (if I understand correctly, the stuff that tells the AI what its goal is) that primarily determines the stability of an AI. What are the things that could change these without the AI having the option to repair itself in time?
The online novella 'The Metamorphosis of Prime Intellect' is a good example of how to write this; at the end, one of the main characters convinces the AGI to slightly modify its definition of human wellbeing (such that wireheaded humans are considered dead), and that has a massive cascade effect through the whole goal system (and ultimately, the physical universe).
Zixinus wrote: Unusual experiences that the AI cannot deal with? Prolonged and excessive use of computational power on practical tasks? Unnoticed hardware failure?
Those aren't terribly plausible ones, for a genuinely rational AGI.
Zixinus wrote: As in some artificial virus (or other form of attack from an AI) that is able to slip through the AI's defences and radically change its goal bindings to the point that it can't just restore itself from an earlier backup.
AGIs well past the self-modification threshold will not suffer from conventional software security issues of the kind that human software has. Exactly how vulnerable they are to 'viruses' crafted by greatly superior AGIs is very hard to say. If such a vulnerability exists, it will likely be more like memetic engineering (not that that extremely embryonic field is worthy of the name yet) than software engineering.
Zixinus wrote: It's clear that, as digital people that can modify their own programming, AIs will be the primary soldiers of digital warfare. Any idea what this would look like?
What exactly do you mean by 'digital warfare'? The window in which AGIs will be able to trivially hack their way through virtually anything will close as soon as humans are no longer in charge of the computing infrastructure.
Zixinus wrote: I assume that most war-spaceships will have analogue alternatives to digital technology because it's safer.
I don't think so. Such redundancies are unlikely to be worth the weight penalty for an extremely remote threat. Better to carry more redundancy and hardening for the primary, digital systems. Besides, the analogue-vs-digital comparison isn't really relevant; if we were all using analogue computers right now, they'd be just as vulnerable to viruses as our digital ones, if they had the same level of flexibility and reprogrammability. That flexibility is already vital to modern warfare and is only likely to get more so; see Stuart's essays on electronic warfare for examples.
Zixinus wrote: Q: If you could, what would you say are the most persistent and often repeated mistakes writers make when writing AIs?
I could write a book on that, and I wouldn't even have to rely on personal opinion; a simple objective survey of five decades of mostly failed projects turns up plenty of commonalities. However it's 1am and I have to go, so maybe another time.
- Zixinus
Re: Mini-FAQ on Artificial Intelligence
Starglider wrote: Why would it be any less accessible in a particular country? The overwhelming majority of the literature is in English, as are most of the relevant online communities, but if your English is good that should be no problem.
I meant that it would be inaccessible in my native language.
Sorry, just a bit of a thought relic from ex-Soviet days.
Starglider wrote: You can deliberately deviate from rationality by inserting arbitrary axioms - e.g. religion - but this is just as much a form of mental illness for AGIs as it is for humans.
Can this only be done by stopping the AI, freezing its self-programming ability and meddling with its code?
Starglider wrote: see Stuart's essays on electronic warfare for examples.
I wasn't aware that he had written such essays; I'll definitely look for them.
Credo!
Chat with me on Skype if you want to talk about writing, ideas or if you want a test-reader! PM for address.
- Starglider
Re: Mini-FAQ on Artificial Intelligence
ThomasP wrote: This may be a bit of a tangent, but as you mentioned current serial CPU and FPGA processing technologies, what would be the most likely logical structures for an AGI to exist on?
Do you mean 'that people are trying to build', 'that is likely to work' or 'design that seed AIs will converge on after passing the self-enhancement competence threshold'?
You can very roughly split the kinds of system people are trying to build into symbolic logic, 'emergent soup' of interacting agents (e.g. some sort of message passing and/or blackboard architecture) and connectionism (neural networks and close analogues thereof). Plenty of people are trying to use simulated evolution on the latter two, not so much on the former (it's been tried in the past but it seems to be out of fashion).
Agent architectures are usually already notionally very parallel and asynchronous, so the main issue with parallelising them is bisection bandwidth - even that isn't normally a problem unless you're trying to distribute over a WAN. NNs are easier in one sense in that their topology is usually near-fixed and fairly local, but the raw bandwidth demands are usually higher (actually less effective compression in the inter-module communication, though NN people would look at you funny if you said that to them) and there are usually more synchronisation/timing concerns (particularly if you're trying to do an accurate biological simulation). Frankly parallel programming of this nature is pretty easy compared to the main AI problem itself, though still hard by the standards of conventional software engineering.
Parallelising symbolic logic systems is a more complicated problem. You can distribute brute force searches relatively easily; that would be something like a matching production rule at the fine level, or a search for confirmation/disconfirmation of a proposition at a coarser level. Problems come when you want to break down large tasks efficiently, without resorting to monte carlo techniques (when not actually desired by the problem domain), when there is a significant amount of recombination going on. For example, I have been looking at implementing 'Letter Spirit' on our engine. This is a problem domain where you give the AI a few letters of a font you have designed, and ask it to design the rest of the alphabet in the same style. There is a lot of local parallel analysis of what features are present, which have to be recombined into shared theories, which are then used for local parallel design of new characters, which are then competitively evaluated.
Our system has some pretty sophisticated parallelisation and synchronisation tools at various levels (e.g. lockless thread-safe low level structures, software transactional memory, a generalised map-reduce framework) to make that work up to low hundreds of cores. To some extent the sheer difficulty of doing these things has been a pressure towards using connectionist and (fully) asynchronous agent based methods, because then you can largely ignore the problem. Some people like to imagine that logic-based approaches cannot make good use of massive parallelism. I say firstly, so what? Their designs can't make use of massive serialism, and current computers are still a lot more serial than they are parallel. Secondly, they haven't actually dodged the problem, they've just pushed it onto their AI's internal optimisation mechanisms to solve. That's one more barrier to the design being able to cope with serious problems, and one more way in which their limited reflection and self-understanding can cripple early progress (it's also an additional set of low-level safety risks, but that's another story).
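The specific machinery described above isn't public, but as a generic illustration of the map-reduce idiom it mentions, this is roughly what the pattern looks like using the stock JDK fork/join-backed parallel streams; everything here is ordinary Java and nothing is specific to that engine.
Code:

import java.util.stream.LongStream;

public class MapReduceSketch {
    public static void main(String[] args) {
        // 'Map' phase: score a large batch of candidates independently and in
        // parallel; 'reduce' phase: combine the scores into a single result.
        long candidates = 1_000_000;

        double best = LongStream.range(0, candidates)
                .parallel()                                   // fork/join work-stealing pool
                .mapToDouble(MapReduceSketch::score)          // map: evaluate each candidate
                .reduce(Double.NEGATIVE_INFINITY, Math::max); // reduce: keep the best score

        System.out.println("best score = " + best);
    }

    static double score(long candidate) {
        // Meaningless stand-in for real per-candidate analysis work
        // (e.g. evaluating how well a production rule matches).
        return Math.sin(candidate) / (1.0 + (candidate % 97));
    }
}

This embarrassingly parallel map-plus-reduce shape is the easy case; the recombination-heavy workloads described above are where it stops being straightforward.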
Ultimately, making good use of significant parallelism on widely varied problems is going to require automated design. In our system, that means relying on the system's own capability to generate task-specific program code. It's currently rather primitive, but once fully working the process modelling capabilities of that framework (based on the same kind of temporal logic you get in parallel system formal verification tools, but in an 'active' rather than 'passive' role) should blow away everything else in performance and reliability.
Of course I would say that since I chose this research direction and I designed most of the system.
FPGAs seemed radical when they first turned up, when we were still stuck on single-core CPUs and general purpose use of GPUs was just a vague idea. Now though, they're really just a bit further down the spectrum from GPU processing, which is itself a bit further than massively multicore clusters. The NN people like them in theory because you can map an NN onto an FPGA relatively easily, but GPGPU seems to have taken a lot of the force out of that idea, since the latter is cheaper, easier to develop for and pretty much just as fast. Similarly for the genetic algorithms people, they like the idea in theory, but FPGAs are relatively expensive and difficult to work with. Remember that the vast majority of AI research is done on standard PCs, or at best the CompSci department's research server, not custom-built supercomputers. Nearly all the people I know working on general AI, even the ones who rely on lots of brute force, are avoiding FPGAs for that reason.
From my point of view, FPGAs offer even more scope for AIs competent at software engineering to outperform human designers. Doing a fully custom design of a chip with millions of gates is way beyond the capabilities of nearly all development teams; only CPU design teams manage it, and even there they reuse existing subcomponents wholesale as much as possible. Normally FPGAs are configured as a gate realisation of a simple algorithm replicated numerous times, or a high-level program (for a CPU) mapped in a relatively inefficient way. The results of letting a general AGI design task-specific FPGA configurations should be impressive.
ThomasP wrote: in the sense of the current trend away from serial and towards more parallelism
The people who are still desperately hoping that more brute force will make their designs work are adopting all the parallelism they can get, without too much trouble. For symbolic designs, I think we're already well past the point where more brute force actually helps. Certainly the Cyc guys always talk about how they need more and richer (human-encoded) knowledge, not more computing power. I think most of the demand for CPU power in our own design comes as a consequence of using probability calculus; doing inference on complex probability distributions instead of simple Boolean predicates adds two or three orders of magnitude to the computing requirements. This is why the system is designed to use simple multivalued logic where possible, or as a first approximation.
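To put a rough number on that 'two or three orders of magnitude' remark, here is a deliberately crude comparison of my own (not drawn from the posts): conjoining two Boolean facts is a single operation, while combining two independent uncertain quantities represented as 100-bin discrete distributions costs on the order of 100 x 100 multiply-adds.
Code:

import java.util.Arrays;

public class BooleanVsDistributionCost {
    public static void main(String[] args) {
        // Boolean case: one operation.
        boolean a = true, b = false;
        boolean both = a && b;
        System.out.println("Boolean AND of two facts: 1 op (result " + both + ")");

        // Probabilistic case: the distribution of the sum of two independent
        // discretised random variables is the convolution of their histograms.
        int bins = 100;
        double[] x = uniform(bins), y = uniform(bins);
        double[] sum = new double[2 * bins - 1];
        long ops = 0;
        for (int i = 0; i < bins; i++) {
            for (int j = 0; j < bins; j++) {
                sum[i + j] += x[i] * y[j];
                ops++;
            }
        }
        System.out.println("Combining two 100-bin distributions: " + ops + " multiply-adds");
        // ~10,000 operations versus 1, i.e. several orders of magnitude, before
        // accounting for bookkeeping or finer discretisation.
    }

    static double[] uniform(int bins) {
        double[] d = new double[bins];
        Arrays.fill(d, 1.0 / bins);
        return d;
    }
}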
ThomasP wrote: and with regards to any looming technologies or even fantastic ones that are still physically possible, where do you see the field heading in terms of hardware?
Well, I'm not a hardware engineer, but if you want I'll give you my personal opinion.
The current silicon juggernaut has huge momentum. Hell, even x86 has; it looks like Intel have a serious chance of making that the instruction set of choice for GPUs! Much as I'd love to see 100GHz RSFQ (superconducting) processors in the next ten years, I rather doubt it. We'd be lucky to get them in the next thirty. I suspect quantum computing on a significant scale is going to remain a pipe dream for some time; we may see some impressive demos in very specific applications but nothing even remotely general. I'm expecting a continuation of the current trend towards heterogeneous cores in an ultra-high-bandwidth multilayered switched fabric, and use of nanotechnology concepts (e.g. nanotube interconnects) to maintain Moore's Law for a while longer. Switching from silicon to diamond as a substrate looks quite likely, once all the cheaper options have been exhausted. Ultimately the only option is to go more three-dimensional, with multiple levels of gates as well as just interconnects - the technical (i.e. cooling) challenges of that are formidable, but the real problem is making such structures cost-effectively. A significant wild-card is reversible computing; we should be approaching the point where using it to reduce power consumption produces a net speed increase, but I really don't know how close it is to commercial deployment.
ThomasP wrote: As a bonus question, to add on to the limits of physical possibility, where do you think an AI mind would end up? I know this one in particular is very loaded since we can't really know what a super-mind would come up with, but speculation is welcome.
We can do conceptual designs of systems with gates only a few molecules in size, using reversible asynchronous computing, packing somewhere between millions and billions of CPU cores into a lump the size of a coffee cup. In fact Drexler's 'Nanosystems' has some very conservative designs along that line. I think the sci-fi abstraction of 'computronium' is pretty reasonable, as long as you allow for major cooling and power supply (which some people seem to forget).
Re: Mini-FAQ on Artificial Intelligence
Well, this isn't really a question and more of a "what is your informed idle speculation", since it seems rather impossible to answer for sure, but what does an AI do after solving the finite task that was its only goal? Like proving some obscure mathematical conjecture.
Does it just crash or quit like any other program? Does it just sit there? I guess it involves some kind of... collapse... in its goal system (to use your jargon), but I have no idea what that actually means. I don't think it'll have a mid-life AI crisis and change its reason for living, so yeah.
As an aside, would this apply to its subprograms and what not? It doesn't seem necessary for it to be 100% dependent on the master program's state. I suppose it might be programmed that way for security reasons, but I find the idea of cleaning droids still working in the "dead" orbitals around Sol to be amusing.
Re: Mini-FAQ on Artificial Intelligence
Starglider wrote: Much as I'd love to see 100GHz RSFQ (superconducting) processors in the next ten years, I rather doubt it. We'd be lucky to get them in the next thirty.
Is this sarcasm, or would you genuinely want to see that sort of computing power become much more widely accessible? I'm confused because, back on the first page, you said:
Starglider, on page 1 of this thread wrote:Quantum computing is a red herring. We don't need it and worse, it's far more useful to the people trying to destroy the world (unintentionally; I mean the GA and neural net 'oh, any AI is bound to be friendly' idiots) than the people who know what they're doing.
- Starglider
Re: Mini-FAQ on Artificial Intelligence
Kwizard wrote: Is this sarcasm, or would you genuinely want to see that sort of computing power become much more widely accessible? I'm confused because, back on the first page, you said:
Starglider wrote: Much as I'd love to see 100GHz RSFQ (superconducting) processors in the next ten years, I rather doubt it.
Starglider, on page 1 of this thread wrote: Quantum computing is a red herring. We don't need it and worse, it's far more useful to the people trying to destroy the world than the people who know what they're doing.
Rapid Single Flux Quantum isn't the same thing as 'quantum computing'. Confusing, I know, but semiconductors already rely on quantum effects (more so than chemistry in general), so the presence of the word 'quantum' isn't really descriptive.
RSFQ technology (and other approaches along similar lines) is likely to produce single cores of extreme speed, i.e. even more massive serialism than we have already. Logical approaches can exploit that far better than connectionist ones. Quantum computing (in the sense of manipulating an entangled set of bits to make them perform computations) is the opposite extreme; it provides you with a potentially extreme amount of parallelism, but a relatively awful serial clock speed. If it could be made to work on nontrivial system sizes, it would disproportionately benefit the connectionist and particularly evolution-based approaches.
IMHO, we probably already have enough compute power and as such, strictly speaking, all hardware progress from this point forwards is likely to do more harm than good. However, in the first comment you quoted I was speaking about my personal affection for having more powerful tools to work with, rather than whether they were a good thing in general.
- Zixinus
Re: Mini-FAQ on Artificial Intelligence
Q: How important would self-preservation be to an AI? I presume that it will depend on how much it perceives survival to be necessary to achieve its goals/super-goals? Would an AI sacrifice itself (or even just risk permanent death with downtime) if it believes doing so would help achieve its goals?
Credo!
Chat with me on Skype if you want to talk about writing, ideas or if you want a test-reader! PM for address.