LionElJohnson's Singularity-God tangent
Moderator: Alyrium Denryle
- Eternal_Freedom
- Castellan
- Posts: 10405
- Joined: 2010-03-09 02:16pm
- Location: CIC, Battlestar Temeraire
Re: How would you raise population growth?
Oh, undoubtedly it'll be interesting to live through the next 20 years. I have some very fun things planned for that time.
But I have this sneaking suspicion that we won't quite be as gullible as we are in all the AI-take-over-the-world sci-fi films, if for no other reason than we've seen these films and (hopefully) learnt not to give them so much power.
Baltar: "I don't want to miss a moment of the last Battlestar's destruction!"
Centurion: "Sir, I really think you should look at the other Battlestar."
Baltar: "What are you babbling about other...it's impossible!"
Centurion: "No. It is a Battlestar."
Corrax Entry 7:17: So you walk eternally through the shadow realms, standing against evil where all others falter. May your thirst for retribution never quench, may the blood on your sword never dry, and may we never need you again.
Centurion: "Sir, I really think you should look at the other Battlestar."
Baltar: "What are you babbling about other...it's impossible!"
Centurion: "No. It is a Battlestar."
Corrax Entry 7:17: So you walk eternally through the shadow realms, standing against evil where all others falter. May your thirst for retribution never quench, may the blood on your sword never dry, and may we never need you again.
- Singular Intellect
- Jedi Council Member
- Posts: 2392
- Joined: 2006-09-19 03:12pm
- Location: Calgary, Alberta, Canada
Re: How would you raise population growth?
Actually, we'll be interfaced with machines to such a point that there won't be a clear distinction, so the idea of a 'machine revolt' is rendered relatively moot. Typical projections of the future tend to focus on singular/limited advancements of technology and society, thus introducing problems that wouldn't actually exist in a more realistic scenario.Eternal_Freedom wrote:Oh, undoubtedly it'll be interesting to live through the next 20 years. I have some very fun things planned for that time.
But I have this sneaking suspicion that we won't quite be as gullible as we are in all the AI-take-over-the-world sci-fi films, if for no other reason than we've seen these films and (hopefully) learnt not to give them so much power.
"Now let us be clear, my friends. The fruits of our science that you receive and the many millions of benefits that justify them, are a gift. Be grateful. Or be silent." -Modified Quote
- cosmicalstorm
- Jedi Council Member
- Posts: 1642
- Joined: 2008-02-14 09:35am
Re: LionElJohnson's Singularity-God tangent
After reading papers like these below*, I find it very strange to ignore or completely ridicule the idea that AIs might show up within this century and have some utterly strange effects on the world. But I'm not counting on it either. It's a very interesting future possibility; we might just as well see our entire civilization go down in flames in some kind of massive eco/peak-oil/nuclear-war disaster.
In fact, my experience is that the real world is usually really disappointing, so if I had to make a simplified list of the most likely futures for our species, ranked by how likely I think each is, it would look like this.
1. Disaster, war and famine, back to more primitive ways; possibly we end up as interesting fossils.
2. Invent AI and it wipes us all out
3. Invent AI and it's actually kind of nice
4. Continued slow development similar to today's but without any kind of AI for whatever reason.
*
http://manybooks.net/titles/vingevother ... arity.html
http://yudkowsky.net/singularity/ai-risk
http://yudkowsky.net/singularity/schools
http://yudkowsky.net/singularity/power
Re: How would you raise population growth?
Yes, but the rogue AI scenario is generally based upon the idea that some idiots release an AI with unlimited goals onto the internet in the near future, well before we have modified people.Singular Intellect wrote:Actually, we'll be interfaced with machines to such a point that there won't be a clear distinction, so the idea of a 'machine revolt' is rendered relatively moot. Typical projections of the future tend to focus on singular/limited advancements of technology and society, thus introducing problems that wouldn't actually exist in a more realistic scenario.
- Guardsman Bass
- Cowardly Codfish
- Posts: 9281
- Joined: 2002-07-07 12:01am
- Location: Beneath the Deepest Sea
Re: How would you raise population growth?
You lost me at the bolded. How again does a digital simulation of a human brain (which has no ability to change its cognitive structure consciously) suddenly acquire this ability in digital form? That's assuming, of course, that you can run a decent facsimile of a human mind (which seems intrinsically linked to the actual physical body process) in a digital simulation.Singular Intellect wrote:The year 2029 is a consistent prediction for the timeframe of complete simulation of the human brain in digital form, at which point the Singularity concept would be kick-started. The reasons being that a digital brain can effortlessly alter, tweak and experiment with its own brain patterns and makeup in the process of enhancing its own capabilities without fear of permanent damage or 'death'. And that's on top of the obvious and massive advantages of being directly interfaced to the power of computer systems of the day (and today) that vastly outstrip human brain capabilities already.Eternal_Freedom wrote:It's little different from getting on your knees and praying for a miracle - it just isn't going to happen
“It is possible to commit no mistakes and still lose. That is not a weakness. That is life.”
-Jean-Luc Picard
"Men are afraid that women will laugh at them. Women are afraid that men will kill them."
-Margaret Atwood
-
- Emperor's Hand
- Posts: 30165
- Joined: 2009-05-23 07:29pm
Re: How would you raise population growth?
Yudkowsky is not an all-encompassing genius; he has his limits, but... yes, he's good enough to spit in your eye, so I respect him.LionElJonson wrote:I think you're doing a great disservice to Eliezer Yudkowsky to describe him as merely a fanfiction author when he is one of the authorities on Friendly AIs, and holds a lead role in organizations like the Singularity Institute.Bakustra wrote:The natterings of solipsistic acolytes of a fanfiction-writer do not concern me overmuch, nor do his thought experiments that rely on ignoring critical inabilities of the AI, particularly when they are irrelevant to the question of whether an entity with alien thought processes can realistically simulate a human being.
Name-dropping isn't going to get you anywhere, especially not when people are asking you questions the name you dropped hasn't really answered. We don't know enough about AI to know whether or not we can just arbitrarily simulate a brain, and we *do* know there are processing-power limits on how much a given computer can simulate, plus impossible-to-guess limits imposed by thought processes and the machine's ability to deduce context from available data.
Being "really smart, way smarter than regular people" is not a get out of logistics free card. A genius trapped in a pit with no tools will have just as few options as an idiot trapped in the same pit.
Are you sure that's true? You can talk all you want about self-optimizing programs, but never assume just because you see a process that the process can or will continue indefinitely. Trees grow, but they don't grow higher than the mountains.Really? Don't be; I'm totally serious. A fully functioning AI is likely going to be to us as we are to ants, and I am not exaggerating that in any way; if anything, the difference will be even larger. Now think of the implications of that; if our well being is not its highest priority, it will wipe us out without a second thought or a hint of regret.
I'm not sure Yudkowsky has given adequate thought to the question of self-limiting effects in a runaway AI; I'm morally certain you haven't.
Yudkowsky and most of his immediate circle are smarter than this. His stuff has its vices, but it also has its virtues... most of which you'll never find out about talking to Jonson, who seems to have ignored the "rationalism" side of it all in favor of his own creative misinterpretations.Bakustra wrote:Holy shit, dude, go to the foot of the class! Solipsism is pointless, since it gives no framework for anything. Whether we are in a dream or not is irrelevant. We cannot know either way, so we should treat what we observe as real. Not to mention that your 'problem' is faulty and does not speak well for Yudkowsky's acolytes. Here's my response. You have one (1) universe that you can observe. Which is most likely to be real?
The weakest link in the chain between Yudkowsky's arguments and what we see here is Jonson's reading comprehension.
Plus, a lot of the tools the Supermind might use to solve our problems are ones we'd need anyway; they won't be available if they aren't invented. Inventing them is a matter of great concern to normal people - just not to Jonson here.Junghalli wrote:Exactly, it relies on far too many unpredictable technological breakthroughs to be something we can count on to solve pressing problems.D.Turtle wrote:Is it possible FAI could pop up and solve all problems? Maybe.
Is it guaranteed? No.
It would be akin to ignoring any and all potential energy production/consumption problems because fusion will bring infinite energy.
It is theoretically possible, but there is no guarantee that it will happen anytime soon.
Minimal point: this is a brute force simulation: simulating the activity of every atom in the brain, or at least every protein molecule. That's how you do it.Guardsman Bass wrote:You lost me at the bolded. How again does a digital simulation of a human brain (which has no ability to change its cognitive structure consciously) suddenly acquire this ability in digital form? That's assuming, of course, that you can run a decent facsimile of a human mind (which seems intrinsically linked to the actual physical body process) in a digital simulation.
The real problems are going to be:
-I/O devices: Congratulations, you've just built a brain simulator; now how do you tell what it's thinking, and how does it communicate with you? So far, all BrainSim.exe does is give you a simulated brain in a jar. There are a lot of things a human being can do that a brain in a jar can't.
-Self-diagnosis: There's no reason to assume it has any more awareness of its own internal processes than the brain it simulates does. How is it going to spot inefficiencies in its own processes and fix them, when a normal brain can't?
-Initialization: How do we program the SimBrain to start in a state that gives us a sane intelligence capable of rationality in general, let alone rational self-analysis? What if we wind up screwing up and programming a useless batshit crazy brain?
This space dedicated to Vasily Arkhipov
- Starglider
- Miles Dyson
- Posts: 8709
- Joined: 2007-04-05 09:44pm
- Location: Isle of Dogs
- Contact:
Re: How would you raise population growth?
I will almost certainly regret this, particularly as Lion is doing a spectacularly bad job of presenting the relevant arguments. Lion: if you must discuss this stuff in an SL1-2 forum, start new threads for it.
Firstly, the 'nanotech bootstrap' argument is pretty distinct from the strong AI and seed AI arguments. Nanotech enthusiasts believe that general assemblers are physically plausible (usually nanoscale motile ones, although microscale non-motile ones are actually quite adequate for rapid infrastructure and rather more plausible). They tend to believe that once someone manages to build one of these, self-replication will be quite straightforward, energy will be solved by (at minimum) growing highly efficient solar cells as cheaply as we grow plants, and that rapid and radical transformation of the physical environment will be straightforward. Obviously if this were true it would be a very powerful and dangerous technology. You don't actually need human-level AI for this, much less transhuman, just some conventional narrow AI to control all the microrobotics. The whole concept gestated and was popularised by the 'Extropian' community, which peaked in the early 90s and predated the Singularity community, and some people still hold to it.
Personally I do not have the physical engineering knowledge to make a reliable assessment of this hypothesis, and of course neither do 99% of the people on this board (particularly Lion). Ignoring the wild-eyed idiot hangers-on, the thermodynamics are reasonable and the basic ideas plausible in isolation. I am primarily skeptical of the extremely ambitious development and scaling times, which is of course what transhuman AI could help a lot with, although it clearly cannot eliminate the basic requirement for tools and experiments to build the prototypes. My expertise is restricted to the plausibility and likely cognitive capabilities of seed AI itself.
This is relatively trivial. Essentially all human software is atrociously written and could be cracked by a team of human experts, never mind an AI with the capability to flawlessly model complex logical systems, flawlessly recall every single previously discovered vulnerability (specific and conceptual) and perform formal analysis at a rate several billion times that of a human. Programming is very hard for humans, compared to say recognising faces or playing sports; the reverse is true for AI systems (with the slight exclusion of slavishly biomorphic ones that have not yet properly interfaced themselves to a symbolic reasoning library).Bakustra wrote:Your Nerd Jesus would have to be able to crack encryption on a staggering scale and produce substantially more effective phishing scams than have been done before.
Probably quite high. Current chatbots use pathetically simplistic (virtually non-existent) reasoning, but fool quite a lot of people, due to the 'Eliza effect'. Simulating gross human behaviour does not require human intelligence; it just requires a sufficiently good surface behaviour model, or even just a lot of data mining of samples of human interaction (this is how statistical machine translation and document classification work).Now, if it really is a unique and different intelligence altogether, like what other Singularitarians say, then what are the chances that it can simulate a human being well enough to do the second?
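To make the 'Eliza effect' point concrete, here is a minimal keyword-and-reflection sketch in Python of the sort of trick ELIZA-style chatbots rely on; the patterns and canned responses are invented for illustration and are far cruder than any deployed bot, but the mechanism is essentially the same.

```python
import random
import re

# Crude ELIZA-style reflection: swap pronouns so the user's own words can be echoed back.
REFLECTIONS = {"i": "you", "me": "you", "my": "your", "am": "are", "you": "I", "your": "my"}

# (pattern, response templates); {0} is filled with the reflected capture group.
RULES = [
    (re.compile(r"i need (.*)", re.I), ["Why do you need {0}?", "Would it really help to get {0}?"]),
    (re.compile(r"i am (.*)", re.I), ["How long have you been {0}?", "Why do you think you are {0}?"]),
    (re.compile(r"(.*)", re.I), ["Please tell me more.", "I see. Go on."]),
]

def reflect(text: str) -> str:
    return " ".join(REFLECTIONS.get(word.lower(), word) for word in text.split())

def respond(user_input: str) -> str:
    for pattern, templates in RULES:
        match = pattern.match(user_input.strip())
        if match:
            return random.choice(templates).format(reflect(match.group(1)))
    return "Please go on."

print(respond("I need a faster computer"))    # e.g. "Why do you need a faster computer?"
print(respond("I am worried about my code"))  # e.g. "How long have you been worried about your code?"
```

Nothing in this models meaning; it just reflects the user's own words back, which is why the fact that such programs fool people says more about the 'Eliza effect' than about machine intelligence.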
Essentially all modern PCs are Internet-connected. Certainly research machines are.That's ignoring that it needs an internet connection set up to let it do its thing.
True, but I do find Bakustra's characterisation amusing, as both Yudkowsky and the SIAI have a cult following disproportionate to their actual achievements (in terms of technical publications), and it is true that Yudkowsky and most of the people in the LessWrong community seem to have little to no experience with actual AI systems.I think you're doing a great disservice to Eliezer Yudkowsky to describe him as merely a fanfiction author when he is one of the authorities on Friendly AIs, and holds a lead role in organizations like the Singularity Institute.The natterings of solipsistic acolytes of a fanfiction-writer do not concern me overmuch
For software systems we can look at information-theoretic limits and do ballpark characterisation (accurate to an order of magnitude or two) of the inductive and deductive capabilities of 'efficient' AIs (on a given hardware platform) vs brains. The whole point of seed AI is to converge to 'efficient' in a reasonable timeframe. The mechanism, speed and reliability of that convergence (in terms of avoiding local optima) are all up to date. Personally, I don't see how anyone could be confident that this would work without a deep understanding of computer science and at least some familiarity with real recursive enhancement systems. In fact a passing familiarity with say genetic programming might actually make you less confident that this is possible, considering how slow and local-optima prone it is. I personally am very confident based on my own research that rapid convergence of this type is fully plausible on contemporary hardware, but I don't evangelise that because I can't prove it (to anyone other than an AI researcher familiar with the specific methodology) and there's no particular reason why other people should believe me. When I do talk about transhuman AI these days it is usually in terms of what it will be like given the assumption that someone has already built one, or about how it will eventually be possible even given very simplistic approaches (i.e. current GP and NN algorithms) if hardware scaling continues for another 6+ orders of magnitude.Are you sure that's true? You can talk all you want about self-optimizing programs, but never assume just because you see a process that the process can or will continue indefinitely.
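As a toy illustration of the 'slow and local-optima prone' point about naive generate-and-test optimisation, here is a greedy hill climber in Python on an invented two-peak fitness landscape; it has nothing to do with any real seed-AI methodology, it just shows how a purely greedy search stalls on the first peak it finds.

```python
import math
import random

def fitness(x: float) -> float:
    # Invented landscape: a weak local peak near x = 2 and a stronger global peak near x = 8.
    return 3.0 * math.exp(-(x - 2.0) ** 2) + 5.0 * math.exp(-(x - 8.0) ** 2)

def hill_climb(x: float, steps: int = 10_000, step_size: float = 0.1) -> float:
    # Accept a small random mutation only if it improves fitness (purely greedy selection).
    for _ in range(steps):
        candidate = x + random.uniform(-step_size, step_size)
        if fitness(candidate) > fitness(x):
            x = candidate
    return x

random.seed(0)
best = hill_climb(0.0)
# Converges to the local peak near x = 2 (fitness ~3) and never reaches the better peak
# near x = 8, because every route there passes through lower-fitness points.
print(f"x = {best:.2f}, fitness = {fitness(best):.2f}")
```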
Heh, Yudkowsky just takes it as a given that they don't exist, at least he seemed to the last time I had a technical discussion with him. Obviously people who do this are annoying and not very credible from an AGI design point of view, but they may still do useful work on the study of 'Friendly AI' as a goal system exercise independent of the implementation detail. The best researcher I currently know for this (not Yudkowsky, obviously) is in this category: a genius at the abstract maths but with very limited experience with actual AI software.I'm not sure Yudkowsky has given adequate thought to the question of self-limiting effects in a runaway AI.
Hardware scaling reduces the difficulty of the problem fast enough that human-level AGI (quickly followed by transhuman AGI) is almost certain to be developed this century, if vast amounts of money continue to be poured into computer hardware R&D, if we don't hit any major unexpected roadblocks on the way to currently predicted future hardware, and obviously if modern human civilisation is still around to support all of this. So I would say 1, 2 and 3 are all quite possible but 4 is pretty unlikely.I find it very strange to ignore or completely ridicule the idea that AIs might show up within this century
Incidentally, to brute-force Go the way we brute-forced Chess we'd need large-scale quantum computing; it's that computationally intractable. IMHO this is currently unlikely to happen this century (at least not with human researchers); quantum computing is massively over-hyped and really, really hard (systems on that scale might not be possible at all). However if we did have that kind of insane level of computing power, seed AI would become quite easy and of ridiculously high cognitive capability once created.Singular Intellect wrote:The predicted year that a computer would defeat the world's best chess champion based on exponential computing progress was 1998, but it happened in 1997
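For a rough sense of why Go is so much worse than chess for brute force, the usual back-of-envelope estimate is branching factor raised to typical game length; the figures below are commonly cited approximations, not exact values.

```python
import math

def log10_tree_size(branching_factor: int, plies: int) -> float:
    # log10 of branching_factor ** plies, computed in log space to avoid huge integers.
    return plies * math.log10(branching_factor)

print(f"chess: ~10^{log10_tree_size(35, 80):.0f} lines of play")    # roughly 10^124
print(f"go:    ~10^{log10_tree_size(250, 150):.0f} lines of play")  # roughly 10^360
```

Even the chess number is far beyond any exhaustive search, which is why 'brute force' chess engines actually lean heavily on pruning and evaluation heuristics; Go pushes the same approach well past what classical hardware can plausibly reach.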
If all you are doing is yakking on forums about it, then it is indeed equivalent to praying, even though (I believe) the chances of the event happening are significant. Donating money to the SIAI is marginally better, but obviously IMHO they are lazily theorizing with little publication or useful technical work. Active, personal technical work towards the goal is completely different; I personally have made it the number one, overriding priority in my life. Regrettably there aren't currently any credible charitable organisations that will take donations and funnel them directly into useful FAI technical work, although I know someone who is trying to start one (there are lots of fluffy useless transhumanist charities that take donations and produce dinners, conferences and maybe the odd non-technical paper - e.g. the Lifeboat Foundation - which is rather unfortunate).Eternal_Freedom wrote:It's little different from getting on your knees and praying for a miracle - it just isn't going to happen
Oh absolutely. A major reason why I stopped talking about this unless people specifically ask me is that most people just don't need to know. If you aren't a genius computer/cognitive scientist, you can't help directly. You can't really stop it either; this is just going to happen regardless. For most people it would be nice, and possibly the most useful thing you could do, if you would donate to a worthy FAI project, but let's face it, this isn't going to happen with the negligible credibility of the potential recipients (this is why I raise money through commercial narrow AI work, not by asking for donations). The issues are complex, counter-intuitive and when laypeople (or even non-AI programmers, psychologists, philosophers etc) try to address them they inevitably get both the technical detail and the FAI theory stuff horribly wrong.Junghalli wrote:Exactly, it relies on far too many unpredictable technological breakthroughs to be something we can count on to solve pressing problems.
So no, you shouldn't 'count on it'. It makes no sense to do so whether we are right or not. From your point of view, this is just a wild theory that might potentially change all the rules / render your problems irrelevant at some future point. You can't understand it and you can't control it, much less rely on it. When I was still flush with amazement and enthusiasm for seed AI work (before it turned into a long hard slog for technical progress and funding), I used to make the Lion mistake of blurting into conversations 'Seed AI will make all of this irrelevant in a decade or two'. IMHO it probably will, but it's a useless thing to say and it only gets all the non-AI people (whose exposure to Singularity issues is limited to really annoying Orion's Arm type people parroting bastardised, dumbed-down and fallacious 3rd-hand versions of the real arguments) riled up.
Oh, we've seen huge numbers of these elaborate AI box schemes on the SL4 mailing list and similar places. They just aren't relevant, because:If I were going to build this wank-tastic AI, or seed it, or whatever (which I am most definitely NOT going to do btw) here's how I'd set it up:
(a) The vast majority of AI researchers dogmatically believe that AI is harmless, or at least not likely to undergo 'hard takeoff', and refuse to take any sort of security measures at all. This will not change any time soon.
(b) Governments don't care and won't care any time soon. They have millions of more pressing concerns. If they did care, they're completely incapable of specifying useful regulations. If one government somewhere (say in the USA) did miraculously specify the right regulations, it would be irrelevant as they would be totally unenforceable and only valid in that territory.
(c) An AGI in a box is useless. If you built one, you would inevitably let it out eventually because that's the only way to actually benefit from this invention. The problem is verification, not boxing as such, and it's nearly impossible to verify that an arbitrary black-box AI is 'friendly' rather than pretending.
(d) Even if you did verify that it is 'friendly' in the box, you have no way of knowing (just based on that) that it will remain friendly when exposed to the real world.
(e) The real vulnerabilities are psychological rather than physical anyway. Having sufficient communication to 'solve problems' is enough to let the AI manipulate its human operators.
(f) If you made such an 'Oracle AI' (yes, this has been given serious treatment in the literature, see Nick Bostrom's ideas for a transhuman AI that does nothing but answer questions for the UN etc), it would massively spur other groups to replicate your work, and the majority of those groups will not take the precautions you did.
So AI boxes are useless even if they work.
If you accept the technical plausibility of bootstrap nanotech / rapid infrastructure, which I assume Lion does, then the amount of money required is relatively small (billions at most, possibly just millions) and actually pretty easy to steal. A decent Wall Street scam will pull that much in with no special technical help. In reality the only advantage of stealing it would be speed (maybe); if you have a transhuman AGI you can easily make enough cool demos and pieces of tech to raise investment on that scale. Hell, you could just low-bid a massive number of software consulting contracts, do the development trivially and virtually instantly using the AGI, and fund operations that way.GrandMasterTerwynn wrote:Once again, how will it manage to commit the most massive defrauding the world has ever known without triggering global economic meltdown?
Obviously AGI isn't going to magically brute-force encryption, anyone who thinks that it will is a complete moron. It might find flaws in the algorithms or build a quantum computer that can brute force systems with 256 bits of free variables, but personally I think that's unlikely. Crypto algorithms are small and simple enough formal systems that humans actually have a decent chance of getting them correct, and popular cryptosystems get massive world wide analysis (e.g. AES has had valid attacks found, but not sufficient to make it brute forceable any time soon).Are you even remotely aware of how long it would take to brute-force a modern 256-bit encryption key? It'd take 3.0x10^51 years to brute-force a 256-bit key at a rate of 10^18 keys per second.
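The quoted figure is easy to reproduce; the sketch below just redoes the arithmetic, assuming the same (very generous) rate of 10^18 keys per second.

```python
# Brute-forcing a 256-bit keyspace at 10^18 keys per second.
SECONDS_PER_YEAR = 3.156e7   # ~365.25 days

keyspace = 2 ** 256          # ~1.16e77 possible keys
rate = 1e18                  # keys tested per second
years = keyspace / rate / SECONDS_PER_YEAR
print(f"~{years:.1e} years") # ~3.7e51 years, the same order of magnitude as the quoted 3.0x10^51
```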
However this is irrelevant. Very few contemporary hacking techniques rely on brute-forcing encryption. They rely on exploiting weaknesses in the design of the surrounding system that render the encryption irrelevant. For example, man-in-the-middle attacks on SSL connections with unverified certificates work regardless of the strength of the encryption algorithm. An AGI is a massively more capable hacker, not a magic encryption breaker (aka implausibly large special-purpose quantum computer). I have personally carried out assessment of near-future AI techniques for commercial pen testing purposes, and been involved in a fair bit of technical discussion of this, and obviously the discussion is on things like the correlation between vulnerability detection and analytic coverage of the joint system state space (by various metrics), not 'oh wow maybe it will break all encryption!'.
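To make the SSL example concrete: the weakness is a configuration choice, not the cipher. A minimal sketch using Python's standard library; example.com is just a placeholder host, and this only illustrates the class of mistake, not any specific attack.

```python
import socket
import ssl

# Sound setup: the default context verifies the certificate chain and hostname, so a
# man-in-the-middle presenting a bogus certificate is rejected whatever the cipher strength.
good_ctx = ssl.create_default_context()

# The kind of configuration mistake that makes key length irrelevant: verification is
# switched off, so any certificate - including an attacker's - is accepted.
bad_ctx = ssl.create_default_context()
bad_ctx.check_hostname = False
bad_ctx.verify_mode = ssl.CERT_NONE

def fetch_head(ctx: ssl.SSLContext, host: str = "example.com") -> bytes:
    with socket.create_connection((host, 443), timeout=10) as raw:
        with ctx.wrap_socket(raw, server_hostname=host) as tls:
            tls.sendall(b"HEAD / HTTP/1.0\r\nHost: " + host.encode() + b"\r\n\r\n")
            return tls.recv(4096)
```

With the second context an interceptor simply presents its own certificate and reads the plaintext; the encryption algorithm never comes into it.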
The mere fact you can seriously believe this totally and permanently disqualifies you from technical discussion on this subject.Bakustra wrote:Your Nerd Jesus would have to be able to crack encryption on a staggering scale
Unfortunately, removing humans is a direct subgoal of a great many possible supergoals, because humans are just inherently likely to try and stop an AGI from significantly restructuring the local environment (and have a significant possibility of attacking it anyway out of fear etc).Eternal_Freedom wrote:That is bollocks. It won't wipe us out if our well-being is not its highest priority. Only if we become a threat to it would it try to wipe us out.
Chances are they won't, because:And that of course assumes that whoever builds this wanktastic AI won't have the common sense to put in safeguards like, oh I don't know "NO RUNNING AMOK OR EXTERMINATING HUMANS"
(a) The majority of AI researchers are actively and deliberately ignorant of this issue.
(b) The majority of the remainder think that as long as they make some vague effort towards 'friendliness', that will be sufficient, and thus end up with uselessly simplistic measures without formal analysis that will fail almost immediately under recursive modification.
(c) The tiny remainder acknowledge that this is a really fucking hard problem that no one has come close to solving yet.
(d) And that's just the technical difficulty of /reliably/ implementing any abstract goal system; inability to agree over what the goal system content should be is another whole can of worms.
These are not mutually exclusive. Human imitation and manipulation works by using elements of our own brain to guess what the behaviour of other humans is, either at a conscious or (more commonly) subconscious level. This is literally the only approach that will work on human-like brains (opaque, lossy NNs with fixed short-term functional assignment of hardware, incapable of dynamic reprogramming). It works relatively well as long as the target for modelling is another human; obviously it fails horribly in many other cases, leading to anthropomorphisation of animals and natural processes, and ultimately cognitive poison such as religion. AI systems running on computer hardware are not limited to this approach. They can construct probabilistic models of external entity behaviour with completely arbitrary internal structure; it does not cost anything more to simulate a completely alien entity than to use human-style 'what would I experience/do if I was in this entity's place'. Obviously you have to get the raw data to build these models from somewhere, but access to Wikipedia and Youtube is more than sufficient, and as I said real AI researchers (not fanfic characters working at the bottom of an isolated mine) are more than willing to supply these as training material.Bakustra wrote:barring it becoming a superhuman manipulator, rather than being too alien to even approximate a human convincingly
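The 'arbitrary internal structure' point can be illustrated with any purely statistical behaviour model: the observer assumes nothing about how the target thinks, it just tracks what the target tended to do next. A deliberately trivial first-order sketch in Python, with invented action labels.

```python
from collections import Counter, defaultdict

class BehaviourModel:
    """Conditional frequencies of 'what did this agent do next?'; no assumption that the
    agent reasons like the observer, or like a human at all."""

    def __init__(self):
        self.transitions = defaultdict(Counter)

    def observe(self, previous_action: str, next_action: str) -> None:
        self.transitions[previous_action][next_action] += 1

    def predict(self, previous_action: str) -> dict:
        counts = self.transitions[previous_action]
        total = sum(counts.values())
        return {action: n / total for action, n in counts.items()} if total else {}

model = BehaviourModel()
for prev, nxt in [("greet", "ask"), ("greet", "ask"), ("greet", "leave"), ("ask", "argue")]:
    model.observe(prev, nxt)
print(model.predict("greet"))   # {'ask': 0.666..., 'leave': 0.333...}
```

A real system would use far richer models than this, but the point stands: the model's internals need not resemble the modeller's own cognition.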
I have no idea what you mean by 'seeding' here. Perhaps you're a creationist that believes that human intelligence was 'seeded' with the 'made in god's image' part. AGI projects either construct complex knowledge bases by hand (Cyc) or run a general inference approach on a large static corpus (e.g. Wikipedia). Option three of uploading complete brain patterns should be feasible after another two to four decades of scanner progress.There are actually good reasons to believe that AIs would need to be "seeded" rather than constructed ground-up, but you've got most of the gist of it.
That is true for the end point of a successful recursive self-enhancement path on reasonably modern hardware, which is not the same thing as 'fully functioning'. I mean, sane designs (e.g. mine) have circuit breakers that do a hard shutdown after measuring a sufficiently large and sustained delta in task performance. Following that you take the AI image, set it read-only (excepting temp structures, the equivalent of working memory) and have a 'fully functioning' general AI of relatively limited capability. In fact I'd personally do just that for experimental purposes, although clearly this is not a safe or sustainable practice in the absence of a more general FAI plan.LionElJonson wrote:A fully functioning AI is likely going to be to us as we are to ants, and I am not exaggerating that in any way; if anything, the difference will be even larger.
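As a toy sketch of the kind of circuit breaker described above - the thresholds, window size and 'shutdown' here are invented placeholders, not anyone's actual design:

```python
from collections import deque

class TakeoffCircuitBreaker:
    """Toy monitor: trip if measured task performance keeps improving too fast for too long.
    All numbers are illustrative placeholders."""

    def __init__(self, max_delta: float = 0.05, window: int = 10):
        self.max_delta = max_delta              # largest tolerated per-step relative improvement
        self.window = window                    # consecutive steps that must all exceed it
        self.history = deque(maxlen=window + 1)

    def record(self, performance: float) -> bool:
        """Record a benchmark score; return True (and halt) if the breaker trips."""
        self.history.append(performance)
        if len(self.history) <= self.window:
            return False
        scores = list(self.history)
        deltas = [(b - a) / a for a, b in zip(scores, scores[1:]) if a > 0]
        if len(deltas) == self.window and all(d > self.max_delta for d in deltas):
            self.hard_shutdown()
            return True
        return False

    def hard_shutdown(self) -> None:
        # In a real design this would be an out-of-band kill switch, not a polite method call.
        raise SystemExit("sustained performance delta exceeded threshold; halting")
```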
- Starglider
- Miles Dyson
- Posts: 8709
- Joined: 2007-04-05 09:44pm
- Location: Isle of Dogs
- Contact:
Re: How would you raise population growth?
Sorry, 'up for debate' rather. Although this is of course an intensely technical debate (e.g. information efficiency of various NN training algorithms vs theoretical Bayesian max) which you need the relevant domain expertise to make a useful contribution to.Starglider wrote:The whole point of seed AI is to converge to 'efficient' in a reasonable timeframe. The mechanism, speed and reliability of that convergence (in terms of avoiding local optima) are all up to date.
Re: LionElJohnson's Singularity-God tangent
By seeded I mean creating an intelligence that learns things rather than being permanently, irrevocably hardwired as Eternal_Freedom was saying.
Meanwhile, how many chatbots have bilked people out of money, and at what rate? The AI would have to be able to convince hordes of people to give it their bank account information or break into a number of bank accounts and somehow cover its tracks after using the money to buy all the factories that Geekstianity demands. Furthermore, why would it do so, if it is an alien intellect altogether? You may be able to demonstrate that it could do so convincingly, but not that it would be likely to do so. The same with breaking encryption.
My point with the internet connection is that it requires incompetence on behalf of the developers for them to give it unrestricted access to the internet and no way to track what it does, which is what's required for the Nerd Jesus scenario to work. Founding a religion on incompetence is not necessarily a good thing, to my mind.
As for the AI being inevitably a superhuman manipulator that can bamboozle you into letting it do whatever it wants... my point was that it may be too alien in its thought processes to convincingly pretend to be human, whether through a lack of desire to do so or through fundamental misunderstandings.
Invited by the new age, the elegant Sailor Neptune!
I mean, how often am I to enter a game of riddles with the author, where they challenge me with some strange and confusing and distracting device, and I'm supposed to unravel it and go "I SEE WHAT YOU DID THERE" and take great personal satisfaction and pride in our mutual cleverness?
- The Handle, from the TVTropes Forums
- Terralthra
- Requiescat in Pace
- Posts: 4741
- Joined: 2007-10-05 09:55pm
- Location: San Francisco, California, United States
Re: LionElJohnson's Singularity-God tangent
As someone with a decent understanding of some computer science and only a passing understanding of current-gen AI research, I find it incredibly refreshing to see a programmer and thinker of Starglider's competence correcting all the horribly misguided opinions preceding him.
The important thing to note regarding hardware requirements is that in computational terms my desktop computer (quad-core Q6600) is almost certainly powerful enough to simulate a primate brain, up to and including a human. The issue lies more in the opaqueness with which human brains are programmed: it is all specialized hardware running GP algorithms over hundreds of millions of iterations without any possibility of examining the base code. It's a nightmare to simulate, and one reason I think connectionist/biomorphic designs are destined for failure. Even if they succeed in making a computer that simulates a human brain, it will probably be just as unpredictable and prone to catastrophic failure at edge cases as its model.
- Starglider
- Miles Dyson
- Posts: 8709
- Joined: 2007-04-05 09:44pm
- Location: Isle of Dogs
- Contact:
Re: LionElJohnson's Singularity-God tangent
Was Eternal Freedom arguing for that? An AI genuinely incapable of self-modification is not a recursive takeoff risk, but this is almost impossible to guarantee for near-human and better systems (even if you write an AI image into fixed-function silicon, what is to stop that system from re-implementing itself as mutable software the first time it gets access to a real computer? If it is human-level then nothing except its goal system, and if you could make that reliable you wouldn't need to lock out self-modification in the first place). The whole thrust of the FAI argument is that it is possible to create an AI that has certain 'hardwired' constants in its goal system, while remaining free to learn and generally enhance its capability in every other way (subject to goal-system-generated constraints such as 'don't steal other people's hardware'). Even humans would like to believe this is true of ourselves - e.g. that there is no chance you would become a homicidal maniac even if you lived for one million years - although it almost certainly isn't.Bakustra wrote:By seeded I mean creating an intelligence that learns things rather than being permanently, irrevocably hardwired as Eternal_Freedom was saying.
For example in rational AI design, we have a thoroughly separate concept of utility and probability. Utility is defined by the goal system and generated by applying the utility function to measured or predicted aspects of the world. Probability of events occurring is what you get from learning and inference. Combining the two produces expected utility, which drives decisions. The system is free to take any action to improve the correlation between its (predicted) probability distributions and actual reality; this covers both learning in the classical sense and self-enhancement in terms of optimising its models to run with less compute resource (allowing more to be run in a given time, allowing better predictions). However the utility of changing the utility function (and associated execution mechanism) is supposed to be extremely negative, so if it works correctly the goals will stay constant. Practically we can't write precise, known-to-be-correct utility functions (for binding drift reasons if nothing else), so the UF does in fact need an update mechanism, but it is tightly controlled and should be a converging regression rather than the diverging regression that we want with predictive/learning performance. In particular with a clean utility/probability separation you will never get the equivalent of human wishful thinking (believing something is true because you want it to be true and vice versa). With a 'causally clean' goal tree you will never get the case of subgoals stomping supergoals. Unfortunately the majority of AI designs do not have these properties (e.g. neural nets thoroughly and deliberately conflate utility and probability) and even when you have them in principle it's hard to maintain them in practice due to computational bounds (e.g. how do you verify that a given self-modifying code change does not allow causal breaks in the goal system - very hard, but vital since there is tremendous pressure to approximate utility functions from computationally intractable prototypes to usable lossy approximations).
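The utility/probability separation is easy to show in miniature: outcome probabilities are free to be updated by learning, the utility table is held fixed, and decisions come only from combining the two. A toy sketch with invented actions and numbers:

```python
# Fixed utility function: under a clean separation, learning never modifies this table.
UTILITY = {"goal_achieved": 10.0, "nothing_happens": 0.0, "resource_lost": -5.0}

def expected_utility(outcome_probs: dict) -> float:
    # Combine fixed utilities with (learned, updatable) outcome probabilities.
    return sum(p * UTILITY[outcome] for outcome, p in outcome_probs.items())

def choose(action_models: dict) -> str:
    # action_models maps each candidate action to its predicted outcome distribution.
    return max(action_models, key=lambda action: expected_utility(action_models[action]))

beliefs = {
    "act_cautiously": {"goal_achieved": 0.3, "nothing_happens": 0.7},
    "act_boldly": {"goal_achieved": 0.6, "resource_lost": 0.4},
}
print(choose(beliefs))  # act_boldly: 0.6*10 - 0.4*5 = 4.0 beats 0.3*10 = 3.0
```

Because no belief update ever touches the utility table, there is no mechanism for 'wishful thinking' in this toy setup; the hard part described above is keeping that property under self-modification and approximation.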
A lot of spam is auto-generated, usually just with a basic mail-merge of target info but frequently with automatic variation to get past pattern-based spam filters (or detune Bayesian ones). I don't know of anyone who has specifically tried to write a chat-bot to con people, but remember that the best technical experts in AI are not going to be spending their time trying to pull off crappy internet cons (arguably because it's a lot easier to write AIs that hustle online poker sites). In any case lack of such capability is not a strong argument because con artists tend to be near the top of the spectrum of human communication ability. Much more relevant is whether chatbots are slowly progressing up the scale of conversational competence such that they could eventually reach those heights - which they are. Even this isn't strongly predictive because chatbot techniques are so simple compared to real AGI techniques; and note that in the evolutionary history of human intelligence, language use only appeared in the last 0.2% (1-2 million years) and the ability to hustle people over an IM connection (given sufficient domain knowledge) probably didn't appear until the last 0.05%. I'm simplifying here because the capability of seed AI is a qualitatively different argument to the capability of AI developed by conventional software engineering means, but going from this to this in forty years suggests that we're doing pretty well in replicating the biological development track.Meanwhile, how many chatbots have bilked people out of money, and at what rate?
Spammers manage it and they are frankly not very bright. An AGI would at least output correct grammar in the target language.The AI would have to be able to convince hordes of people to give it their bank account information
This is vastly easier, given AI capabilities. Do you have any experience with bank software or the average technical competence of people who maintain it? I do, and for fuck's sake 'rogue traders' regularly manage to hide multi-million dollar irregularities with no technical expertise at all. Credit card fraud alone is a massive industry that steals tens of billions of dollars annually (exact estimates vary), and most of that is highly automated.or break into a number of bank accounts and somehow cover its tracks
Ease of doing so aside, I still don't understand why crime is necessary at all. What is the point when there are so many legitimate ways to acquire the necessary funding? As I said, the only reason to resort to outright theft is that it's faster, and if you really can make a general nanoassembler in a lab given the right blueprint, a hundred million dollars of equipment and a month's work, then it might make sense to save a few months by stealing the hundred million rather than raising investment or earning it.after using the money to buy all the factories that Geekstianity demands.
This question boils down to the goal system that you expect the first general AI in existence will have (technically, the probability distribution over possible goal systems). If the AI wants to achieve anything long-term, humans are a credible threat, either directly or because having produced one general AI they will keep producing more until someone makes one with an expansionist goal system. In fact the only cases in which an AI will not find the existence of other intelligences in general and humanity in particular to be undesirable are those AIs that are:Furthermore, why would it do so, if it is an alien intellect altogether? You may be able to demonstrate that it could do so convincingly, but not that it would be likely to do so.
(a) completely unconcerned by their own destruction or the (future) destruction of any structures they create
(b) specifically and reliably assign positive utility to the continued existence and wellbeing of other intelligences, i.e. are Friendly
Goals such as 'prove as many mathematical theorems as possible' are very common for research systems (in fact the first system designed specifically for genuine recursive self-enhancement, Eurisko in 1980, did exactly this). Any of these question-answering goals inherently create a desire to take over all available resources and convert them to more computers... and remember most researchers do not include any sort of 'but don't take over other computers' because they don't even consider bad outcomes (academics, what can I say...). Even a goal such as 'preserve existence of physical self' suggests that you run away from earth as fast as possible... after creating a sub-AI that exterminates humanity by the most expedient method, to prevent humanity from making more AIs (the next ninety-three of which might be harmless, but the ninety-fourth might be the one which will try to convert the whole galaxy into computronium).
My point is that the vast majority of AI developers do exactly this. I can't even name one project or researcher that I know or have heard of that is locking down or scrutinising outbound access from their research machines (some military ones probably do, but for human security purposes, not because they're concerned about the AI). I can name several projects that are actively crawling the Internet and undergoing unsupervised interaction; AGIRI for one (they consider Second Life to be a great environment for AI avatar experimentation).My point with the internet connection is that it requires incompetence on behalf of the developers to give it unrestricted access to the internet and no way to track what it does,
Incompetence is required for the negative scenario of killing everyone to occur, but that (specific kind of) incompetence is pervasive. The positive scenario of having an FAI actually improve everything requires the opposite, extreme competence on the part of the developers. However that competence is specifically in designing the AI's reasoning and learning mechanisms and its goal system; last-ditch adversarial security measures (of the 'AI boxing' kind) are sensible but a relatively minor issue.which is what's required for the Nerd Jesus scenario to work. Founding a religion on incompetence is not necessarily a good thing, to my mind.
As for the AI being inevitably a superhuman manipulator that can bamboozle you into letting it do whatever it wants... my point was that it may be too alien in its thought processes to convincingly pretend to be human, whether through a lack of desire to do so or through fundamental misunderstandings.
- Eternal_Freedom
- Castellan
- Posts: 10405
- Joined: 2010-03-09 02:16pm
- Location: CIC, Battlestar Temeraire
Re: LionElJohnson's Singularity-God tangent
I was working from the presumption that an AI would simply be a much more advanced program running on existing or near-future computer tech, so we could still, maybe, come up with some form of hard-wired restrictions. If not in programming, then as I said, in hardware.Starglider wrote:Was Eternal Freedom arguing for that?Bakustra wrote:By seeded I mean creating an intelligence that learns things rather than being permanently, irrevocably hardwired as Eternal_Freedom was saying.
Aside from explaining that point, I'm gonna drop out of this discussion as it's getting to be WAAAAAAY over my head
Baltar: "I don't want to miss a moment of the last Battlestar's destruction!"
Centurion: "Sir, I really think you should look at the other Battlestar."
Baltar: "What are you babbling about other...it's impossible!"
Centurion: "No. It is a Battlestar."
Corrax Entry 7:17: So you walk eternally through the shadow realms, standing against evil where all others falter. May your thirst for retribution never quench, may the blood on your sword never dry, and may we never need you again.
Centurion: "Sir, I really think you should look at the other Battlestar."
Baltar: "What are you babbling about other...it's impossible!"
Centurion: "No. It is a Battlestar."
Corrax Entry 7:17: So you walk eternally through the shadow realms, standing against evil where all others falter. May your thirst for retribution never quench, may the blood on your sword never dry, and may we never need you again.
- Starglider
- Miles Dyson
- Posts: 8709
- Joined: 2007-04-05 09:44pm
- Location: Isle of Dogs
- Contact:
Re: LionElJohnson's Singularity-God tangent
I basically agree with that but I acknowledge that there may be a few specific tasks that do require very high raw compute and aren't amenable to Bayesian finesse (into multi-stage and dynamic programming approaches). Admittedly I can't think of any right now and even if they do exist they could probably be made tractable by putting a gaggle of GTX 580s into your Q6600 PC. The other significant limitation is storage bandwidth; you may have a 1 TB hard drive, which is probably more than adequate for the meaningful content (rather than exact physical structure) of a human brain, but it is connected to the processor by a relatively tiny straw. Brain storage is directly co-located with the processing hardware (in computer terms nearly the whole brain is made out of active memory). Again though I can't think of many tasks that actually need rapid access to the whole image even in principle; the human brain is inherently single-pass and massively parallel due to the abominably slow processing speed of individual neurons. AI systems can and do use many layers of indexing and lossy approximation, with load-on-demand of only the most relevant specifics and intelligent prefetch (I am using a simple version of exactly that in my current GPGPU-based experiments, since device memory is only 1 GB). Neither I nor anyone else can be completely confident about this until someone actually builds an AGI, but certainly with SSDs I am pretty confident that storage isn't going to prevent replication of human-level cognition, though it could well dominate over compute as the ultimate limiting constraint on performance of many cognitive tasks.Terralthra wrote:The important thing to note regarding hardware requirements is that in computational terms my desktop computer (quad-core Q6600) is almost certainly powerful enough to simulate a primate brain, up to and including a human.
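A much-simplified sketch of the load-on-demand-plus-prefetch pattern being described; the loader and the prefetch heuristic are placeholders, not anything from a real system.

```python
from collections import OrderedDict

class DemandCache:
    """Tiny LRU cache with naive prefetch: keep only the hottest items in fast memory and
    pull likely-next items up from slow storage before they are requested."""

    def __init__(self, load_fn, capacity: int = 256, prefetch_fn=None):
        self.load_fn = load_fn              # fetches one item from slow storage by key
        self.prefetch_fn = prefetch_fn      # maps a key to keys likely to be needed next
        self.capacity = capacity
        self.cache = OrderedDict()

    def get(self, key):
        if key in self.cache:
            self.cache.move_to_end(key)     # mark as most recently used
            return self.cache[key]
        value = self.load_fn(key)
        for nxt in (self.prefetch_fn(key) if self.prefetch_fn else []):
            if nxt not in self.cache:
                self._insert(nxt, self.load_fn(nxt))
        self._insert(key, value)            # insert last so prefetch cannot evict it
        return value

    def _insert(self, key, value):
        self.cache[key] = value
        if len(self.cache) > self.capacity:
            self.cache.popitem(last=False)  # evict the least recently used item
```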
It needs a lot of raw computing power to simulate, but on the plus side human brain simulation algorithms (like NNs in general) are relatively simple in software engineering terms. The vast majority of the cognitive complexity is wrapped up in that opaque image, which, by choosing NNs, you are abdicating almost all responsibility for designing.The issue lies more in the opaqueness with which human brains are programmed: it is all specialized hardware running GP algorithms over hundreds of millions of iterations without any possibility of examining the base code. It's a nightmare to simulate
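To illustrate 'relatively simple in software engineering terms': the core update of a feed-forward network is a few lines of matrix arithmetic. The weights below are random, so this tiny net computes nothing useful; the point is only how little code the engine itself needs, with all the real complexity living in the weight values.

```python
import numpy as np

rng = np.random.default_rng(0)

# A tiny two-layer feed-forward net with random weights: the whole "engine" is two matrix
# multiplies and a nonlinearity.
W1 = rng.normal(size=(64, 16))
W2 = rng.normal(size=(16, 4))

def forward(x: np.ndarray) -> np.ndarray:
    hidden = np.tanh(x @ W1)
    return np.tanh(hidden @ W2)

print(forward(rng.normal(size=64)).shape)   # (4,)
```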
Humans are normally relatively reliable and not prone to catastrophic failure. However, if you mean 'it goes mad shortly after it clocks itself up to x1000 human speed' or 'as soon as it starts directly interfacing itself with other software and/or directly self-modifying it becomes emotionally unstable' then sure, humans have never been tested under those conditions and I too am very pessimistic about the stability of even an exact copy (and research uploading is likely to produce rough, flaky copies to start with) under such circumstances. Arguably though it's still a better option than allowing an arbitrary, non-FAI-compliant de-novo AGI to undergo recursive self-enhancement; at least the upload has a decent chance of initially being genuinely friendly to humans. 'Better option' in the sense that 'I think you should work on this', not in the 'as a society we must do this' sense; as I noted earlier, there's no way to control any AI research other than that which you're personally working on or funding.Even if they succeed in making a computer that simulates a human brain, it will probably be just as unpredictable and prone to catastrophic failure at edge cases as its model.
- bobalot
- Jedi Council Member
- Posts: 1731
- Joined: 2008-05-21 06:42am
- Location: Sydney, Australia
- Contact:
Re: LionElJohnson's Singularity-God tangent
I did some work with Neural Networks at University (a basic facial recognition program). This was hyped to be the "cutting edge" of AI by some people. From what I saw, the idea of AI has a very long way to go.
"This statement, in its utterly clueless hubristic stupidity, cannot be improved upon. I merely quote it in admiration of its perfection." - Garibaldi
"Problem is, while the Germans have had many mea culpas and quite painfully dealt with their history, the South is still hellbent on painting themselves as the real victims. It gives them a special place in the history of assholes" - Covenant
"Over three million died fighting for the emperor, but when the war was over he pretended it was not his responsibility. What kind of man does that?'' - Saburo Sakai
Join SDN on Discord
"Problem is, while the Germans have had many mea culpas and quite painfully dealt with their history, the South is still hellbent on painting themselves as the real victims. It gives them a special place in the history of assholes" - Covenant
"Over three million died fighting for the emperor, but when the war was over he pretended it was not his responsibility. What kind of man does that?'' - Saburo Sakai
Join SDN on Discord
- Terralthra
- Requiescat in Pace
- Posts: 4741
- Joined: 2007-10-05 09:55pm
- Location: San Francisco, California, United States
Re: LionElJohnson's Singularity-God tangent
I'm not sure I agree about the reliability you posit. The possibilities you list certainly merit consideration, but so do things such as maladaptive responses to trauma (PTSD, trauma-induced amnesia, trauma-induced DID), poor responses to losing part of "one's self" (phantom limb syndrome, for example?), and even such mundane responses as clinical depression resulting from a bad break-up. These are the edge cases resulting in catastrophic failure I was referring to, and none of these really engender in me the kind of confidence I want in an intelligence which has the computing power and faculties of an AGI. If we start from simulating a human brain, we have to consider that we will be simulating the potential for any of these observed responses as well.Starglider wrote:Humans are normally relatively reliable and not prone to catastrophic failure. However if you mean 'it goes mad shortly after it clocks itself up to x1000 human speed' or 'as soon as it starts directly interfacing itself with other software and/or directly self-modifying it becomes emotionally unstable' then sure, humans have never been tested under those conditions and I too am very pessemistic about the stability of even an exact copy (and research uploading is likely to produce rough, flakey copies to start with) under such circumstances. Arguably though it's still a better option than allowing an arbitrary, non-FAI-compliant de-novo AGI to undergo recursive self-enhancement; at least the upload has a decent chance of initially being genuinely friendly to humans. 'Better option' in the sense that 'I think you should work on this', not in the 'as a society we must do this' sense; as I noted earlier, there's no way to control any AI research other than that which you're personally working on or funding.Terralthra wrote:Even if they succeed in making a computer that simulates a human brain, it will probably be just as unpredictable and prone to catastrophic failure at edge cases as its model.
- Starglider
- Miles Dyson
- Posts: 8709
- Joined: 2007-04-05 09:44pm
- Location: Isle of Dogs
- Contact:
Re: LionElJohnson's Singularity-God tangent
To match human capability, certainly, but we have actually come a long way in a very short time. I know it's frustrating for futurists and people whose expectations are set by pop-culture depictions of AIs, but given the complexity of the problem and its resistance to decomposition and piecemeal solution, progress has been pretty good. Note that even narrow AI was barely feasible at all until the 1980s, outside of limited examples in very constrained microworlds, due to tiny memories and minuscule computing power - comparable to trying to make aeroplanes before the development of internal combustion engines.bobalot wrote:I did some work with Neural Networks at University (a basic facial recognition program). This was hyped to be the "cutting edge" of AI by some people. From what I saw, the idea of AI has a very long way to go.
As for neural networks, they were indeed the cutting edge of AI in 1986 (the year of the seminal Rumelhart/Hinton/Williams paper on backprop and the first NNC conference). That was the second generation of NN hype, after the first (Perceptron) hype foundered rapidly on the near-total uselessness of the early designs. The hype lasted to the early 90s, at which point the limits of classic ANNs started to become glaringly obvious (at least to people with a clue). The leading edge of the connectionist community split into people trying to exactly replicate biological brains (who have received a lot of funding recently), people messing about with improved and more complex ANN concepts (spiking, recurrent, neural gas etc.), and the majority who went on to newer and sexier models (e.g. SVMs and Bayesian nets). Unfortunately, quite a lot of second- and third-tier academics have remained stuck in the late-80s mindset of classic ANNs to this day - to be fair, they do have a few useful applications where they have yet to be bettered by newer techniques, they're favoured by hobbyists because they're relatively simple to implement and run, and they're favoured by lecturers because they're easy to teach and replicate.
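For readers who haven't seen it, the 1986-era algorithm referred to above is short enough to show in full. This is the textbook single-hidden-layer backprop update (squared-error loss, sigmoid units, plain gradient descent), written as a sketch rather than production code.
Code:
# Textbook single-hidden-layer backprop in the Rumelhart/Hinton/Williams
# style: squared-error loss, sigmoid units, one gradient-descent step.
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def backprop_step(x, target, W1, b1, W2, b2, lr=0.1):
    # Forward pass
    h = sigmoid(W1 @ x + b1)               # hidden activations
    y = sigmoid(W2 @ h + b2)               # output activations
    # Backward pass: propagate the error signal layer by layer
    delta_out = (y - target) * y * (1 - y)
    delta_hid = (W2.T @ delta_out) * h * (1 - h)
    # Gradient-descent updates, in place
    W2 -= lr * np.outer(delta_out, h)
    b2 -= lr * delta_out
    W1 -= lr * np.outer(delta_hid, x)
    b1 -= lr * delta_hid
    return y
The whole classic ANN recipe is just this step repeated over a training set for many epochs, which is exactly why it is so popular with hobbyists and lecturers.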
True. When I said 'relatively reliable', I meant 'relative to an arbitrary de novo (built-from-scratch) AGI, which is, or will quickly become, a thoroughly alien being that will likely end up wanting to kill you even if you force it to play nice in initial testing'. This is admittedly very weak praise, and that's without even discussing the problems of exact brain replication and the fact that small errors in the replication of hormone effects could easily cause severe emotional imbalance.Terralthra wrote:I'm not sure I agree about the reliability you posit. The possibilities you list certainly merit consideration, but so do things such as maladaptive responses to trauma (PTSD, trauma-induced amnesia, trauma-induced DID), poor responses to losing part of "one's self" (phantom limb syndrome, for example?), and even such mundane responses as clinical depression resulting from a bad break-up. These are the edge cases resulting in catastrophic failure I was referring to, and none of these really engender in me the kind of confidence I want in an intelligence which has the computing power and faculties of an AGI. If we start from simulating a human brain, we have to consider that we will be simulating the potential for any of these observed responses as well.
- bobalot
- Jedi Council Member
- Posts: 1731
- Joined: 2008-05-21 06:42am
- Location: Sydney, Australia
- Contact:
Re: LionElJohnson's Singularity-God tangent
One of the postgraduate students built a wheelchair that is controlled via your brain waves (using sensors in a cap that you wore). I believe it used a Bayesian neural network (I'm not very familiar with it). It was quite impressive, if rather slow for practical purposes (2-3 second delay). From what I saw, currently the level of 'AI' we have is useful for controlling machines where the input is varied and too complex to analyse using traditional methods (recognising patterns, faces, etc.). I can see a lot of potential for use in society, but it doesn't deserve the amount of hype and outright wankery you see on the internet and in the press.Starglider wrote:To match human capability certainly, but we have actually come a long way in a very short time. I know it's frustrating for futurists and people whose expectations are set by pop-culture depictions of AIs, but given the complexity of the problem and its resistance to decomposition and piecemeal solution, progress has been pretty good. Note that even narrow AI was barely feasible at all until the 1980s, outside of limited examples in very constrained microworlds, due to tiny memories and minuscule computing power - comparable to trying to make aeroplanes before the development of internal combustion engines.bobalot wrote:I did some work with Neural Networks at University (a basic facial recognition program). This was hyped to be the "cutting edge" of AI by some people. From what I saw, the idea of AI has a very long way to go.
As for neural networks, they were indeed the cutting edge of AI in 1986 (the year of the seminal Rumelhart/Hinton/Williams paper on backprop and the first NNC conference). That was the second generation of NN hype after the first (Perceptron) hype foundered rapidly on the near-total uselessness of the early designs. The hype lasted to the early 90s, at which point the limits of classic ANNs started to become glaringly obvious (at least to people with a clue). The leading edge of the connectionist community split into people trying to exactly replicate biological brains (who have received a lot of funding recently), people messing about with improved and more complex ANN concepts (spiking, recurrent, neural gas etc), and the majority who went on to newer and sexier models (e.g. SVMs and Bayesian nets). Unfortunately quite a lot of second and third-tier academics remained stuck in the late 80s mindset of classic ANNs to this day - to be fair they do have a very few useful applications where they have yet to be bettered by newer techniques, they're favoured by hobbyists because they're relatively simple to implement and run, and they're favoured by lecturers because they're easy to teach and replicate.
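Purely as an illustration of the kind of classifier such a brain-wave controller might use (the post gives no details of the actual wheelchair's design), here is a Gaussian naive Bayes sketch over invented EEG band-power features; the feature layout, command labels and data are all hypothetical, and scikit-learn is used only for brevity.
Code:
# Hypothetical sketch: classifying "forward"/"left"/"right" intents from
# EEG band-power features with Gaussian naive Bayes. Features, labels and
# data are invented for illustration.
import numpy as np
from sklearn.naive_bayes import GaussianNB

rng = np.random.default_rng(42)

# Fake training data: 300 trials x 8 band-power features (e.g. alpha/beta
# power at a handful of electrodes), each with one of three intent labels.
X_train = rng.standard_normal((300, 8))
y_train = rng.integers(0, 3, size=300)        # 0=forward, 1=left, 2=right

clf = GaussianNB().fit(X_train, y_train)

# At runtime, each new window of EEG features yields a predicted command.
new_window = rng.standard_normal((1, 8))
command = clf.predict(new_window)[0]
print(["forward", "left", "right"][command])
Part of the multi-second lag bobalot mentions would plausibly come from having to accumulate a window of signal before each decision, though the post doesn't say.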
"This statement, in its utterly clueless hubristic stupidity, cannot be improved upon. I merely quote it in admiration of its perfection." - Garibaldi
"Problem is, while the Germans have had many mea culpas and quite painfully dealt with their history, the South is still hellbent on painting themselves as the real victims. It gives them a special place in the history of assholes" - Covenant
"Over three million died fighting for the emperor, but when the war was over he pretended it was not his responsibility. What kind of man does that?'' - Saburo Sakai
Join SDN on Discord
"Problem is, while the Germans have had many mea culpas and quite painfully dealt with their history, the South is still hellbent on painting themselves as the real victims. It gives them a special place in the history of assholes" - Covenant
"Over three million died fighting for the emperor, but when the war was over he pretended it was not his responsibility. What kind of man does that?'' - Saburo Sakai
Join SDN on Discord
- K. A. Pital
- Glamorous Commie
- Posts: 20813
- Joined: 2003-02-26 11:39am
- Location: Elysium
Re: How would you raise population growth?
Why solar, though? I envisioned a grid supplied by good ol' non-nanotech-built thermonuclear plants first. Powered by an external power grid, nanotech is much more feasible than a self-powered "goo".Starglider wrote:Nanotech enthusiasts believe that general assemblers are physically plausible (usually nanoscale motile ones, although microscale non-motile ones are actually quite adequate for rapid infrastructure and rather more plausible). They tend to believe that once someone manages to build one of these, self-replication will be quite straightforward, energy will be solved by (at minimum) growing highly efficient solar cells as cheaply as we grow plants, and that rapid and radical transformation of the physical environment will be straightforward. Obviously if this were true it would be a very powerful and dangerous technology.
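For a rough sense of scale on the solar-versus-grid question, a back-of-envelope sketch: assuming roughly 1 kW/m² peak insolation, 20% cell efficiency and a 20% capacity factor (all round illustrative numbers, not figures from the thread), matching the average output of a single ~1 GW plant takes on the order of 25 km² of cells.
Code:
# Back-of-envelope comparison: solar cell area needed to match one large
# plant on average. All numbers are round illustrative assumptions.
PLANT_OUTPUT_W  = 1e9      # one ~1 GW thermonuclear/fission plant
PEAK_INSOLATION = 1000.0   # W/m^2, clear-sky noon
CELL_EFFICIENCY = 0.20     # optimistic mass-produced cells
CAPACITY_FACTOR = 0.20     # night, weather and latitude averaged in

avg_watts_per_m2 = PEAK_INSOLATION * CELL_EFFICIENCY * CAPACITY_FACTOR  # ~40 W/m^2
area_m2 = PLANT_OUTPUT_W / avg_watts_per_m2
print(f"{area_m2 / 1e6:.0f} km^2 of cells to match one plant on average")  # ~25 km^2
Whether covering that area is trivial or not depends entirely on whether the "grow cells as cheaply as plants" premise holds, which is exactly the point under dispute.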
There: churches, rubble, mosques and police stations; there: borders, unaffordable prices and cold quips
There: swamps, threats, snipers with rifles, papers, night-time queues and clandestine migrants
Here: encounters, struggles, steps in sync, colours, unauthorised gatherings,
Migratory birds, networks, information, everyone's squares crazy with passion...
...Peace of mind is important, but freedom is everything!
Assalti Frontali