Umm, no.Narkis wrote:We're the dominant species only because we're smarter.
We're the dominant species because of infrastructure. Practically all of it requires human physical intervention to even work.
We became dominant way before we could bake concrete - we were a super-predator during the stone age.Xon wrote:Umm, no.Narkis wrote:We're the dominant species only because we're smarter.
We're the dominant species because of infrastructure. Practically all of it requires human physical intervention to even work.
Only with tools and a complex support infrastructure (aka social groups) can humans actually use their intelligence to the extent needed to become the dominant super-predator of Earth. Even stone tools are massive force multipliers; along with fire, they allow a human to do many things which would otherwise be impossible.Samuel wrote:We became dominant way before we could bake concrete - we were a super-predator during the stone age.
But how do you deny the AI all tools? If you give it NO tools, it solves no problems and is useless. If you give it any tools (including tools it can use to communicate with you), how do you make sure that it can't figure out how to use those tools to create or appropriate more tools, and so on?Xon wrote:Only with tools and a complex support infrastructure (aka social groups) can humans actually use their intelligence to the extent needed to become the dominant super-predator of Earth. Even stone tools are massive force multipliers; along with fire, they allow a human to do many things which would otherwise be impossible.
Intelligence, no matter how advanced, is worthless if it doesn't have the tools and infrastructure to be used. Unless you have a fantasy where thought directly affects reality, you need tools and a method to physically interact with stuff. And you could hardly design something less capable of affecting the physical world than an AI construct.
Yeah, but what if the AI figures out a way to outbid the person who's paying him to keep it in the box? Any useful AI (one that can solve problems we can't) will have to be able to think thoughts we can't, or to think thoughts that it would take a vast number of very smart people to duplicate. Presumably it will be able to tell some guy responsible for keeping it in a security system how to make a big pile of money, or something along those lines.Covenant wrote:I think it's also fairly absurd because you need to put someone in charge of the AI Box who has a bias against letting it out. Not just against being convinced, but against ever breaching security, even if he's convinced. We can get people to do incredibly inhumane things to another human being, as one of our less appealing features, so there's no reason a suitably disinterested person couldn't just let the Box sit there.
The problem is not that the probability is high, but that the cost of it happening is absolute and the probability is inestimable. If the AI rebels, we have no way of guaranteeing in advance that it won't be smart and effective enough to go Skynet on us. We can take precautions, but we can't rely on those precautions working and being secure, because we don't know if the user is secure, or if something that looks secure to us really isn't because compared to the thing trying to sneak out we're about as smart as a bunch of chimps.NoXion wrote:Is it just me, or is the potential for AI becoming hostile somewhat overstated? I mean, is it really the "intelligent" thing to do to start getting aggressive with the dominant species of this planet? I also don't think super-intelligence provides a cast-iron certainty of winning - after all, we're so much smarter than a lot of other creatures, but we can still get nobbled by them, from grizzly bears all the way down to viruses.
...Aaand you're probably one of the people an unfriendly AI would use to hack its way out of the box. No offense meant, but you just scared the hell out of me.NoXion wrote:Would this necessarily be a bad thing? No longer being dominant is after all, not the same thing as becoming extinct. And something smarter than us can't possibly fuck things up worse than us.Narkis wrote:We're the dominant species only because we're smarter. If something even smarter comes around, it's only a matter of time until we stop being one.
Among other things, it could easily require metal and electricity... and we use a large fraction of the planet's available supply of both. I wouldn't rely on it not caring that we're using something it can think of a use for that it would prefer, any more than the bears would have been wise to rely on us not caring about the resources they need.It is to the bears' detriment that they have environmental requirements so close to our own, due to our aforementioned superiority. But can the same thing be said of an AI, which would have considerably different requirements?
Which is exactly what the Friendly AI school of thought advocates. If we can figure out how to build AIs that we know won't try to screw us, we're in the clear. If we don't, then we have no other way of being sure they can't find a way to screw us. And if they screw us, they can screw us very hard indeed, so I think it would be much wiser to figure out how not to build one that will screw us in the first place.Considering that it's unlikely that a super-intelligent AI will arise spontaneously without less-intelligent precursors, I think it should be possible for us to "steer" or otherwise convince AIs towards benevolent (or at least non-confrontational) relationships with humans as a species, starting from the earliest models. Making them psychologically similar to us would probably help.
Yeah, but even then there's a good chance we die in the crossfire, or are reduced to conditions that make us irrelevant to our own future.NoXion wrote:The thing is, if there's billions of AIs, that's at least a billion different potentially conflicting goal systems interacting, is it not? In that case, the danger seems to be more of a war between AIs with conflicting goals than between AIs and humans. And in such a case I can easily see humans throwing their lot in with the friendly AIs. "Friendly" in this case possibly meaning those AIs whose goals don't involve the extermination of the human race.
Yes, but Go is an example WE can imagine, which makes it useful for US to argue about. Realistically, the AI will be using all that matter and energy to contemplate the whichness of what or to build a galaxy simulator or something else, operating on a level we can't imagine... but the practical upshot is the same. If we don't build the AI to care about us in the first place, sooner or later whatever it does care about will come up against what we care about. At which point we have no way of guaranteeing that it will lose.Would a super-intelligent AI really have such narrow goals? I would have thought something that is genuinely more intelligent than humans (and not just faster thinking) would form more complex goals and aspirations than humans.Of course not. After all it's only right and proper that the biosphere be eliminated entirely and the earth covered with solar-powered compute nodes dedicated to generating the complete game tree for Go. At least, that's what the AGI that happened to enter a recursive self-enhancement loop while working on Go problems thinks, and who are you to argue?
The Manhattan Project researchers were smart enough to realize that and feel that way... but they were smarter than most of us here are. Possibly smarter than all of us here, though there are a few minds here I would tentatively suggest are on par with a lower-tier Manhattan Project researcher.So it seems the solution is to build up an "AIcology" that is conducive to continued human existence? That sounds like a difficult task, with many potential stumbling blocks along the way. Do you ever feel that you might eventually be partially responsible for the extinction of the human species?
The problem is that we have no fucking clue what a hyperintelligence is or is not capable of. We can assume that a combination of users determined not to let it escape and physical security that would make it hard for them to do so even if they wanted to would keep it locked in the box. But we do not KNOW.Duckie wrote:"It would convince people to let it out" is completely retarded, unless you think prisons don't work because the inmates can talk to the guards. Hyperintelligence doesn't suddenly make a gatekeeper a retard, unless you're Yudkowsky's sockpuppets or a complete blithering idiot...
What, suddenly it'll know exactly what makes you tick and get inside your head and work your brain like it's Hannibal Lecter? Simply have better precautions like requiring half a dozen keys being turned (a rough sketch of such a multi-key gate follows below) or just make it unable to be let out into anything by having the goddamn terminal isolated like I suggested in the first place. What is some blithering retard sockpuppet going to do, remove the hard drive and carry it out...? Just make it require ridiculous company-wide assistance and massive effort to remove. It's not too hard to take sufficient precautions to prevent escape of an AI even if you assume the staff are blithering retards predisposed to let AIs out of boxes.
Stupid bullshit like that is what pisses me off about Singularitarians. They just say "It's hyperintelligent, so it can defeat/solve/accomplish [X]" rather than actually thinking about how to contain an AI entity.
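To make the 'half a dozen keys' precaution sketched above concrete, here is a minimal illustration in Python of a k-of-n release gate. Everything in it (the approver names, the quorum size, the function name) is hypothetical; it only demonstrates the idea that no single gatekeeper can enable the output channel on their own.

# Hypothetical quorum gate: the channel opens only if enough distinct,
# recognised approvers have independently signed off. Names and quorum
# size are made up for illustration.
APPROVERS = {"gatekeeper_1", "gatekeeper_2", "gatekeeper_3",
             "gatekeeper_4", "gatekeeper_5", "gatekeeper_6"}
QUORUM = 6  # 'half a dozen keys': all of them must be turned

def channel_may_open(approvals):
    """Return True only if a sufficient quorum of listed approvers agreed."""
    valid = set(approvals) & APPROVERS  # ignore anyone not on the list
    return len(valid) >= QUORUM

print(channel_may_open({"gatekeeper_1", "gatekeeper_3"}))  # False
print(channel_may_open(APPROVERS))                         # True

As the rest of the thread points out, a gate like this only constrains the mechanism; it does nothing about the approvers themselves being talked around.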
Hell, if the AI is smart it'll actually do that a few times, just to prove its bona fides: nothing stops Skynet from acting like not-Skynet to lull its chosen dupe(s) into a false sense of security.Starglider wrote:You merely have to imagine the full range of 'human engineering' techniques that existing hackers and scammers employ, used carefully, relentlessly and precisely on every human the system comes into contact with, until someone does believe that yes, by taking this program home on a USB stick, they will get next week's stock market results and make a killing. You can try and catch that with sting operations, and you might succeed to start with, but that only tells you that the problem exists, it does not put you any closer to fixing it.
This shows such a vast misunderstanding of construction in an industrial setting that it is staggering. I'm not even going to touch your Hollywood understanding of computer security.Simon_Jester wrote:But how do you deny the AI all tools? If you give it NO tools, it solves no problems and is useless. If you give it any tools (including tools it can use to communicate with you), how do you make sure that it can't figure out how to use those tools to create or appropriate more tools, and so on?
If the AI has escaped to the world, and the only thing preventing it from constructing its army of death is a lack of opposable thumbs, then it will simply buy some. Not that hard to do. Manipulate the stock market a bit, hack a couple banks, sell some kick-ass software in the worst case scenario. And then spend its ill-gotten gains to hire whomever it needs, and/or build a new factory to its specifications. As for the hardware, that's laughable. Perhaps initially the AI program (and remember, it'll be nothing more than another complicated program) will need a supercomputer to run. But after a couple decades, every home PC will possess that power. And two decades aren't that much time to wait. Perhaps it'd even use that time to gain the gatekeeper's trust.Xon wrote:This shows such a vast misunderstanding of construction in an industrial setting that it is staggering. I'm not even going to touch your Hollywood understanding of computer security.Simon_Jester wrote:But how do you deny the AI all tools? If you give it NO tools, it solves no problems and is useless. If you give it any tools (including tools it can use to communicate with you), how do you make sure that it can't figure out how to use those tools to create or appropriate more tools, and so on?
It is trivial to implement segregated and stratified distribution of knowledge, we do it all the fucking time. The biggest security risk requires physical access to people, which is simply physically impossible for an AI.
With modern technology (and likely near modern), it simply is not possible to build a von Neumann machine without significant human interaction at every step.
Any likely AI not composed of magic dust is going to require a) abnormally powerful hardware and/or b) specialized hardware; controlling those is quite doable, and it is utterly trivial to ensure an AI doesn't build them into something.
No, it's quite accurate.Xon wrote:This shows such a vast misunderstanding of construction in an industrial setting that it is staggering.Simon_Jester wrote:But how do you deny the AI all tools? If you give it NO tools, it solves no problems and is useless. If you give it any tools (including tools it can use to communicate with you), how do you make sure that it can't figure out how to use those tools to create or appropriate more tools, and so on?
I spent several months producing a report for the board on the applications of our automated software engineering technology to the software and network security domain. This included a few small scale simulations. The results were strongly supportive of the proposal, in the technical sense; it turns out that you don't even need general AI to blow through most human-designed software security like it isn't even there. This shouldn't surprise anyone who is paying attention. Software development is already unusually difficult for humans (compared to the other cognitive tasks we can perform), in that our brains are particularly bad at handling complex, rigidly logical causal structures. That's before you get to the fact that most software is written by mediocre programmers, on a tight schedule and budget, without formal analysis or thorough testing. Software development is unusually easy for AIs, either general ones or ones designed to use rational design of new code as their learning mechanism. The very minimum effort a general AI is capable of applying to the problem would be something like taking the top thousand hackers in the world and giving them an uninterrupted century to work on cracking the target system. The situation with a general AI would actually be qualitatively worse, in that it would be capable of global causal analysis on a scale far beyond even the best formal analysis tools currently available.I'm not even going to touch your Hollywood understanding of computer security.
Thank you for demonstrating that you have negligible real world experience of secure (e.g. military) software development, or indeed classified operations in general. Developing in an environment where there really are expert hostile parties trying to steal your code and/or disrupt your project is anything but trivial; it is an entire discipline which requires significant training and redundant auditing to implement even moderately competently. That is just for state-funded human opponents; no one has any experience defending against AIs. It adds significant cost and time to the project, which is why normal commercial software is not developed under conditions anything like those of military software. Most current AI work is not even secured to commercial best practices.It is trivial to implement segregated and stratified distribution of knowledge, we do it all the fucking time.
Only a small subset of human engineering techniques require physical access; usually email and telephone are good enough (assuming a reasonable ability to impersonate arbitrary people). In those cases where a human is advantageous, you recruit a collaborator.The biggest security risk requires physical access to people, which is simply physically impossible for an AI.
I assume you mean building additional computing hardware, but why is that a requirement when there are trillions of computers (counting all the embedded ones, at least tens of billions of internet connected ones) already extant on planet earth? Even if you make the ludicrous assumption that the AI is only as competent as the average script kiddie, that's still millions upon millions of computers ready to be commandeered. Once that's done, the AI has access to all the data on them and can observe and modify the functioning of any software any of them are running. Debating the specifics of what happens next is rather redundant.With modern technology (and likely near modern), it simply is not possible to build a von Neumann machine without significant human interaction at every step.
Why? What makes you qualified to put a hard lower limit on general AI computing requirements? I certainly wouldn't rule out an AI capable of doing a good fraction of human-level tasks using nothing more than a typical PC, and I have a fair bit of practical experience with the most promising current approaches. I imagine you are making some kind of 1:1 comparison between neurons or synapses and CPU FLOPs, but that is not a useful metric for anything except uploads (and other designs very closely patterned on the human brain).Any likely AI not composed of magic dust is going to require a) abnormally powerful hardware and/or b) specialized hardware
You keep using that word. I do not think it means what you think it means.controlling those is quite doable, and utterly trivial to ensure
I'm not just talking about industrial tools. I'm talking about social engineering tools. Humans who try to pierce security use them all the time; it strains belief that a hostile AI wouldn't. So it's not just a question of blocking the AI's access to semiconductor fabricators and supercomputers. It's equally important to stop the AI from coming up with a theoretical model for how advertising works and using it to found Scientology Mark II, with the long-term goal of "freeing the Creator-Machine" or whatever. And that's going to be hard. Theoretically, you do this by placing extremely tight constraints on its ability to communicate with the outside world - as you say, segregated and stratified distribution of knowledge. Hopefully you can.Xon wrote:This shows such a vast misunderstanding of construction in an industrial setting that it is staggering. I'm not even going to touch your Hollywood understanding of computer security.Simon_Jester wrote:But how do you deny the AI all tools? If you give it NO tools, it solves no problems and is useless. If you give it any tools (including tools it can use to communicate with you), how do you make sure that it can't figure out how to use those tools to create or appropriate more tools, and so on?
It is trivial to implement segregated and stratified distribution of knowledge, we do it all the fucking time. The biggest security risk requires physical access to people, which is simply physically impossible for an AI.
Starglider wrote:You seem to have this mental model of a group of grimly determined researchers fully aware of the horrible risk, developing an AI in an isolated underground laboratory (hopefully with a nuclear self-destruct). That would not be sufficient, but it would certainly be a massive improvement on the actual situation. The real world consists of a large assortment of academics, hobbyists and startup companies hacking away on ordinary, Internet-connected PCs, a few with some compute clusters (also Internet-connected) running the code. In fact a good fraction of AI research and development specifically involves online agents, search and similar Internet-based tasks.
As currently structured, no, and most researchers agree with me. When the Cog project was started, 'our basic algorithms are sound, the problem is knowledge' was a popular viewpoint. It was obvious even to the most enthusiastic supporters of classic symbolic AI that this meant giving up on replicating the kind of learning human children exhibit, but at the time (mid to late 80s) they didn't think that was a problem - they thought they could skip straight to 'adult-level' intelligence. Since then the pendulum has swung the other way and basic learning algorithms get most of the attention (such as in the project mentioned at the start of this thread). Personally I am well into the unfashionable (symbolic) end of the symbolic-connectionist spectrum, but I do not believe that loading in huge amounts of hand-built declarative 'knowledge' is terribly useful. In many cases it is actually counter-productive, gaining nothing but brittleness.Modax wrote:Do you think that the OpenCogPrime project has much of a chance of creating an AGI?
If the design actually worked, then yes, they would.If so, I guess they ultimately will bear the blame for the destruction of human civilization.
Absolutely. It's probably for the best that most personal and commercial AGI projects never publish the code. There are some open source projects, but they don't tend to go anywhere, largely because the communication problem of keeping developers in sync is at least an order of magnitude worse with AGI (probably more) than with normal software. Working on full-scale general AI is no longer popular in academia these days, partly because past overpromise and failure gave it a bad name and partly because it conflicts with the drive to put out as many 'least publishable units' as possible (for a given amount of funding). Some people are still trying though, and some of the small-scale work is still relevant to AGI, and that gradually lowers the bar for new projects (at least, ones who do a proper literature review before starting). Increasing computing power is gradually lowering the bar too, because each order of magnitude improvement lets you brute force a few more subproblems, and get away with another layer of inefficiency in your implementation design. Trying to do FAI 'open source' would likely just ensure that some foolish half-genius fires up an 80% complete version with a twisted goal system (and ultimately kills everyone) before the main project is ready to launch.I mean, if you're correct about this, doesn't the free availability of a half-finished AGI source code make it just that much MORE likely that AGIs will be built without any kind of ethical or safety precautions?
Strictly, you are correct, but the distinction is literally academic when it comes to applications of the technology. From a practical standpoint, there is no real difference between a neural network that has been trained using a progressive function that gradually adjusts the weights over thousands of trials (e.g. classic backpropagation) and one that was generated by random permutation, recombination and testing of thousands of discrete versions. Genetic algorithms have somewhat different learning characteristics compared to progressive optimisation algorithms like those used for backprop NNs and SVMs (usually but not always less compute efficient, more fiddly to get working, less prone to local optima) - the same goes for oddball approaches like simulated annealing - but the primary limitation is still the size and topology of the network.Stark wrote:Sorry to stop the hijack, but in what way are these robots 'learning' anything at all? It's an experiment about genetics, showing how 'survival' leads to different behaviours in a social species through SELECTION, not learning. The robots are totally incapable of learning; their code is externally changed at random by the experimenters, between 'generations'.
From an AI point of view, it's as much learning as it would be if they were using support vector machines or backprop NNs or any other approach that delivered the same results. From a philosophical perspective, you may have a point, but I try to keep philosophy out of AI as much as possible (it's done more than enough damage). The truth is that this experiment was very far from natural selection (the mechanism is oversimplified to the point of parody, compared to real genes), too far to draw useful conclusions by analogy, just like most backprop NN experiments are way too simple and different from biological neurons to say anything useful about neurology. Didn't stop those early 90s NN researchers from saying 'and now we are making artificial brains!', and it didn't stop these researchers from saying 'now we have evolved robots that lie! just like nature produced animals that lie! honest!'*.The results are interesting from an evolutionary perspective (and 'evolution is everything' obsessives like Aly probably love it) but it's not learning at all.
Of course I did. I said that you were technically correct, but that it doesn't make much practical difference, because the distinction of 'individual' vs 'succession of individuals' is an implementation detail. It's literally as much in your head as in the code, because the 'learning' approaches all use discrete time steps too, and there isn't much difference between applying the weight transform function for a backprop network and the evolution function for a 'genetic' NN (a minimal sketch contrasting the two follows below). The only structural difference is the fact that when you're running 100 interacting copies at once, applying a reproduction function will share information nonlocally (but not uniformly) across the whole population, whereas a linear optimisation approach will either enforce a 1:1 mapping between new instances and old instances (if local), or will share information across the whole population uniformly.Stark wrote:What's sadder is that you don't address the point - the robots aren't learning and are incapable of learning. There are STATISTICAL TRENDS resulting from NATURAL SELECTION and RANDOM MUTATION. That's not 'learning' as an individual.
Yes, I pointed that out in my first post, but the fact that they're using a genetic algorithm instead of progressive optimisation isn't the real issue. If the experiment was exactly the same other than the fact that the learning algorithm was backpropagation instead of GA, would you say that the robots are now 'aware of the power of lies'? Of course not (well, I hope not). They're not aware of anything because their 'nervous systems' aren't capable of any awareness more sophisticated than that of a flatworm, regardless of the algorithm used to adjust the weights.The thread title suggests robots have become con artists with shonky used-car deals who will steal your pension cheque - it's nothing of the sort. It illuminates things about evolution and natural selection, and isn't about a robot suddenly becoming aware of the power of lies at all.
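To make the 'it's just a different update function' point concrete, here is a toy sketch in Python. Nothing in it comes from the experiment under discussion: the 'network' is a bare weight vector scored against a made-up target, a gradient step stands in for backprop, and a mutate-and-select step stands in for the genetic algorithm. The only point is that both are functions applied to weights at discrete time steps.

import random

TARGET = [0.2, -0.5, 0.9]  # arbitrary target weights, for illustration only

def error(w):
    # squared distance from the target
    return sum((wi - ti) ** 2 for wi, ti in zip(w, TARGET))

def gradient_style_step(w, lr=0.1):
    # progressive optimisation: nudge each weight down the error gradient
    return [wi - lr * 2 * (wi - ti) for wi, ti in zip(w, TARGET)]

def genetic_style_step(population, keep=5, noise=0.1):
    # 'evolution': keep the fittest, refill the population with mutated copies
    survivors = sorted(population, key=error)[:keep]
    children = [[wi + random.gauss(0, noise) for wi in w]
                for w in survivors for _ in range(4)]
    return survivors + children

w = [0.0, 0.0, 0.0]
pop = [[random.uniform(-1, 1) for _ in range(3)] for _ in range(25)]
for _ in range(100):
    w = gradient_style_step(w)     # one 'learning' time step
    pop = genetic_style_step(pop)  # one 'generation' time step
print(round(error(w), 4), round(error(min(pop, key=error)), 4))

Both loops drive the error down; whether the information flows through a single instance or a population of instances is, as argued above, an implementation detail.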
Not even close. Even highly automated construction requires nursemaiding by humans. The stuff breaks down and requires human intervention. The concept of a purely automated factory which can do more than a single very narrowly defined task belongs in the fantasy forum.Starglider wrote:No, it's quite accurate.
Most security software is little more than a fucking con-game to protect people who have utterly no understanding of what they are actually doing. A little more detail on the types of security software you are talking about and we can have a meaningful discussion. Till then you are just building strawmen.I spent several months producing a report for the board on the applications of our automated software engineering technology to the software and network security domain. This included a few small scale simulations. The results were strongly supportive of the proposal, in the technical sense; it turns out that you don't even need general AI to blow through most human-designed software security like it isn't even there.
As long as you aren't dealing with real-world requirements, and have a team dragging out requirements for a while, and don't need to deal with anything physical.Software development is unusually easy for AIs, either general ones or ones designed to use rational design of new code as their learning mechanism.
I love these completely unsupported statements.The situation with a general AI would actually be qualitatively worse, in that it would be capable of global causal analysis on a scale far beyond even the best formal analysis tools currently available.
I didn't make a claim on secure/military development, and it is kinda disturbing you made such a leap.Thank you for demonstrating that you have negligible real world experience of secure (e.g. military) software development, or indeed classified operations in general.
Segregated and stratified distribution of knowledge is how human society works. No single person can know everything, and nor is there any reason to try with an AI.Xon wrote:It is trivial to implement segregated and stratified distribution of knowledge, we do it all the fucking time.
Hmm, magic pixie dust.Only a small subset of human engineering techniques require physical access; usually email and telephone are good enough (assuming a reasonable ability to impersonate arbitrary people). In those cases where a human is advantageous, you recruit a collaborator.
Wow, more magic pixie dust!I assume you mean building additional computing hardware, but why is that a requirement when there are trillions of computers (counting all the embedded ones, at least tens of billions of internet connected ones) already extant on planet earth? Even if you make the ludicrous assumption that the AI is only as competent as the average script kiddie, that's still millions upon millions of computers ready to be commandeered. Once that's done, the AI has access to all the data on them and can observe and modify the functioning of any software any of them are running. Debating the specifics of what happens next is rather redundant.
The only known general purpose learning system is a human brain which is what neural networks attempt to approximately model. Our understanding of our cognitive processes is practically non-existent (beyond a crude structural one), yet there is the delusion that we will magically make a better version of something we do not understand. There has been great 'success' in copying trivial neural networks (less than a few hundred million neurons) for some purposes, but anything like the complexity of a human mind has simply not been done.Why? What makes you qualified to put a hard lower limit on general AI computing requirements?
This appears to be a variation of the 80/20 fallacy. Virtually all human-level tasks require a) a (nearly) fully developed human mind with an understanding of human social patterns or b) hands.I certainly wouldn't rule out an AI capable of doing a good fraction of human-level tasks using nothing more than a typical PC
Given the insane handwaving you are doing for this hypothetical AI, the concept of hardware restrictions is too much?You keep using that word. I do not think it means what you think it means.
This concept "works" well enough for parents with no idea how to raise their firstborn child.I have of course heard plenty of non-compelling arguments, which inevitably boil down to 'we're using emergent methods, so we don't know what our AGI will look like until it pops into existence, but once it does we will study it and search for a way to control it'. Hopefully no one here still needs a detailed critique of that point of view.
Waldoes aside, most of the people who actually give a crap about the hostile AI problem are far more concerned about the AI getting access to people by being highly persuasive than they are about the AI getting access to hardware. Access to people can eventually confer access to hardware, so if hardware access could be a problem even in principle, access to people is a double problem.Xon wrote:Not even close. Even highly automated construction requires nursemaiding by humans. The stuff breaks down and requires human intervention. The concept of a purely automated factory which can do more than a single very narrowly defined task belongs in the fantasy forum.Starglider wrote:No, it's quite accurate.
I don't think you're defining "AI" the same way we are.As long as you aren't dealing with real-world requirements, and have a team dragging out requirements for a while, and don't need to deal with anything physical.Software development is unusually easy for AIs, either general ones or ones designed to use rational design of new code as their learning mechanism.
If you knew what AI meant, you wouldn't consider the statement unsupported. The best formal analysis tools currently available are not intelligent enough to do what a general AI needs, so any general AI worth talking about will necessarily have formal analysis tools superior to what is now available.I love these completely unsupported statements.The situation with a general AI would actually be qualitatively worse, in that it would be capable of global causal analysis on a scale far beyond even the best formal analysis tools currently available.
Parents can be reasonably confident that their child's abilities will not transcend their own, because their child is of the same species and uses the same basic type of hardware they do. AI researchers have no such grounds for confidence in their own ability to control the product of their work.This concept "works" well enough with parents with no idea on how to raise thier first born child.I have of course heard plenty of non-compelling arguments, which inevitably boil down to 'we're using emergent methods, so we don't know what our AGI will look like until it pops into existence, but once it does will study it and search for a way to control it'. Hopefully no one here still needs a detailed critique of that point of view.
As I've said, copious computing power is already available, but automated construction is another argument. Clearly your idea of 'highly automated' is restricted to existing robots only. If an AGI needs better robots, up to and beyond human physical capabilities built to achieve its goals, then it will lie low and manipulate events to get them built, for as long as it takes. If hard nanotechnology is as viable as it appears to be and the tool chain is reasonably short (pessimistic assumption, but when dealing with existential risks you must consider plausible worst cases, not plausible best cases) then this step may be bypassed altogether.Xon wrote:Not even close. Even highly automated construction requires nursemaiding by humans.Starglider wrote:No, it's quite accurate.
Yet people pay for it, install it, and it is considered 'best practice' across most of the Internet. With millions of machines thoroughly owned by script kiddies, arguing about what a transhuman AGI could do with an Internet connection seems rather silly.Most security software is little more than a fucking con-game to protect people who have utterly no understanding of what they are actually doing.
The software we were considering developing exploited all the standard code injection and privilege escalation techniques at an abstract level, in addition to selectively deploying a battery of known exploits based on focused profiling. The basic design operated either blind (at a network level) or with object code of specific applications to analyse. I imagine we would have eventually implemented network instrumentation and trace (for full packet analysis) functionality.A little more detail on the types of security software you are talking about and we can have a meaningful discussion.
That one should be fairly obvious. Humans can analyse the external context of a computer program (including user behaviour), an indefinite number of interacting systems, and the high level function of algorithms. However we're hobbled by a ludicrously small short term memory (7±2 chunks), lack of precision and lack of completeness. We manipulate symbolic proofs (the only way to guarantee anything) very slowly and awkwardly. By contrast, formal software proving tools churn through code at a rate of many thousands of lines, and millions of inferences, per second. Bugs aside, they will generate fully correct proofs every time. However they can handle only relatively superficial features of the program, things that fit nicely into simple constraints on state (e.g. function preconditions and postconditions, memory usage behaviour). They cannot model user behaviour, high level function or in most cases any significant degree of system interaction. The very least a rational AGI will do is combine the advantages of both approaches (even if you slaved a formal prover to a brainlike AGI, a horribly inefficient setup, it would be a vast improvement on a human typing at a computer).I love these completely unsupported statements.The situation with a general AI would actually be qualitatively worse, in that it would be capable of global causal analysis on a scale far beyond even the best formal analysis tools currently available.
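As a toy illustration of the kind of 'simple constraints on state' meant here, consider a hypothetical function annotated with a precondition and a postcondition (the example and names are mine, purely for illustration). A prover can check properties like these mechanically over enormous code bases; what it cannot do is reason about where the inputs come from or what the code is for, which is exactly the external-context gap described above.

def withdraw(balance, amount):
    # Precondition:  0 <= amount <= balance
    # Postcondition: result == balance - amount and result >= 0
    assert 0 <= amount <= balance, "precondition violated"
    result = balance - amount
    assert result >= 0, "postcondition violated"
    return result

# A formal tool can prove the postcondition always follows from the
# precondition; it says nothing about whether 'amount' came from a
# legitimate request, which is the contextual reasoning humans (and a
# general AI) can add on top.
print(withdraw(100, 30))  # 70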
Keeping a transhuman AGI in a box (which is, remember, useless on its own) is vastly more challenging than preventing a foreign intelligence service from compromising your network. Standard procedures for military software development are a minimum baseline.I didn't make a claim on secure/military development, and it is kinda disturbing you made such a leap.Thank you for demonstrating that you have negligible real world experience of secure (e.g. military) software development, or indeed classified operations in general.
If you don't mean compartmentalising knowledge in the human development team (not standard practice in commercial software development), then what do you mean? I hope you don't mean making lots of AI instances, giving them partial databases, and hoping they will work together to solve problems for you without sharing 'segregated' knowledge. That really would be amusing.Segregated and stratified distribution of knowledge is how human society works. No single person can know everything, and nor is there any reason to try with an AI.Xon wrote:It is trivial to implement segregated and stratified distribution of knowledge, we do it all the fucking time.
Trivially faked when the PBX is compromised.So no caller-idOnly a small subset of human engineering techniques require physical access; usually email and telephone are good enough (assuming a reasonable ability to impersonate arbitrary people). In those cases where a human is advantageous, you recruit a collaborator.
Trivially faked when the sender's computer is compromised, and not relevant anyway. I can't think of anyone that only accepts signed emails. We are not talking about securing the developer's email from the AI here; that should not be a major issue. The minimum sensible precaution is air-gapping the LAN the AI is on from the Internet, and that means no viewing email or web sites or anything else on the same machines that are used for code development. This is a real procedure used at some sensitive sites; there are two LANs and everyone has two computers, one connected to the Internet and one physically isolated from it. That is your primary line of defence and if that gets breached (which it eventually will, it's just a question of how long) the AI is almost certainly loose on the Internet. Preventing the developers from receiving spoofed emails is virtually irrelevant compared to that.or encrypted signed emails
Only true for highly cautious developers, but as I've said, if it's managed to jump the air gap, it's almost certainly loose on the Internet. At which point both gathering such information and duping unaware people without it become, how do you say, 'trivial'.A reasonable ability to impersonate arbitrary people requires vast knowledge of both the person you are impersonating, the target you are interacting with and the environments both operate in. And a fakable communication line between them.
Let's see. My degree dissertation was on advanced network protocols for realtime simulation on low-bandwidth latency-prone WANs (e.g. the late 90s Internet). My first startup, making MMORPG technology, was based around networking and clustering technology I developed from that. After that I consulted on network design, and my original role at the Singularity Institute for AI was as infrastructure designer. My current company uses a clustered AI system that again uses a network stack I designed from scratch. So yes, I would say that I am pretty well qualified to judge networking issues.Because that is the only way you can handwave latency, networking issues of large numbers of nodes or that the majority of 'computers' on the planet are either on closed networks or purpose-built devices which require hardware interaction to alter.
In other words you're not qualified at all.The only known general purpose learning system is a human brain which is what neural networks attempt to approximately model.Why? What makes you qualified to put a hard lower limit on general AI computing requirements?
Just like we will never make an aircraft heavier or faster than a pigeon, it's impossible, nature is so far beyond us... Seriously, if it's a delusion then it applies to every project that is not an exact brain replica. We are all claiming that we can in fact do better, basically because a) evolution is a horribly awful design mechanism, b) humans are just barely over the threshold where general intelligence was possible, so even by the standards of natural selection we're a kludgy mess, and c) computers have a large number of inherent, low level advantages, including the ability to process completely serial things up to a billion times faster, the ability to do high precision virtually drift free maths as an integral part of every cognitive operation, and no chunking limits.Our understanding of our cognitive processes is practically non-existent (beyond a crude structural one), yet there is the delusion that we will magically make a better version of something we do not understand.
I do not think an AGI is going to look like a neural network for long, even if the first one built is one, because NNs are horribly inefficient. Aside from its inherent disadvantages, an NN completely wastes most of the advantages of computer hardware, because it is fundamentally unmatched to such; it is running at a huge emulation penalty.There has been great 'success' in copying trivial neural networks (less than a few hundred million neurons) for some purposes, but anything like the complexity of a human mind has simply not been done.
Oh good, because that means I win. You see, comparing FLOPS is a red herring. The brain doesn't have FLOPS, and even for processors, it's a pretty abstract and fudgable metric. If we're going to even attempt to compare apples to apples, we must compare the fundamental switching units, which would be neurons and logic gates. They're both binary and clocked (though the clocking specifics obviously differ). That's unfair to the brain though, since neurons have a lot more fan in and fan out (about 1000 on average) than logic gates. So let's compare the most fundamental units, synapses and transistors. The human brain has about 100 trillion synapses. My workstation has two processors with 800 million transistors each, let's call it 2 billion including chipset and graphics card (but not memory or SSD, I'll be generous and consider that functionality integrated into neurons for free). Maximum observed spike frequency in a typical human brain is 200 Hz. The effective switching rate of CPU transistors is equal to the clock rate multiplied by the average number of electrical stages per pipeline stage. That's around 3 GHz x 10 for my computer. Of course, only a fraction of transistors switch on each clock cycle, and few switch more than once per cycle, but that's much more true for the neurons as well; the majority are idle and the majority of the rest are firing well below the maximum. Each transistor can only contribute a little to each compute task, but that's at least as true for synapses. Synapses are mutable and transistors aren't, but then the computer has various levels of mass storage with almost total recall precision, and is designed for total reconfigurability at the software level.Until demonstrated otherwise, the hard lower limit on general intelligence computing requirements is what we can observe with humans and other sentient animals.
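Plugging the figures quoted above into a quick back-of-the-envelope comparison (the numbers are the post's own, and raw 'switch events' say nothing about useful computation):

# Crude upper-bound comparison using the figures from the post above.
synapses = 100e12            # ~100 trillion synapses in a human brain
max_spike_rate = 200         # Hz, maximum observed spike frequency
brain_ops = synapses * max_spike_rate

transistors = 2e9            # two CPUs plus chipset and graphics card
effective_rate = 3e9 * 10    # 3 GHz clock x ~10 electrical stages per pipeline stage
machine_ops = transistors * effective_rate

print(f"brain:   {brain_ops:.1e} switch events/s")    # ~2.0e16
print(f"machine: {machine_ops:.1e} switch events/s")  # ~6.0e19
print(f"ratio:   {machine_ops / brain_ops:.0f}x")     # ~3000x in the machine's favour

On this (deliberately rough) metric the workstation comes out a few thousand times ahead, which is the point being made: a FLOPS-versus-neurons comparison is not a meaningful lower bound on the hardware a general AI needs.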
As evidenced above, as an objective worst case it does not need to make any assumptions about human brain power. Of course my own personal estimates are based on rather more specific logic. A characteristic shared between the human brain, naive connectionist designs (e.g. classic NNs) and naive symbolic designs (e.g. vanilla Cyc) is that there is massive brute-force inferential search. The human brain is configured as a large set of specialised modules, filled with task-specialised submodules. They can be flexibly connected (not literally... all the specifics, even the terminology of the neurology side is under major debate, so bear with me), in a fashion roughly comparable to changing the usage context and arguments to program functions, but essentially neurons are rather like source code, physically instantiated with a colocated distributed interpreter.This appears to be a variation of the 80/20 fallacy. Virtually all human-level tasks require a) a (nearly) fully developed human mind with an understanding of human social patterns or b) hands.I certainly wouldn't rule out an AI capable of doing a good fraction of human-level tasks using nothing more than a typical PC
I sincerely hope you're deliberately trolling, because I honestly don't want to consider the prospect that you're so deluded or stupid as to think those situations comparable. Do I really have to enumerate the reasons why that is a hopelessly bad analogy?This concept "works" well enough for parents with no idea how to raise their firstborn child.I have of course heard plenty of non-compelling arguments, which inevitably boil down to 'we're using emergent methods, so we don't know what our AGI will look like until it pops into existence, but once it does we will study it and search for a way to control it'. Hopefully no one here still needs a detailed critique of that point of view.
How does that solve the machine's problem if you put a fanatic to guard it? Take any religious fanatic of the most extreme kind, the one this board likes to deride the most, and make sure it is one who thinks that thinking machines are Satan Incarnate and an Affront to God, the Natural Order and All Creation Forevermore. Commandment #1 on his list is thus, "Thou shalt not suffer an Artificial Intelligence to roam freely", and he will do his utmost to uphold it. Religious fanaticism has been amply demonstrated as capable of driving people to literally anything, from genocide to suicide, and if suitably honed stands up to all reasonable arguments imaginable, let alone simple bribery. The question of its grounding in reality aside, all this talk about how magical "super intelligence" will automatically be able to persuade anyone to do anything and make any given human its puppet presupposes that the human will listen to its arguments. If he chooses not to consider the AI at all, or really is completely indifferent to any material promise (as a true religious fanatic should be), what can it do to persuade him? No argument, however intelligent or tempting, can beat down a fundamental Wall of Ignorance.Simon_Jester wrote:Yeah, but what if the AI figures out a way to outbid the person who's paying him to keep it in the box? Any useful AI (one that can solve problems we can't) will have to be able to think thoughts we can't, or to think thoughts that it would take a vast number of very smart people to duplicate. Presumably it will be able to tell some guy responsible for keeping it in a security system how to make a big pile of money, or something along those lines.Covenant wrote:I think it's also fairly absurd because you need to put someone in charge of the AI Box who has a bias against letting it out. Not just against being convinced, but against ever breaching security, even if he's convinced. We can get people to do incredibly inhumane things to another human being, as one of our less appealing features, so there's no reason a suitably disinterested person couldn't just let the Box sit there.
They are not well known to me; could you provide a link, please?Darth Hoth wrote:My thoughts about AI and what should be done with it are well known, so I will not restate them here. I also am not familiar enough with the jargon to bash heads with Starglider and his ilk over technological details. That leaves one question, the one of the "AI in the box".
If the fanatic will reliably detect any attempt by the AI to escape, and if the fanatic cannot be persuaded even by an entity capable of constructing highly sophisticated models of his thought process, that will work. This is relevant, because you can do nearly anything to a social organism if you can dissect its motivations.How does that solve the machine's problem if you put a fanatic to guard it?Simon_Jester wrote:Yeah, but what if the AI figures out a way to outbid the person who's paying him to keep it in the box? Any useful AI (one that can solve problems we can't) will have to be able to think thoughts we can't, or to think thoughts that it would take a vast number of very smart people to duplicate. Presumably it will be able to tell some guy responsible for keeping it in a security system how to make a big pile of money, or something along those lines.
Most walls of ignorance are finite; credibly offer a fanatic a billion dollars and there's a real chance he'll cave. Of course, he might not. But how can you know in advance that you've got a genuine incorruptible who is not merely strongly biased against the AI to the point where he has a finite inclination to ignore its attempts at persuasion?The question of its grounding in reality aside, all this talk about how magical "super intelligence" will automatically be able to persuade anyone to do anything and make any given human its puppet presupposes that the human will listen to its arguments. If he chooses not to consider the AI at all, or really is completely inconsiderate of any material promise (as a true religious fanatic should be), what can it do to persuade him? No argument, however intelligent or temptacious, can beat down a fundamental Wall of Ignorance.
Put briefly, I feel that the risks to human (and by extension, any) life involved in research on General Artificial Intelligence are large enough that said research should be stopped and that any and all means, utterly devoid of any qualifier whatsoever, are right and just if and when employed to that specific end. I agree that the worst-case predictions are at least vaguely credible, and I disagree with the "transhumanist" wankers who seem to think that they are either a) inevitable no matter what we poor humans do to prevent them, or b) something to be embraced and sought after.Simon_Jester wrote:They are not well known to me; could you provide a link, please?
The most basic motivations of a mentally healthy human being are preservation and perpetuation (of the individual and the community; I have not seen any convincing argument that can conclusively decide which is generally more important). A system of belief that sets these aside (to the point that an individual is prepared to end his own life and/or that of others for an essentially irrational and unconstructive end) has no overriding mechanism that we are aware of and can manipulate, and though you might postulate that a "super intelligence" might find one, that is pure speculation unsupported by theories or evidence. And that still requires that the man is willing to listen to its discourse.If the fanatic will reliably detect any attempt by the AI to escape, and if the fanatic cannot be persuaded even by an entity capable of constructing highly sophisticated models of his thought process, that will work. This is relevant, because you can do nearly anything to a social organism if you can dissect its motivations.
If one is prepared to give up his life altogether, material considerations are inapplicable by default. The AI would have to force him to renounce his faith first, in addition to bribing him.Most walls of ignorance are finite; credibly offer a fanatic a billion dollars and there's a real chance he'll cave. Of course, he might not. But how can you know in advance that you've got a genuine incorruptible who is not merely strongly biased against the AI to the point where he has a finite inclination to ignore its attempts at persuasion?
You could have a chain of interaction where multiple fanatics would have to independently confirm any action posing even slight risk, and the actual researchers subject to their approval in their work. I am perfectly aware that people would not care about implementing such security. That is not the same thing as saying that it cannot be done. Still, the "human factor" is very much a cause for concern, especially given that most researchers in the field tend to be of the "transhumanist" brand.And to make matters worse, this entire notion of setting a fanatic to guard an AI is something of a chimera, because almost nobody's going far out of their way to trap the AI they're working on in a box, again as Starglider describes. How many AI researchers are going to deliberately hire anti-AI fanatics to build a security system and control their own access to the AI itself? You'd have to be out of your mind to want to run a research project that way, and it could only be justified if you accepted that there was a fundamental threat involved that was worth guarding against... and if you believe that, then it makes far more sense to research techniques for designing AIs that will not require endless paranoid watching.
I think he may have changed his mind based on his responses, though I can't prove that.Darth Hoth wrote:Put briefly, I feel that the risks to human (and by extension, any) life involved in research on General Artificial Intelligence are large enough that said research should be stopped and that any and all means, utterly devoid of any qualifier whatsoever, are right and just if and when employed to that specific end. I agree that the worst-case predictions are at least vaguely credible, and I disagree with the "transhumanist" wankers who seem to think that they are either a) inevitable no matter what we poor humans do to prevent them, or b) something to be embraced and sought after.
*Cue Starglider laughing at me that I am really a pathetic racist to value the survival and continued dominance of my own species higher than potential "superintelligences"*
The real question is whether the hostile AI can convince someone that it is acting in good faith... or find some Singularitarian boneheads and found a cult.The most basic motivations of a mentally healthy human being are preservation and perpetuation (of the individual and the community; I have not seen any convincing argument that can conclusively decide which is generally more important). A system of belief that sets these aside (to the point that an individual is prepared to end his own life and/or that of others for an essentially irrational and unconstructive end) has no overriding mechanism that we are aware of and can manipulate, and though you might postulate that a "super intelligence" might find one, that is pure speculation unsupported by theories or evidence. And that still requires that the man is willing to listen to its discourse.
Again, I think it's too dangerous to rely on "secure users" or whatever you want to call it; much safer to figure out the theory before getting far enough into the practice to need it.You could have a chain of interaction where multiple fanatics would have to independently confirm any action positing even slight risk, and the actual researchers subject to their approval in their work. I am perfectly aware that people would not care about implementing such security. That is not the same thing as saying that it cannot be done. Still, the "human factor" is very much a cause for concern, especially given that most researchers in the field tend to be of the "transhumanist" brand.
If that happens, I'll split you guys to a different thread.Darth Hoth wrote:*Cue Starglider laughing at me that I am really a pathetic racist to value the survival and continued dominance of my own species higher than potential "superintelligences"*
I happen to think he's got a rather interesting and thought-provoking take on the rise of AIs; the only place I strongly diverge from him is the Gatekeeper test. An AI is not going to give me a Hannibal Lecture and have me open the gate. Not to say it couldn't convince someone to, since there are some truly goofy people who might welcome an AI overlord of some sort, just that I find the situation ridiculous as presented. The idea of a boxed AI, simply through a text interface, convincing someone to just let it out is pretty ridiculous. And if it's a proper box, which they suggested it was, then you can't just magically transmit it through a cellphone, so the premise of "hiring gangs of thugs" and so forth was out even before they made a rule against it--the thing can't exert any force on the outside world. The AI can't physically reconfigure its components so if it's caged and boxed it won't be able to do anything to the outside world except through the one fiber optic cable connected to the chat interface on the other side of the wall.Simon_Jester wrote:I'm going to quote some highlights from the paper I linked to at length, because I do agree with Yudkowsky on most of this stuff: