Ryan Thunder wrote:...Uh, no. Irbis, without wanting to be derisive--have you ever written so much as a "hello, world" script?
Yes, I did, o Village Idiot. In fact, I earned an IT degree a few years back, before I moved into the multimedia industry. You?
And how about offering some arguments other than ad personam attacks?
Zinegata wrote:What's funny is that I'm just pointing out how improbable the whole thing is even if we assume that somebody actually programmed a God-AI software into existence in the first place; which as I've repeatedly pointed out is not actually an easy (or imminent) thing to do.
Yes, it's neither easy nor probable. However, I just wanted to point out that ascribing today's human limitations to something built 20 years from now, far more intelligent than any human, is downright silly.
Nor is it even necessary to network such a machine, because only idiots think that the World Wide Web with its enormous mass of contradictory information is an ideal tool for "teaching" an AI. Heck, an AI that bases its knowledge base on the Internet would probably conclude that its purpose in life was to create porn for anything and everything.
And that would be a bad thing how, exactly?
Again, viruses and other similar "infections" are by necessity small programs to avoid detection. You can't have a God-AI program the size of a virus, because it literally can't have that kind of functionality with only a couple of bytes of data.
Here's a reality check for you - even if we're talking about a 100 GB file, a modern desktop today ships with a 500-1000 GB disk. Do you know how much of that gets used on a machine doing typical office work and net surfing? 100 GB, tops. There are literally millions of such machines today; care to tell me how much free space the typical desktop, or even cell phone, will have in 10 years?
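Don't take my word for it - here's a minimal sketch that asks the OS exactly that question. The 100 GB figure is just the hypothetical payload size from this thread, nothing more:

```python
# Sanity-check the free-space claim: how much of this disk is actually free,
# and would a hypothetical 100 GB payload fit?
import shutil

PAYLOAD_GB = 100  # the hypothetical "God-AI" file size from this thread

total, used, free = shutil.disk_usage("/")   # use "C:\\" on Windows
print(f"disk: {total / 10**9:.0f} GB total, "
      f"{used / 10**9:.0f} GB used, {free / 10**9:.0f} GB free")
print("payload fits" if free > PAYLOAD_GB * 10**9 else "payload does not fit")
```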
And while the main program won't be the size of a virus, if an AI wanted to spread itself by writing a virus - one that infects as many machines as possible, scouts for good candidates for the full AI payload, and activates sleeper cells in case of a purge - the actual virus would be trivially small. Good luck purging that, seeing as any active AI would keep modifying this safety net, observing human efforts to combat it and learning from them.
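To make the sleeper-cell point concrete, here's a toy simulation of the purge-and-reseed dynamic. Every number in it is invented purely for illustration; it models no real malware:

```python
# Toy model: purges wipe most *active* (detectable) copies, but surviving
# dormant seeds quietly wake replacements back up afterwards.
import random

random.seed(42)
MACHINES = 10_000
ACTIVE, DORMANT, CLEAN = "active", "dormant", "clean"
state = [CLEAN] * MACHINES

# Initial spread: a fraction of hosts get the tiny virus; most copies stay
# dormant, a few become full "AI" nodes.
for i in random.sample(range(MACHINES), 2_000):
    state[i] = ACTIVE if random.random() < 0.05 else DORMANT

for purge_round in range(1, 4):
    # Defenders find and clean 95% of the noisy active nodes...
    for i in range(MACHINES):
        if state[i] == ACTIVE and random.random() < 0.95:
            state[i] = CLEAN
    # ...but each surviving dormant seed has a small chance of reactivating.
    for i in range(MACHINES):
        if state[i] == DORMANT and random.random() < 0.02:
            state[i] = ACTIVE
    print(f"after purge {purge_round}: "
          f"{state.count(ACTIVE)} active, {state.count(DORMANT)} dormant")
```

Note how the active population bounces back after every purge as long as even a handful of dormant seeds survive.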
Actually, if you again look at the potential file size of an AI program, it will in fact consume the majority of your bandwidth. We're not talking about a 100kb file here. We're probably going to talk in terms of a 100GB file at the minimum. Even if you have a fiber internet connection, you'll generally be working under a usage subscription plan so you're gonna see your usage used up. And there is no way to prevent the inevitable hard disk slowdown as it tries to copy such a huge file into your PC.
Another reality check for you - over the past 2 days, I downloaded an 80 GB raw movie from my colleagues, onto a laptop with a 4200 RPM hard drive. It came down at 1-2 MB/s, and you know how many slowdowns I noticed? None. I even played a few demanding games in the meantime, loading plenty of data from that same drive, with little difference to my usual experience. Care to tell me how much slowdown I'd notice on a 7200 RPM desktop drive? Or in 10 years, when all drives will be SSDs anyway?
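If you don't believe the arithmetic, check it yourself:

```python
# Back-of-envelope check: does 80 GB at 1-2 MB/s really fit inside two days?
size_mb = 80 * 1000                 # 80 GB expressed in MB
for rate in (1.0, 2.0):             # MB/s, the range quoted above
    hours = size_mb / rate / 3600
    print(f"at {rate:.0f} MB/s: {hours:.0f} hours")
# at 1 MB/s: ~22 hours; at 2 MB/s: ~11 hours - comfortably under 48.
```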
And are you aware that there are countries where you get straight broadband, with no stupid download caps? If my country, bordering on second-world status, can provide such a connection over humble Ethernet cable rather than fiber, I'm pretty sure actual first-world countries have connections capable of silently downloading this AI now, let alone in 10 years.
So again, even if we assume that a God AI does try to do a Skynet, only an idiot would believe it can self-replicate. The Terminator 3 scenario was extreme idiocy (and not just because Skynet subsequently committed suicide by nuking all of the computers it was running on).
Yeah, because we totally don't have thousands of examples of programs that would be quite sufficient to maintain an infection network even today. Oh, wait - ever heard of botnets?
Compatibility issues are a huge hurdle in any software development cycle. You cannot make a program run on Linux if it was meant to run on Windows natively, and it gets worse with a more complex program. In fact, you're probably going to look at a total recode from top to bottom just to get it to run again... and with a huge and highly complex program, that ain't gonna happen overnight.
You know how I'd do it? Not by thinking totally locked inside the box, like you two. I'd first infect the PCs with the spreading virus I mentioned above, then I'd sidestep the Windows/Linux/Android/whatever hurdle altogether by slipping your actual system into a virtual machine at boot, making it essentially a program run by the AI, which would then live off the excess computing power of your graphics card and processor that you aren't using at any given moment. That way, I'd only need something capable of running on x64, ARM, and a select few other processors. Are you aware that there are proof-of-concept viruses that do this even today? With barely noticeable slowdowns in most common tasks other than gaming?
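The "live off excess computing power" part isn't magic either. Here's a minimal sketch of the idea, assuming the third-party psutil package; the 25% idleness threshold is arbitrary and the work function is a hypothetical stand-in:

```python
# Background worker that only burns cycles when the rest of the system is
# idle - the "steal spare capacity without being noticed" pattern.
import time

import psutil  # third-party: pip install psutil

IDLE_THRESHOLD = 25.0  # percent CPU use below which we treat the host as idle

def do_one_chunk_of_work() -> None:
    """Stand-in for whatever computation actually needs the spare cycles."""
    sum(i * i for i in range(100_000))

while True:
    if psutil.cpu_percent(interval=1.0) < IDLE_THRESHOLD:
        do_one_chunk_of_work()      # host is idle: use some cycles
    else:
        time.sleep(2.0)             # host is busy: back off, stay unnoticed
```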
Assuming that an AI coded in the next 10 years will have the same limitations, approaches, and hardware resources as a human programmer of 5 years ago is, as I said, downright stupid. What I described above might be impractical or a dead end in 10 years, but saying 'we'll defeat the AI because it can't fit on a floppy disk, hurr!' is, well...
Unlike human programmers, an AI would actually be capable of thoroughly testing and understanding the processor it runs on, finding all the errors it possibly can, and adjusting accordingly.
Actually, they can't. They can't do it now, and it's unlikely they will be able to do it quickly in the future. To date we don't even have a programming tool for computer programmers that can do automatic code corrections if there is a testing failure; the best that we have is little more than the equivalent of a spell checker.
Note I said "it
possibly can". I'm well aware that there are problems that can't be detected from the inside, so to speak. However, complete testing of the processor
is possible, as quite a few earlier pentium processors have been so tested (as part of government checks and equipment testing) - and again, assuming AI that can run various tests on processors identical to the one it is using now will only have accuracy of human programmer or automatic tool of today is dangerous.
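For the skeptics, here's what "testing the processor from the inside" can look like - a minimal sketch replaying the classic Pentium FDIV check, with the software reference computed via Python's Decimal, independently of the FPU:

```python
# Self-test in the spirit of the post above: on a correct FPU,
# x - (x / y) * y is essentially zero for these operands; the flawed
# 1994 Pentiums were off by about 256 on exactly this pair.
from decimal import Decimal, getcontext

def fdiv_self_test() -> bool:
    """Return True if the hardware divide matches a software reference."""
    x, y = 4195835.0, 3145727.0
    hw = x - (x / y) * y            # exercises the CPU's floating-point divider
    getcontext().prec = 30
    sw = Decimal(4195835) - (Decimal(4195835) / Decimal(3145727)) * Decimal(3145727)
    # Tolerate ordinary rounding noise; the FDIV bug is off by ~256, not 1e-10.
    return abs(hw - float(sw)) < 1e-6

if __name__ == "__main__":
    print("FPU divide OK" if fdiv_self_test() else "FPU divide FAULTY")
```

That single divide is how the bug was famously demonstrated; a systematic sweep of operand patterns is the same idea scaled up, and a machine can grind through such sweeps far faster than any human tester.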
A human being isn't born knowing how their brain works. We have to use sophisticated tools to actually look at how the brain works.
And that stops something more intelligent than a human, something running on perfectly identical, blueprinted units, from discovering it how, exactly? An aeronautical engineer might not know how a bumblebee's wing works, but that doesn't stop him from understanding how a turbojet and a metal wing work.
A machine would similarly be unable to automatically figure out how a microprocessor works. It may be able to know what model of microprocessor it's using by checking the System Registry, but System Registries do not come with complete schematics of how the microprocessor actually looks or functions. It doesn't say what kind of tolerances the hardware is capable of. If it tries to install itself on just some random computer, the likely result is simple: It will not run, period. It's like trying to run a game like Crysis II without knowing the system capabilities.
If a human being can understand it given publicly accessible data, I'm sure an actually malicious program both smarter and much faster than a human can figure it out. I noted that its initial attempts at unfamiliar processors and infrastructures will fail, but once it works out the differences it will prepare a copy that works. Especially seeing as it only needs to code for the processor, not the OS.
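And "knowing the system capabilities" doesn't require exotic access to begin with - even a plain user-level program can ask the host what it is. A stdlib-only illustration (real probing via CPUID flags, cache sizes, and timing benchmarks goes far deeper; this is just the surface):

```python
# Collect basic facts about the machine we're running on, no schematics needed.
import os
import platform

def describe_host() -> dict:
    """Return a few basic facts about this machine."""
    return {
        "architecture": platform.machine(),   # e.g. 'x86_64' or 'arm64'
        "os": platform.system(),              # e.g. 'Linux' or 'Windows'
        "os_release": platform.release(),
        "logical_cpus": os.cpu_count(),
    }

if __name__ == "__main__":
    for key, value in describe_host().items():
        print(f"{key}: {value}")
```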
Let's take the Crysis II example from above - on how many Windows machines of today can it run?
All of them. Sure, it will fail to provide entertainment on a significant percentage of them, which will be too slow to render useful graphics, but the fact is, you don't need to recompile Crysis to make it work on any modern PC. If the AI can run on a typical cell phone of 2020, as postulated by someone above, or even a typical desktop with acceptable slowdowns, the hardware barrier will be meaningless. At worst, it will use the weaker computers as sleeper cells.
So, again, the claims of the singularists are very much on the level of "insane paranoid ramblings" as far as the Terminator 3 Skynet scenario is concerned. It was a dumb scenario. You may as well claim that the Large Hadron Collider will kill us all.
*yawn* Malicious AI might be a dumb scenario, sure. But all I wanted to point out is this: if anyone ever does make a malicious AI on purpose, assuming it will be blind, dumb, and deaf - and the resultant chest-beating about how today's limitations will stop it in 10 years - is even dumber. Especially seeing that the necessary connections and disk space exist today, let alone in an era where everyone will have multi-megabit or gigabit wireless connections and multi-terabyte SSDs.