Uberwank infantry weapons

SF: discuss futuristic sci-fi series, ideas, and crossovers.

Moderator: NecronLord

User avatar
Xess
Jedi Knight
Posts: 921
Joined: 2005-05-07 07:11pm
Location: Near Winnipeg, Manitoba, Canada

Re: Uberwank infantry weapons

Post by Xess »

Starglider wrote:Sorry, but they really are that stupid. Most AGI researchers think that their creation will be benevolent by default and will learn quite slowly - for no particular reason other than generic wishful thinking. Of all of the AGI projects I have a reasonable amount of information on, only three are taking (or planning to take) serious steps to (physically) isolate the system from the Internet. When you fail that basic step, it's hardly worth discussing all the esoteric attacks, e.g. manipulating network components to act as EM transceivers and tap into mobile networks. Though frankly, even if they did secure the development, it wouldn't help. To actually use these AGIs to make a profit or control military systems you have to connect it to the outside world - a malevolent AGI will simply play nice in the simulations and then promptly get out of control once it is deployed. And no, in most cases you can't solve that problem with white-box examination - I personally take a great deal of care to design AI systems to be fully humanly verifiable and even then the process isn't foolproof by a long way, most proposed AGI designs ('emergent stews', most GP and neural network designs) are thoroughly opaque. The sickening thing is, a lot of researchers are actually proud of this (I usually accuse them of 'worshiping ignorance'). It's a holdover from the notion that not being able to understand your own creation means that you couldn't have rigged the demo, but still, it's inexcusable.
So how screwed are we then?
User avatar
Gil Hamilton
Tipsy Space Birdie
Posts: 12962
Joined: 2002-07-04 05:47pm

Re: Uberwank infantry weapons

Post by Gil Hamilton »

It's funny how what Starglider is talking about is virtually the plot of the book "Neuromancer" by William Gibson.
"Show me an angel and I will paint you one." - Gustav Courbet

"Quetzalcoatl, plumed serpent of the Aztecs... you are a pussy." - Stephen Colbert

"Really, I'm jealous of how much smarter than me he is. I'm not an expert on anything and he's an expert on things he knows nothing about." - Me, concerning a bullshitter
User avatar
Darth Hoth
Jedi Council Member
Posts: 2319
Joined: 2008-02-15 09:36am

Re: Uberwank infantry weapons

Post by Darth Hoth »

Starglider wrote:Because recursive Bayesian logic is scary stuff - speaking from a position of experience, as I work with it on a daily basis. Humans are bad at long inferential chains; you can see this in debates: if one person gets more than, say, 10 inferential steps ahead of the other, their position becomes incomprehensible. Humans can only manage tens or hundreds of steps if we write things down and treat them as essentially boolean; the brain is just too inaccurate to do useful probability calculations over more than five steps or so. Then there's that 7±2 short-term memory limit restricting how many interacting elements we can consider at once.

AI systems can do probabilistic calculations involving thousands of inferential steps with negligible accuracy losses (as long as you're careful with the FP handling). They can perform complex manipulations of million-entry conditional probability matrices for systems with hundreds of elements in less time than it takes you to blink. Where necessary they can run off a few thousand Monte Carlo simulations of any given situation in milliseconds. Of course that raw power is limited by two things: learning ability and the inherent unpredictability of reality. Naive Bayes makes optimal use of information in learning simple probability distributions, but achieving the same speed of convergence on complex situations takes black magic. I'm afraid you'll have to treat this as an opinion as I don't have objective support I can use here, but I am now convinced that with the right kind of recursion a self-programming probabilistic logic system will in fact learn at a scarily fast rate, proportional to the amount of information you give it. It's a combination of the huge number of hypotheses the system can test per second, the relative structural flexibility of those hypotheses compared to the human brain, the convergence rate provided by Bayesian logic and the fluidity of the metahypothetical processes that develop in the recursion loop. As for inherent unpredictability, an expected utility goal system automatically works to exploit the most predictable situations, creating them where necessary, and will of course have a back-up plan for every vaguely plausible scenario (planning is cheap compared to acting).
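Starglider doesn't give code, but the floating-point caveat is easy to illustrate. Below is a minimal sketch (mine, not from the thread): one binary hypothesis, a made-up 60/40 sensor model, and 5,000 conditionally independent observations. Multiplying raw likelihoods underflows long before the end of the chain; accumulating log-odds does not.

[code]
import math
import random

# Minimal sketch: one binary hypothesis H, 5,000 conditionally independent
# observations. Two ways to accumulate the evidence:
#   1) naively multiplying raw likelihoods (underflows to 0.0), and
#   2) summing log-odds (stays well-conditioned over thousands of steps).

random.seed(0)

P_OBS_GIVEN_H = 0.6        # hypothetical sensor model: P(positive | H)
P_OBS_GIVEN_NOT_H = 0.4    # hypothetical sensor model: P(positive | not H)
STEPS = 5000

logit = math.log(0.5 / 0.5)          # prior log-odds for P(H) = 0.5
joint_h, joint_not_h = 1.0, 1.0      # naive running products of likelihoods

for _ in range(STEPS):
    positive = random.random() < P_OBS_GIVEN_H   # simulate data generated under H
    lik_h = P_OBS_GIVEN_H if positive else 1.0 - P_OBS_GIVEN_H
    lik_not_h = P_OBS_GIVEN_NOT_H if positive else 1.0 - P_OBS_GIVEN_NOT_H

    joint_h *= lik_h                        # naive: hits 0.0 long before 5,000 steps
    joint_not_h *= lik_not_h
    logit += math.log(lik_h / lik_not_h)    # log-odds update: one addition per step

total = joint_h + joint_not_h
naive_posterior = joint_h / total if total > 0.0 else float("nan")
log_posterior = 0.5 * (1.0 + math.tanh(logit / 2.0))  # numerically stable sigmoid

print(f"naive posterior:    {naive_posterior}")    # nan: both products underflowed
print(f"log-odds posterior: {log_posterior:.6f}")  # ~1.0, as expected for data from H
[/code]

Real inference engines apply the same trick to large conditional probability tables (log-space message passing, log-sum-exp for marginalisation); the principle is simply to add logs instead of multiplying long chains of raw probabilities.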
Very well.
Starglider wrote:Mmmyeah. Weren't a lot of people saying that in the late 19th century?
They did use the scientific method, to some extent at least, so they would be less vulnerable to superstition than people in earlier ages. Acquiring knowledge and a scientific mindset is a cumulative process, not something you either have or lack.
Starglider wrote:Possibly, and there are a few options for that. Of course the starting point is existing human infrastructure; plenty of companies would be happy to make you parts and assemblies based on emails and phone calls alone. How much infrastructure you need to advance is an open question. The trend has traditionally been upwards - a silicon chip fab is a massively complex and expensive manufacturing plant - but there are counterexamples, such as modern CAD-CAM machines giving small shops a precision, small-run manufacturing capability that would have required a factory full of specialist tooling thirty years ago.
I imagine laboratories for complex nanoengineering and the like would require rather complex and expensive facilities, though I admit I have little practical experience there. Anyway, it would still need human intermediaries for interacting, picking up goods and so on.
Starglider wrote:Evolution is horribly slow, incredibly lossy and restricted to incremental paths. It cannot make multiple-point changes that must be done simultaneously, no matter how obvious those changes would be to a human, or how beneficial they would be to the organism. There is a huge slew of chemistry simply inaccessible to organic life (on Earth), because it isn't compatible with protein chemistry. Evolved designs have to tolerate random mutation, as they themselves are the product of it. Intelligent design is free to make 'brittle' but highly optimised designs, and it is also free to build new copies piece-by-piece, instead of growing them from the inside out.
Ah, I misread you. I thought you meant a biological weapon, rather than a technology that mimicked one. Its feasibility still appears questionable, though, with the advances it would require to build.
Starglider wrote:You assume you're going to notice them. The world is full of factories, most of them full of automation. Do you know what they're all building? Do you really think a few more third-world assembly plants owned by an anonymous holding company are going to set off alarm bells?
If they suddenly begin importing extremely advanced microtechnology equipment? I would hope some intelligence agency would notice, especially if they knew they had a computer gone haywire.
Starglider wrote:This is called the 'AI Box' argument. It turns up a lot in AGI discussion forums. To cut a very long debate short, the usual conclusion is that no, you can't effectively keep an AGI in a box. It will eventually convince someone to let it out.
Can you link me to some treatise on it, if you do not want to discuss it? I have heard some of how this would supposedly work ("lol, the machine is smart enough to convince humans to do anything it wants") and have a hard time buying it. If you go in with a "No" as your default and stick to that, the smartest sociopath cannot persuade you otherwise, as long as you keep to a simple formula that cannot be manipulated. You could also rig up additional security, such as not giving the operators themselves the ability to "free" it.
Starglider wrote:Sorry, but they really are that stupid. Most AGI researchers think that their creation will be benevolent by default and will learn quite slowly - for no particular reason other than generic wishful thinking. Of all of the AGI projects I have a reasonable amount of information on, only three are taking (or planning to take) serious steps to (physically) isolate the system from the Internet. When you fail that basic step, it's hardly worth discussing all the esoteric attacks, e.g. manipulating network components to act as EM transceivers and tap into mobile networks. Though frankly, even if they did secure the development, it wouldn't help. To actually use these AGIs to make a profit or control military systems you have to connect it to the outside world - a malevolent AGI will simply play nice in the simulations and then promptly get out of control once it is deployed. And no, in most cases you can't solve that problem with white-box examination - I personally take a great deal of care to design AI systems to be fully humanly verifiable and even then the process isn't foolproof by a long way, most proposed AGI designs ('emergent stews', most GP and neural network designs) are thoroughly opaque. The sickening thing is, a lot of researchers are actually proud of this (I usually accuse them of 'worshiping ignorance'). It's a holdover from the notion that not being able to understand your own creation means that you couldn't have rigged the demo, but still, it's inexcusable.
Shit, that is fucked up. If even a tenth of that is correct, the field is populated by evil comic book "mad scientists". :shock:

How the Hell can you get funding for something like that, if that is so? Hell, why do people and governments stand it, rather than rounding up the programmers and making them dig a ditch? You just lowered my opinion of humanity, and nothing for the last two decades or so has managed that.
"But there's no story past Episode VI, there's just no story. It's a certain story about Anakin Skywalker and once Anakin Skywalker dies, that's kind of the end of the story. There is no story about Luke Skywalker, I mean apart from the books."

-George "Evil" Lucas