Simon_Jester wrote: You're kind of missing the point.
If the robots have a huge qualitative edge, it doesn't matter so much, as Alferd already pointed out. But unless we walk into the story presuming that robot armies will be invincible (as you do, but Alferd doesn't), you're still creating a disadvantage by turning your soldiers into faceless hordes for the enemy.
This may not be a huge thing, but it isn't irrelevant unless (again) you assume that robot soldiers are invincible. Very few stories wish to do this.
Wait, what? Since when do the robots have to be invincible? The space between "more effective than a human force" and "totally invincible" is pretty huge.
Look, there are too many assumptions you have to make to draw any kind of universal conclusion about the advantages of robots over humans or vice versa, but most of the arguments I've seen in this thread just haven't been very convincing. Yeah, okay, fine, human soldiers won't hesitate to kill robots...but it's not like the robots are going to hesitate, either. And if you stop using robots, the enemy might hesitate to kill...but so will your guys, and you're right back where you started. This is a push, at best, not any kind of disadvantage for the side using robots.
Other arguments:
EMP isn't a magic off switch. It plays merry hell with civilian electronics, but hardening against it is not difficult to do. You have to accept a performance or cost tradeoff, but that's pretty much a given with military hardware. Nobody's going to send battle robots into the field with a weakness to EMP.
Robot revolt: if you're building human-level general AIs and you haven't figured out how to reliably keep them friendly, you're already in deep shit, robots or no robots. If your war robot AIs are willing to turn on their masters, then the odds are one of your earlier AIs was willing to, too, and has probably already escaped onto the Internet and modified itself into Colossus, Skynet, or AM. The way I see it, either you know how to keep AIs loyal or your war robots aren't smart enough to revolt.
War is illogical; the robots will never do it: give a computer the right set of goals, and you can make it do anything. Once again, if you're to the point of human-level general AI, either you know how to make it stick to the goals you've written for it, or you have bigger problems than disobedient robots. This is assuming you actually need human-level intelligence for a war robot; what if it doesn't have to be any smarter than a chimp? Or a monkey? Or a dog? Who knows? I don't, and you probably don't either.
The most interesting arguments probably revolve around potential software problems--the point of robots, after all, is that they're mass-produced, so a hidden software error is repeated in every robot--or around whether we trust our ability to program a reliably friendly AI. If you do have reliably friendly AI, then the first issue fixes itself, literally--the computers modify their own programming to eliminate the bugs. The second...it's a good reason not to do it, but I think it presumes too much self-restraint on the part of human societies. If GAI is possible, then it's going to be done. Look at nuclear weapons--they require a huge, highly visible industrial base to produce, expensive and difficult-to-acquire materials, and testing which is literally detectable by the entire world. And for all that, a pissant country like North Korea was able to build them against the wishes of the entire rest of the planet. GAI, on the other hand, could quite possibly be invented on some guy's laptop somewhere and released onto the Internet, and there'd be exactly fuck-all anyone could do about it. If a war robot AI can be built, somebody's going to build it, because the temptation of an army that doesn't sleep, eat, piss, complain, or leave grieving relatives behind when it dies is going to be too much to resist.
I guess somewhere in the middle of these two arguments lies the war robot that doesn't need to be as smart as a human and can't modify its own programming. Then, yeah, you could conceivably have a Stuxnet kind of vulnerability. But on the other hand, the modern military is already tremendously reliant on computers--computers in many cases running Windows, for fuck's sake--and nobody's managed to take down the US Army with a computer virus (yet).
The cultural argument--we don't use robots because it's taboo/scary/God says we can't: that lasts until the first guy that's willing to ignore the taboo/fear/Word of God kicks the shit out of everyone else with his robot army.
The biggest real-world issue I can think of, besides the difficulty of actually developing the software, is the expense of the hardware. And for a society that can mass-produce robots on a scale large enough to actually use them as an army, I cannot possibly imagine how that cost would be more than the cost of raising a human from birth, training him, and feeding, paying, and housing him for however many years he's in the army (plus medical and pension costs for life, plus a survivors' pension if he dies, et cetera). No, I am not arguing that a robot would walk off the assembly line and then you'd never have to pay to maintain or supply it again. But personnel is always the biggest cost in any organization of more than a few people. And that's entirely beside the other costs of human soldiers: political, ethical, opportunity...
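Just to make the shape of that cost argument concrete, here's a back-of-envelope sketch. Every number in it is made up for illustration--I'm not citing any real defense budget--the point is only that a soldier's cost accumulates every year served plus lifetime benefits afterward, while a robot's is mostly a one-time purchase plus upkeep:

```python
# Hypothetical, illustrative figures only -- not real budget data.
SOLDIER_ANNUAL_COST = 100_000      # pay, food, housing, training amortized per year (assumed)
SOLDIER_YEARS_SERVED = 4
SOLDIER_LIFETIME_BENEFITS = 500_000  # medical + pension obligations after service (assumed)

ROBOT_UNIT_COST = 250_000          # one-time purchase price (assumed)
ROBOT_ANNUAL_UPKEEP = 50_000       # maintenance, power, spares per year (assumed)
ROBOT_SERVICE_YEARS = 4

# Soldier: recurring cost while serving, plus obligations that outlive the enlistment.
soldier_total = SOLDIER_ANNUAL_COST * SOLDIER_YEARS_SERVED + SOLDIER_LIFETIME_BENEFITS

# Robot: up-front hardware cost plus recurring upkeep over the same service window.
robot_total = ROBOT_UNIT_COST + ROBOT_ANNUAL_UPKEEP * ROBOT_SERVICE_YEARS

print(f"soldier: {soldier_total}")  # soldier: 900000
print(f"robot:   {robot_total}")    # robot:   450000
```

Plug in whatever numbers you like; the structural point is that the human column keeps accruing costs (pensions, survivors' benefits) long after the service years end, and the robot column doesn't.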
In fiction, as many have already said, you don't see robot armies because they're difficult to empathize with. The in-universe reasons, if there are any at all, are usually either inherent to the setting (40k's robot-corrupting evil magic), some universally enforced cultural thing from Ye Olden Tymes that usually makes a point of being vague on the details (the Butlerian Jihad), or some arbitrary limitation that doesn't really make sense in-universe (the entire Federation can't duplicate the work of one guy working alone on a backwater colony). More often it's like Star Wars, where war droids just inherently suck for no real reason, or they just don't have war robots at all because of author fiat.