You're kidding, right? This whole discussion started because you feared giving guns to AIs that could actually think, and now you wonder how to make them recognize which targets are red targets?

What, we segued into this after you suggested that we make AIs that find exploding targets as exhilarating and important as people find sex. My issue was that I thought making an army of smart, heavily armed slaves was a bad idea. Anyway, it's less an issue of recognising red targets than of having it think about what constitutes a red target. If it can think, it can determine its own targets, and if it's smart it will try to work around restrictions put in place to stop it doing that. That's why AIs would be better at this than people: they would not have the restrictions of a human body, but they would still be able to react to a changing situation far better than a drone that can only follow instructions given before it took off. Taking away its ability to think and choose would make it more predictable but far less useful, to the point that human pilots would conceivably be able to compete.
Ask yourself this: how do HUMANS who happen to like blowing things up determine which things they're allowed to blow up? It would work the exact same way. THX-1138 would be built to LOVE driving his space fighter and fighting the enemies of democracy. And he'd be TOLD who the enemies are and who he's allowed to shoot, and when, by the chain of command. If his creators were smart, he'd also have a sense of morals that would make him disobey illegal orders. Quite possibly he'd be way better at it than organics, who are subject to all sorts of chemical/instinct/peer pressure driven behavior that might override their sense of duty or morals.

Humans fuck up and shoot the wrong targets all the time: the two A-10 pilots who talked themselves into strafing what they had previously identified as a friendly armour column, or the ones who shoot civilians because they were in the way, or because they were ordered to. Those are cases of people deciding, without being coerced, to perform that action, and they should at least in theory be held accountable for it. If you make something that loves to do one specific thing, you are by definition coercing it, and I'm not sure it can really be held accountable then.
Yeah, you can build it to love exploding things and have morals, and it would maybe make the moral choice. On the other hand, there are people who love sex and have morals who still have affairs all the damn time, even though they think it's immoral, because they just love sex that much. And I really don't think we can tell whether an actual functioning AI will accept, without question, a system of morality we just build into it.
And yeah, philosophers will have a problem with determining who to punish if THX-1138 was messed up by the factory; but so what? It's not some sort of insurmountable, unsolvable problem: if the AI's imperatives were mishandled by the engineers, punish the engineers and try to fix the AI. If they were not, and the AI decided to commit a crime entirely on its own, punish the AI.

It's not that easy, though. We don't punish parents for crimes their children commit, or engineers when the brakes on a car fail, unless they knew beforehand that the brakes were flawed and would fail, so why should we punish the designers of an AI? Likewise, if you build something with certain characteristics, can it really be responsible for the choices it makes?
These dynamics would be an interesting thing to explore in a sci-fi universe, actually...

Interesting? Hell, it's fascinating, and it sadly seems all too often to be ignored in favour of either "AIs are lovely and will solve all our problems forever" or "arrrgh, the scary AI will kill us all."
What do you do with discharged soldiers whose only skill is killing? Or injured washouts?

Yeah, but a discharged soldier or injured washout is still a limited resource, an adult educated to a certain standard. Many will find work simply because they are something that takes about 18 years to make, not a quick phone call to a factory with a list of specifications for exactly the employee you want.
You let them go and live their lives. Of course, since the military commissioned those AIs that didn't volunteer, it would probably support them somehow. Again, not a game-breaking problem, just a question of organization.