There's a lot to respond to here, so I'm going to go through and only respond to points that haven't been addressed yet, or need to be further argued.
brianeyci wrote:
Jadeite it doesn't matter if the plane doesn't need true AI. If AI happens as fast as Starglider says it will with neural networks, evolutionary computing and quantum computing, computer scientists will not be able to point to a line where here, now, AI is sentient.
But that still doesn't mean a sentient system will be used in a jet. As it is now, you have computers handling radar, threat identification, and firing control. All that's basically needed is to combine that with a much more sophisticated autopilot and give it authorization to engage the enemy. It does not need ethics, personality, wants, needs, or ambitions. It just needs to be a glorified calculator.
It will be a continual process, and it's entirely conceivable corporations and the military industrial complex will program mental blocks and enslave AI. It's entirely possible down the line, sentient enslaved AI gets uploaded to vehicles to fight our wars. Think about all the weapons in human history -- they make fighting more terrible, more terrifying, more awful. But for the first time it's possible to sanitize war, to a huge degree. This is not beneficial, especially if the advantages afforded by AI are trivial.
Why shouldn't war be sanitized? It's not like it's going to go away. Don't bring out that tired and oft-repeated "It's good that war should be terrible, lest we become too fond of it" nonsense.
We are fond of it, and looking at the whole of human history, one could argue that peace is not the absence of war but merely the time between wars. Humanity is in love with conflict, so why shouldn't we try to minimize the human cost? Why shouldn't we try to minimize suffering and death? Quite frankly, it doesn't matter if one or even 100,000 UCAVs get shot down; they're just machines in the end.
The double standard is astonishing. On one hand, nobody is allowed to bring up the flaws of current AI because according to AI wankers, all of this will be fixed. Then, nobody is allowed to bring up potential problems of AI because they don't exist right now and are "made up", despite being explored by science fiction authors before said AI wankers were born. No, I'm not talking about the movie Stealth. People like Heinlein, Asimov, etc., have explored the problems with AI, but of course that is all science fiction so it is invalid, even though the solutions to current AI problems are right now, fictional.
Most of the "potential problems" that you've come up with in this debate are, to be perfectly honest, retarded. So far they are all either highly exaggerated, easily fixed, or highly improbable to begin with. Not only that, but when you make statements like "Show me an instance of a pilot becoming a traitor and maybe it's time to go AI" and are then given an entire list of defectors, you backpedal instead of admitting you were wrong.
I am only an "AI wanker" by your standards, you luddite. You've already shown, through your own backpedaling, fallacies, and increasingly flimsy "problems," that you are not prepared to accept any rational arguments or facts that contradict your own beliefs, behavior very similar to that of creationists, I might add.
General Schatten wrote:
Nice strawman, Mike, but I didn't say a human soldier was incapable of following an unlawful order. I said that a machine was incapable of discerning a lawful order from an unlawful one, it only does what it's told, with a human there is a possibility that they will disobey.
Then the burden of responsibility moves up the chain of command.
brianeyci wrote:You're looking at it the wrong way. War robots will make war more likely, not less, since retard politicians will not be stopped by body bags. The terror of war is the only thing stopping war, human bodies.
The 20th century alone proves that wrong.
Broomstick wrote:
An important difference between human and AI (as it stands now and for the likely near future) is that humans are more likely to detect incongruities between the mission as planned and the mission as it is found to be. If an aircrew is told they're bombing a munitions factory and when they get to the coordinates they see a field full of children playing hopscotch the human crew is FAR more likely to question what the hell is going on whereas the AI will just bomb away.
Not necessarily. It could easily be possible to use a different system for target identification rather than "go here, bomb this coordinate": for example, radar mapping of the target, or other sensory data (with your munitions factory example, the factory would probably put out a lot of heat, while a field won't). In this case, the bomber would arrive at the target coordinates, match up what it sees against the database given to it for the mission, and then take out the target. That's how SAC used to train, using mock ground targets with radar returns similar to actual ground targets.
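Just to illustrate the kind of matching I'm talking about, here's a rough Python sketch; every field name, number, and threshold here is invented for the sake of the example, not drawn from any real system:

    # Hypothetical target-confirmation check: compare what the sensors report at
    # the mission coordinates against the signature loaded before takeoff.
    # All values and field names are invented for illustration.

    def confirm_target(observed, briefed, tolerance=0.1):
        """Return True only if the observed scene matches the briefed target."""
        radar_match = abs(observed["radar_return"] - briefed["radar_return"]) <= tolerance
        # A munitions factory should show a strong heat plume; an empty field won't.
        thermal_match = observed["infrared"] >= briefed["min_infrared"]
        return radar_match and thermal_match

    briefed_target = {"radar_return": 0.72, "min_infrared": 0.60}
    observed_scene = {"radar_return": 0.30, "infrared": 0.05}  # looks like an empty field

    if not confirm_target(observed_scene, briefed_target):
        print("Signature mismatch: holding weapons and reporting back.")

The point isn't the specific numbers, it's that the aircraft confirms the target against what it was briefed with instead of blindly bombing a coordinate.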
Human crews can also be given more flexible orders (such a series of conditions under which to self-abort, or the authority to self-abort if things are not as planned)
This is also just a matter of correct programming: "IF target is not found, THEN return to base," for example.
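A minimal sketch of that kind of conditional tasking, with the function names as hypothetical stand-ins for the aircraft's actual systems rather than any real flight-software API:

    # Minimal sketch of conditional mission logic. The callables passed in are
    # hypothetical placeholders standing in for the aircraft's systems.
    def execute_mission(target_located, weapons_release, return_to_base):
        if target_located():
            weapons_release()
        else:
            # Target not found within the search parameters: abort and come home.
            return_to_base()

    # Example run with stub behaviors:
    execute_mission(
        target_located=lambda: False,
        weapons_release=lambda: print("Engaging target."),
        return_to_base=lambda: print("Target not found; returning to base."),
    )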
Humans can change plans - such as diverting to a location that is not home base if circumstances change and that is prudent - in ways that are much more difficult for machines to do so.
That's what communications are for. Human crews are always receiving information updates, why are you blindly assuming a machine can't?
The likelihood of human crews deviating from orders varies considerably depending on the nature of the initial orders and possible consequences of making changes on their own, but the point is that they are able to make these changes whereas machines are not.
Again, you're simply assuming a machine will not have flexible programming and that, for some unknown reason, the USAF isn't going to give it any information updates. It's a false dilemma to begin with.
The air force would really like to know that about some of the UAV's that have crashed during testing phases. Yes, we supposedly have that capability now. We also know that it sometimes doesn't work. Why doesn't it always work? Well, the real world isn't as neat and tidy as computer simulations. Obviously there is something we're not accounting for or correcting for.
Mistakes happen, and every project testing has crashes and setbacks. This is part of the development process, and it certainly didn't stop the USAF back when test pilots got splattered pretty regularly. Hell, just as an interim solution, an autonomous UCAV could probably revert to human control from the ground or a command aircraft for landing and takeoff.
Take off and landing is also the most difficult part of flying anything. All you need is a bird passing by at the wrong time and you have a mess on your hands.
Again, this goes for this and your other arguments about take-off and landing: if autonomous landing and take-off capability proves too hard to program adequately, then simply teleoperate the aircraft from either a ground station or a command aircraft, and release it to its own devices once it's safely cleared the area. And of course, given that they'd be launching from military airfields from which civilian air traffic is excluded, crowded skies shouldn't be a problem.
I'd also like to point out that humans often make mistakes as well. In fact, several Arab fighter pilots were captured in one of the wars against Israel when they landed at an IDF airbase by mistake.
brianeyci wrote:How many F-22 will there be, 120? In a country of three hundred million... those F-22 pilots will be the fucking best of the best of the best of the best, or they better be. How can there be "not enough Kasparov?"
The USAF operates 6,217 aircraft. Of these, 228 are attack aircraft, 173 are bombers, and 1,820 are fighters (this does not include F-22s). The USN tacks another 4,000 aircraft onto this. That's at least 10,217 pilots in total, not including other crewmen. Each of them needs to be trained to sufficient quality and put through numerous exercises and war games so that their skills stay sharp. In contrast, a computer will just need to be patched. And of course, if a pilot dies or retires, you've just lost his experience and need to train a fresh replacement. If a UCAV is shot down, you've lost a piece of standardized machinery.
AI wankers can't have it both ways. Either a stupid piece of shit in the plane, or possibility of an enslaved sentient AI.
If the "stupid piece of shit" can get the job done, I don't see any reason not to put it in. After all, weren't you the one arguing "If a blast door works, don't use a forcefield."?
*Chess bullshit snipped because this isn't a debate about chess.*
If it's not a stupid piece of shit there's the door open for slaves. You can possibly have something in between,
But you just said we can't. Which is it then?
but the problem is the line between sentience and non-sentient is not fine so unless they make it a policy to have stupid AI in planes, there will be possibility of sentient AI with mental blocks. In fact, I would say guaranteed, given AI have no legal standing. Is that what you guys want? Slaves revolt you know.
Translation: "ZOMG Skynet!"
So take your pick, AI slaves or stupid AI that cannot ever be as good as human beings.
False dilemma. You blindly assume that the "stupid AI" cannot be as good as a human being. Here's what's wrong with this:
1. You are not defining what they wouldn't be better at.
2. You are assuming all human beings are equal.
3. You are assuming that skill matters, when air combat increasingly emphasizes finding the enemy before he finds you and engaging him before he has a chance to respond. In this case, a computer will be superior at both, thanks to its reaction time and its ability to process information faster. A human pilot already relies on his computer for both; he's the weakest link in the chain.
It doesn't matter if the Russians put up an entire squadron of elite pilots if they all get pegged with missiles from beyond their own radar range before they can even react. "Creativity" and "uniqueness" will only get you so far, particularly when combat increasingly favors whichever side can crunch numbers faster, and if an idiot savant computer is controlling one side, I'm going to bet on it.
In fact, that's the real answer to your false dilemma right there. The line between a self-aware AI and a "piece of shit" would be an idiot savant. It could be programmed to react faster and better than the average enemy pilot, and that's all it would need to be. All it needs is a list of things to kill and how to kill them. It doesn't need a personality or self-awareness, and as long as both of those are avoided, the rest of it can be as complex as needed, because it still won't be sentient, and thus won't be a slave.
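As a rough illustration of what I mean by an idiot savant, here's a Python sketch of engagement logic as a simple lookup table; the threat types, weapon choices, and ranges are all made-up examples, not any real doctrine:

    # Sketch of the "idiot savant" idea: a lookup table mapping threat types to
    # engagement responses, with no reasoning beyond the table itself.
    ENGAGEMENT_RULES = {
        "enemy_fighter":  {"weapon": "AMRAAM",  "max_range_km": 100},
        "enemy_sam_site": {"weapon": "HARM",    "max_range_km": 80},
        "enemy_radar":    {"weapon": "jamming", "max_range_km": 150},
    }

    def respond_to_contact(contact_type, range_km):
        rule = ENGAGEMENT_RULES.get(contact_type)
        if rule is None or range_km > rule["max_range_km"]:
            return "track only"  # nothing on the list, or out of envelope
        return f"engage with {rule['weapon']}"

    print(respond_to_contact("enemy_fighter", 75))     # engage with AMRAAM
    print(respond_to_contact("unknown_aircraft", 40))  # track only

A table like that can be made as long and as detailed as you want without it ever being anything more than a lookup.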
Better no AI, or only AI in human-like bodies who have the same rights as humans. And I doubt that would be any fucking cheaper -- android bodies need maintenance, and they will have salaries and pensions.
Seriously, what the fuck?
Hawkwings wrote:The problem with programming a bunch of recognition systems is that there is so much that a computer would have to be able to recognize that the database would be gigantic and slow.
Not necessarily. Combat aircraft already carry threat databases. As a brainstorming solution, an additional computer with its own databanks could be included to handle load sharing, or even to handle its own set of responsibilities.
Take the landing on an airstrip example. First, you need to ID the landing area. Then you need to make sure that it's clear of obstruction, not full of potholes, not on fire, not taken over by the enemy, not iced over, etc etc etc. A human does this in a few seconds at most, and can make an appropriate decision. When we have computer that recognizes as much "day to day" stuff as a human, it will be a huge milestone.
Again, as a brainstorming solution, have the airfield transmit an "all clear" signal to the UCAV to tell it that landing conditions are fine. If this signal isn't received, the UCAV could bring up its sensors and go over a checklist, e.g., "Are there heat plumes rising from the airfield? Does a terrain-mapping radar detect holes in the strip?" and so on. Then it would be a matter of consulting a decision-making tree and deciding either to go ahead and land or to divert to another field if there's one in range (or, if there isn't, and the decision-making tree concludes that the field has been overrun, for example, to wipe its hard drive and ditch).
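To show how simple that decision-making tree could be, here's a hedged Python sketch; every input (the "all clear" signal, the sensor checks, the divert option) is an assumed placeholder, not a real system:

    # Sketch of the landing checklist above as a simple decision tree. Inputs
    # are placeholders for the signal and sensor checks described in the text.
    def landing_decision(all_clear_received, fire_detected, runway_cratered,
                         field_overrun, divert_in_range):
        if all_clear_received:
            return "land"
        # No signal: fall back on the aircraft's own sensor checklist.
        if field_overrun:
            return "wipe storage and ditch"
        if (fire_detected or runway_cratered) and divert_in_range:
            return "divert to alternate field"
        if fire_detected or runway_cratered:
            return "wipe storage and ditch"
        return "land"

    print(landing_decision(False, fire_detected=True, runway_cratered=False,
                           field_overrun=False, divert_in_range=True))  # divert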