Broomstick wrote:
An important difference between human and AI (as it stands now and for the likely near future) is that humans are more likely to detect incongruities between the mission as planned and the mission as it is found to be. If an aircrew is told they're bombing a munitions factory and when they get to the coordinates they see a field full of children playing hopscotch the human crew is FAR more likely to question what the hell is going on whereas the AI will just bomb away.

Jadeite wrote:
Not necessarily. It could easily be possible to use a different system for target identification rather than "go here, bomb this coordinate." For example, radar mapping the target, or providing other sensory data (with your munitions factory example, that'd probably put out a lot of heat, while a field won't). In this case, the bomber would arrive at the target coordinates, match up what it sees to the database given to it for the mission, and then take out the target. That's how SAC used to train, using mock ground targets with radar returns similar to actual ground targets.

Yes, it is possible to use alternate systems - but are they developed enough to be practical? There is navigation using dead reckoning - fly this heading for this distance and do X, then fly this heading and do Y - which has historically been the most common method for unmanned craft. The problem is that errors in the initial setting, or errors introduced along the route of travel (unexpectedly high winds, for example), can result in being wildly off course, as there is no confirmation from external sources that the course is actually correct.
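To make the drift problem concrete, here's a toy sketch (Python, with every number and name invented for illustration) of why pure dead reckoning goes wrong - the unsensed wind never shows up in the aircraft's own position estimate:

import math

def dead_reckon(legs, wind=(0.0, 0.0)):
    # legs: list of (heading_deg, airspeed_kts, hours).
    # The estimate uses only heading/airspeed/time; the actual
    # position also drifts with the unsensed wind vector (kts).
    est = [0.0, 0.0]
    act = [0.0, 0.0]
    for heading, speed, hours in legs:
        rad = math.radians(heading)
        dx = math.sin(rad) * speed * hours
        dy = math.cos(rad) * speed * hours
        est[0] += dx; est[1] += dy
        act[0] += dx + wind[0] * hours
        act[1] += dy + wind[1] * hours
    return est, act

# Two one-hour legs with an unexpected 15 kt wind component:
est, act = dead_reckon([(90, 120, 1.0), (180, 120, 1.0)], wind=(0.0, -15.0))
err = math.hypot(est[0] - act[0], est[1] - act[1])
print(round(err), "nm off course - and nothing on board knows it")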
Then there is navigation by pilotage - that is, by actually sensing the world as it is and continually updating course in reference to entirely external cues. UAV's are still very weak at this. The GPS system provides an artificial external set of cues - and it's a darn good one, which is why just about everyone uses it these days, from hikers to the military - but there is still the issue of the external environment. GPS won't inform you of a flock of birds, it won't tell you there's ice on the runway, or a hole in the pavement. While the computerized/automated systems are quite good - such that in the correct environment they perform better than people - we still have people monitor them because they can't sense the actual environment around them. When an airliner is using autoland there is a human monitoring the system who, if he or she sees, for example, a service truck pull onto the runway in front of the airplane, will abort the landing - because the autoland system can't detect the anomaly.
That is the biggest obstacle to truly autonomous aircraft, whether military or civilian - they can't see the world around them. The technical problem of how to land an aircraft has been solved; the problem of spotting that deer wandering around on the runway hasn't. Although such UAV's are expendable, they aren't so cheap as to be disposable. You don't want to lose them unnecessarily.
Broomstick wrote:
Human crews can also be given more flexible orders (such as a series of conditions under which to self-abort, or the authority to self-abort if things are not as planned).

Jadeite wrote:
This is also just a matter of correct programming. "IF target is not found, THEN return to base", for example.

Although that will work the vast majority of the time, as I have continued to point out, the real world is messy. Since I am not a combat pilot or a soldier I can't pull a real-world example out of my hat, but I have read and been told over and over that as soon as the shooting starts weird shit can happen and plans need to be continually updated.
Broomstick wrote:
Humans can change plans - such as diverting to a location that is not home base if circumstances change and that is prudent - in ways that are much more difficult for machines.

Jadeite wrote:
That's what communications are for. Human crews are always receiving information updates, why are you blindly assuming a machine can't?

You miss my point - humans can change plans even when they DON'T receive communications. That's why you have to tell a machine ahead of time that if it can't land at A, then land at B, then C. You don't have to detail the decision trees for humans: just tell them that if they can't land at A they should divert to another suitable field, and they will select one based on the circumstances at that particular time. With machines you really do have to try to anticipate everything in advance. With humans, once they have the decision-making skills, you don't, because they'll develop their OWN decision trees suited to the task at hand.
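To illustrate the difference (toy Python; the field names and the condition check are invented): every alternative the machine can use has to be enumerated before it ever takes off, and anything outside the list falls through to a catch-all.

def pick_landing_site(briefed_fields, can_land_at):
    # briefed_fields: the pre-mission list, e.g. ["A", "B", "C"].
    # can_land_at: whatever check the machine can actually perform.
    for field in briefed_fields:
        if can_land_at(field):
            return field
    return None  # nothing anticipated at briefing time worked - now what?

# If A and B are out, it takes C; if all three are out, there is no plan D.
print(pick_landing_site(["A", "B", "C"], lambda f: f == "C"))

A human handed the same situation isn't limited to the briefed list - he or she can pick any suitable field in range, which is the whole point.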
Broomstick wrote:
The likelihood of human crews deviating from orders varies considerably depending on the nature of the initial orders and possible consequences of making changes on their own, but the point is that they are able to make these changes whereas machines are not.

Jadeite wrote:
Again, you're simply assuming a machine will not have flexible programming and that for some unknown reason the USAF isn't going to give it any information updates. It's a false dilemma to begin with.

Again, humans are far, far more flexible - they can rewrite their programs. They don't need to call back to base (although they can and do when circumstances permit); if the shit is hitting the fan they can make their own decisions based on what is happening around them. This is why field commanders have a certain amount of autonomy. Yes, we have flexible programming, but it is nowhere near the abilities of a human being. Which is why humans are still so frequently kept in the loop.
Broomstick wrote:
The air force would really like to know that about some of the UAV's that have crashed during testing phases. Yes, we supposedly have that capability now. We also know that it sometimes doesn't work. Why doesn't it always work? Well, the real world isn't as neat and tidy as computer simulations. Obviously there is something we're not accounting for or correcting for.

Jadeite wrote:
Mistakes happen, and every project's testing has crashes and setbacks. This is part of the development process, and it certainly didn't stop the USAF back when test pilots got splattered pretty regularly.

Yes, but no one liked the fact test pilots were getting splattered. The USAF doesn't like losing expensive UAV's, either. The point isn't to wreck as much of our hardware as possible, it's to wreck as much of the other guy's hardware as possible.
The point is that there are some real problems with UAV's that haven't been solved yet. Until those are solved they are very limited machines. That's not to dismiss what they have done well, but they can't replace humans in all areas yet. Nobody really knows how long it will take to overcome these issues. It could be next year, it could be a couple of decades.
Broomstick wrote:
Take off and landing is also the most difficult part of flying anything. All you need is a bird passing by at the wrong time and you have a mess on your hands.

Jadeite wrote:
Hell, just as an interim solution, an autonomous UCAV could probably revert to human control from the ground or a command aircraft for landing and takeoff. Again, for this and your other arguments about take off and landing: if autonomous landing and take-off capability becomes too hard to adequately program for, then simply teleoperate it from either a ground station or a command aircraft, and then release it to its own devices once it's safely cleared the area.

You keep missing the boat, buddy. We have solved the mechanics of landing an aircraft. We have had "autoland" - that is, fully automated landings of passenger aircraft - for over a decade now in the civilian world. The "put aircraft on pavement" part of the problem is DONE. That's not the issue. The problem, as I keep saying, is the deer on the runway. Or the truck on the runway. It's the random wacky real-world variable problem. The human is not in the loop to land the aircraft; the human is there to spot the anomalies the aircraft can't, punch the abort button, or perhaps take over and dodge an obstacle, or otherwise make a change of plans. It's the same old problem in aviation: when things are going right, flying an aircraft is very easy, but when things go wrong they go very wrong very quickly, and THAT's when the task becomes hard.
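The division of labor I'm describing looks something like this (toy Python; both function names are invented) - the automated part flies the approach, and a separate monitor, today a human, supplies the anomaly-spotting the machine can't:

def monitored_autoland(autoland_step, anomaly_seen):
    # autoland_step(): advances the approach, returns True once on the ground.
    # anomaly_seen(): the judgment call - currently the human's job.
    while True:
        if anomaly_seen():       # deer, truck, whatever - punch the abort
            return "go around"
        if autoland_step():      # the solved "put aircraft on pavement" part
            return "landed"

# Trivial demo: an anomaly spotted on the second check triggers a go-around.
steps = iter([False, False, True])
flags = iter([False, True, False])
print(monitored_autoland(lambda: next(steps), lambda: next(flags)))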
I assure you that the recent not-so-good landing at Heathrow was NOT done on autopilot - we do NOT have a system that could handle that sort of malfunction, which is exactly why we still have human pilots on board. If that airplane had been an automated aircraft, all humans aboard would almost certainly have died. Computers do not have the capability to manage that sort of scenario. Maybe one day they will, but I have no idea when that will be.
Jadeite wrote:
And of course, given that they'd be launching from military airfields from which civilian air traffic is excluded, crowded skies shouldn't be a problem.

Wicked Pilot has stated in a prior thread that when they launch/land UAV's they have to exclude all military traffic from the airspace, too. You can't safely put as many UAV's in a volume of airspace as you can manned aircraft - the UAV's need much more clearance around them than manned aircraft to avoid mid-air collisions.
Also, in the US there is an overlap of military and civilian airspace. Even as a student pilot I routinely passed through military airspace. The military routinely passes through civilian airspace. Some commercial civilian flights routinely pass through military airspace. Civilian pilots (including yours truly) also routinely use military navigational beacons. GPS started as a military navigation system. At present, UAV's can be highly disruptive to other operations, both civilian and military. The consensus in the aviation world is that to be truly practical UAV's need to be able to "see and avoid" at least as well as a human student pilot. "Better programming" won't do it - it's a sensory problem.
Broomstick wrote:
Take the landing on an airstrip example. First, you need to ID the landing area. Then you need to make sure that it's clear of obstruction, not full of potholes, not on fire, not taken over by the enemy, not iced over, etc etc etc. A human does this in a few seconds at most, and can make an appropriate decision. When we have a computer that recognizes as much "day to day" stuff as a human, it will be a huge milestone.

Jadeite wrote:
Again, as a brainstorming solution, have the airfield transmit an 'all clear' signal to the UCAV to alert it that landing conditions are fine.

What? Every single airfield in the world is wired for landing? Not true, even in the industrial world. In wartime, power failures are a fact of life. You're assuming there will NEVER be a power loss or malfunction in whatever transmits the "all clear".
A human pilot can make a determination of landing conditions with only his/her own senses - a UAV can not.
OK, the field is saying "all clear" - who or what makes that determination? Can the signal be faked? Humans can question apparently valid signals. A number of years ago in the UK someone acquired an aviation transceiver and was broadcasting false instructions to airplanes over London. They had the jargon correct, but because what they were telling the airplanes to do was different than usual, and the pilots could detect no reason for the deviations from the norm, they questioned the orders and the deception was uncovered with no harm done. Your UAV would have simply obeyed what appeared to be normal orders.
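The difference is easy to sketch (toy Python; the message format and the "expected" list are invented): the machine's check is purely syntactic - is this a well-formed instruction? - while the London pilots also asked whether the instruction was normal:

EXPECTED_RUNWAYS = {"27L", "27R"}  # what this field usually assigns

def machine_accepts(msg):
    # Obeys anything well-formed - correct jargon is enough.
    return msg.get("type") == "landing_clearance" and "runway" in msg

def human_accepts(msg):
    # Also notices a deviation from the norm and queries it.
    return machine_accepts(msg) and msg["runway"] in EXPECTED_RUNWAYS

spoof = {"type": "landing_clearance", "runway": "09"}  # valid jargon, abnormal order
print(machine_accepts(spoof), human_accepts(spoof))    # True False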
Jadeite wrote:
If this signal isn't received, it could bring up its sensors and go over a checklist, i.e. "Are there heat plumes rising from the airfield, does a terrain mapping radar detect holes in the strip?" and so on. Then it could be a matter of consulting a decision making tree and deciding to either go ahead and land or divert to another field if there's one in range (or if there isn't, and the decision making tree concludes that the field has been overrun for example, wipe its hard drive and ditch).

The problem is that the sensors we have for the UAV's are not as good at determining what is going on in the environment as human eyeballs. While the systems we have are adequate for a simulated environment, and sometimes adequate for the real world when nothing unexpected happens, in actual combat things get VERY chaotic. UAV sensory capabilities may be excellent for navigation, but they aren't good at detecting their immediate, actual environment. UAV's, like all machines, are excellent at routine operations but can't handle anomalies as well as humans. Those are two major problems that still need to be solved before we can talk about replacing human pilots in combat situations.
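That checklist-plus-decision-tree is trivial to write down (toy Python; every sensor name and branch is invented) - and that's exactly the problem, because it can only ever be as good as the sensor flags feeding it:

def landing_decision(all_clear_received, sensors, divert_in_range):
    # sensors: dict of boolean anomaly flags from onboard instruments.
    if all_clear_received:
        return "land"
    if not (sensors.get("heat_plumes") or sensors.get("runway_holes")):
        return "land"        # ...assuming the sensors saw everything
    if divert_in_range:
        return "divert"
    return "wipe and ditch"  # field presumed overrun, no alternate in range

# The catch: a deer on the runway raises no flag at all, so the tree says "land".
print(landing_decision(False, {"heat_plumes": False, "runway_holes": False}, True))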