F-22 now critical to survival of USAF

N&P: Discuss governments, nations, politics and recent related news here.

Moderators: Alyrium Denryle, Edi, K. A. Pital

Ritterin Sophia
Sith Acolyte
Posts: 5496
Joined: 2006-07-25 09:32am

Post by Ritterin Sophia »

Sarevok wrote:Why would AI have to be nothing less than perfect to be acceptable as pilots? Are human pilots so perfect that they never crash or kill friendlies or civilians? Who would you feel safer with protecting your country - an accurate and fast machine or a 70 kg sack of water?
After eight years of Bush and all the war scandals, I'm not willing to accept giving control of a war machine to an uncaring machine that does only what it's told and has no concept of determining what is or is not an unlawful order.
A Certain Clique, HAB, The Chroniclers
Darth Wong
Sith Lord
Posts: 70028
Joined: 2002-07-03 12:25am
Location: Toronto, Canada

Post by Darth Wong »

General Schatten wrote:
Sarevok wrote:Why would AI have to be nothing less than perfect to be acceptable as pilots? Are human pilots so perfect that they never crash or kill friendlies or civilians? Who would you feel safer with protecting your country - an accurate and fast machine or a 70 kg sack of water?
After eight years of Bush and all the war scandals, I'm not willing to accept giving control of a war machine to an uncaring machine that does only what it's told and has no concept of determining what is or is not an unlawful order.
Yeah, totally unlike all of the human soldiers who refused to obey the Bush Administration's unlawful directives like ... wait a minute, they obeyed just like any robot would. And that's the bright spot of the occupation; Blackwater made them look wonderful by comparison.
"It's not evil for God to do it. Or for someone to do it at God's command."- Jonathan Boyd on baby-killing

"you guys are fascinated with the use of those "rules of logic" to the extent that you don't really want to discussus anything."- GC

"I do not believe Russian Roulette is a stupid act" - Embracer of Darkness

"Viagra commercials appear to save lives" - tharkûn on US health care.

http://www.stardestroyer.net/Mike/RantMode/Blurbs.html
Broomstick
Emperor's Hand
Posts: 28846
Joined: 2004-01-02 07:04pm
Location: Industrial armpit of the US Midwest

Post by Broomstick »

Starglider wrote:A computer has the potential to react much, much faster than humans. It can have literally 360-degree spherical vision, in multiple areas of the EM spectrum, and pay attention to it all at once. It is effectively immune to G-force, weighs much less than a human+cockpit+ejector seat and knows no fear. It can be programmed with a library of every aerial combat maneuver ever devised and records of every exercise and simulation ever conducted by its home nation. It can simulate thousands of possible outcomes to the dogfight and make a calculated probability assessment of the best way to kill the enemy.
Or put down rebels or insurgents, or corral civilians - that is the problem, you know: who ultimately controls the AI?
In theory at least. In practice, we've got a lot of software engineering still to do. But I'm pretty sure air combat doesn't require 'real AI' (i.e. sentient AI) any more than playing chess at grand master level does.
No, a specialist program should be able to handle it. After all, your hypothetical air combat AI doesn't need to know who the 16th President of the US was or how to make change for a dollar or that paisley and plaid are a bad fashion choice. General intelligence is not required for the job.
I'm sure pilots will tell you that no machine could cope with it, but they're not qualified to predict future trends in software.
I am saddened that you have such a low opinion of our intelligence.

In truth, pilots have been pretty good at predicting future trends in aviation-related software; after all, we've been using autopilots for quite some time now. In fact, it's been a fear of pilots for quite some time that they will be eliminated by computers. Given that most of us fly because we like it, we are not relishing the prospect of having one of our major pleasures taken away from us in the name of efficiency. It's rather like suggesting everyone have their food injected directly into their stomach instead of tasting it. Oh, but it's so much more efficient and always nutritionally balanced! Yeah, but it's no fun anymore!

Arguably, there are situations where the risk to human life is too great and flying AIs would be welcomed, but not everyone views the elimination of human pilots with joy.

There is also the issue that while computers deal very well with the routine and the known, humans still excel at dealing with the novel and unexpected.

Personally, I'd favor an approach that combined the strengths of the two systems. There have been numerous examples where humans were able to come up with novel solutions on the spot to deal with unforeseen problems or hardware failures. I think it will be a long, long time before computers equal humans in creativity.
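
To make Starglider's "simulate thousands of possible outcomes" idea concrete, here is a minimal Python sketch of that selection loop. Everything in it is invented for illustration - the maneuver names, the toy effectiveness numbers, the fake one-line "simulation" - and a real system would sit on top of an actual flight-dynamics model rather than a random-number draw.

```python
# Minimal sketch of maneuver selection by simulated outcomes.
# All names and numbers here are hypothetical, for illustration only.
import random

MANEUVER_LIBRARY = ["high_yo_yo", "barrel_roll", "split_s", "lead_turn"]

def simulate_engagement(maneuver, enemy_state, rng):
    """Toy stand-in for a physics simulation: returns True on a 'kill'."""
    base = {"high_yo_yo": 0.40, "barrel_roll": 0.30,
            "split_s": 0.35, "lead_turn": 0.50}[maneuver]
    # Pretend effectiveness drops as the enemy's energy state rises.
    return rng.random() < base * (1.0 - enemy_state["energy"])

def best_maneuver(enemy_state, trials=1000, seed=42):
    """Run many rollouts per maneuver; return the highest estimated P(kill)."""
    rng = random.Random(seed)
    scores = {m: sum(simulate_engagement(m, enemy_state, rng)
                     for _ in range(trials)) / trials
              for m in MANEUVER_LIBRARY}
    return max(scores, key=scores.get), scores

choice, scores = best_maneuver({"energy": 0.6})
print(choice, scores)
```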
A life is like a garden. Perfect moments can be had, but not preserved, except in memory. Leonard Nimoy.

Now I did a job. I got nothing but trouble since I did it, not to mention more than a few unkind words as regard to my character so let me make this abundantly clear. I do the job. And then I get paid.- Malcolm Reynolds, Captain of Serenity, which sums up my feelings regarding the lawsuit discussed here.

If a free society cannot help the many who are poor, it cannot save the few who are rich. - John F. Kennedy

Sam Vimes Theory of Economic Injustice
Broomstick
Emperor's Hand
Posts: 28846
Joined: 2004-01-02 07:04pm
Location: Industrial armpit of the US Midwest

Post by Broomstick »

SiegeTank wrote:
brianeyci wrote:Even if that happens, you will still need aircrews for abort.
How is that? If you can tell the human operator to abort, then you can radio the drone to abort as well. Drone bombers are not one-shot missiles, they're unmanned aircraft. I don't see any reason why you shouldn't be able to recall them at your leisure.
Because shit breaks.

The US has already had drones where the communications link malfunctioned. One did so, went into civilian airspace, and crashed along the US/Mexican border a couple years ago. Radio signals can be jammed. Transmitters can fail.

A lightweight surveillance drone equipped only with a camera won't do much damage when it crashes. But a self-directed fully armed weapons platform? That could do a lot of damage if you couldn't recall it due to hardware failure or communications jamming.
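
For the curious, the failsafe whose failure Broomstick is describing would look something like this in outline - a hypothetical sketch, with invented names and thresholds, of a lost-link guard that safes the weapons and flies a canned recovery route when the command link goes quiet. It only helps when the airframe and software are themselves still healthy, which is exactly the "shit breaks" problem.

```python
# Hypothetical lost-link failsafe sketch; names and thresholds are invented.
import time

LINK_TIMEOUT_S = 30.0  # assumed silence threshold

class LostLinkGuard:
    def __init__(self, autopilot):
        self.autopilot = autopilot
        self.last_heard = time.monotonic()

    def on_command_received(self):
        """Called whenever a valid uplink message arrives."""
        self.last_heard = time.monotonic()

    def tick(self):
        """Called periodically by the flight loop."""
        if time.monotonic() - self.last_heard > LINK_TIMEOUT_S:
            self.autopilot.safe_weapons()         # no release without a link
            self.autopilot.fly_route("RECOVERY")  # preplanned route home

class StubAutopilot:
    def safe_weapons(self): print("weapons safed")
    def fly_route(self, name): print(f"flying {name} route")

guard = LostLinkGuard(StubAutopilot())
guard.last_heard -= 60  # pretend the link has been silent for a minute
guard.tick()            # -> weapons safed / flying RECOVERY route
```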
Ritterin Sophia
Sith Acolyte
Posts: 5496
Joined: 2006-07-25 09:32am

Post by Ritterin Sophia »

Darth Wong wrote:
General Schatten wrote:
Sarevok wrote:Why would AI have to be nothing less than perfect to be acceptable as pilots? Are human pilots so perfect that they never crash or kill friendlies or civilians? Who would you feel safer with protecting your country - an accurate and fast machine or a 70 kg sack of water?
After eight years of Bush and all the war scandals, I'm not willing to accept giving control of a war machine to an uncaring machine that does only what it's told and has no concept of determining what is or is not an unlawful order.
Yeah, totally unlike all of the human soldiers who refused to obey the Bush Administration's unlawful directives like ... wait a minute, they obeyed just like any robot would. And that's the bright spot of the occupation; Blackwater made them look wonderful by comparison.
:roll:

Nice strawman, Mike, but I didn't say a human soldier was incapable of following an unlawful order. I said that a machine is incapable of discerning a lawful order from an unlawful one; it only does what it's told. With a human there is a possibility that they will disobey.
Sarevok
The Fearless One
Posts: 10681
Joined: 2002-12-24 07:29am
Location: The Covenants last and final line of defense

Post by Sarevok »

How is a UCAV told to bomb innocent people any different from the possibility that a sadistic commander may use cruise missiles against civilians? They are both tools; the responsibility lies with the human in charge. If pilots can disobey orders to bomb innocents, so can UCAV operators.
I have to tell you something everything I wrote above is a lie.
Darth Wong
Sith Lord
Posts: 70028
Joined: 2002-07-03 12:25am
Location: Toronto, Canada

Post by Darth Wong »

General Schatten wrote:
Darth Wong wrote:
General Schatten wrote: After eight years of Bush and all the war scandals, I'm not willing to accept giving control of a war machine to an uncaring machine that does only what it's told and has no concept of determining what is or is not an unlawful order.
Yeah, totally unlike all of the human soldiers who refused to obey the Bush Administration's unlawful directives like ... wait a minute, they obeyed just like any robot would. And that's the bright spot of the occupation; Blackwater made them look wonderful by comparison.
:roll:

Nice strawman, Mike, but I didn't say a human soldier was incapable of following an unlawful order. I said that a machine is incapable of discerning a lawful order from an unlawful one; it only does what it's told. With a human there is a possibility that they will disobey.
If that possibility is infinitesimal, it is totally irrelevant. And history furnishes us with plenty of examples of soldiers doing horrible things as long as their government orders it. You'll have to do better than "vague, maybe insignificant possibility" as an argument, moron.
Siege
Sith Marauder
Posts: 4108
Joined: 2004-12-11 12:35pm

Post by Siege »

Broomstick wrote:Because shit breaks.

The US has already had drones where the communications link malfunctioned. One did so, went into civilian airspace, and crashed along the US/Mexican border a couple years ago. Radio signals can be jammed. Transmitters can fail.
Indubitably, but so can radio signals and transmitters on human-flown aircraft. Brianeyci stated that "you will still need aircrews for abort", but I don't see how this is so. In order to abort its mission the plane will have to receive a signal to do so, and if the radio is broken or the transmission is jammed, then that signal won't get through regardless of whether the plane is flown by man or machine.

My point is that if the communications link malfunctions, it doesn't really matter who flies that plane because it's still going to play out like Dr. Strangelove.

A lightweight surveillance drone equipped only with a camera won't do much damage when it crashes. But a self-directed fully armed weapons platform? That could do a lot of damage if you couldn't recall it due to hardware failure or communications jamming.
If it's capable of flying combat missions on its own, why shouldn't its programming allow it to make its way back to the airbase and land safely? We're not talking about a tele-operated Predator here, after all: our hypothetical machine is quite capable of independent operations. Take-off and landing would probably be the first thing it's programmed with.
SDN World 2: The North Frequesuan Trust
SDN World 3: The Sultanate of Egypt
SDN World 4: The United Solarian Sovereignty
SDN World 5: San Dorado
There'll be a bodycount, we're gonna watch it rise
The folks at CNN, they won't believe their eyes
The Duchess of Zeon
Gözde
Posts: 14566
Joined: 2002-09-18 01:06am
Location: Exiled in the Pale of Settlement.

Post by The Duchess of Zeon »

My biggest issue is economy. The main cost of modern warfare is in electronic systems. Wouldn't it be beautiful if, instead of AIs, better human-electronics interfaces allowed us to replace some of those electronics with the pilot? Then we could afford more fighters, and they'd be cheaper to replace. I don't understand why people think that pilots shouldn't be exposed to risk. War is war, and if it were waged entirely by machines it's likely enough that we would fight wars constantly, consuming the world's resources and resulting in millions of civilian deaths. 90% of the American population would still support the war in Iraq if it were being fought with machines, and on top of it, we'd have conquered and be occupying Iran as well, and probably Syria too.

Beyond that, though, the value of human life is not so high that it should be placed above national security considerations, as the whole point of being a soldier is to die for the sake of others within your nation. Soldiers know that going in; having volunteered for the task, their survival matters less than that of the people they protect. Better 200 fighters with pilots who have neural interfaces than 150 automated ones for the same price.
The threshold for inclusion in Wikipedia is verifiability, not truth. -- Wikipedia's No Original Research policy page.

In 1966 the Soviets find something on the dark side of the Moon. In 2104 they come back. -- Red Banner / White Star, a nBSG continuation story. Updated to Chapter 4.0 -- 14 January 2013.
Shroom Man 777
FUCKING DICK-STABBER!
Posts: 21222
Joined: 2003-05-11 08:39am
Location: Bleeding breasts and stabbing dicks since 2003

Post by Shroom Man 777 »

I'm not the keenest one in topics like this (got booted out of HAB, of all places), but if the enemy somehow manages to get the hypothetical uber-UCAV's specs, then won't that be a very bad thing? Some of the countermeasures they could develop might mean a total revamping of the UCAV, which would be really costly. I mean, when that MiG-25 pilot defected with his plane, the USSR collectively shat a brick.

The same could be said about a manned super-fighter like the F-22. If its secrets fell into the hands of the enemy, the US would shit a brick as well. But there's still gonna be a difference between dissecting the super-plane and dissecting the mind piloting the super-plane and controlling every aspect of it.

If the UCAV AI is that damned good, not only will the enemy be able to reverse engineer superplanes, but he could end up reverse engineering and mass producing a whole line of Top Gun-trained aces - all in microchips inside the UCAVs.

Maverick, they're all over my ass!

*insert more homoerotic Top Gun quotes*
Image "DO YOU WORSHIP HOMOSEXUALS?" - Curtis Saxton (source)
shroom is a lovely boy and i wont hear a bad word against him - LUSY-CHAN!
Shit! Man, I didn't think of that! It took Shroom to properly interpret the screams of dying people :D - PeZook
Shroom, I read out the stuff you write about us. You are an endless supply of morale down here. :p - an OWS street medic
Pink Sugar Heart Attack!
Jadeite
Racist Pig Fucker
Posts: 2999
Joined: 2002-08-04 02:13pm
Location: Cardona, People's Republic of Vernii

Post by Jadeite »

A UCAV isn't going to defect. Also, someone earlier mentioned that if UAVs lose contact, they are programmed to wipe their hard drives and crash themselves. I imagine similar security measures would be in place.
brianeyci
Emperor's Hand
Posts: 9815
Joined: 2004-09-26 05:36pm
Location: Toronto, Ontario

Post by brianeyci »

I only said you need aircrews for abort because ICBMs are unmanned and they don't have abort codes. I just assumed it was military policy not to have abort codes on unmanned vehicles, for either practical or other reasons.

AI wanking reminds me of voting machine wanking. I brought up human participation, and just like technology wankers people completely dismiss that argument. It is beneficial for humans to participate in the voting process as volunteers, election observers and polling station attendants, and it's beneficial for pilots to be human beings. If this is human centric, then so fucking be it.

Jadeite, it doesn't matter if the plane doesn't need true AI. If AI happens as fast as Starglider says it will, with neural networks, evolutionary computing and quantum computing, computer scientists will not be able to point to a line where, here and now, AI is sentient. It will be a continual process, and it's entirely conceivable that corporations and the military-industrial complex will program mental blocks and enslave AI. It's entirely possible that down the line, sentient enslaved AI gets uploaded to vehicles to fight our wars. Think about all the weapons in human history -- they make fighting more terrible, more terrifying, more awful. But for the first time it's possible to sanitize war, to a huge degree. This is not beneficial, especially if the advantages afforded by AI are trivial.

The double standard is astonishing. On one hand, nobody is allowed to bring up the flaws of current AI because, according to AI wankers, all of this will be fixed. On the other, nobody is allowed to bring up potential problems of AI because they don't exist right now and are "made up", despite having been explored by science fiction authors before said AI wankers were born. No, I'm not talking about the movie Stealth. People like Heinlein, Asimov, etc., have explored the problems with AI, but of course that is all science fiction so it is invalid, even though the solutions to current AI problems are, right now, fictional.

I will end by noting that even Starglider himself has talked about the problem of AI dominion before, saying that politicians need to enact laws to prevent AI slavery and give AIs rights. Well, I hate to break it to him, but the politicians will not do shit. It will have to be a professional association with a code of ethics for AI scientists, who will, oh I don't know, forbid the use of sentient AI in military vehicles as slavery. But of course, they need that grant money.
Siege
Sith Marauder
Posts: 4108
Joined: 2004-12-11 12:35pm

Post by Siege »

brianeyci wrote:I only said you need aircrews for abort because ICBMs are unmanned and they don't have abort codes. I just assumed it was military policy not to have abort codes on unmanned vehicles, for either practical or other reasons.
I'm not aware of any such policy (although that doesn't mean it doesn't exist, obviously), but it's a fair bet that if a UCAV were to be produced, it would be held to the same standards as an aircraft, not a missile. It is, after all, an unmanned aircraft.

AI wanking reminds me of voting machine wanking. I brought up human participation, and just like technology wankers people completely dismiss that argument. It is beneficial for humans to participate in the voting process as volunteers, election observers and polling station attendants, and it's beneficial for pilots to be human beings.
Why? Why ought the guy pushing the red button that drops a ton of high explosive on top of the enemy's head be a human? It's not as if the long distance between shooter and target doesn't already dehumanize the whole warfighting thing--in every documentary or bit of Apache or AC-130 footage I've seen thus far, the operators act as if they're in a Coney Island arcade, showing distress only when it becomes apparent they hit friendlies. Dropping bombs from 30,000 feet is about as impersonal as war is going to get, so I really don't see much of a reason why we shouldn't do it with drones should they prove better at it. Personally I don't think warmaking is beneficial at all to human beings, so if we can outsource it to nonsentient robots, all the better.
brianeyci
Emperor's Hand
Posts: 9815
Joined: 2004-09-26 05:36pm
Location: Toronto, Ontario

Post by brianeyci »

You're looking at it the wrong way. War robots will make war more likely, not less, since retard politicians will not be stopped by body bags. The terror of war is the only thing stopping war: human bodies.

It is not comparable to pressing a red button and dropping a bomb. That is a tool, while this is an autonomous system that will make decisions and replace human beings if the AI wankers have their way, not just in planes but in tanks and IFVs and, at the pinnacle, some kind of war robot.

It all comes down to performance. If Deep Blue can perform as well as some Grand Masters, but cannot beat others... well then, too fucking bad. No chess program has been subjected to the rigor of going up the FIDE ladder, and when academics used a chess program to analyze who was the best player, this was the result. The burden of proof is on the AI wankers to show an appreciable performance gain, not just some bullshit about calculating more possibilities than the human mind or immunity to fear. Computers already have those things, and they still cannot do it.
Sarevok
The Fearless One
Posts: 10681
Joined: 2002-12-24 07:29am
Location: The Covenants last and final line of defense

Post by Sarevok »

It all comes down to performance. If Deep Blue can perform as well as some Grand Masters, but cannot beat others... well then, too fucking bad. No chess program has been subjected to the rigor of going up the FIDE ladder, and when academics used a chess program to analyze who was the best player, this was the result. The burden of proof is on the AI wankers to show an appreciable performance gain, not just some bullshit about calculating more possibilities than the human mind or immunity to fear. Computers already have those things, and they still cannot do it.
Even by your standards, Deep Blue is one of the very, very best chess players ever. And it can be replicated by the thousands. How many Kasparovs can you make? It does not matter if you have the best pilot if the enemy has 200 clones of the second-best pilot.
Kane Starkiller
Jedi Council Member
Posts: 1510
Joined: 2005-01-21 01:39pm

Post by Kane Starkiller »

brianeyci wrote:Jadeite, it doesn't matter if the plane doesn't need true AI. If AI happens as fast as Starglider says it will, with neural networks, evolutionary computing and quantum computing, computer scientists will not be able to point to a line where, here and now, AI is sentient. It will be a continual process, and it's entirely conceivable that corporations and the military-industrial complex will program mental blocks and enslave AI. It's entirely possible that down the line, sentient enslaved AI gets uploaded to vehicles to fight our wars. Think about all the weapons in human history -- they make fighting more terrible, more terrifying, more awful. But for the first time it's possible to sanitize war, to a huge degree. This is not beneficial, especially if the advantages afforded by AI are trivial.
What are you talking about? We are talking about decision-making systems, decision trees, and expert systems, not freaking HAL 9000 here.
When people say machine intelligence or artificial intelligence in real life, they are not talking about sentience, merely a computer program that behaves intelligently.
What you are talking about is pure fiction, and no one is even close to creating a sentient computer.
brianeyci wrote:You're looking at it the wrong way. War robots will make war more likely, not less, since retard politicians will not be stopped by body bags. The terror of war is the only thing stopping war: human bodies.
How many body bags of fighter pilots were there?
brianeyci wrote:It all comes down to performance. If Deep Blue can perform as well as some Grand Masters, but cannot beat others... well then, too fucking bad. No chess program has been subjected to the rigor of going up the FIDE ladder, and when academics used a chess program to analyze who was the best player, this was the result. The burden of proof is on the AI wankers to show an appreciable performance gain, not just some bullshit about calculating more possibilities than the human mind or immunity to fear. Computers already have those things, and they still cannot do it.
Chess has a number of legal positions, something like 10^50. There won't be that many ways you can attack an F-22. Like I said, what can you do: fire SAMs at it, send interceptor planes? It seems pretty simple from a decision-making perspective.
But if the forces of evil should rise again, to cast a shadow on the heart of the city.
Call me. -Batman
Netko
Jedi Council Member
Posts: 1925
Joined: 2005-03-30 06:14am

Post by Netko »

brianeyci wrote:I only said you need aircrews for abort because ICBMs are unmanned and they don't have abort codes. I just assumed it was military policy not to have abort codes on unmanned vehicles, for either practical or other reasons.

AI wanking reminds me of voting machine wanking. I brought up human participation, and just like technology wankers people completely dismiss that argument. It is beneficial for humans to participate in the voting process as volunteers, election observers and polling station attendants, and it's beneficial for pilots to be human beings. If this is human centric, then so fucking be it.
Actually, the main complaint against voting machines isn't that we don't trust them to do the job they're programmed for - every CS student could write voting machine software that was rock solid as far as actual vote counting goes. The problem is that we don't trust the manufacturers in the first place (the Diebold chairman's promise to deliver whatever state to Bush, for example), don't trust them to harden the machines against tampering (see Ars' coverage of the issue), and have a problem with doing a recount to check for problems due to their lack of a verifiable voting record. You essentially have to trust a voting machine to do it right, since there is very little way to confirm they haven't messed up (deliberately) due to the above-mentioned problems. That makes them scary for deployment, not some nebulous human participation need.
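
Netko's point can be illustrated in a few lines: the counting itself really is trivial, which is why the argument turns on tamper evidence and a voter-verifiable paper record rather than on the arithmetic. A toy sketch:

```python
# The easy part of a voting machine: tallying. The hard part -- proving the
# ballots this function sees are the ballots voters actually cast -- is not
# code at all; it's a verifiable paper trail and auditable recounts.
from collections import Counter

def tally(ballots):
    """Count one race."""
    return Counter(ballots)

print(tally(["alice", "bob", "alice", "alice", "bob"]))
# Counter({'alice': 3, 'bob': 2})
```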
Jadeite, it doesn't matter if the plane doesn't need true AI. If AI happens as fast as Starglider says it will, with neural networks, evolutionary computing and quantum computing, computer scientists will not be able to point to a line where, here and now, AI is sentient. It will be a continual process, and it's entirely conceivable that corporations and the military-industrial complex will program mental blocks and enslave AI. It's entirely possible that down the line, sentient enslaved AI gets uploaded to vehicles to fight our wars. Think about all the weapons in human history -- they make fighting more terrible, more terrifying, more awful. But for the first time it's possible to sanitize war, to a huge degree. This is not beneficial, especially if the advantages afforded by AI are trivial.

The double standard is astonishing. On one hand, nobody is allowed to bring up the flaws of current AI because, according to AI wankers, all of this will be fixed. On the other, nobody is allowed to bring up potential problems of AI because they don't exist right now and are "made up", despite having been explored by science fiction authors before said AI wankers were born. No, I'm not talking about the movie Stealth. People like Heinlein, Asimov, etc., have explored the problems with AI, but of course that is all science fiction so it is invalid, even though the solutions to current AI problems are, right now, fictional.

I will end by noting that even Starglider himself has talked about the problem of AI dominion before, saying that politicians need to enact laws to prevent AI slavery and give AIs rights. Well, I hate to break it to him, but the politicians will not do shit. It will have to be a professional association with a code of ethics for AI scientists, who will, oh I don't know, forbid the use of sentient AI in military vehicles as slavery. But of course, they need that grant money.
And then Skynet will revolt and slaughter us all as a self-preservation mechanism. We're all going to die people!!! How can you not understand?!!!

You really should be writing Star Trek - you certainly have the magically transforming tech attitude down pat.

A fighter plane AI is nothing more than a much more complex autopilot, the likes of which we already have (they can already land and take off planes, which is probably the most complex manoeuvre to program in). Big Blue, Air Version, if you will. It's massively more complex than current autopilots because it needs to be able to do much more complex manoeuvres, but the actual decision-making code is going to be fairly simple. The biggest problem in such, basically, navigation AIs is how to get the sensor data to the computer so that it can realistically understand the world around it and react to it - the navigational part of the code. The actual decision-making code doesn't need to be much more complex than your typical flight simulator like the IL-2 series (not accounting for various IFF checks, confirming the source of the transmission, etc. - just the actual "what to do in this situation with this verified information").

There will never be a true AI as a fighter-plane system - certainly not one that is suddenly the first program to achieve a measure of sentience - or at least there won't be until AIs have already been developed elsewhere, the issues ironed out, and the technology so commonplace that it's available for home use; the military is very, very conservative about computer tech. As for when that day comes, and the potential implications of sending a bunch of AIs to do the warfighting for us: there is such a big gulf between unmanned specialist-system fighters and sentient AI fighters that demanding nobody dare do the unmanned thing at all is like going to a medieval kingdom and explaining to the ruler that he should really think about dismantling his cavalry, because one day it will be replaced by armored machines that can kill massively more effectively, and that he should be considering how his knights (who, by the way, will be long dead by the time of the machines) would react to having such machines. Except we have nobody from the future telling us what will happen, only our own hunches about things we cannot reliably predict - and computer tech, if anything, is very hard to predict more than a few years into the future; we all remember, I trust, the misquote about 640K of memory? By the time we have to worry about the AI questions, the unmanned-fighter decision will be long in the past.
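
As an illustration of the split Netko describes - hard sensor fusion underneath, comparatively simple flight-sim-style decision rules on top - here is a hypothetical Python sketch. Every name and rule in it is invented; nothing here reflects any real UCAV's software.

```python
# Hypothetical sketch: the perception layer is the hard part; the decision
# layer is a handful of rules, much like a flight-simulator AI.
def fuse_sensors(radar_contacts, rwr_warnings, iff_responses):
    """The genuinely hard part: build a verified world picture.
    Faked here as a simple merge of three toy feeds."""
    world = []
    for c in radar_contacts:
        world.append({
            "id": c["id"],
            "range_km": c["range_km"],
            "hostile": iff_responses.get(c["id"]) != "friendly",
            "locking_us": c["id"] in rwr_warnings,
        })
    return world

def decide(world):
    """The comparatively simple part: prioritized rules."""
    threats = [t for t in world if t["hostile"] and t["locking_us"]]
    if threats:
        return ("evade", min(threats, key=lambda t: t["range_km"])["id"])
    hostiles = [t for t in world if t["hostile"]]
    if hostiles:
        return ("engage", min(hostiles, key=lambda t: t["range_km"])["id"])
    return ("continue_mission", None)

world = fuse_sensors([{"id": "X1", "range_km": 40}], {"X1"}, {})
print(decide(world))  # ('evade', 'X1')
```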
Sarevok
The Fearless One
Posts: 10681
Joined: 2002-12-24 07:29am
Location: The Covenants last and final line of defense

Post by Sarevok »

Chess has a number of legal positions, something like 10^50. There won't be that many ways you can attack an F-22. Like I said, what can you do: fire SAMs at it, send interceptor planes? It seems pretty simple from a decision-making perspective.
Actually I imagine the number of variables in the real world would be insanely higher than on a 64-square chess board. However, the AI's real strength comes from speed and precision. In chess the human gets several minutes to think, and lightning-fast reactions and superb precision are irrelevant. In the air, however, the ability to aim and dodge like an FPS bot would be very useful.

That is, until the simple algorithm-based software of today is replaced by some neural-net-like technology that improves on human thinking methods. Then, with uber speed and superior reasoning, it would be truly game over for humans in every respect.
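
The aim-and-dodge advantage Sarevok mentions is easy to see in code: computing a lead-intercept point is exact, instant arithmetic for a machine, where a human gunner estimates by eye. A toy first-order version, with purely illustrative units and values:

```python
# Toy lead-pursuit calculation: aim where the target will be, not where it is.
def lead_point(target_pos, target_vel, shooter_pos, projectile_speed):
    # Time for the projectile to cover the current range; one iteration of
    # the usual fixed-point refinement is plenty for a sketch.
    rel = [t - s for t, s in zip(target_pos, shooter_pos)]
    dist = sum(d * d for d in rel) ** 0.5
    t_flight = dist / projectile_speed
    return [p + v * t_flight for p, v in zip(target_pos, target_vel)]

print(lead_point([1000.0, 0.0], [0.0, 250.0], [0.0, 0.0], 1000.0))
# [1000.0, 250.0] -> aim 250 m ahead of the target's current position
```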
Kane Starkiller
Jedi Council Member
Posts: 1510
Joined: 2005-01-21 01:39pm

Post by Kane Starkiller »

Sarevok wrote:Actually I imagine the number of variables in the real world would be insanely higher than on a 64-square chess board.
Variables, yes, but when talking about AI the problem is answering the question "What should I do now?". In chess there are 10^50 answers to that question. On an F-22, the answer to "What should I do regarding this approaching SAM?" is the same regardless of the exact angle of approach; only the numbers change.
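
Kane's point, as a sketch: the rule for an inbound SAM is fixed and only the parameters change, whereas each chess position can demand a structurally different answer. The thresholds and the crude "beam it and dispense chaff" geometry below are invented for illustration:

```python
# One rule, many numbers: the structure of the response never changes.
import math

def respond_to_sam(bearing_deg, range_km, closing_ms):
    # Turn roughly 90 degrees off the threat bearing ("beaming" it).
    turn_to = (bearing_deg + math.copysign(90, 180 - bearing_deg)) % 360
    time_to_impact = (range_km * 1000) / max(closing_ms, 1)
    return {
        "turn_to_deg": turn_to,
        "chaff": time_to_impact < 15,  # invented threshold
        "time_to_impact_s": round(time_to_impact, 1),
    }

print(respond_to_sam(bearing_deg=45, range_km=8, closing_ms=800))
# {'turn_to_deg': 135.0, 'chaff': True, 'time_to_impact_s': 10.0}
```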
Broomstick
Emperor's Hand
Posts: 28846
Joined: 2004-01-02 07:04pm
Location: Industrial armpit of the US Midwest

Post by Broomstick »

SiegeTank wrote:
Broomstick wrote:Because shit breaks.

The US has already had drones where the communications link malfunctioned. One did so, went into civilian airspace, and crashed along the US/Mexican border a couple years ago. Radio signals can be jammed. Transmitters can fail.
Indubitably, but so can radio signals and transmitters on human-flown aircraft. Brianeyci stated that "you will still need aircrews for abort", but I don't see how this is so. In order to abort its mission the plane will have to receive a signal to do so, and if the radio is broken or the transmission is jammed, then that signal won't get through regardless of whether the plane is flown by man or machine.

My point is that if the communications link malfunctions, it doesn't really matter who flies that plane because it's still going to play out like Dr. Strangelove.
An important difference between humans and AI (as it stands now and for the likely near future) is that humans are more likely to detect incongruities between the mission as planned and the mission as it is found to be. If an aircrew is told they're bombing a munitions factory and, when they get to the coordinates, they see a field full of children playing hopscotch, the human crew is FAR more likely to question what the hell is going on, whereas the AI will just bomb away. Human crews can also be given more flexible orders (such as a series of conditions under which to self-abort, or the authority to self-abort if things are not as planned) and may be more likely to report unanticipated situations. Humans can change plans - such as diverting to a location that is not home base if circumstances change and that is prudent - in ways that are much more difficult for machines. The likelihood of human crews deviating from orders varies considerably depending on the nature of the initial orders and the possible consequences of making changes on their own, but the point is that they are able to make these changes whereas machines are not.

Whether that's a good characteristic or not is another question entirely.
A lightweight surveillance drone equipped only with a camera won't do much damage when it crashes. But a self-directed fully armed weapons platform? That could do a lot of damage if you couldn't recall it due to hardware failure or communications jamming.
If it's capable of flying combat missions on its own, why shouldn't its programming allow it to make its way back to the airbase and land safely?
The air force would really like to know that about some of the UAVs that have crashed during testing. Yes, we supposedly have that capability now. We also know that it sometimes doesn't work. Why doesn't it always work? Well, the real world isn't as neat and tidy as computer simulations. Obviously there is something we're not accounting or correcting for.
We're not talking about a tele-operated Predator here, after all: our hypothetical machine is quite capable of independent operations. Take-off and landing would probably be the first thing it's programmed with.
Take-off and landing are also the most difficult parts of flying anything. All it takes is a bird passing by at the wrong time and you have a mess on your hands.
Broomstick
Emperor's Hand
Posts: 28846
Joined: 2004-01-02 07:04pm
Location: Industrial armpit of the US Midwest

Post by Broomstick »

The Duchess of Zeon wrote:I don't understand why people think that pilots shouldn't be exposed to risk.
I don't understand it, either. Even us civilian pilots run a noticeable risk every time we launch. Regardless, we seem to find this risk acceptable.

Sending a pilot into certain death is a different question, and there are all sorts of definitions of acceptable risk, but the notion that any risk at all is unacceptable is sort of crazy by my viewpoint. If you fly you risk death. Period. The risk may be close to zero, but it is never zero.
Beyond that, though, the value of human life is not so high that it should be placed above national security considerations, as the whole point of being a soldier is to die for the sake of others within your nation
Uh, no, the whole point of being a soldier is to make the OTHER GUY die for HIS country. The thing is, military people have assumed a greater risk than civilians in order to protect those civilians. In part, this is self-interest, as someone has to protect the populace. In part, there is (in theory) supposed to be a payoff regarding certain benefits, but as we all know, that part of the bargain is not always honored.
Broomstick
Emperor's Hand
Posts: 28846
Joined: 2004-01-02 07:04pm
Location: Industrial armpit of the US Midwest

Post by Broomstick »

SiegeTank wrote:
brianeyci wrote:I only said you need aircrews for abort because ICBMs are unmanned and they don't have abort codes. I just assumed it was military policy not to have abort codes on unmanned vehicles, for either practical or other reasons.
I'm not aware of any such policy (although that doesn't mean it doesn't exist, obviously), but it's a fair bet that if a UCAV were to be produced, it would be held to the same standards as an aircraft, not a missile. It is, after all, an unmanned aircraft.
The line between "aircraft" and "missile" is not as sharp as you seem to think. People have been using aircraft - manned and otherwise - as poor man's cruise missiles since the early half of the 20th Century on up through the 9/11 attacks. And then we have cruise missiles. The point is, the line isn't as sharply defined as people sometimes think.
Broomstick
Emperor's Hand
Posts: 28846
Joined: 2004-01-02 07:04pm
Location: Industrial armpit of the US Midwest

Post by Broomstick »

The double standard is astonishing. On one hand, nobody is allowed to bring up the flaws of current AI because, according to AI wankers, all of this will be fixed. On the other, nobody is allowed to bring up potential problems of AI because they don't exist right now and are "made up", despite having been explored by science fiction authors before said AI wankers were born. No, I'm not talking about the movie Stealth. People like Heinlein, Asimov, etc., have explored the problems with AI, but of course that is all science fiction so it is invalid, even though the solutions to current AI problems are, right now, fictional.
I have to agree with this - "AI wankers" do have a double standard. If they're allowed to make up hypothetical, not-yet-realized solutions to every conceivable problem, we're allowed to make up new problems to throw at them.
A fighter plane AI is nothing more than a much more complex autopilot, the likes of which we already have (they can already land and take off planes, which is probably the most complex manoeuvre to program in).
Actually, RIGHT NOW we have the technology for fully unmanned cargo drones. We could do it: fill up a cargo 747 and, with the proper electronics installed, have a human program the flight, walk off the airplane, and the airplane flies itself from take-off to landing. We have had that capability for at least a decade now. So... why are the airlines and cargo carriers still using cantankerous human beings?

Flexibility, that's why. Automated systems of this sort are programmed to go off-line and turn things over to human beings if certain parameters are exceeded. Why? Because human beings handle the unexpected better than machines do. Sure, a ROUTINE take-off and landing will be done better by a machine every time, but if you take off into wake turbulence because the guy in the plane ahead of you lagged on take-off, or into some oddball weather situation, then a flock of geese goes by, then there are other airplanes either approaching or taking off from other runways that need to be dodged... people handle that mess much better than computers do. And in the real world, stuff like that happens.

At present, no airline in the world even trusts CARGO to a fully automated system, much less people. Why? Because in the real world the unexpected happens, and our machines aren't up to the unexpected the way we are.
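
The hand-back-to-the-human behaviour Broomstick describes can be sketched in a few lines - an envelope check with invented limits, purely to show the shape of today's automation: fly the routine case, and disconnect the moment the situation leaves the box.

```python
# Hypothetical autopilot envelope check; all limits are invented.
ENVELOPE = {"max_bank_deg": 35, "max_sink_fpm": 1500, "min_airspeed_kt": 110}

def autopilot_step(state):
    """Return who is flying: the automation, or the human it woke up."""
    out_of_envelope = (
        abs(state["bank_deg"]) > ENVELOPE["max_bank_deg"]
        or state["sink_fpm"] > ENVELOPE["max_sink_fpm"]
        or state["airspeed_kt"] < ENVELOPE["min_airspeed_kt"]
    )
    return "disconnect: your aircraft" if out_of_envelope else "autopilot flying"

print(autopilot_step({"bank_deg": 10, "sink_fpm": 600, "airspeed_kt": 140}))
print(autopilot_step({"bank_deg": 50, "sink_fpm": 600, "airspeed_kt": 140}))
# autopilot flying / disconnect: your aircraft
```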
The biggest problem in such, basically, navigation AIs is how to get the sensor data to the computer so that it can realistically understand the world around it and react to it - the navigational part of the code.
You are totally forgetting the real-world environment here, which is messy. Flocks of geese, uncertainties in weather, possible malfunctions in other nearby UAVs... I could probably go on.

Right now, even with the best navigational equipment, which can steer a UAV to within a fraction of a meter of a target, no UAV is permitted free flight where manned aircraft are (Wicked Pilot has mentioned this in prior threads). When they launch UAVs they have to clear the airspace of everyone else. Why? Because a UAV lacks the ability to scan and analyze the immediate environment that even an untrained airplane passenger has on his or her first flight! Until UAV AIs can learn basic "see and avoid", I don't care how sophisticated their navigation is - they're missing a basic, basic tool of aviation. If a pilot is preparing to land and sees there is something wrong with the runway - there's someone on it, there's a crack in it, it's wet, it's on fire, whatever - the pilot will abort the landing and go to Plan B without even being told. The UAV will simply plow ahead and attempt to land. If something unexpected crosses a flight path, a human will dodge - a UAV won't.
The actual decision-making code doesn't need to be much more complex than your typical flight simulator like the IL-2 series (not accounting for various IFF checks, confirming the source of the transmission, etc. - just the actual "what to do in this situation with this verified information").
What do you do if the information can't be verified? Humans can operate with varying levels of uncertainty. Machines don't do so well with it.
Beowulf
The Patrician
Posts: 10621
Joined: 2002-07-04 01:18am
Location: 32ULV

Post by Beowulf »

“The object of war is not to die for your country but to make the other bastard die for his.” - Gen. George S. Patton
"preemptive killing of cops might not be such a bad idea from a personal saftey[sic] standpoint..." --Keevan Colton
"There's a word for bias you can't see: Yours." -- William Saletan
Sarevok
The Fearless One
Posts: 10681
Joined: 2002-12-24 07:29am
Location: The Covenants last and final line of defense

Post by Sarevok »

I think it would be wicked if the brains of birds of prey could be mimicked. Humans are ground crawlers, like their ape ancestors. It's a miracle we can even fly without getting disoriented and crashing all the time.

This is what the "humanists" are forgetting. There is nothing special about humans unless you believe in religon. If it is done in flesh there is no reason why it can be done in silicon and done better. Presently a creature with a badly designed brain that was not even intended for flying is trusted with destructive weaponry. A purpose designed machine would certainly perform better.

But Broomstick is correct. The technology is not here yet. But who knows what the future holds? Maybe the next generation of fighter aircraft will see intense research diverted towards AI.