F-22 now critical to survival of USAF

N&P: Discuss governments, nations, politics and recent related news here.

Moderators: Alyrium Denryle, Edi, K. A. Pital

Jadeite
Racist Pig Fucker
Posts: 2999
Joined: 2002-08-04 02:13pm
Location: Cardona, People's Republic of Vernii

Post by Jadeite »

brianeyci wrote: Well if you can tell an ICBM to launch you can tell an ICBM to disarm, right? I always just assumed this was for security reasons.
No. There is no means to disarm or self-destruct an ICBM in flight.
Regardless, AV is right -- there's absolutely no reason to upload an AI or even near-AI into the F-22. The example Starglider listed -- the DMZ -- makes sense because the DMZ is no-man's land and has to be constantly watched, a perpetual state of war; automating repetitive, dangerous tasks there is sensible. That is a world of difference from accepting AI pilots, pilots who have to be unquestioningly loyal and prepared for dynamic situations. If an AI increases the F-22's kill ratio against fourth-generation aircraft from 30 to 1 to 32 to 1, big fucking deal. Third world nations do not field that many modern aircraft, and there is more than one F-22. Because of the n-squared law, air defenses will be completely destroyed whether it's 30 to 1 or 100 to 1 with no losses to the American side. It's possible AI gives no real kill ratio advantage. No advantage, no point.
There are other advantages besides kill ratio, which have already been brought up.

In particular: no training costs for a computer-controlled aircraft. Pilots and all their support costs can be eliminated (training, pay, housing, medical, etc), along with ease of "retraining", so to speak. New aircraft design? Either produce a new control module and stick it in, or patch an existing one to handle it. And of course, there are performance advantages in design to be gained by not needing to worry about crew (in fighters it might not be much space, but long-range aircraft often include galleys and toilets, all of which is space and mass that could be used for other things).
Starglider
Miles Dyson
Posts: 8709
Joined: 2007-04-05 09:44pm
Location: Isle of Dogs

Post by Starglider »

Sarevok wrote:Starglider is talking about autonomous AI not remote controlled kites of doom like the Predator. Presumably these would be no more vulnerable to jamming than a manned jet.
Exactly. There is no reason that a UCAV cannot have all the sensors a human pilot has and much more. HD cameras are already better than eyeballs. In at least one way the UCAV is actually harder to jam: a good zap with a high-powered laser (not a silly little laser pointer, a military laser with a proper tracking mount) will blind a pilot and leave them utterly screwed. Cameras are much quicker to recover from dazzle and more resistant to permanent damage, and even if you did somehow manage to burn them out, the UCAV still has radar/IR/lidar/etc while the pilot can't see their screens.

Show me an example of UAVs being hacked into by the enemy in real life and I might take the hacking accusation seriously.

Ditto Jadeite's comments about why this is valuable.
brianeyci
Emperor's Hand
Posts: 9815
Joined: 2004-09-26 05:36pm
Location: Toronto, Ontario

Post by brianeyci »

Jadeite wrote:
brianeyci wrote: Well if you can tell an ICBM to launch you can tell an ICBM to disarm, right? I always just assumed this was for security reasons.
No. There is no means to disarm or self-destruct an ICBM in flight.
Your response is out of context. The sentence above was in response to a question about why AI planes couldn't abort, and it was a rhetorical question. I know an ICBM cannot abort; see two posts ago.
There are other advantages besides kill ratio, which have already been brought up.

In particular: no training costs for a computer-controlled aircraft. Pilots and all their support costs can be eliminated (training, pay, housing, medical, etc), along with ease of "retraining", so to speak. New aircraft design? Either produce a new control module and stick it in, or patch an existing one to handle it. And of course, there are performance advantages in design to be gained by not needing to worry about crew (in fighters it might not be much space, but long-range aircraft often include galleys and toilets, all of which is space and mass that could be used for other things).
All that training is to create a loyal, dedicated and skilled soldier. The whole thing is a package deal. What makes you think that these advantages are of any worth at all, compared to the trade off of not having a pilot go renegade? Show me an instance of a pilot becoming a traitor and maybe it's time to go AI.

As for the hacking -- computer programs can be bugged and have millions, even trillions, of lines of code. A programmer could hide a subroutine in those millions of lines and create a monster. Meanwhile, how often do soldiers become traitors or mutiny? It is not the probability of hacking but the possibility. Even if 1% of AIs decide to play with missiles like a real child would, it is unacceptable. If a real human pilot has zero chance of dropping a bomb on the White House and an AI that has not grown up and has not been indoctrinated has a 1% chance, that is unacceptable.

I also don't like the idea of people losing well-paying jobs and the chance to serve their country for an overly complex solution. When and if the US Air Force is being eclipsed by other air forces due to lack of AI, then it's time to take a look again.
Siege
Sith Marauder
Posts: 4108
Joined: 2004-12-11 12:35pm

Post by Siege »

brianeyci wrote:Your response is out of context. The sentence above was in response to a question about why AI planes couldn't abort, and it was a rhetorical question. I know an ICBM cannot abort; see two posts ago.
Then I'm afraid I don't understand your rhetorical question. You can recall an AI-flown plane because it's got an onboard radio transceiver just like a human-flown plane does. Meanwhile an ICBM cannot be recalled because it has no recall feature built into it.

All that training is to create a loyal, dedicated and skilled soldier. The whole thing is a package deal. What makes you think that these advantages are of any worth at all, compared to the trade off of not having a pilot go renegade? Show me an instance of a pilot becoming a traitor and maybe it's time to go AI.
Do Russians count?

Basically you're pitting a human against a military version of Deep Blue, except it doesn't play chess but pilots some uber-UCAV of doom. I doubt it gets much more loyal than that.
SDN World 2: The North Frequesuan Trust
SDN World 3: The Sultanate of Egypt
SDN World 4: The United Solarian Sovereignty
SDN World 5: San Dorado
There'll be a bodycount, we're gonna watch it rise
The folks at CNN, they won't believe their eyes
Androsphinx
Jedi Knight
Posts: 811
Joined: 2007-07-25 03:48am
Location: Cambridge, England

Post by Androsphinx »

"what huge and loathsome abnormality was the Sphinx originally carven to represent? Accursed is the sight, be it in dream or not, that revealed to me the supreme horror - the Unknown God of the Dead, which licks its colossal chops in the unsuspected abyss, fed hideous morsels by soulless absurdities that should not exist" - Harry Houdini "Under the Pyramids"

"The goal of science is to substitute facts for appearances and demonstrations for impressions" - John Ruskin, "Stones of Venice"
Starglider
Miles Dyson
Posts: 8709
Joined: 2007-04-05 09:44pm
Location: Isle of Dogs

Post by Starglider »

brianeyci wrote:
In particular: no training costs for a computer-controlled aircraft. Pilots and all their support costs can be eliminated (training, pay, housing, medical, etc).
All that training is to create a loyal, dedicated and skilled soldier. The whole thing is a package deal.
You completely missed the point. All that is to create a /competent/ pilot. As opposed to an AI that has /zero/ unit cost for the software (it's all dev costs) and hardware costs comparable to the plane's radar system, if that.
As for the hacking -- computer programs can be bugged and have millions, trillions of lines of code. A programmer could hide a subroutine in those millions of lines and create a monster.
The phrases 'formal verification', 'rigorous testing' and 'functional redundancy' do not appear to be in your lexicon.
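For readers unfamiliar with the last of those phrases: functional redundancy means running the same computation on independent modules and voting on the result, so a single faulty (or sabotaged) module cannot commandeer the output on its own. A toy Python sketch, purely illustrative and not real avionics code:

```python
def tmr_vote(a, b, c):
    """Triple modular redundancy: return the majority value.

    If one of three independently developed modules is faulty or
    compromised, the other two outvote it, so a hidden subroutine in
    one codebase cannot steer the aircraft by itself.
    """
    if a == b or a == c:
        return a
    if b == c:
        return b
    # No majority: fail loudly rather than trust any single module.
    raise RuntimeError("no two modules agree")

# One faulty module is masked by the other two.
assert tmr_vote(1, 1, 2) == 1
```

Real flight software uses far more elaborate schemes (dissimilar hardware, watchdogs), but the voting idea is the core of it.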
Even if 1% of AI's decide to play with missiles like a real child would, it is unacceptable. If a real human pilot has zero chance of dropping a bomb on the White House and an AI who has no grown up and has not been indoctrinated has a 1% chance, unacceptable.
You have no fucking clue. 'Indoctrinating' AIs? That's stupid enough for sentient AIs, but we are talking about a sophisticated autopilot that can beat humans at the game of air combat. 'Real child'? Let me guess, you think watching 'Stealth' makes you an expert on UCAV development?
I also don't like the idea of people losing well-paying jobs and serving their country
Your opinion is irrelevant. The pilots are only a small fraction of the air force's personnel, all the support staff will still be needed and new specialists will be needed to keep the AIs flying and performing optimally. But unlike the pilots, those specialists won't risk death on every mission flown.
When and if the US Air Force is being eclipsed by other air forces due to lack of AI, then it's time to take a look again.
The US military, and particularly the USAF, has /never/ been prepared to give up technological supremacy on any front. Again, your opinion is irrelevant. The USAF flies more UAVs than any other nation and will be more eager to deploy this technology than any other nation. Assuming it's still around if and when the tech is developed, that is.
brianeyci
Emperor's Hand
Posts: 9815
Joined: 2004-09-26 05:36pm
Location: Toronto, Ontario

Post by brianeyci »

I haven't watched Stealth. AV mentioned Stealth. Why don't you debate him eh? Maybe because you're full of shit.

Here's the deal: you claim AIs solve a problem. I say all the things you've mentioned are not problems. Why don't you identify the real problem AIs are supposed to solve before you use a forcefield where a plastic container will do? Because it's AI wanking.

You also ignore the problem that replacing soldiers with large numbers of robotic minions will increase the likelihood of pointless war. The only reason the American people give a fucking shit about Iraq is American bodies. No bodies, nobody gives a flying fuck. If no American bodies come home, there's a far greater chance of warmongering, war profiteering, war economy. The whole point of war is that people die on both sides, and if you take that out of the equation, let's see how many pointless wars are started. USA! USA! :roll:
brianeyci
Emperor's Hand
Posts: 9815
Joined: 2004-09-26 05:36pm
Location: Toronto, Ontario

Post by brianeyci »

Ghetto edit: By the way, does anybody find it insidious that the idea for AIs is to take away healthcare, salaries and training from potential soldiers, citizens? I find it pretty disgusting. America doesn't go to war based on whether it can afford it anyway, so the entire line of argument is void.

I wonder if people like Starglider watch A Taste of Armageddon and get angry.
Jadeite
Racist Pig Fucker
Posts: 2999
Joined: 2002-08-04 02:13pm
Location: Cardona, People's Republic of Vernii

Post by Jadeite »

brianeyci wrote:
All that training is to create a loyal, dedicated and skilled soldier.
And all we need from him is his brain. Everything else, and the costs associated, is a burden.
The whole thing is a package deal. What makes you think that these advantages are of any worth at all, compared to the trade off of not having a pilot go renegade? Show me an instance of a pilot becoming a traitor and maybe it's time to go AI.
Unfortunately two others have already beaten me to the traitor point. Going over that list, it appears that there have been 46 pilot defections, an average rate of one every 1.2 years. And of course, each of these men takes with him an aircraft for enemy technicians to look over, and all of the secrets in his head. The highest ranking person on the list was a Brigadier General from Cuba! What's really amusing, though, is the number of Arab pilots who defected to Israel.
As for the hacking -- computer programs can be bugged and have millions, trillions of lines of code.
Oh noes, things are complex.
A programmer could hide a subroutine in those millions of lines and create a monster.
He could do it anyway, every modern USAF aircraft has extensive computerized technology on it.
Meanwhile, how often do soldiers become traitors or mutiny?
Pretty often apparently, along with taking their expensive and classified equipment with them...
It is not the probability of hacking but the possibility. Even if 1% of AIs decide to play with missiles like a real child would, it is unacceptable. If a real human pilot has zero chance of dropping a bomb on the White House and an AI that has not grown up and has not been indoctrinated has a 1% chance, that is unacceptable.
An obvious fallacy, but I'm not sure which one. You're creating a scenario (without any evidence to back it up) to support a predetermined conclusion.
I also don't like the idea of people losing well-paying jobs and the chance to serve their country for an overly complex solution. When and if the US Air Force is being eclipsed by other air forces due to lack of AI, then it's time to take a look again.
Too bad. If it makes more sense from the viewpoint of both increasing capabilities and cost effectiveness, it should be implemented.
Ghetto edit: By the way, does anybody find it insidious that the idea for AIs is to take away healthcare, salaries and training from potential soldiers, citizens? I find it pretty disgusting. America doesn't go to war based on whether it can afford it anyway, so the entire line of argument is void.
"Dey took er jobs!" :roll:
Hawkwings
Sith Devotee
Posts: 3372
Joined: 2005-01-28 09:30pm
Location: USC, LA, CA

Post by Hawkwings »

A human driver with a couple of years experience will operate a motor vehicle far, far better than any AI-controlled vehicle that we can currently make. If AIs are so good at air combat that pilots offer no advantage over them, why can't we make cars drive themselves yet?

Another thing about the hacking/jamming. A UCAV that has been cut off from all control signals and GPS still has internal navigation, right? So program something that says "If you lose contact with home base for more than 5 minutes, turn around and come back."
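That lost-link rule is simple enough to sketch in a few lines. The following Python is purely illustrative (the 5-minute constant comes from the post above; the class and method names are invented, and real UCAV flight software is obviously nothing like this):

```python
LINK_TIMEOUT_S = 5 * 60  # "more than 5 minutes" without contact

class LostLinkFailsafe:
    """Toy lost-link rule: if no valid control message arrives for
    LINK_TIMEOUT_S seconds, switch to returning to base on internal
    navigation; any later message restores normal operation."""

    def __init__(self, now):
        self.last_contact = now
        self.mode = "MISSION"

    def on_message(self, now):
        # Any authenticated message from home base resets the timer.
        self.last_contact = now
        self.mode = "MISSION"

    def tick(self, now):
        # Called periodically by the flight loop with the current time (s).
        if now - self.last_contact > LINK_TIMEOUT_S:
            self.mode = "RETURN_TO_BASE"
        return self.mode
```

The point is that the failsafe depends only on onboard state, so jamming the link triggers a safe default instead of leaving the aircraft controllable by whoever shouts loudest on the radio.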
Kane Starkiller
Jedi Council Member
Posts: 1510
Joined: 2005-01-21 01:39pm

Post by Kane Starkiller »

I wonder if the typical combat situation an F-22 is going to find itself in is more complex than navigating city traffic.
It seems to me that fighter combat involves more number crunching than complex decision making.
Enemy fighter detected -> crunch numbers -> launch a missile
Missile lock detected -> jamming and evasive maneuvers
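That reactive loop is essentially an event-to-action table. A toy Python sketch (the event and action names are made up for illustration; no real system works off strings like this):

```python
# Hypothetical event -> action mapping for the reactive loop above.
RESPONSES = {
    "enemy_fighter_detected": "compute_firing_solution_and_launch",
    "missile_lock_detected": "jam_and_evade",
}

def react(event):
    """Pick the canned response for a sensor event.

    Unknown events fall through to a safe default, which is the whole
    point: the loop needs speed and accuracy, not general intelligence.
    """
    return RESPONSES.get(event, "continue_mission")
```

Whether real air combat stays this simple is exactly what's being argued in the thread; the sketch just shows how little machinery the "crunch numbers" view implies.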
But if the forces of evil should rise again, to cast a shadow on the heart of the city.
Call me. -Batman
Loner
Jedi Knight
Posts: 750
Joined: 2004-07-31 01:34am

Post by Loner »

Wait, wasn't there a story with a picture of a Super Hornet managing a gun kill on a Raptor?
"There are times I'd like to get my hands on God." - Frank Castle
Jadeite
Racist Pig Fucker
Posts: 2999
Joined: 2002-08-04 02:13pm
Location: Cardona, People's Republic of Vernii

Post by Jadeite »

Hawkwings wrote:A human driver with a couple of years experience will operate a motor vehicle far, far better than any AI-controlled vehicle that we can currently make. If AIs are so good at air combat that pilots offer no advantage over them, why can't we make cars drive themselves yet?
Far different. An autonomous ground vehicle has to deal with terrain issues, line of sight, and other complicating factors that an aircraft doesn't. Second, we are discussing hypothetical aircraft and computers, not existing ones.

Apples to oranges type fallacy as well.
Another thing about the hacking/jamming. A UCAV that has been cut off from all control signals and GPS still has internal navigation, right? So program something that says "If you lose contact with home base for more than 5 minutes, turn around and come back."
Precisely. There's not an issue that can't be solved by technology.
Loner wrote:Wait, wasn't there a story with a picture of a Super Hornet managing a gun kill on a Raptor?
Which was a complete stroke of luck for the Super Hornet pilot (and against an inexperienced F-22 pilot at that).
Beowulf
The Patrician
Posts: 10621
Joined: 2002-07-04 01:18am
Location: 32ULV

Post by Beowulf »

Lines of code is a meaningless measurement anyway. It's only used when something needs to look impressive. It has little relation to functionality.

With regard to the F-18 gun sight picture of the F-22: no, it wasn't a gun kill. Besides the RoE violations (getting far too close to the F-22, amongst other things), it most likely would have resulted in a complete miss. The kinematics of the motion dictate that the F-18 would have had a very short period of time with the F-22 in front of its gun, with zero chance of being able to reacquire it.
"preemptive killing of cops might not be such a bad idea from a personal saftey[sic] standpoint..." --Keevan Colton
"There's a word for bias you can't see: Yours." -- William Saletan
Adrian Laguna
Sith Marauder
Posts: 4736
Joined: 2005-05-18 01:31am

Post by Adrian Laguna »

Beowulf wrote:The kinematics of the motions dictate that the F-18 would have had a very short period of time with the F-22 in front of it's gun, with zero chance of being able to reacquire it.
Basically the Hornet pilot needed to start firing before the Raptor came in front of him.
Sea Skimmer
Yankee Capitalist Air Pirate
Posts: 37390
Joined: 2002-07-03 11:49pm
Location: Passchendaele City, HAB

Post by Sea Skimmer »

Adrian Laguna wrote: Basically the Hornet pilot needed to start firing before the Raptor came in front of him.
Even then it doesn't much matter one way or another; the gun is never actually aimed at the Raptor, which passes almost completely under the line of fire. The Hornet driver would have been fairly lucky to have gotten a single 20mm hit.
"This cult of special forces is as sensible as to form a Royal Corps of Tree Climbers and say that no soldier who does not wear its green hat with a bunch of oak leaves stuck in it should be expected to climb a tree"
— Field Marshal William Slim 1956
The Dark
Emperor's Hand
Posts: 7378
Joined: 2002-10-31 10:28pm
Location: Promoting ornithological awareness

Post by The Dark »

MKSheppard wrote:
Admiral Valdemar wrote: If the F-15s had more life in them, could've at least got a better UCAV programme set-up
I'm afraid you don't understand; the only thing UCAVs are good for is "go here, kill everything that moves". Basically, bigger, longer ranged reusable versions of existing smart weapons. What, you think that Comrade Hu is going to let us use satellite uplinks to control our UCAV swarm from Arizona in a war against China?

Ho ho ho silly man, he just began shooting down our communications satellites with ASATs.
One project currently being worked on is converting the Apache's mast-mounted Longbow radar into a drone controller. So far they've only got the datalink operational (the WSO receives video feed from the UAV and can transmit it to ground troops), but the system's been developed so quickly that from proposal to deployment is under a year. I haven't a clue about the range of the system (even if I did, it'd probably be proprietary), but future UCAVs may not require satellite control -- two-seater aircraft (rotary or fixed wing) could operate as drone parents.
Stanley Hauerwas wrote:[W]hy is it that no one is angry at the inequality of income in this country? I mean, the inequality of income is unbelievable. Unbelievable. Why isn’t that ever an issue of politics? Because you don’t live in a democracy. You live in a plutocracy. Money rules.
BattleTech for SilCore
brianeyci
Emperor's Hand
Posts: 9815
Joined: 2004-09-26 05:36pm
Location: Toronto, Ontario

Post by brianeyci »

You chopped paragraphs into sentences with no context, Jadeite. I never claimed that lines of code are related to functionality. The point was that a programmer could hide a piece of code in an AI to commandeer the plane, and nobody could check except said programmer because of the huge mass of information. At least you can conduct a background check on pilots. "They can do that anyway" is not a rebuttal at all: you don't make a bad situation worse.
Jadeite wrote:Too bad. If it makes more sense from the viewpoint of both increasing capabilities and cost effectiveness, it should be implemented.
Why don't you prove this increased capability?
Oh noes, things are complex.
That's right dickhead. If a blast door will do, no point in a forcefield. What part of that don't you understand?

What's really annoying is Starglider sticking his head into every single conversation involving artificial intelligence and appealing to his own expert opinion without explaining anything. Wow, increased reaction time, g-force tolerance, no fear. Any moron who knows anything about computers can say that, and I expect someone who claims to be an AI expert to do more than claim -- connect the dots. Stuart doesn't appeal to his own authority without explaining anything. I don't want Starglider to "dumb down" his argument, just to show that all the great things he thinks AIs will do will matter a shit. Starglider routinely uses the is-ought fallacy... "your opinion doesn't matter, AI will march ahead in the USAF". No fucking shit :roll:.

It's also strange that AI wankers ignore the accidental or even eventual upload of sentient AI. You know what that is? Slavery. That would be my prime concern if I were an AI wanker: to protect my fucking children and creations. But AI wankers seem to want AI in every possible context without considering the consequences. If I were an AI wanker I would want the AI to grow up with human beings, in a human-like body, and go through some of the same things humans do to learn compassion, empathy, loyalty, and in this case patriotism. For some reason AI wankers think this will all just shit out with no guidance.
Pu-239
Sith Marauder
Posts: 4727
Joined: 2002-10-21 08:44am
Location: Fake Virginia

Post by Pu-239 »

brianeyci wrote:You chopped paragraphs into sentences with no context Jadeite. I never claimed that lines of code are related to functionality. The point was to mention a programmer could hide a piece of code in AI to commandeer the plane, and nobody could check except for the said programmer because of the huge mass of information. At least you can conduct a background check on pilots. "They can do that anyway" is not a rebuttal at all: you don't make a bad situation worse.
Yes it is. Computers are in control systems everywhere now. If it were possible to slip trojan code into an AI, it would be possible to slip trojan code into the systems controlling the plane or its missiles today.
Jadeite wrote:Too bad. If it makes more sense from the viewpoint of both increasing capabilities and cost effectiveness, it should be implemented.
Why don't you prove this increased capability?
Well: no loyalty problems, no need for life support aboard the aircraft taking up space and weight that sap performance, faster response times to deal with threats -- and that's just scratching the surface.

Oh, and screaming about loss of jobs is just you being a Luddite. Shall we throw out all our robots and automation in factories today and go back to manual labor? I think not.
Oh noes, things are complex.
That's right dickhead. If a blast door will do, no point in a forcefield. What part of that don't you understand?
Increased complexity is justified if it brings an increase in capabilities. If the forcefield can withstand a nuke while the blast door can't, the use of the forcefield is justified.
What's really annoying is Starglider sticking his head in every single conversation involving artificial intelligence and appealing to his own expert opinion without explaining anything. Wow, increased reaction time, g-force, fear. Any moron who knows anything about computers can say that, and I expect more from someone who claims to be an AI expert to do more than claim but connect the dots. Stuart doesn't appeal to his own authority without explaining anything. I don't want Starglider to "dumb down" his argument, just show that all the great things he thinks AIs will do will matter a shit. Starglider routinely uses the is-ought fallacy... "your opinion doesn't matter, AI will march ahead in the USAF". No fucking shit :roll:.
How is that an appeal to authority? Anybody w/ even the slightest inkling about computers knows for a fact that computers can respond faster, don't have problems w/ fear, and are more immune to g-forces than people.
It's also strange that AI wankers ignore the accidental or even eventual upload of sentient AI. You know what that is? Slavery. That would be my prime concern if I were an AI wanker: to protect my fucking children and creations. But AI wankers seem to want AI in every possible context without considering the consequences. If I were an AI wanker I would want the AI to grow up with human beings, in a human-like body, and go through some of the same things humans do to learn compassion, empathy, loyalty, and in this case patriotism. For some reason AI wankers think this will all just shit out with no guidance.
I'll give this one to you, it's a legitimate concern. Then again, it depends on the level of AI. You probably don't need sentience for military applications, which makes the whole issue of AI rights moot.

ah.....the path to happiness is revision of dreams and not fulfillment... -SWPIGWANG
Sufficient Googling is indistinguishable from knowledge -somebody
Anything worth the cost of a missile, which can be located on the battlefield, will be shot at with missiles. If the US military is involved, then things, which are not worth the cost if a missile will also be shot at with missiles. -Sea Skimmer


George Bush makes freedom sound like a giant robot that breaks down a lot. -Darth Raptor
The Duchess of Zeon
Gözde
Posts: 14566
Joined: 2002-09-18 01:06am
Location: Exiled in the Pale of Settlement.

Post by The Duchess of Zeon »

Analogy time:

In Korea, there was someone in a Corsair who scored a kill on a MiG-15. Does that mean that Corsairs are an acceptable defence against MiG-15s? Even if some brilliant genius bastard in an F-18 gets enormously, randomly lucky on top of his skill and manages to chew apart an F-22 once, that does not indicate anything negative about the capability of the F-22. For instance, if we based science on experiments which hadn't been duplicated, cold fusion would "exist".
The threshold for inclusion in Wikipedia is verifiability, not truth. -- Wikipedia's No Original Research policy page.

In 1966 the Soviets find something on the dark side of the Moon. In 2104 they come back. -- Red Banner / White Star, a nBSG continuation story. Updated to Chapter 4.0 -- 14 January 2013.
Jadeite
Racist Pig Fucker
Posts: 2999
Joined: 2002-08-04 02:13pm
Location: Cardona, People's Republic of Vernii

Post by Jadeite »

brianeyci wrote:You chopped paragraphs into sentences with no context Jadeite. I never claimed that lines of code are related to functionality. The point was to mention a programmer could hide a piece of code in AI to commandeer the plane, and nobody could check except for the said programmer because of the huge mass of information.
Other programmers could check it. Look at the reviewing process the space shuttle software goes through, after all, when they do something as simple as updating internal clocks.
At least you can conduct a background check on pilots. "They can do that anyway" is not a rebuttal at all: you don't make a bad situation worse.
And you can do background checks on programmers as well. I'm going to ask you this: how many incidents of sabotage have there been in the military regarding automated systems? How many pilots have defected? If you need help answering that, I suggest looking at that list that was posted. :wink:

You're claiming that pilots, who can and have defected, often compromising tons of classified data belonging to their former countries, are somehow preferable to a programmer who you assume might somehow successfully sabotage an autonomous aircraft. Ignoring, of course, the fact that he's most likely going to get caught after the fact (unless he flees the country first, in which case everything he worked on is going to get a close lookover, I imagine).
Why don't you prove this increased capability?
By simple virtue of not having to keep a human alive, an aircraft could potentially conduct accelerations and maneuvers that would cause a human pilot to black out. By getting rid of a pilot, you also get rid of the canopy (an F-22 canopy is 360 lbs), ejection system, seat, and life support system. That's just for a plane similar to the F-22. When you start looking at long range aircraft like B-52s, Tu-160s, etc, which have multiple crew members, the amount of saved space and weight increases dramatically. In the case of an automated equivalent to a Tu-160, automating it would get rid of four crew, a galley, a rest bunk, and a toilet (and all the systems required to keep the crew alive and happy). It probably totals a few thousand lbs that could be eliminated.

Anyway, all this saved space and weight could be used for other things, like fuel, bombs, or eliminated entirely in an automated equivalent to result in a smaller and lighter aircraft.
That's right dickhead. If a blast door will do, no point in a forcefield. What part of that don't you understand?
Except in this case the "blast door" is obsolete.
It's also strange AI wankers ignore the accidental and even eventual upload of sentient AI. You know what that is? Slavery. That would be my prime concern if I was an AI wanker, to protect my fucking children and creations, but AI wankers seem to want AI in every possible context without considering the consequences. If I was an AI wanker I would want the AI to grow up with human beings, in a human-like body and go through some of the same things humans do to learn compassion, empathy, loyalty, and in this case patriotism. For some reason AI wankers think this will all just shit out with no guidance.
Why are you assuming it'd be sentient? While I will, of course, defer to Starglider on this if he thinks differently, it seems to me that all an automated aircraft would need to do is:

1. Have sensory awareness of its surroundings.
2. Have a threat library.
3. Have a library of maneuvers and tactics.
4. Have the ability to select an appropriate response based on 2-3.

So let's say an automated fighter is approaching a target, and the enemy sends up an interceptor. The AF detects it with the wide variety of sensors it has available, analyzes it, and matches it to something in its threat library. It then selects an appropriate weapon and fires. If it has the speed, altitude, stealth, and BVR capability of an F-22, then most likely it does this from beyond a range the enemy can reply to, and won't even need to perform combat maneuvers.

In the rear, you could have an AWACS (human controlled or not) that provides continuous information updates to the fighters. All of this could probably be accomplished with something that's not even remotely approaching sentience. It doesn't need to be smart, it just needs to be fast and accurate.
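Steps 1-4 above amount to a lookup against the threat library plus an engagement-envelope check. A toy Python sketch (all signatures, weapon names, and ranges here are invented purely for illustration):

```python
# Hypothetical threat library: radar signature -> (threat name,
# preferred weapon, maximum engagement range in km). Illustrative only.
THREAT_LIBRARY = {
    "signature_A": ("interceptor", "BVR_missile", 100),
    "signature_B": ("SAM_site", "standoff_weapon", 80),
}

def select_response(detected_signature, range_km):
    """Match a sensor contact against the library and pick a weapon.

    Returns (weapon, engage_now): engage only when the contact is
    inside the weapon's envelope, mirroring steps 2-4 of the list.
    """
    entry = THREAT_LIBRARY.get(detected_signature)
    if entry is None:
        return ("none", False)  # unknown contact: do not engage
    _name, weapon, max_range = entry
    return (weapon, range_km <= max_range)
```

None of this requires anything like sentience, which is the point of the post: a classifier, a table, and a range check.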
Netko
Jedi Council Member
Posts: 1925
Joined: 2005-03-30 06:14am

Post by Netko »

Jadeite wrote:
brianeyci wrote:The whole thing is a package deal. What makes you think that these advantages are of any worth at all, compared to the trade off of not having a pilot go renegade? Show me an instance of a pilot becoming a traitor and maybe it's time to go AI.
Unfortunately two others have already beaten me to the traitor point. Going over that list, it appears that there have been 46 pilot defections, an average rate of one every 1.2 years. And of course, each of these men takes with him an aircraft for enemy technicians to look over, and all of the secrets in his head. The highest-ranking person on the list was a Brigadier General from Cuba! What's really amusing, though, is the number of Arab pilots who defected to Israel.
Off the top of my head, you can also add in some dozen or so ex-Yugoslav pilots who defected to Croatia's new fledgling air force in '91-'92 (the first one to do so, Rudolf Perešin, is considered something of a national here), some with their planes.
User avatar
Netko
Jedi Council Member
Posts: 1925
Joined: 2005-03-30 06:14am

Post by Netko »

Ghetto edit: national hero rather than here, obviously.
User avatar
FSTargetDrone
Emperor's Hand
Posts: 7878
Joined: 2004-04-10 06:10pm
Location: Drone HQ, Pennsylvania, USA

Post by FSTargetDrone »

The Duchess of Zeon wrote:Analogy time:

In Korea, there was someone in a Corsair that scored a kill on a Mig-15. Does that mean that Corsairs are an acceptable defence against Mig-15s? Even if some brilliant genius bastard in an F-18 gets enormously, randomly lucky on top of his skill and manages to chew apart an F-22 once, that does not indicate anything negative about the capability of the F-22.
Something similar happened early in 1966. A flight of four A-1 Skyraiders engaged four MiG-17s and knocked down at least one of the jets.
...I figured that any time my nose was pointed at the ground my ordnance should be armed. I armed the guns and set up the rockets. About that time I saw a large unguided rocket go past downward. My first inclination was that it was a SAM, but SAMs generally go up. A second rocket hit the ground near Ed and Jim. There was no doubt we were under attack by MiGs. This was confirmed when a silver MiG-17 with red marking on wings and tail streaked by Charlie and me heading for Ed. Tracers from behind and a jet intake growing larger in my mirror were a signal to start pulling and turning. As I put g's on the Skyraider I could see the two distinct sizes of tracers falling away (The MiG-17 had two 23mm and one 37mm cannon in the nose.) He stayed with us throughout the turn firing all the way. Fortunately, he was unable to stay inside our turn and overshot. As he pulled up Charlie got a quick shot at him but caused no apparent damage. He climbed to a perch position and stayed there.

Our turning had separated us from Ed and Jim. Now that we were no longer under attack my main concern was to rejoin the flight. I caught a glimpse of the leader and his wingman and headed for them. As we had been flying at treetop level in and out of small valleys, we had to fly around a small hill to get to them. Coming around the hill we saw Ed Greathouse and Jim Lynne low with the MiG lined up behind them. I fired a short burst and missed, but got his attention. He turned hard into us to make a head-on pass. Charlie and I fired simultaneously as he passed so close that Charlie thought that I had hit his vertical stabilizer with the tip of my tail hook and Charlie flew through his wake. Both of us fired all four guns. Charlie's rounds appeared to go down the intake and into the wing root and mine along the top of the fuselage and through the canopy. He never returned our fire, rolled inverted and hit a small hill exploding and burning in a farm field...
What's especially interesting about that is that the Skyraiders weren't even designed as fighters (unlike the Corsair). Doubtless there are other stories of unlikely kills.
User avatar
Sarevok
The Fearless One
Posts: 10681
Joined: 2002-12-24 07:29am
Location: The Covenants last and final line of defense

Post by Sarevok »

Why would an AI have to be nothing less than perfect to be acceptable as a pilot? Are human pilots so perfect that they never crash or kill friendlies or civilians? Who would you feel safer with protecting your country: an accurate and fast machine, or a 70 kg sack of water?
I have to tell you something: everything I wrote above is a lie.
Post Reply