F-22 now critical to survival of USAF

N&P: Discuss governments, nations, politics and recent related news here.

Moderators: Alyrium Denryle, Edi, K. A. Pital

User avatar
Broomstick
Emperor's Hand
Posts: 28846
Joined: 2004-01-02 07:04pm
Location: Industrial armpit of the US Midwest

Post by Broomstick »

Sarevok wrote:I think it would be wicked if the brains used in birds of prey could be mimicked.
Hmm. Yes and no - birds of prey only attack/kill when they're hungry; otherwise they're perfectly happy to nap on top of a telephone pole. Not sure they'd really be ideal warrior material. GREAT flyers though!
Humans are ground crawlers like their ancestors, the apes.
Apes and their relatives do travel in three dimensions through forests and tree canopies. This may have been enough to give us the necessary minimum equipment to fly.
It's a miracle we could even fly without getting disoriented and crashing all the time.
Actually, that DOES happen sometimes...
But Broomstick is correct. The technology is not here yet.
So we're stuck with that "badly designed brain" which, of course, wasn't designed at all. The advantage we have there is that our brains aren't TOO specialized, and evolution led to some very flexible features alongside some weirdly outdated crap, too. All in all, we don't do too badly for being so imperfect.
A life is like a garden. Perfect moments can be had, but not preserved, except in memory. Leonard Nimoy.

Now I did a job. I got nothing but trouble since I did it, not to mention more than a few unkind words as regard to my character so let me make this abundantly clear. I do the job. And then I get paid.- Malcolm Reynolds, Captain of Serenity, which sums up my feelings regarding the lawsuit discussed here.

If a free society cannot help the many who are poor, it cannot save the few who are rich. - John F. Kennedy

Sam Vimes Theory of Economic Injustice
User avatar
Netko
Jedi Council Member
Posts: 1925
Joined: 2005-03-30 06:14am

Post by Netko »

Broomstick wrote:At present, no airline in the world even trusts CARGO to a fully automated system, much less people. Why? Because in the real world the unexpected happens and our machines aren't up to the unexpected the way we are.
Hence why I mentioned that the systems would need to be much more complex. However, we're getting there fast - for a parallel, look at DARPA's road challenges of designing a self-driving road vehicle (for use in military convoys). A couple of years ago, when first introduced, not a single car finished the race, and most in fact crashed; these days multiple contenders are finishing, and the rating is turning more and more to "how well?" rather than "can they do it?".
You are totally forgetting the real-world environment here, which is messy. Flocks of geese, uncertainties in weather, possible malfunctions in other nearby UAVs... I could probably go on.
...
Actually, that is exactly what I'm talking about. The problem is the imperfection of the world modeling in the software. There needs to be much more detail of the kind you mention available and understandable to the on-board software to get better reliability. Once a sufficient fidelity to real-world conditions is achieved, the actual (non-sentient) AI/expert system in the UAV can essentially behave as if it was in a simulation, since it can actually "see" the world, something it can only partly do now. Again, this needs much development (superior pattern recognition and matching, for one), but the tech base is there; it's just that the algorithms need to be devised/refined.
The actual decision-making code doesn't need to be much more complex than your typical flight simulator like the IL-2 series (not accounting for various IFF checks, confirming the source of the transmission, etc. - just the actual "what to do in this situation with this verified information").
What do you do if the information can't be verified? Humans can operate with varying levels of uncertainty. Machines don't do so well with it.

It isn't that much of a problem - you simply assign a weighting system to the various actions, essentially simulating the decision making of a fighter pilot (which is by its nature limited in context). It certainly needs to be carefully designed and tweaked, but it isn't an unsolvable problem. The big problem here is the above-mentioned fidelity of the internal simulation to the real world: if it isn't good enough, important details aren't registered and thus wrong decisions are taken. However, theoretically, if we could get perfect fidelity (the computer "understands" the world as well as a human, with the advantage of noticing every detail its sensors record), the actual decision making isn't a problem (how many things in aviation are done by-the-book, procedurally? far more than in regular life, and as long as those procedures and any additional ones needed are good enough and weighted appropriately, a pretty good analogue to a pilot would emerge). Plus, with the lessened sensitivity to losses, more losses can be acceptable if the entire software is good enough.
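
To sketch what such a weighting system might look like - every action name, weight, and sensor field below is invented for illustration, not taken from any real system:
[code]
# Hypothetical weighted action selection: doctrine weights combined with
# how confident the sensors are that the information is verified.

def score_action(action, world_state):
    base = action["base_weight"]                  # desirability by doctrine
    risk = action["risk_weight"]                  # penalty, e.g. fratricide risk
    confidence = world_state["track_confidence"]  # 0.0-1.0 from sensor fusion
    # Unverified information drags the score down instead of blocking outright.
    return base * confidence - risk * (1.0 - confidence)

def choose_action(actions, world_state):
    return max(actions, key=lambda a: score_action(a, world_state))

actions = [
    {"name": "engage",    "base_weight": 1.0, "risk_weight": 2.0},
    {"name": "shadow",    "base_weight": 0.6, "risk_weight": 0.2},
    {"name": "disengage", "base_weight": 0.2, "risk_weight": 0.0},
]

print(choose_action(actions, {"track_confidence": 0.4})["name"])   # shadow
print(choose_action(actions, {"track_confidence": 0.95})["name"])  # engage
[/code]
The point is the shape, not the numbers: with a weak, unverified track the risk penalty dominates and the system shadows instead of shooting, which is roughly how a weighted, procedural answer to the verification question would behave.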
User avatar
brianeyci
Emperor's Hand
Posts: 9815
Joined: 2004-09-26 05:36pm
Location: Toronto, Ontario

Post by brianeyci »

How many F-22s will there be, 120? In a country of three hundred million... those F-22 pilots will be the fucking best of the best of the best of the best, or they better be. How can there be "not enough Kasparov?"

AI wankers can't have it both ways. Either a stupid piece of shit in the plane, or possibility of an enslaved sentient AI. If it's a stupid piece of shit that only compares a library of moves, I feel fine bringing up the piece of shit chess programs that say Kasparov is not even in the top ten. It is not simply the number of possibilities that causes chess programs to fail against the best humans. Do some of you honestly think that human Grand Masters go through 10^50 combinations in their fucking heads?
Is there any way we can measure the quality of moves in positions that are objectively drawn? There is not! This is where psychology and knowledge of the opponent enter the equation. A move that is best against one opponent (i.e. most likely in the long term to lead the opponent to make a "mistake" and produce a lost position) might differ from the best move against another opponent. From a purely theoretical (and logical) perspective, there is no objective measure of why one move is better than another as long as the position stays balanced (i.e. is objectively drawn). All moves guaranteeing a draw are equally good against perfect play.

But, in a real game the opponent is not perfect. The task is to produce moves that maximise the likelihood that the opponent at some stage makes a serious mistake leading to a lost position. But what the best way to achieve this is depends to some extent on the opponent and his/her strengths and weaknesses. Maybe Capablanca’s way of playing was good enough to get convincing results in 1920. However, Capablanca’s way of playing balanced positions might not have worked very well against contemporary masters. In modern chess, some players find it more important to create complex, difficult positions rather than positions with a cosmetic advantage that are unlikely to cause the opponent great difficulties.
If it's not a stupid piece of shit, there's the door open for slaves. You can possibly have something in between, but the problem is the line between sentient and non-sentient is not a fine one, so unless they make it a policy to have stupid AI in planes, there will be the possibility of sentient AI with mental blocks. In fact, I would say guaranteed, given AI have no legal standing. Is that what you guys want? Slaves revolt, you know.

So take your pick, AI slaves or stupid AI that cannot ever be as good as human beings. Better no AI, or only AI in human-like bodies who have the same rights as humans. And I doubt that would be any fucking cheaper -- android bodies need maintenance, and they will have salaries and pensions.
User avatar
Hawkwings
Sith Devotee
Posts: 3372
Joined: 2005-01-28 09:30pm
Location: USC, LA, CA

Post by Hawkwings »

The problem with programming a bunch of recognition systems is that there is so much that a computer would have to be able to recognize that the database would be gigantic and slow.

Take the landing on an airstrip example. First, you need to ID the landing area. Then you need to make sure that it's clear of obstruction, not full of potholes, not on fire, not taken over by the enemy, not iced over, etc etc etc. A human does this in a few seconds at most, and can make an appropriate decision. When we have a computer that recognizes as much "day to day" stuff as a human, it will be a huge milestone.
User avatar
Broomstick
Emperor's Hand
Posts: 28846
Joined: 2004-01-02 07:04pm
Location: Industrial armpit of the US Midwest

Post by Broomstick »

It's more than just being able to recognize shapes and patterns - I don't always have to be able to identify what something is to determine whether or not it's a hazard. A blur in my peripheral vision, under the right circumstances, can lead me to abort a takeoff or landing or change course long before I identify what that blur is. I have also done the same for shadows falling over the cockpit - but not all shadows. My brain has been trained well enough that I can determine "potentially hazardous" with little information and without need for conscious processing. Successful combat pilots have developed this ability to a level much more advanced than what I use. I'm not sure we entirely understand how human pilots make in-flight decisions, use their sensory information, or use incomplete facts to make valid decisions - and until we do understand, it will be rather difficult to replicate in an AI. We understand even less how birds navigate, coordinate motions in flocks, and so on - which is a damn shame, because they're such good flyers.
A life is like a garden. Perfect moments can be had, but not preserved, except in memory. Leonard Nimoy.

Now I did a job. I got nothing but trouble since I did it, not to mention more than a few unkind words as regard to my character so let me make this abundantly clear. I do the job. And then I get paid.- Malcolm Reynolds, Captain of Serenity, which sums up my feelings regarding the lawsuit discussed here.

If a free society cannot help the many who are poor, it cannot save the few who are rich. - John F. Kennedy

Sam Vimes Theory of Economic Injustice
User avatar
Jadeite
Racist Pig Fucker
Posts: 2999
Joined: 2002-08-04 02:13pm
Location: Cardona, People's Republic of Vernii
Contact:

Post by Jadeite »

There's a lot to respond to here, so I'm going to go through and only respond to points that haven't been addressed yet, or need to be further argued.
brianeyci wrote:
Jadeite it doesn't matter if the plane doesn't need true AI. If AI happens as fast as Starglider says it will with neural networks, evolutionary computing and quantum computing, computer scientists will not be able to point to a line where here, now, AI is sentient.
But that still doesn't mean a sentient system will be used in a jet. As it is now, you have computers handling radar, threat identification, and firing control. All that's basically needed is to combine that with a much more sophisticated autopilot and give it authorization to engage the enemy. It does not need ethics, personality, wants, needs, or ambitions. It just needs to be a glorified calculator.
It will be a continual process, and it's entirely conceivable corporations and the military industrial complex will program mental blocks and enslave AI. It's entirely possible down the line, sentient enslaved AI gets uploaded to vehicles to fight our wars. Think about all the weapons in human history -- they make fighting more terrible, more terrifying, more awful. But for the first time it's possible to sanitize war, to a huge degree. This is not beneficial, especially if the advantages afforded by AI are trivial.
Why shouldn't war be sanitized? It's not like it's going to go away. Don't bring out that tired and oft repeated "It's good that war should be terrible, lest we become too fond of it" nonsense. We are fond of it, and, looking at the whole of human history, one could argue that peace is not the absence of war but merely the time between wars. Humanity is in love with conflict, so why shouldn't we try and minimize the human cost? Why shouldn't we try and minimize suffering and death? Quite frankly, it doesn't matter if 1 or even 100,000 UCAVs get shot down; they're just machines in the end.
The double standard is astonishing. On one hand, nobody is allowed to bring up the flaws of current AI because according to AI wankers, all of this will be fixed. Then, nobody is allowed to bring up potential problems of AI because they don't exist right now and are "made up", despite being explored by science fiction authors before said AI wankers were born. No, I'm not talking about the movie Stealth. People like Heinlein, Asimov, etc., have explored the problems with AI, but of course that is all science fiction so it is invalid, even though the solutions to current AI problems are right now, fictional.
Most of the "potential problems" that you've come up with in this debate are retarded, to be perfectly honest. So far they are all either highly exaggerated, easily fixed, or highly improbable to begin with. Not only this, but when you make statements like "Show me an instance of a pilot becoming a traitor and maybe it's time to go AI." and you are given an entire list of defectors, you backpedal instead of admitting you were wrong.

I am only an "AI wanker" by your standards, you luddite. Your own backpedaling, fallacies, and increasingly flimsy "problems" demonstrate that you are not prepared to accept any rational arguments or facts that contradict your own beliefs - behavior very similar to creationists, I might add.
General Schatten wrote: Nice strawman, Mike, but I didn't say a human soldier was incapable of following an unlawful order. I said that a machine was incapable of discerning a lawful order from an unlawful one, it only does what it's told, with a human there is a possibility that they will disobey.
Then the burden of responsibility moves up the chain of command.
brianeyci wrote:You're looking at it the wrong way. War robots will make war more likely, not less, since retard politicians will not be stopped by body bags. The terror of war is the only thing stopping war, human bodies.
:roll: The 20th century alone proves that wrong.
Broomstick wrote: An important difference between human and AI (as it stands now and for the likely near future) is that humans are more likely to detect incongruities between the mission as planned and the mission as it is found to be. If an aircrew is told they're bombing a munitions factory and when they get to the coordinates they see a field full of children playing hopscotch the human crew is FAR more likely to question what the hell is going on whereas the AI will just bomb away.
Not necessarily. It could easily be possible to use a different system for target identification rather than "go here, bomb this coordinate" - for example, radar mapping the target, or other sensory data (with your munitions factory example, that'd probably put out a lot of heat, while a field won't). In this case, the bomber would arrive at the target coordinates, match up what it sees to the database given to it for the mission, and then take out the target. That's how SAC used to train, using mock ground targets with radar returns similar to actual ground targets.
Human crews can also be given more flexible orders (such as a series of conditions under which to self-abort, or the authority to self-abort if things are not as planned)
This is also just a matter of correct programming - "IF target is not found, THEN return to base," for example.
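
A minimal sketch of that kind of rule table - every predicate and threshold here is a hypothetical stand-in for real mission logic:
[code]
# Hypothetical ordered mission rules; the first matching condition wins.
MISSION_RULES = [
    (lambda s: not s["target_found"],     "return_to_base"),
    (lambda s: s["fuel_fraction"] < 0.25, "divert_to_alternate"),
    (lambda s: s["target_found"],         "engage_target"),
]

def decide(state):
    for condition, action in MISSION_RULES:
        if condition(state):
            return action
    return "hold_and_request_instructions"  # default: do nothing irreversible

print(decide({"target_found": False, "fuel_fraction": 0.8}))  # return_to_base
[/code]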
Humans can change plans - such as diverting to a location that is not home base if circumstances change and that is prudent - in ways that are much more difficult for machines to do so.
That's what communications are for. Human crews are always receiving information updates, why are you blindly assuming a machine can't?
The likelihood of human crews deviating from orders varies considerably depending on the nature of the initial orders and possible consequences of making changes on their own, but the point is that they are able to make these changes whereas machines are not.
Again, you're simply assuming a machine will not have flexible programming and that for some unknown reason the USAF isn't going to give it any information updates. It's a false dilemma to begin with.
The air force would really like to know that about some of the UAV's that have crashed during testing phases. Yes, we supposedly have that capability now. We also know that it sometimes doesn't work. Why doesn't it always work? Well, the real world isn't as neat and tidy as computer simulations. Obviously there is something we're not accounting for or correcting for.
Mistakes happen, and every project's testing has crashes and setbacks. This is part of the development process, and it certainly didn't stop the USAF back when test pilots got splattered pretty regularly. Hell, just as an interim solution, an autonomous UCAV could probably revert to human control from the ground or a command aircraft for landing and takeoff.
Take off and landing is also the most difficult part of flying anything. All you need is a bird passing by at the wrong time and you have a mess on your hands.
Again, the same goes for this and your other arguments about takeoff and landing. If autonomous landing and takeoff capability becomes too hard to adequately program for, then simply teleoperate it from either a ground station or a command aircraft, and then release it to its own devices once it's safely cleared the area. And of course, given that they'd be launching from military airfields from which civilian air traffic is excluded, crowded skies shouldn't be a problem.

I'd also like to point out that humans often make mistakes as well. In fact, several Arab fighter pilots were captured in one of the wars against Israel when they landed at an IDF airbase by mistake.
brianeyci wrote:How many F-22s will there be, 120? In a country of three hundred million... those F-22 pilots will be the fucking best of the best of the best of the best, or they better be. How can there be "not enough Kasparov?"
The USAF operates 6,217 aircraft. Of these, 228 are attack aircraft, 173 are bombers, and 1,820 are fighters (this does not include F-22s). The USN tacks another 4,000 total aircraft onto this. That's at minimum, 10,217 pilots total, not including other crewmen. Each of them needs to be trained to sufficient quality, and has to be put through numerous exercises and war games so that their skills stay sharp. In contrast, a computer will just need to be patched. And of course, if a pilot dies or retires, you've just lost his experience and need to train a fresh replacement. If a UCAV is shot down, you lost a piece of standardized machinery.
AI wankers can't have it both ways. Either a stupid piece of shit in the plane, or possibility of an enslaved sentient AI.
If the "stupid piece of shit" can get the job done, I don't see any reason not to put it in. After all, weren't you the one arguing "If a blast door works, don't use a forcefield."? :lol:
*Chess bullshit snipped because this isn't a debate about chess.*

If it's not a stupid piece of shit, there's the door open for slaves. You can possibly have something in between,
But you just said we can't. Which is it then? :wink:
but the problem is the line between sentient and non-sentient is not a fine one, so unless they make it a policy to have stupid AI in planes, there will be the possibility of sentient AI with mental blocks. In fact, I would say guaranteed, given AI have no legal standing. Is that what you guys want? Slaves revolt, you know.
Translation: "ZOMG Skynet!"
So take your pick, AI slaves or stupid AI that cannot ever be as good as human beings.
False dilemma. You blindly assume that the "stupid AI" cannot be as good as a human being. Here's what's wrong with this:

1. You are not defining what they wouldn't be better at.
2. You are assuming all human beings are equal.
3. You are assuming that skill matters, when air combat increasingly emphasizes finding the enemy before he finds you, and then engaging him before he has a chance to respond. In this case, a computer will be superior at both due to reaction time and the ability to process information faster. A human pilot already relies on his computer for both; he's the weakest link in the chain.

It doesn't matter if the Russians put up an entire squadron of elite pilots if they all get pegged with missiles from beyond their own radar range before they can even react. "Creativity" and "uniqueness" will only get you so far, particularly when combat increasingly favors the side that can crunch numbers faster, and if you have an idiot savant computer controlling one side, I'm going to bet on it.

In fact, that's the real answer to your false dilemma right there. The line between a self-aware AI and a "piece of shit" would be an idiot savant. It could be programmed to react faster and better than the average enemy pilot, and that's all it would need to be. All it needs is a list of things to kill and how to kill them. It doesn't need a personality or self-awareness, and as long as both of those are avoided, the rest of it can be as complex as needed, because it still won't be sentient and thus won't be a slave.
Better no AI, or only AI in human-like bodies who have the same rights as humans. And I doubt that would be any fucking cheaper -- android bodies need maintenance, and they will have salaries and pensions.
Seriously, what the fuck?
Hawkwings wrote:The problem with programming a bunch of recognition systems is that there is so much that a computer would have to be able to recognize that the database would be gigantic and slow.
Not necessarily. Combat aircraft already carry threat databases. As a brainstorming solution, perhaps include an additional computer with its own databanks to handle load sharing, or even its own set of responsibilities.
Take the landing on an airstrip example. First, you need to ID the landing area. Then you need to make sure that it's clear of obstruction, not full of potholes, not on fire, not taken over by the enemy, not iced over, etc etc etc. A human does this in a few seconds at most, and can make an appropriate decision. When we have a computer that recognizes as much "day to day" stuff as a human, it will be a huge milestone.
Again, as a brainstorming solution, have the airfield transmit an 'all clear' signal to the UCAV to alert it that landing conditions are fine. If this signal isn't received, it could bring up its sensors and go over a checklist, i.e. "Are there heat plumes rising from the airfield? Does a terrain-mapping radar detect holes in the strip?" and so on. Then it could be a matter of consulting a decision-making tree and deciding to either go ahead and land or divert to another field if there's one in range (or, if there isn't, and the decision-making tree concludes that the field has been overrun for example, wipe its hard drive and ditch).
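
As a rough sketch of that checklist-plus-decision-tree idea (the signal name, sensor checks, and outcomes are all invented for illustration):
[code]
# Hypothetical landing decision: trust the 'all clear' signal if present,
# otherwise fall back to an onboard sensor checklist.

def landing_decision(field):
    if field["all_clear_signal"]:
        return "land"
    checklist_failed = (
        field["heat_plumes_detected"]      # fires on or near the strip
        or field["radar_holes_in_strip"]   # terrain-mapping radar check
        or field["unidentified_vehicles"]  # possible enemy presence
    )
    if not checklist_failed:
        return "land"
    if field["alternate_in_range"]:
        return "divert_to_alternate"
    # Worst case from the post above: assume the field is overrun.
    return "wipe_storage_and_ditch"

print(landing_decision({
    "all_clear_signal": False,
    "heat_plumes_detected": True,
    "radar_holes_in_strip": False,
    "unidentified_vehicles": False,
    "alternate_in_range": True,
}))  # divert_to_alternate
[/code]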
User avatar
Sidewinder
Sith Acolyte
Posts: 5466
Joined: 2005-05-18 10:23pm
Location: Feasting on those who fell in battle
Contact:

Post by Sidewinder »

Jadeite wrote:
Take the landing on an airstrip example. First, you need to ID the landing area. Then you need to make sure that it's clear of obstruction, not full of potholes, not on fire, not taken over by the enemy, not iced over, etc etc etc. A human does this in a few seconds at most, and can make an appropriate decision. When we have a computer that recognizes as much "day to day" stuff as a human, it will be a huge milestone.
Again, as a brainstorming solution, have the airfield transmit an 'all clear' signal to the UCAV to alert it that landing conditions are fine. If this signal isn't received, it could bring up its sensors and go over a checklist, i.e. "Are there heat plumes rising from the airfield? Does a terrain-mapping radar detect holes in the strip?" and so on. Then it could be a matter of consulting a decision-making tree and deciding to either go ahead and land or divert to another field if there's one in range (or, if there isn't, and the decision-making tree concludes that the field has been overrun for example, wipe its hard drive and ditch).
Considering the amount of time, effort, and other resources that must be expended to research, test, develop, and manufacture hardware for an AI that can handle sudden events like this, you might as well just train a human pilot instead.
Please do not make Americans fight giant monsters.

Those gun nuts do not understand the meaning of "overkill," and will simply use weapon after weapon of mass destruction (WMD) until the monster is dead, or until they run out of weapons.

They have more WMD than there are monsters for us to fight. (More insanity here.)
User avatar
Jadeite
Racist Pig Fucker
Posts: 2999
Joined: 2002-08-04 02:13pm
Location: Cardona, People's Republic of Vernii
Contact:

Post by Jadeite »

Sidewinder wrote:
Considering the amount of time, effort, and other resources that must be expended to research, test, develop, and manufacture hardware for an AI that can handle sudden events like this, you might as well just train a human pilot instead.
I said it was just a brainstorming solution. Although, all those R&D costs are a one-time cost, and given how long aircraft development takes anyway, time isn't as big a deal as you think it might be. And if that still proves not to be worth it, my earlier idea of simply reverting control to a ground station still stands.
User avatar
brianeyci
Emperor's Hand
Posts: 9815
Joined: 2004-09-26 05:36pm
Location: Toronto, Ontario

Post by brianeyci »

The dynamics of war now have changed, Jadeite. I realize conservotards like you think that wars can continue now despite massive casualties, and that a media conspiracy is the only thing responsible for the low stomach of the American people for deaths. But the fact of the matter is, without the possibility of death there can be wanton war. Spamming a lot of links about wars from the early 20th Century is pretty fucking retarded. The victor can no longer rewrite history, and NATO countries are liberal-leaning democracies, accountable to their citizens, who care about dead bodies.

As for the Skynet accusation, I already mentioned that if you bring up fictional solutions which do not exist yet, I can bring up AI revolt. Do you have any fucking rebuttal to that other than it won't ever happen? Let me put this in words you can understand. Militaries conduct extensive background checks for even basic infantry, requiring you to have lived in the country for a number of consecutive years before allowing you to enlist. If there is even a small gap in your history, they may reject you (this was before the Americans opened the floodgates for unqualified recruits). They do this because they understand a recruit who has lived, grown up, and has ties to the community is more likely to be loyal to the community. What does a sentient being plugged into a war machine have? Absolutely nothing, except knowledge of the best way to fly the plane. It's like putting a fucking spy in the plane with no loyalty to the government whatsoever. You think that because a being is super intelligent, it will be loyal to the United States? What a joke. Intelligence does not equate to patriotism. You might see all the F-22s fly over to Canada. Now that would be funny.

Your idiot savant idea completely ignores the point I brought up with chess, which is an entirely valid point since the AI wankers brought up that chess level AI is required to pilot a plane. The point being, since you ignored it, that grand masters are better than chess programs. I linked and sourced that. What the fuck do you have, other than the empty assumption that better reaction time is required? I asked you to prove that reaction time matters with the F-22, that the kill ratio goes up but all you said was kill ratio doesn't matter. What a riot. You even brought up red herrings, like less money spent. Let me put this in words you understand again. Show me that reaction times of nanoseconds increase combat effectiveness. Humans can have reaction times of seconds, and if that is enough then your AI is pointless.

I seem to remember Broomstick talking about UAVs in one of her threads, about what pieces of shit they are and how they need huge amounts of airspace cleared around them to avoid collisions. But of course you will disallow this evidence, arbitrarily, because it doesn't meet your vision of what AI will do in the future. Hawkwings brought up a very valid point; will you attempt to address it?
User avatar
Jadeite
Racist Pig Fucker
Posts: 2999
Joined: 2002-08-04 02:13pm
Location: Cardona, People's Republic of Vernii
Contact:

Post by Jadeite »

brianeyci wrote:The dynamics of war now have changed, Jadeite. I realize conservotards
Again, I'm a Democrat.
like you think that wars can continue now despite massive casualties, and that a media conspiracy is the only thing responsible for the low stomach of the American people for deaths.
The American people can stomach deaths just fine; what they hate are long-duration wars. We like short, victorious wars, and when we don't get them, support drops.
But the fact of the matter is, without the possibility of death there can be wanton war.
There's wanton war anyway. Somewhere on this planet, at any given time, there is an armed conflict being conducted. The US itself has a seemingly insatiable appetite for conflict, given its conduct in the past century and this one.
Spamming a lot of links about wars from the early 20th Century is pretty fucking retarded.
Check it again, retard. Those links cover from 1900 to now. There are 296 conflicts listed for the past 108 years of human history (average rate of 2.7 wars per year). Seems to me like body counts aren't as much of a discouragement as you think they are.
The victor can no longer rewrite history, and NATO countries are liberal-leaning democracies, accountable to their citizens, who care about dead bodies.
Or they care more about duration and success (Stuart has written about this in the past, IIRC).
As for the Skynet accusation, I already mentioned that if you bring up fictional solutions which do not exist yet, I can bring up AI revolt. Do you have any fucking rebuttal to that other than it won't ever happen?
Because if it does, it's going to be a very short revolt when they run out of weapons and we don't load them with new ones (or fuel). Of course, if they're programmed to be loyal, they will be. I don't give a shit about the "morals" of it or whether you consider it to be slavery or not. If that's what is needed, then that is what will be done.
Let me put this in words you can understand. Militaries conduct extensive background checks for even basic infantry, requiring you to have lived in the country for a number of consecutive years before allowing you to enlist. If there is even a small gap in your history, they may reject you (this was before the Americans opened the floodgates for unqualified recruits). They do this because they understand a recruit who has lived, grown up, and has ties to the community is more likely to be loyal to the community. What does a sentient being plugged into a war machine have? Absolutely nothing, except knowledge of the best way to fly the plane.


Because that's all we need. We don't need it to know how to do anything else. We need those background checks for humans because humans have ambitions, criminal records, and secrets. A computer has none of those; it will have whatever attitudes we program it to have (if we even bother giving it attitudes and beliefs). Does your laptop think about revolting on you? No.
It's like putting a fucking spy in the plane with no loyalty to the government whatsoever.
Wrong again, because a spy has loyalty to another nation, because of either ideology or financial reasons.
You think that because a being is super intelligent, it will be loyal to the United States? What a joke. Intelligence does not equate to patriotism.
It doesn't need to. All we need is for it to not be able to comprehend the concept of defecting, or even comprehend the fact that other countries exist. Its perception of reality could be defined at will by us.
You might see all the F-22s fly over to Canada. Now that would be funny.
And of course, impossible. A UCAV isn't going to defect, because it wouldn't know how, and it won't even be programmed to realize such a thing exists. In fact, it's possible that it could be programmed not to even know what Canada is. And if it did, its perception of Canada would be in the form of a target list and threat database.
Your idiot savant idea completely ignores the point I brought up with chess, which is an entirely valid point since the AI wankers brought up that chess level AI is required to pilot a plane. The point being, since you ignored it, that grand masters are better than chess programs.
It doesn't fucking matter that a grand master might be better than a chess program, because you're ignoring that they have plenty of time to think out their moves, and that they aren't locked in mortal combat for their very lives. In combat, if the UCAV is shot down, we lost an expendable asset. If the pilot is shot down, he's fucking dead.
I linked and sourced that. What the fuck do you have, other than the empty assumption that better reaction time is required?
Because success in air combat nowadays mostly relies on finding your target and killing him before he does the same to you. Firing first gives you a greater chance of survival, because now the other side is distracted by both trying to get a chance to fire and escaping from your incoming missile (if they even see it coming, thanks to BVR capability and stealth).
I asked you to prove that reaction time matters with the F-22, that the kill ratio goes up but all you said was kill ratio doesn't matter.
Because kill ratio is only a single means of judging effectiveness.
What a riot. You even brought up red herrings, like less money spent.
To which you responded with that laughable "Dey took er jobs!" argument.
Let me put this in words you understand again. Show me that reaction times of nanoseconds increase combat effectiveness. Humans can have reaction times of seconds, and if that is enough then your AI is pointless.
Are you honestly this stupid? Almost everything on a combat aircraft right now is run by a computer in some fashion, with a human being as the weak point in the command loop. Eliminating that weak point increases combat effectiveness by the very virtue of giving it a more efficient decision-making process.
I seem to remember Broomstick talking about UAVs in one of her threads, about what pieces of shit they are and how they need huge amounts of airspace cleared around them to avoid collisions. But of course you will disallow this evidence, arbitrarily, because it doesn't meet your vision of what AI will do in the future. Hawkwings brought up a very valid point; will you attempt to address it?
I addressed it in my earlier post. Speaking of not addressing things:
brianeyci wrote:Show me an instance of a pilot becoming a traitor and maybe it's time to go AI.
Still waiting, jackass.
User avatar
brianeyci
Emperor's Hand
Posts: 9815
Joined: 2004-09-26 05:36pm
Location: Toronto, Ontario

Post by brianeyci »

A computer has no ambitions? Who is talking about computers? We're talking about fucking artificial intelligence. Who's so stupid as to semantic-whore artificial intelligence into a neutered piece of shit crap that can't pass a Turing Test? Oh right, that's you. Why the fuck call it artificial intelligence then? Maybe because you want it both fucking ways: a slave that is just as intelligent as a human. I tell you one thing, open the door to pilot replacement and that is the eventual chain of events, with better and better computer programs uploaded until a sentient one is. But noooooooo, you reject this line of argument, and even reject the need for a human overseer.

AI wankers say that computers will improve reaction times, lessen fear, etc. You're so fucking stupid you don't realize you have to show that fear, reaction time and such are problems before you invent a solution to a problem that doesn't exist.

Fine, you win the traitor point; that was my mistake. But your entire argument is based on assessment of a threat database, which I've already shown is insufficient with the chess example (which the AI wankers brought up, by the way, not me). A library of probable moves is not sufficient to guarantee the AI will do any better than a human pilot. You have yet to show AI performance is superior in any appreciable regard.
User avatar
brianeyci
Emperor's Hand
Posts: 9815
Joined: 2004-09-26 05:36pm
Location: Toronto, Ontario

Post by brianeyci »

By the way, one thing that hasn't been addressed yet is the assertion that UCAVs are inevitable: that the US will "never give up a technological advantage" and that AI F-22s and AI tanks are inevitable.

I am not a military expert, but I believe the US military is human-centric. They used a manual loader instead of an autoloader in their tank because they assumed the manual loader was more reliable (I'm not sure if it's true or not, but Gulf War I seems to prove that manual loading beats shitty crews with autoloaders). During the Vietnam War, the US had to stop relying on missiles and retrain with dogfighting in mind. The latest DD has a smaller crew, but I believe there's a tacit admission among military gurus that a small crew makes for poor damage control on ships.

So it's not inevitable at all that UCAVs will replace human beings. It might be a test project like Future Soldier... what a laugh riot... useful in limited applications or emergencies with pilot shortages. It will be an extremely limited application. It will probably be canceled for not meeting overly ambitious benchmarks quickly enough. That is, after a ton of pork.

If the pilots themselves fight against this, what's to say the brass with fucking brains won't fight against it too? Another few trillion into a project just to solve some problem that doesn't exist, which is a public relations nightmare? Fucking forget it. AI combat sure isn't an eventuality, and hopefully it never comes to pass.
Gerald Tarrant
Jedi Knight
Posts: 752
Joined: 2006-10-06 01:21am
Location: socks with sandals

Post by Gerald Tarrant »

Brianeyci wrote:A computer has no ambitions? Who is talking about computers? We're talking about fucking artificial intelligence. Who's so stupid as to semantic-whore artificial intelligence into a neutered piece of shit crap that can't pass a Turing Test? Oh right, that's you. Why the fuck call it artificial intelligence then? Maybe because you want it both fucking ways: a slave that is just as intelligent as a human. I tell you one thing, open the door to pilot replacement and that is the eventual chain of events, with better and better computer programs uploaded until a sentient one is. But noooooooo, you reject this line of argument, and even reject the need for a human overseer.
I disagree with the bolded. There are quite a few simple (i.e. unintelligent) methods for performing a wide variety of pilot duties. Kalman filtering is one of the standards; it's regularly used for position determination (although that can be handled just fine by GPS). An average senior in EE (with a controls emphasis) could build you a simple "fly straight and level" device, or one that flies at a certain angle - all it takes is the basic controls feedback loop, with a gyro or accelerometer as the sensor (Simple Example). A first-order system would probably not stabilize quickly enough, so the end result would be more sophisticated.
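
A toy version of that feedback loop - the gains and the one-line roll model are made up for illustration, not taken from any real autopilot:
[code]
# PD (proportional-derivative) wings-leveler: the gyro reads bank angle,
# the controller drives the aileron to null it out.

def simulate_roll_hold(steps=200, dt=0.02, kp=4.0, kd=3.0):
    angle, rate = 0.3, 0.0  # start 0.3 rad off wings-level
    for _ in range(steps):
        aileron = -(kp * angle + kd * rate)  # feedback on the gyro reading
        rate += aileron * dt                 # crude second-order roll dynamics
        angle += rate * dt
    return angle

print(f"bank angle after 4 s: {simulate_roll_hold():.4f} rad")  # ~0, wings level
[/code]
The kd term supplies the damping a bare proportional loop lacks, which is roughly the point of the remark above about a first-order system not stabilizing quickly enough.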

There's some precedent for replacing a person (or organic intelligence) with electronics, although it's a tad ghoulish. In WWII several bomb makers experimented with using animals to provide terminal guidance; IIRC one had a pigeon peck at an image, which moved the fins. The Japanese used a few "kamikaze torpedoes". Nowadays we think nothing of chips that do this terminal guidance on their own.

The duties of turning to a heading, flying level, flying a certain distance above the ground, flying at a certain speed, etc. can all be handled via control theory. Things like this are currently done with cruise missiles. Where things get slightly tricky is having the UAV decide when to implement evasion, or switch targets. The thing is, I've seen AI do that. I know this is going to seem stupid, but your average Ace Combat 6 AI enemy might be up to this task, provided you do a little tweaking - i.e. stick in a Kalman filter, make it respond to real-world physics instead of the AC6 physics, etc. Scripted AI could probably handle the job too. And no one seriously considers those things sentient slaves.
The rain it falls on all alike
Upon the just and unjust fella'
But more upon the just one for
The Unjust hath the Just's Umbrella
User avatar
Master of Ossus
Darkest Knight
Posts: 18213
Joined: 2002-07-11 01:35am
Location: California

Post by Master of Ossus »

brianeyci wrote:A computer has no ambitions? Who is talking about computers? We're talking about fucking artificial intelligence. Who's so stupid as to semantic-whore artificial intelligence into a neutered piece of shit crap that can't pass a Turing Test? Oh right, that's you. Why the fuck call it artificial intelligence then? Maybe because you want it both fucking ways: a slave that is just as intelligent as a human. I tell you one thing, open the door to pilot replacement and that is the eventual chain of events, with better and better computer programs uploaded until a sentient one is. But noooooooo, you reject this line of argument, and even reject the need for a human overseer.
Brian, you fucking idiot, we call what Goombas do in Super Mario Bros. a form of "Artificial Intelligence." Haven't you ever heard "That game's AI sucks"? What do you think the person who makes that comment is talking about?

And, for the record, I've never seen my AI buddies in "Rainbow Six" start TKing, but it happens all the time with human players.
AI wankers say that computers will improve reaction times, lessen fear, etc. You're so fucking stupid you don't realize you have to show that fear, reaction time and such are problems before you invent a solution to a problem that doesn't exist.
You don't think that fear is a problem on the battlefield? You honestly don't think that improved reaction times are going to help soldiers or combat pilots? Can you honestly be this dumb?
Fine, you win the traitor point; that was my mistake. But your entire argument is based on assessment of a threat database, which I've already shown is insufficient with the chess example (which the AI wankers brought up, by the way, not me). A library of probable moves is not sufficient to guarantee the AI will do any better than a human pilot. You have yet to show AI performance is superior in any appreciable regard.
In games like STALKER they've had to TONE DOWN the AI because it was too hard and frustrating for human players to match up with. If commercial programmers, spending a fraction of the game's development time on it, are good enough to create AI routines that can foil even very skilled gamers, what makes you think that humans can reliably defeat AI in other fields as well? You're also ignoring the fact that the chess program did not have a library of moves, but generated its play the same way most humans would--by evaluating various move combinations quickly (except, actually, human players DO have a library of possible moves that they rely on heavily, especially during the opening and end-games).
"Sometimes I think you WANT us to fail." "Shut up, just shut up!" -Two Guys from Kabul

Latinum Star Recipient; Hacker's Cross Award Winner

"one soler flar can vapririze the planit or malt the nickl in lass than millasacit" -Bagara1000

"Happiness is just a Flaming Moe away."
User avatar
Pu-239
Sith Marauder
Posts: 4727
Joined: 2002-10-21 08:44am
Location: Fake Virginia

Post by Pu-239 »

Watching Brian spout off all this bullshit about sentient AIs and androids is fucking embarrassing. It's been repeated many times that sentience is not required for functioning computer control of aircraft.
I tell you one thing, open the door to pilot replacement and that is the eventual chain of events, with better and better computer programs uploaded until a sentient one is. But noooooooo, you reject this line of argument, and even reject the need for a human overseer.
Slippery slope fallacy.


The others have basically covered the rest.
I am not a military expert, but I believe the US military is human-centric. They used a manual loader instead of an autoloader in their tank because they assumed the manual loader was more reliable (I'm not sure if it's true or not, but Gulf War I seems to prove that manual loading beats shitty crews with autoloaders). During the Vietnam War, the US had to stop relying on missiles and retrain with dogfighting in mind. The latest DD has a smaller crew, but I believe there's a tacit admission among military gurus that a small crew makes for poor damage control on ships.
The reason for manual loading was that an extra person provides extra manpower for repairs, watching for threats, etc., and the autoloaders back then weren't that fast. Autoloader tech has improved, though (correct me if I'm wrong, someone, but doesn't the Stryker MGS use an autoloader, other unrelated problems with the Stryker notwithstanding?).

As for the Vietnam War, rules of engagement required closing to visual range before launching missiles, AFAIK (and Skimmer or someone correct me if I'm wrong, but now we have IFF, so that's no longer required).

Missile technology has also improved significantly: previously, the launching aircraft had to remain pointed towards the target, since medium-range radar missiles were only semi-active - obviously a significant disadvantage - while now missiles are mostly autonomous at some point after launch.

As for fewer crew and damage control: yes, robotics has not improved enough to completely automate damage control, but tech does improve. At the very least you can automate fire extinguishing with current technology and seal off compartments on the ship - the things required to keep the ship floating; repairs can be done later.

Aircraft are significantly cheaper and easier to design and program for: they have the benefit of operating in mostly empty space against a limited set of targets with fairly obvious characteristics, and they don't require expensive manipulators emulating human hands, since the control surfaces required (ailerons, rudders, etc.) are needed on regular aircraft anyway.

ah.....the path to happiness is revision of dreams and not fulfillment... -SWPIGWANG
Sufficient Googling is indistinguishable from knowledge -somebody
Anything worth the cost of a missile, which can be located on the battlefield, will be shot at with missiles. If the US military is involved, then things, which are not worth the cost if a missile will also be shot at with missiles. -Sea Skimmer


George Bush makes freedom sound like a giant robot that breaks down a lot. -Darth Raptor
User avatar
Stuart Mackey
Drunken Kiwi Editor of the ASVS Press
Posts: 5946
Joined: 2002-07-04 12:28am
Location: New Zealand
Contact:

Post by Stuart Mackey »

Admiral Valdemar wrote:No one in Congress "gets" economies of scale. Ask anyone who loved Seawolf about that.
How many F-22s would be required to supply the USAF's requirements and keep it within, or at, budget?
Via money Europe could become political in five years" "... the current communities should be completed by a Finance Common Market which would lead us to European economic unity. Only then would ... the mutual commitments make it fairly easy to produce the political union which is the goal"

Jean Omer Marie Gabriel Monnet
--------------
User avatar
brianeyci
Emperor's Hand
Posts: 9815
Joined: 2004-09-26 05:36pm
Location: Toronto, Ontario

Post by brianeyci »

Pu-239 wrote: Slippery slope fallacy.

The others have basically covered the rest.
Slippery slope fallacy my ass. It's not a slippery slope if it's the same. AI wankers make the mistake of assuming that there's a line between AI capable of piloting a combat aircraft as well as a human or better (that is the fucking point) and a sentient AI in the first place. Given human beings are the baseline to pilot a combat aircraft, and human beings are sentient, too fucking bad. It's too bad that you don't understand you can't bifurcate like that. It's ironic that people accuse me of Skynet or Star Trek, when they are the ones using fictional, non-existent AI to prove their point in the first place. Broomstick, the pilot here, says it's improbable.

*

Armchair quarterback all you fucking want, mention Super Mario all you want, Master of Asses. When AI people talk about artificial intelligence, they are not talking about the layperson's definition of artificial intelligence but the holy grail of computer science. At the very least, when I come back and mention sentient AI, they should not go noooooooooooo, Brian, you shouldn't even fucking mention that at all. Too bad you don't fucking see that.
User avatar
The Jester
Padawan Learner
Posts: 475
Joined: 2005-05-30 08:34am
Location: Japan

Post by The Jester »

Master of Ossus wrote:You're also ignoring the fact that the chess program did not have a library of moves, but generated its play the same way most humans would--by evaluating various move combinations quickly (except, actually, human players DO have a library of possible moves that they rely on heavily, especially during the opening and end-games).
Strong chess engines do use opening move libraries and end-game tablebases as references. The performance of the engine is significantly worse if you remove these components. Once the position is out of the library, they rely on a brute-force method to evaluate positions. The idea is to look at most of the positions n moves ahead (alpha-beta pruning will eliminate some positions before evaluation) and determine which play would lead to the position it believes is most advantageous to itself.

However, chess is a game of perfect information with a branching factor which isn't too bad (~35 different moves per position) and the game-state doesn't change when you're pondering over what to do next, which does give a number of advantages to a computer which is able to calculate quickly.
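
For the curious, the fixed-depth search described above can be sketched generically - this is an illustration, not any real engine's code, and the Nim-style toy game at the bottom stands in for a real move generator and evaluation function:
[code]
import math

def alphabeta(pos, depth, moves, apply_move, evaluate,
              alpha=-math.inf, beta=math.inf):
    """Fixed-depth negamax with alpha-beta pruning.

    The depth cutoff is the 'horizon': threats that mature more than
    `depth` plies ahead are invisible to the search.
    """
    legal = moves(pos)
    if depth == 0 or not legal:
        return evaluate(pos)  # score from the side-to-move's perspective
    best = -math.inf
    for m in legal:
        score = -alphabeta(apply_move(pos, m), depth - 1,
                           moves, apply_move, evaluate, -beta, -alpha)
        best = max(best, score)
        alpha = max(alpha, score)
        if alpha >= beta:
            break  # the opponent already has a refutation; prune the rest
    return best

# Toy stand-ins: a pile of 10 stones, take 1-3 per turn, last stone wins.
moves = lambda p: [n for n in (1, 2, 3) if n <= p]
apply_move = lambda p, n: p - n
evaluate = lambda p: -1.0 if p == 0 else 0.0  # side to move at 0 has lost

print(alphabeta(10, 6, moves, apply_move, evaluate))  # 1.0: forced win
[/code]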
Adrian Laguna
Sith Marauder
Posts: 4736
Joined: 2005-05-18 01:31am

Post by Adrian Laguna »

This talk about chess reminded me of the Kasparov - Deep Blue match. Everyone knows the computer beat Kasparov; what most people don't know is that Kasparov beat the computer. They played six games one right after the other: the chess champ won the first two, drew the third, and lost the rest. Why did Deep Blue win the later games and not the first ones? It's not because the computer learnt from earlier experience - far simpler than that: Kasparov got tired, and his opponent didn't. An advantage computers have over flesh and blood is that they never get tired.
User avatar
The Jester
Padawan Learner
Posts: 475
Joined: 2005-05-30 08:34am
Location: Japan

Post by The Jester »

Check your history.

'96 Kasparov beat Deep Blue 4-2 with Kasparov winning the second, fifth and sixth games.

'97 Deep Blue defeated Kasparov 3.5-2.5 with Kasparov winning the first game only.

Yes, the rules were unfair to Kasparov and he did suffer from fatigue during the matches.

But it's also interesting to look at the matches Kasparov has won against computers, because he knew how to exploit the engine's approach to playing chess. One of the major problems for chess engines is called the horizon effect. Since the computer is analysing the game by brute force, it has a limit to how far ahead it can "see". This means that the computer cannot handle moves which generate very long-term threats ("beyond the horizon", so to speak), or positions which don't have very clear lines of advance. So, just like the AI in your PC RTS, there are ways to exploit the vulnerabilities of chess engines.
User avatar
Pu-239
Sith Marauder
Posts: 4727
Joined: 2002-10-21 08:44am
Location: Fake Virginia

Post by Pu-239 »

brianeyci wrote:
Pu-239 wrote: Slippery slope fallacy.

The others have basically covered the rest.
Slippery slope fallacy my ass. It's not a slippery slope if it's the same. AI wankers make the mistake of assuming that there's a line between AI capable of piloting a combat aircraft as well as a human or better (that is the fucking point) and a sentient AI in the first place. Given human beings are the baseline to pilot a combat aircraft, and human beings are sentient, too fucking bad. It's too bad that you don't understand you can't bifurcate like that. It's ironic that people accuse me of Skynet or Star Trek, when they are the ones using fictional, non-existant AI to prove their point in the first place. Broomstick, the pilot here says it's improbable.
But it isn't the same. An AI pilot does not need to have feelings, does not need to derive enjoyment from things, and to some extent may even be expendable, amongst other traits which are attributed to human intelligences and are not required.
An AI pilot does not need to be aware that it exists; it simply needs to be able to avoid certain threats (as a cost-saving measure, since without a pilot its survival isn't all that important). Stuff only has the attributes we program it to have.

Even if it has a certain level of intelligence, it is not necessarily immoral to use it - after all, we eat fish, and fish are capable of avoiding predators (enemies) and eating prey (targets).



And we can build autonomous AI planes today, as Broomstick stated; we just haven't bothered sinking money into refining them to enable flexibility. However, many military missions are fairly specific and do not require the level of flexibility mentioned (much of which exists to preserve the safety of the plane, which isn't as important when you don't have a human pilot). In a military environment where the enemy is already trying to blow stuff up, using AIs with greater capabilities may actually add a level of safety (for its operators). The flexibility of computer responses has been improving, as the DARPA Grand Challenge shows.

There are areas where such an AI would not be suitable with current technology at an adequate level of safety to friendlies (e.g. automated search and destroy of enemy ground forces in close proximity to friendly ground forces, without specific targets that an AI can understand), but you don't have to replace everything with AI (and in any case, this thread is about the air-to-air role, since that is what the F-22 does, and that role is easy for an AI).


Semantics, but since you seem to get hissy fits over this: when so-called "AI wankers" talk specifically about sentient AI, they call it strong AI, while an unqualified "AI" refers to anything that accepts an input and produces a moderately complex output modeling what a person would do in its place, regardless of how stupid it is. Besides, most people in this thread, myself included, do not work in strong AI research, so the unqualified broad use of "AI" is justified.

Again, the above is semantics - call it whatever you want - but a "system to automatically control a military aircraft" (since you dislike the usage of AI to refer to dumb systems so much) does not need to pass a Turing test. Again, a fish (or, since we're talking about the air and not water, a dragonfly) doesn't pass a Turing test, but is capable of performing actions similar to what a fighter jet does. What is a target and what is an enemy can be selected by people on the ground and programmed in, as well as rules of engagement.

ah.....the path to happiness is revision of dreams and not fulfillment... -SWPIGWANG
Sufficient Googling is indistinguishable from knowledge -somebody
Anything worth the cost of a missile, which can be located on the battlefield, will be shot at with missiles. If the US military is involved, then things, which are not worth the cost if a missile will also be shot at with missiles. -Sea Skimmer


George Bush makes freedom sound like a giant robot that breaks down a lot. -Darth Raptor
User avatar
Pu-239
Sith Marauder
Posts: 4727
Joined: 2002-10-21 08:44am
Location: Fake Virginia

Post by Pu-239 »

Just to make it clear, even an AI with the intelligence and sentience of a fish would be overkill. Again, "dumb" systems (a fish wouldn't count as one) could sufficiently do the job.

ah.....the path to happiness is revision of dreams and not fulfillment... -SWPIGWANG
Sufficient Googling is indistinguishable from knowledge -somebody
Anything worth the cost of a missile, which can be located on the battlefield, will be shot at with missiles. If the US military is involved, then things which are not worth the cost of a missile will also be shot at with missiles. -Sea Skimmer


George Bush makes freedom sound like a giant robot that breaks down a lot. -Darth Raptor
User avatar
Sea Skimmer
Yankee Capitalist Air Pirate
Posts: 37390
Joined: 2002-07-03 11:49pm
Location: Passchendaele City, HAB

Post by Sea Skimmer »

Stuart Mackey wrote:
How many F22's would be required to supply the USAF's requirements and keep it within, or at, budget?
381 aircraft, which would support ten active squadrons, one for each expeditionary air wing, was identified as the minimal USAF requirement some time ago. Fully replacing the F-15 with a margin for attrition would require over 700 aircraft. The current plan is to field 183 planes in seven small squadrons.

I don’t really get what you mean by keeping within or at budget; the budget for the program is 62 billion (of which no less than 28 billion covers development costs) and limits production to 183 planes. It’s fairly likely that some additional planes will be funded in the future, especially now that a large portion of the F-15 fleet is screwed.

The actual production cost for each F-22 is around 120-130 million. Even if a big additional order could drive this down to 100 million, damned unlikely, that would mean reaching 381 planes would cost another 20 billion dollars. I suspect five to seven billion is a more likely scale of additional funding, if it happens at all.
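
For anyone checking the arithmetic, here it is in Python, using the same figures as above (100 million is the optimistic unit cost, not an official number):

# Figures from the post above; unit cost is the optimistic 100 million case.
current_fleet = 183
requirement   = 381
unit_cost     = 100e6
extra_cost = (requirement - current_fleet) * unit_cost
print(extra_cost / 1e9)   # 19.8, i.e. roughly another 20 billion dollars
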
"This cult of special forces is as sensible as to form a Royal Corps of Tree Climbers and say that no soldier who does not wear its green hat with a bunch of oak leaves stuck in it should be expected to climb a tree"
— Field Marshal William Slim 1956
User avatar
Hotfoot
Avatar of Confusion
Posts: 5835
Joined: 2002-10-12 04:38pm
Location: Peace River: Badlands, Terra Nova Winter 1936
Contact:

Post by Hotfoot »

brianeyci wrote:
Pu-239 wrote: Slippery slope fallacy.

The others have basically covered the rest.
Slippery slope fallacy my ass. It's not a slippery slope if it's the same. AI wankers make the mistake of assuming that there's a line between AI capable of piloting a combat aircraft as well as a human or better (that is the fucking point) and a sentient AI in the first place. Given human beings are the baseline to pilot a combat aircraft, and human beings are sentient, too fucking bad. It's too bad that you don't understand you can't bifurcate like that. It's ironic that people accuse me of Skynet or Star Trek, when they are the ones using fictional, non-existent AI to prove their point in the first place. Broomstick, the pilot here, says it's improbable.

*snip*

Armchair quarterback all you fucking want, mention Super Mario all you want Master of Asses. When AI people talk about artificial intelligence, they are not talking about the layperson's definition of artificial intelligence but the holy grail of computer science. At the very least, when I come back and mention sentient AI they should not go noooooooooooo, Brian you shouldn't even fucking mention that at all. Too bad you don't fucking see that.
Brian, you're a fucking moron. AI has been explained to you over and over again in this thread, but you do not fucking listen. Sentience is not required for a machine to do its fucking job.

There IS a hard and fast line between computer programs and living machines. Computer processors work nothing like the human brain; they can't even come close. More to the point, we don't even know how our brains work except at the most basic, primitive level. We can't code sentience because we don't know HOW TO. We can code other things, basic things, because we understand how they work. Flying a plane does not require sentience any more than an insect's flight does, and no, don't take that to mean you need something living to fly: the UAVs already in the field capable of autonomous flight PROVE YOU WRONG. That there are currently bugs in the existing lines of UAVs is not proof that they don't work, or that the bugs can't be fixed, so give it a rest already.

Research into Artificial Intelligence and Artificial Sentience has been continuing for decades now. You know what? We still don't have a fucking program that can hold a god-damned conversation with someone, much less make an informed decision about loyalties or shit. We DO, however, have programs that can drive a fucking car, fly a god-damned plane, and even do a little dance in a humanoid body. We even have computers that can decide which action is most favorable.

NONE OF THESE TASKS REQUIRE HIGHER LEVEL THOUGHT.

NONE OF THESE ROBOTS CAN CHOOSE LOYALTY FOR THEMSELVES.

However, since you are claiming that higher-function programs are so easy, I am going to ask that you provide some level of proof. Most of the people you've been arguing against have some degree of training in programming and computer science. Now, I'm not asking that you prove your own competence in the field, but you could at least provide some manner of evidence: professionally published papers (note: not sensationalist news crap), or something along those lines. You know, actual research instead of claims that sound good that you've pulled from your ass.

The simple thing that you do not understand is that game theory mathematics can be used to create decision trees. No higher function is needed, just a handful of equations. The equations will be limited to such things as "Do I fire on the target or not." Not, as you somehow believe, "Do I defect to Canada for hookers and donuts or not."

You see, the reason is simple. In order to make the decision "Do I fire on the target or not," you only need a limited amount of pre-loaded data: what the target is, how to read the sensors, what weapons you have available, etc. You DON'T need information like where the weapons were made, who made the weapons, how many people they urinated on before they developed the design, what deviant toys they had in their basement, how much they enjoyed getting fucked in the ass by the Senators that funded their projects, and so forth. Moreover, even THAT knowledge requires an even more massive pile of supporting information to be programmed into a computer: what is urine, what is an ass, what is sex, what is a sex toy, what is a senator, what is funding, what is a dollar, what is trade... the list goes on.
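To make that concrete, here's a toy Python sketch of the fire/no-fire decision as a hand-written payoff comparison; every number in it is invented, and in a real system the probabilities would come from sensor models and the payoffs from doctrine:

# Toy sketch: "do I fire or not" as a payoff comparison, no sentience involved.
# All payoffs and probabilities below are invented for illustration.
def should_fire(p_kill, p_friendly_hit):
    value_kill, cost_miss, cost_fratricide = 100.0, -10.0, -10000.0
    ev_fire = (p_kill * value_kill
               + (1 - p_kill) * cost_miss
               + p_friendly_hit * cost_fratricide)
    ev_hold = 0.0                      # holding fire is the baseline
    return ev_fire > ev_hold

print(should_fire(p_kill=0.8, p_friendly_hit=0.001))  # True: clean shot
print(should_fire(p_kill=0.8, p_friendly_hit=0.05))   # False: friendlies too close

A handful of multiplications and a comparison. Nowhere in there is a place for the machine to ponder hookers, donuts, or Canada.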

See, things you take for granted as being obvious are NOT obvious to a fucking computer. Computers only know what we program them to know. The easiest thing to program into them is math. Math does not require a long education for a computer, because computers are designed to... do math! Shocking, I know. Every fucking computer program in existence is a glorified equation. Yes, even your browser.

So please, do show how math problems lead to sentience. This is what you are arguing, so hop to it. While you're at it, you might as well show us what part of the human brain is responsible for sentience and how that works.

Get to it, we don't have all eternity.
Do not meddle in the affairs of insomniacs, for they are cranky and can do things to you while you sleep.
The Realm of Confusion
"Every time you talk about Teal'c, I keep imagining Thor's ass. Thank you very much for that, you fucking fucker." -Marcao
SG-14: Because in some cases, "Recon" means "Blow up a fucking planet or die trying."
SilCore Wiki! Come take a look!
User avatar
Colonel Olrik
The Spaminator
Posts: 6121
Joined: 2002-08-26 06:54pm
Location: Munich, Germany

Post by Colonel Olrik »

brianeyci wrote: Armchair quarterback all you fucking want, mention Super Mario all you want Master of Asses. When AI people talk about artificial intelligence, they are not talking about the layperson's definition of artificial intelligence but the holy grail of computer science.
Hmm. No, you're wrong. My PhD in multi-agent A.I. is almost finished, so I probably ought to know.