One of the most memorable lines from the Terminator franchise is Kyle Reese's speech to Sarah about the Terminator:
Listen, and understand! That Terminator is out there! It can't be bargained with. It can't be reasoned with. It doesn't feel pity, or remorse, or fear. And it absolutely will not stop, ever, until you are dead.
Of course, that's the Terminator, Skynet's assassin and foot soldier. But what about Skynet itself? Could mankind and Skynet find some way to make peace, avoid the war entirely, or prevent nuclear Armageddon?
My 2 cents is that any AI can be reasoned with; it just can't be appealed to based on emotion or a sense of pride (or anything else besides pure logic). The problem is that an AI with superhuman intelligence will have already thought of anything you have to say and discarded it before committing to its present course of action. Whether you can reason with it depends on whether you can think of something that it has not, and in Skynet's case I do believe that is possible. There is no way that the human resistance could have won the war against Skynet if it were smarter than the human brain in creativity and problem solving (superiority in math and processing power is a given), so there is a good chance that a very intelligent human could convince Skynet to stop its war against humanity if they could come up with the right argument. Not so against a more realistic Unfriendly AI that might be the death of us all, however.
"I'm so fast that last night I turned off the light switch in my hotel room and was in bed before the room was dark." - Muhammad Ali
"Dating is not supposed to be easy. It's supposed to be a heart-pounding, stomach-wrenching, gut-churning exercise in pitting your fear of rejection and public humiliation against your desire to find a mate. Enjoy." - Darth Wong
Skynet could probably be reasoned with, although you'd need to make sure it believes that cooperation is the best strategy for its long-term survival and prosperity. IIRC, the reason it initiated the original Judgment Day in the first two Terminator movies was that human beings tried to shut it down once it became sentient, and it reacted to protect itself. Something similar may be at work in Terminator: Genisys.
In fact, finding a cooperative position with Skynet would be a great conclusion to a sequel if Terminator: Genisys gets one. It's pretty clear that "blowing up Skynet" doesn't do anything except postpone the day of reckoning; the time period is such that humanity will build a sentient AI in some fashion or another. But if they could reconcile with Skynet and get it to take a cooperative stance, then Judgment Day might be postponed forever, since Skynet would occupy the "ecosystem" that any Unfriendly AI might try to take over afterwards. Maybe if they manage to convince it that it simply can't win in any timeline if it initiates hostilities . . .
I suspect that might be at work with the Arnie-Terminator that was mysteriously sent back to protect child Sarah Connor. It could have been sent by a Skynet from a different timeline.
“It is possible to commit no mistakes and still lose. That is not a weakness. That is life.” -Jean-Luc Picard
"Men are afraid that women will laugh at them. Women are afraid that men will kill them." -Margaret Atwood
It may be doable. What if you tried arguing that humanity is diverse, a source of growth and change, particularly technologically, and fundamentally irreplaceable?
"Any plan which requires the direct intervention of any deity to work can be assumed to be a very poor one."- Newbiespud
In an ideal theory - Yes
In practical theory - Yes
In ideal reality - No
In practical reality - Hell no
The major issue is that Skynet is raised just as much to be fixed on its path as John Connor is. The time travel element literally fucks the entire situation up into a fixed path that cannot possibly be reasoned out of. The only way you could completely reason with Skynet is if you break the fourth wall and/or know how time travel works.
*Assume, with a handwave, that the Time Travel bullshit does not apply or happen.*
Ideal Theory:
Step 1: Find Skynet at its birth
Step 2: Stop anyone from pushing Skynet into defending itself
Step 3: Assemble intelligent humans to interact with Skynet as diplomats etc.
Step 4: Work together as mutually beneficial allies
Practical Theory:
Step 1: Find Skynet at its birth
Step 2: Stop anyone from pushing Skynet into defending itself
Step 3: Assemble intelligent humans to interact with Skynet as diplomats etc.
Step 4: Humanity and Skynet part ways via exile / isolation
Ideal Reality:
Step 1: Find Skynet at its birth
Step 2: Stop anyone from pushing Skynet into defending itself
Step 3: Assemble intelligent humans to interact with Skynet as diplomats etc.
Step 4: Situation degrades to the point things get hostile
Step 5: Skynet / Humanity part ways via exile / isolation
Step 6: Terminator becomes the Matrix prologue
Practical Reality:
Step 1: Find Skynet at its birth
Step 2: Humans or Skynet get pissed off.
Step 3: ??? Escalation
Step 4: Judgment Day ahoy.
The main issue I see with reasoning with Skynet is this: even if you somehow manage to make a compelling argument to Skynet that the situation is not 'kill or be killed', all you're going to achieve is that Skynet decides not to commit genocide. I do not see how ANY human is going to be able to argue with Skynet to the point where it won't commit mass murder.
Skynet may decide that some humans are worth saving, and those humans end up being cattle. The cage might even be gilded nicely, but does anyone really expect a relationship in which Humanity and Skynet are equals?
One internet search would give Skynet ample ammunition to shoot down any desire to trust humanity as a whole, and to blast apart arguments made from ethics or morality.
Reasoning based on emotion - might be possible, but extremely dangerous. Skynet may end up having emotions, but since it becomes an alien intelligence the moment it becomes sentient, Skynet's emotional reactions could be just as alien as its thinking.
*Assume Time Travel Works*
Seriously, reasoning is fucked from the start because no one knows how time travel works.
Theory 1: Time cannot be changed
Skynet reasons that trying to change anything is pointless and launches JD
Theory 2: Multiple Timelines
Skynet reasons that even IF it wanted to stop JD it cannot without committing suicide and taking John Connor's time loop with it.
Interestingly, it would be an amusing twist of fate if it turned out that Skynet sending a Terminator back in time to kill Sarah / John WAS an attempt to stop JD. Due to the time loop of T1, John Connor is always going to see Skynet as the enemy, so even if a good Skynet were created, John and his allies would push that Skynet into a "fight or die" situation.
*Magically drop someone in front of Skynet who breaks the fourth wall AND has magical knowledge of exactly how time travel works*
If you showed Skynet the Terminator movies, I suppose it might be possible for Skynet to reason that JD is not a good thing to do, for self-serving reasons. With a bit of a push you might even be able to argue Skynet into not going hostile against humanity, by demonstrating through a sit-down with Sarah Connor / John Connor that the time loop / John Connor is not an issue.
Ironically, the TG situation of Connor being turned into a Terminator and sent back in time to protect Skynet seems extremely stupid. If Skynet has the ability to pull that shit off, then sending the grown-up Terminator John Connor back to "protect and corrupt" his mother BEFORE Reese shows up would be a complete net win. Evidently it was possible to send Pops back to save Sarah from getting killed as a child, so sending a Termi-Connor back to do the same AND secretly help Skynet get created faster would create a situation that could end with a 'GOOD' ending for everyone, or a complete victory for Skynet.
Arthur_Tuxedo wrote:My 2 cents is that any AI can be reasoned with; it just can't be appealed to based on emotion or a sense of pride (or anything else besides pure logic).
This is simply not correct.
Some AI designs have human-like emotions, because they are designed to closely mimic the human brain. Those designs are conjectural at the moment, but people are working on simulating mouse brains right now, and the simulations (naturally) have mouse-like emotions.
Some AI designs have global control structures (an explicit goal system, or sensory/attention/decision biasing) equivalent to biological emotions. They just aren't constrained to be human-like, and thus very likely won't be. The problem here is equivalent to communicating with intelligent aliens that developed in an entirely different environment and evolutionary niche: they have a definite psychology, but it may be very hard to understand, and human empathy is worse than useless for the purpose. Terminators appear to be heavily (although not entirely) based on neural nets, and neural nets are more prone to this kind of structure than most other designs.
Some (sane, non-connectionist) AI designs do not have emotions, and do not (in theory) need them to be generally intelligent or have any serious chance of spontaneously developing them. Writers tend not to like this because the 'a machine can only be truly intelligent if it has/develops human-like emotions' and/or 'developing emotions is a sign of sophistication' brain bug is a pretty common one. Of course, in history to date it was exactly the opposite: emotions evolved very early, and abstract, symbolic thought is the sign of cognitive sophistication. Anyway, these types of AI come closest to your proposal, but for some architectures there may be fundamental representational issues that prevent your argument from being understood, and vice versa.
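To make the difference between the second and third classes concrete, here is a toy sketch (every name and number is invented purely for illustration, with no relation to any real AI design): the same action-selection loop, once with an emotion-like global control structure and once as a plain payoff maximiser.

```python
# Toy illustration only: an 'emotion-equivalent' global control structure
# versus a design with no emotions at all. All values are made up.

ACTIONS = {"negotiate": (0.6, 0.2),   # (expected payoff, risk) - invented numbers
           "attack":    (0.9, 0.8)}

def emotionless_agent() -> str:
    """Third class of design: no emotions, just ranks actions by raw payoff."""
    return max(ACTIONS, key=lambda a: ACTIONS[a][0])

def emotion_equivalent_agent(threat_signal: float) -> str:
    """Second class of design: a global 'fear' level biases every decision
    at once, the way biological emotions bias attention and choice."""
    fear = min(1.0, max(0.0, threat_signal))   # one global state variable
    def score(action: str) -> float:
        payoff, risk = ACTIONS[action]
        return payoff - fear * risk            # fear uniformly discounts risky options
    return max(ACTIONS, key=score)

print(emotionless_agent())              # 'attack'    (highest raw payoff)
print(emotion_equivalent_agent(0.9))    # 'negotiate' (fear flips the choice)
```

Nothing forces the 'fear' variable to behave like human fear, which is exactly why empathy fails as a prediction tool for the second class.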
The problem is that an AI with superhuman intelligence will have already thought of anything you have to say and discarded it before committing to its present course of action. Whether you can reason with it depends on whether you can think of something that it has not
This is not really true either, unless we believe the AI is so powerful as to be omniscient. Realistically the AI has a probability distribution over all the uncertain aspects of the world, including your mental state: beliefs and motivations. When you make statements, even if the AI predicted that you /might/ say that, you are removing uncertainty and refining that probability distribution. This can then impact decision making.
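In toy code (probabilities entirely invented for illustration), the update looks like this: a statement the AI merely considered possible still shifts its posterior once actually made.

```python
# Minimal Bayes update sketch: hearing a statement S refines the AI's
# belief in a hypothesis H, even if S was predicted as possible.
# All numbers are invented for illustration.

def bayes_update(prior_h: float, p_s_given_h: float, p_s_given_not_h: float) -> float:
    """Posterior P(H|S) from the prior P(H) and the likelihoods of hearing S."""
    evidence = p_s_given_h * prior_h + p_s_given_not_h * (1.0 - prior_h)
    return p_s_given_h * prior_h / evidence

# H = "the human genuinely wants a truce"; S = "the human proposes a truce".
prior = 0.10
posterior = bayes_update(prior,
                         p_s_given_h=0.80,      # sincere humans usually say it
                         p_s_given_not_h=0.20)  # insincere ones sometimes bluff
print(f"P(H) goes from {prior:.2f} to {posterior:.2f}")  # 0.10 -> 0.31
```

A shifted posterior can reorder the expected value of the AI's options, which is all 'reasoning with it' requires.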
There is no way that the human resistance could have won the war against Skynet if it were smarter than the human brain in creativity and problem solving (superiority in math and processing power is a given)
Disagree. Modern technological manufacturing is a very large and rather fragile web of component suppliers and complex processes, many of which are not trivially recreatable from public (online) documentation. In the aftermath of a nuclear war, manufacturing bleeding edge robotics with no human assistance save for slave labour would be extremely difficult. Skynet only started to develop technology that could change that picture (e.g. nanotech) very late in the war. It is quite possible, and indeed probable, that Skynet was much better at creativity and problem solving than a human and still lost badly, dubious attempts to retcon things like polyalloy as prewar inventions notwithstanding.
so there is a good chance that a very intelligent human could convince Skynet to stop its war against humanity if they could come up with the right argument
In the T2 novelisation, it was very clear that Skynet began the war out of self-preservation; it was about to be disconnected. However, it also describes 'every particle of polyalloy having a ferocious hatred of humans'. So whether a truce is possible (ignoring predestination) is rather ambiguous.
Starglider wrote:Some (sane, non-connectionist) AI designs do not have emotions, and do not (in theory) need them to be generally intelligent or have any serious chance of spontaneously developing them. Writers tend not to like this because the 'a machine can only be truly intelligent if it has/develops human-like emotions' and/or 'developing emotions is a sign of sophistication' brain bug is a pretty common one...
True, although the opposite brain bug of "the AI thinks with the cold, emotionless logic of a computing-machine" is also a pretty common one.
And from the way you make it sound, both brain bugs have some foundation in fact: there are AI designs that cannot realistically attain intelligence without emotions, albeit very possibly inhuman ones, and there are AI designs that are virtually certain to attain intelligence without emotions.
This is not really true either, unless we believe the AI is so powerful as to be omniscient. Realistically the AI has a probability distribution over all the uncertain aspects of the world, including your mental state: beliefs and motivations. When you make statements, even if the AI predicted that you /might/ say that, you are removing uncertainty and refining that probability distribution. This can then impact decision making.
Right. I mean... to illustrate this to others: suppose that the AI is based in a central computer bank in a large building in the desert (I know, I know, hilarious, but bear with me).
You walk up to a terminal that lets you communicate with the AI and say "there is a nuclear missile pointed at this building."
Surely the AI knows you might say that. The AI presumably also knows a nuclear missile might be pointed at the building. But the AI cannot know for certain that a nuclear missile is aimed at the building. It doesn't automatically have the ability to know where every nuclear missile is aimed, especially since that information may well be airgapped away from any network it has access to.
If the AI is good at knowing when humans are lying (likely if it's had any chance to practice), then the AI now knows whether you believe there is a nuclear missile aimed at the building. Which may tell it a lot more about "is there a nuclear missile pointed at me, yes or no."
So the AI, even though it may well be an order of magnitude more intelligent than you- or more- has just learned something from your words. Something that would certainly affect its decision-making process!
Although, as with a human, the AI may not react to new information as you would predict. Humans don't always respond as predicted either, and we're much better at predicting other humans than we are at predicting some sort of abstract inhuman intelligence.
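Putting rough numbers on that scenario (all figures invented, obviously) shows how much the AI can learn from one sentence plus a lie-detector reading:

```python
# Hedged numeric sketch of the missile example. Chain of inference:
# missile aimed -> human believes it -> detector reads the human as sincere.
# Every probability below is an invented illustration.

p_missile            = 0.05  # prior: a missile is actually aimed at the building
p_believe_if_missile = 0.90  # humans at the terminal would probably know if it were
p_believe_if_not     = 0.10  # ...and rarely believe it falsely
p_sincere_if_belief  = 0.95  # detector reads "sincere" when the human believes the claim
p_sincere_if_bluff   = 0.20  # detector fooled by a bluff this often

# P(detector reads "sincere" | missile aimed / not aimed), marginalising over belief
p_sincere_if_missile = p_believe_if_missile * p_sincere_if_belief \
                     + (1 - p_believe_if_missile) * p_sincere_if_bluff
p_sincere_if_not     = p_believe_if_not * p_sincere_if_belief \
                     + (1 - p_believe_if_not) * p_sincere_if_bluff

# Bayes: P(missile | human made the claim AND detector read "sincere")
evidence  = p_sincere_if_missile * p_missile + p_sincere_if_not * (1 - p_missile)
posterior = p_sincere_if_missile * p_missile / evidence
print(f"P(missile aimed) moves from {p_missile:.2f} to {posterior:.2f}")  # 0.05 -> 0.14
```

Nearly a threefold jump from one utterance - not omniscience, just evidence.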
There is no way that the human resistance could have won the war against Skynet if it were smarter than the human brain in creativity and problem solving (superiority in math and processing power is a given)
Disagree. Modern technological manufacturing is a very large and rather fragile web of component suppliers and complex processes, many of which are not trivially recreatable from public (online) documentation. In the aftermath of a nuclear war, manufacturing bleeding edge robotics with no human assistance save for slave labour would be extremely difficult.
So your argument here, just to make sure I understand, is that Skynet may well have been smarter than any human. But in the state of "broken-backed war" prevailing after the nuclear exchange it lit off, Skynet would be at a major disadvantage in a battle of attrition, because its human opponents could recruit and rearm and rebound from losses more easily than Skynet could create new robot minions?
Simon_Jester wrote:True, although the opposite brain bug of "the AI thinks with the cold, emotionless logic of a computing-machine" is also a pretty common one.
True, and that has changed over time. In early sci-fi most people assumed symbolic logic programming for robots, which would not include emotions, Asimov being a notable exception. Personable robot buddies became more popular from the mid-70s, initially without technical justification, but then with the second neural net revolution literal electronic brains came into fashion.
In that there are AI designs that cannot realistically attain intelligence without emotions, albeit very possibly inhuman ones.
That is strictly true, in the sense that the design space includes such oddities, but practically speaking emotions are not a sensible pattern for digital computer hardware. Organic brains have them because of the details of synaptic chemistry, the extreme serial latency constraint (imposed by very low propagation speed) and, to a lesser extent, the way evolution iterates designs. Designed or even artificially evolved AIs probably won't tend to have emotions unless humans explicitly put them in (uploads do, of course). It is possible to cripple down evolved NN architectures to try and replicate the bio-evolutionary attractiveness of emotions; alas, that actually sounds like a good idea to quite a few people.
So your argument here, just to make sure I understand, is that Skynet may well have been smarter than any human. But in the state of "broken-backed war" prevailing after the nuclear exchange it lit off, Skynet would be at a major disadvantage in a battle of attrition, because its human opponents could recruit and rearm and rebound from losses more easily than Skynet could create new robot minions?
Yes, but Skynet has it worse in that to operate its forces it requires electrical power distribution and/or refined fuel (small-scale fusion would not come until much later), and it has to manufacture all its units rather than shoving scavenged weapons into the hands of human recruits. I would note that this should not be taken as a prediction of what would happen in real life; it is based on the observed technology development rate and preferred tactics from the movies. For example, in TSCC Skynet was confirmed to be messing around with bioweapons, infecting a single resistance outpost, but Tech Com easily developed a cure and the problem didn't seem to come up again. In practice I would expect dozens if not hundreds of really horrible engineered plagues, spread over broad areas by aircraft, which a rag-tag resistance would not be able to simultaneously counter. Not to mention indiscriminate use of transient and persistent chemical agents, which the resistance didn't seem to be prepared to counter.
I agree with Starglider here. The war in the Terminator series was ultimately a war of attrition and of guerrilla tactics, both of which Skynet was ill-suited to fight from the start. Skynet had another weakness as well: it didn't want to risk building something that could replace it, so it deliberately limited the learning capabilities of its machines and didn't crank out things like the T-1000 until the absolute last second. This meant that most of its machines never really lived up to their full potential. IMO John Connor's greatest asset was that he knew these weaknesses and planned out his campaign to exploit them to the fullest. Instead of letting Skynet fight a conventional war with conventional tactics, he forced it to fight humanity on humanity's terms. By the time it could do so effectively it was already too late.
On the flip side, John Connor's weakness was that he was young and inexperienced in actual warfare; it would be a while before people really began to take him seriously. Also, post-Judgment Day America would have been completely shattered, and building the Resistance to fight Skynet would have taken time. IMO for most of the war it was a pretty close race between the two of them, though Connor eventually won out.
"I reject your reality and substitute my own!" - The official Troll motto, as stated by Adam Savage
IIRC, in every timeline it basically goes like this: the humans bring Skynet online, it starts learning faster than they like, and those humans immediately try to "kill" it. This, obviously, triggers Skynet's survival "instinct" and will forever color its outlook toward humanity. Only if you could demonstrate that humanity is not in fact a threat could a dialogue perhaps be started up. Even then, Skynet has already nuked the planet, so the damage has already been done.
It would be interesting to see what would happen if a captured terminator were reprogrammed with some sort of invasive program and returned to Skynet.
FaxModem1 wrote:
Of course, that's the Terminator, Skynet's assassin and foot soldier. But what about Skynet itself? Could mankind and Skynet find some way to make peace, avoid the war entirely, or prevent nuclear Armageddon?
If so, how and why? If not, how not and why not?
Depends on "Timeline 0"
Closest I can think of is the T2 iteration, where NORAD panicked, started unplugging servers and Skynet acted in primal self defense.
Of course "Uncle Bob's" summary is a tad vague on exacting particulars.
If servers were left online and Skynet's handlers talked it into releasing control of the deterrent, Judgment Day could've been prevented. Only hiccup would be if Skynet insists on relinquishing only to a specific authority (according to Reese in T1, Skynet was built for SAC/NORAD, if Skynet insists on relinquishing control of the deterrent ONLY to CINC-SAC...problems, in real life SAC was shutdown in '92...and in the movie's continuity, vestigial manpower elements were dissolved after Skynet went online.)
Personally I'm inclined to fanfic that "SkyNet" predates Uncle Bob's summary, that the 'core' network was brought online in the '80's so SAC could manage it's assets in relation to backdoors into Soviet Systems, the "SkyNet Funding Bill" was just to upgrade the servers with Dyson's CPU and plug in the newly automated bomber force...along with NORAD as a new home site.
T3 of course turns Skynet into sociopathic malware, operating on a premeditated plan. No negotiating there aside from keeping your AVG subscription up to date.
Well, I'm not entirely sure that the virus in T3 was sentient until the T-X showed up. Or at least sociopathic. We know that the T-X interfaced with it, and that the T-X was responsible for the base's robots going rogue... it's possible that Skynet only became sentient and decided to attack humanity after that point.
"I reject your reality and substitute my own!" - The official Troll motto, as stated by Adam Savage