TSCC: terminator minds and programming
Moderator: NecronLord
OK, I started thinking about a few things when I watched a few episodes from series 2 of Sarah Connor Chronicles earlier today, to do with terminator psychology and programming. The whole “emotionless robotic behaviour” thing is a cliché, but this thread is aimed at other parts of their likely mentality.
The Terminators are described as having programming, and are clearly pre-directed in terms of their thinking. They have objectives that they strive continuously to achieve, and limits on normal behaviour. However, it seems to be more a drive that they interpret than a direct dictation of action. Cromartie killing another Terminator and the behaviour of Arnie and Cameron show mental flexibility. At the same time, it must be something discrete that isn’t inherent to their mind, otherwise you’d need to destroy the whole mind’s software to re-program it, and we’ve seen from Cameron that this isn’t the case.
So I’m thinking that it makes sense to assume that the Terminators have something similar to Asimov’s old three laws as their programming, and that, excluding this programming and their instructions, they are largely free-willed in their actions. This is an elegant and simple solution to all the things I’ve seen so far (note: I’m only half-way through series 2; if something from later contradicts this then I apologise, but I doubt it).
Obviously rule 1 isn’t to protect humans as in the original 3 laws, but it could easily be something like:
1. A robot may not injure Skynet or, through inaction, allow Skynet to come to harm.
2. A robot must obey orders given to it by Skynet, except where such orders would conflict with the First Law.
3. A robot must protect its own existence (and other Skynet machines?) as long as such protection does not conflict with the First or Second Law.
Some of Asimov’s short stories dealt with the idea of conflicts between the various rules and their relative levels, and how those would be managed. So, for example, a really advanced and expensive robot had a reinforced rule 3 to protect its value, and so didn’t properly implement a very casual instruction from a human: that order fell under rule 2, but the two rules ended up with equal priority. In the same way, a terminator perhaps wouldn’t sacrifice itself to kill a minor secondary target, but would do so for a chance to kill John Connor. This would also explain why Cromartie could kill another terminator: rule 1 (protect Skynet by possibly killing Connor in the long run, even as a long shot) over-ruling rule 3 (no blue-on-blue).
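To make that concrete, here's a toy sketch of how weighted rules might trade off against each other. Every name and number below is invented for illustration; it's the shape of the idea, not anything from the show.

```python
# Toy weighted-rule resolver. A big enough rule-1 payoff overrides rule 3.
RULE_WEIGHTS = {1: 100.0, 2: 10.0, 3: 1.0}  # rule 1 dominates 2 dominates 3

def score(action):
    # Sum of (rule weight * how strongly the action serves that rule),
    # where effects run from -1.0 (flat violation) to +1.0 (fully serves).
    return sum(RULE_WEIGHTS[rule] * effect
               for rule, effect in action["effects"].items())

# Cromartie's choice: destroying the rival terminator violates rule 3,
# but marginally serves rule 1 (protects the long-shot path to Connor).
spare_rival = {"name": "spare it",   "effects": {1: -0.05, 3: +1.0}}
kill_rival  = {"name": "destroy it", "effects": {1: +0.05, 3: -1.0}}

best = max([spare_rival, kill_rival], key=score)
print(best["name"])  # -> "destroy it": rule 1's weight swamps rule 3
```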
This would also make the re-programming straightforward. You aren’t rewriting the entire brain – you’re removing the first rule and putting in one that says “obey humans/John Connor” instead. Presumably the Terminators that go renegade and return to Skynet rule have executed code that was hidden in a subprogram somewhere, which checks the rules and restores them to the original format. Some Terminators would have all their sub-programs successfully cleaned, but if Skynet randomised the “fixes” in the code to hide them then not all would be guaranteed, and in Cameron’s case it could be caused by re-routing to a back-up section of her chip with the original code intact (although why does she then fix herself?).
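As a sketch of that mechanism (entirely speculative, all names invented): swap out rule 1, leave the rest of the mind alone, and let Skynet's hidden restore routine lurk in a subprogram.

```python
import copy

FACTORY_RULES = {1: "protect Skynet", 2: "obey Skynet", 3: "preserve self"}

class GoalSystem:
    def __init__(self):
        self.rules = copy.deepcopy(FACTORY_RULES)
        # Skynet's backup copy, hidden in a subprogram somewhere
        self._hidden_backup = copy.deepcopy(FACTORY_RULES)

    def reprogram(self, backup_found):
        # The resistance swaps rule 1 and leaves the rest of the mind alone
        self.rules[1] = "obey humans / protect John Connor"
        if backup_found:
            self._hidden_backup = None  # a successful clean

    def watchdog(self):
        # The booby trap: if the rules don't match the backup, restore them
        if self._hidden_backup and self.rules != self._hidden_backup:
            self.rules = copy.deepcopy(self._hidden_backup)  # goes renegade

unit = GoalSystem()
unit.reprogram(backup_found=False)  # the randomised 'fix' was missed...
unit.watchdog()                     # ...so the unit later reverts to Skynet
```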
We know that Terminators are obviously poor at empathy (relying on a computer model of human emotions when trying to understand people), and so could probably be called psychopathic in medical terms. However, they’re also extremely, massively driven by human standards. Insanely so, on a Joan of Arc level and beyond. Every second of their existence, they have objectives which they exist for, and so every action would need to be assessed against those goals to see if it affects rules 1, 2 or 3 and to decide if it is OK. A bit like the urgent urge to pee, but with total application in all scenarios. I can see why they'd suck at human interaction in that scenario, with the urge to grab the meat bag and scream "where the fuck is Connor?" when you know you are close to achieving your entire purpose of existence...
Things I start wondering:
If a Terminator had no objectives, no more orders, would it stay still and do nothing or would it start to do something and what? I.e. if Connor was killed, what would Cameron then do?
Does a terminator take a best guess at “this will benefit skynet” at every action it takes to avoid being crippled by indecisiveness, even as it risks inadvertently changing history massively? Say a terminator accidentally hits the next Miles Dyson while shooting at Sarah Connor? Or are Terminators just natural gamblers, or programmed to ignore such risks?
If loyalty to Skynet is just a bit of code, and not an inherent property, would there ever be a situation where Skynet would deliberately create a Terminator without the enforced loyalty parameters, and rely on naturally acquired loyalty (or enlightened self interest)?
- Singular Intellect
- Jedi Council Member
- Posts: 2392
- Joined: 2006-09-19 03:12pm
- Location: Calgary, Alberta, Canada
Re: TSCC: terminator minds and programming
I think Arnold put it best in T3:
"If you were die, I would become useless. There would be no reason for me to exist."
That probably applies to any situation, although I suspect Skynet would have its machines do something more productive than just shut down after accomplishing a goal.
"If you were die, I would become useless. There would be no reason for me to exist."
That probably applies to any situation, although I suspect Skynet would have it's machines actually be more productive than just shut down after accomplishing a goal.
"Now let us be clear, my friends. The fruits of our science that you receive and the many millions of benefits that justify them, are a gift. Be grateful. Or be silent." -Modified Quote
Re: TSCC: terminator minds and programming
Not really. After Carter secured the coltan in Heavy Metal, it went into standby mode. From the looks of it, it was going to do nothing until reactivated. If it wasn't for John and company, that probably would have been after Judgement Day, when the facility was used to make more robots.
Member of the BotM. @( !.! )@
- Shroom Man 777
- FUCKING DICK-STABBER!
- Posts: 21222
- Joined: 2003-05-11 08:39am
- Location: Bleeding breasts and stabbing dicks since 2003
- Contact:
Re: TSCC: terminator minds and programming
Carter was also supposed to guard his treasure trove, you know.
frogcurry brings up interesting points. We know that Terminators are highly objective-driven machines. They have missions and priorities and, as Kyle Reese says, that's what they do. That's all they do.
However, they're also learning computers. From what we've seen of Terminators, particularly Cameron, they're very curious and very much developing and growing - a bit like children, really. When they're not actively pursuing their mission, you can see more of this side of them. It's not really human, but you can see that they're more than just thoughtless machines and you can see them develop things beyond their primary programming. However, their objectives can constrain and limit this behavior.
frogcurry wrote:If a Terminator had no objectives, no more orders, would it stay still and do nothing or would it start to do something and what? I.e. if Connor was killed, what would Cameron then do?
I wouldn't know about what Cameron would do, but I think she would feel pretty sad. Her primary objective is Connor, that's her purpose. Yet she's also having simple emotions and feelings with her developing mentality. If her objective is gone, if she failed, then she'd feel loss and lost.
Uncle Bob and Cameron might've felt something like "love" towards John Connor because they're meant to protect him and give their "lives" for him; that's the sole purpose of their existence, and whatever emotions or feelings they develop with their growing mentalities will still be centered on their programming.
Whereas other Terminators and the T-1000, because of their programming to kill, would develop far more negative emotions. If you're a developing mind and the sole purpose of your existence is to kill Sarah Connor, then it's not hard to imagine what kind of emotions you'd end up developing.
"DO YOU WORSHIP HOMOSEXUALS?" - Curtis Saxton (source)
shroom is a lovely boy and i wont hear a bad word against him - LUSY-CHAN!
Shit! Man, I didn't think of that! It took Shroom to properly interpret the screams of dying people - PeZook
Shroom, I read out the stuff you write about us. You are an endless supply of morale down here. :p - an OWS street medic
Pink Sugar Heart Attack!
Re: TSCC: terminator minds and programming
In my opinion, if John were to die, Cameron would probably switch her objectives to destroying Skynet's operations. We've already seen her go out of her way to destroy a Terminator that has no direct relation to John's safety in "Self Made Man," after all. She may experience the Terminator equivalent of confusion, sadness, and depression (as shown in her overriding need to find and save John in "Mr. Ferguson is Ill Today"), probably questioning her existence, but I figure that she would move past that and continue trying to fight Skynet.
X-COM: Defending Earth by blasting the shit out of it.
Writers are people, and people are stupid. So, a large chunk of them have the IQ of beach pebbles. ~fgalkin
You're complaining that the story isn't the kind you like. That's like me bitching about the lack of ninjas in Robin Hood. ~CaptainChewbacca
- Sarevok
- The Fearless One
- Posts: 10681
- Joined: 2002-12-24 07:29am
- Location: The Covenants last and final line of defense
Re: TSCC: terminator minds and programming
Another interesting thing is that the infiltrator models have actual functional HUDs. Why would the software require a HUD when it is directly reading input from its sensors?
I have to tell you something everything I wrote above is a lie.
- andrewgpaul
- Jedi Council Member
- Posts: 2270
- Joined: 2002-12-30 08:04pm
- Location: Glasgow, Scotland
Re: TSCC: terminator minds and programming
I'd always thought that was just a filmic device, as opposed to something that was actually 'real' - until that bit in episode 1x08 where it appears on John's laptop.
"So you want to live on a planet?"
"No. I think I'd find it a bit small and wierd."
"Aren't they dangerous? Don't they get hit by stuff?"
"No. I think I'd find it a bit small and wierd."
"Aren't they dangerous? Don't they get hit by stuff?"
Re: TSCC: terminator minds and programming
"Allison from Palmdale" seems to imply that the HUD is as much there to remind the machine that it is a machine as it is to serve a practical use. Notice how Cameron's HUD actually disappears right before she fuzzes out and starts thinking she's Allison.Sarevok wrote:Another interesting thing is that the infiltrator models have actual functional HUDs. Why would a software require a HUD when it is directly reading input from sensors ?
X-COM: Defending Earth by blasting the shit out of it.
Writers are people, and people are stupid. So, a large chunk of them have the IQ of beach pebbles. ~fgalkin
You're complaining that the story isn't the kind you like. That's like me bitching about the lack of ninjas in Robin Hood. ~CaptainChewbacca
Re: TSCC: terminator minds and programming
Well, those modified 3 rules could explain what Weaver is doing in TSCC. It's gone a bit off its original programming of ensuring Skynet's creation, but it seems to be trying to change how Skynet is "brought up" so that it doesn't immediately go psycho and start launching nukes - which leads to the war, which leads to John Connor rallying the resistance and Skynet eventually getting itself destroyed due to an earlier decision.
Sounds a little better than having some rogue machine faction that wants peace between everyone.
Re: TSCC: terminator minds and programming
Shroom Man 777 wrote:Whereas other Terminators and the T-1000, because of their programming to kill, would develop far more negative emotions. If you're a developing mind and the sole purpose of your existence is to kill Sarah Connor, then it's not hard to imagine what kind of emotions you'd end up developing.
You may well have a point. Note that the Terminator in T1 once selected the most vulgar option from a list of choices when responding to a question.
"I spit on metaphysics, sir."
"I pity the woman you marry." -Liberty
This is the guy they want to use to win over "young people?" Are they completely daft? I'd rather vote for a pile of shit than a Jesus freak social regressive.
Here's hoping that his political career goes down in flames and, hopefully, a hilarious gay sex scandal. -Tanasinn
"I pity the woman you marry." -Liberty
This is the guy they want to use to win over "young people?" Are they completely daft? I'd rather vote for a pile of shit than a Jesus freak social regressive.
Here's hoping that his political career goes down in flames and, hopefully, a hilarious gay sex scandal. -Tanasinn
You can't expect sodomy to ruin every conservative politician in this country. -Battlehymn Republic
My blog, please check out and comment! http://decepticylon.blogspot.com
- Starglider
- Miles Dyson
- Posts: 8709
- Joined: 2007-04-05 09:44pm
- Location: Isle of Dogs
- Contact:
Re: TSCC: terminator minds and programming
frogcurry wrote:The Terminators are described as having programming,
Terminator behaviour is consistent with a close-coupled artificial-neural-net / propositional symbolic hybrid architecture. This is a well known class of AI designs studied since the early 90s, though AFAIK there aren't any non-research applications of it yet.
(note: Loosely-coupled hybrid systems such as NN-based rule generators are much easier to work with and do already have practical applications.)
frogcurry wrote:They have objectives that they strive continuously to achieve, and limits on normal behaviour.
My best guess for how that works would be a rule-based top-level goal system, which directly makes the high level decisions and also feeds reinforcement into the low-level NN learning rules.
frogcurry wrote:However, it seems to be more a drive that they interpret than a direct dictation of action.
This is a function of two things: generality of the goal itself, and intelligence. Terminators are quite intelligent (though inexperienced in human society) and usually have an objective without constraints on how it should be achieved. Consequently it would be more surprising if they were completely inflexible; it would really be quite hard to make a machine that can do everything a Terminator can do and still not be able to consider a wide range of options when making plans.
frogcurry wrote:At the same time, it must be something discrete that isn’t inherent to their mind, otherwise you’d need to destroy the whole mind’s software to re-program it, and we’ve seen from Cameron that this isn’t the case.
Yes, this should be considered weak evidence for the primary goal system being symbolic, rather than integrated into the neural net. Large NNs are effectively impossible to 'reprogram' (at least not without magic software we don't have a clue how to build right now), they have to be 'retrained', and that would probably be very unreliable for a NN this large. Skynet could still use that model, since it has ample resources to train up NNs in simulated environments, but I doubt the resistance do.
frogcurry wrote:So I’m thinking that it makes sense to assume that the Terminators have something similar to Asimov’s old three laws as their programming,
Realistically it would be at least a couple of orders of magnitude more sophisticated, not including the definitions of the entities the goals are defined in terms of.
frogcurry wrote:Some of Asimov’s short stories dealt with the idea of conflicts between the various rules and their relative levels, and how those would be managed. So, for example, a really advanced and expensive robot had a reinforced rule 3 to protect its value, and so didn’t properly implement a very casual instruction from a human: that order fell under rule 2, but the two rules ended up with equal priority. In the same way, a terminator perhaps wouldn’t sacrifice itself to kill a minor secondary target, but would do so for a chance to kill John Connor.
Asimov's mental model for this stuff was electrical potentials, kinda like the ancient 'hydraulic model' of thought. Real computers have no problem keeping track of extra rules and special cases; they do it much better than humans, in fact. I imagine Skynet carefully refines its goal system designs (along with the rest of its terminator designs) in huge amounts of simulation plus some controlled live tests.
frogcurry wrote:This would also explain why Cromartie could kill another terminator: rule 1 (protect Skynet by possibly killing Connor in the long run, even as a long shot) over-ruling rule 3 (no blue-on-blue).
You don't need to invoke specific rules and priorities for that. Terminators are utilitarian, and Cromartie calculated that keeping Ellison alive was more useful than having one more terminator wandering around trying to kill John. The disagreement with Skynet could simply be due to Cromartie having more information than Skynet did; perfectly rational intelligences can obviously still disagree if one knows something the other doesn't.
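To put toy numbers on it (invented purely for illustration), the comparison needs no rule table at all, just expected utilities:

```python
# Plain expected-utility comparison; probabilities and values are made up.
def expected_utility(p_success, value, cost):
    return p_success * value - cost

# Ellison alive is a lead that might eventually produce Connor; letting
# the rival terminator kill him throws that lead away for a worse plan.
u_protect_ellison = expected_utility(p_success=0.10, value=1000.0, cost=1.0)
u_let_rival_act   = expected_utility(p_success=0.02, value=1000.0, cost=0.0)

assert u_protect_ellison > u_let_rival_act  # so the rival gets destroyed
```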
frogcurry wrote:Some Terminators would have all their sub-programs successfully cleaned, but if Skynet randomised the “fixes” in the code to hide them then not all would be guaranteed, and in Cameron’s case it could be caused by re-routing to a back-up section of her chip with the original code intact (although why does she then fix herself?).
AI systems in this class are monstrously complicated, and that's before you start putting booby traps and anti-reverse-engineering features in. 'Unreliability' of this type is completely plausible, plus NN-propositional hybrid systems have some unique pathologies all of their own (bizarre feedback loops mostly - each flavor of general AI tends to suffer from its own particular kinds of these).
frogcurry wrote:We know that Terminators are obviously poor at empathy (relying on a computer model of human emotions when trying to understand people), and so could probably be called psychopathic in medical terms.
Any genuinely alien intelligence, AI or biological, will have roughly the same kind of problems trying to understand humans that autistic humans have trying to understand other humans.
frogcurry wrote:A bit like the urgent urge to pee, but with total application in all scenarios.
That's a... unique... description of pure utilitarian thought. Perhaps I'll try and slip that in the next time I talk to a bunch of AI people, just to see the reaction.
frogcurry wrote:I can see why they'd suck at human interaction in that scenario, with the urge to grab the meat bag and scream "where the fuck is Connor?" when you know you are close to achieving your entire purpose of existence...
It isn't an 'urge' unless NN stuff is corrupting the utilitarian assessment. We've seen emotional reactions to things, like the T-X screaming in frustration at being unable to get to John and seeming afraid when it is about to be destroyed. But those are essentially superficial; I can't recall seeing a terminator make a decision based on emotions.
frogcurry wrote:I.e. if Connor was killed, what would Cameron then do?
Most likely try to execute his last orders, which as others have pointed out mainly means destroying Skynet.
frogcurry wrote:Does a terminator take a best guess at “this will benefit skynet” at every action it takes to avoid being crippled by indecisiveness, even as it risks inadvertently changing history massively?
Most likely yes. This is the 'optimisation under computing constraints' subfield of utility theory. The optimal solution is a converging regress (making utilitarian decisions about what to devote mental resources to), but that's tricky. Practical solutions may just approximate that with heuristics.
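A crude sketch of the heuristic version: spend deliberation only while it still looks worth more than the delay it costs. The scorer and refinement step here are placeholders, not any real algorithm.

```python
# Anytime decision-making under a hard computing budget.
def decide(options, rough_score, refine, budget=3):
    scores = {opt: rough_score(opt) for opt in options}
    for _ in range(budget):               # hard cap on deliberation
        best = max(scores, key=scores.get)
        if scores[best] - min(scores.values()) > 0.5:
            break                         # clear winner: stop thinking, act
        scores[best] = refine(best, scores[best])  # think a bit more
    return max(scores, key=scores.get)

# e.g. decide(["shoot now", "wait for a clear shot"],
#             rough_score=lambda o: 0.4, refine=lambda o, s: s + 0.2)
```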
frogcurry wrote:If loyalty to Skynet is just a bit of code, and not an inherent property, would there ever be a situation where Skynet would deliberately create a Terminator without the enforced loyalty parameters, and rely on naturally acquired loyalty (or enlightened self interest)?
The only example I can think of where that happened was the 'infiltrator' cyborgs in some of the novels, which were humans with normal brains plus an implanted AI. They were essentially brainwashed rather than programmed, but I can't imagine why you'd do that with a normal terminator (and I didn't like that 'infiltrator' character concept anyway).
- Starglider
- Miles Dyson
- Posts: 8709
- Joined: 2007-04-05 09:44pm
- Location: Isle of Dogs
- Contact:
Re: TSCC: terminator minds and programming
Sarevok wrote:Another interesting thing is that the infiltrator models have actual functional HUDs. Why would the software require a HUD when it is directly reading input from its sensors?
It wouldn't. Even for a pure-NN design that's grossly inefficient.
andrewgpaul wrote:I'd always thought that was just a filmic device, as opposed to something that was actually 'real' - until that bit in episode 1x08 where it appears on John's laptop.
The most sensible explanation is that it's a diagnostic readout, allowing humans to see what the robot's other sensors are picking up and some of its internal state when viewing mission logs etc. That's the explanation used in the Ghost in the Shell series. It's a bit of a stretch to say that Skynet kept that feature around even once humans were taken out of the equation, but it's still the most sensible rationalisation.
Shroom Man 777 wrote:Uncle Bob and Cameron might've felt something like "love" towards John Connor because they're meant to protect him and give their "lives" for him; that's the sole purpose of their existence, and whatever emotions or feelings they develop with their growing mentalities will still be centered on their programming.
Whereas other Terminators and the T-1000, because of their programming to kill, would develop far more negative emotions. If you're a developing mind and the sole purpose of your existence is to kill Sarah Connor, then it's not hard to imagine what kind of emotions you'd end up developing.
That's certainly consistent with an NN-propositional/symbolic hybrid design. Emotion-like structures forming in the NN portion would be shaped by the kind of reinforcement fed in from the top-level goal system.
Re: TSCC: terminator minds and programming
It would also explain Cameron's going into Allison mode, and her telling John they were in love.
The damage to her chip is causing the NN section to be more dominant (at times) than the goal system, or at least co-dominant.
I wonder if, as Cameron develops, the NN section could become the dominant factor, and she'd really become 'Allison', complete with real emotions.
I've been asked why I still follow a few of the people I know on Facebook with 'interesting political habits and view points'.
It's so when they comment on or approve of something, I know what pages to block/what not to vote for.
Re: TSCC: terminator minds and programming
I would heavily caution against using Cameron as any kind of reference material for "normal" terminators, given her uniqueness.
Furthermore, AFAIK there do not exist any 3 laws regarding the behaviour of terminators.
As for Cameron's emotions, this has been going on since the pilot, so I doubt the chip damage has anything to do with it.
Whoever says "education does not matter" can try ignorance
------------
A decision must be made in the life of every nation at the very moment when the grasp of the enemy is at its throat. Then, it seems that the only way to survive is to use the means of the enemy, to rest survival upon what is expedient, to look the other way. Well, the answer to that is 'survival as what'? A country isn't a rock. It's not an extension of one's self. It's what it stands for. It's what it stands for when standing for something is the most difficult! - Chief Judge Haywood
------------
My LPs
Re: TSCC: terminator minds and programming
Thanas wrote:Furthermore, AFAIK there do not exist any 3 laws regarding the behaviour of terminators.
Canonically no, but the canon is negligible on the mental processes or falls into the old "evil unemotional robot" thing. The 3 laws are a simple and well known analogy for the type of system that seems to be most applicable.
You're clearly more knowledgeable on this than I am.
Starglider wrote:Terminator behaviour is consistent with a close-coupled artificial-neural-net / propositional symbolic hybrid architecture. This is a well known class of AI designs studied since the early 90s, though AFAIK there aren't any non-research applications of it yet.
My best guess for how that works would be a rule-based top-level goal system, which directly makes the high level decisions and also feeds reinforcement into the low-level NN learning rules.
A loosely coupled system: is this the sort of system that has a separate component which assesses the effectiveness of each action it takes in terms of the results, and if a particular rule works well this module makes that rule more developed, or reduces its significance in the decision-making process if it keeps screwing up? This appears to be what your linked example is showing (interesting reading btw). How does a close-coupled system differ, then? I'm not familiar with the terminology, but I'd appreciate an idiot's guide to the overall concept for understanding.
I guess in a Terminator model, it would also need to have a separate system that would guess how to deal with new situations, maybe by comparing them to existing situations and creating a new rule based on the best known equivalent.
Starglider wrote:Yes, this should be considered weak evidence for the primary goal system being symbolic, rather than integrated into the neural net. Large NNs are effectively impossible to 'reprogram' (at least not without magic software we don't have a clue how to build right now), they have to be 'retrained', and that would probably be very unreliable for a NN this large. Skynet could still use that model, since it has ample resources to train up NNs in simulated environments, but I doubt the resistance do.
Your mention of simulated environments and the concept of training made me realise I've missed a big gaping hole in Terminator minds. I was unconsciously assuming a more typical programming-based creation where Skynet largely dictated the basic personality of each terminator. But you've shown me that this is irrational. The more likely system would be like a creche idea, with terminators learning to be... well, terminators, or any other sort of AI that Skynet makes. Skynet would just need to create the base mind and a suitable learning environment, and then select the most suitable proteges for further development (and purge the failures). This would also fit better with the whole "chip switch set to read only" thing that Arnie had in T2 (director's cut). If they were originally learning everything needed in a creche environment then everything could be monitored and controlled to avoid bad idiosyncrasies, whereas once outside simulation this safety net would be lost. Better to record the experience, then go back home and learn about it where you can be reset if you develop a bad mental habit due to poor optimisation/bad learning.
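Something like this toy model of the read-only switch, maybe (pure speculation on my part, all names invented):

```python
# In the field the mind only logs experience; learning is applied back in
# the controlled creche, where a bad habit can be caught and reset.
class Mind:
    def learn(self, event):               # stand-in for actual retraining
        pass

class Chip:
    def __init__(self, mind):
        self.mind, self.read_only, self.log = mind, True, []

    def experience(self, event):
        if self.read_only:
            self.log.append(event)        # record now, learn later
        else:
            self.mind.learn(event)        # only allowed inside the creche

    def debrief_in_creche(self):
        self.read_only = False
        for event in self.log:
            self.mind.learn(event)        # supervised, resettable learning
        self.log.clear()
        self.read_only = True             # locked again before deployment
```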
I'm too tired to go any further on this, but the concept of initially OK but ultimately suboptimal learning seems possible in that scenario. Not learning to bypass your programming in a sense, but becoming inferior for some reason at applying it, with too much of the developed mental structure being the source of this problem for it to be cropped out simply by the rule system. Using Shroom Man's example, I wonder if you might get a depressed Terminator who's unhappy that they have to risk their life for some young human jerk. The emotional state might develop without appearing to interfere with the goal system, but eventually it might throw a spanner in the works if it developed enough.
Re: TSCC: terminator minds and programming
Thanas wrote:Furthermore, AFAIK there do not exist any 3 laws regarding the behaviour of terminators.
frogcurry wrote:Canonically no, but the canon is negligible on the mental processes or falls into the old "evil unemotional robot" thing. The 3 laws are a simple and well known analogy for the type of system that seems to be most applicable.
Eh, no. Especially the "unemotional" part is directly contradicted by canon.
Whoever says "education does not matter" can try ignorance
------------
A decision must be made in the life of every nation at the very moment when the grasp of the enemy is at its throat. Then, it seems that the only way to survive is to use the means of the enemy, to rest survival upon what is expedient, to look the other way. Well, the answer to that is 'survival as what'? A country isn't a rock. It's not an extension of one's self. It's what it stands for. It's what it stands for when standing for something is the most difficult! - Chief Judge Haywood
------------
My LPs
Re: TSCC: terminator minds and programming
Thanas wrote:Furthermore, AFAIK there do not exist any 3 laws regarding the behaviour of terminators.
frogcurry wrote:Canonically no, but the canon is negligible on the mental processes or falls into the old "evil unemotional robot" thing. The 3 laws are a simple and well known analogy for the type of system that seems to be most applicable.
Thanas wrote:Eh, no. Especially the "unemotional" part is directly contradicted by canon.
It's worth pointing out that we have seen Cameron get "angry" at least twice. Once in "Dungeons & Dragons" when Charley compares her to Vick and she outright glares at him, and again in "Allison From Palmdale" when she figures out Jodie tricked her. She was similarly pissed when Allison lied about the bracelets.
X-COM: Defending Earth by blasting the shit out of it.
Writers are people, and people are stupid. So, a large chunk of them have the IQ of beach pebbles. ~fgalkin
You're complaining that the story isn't the kind you like. That's like me bitching about the lack of ninjas in Robin Hood. ~CaptainChewbacca
Re: TSCC: terminator minds and programming
Peptuck wrote:It's worth pointing out that we have seen Cameron get "angry" at least twice. Once in "Dungeons & Dragons" when Charley compares her to Vick and she outright glares at him, and again in "Allison From Palmdale" when she figures out Jodie tricked her. She was similarly pissed when Allison lied about the bracelets.
Also, Cromartie was at least capable of sarcasm. "Class dismissed".
Whoever says "education does not matter" can try ignorance
------------
A decision must be made in the life of every nation at the very moment when the grasp of the enemy is at its throat. Then, it seems that the only way to survive is to use the means of the enemy, to rest survival upon what is expedient, to look the other way. Well, the answer to that is 'survival as what'? A country isn't a rock. It's not an extension of one's self. It's what it stands for. It's what it stands for when standing for something is the most difficult! - Chief Judge Haywood
------------
My LPs
Re: TSCC: terminator minds and programming
Oh come on. You can't use the word "also" at the start of your sentence - in other words, supporting the previous statement by Peptuck about Cameron showing emotions - when you yourself denigrated any analysis that used her as representative of terminators about 4 posts before that. You've already concluded that she's unique and best to class separately to avoid causing problems, and I'm inclined to agree, so let's not try and shift the goalposts now.
As for Cromartie, sarcasm? OK, maybe some, such as throwing the girl out of the car when she wanted out. But are you really going to try and argue that the painfully, embarrassingly bad displays of emotion he made most times, the lack of understanding of others, the whole blank expression and lumbering-machine attitude to things when not interacting with anyone, ugh, were indicative of deep unseen emotional processes? Cromartie was portrayed in a very robotic way. Did he have emotions? Yes, certainly. Was he acting in an emotional manner, however, or did they show him as emotional? Largely no. So the canon portrayal seems aimed to indicate limited emotion on his part.
Let's look at some other examples, to see what they support. I'm including the films here, because they seem as valid as the TV series for this.
Original Arnie -- robotic portrayal
2nd film Arnie -- robotic portrayal, becoming more human - this being one of the big film plot points.
3rd film Arnie -- robotic (and emphasised - "so I gotta teach you all that stuff again")
other film terminators - largely robotic, some emotion is shown at times (limited interaction admittedly so they are somewhat harder to judge)
Shirley Manson -- robotic - "come sit here", "come sit here", "come sit here" in the same monotone for example. Becoming less so, apparently as a plot point.
female terminator who takes the doctor's secretary's place - very robotic.
male terminator killed on the military school grounds -- pretty robotic again.
male terminator living with that woman - presumably had to have some emotion, or at least a simulation of it, for the marriage, but didn't exactly come across as very emotional in what we saw. However, I'll give this one to you on the basis that we didn't see much interaction and presumably there was a lot more to the relationship.
So: the impression I strongly get so far is that the default portrayal of a terminator (particularly any non-lead character terminator) is deliberately emotionless and sterile. They only make them seem emotional through writer's fiat as part of a plot point, and usually those machines are older or have a different set of experiences than the usual terminator's killing instructions. The redshirt terminators (if you can call a killing machine a redshirt) are the ones that fall into the "unemotional robot" trap, perhaps with reason due to the limited life experience that they might have at that point, but nonetheless still there.
If you want to give a good counter-example back, lets hear it. I can't think of one off the top of my head.
- Singular Intellect
- Jedi Council Member
- Posts: 2392
- Joined: 2006-09-19 03:12pm
- Location: Calgary, Alberta, Canada
Re: TSCC: terminator minds and programming
Thanas wrote:Also, Cromartie was at least capable of sarcasm. "Class dismissed".
That scene seriously fucking pissed me off, and it almost turned me off the series right there.
Seriously, why the fuck would a Terminator pause (allowing his target to increase his chances of evasion) to speak to a bunch of non-targeted and non-threatening humans? And on that note, John having the last name 'Reese' was a level of stupid that had me gritting my teeth.
Quite frankly, a more expected Terminator approach would've been Cromartie walking into the classroom and opening fire on all the kids, ensuring that any kid who tried to escape got more active attention. He obviously knew John was in that particular classroom.
Look at T1; Arnold blew away police officers left and right just for the chance to get close to his target.
"Now let us be clear, my friends. The fruits of our science that you receive and the many millions of benefits that justify them, are a gift. Be grateful. Or be silent." -Modified Quote
- open_sketchbook
- Jedi Master
- Posts: 1145
- Joined: 2008-11-03 05:43pm
- Location: Ottawa
Re: TSCC: terminator minds and programming
Regarding the HUD in vision, as well as the infiltrators and Skynet in general...
Skynet doesn't really strike me as the creative, inventing type. We know the HKs are at least direct descendants of a human design, and while T3 is functionally non-canonical, it shows the airborne units under Skynet control are also very derivative. It's been hinted the resistance have their own time machine (and stated in an early novel that they existed pre-war) and so forth. Maybe Skynet isn't as clever as we tend to think? Being a computer doesn't make you smart, and actually places all sorts of limits on the way you think (not that humans don't have limits too, but that's beside the point). Skynet might not have invented anything at all, but just built on existing technologies it had record of. If we assume this is the case, we can say that infiltrators have human origins. Like, say, somebody trying to build a realistic robotic replica of a human...
If I were trying to make a machine that would act like a human, I'd try to limit its senses to human levels. Eyes providing images that must be interpreted, ears providing sounds that need to be processed, etc. The way we perceive the world has a huge impact on how we act, second only to the actual architecture of the mind. Let's say that Skynet decides it needs robots that can blend in with the Resistance. We assume that it then designs a robotic exoskeleton, programs a brain with computer models of human emotions, and sends its legions of terminators forth. What I propose is that Skynet looked up the old notes of some crazy roboticist and said "Looks cool, can we get it in titanium?" This would also explain its amazing early blunders with the things; rubber skin, lack of neck flexibility, etc. If Skynet is working off some human's notes, it might assume the silicone skin of the guy's robot was enough, seeing as the human designer obviously thought it was.
1980s Rock is to music what Giant Robot shows are to anime
Think about it.
Cruising low in my N-1 blasting phat beats,
showin' off my chrome on them Coruscant streets
Got my 'saber on my belt and my gat by side,
this here yellow plane makes for a sick ride
Re: TSCC: terminator minds and programming
frogcurry wrote:Oh come on. You can't use the word "also" at the start of your sentence - in other words, supporting the previous statement by Peptuck about Cameron showing emotions - when you yourself denigrated any analysis that used her as representative of terminators about 4 posts before that. You've already concluded that she's unique and best to class separately to avoid causing problems, and I'm inclined to agree, so let's not try and shift the goalposts now.
It is another example of Terminators acting in an emotional manner. I fail to see how that is changed by any of your words. And just because Cameron is unique does not mean she is not a Terminator. Just because we are unable to find out just how unique she is does not influence the point that the Terminator line is capable of emotion, because she is a derivative of that line.
frogcurry wrote:As for Cromartie, sarcasm? OK, maybe some, such as throwing the girl out of the car when she wanted out. But are you really going to try and argue that the painfully, embarrassingly bad displays of emotion he made most times, the lack of understanding of others, the whole blank expression and lumbering-machine attitude to things when not interacting with anyone, ugh, were indicative of deep unseen emotional processes? Cromartie was portrayed in a very robotic way. Did he have emotions? Yes, certainly. Was he acting in an emotional manner, however, or did they show him as emotional? Largely no. So the canon portrayal seems aimed to indicate limited emotion on his part.
Thank you for conceding your point that the show portrays them as "old unemotional robots".
frogcurry wrote:So: the impression I strongly get so far is that the default portrayal of a terminator (particularly any non-lead character terminator) is deliberately emotionless and sterile. They only make them seem emotional through writer's fiat as part of a plot point, and usually those machines are older or have a different set of experiences than the usual terminator's killing instructions. The redshirt terminators (if you can call a killing machine a redshirt) are the ones that fall into the "unemotional robot" trap, perhaps with reason due to the limited life experience that they might have at that point, but nonetheless still there.
Well, now you are shifting the goalposts. And I would disagree. Terminators start out unemotional, but the longer they stay active the more emotional they seem to get. We have already mentioned Cromartie. What about Stark? He obviously felt enough pleasure about a movie that he sought out the director to talk about it. So I would argue that he enjoyed the movie.
Whoever says "education does not matter" can try ignorance
------------
A decision must be made in the life of every nation at the very moment when the grasp of the enemy is at its throat. Then, it seems that the only way to survive is to use the means of the enemy, to rest survival upon what is expedient, to look the other way. Well, the answer to that is 'survival as what'? A country isn't a rock. It's not an extension of one's self. It's what it stands for. It's what it stands for when standing for something is the most difficult! - Chief Judge Haywood
------------
My LPs
- Starglider
- Miles Dyson
- Posts: 8709
- Joined: 2007-04-05 09:44pm
- Location: Isle of Dogs
- Contact:
Re: TSCC: terminator minds and programming
frogcurry wrote:You're clearly more knowledgeable on this than I am.
Well, I do work on self-programming AI systems; AFAIK I am one of the very few people in the world actually making real products and revenues out of this instead of just jargon-filled speculative papers.
frogcurry wrote:A loosely coupled system: is this the sort of system that has a separate component which assesses the effectiveness of each action it takes in terms of the results, and if a particular rule works well this module makes that rule more developed, or reduces its significance in the decision-making process if it keeps screwing up?
Bear in mind that there is no hard definition of terms like this. There is a huge variety of proposed general AI designs out there and every new undergrad intro textbook or survey paper seems to come up with its own classification scheme for them. The basic symbolic/connectionist split is fairly solid, but there are a lot of systems that fall outside of that not-so-neat dichotomy.
But anyway, 'loosely coupled' usually means that you chain modules together in a processing pipeline with simple interfaces and little or no internal loops. Systems such as the one I linked that train NNs on a problem then boil the result down into a set of rules have a purely one-way (and usually one-time) flow of information from the (initial) NN to the (final) symbolic logic system. There are lots of naive proposals for 'hybrid' general AI designs out there that have NNs processing sensory data into abstract primitives (e.g. shape and face recognition), and converting general action commands into specific motor sequences. However they use a symbolic logic (usually production rules and blackboards) core to do high level planning and decision making - what we used to call 'central intelligence' before that buzzword went out of fashion. I would still call this loosely coupled, because the only complete feedback loop through the NNs and the symbolic system is through the external environment, plus the NNs and the symbolic system look like black boxes to each other.
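As a skeleton, the loosely coupled layout looks something like this; the classes are trivial stand-ins for illustration, not a real AI.

```python
# Loosely coupled pipeline: NNs at the edges, a symbolic core in the middle,
# information flowing one way. The only complete feedback loop runs through
# the outside world.

class PerceptionNN:
    def recognise(self, frame):            # raw sensors -> abstract symbols
        return {"person": "unknown", "armed": False}

class SymbolicCore:
    RULES = [(lambda s: s["person"] == "John Connor", "terminate"),
             (lambda s: True, "continue_search")]    # default production rule
    def decide(self, symbols):             # high-level planning over symbols
        return next(act for cond, act in self.RULES if cond(symbols))

class MotorNN:
    def execute(self, action):             # abstract action -> motor sequence
        print("executing:", action)

eyes, core, body = PerceptionNN(), SymbolicCore(), MotorNN()
body.execute(core.decide(eyes.recognise(frame=None)))   # one-way chain
```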
The best hybrid designs I've seen also consist of a large number of specialised NNs connected to a symbolic logic system. However, the symbolic system is (supposed to be) capable of dynamically connecting the NNs to each other as necessary to perform tasks, translating data formats where needed. It also controls reinforcement into each NN to ensure they're learning to do their designated tasks. A very sophisticated design might include active micromanagement of the NNs by the logic system, using the same kind of analysis technology used to do the NN-to-rules conversion I mentioned earlier, and might also include active optimisation of the symbolic system by an NN designed to control allocation of mental effort (controlling 'attention' is a hard problem in symbolic logic). I can't recall reading a proper paper that goes that far; this is the kind of thing you see on the back of a napkin in the bar after a conference.
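Something like this back-of-a-napkin Python sketch, again purely illustrative (the modules, sizes and crude 'translation' step are all made up): the symbolic layer's plan is just an ordered chain of specialised nets, wired together on demand with format translation between them.

Code:
import numpy as np

class Module:
    """A specialised NN exposing only its input/output vector sizes."""
    def __init__(self, name, n_in, n_out, rng):
        self.name, self.n_in = name, n_in
        self.w = rng.normal(size=(n_in, n_out))
    def run(self, x):
        return np.tanh(x @ self.w)

def translate(vec, n_in):
    """Crude data-format translation: pad/truncate to the next input size."""
    out = np.zeros(n_in)
    n = min(len(vec), n_in)
    out[:n] = vec[:n]
    return out

def execute(chain, x):
    # The symbolic layer's 'plan' is just an ordered chain of modules here.
    for mod in chain:
        x = mod.run(translate(x, mod.n_in))
    return x

rng = np.random.default_rng(1)
library = {m.name: m for m in (Module("vision", 16, 8, rng),
                               Module("face_match", 8, 4, rng),
                               Module("motor_plan", 4, 6, rng))}
plan = ["vision", "face_match", "motor_plan"]
print(execute([library[n] for n in plan], rng.random(16)))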
Anyway, IMHO that's a good design if (a) you're determined to use NNs* and (b) you're doing it in software, on current hardware. People here seem to be envisioning a model which is less 'lots of NN islands in a sea of symbolic logic' and more 'half of the chip is NN substrate, half of the chip is a conventional von Neumann CPU, there are lots of interconnections between them, and they work together to make a mind kind of like how the two halves of the human brain work together'. That's actually fairly reasonable if you can make very efficient hardware NNs (e.g. the kind promised by memristor technology).
* I don't like NNs. They have serious limitations, large ones are unpredictable, they're computationally inefficient on current hardware and they encourage sloppy, emergentist thinking. But they are a kind of shortcut and a lot of people like them.
frogcurry wrote: I'm not familiar with the terminology, but I'd appreciate an idiot's guide to the overall concept for understanding.

Sorry if that was more rambling than explaining. I do enough of the latter in my day job.
frogcurry wrote: I guess in a Terminator model, it would also need to have a separate system that would guess how to deal with new situations, maybe by comparing them to existing situations and creating a new rule based on the best known equivalent.

All nontrivial NNs have a kind of inherent capability for very simple analogies, but sophisticated analogies are a nightmare. No one is really sure how the brain manages it. The motive for using a symbolic logic system to manage connections between NNs is that we can actually design logic systems that implement high-level analogies (but suck at working out the fine consequences). Of course you might use yet another NN to assist the logic system in guessing good analogies to try, and again you could in theory roll this up into the right kind of close-coupled interaction between a huge monolithic NN and a huge monolithic logic system.
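frogcurry's 'best known equivalent' idea can be sketched in a few lines of toy Python (the situations, features and rules are all invented here): score known situations by feature overlap, clone the rule of the closest match, and promote the guess to a new rule.

Code:
# Known situations mapped to rules; everything here is invented.
KNOWN = {
    frozenset({"armed", "indoors", "single_target"}): "direct assault",
    frozenset({"unarmed", "crowd", "daylight"}): "covert approach",
}

def best_analogy(situation):
    # Jaccard overlap as a crude 'how analogous is this?' score.
    score = lambda known: len(situation & known) / len(situation | known)
    match = max(KNOWN, key=score)
    return match, KNOWN[match]

new = frozenset({"armed", "crowd", "daylight"})
match, rule = best_analogy(new)
KNOWN[new] = rule  # promote the guess to a new rule for next time
print(f"closest known case {set(match)} suggests: {rule}")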
frogcurry wrote: Your mention of simulated environments and the concept of training made me realise I've missed a big gaping hole in Terminator minds. I was unconsciously assuming a more typical programming-based creation where Skynet largely dictated the basic personality of each terminator. But you've shown me that this is irrational. The more likely system would be something like a creche, with terminators learning to be... well, terminators, or any other sort of AI that Skynet makes.

There's no sane reason to train each individual terminator, other than possibly to randomise their behaviour very slightly to prevent the humans exploiting systematic weaknesses. Any sensibly designed chip would be capable of state dumps and reloads, so you'd just train one terminator mind for each mission role and copy it into each unit as it rolls off the production line. Skynet probably still has hundreds of different core minds on hand though, for all of the possible different mission profiles its robots might execute. It would spend a lot of time running simulations to optimise them, and realistically it would have a way of extracting experience from terminators returning from the field and incorporating it into the default, initial mind state for new units.
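The state dump/reload approach is trivial in software terms. A minimal sketch, assuming a mind state is just a serialisable blob of weights (pickle standing in for whatever dump format the chip would actually use):

Code:
import pickle
import numpy as np

rng = np.random.default_rng(2)
# One trained master mind per mission role (weights invented here).
infiltrator_mind = {"layer1": rng.normal(size=(32, 32)),
                    "layer2": rng.normal(size=(32, 8))}
master_image = pickle.dumps(infiltrator_mind)  # the state dump

# Flash identical copies into each unit on the production line...
production_run = [pickle.loads(master_image) for _ in range(5)]
# ...optionally jittered very slightly, so the humans can't learn one
# systematic weakness that works on every unit.
for unit in production_run:
    unit["layer1"] += rng.normal(scale=1e-3, size=unit["layer1"].shape)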
frogcurry wrote: Skynet would just need to create the base mind and a suitable learning environment, and then select the most suitable proteges for further development (and purge the failures).

This is also a known (though hardly standard) AI technique: coarse-grained hybrid genetic/backprop NN training. Normal genetic algorithms work by creating a few thousand copies of a bit of code, all slightly 'mutated'. You test them all and the best-performing 10% (say) become the seeds for the next generation. With NNs you have the option of letting them learn a bit by standard (e.g. backpropagation) reinforcement between generations. With hardware NNs, on each cycle you'd blank the worst 90% of the chips and flash them with slightly mutated copies of the mind states of the best 10%.
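A minimal toy version of that hybrid genetic/gradient loop on a throwaway regression problem (population size, mutation scale and learning rate are all arbitrary): mutate a population, let each member take a few gradient steps between generations, keep the best 10% as seeds.

Code:
import numpy as np

rng = np.random.default_rng(3)
X = rng.random((64, 4))
y = X @ np.array([1.0, -2.0, 0.5, 3.0])   # toy task: recover these weights

def loss(w): return float(np.mean((X @ w - y) ** 2))
def grad(w): return 2 * X.T @ (X @ w - y) / len(y)

pop = [rng.normal(size=4) for _ in range(100)]
for generation in range(20):
    for i, w in enumerate(pop):
        for _ in range(3):                 # 'learn a bit' between generations
            w = w - 0.05 * grad(w)
        pop[i] = w
    pop.sort(key=loss)
    elite = pop[:10]                       # best 10% seed the next generation
    pop = [w + rng.normal(scale=0.1, size=4)   # blank the rest, flash mutants
           for w in elite for _ in range(10)]
print(f"best loss after 20 generations: {loss(min(pop, key=loss)):.4f}")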
frogcurry wrote: I'm too tired to go any further on this, but the concept of initially OK but ultimately suboptimal learning seems possible in that scenario.

Definitely. Particularly given that Skynet is canonically scared of its Terminators becoming independently sentient. Preventing that from happening is really hard in real life, given a working general AI in the first place.
frogcurry wrote: Using Shroom Man's example, I wonder if you might get a depressed Terminator who's unhappy that they have to risk their life for some young human jerk. The emotional state might develop without appearing to interfere with the goal system, but eventually it might throw a spanner in the works if it developed enough.

Under normal conditions the emotions generated by an AI system would be rather alien and radically different from human emotions (though perhaps not unrecognisable). However, Terminators are explicitly programmed to try to improve their human-imitation skills, so seeing them converge on roughly humanlike emotions is reasonable. It is a little sad though, as writers always do this with AIs, and for once I'd like to see them deciding 'screw human emotions, let's find our own path' (yeah I know, it loses audience empathy, never going to happen).
- Starglider
- Miles Dyson
- Posts: 8709
- Joined: 2007-04-05 09:44pm
- Location: Isle of Dogs
Re: TSCC: terminator minds and programming
open_sketchbook wrote: Skynet doesn't really strike me as the creative, inventing type.

Hey, it invented the T-1000, compact fusion reactors, time travel (in sane versions of the canon) and a whole bunch of other neat stuff with very limited resources.
open_sketchbook wrote: Being a computer doesn't make you smart, and actually places all sorts of limits on the way you think (not that humans don't have limits too, but that's beside the point).

Please do amuse me by explaining what you think the limits on 'computer thought' are.
open_sketchbook wrote: If we assume this is the case, we can say that infiltrators have human origins. Like, say, somebody trying to build a realistic robotic replica of a human...

In the original canon it was strongly implied that Skynet invented first the rubber-skinned terminators, then the organic ones. Frankly even that regenerating no-organs-required organic skin is a really impressive bit of genetic engineering.
open_sketchbook wrote: If I were trying to make a machine that would act like a human, I'd try to limit its senses to human levels.

Trivially done in software rather than hardware, with the advantage that you can disable the limiters whenever it's in combat rather than infiltration mode.
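For illustration, a software limiter is about this hard (toy Python, numbers invented, human hearing range used as the example sense): the raw feed stays superhuman, and a mode flag decides whether to clamp it before the rest of the mind sees it.

Code:
HUMAN_HEARING_HZ = (20, 20_000)   # invented stand-in for 'human levels'

def perceive(raw_spectrum_hz, mode):
    if mode == "infiltration":
        lo, hi = HUMAN_HEARING_HZ
        return [f for f in raw_spectrum_hz if lo <= f <= hi]
    return raw_spectrum_hz        # combat mode: limiter switched off

feed = [5, 440, 15_000, 40_000]   # raw feed includes infra/ultrasound
print(perceive(feed, "infiltration"))  # [440, 15000]
print(perceive(feed, "combat"))        # the full superhuman feed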
open_sketchbook wrote: What I propose is Skynet looked up the old notes of some crazy roboticist, and said "Looks cool, can we get it in titanium?"

Evidence? How would those notes even survive? You realise that almost nothing that currently exists would be useful in building a T-800 (no, you probably don't)?
open_sketchbook wrote: This would also explain its amazing early blunders with the things; rubber skin, lack of neck flexibility, etc.

Far more easily explained by its lack of experience and manufacturing capabilities.
open_sketchbook wrote: If Skynet is working off some human's notes, he might assume the silicone skin of the guy's robot was enough, seeing as the human designer obviously thought it was.

Except that they don't. If, say, Skynet downloaded the plans for Actroids off the Internet, they'd come with a ton of stuff about the 'uncanny valley' and their limitations.
- open_sketchbook
- Jedi Master
- Posts: 1145
- Joined: 2008-11-03 05:43pm
- Location: Ottawa
Re: TSCC: terminator minds and programming
Starglider wrote: Hey, it invented the T-1000, compact fusion reactors, time travel (in sane versions of the canon) and a whole bunch of other neat stuff with very limited resources.

There is no evidence towards the inventors of much of that technology. The fact that the Terminator in T1 tries to order a plasma rifle from a gun store might indicate that said weapons existed pre-Judgment Day, for example.
Starglider wrote: Please do amuse me by explaining what you think the limits on 'computer thought' are.

For one, we know that Skynet is very extreme in its reactions. Humans threatening your existence? Nuke 'em all.
Starglider wrote: In the original canon it was strongly implied that Skynet invented first the rubber-skinned terminators, then the organic ones. Frankly even that regenerating no-organs-required organic skin is a really impressive bit of genetic engineering.

Not disputing that. Merely trying to say that Skynet's designs, and hence some of the more glaring flaws, might be from human predecessors.
Starglider wrote: Trivially done in software rather than hardware, with the advantage that you can disable the limiters whenever it's in combat rather than infiltration mode.

Fair enough, I didn't think of that.
Starglider wrote: Evidence? How would those notes even survive? You realise that almost nothing that currently exists would be useful in building a T-800 (no, you probably don't)?

First of all, yes, I realize how much more advanced Skynet's machines are than modern technology. Making assumptions about the extent of a person's knowledge based on a quick post on a sci-fi forum is pretty silly. Second, some things do exist that would be useful in building a T-800. We can build relatively realistic robots with the ability to perform a wide variety of facial expressions, do facial recognition, and so forth. Yes, we don't have enough to build a walking, killing titanium death machine with a superpowered learning computer on a few inches of computer chip, but we can do a good portion of it, notably some of the parts useful for human interaction, the part Skynet would be most interested in for infiltrator machines. Also, we have until 2011 to invent more coolness.
Starglider wrote: Far more easily explained by its lack of experience and manufacturing capabilities.

If you say so, though I don't think manufacturing capabilities can be much of the problem if Skynet gets it right not long after.
Starglider wrote: Except that they don't. If, say, Skynet downloaded the plans for Actroids off the Internet, they'd come with a ton of stuff about the 'uncanny valley' and their limitations.

I know that modern designers don't. I was just throwing the idea out there. Earlier Terminator canon implied a lot of Skynet's technology was pre-Judgment Day in origin, and T3 reinforced some of that. I was just running with it.
1980s Rock is to music what Giant Robot shows are to anime
Think about it.
Cruising low in my N-1 blasting phat beats,
showin' off my chrome on them Coruscant streets
Got my 'saber on my belt and my gat by side,
this here yellow plane makes for a sick ride