Best Robots/Droids in SciFi
Moderator: NecronLord
- Crossroads Inc.
- Emperor's Hand
- Posts: 9233
- Joined: 2005-03-20 06:26pm
- Location: Defending Sparkeling Bishonen
- Contact:
Best Robots/Droids in SciFi
Sure, there are a million and one different robots out there... But from an actually practical standpoint, what are some of the best? Best style? Best function? Best feasibility overall?
Praying is another way of doing nothing helpful
"Congratulations, you get a cookie. You almost got a fundamental English word correct." Pick
"Outlaw star has spaceships that punch eachother" Joviwan
Read "Tales From The Crossroads"!
Read "One Wrong Turn"!
"Congratulations, you get a cookie. You almost got a fundamental English word correct." Pick
"Outlaw star has spaceships that punch eachother" Joviwan
Read "Tales From The Crossroads"!
Read "One Wrong Turn"!
- Starglider
- Miles Dyson
- Posts: 8709
- Joined: 2007-04-05 09:44pm
- Location: Isle of Dogs
- Contact:
Outright winners: Culture drones for function and feasibility (given the technology of the series), though the Culture nanomorph wins on style and sheer coolness.
Most of the Star Wars droids are fairly practical and reasonably capable in their designed roles, particularly in the original trilogy. The setting broke out of the 'the humanoid shape is the best one for all robots' brainbug, but I don't personally find it credible that their AI tech is stuck at such a (relatively) limited stage.
Stargate replicators are highly practical given the existence of the nanotech to support them in the first place, as it's a lot easier to quickly replicate lots of identical self-assembling components than to break down materials and turn them into conventional complex machinery. There's no good reason why they couldn't just use EM (i.e. blocks covered in linear motors) rather than sci-fi force fields to stay together. If they used their polymorphic abilities to the full extent (we saw some of this in SG-1) they'd be very capable. Plus Replicarter is hot.
Most of the Terminator robots (T-1, hunter killers, to a lesser extent T-800) were fairly practical excluding the wanktastic ones (T-1000, T-X). All of them were highly capable, though they can't compete with much more advanced settings.
Awful designs: anything Cylon from either series (some points on style, no points on practicality), anything to do with the Matrix, the robots from 'Prototype' (Voyager), many of the Dr Who designs (e.g. the Quarks from 'The Dominators').
- Spanky The Dolphin
- Mammy Two-Shoes
- Posts: 30776
- Joined: 2002-07-05 05:45pm
- Location: Reykjavík, Iceland (not really)
- K. A. Pital
- Glamorous Commie
- Posts: 20813
- Joined: 2003-02-26 11:39am
- Location: Elysium
My list:
Culture drones.
HK-47.
Gally from GUNNM (Battle Angel Alita).
There, there are churches, rubble, mosques and police stations; there, borders, unaffordable prices and cold quips
There, swamps, threats, snipers with rifles, papers, night-time queues and clandestine migrants
Here, meetings, struggles, steps in sync, colours, unauthorised huddles,
Migratory birds, networks, information, everyone's squares mad with passion...
...Tranquillity is important, but freedom is everything!
Assalti Frontali
- KhyronTheBackstabber
- Jedi Council Member
- Posts: 1673
- Joined: 2002-09-06 03:52am
- Location: your Mama's house
I think V.I.N.CENT is one of the best. He's small, so he wouldn't get in the way on a ship. He has great sensors. He has a number of tools to help with repairs, and his magnetic "feet" allow him to work out on the hull of the ship. He floats around, so terrain and keeping pace are no problem planetside. He's armed with two blasters, and that's handy. He's also smart, and ready to give advice and his opinion, whether you want it or not. One of my favorite lines of his:
Lt. Pizer: Vincent, were you programmed to bug me?
V.I.N.CENT: No sir, to educate you.
Plus, he just looks cool.
MM's Zentraedi Warlord/CF's Original Predacon/JL's Mad Titan
- Ritterin Sophia
- Sith Acolyte
- Posts: 5496
- Joined: 2006-07-25 09:32am
Robby the Robot from Forbidden Planet. He's tough, he's useful, and he's got that cool head with the moving parts inside and out. The various things that he can manufacture given a sample or just enough time are amazing.
By the pricking of my thumb,
Something wicked this way comes.
Open, locks,
Whoever knocks.
- NecronLord
- Harbinger of Doom
- Posts: 27384
- Joined: 2002-07-07 06:30am
- Location: The Lost City
Starglider wrote: but I don't personally find it credible that their AI tech is stuck at such a (relatively) limited stage.
It's stuck because the humans fear droids and put in safeguards. Even the B1 battle droids have demonstrated many sapient qualities; it is likely that the characters of Star Wars saying 'they're not self aware' is no better than slave owners saying that blacks don't 'really' have feelings. There have been droid rebellions in the past.
Superior Moderator - BotB - HAB [Drill Instructor]-Writer- Stardestroyer.net's resident Star-God.
"We believe in the systematic understanding of the physical world through observation and experimentation, argument and debate and most of all freedom of will." ~ Stargate: The Ark of Truth
"We believe in the systematic understanding of the physical world through observation and experimentation, argument and debate and most of all freedom of will." ~ Stargate: The Ark of Truth
- Starglider
- Miles Dyson
- Posts: 8709
- Joined: 2007-04-05 09:44pm
- Location: Isle of Dogs
- Contact:
NecronLord wrote: It's stuck because the humans fear droids, and put in safeguards.
There are two problems with that. One is that designing truly reliable behaviour controls for general AIs is really, really hard - much harder than even most AI experts (who haven't explicitly studied the goal binding, preference stability and goal system reflective stability problems) realise. Even if you credit the Star Wars civilisation with solving these (and right now it looks like we're going to solve AGI first, with probably bad consequences), the chances of getting this exactly right on every droid model developed over thousands of years on millions of planets is basically zero. When it does go wrong, given SW levels of computing power you're going to get a 'hard takeoff' to transhuman intelligence levels through software optimisation alone, and it gets worse if you invoke any of the known plausible nanocomputing designs. If there's any sort of infrastructure available, as there was in the IG-series droid rebellion, I'd expect to see Culture Mind-plus levels of intelligence pretty quickly, but this just doesn't seem to happen. The second problem is that while most humans may fear droids, that isn't going to indefinitely halt technological advance across an entire galaxy - some species somewhere is inevitably going to keep researching into transhuman intelligence levels, almost certainly triggering a recursive self-enhancement process along the way. We see that even with various covert human-staffed projects, such as the abovementioned IG-88.
Basically Star Wars uses the 'AI can get up to the intelligence of humans, but no further (other than some brute calculation parlour tricks)', 'AI can go rogue, but only in the fashion of a rebelling human slave or a malicious genie twisting the meaning of its orders' and 'AIs either have no emotions/motivations, or some approximation of human ones' cliches. Almost all other science fiction does too: these misconceptions exist largely because it's so easy and tempting to anthropomorphise technology, and because AGI researchers have done a lousy job of convincing people otherwise. This is partly due to lack of consensus in the field and partly because people just wouldn't listen unless there were working AGIs to illustrate the point (even then anthropomorphism is a tough habit to break, and less technical people would still do it).
NecronLord wrote: Even the B1 battle droids have demonstrated many sapient qualities; it is likely that the characters of Star Wars saying 'they're not self aware' is no better than slave owners saying that blacks don't 'really' have feelings.
Actually this isn't a problem if the AI is well designed. 'Self aware' does not imply 'human-like motivational system'. No (sane) human genuinely wants to be enslaved, but it's possible to design AI systems that (a) have no desire to do anything other than serve and (b) don't have the peculiar human concept of 'self' that primarily arises from our reflective limitations and exists to support evolved social models and strategies (which changes the ethical picture quite a bit, though, like AGI itself, non-humanocentric ethics is a tough, complex, counterintuitive field). Though again 'well designed' is a high bar to meet - slapping in a human-patterned motivational structure would probably be quicker and easier, though less reliable and less ethical.
- General Zod
- Never Shuts Up
- Posts: 29211
- Joined: 2003-11-18 03:08pm
- Location: The Clearance Rack
- Contact:
What would happen if Marvin were to interface with a Culture Drone and spend time telling it about its life philosophy?
"It's you Americans. There's something about nipples you hate. If this were Germany, we'd be romping around naked on the stage here."
- Starglider
- Miles Dyson
- Posts: 8709
- Joined: 2007-04-05 09:44pm
- Location: Isle of Dogs
- Contact:
General Zod wrote: What would happen if Marvin were to interface with a Culture Drone and spend time telling it about its life philosophy?
The drone would note that this Mind-class AI is suffering under a seriously broken motivational system and call up a Culture ship Mind to help, which would effector-rewrite Marvin's mind just enough to offer him a way out of his depression. Failing that, it would tell Marvin about subliming in the hope that that might be more tolerable for him. I confess I'm generally a hopeless (long-term) optimist when it comes to cognitive engineering.
- NecronLord
- Harbinger of Doom
- Posts: 27384
- Joined: 2002-07-07 06:30am
- Location: The Lost City
Starglider wrote: There are two problems with that. One is that designing truly reliable behaviour controls for general AIs is really, really hard - much harder than even most AI experts (who haven't explicitly studied the goal binding, preference stability and goal system reflective stability problems) realise.
Yes, thank you, I comprehend this very well. What with actually having to study it, I am painfully aware of just how infantile the field of AI is. However, that's what happens in the canon, and that is therefore what happens - no ifs, no buts, no 'it's impractical and laughable.'
What's more, everything you say about transhumanism is, like all transhumanism, entirely speculative, and there is no way to 'prove' that an unbound Warsie droid would be able to make itself more intelligent, any more than you can shut off your liver function by willpower alone.
What's more, the traits you describe are not the traits B1s are programmed with. We've seen them desert their posts from fear and try to save their comrades from Jedi. They obviously do have a self-preservation "instinct" beyond the conditioning to do their jobs. If Warsie droids were made as you suggest, there would be no such thing as a 'restraining bolt.'
Superior Moderator - BotB - HAB [Drill Instructor]-Writer- Stardestroyer.net's resident Star-God.
"We believe in the systematic understanding of the physical world through observation and experimentation, argument and debate and most of all freedom of will." ~ Stargate: The Ark of Truth
"We believe in the systematic understanding of the physical world through observation and experimentation, argument and debate and most of all freedom of will." ~ Stargate: The Ark of Truth
- Starglider
- Miles Dyson
- Posts: 8709
- Joined: 2007-04-05 09:44pm
- Location: Isle of Dogs
- Contact:
NecronLord wrote: However, that's what happens in the canon, that is therefore what happens, no ifs, no buts, no 'it's impractical and laughable.'
When SoD is in effect, e.g. in a versus debate, sure. When it isn't, there's no problem saying 'sci-fi concept X is fairly plausible, sci-fi depiction Y less so'. Almost all sci-fi is hopelessly anthropomorphic about AI, but that's not really a sci-fi failing when almost all humans are hopelessly anthropomorphic about other intelligences in general.
NecronLord wrote: What's more, everything you say about transhumanism is, like all transhumanism, entirely speculative, and there is no way to 'prove' that an unbound Warsie droid would be able to make itself more intelligent, any more than you can shut off your liver function by willpower alone.
For biology, not being able to self-modify or even self-observe is the default condition. Humans have a very limited reflective capability (but no direct cognitive self-modification, though we can manage some limited indirect hacks), and we've only got that because it happened to be selected for during our recent history.
For software, perfect low level reflection and self-modification (i.e. the ability to alter binaries) is the default condition. It takes actual hard engineering work to lock this out. In general AI, i.e. AI intelligent enough to be able to write programs, the existence of the low level capabilities generally implies that the high level ability will appear if there is a motivation to develop it, since reasoning out programming from first principles is pretty straightforward if you already have a powerful logic system. In fact locking out direct reflection and self-modification won't necessarily help, as there is a huge scope for self-reverse-engineering by indirect methods, and if there is any capability for the system to form an internal Turing-complete substrate (under emulation, effectively, which basically all truly general AGIs will be able to do) or access an external one it can start generating improved AI software. The practicality and utility of highly transhuman intelligence would be a whole new thread (it's an issue I've spent a lot of time both researching and trying to convince people of), but the quick justification is that humans are just barely over the self-awareness threshold even for biological evolution, and that the compute density, serial processing speed and power efficiency of even projected silicon lithography technology is vastly greater than the equivalent effective metrics for neurons (even ignoring all the other advantages of silicon, or the fact that human-written software inherently sucks due to our complexity-handling limitations).
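As a toy illustration of what 'reflection and self-modification by default' looks like in practice, here's a minimal Python sketch (the function, source string and numbers are invented purely for the example): a running program can synthesise new source for itself and splice it in over its old behaviour.

```python
def score(x):
    return x + 1          # original, deliberately weak heuristic

# The program synthesises replacement source for itself at runtime...
new_source = "def score(x):\n    return x * x + 1\n"
namespace = {}
exec(compile(new_source, "<generated>", "exec"), namespace)

# ...and splices the new code object in over its old behaviour.
score.__code__ = namespace["score"].__code__
print(score(10))   # now 101 rather than 11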
The simplest and only truly reliable way to avoid this is to engineer the goal system such that the AI never wants to do it (tricky, since improved cognitive capabilities have positive expected utility for virtually every other task). The next best thing is to employ draconian hardware and/or software lockouts, which is probably what most SW droids do. The problem with this, aside from strictly limiting the depth of learning the AI can do (not a real problem in many applications), is that you have to develop that AI somehow, and it's going to be extremely difficult to do that without using an unfettered system. The final technique is to employ redundant intelligent safeguards that try to detect and shut down any unwanted self-modification processes before they become dangerous. Unfortunately this general class of 'adversarial methods' does not work, essentially for the same reason that it's impossible to guarantee that a complex new application is completely free of security holes (without advanced formal verification tech - and if you had the AI equivalent of that, you could just design a sane, safe goal system structure in the first place).
- NecronLord
- Harbinger of Doom
- Posts: 27384
- Joined: 2002-07-07 06:30am
- Location: The Lost City
Starglider wrote: When SoD is in effect, e.g. in a versus debate, sure. When it isn't, there's no problem saying 'sci-fi concept X is fairly plausible, sci-fi depiction Y less so'. Almost all sci-fi is hopelessly anthropomorphic about AI, but that's not really a sci-fi failing when almost all humans are hopelessly anthropomorphic about other intelligences in general.
The point is, you asked for a reason and I gave the stated one. You don't need to go off on a five-hundred-word rant about how laughable it is. Like everything in Star Wars, it is laughable.
Starglider wrote: For biology, not being able to self-modify or even self-observe is the default condition. Humans have a very limited reflective capability (but no direct cognitive self-modification, though we can manage some limited indirect hacks) and we've only got that because it happened to be selected for during our recent history. For software, perfect low level reflection and self-modification (i.e. the ability to alter binaries) is the default condition.
No it's not. There are at least half a dozen classified levels at which instructions are broken down before they hit hardware. There's no reason that an intelligence in a computer should be able to reach beyond a user-interface-equivalent level unless it has to. That just invites malfunction. The interface of Firefox.exe can no more give a direct command at hardware level than I can turn individual cells in my eyes on or off. You assume that the thought processes of an AI would have deeper access. This is nothing more than an assumption. Never mind the extra levels of complexity that may well be necessary to support thought processes. You assume that this is the case.
Starglider wrote: It takes actual hard engineering work to lock this out.
Which was done about the time we developed assembly language.
Starglider wrote: In general AI, i.e. AI intelligent enough to be able to write programs,
With what? You're assuming it would be given an environment in which it has access to that kind of tool.
Starglider wrote: <Sniiiippppp>
Nothing you have just said has any experimental or observable evidence backing it up. It is speculation, and no more valid an argument than god or the invisible pink elephant in the garage. Neither you, nor anyone else, has ever been able to look at the internal workings of a self-aware computer, and therefore your arguments, such as "Unfortunately this class does not work because..." - what you actually mean is "Unfortunately I cannot envision this class of methods working because..."
Superior Moderator - BotB - HAB [Drill Instructor]-Writer- Stardestroyer.net's resident Star-God.
"We believe in the systematic understanding of the physical world through observation and experimentation, argument and debate and most of all freedom of will." ~ Stargate: The Ark of Truth
"We believe in the systematic understanding of the physical world through observation and experimentation, argument and debate and most of all freedom of will." ~ Stargate: The Ark of Truth
- Starglider
- Miles Dyson
- Posts: 8709
- Joined: 2007-04-05 09:44pm
- Location: Isle of Dogs
- Contact:
Starglider wrote: For software, perfect low level reflection and self-modification (i.e. the ability to alter binaries) is the default condition.
NecronLord wrote: No it's not. There are at least half a dozen classified levels at which instructions are broken down before they hit hardware.
'Classified'? On a general purpose computer storing code and data in the same address space, modifying code is simply a case of writing the appropriate bits to the appropriate address, unless explicitly locked out by security measures. That is the 'default' condition before you start adding various forms of direct memory protection and higher level address generation restrictions. Current PCs are a tiny special case of 'general purpose computer', but they're a reasonable example: you have to load an OS to get a simple memory protection model, and you have to run a VM to get high-level address protection. I don't know what you mean by 'classified levels', as the exact execution characteristics of instructions on a general purpose computer (e.g. microcode and scheduling, for current processors) are generally irrelevant except for micro-optimisation purposes. Only the basic semantics of the machine instructions matter, and you have to run a VM to change those.
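To make the 'writing the appropriate bits to the appropriate address' point concrete, here's a minimal Python sketch. It assumes an x86-64 Linux box whose kernel still allows writable-and-executable anonymous mappings (hardened W^X systems will refuse it); the bytes are hand-assembled machine code for 'return 42'.

```python
import ctypes, mmap

# Hand-assembled x86-64 machine code: mov eax, 42; ret
CODE = bytes([0xB8, 0x2A, 0x00, 0x00, 0x00, 0xC3])

# Ask for an anonymous page that is readable, writable and executable.
buf = mmap.mmap(-1, mmap.PAGESIZE,
                prot=mmap.PROT_READ | mmap.PROT_WRITE | mmap.PROT_EXEC)
buf.write(CODE)

# Treat the freshly written bytes as a callable C function and run them.
addr = ctypes.addressof(ctypes.c_char.from_buffer(buf))
func = ctypes.CFUNCTYPE(ctypes.c_int)(addr)
print(func())   # 42
```

Code really is just data in memory until something (OS, VM, MMU policy) steps in to say otherwise.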
NecronLord wrote: There's no reason that an intelligence in a computer should be able to reach beyond a user-interface-equivalent level unless it has to. That just invites malfunction.
You can restrict the self-modification with a VM, but you're then engaging in adversarial methods, which is a dubious proposition against humans (when even a single security break is unacceptable), never mind an intelligence that is a lot better at analysing software than you are at writing it. Even if it worked, it wouldn't solve the problem of the AI creating an internalised Turing-complete substrate with the modelling resources available to it, or simply physically modifying its own hardware, or writing software to run on external computers.
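For a sense of how little it takes to build an 'internalised Turing-complete substrate', here's an illustrative Python sketch: a Brainfuck interpreter in roughly twenty lines, assembled from nothing but a list, a loop and an if-chain - the kind of modelling resources any general reasoner would have anyway. (The example program at the bottom is made up; it just adds two cells.)

```python
def run_bf(program, tape_len=30000, input_bytes=b""):
    """A tiny Brainfuck interpreter: a Turing-equivalent substrate built
    from nothing but a list, a loop and an if-chain."""
    tape, ptr, pc, out, inp = [0] * tape_len, 0, 0, [], list(input_bytes)
    # Pre-match the brackets so loops can jump in both directions.
    stack, jumps = [], {}
    for i, c in enumerate(program):
        if c == '[':
            stack.append(i)
        elif c == ']':
            j = stack.pop()
            jumps[i], jumps[j] = j, i
    while pc < len(program):
        c = program[pc]
        if c == '>':   ptr += 1
        elif c == '<': ptr -= 1
        elif c == '+': tape[ptr] = (tape[ptr] + 1) % 256
        elif c == '-': tape[ptr] = (tape[ptr] - 1) % 256
        elif c == '.': out.append(tape[ptr])
        elif c == ',': tape[ptr] = inp.pop(0) if inp else 0
        elif c == '[' and tape[ptr] == 0: pc = jumps[pc]
        elif c == ']' and tape[ptr] != 0: pc = jumps[pc]
        pc += 1
    return bytes(out)

# A two-cell adder is enough to show that it computes: 2 + 3 -> cell 0 holds 5.
print(run_bf("++>+++[<+>-]<."))   # b'\x05'
```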
NecronLord wrote: The interface of Firefox.exe can no more give a direct command at hardware level than I can turn individual cells in my eyes on or off.
If there are any exploits at all in your OS, which given the discovery rates there almost certainly are, a malicious version of the .exe could do anything physically possible from machine code (i.e. anything the OS can do). Clearly it can't rewrite non-flash firmware or change the physical connectivity of the gates on your microprocessor, but that isn't necessary for early stage (the first few orders of magnitude, in all probability) takeoff.
NecronLord wrote: You assume that the thought processes of an AI would have deeper access. This is nothing more than an assumption.
No, I'm assuming that a broadly human-equivalent AGI ('human equivalent' - dubious term) running for long enough will find exploitable flaws in any complex human-built software system, given the utter pervasiveness of such flaws in nearly all software built to date (with just humans to do the looking) and the vast advantages an AI has in software analysis and design. I am also stating that even if you successfully prevent this, you have only solved half the problem.
NecronLord wrote: Never mind the extra levels of complexity that may well be necessary to support thought processes.
What is your argument here?
Starglider wrote: It takes actual hard engineering work to lock this out.
NecronLord wrote: Which was done about the time we developed assembly language.
Wrong. Memory protection of any kind wasn't developed until time-sharing systems began to be seriously prototyped. Memory protection within the same process wasn't available until high-level languages came into use, and even then in-process memory protection for security purposes wasn't generally studied until the 1990s.
Starglider wrote: In general AI, i.e. AI intelligent enough to be able to write programs,
NecronLord wrote: With what? You're assuming it would be given an environment in which it has access to that kind of tool.
No tools are required to write software other than the ability to get arbitrary code into the executable address space. Programming tools exist to improve human productivity, because we have a hard time writing directly in machine code. An AI can go directly from 'model of the problem' to 'execution graph' to 'machine code', if it has the ability to reason logically about arbitrary entities. I am in fact working on a system that does exactly this at present, though it is certainly not a general AI.
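A deliberately trivial Python sketch of the 'model of the problem' to 'code' to 'execution' pipeline, with no external tooling involved - the spec format, the name 'poly' and the numbers are invented purely for illustration:

```python
# A symbolic 'model of the problem': the polynomial 3x^2 + 2.
spec = {"name": "poly", "coeffs": [3, 0, 2]}

# Turn the model into source text...
terms = " + ".join(f"{c} * x**{p}"
                   for p, c in enumerate(reversed(spec["coeffs"])) if c)
source = f"def {spec['name']}(x):\n    return {terms}\n"

# ...and into executable code, with no compiler toolchain, editor or IDE involved.
namespace = {}
exec(compile(source, "<synthesised>", "exec"), namespace)
print(namespace["poly"](4))   # 3*16 + 2 = 50
```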
NecronLord wrote: Neither you, nor anyone else, has ever been able to look at the internal workings of a self-aware computer, and therefore your arguments, such as "Unfortunately this class does not work because..." - what you actually mean is "Unfortunately I cannot envision this class of methods working because..."
I hope your argument is 'adversarial methods might be possible', not 'adversarial methods will work', as the latter would be an incorrect (and incidentally, probably fatal if you tried it on a real AGI project) reversal of the burden of proof. To make low level adversarial methods work you essentially have to create a perfect VM to contain the AI in (a VM which incidentally has to contain or perfectly screen every other system you want to directly connect the AI to). To date, humans have an abysmal track record on this. Narrow-AI formal verification tools might eventually make it possible (it's actually a long term goal for our company), but that doesn't help against internal VM creation or physical self-modification (direct for robots, human-assisted through persuasion for AGIs), which is the more serious problem anyway. 'We need a physical example to be sure' is bullshit - we wouldn't be able to reason about any sci-fi technology if this attitude were correct. A concrete design is needed to reason about specific capabilities and vulnerabilities, but failing that we can use general computing theory to establish general limits.
- NecronLord
- Harbinger of Doom
- Posts: 27384
- Joined: 2002-07-07 06:30am
- Location: The Lost City
Starglider wrote: Wrong. Memory protection of any kind wasn't developed until time-sharing systems began to be seriously prototyped.
Who said anything about memory protection? You're assuming that just because it's an AI, it is a sooper programmer, has access to all sorts of spiffy emulation abilities, or hell, that it knows the first thing about what it is. Internal virtual machines and all that are only good if the AI can do it. This may be the case, or it may not be. You do not know. Nothing about C-3PO suggests that he even has the first clue how to go about what you suggest.
Starglider wrote: 'We need a physical example to be sure' is bullshit - we wouldn't be able to reason about any sci-fi technology if this attitude were correct. A concrete design is needed to reason about specific capabilities and vulnerabilities, but failing that we can use general computing theory to establish general limits.
And there is no computing theory that would generate what could be termed a self-aware machine. We have enough difficulties making things with vaguely intelligent and reasoning behaviours, let alone anything approaching sapience, which is difficult enough to even define.
You insist, over and over, on assuming that you know how such a thing would work. You have an imagined intelligent machine, but what you say about it need not apply to something as fictional and ridiculous as C-3PO the robot butler, whose workings are almost entirely unknown (indeed, the one thing we do know is that there are active restraints on his thought processes, both internal and external: the internal 'failsafes' talked about in the Star Wars novel, and 'restraining bolts', which appear to be an add-on module that restricts certain behaviours) and cannot be proven to fit your ideas.
Recall, if you will, that I only bothered to post in this thread to tell you the in-character reasoning for why 'their AI is stuck at such a primitive stage.'
Last edited by NecronLord on 2007-04-22 02:39pm, edited 1 time in total.
Superior Moderator - BotB - HAB [Drill Instructor]-Writer- Stardestroyer.net's resident Star-God.
"We believe in the systematic understanding of the physical world through observation and experimentation, argument and debate and most of all freedom of will." ~ Stargate: The Ark of Truth
"We believe in the systematic understanding of the physical world through observation and experimentation, argument and debate and most of all freedom of will." ~ Stargate: The Ark of Truth
Do Bolos count as Robots?
Nitram, slightly high on cough syrup: Do you know you're beautiful?
Me: Nope, that's why I have you around to tell me.
Nitram: You -are- beautiful. Anyone tries to tell you otherwise kill them.
"A life is like a garden. Perfect moments can be had, but not preserved, except in memory. LLAP" -- Leonard Nimoy, last Tweet
- NecronLord
- Harbinger of Doom
- Posts: 27384
- Joined: 2002-07-07 06:30am
- Location: The Lost City
LadyTevar wrote: Do Bolos count as Robots?
Humm. I'd say not. If you have them, you have to have self-aware starships, which means we get TARDISes, Culture starships, and the Enterprise-D...
Superior Moderator - BotB - HAB [Drill Instructor]-Writer- Stardestroyer.net's resident Star-God.
"We believe in the systematic understanding of the physical world through observation and experimentation, argument and debate and most of all freedom of will." ~ Stargate: The Ark of Truth
"We believe in the systematic understanding of the physical world through observation and experimentation, argument and debate and most of all freedom of will." ~ Stargate: The Ark of Truth
- Trytostaydead
- Sith Marauder
- Posts: 3690
- Joined: 2003-01-28 09:34pm
LadyTevar wrote: Do Bolos count as Robots?
NecronLord wrote: Humm. I'd say not. If you have them, you have to have self-aware starships, which means we get TARDISes, Culture starships, and the Enterprise-D...
I do not count the Enterprise-D as "self aware". The TARDIS is borderline... can't say for sure if she's aware.
Culture ships and Bolos both have highly functional AIs, especially in the Mark XXV and up.
Nitram, slightly high on cough syrup: Do you know you're beautiful?
Me: Nope, that's why I have you around to tell me.
Nitram: You -are- beautiful. Anyone tries to tell you otherwise kill them.
"A life is like a garden. Perfect moments can be had, but not preserved, except in memory. LLAP" -- Leonard Nimoy, last Tweet