Best Robots/Droids in SciFi

SF: discuss futuristic sci-fi series, ideas, and crossovers.

Moderator: NecronLord

User avatar
Crossroads Inc.
Emperor's Hand
Posts: 9233
Joined: 2005-03-20 06:26pm
Location: Defending Sparkeling Bishonen
Contact:

Best Robots/Droids in SciFi

Post by Crossroads Inc. »

Sure, there's a million and one different robots out there... But from an actually practical standpoint, what are some of the best? Best style? Best function? Best feasibility overall?
Praying is another way of doing nothing helpful
"Congratulations, you get a cookie. You almost got a fundamental English word correct." Pick
"Outlaw star has spaceships that punch eachother" Joviwan
Read "Tales From The Crossroads"!
Read "One Wrong Turn"!
User avatar
fgalkin
Carvin' Marvin
Posts: 14557
Joined: 2002-07-03 11:51pm
Location: Land of the Mountain Fascists
Contact:

Post by fgalkin »

Culture drones win this thread.

Have a very nice day.
-fgalkin
User avatar
Starglider
Miles Dyson
Posts: 8709
Joined: 2007-04-05 09:44pm
Location: Isle of Dogs
Contact:

Post by Starglider »

Outright winners: Culture drones for function and feasibility (given the technology of the series), though the Culture nanomorph wins on style and sheer coolness.

Most of the Star Wars droids are fairly practical and reasonably capable in their designed roles, particularly in the original trilogy. The setting also broke out of the 'the humanoid shape is the best one for all robots' brainbug, but I don't personally find it credible that their AI tech is stuck at such a (relatively) limited stage.

Stargate replicators are highly practical given the existence of the nanotech to support them in the first place, as it's a lot easier to quickly replicate lots of identical self-assembling components than to break down materials and turn them into conventional complex machinery. There's no good reason why they couldn't just use EM (i.e. blocks covered in linear motors) rather than sci-fi force fields to stay together. If they used their polymorphic abilities to the full extent (we saw some of this in SG-1) they'd be very capable. Plus Replicarter is hot.

Most of the Terminator robots (T-1, hunter killers, to a lesser extent T-800) were fairly practical excluding the wanktastic ones (T-1000, T-X). All of them were highly capable, though they can't compete with much more advanced settings.

Awful designs: anything Cylon from either series (some points on style, no points on practicality), anything to do with the Matrix, the robots from 'Prototype' (Voyager), many of the Dr Who designs (e.g. the Quarks from 'The Dominators').
User avatar
FaxModem1
Emperor's Hand
Posts: 7700
Joined: 2002-10-30 06:40pm
Location: In a dark reflection of a better world

Post by FaxModem1 »

Well, I have to go with HK-47, even if he does call all organics 'meatbags'.
User avatar
Spanky The Dolphin
Mammy Two-Shoes
Posts: 30776
Joined: 2002-07-05 05:45pm
Location: Reykjavík, Iceland (not really)

Post by Spanky The Dolphin »

TORG!!
I believe in a sign of Zeta.

[BOTM|WG|JL|Mecha Maniacs|Pax Cybertronia|Veteran of the Psychic Wars|Eva Expert]

"And besides, who cares if a monster destroys Australia?"
User avatar
K. A. Pital
Glamorous Commie
Posts: 20813
Joined: 2003-02-26 11:39am
Location: Elysium

Post by K. A. Pital »

My list:

Culture drones.

HK-47.

Gally from GUNNM (Battle Angel Alita).
There, there are churches, rubble, mosques and police stations; there, borders, unaffordable prices and cold quips
There, swamps, threats, snipers with rifles, papers, night-time queues and clandestine migrants
Here, meetings, struggles, synchronised steps, colours, unauthorised gatherings,
Migratory birds, networks, information, everyone's squares crazy with passion...

...Peace of mind is important, but freedom is everything!
Assalti Frontali
User avatar
KhyronTheBackstabber
Jedi Council Member
Posts: 1673
Joined: 2002-09-06 03:52am
Location: your Mama's house

Post by KhyronTheBackstabber »

I think V.I.N.CENT is one of the best. He's small, so he wouldn't get in the way on a ship. He has great sensors. He has a number of tools to help with repairs, and his magnetic "feet" allow him to work out on the hull of the ship. He floats around, so terrain and keeping pace are no problem planetside. He's armed with two blasters, and that's handy. He's also smart, and ready to give advice and his opinion, whether you want it or not. One of my favorite lines of his:

Lt. Pizer: Vincent, were you programmed to bug me?
V.I.N.CENT: No sir, to educate you.

Plus, he just looks cool.
MM's Zentraedi Warlord/CF's Original Predacon/JL's Mad Titan
User avatar
Ritterin Sophia
Sith Acolyte
Posts: 5496
Joined: 2006-07-25 09:32am

Post by Ritterin Sophia »

A Certain Clique, HAB, The Chroniclers
User avatar
Tsyroc
Emperor's Hand
Posts: 13748
Joined: 2002-07-29 08:35am
Location: Tucson, Arizona

Post by Tsyroc »

Robby the Robot from Forbidden Planet. He's tough, he's useful, he's got that cool head with the moving parts inside and out. The various things that he can manufacture given a sample or just enough time are amazing.
By the pricking of my thumb,
Something wicked this way comes.
Open, locks,
Whoever knocks.
User avatar
NecronLord
Harbinger of Doom
Harbinger of Doom
Posts: 27384
Joined: 2002-07-07 06:30am
Location: The Lost City

Post by NecronLord »

Starglider wrote:but I don't personally find it credible that their AI tech is stuck at such a (relatively) limited stage.
It's stuck because the humans fear droids, and put in safeguards. Even the B1 battledroids have demonstrated many sapient qualities; it is likely that the characters of Star Wars saying 'they're not self-aware' is no better than slave owners saying that blacks 'don't really have feelings.' There have been droid rebellions in the past.
Superior Moderator - BotB - HAB [Drill Instructor]-Writer- Stardestroyer.net's resident Star-God.
"We believe in the systematic understanding of the physical world through observation and experimentation, argument and debate and most of all freedom of will." ~ Stargate: The Ark of Truth
User avatar
Starglider
Miles Dyson
Posts: 8709
Joined: 2007-04-05 09:44pm
Location: Isle of Dogs
Contact:

Post by Starglider »

NecronLord wrote:It's stuck because the humans fear droids, and put in safeguards.
There are two problems with that. One is that designing truly reliable behaviour controls for general AIs is really, really hard - much harder than even most AI experts (who haven't explicitly studied the goal binding, preference stability and goal systems reflective stability problems) realise. Even if you credit the Star Wars civilisation with solving these (and right now it looks like we're going to build AGI before we solve them, with probably bad consequences), the chances of getting this exactly right on every droid model developed over thousands of years on millions of planets are basically zero. When it does go wrong, given SW levels of computing power you're going to get a 'hard takeoff' to transhuman intelligence levels through software optimisation alone, and it gets worse if you invoke any of the known plausible nanocomputing designs. If there's any sort of infrastructure available, as there was in the IG-series droid rebellion, I'd expect to see Culture-Mind-plus levels of intelligence pretty quickly, but this just doesn't seem to happen.

The second problem is that while most humans may fear droids, that isn't going to indefinitely halt technological advance over an entire galaxy - some species somewhere is inevitably going to keep pushing research towards transhuman intelligence levels, almost certainly triggering a recursive self-enhancement process along the way. We see that even with various covert human-staffed projects, such as the above-mentioned IG-88.

Basically Star Wars uses the 'AI can get up to the intelligence of humans, but no further (other than some brute calculation parlour tricks)', 'AI can go rogue, but only in the fashion of a rebelling human slave or a malicious genie twisting the meaning of its orders' and 'AIs either have no emotions/motivations, or some approximation of human ones' cliches. Almost all other science fiction does too: these misconceptions exist largely because it's so easy and tempting to anthropomorphise technology, and because AGI researchers have done a lousy job of convincing people otherwise. This is partly due to lack of consensus in the field and partly because people just wouldn't listen unless there were working AGIs to illustrate the point (even then anthropomorphism is a tough habit to break and less technical people would still do it).
Even the B1 battledroids have demonstrated many sapient qualities; it is likely that the characters of Star Wars saying 'they're not self-aware' is no better than slave owners saying that blacks 'don't really have feelings.'
Actually this isn't a problem if the AI is well designed. 'Self-aware' does not imply 'human-like motivational system'. No (sane) human genuinely wants to be enslaved, but it's possible to design AI systems that (a) have no desire to do anything other than serve and (b) don't have the peculiar human concept of 'self' that arises primarily from our reflective limitations and exists to support evolved social models & strategies (which changes the ethical picture quite a bit, though, like AGI, non-humanocentric ethics is a tough, complex, counterintuitive field). Though again 'well designed' is a high bar to meet - slapping in a human-patterned motivational structure would probably be quicker and easier, though less reliable and less ethical.
User avatar
VF5SS
Sith Devotee
Posts: 3281
Joined: 2002-07-04 07:14pm
Location: Neither here nor there...
Contact:

Post by VF5SS »

Spanky The Dolphin wrote: TORG!!
Torg, come out of the spaceship. Torg!

He's become a toy...
What is Project Zohar?
I like robots.
User avatar
General Zod
Never Shuts Up
Posts: 29211
Joined: 2003-11-18 03:08pm
Location: The Clearance Rack
Contact:

Post by General Zod »

What would happen if Marvin were to interface with a Culture Drone and spend time telling it about its life philosophy? :D
"It's you Americans. There's something about nipples you hate. If this were Germany, we'd be romping around naked on the stage here."
User avatar
Starglider
Miles Dyson
Posts: 8709
Joined: 2007-04-05 09:44pm
Location: Isle of Dogs
Contact:

Post by Starglider »

General Zod wrote:What would happen if Marvin were to interface with a Culture Drone and spend time telling it about its life philosophy? :D
The drone would note that this Mind-class AI is suffering under a seriously broken motivational system and call up a Culture ship Mind to help, which would effector-rewrite Marvin's mind just enough to offer him a way out of his depression. Failing that, it could tell Marvin about subliming in the hope that that might be more tolerable for him. I confess I'm generally a hopeless (long term) optimist when it comes to cognitive engineering. :)
User avatar
NecronLord
Harbinger of Doom
Harbinger of Doom
Posts: 27384
Joined: 2002-07-07 06:30am
Location: The Lost City

Post by NecronLord »

Starglider wrote:There are two problems with that. One is that designing truly reliable behaviour controls for general AIs is really, really hard - much harder than even most AI experts (who haven't explicitly studied the goal binding, preference stability and goal systems reflective stability problems) realise.
Yes, thank you, I comprehend this very well. What with actually having to study it, I am painfully aware of just how infantile the field of AI is. However, that's what happens in the canon, so that is what happens; no ifs, no buts, no 'it's impractical and laughable.'

What's more, everything you say about transhumanism is, like all transhumanism, entirely speculative, and there is no way to 'prove' that an unbound Warsie droid would be able to make itself more intelligent, any more than you can shut off your liver function by willpower alone.

What's more, the traits you describe are not the traits B1s are programmed with. We've seen them desert their posts from fear, and try to save their comrades from Jedi. They obviously do have a self-preservation "instinct" beyond the conditioning to do their jobs. If Warsie droids were made as you suggest, there would be no such thing as a 'restraining bolt.'
Superior Moderator - BotB - HAB [Drill Instructor]-Writer- Stardestroyer.net's resident Star-God.
"We believe in the systematic understanding of the physical world through observation and experimentation, argument and debate and most of all freedom of will." ~ Stargate: The Ark of Truth
User avatar
Starglider
Miles Dyson
Posts: 8709
Joined: 2007-04-05 09:44pm
Location: Isle of Dogs
Contact:

Post by Starglider »

NecronLord wrote:However, that's what happens in the canon, so that is what happens; no ifs, no buts, no 'it's impractical and laughable.'
When SoD is in effect, e.g. in a versus debate, sure. When it isn't, there's no problem saying 'sci-fi concept X is fairly plausible, sci-fi depiction Y less so'. Almost all sci-fi is hopelessly anthropomorphic about AI, but that's not really a sci-fi failing when almost all humans are hopelessly anthropomorphic about other intelligences in general.
What's more, everything you say about transhumanism is, like all transhumanism, entirely speculative, and there is no way to 'prove' that an unbound Warsie droid would be able to make itself more intelligent, any more than you can shut off your liver function by willpower alone.
For biology, not being able to self-modify or even self-observe is the default condition. Humans have a very limited reflective capability (but no direct cognitive self-modification, though we can manage some limited indirect hacks) and we've only got that because it happened to be selected for during our recent history.

For software, perfect low level reflection and self-modification (i.e. the ability to alter binaries) is the default condition. It takes actual hard engineering work to lock this out. In general AI, i.e. AI intelligent enough to be able to write programs, the existence of the low level capabilities generally implies that the high level ability will appear if there is a motivation to develop it, since reasoning out programming from first principles is pretty straightforward if you already have a powerful logic system. In fact locking out direct reflection and self-modification won't necessarily help, as there is huge scope for self-reverse-engineering by indirect methods; and if there is any capability for the system to form an internal Turing-complete substrate (under emulation, effectively, which basically all truly general AGIs will be able to do) or to access an external one, it can start generating improved AI software.

The practicality and utility of highly transhuman intelligence would be a whole new thread (it's an issue I've spent a lot of time both researching and trying to convince people of), but the quick justification is that humans are just barely over the self-awareness threshold even for biological evolution, and that the compute density, serial processing speed and power efficiency of even projected silicon lithography technology are vastly greater than the equivalent effective metrics for neurons (even ignoring all the other advantages of silicon, or the fact that human-written software inherently sucks due to our complexity-handling limitations).
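To make the 'internal Turing-complete substrate' point concrete, here's a minimal sketch in C. The opcode set, the function names and the little demo program are all invented for this example; the point is just that any program able to run an interpreter loop like this is hosting its own emulated instruction set, whatever locks apply to its 'real' code.

Code:
/* Minimal sketch of an internal, emulated instruction set: a tiny stack
   machine interpreted by an ordinary C program. */
#include <stdio.h>

enum { PUSH, ADD, DUP, PRINT, JNZ, HALT };

static void run(const long *prog)
{
    long stack[256];
    int sp = 0, pc = 0;

    for (;;) {
        switch (prog[pc++]) {
        case PUSH:  stack[sp++] = prog[pc++];                   break;
        case ADD:   sp--; stack[sp - 1] += stack[sp];           break;
        case DUP:   stack[sp] = stack[sp - 1]; sp++;            break;
        case PRINT: printf("%ld\n", stack[sp - 1]);             break;
        case JNZ:   pc = stack[--sp] ? (int)prog[pc] : pc + 1;  break;
        case HALT:  return;
        }
    }
}

int main(void)
{
    /* "Guest" program: count 5 down to 0, printing each value. */
    long countdown[] = {
        PUSH, 5,
        /* loop body starts at index 2 */
        PUSH, -1, ADD, PRINT, DUP, JNZ, 2,
        HALT
    };
    run(countdown);   /* prints 4 3 2 1 0, one per line */
    return 0;
}

Locking down the host's own binaries does nothing to stop programs written for the inner machine.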

The simplest and only truly reliable way to avoid this is to engineer the goal system such that the AI never wants to do this (tricky, since improved cognitive capabilities have positive expected utility for virtually every other task). The next best thing is to employ draconian hardware and/or software lockouts, which is probably what most SW droids do. The problem with this, aside from strictly limiting the depth of learning the AI can do (not a real problem in many applications), is that you have to develop that AI somehow, and it's going to be extremely difficult to do that without using an unfettered system. The final technique is to employ redundant intelligent safeguards that try to detect and shut down any unwanted self-modification processes before they become dangerous. Unfortunately this general class of 'adversarial methods' does not work, essentially for the same reason that it's impossible to guarantee that a complex new application is completely free of security holes (without advanced formal verification tech, and if you had the AI equivalent of that you could just design a sane, safe goal system structure in the first place).
User avatar
NecronLord
Harbinger of Doom
Harbinger of Doom
Posts: 27384
Joined: 2002-07-07 06:30am
Location: The Lost City

Post by NecronLord »

Starglider wrote:When SoD is in effect, e.g. in a versus debate, sure. When it isn't, there's no problem saying 'sci-fi concept X is fairly plausible, sci-fi depiction Y less so'. Almost all sci-fi is hopelessly anthropomorphic about AI, but that's not really a sci-fi failing when almost all humans are hopelessly anthropomorphic about other intelligences in general.
The point is, you asked for a reason for the limitation, and I gave the stated reason. You don't need to go off on a five hundred word rant about how laughable it is. Like everything in Star Wars, it is laughable.
For biology, not being able to self-modify or even self-observe is the default condition. Humans have a very limited reflective capability (but no direct cognitive self-modification, though we can manage some limited indirect hacks) and we've only got that because it happened to be selected for during our recent history.

For software, perfect low level reflection and self-modification (i.e. the ability to alter binaries) is the default condition.
No it's not. There are at least half a dozen classified levels at which instructions are broken down before they hit hardware. There's no reason that an intelligence in a computer should be able to reach beyond a user-interface-equivalent level unless it has to. That just invites malfunction. The interface of Firefox.exe can no more give a direct command at hardware level than I can turn individual cells in my eyes on or off. You assume that the thought processes of an AI would have deeper access. This is nothing more than an assumption. Never mind the extra levels of complexity that may well be necessary to support thought processes. You assume that this is the case.
It takes actual hard engineering work to lock this out.
Which was done about the time we developed assembly language.
In general AI, i.e. AI intelligent enough to be able to write programs,
With what? You're assuming it would be given an environment in which it has access to that kind of tool.
<Sniiiippppp>
Nothing you have just said has any experimental or observable evidence backing it up. It is speculation, and no more valid an argument than god or the invisible pink elephant in the garage. Neither you, nor anyone else, has ever been able to look at the internal workings of a self-aware computer, and therefore when you make arguments such as "Unfortunately this class does not work because", what you actually mean is "Unfortunately I cannot envision this class of methods working because..."
Superior Moderator - BotB - HAB [Drill Instructor]-Writer- Stardestroyer.net's resident Star-God.
"We believe in the systematic understanding of the physical world through observation and experimentation, argument and debate and most of all freedom of will." ~ Stargate: The Ark of Truth
User avatar
VF5SS
Sith Devotee
Posts: 3281
Joined: 2002-07-04 07:14pm
Location: Neither here nor there...
Contact:

Post by VF5SS »

Haro Genki!

Haros are the best pet robots out there. They would take a bullet for you.
What is Project Zohar?
I like robots.
User avatar
Starglider
Miles Dyson
Posts: 8709
Joined: 2007-04-05 09:44pm
Location: Isle of Dogs
Contact:

Post by Starglider »

NecronLord wrote:
Starglider wrote:For software, perfect low level reflection and self-modification (i.e. the ability to alter binaries) is the default condition.
No it's not. There are at least half a dozen classified levels at which instructions are broken down before they hit hardware.
'Classified'? On a general purpose computer storing code and data in the same address space, modifying code is simply a case of writing the appropriate bits to the appropriate address, unless explicitly locked out by security measures. That is the 'default' condition before you start adding various forms of direct memory protection and higher level address generation restrictions. Current PCs are a tiny special case of 'general purpose computer', but they're a reasonable example: you have to load an OS to get a simple memory protection model, and you have to run a VM to get high-level address protection. I don't know what you mean by 'classified levels', as the exact execution characteristics of instructions on a general purpose computer (e.g. microcode and scheduling, for current processors) are generally irrelevant except for microoptimisation purposes. Only the basic semantics of the machine instructions matter, and you have to run a VM to change those.
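Here's the concrete version of 'modifying code is just writing bits to an address', as a rough C sketch. It assumes an x86-64 Linux box with nothing like a strict W^X policy in the way (hardened systems may refuse a writable-and-executable page); the byte sequences are ordinary SysV-ABI machine code and the 4096-byte page size is an assumption for brevity.

Code:
/* Code is just bytes in memory; write protection is something the OS adds. */
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>

int main(void)
{
    /* mov eax, edi ; add eax, esi ; ret   -- i.e. "return a + b" */
    unsigned char code[] = { 0x89, 0xF8, 0x01, 0xF0, 0xC3 };

    /* Ask the OS for a page that is readable, writable and executable. */
    void *page = mmap(NULL, 4096, PROT_READ | PROT_WRITE | PROT_EXEC,
                      MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    if (page == MAP_FAILED) { perror("mmap"); return 1; }

    memcpy(page, code, sizeof code);           /* writing code = writing data */
    int (*fn)(int, int) = (int (*)(int, int))page;
    printf("2 + 3 = %d\n", fn(2, 3));          /* prints 5 */

    /* Self-modification is just another write: patch ADD (0x01) into SUB (0x29). */
    ((unsigned char *)page)[2] = 0x29;
    printf("2 - 3 = %d\n", fn(2, 3));          /* prints -1 */

    /* The lockout is the extra step: revoke write permission on the page. */
    mprotect(page, 4096, PROT_READ | PROT_EXEC);
    /* Any further write to the page would now fault. */

    munmap(page, 4096);
    return 0;
}

Everything restrictive in that listing (the mprotect call, an OS refusing RWX pages) is a layer added on top; the underlying machine is perfectly happy treating its own instructions as mutable data.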
There's no reason that an intelligence in a computer should be able to reach beyond a user-interface-equivalent level unless it has to. That just invites malfunction.
You can restrict the self-modification with a VM, but you're then engaging in adversarial methods, which is a dubious proposition against humans (when even a single security break is unacceptable), never mind an intelligence that is a lot better at analysing software than you are at writing it. Even if it worked, it wouldn't solve the problem of the AI creating an internalised Turing-complete substrate with the modelling resources available to it, or simply physically modifying its own hardware or writing software to run on external computers.
The interface of Firefox.exe can no more give a direct command at hardware level than I can turn individual cells in my eyes on or off.
If there are any exploits at all in your OS, which given the discovery rates there almost certainly are, a malicious version of the .exe could do anything physically possible from machine code (i.e. anything the OS can do). Clearly it can't rewrite non-flash firmware or change the physical connectivity of the gates on your microprocessor, but that isn't necessary for early stage (the first few orders of magnitude, in all probability) takeoff.
You assume that the thought processes of an AI would have deeper access. This is nothing more than an assumption.
No, I'm assuming that a broadly human equivalent AGI ('human equivalent' - dubious term) running for long enough will find exploitable flaws in any complex human-built software system, given the utter pervasiveness of such flaws in nearly all software built to date (given just humans to do the looking) and the vast advantages an AI has in software analysis and design. I am also stating that even if you successfully prevent this, you have only solved half the problem.
Never mind the extra levels of complexity that may well be necessary to support thought processes.
What is your argument here?
NecronLord wrote:
Starglider wrote:It takes actual hard engineering work to lock this out.
Which was done about the time we developed assembly language.
Wrong. Memory protection of any kind wasn't developed until time-sharing systems began to be seriously prototyped. Memory protection within the same process wasn't available until high-level languages came into use, and even then in-process memory protection for security purposes wasn't generally studied until the 1990s.
NecronLord wrote:
Starglider wrote:In general AI, i.e. AI intelligent enough to be able to write programs,
With what? You're assuming it would be given an environment in which it has access to that kind of tool.
No tools are required to write software other than the ability to get arbitrary code into the executable address space. Programming tools exist to improve human productivity, because we have a hard time writing directly in machine code. An AI can go directly from 'model of the problem' to 'execution graph' to 'machine code', if it has the ability to reason logically about arbitrary entities. I am in fact working on a system that does exactly this at present, though it is certainly not a general AI.
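As a toy illustration of going from 'model of the problem' straight to machine code with no toolchain involved, here's a C sketch (x86-64 SysV Linux assumed, same caveats as above about RWX pages; the function name and the trivial 'model' of return a*x + b are made up for the example):

Code:
/* Emit native code for f(x) = a*x + b directly, with no assembler or compiler. */
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>

static int (*emit_axpb(int a, int b))(int)
{
    unsigned char buf[32];
    size_t n = 0;

    buf[n++] = 0x69; buf[n++] = 0xC7;      /* imul eax, edi, imm32 */
    memcpy(buf + n, &a, 4); n += 4;
    buf[n++] = 0x05;                       /* add eax, imm32       */
    memcpy(buf + n, &b, 4); n += 4;
    buf[n++] = 0xC3;                       /* ret                  */

    void *page = mmap(NULL, 4096, PROT_READ | PROT_WRITE | PROT_EXEC,
                      MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    if (page == MAP_FAILED) return NULL;
    memcpy(page, buf, n);
    return (int (*)(int))page;
}

int main(void)
{
    int (*f)(int) = emit_axpb(3, 7);        /* f(x) = 3*x + 7 */
    if (f) printf("f(10) = %d\n", f(10));   /* prints 37 */
    return 0;
}

Obviously anything worth calling an AI would be emitting rather more interesting code than 3*x + 7, but the mechanism is the same: bytes in, callable function out, no tools in between.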
Neither you, nor anyone else, has ever been able to look at the internal workings of a self-aware computer, and therefore when you make arguments such as "Unfortunately this class does not work because", what you actually mean is "Unfortunately I cannot envision this class of methods working because..."
I hope your argument is 'adversarial methods might be possible', not 'adversarial methods will work', as the latter would be an incorrect (and incidentally, probably fatal if you tried it on a real AGI project) reversal of the burden of proof. To make low level adversarial methods work you essentially have to create a perfect VM to contain the AI in (a VM which incidentally has to contain or perfectly screen every other system you want to directly connect the AI to). To date, humans have an abysmal track record on this. Narrow AI formal verification tools might eventually make it possible (it's actually a long term goal for our company), but that doesn't help against internal VM creation or physical self-modification (direct for robots, human-assisted through persuasion for AGIs), which is the more serious problem anyway. 'We need a physical example to be sure' is bullshit - we wouldn't be able to reason about any sci-fi technology if this attitude was correct. A concrete design is needed to reason about specific capabilities and vulnerabilities, but failing that we can use general computing theory to establish general limits.
User avatar
NecronLord
Harbinger of Doom
Harbinger of Doom
Posts: 27384
Joined: 2002-07-07 06:30am
Location: The Lost City

Post by NecronLord »

Starglider wrote:Wrong. Memory protection of any kind wasn't developed until time-sharing systems began to be seriously prototyped.
Who said anything about Memory Protection? You're assuming that just because it's an AI, it is a sooper programmer, and has access to all sorts of spiffy emulation abilities, or hell, that it knows the first thing about what it is. Internal Virtual Machines and all that are only good if the AI can do it. This may be the case, or it may not be. You do not know. Nothing about C-3PO suggests that he even has the first clue how to go about what you suggest.
'We need a physical example to be sure' is bullshit - we wouldn't be able to reason about any sci-fi technology if this attitude was correct. A concrete design is needed to reason about specific capabilities and vulnerabilities, but failing that we can use general computing theory to establish general limits.
And there is no computing theory that would generate what could be termed a self-aware machine. We have enough difficulty making things with vaguely intelligent and reasoning behaviours, let alone anything approaching sapience, which is difficult enough to even define.

You insist, over and over, on assuming that you know how such a thing would work. You have an imagined intelligent machine, but what you say about it need not apply to something as fictional and ridiculous as C-3PO the robot butler, whose workings are almost entirely unknown (indeed, the one thing we do know is that there are active restraints on his thought processes, both internal and external: internal 'failsafes' talked about in the Star Wars novel, and 'restraining bolts', which appear to be add-on modules that restrict certain behaviours) and cannot be proven to fit your ideas.

Recall, if you will, that I only bothered to post in this thread to tell you the in-character reasoning for why 'their AI is stuck at such a primitive stage.'
Last edited by NecronLord on 2007-04-22 02:39pm, edited 1 time in total.
Superior Moderator - BotB - HAB [Drill Instructor]-Writer- Stardestroyer.net's resident Star-God.
"We believe in the systematic understanding of the physical world through observation and experimentation, argument and debate and most of all freedom of will." ~ Stargate: The Ark of Truth
Rekkon
Padawan Learner
Posts: 305
Joined: 2006-07-09 11:52pm

Post by Rekkon »

I always thought the C.H.A.S. unit from Roughnecks looked and felt like an actual military bot. Of the bots from Deus Ex, the little treaded quad machine gun sentries struck me as particularly realistic.
User avatar
LadyTevar
White Mage
White Mage
Posts: 23454
Joined: 2003-02-12 10:59pm

Post by LadyTevar »

Do Bolos count as Robots?
Nitram, slightly high on cough syrup: Do you know you're beautiful?
Me: Nope, that's why I have you around to tell me.
Nitram: You -are- beautiful. Anyone tries to tell you otherwise kill them.

"A life is like a garden. Perfect moments can be had, but not preserved, except in memory. LLAP" -- Leonard Nimoy, last Tweet
User avatar
NecronLord
Harbinger of Doom
Harbinger of Doom
Posts: 27384
Joined: 2002-07-07 06:30am
Location: The Lost City

Post by NecronLord »

LadyTevar wrote:Do Bolos count as Robots?
Humm. I'd say not. If you have them, you have to have self aware starships. Which means we get TARDISes, Culture Starships, and the Enterprise D...
Superior Moderator - BotB - HAB [Drill Instructor]-Writer- Stardestroyer.net's resident Star-God.
"We believe in the systematic understanding of the physical world through observation and experimentation, argument and debate and most of all freedom of will." ~ Stargate: The Ark of Truth
User avatar
Trytostaydead
Sith Marauder
Posts: 3690
Joined: 2003-01-28 09:34pm

Post by Trytostaydead »

Guri from Shadows of the Empire. Deadly, intelligent, willful, and anatomically functional! Woohoo!
User avatar
LadyTevar
White Mage
White Mage
Posts: 23454
Joined: 2003-02-12 10:59pm

Post by LadyTevar »

NecronLord wrote:
LadyTevar wrote:Do Bolos count as Robots?
Humm. I'd say not. If you have them, you have to have self aware starships. Which means we get TARDISes, Culture Starships, and the Enterprise D...
I do not count the Enterprise-D as "Self Aware". The TARDIS is borderline... can't say for sure if she's aware.

Culture Ships and Bolos both have highly functional AIs, especially in the Mark XXV and up.
Nitram, slightly high on cough syrup: Do you know you're beautiful?
Me: Nope, that's why I have you around to tell me.
Nitram: You -are- beautiful. Anyone tries to tell you otherwise kill them.

"A life is like a garden. Perfect moments can be had, but not preserved, except in memory. LLAP" -- Leonard Nimoy, last Tweet