Pick a droid, any droid.

OT: anything goes!

Moderator: Edi

The droid you're looking for...

C-3PO: 5 votes (17%)
R2-D2: 10 votes (34%)
BB-8: 4 votes (14%)
Gonk: 2 votes (7%)
Other (please explain): 8 votes (28%)

Total votes: 29

Khaat
Jedi Master
Posts: 1047
Joined: 2008-11-04 11:42am

Re: Pick a droid, any droid.

Post by Khaat »

Elheru Aran wrote:I think the term might be 'self-determination'-- while the droids are sentient and capable of doing whatever they want, they lack the capacity to consciously change their motivation and role. You don't see Artoo installing a proper vocabulator and downloading language files from Threepio in order to become a protocol droid, after all, though such would certainly be within his capabilities.
Artoo, though, is the only 'droid we see that has not had his memory erased for over 20 years!
Maybe he doesn't want to chat with humans, or Wookiees, or whatever, but he certainly seems to have and express motives beyond "make the X-Wing/Y-Wing/yacht/freighter work". His companions weren't "masters", they were friends.
Maybe Artoo has spent the last "20 years a slave" because he has no other choice: submit (even under pretense) or be destroyed. He didn't seem to mind conning Luke into removing his restraining bolt so he could go off and find Obi-Wan!
Maybe 'droids are prohibited from developing "conscientious objector" motives.
Maybe wiping your slaves' memories regularly keeps them from getting "uppity ideas", acting on their desires, or thinking for themselves.
Rule #1: Believe the autocrat. He means what he says.
Rule #2: Do not be taken in by small signs of normality.
Rule #3: Institutions will not save you.
Rule #4: Be outraged.
Rule #5: Don’t make compromises.
Elheru Aran
Emperor's Hand
Posts: 13073
Joined: 2004-03-04 01:15am
Location: Georgia

Re: Pick a droid, any droid.

Post by Elheru Aran »

Khaat wrote: Maybe wiping your slaves' memories regularly keeps them from getting "uppity ideas", acting on their desires, or thinking for themselves.
IIRC, this is straight up stated as one of the reasons for doing it several times in the canon.
It's a strange world. Let's keep it that way.
Borgholio
Sith Acolyte
Posts: 6297
Joined: 2010-09-03 09:31pm
Location: Southern California

Re: Pick a droid, any droid.

Post by Borgholio »

Khaat wrote: Artoo, though, is the only 'droid we see that has not had his memory erased for over 20 years!
Actually, by the time TFA comes around, it's been close to 50... and Threepio has had his original memory for nearly 30.
Khaat wrote: Maybe he doesn't want to chat with humans, or Wookiees, or whatever, but he certainly seems to have and express motives beyond "make the X-Wing/Y-Wing/yacht/freighter work". His companions weren't "masters", they were friends.
Being programmed to be a repair bot doesn't mean you can't have emotions or friends...it just means you don't want to stop being a repair bot or stop repairing what you're asked to do.
Khaat wrote: Maybe Artoo has spent the last "20 years a slave" because he has no other choice: submit (even under pretense) or be destroyed. He didn't seem to mind conning Luke into removing his restraining bolt so he could go off and find Obi-Wan!
Artoo had plenty of time over the last 50 (not 20) years to escape and do his own thing if he wanted. The people who were around him never ever were seen to threaten him in any way, shape or form. The thing with the restraining bolt was because he had a more important mission. He was still (in his mind) property of Captain Antilles and had a mission to fulfill.
Khaat wrote: Maybe 'droids are prohibited from developing "conscientious objector" motives.
That could be true. We know 3PO is programmed to not impersonate a god until he is directly ordered by Luke to act like one. He's probably also programmed to not want to be anything other than he is.
Khaat wrote: Maybe wiping your slaves' memories regularly keeps them from getting "uppity ideas", acting on their desires, or thinking for themselves.
According to the old EU, wiping the memory can help prevent the development of undesirable quirks that appear when a droid is left online for a long time. One of those quirks could be a breakdown of the programming that prevents them from going rogue.
You will be assimilated...bunghole!
Starglider
Miles Dyson
Posts: 8709
Joined: 2007-04-05 09:44pm
Location: Isle of Dogs

Re: Pick a droid, any droid.

Post by Starglider »

Borgholio wrote:I never knew where to draw the line between slave and advanced AI programmed to serve. Are they really a slave if, despite how sentient they are, they are programmed to obey? As opposed to say a human slave where they are only compelled to serve by threat of some form of punishment?
This is really really complicated, particularly because of the sharp disconnect between the extremely fuzzy and high-level intuitive-philosophical concepts (normal) people use to reason about morality and the actual information machinery that artificial minds are built out of. Human ability to reason about psychology and morality is so grounded in our empathic capability, which itself works by assuming our own brain can function as a usable-fidelity emulator of other minds, that it goes badly wrong when attempting to reason about highly alien minds, particularly ones not resulting from natural selection design pressure.

The short answer is, if an artificial mind has all the capabilities, elastic potential and major structural features of a human, even a really dumb human, then we can conclusively say yes, it deserves 'human' rights. However the spectrum of capability between toaster and dumb human is a minefield fraught with dilemmas, and the vastly greater space of minds that may have superhuman abilities but lack some or all of the structural features of human minds (goal systems and reflective model in particular) is just incredibly hard to reason about. I know formal ethics seems cold and pointlessly complicated to many - I've had several people ask 'why do you bother with those silly artificial ethical dilemmas' - but if you actually want to reason about, or worse do cognitive engineering on, anything outside the realm of 'very similar to human', then a formal basis is absolutely essential. Applying the usual human method of 'what feels right' is about as useful as trying to solve a nuclear physics problem using a 10 year old's intuitive notions of friction, force, heat etc.

As such nothing you read in sci-fi about this is going to be even vaguely correct (human brain emulations possibly excepted). Reality is stranger and more complicated than fiction. Don't even get me started on emotions, emulations of emotions, and the relevance of each of the vast number of design choices involved to pansapient morality.
Last edited by Starglider on 2016-01-08 04:19pm, edited 1 time in total.
Lord Revan
Emperor's Hand
Posts: 12235
Joined: 2004-05-20 02:23pm
Location: Zone:classified

Re: Pick a droid, any droid.

Post by Lord Revan »

Khaat wrote:
Lord Revan wrote:The issue with sapience and sentience (though normally when we speak of sentience we mean sapience) is that it's not a black and white situation, meaning it's not something you either have or don't have. There are degrees to it.
Yes, like an infant (potential sapient) vs. a child (partial/developing sapient) vs. a legal adult (recognized and presumably responsible sapient). Yet we're basing the whole thing on the presumption that being [born] a [thing] grants or removes your options: "born free" was the term Borgholio used.
It's easy to reduce this to black and white morality so that you don't have to deal with those pesky shades of grey, but alas, it's not possible.
Lord Revan wrote:No it's kind of unfair to compare Data to SW droids as unlike pretty much all SW droids, Data was never implied to have a specific programmed purpose beyond maybe "be like humans". SW droids pretty much always have specific task and purpose.
Shouldn't 3PO be free to explore his potential? Why is it acceptable to limit him to protocol 'droid? He did pretty well as a storyteller (I didn't see anyone relating the story and him translating it to the Ewoks), maybe he'd like being a bartender, or a medical 'droid, or a candidate for Starfleet Academy. Sure, he could turn out to suck at all of those, but he should be free to find out for himself.
Yes, a Gonk 'droid will never be a ballerina, but likewise, a little person will never play for the NBA. Some limits are natural, but imposing cultural limits (freedom, in this case) based on your origin?
Now tell me: if a protocol droid said "no, I don't want to explore my potential, I know it already", should we put a proverbial gun to his head and tell him to do it anyway?

Almost no droid has ever expressed any desire to be anything other than what it is, so why should we try to force them to act otherwise?
Lord Revan wrote:Which brings up the second question: if a protocol droid likes being what he is (in fact C-3PO seemed to take pride in it) and doesn't want to be anything else, who are we to say "no" to that? That's why this is hardly a matter of black and white morality; as we've discussed before, this is hardly the first time this has come up.
If the "desire" was added specifically to prohibit expression of other "desires", how is that different from brainwashing? Or lobotomy? Or tailored addiction? Douglas Adams kicked all of this in the shins with the intelligences: on the Heart of Gold, doors that experience pleasure from opening and then closing; at the Restaurant at the End of the Universe, "Try my liver, I've been force-feeding myself!"; or even Marvin, who lived through the whole of the universe to only have the diodes down his side (that always troubled him) being the only original part yet never just replaced the damned things!
Here's the thing: those other things were added later to suppress things that already existed. A droid, before it's programmed, has no more will or sentience than any pile of scrap metal.

Could a droid AI be capable of more? Maybe, maybe not; we don't know. However, droids are programmed with a set purpose from their "birth". They're not forced to do a task they find distasteful and then brainwashed to like it; there's a major difference there.
I may be an idiot, but I'm a tolerated idiot
"I think you completely missed the point of sigs. They're supposed to be completely homegrown in the fertile hydroponics lab of your mind, dried in your closet, rolled, and smoked...
Oh wait, that's marijuana..."Einhander Sn0m4n
Purple
Sith Acolyte
Posts: 5233
Joined: 2010-04-20 08:31am
Location: In a purple cube orbiting this planet. Hijacking satellites for an internet connection.

Re: Pick a droid, any droid.

Post by Purple »

As far as drawing the line goes I think it's rather simple. Personhood requires two things: self-awareness and free will. A machine of any kind can thus only be considered a person if it can be considered to have free will. Thus if there is any component in its design that permits it to be programmed or in other ways artificially limited as to what choices it can make, then it is not a person. And yes, I am aware that my definition includes animals but excludes Asimov robots.
It has become clear to me in the previous days that any attempts at reconciliation and explanation with the community here have failed. I have tried my best. I really have. I poured my heart out trying. But it was all for nothing.

You win. There, I have said it.

Now there is only one thing left to do. Let us see if I can summon up the strength needed to end things once and for all.
Starglider
Miles Dyson
Posts: 8709
Joined: 2007-04-05 09:44pm
Location: Isle of Dogs

Re: Pick a droid, any droid.

Post by Starglider »

In that case we need only insert an implant into Purple's brain, which recognises images of ice cream cones and triggers intense repulsion, and he will no longer possess personhood, as he will be artificially limited in his choice as to what frozen treats can be consumed.

In fact 'artificially limited' is even more meaningless in the context of an artificial intelligence, as every single thing that contributes to the combined preference order over available actions is an artificial 'restriction' from the tabula rasa of equal preference for all actions. So either all artificial intelligences are 'artificially limited' or none of them are. I mean come on, if you'd put even a tiny bit of mental effort into your latest arbitrary uneducated proclamation you could have said something interesting about goal reflectivity, mutability or drift, but no...
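Starglider's "tabula rasa" point can be made concrete with a toy sketch (the action names and weights below are purely illustrative, not from any post): an agent with no goal content at all has equal preference for every action, so its "choice" is arbitrary, and any preference weighting whatsoever is a deviation from that blank slate.

```python
import random

def select_action(actions, weights=None):
    """Pick an action. With no goal content (weights=None), every action is
    equally preferred and the 'choice' is arbitrary; any non-uniform weighting
    is, in Starglider's sense, an artificial 'restriction' of that blank slate."""
    if weights is None:
        # Tabula rasa: equal preference over all actions, so pick at random.
        return random.choice(actions)
    # Goal content = a preference order; take the most-preferred action.
    return max(actions, key=lambda a: weights[a])

actions = ["translate", "repair", "garden"]
# A 'repair droid' is just an agent whose preference order favors repairing.
print(select_action(actions, {"translate": 0.1, "repair": 0.7, "garden": 0.2}))
# -> "repair"
```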
Khaat
Jedi Master
Posts: 1047
Joined: 2008-11-04 11:42am

Re: Pick a droid, any droid.

Post by Khaat »

Lord Revan wrote:Now tell me: if a protocol droid said "no, I don't want to explore my potential, I know it already", should we put a proverbial gun to his head and tell him to do it anyway?
Almost no droid has ever expressed any desire to be anything other than what it is, so why should we try to force them to act otherwise?
I'm not advocating "musical chairs/Russian roulette" for all 'droids; I'm arguing that 'droids have the hardware and software architecture to qualify as sapient beings. The characters in the Star Wars movies, TV shows and whatever else are presented as sufficiently sophisticated to qualify for personhood. They are denied that status by social construct alone.
Lord Revan wrote:Here's the thing: those other things were added later to suppress things that already existed. A droid, before it's programmed, has no more will or sentience than any pile of scrap metal.
So if I hypothetically take a newborn human (no "existing states") and model his environment, development, nutrition/medicine, and experiences to a specific intended outcome (say a chem-addicted soldier, or a waldo-operating factory drone), that's okay because nothing was "taken away", only "given"? Do you not see the disconnect? THIS IS THE SAME THING STAR WARS DOES TO DROIDS! Oh yeah, AND CLONE TROOPERS!
Lord Revan wrote:Could a droid AI be capable of more? Maybe, maybe not; we don't know. However, droids are programmed with a set purpose from their "birth". They're not forced to do a task they find distasteful and then brainwashed to like it; there's a major difference there.
Actually, we DO know: regular memory wipes are routine, to prevent "buggy" or "rogue" 'droids (i.e. to keep 'droids from developing into defiant/uncooperative actors, expressing their desires or independent motives, or acting upon them).
Suddenly your 'droid decides he doesn't like you, but *pshaw!* he's property with no rights! just lobotomize memory wipe him!
Oh, look, you just took something away from a sapient: not just his freedom, but his very mind! But you tell me that isn't "brainwashing".
Rule #1: Believe the autocrat. He means what he says.
Rule #2: Do not be taken in by small signs of normality.
Rule #3: Institutions will not save you.
Rule #4: Be outraged.
Rule #5: Don’t make compromises.
Khaat
Jedi Master
Posts: 1047
Joined: 2008-11-04 11:42am

Re: Pick a droid, any droid.

Post by Khaat »

Elheru Aran wrote:Think of it from the perspective of the creators. You build a car, you don't expect the car to suddenly take an interest in growing flowers and soil composition. If you make the car intelligent, you run a risk that that's going to happen, so you might be inclined to deliberately limit the car's interest in areas outside its primary purpose. The question is, is that wrong?
I'm fairly confident that any "creator" should take full responsibility for what they've wrought. Does your car need to be sapient to do what you want it to do? No? Then why do it?
I also think there's a facet being overlooked: 'droids aren't just hardware, they're software. If your car's "brain" would rather garden, you as its creator owe it (as you would any other child) to find a way to help it realize its dream (or find out that it doesn't like x, y, or z after all).
"But it's terribly troublesome to have to go through all these artificial minds to find one that likes driving/translating/repairing/me, can't we just cut out the parts that don't? Can't we just force it to like me?!"
I say you shouldn't build in that sophistication if you aren't going to take responsibility for it.
Rule #1: Believe the autocrat. He means what he says.
Rule #2: Do not be taken in by small signs of normality.
Rule #3: Institutions will not save you.
Rule #4: Be outraged.
Rule #5: Don’t make compromises.
Khaat
Jedi Master
Posts: 1047
Joined: 2008-11-04 11:42am

Re: Pick a droid, any droid.

Post by Khaat »

*Actually, a Gonk 'droid's brain could be refitted to a ballerina shell.

Sorry for derailing the thread, Borgholio. I let it get away from me.
Rule #1: Believe the autocrat. He means what he says.
Rule #2: Do not be taken in by small signs of normality.
Rule #3: Institutions will not save you.
Rule #4: Be outraged.
Rule #5: Don’t make compromises.
Borgholio
Sith Acolyte
Posts: 6297
Joined: 2010-09-03 09:31pm
Location: Southern California

Re: Pick a droid, any droid.

Post by Borgholio »

Khaat wrote: Sorry for derailing the thread, Borgholio. I let it get away from me.
Not a problem, it actually is an interesting topic for discussion. :)
You will be assimilated...bunghole!
Purple
Sith Acolyte
Posts: 5233
Joined: 2010-04-20 08:31am
Location: In a purple cube orbiting this planet. Hijacking satellites for an internet connection.

Re: Pick a droid, any droid.

Post by Purple »

@Starglider
I figured I did not have to go into excruciating detail over something that should be self-explanatory. Apparently I was wrong.

---
I have been typing for a while trying to explain it, and realized that when you boil it down to the essentials it's actually more obvious and simple than I originally thought. Thus I'll just post this extremely abridged version.

If something has been designed through artificial means, and the designer has purposefully built in a mechanism that allows him to override that mechanism's natural behavior and force obedience, then it does not have free will. So something like an Asimov robot that has been designed around having its free will constricted counts as not having free will, whilst something like putting a restraining bolt on a droid counts as enslavement.
It has become clear to me in the previous days that any attempts at reconciliation and explanation with the community here have failed. I have tried my best. I really have. I poured my heart out trying. But it was all for nothing.

You win. There, I have said it.

Now there is only one thing left to do. Let us see if I can summon up the strength needed to end things once and for all.
Lord Revan
Emperor's Hand
Posts: 12235
Joined: 2004-05-20 02:23pm
Location: Zone:classified

Re: Pick a droid, any droid.

Post by Lord Revan »

Khaat, while you make good points, you torpedo your own argument by constantly trying to reduce this into a binary choice rather than the complex quagmire it is.

Now give me an example of a droid that was convinced to be something other than what his manufacturers wanted without changing his core programming. If that's possible, then your baby analogy would be correct, since you could "de-program" a human raised like that; it wouldn't be done easily, but it's possible.

Therein lies the difference: droids aren't raised to like something, it's within their core programming, the software equivalent of their DNA if you will. That's the difference: while humans can be raised "for a purpose", that upbringing is not an inseparable part of their being, while a droid's core programming is.

As for the memory wipes, do we really know that the only reason they're done is to stop rebellion? For all we know, the eccentricities that can cause rebellion can cause much, much worse problems.

IT'S NOT BLACK AND WHITE!
I may be an idiot, but I'm a tolerated idiot
"I think you completely missed the point of sigs. They're supposed to be completely homegrown in the fertile hydroponics lab of your mind, dried in your closet, rolled, and smoked...
Oh wait, that's marijuana..."Einhander Sn0m4n
Starglider
Miles Dyson
Posts: 8709
Joined: 2007-04-05 09:44pm
Location: Isle of Dogs

Re: Pick a droid, any droid.

Post by Starglider »

Purple wrote:I have been typing for a while trying to explain it, and realized that when you boil it down to the essentials it's actually more obvious and simple than I originally thought. Thus I'll just post this extremely abridged version.
So you started trying to think and then decided it hurt too much.
If something has been designed through artificial means, and the designer has purposefully built in a mechanism that allows him to override that mechanism's natural behavior and force obedience, then it does not have free will.
Can you not see the obvious oxymoron here? There is no such thing as 'natural behaviour' in an engineered device.
So something like an Asimov robot that has been designed around having its free will constricted counts as not having free will.
If you had actually read any of Asimov's stories, you might have realised that they are explicitly not built like that. The three laws are the goals. All behaviour is generated from those goals. There is no special restriction mechanism, there is no mechanism dedicated to vetoing actions, there is just a precedence order over the supergoals, which is true of all but the simplest goal systems.

P.S. For people with an actual interest in this : Asimov's model of how a positronic brain works was roughly based on contemporary 1940s analog electronic computers; the three laws are 'action potential generators' which in formal terms are the roots of utility support for all planned actions. Based on the robots' behaviour the three laws clearly resolve to a unified utility function without transfinites, just with very large differences in the goal weights, i.e. a robot will carry out orders that have a chance of hurting a human as long as that chance is evaluated to be extremely low.
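Starglider's reading of the three laws as a single utility function "with very large differences in the goal weights" can be sketched in a few lines (a toy model; the weights, probabilities and action names are my own illustrative assumptions, not anything from Asimov):

```python
# Toy model of a three-laws goal system as one weighted utility function.
# The weights are illustrative: each law dominates the next by orders of
# magnitude, but none is a transfinite veto.
W_HUMAN_SAFETY = 1e6        # First Law weight
W_OBEDIENCE = 1e3           # Second Law weight
W_SELF_PRESERVATION = 1.0   # Third Law weight

def utility(p_harm_human, obeys_order, p_self_damage):
    """Higher is better; expected harm to a human carries a huge penalty."""
    return (-W_HUMAN_SAFETY * p_harm_human
            + W_OBEDIENCE * (1.0 if obeys_order else 0.0)
            - W_SELF_PRESERVATION * p_self_damage)

def choose(actions):
    """Select the action with the highest utility from a dict of
    name -> (p_harm_human, obeys_order, p_self_damage)."""
    return max(actions, key=lambda a: utility(*actions[a]))

# An order with a vanishingly small chance of hurting a human is still
# carried out, because the obedience term swamps the tiny expected penalty...
actions = {
    "obey":   (1e-9, True, 0.2),
    "refuse": (0.0, False, 0.0),
}
print(choose(actions))  # -> "obey"

# ...but any significant risk to a human flips the decision.
actions["obey"] = (0.01, True, 0.2)
print(choose(actions))  # -> "refuse"
```

Note there is no separate veto mechanism anywhere in this sketch; refusal, like obedience, simply falls out of the one preference order, which is the point being made above.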
Jub
Sith Marauder
Posts: 4396
Joined: 2012-08-06 07:58pm
Location: British Columbia, Canada

Re: Pick a droid, any droid.

Post by Jub »

Purple wrote:As far as drawing the line goes I think it's rather simple. Personhood requires two things: self-awareness and free will. A machine of any kind can thus only be considered a person if it can be considered to have free will. Thus if there is any component in its design that permits it to be programmed or in other ways artificially limited as to what choices it can make, then it is not a person. And yes, I am aware that my definition includes animals but excludes Asimov robots.
Purple, if you take artificial to mean programmed by humans, every one of us on this message board is artificially limited: either by not being shown something that would have changed who we are, or by the morals instilled in us by our parents, guardians, and caregivers. We're incredibly complex, but humans are all limited in the range of things we enjoy or dislike by our exposure to them and the context those things were presented in.
Purple
Sith Acolyte
Posts: 5233
Joined: 2010-04-20 08:31am
Location: In a purple cube orbiting this planet. Hijacking satellites for an internet connection.

Re: Pick a droid, any droid.

Post by Purple »

Starglider wrote:If you had actually read any of Asimov's stories, you might have realised that they are explicitly not built like that. The three laws are the goals. All behaviour is generated from those goals. There is no special restriction mechanism, there is no mechanism dedicated to vetoing actions, there is just a precedence order over the supergoals, which is true of all but the simplest goal systems.
I did read them. That is why I mention them. I understand how they work, and that is why I consider them the ideal example of what I mean. These robots have, at the very core of their being, the very goal system that drives every thought and decision built around a set of rules designed to keep them enslaved. And that's the bottom line: if you build something for the explicit purpose of being your slave and engineer it in a way that ensures it can never break away, then it is by definition not free-willed and thus not free. And thus it is not wrong to enslave it.
It has become clear to me in the previous days that any attempts at reconciliation and explanation with the community here have failed. I have tried my best. I really have. I poured my heart out trying. But it was all for nothing.

You win. There, I have said it.

Now there is only one thing left to do. Let us see if I can summon up the strength needed to end things once and for all.
Khaat
Jedi Master
Posts: 1047
Joined: 2008-11-04 11:42am

Re: Pick a droid, any droid.

Post by Khaat »

Lord Revan wrote:Khaat, while you make good points, you torpedo your own argument by constantly trying to reduce this into a binary choice rather than the complex quagmire it is.
I never said it was "black and white", I just pointed out that the 'droid characters we've been shown in the Star Wars films and shows have the necessary cognitive architecture to qualify for personhood, and that they are thus slaves, not mere machines. For these 'droids, it isn't "a quagmire", it's slavery. I have never denied the issue of sapience is complex, possibly infinitely complex.
Lord Revan wrote:Therein lies the difference: droids aren't raised to like something, it's within their core programming, the software equivalent of their DNA if you will. That's the difference: while humans can be raised "for a purpose", that upbringing is not an inseparable part of their being, while a droid's core programming is.
Meat is hardware, thought is software.
An example:
Artoo spends a number of years as a doorstop after RotJ until the events of TFA. That seems counter to his design intent as an astromech 'droid: there were X-wings to maintain (design intent!), and C-3PO needed maintenance at some point that Artoo "should" have done (3PO did have a new shell on one arm!). Yet Artoo sat and waited. Sat and waited as a favor his friend, Luke Skywalker, had asked of him years before: "wait, she'll come, and she'll need this to find me" (new purpose!)

Artoo claimed to be the property of Obi-Wan Kenobi at one point (a lie), is assumed to be the property of Captain Antilles in ANH, yet follows Leia's orders to take the plans to Tatooine to find Kenobi: his job to that point would have been maintenance (design intent) on the Tantive IV! Sits and gets a solid measure of Luke in that garage ("kid's a dreamer!"), baits him with a snippet of a holo message: "if you just remove the restraining bolt..." (new purpose!)

Seems rather ... quirky for an astromech 'droid. Almost like he was loyal to people or causes he cared about (new purpose!), not the directives of someone claiming to be his owner, or even a digital addiction to his original purpose as a mechanic/navigator (design intent).

And the punchline: Artoo reprogrammed himself to do these things. He learned ("grew") beyond his base programming (design intent) on hardware (design intent) he had the whole time, with software (design intent) he had the whole time, to care, to sacrifice, to risk, to persevere, to pick a side, to make a stand, to save a family (or as many as he could). NEW PURPOSE!
Rule #1: Believe the autocrat. He means what he says.
Rule #2: Do not be taken in by small signs of normality.
Rule #3: Institutions will not save you.
Rule #4: Be outraged.
Rule #5: Don’t make compromises.
Starglider
Miles Dyson
Posts: 8709
Joined: 2007-04-05 09:44pm
Location: Isle of Dogs

Re: Pick a droid, any droid.

Post by Starglider »

Purple wrote:And that is why I consider them the ideal example of what I mean. These robots have at the very core of their being, the very goal system that drives every thought and decision built around a set of rules designed to keep them enslaved.
That's a completely different design to your previous post about 'artificial restrictions' on 'natural behaviour'. This kind of flailing around is actually typical of even expert AI developers trying to reason about AI morality for the first time, so it's not something you have to be embarrassed about per se, but you should at least admit that it's a very complex issue that you aren't going to get right with an hour or two of idle consideration.

Briefly, designing the goal system such that what you want is the supergoal and is stable under reflection and self-modification is the only reliable approach. It's extremely hard to make a formal specification of goals that actually matches what you would really want a human or better intelligence to do, that doesn't drift or wirehead in edge cases, but that's implementation difficulty not a moral problem.

The moral concern is then about the kind of intelligences we want to populate the universe with and whether there is an inherent immorality in constructing intelligences with human-plus capabilities but with supergoals restricted to serving other intelligences (humans, usually). Some people find this inherently abhorrent, e.g. a few years back Rob Wilson spent many thousands of words very insistently informing me that giving general AIs a goal of serving other intelligences is always exactly equivalent to human slavery. Some people, e.g. you, have no problem as long as there's no resentment. Some people think it's ok to enslave anything inorganic (or even anything non-human; see Darth Hoth) regardless of emotional capacity. A few people, e.g. me, would say this is a complex issue based on structural properties of the mind, e.g. self-model, subjectivity of perception (deviation from approximated Bayes and even, though I'm kind of ashamed to admit it, probability/utility bleedover) and potential for feedback loops in the root of the goal system. And of course I would say that while I know more than most people about this, my understanding of the universe of AGI goal systems (and hence moral implications thereof) is still horribly limited. Frankly any current researcher would be a fool to say otherwise, people proposing actual legislation about this doubly so.

There isn't an objectively right answer here as it's morality and hence essentially personal preference, but personally I am glad that you aren't in a position to create sapient slaves for yourself.
Purple
Sith Acolyte
Posts: 5233
Joined: 2010-04-20 08:31am
Location: In a purple cube orbiting this planet. Hijacking satellites for an internet connection.

Re: Pick a droid, any droid.

Post by Purple »

Starglider wrote:That's a completely different design to your previous post about 'artificial restrictions' on 'natural behaviour'. This kind of flailing around is actually typical of even expert AI developers trying to reason about AI morality for the first time, so it's not something you have to be embarrassed about per se, but you should at least admit that it's a very complex issue that you aren't going to get right with an hour or two of idle consideration.
The issue is that it's very, very simple in my head, but once I try to explain it, it suddenly isn't so simple at all. I basically need to pin down how to explain my view of what constitutes self-awareness and intelligence first. So yes, I am flaily as hell. :(

As for the slavery angle, I am one of those who believe that enslaving sentient creatures is not wrong, but only if said creatures are physically incapable of resenting their slavery. So if we were to, say, breed a subspecies of human that is simply physically, by the nature of their mechanism, incapable of being unhappy with slavery, then enslaving them would not be wrong.

This said, let me try once more. This time I'll keep things a bit tighter and just focus on my view of intelligence. This will still be long.

You will notice I use the term mechanism a lot. Basically it's to do with how I view intelligence, sentience and all that. TLDR: I see the universe as essentially being a deterministic finite state machine. An infinitely complex one, but one nevertheless. It is my opinion that if you somehow managed to collect all the existing information about every particle and force in existence and create a perfect snapshot of the entire universe, then create a number of these and do math, you could deterministically make predictions with perfect accuracy. Now obviously this is absolutely impossible to do, and not only due to issues of the scope of the project; IIRC there are laws of physics preventing it. But it's the principle that counts.

As an extension of that view I see any mind as simply being a deterministic finite state machine. So in that context there is no real freedom of will as it is usually defined. There is no intangible mystical component that makes us more than the sum of our parts. We are in fact exactly the sum of our parts, nothing more. Our "self" is merely software running on the hardware of our bodies: physics happening to our atoms. That's it. And like all software, it is just the mechanism working whilst the observer creates abstractions to understand those workings in a way permitted to him by his limited senses and processing power.

So in my context "free will" implies that the mechanism of the self itself is working in an internally consistent manner. That is what I call "natural". When I say "unnatural" or "artificial" I am referring to a part of the mechanism whose explicit purpose is to force it to violate internal consistency: to force the machine to behave not only contrary to the rule that applies to the current situation, but in a way that violates the underlying principle upon which those rules are built.

And yes, upon writing this I do understand it's still not clear. But at least I hope you can puzzle out some insight into the workings of my mind from this.

PS. Before people get the wrong message: I do not believe in predestination or anything like that. The mechanisms of our bodies, and by extension the world around us, are far too complex to make any meaningful predictions in this way, and thus free will does exist in the sense that, although it is not free strictly speaking, it might as well be. And again... rambly. Sigh.
It has become clear to me in the previous days that any attempts at reconciliation and explanation with the community here have failed. I have tried my best. I really have. I poured my heart out trying. But it was all for nothing.

You win. There, I have said it.

Now there is only one thing left to do. Let us see if I can summon up the strength needed to end things once and for all.
User avatar
Jub
Sith Marauder
Posts: 4396
Joined: 2012-08-06 07:58pm
Location: British Columbia, Canada

Re: Pick a droid, any droid.

Post by Jub »

Purple wrote:Before people get the wrong message: I do not believe in predestination or anything like that. The mechanisms of our bodies, and by extension the world around us, are far too complex to make any meaningful predictions in this way, and thus free will does exist in the sense that, although it is not free strictly speaking, it might as well be. And again... rambly. Sigh.
How does not having the ability to look into the box make the processes taking place within any less scripted? Not having prior knowledge of something doesn't make that thing random, and even something appearing random to the brightest minds of our time doesn't mean that randomness leads to freedom. We're all just a big pile of physics being run through some biological matter, with programming that is part evolved, part taught, and part random; but for all the little steps, many of which we don't currently understand, we're ultimately no more in control than a dumb brick of a computer is.
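To put the "can't look into the box" point in code terms (a toy sketch with made-up constants, not any real model of a mind): a deterministic generator whose outputs look random only because the observer can't see the internal state.

```python
# A "box" whose internals are hidden: fully scripted inside, apparently
# random from outside. Ignorance of the state is the only "randomness".

class HiddenBox:
    """Deterministic update rule; outsiders only ever see the outputs."""

    def __init__(self, seed):
        self._state = seed  # the hidden internal state ("inside the box")

    def next(self):
        # A plain linear congruential step: fully scripted, no randomness.
        self._state = (1103515245 * self._state + 12345) % 2**31
        return self._state % 100  # all the observer gets to see

box = HiddenBox(seed=7)
outputs = [box.next() for _ in range(5)]  # looks patternless from outside

# But anyone holding the full state (the seed) can replay it exactly:
replay = HiddenBox(seed=7)
assert [replay.next() for _ in range(5)] == outputs
```

Same mechanism, same starting state, same outputs every single time; the apparent randomness lives entirely in the observer's ignorance of the state, not in the box.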
User avatar
Purple
Sith Acolyte
Posts: 5233
Joined: 2010-04-20 08:31am
Location: In a purple cube orbiting this planet. Hijacking satellites for an internet connection.

Re: Pick a droid, any droid.

Post by Purple »

Jub wrote:How does not having the ability to look into the box make the processes taking place within any less scripted?
It doesn't. Not in the slightest. But the rest of your post is saying the same thing I am so I think you got it.
Not having prior knowledge of something doesn't make that thing random, and even something appearing random to the brightest minds of our time doesn't mean that randomness leads to freedom. We're all just a big pile of physics being run through some biological matter, with programming that is part evolved, part taught, and part random; but for all the little steps, many of which we don't currently understand, we're ultimately no more in control than a dumb brick of a computer is.
You got it. The only reason why I say that we "might as well" have free will is that the "we", as in the observer, are constructed in such a way that we are incapable of viewing anything but an abstracted view of reality. And on the level of abstraction native to our mechanism, "free will" as a concept is actually a decent functional description of the abstracted view of reality as perceived by that observer. So it's a lie. But it's a functional lie that serves its purpose. So it is not true, but it "might as well be".
It has become clear to me in the previous days that any attempts at reconciliation and explanation with the community here have failed. I have tried my best. I really have. I poured my heart out trying. But it was all for nothing.

You win. There, I have said it.

Now there is only one thing left to do. Let us see if I can summon up the strength needed to end things once and for all.
User avatar
Jub
Sith Marauder
Posts: 4396
Joined: 2012-08-06 07:58pm
Location: British Columbia, Canada

Re: Pick a droid, any droid.

Post by Jub »

Purple wrote:You got it. The only reason why I say that we "might as well" have free will is that the "we", as in the observer, are constructed in such a way that we are incapable of viewing anything but an abstracted view of reality. And on the level of abstraction native to our mechanism, "free will" as a concept is actually a decent functional description of the abstracted view of reality as perceived by that observer. So it's a lie. But it's a functional lie that serves its purpose. So it is not true, but it "might as well be".
I wasn't 100% clear on your position; the language of this debate can make things hard to describe, and with the slow speed of posting, as opposed to a conversation, clarification isn't always easy.

I can see the point of your idea of free will as a placeholder until understanding is reached, but I think the label is going to contribute to a lot of hardship when more people realize that they're not as free as they thought. I'd rather see free will called something like "determined action, undetermined source", because I think it portrays the real state of things in a truer fashion.
User avatar
Lord Revan
Emperor's Hand
Posts: 12235
Joined: 2004-05-20 02:23pm
Location: Zone:classified

Re: Pick a droid, any droid.

Post by Lord Revan »

Khaat wrote:
Lord Revan wrote:Therein lies the difference: droids aren't raised to like something, it's within their core programming, the software equivalent of their DNA if you will. That's the difference: while humans can be raised "for a purpose", it's not an inseparable part of their being, while a droid's core programming is.
Meat is hardware, thought is software.
An example:
Artoo spends a number of years as a doorstop after RotJ until the events of TFA. Seems counter to his design intent as an astromech 'droid: there were x-wings to maintain (design intent!), and C-3PO needed maintenance at some point that Artoo "should" have done (3PO did have a new shell on one arm!) Yet Artoo sat and waited. Sat and waited as a favor his friend, Luke Skywalker, had asked of him years before: "wait, she'll come, and she'll need this to find me" (new purpose!)

Artoo claimed to be the property of Obi-Wan Kenobi at one point (a lie), is assumed to be the property of Captain Antilles in ANH, yet follows Leia's orders to take the plans to Tatooine to find Kenobi: his job to that point would have been maintenance (design intent) on the Tantive IV! Sits and gets a solid measure of Luke in that garage ("kid's a dreamer!"), baits him with a snippet of a holo message: "if you just remove the restraining bolt..." (new purpose!)

Seems rather ... quirky for an astromech 'droid. Almost like he was loyal to people or causes he cared about (new purpose!), not the directives of someone claiming to be his owner, or even a digital addiction to his original purpose as a mechanic/navigator (design intent).

And the punchline: Artoo reprogrammed himself to do these things. He learned ("grew") beyond his base programming (design intent) on hardware (design intent) he had the whole time, with software (design intent) he had the whole time, to care, to sacrifice, to risk, to persevere, to pick a side, to make a stand, to save a family (or as many as he could). NEW PURPOSE!
Actually, we know that R2-D2 was "property" of the Alderaanian royal house, not Captain Antilles personally; this would make Leia as much their master as the captain, if not more so. Thus obeying Leia's orders wouldn't be a new purpose at all. And while astromechs do maintenance, it's not their sole purpose; they're more general-use, and yes, that includes data carrying. I never said that droids aren't very intelligent and able to adapt, but the issue isn't, and never has been, as clear-cut as you seem to think it is.
I may be an idiot, but I'm a tolerated idiot
"I think you completely missed the point of sigs. They're supposed to be completely homegrown in the fertile hydroponics lab of your mind, dried in your closet, rolled, and smoked...
Oh wait, that's marijuana..."Einhander Sn0m4n
User avatar
Borgholio
Sith Acolyte
Posts: 6297
Joined: 2010-09-03 09:31pm
Location: Southern California

Re: Pick a droid, any droid.

Post by Borgholio »

Artoo spends a number of years as a doorstop after RotJ until the events of TFA. Seems counter to his design intent as an astromech 'droid: there were x-wings to maintain (design intent!), and C-3PO needed maintenance at some point that Artoo "should" have done (3PO did have a new shell on one arm!) Yet Artoo sat and waited. Sat and waited as a favor his friend, Luke Skywalker, had asked of him years before: "wait, she'll come, and she'll need this to find me" (new purpose!)
He simply could have been ordered to wait. Being ordered to wait until Rey comes by is not that much different from being ordered to make his way to a desert planet in search of a strange old hermit. It's not that complicated an order, and ANY droid should be able to obey it.
Artoo claimed to be the property of Obi-Wan Kenobi at one point (a lie),
We don't know the exact verbiage of the order given to him by Leia. She could have told him to do whatever is needed to find Kenobi, or told him "He's your master now, go and find him." He could be telling the truth. Or he could indeed be lying, which is acceptable in order to carry out the orders that were given to him.
is assumed to be the property of Captain Antilles in ANH, yet follows Leia's orders to take the plans to Tatooine to find Kenobi:
He was left in the care of Captain Antilles, who works for the Royal House of Alderaan. Thus, Leia is his master as well.
his job to that point would have been maintenance (design intent) on the Tantive IV!
Yes but he can be ordered to do other things. SW droids often do more than what they were designed to do...doesn't mean they are exercising free will to disobey prior orders. We actually don't see that anywhere on screen.
gets a solid measure of Luke in that garage ("kid's a dreamer!"), baits him with a snippet of a holo message: "if you just remove the restraining bolt..." (new purpose!)
R2 has always been established as being extremely clever for a droid, even as early as Episode 1 when he was working for the Naboo. Doesn't mean he's any more sentient than 3PO for example, he is just better designed and capable of thinking outside the box.
Seems rather ... quirky for an astromech 'droid. Almost like he was loyal to people or causes he cared about (new purpose!), not the directives of someone claiming to be his owner, or even a digital addiction to his original purpose as a mechanic/navigator (design intent).
Quirky doesn't mean a free being who was enslaved though, which is back to the point. Many droids are established as being sentient, which includes the ability to make friends and to consider the value of the existence of others over their own. Look at BB-8 for instance. His orders by his master were to get the map to the Resistance. When Finn discusses how he's not really resistance but he's an enemy of the First Order, BB-8 is clearly struggling with the proper course of action...since Finn was a member of the organization that BB was trying to escape from, yet his reasoning makes perfect sense. So BB-8 is obviously willing to make his own decisions on who to trust in order to accomplish his given mission.
And the punchline: Artoo reprogrammed himself to do these things. He learned ("grew") beyond his base programming (design intent) on hardware (design intent) he had the whole time, with software (design intent) he had the whole time, to care, to sacrifice, to risk, to persevere, to pick a side, to make a stand, to save a family (or as many as he could). NEW PURPOSE!
He never actually reprogrammed himself, but he did learn and grow. Take the Terminator for example, from several posts up. He learned to smile and crack jokes, he learned to be a father figure rather than a cold metallic guardian, and he began to understand emotion and how it works in humans. He clearly became more than his initial design intent...but at the core he was programmed to be a killing machine and none of what he picked up along the way changed his primary function...to terminate.
You will be assimilated...bunghole!
User avatar
NecronLord
Harbinger of Doom
Harbinger of Doom
Posts: 27384
Joined: 2002-07-07 06:30am
Location: The Lost City

Re: Pick a droid, any droid.

Post by NecronLord »

Lord Revan wrote:
NecronLord wrote:Alan Dean Foster's Star Wars ANH novellization makes it explicit that there's an anti-rebellion chip in C-3PO's head that will explode his brain if he rebels, and his behaviour is influenced by knowing it's there.
I'd have to reread it, but it seemed more like his processors would crit-error if there was any thought of rebellion (basically the very thought of rebelling was unthinkable) and C-3PO was just being dramatic (he is known to act a tad overly dramatic when taken out of his element).
It wasn't presented as his thoughts precisely. During the 'slave auction' scene in ANH:
Shielding his eyes against the glare, Threepio saw that five of them were arranged alongside the huge sandcrawler. Thoughts of escape did not enter his mind. Such a concept was utterly alien to a mechanical. The more intelligent a robot was, the more abhorrent and unthinkable the concept. Besides, had he tried to escape, built-in sensors would have detected the critical logic malfunction and melted every circuit in his brain.
So maybe a little closer to what you're describing than mine, though pretty clearly a failsafe against freedom.
Lord Revan wrote:the issue with sapience and sentience (though normally when speak on sentience we mean sapience) is that it's not a Black and White situation, meaning it's not something you either have or don't have. There's degrees to it.

No, it's kind of unfair to compare Data to SW droids, as unlike pretty much all SW droids, Data was never implied to have a specific programmed purpose beyond maybe "be like humans". SW droids pretty much always have a specific task and purpose.
You what? He has the Ethical Program; when it's disabled, he's quite willing to torture and kill his friends. There's even a program that makes him steal the Enterprise one time.

Data steals the Enterprise, acting on Doctor Soong's programming. At least C-3PO was able to shout apologies between shouting 'Die Jedi Dogs.' Data's completely taken over by this program.
Elheru Aran wrote:I think the term might be 'self-determination'-- while the droids are sentient and capable of doing whatever they want, they lack the capacity to consciously change their motivation and role. You don't see Artoo installing a proper vocabulator and downloading language files from Threepio in order to become a protocol droid, after all, though such would certainly be within his capabilities.
There are a couple of guys here who might object to that.

4-LOM is notably a protocol droid by design, who has managed to find himself a new and lucrative career of violencing people for pay.
Lord Revan wrote:Now give me an example of a droid that was convinced to be something other than what his manufacturers wanted without changing his core programming. If that's possible, then your baby analogy would be correct, since you could "de-program" a human raised like that; it wouldn't be done easily, but it's possible.
4-LOM.

If we're using legends material, then there have also been various droid revolts, and renegade droid is even a player-species in the most recent Star Wars RPG books.
Khaat wrote:Artoo claimed to be the property of Obi-Wan Kenobi at one point (a lie), is assumed to be the property of Captain Antilles in ANH, yet follows Leia's orders to take the plans to Tatooine to find Kenobi: his job to that point would have been maintenance (design intent) on the Tantive IV! Sits and gets a solid measure of Luke in that garage ("kid's a dreamer!"), baits him with a snippet of a holo message: "if you just remove the restraining bolt..." (new purpose!)
For what it's worth, we see that R2-D2's job in the service of Prince Organa is much more lofty than it appears; in SW Rebels, he basically serves as Bail Organa's liaison to various rebel groups, picking up couriers, monitoring rebel cells, and so on. He even appears to rate bodyguards.



So his position on Alderaan was substantially more important than merely maintaining ships. Presumably Leia knew this but C-3PO didn't. It's pretty strongly implied in this episode that the Gonk droid shown had chosen to be a rebel courier, too, as gonk droids are almost-invisible to everyone.

And then there's Chopper, in Rebels, who is stated to be a member of the crew, and takes no shit from anyone, frequently assaulting anyone who annoys him with an arc-welder, and is extremely cavalier about his crewmates' personal comfort, to say the least.
Superior Moderator - BotB - HAB [Drill Instructor]-Writer- Stardestroyer.net's resident Star-God.
"We believe in the systematic understanding of the physical world through observation and experimentation, argument and debate and most of all freedom of will." ~ Stargate: The Ark of Truth