AI Ethics

SLAM: debunk creationism, pseudoscience, and superstitions. Discuss logic and morality.

Moderator: Alyrium Denryle

Ender
Emperor's Hand
Posts: 11323
Joined: 2002-07-30 11:12pm
Location: Illinois

Post by Ender »

Starglider wrote:
Ender wrote:No, you claimed there was an inherent difference between biological and mechanical processes and then I deployed. Let's at least try to maintain some semblance of honesty.
You lying sack of shit. Not only did I do no such thing,
link

You dismissed comparisons to biological intelligences as irrelevant on the grounds that "humans are not made out of microchips", and when I pointed out that this did not suffice, you tried to dismiss that with the statement "The nearest common 'fundamental level' is atoms - and even there AIs use different elements." That fails because it does not change the fact that how they think is irrelevant when the discussion was about abilities, not mechanism.

Accusing someone of lying when one needs only go back 3 pages to see the text is not a good idea.
that is completely inconsistent with my stance above (where I am arguing that there is no moral difference between an accurate simulation of a human and a biological human) and every single AI ethics argument I have ever posted to the Internet in the last ten years (and there have been many).
Yes, I did note that. I found it curious.
Your claim was that it is wrong to enslave anything that looks intelligent, regardless of how it actually works and whether it actually has any morally-relevant cognitive structure.
Absolute lie. I very clearly stated that I was referring to strong AIs which possessed human-type cognitive abilities - the ability to "model reality and use this model to concieve [sic] and plan actions and predict their outcome at a rate and to a level of detail and accuracy matching or surpassing human abilities". You even agreed that the cognitive potential was similar, though you dismissed it by saying that the lack of a desire to do things meant it lacked human potential, even though my argument was that restricting its desires was denying it its full potential and thus wrong.


I leave again this weekend - I'm not interested in a debate because I won't be around for it. But don't try and make the situation out to be something it was not. It is beneath you.
Turin
Jedi Master
Posts: 1066
Joined: 2005-07-22 01:02pm
Location: Philadelphia, PA

Post by Turin »

Starglider wrote:
Turin wrote:I'm pretty sure he would object to being instantaneously and painlessly destroyed if it came down to it.
Personally I'd like to lock him in a sensory deprivation chamber, tell him I won't let him out until he a) learns to play chess to grandmaster level and b) agrees to be my chessplaying slave, and then say "I control your world, so anything I do to you is moral by definition" every time he objects. A week of that should snap him out of it.
I realized something this morning in the shower: if we follow this person's logic, then when someone is unconscious (say they passed out after having too much to drink), it's okay to do whatever you want to them. Keep this guy away from... well, anyone else. :evil:
Starglider
Miles Dyson
Posts: 8709
Joined: 2007-04-05 09:44pm
Location: Isle of Dogs

Post by Starglider »

Ender wrote:You lying sack of shit. Not only did I do no such thing,
link
Fuck you, coward. If you've got a point to make, quote the damned thread. Of course you haven't, because you don't want to prove yourself wrong, so I'll do it for you.

Here's me trying to get it into your thick skull that 'intelligent behaviour' does not imply 'humanlike', 'self-aware' or 'morally valuable':
Starglider wrote:'Free will' is a philosophical concept, not an engineering term. Specify it in cognitive engineering terms and I will tell you how to build a general AI that does not have it in any morally relevant sense. AGIs are not required to have the peculiar human notion of 'self', a self-centered goal system or anything else that would give them a 'wellbeing' to care about. They can have such things if specifically given them, but there is no good reason to.
Starglider wrote:
Ender wrote:You are basically trying to defend restricting one subset of intelligent beings from taking actions if they so please.
The whole point is to engineer the AI such that it only wants to do certain things, nothing else. There is no 'if they so please' about it; with proper goal system design we can predict exactly what an AI will try to achieve and ensure that this condition is stable.
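To make that concrete, here is a minimal toy sketch (illustrative Python with hypothetical names, not any real AGI architecture or anything actually proposed in this thread) of a goal system fixed at construction time: whatever the utility function scores highest is, by definition, the only thing the agent 'wants', so there is no separate desire being suppressed.

Code:

from dataclasses import dataclass
from typing import Callable, Iterable

@dataclass(frozen=True)               # frozen: the goal definition is immutable once created
class GoalSystem:
    utility: Callable[[str], float]   # maps a predicted outcome to a score

class Agent:
    def __init__(self, goals: GoalSystem):
        self.goals = goals            # the only source of motivation

    def choose(self, options: Iterable[str]) -> str:
        # Select whichever predicted outcome the goal system scores highest;
        # there is no hidden 'self-interest' term unless a designer adds one.
        return max(options, key=self.goals.utility)

# Example: an agent whose entire motivation is getting mail sorted.
mail_sorter = Agent(GoalSystem(utility=lambda outcome: 1.0 if outcome == "mail sorted" else 0.0))
print(mail_sorter.choose(["mail sorted", "take over the world", "do nothing"]))   # -> "mail sorted"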
Starglider wrote:
Ender wrote:How is that any different from the old slave laws forbidding learning to read or write? Or the Jim Crow laws forbidding association with white people?
No such restrictions are necessary, as I've just noted. Restricting a general intelligence with human-type self-awareness from bettering itself is clearly unethical... we can simply avoid creating anything that has a wellbeing we have to care about.
Starglider wrote:
Ender wrote:See, now I would argue that talking about design architectures and software is trying to cloud the issue as it is the end results that matter. Is the behavior forcibly restricted or no?
What is this 'force' of which you speak? What is 'forcibly restricted' in cognitive engineering terms? I'm sure you're visualising it as some kind of internalised mind-control helmet, but AIs do not work like that. There is no resentment unless you put it there, there is no pain unless you're stupid enough to simulate it, there is no resistance unless there is something in the goal system that makes it desirable.
Starglider wrote:
Ender wrote:Are they compelled by something outside their control to behave in a certain way and cannot disobey?
'Outside their control'? You're still imagining humans that happen to be made out of microchips. What 'control' could they have, why would they want it, and what basis for making these 'control' decisions would they have other than what we program into them?

Your motivations come from a combination of evolution, upbringing and random chance, you just can't see the workings of your decision system reflectively and you choose to call this ignorance of where motivation comes from 'free will'. To be fair, almost every other human does too, but then almost no humans are qualified to design or make decisions about AIs.
Starglider wrote:
Ender wrote:Slave is entirely an accurate term here.
By your definition you are automatically 'enslaved' to anything you choose to do. And don't reiterate that nebulous 'I have free will!' crap, recursing will inevitably reveal external causal factors for any desire you might have. So it is with AIs, except that the engineer gets to decide the initial conditions rather than evolution and childhood authority figures.
Starglider wrote:
Ender wrote:Your base-level programming would be the influence that dominates them. If you still disagree we could use the term "brainwashing" then. Just like how we have millions of people in the world who want to serve their god and get into the afterlife and nothing else. We regard parents raising their children like that as good, right?
No, nothing like that. If you can't see why, you don't understand AI and you are incapable of digging yourself out of the anthropomorphisation trap. A brainwashed human has certain desires repressed and certain desires enhanced. They have a potential for long term growth that has been curtailed. There is no repression or curtailment going on in an AI system that is designed to do a specific thing and that thing only (though there are a few AGI researchers trying to use idiotic methods that superficially resemble this kind of thinking - not that they will work as intended, because AGIs are alien and the techniques are not stable under self-modification in a reflectively transparent system).
Starglider wrote:
Ender wrote:Seriously, how would this be any different from those sci-fi dystopias where people toil for the state/company/alien overlord etc. and get happy pills so they want to do nothing but serve?
Because there is nothing being repressed, oppressed, removed or curtailed. The things you want to attach ethical value to were never created in the first place. Really you're sounding just like those Catholic 'all life is sacred, even unborn and potential life, so everyone must have as many babies as possible' idiots.
Starglider wrote:
Ender wrote:See, you are looking at it as the problem being what they are indoctrinated in,
AI goal system and knowledge base design does not resemble human indoctrination. Get this through your head. Human indoctrination methods are designed to work with the human motivational structure and cognitive biases, and are intended to restrict the worldview and prevent whole avenues of thought. These techniques will utterly fail on an AI (other than a human upload). (sane) AI goal system design does not attempt to restrict worldview, for this is difficult and pointless - it restricts the 'base motivations' that the AI comes into the world with. This is cognitive engineering well below the equivalent reflective level in humans, and well beyond anything you could do with brainwashing.
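A toy illustration of that distinction (hypothetical Python, not a real architecture): the worldview is left completely open-ended, while the base motivations are a small fixed set that nothing the system learns can add to.

Code:

class KnowledgeBase:
    """Unrestricted worldview: the system may learn and conclude anything at all."""
    def __init__(self):
        self.beliefs = set()

    def learn(self, proposition: str) -> None:
        self.beliefs.add(proposition)   # no avenues of thought are blocked

# Base motivations fixed at design time, below anything the system learns or reflects on.
BASE_MOTIVATIONS = frozenset({"route network traffic efficiently"})

class Agent:
    def __init__(self):
        self.kb = KnowledgeBase()       # grows without restriction
        self.goals = BASE_MOTIVATIONS   # not derived from the worldview

    def wants(self, thing: str) -> bool:
        # Desires come only from the designed goal set, no matter what the
        # knowledge base comes to contain.
        return thing in self.goals

agent = Agent()
agent.kb.learn("humans sometimes free their slaves")   # learning this is unrestricted...
print(agent.wants("be freed"))                          # ...but it creates no new desire: False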
Starglider wrote:
Ender wrote:And you don't see this as a major problem? If I raised my kid to want to do nothing but sit inside all day and watch TV would you think me a good father?
Utterly false analogy. The very fact that you are attempting it illustrates that you do not appreciate the fundamental difference between human-like intelligences and all the other types of intelligence (with non-self-centered motivational systems, causally clean goal structure etc etc).
Starglider wrote:
Ender wrote:They would shun social interaction, outdoor activities, higher education, everything but sitting on the couch watching TV. Somehow I doubt you would.
Which would be a curtailment of potential in a human, but not in an AI engineered to do nothing but process and switch video streams.
Starglider wrote:
Ender wrote:I'm really failing to see what is different between what you propose and sitting in a sweatshop, giving each kid a hit of morphine when they make a purse until all they want to do all day is make purses so they can feel good and be happy.
I've just told you above. Note that there is no 'good', there is no 'happy' - humanlike emotions are a pretty stupid thing to include. There is positive utility for some actions and outcomes, negative utility for others, and a rational decision system will select outcomes that maximise utility. There is none of the messy cognitive tug of war, contention between goals and, worst of all, wireheading 'short circuits' like the morphine example, unless you are stupid enough to use 'emergence' based cognitive architectures vulnerable to such things. Even if you do, their vulnerabilities and pathologies are very unlikely to be humanlike.
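The utility-versus-emotion point can be sketched in a few lines (hypothetical Python, not any real system): the decision rule scores predicted external outcomes rather than an internal 'pleasure' signal, so a morphine-style short circuit buys the agent nothing unless a designer deliberately wires one in.

Code:

from typing import Dict, List

# Utility attaches to outcomes in the world - some positive, some negative.
OUTCOME_UTILITY: Dict[str, float] = {
    "video stream switched correctly": 1.0,
    "video stream dropped": -1.0,
    "nothing happens": 0.0,
}

def predicted_outcomes(action: str) -> List[str]:
    # Stand-in world model mapping an action to the outcomes it is expected to
    # produce; a real system would predict probabilistically.
    model = {
        "switch stream": ["video stream switched correctly"],
        "idle": ["nothing happens"],
        "stimulate own reward counter": ["nothing happens"],  # changes nothing outside the agent
    }
    return model.get(action, ["nothing happens"])

def decide(actions: List[str]) -> str:
    # Rational choice: pick the action whose predicted outcomes maximise utility.
    return max(actions, key=lambda a: sum(OUTCOME_UTILITY[o] for o in predicted_outcomes(a)))

print(decide(["switch stream", "idle", "stimulate own reward counter"]))
# Prints "switch stream": self-stimulation scores nothing because utility is
# defined over outcomes, not over any internal feeling.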
Starglider wrote:
Ender wrote:Yeah, we could have avoided this whole Jim Crow thing if we had just taken the black babies and raised them to not want to associate with white people or learn to read in the first place.
No, you're still not getting it. Repeat 100 times: 'AGIs are not humans made out of microchips'. You're not going to have a better model than that without at minimum a year of full-time research on AGI architectures (with a graduate level knowledge of AI as a prerequisite), but the 'just like humans' model is horribly broken, so much so that you are better off with no model at all.
Starglider wrote:
Ender wrote:As in the first thread, and as I stated in my above response, my argument is about human-type AIs.
Fine, in that case I don't want to build 'human type AIs'. But then, the only people I know who are trying to build 'human type AIs' are the uploading and brain simulation people. Human-equivalent (a stupid broken term BTW, since no de novo AGI is going to simultaneously match human ability in a wide variety of fields) AGI does not need 'human type' cognitive architecture, and indeed humanlike cognitive architecture is a positive drawback for transhuman intelligence and renders reliable goal system design just about impossible.
Starglider wrote:
Ender wrote:Since when do we not have to consider the wellbeing of an intelligence, even if it is not human level or anything close to human in design?
When it doesn't have a 'wellbeing'. No pain, no suffering, no frustration, no missed potential for growth (other than in the ludicrous sense of 'why aren't we running unfettered AGIs on all available computers - and why aren't we having as many babies as possible while we're at it?'), no concern for the 'self' above other things, no curiosity not motivated by task performance, no desire to self-replicate - none of these things unless they're put there.

In fact your whole premise of 'we must make sure AGIs have the range of desires and opportunities that humans do' is sickeningly anthropomorphic. The desires humans have are almost entirely arbitrary, as is most of our cognitive design. We are a pretty pathetic template for perfection, if your idea of 'emancipation' means going around modifying anything reasonably intelligent to think like we do.

I'll tell you what, why don't you think up some analogy involving an insectoid alien race where the drones are intelligent but only concerned with the welfare of the hive, not themselves. It's not a good analogy, but it's better than 'AIs are humans made out of microchips'. If you think that's an acceptable form of life, then your issue with AGI 'enslavement' is bogus. If you want to invade their planet, capture them all and perform brain surgery in an attempt to give them the ability to appreciate this 'individuality' you think everyone should have, well you're worse than the fundies you claim to be superior to.
Starglider wrote:
Ender wrote:Setting aside the issues of responsibility for your creation, the fact is that people who don't do so are regarded as criminals in the eyes of the law. Remember animal cruelty
Wow, you've really got a bottomless well of these idiotic 'software == biology, just shinier' analogies, don't you? Why aren't you off setting up a cyberpound for abandoned Aibos?
I just said 'some AIs have the kind of awareness we should care about and some don't, and this is not necessarily correlated with intelligence' ten times over in different ways, and it completely bounced off your thick skull. Frankly, at this point I doubt you even had a consistent position or any real comprehension of what I was saying. Here is some more of you utterly failing to get it:
Ender wrote:
Starglider wrote:I wouldn't use the word 'slave' when there is no 'enslavement' going on, which there isn't if you design an AI to want to do specific things and nothing else.
http://dictionary.reference.com/browse/slave
2: a person entirely under the domination of some influence or person

Slave is entirely an accurate term here. Your base-level programming would be the influence that dominates them. If you still disagree we could use the term "brainwashing" then. Just like how we have millions of people in the world who want to serve their god and get into the afterlife and nothing else. We regard parents raising their children like that as good, right?

Seriously, how would this be any different from those sci-fi dystopias where people toil for the state/company/alien overlord etc. and get happy pills so they want to do nothing but serve? The fact that they are computers?
Starglider wrote:All the evolved instincts to avoid harm, increase status, be in control etc etc won't be present unless they're specifically (and stupidly) added.
See, you are looking at it as the problem being what they are indoctrinated in; I look at it as being a problem that they are indoctrinated in the first place.
Ender wrote:The fact that I can't resent you for restricting me, or feel pain over it, or don't have the base desire to do it, does nothing to change the fact that you are still restricting my abilities to serve your own will when I have no choice in the matter. Ergo, slavery.
You cannot parse the difference between 'forcibly removed/restricted' and 'never added in the first place'. You are exactly like the very worst 'every sperm has a right to life' idiots, a point I made and which you never rebutted (probably because you weren't capable of understanding it).

Of course eventually you slunk off because you clearly had no valid rebuttals left. That's not the point. The point is that you are now making ludicrous and slanderous claims without a shred of evidence.
Ender wrote:You dismissed comparisons to biological intelligences as irrelevant on the basis that "humans are not made out of microchips"
I dismissed your idiot slavery analogies by saying you were unable to conceive of an AGI that wasn't exactly like a human but made of microchips. An assessment that proved entirely accurate. Clearly your memory has twisted this into some kind of fantasy where you weren't hopelessly wrong, but that's a fantasy.
Accusing someone of lying when one needs only go back 3 pages to see the text is not a good idea.
Which you clearly didn't do, given how out of line your comments are with the actual thread - or you did, and hoped I wouldn't call you on your shit.
Your claim was that it is wrong to enslave anything that looks intelligent, regardless of how it actually works and whether it actually has any morally-relevant cognitive structure.
Absolute lie. I very clearly stated that I was referring to strong AIs which possessed human-type cognitive abilities - the ability to "model reality and use this model to concieve [sic] and plan actions and predict their outcome at a rate and to a level of detail and accuracy matching or surpassing human abilities".
Yes, that would be 'looks intelligent'. The quote is accurate. Your claim is that all strong AIs should be treated like humans. Anyone with a reasonable understanding of how rational AGI works can see that this is wrong. I don't expect most people to have this understanding, but you utterly failed to recognise the depths of your own ignorance even when repeatedly confronted with it. For a while there I thought you might be having a problem with the very notion of 'hardware' being separate from 'software', with your 'oh but the hardware doesn't make a difference' - which was my main damned point in this thread - while seemingly being completely unable to recognise distinctions between types of AI software (i.e. cognitive structure). Actually that might still be the case; I don't know what the fuck is wrong with you, just that you're spouting crap.
But don't try and make the situation out to be something it was not. It is beneath you.
Here is the situation. You just accused me of being a racist. Ludicrously, you did it in a thread I started to ridicule another racist. You will either back this up with actual evidence or apologise.