This train wreck started when someone claimed that the best way to make AIs was to let them evolve in a simulated world, create some sort of test to find out whether they meet your needs (including loyalty), then delete all the ones that fail and implant the ones that pass into robot bodies. Apparently this is moral regardless of how humanlike/sapient these intelligences are. Here are some choice statements of his (bolding mine). Anyone here willing to defend this twit, or care to help in destroying his position?
The solipsist crap at the beginning really should've tipped me off:
While I've seen plenty of people waste time with pointless navel-gazing, this is the first time I've seen someone navel-gaze their way into a rationalisation for torturing and killing an indefinite number of sentient beings for personal benefit and amusement. His true position quickly becomes apparent:

Tarzing wrote:
Um, how do you know reality isn't EXACTLY like that?

Starglider wrote:
So killing sapients is just fine in your book as long as it isn't particularly painful? And it's fine to decree arbitrary lifespans whether they like it or not. OK, I decree that you will have a 40-year lifespan, and then you will have a massive brain haemorrhage and die instantly. Don't bother complaining, it's perfectly moral by your own definition, and since I'm actually an avatar of the intelligence running the simulation you believe is reality, you can't do anything to stop me.

Whitehawke wrote:
I must be missing something here. When the AIs die, some (most?) simply die. The ones who would be thrilled to do a particular job are given the opportunity to do that job. How, exactly, is this immoral?
Actually, oddly enough, I've certainly followed that train of thought before. "Reality" is a kind of test: if you are enlightened enough to figure out the true nature of reality, you pass and leave the reality simulation for the real world. If you don't figure it out, you either get kicked out of the simulation and failed, get deleted, or get reborn into the simulation. Part of the reason I liked (well, liked) this theory is that the world often makes very little sense; you'd have to be pretty stupid to believe in it sometimes.
Oh, what kind of test? A test to see if your mind is strong enough to recognize when it's being deceived. Imagine something like a test for secret agents, who have to have tremendous mental willpower to resist sophisticated interrogation techniques. If you can't figure out that reality isn't real (and the way out of reality), you just suck; you fail.
In the end I decided I liked Buddhist philosophy better (which doesn't attempt to answer questions about the nature of reality, only questions about the nature of the mind).
Yep, being 'shallow' (i.e. sanely designed) means you have no rights, no matter how intelligent you are, how human your emotions are, or whether you have self-awareness.

Tarzing wrote:
It becomes clear that the Intelligences with physical bodies are "real" and the ones without are mere simulations, to be treated as such. To me this morality sounds good enough: an intelligence which can't be separated from the physical realm without killing it has the right to live. Those which can be freely copied, deleted etc. don't have any right to live, because their existence is just so damn shallow. (This would actually make a good definition for "Synthetic AIs": if an AI can be copied whole-cloth into a hardware device, and doesn't have significant capacity to evolve in that device, it can be morally killed towards any ends.)
I start pulling his arguments apart:
He initially presents his notions as semi-hypothetical, but he quickly drops the pretense of "if it works like this... then these conclusions follow" and starts declaring his "slavery is a-ok if I can declare it 'not real'" position to be objective truth.

Starglider wrote:
'Inherently non-linear' = what? Do you mean 'analogue'? If so, the benefits of digital operation massively outweigh those of analogue operation - note that brains are digital in the amplitude domain anyway; they're just analogue in their gate settings and clockless (advanced hardware is probably clockless/asynchronous too).

Tarzing wrote:
I strongly suspect that the innately non-linear nature of hardware (or wetware, if you please) gives it much more processing power than software.
In any case, software is massively more powerful than fixed-function hardware, because it can adapt itself exactly to the task. Completely reconfigurable hardware (i.e. advanced FPGAs) would effectively remove the hardware/software distinction.
This is nonsense. It would have to be a hopelessly awful hardware design not to be able to give a state dump. It would be a nightmare to develop, you couldn't mass-produce it, and it would just be pointless.

Tarzing wrote:
As such, (real) AIs will be bound to bodies; it won't be possible to download, copy or delete them.
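To make the state-dump point concrete, here's a minimal, purely illustrative Python sketch (my own toy example, not anything from the thread; the ToyAgent class and its fields are invented): anything whose state is held digitally can be dumped, copied and restored, which is exactly why "bound to a body, impossible to copy" isn't something you get for free.

import pickle

# Toy stand-in for a software intelligence; the fields are invented
# purely to illustrate what a "state dump" means.
class ToyAgent:
    def __init__(self, weights, memories):
        self.weights = weights      # stand-in for learned parameters
        self.memories = memories    # stand-in for accumulated experience

agent = ToyAgent(weights=[0.1, 0.7, -0.3], memories=["simulated tree, day 1"])

# "State dump": serialise the complete agent state to a file.
with open("agent_state.pkl", "wb") as f:
    pickle.dump(agent, f)

# Restore an exact copy later, on this machine or any other.
with open("agent_state.pkl", "rb") as f:
    clone = pickle.load(f)

assert clone.weights == agent.weights
assert clone.memories == agent.memories

The hardware version of the same point is debug/test access: real digital designs generally expose their internal state for testing anyway, which is all a "state dump" amounts to.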
Well, you're wrong. Probably in several different ways at once, but particularly about how continuity of consciousness works. Even your silly model doesn't rule out gradual uploading anyway.

Tarzing wrote:
I also say that it won't be possible to copy humans into computers! While you might be able to emulate an existing human in a computer, it would be the whole clone/death thing.
You cannot distinguish morally between intelligences based on minor implementation details.

Tarzing wrote:
If all the intelligences bound to physical bodies are magnitudes more powerful than AIs in software, then morality probably gets a lot simpler. The software AIs become unimportant, sort of like Sims really.
Tarzing wrote:
It's NOT a bizarre moral position. It's basically saying "I don't care what the reality of nature is, I'm going to act as if my actions are significant."

Starglider wrote:
That is a highly bizarre moral position. You're saying it's fine to kill you or not based on an implementation detail which neither of us has any way of detecting? Despite it having zero effect on your cognitive capabilities, your ability to feel and empathise, your dreams and desires etc.?

Tarzing wrote:
Oh, I do deal with it. If I am a simulation and I freely acknowledge that possibility, I'm perfectly happy to be terminated at any time! I can thus condone terminating simulations under me without any hypocrisy.
Tarzing wrote:
What if dreams are reality and reality is dreaming?

Starglider wrote:
There aren't any 'beings in the dream world'; there are just memories your brain has confabulated. Humans have nowhere near the ability to completely encapsulate/simulate another sapient.

Tarzing wrote:
Is it wrong to wake up in the morning because that wipes out all the beings in the dream world? ;)
I say it is okay, but not on the basis of the feeble processing power of my brain. I say it's okay because the dream characters exist in isolation from reality; I'm the only one who can interact with them, and thus I can do what I please with them without feeling guilt.
If I myself am in that position, then whoever is dreaming me can do whatever they want with me, including ending my existence. I don't mind.
More semi-solipsist idiocy. Incidentally, I had classed Buddhism as one of the most harmless religions, but this guy is actually trying to use it to justify (currently theoretical, but plausible in the near future) horrors on a scale beyond the dreams of the worst burn-the-heretics Christians. Later Tarzing tries to show something approaching mercy, but fails pathetically:

Tarzing wrote:
Are you familiar with the "Fourteen Unanswerable Questions" (of Buddhism, which deal with the nature of reality)? ;)

Starglider wrote:
You haven't done anything to justify this statement as anything other than a bizarre personal pronouncement. Though I'd note that going around torturing your simulations for personal amusement is going to ensure that you're never let out of the box into reality ;)

Tarzing wrote:
I really don't see what the big deal is. THE SAPIENTS IN THE HIGHEST REALITY ARE THE ONLY ONES WHO MATTER.

You can't be serious. You're saying "I've adopted this utterly ridiculous ethical system because I can't be bothered to think things through? It's too hard! My head hurts! Let's just declare the problem irrelevant!"

Tarzing wrote:
May sound harsh, but anything else brings up too many ethical issues.
(I personally was rather impressed and relieved that the Buddha had declared a class of questions to be unanswerable; that's some good, honest intellectual integrity, so rarely seen.)
I'm talking about the unknowable nature of reality. If you can find a way to end my existence without my suffering and without making any other sapient suffer, then go ahead. I don't mind, because if you do do it, I won't know it. You see: my suffering does not depend on the ability of a higher being to delete me, because I don't know the probability. This holds true as long as the nature of reality is unknowable.
I haven't even bothered including his many and gross technical errors, in both hardware and AI design, despite the fact he claims:

Tarzing wrote:
It is not in fact okay to go around deleting sapients at will if other sapients who knew them endure. You can create a being in a void, then delete that being. Or you can shut down an entire simulation. But you can't go around torturing beings willy-nilly, unless they perceive that as "just the way reality is". If we are a simulation, then it's morally okay in the Upper-Reality to make simulations of worlds which aren't paradise, because our reality is not paradise, and they made it. If we are Alpha-Reality, I don't see why we should be obligated to make simulations which are nicer than our reality.
Nor have I bothered copying over his inane and unworkable ideas about how to ensure AI loyalty, which I also ripped apart and which he wisely chose not to try to support further. Of course the fucker has been picking and choosing which of my morality points he bothers to respond to as well.

I am an AI programmer (job title, I didn't choose it :P), although I only do expert systems and have the intellectual integrity not to claim that as AI programming.