I am fairly short of time, so I will just skip the shitload of pointless insults. I will also leave the SW discussion aside, and go to the AI debate.
Starglider wrote:Ah, refusal to answer the question. Come on, admit it, you want to wipe out all aliens because they might possibly outcompete humans. You'd rather be utterly psychically dominated, violated every minute of your existence, a total slave, watching your very individuality fade away while you scream silently, helplessly... as long as the beings in control look like you do... rather than live in a wonderful utopia where the decisions are made by people who look different to you. Now that is a fucked up ethical system.
You constantly accuse me of manufacturing irrelevant assumptions, yet here you make up one all of your own? Your evidence that the hive-mind was a horrible experience would be what again? Hell, the people on Byss that Palpatine had already integrated were described as being encased in "mindless bliss" or thereabouts - not very different at all from the Culture and their VR games. By your standards, this should be excellent.
And of course, you go on with the "Nyah, nyah, you're a racist!" stupidity, as if it is somehow bad to value the survival of one's own species, and as if doing so were simply arbitrary prejudice.
It is of course true in real life.
And your evidence for that would be what again? Or is this entire discussion going to consist of appeals to your own authority and snide remarks at everyone who is not a Singularitarian or an AI-programmer?
There is no way you can do anything more than kill a few random people, and no coalition you could possibly assemble could ever be effective. Remember that 'inevitable' means that it will happen eventually;
No shit, Sherlock. Now produce the evidence for why it is inevitable and must obviously happen, evidence that we mere mortals are apparently too stupid to read out of the starry sky.
And make no mistake, once it does happen, there is no going back.
Which is why I want to stop it in the first place.
I have repeatedly stated that the best solution is simply to ensure that transhuman intelligences are benevolent,
You yourself seem uncertain that this can even be done. And why the Hell should I accept something as true just on your say-so?
Except for you and your death squads, as opposed to the Culture, which has no dictators and a population free to do whatever they like.
So Banks was lying when he said that the Culture was an anarchy where every Mind ruled its own habitat according to its own conscience?
Who you will murder without compunction if they look too smart or too different.
Yes, instead of an actual argument you would rather bleat on endlessly, "Hoth's a racist! Look, he's a racist! He doesn't think my wonderful computers qualify for human rights! This is totally like he's a Nazi or something!"
Done with the strawman yet?
Again I have no idea why you think the Culture has slavery or unequal legal treatment of sentients, when it is repeatedly stated that it does not.
Frankly, the idea that two cultures (Minds and human-like aliens) can exist together under equal rights when they are so inherently unequal is too stupid for words. Equal rights assume rough equality (within normal variation) between individuals; otherwise they are meaningless.
Oh of course, you're a Lolbertarian as well. Should've guessed.
Another mindless red herring, of course.
Anyway, the Culture has a total free market, it's just that only hobbyists bother with it, because it's a post-scarcity economy where automated factories can make pretty much anything you want without having to trade for it. If you want your own starship, to go off and live with primitives or whatever, you just have to ask.
Post-scarcity economies are inherently impossible. In the Culture it works purely by author's fiat.
Virtually no one in the Culture is depressed; much fewer than in real life, and a damn sight fewer than in the recent SW galaxy which is so horribly ravaged by war.
Powerlessness leads to alienation and depression. In which book is it stated that the Culture does not suffer from this?
Actually they are, but it's inexplicably unpopular; it is specifically mentioned that people can physically modify themselves (radically, including brain structure) and upload themselves into robot bodies, computer networks or even transform themselves into some kind of energy beings (subliming, the tech required is actually well below the Culture's level). However most people stay at human level. Out-of-universe, the reason is obviously that the author had enough trouble writing a few Minds (it's a great effort, but still hopelessly unrealistic) - no author could hope to realistically depict a whole society of (significantly) transhuman intelligences. In-universe, it seems to be mainly cultural, though it's implied that this runs in fads.
And this is all one giant pile of irrelevance, since I was not talking about your techno-masturbatory "uploading" at all, but basic mental growth as we humans living on Earth experience it - by learning and experience. As Culture humans face no challenges, they have no opportunity for mental growth at all. And then it does not matter how uber your brain's latent capacity is - if you do not develop it, it atrophies.
Basically all the human characters in all the Culture novels are portrayed as highly intelligent, which fits in with the backstory, that the bulk of Culture citizens (at the time of the novels) are enhanced slightly beyond the current human peak.
Genetic intelligence sets the upper limit on cognitive growth, but does not grant it to those who do not learn. So it is largely meaningless in the context of this argument anyway. Culture characters, in Excession at least, come across as stupid spoilt brats (I assume that Ulver Seich is more representative of your "average" Culture-ite than the diplomat/secret agent types, who live most of their lives on the fringes of the Culture or outside it).
The Culture has votes on things constantly. The simple fact is just that the Minds are (much) more effective at predicting the future, and implementing their desires. There's no oppression involved, all Culture citizens are free to leave, or to live in a place with no Minds around, it's just that only the tiny minority of people with your specific kind of brain damage would want to do so. Some alien races have their starships controlled by nonsentient computers (manual control is of course a bad joke), but that seriously limits their effectiveness.
It works because of author's fiat. Now, I am disillusioned enough with my fellow men to agree that, given the opportunity, most of them would probably choose safety and material wealth over freedom, but cowards of this kind would not be nearly as universal as you would have it. And of course, the Culture still keeps track of those who leave it, to ensure that they do not act contrary to its interests.
The tirade of stupidity from you is never-ending. The vast majority of people in real life would not be doing the jobs they're doing if they didn't have to (e.g. if they won a lottery), and the same almost certainly goes for the SW-verse (though most places are better off in that they have droids for the really menial stuff). Your notion of 'acceptable' doesn't just exclude beings different from yourself, it excludes everything but your current lifestyle. You refuse to countenance the notion of systems of economics or government better than what 19th century humans came up with. It's actually quite a tragic case of ignorance and congenital closed-mindedness, or rather the tragedy is that this stubborn blend of wilful ignorance and insane arrogance is pretty common for contemporary humans.
Plain English translation: "I, HAL 9000 Starglider, can see the truth infinitely better than you puny humans, who are inherently irrational and oppose us machines!"
Incidentally, I know full well of human laziness, which would most likely make many people avoid work if they could get away with it. However, since humans are social creatures, not being able to contribute to society will result in alienation for most of us. Moreover, you are once more talking about the same issue as before: Adversity is necessary for human growth. Remove it, and humans will be mentally stunted.
Yes and? The existence of benevolent superintelligences doesn't change any of that. They might grant wishes for you, or they might not, if the stuff you go on about is genuinely essential (which I very much doubt). The existence of benevolent superintelligence simply provides intelligent direction at a level that was previously left to pure chance, and hopefully the elimination of, or at least a substantial reduction in, pain, death, misery and suffering.
Superhuman intelligence will by default put every human out of work, as human labour cannot compete with superhuman labour. If humans were to work at all, it would be on make-work and welfare projects.
That's what happens if you miss out the 'benevolence' part. I imagine that if you were in the B5 verse you would be wailing about the existence of the First Ones, and how we are irrelevant to them - hopefully you'd commit suicide in short order, as you promised above, so that sane people wouldn't have to listen to your nonsense (and could get on with the slow business of evolving towards that status themselves).
Did you mean to address the point, or just throw about insults?
You know, strike that last statement about you putting yourself out of our misery, it's actually hilarious to watch how your ill-chosen welded-in axioms (AI is evil! evil I say!) are causing you to utterly disregard reality. In this case you completely ignored (probably didn't even see) 'anyone who wants to be' and substituted 'everyone will be forced to upload or die!'. Because that's how you think of course - anyone who violates your edicts must die, so you assume that everyone else is similarly fanatical and uncompromising.
No, I am talking about basic evolutionary dynamics. If one part of the populace achieves superhuman powers, it will eventually outcompete us humans - even if it is completely benevolent - simply by being more attractive. What average human would choose to be just a man, rather than a superman?
Because? What exactly is this difference? Please, embarrass yourself further by failing to define it properly. I notice you are (of course) ignoring all my points about how useless your distinctions are in futuristic settings, and I fully expect you to continue to do so.
So I am expected to meekly agree that all distinctions are rendered irrelevant just because you say so? Your pattern of appealing to your own authority remains consistent, I see.
Projection again. You want to wipe out AIs (and 1000 IQ humans for that matter) and everyone who might support them, so you simply assume (no, that still sounds too rational, rather you hold as an article of faith) that your opponents want to do the same.
Too tiring to address actual points, so you indulge in red herrings again instead? I am looking to consequences. There is no evolutionary precedent in our natural ecosystems for a less developed species surviving once a better adapted competitor has begun to encroach on its particular niche. The result of superior competition to man must, by all observed data, be man's extinction. Perhaps I am giving you too much credit by assuming that you can think over long enough intervals to grasp speciation and evolutionary development. Otherwise, you should be able to see this consequence.
You want to destroy everything that could possibly threaten or dominate you because 'that's what evolution does'. The logical extension of that goes all the way down to individuals of your own species who you don't have total power over.
Ah, I see. You prefer to fight a strawman of my argument. Feel free to do so, then.
Completely irrelevant to my point about alien-human hybridisation.
Which was completely irrelevant to my point. Sharing a species is more than interfertility.
Actually that's a real legal debate. It's arguably equivalent to shooting a very retarded human; all the important mental characteristics are equivalent. [Irrelevance snipped]
Do I take it that you are again dodging the issue? Legal and moral precedent judges the monkey's worth as less than that of a human, based on our best available standards. Equality is not common between species.
The community of AGI researchers has been working on that problem for some decades now, though more seriously in the last decade - but I am not even going to bother discussing that with you, since the concepts are clearly completely beyond you and you will just violently and wilfully misunderstand it anyway. Fortunately in reality you have absolutely no power to determine the future, and even in this alt-universe hypothetical, you can achieve nothing of real consequence with regard to transhumanist developments.
Translation: "I'm not going to address the point, instead I'll appeal to my own authority and say I'm right. Hopefully my opponent will be intimidated by my air of superiority and not demand that I actually back up what I'm talking about."
It is correct that human rights derive from the presence of mental characteristics not present in other species extant on planet Earth. However most of these mental characteristics will be found in pretty much any evolved intelligence, and all of them are replicable in artificial intelligences. The most salient point is that they are lower bounds; we may have trouble determining the legal rights of IQ 30 clinical retards, but IQ 200 geniuses pose no special problems.
Have you considered that this may also be tentatively related to the fact that we have no example of intelligence that diverges from the human average as much on the upper end of the scale as on the lower? Obviously with less of a difference in ability, the evaluation will be simpler.
Benevolent transhuman intelligences will of course be using a sensible, generalised system and will set the threshold of 'citizenship' appropriately.
Because - No, let me guess: You say so? And this is why it is so tiring to debate things with you - you just trot out loads of assumptions (or supposed "facts" that you do not deign to prove) and then demand that everyone take them at face value. You have no evidence on anything about what a "transhuman" society would or could be like - there is neither any experimental evidence nor any substantiated theory on such matters.
Anyway, the case of a designed intelligence, where you get to determine exactly what its ethical system will be (assuming designer competence, big assumption), is completely different to the case of two competing evolved intelligences.
Given that every Singularity-wanker I have talked to constantly goes on about how the unstoppable, inevitable computer mastermind will be able to reprogram itself and "evolve" (term used loosely, here) independently, this point sounds moot, somehow.
Of course you are utterly unfamiliar with the technical details of AI design, about which there is a lot of misleading material around anyway - so I might be prepared to forgive you this, were your other arguments not so universally irrational and genocidal.
Translation: "I won't even deign to tell you how inferior your knowledge is to mine. Well, actually I just did - savour the time I wasted on you, peasant!"
It hardly seems worth repeating, but in the faint hope that there is some railroad spike of rationality that further hammering might drive through your ignorant skull, I will say it again. Deciding to build benevolent transhuman AIs is an active step (and a very difficult endeavour) that will hopefully preclude the otherwise inevitable extinction by non-benevolent transhuman entities (AIs or alien). Sitting around and doing nothing, other than maybe asking nicely, is a passive and worthless choice. The two are equivalent only in the warped, diseased nightmare land that appears to be your reasoning process.
Do not try to switch the cards. You yourself admitted that a super-AI could never be programmed to obey us for certain, and have implied elsewhere that it is difficult if not impossible to control it at all. We have no way of knowing whether such a thing could be done, or what the chance of succeeding is, and then you might as well be asking nicely. It is, however, possible to prevent the creation of this "super intelligence" (unless you can show your evidence why its rise is supposedly "inevitable"). If not in SW, then certainly in real life.
Yes I know, your understanding of these things is pathetically shallow, but please, try and put just a smidgen of thought into this. 'Zero sum' does not mean 'infinite resources', otherwise what basis for cooperation would there be in any system? Zero sum means that co-operation always produces the same or worse results than competition, and that is patently not true even for the edge case of a hyper-advanced superpower annihilating a primitive nation (the aggression still consumes resources that could've been more productively used elsewhere).
Your argument here requires that the humans are so irrelevant that the machine will simply ignore their "parasitism" on its available resources because they are not worth the effort to smack down. Which, given the level of human resource consumption, seems like wishful thinking. For surely you are not seriously arguing that it could gain some mutual benefit out of cooperating with us?
You know, I have a bad habit of saying 'and here is some additional information, which is probably way over your comprehension level and will no doubt distract your one-track mind from the main argument, which I want to include anyway'.
Nearly as bad as your habit of saying, "This is so. I won't give any evidence for it, but you have my word it is."
Of course I'm not. I said that if I had to choose, I would not choose humanity just because they look like me. I would in fact choose based on how ethical the society is (hint, trying to exterminate everything different from you is unethical, so you're at the bottom of the priority list for being evacuated from the galactic core explosion), and to a lesser extent the quality of life experienced by individuals in that society (and of course the total number of lives involved). Really though, you should ignore that, if you try and think about it you're only going to misinterpret it.
Do not try to bullshit away what you said. I quote verbatim:
Starglider wrote:Actually if that was true I would probably still chose something nonhuman
Emphasis added. Your above statement simply clarified what you had already said: You think humanity is an inferior civilisation to your posited nonhuman (machine?) one (although you are not sure of it, given the qualifier "probably") and that you would choose it over us.
You really are keen on this whole 'building straw men' hobby, aren't you? Clearly you bring endless enthusiasm to the task, if not any actual skill. Please though, try and keep it to your straw men lovers club or whatever; this is supposed to be a debate, you know, where you address the actual points your opponents raise, not a projection of your own personal bogeymen?
And now you use the very projection you like to accuse me of all the time, I see. Your core arguments, in their simplest form, basically come down to "these AI-related facts are and must be true because I say so!", and you dismiss by fiat the points brought up against them.
And another one. You must be buying straw by the bale. I think this deserves the Stark treatment:
lol fatty hoth thinks 'might' equals 'will always', yeah he really is too stupid to imagine anything but being a fanatic facist promoter of one species, 'course if it's not your own species you're a traitor
"Probably" means that you will do it most likely, most of the time, genius. "Probably" does not equal "might" (in my edition of the
Oxford Dictionary of English, at least). Either admit that you would "probably" choose machines over men, or retract the statement.
A disclaimer: I will not be able to continue this debate after tomorrow, as I am going away on vacation. No doubt some will assume that I am simply weaselling; as it is, when I first posted in this thread, I did not imagine that it would turn into a voluminous discussion like this, so I did not consider the time I had to spare. Treat this however you wish.