Darth Hoth wrote:It might have been other factors - perhaps the cloning tubes he had access to were simply good enough to avoid it.
I don't have the source material to hand, so I don't know for sure, but I'm pretty sure he would not have gone to the trouble of acquiring and keeping significant numbers of ysalamiri (a very rare and not widely known species) without either a) being quite sure that it would work, or b) having grown desperate after trying the clone tubes without shielding and failing.
The difference with World Devastators is that they are weaponised to the point that they can take most worlds on their own, obliterating local defences. Even the New Republic only managed to defeat them with hacking. Conventional automated factories are vulnerable and usually static installations, so you will still have to engage in conventional military expansion (or defence against warlords and other SDnetters, if you manage to find genuinely uninhabited and unclaimed bits of the galaxy).
Total galactic space is vast. Even a civilisation with the scope of Star Wars cannot be everywhere at once (and there is ample evidence of systems that are either uninhabited or populated by primitives in the canon).
My point was not that World Devastators make you invulnerable to attack - they help, but are not indestructible - it's that they can be used offensively, and in fact can be deployed successfully without support against all but the most heavily defended targets. Considering that we are all total beginners at running an interstellar conflict, this would make World Devastators even more of a boon, should any SDN members manage to get hold of one.
No, but we do not assume that everything that is not immediately apparent to us comes down to it.
So? I'm obviously not doing that, since I accepted your explanation (ish) for the uncommonness of RKVs. I think it's the most plausible explanation for the lack of significantly transhuman intelligences because there is no other credible explanation (in-universe).
Introducing new powers of omniscience to the Force takes us ever farther from the laws of physics as we know them.
I don't see why. It can already influence and be influenced by people's minds, usually on an 'emotional' level, something which sits at the opposite end of the complexity/abstraction spectrum from basic physics. It's treated like magic, not physics (to the point of 'Force powers' that are like spells and 'Force alchemy' that creates magical items). In a consistent universe it must ultimately be based on physicalist rules, but the complexity of the Force may be on the order of the complexity of the human brain (in terms of levels of organisation between observed behaviour and the fundamental laws) or even higher. The best model of the Force may in fact be one of Trek's telepathic 'pure energy aliens', enlarged to cover the whole galaxy.
It is better to simply admit that we do not know
Strictly correct, but there is a certain amount of 'completeness assumption' to all of these RARs, in that we assume the source material we have is reasonably representative of the target universe (particularly when the source base is on the scale of the entire SW EU). There are certainly plenty of things not documented, but we pretty much have to assume that there won't be many significant things left undocumented, otherwise it is impossible to make sensible deductions about the consequences of the hypothetical situation.
On the topic of millions of cultures, what is the evidence for all of them being independent?
Have you read any of the SW novels (or worse, comics)? They are constantly running across isolationist planets, lawless planets and entire empires in the 'unknown regions' that burst out and menace the galaxy briefly before the protagonists take them down. Furthermore there is absolutely no evidence (AFAIK) for any kind of centralised regulation of AI research or droid production. Assassin droids fall under local arms control regulations, where they exist, and that's it.
Given the humanoid or outright human nature of most of them
Much less so in the EU than in the films.
Which makes sense; the species that first made use of hyperdrive (probably humans, given their apparent dominance in demographics and most fields of science, politics, commerce and so on)
That assumption is of your usual quality, which is to say utterly wrong. The first recorded use of (primitive) hyperdrives was by the (nonhumanoid) Rakata, over thirty thousand years before the founding of the Old Republic. Their Infinite Empire was the dominant power in the galaxy for a span roughly equal to the lifetime of the Old Republic, before they were virtually wiped out by war and plague.
Ah, so you hate intelligence and worship ignorance. That's the only possible explanation, since you're OK with abominations like involuntary hive minds, and you seem to have no problem with oppressive tyrannies that can make you personally their plaything. Are alien empires that make humanity their playthings OK, or do you intend to wipe out all alien races you can to prevent that possibility? I'll be looking for your little fleet of Star Destroyers going around BDZing everything that isn't as stupid as you are.
An involuntary hive-mind would still be human; a machine intelligence would not. I would not want either if I could at all avoid them.
Ah, refusal to answer the question. Come on, admit it, you want to wipe out all aliens because they might possibly outcompete humans. You'd rather be utterly psychically dominated, violated every minute of your existence, a total slave, watching your very individuality fade away while you scream silently, helplessly... as long as the beings in control look like you do... rather than live in a wonderful utopia where the decisions are made by people who look different to you. Now that is a fucked up ethical system.
I have already explained that a) it's going to happen anyway (unless ruled out by author fiat), so it's far more sensible to try and make sure that there are benevolent superintelligences that respect your desire to be left alone than to futilely try to suppress machine intelligence (which will incidentally ensure that machine intelligences want to wipe you out, as opposed to simply not caring about your wishes).
Are we speaking of reality or the SW galaxy, now?
If my hypothesis is wrong and the Force does not in fact block transhuman intelligence, then yes. It is of course true in real life.
And I refuse to accept anything as inevitable unless it has already happened.
That would be because you are a moron. I refuse to credit you even with the term 'hopeless romantic', because I do not wish to sully that phrase with your disgusting xeno-hatred.
The future is never set; it is what we make it.
There is no way you can do anything more than kill a few random people, and no coalition you could possibly assemble could ever be effective. Remember that 'inevitable' means that it will happen eventually; no possible group, not even an utterly totalitarian and effective government (which the Empire was clearly not), could hope to suppress AI research for the rest of time. In practice even a totalitarian government on the scale of the Galactic Empire couldn't suppress it for a decade, never mind the billions of years until the universe ends. And make no mistake, once it does happen, there is no going back.
Some problems we may not be able to solve, but that is no excuse to just roll over.
I have repeatedly stated that the best solution is simply to ensure that transhuman intelligences are benevolent, which you are studiously ignoring, because you have made it an axiom that 'transhuman intelligences must be feared and loathed'.
The very idea that I would need some machine to allow me or my species to live is absolutely revolting to me. Then I would quite literally rather die.
This is a ridiculous and frankly pathetic attitude. Samuel has already pointed out that this argument applies to all entities with power over you, starting with human governments. I'd certainly much rather be at the mercy of an AI with the temperament of an angel than the corrupt, selfish, short-sighted politicians we're currently lumbered with in real life, much less the genocidal fascists of the GE or the utter timid incompetence of the New Republic. Of course in the SW-verse your so-called ethical system is exponentially more stupid, because it implies that every alien species that survives 'at the mercy of humans' should either risk everything to overthrow the humans or just commit suicide (i.e. those poor single-planet species that could be trivially BDZed by any human warlord). The 'some machine' part is of course just a meaningless label, since you seem to be incapable of making any real distinctions involving free will or some such, but can merely bleat about superficialities.
In a choice between your galaxy, where everyone lives in fear of being murdered by your death squads for being too smart or too fertile until a Berserker slips through your net and wipes everyone out, and the Cultureverse, where pretty much everyone is happy, safe and fulfilled, what sane individual would possibly choose your dystopia?
Anyone who wants to know that there are no Gods and no Kings
Except for you and your death squads, as opposed to the Culture, which has no dictators and a population free to do whatever they like.
but only free and equal sapient beings
Who you will murder without compunction if they look too smart or too different. Again, I have no idea why you think the Culture has slavery or unequal legal treatment of sentients, when it is repeatedly stated that it does not.
sharing a common heritage
You just have a whole list of excuses for murdering all the aliens, don't you? Why stop with DNA? Go on, murder all those human cultures that don't share your values too!
In actual fact a 'common heritage' is at best neutral and at worst a drawback. It certainly makes things less interesting.
under the auspices of democratic government and a free market? As opposed to a dictatorship by machines
Oh of course, you're a Lolbertarian as well. Should've guessed. Anyway, the Culture has a totally free market, it's just that only hobbyists bother with it, because it's a post-scarcity economy where automated factories can make pretty much anything you want without having to trade for it. If you want your own starship, to go off and live with primitives or whatever, you just have to ask. As opposed to your universe, which will actually be the Galactic Empire (because no one will vote for your insane xenocidal policies, so you will have to impose them by force, imagining for a moment that you would somehow be in a rulership position) but even more oppressive and murderous.
where the average person is so depressed with his life that he flees into VR simulations?
Virtually no one in the Culture is depressed; far fewer than in real life, and a damn sight fewer than in the recent SW galaxy, which has been so horribly ravaged by war.
It is not just humans who are threatened by your lovely AIs, you know, but everyone in the Universe.
Yes, and? By the same token, humanity is threatened by alien AIs, but the AI part really just changes the speed at which we get conquered. As has been repeatedly pointed out to you, and as you refuse to acknowledge, alien empires are just as deadly to humans (arguably more so) as AIs, if they have a technological and/or material advantage.
The Culture (as it comes across in Excession, at least, the one novel that I have read) is dystopic. Its people are given no opportunity to grow intellectually (except for uselessly trivial games)
Actually they are, but it's inexplicably unpopular; it is specifically mentioned that people can physically modify themselves (radically, including brain structure) and upload themselves into robot bodies or computer networks, or even transform themselves into some kind of energy beings (subliming; the tech required is actually well below the Culture's level). However most people stay at human level. Out-of-universe, the reason is obviously that the author had enough trouble writing a few Minds (it's a great effort, but still hopelessly unrealistic) - no author could hope to realistically depict a whole society of (significantly) transhuman intelligences. In-universe, it seems to be mainly cultural, though it's implied that this runs in fads.
Wait, why am I even bothering to discuss this? In your hellish state, any attempt to self-enhance results in your death squads killing the researcher, and probably their whole family and anyone who happens to be near the lab at the time, 'just to be sure'. The absolute last thing you are justified in doing is criticising an incredibly liberal transhuman state for being 'too restrictive on self-modification'.
and most are probably quite literally dumb enough that they could not change a lightbulb unless a drone did it for them.
Basically all the human characters in all the Culture novels are portrayed as highly intelligent, which fits in with the backstory, that the bulk of Culture citizens (at the time of the novels) are enhanced slightly beyond the current human peak.
Humanity (or analogues) have totally relinquished all control of their destiny into the hands of the Minds.
The Culture has votes on things constantly. The simple fact is that the Minds are (much) more effective at predicting the future and at implementing their desires. There's no oppression involved; all Culture citizens are free to leave, or to live in a place with no Minds around, it's just that only the tiny minority of people with your specific kind of brain damage would want to do so. Some alien races have their starships controlled by nonsentient computers (manual control is of course a bad joke), but that seriously limits their effectiveness.
The terms get muddled when real life issues are brought into fictional settings, but let us look at it. These planetary cultures are not "insignificant"; they contribute to the galaxy, their peoples have meaningful jobs and places to fill.
The tirade of stupidity from you is never-ending. The vast majority of people in real life would not be doing the jobs they're doing if they didn't have to (e.g. if they won the lottery), and the same almost certainly goes for the SW-verse (though most places are better off in that they have droids for the really menial stuff). Your notion of 'acceptable' doesn't just exclude beings different from yourself, it excludes everything but your current lifestyle. You refuse to countenance the notion of systems of economics or government better than what 19th-century humans came up with. It's actually quite a tragic case of ignorance and congenital closed-mindedness, or rather the tragedy is that this stubborn blend of wilful ignorance and insane arrogance is pretty common among contemporary humans.
They can build careers for themselves.
Yes, and? The existence of benevolent superintelligences doesn't change any of that. They might grant wishes for you, or they might not, if the stuff you go on about is genuinely essential (which I very much doubt). The existence of benevolent superintelligence simply provides intelligent direction at a level that was previously left to pure chance, and hopefully the elimination of, or at least a substantial reduction in, pain, death, misery and suffering.
Planets can make political deals, sign treaties, small polities ally with each other for increased influence on larger scales.
Already irrelevant on an individual scale. The existence of Earth countries as 'independent political entities' confers essentially no direct empowerment on individuals, particularly given the massive remove between voting for local representatives and voting directly on foreign policy choices. In the SW-verse it's a million times more insignificant, but it doesn't really matter.
Sweden, or the city of Stockholm, are as irrelevant as Naboo on a galactic scale
Actually they're about as relevant as a small country town.
but does this mean that the country is useless or that nothing any of us who live here matters?
No, but you're the one claiming that somehow, the existence of something smarter than you will make your whole life meaningless. Of course by that logic it already is, as you've demonstrated that you're on the low end of the human intelligence spectrum.
By contrast, wank "AI Gods" operate on another level. Humanity is literally irrelevant to them, no more important than the SW galaxy would be to the Xeelee or a single pack of Siberian wolves are to us.
That's what happens if you miss out the 'benevolence' part. I imagine that if you were in the B5-verse you would be wailing about the existence of the First Ones, and how we are irrelevant to them - hopefully you'd commit suicide in short order, as you promised above, so that sane people wouldn't have to listen to your nonsense (and could get on with the slow business of evolving towards that status themselves).
On top of all that, there is the fact that in a benevolent transhuman culture (e.g. The Culture), anyone who wants to be a superintelligence can be.
Our own species is still wiped out with "uploading" and similar jargon.
You know, strike that last statement about you putting yourself out of our misery, it's actually hilarious to watch how your ill-chosen welded-in axioms (AI is evil! evil I say!) are causing you to utterly disregard reality. In this case you completely ignored (probably didn't even see) 'anyone who wants to be' and substituted 'everyone will be forced to upload or die!'. Because that's how you think of course - anyone who violates your edicts must die, so you assume that everyone else is similarly fanatical and uncompromising.
You just have to upgrade yourself (and possibly pass some psychological tests, to prove that you're going to be responsible with the power it implies).
Which would be administrated by . . . the Minds? Who are fit to judge us because they are so uber?
Yes. Hopefully the checking will be quite objective, given a thorough understanding of how human intelligence works, but still the best people to check would be former humans. The checks can be quite loose, since of course the same laws (or equivalent) still apply to you as before, but if you're getting the effective power to bypass normal safeguards on harming others, then some basic sanity checking has to be done. Of course I'm just speculating, as no one can say for sure what the necessary structural features of an ethical transhuman society are.
Morals are arbitrary and I certainly don't subscribe to that one. Again, it makes no sense to generalise across gender, race, class etc but not species, or for that matter, any other superficial distinction (such as evolved vs designed).
Distinction between species is not arbitrary, nor is it superficial. The difference between natural life and machines is greater still.
Because? What exactly is this difference? Please, embarrass yourself further by failing to define it properly. I notice you are (of course) ignoring all my points about how useless your distinctions are in futuristic settings, and I fully expect you to continue to do so.
If you want to see humanity (and in this discussion, human-equivalent aliens) wiped out in the name of fucking computer code, you have indeed transcended morality.
Projection again. You want to wipe out AIs (and 1000 IQ humans, for that matter) and everyone who might support them, so you simply assume (no, that still sounds too rational; rather, you hold it as an article of faith) that your opponents want to do the same.
Wait, why aren't you out drowning your neighbour's children so that they can't compete for resources with yours?
What the Hell was that supposed to mean?
You want to destroy everything that could possibly threaten or dominate you because 'that's what evolution does'. The logical extension of that goes all the way down to individuals of your own species who you don't have total power over.
It's logically equivalent. The distinction of being able to interbreed is ethically irrelevant, and in fact functionally irrelevant given sufficient technology (which may not even be that high; even stunted Trek bioscience manages to hybridise species).
It is not equivalent, this is a retarded argument and a red herring. A non-human creature or object does not have human rights* and we are not morally obliged to treat it as if it did; am I a murderer for having fried eggs for dinner?
Completely irrelevant to my point about alien-human hybridisation.
Would I be one if I shot a chimpanzee?
Actually that's a real legal debate. It's arguably equivalent to shooting a severely retarded human; all the important mental characteristics are comparable. In any case you would be charged with severe animal cruelty, as opposed to your fantasy of killing AI researchers without any legal comeback.
If you have an objective standard for determining what potential computer programme should enjoy human rights, I would love to hear it.
The community of AGI researchers has been working on that problem for some decades now, though more seriously in the last decade - but I am not even going to bother discussing that with you, since the concepts are clearly completely beyond you and you will just violently and wilfully misunderstand them anyway. Fortunately in reality you have absolutely no power to determine the future, and even in this alt-universe hypothetical, you can achieve nothing of real consequence with regard to transhumanist developments.
The entire idea of ethics common to all of humanity arises precisely from all human beings being human, and therefore of roughly equal abilities and mindsets (within certain variations), thereby deserving the same considerations.
It is correct that human rights derive from the presence of mental characteristics not present in other species extant on planet Earth. However most of these mental characteristics will be found in pretty much any evolved intelligence, and all of them are replicable in artificial intelligences. The most salient point is that they are lower bounds; we may have trouble determining the legal rights of IQ 30 clinical retards, but IQ 200 geniuses pose no special problems. Of course I am using a level of abstraction far beyond what you are likely using, or are even capable of; I imagine you grant beings rights because they look like you. Benevolent transhuman intelligences will of course be using a sensible, generalised system and will set the threshold of 'citizenship' appropriately. The question of whether there should be any classes of 'citizen' beyond just 'sapient' vs 'animal' is a more thorny one, and not one we can answer at this time. You are evidently utterly unqualified for that debate though, so just assume that the answer is 'no'.
Race, class, sex and what have you do not divide us more than our shared identity as humans unites us. The same is not true for entities based on entirely different principles.
Which means all aliens, which means you apparently want to kill them for not fitting in with your perfectly homogeneous 'shared heritage' human dictatorship (which is simultaneously democratic, free-market and murderous of all aliens and transhumans, somehow).
I never talk about wiping out humanity. People who want to stay human, or have human children, should be allowed to (as long as it doesn't cause severe overpopulation).
I wonder if some Neanderthal said something similar when Homo sapiens emerged. Well, actually I am fairly certain that they did not, not being intelligent enough
So in fact your point is worthless.
Neanderthal Hoth: "These new humans will, by competing with us for the same scarce resources, wipe us out, them being more intelligent and adaptable than we are. If we want our species to survive, we should gather our large flock and wipe out this small, new one before it grows big and deprives us of that option."
Neanderthal Starglider: "But surely we can coexist peacefully with a vastly more intelligent life-form?
Blatantly wrong and misleading analogy; up to your usual intellectual standards, in other words. You are in fact simply confirming, once again, that you want to genocide all alien species who might possibly become dominant over you. Anyway, the case of a designed intelligence, where you get to determine exactly what its ethical system will be (assuming designer competence, a big assumption), is completely different to the case of two competing evolved intelligences.
I'm sure the new humans will let us survive, if we just ask them nicely!
Of course you are utterly unfamiliar with the technical details of AI design, of which there is a lot of misleading material around anyway - so I might be prepared to forgive you this, were your other arguments not so universally irrational and genocidal.
And if you try to fight this small group of humans, which we can never defeat anyway because their emergence is inevitable, they'll be angry and want to wipe us out, rather than merely being our benign overlords."
It hardly seems worth repeating, but in the faint hope that there is some railroad spike of rationality that further hammering might drive through your ignorant skull, I will say it again. Deciding to build benevolent transhuman AIs is an active step (and a very difficult endeavour) that will hopefully preclude the otherwise inevitable extinction at the hands of non-benevolent transhuman entities (AIs or aliens). Sitting around and doing nothing, other than maybe asking nicely, is a passive and worthless choice. The two are equivalent only in the warped, diseased nightmare land that appears to be your reasoning process.
Competition for the same set of scarce resources invariably leads to the demise of the weaker species as the stronger takes the resource it is dependent on.
I see you are also using repetition, but to reinforce the point that you want to kill all the aliens. Oh well. Of course, even if you claim 'other human tribes should be subsumed, because they look like me, but all the aliens must be killed', the fascist empire you'd hypothetically have to build to enforce your will is unlikely to be nearly as discriminating.
(This post concerned, once again, real life. Star Wars has shown that many species can and do indeed coexist peacefully in that setting, so the same considerations clearly do not apply there)
I'm sorry, did you actually just say 'even if my broken argument wasn't broken, it would still be bullshit in this scenario'?
You are the one assuming a zero-sum game, in which only one side can exist.
In the real world, no matter what "post-scarcity" wankers say, resources are and will always (to the extent of our knowledge of physics) be finite.
Yes I know, your understanding of these things is pathetically shallow, but please, try and put just a smidgen of thought into this. 'Zero sum' does not mean 'infinite resources', otherwise what basis for cooperation would there be in any system? Zero sum means that the players' payoffs always total the same amount, so that one side's gain is exactly the other's loss and co-operation can never produce a better overall result than competition. That is patently not true even for the edge case of a hyper-advanced superpower annihilating a primitive nation (the aggression still consumes resources that could have been more productively used elsewhere).
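Since you apparently need this spelled out, here is a toy sketch in Python (the payoff numbers are invented purely for illustration, they aren't taken from anything) of the difference between a zero-sum game and the situation I just described:

[code]
# Toy payoff tables (numbers invented for illustration only).
# Each entry maps (my_move, your_move) -> (my_payoff, your_payoff).

# A genuinely zero-sum game: every outcome sums to the same constant,
# so one side's gain is exactly the other's loss and co-operation
# can never improve the total.
zero_sum = {
    ("attack", "attack"): (0, 0),
    ("attack", "trade"):  (5, -5),
    ("trade", "attack"):  (-5, 5),
    ("trade", "trade"):   (0, 0),
}

# The actual situation: even for a superpower crushing a primitive
# nation, war consumes resources, while mutual trade creates value.
non_zero_sum = {
    ("attack", "attack"): (-2, -10),
    ("attack", "trade"):  (3, -10),
    ("trade", "attack"):  (-5, -2),
    ("trade", "trade"):   (4, 2),
}

def is_zero_sum(game):
    # Zero-sum (strictly, constant-sum): every outcome has the same total.
    return len({a + b for (a, b) in game.values()}) == 1

print(is_zero_sum(zero_sum))      # True
print(is_zero_sum(non_zero_sum))  # False: mutual trade totals 6,
                                  # one-sided attack totals only -7
[/code]

The point being that the moment aggression has a cost and trade has a surplus, the game stops being zero-sum, however lopsided the power balance is.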
If this machine intelligence competes with us for the same resources (which is likely, given that its infrastructure will most likely depend on technology utilising similar materials and power sources as we use), then it is indeed a zero-sum game.
Which we will lose, in short order, if the other society doesn't respect and value the existence of others ('machine intelligence' is a red herring as usual; it makes no real difference what the basis of the power is, only that the other society is more powerful than the one you're in).
Actually if that was true I would probably still choose something nonhuman, but that's beside the point
No, it is very much the point actually.
You know, I have a bad habit of saying 'and here is some additional information, which is probably way over your comprehension level and will no doubt distract your one-track mind from the main argument, which I want to include anyway'.
You are claiming that humanity must inevitably be superseded by "better" machine species
Of course I'm not. I said that if I had to choose, I would not choose humanity just because they look like me. I would in fact choose based on how ethical the society is (hint: trying to exterminate everything different from you is unethical, so you're at the bottom of the priority list for being evacuated from the galactic core explosion), and to a lesser extent on the quality of life experienced by individuals in that society (and of course the total number of lives involved). Really though, you should ignore that; if you try and think about it you're only going to misinterpret it.
and that we should embrace this as our inevitable destiny rather than fight for the survival of our own kind.
You really are keen on this whole 'building straw men' hobby, aren't you? Clearly you bring endless enthusiasm to the task, if not any actual skill. Please though, try and keep it to your straw-men-lovers club or whatever; this is supposed to be a debate, you know, where you address the actual points your opponents raise, not a projection of your own personal bogeymen.
And you are saying yourself, in your own words, that you prefer machines to prosper over humans, given the choice.
And another one. You must be buying straw by the bale. I think this deserves the Stark treatment:
lol fatty hoth thinks 'might' equals 'will always', yeah he really is too stupid to imagine anything but being a fanatic facist promoter of one species, 'course if it's not your own species you're a traitor