Ziggy Stardust wrote:Simon_Jester wrote:If it's so wonderful, then could you please say anything whatsoever suggesting you actually got my point, rather than missing it?
Why so antagonistic? Nothing I said in my post was at all aggressive, nor was any of it a ridiculous distortion of what you said.
Because I put considerable time and effort into that analogy, and I feel like you praised it while completely missing its point and raising spurious objections against it. I apologize for the tenor of my post, but I was trying to make a point I felt was important.
Simon_Jester wrote:My point is simply this. Forms of life that are transcendentally more intelligent than their predecessors tend to displace those predecessors. They win competition for resources. They come up with literally unimaginable strategies for doing so. And if they want to repurpose their predecessors, if they view YOU as a valuable resource... then they do that too.
Yes, I understand that was your point. That was the entire point my post was responding to.
Did you actually understand
my point, rather than missing it? You even admit right here that "competition for resources" is the key point of your analogy. And I simply pointed out that any "competition for resources" between humans and A.I. will be VASTLY different from any competition between humans and wolves, and that you can't necessarily expect the same patterns and relationships to arise. How is that at all a misinterpretation of your point? Even if you disagree with it, it's ludicrous to claim that I'm not addressing a key tenet of your argument.
I didn't argue that exactly the same patterns and relationships will arise.
I argued that
some patterns and relationships will arise. And if superintelligent AI exists, then while we cannot predict the precise nature of the relationship between humans and AI, we can predict this: If superintelligent AI wants something humans have, or wants humans to do something they don't want to do, or wants humans to
be something... the AI is likely to win. And as the AI(s) become(s) more and more developed, this winning process will become more and more lopsided in favor of the AI.
Exactly what will happen was never something I tried to predict. My point is that the closest analogy we have for what happens when a transcendentally superior intelligence emerges into the world is what we humans did to the non-intelligent animals in our natural world as we developed technology.
And as I pointed out in the first post, there is NO major land animal species on the planet that did not experience massive transformation as a side effect of human activity.
Dogs and their ancestors actually got a pretty good deal out of that, even if a significant fraction of their total population consists of bizarre malformed mutant versions, whom their wild ancestors would probably view as un-canid monstrosities.
Contrast that with the Plains buffalo. They were hunted down and driven to near-extinction: first, as an indirect way for one group of humans to strike at another, and second, because humans wastefully used resources they could gain from the death of a buffalo.
Or polar bears, whose habitat is being destroyed, and not even by humans who live in the same ecosystem as them! No, the thing driving wild polar bears to extinction is a random unintended side effect of other humans, most of them thousands of miles away. The side effect is so random and unintended that many of us don't even believe it is happening!
Or bald eagles. How did a bunch of eagles end up an endangered species? Sure as heck not because they were competing with humans for resources!
Or rhinoceroses. Many of them have been killed so their horns can be ground up into medicines. Medicines that don't actually work!
There are an enormous number of animal species that could not even begin to grasp the bizarre and unhappy fates they have experienced at the hands of humans, because they are
not intelligent enough to fully comprehend our motives and actions, and why we do things that can result in their death or suffering.
Frankly, if we assume superintelligent AI is widespread and in play, the
best-case scenario is AI viewing us as pets and controlling/distorting the future of our species in whatever ways they see fit. All the other alternatives are worse. Like, say, the AI recklessly destroying the ozone layer because losing it doesn't affect the AI's robots. Never mind that this causes massive cancer problems for humans. Or an AI that wants to make humans happy, but doesn't respect humans' expressed wishes, might leave us 'stored' under conditions that make us individually happy but make reproduction unlikely or impossible. Or other, stranger and worse things that I cannot easily predict might occur.
So please don't make this be about "what happened to wolves/dogs won't be exactly what happened to us." That is, again, missing the point.
Simon_Jester wrote:This is a universal experience of all pre-intelligent animals as humans came onto the scene on Earth. It is NOT just about competition for resources. It is about what happens when something that has the power to outthink you that massively shows up.
And that's fine. I never disputed that at all. Literally my only point was that the type of relationship will be different with A.I., because the rules of the interaction are guided by different forces from the ones that guide interactions between humans and animals. Considering how huffy you got about me supposedly misinterpreting you, you seem to be utterly ignoring what my point was.
I think the issue is that you saw fit to concentrate on a single sub-issue that seems to me to be an exercise in pedantry.
YES, THE ANALOGY IS NOT PERFECT.
It doesn't have to be, because I'm not saying "humans will suffer the exact same fate under AI that dogs did under humans, down to being made to wear collars with tags and eat bowls of kibble." I'm saying "look at how bizarre and disruptive the effects of the rise of humans were to dogs, and to numerous other species that appeared well adapted and capable in their ancestral environment. This is the
scale of disruption we can expect from AI dominating an Earth populated by humans."
Simon_Jester wrote:Right now, humans use a huge fraction of the Earth's land surface for agriculture, and another large fraction for their own habitations. A great deal of resources goes into providing us with things that we need to survive, and another large amount goes into things that we, strictly speaking, don't need to survive but desire anyway.
In addition to physical resources, we use bandwidth and computer capacity for things we desire (e.g. cat videos) that might seem irrelevant to an AI, yet those resources are critical to the AI's survival and growth, as critical to it as food and water are to us.
Yes, exactly. This REINFORCES my point. There is massive overlap between the ESSENTIAL resources required by humans and wolves (e.g. food and water), while there is significantly less overlap between these resources for humans and A.I., exactly as you say. Resources that are critical to an A.I. aren't critical to a human, at least in terms of bare-minimum survival (obviously, once you scale this up to societies or the whole world, your definition of "critical" may have to adapt). Thus, the rules governing any interaction between humans and A.I. are going to be DIFFERENT from the rules governing an interaction between humans and wolves. It's a different paradigm, and as much as I like the wolf analogy, I don't think it can necessarily hold there. What about this argument so irritates you that you think it is somehow a distortion of your argument?
What irritates me is that your thesis here is that when I say "A is like B," you reply "but A is not
literally exactly the same as B."
If all you mean to say is "A is analogous to B but not literally identical," then that is a trivial observation. For A and B to be literally identical, I'd have to be predicting that human-AI interaction will
literally culminate in human-descended subspecies going around with collars eating bowls of kibble off the floors of the AI's houses, while "wild" humans wander around in remote wilderness areas and are occasionally culled when they start hunting down and eating the AI's robo-sheep.
Obviously that is not what I meant. So in that case, "A is not literally the same as B" is rather trivial and something of a waste of our time.
But if what you really meant to say was "I don't think your 'A is like B' analogy is applicable, because A is not literally the same as B," then either you have completely failed to understand the analogy...
OR you are simply refusing to grasp the
role of analogy in a discussion.
So at the time, your original observation looked to be either trivial, a failure to understand the analogy, or a failure to understand what analogies are even for.
All three of those outcomes are rather irritating.
Yes, I agree completely. And this is the exact reason why the analogy doesn't quite hold: it's a different paradigm than the one that governs human-animal interactions. Which was the entire point of my post. If you didn't want anyone to discuss the merits of your long and detailed analogy, then why spend so much time outlining it, especially since this post indicates that you agree exactly with my concerns about it?
The analogy was intended to serve a specific purpose, by exploring
in depth just how strange and revolutionary (and discomforting) the consequences of human activity have been, even for species which arguably benefited from that activity.
The argument this was intended to serve, and I spelled it out at the time as I recall, was "You think this thing sounds bad from the wolves' point of view? THIS is the kind of thing that's likely to happen to us at the hands of powerful AI." Not
this exact thing, but something "like" this thing. Something broadly comparable.
And to have it presented as a contradictory claim that "ah, but A is not exactly like B," when the whole point of the analogy was "A is
enough like B to give you a sense for just how big B is..." that was not the kind of productive dialogue and discussion of the merits I'd been hoping for, to say the least.