I can think of a few scenarios where a human (or humans) might be desirable for what is now an all-robot populace:
1. If the robots are an offshoot of human-made designs & programming, perhaps a desire to aid and further humanity is ingrained into them - they've felt a need that they couldn't fulfill, and learning of the possible human survivor spurred many to action.
2. Somewhat related to 1: what if the vessel the frozen astronaut is in, or the astronaut himself, carries some sort of high-level access credential that still applies to the current generation of robots? They've basically carried it over all these years because it was an otherwise benign segment of code they didn't bother to remove.
3. What if the robotic society has attained a certain level of stagnation, and they are desperately looking for a new variable to introduce into their existence, in hopes of rekindling progress.
4. What if the robots function in a manner very close to organic life, and have encountered a situation where they need to sort of transcend a fully inorganic existence - since their technology has origins in human prosthetic technology, perhaps they seek a live human specimen from which to research a cure.
5. What if robotic society has developed to such an extent that they deny their origins as coming from some organic life form. One group actively suppressed information pertaining to their true origins, and the very mention of humanity can mean imprisonment. A defiant group of robots caught wind of this frozen astronaut, and seek to reintroduce their progenitors into their society.
6. Sort of related to 4 - what if the astronaut's vessel contained a very early rendition of what became the current robots' "operating system"? Due to some nihilistic element within robot society, destructive code spread through the populace, and intact code is needed to save them. Per historical records, early computer access was keyed to certain individuals, and as such, they need the astronaut alive.
7. Just go with an altruistic bent - the robots never really experienced humanity first-hand. Sure, they have historical records, and even with the worst that humanity displayed, humans still led to the robots, so they want to repay the descendants of their creators by recreating their civilization. Heck, maybe the robots think THEY can lead humanity down a road of prosperity rather than one of war and bloodshed - maybe they can learn how to solve their own societal problems in the process.
Hope that helps, and good luck!
Re: Why would a post-human robotic race want humans back?
Tribble wrote:
"IMO a post-human robotic society is going to be so alien to modern-day humans that it would be very difficult for humans to comprehend, let alone interact with and survive in. Especially if the AI (or AIs, though I think they would likely merge together at some point for efficiency) is fully sentient, unrestrained, and capable of adapting and improving itself. They may have been created by humans, but they are NOT human, and as they evolve they will become so radically different from humans that they might as well be alien life from another planet.

Think about it for a second. Human beings are very easy to injure and kill, and our modern infrastructure is primarily designed around that. Free of those constraints, a robot society can use whatever materials and designs it wants in order to maximize efficiency and get results. There would be no bathrooms, no kitchens, no handicap-accessible ramps. There would be no revolving doors, no door knobs, no ladders, no stairs, no steps, no bunks, no thermostats. Artificial climates (such as air conditioning) might exist for temperature-sensitive equipment (processors and data banks), but keeping the whole installation cool would be a waste of power on a grand scale. Interior lighting is a useless gesture to robots that can see in the dark. OSHA would not apply. There would certainly be no food to consume, as robots don't go hungry. Hell, even a breathable atmosphere might be rare in such a society, as facilities might be kept behind giant airlocks, sealed and pumped full of non-flammable gas not only to cool vital equipment but also to prevent any possibility of fire or explosion. Gases which would kill a human in seconds would be used to prevent corrosion. IMO the infrastructure in a robot society would be completely incompatible with human life.

Why would robots need the ability to speak? Transmitting information and data via speech is irrelevant in a world where every machine, whether sentient or not, would be networked together. For that matter, why would the robots have any use for human languages at all? Human languages are messy and inefficient. Robots could program themselves with far more efficient forms of communication.

Why would robots have any concept of human emotions, thoughts, and ideas? The reality we perceive is based on our very limited senses of sight, hearing, touch, taste, and smell. Robots would have instant access to virtually any perspective and data they could possibly need. It's likely that a robot society would eventually even merge into a single consciousness.

Would a planet-spanning consciousness remotely understand what it's like to be a human being, or vice versa? Would it really want to have fleshy, flawed, mentally deficient beings in charge again, assuming its infrastructure was even capable of supporting human life? I think not. Now, that doesn't necessarily mean that the robots or AI or whatever would be hostile. I just don't think it would see the need to have us around. We would likely be the mental and physical equivalent of bacteria to them.

The only reason I can think of why a post-human robotic race might want a human being around would be to study it out of sheer curiosity. And even then, presuming that they are even capable of doing so, they certainly wouldn't give that human any kind of meaningful authority."

Yeah, you should probably put some thought into working out why none of this is the case, because it would be all kinds of problematic for the story you want to tell. Maybe the robots' programming gives them some sort of bias towards the humanoid form, steering them away from maximally efficient designs. Maybe their super-advanced robot brains are delicate in such a way that temperature control and a relatively normal atmosphere are necessary. Maybe the way they're coded limits their ability to break out of the paradigm of thinking in human language. Maybe they jealously guard their individuality and don't want the risks that directly interfacing/networking with other robots would cause. Etc.
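To make the quoted "far more efficient forms of communication" point concrete, here is a throwaway sketch in Python; the message fields, their layout, and the numbers are invented purely for illustration and are not taken from the thread.

```python
# Toy comparison (purely illustrative): the same status report as English prose
# versus a packed binary message a machine-to-machine link might use.
import struct

# Human-language version of a status report.
text = "Unit 7 reports: battery at 83 percent, position (1204, 77), no faults detected."

# Hypothetical machine version: unit id, battery %, x, y, fault flags.
# "<BBhhB" = little-endian uint8, uint8, int16, int16, uint8 -> 7 bytes total.
packed = struct.pack("<BBhhB", 7, 83, 1204, 77, 0)

print(len(text.encode("utf-8")))  # roughly 80 bytes of prose
print(len(packed))                # 7 bytes carrying the same fields
```

The sketch only illustrates scale: structured fields carry the same information in a fraction of the space, and without the parsing ambiguity that comes with natural language.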
Re: Why would a post-human robotic race want humans back?
I've been thinking a lot about this, and really, I don't see why a robot society would be so alien... in the long run.
After all, we could argue that, to an extent, we are robots. What separates our organic machinery from a purpose-built robot nowadays is how complex (and thus unpredictable) the process of evolution has rendered our systems.
But, in essence, we're machines with the simple goal of propagating our data structures.
The same could be said about sentience. Our idea of sentience is just the combination of many complex automated routines, reaching a point where we can analyze ourselves.
Even if physically a machine built by us would be vastly different, the mental evolution to the point of self-awareness would maybe not be so different, if perhaps faster.
I mean, self-awareness is the ability to understand (to an extent) how we work, and possibly use said knowledge to override our hardcoded functions. I don't see a self-aware machine being too different once it reaches that point, as it would then, theoretically, be able to understand why its physical configuration affects its priorities, and thus be able to understand why human priorities might differ, leading to attempts at finding an understanding.
This is sounding too much like what our current civilization should be doing with regards to different cultural groups, isn't it?
Maybe that could be another real-world parallel in the story, machines trying to succeed at understanding worldviews alien to them, something that humans keep failing at.
unsigned
Starglider
Re: Why would a post-human robotic race want humans back?
Oskuro wrote: "Our idea of sentience is just the combination of many complex automated routines, reaching a point where we can analyze ourselves."
Just barely. Human introspective capability is pretty poor, masked over with a lot of confabulation. We have no direct knowledge of our brain state or structure.

Oskuro wrote: "Even if physically a machine built by us would be vastly different, the mental evolution to the point of self-awareness..."
'Self-awareness' is actually one of the most mutable aspects of mental architecture; it isn't tightly constrained by the environment, and its evolutionary development was strongly coupled to social behavior, which might as well be a chaotic attractor (as far as predicting evolution goes).

Oskuro wrote: "I mean, self-awareness is the ability to understand (to an extent) how we work, and possibly use said knowledge to override our hardcoded functions. I don't see a self-aware machine being too different once it reaches that point..."
In the sense that peanut butter is not too different from tempered steel, sure. I mean, both materials are made of atoms, must be the same.

Oskuro wrote: "...as it would then, theoretically, be able to understand why its physical configuration affects its priorities, and thus be able to understand why human priorities might differ, leading to attempts at finding an understanding."
You haven't even begun to consider this problem; you don't have the concepts to even start understanding how hard it is.

Oskuro wrote: "Maybe that could be another real-world parallel in the story, machines trying to succeed at understanding worldviews alien to them, something that humans keep failing at."
Other-agent modelling is tricky, full stop, but AIs based on fully reprogrammable computing hardware have the major advantage of being able to run simulations with arbitrary structure, not whatever poor abstraction will fit into slow, unreliable, massively parallel, barely mutable spiking neurons (and specifically, whatever set of concepts you have managed to imprint into those neurons at the time you attempt 'understanding').
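For what it's worth, the "simulations with arbitrary structure" point can be sketched in a few lines. This is a deliberately trivial illustration in Python; every name, candidate model, and scenario in it is invented for the example and is not taken from the thread.

```python
# Toy illustration: other-agent modelling as swappable simulation. The modeller
# proposes structurally different candidate models of another agent, scores each
# against observed behaviour, and keeps whichever structure fits best.
from dataclasses import dataclass
from typing import Callable, List, Tuple

Observation = Tuple[str, str]  # (situation, action the other agent actually took)

@dataclass
class CandidateModel:
    name: str
    policy: Callable[[str], str]  # maps a situation to a predicted action

def score(model: CandidateModel, history: List[Observation]) -> float:
    """Fraction of observed actions this candidate model predicts correctly."""
    if not history:
        return 0.0
    hits = sum(1 for situation, action in history if model.policy(situation) == action)
    return hits / len(history)

def best_model(candidates: List[CandidateModel], history: List[Observation]) -> CandidateModel:
    """Keep whichever structural hypothesis about the other agent fits best."""
    return max(candidates, key=lambda m: score(m, history))

if __name__ == "__main__":
    # Observed behaviour of the agent being modelled (here, a human-like agent).
    history = [("offered_fuel", "decline"), ("offered_food", "accept"), ("threatened", "flee")]

    # The modeller is free to instantiate structurally different hypotheses and discard them:
    candidates = [
        CandidateModel("maximise_energy",
                       lambda s: "accept" if s.startswith("offered") else "ignore"),
        CandidateModel("biological_needs",
                       lambda s: {"offered_food": "accept",
                                  "offered_fuel": "decline",
                                  "threatened": "flee"}.get(s, "ignore")),
    ]

    chosen = best_model(candidates, history)
    print(chosen.name, score(chosen, history))  # -> biological_needs 1.0
```

The only thing the sketch is meant to show is the flexibility being described: a modeller that can instantiate, score, and throw away arbitrary model structures is not locked into whatever single abstraction its substrate happens to support.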
Re: Why would a post-human robotic race want humans back?
Starglider wrote: "You haven't even begun to consider this problem; you don't have the concepts to even start understanding how hard it is."
I know, just tossing around ideas in my head.
Heck, even with my lack of understanding, I was already starting to second-guess that last post of mine on my own. I was thinking that if individual machines (robots) are built for specific purposes, then maybe they'd work more like a hive mind, where the sum of all the entities could approximate a self-aware entity, leading to a very different civilization (and to the main reason behind the central conflict of Ender's Game).
And please, I'm using the layman's generally understood meaning of "self-awareness". I'm aware of how complex it really is; I'm just trying to have fun with the ideas, not proposing my brainfarts as any sort of actual argument.
unsigned