UltraViolence83 wrote:
Main reason is that if we do make supersmart machines and we are actually stupid enough to make MORE and let them control us and do our work, we won't have anything meaningful to do. That's the greatest pitfall of utopian thought: utopians never realize that once you fix everything to perfection, there is absolutely nothing left to do.
I'm not expecting or even advocating "perfection"; merely a realistic level of improvement. Humans will ALWAYS be kept busy with various essential tasks that we might not be able to fully envision today. The AIs will be there merely to fill in the gaps that humans could not possibly fill.
UltraViolence83 wrote:
Work is something we need as a species, as individual persons. Without meaning in our lives, we are meaningless. To do nothing but grow complacent and stagnant will lead to nothing but our decay as a whole. Without perpetual goals to work for, we lose that essential spark of humanity that we seem to take for granted. That spark called "hope."
So far we're in agreement.
UltraViolence83 wrote:
I don't mean pointless work like hobbies or deviant sex, I mean the kind of work that runs society and the life-or-death decisions made by our leaders as well as ourselves sometimes. The kind of work that makes us who we are, the adventurous, dangerous kind.
There are a billion business opportunities in hobbies and deviant sex. I'm all for business and competition, and judging by your understandable antipathy to socialism, so are you. Competition and creativity also make us who we are. And yes, I know that both of these qualities will be inspired by the reality of chaos, which will never go away.
UltraViolence83 wrote:
Imagine living in a totally complacent world. Would there even be any stories to write or fun games to make? Every good story has some kind of conflict in it. A world without any conflict would breed generations of boring, placid people.
The world would never be complacent, except in isolated pockets that will nonetheless be fated to get the occasional reality shock every generation or so (whether they would learn from their experiences is another matter).
UltraViolence83 wrote:
Though with AIs running the scene there may be much less uncertainty and far fewer tragedies, we need death and chaos to really understand the consequences of our actions and reality in general. Without accidents or human error like the Columbia disaster and Chernobyl, we wouldn't think of our space pioneers as brave individuals or truly understand the importance of engineering a suitable nuclear power plant.
Point is, sustained contentment leads to apathy and a lack of prudence. Underestimating possible disasters is also a real danger.
I can see the logic of your argument. You are implying that the good of the majority depends upon the suffering of the minority. I initiated a debate along these lines a few months ago: if improved airline safety depended upon one freakish airline crash, should you allow that airliner to crash in order to save a much greater number in the long run? A purely objective answer would be "yes". However, would the decades of improved safety eventually lead to complacency? Further catastrophe? This sort of "better is worse" sophistry ultimately leads to tail-swallowing. The best we could hope for is to consciously pursue our very natural desire for progress, while remaining forever vigilant for signs of abuse or side-effects. (This would be the responsibility of any intelligent species.) Once again, there would be no such thing as "perfection"; merely slow, statistical improvement in the quality of life.
I would like to give you a brief exercise. Imagine a scenario so terrible, so heart-breaking, that you are forced to eat your words. How would this hypothetical scenario affect you? Would it force you to permanently renounce your beliefs (assuming that you survive this scenario, if others do not)? Would it make you stronger, more resilient? And if you somehow benefit from this tragedy, what of those who don't? Are the books balanced in the end?
Oh, and here's a link that might interest you:
http://www.orionsarm.com