Why do most wannabe SF writers reject science?
- Simon_Jester
- Emperor's Hand
- Posts: 30165
- Joined: 2009-05-23 07:29pm
Re: Why do most wannabe SF writers reject science?
Remember, though, that first contact is going to be made at or beyond the forward edge of colonization, in star systems that don't have the kind of sophisticated sensors that can see a gnat hiccup from fifty light-years away. Scout ships will deploy sensors, but their resolution will be limited and they'll be more interested in checking out the immediate vicinity than in doing deep space studies.
If the ships are fast enough, it's conceivable that two civilizations will make contact at close range before they have more than a vague idea that there's anyone out there. I mean yes, they'll see space drives jumping around between systems; they'll know aliens exist, but they won't know anything meaningful about them unless they either set up deliberate communication infrastructure or go send an envoy to talk to them up close, personally, and at high bandwidth.
This space dedicated to Vasily Arkhipov
Re: Why do most wannabe SF writers reject science?
Plus I suspect ridiculously giant telescopes may be a lot less impressive than simple calculations off mirror radius and the sensitivity of modern telescopes would indicate when it comes to picking up very dim lights at long range. The lights involved are so dim, the photon counts spread so thinly over the huge mirror area, that I suspect even at relatively close range they'd be lost in the "noise" of all the other light sources, many orders of magnitude brighter, that the telescope is simultaneously staring into. Maybe if you could use a polymorphic utility-fog shade to black out all the stars you'd have more of a chance, but as it is, at significant galactic ranges you're trying to find a firefly buzzing around in a field of giant stadium spotlights. It's a different thing from just asking whether you could detect the firefly if the night was perfectly dark.
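To put rough numbers on that intuition, here is a minimal back-of-envelope sketch; every figure in it is an assumption for illustration (a 100 MW drive beacon at one light-year, a Sun-like star at ten light-years, a 100 m telescope mirror, ~500 nm photons):

```python
import math

LY = 9.46e15        # metres per light-year
E_PHOTON = 4e-19    # joules per ~500 nm photon

def photons_per_sec(watts, dist_ly, mirror_radius_m):
    """Photon arrival rate at the mirror from an isotropic source."""
    flux = watts / (4 * math.pi * (dist_ly * LY) ** 2)      # W/m^2
    return flux * math.pi * mirror_radius_m ** 2 / E_PHOTON

ship = photons_per_sec(1e8, 1.0, 100.0)      # assumed 100 MW beacon, 1 ly
star = photons_per_sec(3.8e26, 10.0, 100.0)  # Sun-like star, 10 ly

print(f"ship: {ship:.1e} photons/s")   # ~7e-3: one photon every few minutes
print(f"star: {star:.1e} photons/s")   # ~3e14
print(f"contrast: {star / ship:.0e}")  # ~4e16, firefly vs. stadium spotlight
```

And the real killer is the bright source's shot noise: over one second the star's count fluctuates by roughly its square root, ~1.6e7 photons, which buries the ship's handful completely.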
- Ariphaos
- Jedi Council Member
- Posts: 1739
- Joined: 2005-10-21 02:48am
- Location: Twin Cities, MN, USA
Re: Why do most wannabe SF writers reject science?
Samuel wrote:
> Not if the offspring are non-sentient.

It's inefficient and pointless. A species going on about that sort of crusade will only get so far before thoroughly hamstringing itself.
> You aren't clear- how is child abuse a violation of the code?

I figured it was self-evident: child abuse limits the child's ability to determine their own fate, either then or at some future moment.
> As for consistency, hardly. The boundaries for self-determination are limited by people you have power over. It has no other justification than the fact that you are currently unable to impose your will on others.

No idea what you are trying to say here. That the only reason someone would support the philosophy is because they don't have influence over others?
> Can you explain? I need to read the story to the part about the superhappies, but these stand out as not making a lot of sense.

...then read it.
They force humans - and themselves - to produce hundreds of non-sentient offspring each, which they will then consume, as their gift to the babyeaters. In the meantime, they upgrade humanity whether humans want to or not, to force them to experience an emotion 'greater than love' because it shares experiences in the truest possible fashion.
They expressly rework each human's goals and passions. Which to some is going to be tantamount to genocide - what sets you apart from a sufficiently advanced rational computer but for your ability to decide your own purpose in life?
Simon_Jester wrote:
> I don't really want to get into a raging debate about this, but I hope I'll be forgiven if I regard Dyson statite swarms capable of absorbing substantial chunks of the output of a main sequence star as being "less hard" than, say, coilguns. Or Orion drives.

A coilgun capable of accelerating a projectile to any notable fraction of c is far, far more impractical than a statite swarm. The basic math for such a swarm suggests that, at small scales, it is feasible. This is not the case for a coilgun bringing a projectile to even a few percent of c - you quickly end up with coilguns measuring the better part of a light-year (or longer) to avoid vaporizing the projectile and coil along the way.
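For what it's worth, the barrel length is just v²/2a once you pick an acceleration cap, and that cap (set by ohmic heating and structural limits) is the contested assumption. A toy sketch, with made-up caps:

```python
C, LY, G = 3.0e8, 9.46e15, 9.81   # m/s, m per light-year, m/s^2

def barrel_length(v_frac_of_c, accel_gees):
    """Length of a constant-acceleration coilgun, L = v^2 / (2a)."""
    v = v_frac_of_c * C
    return v ** 2 / (2 * accel_gees * G)

for gees in (1e2, 1e5):             # assumed survivable acceleration caps
    for v in (0.01, 0.1):
        L = barrel_length(v, gees)
        print(f"{v:.0%} c at {gees:.0e} g: {L:.1e} m ({L / LY:.1e} ly)")
```

Under these (assumed) caps the lengths come out in the millions-of-km-to-AU range rather than light-years; a light-year barrel implies sub-milligee acceleration, so the real disagreement is over what acceleration the projectile and coils can survive without vaporizing.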
> As I understand it, ordinary radio traffic doesn't propagate for more than a few light years before it becomes effectively indistinguishable from the background. You'd have to go deliberately out of your way to detect an alien species and start talking to them across interstellar distances, and there are reasons not to do that. So it's fairly plausible for scout ships to not notice each others' presence until they're all converging on the same star. Of course, in that case the scout/colony interaction between the three races will be repeated in massive parallel throughout the region of first contact, which makes the moral choices of the participants in any one contact largely irrelevant.

The problem is that the only way to expand at more than ~10% of c or so is to use light sail braking. Which can get extremely close to c, yes, but you're also littering space with thousand-kilometer-wide light sail remnants.
> Umm... I think you're misunderstanding the "aliens might be smarter than we are" as "humans are retarded." There's a difference. Do you contend that humanity must necessarily set the galactic gold standard for technological innovation and general intelligence? Because I honestly can't see that as being something we can realistically infer from a sample size of one intelligent species. Remember, the same story also presents us with at least one alien species that is our mental equal, perhaps slightly our inferior. Yudkowsky is not deliberately trying to portray humans as the fools of the galaxy here.

Five hundred years after humans colonize the first star system, I would expect the rate of colonized systems to be measured in 'per second and accelerating fast' rather than 'per year'.
> As for the "utilitarians would autoreject baby-eating" point, that may depend heavily on whether one assigns ethical weight to technical efficiency. YOU do, I'm sure, but the Superhappies may not consider an efficient method for achieving a goal to be ethically superior to an inefficient one. They seem chiefly concerned with the 'naive utilitarian' goal of preventing suffering and creating happiness, and applying that rule and only that rule can create some dreadfully perverse outcomes... by human standards.

It's more of a sustainability thing - ideally, you don't actually permanently decrease your fitness when helping others. It reduces your ability to help, for one.
> I'm not sure I explained myself clearly. What I'm getting at is that the Superhappies themselves do NOT apply the principle of self-determination; to them all actions should be determined by a universally understandable moral code. You have no right not to do the right thing as far as they're concerned. It doesn't matter if self-determination is internally consistent to them; they don't care about it. As far as I can tell they've never heard of it. And when they see an alien species attempting to justify doing something horrid to its own offspring by invoking "self-determination," they're going to (from their point of view) call bullshit.

And so would we. For us, it's a moral conflict; for them, it isn't. It's what makes humanity's exodus into the cosmos something potentially and hopefully different than a homogenizing swarm.
> The Superhappies were also trying to prevent certain types of pain that people inflict on each other, and allow to happen to children (who are by nature unable to decide to embrace pain). The Superhappies might be willing to leave us alone if we could honestly argue that all of us who feel pain chose to feel pain after considering the alternatives, but neither we today nor story-we can honestly say that.

It does seem odd that a humanity which genetically modified itself to live for centuries did nothing at all about, say, the excess nerves in the coccyx and toes, birth pain, and so on.
> Quite possibly, although they seem to have found a way to get along among themselves without the principle of self-determination. One key thing to remember when thinking about aliens is that they may have alternate methods of solving problems that we haven't invented, or simply not experience problems we consider to be massive and in urgent need of solutions. Especially problems that revolve around attitudes, such as the ones that arise from disagreement between beings.
>
> And maybe the Superhappies' communal weirdtopia does get crushed eventually. Still our bad luck to run into them before they incorporate a lethal number of flaws. Shit like that happens: colonial empires were unstable over the long term too, but that didn't mean they couldn't conquer anyone before things started to go wobbly.

Wasn't arguing against any of that. FTL being impossible may not be a bad thing, to relate this back to the topic. I've been toying with the idea of a billion-year war against a homogenizing swarm as a story backdrop.
Give fire to a man, and he will be warm for a day.
Set him on fire, and he will be warm for life.
- Simon_Jester
- Emperor's Hand
- Posts: 30165
- Joined: 2009-05-23 07:29pm
Re: Why do most wannabe SF writers reject science?
So? "Utilitarian" is not a synonym for "practical." Moreover, there are a lot of ways for a species to engineer around inefficiencies, and it's highly unlikely that they'll ever have to deal with a species that has zero inefficiencies, which is the only way it's sure to face stiff competition.Xeriar wrote:It's inefficient and pointless. A species going on about that sort of crusade will only get so far before thoroughly hamstringing itself.Samuel wrote:Not if the offspring are non-sentient.
Nature is not a perfectly efficient machine, and its products aren't likely to be, either.
_______
> You aren't clear- how is child abuse a violation of the code?

Xeriar wrote:
> I figured it was self-evident: child abuse limits the child's ability to determine their own fate, either then or at some future moment.

Reassuring, but if you apply the argument consistently, you have a hard time raising children at all, because almost anything you can conceivably teach them will place limits on their future fate in one way or another. We generally accept that parents have a right/obligation that partly trumps the child's right to self-determination... at which point it mostly becomes a matter of negotiating where that right/obligation stops. It still has to exist.
_________
> As for consistency, hardly. The boundaries for self-determination are limited by people you have power over. It has no other justification than the fact that you are currently unable to impose your will on others.

Xeriar wrote:
> No idea what you are trying to say here. That the only reason someone would support the philosophy is because they don't have influence over others?

As a practical matter, the only reason for you to support self-determination when we disagree is that it's not feasible for you to make me do things your way.
Think about it. People who actually have enough power to reliably control others breach self-determination all the time, and there's a reason for that: we tend to draw exceptions to the rule "people should decide for themselves" whenever we hit something that is "clearly" wrong. It's one of the reasons fundamentalist libertarianism doesn't work; it's not practical to have people in power and expect them to leave everyone alone even when they see a disaster looming.
And since it's so rare for humans to actually honor the principle of self-determination when it is against our interests to do so and when we have enough power that we can plausibly hope to dictate terms to one another, it should be trivially easy for us to imagine minds that do not value the concept at all. Or, at least, minds that don't elevate it to a core philosophical concept as opposed to a practical observation that making you do the right thing would be too costly.
Xeriar wrote:
> They force humans - and themselves - to produce hundreds of non-sentient offspring each, which they will then consume, as their gift to the babyeaters. In the meantime, they upgrade humanity whether humans want to or not, to force them to experience an emotion 'greater than love' because it shares experiences in the truest possible fashion.
>
> They expressly rework each human's goals and passions. Which to some is going to be tantamount to genocide - what sets you apart from a sufficiently advanced rational computer but for your ability to decide your own purpose in life?

Yes, and as a human being I'm inclined to agree with the "tantamount to genocide" argument. The problem is that the Superhappies don't think like we do. They have a broadly self-consistent ethical framework (at least, self-consistent enough that we aren't in a good position to condemn them for being less consistent than we are)... but they disagree with our conclusions, and vice versa, because they're alien. To quote the story, it's a case of "incompatible values."
From a philosophical standpoint, it's a bit tricky to prove the Superhappies wrong without rallying around certain principles. Principles that, when we're honest with ourselves, might very well be an aesthetic thing that doesn't HAVE to matter to all good-willed intelligent beings. In which case the Superhappies are doing bizarre shit to us because they're freaky aliens from our point of view... but from their point of view, the only reason we oppose this sort of logical mutual compromise is because we're freaky aliens with all sorts of absurd fetishes about things like insisting that people have a right to do what they want even when they're wrong.
_________
Simon_Jester wrote:
> I don't really want to get into a raging debate about this, but I hope I'll be forgiven if I regard Dyson statite swarms capable of absorbing substantial chunks of the output of a main sequence star as being "less hard" than, say, coilguns. Or Orion drives.

Xeriar wrote:
> A coilgun capable of accelerating a projectile to any notable fraction of c is far, far more impractical than a statite swarm. The basic math for such a swarm suggests that, at small scales, it is feasible. This is not the case for a coilgun bringing a projectile to even a few percent of c - you quickly end up with coilguns measuring the better part of a light-year (or longer) to avoid vaporizing the projectile and coil along the way.

I didn't say "relativistic coilgun," I said "coilgun." I don't consider "capable of accelerating a projectile to speeds >0.01c" to be a necessary condition for existence in coilguns.
_________
> The problem is that the only way to expand at more than ~10% of c or so is to use light sail braking. Which can get extremely close to c, yes, but you're also littering space with thousand-kilometer-wide light sail remnants.

Oh yes. High-relativistic space drives are going to be highly visible, no question there. But the mere fact that someone is using space drives doesn't tell you enough about their species to avoid the need for something like a classical "first contact." Aiming a comm laser at their nearest star system is better, but given the message turnaround time you might very well be able to get a ship over to their nearest system (or to some system in the middle) long before you really learn anything helpful about them.
_______
> Five hundred years after humans colonize the first star system, I would expect the rate of colonized systems to be measured in 'per second and accelerating fast' rather than 'per year'.

I think you must be assuming high-end von Neumann systems as a precondition, and I don't think that is a necessary assumption. Lower grade automated production means both a slower buildup in each system before it can send out colonies of its own, and a larger minimum payload for each colony ship... which in turn makes individual colonies more expensive and increases the doubling time even more.
________
> As for the "utilitarians would autoreject baby-eating" point, that may depend heavily on whether one assigns ethical weight to technical efficiency. YOU do, I'm sure, but the Superhappies may not consider an efficient method for achieving a goal to be ethically superior to an inefficient one. They seem chiefly concerned with the 'naive utilitarian' goal of preventing suffering and creating happiness, and applying that rule and only that rule can create some dreadfully perverse outcomes... by human standards.

Xeriar wrote:
> It's more of a sustainability thing - ideally, you don't actually permanently decrease your fitness when helping others. It reduces your ability to help, for one.

True, but that's definitely a second-order consideration compared to doing what is right, or what is an immediate practical necessity in order to do the right thing. At least, it is in naive utilitarianism, and intelligent minds have adopted far wackier philosophies than naive utilitarianism.
________
> I'm not sure I explained myself clearly. What I'm getting at is that the Superhappies themselves do NOT apply the principle of self-determination; to them all actions should be determined by a universally understandable moral code. You have no right not to do the right thing as far as they're concerned. It doesn't matter if self-determination is internally consistent to them; they don't care about it. As far as I can tell they've never heard of it. And when they see an alien species attempting to justify doing something horrid to its own offspring by invoking "self-determination," they're going to (from their point of view) call bullshit.

Xeriar wrote:
> And so would we. For us, it's a moral conflict; for them, it isn't. It's what makes humanity's exodus into the cosmos something potentially and hopefully different than a homogenizing swarm.

Sure. Since the story is an investigation of OUR moral conflict (written in a universe where the events in question will most likely never occur in quite that way), I fail to see the problem. For them, dictating terms to a weaker alien race that is acting immorally is simply the obvious thing to do. For us, it's questionable... but that doesn't make their approach incoherent or crazy. From their point of view, we're the idiot lolbertarians of the universe, and it's not a trivial philosophical problem to prove them wrong.
This space dedicated to Vasily Arkhipov
Re: Why do most wannabe SF writers reject science?
> I think you must be assuming high-end von Neumann systems as a precondition, and I don't think that is a necessary assumption. Lower grade automated production means both a slower buildup in each system before it can send out colonies of its own, and a larger minimum payload for each colony ship... which in turn makes individual colonies more expensive and increases the doubling time even more.

Er, the story has FTL. Earth will continue spitting out colony ships even when the near systems are full. Even if Earth makes 1 ship a year and it takes a new system 200 years in order to do the same... you'd be having more than one colony founded every day after 500 years.
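That arithmetic is easy to check with a toy recurrence (assumptions straight from the post: negligible travel time, one ship per year per system, a 200-year ramp-up before a colony launches ships of its own):

```python
RAMP, YEARS = 200, 500    # years before a colony builds ships; horizon

founded = {0: 0}          # year -> colonies founded that year
for t in range(1, YEARS + 1):
    # Earth plus every colony at least RAMP years old launches one ship.
    mature = 1 + sum(n for yr, n in founded.items() if t - yr >= RAMP)
    founded[t] = mature

print(f"colonies founded in year {YEARS}: {founded[YEARS]:,}")  # 5,351
print(f"about {founded[YEARS] / 365:.0f} per day")              # ~15
```

So "more than one a day" holds, though note the rate is thousands per year at the 500-year mark, not anywhere near per second.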
- Ariphaos
- Jedi Council Member
- Posts: 1739
- Joined: 2005-10-21 02:48am
- Location: Twin Cities, MN, USA
Re: Why do most wannabe SF writers reject science?
Simon_Jester wrote:
> So? "Utilitarian" is not a synonym for "practical." Moreover, there are a lot of ways for a species to engineer around inefficiencies, and it's highly unlikely that they'll ever have to deal with a species that has zero inefficiencies, which is the only way it's sure to face stiff competition.
>
> Nature is not a perfectly efficient machine, and its products aren't likely to be, either.

...even the story explicitly states that both humans and the superhappies redesigned themselves - just the humans did far less, for some reason.
> Reassuring, but if you apply the argument consistently, you have a hard time raising children at all, because almost anything you can conceivably teach them will place limits on their future fate in one way or another. We generally accept that parents have a right/obligation that partly trumps the child's right to self-determination... at which point it mostly becomes a matter of negotiating where that right/obligation stops. It still has to exist.

Nature is not clean, and neither is deciding where right stops and wrong begins. Film at 11.
> As a practical matter, the only reason for you to support self-determination when we disagree is that it's not feasible for you to make me do things your way.

That I might wish to see the end result of each of our views has no value as a reason?
Above and beyond the fact that micromanagement is a road to frustration, the idea that humanity should have only one ultimate goal is preposterous on its face.
> Think about it. People who actually have enough power to reliably control others breach self-determination all the time, and there's a reason for that: we tend to draw exceptions to the rule "people should decide for themselves" whenever we hit something that is "clearly" wrong. It's one of the reasons fundamentalist libertarianism doesn't work; it's not practical to have people in power and expect them to leave everyone alone even when they see a disaster looming.

Certainly. Your right to determine what you want to do ends where the rights of others begin. Determining those boundaries is definitely a problem and a messy one.
That is distinct from what the superhappies are doing - they rewrite base goals. "Experience and spread pleasure."
> And since it's so rare for humans to actually honor the principle of self-determination when it is against our interests to do so and when we have enough power that we can plausibly hope to dictate terms to one another, it should be trivially easy for us to imagine minds that do not value the concept at all. Or, at least, minds that don't elevate it to a core philosophical concept as opposed to a practical observation that making you do the right thing would be too costly.

I never argued against imagining it. Not sure what you are trying to say here.
> From a philosophical standpoint, it's a bit tricky to prove the Superhappies wrong without rallying around certain principles. Principles that, when we're honest with ourselves, might very well be an aesthetic thing that doesn't HAVE to matter to all good-willed intelligent beings. In which case the Superhappies are doing bizarre shit to us because they're freaky aliens from our point of view... but from their point of view, the only reason we oppose this sort of logical mutual compromise is because we're freaky aliens with all sorts of absurd fetishes about things like insisting that people have a right to do what they want even when they're wrong.

Their philosophy is their goal condition. Our philosophies place restrictions on our goal condition. The limitation inherent in the former is that, past a certain point, errors in the philosophy will be seen as errors in Nature to be corrected. This is not always a tractable problem unless you somehow get past even ontotechnology - "Can we remove the very concept of pain from the Universe? Let's see..."
> I didn't say "relativistic coilgun," I said "coilgun." I don't consider "capable of accelerating a projectile to speeds >0.01c" to be a necessary condition for existence in coilguns.

More for practicality, compared to railguns or chemical propulsion, they are not very impressive.
> Oh yes. High-relativistic space drives are going to be highly visible, no question there. But the mere fact that someone is using space drives doesn't tell you enough about their species to avoid the need for something like a classical "first contact." Aiming a comm laser at their nearest star system is better, but given the message turnaround time you might very well be able to get a ship over to their nearest system (or to some system in the middle) long before you really learn anything helpful about them.

Given the amount of infodumping being thrown around in the scenario, and the relative ease of translation, there's time for a few hundred message passings. And ships can visit other territories without posing a significant threat. Many absurdly powerful vessels in science fiction can't cope with a Type II civilization without FTL. You are one ship, the star is quadrillions.
> I think you must be assuming high-end von Neumann systems as a precondition, and I don't think that is a necessary assumption. Lower grade automated production means both a slower buildup in each system before it can send out colonies of its own, and a larger minimum payload for each colony ship... which in turn makes individual colonies more expensive and increases the doubling time even more.

No need for such an assumption. The human population does not even need to be growing very fast. "Hey, let's all grab a star!" along with a recognized system authority template is all that's needed. Sol, Tau Ceti, etc. will be pumping them around the cosmos until drift calculations get too hard to do.
> True, but that's definitely a second-order consideration compared to doing what is right, or what is an immediate practical necessity in order to do the right thing. At least, it is in naive utilitarianism, and intelligent minds have adopted far wackier philosophies than naive utilitarianism.

I'm not saying they have to make sense, I'm just saying they don't :-p
> Sure. Since the story is an investigation of OUR moral conflict (written in a universe where the events in question will most likely never occur in quite that way), I fail to see the problem. For them, dictating terms to a weaker alien race that is acting immorally is simply the obvious thing to do. For us, it's questionable... but that doesn't make their approach incoherent or crazy. From their point of view, we're the idiot lolbertarians of the universe, and it's not a trivial philosophical problem to prove them wrong.

The humanity presented in the story was lolbertarian. I think that was part of the point - it's not a given that a spacefaring humanity will be.
Give fire to a man, and he will be warm for a day.
Set him on fire, and he will be warm for life.
- Simon_Jester
- Emperor's Hand
- Posts: 30165
- Joined: 2009-05-23 07:29pm
Re: Why do most wannabe SF writers reject science?
> I think you must be assuming high-end von Neumann systems as a precondition, and I don't think that is a necessary assumption. Lower grade automated production means both a slower buildup in each system before it can send out colonies of its own, and a larger minimum payload for each colony ship... which in turn makes individual colonies more expensive and increases the doubling time even more.

Samuel wrote:
> Er, the story has FTL. Earth will continue spitting out colony ships even when the near systems are full. Even if Earth makes 1 ship a year and it takes a new system 200 years in order to do the same... you'd be having more than one colony founded every day after 500 years.

Excuse me. I think I'm accidentally mixing two assumption sets.
In the STL context, expansion is greatly slowed by the need to fly to a hospitable star system and set up shop. In the FTL context, that time limit is reduced, as you say. Xeriar did not make it clear which context he was talking about, and he used the words "per second and accelerating fast" to describe the rate of colonization.
Unless I'm doing my order of magnitude estimates badly wrong, that is not plausible without either FTL, a very short period of 'dead time' in a colony before it starts sending out new colonies, or technology that makes interstellar colony ships trivially cheap to build. The latter two require advanced von Neumann machines. And since Xeriar seems to greatly prefer the STL context, my natural impression was that he was assuming STL travel, which would rule out the first option.
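For comparison, here is the same toy recurrence as above with an STL hop bolted on; all numbers are assumptions (a 40-year crossing, roughly 4 ly at 0.1 c, on top of the 200-year ramp-up):

```python
HOP, RAMP, YEARS = 40, 200, 500   # transit years, ramp-up years, horizon

founded = {0: 0}                  # year -> colonies founded that year
for t in range(1, YEARS + 1):
    mature = 1 + sum(n for yr, n in founded.items() if t - yr >= RAMP + HOP)
    founded[t] = mature

per_sec = founded[YEARS] / 3.15e7           # seconds in a year
print(f"year-{YEARS} rate: {founded[YEARS]:,}/yr ({per_sec:.0e}/s)")  # 471/yr
```

Hundreds per year rather than colonies per second, which is roughly the gap between the two assumption sets being argued here.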
_______
Simon_Jester wrote:
> So? "Utilitarian" is not a synonym for "practical." Moreover, there are a lot of ways for a species to engineer around inefficiencies, and it's highly unlikely that they'll ever have to deal with a species that has zero inefficiencies, which is the only way it's sure to face stiff competition.
>
> Nature is not a perfectly efficient machine, and its products aren't likely to be, either.

Xeriar wrote:
> ...even the story explicitly states that both humans and the superhappies redesigned themselves - just the humans did far less, for some reason.

Yes, and? The fact that a species redesigns itself does not mean that it will choose to redesign itself only in ways that you, personally, deem efficient. Or that it will somehow be reliably constrained by Darwinian pressures to adopt only efficient strategies before it's met its first possible competitor.
While there are moral systems based primarily around efficiency, such systems are not the only internally consistent ones that can exist. It may be flattering to the intellect to think that everyone will decide that acting as efficiently as possible to promote Darwinian fitness is the right thing to do, but the track record of our own species isn't promising on that question. Evolved brains are not consciously designed to optimize their own Darwinian fitness, so it should come as no surprise if they don't always do so.
_______
> Reassuring, but if you apply the argument consistently, you have a hard time raising children at all, because almost anything you can conceivably teach them will place limits on their future fate in one way or another. We generally accept that parents have a right/obligation that partly trumps the child's right to self-determination... at which point it mostly becomes a matter of negotiating where that right/obligation stops. It still has to exist.

Xeriar wrote:
> Nature is not clean, and neither is deciding where right stops and wrong begins. Film at 11.

Are you agreeing, disagreeing, or side-stepping?
_________
> As a practical matter, the only reason for you to support self-determination when we disagree is that it's not feasible for you to make me do things your way.

Xeriar wrote:
> That I might wish to see the end result of each of our views has no value as a reason?
>
> Above and beyond the fact that micromanagement is a road to frustration, the idea that humanity should have only one ultimate goal is preposterous on its face.

Look, there are two broad options here.
One is that you DO regard self-determination as a supreme virtue, on par with any other moral good. In that case, you can reasonably complain that the Superhappies would be doing something wrong by trying to overwrite your self-determination in the name of their own idea of moral good. However, if you regard self-determination as a supreme virtue, and you wish to be consistent, you must tolerate all sorts of other repugnant things. You must be willing to show forbearance and let people do those things even if you have the power to make them stop, because it is their right to decide what they should do, not yours.
That's a self-consistent position, but history shows that most people don't normally adopt it. So while it's a reasonable thing to pick as a moral value, it would be dishonest to claim that it's an innately human moral value. Or that the idea of self-determination is essential to human nature. Plenty of humans ignore the right of self-determination whenever they get the chance, and while you may not like what they do, they're definitely as human as you are.
The other possibility is that you DO NOT regard self-determination as a supreme moral value, but only as a secondary value. In that case, there will be quite a few situations where you would choose to stop some bad thing from happening if you had power. Ethical questions like "if you were Superman, would you intervene to stop the Darfur genocide?" are easy to formulate, and they strike right at the core of this issue.
For most people, the answer seems to be "yes;" if it is practical for them to impose their ideas of moral behavior on others, they will often try to do so and will only rarely decline to do so. Most cases where people apply self-determination in real life come about because it's impractical to make others do what you want, not because everyone is an ideal libertarian who respects other people's decisions even when those decisions are 'clearly' wrong and stupid.
That's a common vein of human behavior; there are many people who choose to be busybodies, and who interfere in the actions and ethical decisions of others for many reasons. So why should we be surprised to see a similar vein of behavior in an alien species, with aliens who have the technological resources to do as they please with us easily choosing to reshape us into a more 'ethical' species... whether we want to be reshaped or not?
That doesn't mean they are right to do so, but it does mean that it's plausible that they would try.
________
> Think about it. People who actually have enough power to reliably control others breach self-determination all the time, and there's a reason for that: we tend to draw exceptions to the rule "people should decide for themselves" whenever we hit something that is "clearly" wrong. It's one of the reasons fundamentalist libertarianism doesn't work; it's not practical to have people in power and expect them to leave everyone alone even when they see a disaster looming.

Xeriar wrote:
> Certainly. Your right to determine what you want to do ends where the rights of others begin. Determining those boundaries is definitely a problem and a messy one.
>
> That is distinct from what the superhappies are doing - they rewrite base goals. "Experience and spread pleasure."

To a dedicated utilitarian who's trying to optimize pleasure and minimize pain, there really isn't any difference between placing external controls on your behavior and setting internal controls by rewriting your basic drives. What matters is the outcome and only the outcome, and only as measured in the terms the utilitarian cares about. That's the problem with naive utilitarianism, and it's a big reason why people have come up with more refined versions of utilitarian ethics over the past 150 years or so.
> I didn't say "relativistic coilgun," I said "coilgun." I don't consider "capable of accelerating a projectile to speeds >0.01c" to be a necessary condition for existence in coilguns.

Xeriar wrote:
> More for practicality, compared to railguns or chemical propulsion, they are not very impressive.

I will freely concede this since I haven't done the math or carefully watched someone else do it, but the point remains that coilguns are hard sci-fi by any reasonable definition. They may be inefficient, but it's obvious that they will work, at least within the bounds of a certain performance envelope.
> I think you must be assuming high-end von Neumann systems as a precondition, and I don't think that is a necessary assumption. Lower grade automated production means both a slower buildup in each system before it can send out colonies of its own, and a larger minimum payload for each colony ship... which in turn makes individual colonies more expensive and increases the doubling time even more.

Xeriar wrote:
> No need for such an assumption. The human population does not even need to be growing very fast. "Hey, let's all grab a star!" along with a recognized system authority template is all that's needed. Sol, Tau Ceti, etc. will be pumping them around the cosmos until drift calculations get too hard to do.

There are still logistical issues involved: how long does it take to cross interstellar distances? How does the cost of colony expeditions compare to the resources available for interstellar colonies (which may be only a tiny fraction of civilization's total; no one seriously talks about spending more than a few percent of GDP to build a moonbase today, after all)? How long will it take for each colony to become a source of new colonies? At what point is a given star system surrounded by its own colonies to the point where no viable targets for colonization remain?
Any coherent mathematical model of interstellar expansion has to take those factors into account, and my understanding of the models is that you don't get growth in the colonies/second range after 500 years unless you make fairly optimistic assumptions about the technology used to do the expansion. Like FTL AND advanced von Neumann machines AND a high degree of social willpower driving the colony effort.
_________
> True, but that's definitely a second-order consideration compared to doing what is right, or what is an immediate practical necessity in order to do the right thing. At least, it is in naive utilitarianism, and intelligent minds have adopted far wackier philosophies than naive utilitarianism.

Xeriar wrote:
> I'm not saying they have to make sense, I'm just saying they don't :-p

I disagree, because I understand naive utilitarianism well enough to fear it when it's taken to extremes by someone with really significant power.
_________
> Sure. Since the story is an investigation of OUR moral conflict (written in a universe where the events in question will most likely never occur in quite that way), I fail to see the problem. For them, dictating terms to a weaker alien race that is acting immorally is simply the obvious thing to do. For us, it's questionable... but that doesn't make their approach incoherent or crazy. From their point of view, we're the idiot lolbertarians of the universe, and it's not a trivial philosophical problem to prove them wrong.

Xeriar wrote:
> The humanity presented in the story was lolbertarian. I think that was part of the point - it's not a given that a spacefaring humanity will be.

Could you expand on that?
I mean, the Superhappies could reasonably call future-humanity 'idiot lolbertarian'*. In their frame of reference, we make an insane fetish out of self-determination even when it means doing things that are "obviously" wrong. Humans tend to have to choose between individual liberty and having a dictator or oligarchy, because that's how we're wired. We don't naturally all decide to do the same things unless someone is making us do the same thing. To a species with less diversity in its value systems, we might seem like a bunch of freaks who alternate between total chaos (everyone doing their own thing and many of them doing the wrong thing) and psychotic dictatorship (everyone following the leader, whether he's right or wrong). We don't even have a good word for how we'd look from that point of view, because it's not a human point of view in the first place.
*I don't think it's fair for us to call future-humanity of that story "lolbertarian," because it's not obvious that their society runs on the kind of minarcho-capitalism a really mockable lolbertarian embraces. There's definitely a government, and it's led by the kind of rigorously screened technocrats that most people on this forum would just love to pieces. It may be 'libertarian' in the sense that individual liberties are largely unconstrained, partly because we've fooled around with the social conditions to the point where incentive structures are saner. But there's definitely a government in place that can and will interfere if things start to get stupid.
This space dedicated to Vasily Arkhipov
- Ariphaos
- Jedi Council Member
- Posts: 1739
- Joined: 2005-10-21 02:48am
- Location: Twin Cities, MN, USA
Re: Why do most wannabe SF writers reject science?
Simon_Jester wrote:
> ...
> Unless I'm doing my order of magnitude estimates badly wrong,
> ...

The sun puts out nearly 4e26 watts of energy. It's easy to just not grasp the magnitude of that number and what it means.
Pretty much the only limit to how much Sol itself can colonize is the inherent limit of our knowledge of a distant star's position. That leaves millions of star systems to be colonized by Sol alone, though we might prefer to set up a sail exchange with a nearby, mineral-rich star system like Tau Ceti instead and have it fire off fifty million or however many sails we want.
> Yes, and? The fact that a species redesigns itself does not mean that it will choose to redesign itself only in ways that you, personally, deem efficient. Or that it will somehow be reliably constrained by Darwinian pressures to adopt only efficient strategies before it's met its first possible competitor.

You made the claim that it's probable that -no one in the entire Universe- will gun for pure efficiency.
That's not an assumption I would make if I were deciding the future of my species, but that may just be me.
> Are you agreeing, disagreeing, or side-stepping?

I agree that how one handles one's offspring or other sapient creations - including how many you get to make - needs to be evaluated when setting self-determination as an ideal. It can't mean not imposing yourself on others at all. It can't mean never interfering at all. It can't mean avoiding all conflict. It most certainly means allowing - and enacting - some rather repugnant policies down the road.
Some problems really don't have a perfectly nice solution. You can make them nice enough for any naturally evolved species, however - the Universe may be harsh but it is plentiful.
> Look, there are two broad options here.
>
> One is that you DO regard self-determination as a supreme virtue, on par with any other moral good. In that case, you can reasonably complain that the Superhappies would be doing something wrong by trying to overwrite your self-determination in the name of their own idea of moral good. However, if you regard self-determination as a supreme virtue, and you wish to be consistent, you must tolerate all sorts of other repugnant things. You must be willing to show forbearance and let people do those things even if you have the power to make them stop, because it is their right to decide what they should do, not yours.
>
> That's a self-consistent position, but history shows that most people don't normally adopt it. So while it's a reasonable thing to pick as a moral value, it would be dishonest to claim that it's an innately human moral value.

I think this last sentence is where I would disagree.
You don't need to uphold something perfectly to consider it a virtue. Jefferson did not like slavery, but not enough to let go of his slaves. I don't like a lot of the things I do or don't do - some of them due to medical conditions. There are many things about myself and the way I behave that I would love to change if I could.
Thanks to modern psychiatry and medicine, I even have to some degree. It is certainly possible.
Regardless, though, we tend to value our own freedom, regardless of anyone else. I am sure you do.
Back to the first sentence - the Sun and 4e26 watts. A billionth is enough to boil Earth's oceans and then some. Whatever group gets that first billionth is going to decide the future of the human race, if they have not already done so. There is going to be a time - and we may live to see it - when one person or many persons are going to make the decisions to set that in motion. Some think it will be through a seed AI, I don't think it will be so simple, but unless humanity destroys itself, it's going to happen.
Being a human you can take that how you wish, of course. Since it has not happened yet, however, you can hope for the best case scenario about what those decisions eventually are, and attempt to see that come about.
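The "billionth of the Sun's output" figure above does check out as an order of magnitude; a rough sketch, using standard values for ocean mass and water's heat capacity and heat of vaporization:

```python
P_SUN  = 3.8e26                 # W, solar luminosity
P_BEAM = P_SUN / 1e9            # a billionth: ~4e17 W
OCEANS = 1.4e21                 # kg of seawater
# heat ~15 C water to 100 C, then vaporize it all
energy = OCEANS * (4186 * 85 + 2.26e6)          # J, ~3.7e27
years  = energy / P_BEAM / 3.15e7

print(f"beam: {P_BEAM:.1e} W")                  # ~2x Earth's total insolation
print(f"oceans boiled in ~{years:.0f} years")   # ~300
```

Slow as a death ray, but more than double the sunlight the entire planet receives, delivered at someone else's discretion, is decisive either way.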
> Or that the idea of self-determination is essential to human nature. Plenty of humans ignore the right of self-determination whenever they get the chance, and while you may not like what they do, they're definitely as human as you are.
>
> The other possibility is that you DO NOT regard self-determination as a supreme moral value, but only as a secondary value. In that case, there will be quite a few situations where you would choose to stop some bad thing from happening if you had power. Ethical questions like "if you were Superman, would you intervene to stop the Darfur genocide?" are easy to formulate, and they strike right at the core of this issue.

Darfur and other conflicts between groups are not really core issues. Authorizing the use of force in order to prevent conflict - or even to prevent the gain of an unfair edge (we're talking about harnessing the power of stars, here) - is easy to formulate. Numbers get chosen and you can provide for a system that drives colonization.
Dealing with the creation of sapient agents is another matter entirely. Forget child abuse, at what point, in our quest to make more complex virtual universes, do the characters in those universes deserve rights? It's very easy to postulate a future humanity in which the babyeaters are laughably benign to their children in comparison.
I imagine this was intentional on Yudkowsky's part. The baby eaters are a lesser reflection of a potential future of humanity. So are the superhappies. The actual humans in his story make the least sense as a reflection of a possible future.
> For most people, the answer seems to be "yes;" if it is practical for them to impose their ideas of moral behavior on others, they will often try to do so and will only rarely decline to do so. Most cases where people apply self-determination in real life come about because it's impractical to make others do what you want, not because everyone is an ideal libertarian who respects other people's decisions even when those decisions are 'clearly' wrong and stupid.

With one exception, I've seen the opposite. Most people, when needs are met and information is sufficient (which includes upbringing to some degree), are happy to cooperate and adjust their goals around those of others, and feel better for doing so, becoming more tolerant as time goes on. The exception is when another's goal is to impose their whim on someone else. We as a species are more tolerant of being imposed upon than of witnessing one agent imposing on another.
Of course, by their very nature, people who seek authority need to be able to impose themselves on others. Frank Herbert called this 'power attracts the corruptible' - and while not a perfect statement, it is apt enough.
> That's a common vein of human behavior; there are many people who choose to be busybodies, and who interfere in the actions and ethical decisions of others for many reasons.

Usually it's because they want to make sure they can put bread on the table. That sort of situation does not apply in a limited scarcity scenario.
> So why should we be surprised to see a similar vein of behavior in an alien species, with aliens who have the technological resources to do as they please with us easily choosing to reshape us into a more 'ethical' species... whether we want to be reshaped or not?
>
> That doesn't mean they are right to do so, but it does mean that it's plausible that they would try.

Did I indicate surprise at them? I merely objected to "humanity's" lack of a comparative moral ideal in the story - it treated 'let the markets decide' as a comedy. There are more mirrors to evaluate than just the babyeaters and superhappies, and more variants of far-future libertarianism than some joke.
> To a dedicated utilitarian who's trying to optimize pleasure and minimize pain, there really isn't any difference between placing external controls on your behavior and setting internal controls by rewriting your basic drives. What matters is the outcome and only the outcome, and only as measured in the terms the utilitarian cares about. That's the problem with naive utilitarianism, and it's a big reason why people have come up with more refined versions of utilitarian ethics over the past 150 years or so.

I think we can easily agree that 'naive just about anything' works rather poorly as a grounding philosophy.
> There are still logistical issues involved: how long does it take to cross interstellar distances? ... At what point is a given star system surrounded by its own colonies to the point where no viable targets for colonization remain?

The further away, the closer to half of c you can get, but the harder it becomes to accurately determine the star's true position, and of course past a kiloparsec or so refraction is a pain. This isn't a question we can really answer, not being an emergent Type II civilization, but if the light sail is appropriately designed, it should actually be able to make significant course corrections on its own - both by using conventional thrusters to maintain course along the lightpath while undergoing acceleration, and by angling the reflectors when the sail separates during deceleration.

...to put it bluntly, you are vastly underestimating the potential size of the first wave. Human civilization could easily expand at ~0.4 c effectively unchecked, forever. Faster when dealing with intergalactic space.
> How does the cost of colony expeditions compare to the resources available for interstellar colonies (which may be only a tiny fraction of civilization's total; no one seriously talks about spending more than a few percent of GDP to build a moonbase today, after all)? How long will it take for each colony to become a source of new colonies?

Negligible, and not long enough to make a significant impact on the rate of expansion, if a system is set up to reasonably guarantee that light beams will be made to persist for millennia.
> Any coherent mathematical model of interstellar expansion has to take those factors into account, and my understanding of the models is that you don't get growth in the colonies/second range after 500 years unless you make fairly optimistic assumptions about the technology used to do the expansion. Like FTL AND advanced von Neumann machines AND a high degree of social willpower driving the colony effort.

FTL is not required. "Advanced" (by our standards) von Neumann machines are not exactly what you want - the process is going to be best performed by a variety of machines working in tandem - the key machines gathering resources will not be the ones generating power or producing new machines. As for social willpower, a thousandth of the Sun's output can provide enough light pressure to launch hundreds of thousands of thousand-km radius sails. The Solar System does not want for energy, or materials, or people with the will to explore.
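Rough numbers behind that last claim; everything here is an assumption for illustration (perfect reflection, a 1 g/m² sail film, a payload massing as much as the sail):

```python
import math

P_TOTAL = 3.8e26 / 1e3   # W: a thousandth of the Sun's output
N_SAILS = 2e5            # sails driven simultaneously
R       = 1e6            # m: a thousand-km sail radius
SIGMA   = 1e-3           # kg/m^2: assumed film areal density
C       = 3.0e8          # m/s

area   = math.pi * R ** 2             # ~3e12 m^2
mass   = 2 * SIGMA * area             # film plus an equal payload mass
thrust = 2 * (P_TOTAL / N_SAILS) / C  # N per sail, perfect reflector
accel  = thrust / mass
years_to_01c = 0.1 * C / accel / 3.15e7

print(f"{thrust:.1e} N/sail, {accel:.1f} m/s^2, "
      f"~{years_to_01c:.1f} yr to 0.1 c")   # ~1e10 N, ~2 m/s^2, ~0.5 yr
```

Under those assumed numbers the energy budget for hundreds of thousands of simultaneous launches is not the bottleneck; the film itself, pointing accuracy over light-years, and braking at the far end are.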
> I disagree, because I understand naive utilitarianism well enough to fear it when it's taken to extremes by someone with really significant power.

I meant it in in-universe terms - that they are comic; eventually they will meet their betters and will be worse for the flaws they have acquired along the way.
"We make our decisions based on prediction markets." - the lobertarians really did win in EY's future humanity, as opposed to being addicted to the "Make your own World of Whorecraft each week!" or "Let's eliminate all pain and all have eternal sex!"Could you expand on that?
I mean, the Superhappies could reasonably call future-humanity 'idiot lolbertarian'*. In their frame of reference, we make an insane fetish out of self-determination even when it means doing things that are "obviously" wrong. Humans tend to have to choose between individual liberty and having a dictator or oligarchy, because that's how we're wired. We don't naturally all decide to do the same things unless someone is making us do the same thing. To a species with less diversity in its value systems, we might seem like a bunch of freaks who alternate between total chaos (everyone doing their own thing and many of them doing the wrong thing) and psychotic dictatorship (everyone following the leader, whether he's right or wrong). We don't even have a good word for how we'd look from that point of view, because it's not a human point of view in the first place.
> *I don't think it's fair for us to call future-humanity of that story "lolbertarian," because it's not obvious that their society runs on the kind of minarcho-capitalism a really mockable lolbertarian embraces. There's definitely a government, and it's led by the kind of rigorously screened technocrats that most people on this forum would just love to pieces. It may be 'libertarian' in the sense that individual liberties are largely unconstrained, partly because we've fooled around with the social conditions to the point where incentive structures are saner. But there's definitely a government in place that can and will interfere if things start to get stupid.

It is explicitly declared that markets drive their decisionmaking in the fiction.
Give fire to a man, and he will be warm for a day.
Set him on fire, and he will be warm for life.
Re: Why do most wannabe SF writers reject science?
> You made the claim that it's probable that -no one in the entire Universe- will gun for pure efficiency.
>
> That's not an assumption I would make if I were deciding the future of my species, but that may just be me.

Because if they were, they would replace themselves with machines. Robots and AIs are more efficient than any organic competitor. However, to do that you have to be willing to give up everything that makes up your species.
> I agree that how one handles one's offspring or other sapient creations - including how many you get to make - needs to be evaluated when setting self-determination as an ideal.

Why?
> Jefferson did not like slavery, but not enough to let go of his slaves.

And for that we consider him a hypocrite. George Washington was consistent enough to not sell off the kids and to free them after his wife's death - Jefferson only freed those who were related to him and sold off slaves whenever he needed more cash.
> Regardless, though, we tend to value our own freedom, regardless of anyone else. I am sure you do.

I don't. I can work pretty well with extremely limited freedom - in fact I work better when I have fewer options. Interestingly enough, people find it easier to choose when they have fewer and fewer options or one is selected for them - though they get indignant when they realize they have been manipulated.
Of course this also depends on what you mean by freedom.
> Whatever group gets that first billionth is going to decide the future of the human race, if they have not already done so.

The US in 1945 had half the world's industrial might, the sole stockpile of nuclear weapons and a massive navy and army. However, it did not decide the future of human history or obliterate its opponents. There is a difference between the power to destroy and the power to change the behavior of others.
> Dealing with the creation of sapient agents is another matter entirely. Forget child abuse, at what point, in our quest to make more complex virtual universes, do the characters in those universes deserve rights?

Well, if you are a utilitarian that isn't a problem - you don't believe in rights, remember?
> Most people, when needs are met and information is sufficient (which includes upbringing to some degree), are happy to cooperate and adjust their goals around those of others, and feel better for doing so, becoming more tolerant as time goes on.

Isn't that the whole point of this conflict? The superhappies believe that needs are not being met and that our upbringing was not sufficient.
> Usually it's because they want to make sure they can put bread on the table. That sort of situation does not apply in a limited scarcity scenario.

Typo? Also, why does it not apply in a situation without limited scarcity? Humans judge their wealth by what others have, not what they need to survive.
> I think we can easily agree that 'naive just about anything' works rather poorly as a grounding philosophy.

Isn't the reason it is wrong that it fails to get the outcome desired (aka other actors are involved and stop you), not that the logic is inherently flawed?
Re: Why do most wannabe SF writers reject science?
..."No, us. The ones who remembered the ancient world. Back then we still had our hands on a large share of the capital and tremendous influence in the grant committees. When our children legalized rape, we thought that the Future had gone wrong."
Do we want to continue working off "less wrong than baby eating?" Because the humans in the story are fucking insane.
- Ariphaos
- Jedi Council Member
- Posts: 1739
- Joined: 2005-10-21 02:48am
- Location: Twin Cities, MN, USA
Re: Why do most wannabe SF writers reject science?
Samuel wrote:
> Because if they were, they would replace themselves with machines. Robots and AIs are more efficient than any organic competitor. However, to do that you have to be willing to give up everything that makes up your species.

That would be the reasoning of a single individual race that did not make a mistake. In an infinite Universe, even if every species prefers to remain biological (specious as that is), claiming that none of them will make a mistake and end up with a gray death swarm is not a bet I would make.
Why what - why determine what you can do to your most advanced creations, or why limit how many of them you can make? The former is the entire point of this derailment, the latter is obvious to any non-retard. I'm not sure what you are asking.Why?
...half of the world's industrial might is rather much less than effectively 100% of total civilization capability. Not 99.9%. 100%. All of it. All power production, all industrial production, all computing, all resource harvesting. Once someone gains control of the parent star, they win. If they screw up, you freeze or boil. If they don't like you, you hope to just freeze or boil. You and all the nations of Earth with their pathetic excuse for 'weaponry' have no say in the matter whatsoever.The US in 1945 had half the world's industrial might, the sole stockpile of nuclear weapons and a massive navy and army. However it did not decide the future of human history or obliterate its opponents. There is a difference between the power to destroy and the power to change the behavior of others.
Since I'm arguing for self determination and not utilitarianism, I'm not sure what you are trying to say. By utilitarianism, we are all pretty much worthless.Well, if you are a utilitarian that isn't a problem- you don't believe in rights remember?
That would be a reasonable argument to make on an individual basis. If it's superior, people will convert eventually.Isn't that the whole point of this conflict? The superhappies believe that needs are not being meet and that our upbringing was not sufficient.
...with limited scarcity. Grab your friends, colonize a random star, have access to more wealth than the entirety of human civilization prior to space colonization.Typo? Also why does it not apply in a situation without limited scarcity? Humans judge their wealth by what others have, not what they need to survive.
We call the end result of that 'solipsism'. Morality by definition involves other actors.Isn't the reason it is wrong is because it fails to get the outcome desired (aka other actors are involved and stop you), not that the logic is inherently flawed?
EY has some sort of 'legalize rape' fetish, I'm not sure what it is or if he even has any real comprehension of the phenomenon.
...
Do we want to continue working off "less wrong than baby eating?" Because the humans in the story are fucking insane.
Give fire to a man, and he will be warm for a day.
Set him on fire, and he will be warm for life.
Re: Why do most wannabe SF writers reject science?
What is your position here? I can't tell.That would be the reasoning of a single individual race, that did not make a mistake. In an infinite Universe, even if every species prefers to remain biological (specious as that is), claiming that none of them will make a mistake and end up with a gray death swarm is not a bet I would make.
I apologize- I misread "needs to be evaluated" as something else.Why what - why determine what you can do to your most advanced creations, or why limit how many of them you can make? The former is the entire point of this derailment, the latter is obvious to any non-retard. I'm not sure what you are asking.
Because it isn't as if multiple groups will be attempting to exploit the sun at the same time? For a group to get into a position where it has that power... it must already have that power in the first place or else other groups would have prevented it from seizing supremacy....half of the world's industrial might is rather much less than effectively 100% of total civilization capability. Not 99.9%. 100%. All of it. All power production, all industrial production, all computing, all resource harvesting. Once someone gains control of the parent star, they win. If they screw up, you freeze or boil. If they don't like you, you hope to just freeze or boil. You and all the nations of Earth with their pathetic excuse for 'weaponry' have no say in the matter whatsoever.
I know- I'm pointing out this is a non-issue for utilitarianism while an incredibly thorny issue for rights and self-determination. Perhaps if there was some sort of standard and rationale behind the rights so they could be evaluated in comparison with each other so we'd know when one overrides another...Since I'm arguing for self determination and not utilitarianism, I'm not sure what you are trying to say. By utilitarianism, we are all pretty much worthless.
Why? What makes you think that it will erode any faster than communist states or religious cults? Ones that have the resources of entire solar systems.That would be a reasonable argument to make on an individual basis. If it's superior, people will convert eventually.
People don't work that way....with limited scarcity. Grab your friends, colonize a random star, have access to more wealth than the entirety of human civilization prior to space colonization.
http://www.nytimes.com/2008/06/01/fashi ... 0&emc=eta1
We call the end result of that 'solipsism'. Morality by definition involves other actors.
My point was that it neglects to consider how the other actors will attempt to hinder you. Naive utilitarianism considers an individual's actions in a vacuum without considering reputation.
Especially considering it makes you realize the humans are total hypocrites.EY has some sort of 'legalize rape' fetish, I'm not sure what it is or if he even has any real comprehension of the phenomenon.
God, there are so many plot holes and crimes against science... in a realistic scenario the fascist humans and communist babyeaters would ally against the democratic superhappies because they are an exponential threat.
Re: Why do most wannabe SF writers reject science?
Yeah, that was definitely the weakest part of the story IMO. A vague piece of total WTF exposition that is never explained, never elaborated on, has no relevance to the story, and seems to have been put in there basically for shock value.Samuel wrote:Do we want to continue working off "less wrong than baby eating?" Because the humans in the story are fucking insane.
I'm not sure that article entirely supports your point.Samuel wrote:People don't work that way.
http://www.nytimes.com/2008/06/01/fashi ... 0&emc=eta1
The Article wrote:One of her clients recently confessed that his net worth had decreased to $8 million from more than $20 million, and he thinks that his wife will leave him. He has hidden their fall in fortune by taking on debt to pay for her extravagant clothes and vacations.
“I literally had to sit there and tell him that he had to tell his wife that she had to stop spending,” she said. “He was actually scared she would leave him because their financial situation changed so drastically.”
The Article wrote:Interviews with the people who actually see the bank statements, like divorce lawyers and lenders, say their clients are definitely living on less than they did a year ago, regardless of how expansive the definition of “less” may be. Hairstylists and private jet rental companies say the wealthy are cutting back on luxuries like $350 highlights and $10,000-an-hour jet rentals.
The problem is still very much that they have to reduce their standard of living (give up luxuries). It's just that their standard of living is so high anyway that to us the difference looks trivial.The Article wrote:“They fear their kids won’t get invited to the right birthday parties,” said Michele Kleier, an Upper East Side-based real estate broker. “If they have to give up things that are invisible, they’re O.K. as long as they don’t have to give up things visible to the outside world.”
So New York’s very wealthy are addressing their distress in discreet and often awkward ways. They try to move their $165 sessions with personal trainers to a time slot that they know is already taken. They agree to tour multimillion-dollar apartments and then say the spaces don’t match their specifications. They apply for a line of credit before art auctions, supposedly to buy a painting or a sculpture, but use that borrowed money to pay other debts.
Mind you, I won't deny there's a status aspect to wealth that has nothing to do with material desires and basically is a form of dickwaving, which is also strongly at work in the behaviors the article describes. I'm not sure how important it would be, however, in a civilization where there is absolutely no observable difference in the lifestyle of the poorest and wealthiest person. It's actually a rather interesting question (also bringing into it questions of whether there might be things that would still be valuable in a society without material scarcity - original objects of art for instance, or strategically placed pieces of land - e.g. one right next to the Grand Canyon).
I think that would have been a great direction to take the story in. The moral agonies posed by a choice between allying with the Superhappies or Babyeaters would be great dramatic fodder.Samuel wrote:God, there are so many plot holes and crimes against science... in a realistic scenario the fascists humans and communists babyeaters would ally against the democracys superhappies because they are an exponential threat.
1) Ally with the Superhappies against the Babyeaters. Oh, this means humanity is remade in the Superhappies' image.
2) Ally with the Babyeaters. We have interests in common with them as we're both fighting to preserve our way of life. Theirs just happens to include torturous early death of 99% of their population.
3) Ally with the Babyeaters and then BACKSTAB THEM after we've won and forcibly alter their society. Personally I sort of lean toward this one, but it's hard to not smell a heavy aroma of hypocrisy around it, as the Superhappies' argument that we can choose lives of suffering for ourselves but do not have the right to choose it for our children is not without merit. Also, the Babyeaters may BACKSTAB US FIRST and modify us to EAT OUR BABIES.
4) Ally with neither species and attempt to make a stand against the Superhappies on our own. Downside is we'll probably lose.
Actually, I might lean toward a fifth option:
5) Ally with the Superhappies, but with provision. Modified humans will be born in a Superhappy state but at an early age we will develop the ability to consciously switch it off and experience the emotional states of unmodified humans. This way our freedom of experience would be preserved, although I suspect that somebody who'd experienced nothing but perpetual bliss would probably find a normal human life very unappealing (though they might experience it occasionally for curiosity or as a kind of extreme sport), so in practice it would probably lead to the gradual Superhappification of humanity anyway. Which would be quite thought-provoking really, as it would suggest the Superhappies might be right.
Maybe you could have humanity end up in a three way civil war over the issue, with two sides allying with different aliens and a third refusing to ally with either side. Then you could explore the consequences of choosing different options.
Re: Why do most wannabe SF writers reject science?
Best part is when a commentator mentions the fact that Akon uses lipstick as evidence that sex roles are reversed. The rationalizations are hilarious.Yeah, that was definitely the weakest part of the story IMO. A vague piece of total WTF exposition that is never explained, never elaborated on, has no relevance to the story, and seems to have been put in there basically for shock value.
That isn't the only stupid thing- aside from the victory of batshit, pants-on-head-retarded libertarianism you also have the superhappies- aliens that use DNA for both cells and thinking...
...
I think the only way to deal with all the oddities from the superhappies is that they are naive utilitarians- which means everything they say to the humans is a lie. Of course, every physicist conspiring to keep the fact that you can blow up stars a secret, biological spaceships...
I'll stop.
My point was that we are so far beyond anything remotely like scarcity and yet people still have problems and fight over limited resources. I picked that article because I was browsing through News and Politics (it is mentioned on page... 54?) and it illustrates my point pretty well.I'm not sure that article entirely supports your point.
After all, sure you have a shit load of stuff. The problem is that you judge it by what other people have, not by what you need.
Why not? While necessities will be equal among the population, power, prestige and social interaction will always be unequal. The most likely set up will be a vast majority of people on the dole, a sub group of them trying to improve themselves and individuals who are in charge. Unless you have AIs running everything, in which case everyone will be in the first two categories and this won't matter because no one has any real power.I'm not sure how important it would be, however, in a civilization where there is absolutely no observable difference in the lifestyle of the poorest and wealthiest person.
You can make copies of art and use video screens. No, the real value will be other people and status. After all, everything else can be replaced or taken care of by machines but status is something that is both finite and cannot be taken over. Unless we unleash the Lenin Bots.It's actually a rather interesting question (also bringing into it questions of whether there might be things that would still be valuable in a society without material scarcity - original objects of art for instance, or strategically placed pieces of land - e.g. one right next to the Grand Canyon).
Not really. Any individual who thought of humanity's own good first and foremost would ally with the babyeaters. The only problem is that everyone is letting ideology cloud their judgment because they are unused to dealing with foreigners.I think that would have been a great direction to take the story in. The moral agonies posed by a choice between allying with the Superhappies or Babyeaters would be great dramatic fodder.
Seriously, the Babyeaters are r-strategists. How could the "let's launch a crusade" idiot forget that means that when they get to a new world they reproduce like rabbits and colonize extremely quickly, unlike humanity?
That is a non-issue. The superhappies can take the baby eaters down on their own. We have no reason to ally with them even if we want to be changed and, if so, we would change ourselves on our own terms.1) Ally with the Superhappies against the Babyeaters. Oh, this means humanity is remade in the Superhappies' image.
2) Ally with the Babyeaters. We have interests in common with them as we're both fighting to preserve our way of life. Theirs just happens to include torturous early death of 99% of their population.
Bad choice. While the babyeaters are vile, as long as they exist they show other alien races that humanity can live in peace with them no matter how vile we think they are. Any alien race that has never encountered aliens before is a problem, but if you can prove you have encountered aliens before and dealt with them satisfactorily then you put yourself in a much better position.3) Ally with the Babyeaters and then BACKSTAB THEM after we've won and forcibly alter their society. Personally I sort of lean toward this one, but it's hard to not smell a heavy aroma of hypocrisy around it, as the Superhappies' argument that we can choose lives of suffering for ourselves but do not have the right to choose it for our children is not without merit. Also, the Babyeaters may BACKSTAB US FIRST and modify us to EAT OUR BABIES.
Plus we prove to the baby eaters that you don't need to eat babies in order to be moral. Whether it changes them is unknown, but it does disprove the foundation of their belief system. Plus if we get them to change voluntarily we get to play the moral superiority card on the next species we meet if we are challenged. "Sure we allied with the babyeaters, but it was to prove to them that they were wrong, and we managed to change them. We aren't amoral, but pragmatic- so even if you are vile we won't kill you. If you are moral and want to help other species we are on your side as well."
That is on par with "invade the Soviet Union while fighting Great Britain".4) Ally with neither species and attempt to make a stand against the Superhappies on our own. Downside is we'll probably lose.
The problem with this plan is that constant production of the brain chemicals that produce happiness will burn out the receptors and require higher and higher dosages. That, and people don't appreciate things unless they have contrasts to compare them to.Modified humans will be born in a Superhappy state but at an early age we will develop the ability to consciously switch it off and experience the emotional states of unmodified humans. This way our freedom of experience would be preserved, although I suspect that somebody who'd experienced nothing but perpetual bliss would probably find a normal human life very unappealing (though they might experience it occasionally for curiosity or as a kind of extreme sport), so in practice it would probably lead to the gradual Superhappification of humanity anyway. Which would be quite thought-provoking really, as it would suggest the Superhappies might be right.
The superhappies in this story seem to have found a way around that. Of course, they could be lying, in which case the result will be a next generation that is brain dead after x years and a crushing defeat of humanity as they practice their lightning warfare.
Honestly, I am amazed at how gullible and stupid the humans are in this story. Why do they believe anything the superhappies tell them?
Re: Why do most wannabe SF writers reject science?
I did find the idea of a sapient species that communicates by swapping mind-states during sex interesting and reasonably plausible. The idea that evolution on some world might somehow co-opt the exchange of bodily fluids to be a means of communication seems reasonably plausible to me. To do it you'd need some sort of information-carrier molecule, and why not use DNA? Protein could be more efficient (many more amino acids in proteins than nucleotides in DNA, so you have more ability to cram complex messages into relatively short molecules), but evolution often ends up using sub-optimal solutions. It'd be helpful if the organism had already co-opted DNA to serve as an intracellular signalling system (equivalent to our neurotransmitters and hormones), which also seems reasonably plausible for an alien organism. The trickiest part would be to have a way for extremely detailed mind-state information to be encoded into the information-carrier strands as they're being synthesized without having to encode it first in the nuclear DNA (which would probably require an impractically long genome - we have more neural connections than genes by many orders of magnitude). Maybe use a system where the synthesis can be directed by the firing of specialized neurons.Samuel wrote:That isn't the only stupid thing- aside from the victory of batshit pants on head retarded libertarianism you also have the superhappies- aliens that use DNA for both cells and thinking...
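As a back-of-the-envelope check on that density claim, here's a quick sketch in Python (the one-megabit "mind-state packet" size is purely an illustrative assumption):

import math

# Information per monomer: 4 DNA bases vs. 20 standard amino acids
bits_per_nucleotide = math.log2(4)    # 2.0 bits per base
bits_per_amino_acid = math.log2(20)   # ~4.32 bits per residue

packet_bits = 1e6                     # hypothetical mind-state packet
print(packet_bits / bits_per_nucleotide)  # ~500,000 nucleotides
print(packet_bits / bits_per_amino_acid)  # ~231,000 amino acids

So a protein carrier is only a bit more than twice as dense per monomer - a real but not overwhelming advantage, which fits the point that evolution often settles for the suboptimal option that's already lying around.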
Well, yes, but envy and dickwaving still require that some people have stuff that others don't. Which opens up the interesting question of how and where that would be possible in an effectively postscarcity society.My point was that we are so far beyond anything remotely like scarcity and yet people still have problems and fight over limited resources. I picked that article because I was browsing through News and Politics (it is mentioned on page... 54?) and it illustrates my point pretty well.
After all, sure you have a shit load of stuff. The problem is that you judge it by what other people have, not by what you need.
"Unreasonable" wants like your own star system are one obvious area, but if everybody can have access to a starship they should be easy to satisfy. True, the number of stars in the universe may be finite (it depends on whether we live in a closed universe or not) but the Fermi Paradox indicates the universe is probably quite empty and the visible universe alone has many billions of galaxies so more likely than not it won't be a problem for a while. You could try to claim vast swathes of the visible universe but without very fast FTL you could never enforce such a claim so it would be an absurd exercise in futility. "Prestige goods" like original objects of art are a possibility, as is strategically placed bits of land (e.g. next to a famous landmark or over a historically significant spot). Social prestige and power might be another big one - it will of course depend on the exact workings of the society.
Letting ideology cloud your judgment is very human, however. Humans are not particularly rational and their feelings and actions are often not determined by rational game theory.Not really. Any individual who thought of humanity's own good first and foremost would ally with the babyeaters. The only problem is that everyone is letting ideology cloud their judgment because they are unused to dealing with foreigners.
One could modify them to bear small numbers of children.Seriously, the Babyeaters are r-strategists. How could the "let's launch a crusade" idiot forget that means that when they get to a new world they reproduce like rabbits and colonize extremely quickly, unlike humanity?
A good point, and one I hadn't thought of.Bad choice. While the babyeaters are vile, as long as they exist they show other alien races that humanity can live in peace with them no matter how vile we think they are. Any alien race that has never encountered aliens before is a problem, but if you can prove you have encountered aliens before and dealt with them satisfactorily then you put yourself in a much better position.
Plus we prove to the baby eaters that you don't need to eat babies in order to be moral. Whether it changes them is unknown, but it does disprove the foundation of their belief system. Plus if we get them to change voluntarily we get to play the moral superiority card on the next species we meet if we are challenged. "Sure we allied with the babyeaters, but it was to prove to them that they were wrong, and we managed to change them. We aren't amoral, but pragmatic- so even if you are vile we won't kill you. If you are moral and want to help other species we are on your side as well."
The first is a simple bioengineering problem and would presumably be corrected in the process of making us Superhappy (doing so would also have the beneficial side effect of rendering us immune to all sorts of addiction, both chemical and psychological). As for the second, I suspect the Superhappies would say that just because you lose appreciation of pleasure is no reason to desire its end. Most modern Westerners probably don't appreciate their (comparatively) luxurious and safe lifestyle as much as a Middle Ages peasant would in their place, but that doesn't mean it'd be a good idea for us to live like peasants in the Middle Ages.The problem with this plan is that constant production of the brain chemicals that produce happiness will burn out the receptors and require higher and higher dosages. That, and people don't appreciate things unless they have contrasts to compare them to.
Re: Why do most wannabe SF writers reject science?
To do it you'd need some sort of information-carrier molecule, and why not use DNA? Protein could be more efficient (many more amino acids in proteins than nucleotides in DNA, so you have more ability to cram complex messages into relatively short molecules), but evolution often ends up using sub-optimal solutions.
DNA is not just a suboptimal solution. DNA has the exact opposite traits you want for a neurotransmitter. It has
- few different states
- hard to change
- mutable
- packaged in such a way that prevents full usage
- entirely chemical, so that it works more slowly
It also means that if you ever get an infection your body goes insane, because pieces of DNA from anything can cause your cells to go nuts.It'd be helpful if the organism had already co-opted DNA to serve as an intracellular signalling system (equivalent to our neurotransmitters and hormones), which also seems reasonably plausible for an alien organism.
Yeah, but the individuals in this story didn't even bother thinking of that.Letting ideology cloud your judgment is very human, however. Humans are not particularly rational and their feelings and actions are often not determined by rational game theory.
I meant that, unlike humans, you could simply set a small number of baby eaters down on one planet and they would spread like wildfire. Since the main limitation on the spread of colonies is how long until they make their own, this gives them a major advantage.One could modify them to bear small numbers of children.
Of course the baby eaters fear humanity... but it could simply be they assume that humanity has superior numbers because their first contact ship is superior, implying that humanity has gotten to the point where the surface area of the expansion is less than the output of the interior. I'm assuming the baby eaters don't use dedicated exploration vessels, but have their worlds mass produce cheap colony ships to get to the next system.
- Ariphaos
- Jedi Council Member
- Posts: 1739
- Joined: 2005-10-21 02:48am
- Location: Twin Cities, MN, USA
- Contact:
Re: Why do most wannabe SF writers reject science?
The superhappies are a homogenizing swarm that absorbs inefficiencies. A homogenizing swarm that avoids that will overcome them. That's all, really.Samuel wrote:What is your position here? I can't tell.That would be the reasoning of a single individual race, that did not make a mistake. In an infinite Universe, even if every species prefers to remain biological (specious as that is), claiming that none of them will make a mistake and end up with a gray death swarm is not a bet I would make.
Whatever makeup it contains, the individuals responsible for that first step of control either act with care, or everyone on Earth dies. It does not really matter how many you put into that group - it could be everyone on Earth if you wanted - that still doesn't change the magnitude of the danger presented, though it certainly makes it worse.Because it isn't as if multiple groups will be attempting to exploit the sun at the same time? For a group to get into a position where it has that power... it must already have that power in the first place or else other groups would have prevented it from seizing supremacy.
I'm a computer science student, not a philosophy student, so I have only so much to say on the matter.I know- I'm pointing out this is a non-issue for utilitarianism while an incredibly thorny issue for rights and self-determination. Perhaps if there was some sort of standard and rationale behind the rights so they could be evaluated in comparison with each other so we'd know when one overrides another...
I don't.Why? What makes you think that it will erode any faster than communist states or religious cults? Ones that have the resources of entire solar systems.
Showing off in a postscarcity society would almost certainly be solely dependent on one's own creativity. The Sun puts out ~$5e18 worth of electricity (by modern standards) every second. You don't have enough mental processing power to spend it on enough ostentatious trivialities to make a dent in your share of that. If you grab your friends and hightail it to another star, your wealth increases exponentially from that.People don't work that way.
http://www.nytimes.com/2008/06/01/fashi ... 0&emc=eta1
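For anyone who wants to check that ~$5e18 figure, a quick sketch (the ~3.8e26 W solar luminosity is the standard value; the $0.05/kWh retail price is just an assumed modern benchmark):

solar_watts = 3.8e26            # total solar luminosity, J/s
joules_per_kwh = 3.6e6
price_per_kwh = 0.05            # assumed modern electricity price, dollars

dollars_per_second = solar_watts / joules_per_kwh * price_per_kwh
print(f"{dollars_per_second:.1e}")   # ~5.3e18 dollars per second

which lands right on the order of magnitude quoted above.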
It's intended as a parable, I think. Like I said, the babyeaters and superhappies are each a reflection of a potential future humanity.Especially considering it makes you realize the humans are total hypocrites.
God, there are so many plot holes and crimes against science... in a realistic scenario the fascist humans and communist babyeaters would ally against the democratic superhappies because they are an exponential threat.
Give fire to a man, and he will be warm for a day.
Set him on fire, and he will be warm for life.
Re: Why do most wannabe SF writers reject science?
The Superhappies are almost certainly naive utilitarians. There is literally no reason at all to believe anything they are promising. After all, once they change a group to think like them they can get them to continue to embrace future changes until they too are Superhappies.The superhappies are a homogenizing swarm that absorbs inefficiencies. A homogenizing swarm that avoids that will overcome them. That's all, really.
If we are that advanced we will also be using orbitals by that time which reduces the threat a bit.Whatever makeup it contains, the individuals responsible for that first step of control either act with care, or everyone on Earth dies.
The story maybe, the ending? No.It's intended as a parable, I think. Like I said, the babyeaters and superhappies are each a reflection of a potential future humanity.
-
- Emperor's Hand
- Posts: 30165
- Joined: 2009-05-23 07:29pm
Re: Why do most wannabe SF writers reject science?
If you stop and look at what the born-in-the-future guy Akon says in response, it's obvious that while they may be fucking insane, they're not fucking insane in the way we'd expect just from that quote.Samuel wrote:Do we want to continue working off "less wrong than baby eating?" Because the humans in the story are fucking insane."No, us. The ones who remembered the ancient world. Back then we still had our hands on a large share of the capital and tremendous influence in the grant committees. When our children legalized rape, we thought that the Future had gone wrong."
Akon lives in a society where something that both he and the ancient-born Confessor describe as "nonconsensual sex" is legal... but Akon is utterly unfamiliar with the concept of violent rape, to the point where his inability to understand why the Confessor and his generation freaked out at making nonconsensual sex legal drives the Confessor up the wall.
I'm pretty sure we're dealing with a language shift here, or with humans having monkeyed around with their own incentive structures and system for raising children to the point where violent rape is in fact unheard of. Such that "nonconsensual sex" becomes something other than a euphemism for "violent rape," the crime that we of 2009 are painfully familiar with.
And no, I don't think it's very plausible. But since I don't think I have enough information about just how extensively this future-civilization has manipulated human nature, and how much their changes to human upbringing affect the incidence rate of mental illness, it's hard for me to say how implausible it is, or whether it fails to make sense in context.
Did he ever promote this idea anywhere else, or did it just occur in this one place?Xeriar wrote:EY has some sort of 'legalize rape' fetish, I'm not sure what it is or if he even has any real comprehension of the phenomenon.
If it just occurred in one place, then the most likely explanation is that he was trying to prove that his future society is WEIRD, in which case he succeeded beyond his wildest dreams. If he's brought it up before or since, your explanation is as good as anything else I can think of.
=========
I wouldn't either, but if the only way to stop a gray death swarm is to become a gray death swarm, there's not much point worrying about it; either way my actual species is gone and there's nothing but a gray death swarm left picking over the rubble. The only difference is whether the individual death swarm robots have "Made on Terra" or "Made on Omicron Persei VIII" stamped on their outer casing.Xeriar wrote:That would be the reasoning of a single individual race, that did not make a mistake. In an infinite Universe, even if every species prefers to remain biological (specious as that is), claiming that none of them will make a mistake and end up with a gray death swarm is not a bet I would make.
You're still grossly oversimplifying the scenario. That kind of power plant has to be constructed by someone; what are the odds that all the builders of Dyson solar statites will form a unified political bloc? Remember that people could reasonably have said the same thing about nuclear bombs in 1941, knowing how powerful they were; hell, they DID say so. Anyone with nukes will win a war against anyone without, so the first guy to build up a nuclear arsenal wins....half of the world's industrial might is rather much less than effectively 100% of total civilization capability. Not 99.9%. 100%. All of it. All power production, all industrial production, all computing, all resource harvesting. Once someone gains control of the parent star, they win. If they screw up, you freeze or boil. If they don't like you, you hope to just freeze or boil. You and all the nations of Earth with their pathetic excuse for 'weaponry' have no say in the matter whatsoever.
What was missing then (and what is missing from your calculations now) is that the time scale it takes to acquire an all-dominant weapon is longer than the time scale it takes for your competitors to catch on to what you're doing and acquire weapons of comparable power. In this case, their own Dyson statites that answer to their programming, not yours. At which point both sides are quite capable of building laser cannons big enough to boil Earth's oceans, and where are you then? Maybe one side's lasers will take half again as long to finish the job, but the basic problem is still in place.
Having enormous physical power does not guarantee absolute mastery, not least because if you can acquire that power, the odds are that someone else equally clever and ambitious is only slightly behind you in the race for it.
========
The statement "we make our decisions based on prediction markets" gives surprisingly little insight into what kinds of decisions are made and (more importantly) what kinds of decisions are not made. Frankly, it was poorly developed, but the information vacuum is complete enough that we aren't in a good position to fill in the blanks by calling them insane."We make our decisions based on prediction markets." - the lobertarians really did win in EY's future humanity, as opposed to being addicted to the "Make your own World of Whorecraft each week!" or "Let's eliminate all pain and all have eternal sex!"
For example, the senior Administrator of the Huygens system clearly seemed to have enough power to make things happen (destroy a ship, load children onto ships and evacuate them). The prediction markets may be something of an epiphenomenon compared to the actual decision-making process, much as polls are today. Indeed, you can make a case that a well designed prediction market is simply a refined version of a poll. And virtually all non-despotic governments we have today use opinion polls in one way or another.
If I wanted to explain democracy to an alien, I'd be forced to say something like "we make decisions based on elections, where "election" is defined as a process where individuals have the option of selecting a preference, and where the outcome preferred by the greatest number then becomes the official policy." However, that does NOT mean that all our decisions are based on polling (at least, not in republics, which seem to be far preferred over pure democracies); other factors are definitely considered.
In that context, how is "we make decisions based on elections" less insane than "we make decisions based on prediction markets?"
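For concreteness, here's a sketch of one real mechanism "prediction market" could refer to - Hanson's logarithmic market scoring rule (the two-outcome market and the liquidity parameter b are illustrative assumptions on my part, not anything the story specifies):

import math

def lmsr_cost(quantities, b=100.0):
    # Market maker's cost function: C(q) = b * ln(sum(exp(q_i / b)))
    return b * math.log(sum(math.exp(q / b) for q in quantities))

def lmsr_prices(quantities, b=100.0):
    # Instantaneous prices double as consensus probability estimates
    total = sum(math.exp(q / b) for q in quantities)
    return [math.exp(q / b) / total for q in quantities]

q = [0.0, 0.0]                        # "yes" / "no" shares outstanding
print(lmsr_prices(q))                 # [0.5, 0.5] - uninformed market
cost = lmsr_cost([40.0, 0.0]) - lmsr_cost(q)
print(round(cost, 2))                 # ~21.99 paid to buy 40 "yes" shares
print([round(p, 3) for p in lmsr_prices([40.0, 0.0])])  # [0.599, 0.401]

The point being that the output is a probability estimate aggregated from many participants, which is exactly why the comparison to a refined poll isn't crazy.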
========
OK, but you're still making the major assumption of Dyson swarms. Without a Dyson swarm you don't get that kind of power production. Yudkowsky doesn't present us with any other evidence that the guys he's writing about rate a 1.8 or so on the Kardashev scale, so it's truly absurd for you to complain that his future-humanity isn't building interstellar colonies as fast as you'd like. Essentially you're complaining about what you see as a bad policy decision (not to build Dyson swarms and strip-mine the asteroid belts down to dust)... but the fact that a civilization makes bad policy decisions does not mean that they were written by a bad author. At least, not by itself.Xeriar wrote:[on the subject of rapid colonization]The sun puts out nearly 4e26 watts of power. It's easy to just not grasp the magnitude of that number and what it means.
Pretty much the only limit to how much Sol itself can colonize is the inherent limit of our knowledge of a distant star's position. That leaves millions of star systems to be colonized by Sol alone, though we might prefer to set up a sail exchange with a nearby, mineral-rich star system like Tau Ceti instead and have it fire off fifty million or however many sails we want.
Provided sufficient numbers in the first wave, which requires Dyson swarms and asteroid strip-mining, yes. How long does it take civilizations to reach that level of infrastructure? How much of it gets built before someone who fears its potential as a weapon destroys it?...to put it bluntly, you are vastly underestimating the potential size of the first wave. Human civilization could easily expand at ~.4 c effectively unchecked, forever. Faster when dealing with intergalactic space.
I'm not saying Dyson swarms and asteroid strip-mining can't happen. I'm saying they aren't a logical inevitability. Science fiction settings without such heavy industrialization are still going to be subject to the problem of a limited origin rate for colony ships.
_________
That kind of energy will perforce not be controlled by small groups of ideologues; society will destroy itself in short order if it is. So what's needed is not only "people with the will to explore," but people to vote for those people, and people to vote for the taxes required to build up their Dyson Swarm from a sparse array that taps, say, .01% of solar power into something more ambitious that taps .1%. That's one place where things get tricky.As for social willpower, a thousandth of the Sun's output can provide enough light pressure to launch hundreds of thousands of thousand-km radius sails. The Solar System does not want for energy, or materials, or people with the will to explore.
Remember that we're talking about planet-devastating energy levels here. Launch lasers for interstellar colony ships are more dangerous than nuclear weapons by orders of magnitude, and you damn sure don't see those being available to even large civilian organizations. Not even if those organizations protest that what they want the nukes for is 'really important'. So the level of social will required to sign over thousands of laser launch batteries is... nontrivial, just like the level required to sign over an arsenal of hydrogen bombs for an Orion drive.
=========
Ah, but is freedom one of your prime directives (so to speak)? Is it something that you consider co-equal with the highest moral virtues, something that you would sacrifice nearly anything you possess to maintain, even in small amounts?I think this last sentence is where I would disagree. You don't need to uphold something perfectly to consider it a virtue. Jefferson did not like slavery, but not enough to let go of his slaves. I don't like a lot of the things I do or don't do - some of them due to medical conditions. There are many things about myself and the way I behave that I would love to change if I could. Thanks to modern psychiatry and medicine, I even have to some degree. It is certainly possible.
Regardless, though, we tend to value our own freedom, regardless of anyone else. I am sure you do.
If freedom and self-determination were truly central, essential human motivating forces, we'd consider tyranny as repugnant as, say, eating babies. Protecting our children is a fundamental human drive, even if it's not a uniquely human drive. Notice the difference between our attitude about someone who acts totally against the "protect the children" impulse and about someone who acts totally against the "secure the blessings of liberty for ourselves and our posterity" impulse. Child-killers get a much harsher reaction than people who want to revoke the Bill of Rights. And there's a good reason for that: When you get right down to it, most people would rather protect their children than secure the blessings of liberty.
So while humans value self-determination, you can make a damned good case that it isn't one of the essential values at the core of what it means to be human. Very few humans will go far out of their way to respect the self-determination of others, and most humans have things they consider far more valuable than even their own self-determination.
So a nonhuman species for whom self-determination is already a creepy alien concept, and who is told that we take it dead seriously, so seriously that it's as important to us as doing utilitarian good, is going to be skeptical. They're going to point to all the people in our own history who infringed on others' self-determination for the sake of a proclaimed utilitarian good, and ask why they were not stopped, the way that someone who had pronounced their intention to eat thousands of babies would have been. If we take this so seriously when we're suddenly talking to aliens planning to reshape our species... why don't we take it so seriously when talking among ourselves?
So there's a limit on how well you can stand off something like the Superhappy Weirdtopia Collective by proclaiming that one of humanity's core values is self-determination.
This space dedicated to Vasily Arkhipov
Re: Why do most wannabe SF writers reject science?
I did. It still doesn't make any sense. After all, the older generation freaked out, and that implies they saw something wrong. The only condition under which I could see it reasonably being made legal would be if no one was being raped anymore.If you stop and look at what the born-in-the-future guy Akon says in response, it's obvious that while they may be fucking insane, they're not fucking insane in the way we'd expect just from that quote.
Akon lives in a society where something that both he and the ancient-born Confessor describe as "nonconsensual sex" is legal... but Akon is utterly unfamiliar with the concept of violent rape, to the point where his inability to understand why the Confessor and his generation freaked out at making nonconsensual sex legal drives the Confessor up the wall.
I'm pretty sure we're dealing with a language shift here, or with humans having monkeyed around with their own incentive structures and system for raising children to the point where violent rape is in fact unheard of. Such that "nonconsensual sex" becomes something other than a euphemism for "violent rape," the crime that we of 2009 are painfully familiar with.
And no, I don't think it's very plausible. But since I don't think I have enough information about just how extensively this future-civilization has manipulated human nature, and how much their changes to human upbringing affect the incidence rate of mental illness, it's hard for me to say how implausible it is, or whether it fails to make sense in context.
It would make sense as part of them gradually eliminating most of the body of law and it wouldn't have resulted in the older generation panicking.
That is the problem- the older generation thought things had gone wrong. If it was something as innocuous as not being needed anymore, then they wouldn't have objected. But it obviously wasn't.
The whole genetic engineering angle falls flat- humans in the story are more than willing to immediately commit to warfare. Even assuming they are different from modern humanity (even though there is no reason to believe so), the problem of sex with strangers could have been dealt with a lot more easily by just having people who want random sex announce their desire on their clothing, or some other easy social engineering option.
A simpler method would be people wearing T-shirts that say "want to fuck?" and having sex in the halls.If it just occurred in one place, then the most likely explanation is that he was trying to prove that his future society is WEIRD, in which case he succeeded beyond his wildest dreams. If he's brought it up before or since, your explanation is as good as anything else I can think of.
Not to mention the construction of orbitals that hide behind planets so they can't be obliterated.At which point both sides are quite capable of building laser cannons big enough to boil Earth's oceans, and where are you then? Maybe one side's lasers will take half again as long to finish the job, but the basic problem is still in place.
On a ship with a small crew the idea of using prediction markets is insane- there are too few individuals for it to work well and most of the crew would be too busy to participate.Frankly, it was poorly developed, but the information vacuum is complete enough that we aren't in a good position to fill in the blanks by calling them insane.
Or she could just be the head of the most powerful company in the system.For example, the senior Administrator of the Huygens system clearly seemed to have enough power to make things happen (destroy a ship, load children onto ships and evacuate them).
No, it isn't. A prediction market is an attempt to estimate the future and what will happen, and is influenced by money, while a poll is a question of what the individual desires and is influenced by numbers.Indeed, you can make a case that a well designed prediction market is simply a refined version of a poll.
No, you'd say:If I wanted to explain democracy to an alien, I'd be forced to say something like "we make decisions based on elections, where "election" is defined as a process where individuals have the option of selecting a preference, and where the outcome preferred by the greatest number then becomes the official policy."
"We use democracy. The canidate that the most people choose is in charge."
In that context, how is "we make decisions based on elections" less insane than "we make decisions based on prediction markets?"
Except this isn't a policy decision. Libertarians, remember? It is also worth noting that since the idea is economically efficient it would be adopted by libertarians.Essentially you're complaining about what you see as a bad policy decision (not to build Dyson swarms and strip-mine the asteroid belts down to dust)... but the fact that a civilization makes bad policy decisions does not mean that they were written by a bad author.
How do you destroy the ultimate weapon?How much of it gets built before someone who fears its potential as a weapon destroys it?
Why wouldn't extreme industrialization be an inevitability? Once you have part of the system up, each additional step forward makes total sense. The only problem is the first step... except it also makes sense. Once the first asteroid is dismantled to form a network of solar panels and power relays, it has begun, and everyone will want to do that in order to have more energy for whatever program/country/project/institution/corporation they are working for.I'm saying they aren't a logical inevitability. Science fiction settings without such heavy industrialization are still going to be subject to the problem of a limited origin rate for colony ships.
Of course they won't. They will authorize the doubling of sats in the L perimeter from a new asteroid this year, extension after repairing a network, an above-plane and below-plane expansion... once you have part of the system in place, making it larger is trivial..01% of solar power into something more ambitious that taps .1%. That's one place where things get tricky.
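A toy compounding model of that "making it larger is trivial" claim (the 10%-per-year reinvestment rate is purely illustrative, not derived from anything):

capacity = 1.0                  # arbitrary starting collector capacity
for year in range(50):
    capacity *= 1.10            # each year's output funds ~10% more capacity
print(round(capacity, 1))       # ~117.4x after fifty years

Even a modest reinvestment rate turns a sparse array into a serious swarm within a single human lifetime, which is the crux of the disagreement here.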
The launch batteries won't be signed over. The usage for a period of time will be. As for ensuring that they won't be used for evil, even if we don't have AIs I'm sure we can make decent expert systems to run them.So the level of social will required to sign over thousands of laser launch batteries is... nontrivial, just like the level required to sign over an arsenal of hydrogen bombs for an Orion drive.
-
- Emperor's Hand
- Posts: 30165
- Joined: 2009-05-23 07:29pm
Re: Why do most wannabe SF writers reject science?
Don't be too sure; if I were, say, 150 years old in a paradise-future and I saw my descendants trying to legalize rape, I'd tend to be strongly opposed... even if I knew that the violent rape rate was effectively zero. Social values are strongly determined by the world you experience when you're young.Samuel wrote:I did. It still doesn't make any sense. After all, the older generation freaked out, and that implies they saw something wrong. The only condition under which I could see it reasonably being made legal would be if no one was being raped anymore.
It would make sense as part of them gradually eliminating most of the body of law and it wouldn't have resulted in the older generation panicking.
Ah, but how small is the crew? We'd assume "small; it's a scout ship," but they might easily be carrying a large research staff. All we see is the command team. Likewise, the society is at least semi-post-scarcity; their idea of the minimum required leisure time to keep the crew sane and functioning efficiently might be different from ours.On a ship with a small crew the idea of using prediction markets is insane- there are too few individuals for it to work well and most of the crew would be too busy to participate.Frankly, it was poorly developed, but the information vacuum is complete enough that we aren't in a good position to fill in the blanks by calling them insane.
And yes, those are excuses. But in a situation where so little is made explicit, I think that condemning the story for not making sense is no more reasonable than trying to explain the story such that it does make sense. We're not talking about technobabble that creates mutually exclusive problems here; we're talking about a story that fails to develop the future humans because it isn't about future humans. We don't know enough to come up with rational hypotheses about what Yudkowsky was trying to make the place look like, or even if he had any one cohesive picture of their society in mind. It's a flaw in the story, yes, but I don't think it's one that merits all that much concern.
________
In theory, yes... but unless you assume that she must be, there's no support for the idea that she is. She answers the phone as if she were a planetary governor, and she clearly has decision-making power. I really don't see where you get the conviction that the Administrators aren't serving society as a government in the story's future-setting.Or she could just be the head of the most powerful company in the system.For example, the senior Administrator of the Huygens system clearly seemed to have enough power to make things happen (destroy a ship, load children onto ships and evacuate them).
_______
"So... is this done by ritual combat, or by shouting matches, or... I'm honestly having trouble visualizing this process of yours."If I wanted to explain democracy to an alien, I'd be forced to say something like "we make decisions based on elections, where "election" is defined as a process where individuals have the option of selecting a preference, and where the outcome preferred by the greatest number then becomes the official policy." No, you'd say:
"We use democracy. The canidate that the most people choose is in charge."
Seriously, I think you're bringing too many assumptions to the table about how their system must work, and letting this contaminate your impressions about how it does work (or, rather, is said to work). You've overlooked or discarded the possibility of words meaning different things to century-old denizens of a distant post-scarcity future than they do to us, the possibility that the influential order of Confessors explicitly dedicated to preserving sanity in the actions of policy-makers is in fact doing its job, the possibility that the highly trained Administrators aren't just a figurehead on a society driven entirely by prediction markets...
So I question the precision of your very specific conclusions about how messy the fictional society you're analyzing is.
________
Yes, assuming major discrepancies of wealth among the society, and that money isn't serving as much as a way to keep score in social interactions as it is a medium of exchange. And since we honestly don't come close to knowing enough about the civilian life of the EY-future to say whether or not that's true... I'd say you're trying to make too much soup from too little stock here.In that context, how is "we make decisions based on elections" less insane than "we make decisions based on prediction markets?"[cites effects of money on the decision-making process]
________
By mass-drivering the statite control stations into scrap metal before someone builds enough of them to power a planet-crushing beam weapon? If having a sufficient statite array really lets you dictate terms to everyone including national governments, what sane national government will let you reach the "sufficient" stage of development without taking steps to secure control of your statites? And that applies as much when "you" are yourself a national government as when you're a private citizen.How do you destroy the ultimate weapon?How much of it gets built before someone who fears its potential as a weapon destroys it?
Governments cannot allow a statite gap, so most of the bigger ones won't, any more than they allowed a nuclear gap in the past.
_________
Samuel, I seem to recall you arguing for slower colonization not long ago, but anyway...Why wouldn't extreme industrialization be an inevitability? Once you have part of the system up, each additional step forward makes total sense. The only problem is the first step... except it also makes sense. Once the first asteroid is dismantled to form a network of solar panels and power relays, it has begun, and everyone will want to do that in order to have more energy for whatever program/country/project/institution/corporation they are working for.I'm saying they aren't a logical inevitability. Science fiction settings without such heavy industrialization are still going to be subject to the problem of a limited origin rate for colony ships.
The chief reason I can think of is the social desire to restrict access to massive amounts of energy. The most powerful sources of potentially destructive energy now available are already tightly controlled, to the point where they're effectively unobtainable unless you have your own army and nation-state. Since a hard sci-fi future doesn't feature any kind of technomagic shields that allow my house to shrug off gigajoule-range energy blasts, I'm going to be very reluctant to act in ways that make gigajoule-range energy easy to access and to use destructively. That goes with increasing vehemence as we move up through the orders of magnitude, too. By the time we're talking about, say, one billionth of a star's total power output we're dealing with enough power to level cities in seconds or minutes, and keep doing so indefinitely, and as long as governments exist they will try to restrict ownership of that kind of power. They can do so by getting in at the ground floor and staying in, using their early lead in coercive force to build up a lead in this new type of force.
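To put numbers on that "one billionth" figure (the ~3.8e26 W solar output is the standard value; the ~6.3e13 J Hiroshima yield is used purely as a yardstick):

solar_watts = 3.8e26
beam_watts = solar_watts * 1e-9       # one billionth: ~3.8e17 W
hiroshima_joules = 6.3e13

print(round(beam_watts / hiroshima_joules))   # ~6000 bomb-equivalents per second

So "level cities in seconds" is, if anything, an understatement.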
If that's the way the expansions go in, then you can bet people are thinking of ways to use all that power as the system is expanded... in which case when the ambitious guys propose batteries of lasers powering interstellar light-sails, they're going to have to outcompete all the things other people are using power for, or accept that they've only got a small fraction of the total. Either way, social will becomes a major limiting factor on the number of ships that can go out.Of course they won't. They will authorize the doubling of sats in the L perimeter from a new asteroid this year, extension after repairing a network, an above-plane and below-plane expansion... once you have part of the system in place, making it larger is trivial.
This space dedicated to Vasily Arkhipov
Re: Why do most wannabe SF writers reject science?
Laser lightsail propulsion requires something like 6-7 gigawatts per newton. So a 1 million ton ship accelerating at 1 gravity would use somewhat less than 7 x 10^19 watts. That is a little less than .00002% of the sun's output.
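Checking that arithmetic with the post's own figures (the 7 GW/N power-to-thrust ratio and the million-ton ship are the post's assumptions; note that the ideal reflecting-sail limit is c/2, about 0.15 GW/N, so 6-7 GW/N presumably bakes in heavy beam-spill and conversion losses):

g = 9.81                        # m/s^2
ship_mass = 1e9                 # kg (one million metric tons)
watts_per_newton = 7e9          # assumed beam power per newton of thrust
solar_watts = 3.8e26

beam_watts = ship_mass * g * watts_per_newton
print(f"{beam_watts:.1e}")                       # ~6.9e19 W
print(f"{beam_watts / solar_watts * 100:.6f}%")  # ~0.000018% of solar output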
A fairly robust colonization program certainly seems feasible even if it is a very low priority for the civilization as a whole.
It's also worth noting that if you're patient (say, willing to wait a couple of thousand years to get to a nearby star) interstellar colonization is actually very easy. Aside from the difficulties imposed by voyage time it's not much harder than interplanetary travel. It just takes a long time to get where you're going. But then people with this level of technology would almost undoubtedly be immortal, so that shouldn't be as big a problem for them as it would be for us.
I think the big bottleneck is probably going to be the number of people who feel inclined to leave the comfort and safety of home and brave the unknown. Although with this level of technology you should be able to take a small bubble of that safety and luxury with you as you travel, so aside from random accidents, the possibility of running into hostile aliens, and the psychological rigors of isolation (assuming you didn't take companions) you should be able to be pretty comfortable on your voyage.
- Ariphaos
- Jedi Council Member
- Posts: 1739
- Joined: 2005-10-21 02:48am
- Location: Twin Cities, MN, USA
- Contact:
Re: Why do most wannabe SF writers reject science?
He also brings it up as a potential for how far society might 'degenerate', I think in his 'Coherent Extrapolated Volition' article (if I have the title right). I don't think he understands what is going on with our 'degeneration' - the rise of individual will and respect for that will, so long as it does not interfere with another.Simon_Jester wrote:Did he ever promote this idea anywhere else, or did it just occur in this one place?Xeriar wrote:EY has some sort of 'legalize rape' fetish, I'm not sure what it is or if he even has any real comprehension of the phenomenon.
If it just occurred in one place, then the most likely explanation is that he was trying to prove that his future society is WEIRD, in which case he succeeded beyond his wildest dreams. If he's brought it up before or since, your explanation is as good as anything else I can think of.
Nah, it isn't. As long as you are sufficiently heterogeneous you can respond to novel threats, at least in theory and possibly with help. Nothing stops you from reshaping the nature of your front and converting it into war mode. You probably have the advantage of being able to predict their actions a lot better than they can predict yours, though this is more advantageous in an STL scenario for various reasons.Simon_Jester wrote:I wouldn't either, but if the only way to stop a gray death swarm is to become a gray death swarm, there's not much point worrying about it; either way my actual species is gone and there's nothing but a gray death swarm left picking over the rubble. The only difference is whether the individual death swarm robots have "Made on Terra" or "Made on Omicron Persei VIII" stamped on their outer casing.Xeriar wrote:That would be the reasoning of a single individual race that did not make a mistake. In an infinite Universe, even if every species prefers to remain biological (specious as that is), claiming that none of them will make a mistake and end up with a gray death swarm is not a bet I would make.
It's really hard to predict how it will turn out, it being the future and all.You're still grossly oversimplifying the scenario. That kind of power plant has to be constructed by someone; what are the odds that all the builders of Dyson solar statites will form a unified political bloc? Remember that people could reasonably have said the same thing about nuclear bombs in 1941, knowing how powerful they were; hell, they DID say so. Anyone with nukes will win a war against anyone without, so the first guy to build up a nuclear arsenal wins.
What was missing then (and what is missing from your calculations now) is that the time scale it takes to acquire an all-dominant weapon is longer than the time scale it takes for your competitors to catch on to what you're doing and acquire weapons of comparable power. In this case, their own Dyson statites that answer to their programming, not yours. At which point both sides are quite capable of building laser cannons big enough to boil Earth's oceans, and where are you then? Maybe one side's lasers will take half again as long to finish the job, but the basic problem is still in place.
Having enormous physical power does not guarantee absolute mastery, not least because if you can acquire that power, the odds are that someone else equally clever and ambitious is only slightly behind you in the race for it.
The issue is not so much that it might be a unified or diversified political bloc - keep in mind, whoever builds this sort of thing is going to be beyond any sort of need for resources. There is a not-insignificant chance that it actually gets set up as a system-authority sort of scenario - the controller would not be human, but something a number of humans understood and agreed to. This has several advantages - you need technically and mathematically keen people to design this sort of thing, you do not want to fuck up, and in game theory terms, the benefit of cooperation is effectively a guaranteed +infinity, while non-cooperation carries a large chance of -infinity. The problem has "cooperate, dumbass" written on its face.
Any rational agent is going to want to cooperate and find an ideal solution. The threat is some yahoo group actually managing to get a four-year head start. It falls under the heading of 'existential risk', much the way Seed AI proponents discuss it.
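A toy expected-utility version of that "cooperate, dumbass" point (all payoff and probability numbers here are illustrative stand-ins, nothing from the thread):

```python
# Toy comparison: cooperation vs. racing ahead alone, in expected-utility terms.

def expected_utility(p_ruin: float, payoff_win: float, payoff_ruin: float) -> float:
    """Expected value of a strategy with some probability of catastrophe."""
    return (1 - p_ruin) * payoff_win + p_ruin * payoff_ruin

# Cooperation: near-certain enormous upside, negligible chance of ruin.
coop = expected_utility(p_ruin=1e-6, payoff_win=1e12, payoff_ruin=-1e12)

# Going it alone: the same upside if you win the race, but a large chance of ruin.
solo = expected_utility(p_ruin=0.3, payoff_win=1e12, payoff_ruin=-1e12)

print(f"cooperate: {coop:.3e}   solo: {solo:.3e}")
# As the payoff magnitudes grow toward "effectively infinite", cooperation
# dominates for any non-negligible chance of catastrophe.
```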
========
The Superhappies would probably get it. It is difficult for humans to pick out their most rational members, for some reason.In that context, how is "we make decisions based on elections" less insane than "we make decisions based on prediction markets?"
The original argument involved comparing an equivalent scenario in an STL Universe. Even then, though, the superhappies launched billions of probes.OK, but you're still making the major assumption of Dyson swarms. Without a Dyson swarm you don't get that kind of power production. Yudkowsky doesn't present us with any other evidence that the guys he's writing rate a 1.8 or so on the Kardashev scale, so it's truly absurd for you to complain that his future-humanity isn't building interstellar colonies as fast as you'd like. Essentially you're complaining about what you see as a bad policy decision (not to build Dyson swarms and strip-mine the asteroid belts down to dust)... but the fact that a civilization makes bad policy decisions does not mean that they were written by a bad author. At least, not by itself.
My own personal bet is it will either happen - and happen soundly - or humanity goes poof. There is a limited window in which it can be destroyed, because you have to use your own lasers in order to do so reliably.Provided sufficient numbers in the first wave, which requires Dyson swarms and asteroid strip-mining, yes. How long does it take civilizations to reach that level of infrastructure? How much of it gets built before someone who fears its potential as a weapon destroys it?
I'm not saying Dyson swarms and asteroid strip-mining can't happen. I'm saying they aren't a logical inevitability. Science fiction settings without such heavy industrialization are still going to be subject to the problem of a limited origin rate for colony ships.
The concept of taxes in a society where individuals can be direct generators of more wealth than Earth has ever had is a bit amusing.That kind of energy will perforce not be controlled by small groups of ideologues; society will destroy itself in short order if it is. So what's needed is not only "people with the will to explore," but people to vote for those people, and people to vote for the taxes required to build up their Dyson Swarm from a sparse array that taps, say, .01% of solar power into something more ambitious that taps .1%. That's one place where things get tricky.
It would work more like "This many people want to give Venus a moon and terraform it. This many people want to expand our energy harvesting. This many people want to colonize stars 453-657. This many people want to terraform Mars. This many people want to terraform the Galilean satellites. This many people want to harvest deuterium from Uranus and Neptune. ..."
After you tally up the exaprojects, you distribute energy accordingly, possibly weighted - e.g. there will be plenty of interest in terraforming Mars, but that will only happen so fast.
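A minimal sketch of that tally-and-distribute scheme (project names, vote counts, and the per-project cap are all made-up placeholders):

```python
# Split an energy budget across exaprojects in proportion to support, with
# optional caps for projects that can only absorb energy so fast (e.g.
# terraforming Mars); capped surplus flows to the remaining projects.

def allocate(budget, votes, caps):
    alloc = {p: 0.0 for p in votes}
    active = dict(votes)
    while active:
        remaining = budget - sum(alloc.values())
        if remaining <= 1e-12:
            break
        total = sum(active.values())
        newly_capped = []
        for p, v in active.items():
            share = remaining * v / total
            cap_left = caps.get(p, float("inf")) - alloc[p]
            if share >= cap_left:
                alloc[p] += cap_left       # project hits its absorption cap
                newly_capped.append(p)
            else:
                alloc[p] += share
        if not newly_capped:
            break                          # everything distributed
        for p in newly_capped:
            del active[p]                  # redistribute surplus next pass
    return alloc

votes = {"venus_moon": 3e6, "expand_swarm": 9e6, "mars_terraform": 5e6}
caps = {"mars_terraform": 0.05}      # Mars can only absorb so much at once
print(allocate(1.0, votes, caps))    # budget normalized to 1.0
```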
The meek shall inherit the Earth, the rest of us will take the stars. What are the rednecks and treehuggers that can actually comprehend the event going to do? March on Washington?Remember that we're talking about planet-devastating energy levels here. Launch lasers for interstellar colony ships are more dangerous than nuclear weapons by orders of magnitude, and you damn sure don't see those being available to even large civilian organizations. Not even if those organizations protest that what they want the nukes for is 'really important'. So the level of social will required to sign over thousands of laser launch batteries is... nontrivial, just like the level required to sign over an arsenal of hydrogen bombs for an Orion drive.
Have you talked with a number of assault victims, and with friends and family of those victims? If not, just compare your own feelings to theirs - assuming you have a 'normal' reaction. Rights are words on paper. Actions are things that have occurred, and people want consequences for them.Ah, but is freedom one of your prime directives (so to speak)? Is it something that you consider co-equal with the highest moral virtues, something that you would sacrifice nearly anything you possess to maintain, even in small amounts?
If freedom and self-determination were truly central, essential human motivating forces, we'd consider tyranny as repugnant as, say, eating babies. Protecting our children is a fundamental human drive, even if it's not a uniquely human drive. Notice the difference between our attitude about someone who acts totally against the "protect the children" impulse and about someone who acts totally against the "secure the blessings of liberty for ourselves and our posterity" impulse. Child-killers get a much harsher reaction than people who want to revoke the Bill of Rights. And there's a good reason for that: When you get right down to it, most people would rather protect their children than secure the blessings of liberty.
And when you are talking about tossing around the power of a star, you need to tread carefully. Not just out of the possibility for opposition, but simply out of the chance of making a mistake.
See above. I'm not talking about humans who have measurable needs or even wants compared to the resources they have at their disposal, here.So while humans value self-determination, you can make a damned good case that it isn't one of the essential values at the core of what it means to be human. Very few humans will go far out of their way to respect the self-determination of others, and most humans have things they consider far more valuable than even their own self-determination.
In EY's world, probably not. In a future in which you or I are actually talking with said aliens, there would be a clear point at which we grew past that.So a nonhuman species for whom self-determination is already a creepy alien concept and is told that we take it dead seriously, so seriously that it's as important to us as doing utilitarian good, is going to be skeptical. They're going to point to all the people in our own history who infringed on others' self-determination for the sake of a proclaimed utilitarian good, and ask why they were not stopped, the way that someone who had pronounced their intention to eat thousands of babies would have been. If we take this so seriously when we're suddenly talking to aliens planning to reshape our species... why don't we take it so seriously when talking among ourselves?
Give fire to a man, and he will be warm for a day.
Set him on fire, and he will be warm for life.
- The Yosemite Bear
- Mostly Harmless Nutcase (Requiescat in Pace)
- Posts: 35211
- Joined: 2002-07-21 02:38am
- Location: Dave's Not Here Man
Re: Why do most wannabe SF writers reject science?
Which is strange considering that history already treated it as a mere finable offence at one point. Yes, there was a time when women were just property.
The scariest folk song lyrics are "My Boy Grew up to be just like me" from cats in the cradle by Harry Chapin
Re: Why do most wannabe SF writers reject science?
It is implied that they opposed it so virulently that they were permanently banned from positions of authority. That is a lot if it is just "the law no longer being needed".Don't be too sure; if I were, say, 150 years old in a paradise-future and I saw my descendants trying to legalize rape, I'd tend to be strongly opposed... even if I knew that the violent rape rate was effectively zero. Social values are strongly determined by the world you experience when you're young.
It used to. Now we use slips of paper because there are too many people."So... is this done by ritual combat, or by shouting matches, or... I'm honestly having trouble visualizing this process of yours."
Yes, otherwise attempting to analyze the story is pointless.You've overlooked or discarded the possibility of words meaning different things to century-old denizens of a distant post-scarcity future than they do to us,
There will ALWAYS be wealth disparity. Additionally, one of the interesting properties of wealth is that it increases itself. This is not a post-singularity future - these are still concerns. It is explicitly statedYes, assuming major discrepancies of wealth among the society, and that money isn't serving more as a way to keep score in social interactions than as a medium of exchange. And since we honestly don't come close to knowing enough about the civilian life of the EY-future to say whether or not that's true... I'd say you're trying to make too much soup from too little stock here.
In other words, no free lunch.where AI never worked, molecular nanotechnology never worked, biotechnology only sort-of worked
And who would let you build said mass driver? That is also a planet-destroying nightmare.By mass-drivering the statite control stations into scrap metal before someone builds enough of them to power a planet-crushing beam weapon?
So are you arguing that they would all build them?Governments cannot allow a statite gap, so most of the bigger ones won't, any more than they allowed a nuclear gap in the past.
This is because they can easily be used to take out large numbers of other people. The most powerful sources (atomic energy) are also under the control of private companies that use them to make money.The chief reason I can think of is the social desire to restrict access to massive amounts of energy. The most powerful sources of potentially destructive energy now available are already tightly controlled, to the point where they're effectively unobtainable unless you have your own army and nation-state.
Energy budget doesn't mean you get it all in one go. It means you can tap into it without having to worry about anyone else using it.Since a hard sci-fi future doesn't feature any kind of technomagic shields that allow my house to shrug off gigajoule-range energy blasts, I'm going to be very reluctant to act in ways that make gigajoule-range energy easy to access and to use destructively.
I'm not seeing the difference between this and the future others have proposed.By the time we're talking about, say, one billionth of a star's total power output we're dealing with enough power to level cities in seconds or minutes, and keep doing so indefinitely, and as long as governments exist they will try to restrict ownership of that kind of power. They can do so by getting in at the ground floor and staying in, using their early lead in coercive force to build up a lead in this new type of force.
You missed the whole point. Less than 1% is enough for them - of course they are fine with a small fraction of the total!or accept that they've only got a small fraction of the total.
You know, really my biggest problem with the story is that the humans never consider the idea that the Babyeaters or the Estatics might be lying to them. It is an odd blind spot - after all, they still have lying (the administrator accuses them of manipulating the market by doing just that) - why they don't view the situation the same way, I can't rightly say.