The Big One


Post by Starglider »

Stuart wrote:I don't give us much past the middle of this century. We had our chance to survive and blew it.
You think mass space colonisation starting with 70s tech was the only real chance?

If so I would of course beg to differ.
Post by Stuart »

Starglider wrote: You think mass space colonisation starting with 70s tech was the only real chance? If so I would of course beg to differ.
In a way, I believe there is a very narrow window between an intelligent species gaining the ability to move into space and that species being able to utterly destroy itself or be destroyed. By turning our backs on space in the 1970s, we doomed ourselves. As it happens, I believe the probability is that we'll destroy ourselves by warfare - my guess is uncontrolled biological warfare - but that's really irrelevant. By confining ourselves to Earth, we've left ourselves wide open to extinction and nature is remarkably remorseless in such things. Could be an asteroid, could be natural disease, could be environmental collapse, could be Factor X. We're all sitting on the bullseye and there's no way off it.
Post by Starglider »

Stuart wrote:In a way, I believe there is a very narrow window between an intelligent species gaining the ability to move into space and that species being able to utterly destroy itself or be destroyed.
That's one answer to the Fermi paradox but there's another answer (consequences of creating arbitrary self-enhancing intelligences) that's just as valid. Whether it accounts for more planetkills depends on the frequency distribution of various traits in evolved sapients (i.e. how fast they tend to kill each other off) and whether anthropic effects are involved (i.e. whether the fraction of arbitrary self-enhancing intelligences that generate 'reformat/control as much of the universe as possible' as a subgoal is large enough that we literally have to be the first, or at least one of the first, civilisations to approach this threshold).
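As a toy illustration of how sensitive that accounting is to the assumed trait distributions, here's a Monte Carlo sketch; every probability in it is invented for illustration, and the shape of the argument is the only point:

```python
import random

# Toy Monte Carlo for the planetkill accounting above. Every probability
# here is invented; the point is only that which filter dominates depends
# entirely on the assumed trait distribution of evolved sapients.
N = 100_000                      # simulated civilisations
P_WAR_PER_CENTURY = 0.3          # assumed chance of self-destruction by warfare
P_AI_PER_CENTURY = 0.1           # assumed chance of creating self-enhancing AI
P_AI_FATAL = 0.9                 # assumed fraction of such AIs that are a planetkill

tally = {"war": 0, "ai": 0, "survived": 0}
for _ in range(N):
    for _century in range(10):   # give each civilisation a millennium
        if random.random() < P_WAR_PER_CENTURY:
            tally["war"] += 1
            break
        if random.random() < P_AI_PER_CENTURY:
            tally["ai" if random.random() < P_AI_FATAL else "survived"] += 1
            break
    else:
        tally["survived"] += 1

print(tally)   # if "war" dominates, biowar-style filters explain more of the silence
```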
By turning our backs on space in the 1970s, we doomed ourselves.
This assumes that we could've accomplished enough to make a difference by now even with a continuation of Apollo level funding - IMHO probable but highly debatable. More relevantly it assumes that we're not going to develop technology in the near future that leapfrogs all the missed incremental progress by massively decreasing launch and orbital construction costs.
As it happens, I believe the probability is that we'll destroy ourselves by warfare - my guess is uncontrolled biological warfare - but that's really irrelevant.
Assumes that both the weapons get cheap enough for non-rational players to start deploying them and that no countering technology is developed in time. Biotech is in a particularly dangerous phase in this regard, but the problem is solvable in principle; good enough biotech, ideally combined with very good surveillance and monitoring technology, eventually tips the balance strongly in the defence's favour. Unfortunately the more advanced applications of microrobotics and nanotechnology favour the attacker even more strongly and have a wider, possibly indefinite window (the more plausible proposed defence schemes are extremely complex and difficult to implement).

It also assumes that a hegemonic government doesn't get total control of the earth. This isn't going to happen with current or near-future technology, but good enough cognitive engineering technology (i.e. fine grained brain manipulation, enough that one trip to the neurosurgeon will make you a devoted and almost unshakeable supporter of cause X for life) will make it quite plausible. Further in the future, an invasive engineered agent that can network and co-operate to restructure the brain in this way would make it a very serious risk.

Finally it assumes that a single party doesn't gain a massive technological advantage over everyone else, the kind of advantage you would get from having very fast and powerful engineered intelligences co-operating (though for genuine AIs the singleton/group distinction is pretty irrelevant).
By confining ourselves to Earth, we've left ourselves wide open to extinction and nature is remarkably remorseless in such things.
Assumes that we're not going to
a) turn into something a lot more robust than basic humans or
b) develop very effective mitigation strategies for the specified risks
Could be an asteroid,
Needs either space-based resources to divert it or total independence from the ecosystem to ignore the results. You're obviously keen on (a), which would be nice; I think (b) may turn up first, but this risk is minor enough not to be an immediate concern.
could be natural disease,
Mitigated and eventually eliminated with medical progress, which is doing great.
could be environmental collapse,
Serious risk but technological solutions are on the drawing board for just about every subproblem. Furthermore, space colonies don't actually buy you anything against environmental collapse (or even biowar) that self-sufficient colonies on the earth's surface don't. An antarctic or desert sealed biosphere is much cheaper to build and has much better access to local resources. Space colonisation is a) useful for avoiding nuclear wars, if no one targets you or you have a very comprehensive anti-missile system, and b) useful for long-term expansion.
We're all sitting on the bullseye and there's no way off it.
Yet. Radical technologies for hardening the bullseye and getting off it much more easily are within reach, if we can last a few more decades. On the other hand, transhuman intelligence is a huge risk (actually, a huge set of risks), even as it makes those technologies much easier to reach (in fact largely because of that).
Post by Stuart »

Starglider wrote: That's one answer to the Fermi paradox but there's another answer (consequences of creating arbitrary self-enhancing intelligences) that's just as valid. Whether it accounts for more planetkills depends on the frequency distribution of various traits in evolved sapients (i.e. how fast they tend to kill each other off) and whether anthropic effects are involved (i.e. whether the fraction of arbitrary self-enhancing intelligences that generate 'reformat/control as much of the universe as possible' as a subgoal is large enough that we literally have to be the first, or at least one of the first, civilisations to approach this threshold).
The basic message is still the same though; the window of opportunity for getting off this planet is very small; I'd guess around a century or so measured from around 1960. That means that the clock runs out for us at around 2060 at the latest. I won't live to see it but a lot of the people here will.
This assumes that we could've accomplished enough to make a difference by now even with a continuation of Apollo level funding - IMHO probable but highly debatable. More relevantly it assumes that we're not going to develop technology in the near future that leapfrogs all the missed incremental progress by massively decreasing launch and orbital construction costs.
If we'd carried on with space exploration in the 1970s, it would have created a driver for the development of inexpensive surface-to-space travel. At the moment there isn't really such an incentive. Now, I think it's too late; we don't have the infrastructure up there and we haven't time to develop it before the roof falls in.
Assumes that both the weapons get cheap enough for non-rational players to start deploying them and that no countering technology is developed in time. Biotech is in a particularly dangerous phase in this regard, but the problem is solvable in principle; good enough biotech, ideally combined with very good surveillance and monitoring technology, eventually tips the balance strongly in the defence's favour.
I'd disagree there. Biological weapons are very simple to obtain (all one needs to set up an anthrax production facility is a few bits of glassware, a few spadefuls of dirt from the local farm, a culture medium - baby formula will do just fine - and patience. All one needs to develop an antibiotic-resistant form of anthrax is the above plus a supply of antibiotics. Have you noted how many leading Islamic terrorists are medical doctors?). Monitoring is virtually impossible. That's why biological warfare scares me much more than nukes; frankly, Iran getting a nuclear device or two doesn't worry me terribly. The idea of them having twenty or thirty years time to work on biologicals does. These are the sort of options that can be produced without any great genetic engineering expertise:

Airborne rabies (rabies/influenza hybrid)
Antibiotic-resistant anthrax
Influenza that produces cobra venom as toxin
Influenza/Ebola hybrid
Smallpox
Smallpox/Ebola hybrid

That's just a start. The ideal biological weapon has a long incubation period during which the victim is symptomless but highly infectious/contagious, and 100 percent lethality once symptoms develop. Some of the above can be made very close to that. Given a bit of genetic engineering expertise, a disease can be tailor-made for virtually any effect desired. One possibility: embed a virus disease inside a bacterial disease. People get infected with the bacteria, the disease is identified and treated with an antibiotic. That kills the bacteria and releases the virus. What is more, because the bacteria has multiplied inside the victim already, the onset of the viral disease is instant and overwhelming. So, treat the bacterial disease and the virus kills the patient; don't treat it and the victim dies anyway.
Unfortunately the more advanced applications of microrobotics and nanotechnology favour the attacker even more strongly and have a wider, possibly indefinite window (the more plausible proposed defence schemes are extremely complex and difficult to implement).
They require technology levels that biologicals don't. Look how far biological warfare goes back, that's the terrifying thing. It doesn't require any great skill.
It also assumes that a hegemonic government doesn't get total control of the earth. This isn't going to happen with current or near-future technology, but good enough cognitive engineering technology (i.e. fine grained brain manipulation, enough that one trip to the neurosurgeon will make you a devoted and almost unshakeable supporter of cause X for life) will make it quite plausible. Further in the future, an invasive engineered agent that can network and co-operate to restructure the brain in this way would make it a very serious risk.
In the time frame I think we have available, I don't believe that's a probable development.
Finally it assumes that a single party doesn't gain a massive technological advantage over everyone else, the kind of advantage you would get from having very fast and powerful engineered intelligences co-operating (though for genuine AIs the singleton/group distinction is pretty irrelevant).
The United States is heading that way now; our technology lead over the rest of the world is lengthening steadily. Again though, I don't think there's enough time left to matter. The 2040 - 2050 date for the Great Biowar is my best guess at how much time we have left. As I said, I won't be around to know whether I got it right but...
Assumes that we're not going to
a) turn into something a lot more robust than basic humans or
b) develop very effective mitigation strategies for the specified risks
Again time is the problem. As to developing countermeasures, lack of will is in there as well. Look at all the problems we've got trying to deploy an anti-missile system, and that's with a lunatic in North Korea tossing long-range missiles around and trying to develop a nuclear warhead. If he's developing that, do you want to bet the farm that he isn't developing biologicals? We can get an anti-missile system up fairly quickly, but the sort of elaborate defenses needed to thwart a biological attack? There's no sign of people even beginning to think about them.

The later parts of the TBOverse (set from 1972 - 2050) are my scream of warning that we do need to develop mitigation strategies and develop them NOW. We're talking about the subject here, which is a start; we need everybody talking about it. Not whether we need such strategies, that's a given, but which ones we need first and how fast.
Needs either space-based resources to divert it or total independence from the ecosystem to ignore the results. You're obviously keen on (a), which would be nice, I think (b) may turn up first, but this risk is minor enough not to be an immediate concern.
My understanding is that there is an asteroid due to make a very close pass in the 2030s (perhaps somebody can correct and/or elaborate). Also one may come out of left field. The point is that near-extinction events from asteroid impacts are a fact; we don't know when one will happen next.
Mitigated and eventually eliminated with medical progress, which is doing great.
Only against diseases we know about. I have in mind something that's beyond our experience and which means we have to start from scratch in fighting it. Needn't be one that directly affects us - how about an air-vectored disease that destroys chlorophyll in plants?
Yet. Radical technologies for hardening the bullseye and getting off it much more easily are within reach, if we can last a few more decades. On the other hand, transhuman intelligence is a huge risk (actually, a huge set of risks), even as it makes those technologies much easier to reach (in fact largely because of that).
I'm not so sanguine. I guess we have thirty, perhaps forty years. That just isn't long enough to get anything significant done. For comparison, run back to 1975; we're much more advanced technologically than we were then, but none of those advances translates into hardening the bullseye any more. In fact, it's arguable the bullseye is softer now than it was then.
Post by Starglider »

Stuart wrote:
Starglider wrote: That's one answer to the Fermi paradox but there's another answer
The basic message is still the same though; the window of opportunity for getting off this planet is very small;
Well, kinda. Getting off the planet (in the 'create some self-sustaining colonies' sense) almost completely removes the existential risk of various natural disasters. It reduces but does not eliminate biowarfare risks, in that depending on the setup of your interplanetary civilisation it may be possible to introduce pathogens into enough critical places before detection that effective quarantine is impossible. Similarly it reduces deliberate warfare risks in that smashing all the habitable biospheres is harder, but it does not eliminate them. Interstellar is much better, but of course much, much harder, at least if the only intelligence you have to work with is bog standard humans and you don't have a good way to freeze them for centuries-to-millennia.

Unfortunately getting off the planet is only of limited use with regard to hostile transhuman intelligences, in that if they want to come after you, they probably can, and for a rational intelligence there's a strong motive to do this if surviving humans are likely to interfere with its goals later. This debate (mostly Lifeboat Foundation supporters vs Singularity people) has raged on several transhumanist forums, and the general answer is 'you may escape if you're fast and very lucky, but don't count on it'. Fortunately the risk mitigation strategies for these aren't really competing with each other for funding or effort at present (only a little for some nonprofits that probably aren't terribly relevant anyway).
I'd guess around a century or so measured from around 1960. That means that the clock runs out for us at around 2060 at the latest. I won't live to see it but a lot of the people here will.
If that's a .50 probability, that's pretty good, we've got a very good chance of developing lethal goodies like brain-computer interfacing, full-brain scanning and simulation, general nanoassembly, mobile microrobotics and of course general AI within that timeframe. You're probably the most qualified person I know with regard to estimating biowarfare existential risk, but the existential risks associated with these early-stage technologies require their own massive study to appreciate (I'm only an expert in the AGI one - and not the best expert I know on it, since I've never spent a lot of time playing professional futurist). Of course you might see a space elevator in that timeframe, which would probably suffice for a space lifeboat, and AFAIK there's still an outside chance of doing it with privatised space and conventional launches.
If we'd carried on with space exploration in the 1970s, it would have created a driver for the development of inexpensive surface-to-space travel.
Well, maybe. NASA's performance at driving commercial interest has been pretty abysmal, and they've never had a 'get asteroid mining started' agenda. This is a complex debate that I'm only moderately qualified for, but at this point it's of largely academic interest - unless you think government-driven space exploration is going to do anything useful in time.

That said I hope you weren't involved in the Air Force FUBARing the STS design. :)
Biological weapons are very simple to obtain... monitoring is virtually impossible.
That's the extreme window of vulnerability. Current medical science cannot provide a generally superior replacement for the human immune system. We can often create toxins tailored to destroy or render inert specific pathogens after much lab work, we can kick start the immune system into action with immunisations (again after a lot of analysis and preparation) and we can mitigate symptoms to some degree but that's it.

However in principle we can do much better. The tools to actively monitor every single human's biology at a cellular level, to process, transmit and pattern-recognise that data, to greatly automate the process of designing and deploying countermeasures, and ultimately to directly prompt and reconfigure the immune system in place have already been designed to a surprising level of detail. We just don't have the tools to build them yet. I just hope the very high regulatory burden on medical science doesn't push these fairly inevitable advances too far into the future (i.e. past the extinction point they might have prevented).

I don't want to turn this thread into a debate about the technical nitty-gritty of biosimulation, embedded biosensors and medical nanotech any more than I do regarding space colonisation feasibility - so I won't take a position on whether this will come in time to do any good, just that bioweapons are not an impossibly deadly menace that can never be (reliably) avoided by anything other than total isolation.
Unfortunately the more advanced applications of microrobotics and nanotechnology favour the attacker even more strongly and have a wider, possibly indefinite window (the more plausible proposed defence schemes are extremely complex and difficult to implement).
They require technology levels that biologicals don't.
They aren't a threat yet. But if we survive the next 30-50 years, we may actually see the end of biological weapons as a serious threat, simply because technology will eventually trump biology (ultimately the same reason why various forms of AI are so dangerous). However the threat from those advanced technologies themselves has no such limitations.
This isn't going to happen with current or near-future technology, but good enough cognitive engineering technology (i.e. fine grained brain manipulation, enough that one trip to the neurosurgeon will make you a devoted and almost unshakeable supporter of cause X for life) will make it quite plausible.
In the time frame I think we have available, I don't believe that's a probable development.
If your time frame is 50 years I think it's highly probable we'll have the understanding to do it. Brain mapping has been progressing quickly recently. The question is whether the tools will exist to make the required changes, which I don't have a reliable method of estimating. Something like freezing a brain solid and laser-ablation scanning it to a sufficient resolution for an accurate simulation is a relatively straightforward process you can turn into engineering specs - when we get scanner resolution X and computing power Y, which have been following these curves, then it becomes feasible. Fiddling about with dendrite webs in a currently unknown way without killing the subject or sending them mad isn't a procedure we can currently analyse in enough detail to get a good set of preconditions and hence probability for. But it would not surprise me, because the prospect is so attractive to many governments it could easily swing a lot of research funding when it passes a certain minimal level of plausibility.
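To make the 'turn it into engineering specs' point concrete, here's the kind of back-of-envelope arithmetic involved; the improvement factors and doubling times below are invented placeholders, not real estimates of scanner or computing progress:

```python
import math

# Back-of-envelope 'engineering spec' for scan-and-simulate: given a
# required improvement factor and an observed doubling time, the
# crossover date is simple arithmetic. Both inputs below are invented.
def years_until(factor_needed, doubling_time_years):
    """Years for an exponentially improving capability to gain factor_needed."""
    return math.log2(factor_needed) * doubling_time_years

scan = years_until(1e3, 3.0)     # assume scanning must get 1000x finer, doubling every 3 years
compute = years_until(1e6, 1.5)  # assume we need 10^6 times the compute, doubling every 18 months
print(f"feasible in ~{max(scan, compute):.0f} years, gated by the slower curve")
```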
Finally it assumes that a single party doesn't gain a massive technological advantage over everyone else
The United States is heading that way now; our technology lead over the rest of the world is lengthening steadily.

The US government isn't monolithic enough for me to count it as a 'single party' in that sense. Plus it's leaky as a sieve for anything but the final stage military applications - other countries and groups are going to get the basic science, often they can buy materials and tools from the same suppliers. The US could certainly destroy human civilisation with a little preparation, but it can't wield that power as a means of domination or deterrence, and is not likely to be invulnerable to counterattack any time soon.

However after working through enough of the possible outcomes for the creation of transhuman intelligences, I believe a technological advantage of that genuinely overwhelming degree (in that there is no way for other parties to resist domination or even do any damage to the first party) is actually possible, in that there is a very real chance of it being achieved in many scenarios for the development of such intelligences. This can be very good or very bad, depending on who gets it and how far it goes. I think it's essentially impossible for it to be achieved any other way, simply because there's no other way to do the equivalent of centuries of work by a Manhattan Project calibre team in total secrecy, within a decade or less.
Again time is the problem.
True. But although the chances of success may be small, they're worth fighting for; it hardly makes sense to sit down and give up. I've personally structured my life to try and make the best contribution I can to the development of one of the 'magic' technological fixes, simply because it's the most useful thing I can do.
As to developing countermeasures, lack of will is in there as well. Look at all the problems we've got trying to deploy an anti-missile system and that's with a lunatic in North Korea tossing long-range missiles around and trying to develop a nuclear warhead. If he's developing that, do you want to bet the farm that he isn't developing biologicals? We can get an anti-missile system up fairly quickly but the sort of elaborate defenses needed to thwart a biological attack? There's no sign of people even beginning to think about them.
While it's unfortunate that biowarfare risks don't have a higher profile, on the plus side there isn't that crazy MAD mentality that let the ABM critics successfully argue that being undefended against nuclear annihilation was a good thing. Essentially everyone agrees that defence against biowarfare is a good thing in principle - though there are plenty of irritating conspiracy nuts who refuse to accept that defensive biowarfare requires some installations and programs superficially quite similar to offensive biowarfare.
The later parts of the TBOverse (set from 1972 - 2050) are my scream of warning that we do need to develop mitigation strategies and develop them NOW.
I'm curious, have you had many opportunities in your career to make the case for space colonisation to relevant decision makers, the way you have your characters make it?

Have you summarised what you're actually advocating anywhere? Is it as simple as massive government spending on space colonisation? Do you have any novel ideas for making that palatable to the electorate?
The point is that near-extinction events from asteroid impacts are a fact; we don't know when one will happen next.
However it isn't a pressing risk, and having at least one self-sufficient outpost survive massive climate change on the earth's surface is probably easier than surviving without support in earth orbit (though the growth potential is more limited).
Mitigated and eventually eliminated with medical progress, which is doing great.
Only against diseases we know about,
But the tools for diagnosis, pathogen analysis, drug design, flexible manufacturing and treatment are all improving as well as just expanding the range of off-the-shelf treatments available.
Needn't be one that directly affects us - how about an air-vectored disease that destroys chlorophyll in plants?
Given that that hasn't happened in the last three billion years despite abundant opportunities for it to evolve, it seems pretty unlikely without very good genetic engineering.
I'm not so sanguine.
Oh I'm not sanguine. I'm saying that you're probably right about the biowarfare risk (in fact, that 50 years is generous), and that in addition to this we have lots more upcoming existential risks to worry about, some of which are essentially impossible to run away from or contain. And that's not even getting started on the Peak Oil / Runaway Climate Change scenarios (which I don't personally rate as existential risks, but they're exacerbating factors for some of the really serious problems).

I'm just saying that there is some hope, and ultimately rapid but carefully focused (of course these are opposed) technological progress is the only thing that can save us.
For comparison, run back to 1975; we're much more advanced technologically than we were then, but none of those advances translates into hardening the bullseye any more. In fact, it's arguable the bullseye is softer now than it was then.
I'd agree that it's softer in the sense that the infrastructure supporting civilisation is more fragile. Convincingly fixing that problem is a challenge for the nanofactory people. However humans haven't changed since 1975 - we just travel around more (which doesn't help either for biothreats). In fact we've been stuck with the mark I mod I human for all of recorded history. Changing that situation changes the rules of the game, and since we're currently losing the game it's a very compelling option.

P.S.

mksheppard: you're a bit pie in the sky on your stuff
Starglider: Some people need to work on advanced concepts if they're ever going to become reality.
Starglider: It doesn't just magically become practical one day, it takes a lot of hard work.
Starglider: However, most people can and should stay focused on more immediate things.
mksheppard: I'm very dubious of said advanced nanoconcepts
Starglider: I was myself until I lost a lot of debates where I took the 'they won't work' position. The ones the real experts are proposing are definitely physically possible. Detailed engineering studies and simulations have been done.
Starglider: It's similar to fusion power circa 1970 really. We know it's physically possible, but there's a lot of hard research and engineering between here and it working.
Starglider: AI is a different kind of challenge, it isn't conventional engineering.
Starglider: I can't blame people for being highly skeptical of it.
Starglider: I wouldn't be surprised if Stuart's impressions of AI were formed when this was directly relevant http://philip.greenspun.com/humor/ai.text
Post by MKSheppard »

Question Stuart; has there been any discussion or provocation within the strategic community over what we SHOULD do in case we ever see a situation like the al-Hammar situation in "High Frontier", where we see a 65% provable case of BW being tested?

At some point, there has to be a pilot deployment of the weapon outside of lab conditions, so that you can be sure that it works in the real world.

Biology of Doom contains a semi-detailed account of our widespread testing of the only weaponized and standardized form of BW we did: a variant of Brucella that we actually assigned an Mxx series number, using M33 500-pound cluster bombs, against an army of guinea pigs at Dugway.

We used up something like 12,000 guinea pigs for the tests.

As an Army Chemical Corps General said years later of the tests:

"Now we know what to do if we ever go to war against guinea pigs. "
"If scientists and inventors who develop disease cures and useful technologies don't get lifetime royalties, I'd like to know what fucking rationale you have for some guy getting lifetime royalties for writing an episode of Full House." - Mike Wong

"The present air situation in the Pacific is entirely the result of fighting a fifth rate air power." - U.S. Navy Memo - 24 July 1944
User avatar
Stuart
Sith Devotee
Posts: 2935
Joined: 2004-10-26 09:23am
Location: The military-industrial complex

Post by Stuart »

Starglider wrote: It reduces but does not eliminate biowarfare risks, in that depending on the setup of your interplanetary civilisation it may be possible to introduce pathogens into enough critical places before detection that effective quarantine is impossible.
Protocols for defending against bioattacks on space facilities have been evaluated over the years. It's hard but not insoluble; basically, the results of those discussions came up in "Exodus".
Similarly it reduces deliberate warfare risks in that smashing all the habitable biospheres is harder, but it does not eliminate them. Interstellar is much better, but of course much, much harder, at least if the only intelligence you have to work with is bog standard humans and you don't have a good way to freeze them for centuries-to-millennia.
Interstellar is pretty much out as far as the time frame is concerned. A moon base though was entirely doable using 1980s technology.
Unfortunately getting off the planet is only of limited use with regard to hostile transhuman intelligences, in that if they want to come after you, they probably can, and for a rational intelligence there's a strong motive to do this if surviving humans are likely to interfere with its goals later.
Our general cut was that if a hostile space-faring nation comes after us, the technology differential will be such that all we can do is go down fighting. Which we'd do. Defending a space habitat is a lot easier in some ways than it sounds; it's at the top of the gravity well and that gives it an enormous advantage - provided it's armed to the teeth and prepared to start shooting.
You're probably the most qualified person I know with regard to estimating biowarfare existential risk, but the existential risks associated with these early-stage technologies require their own massive study to appreciate.
The problem with all the technologies you mention is that they can be applied to the biowarfare effort as well - and with much greater effect. In bio terms, the offense has a great advantage at this time simply because it can create and release, and then its job is over. The defense has to wait for the release, find out what's causing the disease, find a cure and distribute it. The cold equations run against them succeeding in time.
Well, maybe. NASA's performance at driving commercial interest has been pretty abysmal, and they've never had a 'get asteroid mining started' agenda. This is a complex debate that I'm only moderately qualified for, but at this point it's of largely academic interest - unless you think government-driven space exploration is going to do anything useful in time.
I don't; the best role for the government is to seed corn things and then let the private sector do it. And no, I wasn't involved in the Space Shuttle fiasco.
That's the extreme window of vulnerability. Current medical science cannot provide a generally superior replacement for the human immune system. We can often create toxins tailored to destroy or render inert specific pathogens after much lab work, we can kick start the immune system into action with immunisations (again after a lot of analysis and preparation) and we can mitigate symptoms to some degree but that's it.
To put numbers on things, inhaled anthrax is 99 percent fatal if left untreated (figure from the penholder on my desk which was given to me by AMRIID). The present treatment regime is very effective - BUT it's known. So, all the bad guys have to do is to breed a strain of anthrax that's immune to that treatment and we're at 99 percent lethality again. Add one crop-dusting aircraft, one major city and...
We just don't have the tools to build them yet. I just hope the very high regulatory burden on medical science doesn't push these fairly inevitable advances too far into the future (i.e. past the extinction point they might have prevented).
I agree; the problem is that I'm very pessimistic about developing those tools. And again, remember that all these tools can be used by the other side as well. They can use them to engineer a disease that hits us in ways we couldn't dream possible.
We may actually see the end of biological weapons as a serious threat, simply because technology will eventually trump biology (ultimately the same reason why various forms of AI are so dangerous). However the threat from those advanced technologies themselves has no such limitations.
I hope you're right but I don't think we have the time. Bad guys, both state and non-state players, are fooling around with bio now. We're rolling the dice all the time; one day it's going to go against us.
Plus it's leaky as a sieve for anything but the final stage military applications - other countries and groups are going to get the basic science, often they can buy materials and tools from the same suppliers.
You'd be surprised at what these 'ere United States can keep secret. We can do it (New York Times notwithstanding); the basic doctrine is to release so much information that the holes (that define the secret stuff) are blurred. Then we work in the blur.
The US could certainly destroy human civilisation with a little preparation, but it can't wield that power as a means of domination or deterrence, and is not likely to be invulnerable to counterattack any time soon.
I agree, which is a pity. We could have been.
But although the chances of success may be small, they're worth fighting for; it hardly makes sense to sit down and give up. I've personally structured my life to try and make the best contribution I can to the development of one of the 'magic' technological fixes, simply because it's the most useful thing I can do.
And my working life has been spent in the strategic destruction/strategic defense business under various guises. We're still trying, nobody's given up, but we're realists. We can hear the clock ticking. In a way, we're all praying that somebody comes up with the breakthrough that offers us a fighting chance, the universal cure, something like that.
I'm curious, have you had many opportunities in your career to make the case for space colonisation to relevant decision makers, the way you have your characters make it?
The idea's been put up often by people a lot more important than me (I'm quite low down the food chain in such things). But yes, the idea has been pushed hard.
Do you have any novel ideas for making that palatable to the electorate?
The novels...
However it isn't a pressing risk, and having at least one self-sufficient outpost survive massive climate change on the earth's surface is probably easier than surviving without support in earth orbit (though the growth potential is more limited).
I wouldn't say that; we know how to do most of the in-space stuff - for the examples you gave, we have the knowledge, not the tools. The current modelling of an extinction event suggests it's just that: no matter what we do, no survival is possible.
But the tools for diagnosis, pathogen analysis, drug design, flexible manufacturing and treatment are all improving as well as just expanding the range of off-the-shelf treatments available.
Again, the same things apply to the offense, and they can use them better than the defense. What we need is something that will give us an end run around the offense - a defense that's always in place and that they'll run head-on into no matter what they do. What that might be I do not know.
Given that that hasn't happened in the last three billion years despite abundant opportunities for it to evolve, it seems pretty unlikely without very good genetic engineering.
Just giving it as an example of something that could come from way out of left field and finish us. The one that'll do it is something even more unimaginable.
I'm just saying that there is some hope, and ultimately rapid but carefully focused (of course these are opposed) technological progress is the only thing that can save us.

I agree. And I wish you luck. But, looking at the situation now, it's grim, very grim indeed.
Post by Stuart »

MKSheppard wrote:Question Stuart; has there been any discussion or provocation within the strategic community over what we SHOULD do in case we ever see a situation like the al-Hammar situation in "High Frontier", where we see a 65% provable case of BW being tested?
Oh yes, we've discussed it, evaluated it, analysed it. The problem is, 65 percent is nowhere near good enough. The standard of proof demanded would be that of a legal court, beyond reasonable doubt - and there would always be people out there manufacturing any doubt and claiming it was "reasonable". The reality is that we would have to have an overt bio-attack on an American city before people would do anything - and even then there would still be a vociferous group screaming that it was all a government plot.
At some point, there has to be a pilot deployment of the weapon outside of lab conditions, so that you can be sure that it works in the real world.
There has been - in Laos using trichothecene. There was a concerted wail of denial from people who would say anything and do anything rather than believe a biological weapon had been used. It was classic denial, any excuse will do.
Post by Starglider »

Stuart wrote:The problem with all the technologies you mention is that they can be applied to the biowarfare effort as well - and with much greater effect.
They can, and there are concepts for hybridising nanotechnology ('wet' and 'dry') and engineered biological pathogens that would be near-untreatable with conventional medicine. This even made it into a popsci book - it was one of the more plausible things in Michael Crichton's 'Prey' novel. These will become practical earlier than the really advanced applications that start to cut down and eventually eliminate the biothreat. So yes, in the short term nanotechnology makes things worse.

There isn't really any relationship between AI (including uploading) and aggressive biowarfare. Strictly, yes, having one will make bioweapon design easier, but only in the sense that having general AI on your side makes any technological endeavour easier. Hostile AIs might well deliberately create bioweapons, but if you've got hostile AIs with those capabilities you've already lost anyway.
The defense has to wait for the release, find out what's causing the disease, find a cure and distribute it.
Yes. We have to cut the response time down. Massive numbers of widely distributed sensors and more automation of the response process is the only way to do it. AI can help with the 'recognise the threat' and 'design the cure' part, automating the 'manufacture the cure' part and reducing distribution delay will probably need small scale units with the ability to synthesise arbitrary complex compounds quickly and in useful amounts, i.e. something the size of a semi-trailer (at most) that can do what currently takes a chemical plant.
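A toy model of why the latency matters so much; the doubling time and case counts are invented, and real spread saturates instead of growing forever, but the ratio between the two pipelines is the point:

```python
# Toy latency model: with exponential spread, every week of
# detect/design/manufacture/distribute delay multiplies the cases on
# the ground before the countermeasure lands. All numbers invented.
def infected(days, doubling_days=7.0, seed_cases=10):
    return seed_cases * 2 ** (days / doubling_days)

for label, latency_days in [("conventional pipeline", 180), ("automated pipeline", 21)]:
    print(f"{label}: ~{infected(latency_days):,.0f} cases at deployment")
```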
I agree; the problem is that I'm very pessimistic about developing those tools. And again, remember that all these tools can be used by the other side as well. They can use them to engineer a disease that hits us in ways we couldn't dream possible.
Well yes, but ultimately biochemistry is a bottleneck that will cause conventional bioweapons to drop out of the arms race. Unfortunately there are even worse things, starting with hybridised bio-nano weapons, that will probably take over right where they left off. The only good thing about that is that the level of technical expertise required to build them is much higher than for cultured (non-GE) bioweapons.
The US could certainly destroy human civilisation with a little preparation, but it can't wield that power as a means of domination or deterrence, and is not likely to be invulnerable to counterattack any time soon.
I agree, which is a pity. We could have been.
I assume you mean in the sense of having a tiny colony of people immune to attack by anything other than a massive missile strike; AFAIK there's nothing the US could have done to completely avoid the bio threat, even the TBOverse USA with its excellent strategic position doesn't manage it.
In a way, we're all praying that somebody comes up with the breakthrough that offers us a fighting chance, the universal cure, something like that.
I'm personally convinced (as is undoubtedly obvious by now) that general, de-novo AI is the single most powerful technological advance we have a realistic chance of making in the time remaining. It has characteristics (and risks, and opportunities) completely unlike any other technology. Employed correctly and with a bit of luck, it literally could save the world. The maddening thing about it is that it isn't a tools or infrastructure issue. Existing hardware within reach of the average moderately funded research group more than suffices for the job, and existing software tools are probably adequate. It's a purely design challenge - on the one hand there's a very real chance that someone, somewhere will crack it tomorrow, but there's also a very real chance (though IMHO low) that we won't crack it even with another century to work on it.

But only a tiny, tiny fraction of people can usefully work on the problem, and most of those who can are probably already doing so. The field has a horrible track record for usefully absorbing influxes of cash and enthusiasm. So there's no particular call to action here, the vast majority of people should continue doing what they can to support other space colonisation, biodefence efforts, nanotech progress, brain-computer interfacing progress etc.
Do you have any novel ideas for making that palatable to the electorate?
The novels...
Regrettably the vast majority of the electorate is not interested in having a tiny community survive in space if it means they personally die. Even if all of this was generally and accurately appreciated, I would imagine most people would say 'I want my taxes to be spent on developing defences to protect me and my family, not saving some lucky few'.
I wouldn't say that; we know how to do most of the in-space stuff - for the examples you gave, we have the knowledge, not the tools. The current modelling of an extinction event suggests it's just that: no matter what we do, no survival is possible.
I'd appreciate it if you'd explain exactly why a sealed biosphere built in Antarctica, or the Sahara, or in a huge bunker would have less chance of surviving an asteroid strike than a space colony. Unless it's really, really unlucky and the asteroid hits close enough to directly threaten it with ejecta, the only obvious drawback is lack of continuous solar power. Airborne bioweapons can pose a risk for a surface installation in that any leak while the agent persists outside can potentially infect and kill the entire population, whereas in space a leak can be quickly plugged and at worst depressurise a single section - but this isn't likely to be an issue in Antarctica (say). Indeed for the bioweapon threat alone a sufficiently isolated island probably suffices for maintaining a community of human survivors, as long as the agent isn't indefinitely persistent on the mainland via animal carriers.
Post by MKSheppard »

Stuart wrote:There has been - in Laos using trichothecene. There was a concerted wail of denial from people who would say anything and do anything rather than believe a biological weapon had been used. It was classic denial, any excuse will do.
One of the posters on this board calls me by the epithet "Yellow Rain Man". :) and mocks me by claiming it was bee puke, which begs the question: wouldn't the Laotian mountain tribesmen, you know, have heard about this thing before in living memory if it was a natural event?

I'm sure he thinks the Sverdlovsk outbreak was just bad meat.

It does make me wonder, though, whether there are a couple of caves in Afghanistan or wherever which contain rotting corpses and a decent chemistry set, and which we can't enter without a thorough napalming first.
"If scientists and inventors who develop disease cures and useful technologies don't get lifetime royalties, I'd like to know what fucking rationale you have for some guy getting lifetime royalties for writing an episode of Full House." - Mike Wong

"The present air situation in the Pacific is entirely the result of fighting a fifth rate air power." - U.S. Navy Memo - 24 July 1944
User avatar
Stuart
Sith Devotee
Posts: 2935
Joined: 2004-10-26 09:23am
Location: The military-industrial complex

Post by Stuart »

MKSheppard wrote:One of the posters on this board calls me by the epithet "Yellow Rain Man". :) and mocks me by claiming it was bee puke, which begs the question: wouldn't the Laotian mountain tribesmen, you know, have heard about this thing before in living memory if it was a natural event?
I can guess who that is. However.

Lets see.

The Vietnamese offensive lasted for about four years in the mid-late 1970s with the worst fighting in 1981

Bee yuk has been dropped for about 300 million years

The Vietnamese offensive took place in the Laotian Highlands

Bee yuk has been dropped all across South East Asia, as far west as Pakistan, as far north as mid-China and as far south as Indonesia.

The only time and place the disease outbreaks took place was in Laos in the years that coincided with the Vietnamese offensive with the worst incidents in 1981.

Looks a bit odd to me.

It's another example of the phenomenon I referred to above: people who will adopt any explanation, no matter how far-fetched, in order to try and deny biological warfare is a threat. As I said, it's classical denial - we can't cope with this problem or its implications, so let's find any excuse to deny it exists.

Try Yellow Rain: A Journey Through the Terror of Chemical Warfare by Sterling Seagrave, which is a pretty good summary of the whole Yellow Rain incident, and Biohazard by Ken Alibek as pretty good primers on biological warfare (Seagrave and I disagree on whether trichothecene is a chemical or a biological agent; one can make the case either way).

This article is a very fair and unbiased summary of the evidence and arguments on both sides of the issue.

I'd also refer you to This article which refers to another possible trichothecene attack in Saudi Arabia during ODS

As another interesting bit of evidence, there was a "yellow rain" incident in India in June 2002. A yellow-green rain fell from the sky on the town of Sangrampur, near Calcutta, India. Rumors spread that the rain might be contaminated with toxins or chemical warfare agents. Shortly after the "attack," however, Deepak Chakraborty, chief pollution scientist for the Indian state of West Bengal, reported that the yellow-green droplets were in fact bee feces containing pollen from local mangoes and coconuts. He concluded that the colored rain may have been caused by the migration of a giant swarm of Asian honeybees, which are known to produce "golden showers."

The important point is that there were no cases whatsoever of people going down with anything like the symptoms that were reported in Laos. Therefore, this is very strong evidence that the bee-yuk phenomenon (which is known, documented and very familiar to people in the area) was not related to the yellow rain attacks in Laos.

My opinion is that the trichothecene use in Laos is a perfect example of your 65 percent case. Personally, having worked in the area and been shat on by bees in the process, I believe the trichothecene poison attack explanation is correct.
Post by Stuart »

Starglider wrote:Strictly yes having one will make bioweapon design easier, but only in the sense that having general AI on your side makes any technological endeavour easier. Hostile AIs might well deliberately create bioweapons, but if you've got hostile AIs with those capabilities you've already lost anyway.
But, AI also makes easier the genetic engineering needed to create completely new diseases - one hypothesized extermination-type disease as an example. A latent infection that produces no symptoms in its victims, but if a woman infected with the agent gets pregnant, the foetus becomes a fast-growing tumor that kills her in days. Not theory, that one has been seriously suggested as a possible threat.
Yes. We have to cut the response time down. Massive numbers of widely distributed sensors and more automation of the response process is the only way to do it. AI can help with the 'recognise the threat' and 'design the cure' part, automating the 'manufacture the cure' part and reducing distribution delay will probably need small scale units with the ability to synthesise arbitrary complex compounds quickly and in useful amounts, i.e. something the size of a semi-trailer (at most) that can do what currently takes a chemical plant.
There's a lot of money being spent (at AMRIID and other places) on just that. As I said, nobody's given up but also, nobody is hopeful that we can find enough answers. Another example: there is a very good laser system that detects aerosols and alerts people in the area. It's already in limited military service and some experiments were tried out to see if it had civilian use. The problem was a horrendous false alarm rate. Now, work is going on attempting to apply AI to the job of reducing that false alarm rate. We're about 90 percent there. The trouble is the remaining 10 percent is critical.
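To spell out why that last 10 percent is the critical part: when real attacks are very rare, the false alarm rate completely dominates how believable an alert is. A sketch with illustrative numbers only:

```python
# Why the last bit of false-alarm reduction is the critical bit: when
# real attacks are very rare, precision is dominated by the false
# positive rate, not the hit rate. All numbers invented for illustration.
def precision(p_attack, hit_rate, false_alarm_rate):
    true_alerts = p_attack * hit_rate
    false_alerts = (1 - p_attack) * false_alarm_rate
    return true_alerts / (true_alerts + false_alerts)

P_ATTACK = 1e-6   # assumed prior that a given alert window holds a real attack
for far in (0.1, 0.01, 0.001, 1e-5):
    print(f"false alarm rate {far:g}: only {precision(P_ATTACK, 0.99, far):.2%} of alerts are real")
```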

Good example of the problem: wake detection of submarines. Submarines do produce a wake on the surface that can be detected by radar. The problem is, so do a lot of things, and picking the one we want out from the rest is proving impossible. People have been working on it since the 1980s without any luck.
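For what it's worth, the standard radar answer to this kind of clutter problem is constant false alarm rate (CFAR) processing; a toy cell-averaging version on synthetic data is sketched below (the data and parameters are invented, and this is not the actual wake-detection processing - which is exactly the part that has proven so hard):

```python
import numpy as np

# Toy cell-averaging CFAR: estimate local clutter power from cells
# around the cell under test and alarm only when the test cell exceeds
# a multiple of that estimate. Synthetic data; invented parameters.
def ca_cfar(power, guard=2, train=8, scale=8.0):
    hits = []
    for i in range(train + guard, len(power) - train - guard):
        left = power[i - guard - train : i - guard]
        right = power[i + guard + 1 : i + guard + train + 1]
        if power[i] > scale * np.mean(np.concatenate([left, right])):
            hits.append(i)
    return hits

rng = np.random.default_rng(1)
clutter = rng.exponential(1.0, 500)   # stand-in for sea clutter returns
clutter[250] += 25.0                  # one injected 'wake-like' return
print(ca_cfar(clutter))               # typically [250], plus the odd false alarm
```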
Well yes, but ultimately biochemistry is a bottleneck that will cause conventional bioweapons to drop out of the arms race. Unfortunately there are even worse things, starting with hybridised bio-nano weapons, that will probably take over right where they left off. The only good thing about that is that the level of technical expertise required to build them is much higher than for cultured (non-GE) bioweapons.
From a technology point of view yes, but the scary thing about bioweapons is how easily they can be made and how hard it is to control them. In many ways, the most surprising thing is that they haven't already been used. Again, another example: it's like mining U.S. ports. One bottom mine in the main shipping lane and the port comes to a complete standstill for days or even weeks. Amazing nobody has tried it.
I assume you mean in the sense of having a tiny colony of people immune to attack by anything other than a massive missile strike; AFAIK there's nothing the US could have done to completely avoid the bio threat, even the TBOverse USA with its excellent strategic position doesn't manage it.
I was actually thinking more of strategic defense in general. Bioweapons, for all their threat profile, are just one of the threats we face. If we'd kept up the strategic defense program momentum of the 1959-61 era, we would be whole worlds better off.
The maddening thing about it is that it isn't a tools or infrastructure issue. Existing hardware within reach of the average moderately funded research group more than suffices for the job, and existing software tools are probably adequate. It's a purely design challenge - on the one hand there's a very real chance that someone, somewhere will crack it tomorrow, but there's also a very real chance (though IMHO low) that we won't crack it even with another century to work on it.
As I said, I hope you guys pull it off. There's a lot of money being thrown quietly at the project area (not just by us) but really we need that breakthrough. Once we get it, it'll solve a lot of problems.
So there's no particular call to action here, the vast majority of people should continue doing what they can to support other space colonisation, biodefence efforts, nanotech progress, brain-computer interfacing progress etc.
Agreed, no reservations. I'd add missile and air defense to that; it's good to be able to take down the delivery systems.
Regrettably the vast majority of the electorate is not interested in having a tiny community survive in space if it means they personally die. Even if all of this was generally and accurately appreciated, I would imagine most people would say 'I want my taxes to be spent on developing defences to protect me and my family, not saving some lucky few'.
On the other hand, people always assume they'll be the lucky ones who get chosen to go up (I'm fortunate since I know I will :) ). It depends, it is pullable.
I'd appreciate it if you'd explain exactly why a sealed biosphere built in Antarctica, or the Sahara, or in a huge bunker would have less chance of surviving an asteroid strike than a space colony.
The earthquakes etc. that result from the asteroid impact will do it. At the time of the Permian-Triassic (P-Tr) extinction event, the whole biosphere was poisoned for tens of thousands of years. Oceans became anoxic; that caused hydrogen sulphide-producing biologicals to proliferate, and they poisoned the rest of the biosphere. A sealed biosphere could well survive an extinction event if the duration was limited to a few years, but for millennia? Don't think so. It's like nuclear initiations: the only real defense is to be somewhere else.
Post by Starglider »

Stuart wrote:But, AI also makes easier the genetic engineering needed to create completely new diseases - one hypothesized extermination-type disease as an example.
Unfortunately you don't actually need general AI for this. Biochemistry simulation and semi-intelligent search will eventually get to the point (with the help of ever-increasing brute force) of being able to reliably predict protein folding on a laptop, such that you can design a protein shape and have the computer spit out an RNA sequence likely to produce it, or input an existing enzyme and have the computer create appropriate inhibitors for it. So while my statement holds for AGI, more conventional computing does benefit the offence as well as (but probably not more than) the defence.
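For what it's worth, the mechanical tail end of that pipeline already exists: once you have a designed amino-acid sequence, emitting an RNA sequence that codes for it is a lookup in the standard genetic code. All the hard, compute-bound work is in the folding prediction that picks the sequence, which is only gestured at here:

```python
# The mechanical last step of the design pipeline: reverse-translating
# a designed amino-acid sequence into one RNA sequence that codes for
# it. The codon assignments below are the real standard genetic code
# (one arbitrary synonym per amino acid); the protein itself is a
# meaningless example.
CODON = {
    'M': 'AUG', 'A': 'GCU', 'G': 'GGU', 'K': 'AAA', 'L': 'CUG',
    'S': 'UCU', 'T': 'ACU', 'V': 'GUU', 'E': 'GAA', 'D': 'GAU',
}

def reverse_translate(protein):
    """Return one mRNA coding sequence for the given amino-acid string."""
    return ''.join(CODON[aa] for aa in protein) + 'UAA'   # UAA = stop codon

print(reverse_translate('MGKLS'))   # -> AUGGGUAAACUGUCUUAA
```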
Now, work is going on attempting to apply AI to the job of reducing that false alarm rate. We're about 90 percent there. The trouble is the remaining 10 percent is critical.
Excellent narrow AI application. Unfortunately the machine learning tools we've got at present are sufficiently crude and reliant on humans to supply context and training feedback that this sort of application takes a lot of time and money to develop, and the effort isn't really transferable to the next project. There's a lot of scope for improving the underlying tools, such that churning out narrow AI applications becomes quicker and easier, without actually going as far as AGI. This kind of automated software engineering is an area my own company has been focusing on until recently, though we're currently on a bit of a web services detour.
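For anyone wondering what a 'narrow AI application' cashes out to at the bottom of the stack: very often something no fancier than a trained statistical classifier scoring each alarm. A toy sketch (hand-rolled logistic regression; the feature extraction and the labelled examples are the hypothetical, human-supplied parts - exactly the context and training feedback bottleneck I mean):

[code]
import math
import random

def predict(weights, bias, features):
    # Probability that an alarm is a real threat rather than a false positive.
    z = bias + sum(w * x for w, x in zip(weights, features))
    z = max(min(z, 35.0), -35.0)  # clamp to avoid overflow in exp()
    return 1.0 / (1.0 + math.exp(-z))

def train(samples, epochs=500, lr=0.1):
    # samples: (feature_vector, label) pairs; label 1 = real, 0 = false alarm.
    weights, bias = [0.0] * len(samples[0][0]), 0.0
    for _ in range(epochs):
        random.shuffle(samples)
        for features, label in samples:
            err = label - predict(weights, bias, features)  # log-loss gradient
            bias += lr * err
            weights = [w + lr * err * x for w, x in zip(weights, features)]
    return weights, bias

# Synthetic 'sensor alarms': two noisy features; real threats score higher.
data = []
for _ in range(400):
    real = random.random() < 0.5
    feats = [random.gauss(1.5 if real else 0.0, 1.0) for _ in range(2)]
    data.append((feats, 1 if real else 0))
weights, bias = train(data)
print(predict(weights, bias, [2.0, 2.0]))  # should be close to 1
[/code]

The dozen lines of maths are the easy bit; deciding what the features should be and labelling enough historical alarms to train on is where the time and money go.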

In many ways, the most surprising thing is that they haven't already been used. Again, as another example, it's like mining U.S. ports. One bottom mine in the main shipping lane and the port comes to a complete standstill for days or even weeks. Amazing nobody has tried it.
Fortunately very few of the people qualified to do a huge amount of damage to society actually want to do so. But then there are dangerous nutjobs like this one (the AGI equivalent would be someone like Hugo de Garis - fortunately AGI is much harder than bioweapons). Did I mention how much I loathe and despise the Voluntary Human Extinction Movement and their eco-terrorist associates?
If we'd kept the strategic defense program momentum up in the 1959-61 era, we would be whole worlds better off.
Certainly true.
As I said, I hope you guys pull it off. There's a lot of money being thrown quietly at the project area
Very little of the big money is targeting general AI these days; it got so thoroughly discredited in the late 80s (and Cycorp have been doing their best to keep discrediting it right through the 90s and 2000s, as well as trying to stop other projects getting funded) that the military spending is all going on more practical short-term projects. Which frankly is fine; I don't think runaway self-enhancement risks are well enough understood and appreciated for it to be a good thing for gobs of government funding to go on AGI. Neurophys-inspired brain simulation is getting a fair amount of cash though; it has plenty of near-term medical spinoffs.
So there's no particular call to action here; the vast majority of people should continue doing what they can to support the other areas: space colonisation, biodefence efforts, nanotech progress, brain-computer interfacing progress etc.
Agreed, no reservations. I'd add missile and air defense to that; it's good to be able to take down the delivery systems.
Missile and air defence are sensible things for the government to throw money at, but they don't really benefit from advocacy and (relatively) small-scale private research the way the others do. The average person can buy stock in a space launch startup, donate to the Foresight Institute, and if they're a reasonably young scientist or engineer probably steer their career towards working on space/nano/bio/AI. AFAIK the best you can do for strategic defence is try to get hired by one of the major military contractors and hope you get assigned to a relevant project.
On the other hand, people always assume they'll be the lucky ones who get chosen to go up (I'm fortunate since I know I will :) ).
Apollo has already reserved you a seat eh? :)
I'd appreciate it if you'd explain exactly why a sealed biosphere built in Antarctica, or the Sahara, or in a huge bunker would have less chance of surviving an asteroid strike than a space colony.
The earthquakes etc. that result from the asteroid impact will do it.
AFAIK a 10-100km asteroid will produce earthquakes in the 10-11 Richter magnitude range. That's bad, but it won't damage modern buildings on the other side of the continent, much less a different continent. Since you did of course build two colonies on opposite sides of the earth for redundancy, at least one should be fine. Just make sure they're not anywhere near a coastline.
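Back-of-envelope support for that figure (my assumptions: a stony impactor at 20 km/s, and the big unknown, a seismic coupling efficiency of ~0.1%, i.e. the fraction of impact energy radiated as seismic waves), using the Gutenberg-Richter energy-magnitude relation log10(E) = 1.5M + 4.8 with E in joules:

[code]
import math

def impact_magnitude(diameter_m, velocity_ms=20e3, density=3000.0,
                     seismic_efficiency=1e-3):
    radius = diameter_m / 2.0
    mass = density * (4.0 / 3.0) * math.pi * radius ** 3   # kg
    kinetic = 0.5 * mass * velocity_ms ** 2                # joules
    seismic = seismic_efficiency * kinetic
    # Gutenberg-Richter energy-magnitude relation.
    return (math.log10(seismic) - 4.8) / 1.5

print(impact_magnitude(10e3))   # 10 km impactor -> magnitude ~10.5
[/code]

The answer moves by about two-thirds of a magnitude for every factor of ten in the coupling efficiency, so treat it as order-of-magnitude only.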
At the time of the Permian-Triassic (P-Tr) extinction event, the whole biosphere was poisoned for tens of thousands of years. Oceans became anoxic, which caused hydrogen-sulphide-producing organisms to proliferate, and they poisoned the rest of the biosphere. A sealed biosphere could well survive an extinction event if the duration was limited to a few years, but for millennia?
Even with that amount of damage, the prospects for terraforming earth back into habitability are better than the prospects for making mars or venus earthlike. You're comparing surviving in sealed biospheres on a planetary surface with surviving in sealed biospheres in a vacuum. On earth there's still plenty of free oxygen and hydrogen available in the atmosphere, just requiring energy to extract, there's gravity, and you can mine for more materials. In space you've got to shield against radiation and if you want more materials of any kind, you have to go find an asteroid, tow it home and mine it in zero gravity. Again, space is good for long term growth potential but I'm not clear it's superior for survivability, particularly if you're on a tight construction timetable where launch capacity is a killer for building a usefully large habitat.

Actually this is one place human hibernation technology, or better yet the ability to revive people from cryogenic stasis, would be really useful. If you're trying to save a large set of people with key skills and genetic diversity, you could launch or store them all in stasis along with a much smaller set of initial construction engineers, and revive them when enough additional habitat space has been built to support them.
User avatar
Stuart
Sith Devotee
Posts: 2935
Joined: 2004-10-26 09:23am
Location: The military-industrial complex

Post by Stuart »

Starglider wrote: Fortunately very few of the people qualified to do a huge amount of damage to society actually want to do so. But then there are dangerous nutjobs like this one (the AGI equivalent would be someone like Hugo de Garis - fortunately AGI is much harder than bioweapons). Did I mention how much I loathe and despise the Voluntary Human Extinction Movement and their eco-terrorist associates?
I'd gained that impression somehow. My personal idea is that we should put all those people on a bomb-disposal course where they can learn by trial and error (and keep them there until they've run through all the possible errors or until all the candidates are expended).
Apollo has already reserved you a seat eh?
Not quite; a red-headed lady I know has "arranged" for one of the other seat-holders to, uhhh, get ill just before take-off.
AFAIK a 10-100km asteroid will produce earthquakes in the 10-11 Richter magnitude range. That's bad, but it won't damage modern buildings on the other side of the continent, much less a different continent. Since you did of course build two colonies on opposite sides of the earth for redundancy, at least one should be fine. Just make sure they're not anywhere near a coastline.
The initial impact isn't the quake I meant. The problem is the contra-coup shock on the other side of the world from the point of impact. This causes a sustained flood-basalt earthquake/eruption; the phenomenon is a combination of both: the quake ruptures the crust down to magma level, then the magma upwells through the cracks and floods out. The Siberian Traps and the Deccan Traps are good examples; they released immense amounts of smoke, dust and toxic gases into the atmosphere. The Siberian Traps were probably the largest earthquake/eruption in earth's history and they lasted for a million years.
Even with that amount of damage, the prospects for terraforming earth back into habitability are better than the prospects for making mars or venus earthlike. You're comparing surviving in sealed biospheres on a planetary surface with surviving in sealed biospheres in a vacuum. On earth there's still plenty of free oxygen and hydrogen available in the atmosphere, just requiring energy to extract, there's gravity, and you can mine for more materials. In space you've got to shield against radiation and if you want more materials of any kind, you have to go find an asteroid, tow it home and mine it in zero gravity. Again, space is good for long term growth potential but I'm not clear it's superior for survivability, particularly if you're on a tight construction timetable where launch capacity is a killer for building a usefully large habitat.
I agree, but the problem is that there's nobody left alive after the flood-basalt quake to do the terraforming. I agree about the launch weight problem; that's why I say we've missed the bus. All we can do now is try to get up there and hope we have time.
Actually this is one place human hibernation technology, or better yet the ability to revive people from cryogenic stasis, would be really useful. If you're trying to save a large set of people with key skills and genetic diversity, you could launch or store them all in stasis along with a much smaller set of initial construction engineers, and revive them when enough additional habitat space has been built to support them.
Interesting thought. I must admit I've eliminated cryogenic stasis and the space elevator from TBO because I don't know enough about the technologies involved to make a realistic job of describing them.
Nations do not survive by setting examples for others
Nations survive by making examples of others
User avatar
Starglider
Miles Dyson
Posts: 8709
Joined: 2007-04-05 09:44pm
Location: Isle of Dogs
Contact:

Post by Starglider »

Stuart wrote:I agree about the launch weight problem; that's why I say we've missed the bus. All we can do now is try to get up there and hope we have time.
If launch weight is absolutely critical, time is limited and the stakes are the survival of the human race, the next question is 'is surface-to-orbit Orion actually workable'. AFAIK the answer is 'yes, the engineering is sound, fallout will be less than a cold war atmospheric nuclear test, payload mass is in the thousands of tons per launch'. If the answer is indeed yes, the next question would appear to be 'how do we finally overrule the anti-anything-nuclear idiots and get it built'.
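To put rough numbers on the payload claim (mine, not the project studies'): effective specific impulses usually quoted for surface-launched nuclear-pulse designs run somewhere around 2,000-6,000 s. Feed those into the rocket equation with a ~9 km/s surface-to-LEO budget and an assumed 30% structural fraction (that pusher plate and shock absorber stack is heavy) and the mass fractions come out absurdly good by chemical standards:

[code]
import math

G0 = 9.81          # m/s^2
DELTA_V = 9000.0   # m/s, rough surface-to-LEO budget including losses

def payload_fraction(isp_s, structure_fraction=0.30):
    # Tsiolkovsky: delta_v = Isp * g0 * ln(m0 / m_final)
    mass_ratio = math.exp(DELTA_V / (isp_s * G0))
    final_fraction = 1.0 / mass_ratio            # (structure + payload) / m0
    return max(final_fraction - structure_fraction, 0.0)

for isp in (2000, 4000, 6000):
    print(isp, round(payload_fraction(isp), 2))  # ~0.33, ~0.5, ~0.56
[/code]

Even at the pessimistic end, a 10,000 ton vehicle puts roughly 3,000 tons into orbit, which is where the 'thousands of tons per launch' figure comes from.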

Incidentally I just discovered that Robert McNamara was the one who effectively cancelled the Orion program (though I must've read it before; presumably it didn't register). Somehow I am not surprised. Given that he wasn't around to mess things up in the TBOverse, what happened to Orion (and, while I'm at it, NERVA, Dumbo and TRITON)?
User avatar
Surlethe
HATES GRADING
Posts: 12267
Joined: 2004-12-29 03:41pm

Post by Surlethe »

Stuart wrote:
Surlethe wrote:The downside was the utterly atrocious grammar. It made the book nigh-unreadable. I wholeheartedly suggest another edition, rewritten at least twice -- once for grammar, spelling, punctuation, and once just in case you missed anything.
There was an inquiry as to what happened with the spelling and punctuation - it turned out the last batch of author's corrections weren't made due to an administrative error. There is a second edition (hardback) coming out with that stuff fixed.
Ahhh. That makes much more sense than a book going to print with errors.
I'd take serious issue with you on the grammar. The story is largely told via the eyes and internal thoughts of the various characters who are at the center of each section - so the text follows their thoughts. As a result, it's colloquial rather than classic "perfect" grammar; it's the way people speak and think, with minimum modifications for clarity. Redo those sections in classical grammar and it reads horribly wrong, stilted and false.
To be honest, when I wrote "grammar", I was thinking "punctuation". My biggest nit to pick was with the commas; the actual sentence structures weren't annoying as far as I could tell. It just seemed like they would run out of breath thinking because there were no commas ... :wink:
Oh, and Stuart? You should drop into some of the Peak Oil threads down in SLAM or N&P sometime. The debate would be fun.
Everybody is having too much fun with their doom, gloom and utter disaster prognostications. I don't want to spoil it for them.

A little cold water on the more hardcore predictions wouldn't be unwelcome. In fact, an informed opinion backed up with sound arguments is never unwelcome in a debate.
Anyway, neither Peak Oil nor Global Warming is a problem; humanity is going to destroy itself long before they become significant. I don't give us much past the middle of this century. We had our chance to survive and blew it.
I'm a little more optimistic, but then your opinion is evidently more informed than mine. The worst-case scenario I can envision looks something like Global Mean Temperature; constructing that took into account a death rate of something like 99.9%, and still I had to make sure there was no opportunity for an agricultural society to rise again. Even if the human race is sufficiently wiped out -- and that's a big "if" -- it also has to be impossible for people to scrape together agriculture in various pockets.

The best I can come up with off the top of my head is, like you pointed out, an airborne virus that attacks chlorophyll, or a large asteroid strike (IIRC, the one slated to make a close run -- with a 1/16000 chance of hitting -- is only enough to totally destroy a large state) with the associated global die-off and winter effects.
A Government founded upon justice, and recognizing the equal rights of all men; claiming no higher authority for existence, or sanction for its laws, than nature, reason, and the regularly ascertained will of the people; steadily refusing to put its sword and purse in the service of any religious creed or family, is a standing offense to most of the Governments of the world, and to some narrow and bigoted people among ourselves.
F. Douglass
User avatar
Einhander Sn0m4n
Insane Railgunner
Posts: 18630
Joined: 2002-10-01 05:51am
Location: Louisiana... or Dagobah. You know, where Yoda lives.

Post by Einhander Sn0m4n »

[two images]
User avatar
Alan Bolte
Sith Devotee
Posts: 2611
Joined: 2002-07-05 12:17am
Location: Columbus, OH

Post by Alan Bolte »

That set me cackling gleefully.
Any job worth doing with a laser is worth doing with many, many lasers. -Khrima
There's just no arguing with some people once they've made their minds up about something, and I accept that. That's why I kill them. -Othar
Avatar credit
User avatar
Einhander Sn0m4n
Insane Railgunner
Posts: 18630
Joined: 2002-10-01 05:51am
Location: Louisiana... or Dagobah. You know, where Yoda lives.

Post by Einhander Sn0m4n »

[two images]
User avatar
CaptainChewbacca
Browncoat Wookiee
Posts: 15746
Joined: 2003-05-06 02:36am
Location: Deep beneath Boatmurdered.

Post by CaptainChewbacca »

British SS Units, and maybe an American one? Creepy.
Stuart: The only problem is, I'm losing track of which universe I'm in.
You kinda look like Jesus. With a lightsaber.- Peregrin Toker
User avatar
Einhander Sn0m4n
Insane Railgunner
Posts: 18630
Joined: 2002-10-01 05:51am
Location: Louisiana... or Dagobah. You know, where Yoda lives.

Post by Einhander Sn0m4n »

[two images]
User avatar
Starglider
Miles Dyson
Posts: 8709
Joined: 2007-04-05 09:44pm
Location: Isle of Dogs
Contact:

Post by Starglider »

Taken from an entirely unrelated thread, but:
jegs2 wrote:We've seen a dangerous drift within the military over the past few decades, solidifying support of one particular party. Thanks to the war in Iraq and the resulting damage to the US Army, that has eroded.
...I must confess I found the blatant anti-Democrat sentiment in some of the TBO stories, in particular the admiral using the name as an insult in front of the troops in Crusade, disturbing. It wouldn't be so bad if the US actually had more than two parties, but it doesn't, so high-ranking members of the US military saying 'only the Republicans can govern the country, the Democrats are scum' while on duty is essentially 'damn, having to pretend to care about this democracy stuff is annoying, wouldn't we all be better off in a dictatorship eh troops?'.
User avatar
Instant Sunrise
Jedi Knight
Posts: 945
Joined: 2005-05-31 02:10am
Location: El Pueblo de Nuestra Señora la Reina de los Angeles del Río de Porciúncula
Contact:

Post by Instant Sunrise »

Starglider wrote:...I must confess I found the blatant anti-Democrat sentiment in some of the TBO stories, in particular the admiral using the name as an insult in front of the troops in Crusade, disturbing. It wouldn't be so bad if the US actually had more than two parties, but it doesn't, so high-ranking members of the US military saying 'only the Republicans can govern the country, the Democrats are scum' while on duty is essentially 'damn, having to pretend to care about this democracy stuff is annoying, wouldn't we all be better off in a dictatorship eh troops?'.
AFAIK, it's only the Senior Chief who does that, and he has a good reason to do so, as explained in this story.
Hi, I'm Liz.
SoS: NBA | GALE Force
Twitter
Tumblr
User avatar
Stuart
Sith Devotee
Posts: 2935
Joined: 2004-10-26 09:23am
Location: The military-industrial complex

Post by Stuart »

Starglider wrote:I must confess I found the blatant anti-Democrat sentiment in some of the TBO stories, in particular the admiral using the name as an insult in front of the troops in Crusade, disturbing.
Not an Admiral; it's the eponymous Senior Chief. There is a built-in joke there: the "Democrats" he refers to are actually the Democratic-Republicans of Jefferson's era, who are the lineal ancestors of today's Republican Party! There are four stories that have a large U.S. component.

The Great Game - Republican President (LeMay) not seen

Crusade - Democrat President (LBJ) portrayed as a man who combines great political skills with a humanist and sympathetic outlook and "the common touch".

Ride of the Valkyries - Republican President (Nixon) portrayed as a most unpleasant and not very competent person

High Frontier - Republican President (Reagan). Ronaldus Magnus, 'nuff said.

So, of the Presidents, the only Democrat is portrayed in a very favorable light, while of the Republicans one is portrayed favorably, one unfavorably and one not at all.
It wouldn't be so bad if the US actually had more than two parties, but it doesn't, so high-ranking members of the US military saying 'only the Republicans can govern the country, the Democrats are scum' while on duty is essentially 'damn, having to pretend to care about this democracy stuff is annoying, wouldn't we all be better off in a dictatorship eh troops?'.
Not so; a Senior Chief is hardly high-ranking (although the Chiefs are a critical part of running the fleet). The only other case that might be construed as the military being anti-Democrat is the private command meeting during Crusade, when the order has been received not to attempt a rescue of the shot-down aircrew. There, the motivation is that this runs against everything they've been trained to believe in, so they must go in despite orders. Note, too, that LBJ then goes to work with a sledgehammer and takes apart those who issued that order.
Nations do not survive by setting examples for others
Nations survive by making examples of others
User avatar
TimothyC
Of Sector 2814
Posts: 3793
Joined: 2005-03-23 05:31pm

Post by TimothyC »

"I believe in the future. It is wonderful because it stands on what has been achieved." - Sergei Korolev