Stuart wrote: I don't give us much past the middle of this century. We had our chance to survive and blew it.

You think mass space colonisation starting with 70s tech was the only real chance? If so I would of course beg to differ.
Starglider wrote: You think mass space colonisation starting with 70s tech was the only real chance? If so I would of course beg to differ.

In a way, I believe there is a very narrow window between an intelligent species gaining the ability to move into space and that species being able to utterly destroy itself or be destroyed. By turning our backs on space in the 1970s, we doomed ourselves. As it happens, I believe the probability is that we'll destroy ourselves by warfare; my guess is uncontrolled biological warfare, but that's really irrelevant. By confining ourselves to Earth, we've left ourselves wide open to extinction, and nature is remarkably remorseless in such things. Could be an asteroid, could be natural disease, could be environmental collapse, could be Factor X. We're all sitting on the bullseye and there's no way off it.
Stuart wrote: In a way, I believe there is a very narrow window between an intelligent species gaining the ability to move into space and that species being able to utterly destroy itself or be destroyed.

That's one answer to the Fermi paradox, but there's another answer (consequences of creating arbitrary self-enhancing intelligences) that's just as valid. Whether it accounts for more planetkills depends on the frequency distribution of various traits in evolved sapients (i.e. how fast they tend to kill each other off) and whether anthropic effects are involved (i.e. whether the fraction of arbitrary self-enhancing intelligences that generate 'reformat/control as much of the universe as possible' as a subgoal is large enough that we literally have to be the first, or at least one of the first, civilisations to approach this threshold).
Stuart wrote: By turning our backs on space in the 1970s, we doomed ourselves.

This assumes that we could've accomplished enough to make a difference by now even with a continuation of Apollo-level funding - IMHO probable but highly debatable. More relevantly, it assumes that we're not going to develop technology in the near future that leapfrogs all the missed incremental progress by massively decreasing launch and orbital construction costs.
Stuart wrote: As it happens, I believe the probability is that we'll destroy ourselves by warfare; my guess is uncontrolled biological warfare, but that's really irrelevant.

Assumes that both the weapons get cheap enough for non-rational players to start deploying them and that no countering technology is developed in time. Biotech is in a particularly dangerous phase in this regard, but the problem is solvable in principle; good enough biotech, ideally combined with very good surveillance and monitoring technology, eventually tips the balance strongly in the defence's favour. Unfortunately the more advanced applications of microrobotics and nanotechnology favour the attacker even more strongly and have a wider, possibly indefinite window (the more plausible proposed defence schemes are extremely complex and difficult to implement).
Stuart wrote: By confining ourselves to Earth, we've left ourselves wide open to extinction and nature is remarkably remorseless in such things.

Assumes that we're not going to a) turn into something a lot more robust than basic humans or b) develop very effective mitigation strategies for the specified risks.
Stuart wrote: Could be an asteroid,

Needs either space-based resources to divert it or total independence from the ecosystem to ignore the results. You're obviously keen on (a), which would be nice; I think (b) may turn up first, but this risk is minor enough not to be an immediate concern.
Stuart wrote: could be natural disease,

Mitigated and eventually eliminated with medical progress, which is doing great.
Stuart wrote: could be environmental collapse,

Serious risk, but technological solutions are on the drawing board for just about every subproblem. Furthermore, space colonies don't actually buy you anything against environmental collapse (or even biowar) that self-sufficient colonies on the earth's surface don't. An antarctic or desert sealed biosphere is much cheaper to build and has much better access to local resources. Space colonisation is useful for a) avoiding nuclear wars, if no one targets you or you have a very comprehensive anti-missile system, and b) long-term expansion.
Stuart wrote: We're all sitting on the bullseye and there's no way off it.

Yet. Radical technologies for hardening the bullseye and getting off it much more easily are within reach, if we can last a few more decades. On the other hand, transhuman intelligence is a huge risk (actually, a huge set of risks), even as it makes those technologies much easier to reach (in fact largely because of that).
Starglider wrote: That's one answer to the Fermi paradox but there's another answer (consequences of creating arbitrary self-enhancing intelligences) that's just as valid. Whether it accounts for more planetkills depends on the frequency distribution of various traits in evolved sapients (i.e. how fast they tend to kill each other off) and whether anthropic effects are involved (i.e. whether the fraction of arbitrary self-enhancing intelligences that generate 'reformat/control as much of the universe as possible' as a subgoal is large enough that we literally have to be the first, or at least one of the first, civilisations to approach this threshold).

The basic message is still the same though; the window of opportunity for getting off this planet is very small. I'd guess around a century or so measured from around 1960. That means that the clock runs out for us at around 2060 at the latest. I won't live to see it but a lot of the people here will.
Starglider wrote: This assumes that we could've accomplished enough to make a difference by now even with a continuation of Apollo level funding - IMHO probable but highly debatable. More relevantly it assumes that we're not going to develop technology in the near future that leapfrogs all the missed incremental progress by massively decreasing launch and orbital construction costs.

If we'd carried on with space exploration in the 1970s, it would have created a driver for the development of inexpensive surface-to-space travel. At the moment there isn't really such an incentive. Now, I think it's too late; we don't have the infrastructure up there and we haven't time to develop it before the roof falls in.
Starglider wrote: Assumes that both the weapons get cheap enough for non-rational players to start deploying them and that no countering technology is developed in time. Biotech is in a particularly dangerous phase in this regard, but the problem is solvable in principle; good enough biotech, ideally combined with very good surveillance and monitoring technology, eventually tips the balance strongly in the defence's favour.

I'd disagree there. Biological weapons are very simple to obtain (all one needs to set up an anthrax production facility is a few bits of glassware, a few spadefuls of dirt from the local farm, a culture medium - baby formula will do just fine - and patience. All one needs to develop an antibiotic-resistant form of anthrax is the above plus a supply of antibiotics. Have you noted how many leading Islamic terrorists are medical doctors?). Monitoring is virtually impossible. That's why biological warfare scares me much more than nukes; frankly Iran getting a nuclear device or two doesn't worry me terribly. The idea of them having twenty or thirty years to work on biologicals does. These are the sort of options that can be produced without any great genetic engineering expertise.
Starglider wrote: Unfortunately the more advanced applications of microrobotics and nanotechnology favour the attacker even more strongly and have a wider, possibly indefinite window (the more plausible proposed defence schemes are extremely complex and difficult to implement).

They require technology levels that biologicals don't. Look how far biological warfare goes back; that's the terrifying thing. It doesn't require any great skill.
Starglider wrote: It also assumes that a hegemonic government doesn't get total control of the earth. This isn't going to happen with current or near-future technology, but good enough cognitive engineering technology (i.e. fine grained brain manipulation, enough that one trip to the neurosurgeon will make you a devoted and almost unshakeable supporter of cause X for life) will make it quite plausible. Further in the future, an invasive engineered agent that can network and co-operate to restructure the brain in this way would make it a very serious risk.

In the time frame I think we have available, I don't believe that's a probable development.
Starglider wrote: Finally it assumes that a single party doesn't gain a massive technological advantage over everyone else, the kind of advantage you would get from having very fast and powerful engineered intelligences co-operating (though for genuine AIs the singleton/group distinction is pretty irrelevant).

The United States is heading that way now; our technology lead over the rest of the world is lengthening steadily. Again though, I don't think there's enough time left to matter. The 2040 - 2050 date for the Great Biowar is my best guess at how much time we have left. As I said, I won't be around to know whether I got it right, but...
Starglider wrote: Assumes that we're not going to a) turn into something a lot more robust than basic humans or b) develop very effective mitigation strategies for the specified risks.

Again time is the problem. As to developing countermeasures, lack of will is in there as well. Look at all the problems we've got trying to deploy an anti-missile system, and that's with a lunatic in North Korea tossing long-range missiles around and trying to develop a nuclear warhead. If he's developing that, do you want to bet the farm that he isn't developing biologicals? We can get an anti-missile system up fairly quickly, but the sort of elaborate defenses needed to thwart a biological attack? There's no sign of people even beginning to think about them.
Starglider wrote: Needs either space-based resources to divert it or total independence from the ecosystem to ignore the results. You're obviously keen on (a), which would be nice, I think (b) may turn up first, but this risk is minor enough not to be an immediate concern.

My understanding is that there is an asteroid due to make a very close pass in the 2030s (perhaps somebody can correct and/or elaborate). Also one may come out of left field. The point is near-extinction events from asteroid impacts are a fact; we don't know when one will happen next.
Starglider wrote: Mitigated and eventually eliminated with medical progress, which is doing great.

Only against diseases we know about. I have in mind something that's beyond our experience and which means we have to start from scratch in fighting it. Needn't be one that directly affects us - how about an air-vectored disease that destroys chlorophyll in plants?
Starglider wrote: Yet. Radical technologies for hardening the bullseye and getting off it much more easily are within reach, if we can last a few more decades. On the other hand, transhuman intelligence is a huge risk (actually, a huge set of risks), even as it makes those technologies much easier to reach (in fact largely because of that).

I'm not so sanguine. I guess we have thirty, perhaps forty years. That just isn't long enough to get anything significant done. For comparison, run back to 1975; we're much more advanced technologically than we were then, but none of those advances translates into hardening the bullseye any more. In fact, it's arguable the bullseye is softer now than it was then.
Stuart wrote: The basic message is still the same though; the window of opportunity for getting off this planet is very small.

Well, kinda. Getting off the planet (in the 'create some self-sustaining colonies' sense) almost completely removes the existential risk of various natural disasters. It reduces but does not eliminate biowarfare risks, in that depending on the setup of your interplanetary civilisation it may be possible to introduce pathogens into enough critical places before detection that effective quarantine is impossible. Similarly it reduces deliberate warfare risks in that smashing all the habitable biospheres is harder, but it does not eliminate them. Interstellar is much better, but of course much, much harder, at least if the only intelligence you have to work with is bog standard humans and you don't have a good way to freeze them for centuries-to-millennia.
Stuart wrote: I'd guess around a century or so measured from around 1960. That means that the clock runs out for us at around 2060 at the latest. I won't live to see it but a lot of the people here will.

If that's a .50 probability, that's pretty good; we've got a very good chance of developing lethal goodies like brain-computer interfacing, full-brain scanning and simulation, general nanoassembly, mobile microrobotics and of course general AI within that timeframe. You're probably the most qualified person I know with regard to estimating biowarfare existential risk, but the existential risks associated with these early-stage technologies require their own massive study to appreciate (I'm only an expert in the AGI one - and not the best expert I know on it, since I've never spent a lot of time playing professional futurist). Of course you might see a space elevator in that timeframe, which would probably suffice for a space lifeboat, and AFAIK there's still an outside chance of doing it with privatised space and conventional launches.
Stuart wrote: If we'd carried on with space exploration in the 1970s, it would have created a driver for the development of inexpensive surface-to-space travel.

Well, maybe. NASA's performance at driving commercial interest has been pretty abysmal, and they've never had a 'get asteroid mining started' agenda. This is a complex debate that I'm only moderately qualified for, but at this point it's of largely academic interest - unless you think government-driven space exploration is going to do anything useful in time.
Stuart wrote: Biological weapons are very simple to obtain... monitoring is virtually impossible.

That's the extreme window of vulnerability. Current medical science cannot provide a generally superior replacement for the human immune system. We can often create toxins tailored to destroy or render inert specific pathogens after much lab work, we can kick-start the immune system into action with immunisations (again after a lot of analysis and preparation) and we can mitigate symptoms to some degree, but that's it.
Stuart wrote: They require technology levels that biologicals don't.

They aren't a threat yet. But if we survive the next 30-50 years, we may actually see the end of biological weapons as a serious threat, simply because technology will eventually trump biology (ultimately the same reason why various forms of AI are so dangerous). However the threat from those advanced technologies themselves has no such limitations.
Stuart wrote: In the time frame I think we have available, I don't believe that's a probable development.

If your time frame is 50 years I think it's highly probable we'll have the understanding to do it. Brain mapping has been progressing quickly recently. The question is whether the tools will exist to make the required changes, which I don't have a reliable method of estimating. Something like freezing a brain solid and laser-ablation scanning it to a sufficient resolution for an accurate simulation is a relatively straightforward process you can turn into engineering specs - when we get scanner resolution X and computing power Y, which have been following these curves, then it becomes feasible. Fiddling about with dendrite webs in a currently unknown way without killing the subject or sending them mad isn't a procedure we can currently analyse in enough detail to get a good set of preconditions and hence probability for. But it would not surprise me, because the prospect is so attractive to many governments it could easily swing a lot of research funding when it passes a certain minimal level of plausibility.
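The "engineering specs" style of argument above can be made concrete with a toy calculation: if scanning resolution and computing power both improve on exponential curves, the year whole-brain scanning becomes feasible is simply the later of the two threshold crossings. Every number here (starting levels, required levels, growth rates) is an invented placeholder for illustration, not real data.

```python
# Toy feasibility-crossing model: an exponentially growing capability
# reaches a required threshold after log_{1+g}(required/current) years.
import math

def crossing_year(start_year, current_level, required_level, annual_growth):
    """Year an exponentially growing capability first meets the requirement."""
    years_needed = math.log(required_level / current_level, 1 + annual_growth)
    return start_year + math.ceil(years_needed)

# Hypothetical figures: compute in ops/sec, scanner resolution in voxels/mm^3.
compute_ready = crossing_year(2007, 1e15, 1e18, 0.6)  # assume ~60%/yr growth
scanner_ready = crossing_year(2007, 1e3, 1e6, 0.3)    # assume ~30%/yr growth

# Both capabilities are needed, so feasibility arrives at the later date.
feasible_year = max(compute_ready, scanner_ready)
print(compute_ready, scanner_ready, feasible_year)
```

Under these made-up inputs the scanner curve is the binding constraint; the point of the sketch is only that once the curves and thresholds are pinned down, the feasibility date falls out mechanically.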
Stuart wrote: The United States is heading that way now; our technology lead over the rest of the world is lengthening steadily.

The US could certainly destroy human civilisation with a little preparation, but it can't wield that power as a means of domination or deterrence, and is not likely to be invulnerable to counterattack any time soon.
Stuart wrote: Again time is the problem.

True. But although the chances of success may be small, they're worth fighting for; it hardly makes sense to sit down and give up. I've personally structured my life to try and make the best contribution I can to the development of one of the 'magic' technological fixes, simply because it's the most useful thing I can do.
Stuart wrote: As to developing countermeasures, lack of will is in there as well. Look at all the problems we've got trying to deploy an anti-missile system, and that's with a lunatic in North Korea tossing long-range missiles around and trying to develop a nuclear warhead. If he's developing that, do you want to bet the farm that he isn't developing biologicals? We can get an anti-missile system up fairly quickly, but the sort of elaborate defenses needed to thwart a biological attack? There's no sign of people even beginning to think about them.

While it's unfortunate that biowarfare risks don't have a higher profile, on the plus side there isn't that crazy MAD mentality that let the ABM critics successfully argue that being undefended against nuclear annihilation was a good thing. Essentially everyone agrees that defence against biowarfare is a good thing in principle - though there are plenty of irritating conspiracy nuts who refuse to accept that defensive biowarfare requires some installations and programs superficially quite similar to offensive biowarfare.
Stuart wrote: The later parts of the TBOverse (set from 1972 - 2050) are my scream of warning that we do need to develop mitigation strategies and develop them NOW.

I'm curious, have you had many opportunities in your career to make the case for space colonisation to relevant decision makers, the way you have your characters make it?
Stuart wrote: The point is near-extinction events from asteroid impacts are a fact, we don't know when one will happen next.

However it isn't a pressing risk, and having at least one self-sufficient outpost survive massive climate change on the earth's surface is probably easier than surviving without support in earth orbit (though the growth potential is more limited).
Stuart wrote: Only against diseases we know about,

But the tools for diagnosis, pathogen analysis, drug design, flexible manufacturing and treatment are all improving, as well as just expanding the range of off-the-shelf treatments available.
Stuart wrote: Needn't be one that directly affects us - how about an air-vectored disease that destroys chlorophyll in plants?

Given that that hasn't happened in the last three billion years despite abundant opportunities for it to evolve, it seems pretty unlikely without very good genetic engineering.
Stuart wrote: I'm not so sanguine.

Oh, I'm not sanguine. I'm saying that you're probably right about the biowarfare risk (in fact, that 50 years is generous), and that in addition to this we have lots more upcoming existential risks to worry about, some of which are essentially impossible to run away from or contain. And that's not even getting started on the Peak Oil / Runaway Climate Change scenarios (which I don't personally rate as existential risks, but they're exacerbating factors for some of the really serious problems).
Stuart wrote: For comparison, run back to 1975; we're much more advanced technologically than we were then but none of those advances translates into hardening the bullseye any more. In fact, it's arguable the bullseye is softer now than it was then.

I'd agree that it's softer in the sense that the infrastructure supporting civilisation is more fragile. Convincingly fixing that problem is a challenge for the nanofactory people. However humans haven't changed since 1975 - we just travel around more (which doesn't help either for biothreats). In fact we've been stuck with the mark I mod I human for all of recorded history. Changing that situation changes the rules of the game, and since we're currently losing the game it's a very compelling option.
Starglider wrote: It reduces but does not eliminate biowarfare risks, in that depending on the setup of your interplanetary civilisation it may be possible to introduce pathogens into enough critical places before detection that effective quarantine is impossible.

Protocols for defending against bioattacks on space facilities have been evaluated over the years. It's hard but not insoluble; basically the results of those discussions came up in "Exodus".
Starglider wrote: Similarly it reduces deliberate warfare risks in that smashing all the habitable biospheres is harder, but it does not eliminate them. Interstellar is much better, but of course much, much harder, at least if the only intelligence you have to work with is bog standard humans and you don't have a good way to freeze them for centuries-to-millennia.

Interstellar is pretty much out as far as the time frame is concerned. A moon base, though, was entirely doable using 1980s technology.
Starglider wrote: Unfortunately getting off the planet is only of limited use with regard to hostile transhuman intelligences, in that if they want to come after you, they probably can, and for a rational intelligence there's a strong motive to do this if surviving humans are likely to interfere with its goals later.

Our general cut was that if a hostile space-faring nation comes after us, the technology differential will be such that all we can do is go down fighting. Which we'd do. Defending a space habitat is a lot easier in some ways than it sounds; it's at the top of the gravity well and that gives it an enormous advantage - provided it's armed to the teeth and prepared to start shooting.
Starglider wrote: You're probably the most qualified person I know with regard to estimating biowarfare existential risk, but the existential risks associated with these early-stage technologies require their own massive study to appreciate.

The problem with all the technologies you mention is that they can be applied to the biowarfare effort as well - and with much greater effect. In bio terms, the offense has a great advantage at this time simply because it can create and release, and then its job is over. The defense has to wait for the release, find out what's causing the disease, find a cure and distribute it. The cold equations run against them succeeding in time.
Starglider wrote: Well, maybe. NASA's performance at driving commercial interest has been pretty abysmal, and they've never had a 'get asteroid mining started' agenda. This is a complex debate that I'm only moderately qualified for, but at this point it's of largely academic interest - unless you think government-driven space exploration is going to do anything useful in time.

I don't; the best role for the government is to seed-corn things and then let the private sector do it. And no, I wasn't involved in the Space Shuttle fiasco.
Starglider wrote: That's the extreme window of vulnerability. Current medical science cannot provide a generally superior replacement for the human immune system. We can often create toxins tailored to destroy or render inert specific pathogens after much lab work, we can kick-start the immune system into action with immunisations (again after a lot of analysis and preparation) and we can mitigate symptoms to some degree but that's it.

To put numbers on things, inhaled anthrax is 99 percent fatal if left untreated (figure from the penholder on my desk which was given to me by AMRIID). The present treatment regime is very effective - BUT it's known. So, all the bad guys have to do is to breed a strain of anthrax that's immune to that treatment and we're at 99 percent lethality again. Add one crop-dusting aircraft, one major city and...
Starglider wrote: We just don't have the tools to build them yet. I just hope the very high regulatory burden on medical science doesn't push these fairly inevitable advances too far into the future (i.e. past the extinction point they might have prevented).

I agree; the problem is that I'm very pessimistic about developing those tools. And again, remember that all these tools can be used by the other side as well. They can use them to engineer a disease that hits us in ways we couldn't dream possible.
Starglider wrote: We may actually see the end of biological weapons as a serious threat, simply because technology will eventually trump biology (ultimately the same reason why various forms of AI are so dangerous). However the threat from those advanced technologies themselves has no such limitations.

I hope you're right but I don't think we have the time. Bad guys, both state and non-state players, are fooling around with bio now. We're rolling the dice all the time; one day it's going to go against us.
Starglider wrote: Plus it's leaky as a sieve for anything but the final stage military applications - other countries and groups are going to get the basic science, often they can buy materials and tools from the same suppliers.

You'd be surprised at what these 'ere United States can keep secret. We can do it (New York Times notwithstanding); the basic doctrine is to release so much information that the holes (that define the secret stuff) are blurred. Then we work in the blur.
Starglider wrote: The US could certainly destroy human civilisation with a little preparation, but it can't wield that power as a means of domination or deterrence, and is not likely to be invulnerable to counterattack any time soon.

I agree, which is a pity. We could have been.
Starglider wrote: But although the chances of success may be small, they're worth fighting for; it hardly makes sense to sit down and give up. I've personally structured my life to try and make the best contribution I can to the development of one of the 'magic' technological fixes, simply because it's the most useful thing I can do.

And my working life has been spent in the strategic destruction/strategic defense business under various guises. We're still trying; nobody's given up, but we're realists. We can hear the clock ticking. In a way, we're all praying that somebody comes up with the breakthrough that offers us a fighting chance, the universal cure, something like that.
Starglider wrote: I'm curious, have you had many opportunities in your career to make the case for space colonisation to relevant decision makers, the way you have your characters make it?

The idea's been put up often by people a lot more important than me (I'm quite low down the food chain in such things). But yes, the idea has been pushed hard.
Starglider wrote: Do you have any novel ideas for making that palatable to the electorate?

The novels...
Starglider wrote: However it isn't a pressing risk, and having at least one self-sufficient outpost survive massive climate change on the earth's surface is probably easier than surviving without support in earth orbit (though the growth potential is more limited).

I wouldn't say that; we know how to do most of the in-space stuff. Like the examples you gave, we have the knowledge, not the tools. The current modelling of an extinction event suggests it's just that: no matter what we do, no survival possible.
Starglider wrote: But the tools for diagnosis, pathogen analysis, drug design, flexible manufacturing and treatment are all improving as well as just expanding the range of off-the-shelf treatments available.

Again, the same things apply to the offense, and they can use them better than the defense. What we need is something that will give us an end run around the offense, a defense that's always in place and that they'll run head-on into no matter what they do. What that might be I do not know.
Starglider wrote: Given that that hasn't happened in the last three billion years despite abundant opportunities for it to evolve, it seems pretty unlikely without very good genetic engineering.

Just giving it as an example of something that could come from way out of left field and finish us. The one that'll do it is something even more unimaginable.
I'm just saying that there is some hope, and ultimately rapid but carefully focused (of course these are opposed) technological progress is the only thing that can save us.
I agree. And I wish you luck. But, looking at the situation now, it's grim, very grim indeed.
MKSheppard wrote: Question, Stuart; has there been any discussion or provocation within the strategic community over what we SHOULD do in case we ever see a situation like the al-Hammar situation in "High Frontier", where we see a 65% provable case of BW being tested?

Oh yes, we've discussed it, evaluated it, analysed it. The problem is, 65 percent is nowhere near good enough. The standard of proof demanded would be that of a legal court, beyond reasonable doubt - and there would always be people out there manufacturing any doubt and claiming it was "reasonable". The reality is that we would have to have an overt bio-attack on an American city before people would do anything - and even then there would still be a vociferous group screaming that it was all a government plot.
MKSheppard wrote: At some point, there has to be a pilot deployment of the weapon outside of lab conditions, so that you can be sure that it works in the real world.

There has been - in Laos, using trichothecene. There was a concerted wail of denial from people who would say anything and do anything rather than believe a biological weapon had been used. It was classic denial; any excuse will do.
Stuart wrote: The problem with all the technologies you mention is that they can be applied to the biowarfare effort as well - and with much greater effect.

They can, and there are concepts for hybridising nanotechnology ('wet' and 'dry') and engineered biological pathogens that would be near-untreatable with conventional medicine. This even made it into a popsci book - it was one of the more plausible things in Michael Crichton's 'Prey' novel. These will become practical earlier than the really advanced applications that start to cut down and eventually eliminate the biothreat. So yes, in the short term nanotechnology makes things worse.
Stuart wrote: The defense has to wait for the release, find out what's causing the disease, find a cure and distribute it.

Yes. We have to cut the response time down. Massive numbers of widely distributed sensors and more automation of the response process is the only way to do it. AI can help with the 'recognise the threat' and 'design the cure' parts; automating the 'manufacture the cure' part and reducing distribution delay will probably need small-scale units with the ability to synthesise arbitrary complex compounds quickly and in useful amounts, i.e. something the size of a semi-trailer (at most) that can do what currently takes a chemical plant.
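To see why every stage of that pipeline matters, note that the defence's total delay is roughly the sum of the sequential stages: detect, identify, design a cure, manufacture, distribute. A back-of-envelope sketch of the argument, with every stage duration an invented placeholder rather than a real estimate:

```python
# Toy model of the sequential defence pipeline discussed above.
# All stage durations (in days) are invented for illustration only.

def response_days(stages):
    """Total response time for a strictly sequential defence pipeline."""
    return sum(stages.values())

# Status quo: lab culture work, centralised pharma plant, national logistics.
baseline = {"detect": 5, "identify": 10, "design_cure": 60,
            "manufacture": 90, "distribute": 21}

# Hypothetical upgrades: dense sensor nets, AI-assisted cure design,
# local semi-trailer-scale synthesis units near the point of need.
automated = {"detect": 1, "identify": 2, "design_cure": 14,
             "manufacture": 7, "distribute": 3}

print(response_days(baseline))   # 186 days end to end
print(response_days(automated))  # 27 days end to end
```

The point of the sketch is structural: because the stages are sequential, shortening only one of them leaves the total dominated by the rest, which is why the post argues for automating the whole chain rather than any single link.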
Stuart wrote: I agree; the problem is that I'm very pessimistic about developing those tools. And again, remember that all these tools can be used by the other side as well. They can use them to engineer a disease that hits us in ways we couldn't dream possible.

Well yes, but ultimately biochemistry is a bottleneck that will cause conventional bioweapons to drop out of the arms race. Unfortunately there are even worse things, starting with hybridised bio-nano weapons, that will probably take over right where they left off. The only good thing about that is that the level of technical expertise required to build them is much higher than for cultured (non-GE) bioweapons.
I assume you mean in the sense of having a tiny colony of people immune to attack by anything other than a massive missile strike; AFAIK there's nothing the US could have done to completely avoid the bio threat, even the TBOverse USA with its excellent strategic position doesn't manage it.I agree, which is a pity. We could have been.The US could certainly destroy human civilisation with a little preparation, but it can't wield that power as a means of domination or deterrence, and is not likely to be invulnerable to counterattack any time soon.
I'm personally convinced (as is undoubtedly obvious by now) that general, de-novo AI is the single most powerful technological advance we have a realistic chance of making in the time remaining. It has characteristics (and risks, and opportunities) completely unlike any other technology. Employed correctly and with a bit of luck, it literally could save the world. The maddening thing about it is that it isn't a tools or infrastructure issue. Existing hardware within reach of the average moderately funded research group more than suffices for the job, and existing software tools are probably adequate. It's purely a design challenge - on the one hand there's a very real chance that someone, somewhere will crack it tomorrow, but there's also a very real chance (though IMHO low) that we won't crack it even with another century to work on it.In a way, we're all praying that somebody comes up with the breakthrough that offers us a fighting chance, the universal cure, something like that.
Regrettably the vast majority of the electorate is not interested in having a tiny community survive in space if it means they personally die. Even if all of this was generally and accurately appreciated, I would imagine most people would say 'I want my taxes to be spent on developing defences to protect me and my family, not saving some lucky few'.The novels...Do you have any novel ideas for making that palatable to the electorate?
I'd appreciate it if you'd explain exactly why a sealed biosphere built in Antarctica, or the Sahara, or in a huge bunker would have less chance of surviving an asteroid strike than a space colony. Unless it's really, really unlucky and the asteroid hits close enough to directly threaten it with ejecta, the only obvious drawback is lack of continuous solar power. Airborne bioweapons can pose a risk for a surface installation in that any leak while the agent persists outside can potentially infect and kill the entire population, whereas in space a leak can be quickly plugged and at worst depressurise a single section - but this isn't likely to be an issue in Antarctica (say). Indeed for the bioweapon threat alone a sufficiently isolated island probably suffices for maintaining a community of human survivors, as long as the agent isn't indefinitely persistent on the mainland via animal carriers.I wouldn't say that; we know how to do most of the in-space stuff, like the examples you gave - we have the knowledge, not the tools. The current modelling of an extinction event suggests it's just that. No matter what we do, no survival is possible.
One of the posters on this board calls me by the epithet "Yellow Rain Man" and mocks me by claiming it was bee puke, which begs the question: wouldn't the Laotian mountain tribesmen, you know, have heard about this thing before in living memory if it was a natural event?Stuart wrote:There has been - in Laos using trichothecenes. There was a concerted wail of denial from people who would say anything and do anything rather than believe a biological weapon had been used. It was classic denial; any excuse would do.
I can guess who that is. However.MKSheppard wrote:One of the posters on this board calls me by the epithet "Yellow Rain Man" and mocks me by claiming it was bee puke, which begs the question: wouldn't the Laotian mountain tribesmen, you know, have heard about this thing before in living memory if it was a natural event?
But AI also makes easier the genetic engineering needed to create completely new diseases - one hypothesized extermination-type disease as an example: a latent infection that produces no symptoms in its victims, but if a woman infected with the agent gets pregnant, the foetus becomes a fast-growing tumor that kills her in days. Not just theory - that one has been seriously suggested as a possible threat.Starglider wrote:Strictly yes having one will make bioweapon design easier, but only in the sense that having general AI on your side makes any technological endeavour easier. Hostile AIs might well deliberately create bioweapons, but if you've got hostile AIs with those capabilities you've already lost anyway.
There's a lot of money being spent (at AMRIID and other places) on just that. As I said, nobody's given up, but also nobody is hopeful that we can find enough answers. Another example: there is a very good laser system that detects aerosols and alerts people in the area. It's already in limited military service, and some experiments were tried out to see if it had civilian use. The problem was a horrendous false alarm rate. Now, work is going on attempting to apply AI to the job of reducing that false alarm rate. We're about 90 percent there. The trouble is the remaining 10 percent is critical.Yes. We have to cut the response time down. Massive numbers of widely distributed sensors and more automation of the response process is the only way to do it. AI can help with the 'recognise the threat' and 'design the cure' parts; automating the 'manufacture the cure' part and reducing distribution delay will probably need small-scale units with the ability to synthesise arbitrary complex compounds quickly and in useful amounts, i.e. something the size of a semi-trailer (at most) that can do what currently takes a chemical plant.
From a technology point of view yes, but the scary thing about bioweapons is how easily they can be made and how hard it is to control them. In many ways, the most surprising thing is that they haven't already been used. Again, another example: it's like mining U.S. ports. One bottom mine in the main shipping lane and the port comes to a complete standstill for days or even weeks. Amazing nobody has tried it.Well yes, but ultimately biochemistry is a bottleneck that will cause conventional bioweapons to drop out of the arms race. Unfortunately there are even worse things, starting with hybridised bio-nano weapons, that will probably take over right where they left off. The only good thing about that is that the level of technical expertise required to build them is much higher than for cultured (non-GE) bioweapons.
I was actually thinking more of strategic defense in general. Bioweapons, for all their threat profile, are just one of the threats we face. If we'd kept up the strategic defense program momentum of the 1959-61 era, we would be whole worlds better off.I assume you mean in the sense of having a tiny colony of people immune to attack by anything other than a massive missile strike; AFAIK there's nothing the US could have done to completely avoid the bio threat, even the TBOverse USA with its excellent strategic position doesn't manage it.
As I said, I hope you guys pull it off. There's a lot of money being thrown quietly at the project area (not just by us) but really we need that breakthrough. Once we get it, it'll solve a lot of problems.The maddening thing about it is that it isn't a tools or infrastructure issue. Existing hardware within reach of the average moderately funded research group more than suffices for the job, and existing software tools are probably adequate. It's purely a design challenge - on the one hand there's a very real chance that someone, somewhere will crack it tomorrow, but there's also a very real chance (though IMHO low) that we won't crack it even with another century to work on it.
Agreed, no reservations. I'd add missile and air defense to that; it's good to be able to take down the delivery systems.So there's no particular call to action here, the vast majority of people should continue doing what they can to support other space colonisation, biodefence efforts, nanotech progress, brain-computer interfacing progress etc.
On the other hand, people always assume they'll be the lucky ones who get chosen to go up (I'm fortunate since I know I will). It depends; it could be pulled off.Regrettably the vast majority of the electorate is not interested in having a tiny community survive in space if it means they personally die. Even if all of this was generally and accurately appreciated, I would imagine most people would say 'I want my taxes to be spent on developing defences to protect me and my family, not saving some lucky few'.
The earthquakes etc that result from the asteroid impact will do it. At the time of the Permian-Triassic (P-Tr) extinction event, the whole biosphere was poisoned for tens of thousands of years. Oceans became anoxic, and that caused hydrogen-sulphide-producing organisms to proliferate and poison the rest of the biosphere. A sealed biosphere could well survive an extinction event if the duration was limited to a few years, but for millennia? Don't think so. It's like nuclear initiations: the only real defense is to be somewhere else.I'd appreciate it if you'd explain exactly why a sealed biosphere built in Antarctica, or the Sahara, or in a huge bunker would have less chance of surviving an asteroid strike than a space colony.
Unfortunately you don't actually need general AI for this. Biochemistry simulation and semi-intelligent search will eventually get to the point (with the help of ever-increasing brute force) of being able to reliably predict protein folding on a laptop, such that you can design a protein shape and have the computer spit out an RNA sequence likely to produce it, or input an existing enzyme and have the computer create appropriate inhibitors for it. So while my statement holds for AGI, more conventional computing does benefit the offence as well as (but probably not more than) the defence.Stuart wrote:But, AI also makes easier the genetic engineering needed to create completely new diseases - one hypthesized extermination-type disease as an example.
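To make the 'shape in, sequence out' direction concrete, here is a deliberately toy sketch of the last (and by far easiest) step: reverse-translating a designed peptide into one candidate mRNA via a partial standard codon table. The hard part - predicting which sequence actually folds into the desired shape - is exactly what the simulation and search machinery described above is for. The `CODON` table and `reverse_translate` helper are illustrative only, not any real tool's API.

```python
# Toy reverse-translation: given a designed peptide, emit one possible
# mRNA that codes for it. Only a handful of the 64 codons are listed;
# a real tool would also optimise codon usage for the target organism.
CODON = {
    'M': 'AUG',  # methionine / start
    'K': 'AAA',  # lysine
    'F': 'UUU',  # phenylalanine
    'L': 'CUG',  # leucine
    '*': 'UAA',  # stop
}

def reverse_translate(peptide: str) -> str:
    """Map each residue to a codon and append a stop codon."""
    return ''.join(CODON[aa] for aa in peptide + '*')

print(reverse_translate('MKF'))  # AUGAAAUUUUAA
```

Since the genetic code is degenerate (most amino acids have several codons), the real design space is much larger than this one-to-one mapping suggests, which is part of why brute-force search plus good scoring functions is so powerful here.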
Excellent narrow AI application. Unfortunately the machine learning tools we've got at present are sufficiently crude and reliant on humans to supply context and training feedback that this sort of application takes a lot of time and money to develop, and the effort isn't really transferable to the next project. There's a lot of scope for improving the underlying tools, such that churning out narrow AI applications becomes quicker and easier, without actually going as far as AGI. This kind of automated software engineering is an area my own company has been focusing on until recently, though we're currently on a bit of a web services detour.Now, work is going on attempting to apply AI to the job of reducing that false alarm rate. We're about 90 percent there. The trouble is the remaining 10 percent is critical.
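The false-alarm problem is at root a threshold-setting problem: however good the detector's raw score, you trade missed detections against false positives. A minimal sketch with entirely synthetic, made-up scores (no real sensor data):

```python
# Synthetic detector scores in [0, 1]: higher = more 'agent-like'.
# All numbers are invented purely to illustrate the trade-off.
background = [0.10, 0.20, 0.25, 0.30, 0.35, 0.40, 0.50, 0.60, 0.70, 0.90]
agent = [0.55, 0.65, 0.70, 0.80, 0.85, 0.90, 0.92, 0.95, 0.97, 0.99]

def rates(threshold):
    """False-positive and true-positive rates at a given alarm threshold."""
    fpr = sum(s >= threshold for s in background) / len(background)
    tpr = sum(s >= threshold for s in agent) / len(agent)
    return fpr, tpr

# A low threshold catches every release but alarms constantly;
# a high threshold silences false alarms at the cost of missed releases.
print(rates(0.5))   # (0.4, 1.0)
print(rates(0.93))  # (0.0, 0.3)
```

Better ML doesn't remove this trade-off, it just pushes the two score distributions apart so that some threshold gives acceptable rates on both sides - which is why the last 10 percent is the hard part.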
Fortunately very few of the people qualified to do a huge amount of damage to society actually want to do so. But then there are dangerous nutjobs like this one (the AGI equivalent would be someone like Hugo de Garis - fortunately AGI is much harder than bioweapons). Did I mention how much I loathe and despise the Voluntary Human Extinction Movement and their eco-terrorist associates?In many ways, the most surprising thing is that they haven't already been used. Again, another example: it's like mining U.S. ports. One bottom mine in the main shipping lane and the port comes to a complete standstill for days or even weeks. Amazing nobody has tried it.
Certainly true.If we'd kept the strategic defense program momentum up in the 1959-61 era up, we would be whole worlds better off.
Very little of the big money is targeting general AI these days; it got so thoroughly discredited in the late 80s (and Cycorp have been doing their best to keep discrediting it right through the 90s and 2000s, as well as trying to stop other projects getting funded) that the military spending is all going on more practical short-term projects. Which frankly is fine, I don't think runaway self-enhancement risks are well enough understood and appreciated for it to be a good thing for gobs of government funding to go on AGI. Neurophys-inspired brain simulation is getting a fair amount of cash though, it has plenty of near-term medical spinoffs.As I said, I hope you guys pull it off. There's a lot of money being thrown quietly at the project area
Missile and air defence are sensible things for the government to throw money at, but they don't really benefit from advocacy and (relatively) small-scale private research the way the others do. The average person can buy stock in a space launch startup, donate to the Foresight Institute, and if they're a reasonably young scientist or engineer probably steer their career towards working on space/nano/bio/AI. AFAIK the best you can do for strategic defence is try to get hired by one of the major military contractors and hope you get assigned to a relevant project.Agreed, no reservations. I'd add missile and air defense to that; it's good to be able to take down the delivery systems.So there's no particular call to action here, the vast majority of people should continue doing what they can to support other space colonisation, biodefence efforts, nanotech progress, brain-computer interfacing progress etc.
Apollo has already reserved you a seat eh?On the other hand, people always assume they'll be the lucky ones who get chosen to go up (I'm fortunate since I know I will ).
AFAIK a 10-100km asteroid will produce earthquakes in the 10-11 Richter magnitude range. That's bad, but it won't damage modern buildings on the other side of the continent, much less a different continent. Since you did of course build two colonies on opposite sides of the earth for redundancy, at least one should be fine. Just make sure they're not anywhere near a coastline.The earthquakes etc that result from the asteroid impact will do it.I'd appreciate it if you'd explain exactly why a sealed biosphere built in Antarctica, or the Sahara, or in a huge bunker would have less chance of surviving an asteroid strike than a space colony.
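For anyone who wants to sanity-check that magnitude figure, a rough back-of-envelope: take the impactor's kinetic energy, assume only a small fraction couples into seismic waves, and invert the Gutenberg-Richter energy-magnitude relation (log10 E = 1.5M + 4.8, E in joules). The density, velocity and ~1e-4 seismic efficiency below are assumed round numbers, not measured values.

```python
import math

def impact_magnitude(diameter_m, velocity_ms, density=3000.0, seismic_eff=1e-4):
    """Equivalent earthquake magnitude of an asteroid impact (very rough).

    Assumes a spherical rocky impactor and the Gutenberg-Richter
    relation log10(E_seismic) = 1.5 * M + 4.8, with E in joules.
    """
    radius = diameter_m / 2
    mass = (4 / 3) * math.pi * radius**3 * density  # kg
    kinetic = 0.5 * mass * velocity_ms**2           # joules
    return (math.log10(kinetic * seismic_eff) - 4.8) / 1.5

# A 10 km rocky impactor at 20 km/s comes out around magnitude 10,
# consistent with the 10-11 range quoted above.
print(round(impact_magnitude(10_000, 20_000), 1))  # 9.8
```

The answer is dominated by the seismic-efficiency guess; with all the kinetic energy counted as seismic the figure climbs well past 12, so treat the output as an order-of-magnitude check only.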
Even with that amount of damage, the prospects for terraforming earth back into habitability are better than the prospects for making mars or venus earthlike. You're comparing surviving in sealed biospheres on a planetary surface with surviving in sealed biospheres in a vacuum. On earth there's still plenty of free oxygen and hydrogen available in the atmosphere, just requiring energy to extract, there's gravity, and you can mine for more materials. In space you've got to shield against radiation and if you want more materials of any kind, you have to go find an asteroid, tow it home and mine it in zero gravity. Again, space is good for long term growth potential but I'm not clear it's superior for survivability, particularly if you're on a tight construction timetable where launch capacity is a killer for building a usefully large habitat.At the time of the Permian-Triassic (P-Tr) extinction event, the whole biosphere was poisoned for tens of thousands of years. Oceans became anoxic, that caused hydrogen-sulphide-producing organisms to proliferate and they poisoned the rest of the biosphere. A sealed biosphere could well survive an extinction event if the duration was limited to a few years but for millennia?
I'd gained that impression somehow. My personal idea is that we should put all those people on a bomb-disposal course where they can learn by trial and error (and keep them there until they've run through all the possible errors or until all the candidates are expended).Starglider wrote: Fortunately very few of the people qualified to do a huge amount of damage to society actually want to do so. But then there are dangerous nutjobs like this one (the AGI equivalent would be someone like Hugo de Garis - fortunately AGI is much harder than bioweapons). Did I mention how much I loathe and despise the Voluntary Human Extinction Movement and their eco-terrorist associates?
Not quite; a red-headed lady I know has "arranged" for one of the other seat-holders to, uhhh, get ill just before take-off.Apollo has already reserved you a seat eh?
The initial impact isn't the quake I meant. The problem is the contra-coup shock on the other side of the world from the point of impact. This causes a sustained flood-plain earthquake/eruption; the phenomenon is a combination of both: the quake ruptures the crust down to magma level, then the magma upwells through the cracks and floods out. The Siberian Traps and the Deccan Traps are good examples that released immense amounts of smoke, dust and toxic gases into the atmosphere. The Siberian Traps were probably the largest earthquake/eruption in earth's history and they lasted for a million years.AFAIK a 10-100km asteroid will produce earthquakes in the 10-11 Richter magnitude range. That's bad, but it won't damage modern buildings on the other side of the continent, much less a different continent. Since you did of course build two colonies on opposite sides of the earth for redundancy, at least one should be fine. Just make sure they're not anywhere near a coastline.
I agree, but the problem is that there's nobody left alive after the flood-plain quake to do the terraforming. I agree about the launch weight problem; that's why I say we've missed the bus. All we can do now is try to get up there and hope we have time.Even with that amount of damage, the prospects for terraforming earth back into habitability are better than the prospects for making mars or venus earthlike. You're comparing surviving in sealed biospheres on a planetary surface with surviving in sealed biospheres in a vacuum. On earth there's still plenty of free oxygen and hydrogen available in the atmosphere, just requiring energy to extract, there's gravity, and you can mine for more materials. In space you've got to shield against radiation and if you want more materials of any kind, you have to go find an asteroid, tow it home and mine it in zero gravity. Again, space is good for long term growth potential but I'm not clear it's superior for survivability, particularly if you're on a tight construction timetable where launch capacity is a killer for building a usefully large habitat.
Interesting thought. I must admit I've eliminated cryogenic stasis and the space elevator from TBO because I don't know enough about the technologies involved to make a realistic job of describing them.Actually this is one place human hibernation technology, or better yet the ability to revive people from cryogenic stasis, would be really useful. If you're trying to save a large set of people with key skills and genetic diversity, you could launch or store them all in stasis along with a much smaller set of initial construction engineers, and revive them when enough additional habitat space has been built to support them.
If launch weight is absolutely critical, time is limited and the stakes are the survival of the human race, the next question is 'is surface-to-orbit Orion actually workable'. AFAIK the answer is 'yes, the engineering is sound, fallout will be less than a cold war atmospheric nuclear test, payload mass is in the thousands of tons per launch'. If the answer is indeed yes, the next question would appear to be 'how do we finally overrule the anti-anything-nuclear idiots and get it built'.Stuart wrote:I agree about the launch weight problem; that's why I say we've missed the bus. All we can do now is try to get up there and hope we have time.
Ahhh. That makes much more sense than a book going to print with errors.Stuart wrote:There was an inquiry as to what happened with the spelling and punctuation - it turned out the last batch of author's corrections weren't made due to an administrative error. There is a second edition (hardback) coming out with that stuff fixed.Surlethe wrote:The downside was the utterly atrocious grammar. It made the book nigh-unreadable. I wholeheartedly suggest another edition, rewritten at least twice -- once for grammar, spelling, punctuation, and once just in case you missed anything.
To be honest, when I wrote "grammar", I was thinking "punctuation". My biggest nit to pick was with the commas; the actual sentence structures weren't annoying as far as I could tell. It just seemed like they would run out of breath thinking because there were no commas ...I'd take serious issue with you on the grammar. The story is largely told via the eyes and internal thoughts of the various characters who are at the center of each section - so the text follows their thoughts. As a result, it's colloquial rather than classic "perfect" grammar; it's the way people speak and think, with minimum modifications for clarity. Redo those sections in classical grammar and it reads horribly wrong, stilted and false.
Everybody is having too much fun with their doom, gloom and utter disaster prognostications. I don't want to spoil it for them.Oh, and Stuart? You should drop into some of the Peak Oil threads down in SLAM or N&P sometime. The debate would be fun.
I'm a little more optimistic, but then your opinion is evidently more informed than mine. The worst case scenario I can envision looks something like Global Mean Temperature; constructing that took into account something like 99.9% death rate and still I had to make sure there was no opportunity for an agricultural society to rise again. Even if the human race is sufficiently wiped out -- and that's a big "if" -- it also has to be impossible for people to scrape together agriculture in various pockets.Anyway, neither Peak Oil nor Global Warming are problems, humanity is going to destroy itself long before they become significant. I don't give us much past the middle of this century. We had our chance to survive and blew it.
...I must confess I found the blatant anti-Democrat sentiment in some of the TBO stories, in particular the admiral using the name as an insult in front of the troops in Crusade, disturbing. It wouldn't be so bad if the US actually had more than two parties, but it doesn't, so high-ranking members of the US military saying 'only the Republicans can govern the country, the Democrats are scum' while on duty is essentially 'damn, having to pretend to care about this democracy stuff is annoying, wouldn't we all be better off in a dictatorship eh troops?'.jegs2 wrote:We've seen a dangerous drift within the military over the past few decades, solidifying support of one particular party. Thanks to the war in Iraq and the resulting damage to the US Army, that has eroded.
AFAIK, it's only the Senior Chief who does that, and he has a good reason to do so, as explained in this story.Starglider wrote:...I must confess I found the blatant anti-Democrat sentiment in some of the TBO stories, in particular the admiral using the name as an insult in front of the troops in Crusade, disturbing. It wouldn't be so bad if the US actually had more than two parties, but it doesn't, so high-ranking members of the US military saying 'only the Republicans can govern the country, the Democrats are scum' while on duty is essentially 'damn, having to pretend to care about this democracy stuff is annoying, wouldn't we all be better off in a dictatorship eh troops?'.
Not an Admiral; it's the eponymous Senior Chief. There is a built-in joke there: the "Democrats" he refers to are actually the Democratic-Republicans of Jefferson's era, who are the lineal ancestors of today's Republican Party! There are four stories that have a large U.S. base.Starglider wrote:I must confess I found the blatant anti-Democrat sentiment in some of the TBO stories, in particular the admiral using the name as an insult in front of the troops in Crusade, disturbing.
Not so, a Senior Chief is hardly high-ranking (although the Chiefs are a critical part of running the fleet). The only other case that might be construed as the military being anti-Democrat is the private command meeting during Crusade when the order has been received not to attempt a rescue of the shot-down aircrew. Then, the motivation is that this runs against everything they've been trained to believe in so they must go in despite orders. Then, note, LBJ goes to work with a sledgehammer and takes apart those who issued that order.It wouldn't be so bad if the US actually had more than two parties, but it doesn't, so high-ranking members of the US military saying 'only the Republicans can govern the country, the Democrats are scum' while on duty is essentially 'damn, having to pretend to care about this democracy stuff is annoying, wouldn't we all be better off in a dictatorship eh troops?'.