Stuart wrote: Starglider wrote: That's one answer to the Fermi paradox but there's another answer
The basic message is still the same though; the window of opportunity for getting off this planet is very small;
Well, kinda. Getting off the planet (in the 'create some self-sustaining colonies' sense) almost completely removes the existential risk of various natural disasters. It reduces but does not eliminate biowarfare risks, in that depending on the setup of your interplanetary civilisation it may be possible to introduce pathogens into enough critical places before detection that effective quarantine is impossible. Similarly it reduces deliberate warfare risks in that smashing all the habitable biospheres is harder, but it does not eliminate them. Interstellar is much better, but of course much, much harder, at least if the only intelligence you have to work with is bog standard humans and you don't have a good way to freeze them for centuries-to-millennia.
Unfortunately getting off the planet is only of limited use with regard to hostile transhuman intelligences, in that if they want to come after you, they probably can, and for a rational intelligence there's a strong motive to do this if surviving humans are likely to interfere with its goals later. This debate (mostly Lifeboat Foundation supporters vs Singularity people) has raged on several transhumanist forums, and the general answer is 'you may escape if you're fast and very lucky, but don't count on it'. Fortunately the risk mitigation strategies for these aren't really competing with each other for funding or effort at present (only a little for some nonprofits that probably aren't terribly relevant anyway).
I'd guess around a century or so measured from around 1960. That means that the clock runs out for us at around 2060 at the latest. I won't live to see it but a lot of the people here will.
If that's a .50 probability, that's pretty good: we've got a very good chance of developing lethal goodies like brain-computer interfacing, full-brain scanning and simulation, general nanoassembly, mobile microrobotics and of course general AI within that timeframe. You're probably the most qualified person I know with regard to estimating biowarfare existential risk, but the existential risks associated with these early-stage technologies require their own massive study to appreciate (I'm only an expert in the AGI one - and not the best expert I know on it, since I've never spent a lot of time playing professional futurist). Of course you might see a space elevator in that timeframe, which would probably suffice for a space lifeboat, and AFAIK there's still an outside chance of doing it with privatised space and conventional launches.
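Treating that guess as a constant annual hazard gives a feel for what a .50-over-a-century figure implies for shorter horizons. A minimal back-of-envelope sketch in Python: the 100-year window and the 0.50 probability are just the figures above, and the constant-hazard model itself is purely an illustrative assumption:

[code]
# Back-of-envelope: if there's an assumed 0.50 probability of an existential
# catastrophe over a ~100-year window (1960-2060, the guess above), what
# constant annual hazard does that imply, and what does it mean for shorter
# horizons? All figures are illustrative, not estimates.
import math

window_years = 100      # 1960 to 2060
p_window = 0.50         # assumed cumulative probability over that window

# Constant-hazard model: P(no catastrophe in t years) = exp(-r * t)
annual_hazard = -math.log(1.0 - p_window) / window_years   # ~0.7% per year

for horizon in (10, 25, 50, 100):
    p = 1.0 - math.exp(-annual_hazard * horizon)
    print(f"{horizon:>3} years: implied catastrophe probability ~ {p:.2f}")
# Prints roughly 0.07, 0.16, 0.29 and 0.50 respectively.
[/code]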
If we'd carried on with space exploration in the 1970s, it would have created a driver for the development of inexpensive surface-to-space travel.
Well, maybe. NASA's performance at driving commercial interest has been pretty abysmal, and they've never had a 'get asteroid mining started' agenda. This is a complex debate that I'm only moderately qualified for, but at this point it's of largely academic interest - unless you think government-driven space exploration is going to do anything useful in time.
That said I hope you weren't involved in the Air Force FUBARing the STS design.
Biological weapons are very simple to obtain... monitoring is virtually impossible.
That's the extreme window of vulnerability. Current medical science cannot provide a generally superior replacement for the human immune system. We can often create toxins tailored to destroy or render inert specific pathogens after much lab work, we can kick start the immune system into action with immunisations (again after a lot of analysis and preparation) and we can mitigate symptoms to some degree but that's it.
However, in principle we can do much better. The tools to actively monitor every single human's biology at a cellular level, to process, transmit and pattern-recognise that data, to greatly automate the process of designing and deploying countermeasures, and ultimately to directly prompt and reconfigure the immune system in place have already been designed to a surprising level of detail. We just don't have the tools to build them yet. I just hope the very high regulatory burden on medical science doesn't push these fairly inevitable advances too far into the future (i.e. past the extinction point they might have prevented).
I don't want to turn this thread into a debate about the technical nitty-gritty of biosimulation, embedded biosensors and medical nanotech any more than I do regarding space colonisation feasibility - so I won't take a position on whether this will come in time to do any good, just that bioweapons are not an impossibly deadly menace that can never be (reliably) avoided by anything other than total isolation.
Unfortunately the more advanced applications of microrobotics and nanotechnology favour the attacker even more strongly and have a wider, possibly indefinite window (the more plausible proposed defence schemes are extremely complex and difficult to implement).
They require technology levels that biologicals don't.
They aren't a threat yet. But if we survive the next 30-50 years, we may actually see the end of biological weapons as a serious threat, simply because technology will eventually trump biology (ultimately the same reason why various forms of AI are so dangerous). However the threat from those advanced technologies themselves has no such limitations.
This isn't going to happen with current or near-future technology, but good enough cognitive engineering technology (i.e. fine grained brain manipulation, enough that one trip to the neurosurgeon will make you a devoted and almost unshakeable supporter of cause X for life) will make it quite plausible.
In the time frame I think we have available, I don't believe that's a probable development.
If your time frame is 50 years I think it's highly probable we'll have the understanding to do it. Brain mapping has been progressing quickly recently. The question is whether the tools will exist to make the required changes, which I don't have a reliable method of estimating. Something like freezing a brain solid and laser-ablation scanning it to a sufficient resolution for an accurate simulation is a relatively straightforward process you can turn into engineering specs - when we get scanner resolution X and computing power Y, which have been following these curves, then it becomes feasible. Fiddling about with dendrite webs in a currently unknown way, without killing the subject or sending them mad, isn't a procedure we can yet analyse in enough detail to get a good set of preconditions and hence a probability. But it would not surprise me, because the prospect is so attractive to many governments that it could easily swing a lot of research funding once it passes a certain minimal level of plausibility.
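That 'read it off the curves' reasoning can be made concrete: assume each required capability keeps doubling on its historical trend and compute when both cross their thresholds. A minimal sketch in Python - every figure in it (the capability gaps and the doubling times) is a made-up placeholder for illustration, not an actual engineering estimate:

[code]
# Illustrative 'when do the curves cross the threshold' calculation.
# All gaps and doubling times below are hypothetical placeholders.
import math

def years_until(gap_factor, doubling_time_years):
    """Years for an exponentially improving capability to close a given gap."""
    if gap_factor <= 1.0:
        return 0.0
    return doubling_time_years * math.log2(gap_factor)

# Hypothetical: scanner throughput/resolution is ~1000x short of what a
# whole-brain scan needs and has been doubling every ~2 years; affordable
# compute is ~100000x short of what the simulation needs, doubling every ~1.5.
scan_years    = years_until(gap_factor=1e3, doubling_time_years=2.0)
compute_years = years_until(gap_factor=1e5, doubling_time_years=1.5)

# Feasibility needs both, so the later date governs.
print(f"Scanning threshold reached in ~{scan_years:.0f} years")
print(f"Compute threshold reached in ~{compute_years:.0f} years")
print(f"Scan-and-simulate feasible in ~{max(scan_years, compute_years):.0f} years (under these assumptions)")
[/code]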
Finally it assumes that a single party doesn't gain a massive technological advantage over everyone else
The United States is heading that way now; our technology lead over the rest of the world is lengthening steadily.
The US government isn't monolithic enough for me to count it as a 'single party' in that sense. Plus it's leaky as a sieve for anything but the final-stage military applications - other countries and groups are going to get the basic science, and often they can buy materials and tools from the same suppliers. The US could certainly destroy human civilisation with a little preparation, but it can't wield that power as a means of domination or deterrence, and is not likely to be invulnerable to counterattack any time soon.
However after working through enough of the possible outcomes for the creation of transhuman intelligences, I believe a technological advantage of that genuinely overwhelming degree (in that there is no way for other parties to resist domination or even do any damage to the first party) is actually possible, in that there is a very real chance of it being achieved in many scenarios for the development of such intelligences. This can be very good or very bad, depending on who gets it and how far it goes. I think it's essentially impossible for it to be achieved any other way, simply because there's no other way to do the equivalent of centuries of work by a Manhattan Project calibre team in total secrecy, within a decade or less.
Again time is the problem.
True. But although the chances of success may be small, they're worth fighting for; it hardly makes sense to sit down and give up. I've personally structured my life to try and make the best contribution I can to the development of one of the 'magic' technological fixes, simply because it's the most useful thing I can do.
As to developing countermeasures, lack of will is in there as well. Look at all the problems we've got trying to deploy an anti-missile system, and that's with a lunatic in North Korea tossing long-range missiles around and trying to develop a nuclear warhead. If he's developing that, do you want to bet the farm that he isn't developing biologicals? We can get an anti-missile system up fairly quickly, but the sort of elaborate defenses needed to thwart a biological attack? There's no sign of people even beginning to think about them.
While it's unfortunate that biowarfare risks don't have a higher profile, on the plus side there isn't that crazy MAD mentality that let the ABM critics successfully argue that being undefended against nuclear annihilation was a good thing. Essentially everyone agrees that defence against biowarfare is a good thing in principle - though there are plenty of irritating conspiracy nuts who refuse to accept that defensive biowarfare requires some installations and programs superficially quite similar to offensive biowarfare.
The later parts of the TBOverse (set from 1972 - 2050) are my scream of warning that we do need to develop mitigation strategies and develop them NOW.
I'm curious, have you had many opportunities in your career to make the case for space colonisation to relevant decision makers, the way you have your characters make it?
Have you summarised what you're actually advocating anywhere? Is it as simple as massive government spending on space colonisation? Do you have any novel ideas for making that palatable to the electorate?
The point is that near-extinction events from asteroid impacts are a fact; we don't know when one will happen next.
However it isn't a pressing risk, and having at least one self-sufficient outpost survive massive climate change on the earth's surface is probably easier than surviving without support in earth orbit (though the growth potential is more limited).
Mitigated and eventually eliminated with medical progress, which is doing great.
Only against diseases we know about,
But the tools for diagnosis, pathogen analysis, drug design, flexible manufacturing and treatment are all improving too, not just the range of off-the-shelf treatments available.
Needn't be one that directly affects us - how about an air-vectored disease that destroys chlorophyll in plants?
Given that this hasn't happened in the last three billion years despite abundant opportunities for it to evolve, it seems pretty unlikely without very good genetic engineering.
I'm not so sanguine.
Oh I'm not sanguine. I'm saying that you're probably right about the biowarfare risk (in fact, that 50 years is generous), and that in addition to this we have lots more upcoming existential risks to worry about, some of which are essentially impossible to run away from or contain. And that's not even getting started on the Peak Oil / Runaway Climate Change scenarios (which I don't personally rate as existential risks, but they're exacerbating factors for some of the really serious problems).
I'm just saying that there is some hope, and ultimately rapid but carefully focused (of course these are opposed) technological progress is the only thing that can save us.
For comparison, run back to 1975; we're much more advanced technologically than we were then, but none of those advances translates into hardening the bullseye any more. In fact, it's arguable the bullseye is softer now than it was then.
I'd agree that it's softer in the sense that the infrastructure supporting civilisation is more fragile. Convincingly fixing that problem is a challenge for the nanofactory people. However humans haven't changed since 1975 - we just travel around more (which doesn't help with biothreats either). In fact we've been stuck with the mark I mod I human for all of recorded history. Changing that situation changes the rules of the game, and since we're currently losing the game it's a very compelling option.
P.S.
mksheppard: you're a bit pie in the sky on your stuff
Starglider: Some people need to work on advanced concepts if they're ever going to become reality.
Starglider: It doesn't just magically become practical one day, it takes a lot of hard work.
Starglider: However, most people can and should stay focused on more immediate things.
mksheppard: I'm very dubious of said advanced nanoconcepts
Starglider: I was myself until I lost a lot of debates where I took the 'they won't work' position. The ones the real experts are proposing are definitely physically possible. Detailed engineering studies and simulations have been done.
Starglider: It's similar to fusion power circa 1970 really. We know it's physically possible, but there's a lot of hard research and engineering between here and it working.
Starglider: AI is a different kind of challenge, it isn't conventional engineering.
Starglider: I can't blame people for being highly skeptical of it.
Starglider: I wouldn't be surprised if Stuart's impressions of AI were formed when this was directly relevant
http://philip.greenspun.com/humor/ai.text