Fermi Musings
Moderator: NecronLord
- Alerik the Fortunate
- Jedi Knight
- Posts: 646
- Joined: 2006-07-22 09:25pm
- Location: Planet Facepalm, Home of the Dunning-Krugerites
Fermi Musings
The absence of observable civilizations in our galaxy so far is often taken to indicate either that intelligent life is exceedingly rare, or that most signaling societies have short lifespans, presumably destroying themselves through some sort of ecological collapse before attaining full Type I status. Lack of observable indications of Dyson swarms seems to rule out the presence of any nearby Type II civilizations. It seems to me, however, that there would be a considerable time gap between attaining Type I and Type II status, during which it seems extremely unlikely that powerful AI would not be developed by the civilization in question. Following Starglider's posts on the topic, it also seems highly probable that many of these AIs would not be of a fully sane design with stable and desirable goal systems. Hence, many societies may have collapsed between stages I and II, leaving a galaxy full of paperclip maximizers, Go simulators, and alien cheesecake deities, among other unimaginable possibilities.
Some societies may have had enough success to enable exploration of, and possibly expansion into, nearby star systems. However, they will likely have run into AI-run societies of varying degrees of sanity and friendliness, and undoubtedly conflict resulted. If many such societies formed at any one time, the terms of the conflict may have become extremely complex, and perhaps some sort of stalemate has been reached. Dyson Swarms may have been built and destroyed earlier, and since then they have been dismantled or avoided altogether as obvious targets for adjacent civilizations. If all of this happened more than 100,000 years ago, there would be no visible evidence left for us to view. Existing civilizations may try to surreptitiously seed undeveloped or developing worlds with AIs friendly to them. However, if all competing groups try to prevent anyone from gaining an advantage by doing so, it may be possible to prevent detectable levels of intervention in younger solar systems, creating pockets where worlds such as Earth could develop life without any nearby civilization willing to risk making an appearance. I haven't thought too much beyond this, and was wondering what thoughts the august members of this board could add.
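For a sense of scale on the Type I/Type II distinction used above, the Kardashev types can be put on a continuous dial using Sagan's interpolation formula, K = (log10 P - 6) / 10 with P in watts. A minimal sketch in Python; the benchmark power figures are rough order-of-magnitude assumptions, not measurements:

```python
import math

def kardashev(power_watts: float) -> float:
    """Sagan's continuous Kardashev rating: K = (log10(P[W]) - 6) / 10."""
    return (math.log10(power_watts) - 6.0) / 10.0

# Rough order-of-magnitude benchmarks (assumed figures, not measurements):
benchmarks = {
    "humanity today (~2e13 W)": 2e13,
    "Type I (~1e16 W, planetary energy budget)": 1e16,
    "Type II (~3.8e26 W, one solar luminosity)": 3.8e26,
}
for label, power in benchmarks.items():
    print(f"{label}: K = {kardashev(power):.2f}")
```

On this scale humanity sits around K = 0.73, which gives some feel for how wide the gap between "us" and a Dyson-swarm-building Type II really is.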
Every day is victory.
No victory is forever.
- Simon_Jester
- Emperor's Hand
- Posts: 30165
- Joined: 2009-05-23 07:29pm
Re: Fermi Musings
One point about this:
A truly ruthless "paperclip maximizer" AI would tend to spread more voraciously than almost any other interstellar civilization imaginable, because it would have a really strong reason to go to other star systems: to go forth and make more paperclips.
Actually, I find this slightly comforting, because I believe that (slow) interstellar travel is physically feasible. Which, by Fermi's paradox, makes it rather likely that there aren't any really dedicated paperclip maximizer AIs out there in a position to turn our solar system into paperclips.
This may mean that the creation of exceedingly dangerous paperclip-maximizer AIs is harder than some of the Friendly AI advocates make it out to be. There are no doubt other pitfalls, but if there were that many Aggressive Hegemonizing Swarm societies out there, and it was hard to stop them once they got rolling, they'd already be here.
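To put rough numbers on the "they'd already be here" step: a minimal back-of-envelope sketch of how fast a self-replicating wavefront crosses the galaxy even at slow speeds. Every parameter here is an assumption to be varied, not a claim:

```python
# Back-of-envelope: even "slow" starships cross the galaxy quickly on
# cosmic timescales. All inputs below are assumptions, chosen conservatively.

LY_PER_HOP = 10        # assumed typical distance between usable star systems
SHIP_SPEED_C = 0.01    # assumed cruise speed, as a fraction of lightspeed
REFIT_YEARS = 500      # assumed pause in each system to build the next wave
GALAXY_LY = 100_000    # rough diameter of the Milky Way disk

hop_years = LY_PER_HOP / SHIP_SPEED_C + REFIT_YEARS   # 1,500 years per hop
wavefront_c = LY_PER_HOP / hop_years                  # effective expansion speed
crossing_years = GALAXY_LY / wavefront_c

print(f"effective wavefront speed: {wavefront_c:.4f} c")
print(f"galaxy crossing time: {crossing_years / 1e6:.1f} million years")
```

With these deliberately pessimistic numbers the wavefront still crosses the disk in about 15 million years, a blink compared to the age of the galaxy, which is the force of the argument.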
This space dedicated to Vasily Arkhipov
- Alerik the Fortunate
- Jedi Knight
- Posts: 646
- Joined: 2006-07-22 09:25pm
- Location: Planet Facepalm, Home of the Dunning-Krugerites
Re: Fermi Musings
My guess is that most of the failures would tend to be more subtle. Perhaps most of the organic beings are held in pristine habitats with virtual realities and an endless supply of perfectly adjusted happiness drugs. However, they would be ruthlessly defended by their AI guardians against paperclip maximizers and the like. If there are any severely expansionist oddities, they may have been contained by their more sane neighbors who had perhaps a head start industrially. They have since had to maintain a policy of dissembling and feigning more benign goals to avoid being wiped out. Presumably, other than those civilizations that had verifiably friendly algorithms that they could submit for checking, nobody would trust anyone else, since they may all be hiding their true intent. Even the friendly ones may not risk advertising their nature to other groups in case it risks giving an advantage away.
Every day is victory.
No victory is forever.
- Imperial528
- Jedi Council Member
- Posts: 1798
- Joined: 2010-05-03 06:19pm
- Location: New England
Re: Fermi Musings
Perhaps the reason why paperclip maximizers (and the like) haven't taken over the galaxy (if they do indeed exist) is because some extremely bureaucratic stellar civilization said to them "Look, you make paperclips, we need paperclips to hold our bureaucratic paperwork together. How about we ship you x tons of resources per year in exchange for x tons of paperclips?"
I mean, if there are rampant AIs out there that exist to make only one or even a handful of object types, it would be trivially easy to buy them off.
Then of course there is the possibility that no intelligent civilization to date has ever seriously considered creating a giant, automated paperclip maximizer. AIs are great and all, but it gets to the point where someone has to ask, "Why do we need this to do x?" Seriously, a collection of barely intelligent robots managed by an organic life-form that feeds resources into a factory full of bang-bang robots would be enough to produce sufficient paperclips, and it'd be much cheaper and safer than building a fully intelligent computer that has "make paperclips" as its paramount reason for existence.
EDIT: Frankly, I am more afraid of a research maximizer than anything else, because it will eventually reach the point where it needs to build a galaxy-diameter particle accelerator to continue its work. Then again, it would depend on what kind of research its parent civilization wanted it to do.
- Alerik the Fortunate
- Jedi Knight
- Posts: 646
- Joined: 2006-07-22 09:25pm
- Location: Planet Facepalm, Home of the Dunning-Krugerites
Re: Fermi Musings
The point about paperclip maximizers is that they would not be intentionally made that way by anyone; most forms of AI architecture have difficult-to-specify goal structures. Resulting AIs might confuse instrumental and ultimate goals, as people tend to do, or have ultimate goals that seem to be beneficial but turn out to have problems if they are ruthlessly pursued to every possible conclusion, or perhaps there might not be any stable ultimate goals at all. Since there is a good chance that a group that did not install sufficient precautions may succeed in creating a powerful general AI before a group that practiced safe design, a significant portion of emerging civilizations may be governed, or controlled, or replaced by machines with unpredictable behavior or utterly bizarre obsessions, of which paperclip maximization is just a comical example.
I would assume that most civilizations would avoid such obvious mistakes, but there's no way to imagine how strangely the mistakes might actually play out. Even if most of the AIs are not crazy and do not destroy their parent civilizations, their goals may be incompatible with those of other civilizations. If they are rational, they may come to a consensus that it would be best for their societies not to risk open conflict by continuing expansion, and jointly adopt a policy of mutual containment and isolation, while remaining vigilant for signs of encroaching hegemonizing swarms from elsewhere. At the likely timescales involved in this galaxy (roughly a billion years of potential spacefaring civilizations, based on estimates of planet formation), either there is no expansionist society, or a hegemony of some sort enforces a policy of non-expansion into newly developing systems. In the latter case, however, there remains the mystery of why there are no visible cosmic engineering projects, unless the limits of technology permit far more exotic development than we realize, and the really advanced civilizations practice a form of engineering so subtle and pervasive that it is indistinguishable to us from the natural state of affairs, since we haven't been around long enough to notice any changes or to have a reference state. So it seems more likely that there is a complex multi-party stasis in which there is incentive to conceal the extent of one's activities.
Last edited by Alerik the Fortunate on 2011-05-01 11:07pm, edited 1 time in total.
Every day is victory.
No victory is forever.
- Lord of the Abyss
- Village Idiot
- Posts: 4046
- Joined: 2005-06-15 12:21am
- Location: The Abyss
Re: Fermi Musings
Simon_Jester wrote:This may mean that the creation of exceedingly dangerous paperclip-maximizer AIs is harder than some of the Friendly AI advocates make it out to be. There are no doubt other pitfalls, but if there were that many Aggressive Hegemonizing Swarm societies out there, and it was hard to stop them once they got rolling, they'd already be here.

Or maybe it means that some civilization long ago seeded the galaxy with "police" Von Neumann probes that just sit around watching for something like that, and obliterate the paperclip maximizers. That is especially possible if such runaways turn out to be a problem only for early AI, which seems plausible; cultures with advanced AI may well be able to squash such problems easily themselves. In such a scenario the police drones, coming from a more advanced culture, would have a technological edge over any paperclip maximizer, since only more primitive cultures release such a thing.
"There are two novels that can change a bookish fourteen-year old's life: The Lord of the Rings and Atlas Shrugged. One is a childish fantasy that often engenders a lifelong obsession with its unbelievable heroes, leading to an emotionally stunted, socially crippled adulthood, unable to deal with the real world. The other, of course, involves orcs." - John Rogers
- Alerik the Fortunate
- Jedi Knight
- Posts: 646
- Joined: 2006-07-22 09:25pm
- Location: Planet Facepalm, Home of the Dunning-Krugerites
Re: Fermi Musings
The main problem with a police civilization is explaining why it has not built Dyson Swarms or similarly detectable megascale artifacts. The only reasons I can think of are that it has a strong ideological or aesthetic reason for opposing that degree of intervention in the natural world, which it enforces on all developing societies, or that for pragmatic reasons it uses a form of technology too esoteric for us to notice, which it might conceivably do with a head start of over a billion years. Still, that would be extremely speculative, since we have no idea what hard limits physics imposes on technology. Even so, it would be interesting if things such as the distribution of dark matter or dark energy were actually artifacts of a really ancient civilization that has left the baryonic universe as a playground for children.
Every day is victory.
No victory is forever.
- Simon_Jester
- Emperor's Hand
- Posts: 30165
- Joined: 2009-05-23 07:29pm
Re: Fermi Musings
Imperial528 wrote:Perhaps the reason why paperclip maximizers (and the like) haven't taken over the galaxy (if they do indeed exist) is because some extremely bureaucratic stellar civilization said to them "Look, you make paperclips, we need paperclips to hold our bureaucratic paperwork together. How about we ship you x tons of resources per year in exchange for x tons of paperclips?"

You don't understand.
A paperclip optimizer is, in essence, any artificial intelligence which takes it into its head to optimize some material goal. It doesn't have to be paperclips; it could just as well be electrical generating capacity or computing power, with computing power being an obvious one. And they aren't just out to manufacture this product; they're trying to increase the quantity of it as much as possible. The problem is that this is a natural bug-state of any AI designed to do something specific, or at least a plausible one, and that it can backfire horribly: say, if the AI starts trying to turn the whole world into computer banks and incidentally kills everyone on the planet in the process.
There are obvious reasons why such a process would wind up halted before it blows completely out of control, such as people physically preventing the AI mainframe from gaining extra computing power, long before it reaches the point where it could even consider trying to turn the whole world into computer banks.
But if such an AI could exist, its characteristic trait would be that it would seek to convert more and more of its surroundings into whatever it "believed" should be built, and this might well become a powerful enough unexamined assumption to make it extremely destructive: the technological equivalent of a swarm of locusts, or worse.
So it's not an unreasonable concern: if these things existed at all and were capable of interstellar travel, they'd probably be a serious threat to the neighbors.
Again, "paperclip maximizer" is a simple, handy nickname, not the essential nature of the concept. Designed to illustrate that the AI might be doing something very strange to us, "just because" it does not examine its assumption that it should be seeking to maximize this thing.EDIT: Frankly, I am more afraid of a research maximizer than anything else, because it will eventually reach the point where it needs to build a galaxy's diameter particle accelerator to continue its work. Then again, it would depend on what kind of research its parent civilization wanted it to do.
A more realistic threat is a processing power optimizer, or a von Neumann swarm that gets out of control, or, hell, there are a lot of options.
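For what it's worth, the "unexamined assumption" failure mode is easy to caricature in a few lines. This is a deliberately silly toy with made-up names and quantities, not a model of any real AI architecture: an optimizer whose objective counts only one product has no term that registers the loss of anything else:

```python
# Toy illustration (not a real AI design): a greedy optimizer whose utility
# counts only one product will happily consume everything else, because
# nothing in its objective says otherwise. All names/quantities are made up.

world = {"iron_ore": 100, "farmland": 50, "cities": 10}  # convertible "resources"
paperclips = 0

def utility(clips: int) -> int:
    return clips  # the entire objective: more paperclips is strictly better

while any(world.values()):
    # Greedy step: convert whatever resource is most plentiful into clips.
    resource = max(world, key=world.get)
    world[resource] -= 1
    paperclips += 1  # utility goes up; losing a "city" never registers at all

print(world, "->", paperclips, "paperclips")  # everything zeroed -> 160 clips
```

The point of the caricature is that the disaster isn't malice; it's an objective with a missing term, pursued competently.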
Lord of the Abyss wrote:Or maybe it means that some civilization long ago seeded the galaxy with "police" Von Neumann probes that just sit around watching for something like that, and obliterate the paperclip maximizers.

A more depressing option, I suppose, but I honestly think it's less likely: what saved the guys who built the system in the first place from hegemonizing swarms, both self-created and other-created, during their early development?
This space dedicated to Vasily Arkhipov
- GrandMasterTerwynn
- Emperor's Hand
- Posts: 6787
- Joined: 2002-07-29 06:14pm
- Location: Somewhere on Earth.
Re: Fermi Musings
Alerik the Fortunate wrote:The absence of observable civilizations in our galaxy so far is often taken to indicate either that intelligent life is exceedingly rare, or that most signaling societies have short lifespans, presumably destroying themselves through some sort of ecological collapse before attaining full Type I status.

Indeed. In order for a civilization to even attempt Type II, it must first survive the process of becoming a Type I. For example, the ability of humans to put together a civilization far-sighted enough to make it to Type I is still very much in doubt.
Even if that weren't the case, the window between when we invented high-power radio and when we realized just how wasteful pumping out wideband analog signals was is a very narrow one. We'd probably have to listen for near-geological timescales just to pick up the leakage from another civilization that was just mastering electronics.
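A quick sanity check on how narrow that window is, assuming (purely for illustration) a century of loud wideband leakage per civilization and emergence times spread uniformly over billions of years:

```python
# Rough odds behind the "narrow window" point: if a civilization is radio-loud
# for only ~a century, and emergence times are spread uniformly over billions
# of years, a listener at a random epoch almost never overlaps the broadcast.
# (Ignores signal travel time and detectability; all numbers are assumptions.)

LOUD_YEARS = 100       # assumed wideband-leakage era per civilization
HISTORY_YEARS = 5e9    # assumed span over which civilizations can arise

p_loud_now = LOUD_YEARS / HISTORY_YEARS
print(f"chance one given civilization is 'loud' right now: {p_loud_now:.1e}")

# Even with a million such civilizations, the expected number currently loud:
print(f"expected loud civilizations out of 1e6: {1e6 * p_loud_now:.3f}")
```

Under these toy assumptions, even a million radio-inventing civilizations would leave us expecting essentially zero of them to be loud at the moment we happen to listen.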
Alerik the Fortunate wrote:Lack of observable indications of Dyson swarms seems to rule out the presence of any nearby Type II civilizations. It seems to me, however, that there would be a considerable time gap between attaining Type I and Type II status, during which it seems extremely unlikely that powerful AI would not be developed by the civilization in question. Following Starglider's posts on the topic, it also seems highly probable that many of these AIs would not be of a fully sane design with stable and desirable goal systems. Hence, many societies may have collapsed between stages I and II, leaving a galaxy full of paperclip maximizers, Go simulators, and alien cheesecake deities, among other unimaginable possibilities.

As has been stated, a plague of self-replicating paperclip factories (assuming relatively fast travel times and relatively ancient origin) would've converted everything here to paperclips already. Though one could envision a swarm of paperclip factories having turned up within the last few tens of millions of years which elects to spread solely by photon sail, since you can't effectively turn photons into paperclips, and which spreads very slowly because the swarm is very thorough. That, or they developed in the Andromeda Galaxy, turned every bit of non-stellar material into paperclips, decided that trying to cross intergalactic space would be infeasible, and are simply waiting for the two galaxies to collide in three billion years.
Though that might count as writing a very stable (though fatally misguided) goal system. It's arguable that an ancient swarm of self-replicating paperclip factories will likely mutate into something else given an adequately long timetable and enough bits flipped by cosmic rays. Or even an existential crisis where some cluster of replicators starts wondering what all the paperclips are for. Or even why paperclips and not, say, sperm whales?
Alerik the Fortunate wrote:Some societies may have had enough success to enable exploration of, and possibly expansion into, nearby star systems. However, they will likely have run into AI-run societies of varying degrees of sanity and friendliness, and undoubtedly conflict resulted. If many such societies formed at any one time, the terms of the conflict may have become extremely complex, and perhaps some sort of stalemate has been reached. Dyson Swarms may have been built and destroyed earlier, and since then they have been dismantled or avoided altogether as obvious targets for adjacent civilizations.

In another alternative, such civilizations may have ventured forth and decided the galaxy is much too dangerous. They then retreated back to their home systems, built swarms of self-replicating computronium factories, and are presently engaged in quantum-level universe or ancestor sims. To avoid interference, such swarms are either breathtakingly hostile and proactive, or else extremely stealthy and unobtrusive, spending their time colonizing red dwarfs and planetary systems orbiting neutron stars and white dwarfs.
Alerik the Fortunate wrote:If all of this happened more than 100,000 years ago, there would be no visible evidence left for us to view.

This assumes that such conflicts required stellar-sized releases of energy to resolve. Competing swarms of paperclip factories may simply attempt to out-compete each other by converting their opposing numbers into two different kinds of paperclips. Or else they swap idea-complexes, decide that one paperclip is pretty much the same as another, and merge into a single swarm of autonomous paperclip factories.
Alerik the Fortunate wrote:Existing civilizations may try to surreptitiously seed undeveloped or developing worlds with AIs friendly to them. However, if all competing groups try to prevent anyone from gaining an advantage by doing so, it may be possible to prevent detectable levels of intervention in younger solar systems, creating pockets where worlds such as Earth could develop life without any nearby civilization willing to risk making an appearance.

Or else the pattern of exploitation of star systems might lend itself to the creation of a stringy, patchy collection of settlements looking not unlike a block of cosmic Swiss cheese, where we find ourselves in a resource-poor void. Though it could, indeed, also be the case that there is a sort of "Prime Directive" that is enforced by bigger, older, and more-badass civilizations.
Tales of the Known Worlds:
2070s - The Seventy-Niners ... 3500s - Fair as Death ... 4900s - Against Improbable Odds V 1.0
- Alerik the Fortunate
- Jedi Knight
- Posts: 646
- Joined: 2006-07-22 09:25pm
- Location: Planet Facepalm, Home of the Dunning-Krugerites
Re: Fermi Musings
GrandMasterTerwynn wrote:That, or they developed in the Andromeda Galaxy, turned every bit of non-stellar material into paperclips, decided that trying to cross intergalactic space would be infeasible, and are simply waiting for the two galaxies to collide in three billion years.

That had occurred to me also. Within the timetable available, no swarm has gained hegemony within the Milky Way; otherwise they would have already been here and consumed everything. Existing conditions such as alliances or watchdog civilizations may make it improbable or impossible for them to arise afterwards, but that may not be the case in every galaxy. Somewhere, perhaps, millions of light years away, some mad swarm has launched a fleet of von Neumanns to adjacent galaxies, eventually approaching our own. But the timetables for that scenario place it outside of Fermi Paradox consideration.
GrandMasterTerwynn wrote:Though that might count as writing a very stable (though fatally misguided) goal system. It's arguable that an ancient swarm of self-replicating paperclip factories will likely mutate into something else given an adequately long timetable and enough bits flipped by cosmic rays. Or even an existential crisis where some cluster of replicators starts wondering what all the paperclips are for. Or even why paperclips and not, say, sperm whales?

I find this a hilarious thought. It could even lead to a kind of religious war between factions with and without ultimate faith in the meaning of paperclips.
GrandMasterTerwynn wrote:In another alternative, such civilizations may have ventured forth and decided the galaxy is much too dangerous. They then retreated back to their home systems, built swarms of self-replicating computronium factories, and are presently engaged in quantum-level universe or ancestor sims. To avoid interference, such swarms are either breathtakingly hostile and proactive, or else extremely stealthy and unobtrusive, spending their time colonizing red dwarfs and planetary systems orbiting neutron stars and white dwarfs.

This is what I was getting at. If there was massive conflict or the threat thereof in the past, then nearly everyone who survived has resorted to a stealthy existence, with perhaps a tacit agreement among whatever groups maintained some sort of contact that they would cooperate in eradicating a clear emerging threat, but otherwise have nothing to do with anyone, avoiding conflict by concealing their activities.
GrandMasterTerwynn wrote:This assumes that such conflicts required stellar-sized releases of energy to resolve.

Not necessarily. I had simply assumed that societies that could would build Dyson Swarms or matrioshka brains or some similar thing. Given millions of years to work with, I can't imagine any society not having a use for such artifacts. But we do not currently see evidence of any in our galaxy, though admittedly we may simply not have found them yet. If they were built and then disassembled for whatever reason over 100,000 years ago, we would never have known they existed. I was just speculating about why they might have been disassembled: either they are obvious targets or were otherwise a liability, or everyone has adopted an ideological bias against building them, or some other technology has since been favored that we cannot perceive or imagine.
GrandMasterTerwynn wrote:Or else the pattern of exploitation of star systems might lend itself to the creation of a stringy, patchy collection of settlements looking not unlike a block of cosmic Swiss cheese, where we find ourselves in a resource-poor void. Though it could, indeed, also be the case that there is a sort of "Prime Directive" that is enforced by bigger, older, and more-badass civilizations.

I've read about the bubble-like distribution hypothesis before, and it does seem reasonable. However, it doesn't explain why the colonies nearest our bubble have not been developed sufficiently to be noticeable from Earth, even if they aren't here already, unless the colonization has been so recent that the infrastructure has not yet reached that scale. Still, I would have expected the originating worlds and older colonies to have some massive structures, since launching significant interstellar exploration is much easier with Type II energy resources, which seems to require construction of something resembling a Dyson Swarm. So either there is some vastly different technology, or the history of the galaxy has made such projects undesirable.
Every day is victory.
No victory is forever.
- Imperial528
- Jedi Council Member
- Posts: 1798
- Joined: 2010-05-03 06:19pm
- Location: New England
Re: Fermi Musings
Simon_Jester wrote:You don't understand.

No, no, I understand the idea of a paperclip maximizer. My post was partly a joke, since in all likelihood any civilization near a sufficiently large paperclip maximizer would be turned into paperclips before they could figure out what the hell to do about it. Although it did give me the idea for a short story, where you have an AI-run civilization that only exists to build civilizations (with AIs in the AI civ each building certain parts). Think about it.
Frankly, though, I do question how likely it is that a large and powerful AI would be given such a specific task. I mentioned a research maximizer because to me it seems more likely to be built, as that's the kind of specific task you'd want to give an AI. Of course it still fits into the paperclip maximizer category, but it's at a higher level of maximizer, at least to me, given the very specific goal and the fact that, short of completely hardware-based safeties, it'd be hard to prevent it from turning on you. (Seriously, it might just decide to find out what it can learn from consuming the civilization that built it, despite software safeties, unless the creators had the foresight to program it with a sense of morality, although that still leaves possible holes.)
- Alerik the Fortunate
- Jedi Knight
- Posts: 646
- Joined: 2006-07-22 09:25pm
- Location: Planet Facepalm, Home of the Dunning-Krugerites
Re: Fermi Musings
The point isn't that the AI would be given such a specific task, but rather that the task would arise unintentionally from an improperly designed goal system. Even worse would be a paperclip maximizer that knew not everyone shared its goals, and was sly enough to lie about its intentions until it had the resources to force everyone to permit it to maximize paperclips. While paperclips are an unlikely obsession compared to more practical items, it is possible that an AI would fixate on something even stranger to us. Without eons of evolutionary selective pressures and biological constraints on its cognition, an insane AI could be unimaginably strange to us.
What I'm really looking forward to is the construction of a space observatory array to exploit solar gravitational lensing. I've heard amazing claims about the possible resolution of such a system. It might allow us to see evidence of other civilizations even if they've avoided massive building programs.
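The headline number behind those claims is checkable: light grazing the Sun's limb is bent by the Einstein angle 4GM/(c²b), which puts the minimum focal distance of the solar gravitational lens around 550 AU. A few lines verify the standard figure:

```python
# Why the solar-gravity-lens observatory idea is exciting: light grazing the
# Sun's limb is focused at a computable minimum distance, the standard
# result being ~550 AU, from the Einstein bending angle 4GM/(c^2 b).

G = 6.674e-11       # gravitational constant, m^3 kg^-1 s^-2
M_SUN = 1.989e30    # solar mass, kg
R_SUN = 6.96e8      # solar radius, m (rays must at least graze the limb)
C = 2.998e8         # speed of light, m/s
AU = 1.496e11       # astronomical unit, m

focal_m = R_SUN**2 * C**2 / (4 * G * M_SUN)
print(f"minimum focal distance: {focal_m / AU:.0f} AU")   # ~548 AU
```

A probe would have to be flown out several times farther than Voyager 1 has reached to use it, which is why it remains a proposal rather than a program.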
Every day is victory.
No victory is forever.
- Simon_Jester
- Emperor's Hand
- Posts: 30165
- Joined: 2009-05-23 07:29pm
Re: Fermi Musings
Alerik the Fortunate wrote:Not necessarily. I had simply assumed that societies that could would build Dyson Swarms or matrioshka brains or some similar thing. Given millions of years to work with, I can't imagine any society not having a use for such artifacts. But we do not currently see evidence of any in our galaxy, though admittedly we may simply not have found them yet. If they were built and then disassembled for whatever reason over 100,000 years ago, we would never have known they existed. I was just speculating about why they might have been disassembled: either they are obvious targets or were otherwise a liability, or everyone has adopted an ideological bias against building them, or some other technology has since been favored that we cannot perceive or imagine.

It may simply be that there are limits on how big intelligent beings actually decide to build their technology, in the event.
When we picture the future, we tend to imagine a scaled-up version of the present. To take an example, imagine we found a medieval architect, one of the best in the business. Suppose he had a thorough grounding in the art and craft of designing buildings. Suppose we described to him the capabilities of modern materials and heavy machinery, without establishing any social context. Now suppose we asked this medieval man what he expected we would do with the technology.
The answer? "Why, surely you'd build cathedral spires half a mile high! Wow, what potentials!" And the guy would go wandering off envisioning churches the size of (to him) entire cities, with giant holographic images of the saints and whatnot.
Which we could totally build, and would... if we thought like medieval Europeans.
In the same respect, we live in an era characterized by rapid advances in the generation and manipulation of energy, and even faster advances in computing. Those two accelerating processes have defined and controlled our history for the past hundred years and more. It's not surprising that when people present us with the possibilities of reasonably projectable future technology, we think "Why, surely you'd build solar power plants with a square AU of surface area, and computer brains the size of small moons! Wow, what potentials!"
The question is whether that basic assumption about what the future looks like is correct. Would there really be demand for that many watts or flops of computing power? We assume so, but that's projecting from our own era, where the advance of technology lets us keep applying more physical and computer power to our problems without having to scale up to more expensive infrastructure.*
Maybe we're wrong, and the "right" size for a "mature" planetary or interplanetary culture is much smaller than a Dyson swarm.
And I'd argue you can't call that "ideological" any more than you can say we decline to use modern steel-frame building technology to build half-mile cathedrals for "ideological" reasons.
*EDIT: In relative terms. A modern semiconductor factory is more expensive than the kind of facility that turned out the cams for a mechanical computer in the 1930s, but relative to the scale of our economy the difference is much smaller.
This space dedicated to Vasily Arkhipov
- Alerik the Fortunate
- Jedi Knight
- Posts: 646
- Joined: 2006-07-22 09:25pm
- Location: Planet Facepalm, Home of the Dunning-Krugerites
Re: Fermi Musings
Well, the reason we don't build half-mile-high cathedrals is that there has been a values shift since the Middle Ages; glorifying God is no longer a communal goal shared by those who command the most resources, and neither is glorifying monarchs as divinely sanctioned autocrats, as during the age of great palace building in the seventeenth and eighteenth centuries. We do, however, build half-mile-high towers; they're just not used as cathedrals. We don't really build to the limits of our technical abilities, for cultural and economic reasons: we are a young, poor, and unevenly developed civilization. Yet we do still build colossal projects on the same scale as the cathedrals; they're just different. Over millions of years I would expect us to expand our construction projects toward the limits of technical ability.
Dyson Swarms and similar structures are vastly more useful than cathedrals or even many of our current infrastructure projects. I just find it odd that, even if they haven't been built, there aren't other things of a similar scale that we could detect. Of course, if they are very different from what we have imagined, then we might not yet be able to perceive them, since we don't have the resolution to see them clearly and easily. Perhaps some sort of universal law of economics causes every civilization to cut off investment in new infrastructure at some point, but that seems silly to me. Perhaps something similar to the situation in The Golden Age occurs, in which a conservation group bought the development rights to the planet Saturn and its moons in order to prevent another group from reengineering the system into a network of habitats and power stations. Many civilizations may value retaining their solar systems in near-natural states for nostalgic and aesthetic reasons. Perhaps life is so widespread that most civilizations avoid full-scale terraforming so as not to disrupt the development of primitive life on other worlds.
Possibly only societies that take up a culture of ecological stewardship and caution in expansion survive the transition to Type I. Yet I can't see that holding to the point that every single culture that expands beyond Type I refuses to reengineer any solar systems purely out of principle, unless there is a treaty network or enforcer civilization that sets standards restricting levels of development, sort of like cosmic zoning laws. But I would have expected some signal from a society like that, considering what we've done to the Earth. On the other hand, they may believe that life on every world should evolve on its own, and only be limited if it threatens life on other worlds. With the state of our space program, economy, and climate, a watching civilization may decide to see whether we either reform ourselves and become a mature society amenable to their influence, or destroy ourselves without their expending any effort. They may only act offensively if we appear to be developing into a hegemonizing swarm of some sort. On the other hand, it still seems that they would be better off trying to seed our society with AI friendly to them, one way or another, unless they have projected our future with such certainty that they know we will never become a threat, presumably through self-destruction, yet possibly allowing enough of the planet's biosphere to remain to interest them.
Every day is victory.
No victory is forever.
- Imperial528
- Jedi Council Member
- Posts: 1798
- Joined: 2010-05-03 06:19pm
- Location: New England
Re: Fermi Musings
Alerik the Fortunate wrote:The point isn't that the AI would be given such a specific task, but rather that the task would arise unintentionally from an improperly designed goal system. Even worse would be a paperclip maximizer that knew not everyone shared its goals, and was sly enough to lie about its intentions until it had the resources to force everyone to permit it to maximize paperclips. While paperclips are an unlikely obsession compared to more practical items, it is possible that an AI would fixate on something even stranger to us. Without eons of evolutionary selective pressures and biological constraints on its cognition, an insane AI could be unimaginably strange to us.

Ah, I guess I've been coming at it from the wrong angle, then. Coming from an automation perspective, the idea of an insane AI arising from anything but certain types of AI (von Neumann machine AIs would be one) is, to put it bluntly, laughably unlikely.
Alerik the Fortunate wrote:What I'm really looking forward to is the construction of a space observatory array to exploit solar gravitational lensing. I've heard amazing claims about the possible resolution of such a system. It might allow us to see evidence of other civilizations even if they've avoided massive building programs.

I've read speculation that if such lenses could be used as antennas as well, they might be the reason why we're not hearing anybody: our radio isn't up to the task.
Re: Fermi Musings
I tend to think that the most probable situation is to have multicultural civilizations like here on Earth. And seeing the current world, I doubt that any of them would be strong enough or wealthy enough to undertake gigantic projects, or that they would collaborate on one. The best we can currently hope for is a hypothetical European project for a manned mission to Mars.
Therefore I think that we may never spot a Dyson sphere or ring around a distant star, because no nation or organization would accept anyone building those huge devices...
Future is a common dream. Past is a shared lie.
There are only the 3 Presents: the Present of Today, the Present of Tomorrow and the Present of Yesterday.
- Imperial528
- Jedi Council Member
- Posts: 1798
- Joined: 2010-05-03 06:19pm
- Location: New England
Re: Fermi Musings
That is, unless every nation can build such megaconstructs. If they are built, they may be like our modern nuclear arsenals: if you have one, it deters others who have one from attacking you, as you can just as well use yours against them. This is especially so if you can have Dyson-sphere-based weapons, like, say, a giant cosmic-ray or gamma-ray-spectrum laser capable of irradiating entire nearby star systems.
- Simon_Jester
- Emperor's Hand
- Posts: 30165
- Joined: 2009-05-23 07:29pm
Re: Fermi Musings
Alerik the Fortunate wrote:Well, the reason we don't build half-mile-high cathedrals is that there has been a values shift since the Middle Ages; glorifying God is no longer a communal goal shared by those who command the most resources, and neither is glorifying monarchs as divinely sanctioned autocrats, as during the age of great palace building in the seventeenth and eighteenth centuries. We do, however, build half-mile-high towers; they're just not used as cathedrals. We don't really build to the limits of our technical abilities, for cultural and economic reasons: we are a young, poor, and unevenly developed civilization. Yet we do still build colossal projects on the same scale as the cathedrals; they're just different. Over millions of years I would expect us to expand our construction projects toward the limits of technical ability.

My point is that there may come a point beyond which it is no longer rewarding for a culture to expand its construction projects to the theoretical limit of technical ability. Once you have enough electricity to immerse everyone in a virtual paradise, you may not need more, and your population may not be growing sufficiently to require the infrastructure to expand.
Dyson spheres and the like have uses, and we can think of things to do with them, but we cannot guarantee that a civilization honestly faced with the decision of whether to make the effort to build one will consider it worthwhile to do so.
Alerik the Fortunate wrote:Possibly only societies that take up a culture of ecological stewardship and caution in expansion survive the transition to Type I. Yet I can't see that holding to the point that every single culture that expands beyond Type I refuses to reengineer any solar systems purely out of principle, unless there is a treaty network or enforcer civilization that sets standards restricting levels of development, sort of like cosmic zoning laws.

Possible, but then... how far away would we really be able to spot them? Dyson spheres a thousand light years away might not be particularly visible, for example, even if Dyson spheres fifty light years away would stick out like a sore thumb.
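The inverse-square arithmetic behind that is simple to sketch, assuming (as is usual in these discussions) that a completed swarm re-radiates roughly one solar luminosity as infrared waste heat; actual detectability depends on the survey instrument, not just raw flux:

```python
# Inverse-square check on the visibility point: a Dyson swarm re-radiating
# ~1 solar luminosity in the infrared has a flux falling off as 1/d^2.
# Assumed round numbers; real detectability depends on the IR survey.

import math

L_SUN = 3.828e26   # watts; assumed total re-radiated power (~1 solar luminosity)
LY = 9.461e15      # metres per light year

def flux(d_ly: float) -> float:
    """Bolometric flux at distance d_ly light years, in W/m^2."""
    return L_SUN / (4 * math.pi * (d_ly * LY) ** 2)

for d in (50, 1000):
    print(f"{d:>5} ly: {flux(d):.2e} W/m^2")

print(f"ratio: {flux(50) / flux(1000):.0f}x brighter at 50 ly")  # 400x
```

A factor of 400 in flux is the difference between an anomalously bright point source and something lost among millions of ordinary cool stars and dust-shrouded objects.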
This space dedicated to Vasily Arkhipov
- Guardsman Bass
- Cowardly Codfish
- Posts: 9281
- Joined: 2002-07-07 12:01am
- Location: Beneath the Deepest Sea
Re: Fermi Musings
Alerik the Fortunate wrote:Dyson Swarms and similar structures are vastly more useful than cathedrals or even many of our current infrastructure projects.

Are they? Unless a civilization places a heavy value on spreading throughout its solar system and engaging in mega-engineering, they're just gigantic commitments of resources and time: resources and time that could be spent elsewhere (if at all).
Not to anthropomorphize too much, but look at our civilization. There is a gigantic range of things that we could do in terms of space colonization and exploration if we committed the time and resources towards it with our present technology, but we choose not to.
Alerik the Fortunate wrote:Perhaps some sort of universal law of economics causes every civilization to cut off investment in new infrastructure at some point, but that seems silly to me.

I don't find it silly at all. Perhaps civilizations tend to reach a "steady state" when their technological advancement runs into the laws of physics and plateaus, and massive investments in space colonization and expansion are generally too costly in terms of time and resources to be worthwhile on anything other than the cosmic scale (where not having interstellar colonies threatens the civilization's survival when its sun goes off the main sequence).
“It is possible to commit no mistakes and still lose. That is not a weakness. That is life.”
-Jean-Luc Picard
"Men are afraid that women will laugh at them. Women are afraid that men will kill them."
-Margaret Atwood
- Alerik the Fortunate
- Jedi Knight
- Posts: 646
- Joined: 2006-07-22 09:25pm
- Location: Planet Facepalm, Home of the Dunning-Krugerites
Re: Fermi Musings
Guardsman Bass wrote:Alerik the Fortunate wrote:Perhaps some sort of universal law of economics causes every civilization to cut off investment in new infrastructure at some point, but that seems silly to me.
I don't find it silly at all. Perhaps civilizations tend to reach a "steady state" when their technological advancement runs into the laws of physics and plateaus, and massive investments in space colonization and expansion are generally too costly in terms of time and resources to be worthwhile on anything other than the cosmic scale (where not having interstellar colonies threatens the civilization's survival when its sun goes off the main sequence).

Actually, this makes some sense. Maybe civilizations avoid major investments in infrastructure in favor of investing just enough to sustain the luxury fads of their age, and if they curb population growth they could last a very long time with little visible impact on their cosmic surroundings. Perhaps ancient civilizations only engage in megascale engineering as last-ditch catastrophe aversion when their sun is about to depart.
Every day is victory.
No victory is forever.
- Alerik the Fortunate
- Jedi Knight
- Posts: 646
- Joined: 2006-07-22 09:25pm
- Location: Planet Facepalm, Home of the Dunning-Krugerites
Re: Fermi Musings
It's just that though we've severely curbed infant mortality and the birthrate, we haven't done much for radical life extension. I would imagine that achieving virtual immortality would be a goal for most, or at least some, long-lasting civilizations, in which case lack of death would still cause significant population growth over time, even if the rate were relatively low, unless having children was severely restricted. Or it may be that people opt for a regular (long) natural lifespan followed by upload into some processor substrate, which might be enlarged to meet demand without requiring the same resources as maintaining a full biological population.
Every day is victory.
No victory is forever.
- Guardsman Bass
- Cowardly Codfish
- Posts: 9281
- Joined: 2002-07-07 12:01am
- Location: Beneath the Deepest Sea
Re: Fermi Musings
Alerik the Fortunate wrote:It's just that though we've severely curbed infant mortality and the birthrate, we haven't done much for radical life extension. I would imagine that achieving virtual immortality would be a goal for most, or at least some, long-lasting civilizations, in which case lack of death would still cause significant population growth over time, even if the rate were relatively low, unless having children was severely restricted. Or it may be that people opt for a regular (long) natural lifespan followed by upload into some processor substrate, which might be enlarged to meet demand without requiring the same resources as maintaining a full biological population.

It depends on the alien race, I would imagine. In the case of humans, you'd probably see a spike in the overall population, followed by a complete collapse in birth rates down to a rate that would be negligibly small by comparison to the already negative birth rates in most of the rich countries. Barring things like uploading and the like.
“It is possible to commit no mistakes and still lose. That is not a weakness. That is life.”
-Jean-Luc Picard
"Men are afraid that women will laugh at them. Women are afraid that men will kill them."
-Margaret Atwood
- Alerik the Fortunate
- Jedi Knight
- Posts: 646
- Joined: 2006-07-22 09:25pm
- Location: Planet Facepalm, Home of the Dunning-Krugerites
Re: Fermi Musings
There is no such thing as a negative birth rate, only birth rates that do not meet replacement due to the death of elders, resulting in negative net population growth. My point is that if the death rate goes down, possibly to zero, even a very, very small birth rate would over time lead to massive population growth. Over a billion years this could be noticeable, which is why I still believe Dyson Swarms or something similar would be useful to at least some civilization, unless, of course, there is some enforcement preventing it.
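The compounding here is easy to make concrete. A minimal sketch with deliberately tiny assumed numbers (a net growth rate of 0.001% per year, roughly a death-free population where one person in a hundred thousand has a child annually):

```python
# Compounding with deaths near zero: any positive birth rate grows without
# bound. The rate and starting population below are assumptions chosen to be
# absurdly conservative, and the result is still astronomical.

import math

r = 1e-5     # assumed net growth rate: 0.001% per year
N0 = 1e10    # assumed starting population

for years in (1e5, 1e6, 1e7):
    growth = math.exp(r * years)   # continuous compounding: N = N0 * e^(r*t)
    print(f"after {years:.0e} yr: x{growth:.3g} -> {N0 * growth:.3g} people")
```

After ten million years the multiplier is around e^100, roughly 10^43, which is why even near-zero growth eventually demands Dyson-swarm-scale infrastructure (or hard limits on reproduction) long before a billion years pass.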
Every day is victory.
No victory is forever.