The Matrix ... but not quite

SLAM: debunk creationism, pseudoscience, and superstitions. Discuss logic and morality.

Moderator: Alyrium Denryle

User avatar
Patrick Degan
Emperor's Hand
Posts: 14847
Joined: 2002-07-15 08:06am
Location: Orleanian in exile

Post by Patrick Degan »

Drooling Iguana wrote:
Feil wrote:I may be wrong (it's a distressingly common occurrence) but would not this computer need a byte for every quantum event?
Assuming that the simulation was made for our benefit, it would only need a byte for every quantum event directly observed by humans. It could fudge things with less granular algorithms when we're observing things macroscopically.
Which of course is why so many things taste like chicken. 8)
When ballots have fairly and constitutionally decided, there can be no successful appeal back to bullets.
—Abraham Lincoln

People pray so that God won't crush them like bugs.
—Dr. Gregory House

Oil an emergency?! It's about time, Brigadier, that the leaders of this planet of yours realised that to remain dependent upon a mineral slime simply doesn't make sense.
—The Doctor "Terror Of The Zygons" (1975)
User avatar
Turin
Jedi Master
Posts: 1066
Joined: 2005-07-22 01:02pm
Location: Philadelphia, PA

Post by Turin »

Surlethe wrote:Leaving aside the fundamental problems with his argument, it seems like his invocation of probability reduces to absurdity (well, duh) anyway, or at least infinite regress. Suppose he is correct, and we are simply a simulation of an advanced civilization. Who is to say that the simulators are not themselves simulated? By his own argument, the simulators are probably a simulation; and so on and on and on.
I immediately thought of this myself, as a very silly but cleverly written little SF story was made of that very concept: I don't know Timmy, being God is a big responsibility.
Surlethe wrote:Debate-wise, though, this is a poor way to attack the argument. It would be better to simply hammer on the fact it's essentially solipsistic.
Yeah, solipsism always struck me as a "so what?" kind of position. It doesn't tell you anything.
User avatar
Darth Wong
Sith Lord
Sith Lord
Posts: 70028
Joined: 2002-07-03 12:25am
Location: Toronto, Canada
Contact:

Post by Darth Wong »

Surlethe wrote:Leaving aside the fundamental problems with his argument, it seems like his invocation of probability reduces to absurdity (well, duh) anyway, or at least infinite regress.
I don't see how invoking the magic word "probability" justifies anything anyway. Any and all probability analyses must make some assumptions about the controlling mechanism or the number of possible choices, etc. Ergo, its reliability is contingent upon those assumptions. So many probability analyses are really nothing more than sophistry layered upon some unjustifiable assumptions.
Image
"It's not evil for God to do it. Or for someone to do it at God's command."- Jonathan Boyd on baby-killing

"you guys are fascinated with the use of those "rules of logic" to the extent that you don't really want to discussus anything."- GC

"I do not believe Russian Roulette is a stupid act" - Embracer of Darkness

"Viagra commercials appear to save lives" - tharkûn on US health care.

http://www.stardestroyer.net/Mike/RantMode/Blurbs.html
Junghalli
Sith Acolyte
Posts: 5001
Joined: 2004-12-21 10:06pm
Location: Berkeley, California (USA)

Post by Junghalli »

Turin wrote:I immediately thought of this myself, as a very silly but cleverly written little SF story was made of that very concept:
You also see it in the movie The Thirteenth Floor. It's about a guy who's running such a simulation, only to discover that his own world is itself a simulation. In the end it's revealed that the next world beyond that is a simulation as well.
User avatar
Surlethe
HATES GRADING
Posts: 12267
Joined: 2004-12-29 03:41pm

Post by Surlethe »

Turin wrote:I immediately thought of this myself, as a very silly but cleverly written little SF story was made of that very concept: I don't know Timmy, being God is a big responsibility.
It's sort of like the "who designed the designer?" conundrum. That's a cute story, by the way.
Surlethe wrote:Debate-wise, though, this is a poor way to attack the argument. It would be better to simply hammer on the fact it's essentially solipsistic.
Yeah, solipsism always struck me as a "so what?" kind of position. It doesn't tell you anything.
It's interesting speculation, like theology, but since it can't lead to any testable predictions, it is ultimately as descriptively useless as religion. Although I wonder if this could generate predictions; if it's an interactive simulation, there should be some level of arbitrary behavior in the system -- it can be "hacked".
Darth Wong wrote:I don't see how invoking the magic word "probability" justifies anything anyway. Any and all probability analyses must make some assumptions about the controlling mechanism or the number of possible choices, etc. Ergo, its reliability is contingent upon those assumptions. So many probability analyses are really nothing more than sophistry layered upon some unjustifiable assumptions.
That's a good point; using "probability" as a magic word is like any other pseudoscience, abusing terminology for the sake of appearances. It seems like there are an infinite number of possible choices in this case -- anybody could be running the simulation -- and when infinite discrete choices are in play, you're not allowed to use the usual formulas for probability anyway.
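Surlethe's point about infinitely many discrete choices can be made concrete. Under a uniform distribution over N candidates, each gets probability 1/N, which vanishes as N grows, so no uniform distribution over a countably infinite set of "possible simulators" exists. A toy sketch (not part of the original argument, just an illustration of the limit):

```python
# Each of n equally likely candidates gets probability 1/n; as n grows
# without bound this vanishes, which is why the usual uniform-probability
# formulas cannot be applied to an infinite set of discrete choices.
def uniform_probability(n: int) -> float:
    return 1.0 / n

for n in (10, 1_000, 1_000_000):
    print(f"{n} candidates -> each has probability {uniform_probability(n):.2e}")
```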
A Government founded upon justice, and recognizing the equal rights of all men; claiming no higher authority for existence, or sanction for its laws, than nature, reason, and the regularly ascertained will of the people; steadily refusing to put its sword and purse in the service of any religious creed or family is a standing offense to most of the Governments of the world, and to some narrow and bigoted people among ourselves.
F. Douglass
User avatar
J
Kaye Elle Emenopey
Posts: 5835
Joined: 2002-12-14 02:23pm

Post by J »

Patrick Degan wrote:
Drooling Iguana wrote:Assuming that the simulation was made for our benefit, it would only need a byte for every quantum event directly observed by humans. It could fudge things with less granular algorithms when we're observing things macroscopically.
Which of course is why so many things taste like chicken. 8)
Which raises the deep question of what chicken tastes like, if almost everything tastes like chicken.
Philosophers will undoubtedly spend the next few centuries attempting to answer that.
This post is a 100% natural organic product.
The slight variations in spelling and grammar enhance its individual character and beauty and in no way are to be considered flaws or defects


I'm not sure why people choose 'To Love is to Bury' as their wedding song...It's about a murder-suicide
- Margo Timmins


When it becomes serious, you have to lie
- Jean-Claude Juncker
User avatar
Starglider
Miles Dyson
Posts: 8709
Joined: 2007-04-05 09:44pm
Location: Isle of Dogs
Contact:

Post by Starglider »

I've discussed this with Nick in person - over dinner in fact. His argument has a tendency to be grossly misrepresented.

The Simulation Argument is not concerned with other universes and aliens. We (probably) can't say anything about what might be 'below' the observable basis for the universe, and Occam's Razor suggests that it's useless to try. This is literally metaphysics.

Bostrom's presentation of the simulation argument is confined to the future history of humanity. One of the following must be true;

1) Humanity (and our successors) will never have the capability to run numerous 'ancestor simulations', i.e. fully sapient simulations of historical humans including their environment.
2) Humanity (and our successors) will have the capability but choose not to do so, due to either universal lack of desire or universally effective restrictions on the use of computing power.
3) There is a very high chance that you personally are an entity in a computer simulation being run by a (post-)human from what you would consider the future*.

* I don't personally see personal identity that way, so I would formulate that statement somewhat differently, but this is how most people see it.

There is no question (to anyone with a clue) that the capability in (1) is technically feasible, if human advancement continues the way we hope it will. If we colonise the solar system and develop mature nanotechnology and so on, the computing power will be there, and it is almost certain that we will eventually either crack AI or scan and understand the brain in enough resolution to do it. So (1) boils down to 'do you believe human civilisation will collapse and never colonise the solar system or develop massive computing power?'

Personally I am optimistic about (1), so the chance of (3) being false hinges critically on (2). I think (2) is most likely true, but sadly not for human-friendly reasons; the development of even a tiny fraction of the necessary computing power for (3) is likely to lead to seed AI, which while if used correctly would provide a means to enforce (2) perfectly, is more likely to produce a rampant expansionist AI that destroys humanity and then has no interest in simulating its history.
User avatar
ray245
Emperor's Hand
Posts: 7954
Joined: 2005-06-10 11:30pm

Post by ray245 »

If we are a program, there will always be some bugs unless the system is totally perfect. That would mean there is no room for improvement, which is illogical, so the idea must be illogical as well.

Moreover, to ensure that humans are allowed to develop new stuff, the world would have to be constantly maintained, which would allow room for more human error.
User avatar
Molyneux
Emperor's Hand
Posts: 7186
Joined: 2005-03-04 08:47am
Location: Long Island

Post by Molyneux »

ray245 wrote:If we are a program, there will always be some bugs unless the system is totally perfect. That would mean there is no room for improvement, which is illogical, so the idea must be illogical as well.

Moreover, to ensure that humans are allowed to develop new stuff, the world would have to be constantly maintained, which would allow room for more human error.
It is perfectly possible to write a program without any bugs; it just grows more difficult as the complexity of the program grows.
Ceci n'est pas une signature.
User avatar
Darth Wong
Sith Lord
Sith Lord
Posts: 70028
Joined: 2002-07-03 12:25am
Location: Toronto, Canada
Contact:

Post by Darth Wong »

Starglider wrote:Personally I am optimistic about (1), so the chance of (3) being false hinges critically on (2). I think (2) is most likely true, but sadly not for human-friendly reasons; the development of even a tiny fraction of the necessary computing power for (3) is likely to lead to seed AI, which while if used correctly would provide a means to enforce (2) perfectly, is more likely to produce a rampant expansionist AI that destroys humanity and then has no interest in simulating its history.
How the hell does #3 follow from #1 or the status of #2? You could say that it's possible, although Occam's Razor makes it a useless avenue of exploration, but to say that it's probable is completely absurd, even if the technical ability does exist in the future.
User avatar
Drooling Iguana
Sith Marauder
Posts: 4975
Joined: 2003-05-13 01:07am
Location: Sector ZZ9 Plural Z Alpha

Post by Drooling Iguana »

Darth Wong wrote:
Starglider wrote:Personally I am optimistic about (1), so the chance of (3) being false hinges critically on (2). I think (2) is most likely true, but sadly not for human-friendly reasons; the development of even a tiny fraction of the necessary computing power for (3) is likely to lead to seed AI, which while if used correctly would provide a means to enforce (2) perfectly, is more likely to produce a rampant expansionist AI that destroys humanity and then has no interest in simulating its history.
How the hell does #3 follow from #1 or the status of #2? You could say that it's possible, although Occam's Razor makes it a useless avenue of exploration, but to say that it's probable is completely absurd, even if the technical ability does exist in the future.
I think the point is that, while there is only one real universe, there would likely be multiple simulated universes if we do, in fact, develop the technology to create them. If there were only one simulated universe, our odds of inhabiting it would be 50%. However, as the number of simulations increases, the probability that we inhabit one of them also increases, and the probability of our inhabiting the real universe goes down.
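The counting argument above can be sketched numerically: with one real universe and N indistinguishable simulations, a uniform prior over all N+1 "places" an observer could be gives a 1/(N+1) chance of being in the real one. A toy calculation under the argument's own (contested) assumptions:

```python
# Toy version of the counting argument: one real universe plus N
# simulated ones, with a uniform prior over all N+1 places an
# observer could be.  P(real) = 1/(N+1), which shrinks as N grows.
def p_real(num_simulations: int) -> float:
    return 1.0 / (num_simulations + 1)

for n in (1, 10, 1_000_000):
    print(f"{n} simulations -> P(real universe) = {p_real(n):.6f}")
```

With a single simulation this reproduces the 50% figure; Darth Wong's objection below is precisely that the uniform prior is doing all the work.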
Image
"Stop! No one can survive these deadly rays!"
"These deadly rays will be your death!"
- Thor and Akton, Starcrash

"Before man reaches the moon your mail will be delivered within hours from New York to California, to England, to India or to Australia by guided missiles.... We stand on the threshold of rocket mail."
- Arthur Summerfield, US Postmaster General 1953 - 1961
User avatar
Darth Wong
Sith Lord
Sith Lord
Posts: 70028
Joined: 2002-07-03 12:25am
Location: Toronto, Canada
Contact:

Post by Darth Wong »

Drooling Iguana wrote:I think the point is that, while there is only one real universe, there would likely be multiple simulated universes if we do, in fact, develop the technology to create them. If there were only one simulated universe, our odds of inhabiting it would be 50%. However, as the number of simulations increases, the probability that we inhabit one of them also increases, and the probability of our inhabiting the real universe goes down.
Only if you assume that habitation in any given universe (real or simulated) has precisely the same likelihood, which in turn presumes that all of these universes are not just "sapient" simulations but in fact are perfect ones. I understand that some people are optimistic about advances in computer technology, but perfection seems a ludicrous expectation.
User avatar
Drooling Iguana
Sith Marauder
Posts: 4975
Joined: 2003-05-13 01:07am
Location: Sector ZZ9 Plural Z Alpha

Post by Drooling Iguana »

They'd only have to be perfect enough to fool the people inside them, and since those people would have no way of comparing the simulated universe to the real one, the simulation would just have to be self-consistent.
User avatar
Darth Wong
Sith Lord
Sith Lord
Posts: 70028
Joined: 2002-07-03 12:25am
Location: Toronto, Canada
Contact:

Post by Darth Wong »

Drooling Iguana wrote:They'd only have to be perfect enough to fool the people inside them, and since those people would have no way of comparing the simulated universe to the real one, the simulation would just have to be self-consistent.
Which in turn means that they are perfect, since they have to be glitch-free.
User avatar
Drooling Iguana
Sith Marauder
Posts: 4975
Joined: 2003-05-13 01:07am
Location: Sector ZZ9 Plural Z Alpha

Post by Drooling Iguana »

I think a lot of it comes down to whether or not we'd recognise a glitch if we saw one.
User avatar
Starglider
Miles Dyson
Posts: 8709
Joined: 2007-04-05 09:44pm
Location: Isle of Dogs
Contact:

Post by Starglider »

ray245 wrote:If we are a program, there will always be some bugs unless the system is totally perfect.
There may well be human-noticeable bugs in the simulation software, but we wouldn't notice them: whenever a crash or glitch occurred, the state would simply be restored to the pre-bug snapshot and the simulation resumed.
That would mean there is no room for improvement, which is illogical, so the idea must be illogical as well. Moreover, to ensure that humans are allowed to develop new stuff, the world would have to be constantly maintained, which would allow room for more human error.
Say WTF? I suggest you never try to do philosophy again.
Molyneux wrote:It is perfectly possible to write a program without any bugs; it just grows more difficult as the complexity of the program grows.
For humans, yes. Possibly not for AIs; my company has been researching automated code generation from specs and automated test-suite generation from specs plus code for the last three years, and it's pretty clear to me that large classes of software would be effectively bug-free if developed by general AIs. However, I can't be sure about generalising this to something as complicated as an ancestor simulation.
Darth Wong wrote:How the hell does #3 follow from #1 or the status of #2? You could say that it's possible, although Occam's Razor makes it a useless avenue of exploration, but to say that it's probable is completely absurd, even if the technical ability does exist in future.
My personal assessment of the risks of badly programmed seed AI is not based on this argument; I'm just saying that, based on that independent assessment, I would say the second possibility is quite likely. If I did not believe this, then I would believe that the third possibility is most likely.

Nick's argument does not favour any one of these three possibilities over the others; he has in fact published several well-regarded papers on existential risks (which tend to favour #1: human-derived intelligences will never have the capability), and he is IMHO a hopeless optimist about co-operative (rather than dictatorial) implementation of #2, i.e. the hope that no posthuman will ever indulge in mass simulation of historical humans because it is morally wrong and/or social pressure will prevent it.
Drooling Iguana wrote:I think the point is that, while there is only one real universe, there would likely be multiple simulated universes if we do, in fact, develop the technology to create them. If there were only one simulated universe, our odds of inhabiting it would be 50%. However, as the number of simulations increases, the probability that we inhabit one of them also increases, and the probability of our inhabiting the real universe goes down.
Correct.
Darth Wong wrote:Only if you assume that habitation in any given universe (real or simulated) has precisely the same likelihood, which in turn presumes that all of these universes are not just "sapient" simulations but in fact are perfect ones.
Not only is this a reasonable assumption, I don't see how you're getting anything else. In Bostrom's scenario, a very large set of human-structured intelligences experiencing contemporary surroundings exists in history, a small fraction of which exist independently and a large fraction of which exist in simulations. The complexity of the environment simulations is vastly smaller than the complexity of the real universe, but the complexity of the intelligences themselves, and the /perceived complexity/ of the external universe from their point of view, is identical (or nearly so).
Darth Wong wrote:Which in turn means that they are perfect, since they have to be glitch-free.
Staying self-consistent isn't anywhere near as hard as being true to reality. Early versions might well be buggy, but only the first few billion simulated humans would ever see that; once the capability has been under development for a few centuries, I'm sure it would be effectively perfectly reliable. Beyond that, even if it were glitchy, that wouldn't be an issue as long as either the glitch itself, or the fact that someone has noticed a glitch, is detectable by the system. You can either edit the simulation to retcon the glitch away, or just revert to the last save prior to the glitch (globally or locally, depending on the situation).

IMHO most practical versions of this scenario are likely to be managed at a fine level of detail by a general AI anyway, so even if the simulation were coarse enough that noticeable glitches regularly occurred, they would be detected and fixed as they happened.
User avatar
Darth Wong
Sith Lord
Sith Lord
Posts: 70028
Joined: 2002-07-03 12:25am
Location: Toronto, Canada
Contact:

Post by Darth Wong »

And who would put in the effort to maintain these large numbers of gigantic simulations, and toward what end? Everything breaks down eventually, so somebody would have to actively maintain everything. I suppose you could posit a world where we've been completely replaced by machines and the simulation maintains itself, but that doesn't seem too likely either. And even in that case, why?
User avatar
Kuroneko
Jedi Council Member
Posts: 2469
Joined: 2003-03-13 03:10am
Location: Fréchet space
Contact:

Post by Kuroneko »

Didn't we already have this discussion about Dr. Bostrom's argument about two years ago?

The conclusion does follow from the premises, but the premises are quite hefty enough to swallow--(1) that future humans will become advanced enough to have world-sized "ancestor-simulations", and (2) that they are interested enough in doing so that ancestor-simulations become so common that a large fraction of "people to ever exist" are inside a simulation. If those things are true, the chance of any particular person being simulated is indeed high, but the second premise in particular is very unreasonable--why would having such simulations be valuable enough to make such resource investments worthwhile?

If I recall correctly, in his original paper, Dr. Bostrom's language had the implicature that he was in favor of denying the premises, i.e., that even if such world-simulations ever become technologically possible, they will be uncommon. I'm not sure whether he's giving the press a slightly different spin on all of this or whether the reporter just wanted to make it more sensational.
User avatar
Winston Blake
Sith Devotee
Posts: 2529
Joined: 2004-03-26 01:58am
Location: Australia

Post by Winston Blake »

Turin wrote:I immediately thought of this myself, as a very silly but cleverly written little SF story was made of that very concept: I don't know Timmy, being God is a big responsibility.
That's cool, except that it assumes that simulating the Big Bang would result in the exact same universe developing, whereas our best understanding is that the universe is probabilistic, not deterministic.
Starglider wrote:
Drooling Iguana wrote:I think the point is that, while there is only one real universe, there would likely be multiple simulated universes if we do, in fact, develop the technology to create them. If there were only one simulated universe, our odds of inhabiting it would be 50%. However, as the number of simulations increases, the probability that we inhabit one of them also increases, and the probability of our inhabiting the real universe goes down.
Correct.
Correct me if I'm wrong:

The probability that our reality is a simulation is high if the probability that there is a large number of simulations is high. However, the probability that there is any given number of simulations (including one) is completely unknown without an actual mechanism to analyse.

The mechanism is this: if a top-level super-intelligence creates a simulation, which then creates its own simulation, then you can assume that they will keep making simulations recursively until the top-level computer runs out of resources.

So to estimate the probability of a given number of simulations existing, you need to estimate the system resources of the top-level computer. Since Bostrom is deeply into all that transhumanism stuff, he assumes that there will be sufficient computing power for a very large top-level simulator.
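The recursion described above can be sketched as a toy model. Everything here is hypothetical: the budget, the per-simulation cost, and the assumption that each nested level only gets some fixed fraction of its parent's resources.

```python
# Toy model of recursive simulation: a top-level computer with a fixed
# resource budget runs a simulation, which runs its own simulation with
# a fraction of its parent's resources, and so on until the remaining
# budget can no longer afford a single simulation.
def nesting_depth(top_budget: float, sim_cost: float, shrink: float = 0.5) -> int:
    depth = 0
    budget = top_budget
    while budget >= sim_cost:
        depth += 1          # this level can afford to run a simulation
        budget *= shrink    # each child gets only a fraction of its parent
    return depth

print(nesting_depth(1000.0, 1.0))  # levels of nesting the budget supports
```

The point of the sketch is Winston Blake's: the number of realities in the stack is bounded by the top-level computer's resources, so any probability estimate hinges on that unknown quantity.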

So the idea is:
1) Transhumanism implies a very large top-level simulator is possible and there is an ineffable motive to build it.
2) A very large top-level simulator implies a very large number of recursive simulations.
3) A very large number of recursive simulations implies that the total number of realities is very high.
4) Which implies that the probability that any given reality is a simulation is very high.
5) Which finally implies that the probability that our reality is a simulation is very high.

I see number (4) as a problem. What if there is a very large number of realities but something allows us to determine that ours is a high-quality one? Since simulating a simplified model of reality will always be easier than simulating a more accurate one, the vast majority of simulations will be low-quality. In fact, the number of simulations that have billions of sapient beings in a billion-light-year-sized universe is probably very small compared to the total number of simulations.

So if there were a million realities, I think the probability of ours being the top-level one would be much higher than one in a million. However, given a sufficiently wanked-out superintelligence (which transhumanism implies), that probability is still going to be small. So yeah, assuming transhumanist predictions are true, we're probably in a simulation, even though the idea can be disregarded for all practical purposes because it's solipsistic.
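The objection to step (4) amounts to weighting realities before normalising: if only a small fraction of simulations are high-fidelity enough to match what we observe, the relevant comparison is one real universe against that small fraction, not against all simulations. A hypothetical sketch (the counts and fractions are invented for illustration):

```python
# Hypothetical illustration of the objection to step (4): if we could
# somehow tell that our reality is high-fidelity, the odds of being
# top-level are 1 / (1 + number of HIGH-fidelity sims), not
# 1 / (1 + total sims).
def p_top_level(total_sims: int, high_fidelity_fraction: float) -> float:
    high_fidelity_sims = total_sims * high_fidelity_fraction
    return 1.0 / (1.0 + high_fidelity_sims)

# A million simulations, but only 0.1% detailed enough to match what
# we observe: P(top level) is roughly 1/1001, not 1/1000001.
print(p_top_level(1_000_000, 0.001))
```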
Patrick Degan wrote:
Drooling Iguana wrote:Assuming that the simulation was made for our benefit, it would only need a byte for every quantum event directly observed by humans. It could fudge things with less granular algorithms when we're observing things macroscopically.
Which of course is why so many things taste like chicken. 8)
Or more seriously (yet not actually seriously), why quantum physics and relativity don't like each other - when you're not looking in the box, Schroedinger's cat is replaced by the words 'CAT GOES HERE'. Why bother simulating and storing the position of an electron around an atom when you can just store its distribution of allowed positions and generate one as required? Much lower development budget, system requirements, final cost, etc.
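The "generate one as required" idea is essentially lazy evaluation: store only the distribution and sample a concrete value when an observation is made. A toy sketch, with made-up labels and weights (and no claim whatsoever about actual physics):

```python
import random

# Toy "lazy electron": store only the probability distribution over
# allowed positions, and sample a concrete position only when an
# observation is actually made -- so storage scales with the size of
# the distribution, not with every unobserved event.
class LazyElectron:
    def __init__(self, positions, weights):
        self.positions = positions  # allowed positions (labels)
        self.weights = weights      # relative probabilities

    def observe(self):
        # Collapse on demand: pick one concrete position per observation.
        return random.choices(self.positions, weights=self.weights, k=1)[0]

e = LazyElectron(["1s", "2s", "2p"], [0.7, 0.2, 0.1])
print(e.observe())  # a concrete position exists only once we look
```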
Kuroneko wrote:[snip] why would having such simulations be valuable enough to make such resource investments worthwhile?
I've never played The Sims, but it is the best-selling PC game series in history. If it's that much fun for us, then maybe a super-AI would find it fun too.
User avatar
Turin
Jedi Master
Posts: 1066
Joined: 2005-07-22 01:02pm
Location: Philadelphia, PA

Post by Turin »

Winston Blake wrote:
Turin wrote:I immediately thought of this myself, as a very silly but cleverly written little SF story was made of that very concept: I don't know Timmy, being God is a big responsibility.
That's cool, except that it assumes that simulating the Big Bang would result in the exact same universe developing, whereas our best understanding is that the universe is probabilistic, not deterministic.
Not really, as the story mentions the idea that each universe is ever-so-slightly different from the universe "above" and "below" it. It just assumes that each universe simulates its "nearest neighbor" in the mathematical space of all possible universes, which is itself a ridiculous proposition AFAIK. Like I said, a very silly story.