That's cool, except that it assumes that simulating the Big Bang would result in the exact same universe developing, whereas our best understanding is that the universe is probabilistic, not deterministic.
Starglider wrote:
Drooling Iguana wrote:
I think the point is that, while there is only one real universe, there would likely be multiple simulated universes if we do, in fact, develop the technology to create them. If there was only one simulated universe, our odds of inhabiting it would be 50%. However, as the number of simulations increases that probability also increases, and the probability of our inhabiting the real universe goes down.
Correct.
Correct me if I'm wrong:
The probability that our reality is a simulation is high if the probability that there is a large number of simulations is high. However, the probability that there is any given number of simulations (including one) is completely unknown without an actual mechanism to analyse.
The mechanism is this: if a top-level superintelligence creates a simulation, which then creates its own simulation, then you can assume that they will keep making simulations recursively until the top-level computer runs out of resources.
So to estimate the probability of a given number of simulations existing, you need to estimate the system resources of the top-level computer. Since Bostrom is deeply into all that transhumanism stuff, he assumes that there will be sufficient computing power for a very large top-level simulator.
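To make that concrete, here's a toy back-of-the-envelope calculation. Every number in it is made up, and the `shrink` factor just encodes the assumption that a simulation-within-a-simulation has to be coarser than its host:

```python
def total_simulations(top_budget, sim_cost, shrink=0.1, min_cost=1.0):
    """Toy count of recursively nested simulations.

    top_budget: compute budget of the top-level computer (arbitrary units)
    sim_cost:   cost of one first-level simulation
    shrink:     each deeper level's simulations cost this fraction of their host
    min_cost:   below this cost a simulation is too coarse to be viable
    All parameters are hypothetical; this only illustrates the recursion.
    """
    total = 0
    count = 1            # number of "computers" at the current level (top = 1)
    budget = top_budget  # budget of each computer at this level
    cost = sim_cost      # cost of one simulation at the next level down
    while cost >= min_cost:
        per_host = int(budget // cost)
        if per_host == 0:
            break
        total += count * per_host   # simulations spawned at this depth
        count *= per_host           # they become the hosts of the next level
        budget = cost               # each child's whole budget is its own cost
        cost = cost * shrink        # grandchildren are coarser still
    return total

print(total_simulations(1000, 100))  # 10 + 100 + 1000 = 1110 nested simulations
```

The point of the toy model is just that the count grows geometrically with depth, so the answer is dominated by how much resource the top-level computer has and how coarse a simulation is still "viable".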
So the idea is:
1) Transhumanism implies a very large top-level simulator is possible and there is an ineffable motive to build it.
2) A very large top-level simulator implies a very large number of recursive simulations.
3) A very large number of recursive simulations implies that the total number of realities is very high.
4) Which implies that the probability that any given reality is a simulation is very high.
5) Which finally implies that the probability that our reality is a simulation is very high.
I see number (4) as a problem. What if there is a very large number of realities, but something allows us to determine that ours is a high-quality one? Since simulating a simplified model of reality will always be easier than simulating a more accurate one, the vast majority of simulations will be low-quality. In fact, the number of simulations that contain billions of sapient beings in a billion-light-year universe is probably very small compared to the total number of simulations.
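That objection amounts to conditioning on quality. A sketch, with purely hypothetical numbers, assuming the top-level reality is high-quality by definition and only high-quality simulations could contain observers like us:

```python
def p_simulated_given_quality(n_sims, frac_high_quality):
    """P(simulated | our reality is high-quality).

    n_sims:            total number of simulations (hypothetical)
    frac_high_quality: fraction of simulations detailed enough to hold
                       billions of sapients in a huge universe (hypothetical)
    The one top-level reality always counts as high-quality.
    """
    high_quality_sims = n_sims * frac_high_quality
    return high_quality_sims / (high_quality_sims + 1)

print(p_simulated_given_quality(10**6, 1e-9))  # ~0.001: probably real
print(p_simulated_given_quality(10**6, 1e-3))  # ~0.999: probably simulated
```

So whether the conclusion survives depends entirely on how the simulator's resources trade off against the rarity of high-quality simulations, which is exactly the disagreement in the thread.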
So if there were a million realities, then I think the probability of ours being the top-level one is much higher than one in a million. However, given a sufficiently wanked-out superintelligence (which transhumanism implies), that probability is still going to be small. So yeah, assuming transhumanist predictions are true, we're probably in a simulation; even though the hypothesis can be discarded for all practical purposes, because it's solipsistic.
Patrick Degan wrote:
Drooling Iguana wrote:
Assuming that the simulation was made for our benefit, it would only need a byte for every quantum event directly observed by humans. It could fudge things with less granular algorithms when we're observing things macroscopically.
Which of course is why so many things taste like chicken.

Or more seriously (yet not actually seriously), why quantum physics and relativity don't like each other - when you're not looking in the box, Schroedinger's cat is replaced by the words 'CAT GOES HERE'. Why bother simulating and storing the position of an electron around an atom when you can just store its distribution of allowed positions and generate one as required? Much lower development budget, system requirements, final cost, etc.
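The "CAT GOES HERE" optimization is basically lazy evaluation: store only the distribution and sample a concrete value the first time anyone measures. A sketch (class and names entirely made up):

```python
import random

class LazyElectron:
    """Stores an electron's distribution of allowed positions instead of
    a definite position; a concrete value exists only once observed."""

    def __init__(self, allowed_positions, weights):
        self.allowed_positions = allowed_positions  # e.g. orbital labels
        self.weights = weights                      # relative probabilities
        self._observed = None                       # no state stored yet

    def observe(self):
        # First measurement samples from the distribution; later ones
        # reuse the value so observers see a consistent world.
        if self._observed is None:
            self._observed = random.choices(
                self.allowed_positions, weights=self.weights)[0]
        return self._observed

e = LazyElectron(["1s", "2s", "2p"], [0.7, 0.2, 0.1])
print(e.observe() == e.observe())  # True: consistent once observed
```

Until `observe()` is called, the simulator pays for three floats instead of a full trajectory, which is the whole pitch: lower system requirements at the cost of a universe that only fills in detail when watched.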
Kuroneko wrote:[snip] why would having such simulations be valuable enough to make such resource investments worthwhile?
I've never played The Sims, but it is the best-selling PC game series in history. If it's so fun for us, then maybe a super-AI finds it fun too.