Science Forums

Infinite recursive Simulated Realities and their implications for the GUT


Recommended Posts

One thing that always bothered me is the computing power required to run a meaningful simulation of a universe. One explanation is that the containing universe is vastly more complex, and that our simplified universe can be run on computers costing no more than a cellphone (in the context of their universe, of course). The problem comes in when WE want to run a meaningful simulation. Even the Game of Life would be limited to trillions of cells on the fastest supercomputers.

Okay, so you invoke Moore's law. Even if we convert all the mass in the solar system into processors operating near the theoretical limit of energy per computation, you run your gazillion-cell Game of Life simulation, seed it randomly, and even if intelligent "life" arises, all you will see is... a mass of boiling cells. Not much of a "zoo factor", which is after all the main reason for running the simulation in the first place: you want to see something interesting, novel and new. So you increase the complexity, which means more processing power, so you convert the galaxy, which you can't...

The problem with parallel computers is the communication delay between nodes, the speed of light limit. Adding more computers yields less and less benefit. Even if we slow down our own perception by allocating more resources to the simulation, there is the little problem of the sun running out of fuel; you can end up literally slowing down your perception and thought processes so much that the universe starts dying around you.

So when the simulation is too simple it is boring, and when it is too complex it is impossible to implement. That is frankly why I think the more-complex-universe argument does not wash: the beings in that universe would themselves be vastly more complex and more intelligent, which means we would be the equivalent of the Game of Life to them, extremely boring.
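To make that concrete, here is a minimal sketch (my own toy example, nothing more) of the kind of cellular-automaton simulation I mean: a standard Conway's Game of Life in Python/NumPy on a wrapped grid. The grid size and step count are arbitrary; the point is that every generation touches every cell, so the work per tick grows linearly with the cell count, which is why trillions of cells puts you on a supercomputer.

[code]
# Toy Conway's Game of Life on a wrapped (toroidal) grid, to make the
# cost argument concrete: every generation touches every cell, so work
# per step grows linearly with the cell count.
import numpy as np

def life_step(grid: np.ndarray) -> np.ndarray:
    """Advance one generation of Conway's Game of Life."""
    # Sum the eight neighbours of every cell using wrapped shifts.
    neighbours = sum(
        np.roll(np.roll(grid, dy, axis=0), dx, axis=1)
        for dy in (-1, 0, 1) for dx in (-1, 0, 1)
        if (dy, dx) != (0, 0)
    )
    # Birth on exactly 3 neighbours; survival on 2 or 3.
    return ((neighbours == 3) | ((grid == 1) & (neighbours == 2))).astype(np.uint8)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    grid = rng.integers(0, 2, size=(1_000, 1_000), dtype=np.uint8)  # 10^6 cells
    for _ in range(100):
        grid = life_step(grid)
    print("live cells after 100 generations:", int(grid.sum()))
[/code]

The "trillions of cells" figure above corresponds to a grid roughly a million times larger than this toy one, before you add any extra physics that might make the output look like more than boiling cells.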

 

One way that negates all that is if you can find a way to access an infinite pool of processing resources. Now the only way that we currently know of is Prof. Tipler's Omega Computer hypothesis. In the omega computer, a person as complex (or simple) as ourselves can run a perfect simulation of the universe. Of course it is assumed that the GUT (Grand Unified Theory) will be ancient knowledge by then, which will enable the person to run the universe with perfect and accurate rules. This means that the simulated universe will ultimately collapse into its own omega computer, which means some person in that computer... you get the idea: an infinite recursion of omega computers inside the master process. The implication is that the model universe will never reach the halting state (the final moment of the Crunch, a point of zero volume) but will demand exponentially more processing resources. It is like trying to reach the speed of light: you need ever more energy as your relativistic mass increases, yet you only tend towards light speed; no finite amount of energy gets you there. Even if only a fraction of persons decide to run a simulation, there will be an infinite number of universe simulations, because a fraction of infinity is still infinity. This of course applies to all the omega computers and universe simulations down the chain. The master omega computer does not mind; the OS just keeps on allocating resources from an infinite pool to an infinite number of processes that demand exponentially more resources.
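As a toy illustration of that asymptotic approach (my own simplified assumption, not Tipler's actual model): suppose the cost per unit of simulated time diverges near the Crunch, so each fixed-cost step of the host computer only covers half of the simulated time remaining. The simulated clock then converges on the Crunch but never reaches it in any finite number of host steps.

[code]
# Toy model: cost per unit of simulated time blows up near the Crunch,
# so each fixed-cost host step covers only half the remaining interval.
# The simulated clock approaches t_crunch asymptotically (a Zeno-style
# limit) and never reaches it in any finite number of host steps.
T_CRUNCH = 1.0           # simulated time of the Crunch (arbitrary units)
FRACTION_PER_STEP = 0.5  # fraction of the remaining interval covered per host step

t_sim = 0.0
for host_step in range(1, 41):
    t_sim += FRACTION_PER_STEP * (T_CRUNCH - t_sim)
    if host_step % 10 == 0:
        print(f"host step {host_step:2d}: simulated time = {t_sim:.15f}")

print("remaining gap to the Crunch:", T_CRUNCH - t_sim)  # tiny, but not zero
[/code]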

 

 

So my conclusion is that the simulation argument by Bostrom is right only if Tipler is right or you can access an infinite amount of processing power (only Tipler has currently shown how this may be possible). In this case we are almost certainly in a simulation, but this does not really matter or have any implication: even the master or "real" universe has been reduced to an omega computer.

 

The GUT itself will be fundamentally complex: while complex processes can emerge from interacting simple processes, the GUT itself will be irreducibly complex. Under some circumstances, such as the Crunch, the GUT will recurse into infinitely more complex loops and calculations.


  • 3 years later...
It's amazing how little has been posted on Nick Bostrom and the simulation argument!

Indeed. How could we have all missed this some 1221 days ago?

 

The “simulation hypothesis” has been discussed a few times around these forums, but by no means, IMHO, to exhaustion.

One thing that always bothered me is the computing power required to run a meaningful simulation of a universe

...

One-time poster Alma-Tadema voices a lot of intuitive objections to the hypothesis, most of which most of us have entertained at various times. I think some application of methodical reason over intuition can answer many of them, though...

The problem with parallel computers is the communication delay between nodes, the speed of light limit

Along with the "if you try to get enough computing machinery close enough together to avoid long communication delays, its Schwarzschild radius will exceed its physical radius and it'll become a black hole" issue, this seems a pretty inescapable limit. However, this reasoning appears to me to err in assuming that a reality simulation must run in realtime relative to some external reality. As with the ordinary simulations some of us have written and run, this isn't the case. A reality simulation could be running much more slowly than the external reality of its computer, with no effect on the simulation. Because, according to the simulation hypothesis, everything we simulated beings can use to measure time is part of the simulation, we'd be unable by any means to detect the ratio of simulated to actual time. Thus, a computer with synchronous parts separated by thousands of light-years would have an effective clock speed on the order of [math]10^{-11}[/math] Hz, but this intuitively ridiculously slow rate would be insignificant to the simulated reality.
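A quick back-of-the-envelope check of that clock-speed figure, in Python, assuming the limiting factor is simply the one-way light travel time between the most distant synchronous parts; the 3,000 light-year separation is just an arbitrary stand-in for "thousands".

[code]
# Effective synchronous clock rate limited by light travel time between nodes.
LIGHT_YEAR_M = 9.4607e15   # metres in one light year
C_M_PER_S = 2.99792458e8   # speed of light, m/s

separation_ly = 3_000      # "thousands of light-years"
one_way_delay_s = separation_ly * LIGHT_YEAR_M / C_M_PER_S
effective_clock_hz = 1.0 / one_way_delay_s

print(f"one-way delay: {one_way_delay_s:.3e} s (~{one_way_delay_s / 3.156e7:.0f} years)")
print(f"clock rate:    {effective_clock_hz:.1e} Hz")  # ~1e-11 Hz, as above
[/code]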

 

Such machines would, however, require vast durations to simulate even a single minute, day, or human lifetime, easily exceeding the duration of the Stelliferous era (about [math]10^6[/math] to [math]10^{14}[/math] years ABB, of which we're at nearly the very beginning, at roughly [math]1.4 \times 10^{10}[/math] years ABB). As essentially all of our present-day technology is directly or indirectly star-powered, such a computer would have to be powered differently. Fortunately (for our imagined very far-future simulation makers), the post-stelliferous universal eras are expected to have even greater power supplies available, primarily super-massive black holes.
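To put a rough number on "vast durations": the following sketch assumes, purely for illustration, that one simulated second costs [math]10^9[/math] clock ticks – an invented figure, since nothing in the hypothesis fixes it – combined with the ~[math]10^{-11}[/math] Hz effective clock rate from the previous sketch.

[code]
# Rough estimate of external (host) time needed per unit of simulated time,
# given the ~1e-11 Hz effective clock rate computed above.
SECONDS_PER_YEAR = 3.156e7

clock_hz = 1e-11             # effective clock rate from the previous sketch
ticks_per_sim_second = 1e9   # hypothetical simulation workload (pure assumption)

host_seconds_per_sim_second = ticks_per_sim_second / clock_hz
host_years_per_sim_minute = host_seconds_per_sim_second * 60 / SECONDS_PER_YEAR

print(f"host time per simulated minute: {host_years_per_sim_minute:.1e} years")
# ~1.9e14 years -- comparable to the ~1e14-year end of the Stelliferous era.
[/code]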

One way that negates all that is if you can find a way to access an infinite pool of processing resources. Now the only way that we currently know of is Prof. Tipler's Omega Computer hypothesis.

This is referring to an idea by Frank J Tipler, described at length in such books as The Physics of Immortality. Tipler's hypothesis – some believe it's more an expression of desperately wishful thinking than a serious scientific hypothesis – has several requirements not required by the simulation hypothesis:

  • The simulation must match “heaven” as imagined by every being that has ever lived and imagined it.
  • The simulation must be eternal going forward – that is, it requires an actually infinite amount of processing.
  • At present, as we’re not in heaven, we’re in the same, real universe in which the omega computer will someday be.
  • Somewhat as an aside, and requiring a fairly lengthy explanation found in the TPoI, the physical universe must be closed ([math]\Omega > 1[/math]). Tipler appears to suspect that the universe is not naturally so, but can be made so artificially.

Without these requirements, a reality simulation per the simulation hypothesis is quite literally infinitely easier than with them. At the same time, Tipler’s omega point idea explicitly rejects one of the more intriguing assertions of Bostrom’s simulation hypothesis: that it’s likely we are, right now, in a simulated reality.

 

Both ideas seem to me to share the assumption that the presumed current or ultimate far-future reality simulation is artificial, an assumption I think is somewhat limiting, as it's not too difficult to imagine that actual, physical reality is a sort of simulation run on an underlying "hardware" consisting of physical reality itself.

So my conclusion is that the simulation argument by Bostrom is right only if Tipler is right or you can access an infinite amount of processing power (only Tipler has currently shown how this may be possible).

My take on these two thinkers' ideas comes to nearly the opposite conclusion. As I noted above, it appears to me much easier for Bostrom's hypothesis to be right than Tipler's.

Life cannot simulate itself any more than generators can surpass input with output.

I don’t see how this analogy follows.

 

Electric generators can't produce more output than input energy, due to the law of conservation of energy. There's no requirement that a computer simulation violate this law. In obvious fact, every simulation yet written follows the usual physical laws, and, barring the sort of far-future super-science hinted at by Tipler, will almost certainly continue to do so.

 

Although progress has been slow – surprisingly so to many biologists and computer scientists – it's not unreasonable, I think, to expect that projects to accurately simulate simple life, such as the E-Cell project, will eventually succeed, demonstrating that life, in the form of human scientists, truly can simulate life.

