The theory, which I probably misunderstand because I have a level of education similar to a macaque’s, states that because a simulated world would eventually develop to the point where it creates its own simulations, it’s then just a matter of probability that we are in a simulation. That is, if there’s one real world and a zillion simulated ones, it’s more likely that we’re in a simulated world. That’s probably an oversimplification, but it’s the gist I got from listening to people talk about the theory.
But if the real world sets up a simulated world that more or less perfectly simulates itself, creating a mirror sim-within-a-sim would require at least twice that much processing power/resources, no? How could the infinitely recursive simulations even begin to be set up unless more and more hardware is constantly being added by the real meat people to their initial simulation? It would be like that cartoon (or was it a silent movie?) of a guy laying down train track struts while sitting on the cowcatcher of a moving train. Except in this case the train would be moving at close to the speed of light.
Doesn’t this fact alone disprove the entire hypothesis? If I set up a 1:1 simulation of our universe, then just sit back and watch, any attempts by my simulant people to create something that would exhaust all of my hardware would just… not work? Blue screen? Crash the system? Crunching the numbers of a 1:1 sim within a 1:1 sim would not be physically possible for a processor that can just about handle the first simulation. The simulation’s own simulated processors would still need to have their processing done by Meat World; you’re essentially just passing the CPU-buck backwards like it’s a rugby ball until it lands in the lap of the real world.
And this is just if the simulated people create ONE simulation. If 10 people in that one world decide to set up similar simulations simultaneously, the hardware for the entire sim reality would be toast overnight.
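Here’s a toy back-of-the-envelope model of what I mean. Every name and number in it is invented purely for illustration; it just tallies how much work the real world’s root hardware would have to do if each layer of 1:1 sims spawns more 1:1 sims.

```python
# Toy model: every nested simulation's computation ultimately has to be
# executed by the real-world ("Meat World") hardware at the root.
# All numbers here are invented purely for illustration.

ROOT_CAPACITY = 1.0      # compute the real world can spare for its simulation
COST_OF_FULL_SIM = 1.0   # a 1:1 sim of the universe eats that whole budget

def root_load(depth, sims_per_layer):
    """Total compute demanded of the root if every layer spawns
    `sims_per_layer` full-fidelity (1:1) simulations, `depth` layers deep."""
    return sum(
        (sims_per_layer ** layer) * COST_OF_FULL_SIM
        for layer in range(1, depth + 1)
    )

for depth in (1, 2, 3):
    demand = root_load(depth, sims_per_layer=10)  # 10 people each start a sim
    status = "fine" if demand <= ROOT_CAPACITY else "hardware is toast"
    print(f"depth {depth}: {demand:.0f}x root capacity -> {status}")
```

Even one extra layer of ten sims already demands ten times the root’s entire budget, and it only compounds from there.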
What am I not getting about this?
Cheers!
xantoxis@lemmy.world 5 months ago
This is the crux of the logical error you made. It’s a common error, but it’s important to recognize here.
If we’re in a simulation, we have no idea what resources are available in the simulation “above” us. Suppose energy density up there is 100x as high as ours? Suppose the subjective experience of the passage of time up there is 100x faster than ours?
Another thing is that we have no idea how long it takes to render each frame of our simulation. Could take a million years. As long as it keeps running though, and as long as the simulation above us is patient, we keep ticking. This is also where the subjective experience of time matters. If it takes a million years, but their subjective “day” is a trillion years long, it becomes feasible to run us for a while.
And, finally, there’s no reason to assume we’re a complete simulation of anything. Perhaps the simulation was instantiated beginning with this morning–but including all memories and documentation of our “historical” past. All that past, all that experience is also fake, but we’d never know that because it’s real to us. In this scenario, the simulation above us only has to simulate one day. Or maybe even just the experiences of one PERSON for one day. Or one minute. Who knows?
The main point is we don’t know what’s happening in the simulation above ours, if it exists, but there’s no reason to assume it’s similar to ours in any way.
Scubus@sh.itjust.works 5 months ago
Quantum mechanics is weird. If we are in a simulation, that would explain a lot of that weirdness, because the quantum effects we see would actually just be lightweight simulations of much deeper mechanics.
As such, if we were simulating a universe, there’s every chance that we may decide to only simulate down to individual atoms. So the people in the simulation would probably discover atoms, but then they would have to come up with their own version of quantum mechanics to describe the effects that we know come from quarks.
The point is that each layer may choose to simulate things at slightly lower fidelity to save on resources, and you would have no way of knowing.
xantoxis@lemmy.world 5 months ago
Indeed, and (interesting corollary) if we accept the concept of reduced-accuracy simulations as axiomatic, then it might be possible to figure out how close we are to the theoretical “bottom” of the simulation stack. There are only so many orders of magnitude, after all; at some point you’re only simulating one pixel wiggling around, and that’s not interesting enough to keep going down.
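As a very loose illustration of that (the specific numbers, and the assumption that each layer gives up a fixed number of orders of magnitude of detail, are pure speculation on my part):

```python
import math

# Pure speculation for illustration: suppose the interesting physics in our
# layer spans from the Planck length (~1.6e-35 m) up to the observable
# universe (~8.8e26 m across), and each nested layer can only afford to
# resolve a few orders of magnitude less detail than its parent.

PLANCK_LENGTH = 1.6e-35        # metres
OBSERVABLE_UNIVERSE = 8.8e26   # metres, rough diameter

span = math.log10(OBSERVABLE_UNIVERSE / PLANCK_LENGTH)
print(f"orders of magnitude to play with: ~{span:.0f}")

for lost_per_layer in (1, 3, 5):
    print(f"losing {lost_per_layer} order(s) of magnitude per layer "
          f"-> at most ~{span / lost_per_layer:.0f} layers below us")
```

Under those made-up assumptions, the downward stack bottoms out after a few dozen layers at most.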
There is not, as far as I know, any way to estimate the length of the stack in the other direction, though.
bunchberry@lemmy.world 5 months ago
I have never understood the argument that QM is evidence for a simulation because the universe is saving resources, or something like that, by not “rendering” things at that low a level. The problem is that, yes, it’s probabilistic, but it is not merely probabilistic. We already have probability in classical mechanics, for example when dealing with gases in statistical mechanics, and we can model that just fine. Modeling wave functions is far more computationally expensive because they do not even exist in traditional spacetime but in an abstract Hilbert space that grows in complexity exponentially faster than classical systems. That’s the whole reason for building quantum computers: it’s so much more computationally expensive to simulate this that it is more efficient just to have a machine that can do it.
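To put a rough number on “exponentially”: the state vector of n entangled qubits has 2^n complex amplitudes, so even just storing it (never mind evolving it) blows up almost immediately. A brute-force sketch, not how serious simulators actually work:

```python
# Rough illustration of Hilbert-space blow-up: memory needed to store the
# full state vector of an n-qubit system, at one complex128 amplitude
# (16 bytes) per basis state.
BYTES_PER_AMPLITUDE = 16

for n in (10, 30, 50, 300):
    amplitudes = 2 ** n
    gib = amplitudes * BYTES_PER_AMPLITUDE / 2 ** 30
    print(f"{n:>3} qubits: 2^{n} amplitudes, ~{gib:.3g} GiB to store")
```

Fifty qubits is already petabytes, while a classical gas with the same number of particles needs only a handful of numbers per particle, which is exactly the asymmetry I mean.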