The theory, which I probably misunderstand because I have a similar level of education to a macaque, states that because a simulated world would eventually develop to the point where it creates its own simulations, it’s then just a matter of probability that we are in a simulation. That is, if there’s one real world, and a zillion simulated ones, it’s more likely that we’re in a simulated world. That’s probably an oversimplification, but it’s the gist I got from listening to people talk about the theory.
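To put that counting argument into rough numbers (my own gloss, figures made up): if there's one real world and N simulated ones, a randomly placed observer has only a 1/(N+1) chance of being in the real one. So for N = a zillion, the odds of being real are about one in a zillion.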

But if the real world sets up a simulated world which more or less perfectly simulates itself, the processing required for a mirror sim-within-a-sim would be at least double, no? How could the infinitely recursive simulations even begin to be set up unless the real meat people are constantly adding more and more hardware to their initial simulation? It would be like that cartoon (or was it a silent movie?) of a guy laying down train track struts while sitting on the cowcatcher of a moving train. Except in this case the train would be moving at close to the speed of light.

Doesn’t this fact alone disprove the entire hypothesis? If I set up a 1:1 simulation of our universe, then just sit back and watch, any attempts by my simulant people to create something that would exhaust all of my hardware would just… not work? Blue screen? Crash the system? Crunching the numbers of a 1:1 sim within a 1:1 sim would not be physically possible for a processor that can just about handle the first simulation. The simulation’s own simulated processors would still need to have their processing done by Meat World, you’re essentially just passing the CPU-buck backwards like it’s a rugby ball until it lands in the lap of the real world.

And this is just if the simulated people create ONE simulation. If 10 people in that one world decide to set up similar simulations simultaneously, the hardware for the entire sim reality would be toast overnight.
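Just to put toy numbers on what I mean (the parameters here are completely made up):

```python
# Toy model: normalize the real world's compute to 1.0 and assume each
# world can spare at most half its own budget for child simulations,
# split across however many sims its inhabitants launch. All of this
# work still ultimately runs on real-world hardware.
REAL_BUDGET = 1.0     # Meat World's total compute, normalized
CHILD_FRACTION = 0.5  # made-up share a world can devote to sims
SIMS_PER_WORLD = 10   # e.g. ten people each start their own sim

per_sim_budget = REAL_BUDGET
for depth in range(1, 5):
    per_sim_budget *= CHILD_FRACTION / SIMS_PER_WORLD
    print(f"depth {depth}: each sim gets {per_sim_budget:.2e} of the real compute")

# depth 1: each sim gets 5.00e-02 of the real compute
# depth 2: each sim gets 2.50e-03 of the real compute
# ...fidelity shrinks geometrically with depth, so no level can ever
# host a true 1:1 copy of itself, let alone ten of them.
```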

What am I not getting about this?

Cheers!

  • deafboy@lemmy.world · 6 months ago

    Fellow macaque here. Not only that, but time doesn’t even run 1:1 between two places in our own universe. Plus, there’s all kinds of quantum fuckery, where we can’t really detect all the properties of a certain particle, or particles act like waves as long as they don’t interact with anything, because… who knows?

    • lmaydev@lemmy.world · 6 months ago

      Particles and waves aren’t actually separate as we were taught in school. They are in reality a third thing with properties of both.

      As for detecting properties, that’s a limit of our technology, not the universe. In order to observe something we currently have to interact with it (e.g. bounce some light off it). It’s possible that in the future we’ll develop techniques that don’t require interaction, like reading the Higgs field directly, for example.

      • bunchberry@lemmy.world · 6 months ago

        If our technology is limited so we can never see beyond something, why even propose it exists? Bell’s theorem also demonstrates that if you do add hidden parameters, they would have to violate Lorentz invariance, meaning they would contradict the predictions of our current best theories of the universe, like GR and QFT. Even as pure speculation it’s rather dubious, as there’s no evidence that Lorentz invariance is ever violated.
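        For concreteness, here’s the textbook arithmetic behind that (standard CHSH numbers, nothing exotic): the quantum prediction for entangled spin pairs exceeds the bound that any local hidden-variable model can reach.

        ```python
        import math

        # Textbook quantum correlation for a spin-singlet pair measured
        # at detector angles a and b: E(a, b) = -cos(a - b)
        def E(a, b):
            return -math.cos(a - b)

        # CHSH combination at the angles that maximize the quantum value
        a1, a2 = 0.0, math.pi / 2              # Alice's two settings
        b1, b2 = math.pi / 4, 3 * math.pi / 4  # Bob's two settings

        S = E(a1, b1) - E(a1, b2) + E(a2, b1) + E(a2, b2)
        print(abs(S))  # ~2.828 (= 2*sqrt(2)); local hidden-variable models
                       # are capped at 2, so hidden parameters would have to
                       # be nonlocal, i.e. in tension with Lorentz invariance
        ```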