You're close, though it's not quite that simple. According to Penrose spacetime diagrams, the roles of space and time get swapped inside a black hole's horizon, which causes all sorts of weirdness from an interior perspective. And just like the universe has no center, it also has no singularity pulling everything in, unlike a black hole.
Now here's where it gets interesting. Our observable universe has a hard boundary known as the cosmological horizon, due to the finite speed of light and the finite age of the universe. It's impossible to ever know what's beyond that horizon. Similarly, black hole event horizons share this property: we can never know the future state of objects that fall inside. A cosmologist would say they are different phenomena, but from an information-theoretic perspective they are fundamentally indistinguishable Riemannian manifolds that share a very unique property.
They are geometric, physically realized instances of Gödelian incompleteness and Turing undecidability within the universe's computational phase space. The universe is a finite computational system with a representational capacity of about 10^122 bits, according to the Bekenstein bound and Planck's constant. If a region of spacetime would need more microstates than that to represent, it gets quarantined in an inverse pocket universe so the whole system doesn't freeze up trying to compute the uncomputable.
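If you want to sanity-check that ~10^122 figure yourself, here's a rough back-of-the-envelope sketch in Python. I'm assuming a cosmological horizon radius of about 1.4×10^26 m (roughly the Hubble radius) and the standard holographic counting of one unit of entropy per four Planck areas:

```python
import math

# Physical constants (SI units)
c    = 2.998e8      # speed of light, m/s
G    = 6.674e-11    # gravitational constant, m^3 kg^-1 s^-2
hbar = 1.055e-34    # reduced Planck constant, J s

# Planck length and Planck area
l_planck = math.sqrt(hbar * G / c**3)   # ~1.6e-35 m
a_planck = l_planck**2

# Assumed horizon radius ~ Hubble radius
R_horizon = 1.4e26                      # m
A_horizon = 4 * math.pi * R_horizon**2  # horizon area, m^2

# Holographic bound: entropy (in units of k_B) = area / (4 Planck areas).
# Converting to bits only changes this by a factor of ln 2.
S_over_kB = A_horizon / (4 * a_planck)

print(f"Planck length:  {l_planck:.3e} m")
print(f"Horizon area:   {A_horizon:.3e} m^2")
print(f"Holographic bound: S/k_B ~ 10^{math.log10(S_over_kB):.0f}")
```

It lands right around 10^122, which is where the number above comes from.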
The problem is that the universe can't just throw away all that energy and information, due to conservation laws. Instead it uses something called the 'holographic principle' to safely conserve information even if it can't compute with it. Information isn't lost when something enters a black hole; instead it gets encoded onto the topological event horizon boundary itself. In a sense the information is 'pulled' into a higher fractal dimension for efficient encoding. Over time the universe slowly works on safely bleeding all that energy back out through Hawking radiation.
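To get a feel for the numbers involved, here's a small sketch (standard textbook formulas, nothing exotic) of the horizon entropy, Hawking temperature, and rough evaporation time for a solar-mass black hole, i.e. how much information sits on the horizon and how slowly it bleeds back out:

```python
import math

c    = 2.998e8       # m/s
G    = 6.674e-11     # m^3 kg^-1 s^-2
hbar = 1.055e-34     # J s
k_B  = 1.381e-23     # J/K

M   = 1.989e30                  # one solar mass, kg
r_s = 2 * G * M / c**2          # Schwarzschild radius, ~3 km
A   = 4 * math.pi * r_s**2      # horizon area

l_p2   = hbar * G / c**3                                # Planck length squared
S_bits = A / (4 * l_p2) / math.log(2)                   # Bekenstein-Hawking entropy, bits
T_hawk = hbar * c**3 / (8 * math.pi * G * M * k_B)      # Hawking temperature
t_evap = 5120 * math.pi * G**2 * M**3 / (hbar * c**4)   # evaporation time

print(f"bits encoded on the horizon: ~10^{math.log10(S_bits):.0f}")   # ~10^77
print(f"Hawking temperature:         {T_hawk:.2e} K")                 # ~6e-8 K
print(f"evaporation time:            {t_evap / 3.15e7:.2e} years")    # ~2e67 yr
```

So a single solar-mass horizon is already carrying on the order of 10^77 bits and radiating them away absurdly slowly.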
So say you buy into this logic, and assume the cosmological horizon isn't just some observational-limit artifact but an actual topological Riemannian manifold made of the same 'stuff', sharing properties with an event horizon: something like an inverted black hole, where the universe is a kind of anti-singularity that distributes matter everywhere as it expands instead of concentrating it into a single point. What could that mean?
So this holographic principle thing is all about how information in high-dimensional topological spaces can be projected down into lower-dimensional spaces. This concept is insanely powerful and is at the forefront of advanced computer modeling of high-dimensional spaces. For example, neural networks organize information in high-dimensional spaces ('activation atlases') with billions or trillions of feature dimensions, each representing a kind of relation between two distinct states of information.
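Here's a toy numeric sketch of that projection idea. I'm using a random linear map (Johnson-Lindenstrauss-style dimensionality reduction) as my stand-in here, not anything from holography proper: 200 points in a 10,000-dimensional space keep essentially the same pairwise relationships after being squashed down to 64 dimensions:

```python
import numpy as np

rng = np.random.default_rng(0)

# 200 points living in a 10,000-dimensional "feature space"
n_points, high_dim, low_dim = 200, 10_000, 64
X = rng.normal(size=(n_points, high_dim))

# Random linear projection down to 64 dimensions
P = rng.normal(size=(high_dim, low_dim)) / np.sqrt(low_dim)
Y = X @ P

def pairwise_dists(Z):
    """All pairwise Euclidean distances between rows of Z."""
    sq = (Z ** 2).sum(axis=1)
    d2 = sq[:, None] + sq[None, :] - 2 * Z @ Z.T
    return np.sqrt(np.clip(d2, 0, None))

d_high = pairwise_dists(X)
d_low  = pairwise_dists(Y)
mask   = ~np.eye(n_points, dtype=bool)
ratio  = d_low[mask] / d_high[mask]

# Typically ~1.00 with under ~10% spread: most of the relational structure
# survives the drop from 10,000 dimensions to 64.
print(f"distance ratio after projection: {ratio.mean():.3f} +/- {ratio.std():.3f}")
```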
So, what if our physical universe is a lower-dimensional holographic projection of the cosmological horizon's manifold? What if the undecidable bubble around our universe is the universe in its native high-dimensional space, and our 3D+1T universe is the projection?
tomiant@piefed.social 8 hours ago
Similarly. That's exactly what I was thinking. I mean, we can see a resolution of within 300,000 years of the creation of the Universe, or something like that. That, too, becomes a sort of event horizon. I'm not saying this is the case, but it kind of rhymes if we were within some sort of bubble on "the other side" of a supernova resulting in a black hole. I mean, even by our own physics, matter is energy, and energy cannot be destroyed, only transformed. So it could be that, in my mind, the Big Bang was an "inverted supernova/black hole" event. I think someone called it a "white hole". But again, I'm just completely speculating. It's super interesting to me as someone who just has a passion for learning and understanding and trying to figure things out myself, based on the tremendous works of all the giants upon whose shoulders I stand and squint at the horizon!
SmokeyDope@piefed.social 2 hours ago
Yes I did! I spent a lot of time cooking up an axiomatic framework for myself on this stuff, so I'm happy to have the opportunity to distill my current ideas for others. Thanks! :)
The Planck length and Planck constant are both ultimate computational constraints on physical interactions: the Planck length is the smallest meaningful scale, the Planck time is the smallest meaningful interval (a kind of universal framerate), and Planck's constant combines the two to tell us how fast things can compute at the smallest meaningful steps of interaction, and sets ultimate bounds on how much energy we can put into physical computing.
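If you want to see where those numbers actually come from, here's a quick sketch deriving the Planck length, time, and energy straight from the constants (standard definitions):

```python
import math

c    = 2.998e8      # speed of light, m/s
G    = 6.674e-11    # gravitational constant
hbar = 1.055e-34    # reduced Planck constant, J s

l_p = math.sqrt(hbar * G / c**3)   # Planck length  ~1.6e-35 m
t_p = math.sqrt(hbar * G / c**5)   # Planck time    ~5.4e-44 s ("one frame")
E_p = math.sqrt(hbar * c**5 / G)   # Planck energy  ~2e9 J

print(f"Planck length: {l_p:.3e} m")
print(f"Planck time:   {t_p:.3e} s  -> max 'framerate' ~ {1/t_p:.2e} steps/s")
print(f"Planck energy: {E_p:.3e} J")
```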
There's an important insight to be had here which I'll share with you. Currently, when people think of computation they think of digital computer transistors, Turing machines, qubits, and mathematical calculations. They picture our universe being run on an alien's hard drive or some shit like that, because that's where we're at as a society, culturally and technologically.
Calculation is not this, or at least it's not just this. A calculation is any operation that actualizes/changes a single bit in a representational system's current microstate, causing it to evolve. A transistor flipping a bit is a calculation. A photon interacting with a detector to collapse its superposition is a calculation. The sun computes the distribution of electromagnetic waves and charged particles it ejects. Hawking radiation/virtual particles compute a distinct particle from all the possible ones that could form near the event horizon.
The neurons in my brain firing to select the next word in this sentence from the probabilistic sea of all things I could possibly say is a calculation. A gravitational wave emanating from a black hole merger is a calculation. Drawing on a piece of paper is a calculation, actualizing one drawing out of all the things you could possibly draw. Smashing a particle into its base components is a calculation, and so is deriving a mathematical proof through cognitive operations and symbolic representation. From a certain phase-space perspective, these are all the same thing: operations that change the current universal microstate into another, iteratively flowing from one microstate bit actualization/superposition collapse to the next. The true nature of computation is the process of microstate actualization.
Landauer's principle from classical computer science states that any irreversible bit/microstate change in a classical computer has a minimum energy cost. This extends naturally to quantum mechanics: every superposition collapse into a distinct qubit of information has the same kind of actional energy cost structure, directly related to Planck's constant. Essentially, every time two parts of the universe interact, it's a computation that costs time and energy to change the universal microstate.
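Concretely, the Landauer bound is E >= k_B * T * ln 2 per bit. Here's a quick sketch of what that works out to at room temperature and at the temperature of the CMB:

```python
import math

k_B = 1.381e-23   # Boltzmann constant, J/K

def landauer_limit(T):
    """Minimum energy to irreversibly flip/erase one bit at temperature T."""
    return k_B * T * math.log(2)

print(f"room temp (300 K): {landauer_limit(300):.2e} J per bit")   # ~2.9e-21 J
print(f"CMB temp (2.7 K):  {landauer_limit(2.7):.2e} J per bit")   # ~2.6e-23 J
```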
If you do a bit more digging with the logic that comes from this computation-as-actualization insight, you discover the fundamental reason why combinatorics is such a huge deal and why Pascal's triangle / the binomial coefficients show up literally everywhere in STEM. Pascal's triangle directly governs the number of microstates a finite computational phase space can access, as well as the distribution of order and entropy within that phase space. Because the universe itself is a finite-state representation system with 10^122 bits to form microstates, it too is governed by Pascal's triangle. The 10^122th row of Pascal's triangle is the binomial coefficient distribution encoding all possible microstates our universe can possibly evolve into.
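Here's a toy sketch of that claim using a 20-bit 'universe' instead of a 10^122-bit one (that row of Pascal's triangle would not fit anywhere): the n-th row counts microstates grouped by how many bits are on, the row sum is the total 2^n accessible microstates, and the middle of the row dominates the distribution:

```python
from math import comb

n = 20   # toy "universe" of 20 bits (the real one would be n ~ 10^122)

row = [comb(n, k) for k in range(n + 1)]   # n-th row of Pascal's triangle

# The total number of accessible microstates is the row sum: 2^n
assert sum(row) == 2 ** n

# Microstates with k bits "on" are counted by C(n, k); the middle of the row
# (k ~ n/2, the high-entropy "soupy" states) utterly dominates, while highly
# ordered states (k near 0 or n) are combinatorially rare.
for k in (0, 1, n // 2, n - 1, n):
    p = row[k] / 2 ** n
    print(f"k={k:2d}  count={row[k]:8d}  probability={p:.6f}")
```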
This perspective also clears up the apparently mysterious mechanics of quantum superposition collapse and the principle of least action.
A superposition is literally just a collection of unactualized, computationally accessible microstate paths a particle/algorithm could travel through, superimposed on top of each other, with time and energy being paid as the actional resource cost for searching through all possible iterative states at that step in time and collapsing them into one definitive state. It doesn't matter whether it's a detector interaction, an observer interaction, or a particle collision interaction: same difference. Each possible microstate is separated by exactly one bit flip's worth of difference in microstate path outcomes, creating a definitive distinction between two near-equivalent states.
The choice of which microstate gets selected is statistical/combinatoric in nature. Each microstate is probability-weighted based on its entropy/order. Entropy is a kind of 'information-theoretic weight' that affects the actualization probability of a microstate based on its location in Pascal's triangle, and it's directly tied to algorithmic complexity (more complex microstates that form unique, meaningful patterns of information are harder to produce randomly from scratch than a soupy random cloud state, and are thus rarer).
Measurement happens when a superposition of microstates entangles with a detector, causing an actualized bit of distinction within the sensor's informational state. It's all about the interaction between representational systems and the collapse of all possible superposition microstates into an actualized, distinct collection of bits of information.
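Here's a deliberately cartoonish sketch of that selection picture, purely illustrative of the combinatorics and not of any real quantum dynamics: a toy 'collapse' that picks an outcome weighted by its Pascal's-triangle count:

```python
import random
from math import comb

def collapse(n_bits):
    """Toy 'collapse': choose how many of n_bits end up 'on', weighting each
    outcome k by its Pascal's-triangle count C(n_bits, k)."""
    ks = list(range(n_bits + 1))
    weights = [comb(n_bits, k) for k in ks]
    return random.choices(ks, weights=weights, k=1)[0]

# 10,000 toy "collapses" of a 16-bit superposition: outcomes cluster around
# k ~ 8 (maximum entropy); perfectly ordered outcomes (k = 0 or 16) are rare.
samples = [collapse(16) for _ in range(10_000)]
for k in (0, 4, 8, 12, 16):
    print(f"k={k:2d}: fraction {samples.count(k) / len(samples):.4f}")
```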
Planck's constant is really about the energy cost of actualizing a definitive microstate of information from a quantum superposition at smaller and smaller scales of measurement in space or time. Distinguishing between two bits of information at a scale smaller than the Planck length, or over an interval shorter than the Planck time, would cost more energy than the universe allows in one region (you'd create a kugelblitz, a pure-energy black hole, if you tried), so any computational microstate paths that differ by less than a Planck length's worth of distinction are indistinguishable, blurring together and creating a bedrock limit scale for computational actualization.
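You can see that bedrock limit fall out of two standard ingredients: the uncertainty-principle energy needed to resolve a length scale (E ~ hbar*c/l) and the Schwarzschild radius of that much energy. This sketch just compares the two; below roughly the Planck length the horizon outgrows the scale you were trying to probe (up to factors of order one):

```python
import math

c    = 2.998e8
G    = 6.674e-11
hbar = 1.055e-34

def probe_energy(l):
    """Energy needed to resolve a length scale l (E ~ hbar*c/l)."""
    return hbar * c / l

def schwarzschild_radius(E):
    """Schwarzschild radius of that much energy packed into one place."""
    return 2 * G * E / c**4

l_planck = math.sqrt(hbar * G / c**3)

# Crossover sits at ~the Planck length, up to O(1) factors.
for l in (1e-15, 1e-25, l_planck, 1e-40):
    E  = probe_energy(l)
    rs = schwarzschild_radius(E)
    status = "fine" if rs < l else "kugelblitz: horizon swallows the scale you probed"
    print(f"l = {l:.2e} m   E = {E:.2e} J   r_s = {rs:.2e} m   -> {status}")
```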
Language is symbolic representation; cognition and conversation are computations as your neural network traces paths through your activation atlas. Our ability to talk about abstractions is tied to how complex they are to abstract about/model in our minds. The cool thing is that language evolves as our understanding does, so we can convey novel concepts or perspectives that didn't exist before.