pcalau12i
@pcalau12i@lemmygrad.ml
- Comment on It's not supposed to make sense... 1 week ago:
I think it’s boring honestly. It’s a bit strange how, like, the overwhelming majority of people either avoid interpreting quantum theory at all (“shut up and calculate”) or use it specifically as a springboard to justify either sci-fi nonsense (multiverses) or straight-up mystical nonsense (consciousness-induced collapse). Meanwhile, every time there is a supposed “paradox” or “no-go theorem” showing you can’t have a relatively simple explanation for something, someone in the literature publishes a paper showing it’s false, and then only the paper showing how “weird” QM is gets media attention. I always find myself on the most extreme fringe of the fringe in thinking both that (1) we should try to interpret QM, and (2) we should be extremely conservative about our interpretation so we don’t give up classical intuitions unless we absolutely have to. That seems to be considered an extremist fringe position these days.
- Comment on It's not supposed to make sense... 1 week ago:
The double-slit experiment doesn’t even require quantum mechanics. It can be explained classically and intuitively.
It is helpful to think of a simpler case, the Mach-Zehnder interferometer, since it demonstrates the same effect but where space is discretized to just two possible paths the particle can take and end up in, and so the path/position is typically described with just a single qubit of information: |0⟩ and |1⟩.
You can explain this entirely classically if you stop thinking of photons as independent objects and instead think of them as specific values propagating in a field, what are sometimes called modes. If you go to measure a photon and your measuring device registers a |1⟩, this is often interpreted as having detected the photon, and if it registers a |0⟩, as not having detected one. But if photons are just modes in a field, then |0⟩ does not mean you registered nothing; it means you did indeed measure the field, and the field just so happens to have a value of |0⟩ at that location.
Since fields are all-permeating, then describing two possible positions with |0⟩ and |1⟩ is misleading because there would be two modes in both possible positions, and each independently could have a value of |0⟩ or |1⟩, so it would be more accurate to describe the setup with two qubits worth of information, |00⟩, |01⟩, |10⟩, and |11⟩, which would represent a photon being on neither path, one path, the other path, or both paths (which indeed is physically possible in the real-world experiment).
When systems are described with |0⟩ or |1⟩, that is to say, 1 qubit worth of information, that doesn’t mean they contain 1 bit of information. They actually contain as many as 3, since there are other bit values on orthogonal axes. You then find that the physical interaction between your measuring device and the mode perturbs the values on those orthogonal axes as information propagates through the system, and this alters the outcome of the experiment.
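If it helps to make that concrete, here is a minimal numpy sketch of the Mach-Zehnder case under the picture described above: the path is a single qubit, the 50/50 beam splitters are modeled as Hadamards, and the which-path probe is modeled crudely as wiping out the off-diagonal (X/Y) components of the state while leaving the Z value alone. The function names and the dephasing shortcut are my own illustration, not taken from the papers linked further down:

```python
import numpy as np

# Path qubit: |0> = upper arm, |1> = lower arm; a 50/50 beam splitter acts like a Hadamard
H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)

def detector0_probability(phase, which_path_probe=False):
    rho = np.array([[1, 0], [0, 0]], dtype=complex)   # photon enters the upper arm
    rho = H @ rho @ H.conj().T                        # first beam splitter
    P = np.diag([1, np.exp(1j * phase)])              # relative phase between the two arms
    rho = P @ rho @ P.conj().T
    if which_path_probe:
        # crude which-path measurement: keep the Z (which-arm) value, erase the X/Y components
        rho = np.diag(np.diag(rho))
    rho = H @ rho @ H.conj().T                        # second beam splitter
    return rho[0, 0].real                             # probability of exiting at detector 0

for phi in (0, np.pi / 2, np.pi):
    print(phi, detector0_probability(phi), detector0_probability(phi, which_path_probe=True))
# no probe:   ~1.0, 0.5, ~0.0  (the detector probability swings with the phase: interference)
# with probe:  0.5, 0.5, 0.5   (pinned at 50/50: interference gone, yet Z was never touched)
```

The probe never disturbs the |0⟩/|1⟩ value you actually read out; it only perturbs the orthogonal components, and that alone is what removes the interference.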
You can interpret the double-slit experiment in the exact same way, but the math gets a bit more hairy because it deals with continuous position, but the ultimate concept is the same.
A measurement is a kind of physical interaction, and all physical interactions have to be specified by an operator, and not all operators are physically valid. Quantum theory simply doesn’t allow you to construct a physically valid operator whereby one system could interact with another to record its properties in a non-perturbing fashion. Any operator you construct to record one of its properties without perturbing it must necessarily perturb its other properties. Specifically, it perturbs any other property within the same noncommuting group.
When the modes propagate from the two slits, your measurement of their position disturbs their momentum, and this random perturbation causes the momenta of the modes that were in phase with each other to no longer be in phase. You can imagine two random strings whose values you don’t know, but which you know are correlated with each other: whatever the values of the first one happen to be, they’d be correlated with the second. But then you randomly perturb one of them, randomly redistributing its values, and now they’re no longer correlated, so when they come together and interact, they interact with each other differently.
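As a purely classical illustration of that correlated-strings point (my own toy example, not the formalism of the paper linked in the next paragraph): two field modes with correlated random phases produce fringes when they recombine, and scrambling one of the phases washes the fringes out.

```python
import numpy as np

rng = np.random.default_rng(0)

def average_intensity(phase_difference, perturbed=False, trials=10_000):
    theta1 = rng.uniform(0, 2 * np.pi, trials)        # random phase of the first mode
    theta2 = theta1 + phase_difference                 # second mode stays correlated with the first
    if perturbed:
        theta2 = rng.uniform(0, 2 * np.pi, trials)     # random perturbation destroys the correlation
    return np.mean(np.abs(np.exp(1j * theta1) + np.exp(1j * theta2)) ** 2)

for phi in (0, np.pi / 2, np.pi):
    print(phi, average_intensity(phi), average_intensity(phi, perturbed=True))
# correlated:  intensity swings from 4 down to 0 as the phase difference varies (fringes)
# perturbed:   intensity sits near 2 regardless of the phase difference (fringes washed out)
```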
There’s a paper on this here and also a lecture on this here. You don’t have to go beyond the visualization or even mathematics of classical fields to understand the double-slit experiment.
- Comment on It's not supposed to make sense... 1 week ago:
Why interpret it as either? The double-slit experiment can be given an entirely classical explanation. Such extravagances are not necessary.
- Comment on It's not supposed to make sense... 1 week ago:
My impression from the literature is that superdeterminism is not the position of rejecting an asymmetrical arrow of time. In fact, it tries to build a model that can explain violations of Bell inequalities completely from the initial conditions evolved forwards in time.
Let’s imagine you draw coins from box A and they’re random, and you draw coins from box B and they’re random, but you find a peculiar feature where, if you switch from A to B, the first coin you draw from B is always the same as the last one you drew from A, and then it goes back to being random. You repeat this many times and it always seems to hold. How is that possible if they’re independent of each other?
Technically, no matter how many coins you draw, the probability of it occurring just by random chance is never zero. It might get really really low, but it’s not zero. A very specific initial configuration of the coins could reproduce that.
Superdeterminism is just the idea that there are certain laws of physics that restrict the initial configurations of particles at the very beginning of the universe, the Big Bang, to guarantee their evolution would always maintain certain correlations that allow them to violate Bell inequalities. It’s not really an interpretation because it requires you to posit these laws, and so it really becomes a new theory since you have to introduce new postulates, but such a theory would in principle then allow you to evolve the system forwards from its initial conditions in time to explain every experimental outcome.
As a side note, you can trivially explain violations of Bell inequalities in local realist terms without even introducing anything new to quantum theory just by abandoning the assumption of time-asymmetry. This is called the Two-State Vector Formalism and it’s been well-established in the literature for decades. If A causes B and B causes C, in the time-reverse, C causes B and B causes A. If you treat both as physically real, then B would have enough constraints placed upon it by A and C taken together (by evolving the wave function from both ends to where they meet at B) to violate Bell inequalities.
That’s already pretty much a feature built into quantum theory and allows you to interpret it in local realist terms if you’d like, but it requires you to accept that the microscopic world is genuinely indifferent to the arrow of time and that the time-forwards and the time-reversed evolution of a system are both physically real.
However, this time-symmetric view is not superdeterminism. Superdeterminism is time-asymmetric just like most every other viewpoint (Copenhagen, MWI, pilot wave, objective collapse, etc). Causality goes in one temporal direction and not the other. The time-symmetric interpretation is its own thing and is mathematically equivalent to quantum mechanics so it is an actual interpretation and not another theory.
- Comment on It's not supposed to make sense... 1 week ago:
The problem with pilot wave is that it’s non-local, and so it contradicts special relativity and cannot be made directly compatible with the predictions of quantum field theory. The only way to make it compatible would be to throw out special relativity and write a whole new theory of spacetime with a preferred foliation built in that could reproduce the same predictions as special relativity, and so you end up basically having to rewrite all of physics from the ground up.
I also disagree that it’s intuitive. It’s intuitive when we’re talking about the trajectories of particles, but all its intuition disappears when we talk about any other property at all, like spin. You don’t even get a visualization of what’s going on at all when dealing with quantum circuits.
Personally, I find the most intuitive interpretation to be a modification of the Two-State Vector Formalism where you replace the two state vectors with two vectors of expectation values. This gives you a very unambiguous and concrete picture of what’s going on. Due to the uncertainty principle, you always start with limited information on the system; you build out a list of expectation values assigned to each observable, and then take into account how those will swap around as the system evolves (for example, if you know X=+1 but don’t know Y, and an interaction has the effect of swapping X with Y, then now you know Y=+1 and don’t know X).
This alone is sufficient to reproduce all of quantum mechanics, but it still doesn’t explain violations of Bell inequalities. You explain that by just introducing a second vector of expectation values to describe the final state of the system and evolve it backwards in time. This applies sufficient constraints on the system to explain violations of Bell inequalities in local realist terms, without having to introduce anything to the theory and with a largely classical picture.
- Comment on It's not supposed to make sense... 1 week ago:
Quantum mechanics becomes massively simpler to interpret once you recognize that the wave function is just a compressed list of expectation values for the observables of a system. An expectation value is like a weighted probability. They can be negative because the measured values can be negative; for qubits, for example, the measured values can be either +1 or -1, and if you weight by -1 then the expectation value can become negative. For example, an expectation value of -0.5 means there is a 25% chance of +1 and a 75% chance of -1.
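Written out, for a ±1-valued observable the expectation value and the outcome probabilities carry the same information:

```latex
\langle A \rangle = (+1)\,p(+1) + (-1)\,p(-1), \qquad p(\pm 1) = \frac{1 \pm \langle A \rangle}{2}
```

so ⟨A⟩ = −0.5 gives p(+1) = 0.25 and p(−1) = 0.75, as in the example.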
It’s like, if I know for certain that X=+1 but I have no idea what Y is, and the physical system interacts with something that we know will have the effect of swapping its X and Y components around, then this would also swap my uncertainty around, so now I would know that Y=+1 without knowing what X is. Hence, if you don’t know the complete initial conditions of a system, you can represent it with a list of all possible observables and assign each one an expectation value related to your certainty of measuring that value, and then compute how that certainty is shifted around as the system evolves.
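Here is a minimal numpy sketch of that swap. The particular unitary I use, (X+Y)/√2, is just one concrete example of an interaction whose effect is to exchange the X and Y observables; any interaction with that effect would do:

```python
import numpy as np

X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]])
Z = np.diag([1, -1]).astype(complex)

def expectations(psi):
    return {name: round((psi.conj() @ P @ psi).real, 3) for name, P in (("X", X), ("Y", Y), ("Z", Z))}

plus = np.array([1, 1], dtype=complex) / np.sqrt(2)   # X = +1 for certain; Y and Z unknown
print(expectations(plus))                             # X comes out 1.0 (certain), Y and Z come out 0.0

U = (X + Y) / np.sqrt(2)                              # a unitary whose effect is to swap X and Y
print(expectations(U @ plus))                         # now Y is 1.0 and X is 0.0: the certainty swapped
```

The certainty you had about X has simply been carried over to Y; nothing else in the description changed.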
The wave function then just becomes a compressed form of this. For qubits, the expectation value vector grows at a rate of 4^N where N is the number of qubits, but the uncertainty principle limits the total bits of information you can have at a single time to 2^N, so the vector is usually mostly empty (a lot of zeros). This allows you to mathematically compress it down to a wave function that also grows by 2^N, making it the most concise way to represent this.
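A quick way to see the 4^N versus 2^N point is to take a concrete 2-qubit state (a Bell state below, purely as an example) and expand its 4 complex amplitudes into the full list of 4² = 16 Pauli expectation values; most of the slots come out zero:

```python
import numpy as np
from itertools import product

paulis = {
    "I": np.eye(2, dtype=complex),
    "X": np.array([[0, 1], [1, 0]], dtype=complex),
    "Y": np.array([[0, -1j], [1j, 0]]),
    "Z": np.diag([1, -1]).astype(complex),
}

psi = np.array([1, 0, 0, 1], dtype=complex) / np.sqrt(2)   # example state: (|00> + |11>) / sqrt(2)

# expand the wave function (2^N complex amplitudes) into all 4^N expectation values
for a, b in product(paulis, repeat=2):
    value = (psi.conj() @ np.kron(paulis[a], paulis[b]) @ psi).real
    if abs(value) > 1e-12:
        print(a + b, round(value, 3))
# only II, XX, YY, ZZ are nonzero (1.0, 1.0, -1.0, 1.0); the other 12 slots are empty
```

Only a handful of entries survive, which is the compressibility being described: the wave function is a compact encoding of a mostly-empty expectation value list.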
But the notation often confuses people: they think it means particles are in two places at once, that qubits are 0 and 1 at the same time, that there is some “collapse” that happens when you make a measurement, and they frequently ask what the imaginary components mean. But all this confusion just stems from notation. Any wave function can be expanded into a real-valued list of expectation values, and you can evolve that through the system rather than the wave function and compute the same results, and then the confusion over what it represents disappears.
When you write it out in this expanded form, it’s also clear why the uncertainty principle exists in the first place. A measurement is a kind of physical interaction between a record-keeping system and the recorded system, and it should result in information from the recorded system being copied onto the record-keeping system. Physical interactions are described by an operator, and quantum theory has certain restrictions on what qualifies as a physically valid operator: it has to be time-reversible, preserve handedness, be completely positive, etc, and these restrictions prevent you from constructing an operator that can copy the value of an observable from one system onto another in a way that doesn’t perturb its other observables.
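As a concrete instance of that restriction (a standard toy model, not anything specific to the argument here): a CNOT is about the simplest operator that copies one qubit’s Z value onto a blank “record” qubit, and doing so necessarily wipes out the first qubit’s X statistics.

```python
import numpy as np

X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.diag([1, -1]).astype(complex)
I2 = np.eye(2, dtype=complex)

# CNOT: copies the Z-basis value of the first (recorded) qubit onto the second (record-keeping) qubit
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]], dtype=complex)

plus = np.array([1, 1], dtype=complex) / np.sqrt(2)   # recorded qubit prepared with X = +1
zero = np.array([1, 0], dtype=complex)                # blank record qubit
psi = CNOT @ np.kron(plus, zero)

def expect(op, state):
    return round((state.conj() @ op @ state).real, 3)

print(expect(np.kron(Z, I2), psi))   # <Z> of the recorded qubit: 0 before the copy, still 0 after
print(expect(np.kron(X, I2), psi))   # <X> of the recorded qubit: was +1 before the copy, now 0
```

Recording the Z value left the Z statistics untouched but scrambled X, which is exactly the trade-off being described.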
Most things in quantum theory that are considered “weird” are just misunderstandings, some of which can even be reproduced classically. Things like double-slit, Mach–Zehnder interferometer, the Elitzur–Vaidman “paradox,” the Wigner’s friend “paradox,” the Schrodinger’s cat “paradox,” the Deutsch algorithm, quantum encryption and key distribution, quantum superdense coding, etc, can all be explained entirely classically just by clearing up some confusion about the notation.
This narrows it down to only a small number of things that genuinely raise an eyebrow, those being cases that exhibit what is sometimes called quantum contextuality, such as violations of Bell inequalities. These cases do inherently require a non-classical explanation, but I don’t think that also means the explanation can’t be something understandable.
The simplest explanation I have found in the literature is that of time-symmetry. It is a requirement in quantum mechanics that every operator is time-symmetric, and that famously leads to the problem of establishing an arrow of time in quantum theory. Rather than taking it to be a problem, we can instead presume that there is a good reason nature demands all its microscopic operators be time-symmetric: because the arrow of time is a macroscopic phenomenon, not a microscopic one.
If you have a set of interactions between microscopic particles where A causes B and B causes C, if I played the video in the reverse, it is mathematically just as valid to say that C causes B and B causes A. Most people then introduce an additional postulate that says “even though it is mathematically valid, it’s not physically valid, we should only take the evolution of the system in a single direction of time seriously.” You can’t derive that postulate from quantum theory, you just have to take it on faith.
If we drop that postulate and take the local evolution of the system seriously in both its time-forwards evolution and its time-reversed evolution, then you can explain violations of Bell inequalities without having to add anything to the theory at all, and interpret it completely in intuitive local realist terms. You do this using the Two-State Vector Formalism where all you do is compute the evolution of the wave function (or expectation values) from both ends until they meet at an intermediate point, and that gives you enough constraints to deterministically derive a weak value at that point. The weak value is a physical variable that evolves locally and deterministically with the system and contains sufficient information to determine its expectation values.
You still can’t always assign a definite value, but these expectation values are epistemic; there is no contradiction with there being a definite value, since the weak value contains all the information needed for the correct expectation values, and therefore the correct probability distribution, locally within the particle.
In terms of computation, it’s very simple: for the time-reversed evolution you just treat the final state as the initial state and apply the operators in reverse with their time-symmetric equivalents (the Hermitian transpose). The weak value equation then looks exactly like the expectation value equation, except that rather than having the same wave function on both ends of the observable, you have the reverse-evolved wave function on one end of the observable and the forwards-evolved wave function on the other.
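In symbols, this is the standard weak-value expression from the TSVF literature, written out just to make the comparison concrete (|ψ⟩ is the forwards-evolved state, ⟨φ| the backwards-evolved, post-selected one):

```latex
\langle A \rangle = \langle \psi | A | \psi \rangle
\qquad \longrightarrow \qquad
A_w = \frac{\langle \phi | A | \psi \rangle}{\langle \phi | \psi \rangle}
```

The only extra ingredient beyond swapping one of the ψ’s for φ is the ⟨φ|ψ⟩ normalization in the denominator.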
Nothing about this is hard to visualize, because you just imagine playing a movie forwards and also playing it in reverse, and in both directions you get a local causal chain of interactions between the particles. If A causes B and B causes C in the time-forwards movie, then playing the movie in reverse you will see C cause B which then causes A. That means B is caused by both A and C, and thus is influenced by both through a local chain of interactions. There is nothing “special” going on in the backwards evolution; the laws of physics are symmetrical, so visually it is not distinguishable from the forwards evolution, and you visualize it the exact same way.
That is enough to explain QM in local realist terms, and it has been well-established in the literature for decades, but people often seem to favor explanations that are impossible to visualize, like treating the wave function as a literal object despite it being, at times, infinite-dimensional, or even believing we all live in an infinite-dimensional multiverse.
- Comment on observer 👀 observed quantum state 2 weeks ago:
Well, first, that is not something that actually happens in the real world but is a misunderstanding. Particles diffract like a wave from a slit due to the uncertainty principle, because their position is confined to the narrow slit so their momentum must probabilistically spread out. If you have two slits where they have a probability of entering one slit or the other, then you will have two probabilistic diffraction trajectories propagating from each slit which will overlap with each other.
Measuring the slit the photon passes through does not make it behave like a particle. Its probabilistic trajectory still diffracts out of both slits, and you will still get a smeared-out diffraction pattern like a wave. The diagrams that show two neat, cleanly separated blobs have never been observed in real life and are just a myth. The only difference between whether or not you’re making a measurement is whether the two diffraction trajectories interfere with one another, and that interference gives you the black bands.
This is an interference-based experiment. Interference-based phenomena can all be given entirely classical explanations without even resorting to anything nonclassical. The paper “Why interference phenomena do not capture the essence of quantum theory” is a good discussion on this. There is also a presentation on it here.
Basically, you (1) treat particles as values that propagate in a field. Not waves that propagate through a field, just values in a field like any classical field theory. Classical fields are indeed something that can take multiple paths simultaneously. (2) We assume that the particles really do have well-defined values for all of their observables at once, even if the uncertainty principle disallows us from knowing them all simultaneously. We can mathematically prove from that assumption that it would be impossible to construct a measuring device that simply passively measures a system; it will always perturb the values it is not measuring in an unpredictable way.
A classical field has values everywhere. That’s basically what a field is: you assign a value, in this case a vector, to every point in space and time. The vector holds the properties of the particles. For example, the X, Y, and Z observables would be stored in a vector [X, Y, Z] with a vector value at any point. What the measuring device measures is |0> or |1>, where we interpret the former to mean no photon is there and the latter to mean a photon is there. But if you know anything about quantum information science, you know that |0> just means Z=+1 and |1> just means Z=-1. Hence, if you measure |0>, it doesn’t tell you anything about the X and Y values, which we would assume are also there if particles are excitations in a field as given by assumption #1, because the field exists everywhere; and in fact, from our other assumption #2, your measurement of its Z value to be |0> must perturb those X and Y values.
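(For reference, that identification is just the eigenvalue statement: |0> and |1> are the Z = +1 and Z = -1 eigenstates.)

```python
import numpy as np

Z = np.diag([1, -1])
zero, one = np.array([1, 0]), np.array([0, 1])
print(Z @ zero, Z @ one)   # Z|0> = +1 |0>,  Z|1> = -1 |1>
```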
It would be the field that propagates information through both slits, and the presence of the measurement device perturbs the observables you do not measure, causing them to become out of phase with one another so that they do not interfere when the field values overlap.
- Comment on observer 👀 observed quantum state 2 weeks ago:
Quantum mechanics is not complicated. It just appears complicated because everyone chooses to interpret it in a way that is inherently contradictory. One of the fundamental postulates of quantum mechanics is that it is time-symmetric, called unitarity, but almost everyone for some reason assumes it is time-asymmetric. This leads them to have to compartmentalize the contradiction in their head, which then leads to a bunch of contradictory conclusions, and then they invent a bunch of nonsense to try and make sense of those contradictions, like collapsing wave functions, a multiverse, cats that are both dead and alive simultaneously, particles in two places at once, nonlocality, etc. But that’s all entirely unnecessary if you just consistently interpret the theory as time-symmetric. This has been shown in the literature for decades, called the Two-State Vector Formalism, yet it’s almost entirely ignored in the popular discourse for some reason.
- Comment on ETERNAL TORMENT 1 month ago:
There are no “paradoxes of quantum mechanics.” QM is a perfectly internally consistent theory. Most so-called “paradoxes” are just caused by people not understanding it.
QM is both probabilistic and, in its own and very unique way, relative. Probability on its own isn’t confusing: if the world were just fundamentally random, you could still describe it in the language of classical probability theory and it wouldn’t be that difficult. If it were just relative, it could still be a bit of a mind-bender, like special relativity with its own faux paradoxes (like the twin “paradox”) that people struggle with, but ultimately people digest it and move on.
But QM is probabilistic and relative, and for most people this becomes very confusing, because it means a particle can take on a physical value in one perspective while not having taken on a physical value in another (what is called the relativity of facts in the literature), and not only that, but because it’s fundamentally random, if you apply a transformation to try to mathematically place yourself in another perspective, you don’t get definite values but only probabilistic ones, albeit not in a superposition of states.
For example, the famous “Wigner’s friend paradox” claims there is a “paradox” because you can set up an experiment whereby Wigner’s friend would assign a particle a real physical value whereas Wigner would be unable to from his perspective and would have to assign an entangled superposition of states to both his friend and the particle taken together, which has no clear physical meaning.
However, what the supposed “paradox” misses is that it’s not paradoxical at all, it’s just relative. Wigner can apply a transformation in Hilbert space to compute the perspective of his friend, and what he would get out of that is a description of the particle that is probabilistic but not in a superposition of states. It’s still random because nature is fundamentally random, so he cannot predict what his friend would see with absolute certainty, but he can predict it probabilistically, and this prediction is not a superposition of states but what’s called a maximally mixed state, which is basically a classical probability distribution.
But you only get those classical distributions after applying the transformation to the correct perspective where such a distribution is to be found, i.e. what the mathematics of the theory literally implies is that only under some perspectives (defined in terms of any physical system at all, kind of like a frame of reference, nothing to do with human observers) are the physical properties of the system actually realized, while under some other perspectives, the properties just aren’t physically there.
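One way to make the “maximally mixed state” claim concrete (this is just the textbook partial-trace calculation, which may or may not be the exact transformation meant above): take Wigner’s entangled description of friend + particle and trace out the friend; what is left for the particle is an ordinary 50/50 probability distribution, not a superposition.

```python
import numpy as np

# Wigner's description after the friend's measurement: friend and particle entangled
psi = np.array([1, 0, 0, 1], dtype=complex) / np.sqrt(2)    # (|0>_F |0>_P + |1>_F |1>_P) / sqrt(2)
rho = np.outer(psi, psi.conj()).reshape(2, 2, 2, 2)          # indices (F, P, F', P')

rho_particle = np.trace(rho, axis1=0, axis2=2)               # trace out the friend
print(rho_particle.real)
# [[0.5 0. ]
#  [0.  0.5]]   -- maximally mixed: a plain 50/50 distribution over the two outcomes
```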
The Schrodinger’s cat “paradox” is another example of a faux paradox. People repeat it as if it is meant to explain how “weird” QM is, but when Schrodinger put it forward in his paper “The Present Situation in Quantum Mechanics,” he was using it to mock the idea of particles literally being in two states at once, by pointing out that if you believe this, then a chain reaction caused by that particle would force you to conclude cats can be in two states at once, which, to him, was obviously silly.
If the properties of particles only exist in some perspectives and aren’t absolute, then a particle can’t meaningfully have “individuality,” that is to say, you can’t define it in complete isolation. In his book “Science and Humanism,” Schrodinger talks about how, in classical theory, we like to imagine particles as having their own individual existence, moving around from interaction to interaction, carrying their properties with themselves at all times. But, as Schrodinger points out, you cannot actually empirically verify this.
If you believe particles have continued existence in between interactions, this is only possible if the existence of their properties are not relative so they can be meaningfully considered to continue to exist even when entirely isolated. Yet, if they are isolated, then by definition, they are not interacting with anything, including a measuring device, so you can never actually empirically verify they have a kind of autonomous individual existence.
Schrodinger pointed out that many of the paradoxes in QM carry over from this Newtonian way of thinking, that particles move through space with their own individual properties like billiard balls flying around. If this were to be the case, then it should be possible to assign a complete “history” to the particle, that is to say, what its individual properties are at all moments in time without any gaps, yet, as he points out in that book, any attempt to fill in the “gaps” leads to contradiction.
One of these contradictions is the famous “delayed choice” paradox, whereby if you imagine what the particle is doing “in flight” when you change your measurement settings, you have to conclude the particle somehow went back in time to rewrite the past to change what it is doing. However, from Schrodinger’s perspective, this is not a genuine “paradox” but just a flaw of actually interpreting the particle as having a Newtonian-style autonomous existence, of having “individuality” as he called it.
He also points out in that book that when he originally developed the Schrodinger equation, the purpose was precisely to “fill in the gaps,” but he realized later that interpreting the evolution of the wave function according to the Schrodinger equation as a literal physical description of what’s going on is a mistake, because all you are doing is pushing the “gap” from those that exist between interactions in general to those that exist between measurements, and he saw no reason as to why “measurement” should play an important role in the theory. Given that it is possible to make all the same predictions without using the wave function (using a mathematical formalism called matrix mechanics), you don’t have to reify the wave function, because it’s just a result of an arbitrarily chosen mathematical formalism, and so Schrodinger cautioned against reifying it, because doing so leads directly to the measurement problem.
The EPR “paradox” is a metaphysical “paradox.” We know for certain QM is empirically local due to the no-communication theorem, which proves that no interaction a particle could undergo could ever cause an observable alteration in its entangled pair. Hence, if there is any nonlocality, it must be invisible to us, i.e. entirely metaphysical and not physical. The EPR paper reaches the “paradox” through a metaphysical criterion it states very clearly on the first page, which is to equate the ontology of a system to its eigenstates (to “certainty”). This makes it seem like the theory is nonlocal because entangled particles are not in eigenstates, but if you measure one, both are suddenly in eigenstates, which makes it seem like they both undergo an ontological transition simultaneously, transforming from not having a physical state to having one at the same time, regardless of distance.
However, if particles only have properties relative to what they are physically interacting with, from that perspective, then ontology should be assigned to interaction, not to eigenstates. Indeed, assigning it to “certainty” as the EPR paper claims is a bit strange. If I flip a coin, even if I can predict the outcome with absolute certainty by knowing all of its initial conditions, that doesn’t mean the outcome actually already exists in physical reality. To exist in physical reality, the outcome must actually happen, i.e. the coin must actually land. Just because I can predict the particle’s state at a distance if I were to travel there and interact with it doesn’t mean it actually has a physical state from my perspective.
I would recommend checking out this paper here which shows how a relative ontology avoids the “paradox” in EPR. I also wrote my own blog post here; the second half has some tables which walk through how the ontology differs between EPR and a relational ontology, and how the former is clearly nonlocal while the latter is clearly local.
Some people frame Bell’s theorem as a “paradox” that proves some sort of “nonlocality,” but if you understand the mathematics it’s clear that Bell’s theorem only implies nonlocality for hidden variable theories. QM isn’t a hidden variable theory. It’s only a difficulty that arises in alternative theories like pilot wave theory, which, due to their nonlocal nature, have to come up with a new theory of spacetime because they aren’t compatible with special relativity due to the speed of light limit. However, QM on its own, without hidden variables, is indeed compatible with special relativity, which forms the foundations of quantum field theory. This isn’t just my opinion: if you go read Bell’s own paper where he introduces the theorem, he is blatantly clear in the conclusion, in simple English, that it only implies nonlocality for hidden variable theories, not for orthodox QM.
Some “paradoxes” are just much more difficult to catch because they are misunderstandings of the mathematics, which can get hairy at times. The famous Frauchiger–Renner “paradox,” for example, stems from incorrect reasoning across incompatible bases, a very subtle point lost in all the math. The Cheshire cat “paradox” tries to show particles can dissociate from their properties, but those properties only “dissociate” across different experiments, meaning in no single experiment are they observed to “dissociate.”
- Comment on Rock Auras - Not just for Hippies anymore 2 months ago:
I will be the controversial one and say that I reject that “consciousness” even exists in the philosophical sense. Of course, things like intelligence, self-awareness, problem-solving capabilities, even emotions exist, but it’s possible to describe all of these things in purely functional terms, which would in turn be computable. When people talk about “consciousness not being computable” they are talking about the Chalmerite definition of “consciousness” popular in philosophical circles specifically.
This is really just a rehashing of Kant’s noumena-phenomena distinction, but with different language. The rehashing goes back to the famous “What is it like to be a bat?” paper by Thomas Nagel. Nagel argues that physical reality must be independent of point of view (non-contextual, non-relative, absolute), whereas what we perceive clearly depends upon point of view (contextual). You and I are not seeing the same thing for example, even if we look at the same object we will see different things from our different standpoints.
Nagel thus concludes that what we perceive cannot be reality as it really is, but must be some sort of fabrication by the mammalian brain. It is not equivalent to reality as it really is (which is said to be non-contextual) but must be something irreducible to the subject. What we perceive, therefore, he calls “subjective,” and since observation, perception and experience are all synonyms, he calls this “subjective experience.”
Chalmers, later, in his paper “Facing up to the Hard Problem of Consciousness,” renames this “subjective experience” to “consciousness.” He points out that if everything we perceive is “subjective” and created by the brain, then true reality must be independent of perception, i.e. no perception could ever reveal it; we can never observe it and it always lies beyond all possible observation. How does this entirely invisible reality, which is completely disconnected from everything we experience, in certain arbitrary configurations “give rise to” what we experience? This “explanatory gap” is what he calls the “hard problem of consciousness.”
This is just a direct rehashing, in different words, of Kant’s phenomena-noumena distinction, where the “phenomena” is the “appearance of” reality as it exists from different points of view, and the “noumena” is that which exists beyond all possible appearances, the “thing-in-itself,” which, as the term implies, suggests it has absolute (non-contextual) properties, as it can be meaningfully considered in complete isolation. Velocity, for example, is contextual, so objects don’t meaningfully have velocity in complete isolation; to say objects meaningfully exist in complete isolation is thus to make a claim that they have a non-contextual ontology. This leads to the same kind of “explanatory gap” between the two which was previously called the “mind-body problem.”
The reason I reject Kantianism and its rehashing by the Chalmerites is because Nagel’s premise is entirely wrong. Physical reality is not non-contextual. There is no “thing-in-itself.” Physical reality is deeply contextual. The imagined non-contextual “godlike” perspective whereby everything can be conceived of as things-in-themselves in complete isolation is a fairy tale. In physical reality, the ontology of a thing can only be assigned to discrete events whereby its properties are always associated with a particular context, and, as shown in the famous Wigner’s friend thought experiment, the ontology of a system can change depending upon one’s point of view.
This non-contextual physical reality of Nagel’s is just a fairy tale, and so the argument in the rest of his paper, that what we observe (synonyms: experience, perceive) is “subjective,” does not follow. And if Nagel fails to establish “subjective experience,” then Chalmers fails to establish “consciousness,” which is just a renaming of that term, and thus Chalmers fails to demonstrate an “explanatory gap” between consciousness and reality, because he has failed to establish that “consciousness” is a thing at all.
What’s worse is that if you buy Chalmers’ and Nagel’s bad arguments then you basically end up equating observation as a whole with “consciousness,” and thus you run into the Penrose conclusion that it’s “non-computable.” Of course we cannot compute what we observe, because what we observe is not consciousness, it is just reality. And reality itself is not computable. The way in which reality evolves through time is computable, but reality as a whole just is. It’s not even a meaningful statement to speak of “computing” it, as if existence itself is subject to computation.
- Comment on You cannot learn without failing. 2 months ago:
That’s more religion than pseudoscience. Pseudoscience tries to pretend to be science and tricks a lot of people into thinking it is legitimate science, whereas religion just makes proclamations and claims it must be wrong if any evidence debunks them. Pseudoscience is a lot more sneaky, and has become more prevalent in academia itself ever since people were infected by the disease of Popperism.
Popperites believe something is “science” as long as it can in principle be falsified, so if you invent a theory that could in principle be tested, then you have proposed a scientific theory. So pseudoscientists come up with the most ridiculous nonsense ever, based on literally nothing, and then insist everyone must take it seriously because it could in theory be tested one day, but it is always just out of reach of actually being tested.
Since it is “testable,” and since the brain disease of Popperism that has permeated academia leads people to be tricked by this sophistry, these pseudoscientists can sometimes even secure funding to test it, especially if they can get a big name in physics to endorse it. If it’s being tested at some institution somewhere, if there are at least a couple of papers published of someone looking into it, it must be genuine science, right?
Meanwhile, while they create this air of legitimacy, a smokescreen around their ideas, they reach out to a lay audience through publishing books, doing documentaries on television, or publishing videos to YouTube, talking about woo nuttery like how we’re all trapped inside a giant “cosmic consciousness” and all feel each other’s vibrations through quantum entanglement, and how science somehow proves the existence of gods.
As they make immense dough off of the lay audience they grift, if anyone points out that their claims are based on nothing, they can just deflect to the smokescreen they created through academia.
- Comment on shrimp colour drama 2 months ago:
Color is not invented by the brain but is socially constructed. You cannot look inside someone’s brain and find a blob of green, unless idk you let the brain mold for a while. All you can do is ask the person to think of “green” and then correlate whatever brain patterns of theirs respond to that request, but everyone’s brain patterns are different, so the only thing that ties them all together is that we’ve all agreed as a society to associate a certain property in reality with “green.”
If you were an alien who had no concept of green and had abducted a single person, if that person is thinking of “green,” you would have no way to know because you have no concept of “green,” you would just see arbitrary patterns in their brain that to you would seem meaningless. Without the ability to reference that back to the social system, you cannot identify anything “green” going on in their brain, or for any colors at all, or, in fact, for any concepts in general.
This was the point of Wittgenstein’s rule-following problem, that ultimately it is impossible to tie any symbol (such as “green”) back to a concrete meaning without referencing a social system. If you were on a deserted island and forgot what “green” meant and started to use it differently, there would be no one to correct you, so that new usage might as well be what “green” meant.
If you try to not change your usage by building up a basket of green items to remind you of what “green” is, there is no basket you could possibly construct that would have no ambiguity. If you put a green apple and a green lettuce in there, and you forget what “green” is so you look at the basket for reference, you might think, for example, that “green” just refers to healthy vegetation. No matter how many items you add to the basket, there will always be some ambiguity, some possible definition that is compatible with all your examples yet not your original intention.
Without a social system to reference for meaning and to correct your mistakes, there is no way to be sure that today you are even using symbols the same way you used them yesterday. Indeed, there would be no reason for someone who was born and grew up in complete isolation to even develop any symbols at all, because they would all just be fuzzy and meaningless. They would still have a brain and intelligence and be able to interpret the world, but they would not divide it up into rigid categories like “green” or “red” or “dogs” or “cats.” They would think in a way where everything kind of merges together, a mode of thought that is very alien to social creatures, and so we cannot actually imagine what it is like.
- Comment on trapped in the middle with u 2 months ago:
So a couple of intergalactic hydrogen atoms could exchange a photon across light years and become entangled for the rest of time, casually sharing some quantum of secrets as they coast to infinity.
Nope. No “secrets” are being exchanged between these particles.
- Comment on trapped in the middle with u 2 months ago:
The point wasn’t that the discussion is stupid, but that believing particles can be in two states at once is stupid. Schrodinger was doing a kind of argument known as a reduction to absurdity in his paper The Present Situation in Quantum Mechanics. He was saying that if you believe a single particle can be in two states at once, it could trivially cause a chain reaction that would put a macroscopic object in two states at once, and that it’s absurd to think a cat can be in two states at once, ergo a particle cannot be in two states at once.
In his later work Science and Humanism, Schrodinger argues that all the confusion around quantum mechanics originates from assuming that particles are autonomous objects with their own individual existence. If this were the case, then the particle must have properties localizable to itself, such as its position. And if the particle’s position is localized to itself and merely a function of itself, then it would have a position at all times. That means if the particle is detected by a detector at t=0 and a detector at t=1 and no detection is made at t=0.5, the particle should still have some position value at t=0.5.
If the particle has properties like position at all times, then the changes in its position must always be continuous, as there would be no gaps between t=0 and t=1 where it lacks a position; it would have a position at t=0.1, t=0.2, etc. Schrodinger referred to this as the “history” of the particle, saying that whenever a particle shows up on a detector, we always assume it must have come from somewhere, that it used to be somewhere else before arriving at the detector.
However, Schrodinger viewed this as a mistake that isn’t actually backed by the empirical evidence. We can only make observations at discrete moments in time, and to assume the particle is doing something in between those observations is to make assumptions about something we cannot, by definition, observe, and so it can never actually be empirically verified.
Indeed, Schrodinger’s concern was not so much that it could not be verified, but that all the confusion around quantum theory comes precisely from what he called trying to “fill in the gaps” of the particle’s history. When you do so, you run into logical contradictions unless you introduce absurdities, like nonlocal action, retrocausality, or, as is popular these days, multiverses. Schrodinger also pointed out how the measurement problem, too, directly stems from trying to fill in the gaps of the particle’s history.
Schrodinger thought it made more sense to just abandon the notion that particles are really autonomous objects with their own individual existence. They only exist at the moment they are interacting with something, and the physical world evolves through a sequence of discrete events and not through continuous transitions of autonomous entities.
He actually used to hate this idea and criticized Heisenberg for it, as it was basically Heisenberg’s view as well, saying “I cannot believe that the electron hops about like a flea.” However, in the same book he mentions that he changed his mind precisely because of the measurement problem. He says that he introduced the Schrodinger equation as a way to “fill in the gaps” between these “hops,” but that it actually fails to achieve this, because it just shifts the gap from between “hops” to between measurements, with the system evolving continuously up until measurement and then making a sudden transition to a discrete value.
Schrodinger didn’t think it made sense that measurement should be special or play any sort of role in the theory over any other kind of physical interaction. By not trying to fill in the gaps at all, then no physical interaction is treated as special and all are put on an equal playing field, and so you don’t have a problem of measurement.
- Comment on Don't look now 3 months ago:
We know how it works, we just don’t yet understand what is going on under the hood.
Why should we assume “there is something going on under the hood”? This is my problem with most “interpretations” of quantum mechanics. They are complex stories to try and “explain” quantum mechanics, like a whole branching multiverse, for which we have no evidence.
It’s kind of like if someone wanted to come up with a deep explanation to “explain” Einstein’s field equations and what is “going on under the hood”. Why should anything be “underneath” those equations? If we begin to speculate, we’re doing just that, speculation, and if we take any of that speculation seriously, as in actually genuinely believe it, then we’ve left the realm of being a scientifically-minded rational thinker.
It is much simpler to just accept the equations at face-value, to accept quantum mechanics at face-value. “Measurement” is not in the theory anywhere, there is no rigorous formulation of what qualifies as a measurement. The state vector is reduced whenever a physical interaction occurs from the reference point of the systems participating in the interaction, but not for the systems not participating in it, in which the systems are then described as entangled with one another.
This is not an “interpretation” but me just explaining literally how the terminology and mathematics work. If we just accept this at face value there is no “measurement problem.” The only reason there is a “measurement problem” is that this contradicts people’s basic intuitions: if we accept quantum mechanics at face value, then we have to admit that whether or not the properties of systems have well-defined values actually depends upon your reference point and is contingent on a physical interaction taking place.
Our basic intuition tells us that particles are autonomous entities floating around in space on their lonesome like little stones or billiard balls up until they collide with something, and so even if they are not interacting with anything at all they can meaningfully be said to “exist” with well-defined properties, which should be the same properties for all reference points (i.e. the properties are absolute rather than relational). Quantum mechanics contradicts this basic intuition, so people think there must be something “wrong” with it, that there must be something “under the hood” we don’t yet understand, and that only if we make the story more complicated or make a new discovery one day will we “solve” the “problem.”
Einstein once said, God does not play dice, and Bohr rebutted with, stop telling God what to do. This is my response to people who believe in the “measurement problem.” Stop with your preconceptions on how reality should work. Quantum theory is our best theory of nature, there is currently no evidence it is going away any time soon, and it has withstood the test of time for decades. We should stop waiting for the day it gets overturned and disappears, accept that this is genuinely how reality works, accept it at face value, and drop our preconceptions. We do not need any additional “stories” to explain it.
- Comment on Gottem. :) 3 months ago:
So usually this is explained with two scientists, Alice and Bob, on far away planets. They’re each in the possession of a particle that is entangled with the other, and in a superposition of state 1 and state 2.
This “usual” way of explaining it is just overly complicating it and making it seem more mystical than it actually is. We should not say the particles are “in a superposition” as if this describes the current state of the particle. The superposition notation should be interpreted as merely a list of probability amplitudes predicting the different likelihoods of observing different states of the system in the future.
It is sort of like if you flip a coin, while it’s in the air, you can say there is a 50% chance it will land heads and a 50% chance it will land tails. This is not a description of the coin in the present as if the coin is in some smeared out state of 50% landed heads and 50% landed tails. It has not landed at all yet!
Unlike classical physics, quantum physics is fundamentally random, so you can only predict events probabilistically, but one should not conflate the prediction of a future event with the description of the present state of the system. The superposition notation is only writing down probability amplitudes for the likelihoods of what you will observe (state 1 or state 2) in the future event that you go to interact with the particles, and it is not a description of the state of the particles in the present.
When Alice measures the state of her particle, it collapses into one of the states, say state 1. When Bob measures the state of his particle immediately after, before any particle travelling at light speed could get there, it will also be in state 1 (assuming they were entangled in such a way that the state will be the same).
This mistreatment of the mathematical notation as a description of the present state of the system also leads to confusing language like “it collapses into one of the states,” as if the change in a probability distribution represents a physical change to the system. The mental picture people who say this often have is that the particle literally physically becomes the probability distribution prior to measuring it—the particle “spreads out” like a wave according to the probability amplitudes of the state vector—and when you measure the particle, this allows you to update the probabilities, and so they must interpret this as the wave physically contracting into an eigenvalue—it “collapses” like a house of cards.
But this is, again, overcomplicating things. The particle never spreads out like a wave and it never “collapses” back into a particle. The mathematical notation is just a way of capturing the likelihoods of the particle showing up in one state or the other, and when you measure what state it actually shows up in, you can update your probabilities accordingly. For example, if you know the coin is 50%/50% heads/tails and you observe it land on tails, you can update the probabilities to 0%/100% heads/tails because you know it landed on tails and not heads. Nothing “collapsed”: you’re just observing the actual outcome of the event you were predicting and updating your statistics accordingly.
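A toy version of that bookkeeping for the Alice/Bob pair discussed above (plain classical conditioning over a made-up table of joint outcomes; this is the analogy, not a quantum calculation):

```python
# perfectly correlated pair: both particles always found in the same state
joint = {("state 1", "state 1"): 0.5, ("state 2", "state 2"): 0.5}

def bob_given_alice(alice_outcome):
    # ordinary conditioning: P(Bob's outcome | Alice's outcome)
    total = sum(p for (a, b), p in joint.items() if a == alice_outcome)
    return {b: p / total for (a, b), p in joint.items() if a == alice_outcome}

print(bob_given_alice("state 1"))   # {'state 1': 1.0}
```

Nothing physical happened to Bob’s particle when Alice looked; the distribution was updated the same way the coin odds were.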
- Comment on Observer 3 months ago:
I don’t think solving the Schrodinger equation really gives you a good idea of why quantum mechanics is even interesting. You also should study very specific applications of it where it yields counterintuitive outcomes to see why it is interesting, such as in the GHZ experiment.
- Comment on You'll never see it coming 5 months ago:
By applying both that and the many worlds hypothesis, the idea of quantum immortality comes up, and thats a real mind bender. Its also a way to verifiably prove many worlds accurate(afaik the only way)
MWI only somewhat makes sense (it still doesn’t make much sense) if you assume the “branches” cannot communicate with each other after decoherence occurs. “Quantum immortality” mysticism assumes somehow your cognitive functions can hop between decoherent branches where you are still alive if they cease in a particular branch. It is self-contradictory. There is nothing in the mathematical model that would predict this and there is no mechanism to explain how it could occur.
It also has a problem similar to reincarnation mysticism. If MWI is correct (it’s not), then there would be an infinite number of other decoherent branches containing other “yous.” Which “you” would your consciousness hop into when you die, assuming this even does occur (it doesn’t)? It makes zero sense.
- Comment on I'm literally a thinking lump of fat 5 months ago:
Depends upon what you mean by “consciousness.” A lot of the literature seems to use “consciousness” just to refer to physical reality as it exists from a particular perspective, for some reason. For example, one popular definition is “what it is like to be in a particular perspective.” The term “to be” refers to, well, being, which refers to, well, reality. So we are just talking about reality as it actually exists from a particular perspective, as opposed to mere description of reality from that perspective.
I find it bizarre to call this “consciousness,” but words are words. You can define them however you wish. If we define “consciousness” in this sense, as many philosophers do, then it does not make logical sense to speak of your “consciousness” doing anything at all after you die, as your “consciousness” would just be defined as reality as it actually exists from your perspective. Perspectives always implicitly entail a physical object at the basis of that perspective, akin to the zero-point of a coordinate system; in this case, that object is you.
If you cease to exist, then your perspective ceases to even be defined. The concept of “your perspective” would no longer even be meaningful. It would be kind of like if a navigator kept telling you to go “more north” until eventually you reach the north pole, and then they tell you to go “more north” yet again. You’d be confused, because “more north” does not even make sense anymore at the north pole. The term ceases to be meaningfully applicable. If consciousness is defined as being from a particular perspective (as many philosophers in the literature define it), then by logical necessity the term ceases to be meaningful after the object that is the basis of that perspective ceases to exist.
But, like I said, I’m not a fan of defining “consciousness” in this way, albeit it is popular to do so in the literature. My criticism of the “what it is like to be” definition is mainly that most people tend to associate “consciousness” with mammalian brains, yet the definition is so broad that there is no logical reason as to why it should not be applicable to even a single fundamental particle.
- Comment on I'm literally a thinking lump of fat 5 months ago:
This problem presupposes metaphysical realism, so you have to be a metaphysical realist to take the problem seriously. Metaphysical realism is a particular kind of indirect realism whereby you posit that everything we observe is in some sense not real, sometimes likened to a kind of “illusion” created by the mammalian brain, called “consciousness” or sometimes “subjective experience” with the adjective “subjective” used to make it clear it is being interpreted as something unique to conscious subjects and not ontologically real.
If everything we observe is in some sense not reality, then “true” reality must by definition be independent of what we observe. If this is the case, then it opens up a whole bunch of confusing philosophical problems, as it would logically mean the entire universe is invisible/unobservable/nonexperiential, except in the precise configuration of matter in the human brain which somehow “gives rise to” this property of visibility/observability/experience. It seems difficult to explain this without just presupposing that this property arbitrarily attaches itself to brains in a particular configuration, i.e. treating it as strongly emergent, which is effectively just dualism; indeed, the founder of the “hard problem of consciousness” is a self-described dualist.
This philosophical problem does not exist in direct realist schools of philosophy, however, such as Jocelyn Benoist’s contextual realism, Carlo Rovelli’s weak realism, or Alexander Bogdanov’s empiriomonism. It is solely a philosophical problem for metaphysical realists, because they begin by positing that there exists some fundamental gap between what we observe and “true” reality, and then later have to figure out how to mend the gap. Direct realist philosophies never posit this gap in the first place and treat reality as precisely equivalent to what we observe it to be, so they simply do not posit the existence of “consciousness,” and from a direct realist standpoint it would seem odd to even call experience “subjective.”