

  • I’m pretty sure this goes against the properties proven of entanglement (Bell test) and how far entanglement can propagate, but I don’t know enough about quantum mechanics to explain why this explanation is incompatible with entanglement.

    If you don’t know anything about the topic, then maybe you shouldn’t speak on it, especially when claiming you have debunked peer-reviewed papers from Harvard physicists like Jacob Barandes.

    However, I don’t currently see how this at all explains computing with superpositions; if it’s just statistics a superposition can never exist

    Superposition is a property of statistics. Even classical statistics commonly represents the system’s statistical state as a linear combination of basis states. That’s just what a probability distribution is. If you take any course in statistics, you will superimpose things all the time. This is a mathematical property.

    so entanglement doesn’t exist; so quantum algorithms wouldn’t be possible, but we know they are.

    Quantum advantage obviously comes from the phase of the quantum state. If you remove the phase from the quantum state, then all you are left with is a probability distribution, and there would be nothing to distinguish it from a classical statistical theory. But the phase is, again, a sufficient statistic over the system’s history. The quantum advantage comes from the fact that you are ultimately operating with a much larger information space, since each instruction in the computer is a function over the whole algorithm’s history back to the start of the quantum circuit, rather than just the current state of the computer’s memory at that moment.
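    To make the point above concrete, here is a minimal classical sketch (my own toy example, not from Barandes’ papers): an ordinary probability distribution is already a linear combination of basis states; the quantum state just adds a phase on top of that.

    ```python
    import numpy as np

    # A biased classical coin: its statistical state is a probability vector,
    # which is literally a superposition (linear combination) of basis states.
    e_heads = np.array([1.0, 0.0])   # basis state "heads"
    e_tails = np.array([0.0, 1.0])   # basis state "tails"

    p = 0.3 * e_heads + 0.7 * e_tails
    print(p)  # [0.3 0.7]
    ```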


  • What if two packets interact with each other? If you claim a collapse occurs, then entanglement could never happen, and so such a viewpoint is logically ruled out. If you say a collapse does not occur then, but only when you introduce a measurement device, this is vague without rigorously defining what a measurement device is, and providing any additional physical definition will then introduce something into the dynamics that is not there in orthodox quantum mechanics, so you have moved to a new theory and are no longer talking about textbook QM.



  • In any statistical theory, the statistical distribution, which is typically represented by a vector that is a superposition of basis states, evolves deterministically. That is just a feature of statistics generally. But no one in their right mind would interpret the deterministic evolution of the statistical state as a physical object deterministically evolving in the real world. Yet when it comes to QM, people insist we must change how we interpret statistics, and nobody can give a good argument as to why.

    We only “don’t fully understand where the probabilistic measurement happens” if you deny it is probabilistic to begin with. If you just start with the assumption that it is a statistical theory, then there is no issue. You just interpret it like you interpret any old statistical theory. There are no invisible “probability waves.” The quantum state is an epistemic state, based on the observer’s knowledge, their “best guess,” of a system that is in a definite state in the real world; they cannot know that state because it evolves randomly. Their measurement just reveals what was already there. No “collapse” happens.

    The paradox where we “don’t know” what happens at measurement only arises if you deny this, i.e. if you insist that the probability distribution is somehow a physical object. If you do, then, yes, we “don’t know” how this infinite-dimensional physical object, which doesn’t even exist anywhere in physical space, can possibly translate itself into the definite values that we observe when we look. Neither Copenhagen nor Many Worlds has a coherent and logically consistent answer to that question.

    But there is no good reason to believe the claim to begin with that the statistical distribution is a physical feature of the world. The fact that the statistical distribution evolves deterministically is, again, a feature of statistics generally. This is also true of classical statistical models. The probability vector for a classical probabilistic computer is mathematically described as evolving deterministically throughout an algorithm, but no sane person takes that to mean that the bits in the computer’s memory don’t exist when you aren’t looking at them, or that an infinite-dimensional object that doesn’t exist anywhere in physical space is somehow evolving through the computer.
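    Here is a minimal sketch of that classical case (my own toy example, assuming nothing beyond basic Markov-chain arithmetic): the probability vector of a noisy classical bit evolves deterministically under a stochastic matrix, even though on every individual run the bit always has a definite value.

    ```python
    import numpy as np

    # A classical bit that flips with probability 0.2 at each step.
    flip = np.array([[0.8, 0.2],
                     [0.2, 0.8]])   # stochastic transition matrix

    p = np.array([1.0, 0.0])        # we know the bit starts at 0
    for _ in range(3):
        p = flip @ p                # the *distribution* evolves deterministically
    print(p)                        # [0.608 0.392]

    # Any single run of the bit is still just 0 or 1 at every step; the vector p
    # is our knowledge of it, not a physical object spread through the machine.
    ```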

    Indeed, the quantum state is entirely decomposable into a probability distribution. Complex numbers aren’t magic; they just represent something with two degrees of freedom, so we can always decompose the state into two real-valued terms and ask what those two degrees of freedom represent. If you decompose the quantum state into polar form, you find that one of the degrees of freedom is just a probability vector, the same as you’d see in classical statistics. The other is a phase vector.
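    Concretely, here is a minimal sketch of that polar decomposition, using a toy qubit state I made up:

    ```python
    import numpy as np

    psi = np.array([1.0, 1.0j]) / np.sqrt(2)   # toy state (|0> + i|1>)/sqrt(2)

    p = np.abs(psi) ** 2      # probability vector: [0.5, 0.5]
    theta = np.angle(psi)     # phase vector:       [0.0, pi/2]

    # The quantum state is fully recovered from these two real-valued pieces:
    assert np.allclose(psi, np.sqrt(p) * np.exp(1j * theta))
    ```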

    The phase vector seems mysterious until you write down time evolution rules for the probability vector in quantum systems as well as for the phase vector. The rules, of course, take into account the previous values and the definition of the operator being applied to them. You then just have to recursively substitute the phase vector’s evolution rule into the probability vector’s. You find that the phase vector disappears, because it decomposes into a function over the system’s history, i.e. a function over all operators and probability vectors at all previous time intervals going back to a division event. The phase is therefore just a sufficient statistic over the system’s history and not a physical object, as it can be defined entirely in terms of the system’s statistical history.

    That is to say, without modifying it in any way, quantum mechanics is mathematically equivalent to a statistical theory with history dependence. The Harvard physicist Jacob Barandes also wrote a proof of this fact that you can read here. The history dependence does make it behave in ways that are a bit counterintuitive, as it inherently implies a non-spatiotemporal aspect to how the statistics evolve, as well as interference effects arising from interference in the system’s history, but they are still just statistics all the same. You don’t need anything but the definition of the operators and the probability distributions to compute the evolution of a quantum circuit. A quantum state is not even necessary; it is just convenient.

    If you just accept that it is statistics and move on, there is no “measurement problem.” There would be no claim that the particles do not have definite states in the real world, only that we cannot know them, because our model is not a deterministic model but a statistical one. If we measure a particle’s position and find it at a particular location, the explanation for why we find it there is simply that that’s where it was before we went to measure it. There is only a “measurement problem” if you claim the particle was not there before you looked; then you have difficulty explaining how it got there when you looked.

    But no one has presented a compelling argument in the scientific literature that we should deny that it is there before we look. We cannot know what its value is before we look as its dynamics are (as far as we know) random, but that is a very different claim than saying it really isn’t there until we look. This idea that the particles aren’t there until we look has, in my view, been largely ruled out in the academic literature, and should be treated as an outdated view like believing in the Rutherford model of the atom. Yet, people still insist on clinging to it.

    They pretend that Copenhagen and Many Worlds are logically consistent by writing an enormous sea of papers upon papers upon papers, which only seem “consistent” because they become so complicated that hardly anyone bothers to follow along anymore; but if you actually go through the arguments with a fine-tooth comb, you can always show them to be inconsistent and circular. There is only a vague aura of logical and mathematical consistency on the surface. The more you actually engage with the mathematics and read the academic literature on quantum foundations, the clearer it becomes how incoherent and contrived the attempts to make Copenhagen and Many Worlds consistent actually are, and how no one in the literature has actually achieved it, even though many falsely pretend they have.





  • Technically, aether theory was never ruled out. People love to claim that the Michelson-Morley experiment ruled it out, but this is historical revisionism. The MM experiment was conducted in 1887. Hendrik Lorentz proposed his aether model in 1904. Obviously Lorentz was not such a moron that he failed to take the findings of MM into account, but that is what people are unironically suggesting when they say MM somehow retrocausally ruled out his model. Indeed, neither Michelson nor Morley believed their own experiment ruled out the aether, and both continued to promote such models.

    Lorentz’s aether model and Einstein’s relativity are mathematically equivalent, so they make all the same predictions, and no possible experiment could rule out Lorentz’s aether theory without also ruling out Einstein’s relativity. Indeed, if you read Einstein’s 1905 paper introducing special relativity, his criticism of Lorentz’s model is only a philosophical objection. He never claimed that an experiment could rule it out. MM only rules out some very early aether models, not Lorentz’s.

    I would recommend also checking out John Bell’s paper “How to Teach Special Relativity,” where he also discusses this fact, and how the mathematics of special relativity are perfectly consistent with a reality with an absolute space and time. Taking space and time to be relative only comes at the level of metaphysical interpretation.



  • It’s amazing how nonsensical the actual foundational axioms of modern day economics are.

    Classical economics tried to tie economics to functions of physical things we can measure. Adam Smith, for example, proposed that because you can recursively decompose every product into the physical units of time it takes to produce, all the way down the supply chain, any stable economy should, on average (not in the individual case), buy and sell in a way that roughly reflects that time; otherwise there would necessarily be physical time shortages or waste, which would lead to economic problems. We may thus be able to use this time parameter to make quantifiable predictions about the economy.
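    As a toy illustration of that recursive decomposition (entirely made-up product names and numbers, just to show the arithmetic being described):

    ```python
    # Hypothetical supply chain: direct labor hours per unit, plus required inputs.
    direct_hours = {"bread": 0.5, "flour": 0.2, "wheat": 1.0}
    inputs = {"bread": {"flour": 1.0},   # 1 unit of flour per loaf
              "flour": {"wheat": 0.8},   # 0.8 units of wheat per unit of flour
              "wheat": {}}

    def total_labor_time(product):
        """Recursively sum the labor time embodied in a product down the supply chain."""
        return direct_hours[product] + sum(
            qty * total_labor_time(inp) for inp, qty in inputs[product].items()
        )

    print(total_labor_time("bread"))  # 0.5 + 1.0*(0.2 + 0.8*1.0) = 1.5 hours
    ```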

    Many people had philosophical objections to this because it violates free will. If you can roughly predict what society will do based on physical factors, then you are implying that people’s decisions are determined by physical parameters. Humans have the “free will” to just choose to buy and sell at whatever price they want, so the economy cannot be reduced beyond the decisions of the human spirit. There was thus a second school of economics which tried to argue that maybe you could derive prices from measuring how much people subjectively desire things, measured in “utils.”

    “Utils” are of course such ambiguous nonsense that eventually these economists realized that this cannot work, so they proposed a different idea instead, which is to focus on marginal rates of substitution. Rather than saying there is some quantifiable parameter of “utils,” you say that every person would be willing to trade some quantity of object X for some quantity of object Y, and then you try to define the whole economy in terms of these substitutions.

    However, there are two obvious problems with this.

    The first problem is that to know how people would be willing to substitute things rigorously, you would need an incredibly deep and complex understanding of human psychology, which the founders of neoclassical economics did not have. Without a rigorous definition, you could not fit it to mathematical equations. It would just be vague philosophy.

    How did they solve this? They… made it up. I am not kidding you. Look up the axioms of consumer preference theory whenever you have the chance. It is a bunch of made-up axioms about human psychology, many of which are quite obviously not even correct (for example, you have to assume that each person has evaluated and ranked every product in the entire economy, and that every person would be more satisfied with having more of any given object, etc.), but you have to adopt those axioms in order to derive any of the mathematics at all.

    The second problem is one first pointed out, to my knowledge, by the economist Nikolai Bukharin: an economic model based around human psychology cannot even be predictive, because there is no logical reason to believe that the behavior of everything in the economy, including all social structures, is purely derivative of human psychology, i.e. that there is no back-reaction whereby the preexisting social structures and environmental factors people are born into shape their psychology. He gives a good proof by contradiction that this back-reaction must exist.

    The idea that you can derive everything from some arbitrary set of immutable mathematical laws, made up in someone’s armchair one day, that supposedly rigorously describe human behavior and are irreducible to anything else, is just nonsense. No one has ever even tested any of these laws that supposedly govern human psychology.


  • Surprisingly that is a controversial view. Most physicists insist QM has nothing to do with probability! But then why does it only give you probabilistic predictions? Ye old measurement problem, an entirely fabricated problem because physicists cannot accept that a theory that gives you probabilities is obviously a probabilistic theory.


  • The PBR theorem argues that if the quantum state is purely epistemic, then different preparation procedures can correspond to overlapping probability distributions over underlying states, which creates ambiguity about which preparation was used based only on observed statistics. In contrast, if the quantum state is ontic, distinct quantum states correspond to non-overlapping distributions, so in principle one can always infer the preparation given sufficient data. The theorem shows that any model with such overlap cannot reproduce all quantum predictions, and therefore concludes that the quantum state must be ontic.

    However, this conclusion relies on the assumption of preparation independence, meaning that independently prepared systems have independent underlying states. If this assumption is relaxed and underlying states can depend on the joint preparation context, then overlap need not occur even in models that are otherwise epistemic. See this paper for example: https://arxiv.org/pdf/1811.01107. In this sense, such models may still be called psi-ontic under PBR’s definition, since distinct wavefunctions correspond to distinct underlying states, but the distinction reflects differences in preparation conditions rather than the existence of distinct physical wave-like entities.

    Related work has pointed out that PBR’s criterion can classify intuitively epistemic models as ontic when preparation independence is violated, as discussed in this paper: https://arxiv.org/pdf/2109.02676v2. Other results show more explicitly that by dropping this assumption, one can construct models consistent with quantum mechanics in which different quantum states correspond to the same underlying reality, allowing genuine overlap, as in this paper: https://arxiv.org/pdf/1201.6554.

    You should check out this lecture: https://pirsa.org/12050021





  • QM is a lot easier to understand when we stop pretending a theory that only gives you statistical results somehow has no relevance to statistics. Every “paradox” can always be understood and resolved by applying a statistical analysis. If you apply such a statistical analysis to entangled systems, let’s say you have two qubits with their own bit values b1 and b2, you find that if you apply a unitary operator to just b1, there are cases where the way in which this stochastically perturbs b1 has a dependence upon the value of b2.

    You could not send a signal to b2 by perturbing b1 because perturbing b1 has no effect on b2, rather, the way in which b1 stochastically changes merely depends upon the current state of b2. You might think maybe you could send a signal the other way. If b1 depends upon b2, then you could perturb b2 to alter b1. But the dependence is always symmetrical, such that if you apply a stochastic perturbation to b2 the way in which it will change will depend upon the value of b1, and so it becomes a vicious circle.

    It is non-local in the sense that the way in which one changes depends upon the value of the other far away, but not in the sense that perturbing one locally alters the value of the one far away, and the dependence is always symmetrically mutual, so there is no way to signal between them.
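    Here is a minimal numerical sketch of the kind of dependence described above (my own toy example in ordinary state-vector form, not anyone’s published code): the same local operation on b1, acting on two preparations with identical statistics for b1 alone, changes b1’s statistics differently depending on its correlation with b2.

    ```python
    import numpy as np

    H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
    I = np.eye(2)

    # Scenario A: qubit 1 in |+>, qubit 2 in |0>, no entanglement.
    psi_A = np.kron(np.array([1, 1]) / np.sqrt(2), np.array([1, 0]))

    # Scenario B: Bell state (|00> + |11>)/sqrt(2). Qubit 1 alone still looks 50/50.
    psi_B = np.array([1, 0, 0, 1]) / np.sqrt(2)

    U = np.kron(H, I)  # Hadamard applied to qubit 1 only

    def marginal_b1(psi):
        p = np.abs(psi) ** 2                        # basis order: |b1 b2>
        return np.array([p[0] + p[1], p[2] + p[3]])

    print(marginal_b1(U @ psi_A))  # [1. 0.]    -> b1 is driven to 0 deterministically
    print(marginal_b1(U @ psi_B))  # [0.5 0.5]  -> same local operation, different effect on b1
    ```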


  • As it is normally explained, it’s definitely fake. There is no reason to believe particles turn into waves when you’re not looking and turn back into particles when you look, and believing this demonstrably leads to irreconcilable paradoxes. Dmitry Blokhintsev was correct that the particles are just particles, and the “wave” is a property of their stochastic dynamics over an ensemble of systems. The wave is part of the nomology: it tells you how the particles stochastically behave in the aggregate, but the particles are still particles at all times. Ontologically, they are particles. Nomologically, their stochastic dynamics over an ensemble of systems converge to wave-like behavior.
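    As a minimal ensemble sketch of Blokhintsev’s point (a toy two-path state I made up, not taken from his work): each individual detection is a definite particle position; only the histogram over many runs reproduces the wave pattern.

    ```python
    import numpy as np

    # Toy two-path state: |psi|^2 has interference fringes from the relative phase.
    x = np.linspace(-10, 10, 2001)
    psi = np.exp(-x**2 / 8) * (1 + np.exp(3j * x))
    prob = np.abs(psi) ** 2
    prob /= prob.sum()

    # Each run of the experiment yields one definite position, sampled per the Born rule.
    positions = np.random.choice(x, size=100_000, p=prob)

    # A histogram of the many definite, particle-like detections converges to the
    # fringe pattern of |psi|^2; no single particle ever behaves like a wave.
    counts, edges = np.histogram(positions, bins=200, range=(-10, 10), density=True)
    ```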