With respect to the term “observer” – I’ve never liked how the word is used to describe taking a measurement. When videos use observer to describe this, it makes it sound like the particles know when we’re looking at them. That’s not the case.
What observation means is literally taking a measurement. You fundamentally cannot observe anything unless you interact with it and see how it responds to that interaction, whether by touch (electromagnetism) or by light (photons, electromagnetic waves). When a photon hits the thing and bounces off onto a detector, we know it's there. Or we can detect its effect on an electromagnetic field (electron microscopy).
Until such time as something interacts with the matter, it has an undefined state, and can only be expressed as a distribution of probabilities over the set of possible paths.
Then there are the Zen koans (Douglas Hofstadter writes about them in his book Gödel, Escher, Bach, though there are good reasons to dispute his idea of Strange Loops). This cuts into the fundamental division between the two sides: what is computational, what is not, and is our universe computational? Many linguistic paradoxes have arisen over the years that have no solution, but that is because they are specifications for a system which cannot be implemented. They are just that, linguistic paradoxes; they don't have any relation to something that can exist.
They are self-contradictory. But they only seem like paradoxes because we assume that the classical semantics of mathematics is true, and that we can have infinities and the like. Even if we don't believe in infinity itself, it is common to act and reason as if it existed.
Interestingly, these linguistic paradoxes have been some of the sparks that gave birth to Set Theory, Type Theory, the Principia Mathematica (by Russell and Whitehead), Gödel's Incompleteness Theorems, the division between constructive and non-constructive mathematics, and computationalism. All of these are either attempts to prevent or describe arising paradoxes and contradictions, or fields/philosophies that emerged as a consequence.
That’s the thing though, we don’t actually know whether the noise we see in QM is truly random, or if it is just a measurement problem. We don’t know whether the universe is doing non-computable mathematics.
Then there’s the Cellular Automaton Interpretation of Quantum Mechanics by Nobel physicist Gerard 't Hooft, which shows great promise in replicating the dynamics of QM in a completely computational and deterministic system.
You might have seen Conway’s Game of Life; that’s a type of cellular automaton. It has actually been proven that certain cellular automata are Turing-complete, which means they are capable of performing arbitrary (constructive) computations.
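To make this concrete, here is a minimal sketch of one update step of Conway's Game of Life (my own illustration; the grid size and toroidal wrapping are arbitrary choices). The point is that the rule is tiny and fully deterministic, yet grids evolving under it can, in principle, perform arbitrary computation.

```python
# One step of Conway's Game of Life on a small toroidal grid.
# Rule: a dead cell with exactly 3 live neighbours is born; a live cell
# with 2 or 3 live neighbours survives; every other cell is dead.

def step(grid):
    rows, cols = len(grid), len(grid[0])
    new = [[0] * cols for _ in range(rows)]
    for r in range(rows):
        for c in range(cols):
            # Count the 8 neighbours, wrapping around at the edges (torus).
            n = sum(
                grid[(r + dr) % rows][(c + dc) % cols]
                for dr in (-1, 0, 1)
                for dc in (-1, 0, 1)
                if (dr, dc) != (0, 0)
            )
            new[r][c] = 1 if (n == 3 or (grid[r][c] and n == 2)) else 0
    return new

# A "blinker": three live cells in a row oscillate with period 2.
blinker = [[0, 0, 0, 0, 0],
           [0, 0, 0, 0, 0],
           [0, 1, 1, 1, 0],
           [0, 0, 0, 0, 0],
           [0, 0, 0, 0, 0]]

after_one = step(blinker)    # the row becomes a vertical bar
after_two = step(after_one)  # and back again: period-2 oscillation
```

Nothing here is random and nothing "knows" it is being looked at; all the apparent complexity comes from iterating one local rule, which is the intuition behind treating physics as a cellular automaton.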
With respect to multiple possible states existing simultaneously, that I don’t dispute as a possibility. In fact, 't Hooft’s model has this. So does Stephen Wolfram’s model. This is also where much of the division lies between the different interpretations of QM. Some interpretations reason that these different possible paths in the integral (which I describe more below) actually happen, and cause exponential branches of possibility that form parallel universes at each branching. Other interpretations reason that this doesn’t actually happen.
However, we also don’t know whether the way the universe appears to perform super-computational operations in QM is simply a matter of whether it is a polynomial problem. This relates to the P vs NP problem, which concerns the nature of complexity and computational irreducibility.
We have no proof that P != NP. We also have no proof that P = NP. It may be that, due to Incompleteness, there is no proof; it may even be true but not provably true. This is one of the things about Gödel’s Incompleteness Theorems: when you allow any form of self-reference, which we can’t avoid when we want to do non-constructive mathematics, these systems will always be incomplete, and there will be properties of the system that cannot be proven, though this only includes the subset of statements that make self-reference.
The main point is that the classical semantics of mathematics leads to contradictions when you try to run it.
If you follow Type Theory, it does indeed allow you to circumvent these kinds of problems. The same goes for computational mathematics and computational systems.
So, the computational descriptions are the only ones that don’t lead to contradictions when you try to run them.
Now, it could be that, being part of the universe, emergent from its mechanics rather than being its underlying mechanics, we are confined to only ever enact the constructive. Perhaps that is so; however, there is no good reason to believe this is the case.
The non-computational descriptions make more assumptions about the nature of the underlying substrate to reality than the computable ones. So if we are to assume the model containing the fewest assumptions, then the computational one is the one we should be preferring.
These are also very new ideas, with connections like this only now being stitched together between these different fields. Joscha Bach ran an informal survey among foundational physicists at a conference, and a growing number of them believe that a computational model is needed going forward. Gerard 't Hooft does, and he’s not talking about things that have no evidence or justification, as Penrose has been doing.
Indeed, but what he was talking about is that it is intuition-defying. We cannot use our intuitions to understand it; we can only understand it by looking at its mathematical form. Now, admittedly, I do not possess the expertise necessary to understand all of the mathematics. However, to grasp these concepts abstractly you only need calculus, some basic geometry, complex numbers, and some wave mechanics. Then you can get a decent idea of what the path integral is doing, what the wave function and the probability amplitude distribution mean, and how an amplitude differs from a conventional probability.
So, what does it mean for a bunch of different things to be in multiple states at the same time? Well, they kind of are, but they also kind of aren’t.
This is where our intuition breaks down. What the path integral is doing is effectively considering all possible paths, and from the aggregation of these you can form the wave functions that describe the probability distribution of a particle; until there is an interaction (i.e., an “observer”), this is all it is. But when something does interact with it, we can then know its state, as the wave function “collapses” and the final state becomes known.
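A toy sketch can show how an amplitude differs from a conventional probability (this is my own two-path illustration, not the full path integral: each path contributes a complex amplitude exp(i·phase), and the probability comes from the squared magnitude of their sum, so paths can cancel each other out).

```python
import cmath

def two_path_probability(phase_a, phase_b):
    # Each path contributes a complex amplitude; amplitudes ADD first,
    # and only then is the probability taken as |sum|^2.
    amp = (cmath.exp(1j * phase_a) + cmath.exp(1j * phase_b)) / 2
    return abs(amp) ** 2

# Paths in phase: amplitudes reinforce (constructive interference).
p_constructive = two_path_probability(0.0, 0.0)        # -> 1.0
# Paths in opposite phase: amplitudes cancel (destructive interference).
p_destructive = two_path_probability(0.0, cmath.pi)    # -> ~0.0
# Ordinary probabilities can only average; they can never cancel:
p_classical = (0.5 + 0.5) / 2                          # -> 0.5
```

Cancellation like this is exactly what conventional probabilities cannot do, which is why the probability amplitude distribution behaves so differently from a classical one.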
And then you can start to get a basic idea of the concepts at play.
But he’s still right: you can’t understand it except by proxy of the mathematics. Nonetheless, one can begin to reason about these concepts without truly understanding them.
In exactly the same way as we cannot understand 4-dimensional objects.
There’s still no good reason to accept any of this.
If we did that, we wouldn’t be talking about this now. We’d never have invented anything, and probably wouldn’t exist as a species, without a fundamental drive to see the world as not being how it should be and to do what we can to make it that way. Whether that arises abstractly, in that what the world should be is “understood”, or from some injustice done by our own kind.