Q IS FOR QUANTUM
Quantum mechanics for those who only know basic arithmetic
These FAQs are mainly a lightly edited email exchange between Terry Rudolph and the author John Horgan

My hope at the beginning of my project was to understand Bell’s Theorem. Do you address it in your book?

Do you address the uncertainty principle?

Scott Aaronson says the source of “all quantum weirdness” is the concept of “negative probability.” Is that what B is? A negative probability? Which cancels out/interferes with positive probabilities?

Are imaginary/complex numbers implicit in your mist system, or do you dispense with them?

Your mist system is time-independent, right?

I assume your black/white system applies best to binary properties, like spin. Can it also account for continuous, time-dependent properties like position and momentum?

Does your book address the GRW model (or other spontaneous collapse theories)? The many worlds hypothesis? The pilot wave model? It from bit? QBism?

What’s your favorite quantum interpretation, if any?

What’s your take on interpretations that make consciousness fundamental to reality?

Do you think work on quantum computation might lead to a clearer understanding of QM?
My hope at the beginning of my project was to understand Bell’s Theorem. Do you address it in your book?
In my mind Bell’s theorem is the most important insight in physics since General Relativity; it’s something I am sure humans will still find puzzling in 500 years, because it is so antithetical to our intuitions and experience of the monkey-scale world. So I encourage you to understand it as deeply as possible.
Bell’s theorem is normally presented via two separated parties, Alice and Bob, sharing two particles in an entangled state, and then each making an “independent” choice between one of two measurements to perform, so there are 4 possible measurements in total. (The independence is sometimes cast as “free will”, but that’s an obviously contentious issue, as you know better than me! They just need to each have a method of choosing between their two measurements whereby they are confident their choice is independent of the measurement the other person is choosing. One of them could flip a coin, the other use the stock market. Whatever.)
What Bell showed is that the statistics of the measurement outcomes could not be achieved in a local physical theory. This is normally done in the form of a bound which says “in any local theory, the fraction of outcomes they see which obey BLAH must be less than BLAH.” This is the Bell “inequality”. And then we do the experiment and exceed that bound.
It’s possible to turn Bell’s scenario (or more often a version of it called the CHSH inequality) into a game where the independent measurement choices are being enforced by some skeptics (who flip a coin, say) that are testing the claimed psychic abilities of Alice and Bob. Bell’s theorem then says “If they are *not* psychic, then they will only win this game at most 75% of the time. Thus if we see them winning (say) 85% of the time we will accept they are psychic”. The connection with locality here is that one way to completely shield the psychics from cheating is to have them spacelike separated, so that any signal containing information about what is going on at the other location would have to travel faster than light.
Annoyingly it is not possible to set that particular Bell/CHSH game up so that the psychics win 100% of the time. Thus if you are trying to explain to people why it is surprising, you have to convince them about statistics: namely, that if we repeat a game 1000 times, each round of which can be won with probability at most 3/4, and we then see them actually win closer to 850 times than the expected 750 times, it is overwhelmingly likely that one of the assumptions which led us to thinking it could only be won 3/4 of the time must be violated. This assumption is that of “locality”, namely that whatever is going on at Alice’s side is unaffected by/independent from whatever is going on at Bob’s side, no matter how we try and separate them.
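If it helps, the arithmetic of that game can be sketched in a few lines of Python. This is the standard textbook formulation (deterministic local strategies, and the usual angle-based quantum strategy on an entangled pair), not the misty-state notation; the specific angles are the conventional optimal choice.

```python
import itertools, math

# Classical bound: a deterministic local strategy is a pair of answer tables
# a(x), b(y) with outputs in {0,1}; the referees choose x,y uniformly and
# the players win iff a(x) XOR b(y) == x AND y.
def classical_win(a, b):
    wins = sum((a[x] ^ b[y]) == (x & y) for x in (0, 1) for y in (0, 1))
    return wins / 4

best_classical = max(
    classical_win(a, b)
    for a in itertools.product((0, 1), repeat=2)
    for b in itertools.product((0, 1), repeat=2)
)

# Quantum strategy: share a maximally entangled pair and measure at angles
# alpha_x, beta_y; the probability the two outcomes agree is cos^2(alpha - beta).
alpha = [0, math.pi / 4]
beta = [math.pi / 8, -math.pi / 8]
def agree(x, y):
    return math.cos(alpha[x] - beta[y]) ** 2

# Win when outcomes agree for (x,y) != (1,1), and disagree for (1,1).
quantum_win = (agree(0, 0) + agree(0, 1) + agree(1, 0) + (1 - agree(1, 1))) / 4

print(best_classical)  # 0.75
print(quantum_win)     # ~0.8536, i.e. cos^2(pi/8)
```

Enumerating all 16 deterministic strategies confirms the 75% bound; the quantum strategy wins about 85.36% of the time, which is why seeing ~850 wins out of 1000 rules out locality.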
The struggle with this as a science communicator is that people suck at understanding statistics, i.e. what “overwhelmingly likely” really means and so on. Even physics students, never mind the layman!
Fortunately there is a different version of Bell’s theorem, due to Lucien Hardy, which is "not statistical" in that sense. It involves the exact same setup and assumptions of Bell’s theorem, but you can understand the contradiction with locality via a kind of logical argument, rather than a statistical one. You still need to run the experiment multiple times to check that locality does get violated, but it doesn’t involve statistical analysis, it just involves seeing “forbidden” outcomes on some occasions.
And that’s the version I chose to present in Part II. So if you follow that argument, you understand (to the extent it can be understood!) Bell’s theorem.
I have also taught the CHSH version on occasion (the misty state formalism will let you do the calculation), but it adds another layer of statistical complexity which students start thinking is the “loophole”, which it is not. Here is a draft from an early version of the book where I was considering teaching Bell’s theorem with the CHSH version.
Do you address the uncertainty principle?
Not explicitly. But it’s easy to do.
There are simple demonstrations of an uncertainty principle you could do with the balls/boxes. Basically you can set up some configuration of boxes, and say “measurement X is to drop balls through this configuration and then observe the color(s) of the ball(s) that emerge”. Then you take a distinct setup of different boxes and say “measurement P is to drop balls through this setup and observe the color”. The uncertainty principle manifests itself in the claim: “There is no misty state you can possibly prepare such that if you drop it into measurement X the balls will emerge with a definite color, and such that if you drop it into measurement P they will also emerge with a definite color. You must have some uncertainty about one or the other measurement outcome.”
In more sophisticated versions you can look at the probabilities of different colors emerging and see that for certain choices of X and P some statistical rules are followed (the normal way the HUP is stated), but even the simple version I just presented is surprising from the classical perspective, because in classical physics measurement outcomes can in principle be completely predicted given arbitrarily good initial information about the system (the balls) and the measurement (the boxes followed by observing the ball color).
Note there is nothing here about “you drop the balls into X and then into P and this causes one to disturb the other” or similar nonsense. Heisenberg was initially confused on this point, and some of that confusion has propagated. Ignore it. There are statements you can make like that, but they are less interesting.
So the uncertainty principle is really a statement about “the maximal certainty you can have about two different (potential) measurements by preparing even the very best misty state available to you”. It’s a radical departure from classical physics: there, maximal certainty is complete certainty; here it is not.
Why did I not make a big deal of it? Not sure! But it is implicitly there in a basic example: Let us define measurement P to be “drop the ball through a PETE box and then observe” and measurement X to be “do not drop the ball into a box, just observe its color directly”. Now, if you want to be certain about the outcome of the P measurement, you need to prepare either the misty state [W,B] or the misty state [W,-B] to drop through the PETE box. Then you will know for sure the color the ball will be when you measure P; this is the core calculation we do with PETE boxes. However, if you prepare one of these two misty states, your knowledge about the outcome of an X measurement is completely random. You have maximal uncertainty about X, so to speak, because either color has equal likelihood of being the outcome of the measurement.
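For the numerically inclined, here is a small Python check of that trade-off, written in standard amplitude language rather than mists: a one-ball state is a pair of real amplitudes (cos θ, sin θ), X means observing the color directly, and P means applying the PETE rule (a Hadamard, in standard jargon) before observing.

```python
import math

# Any real-amplitude one-ball state is (c, s) with c^2 + s^2 = 1:
# amplitude c for white, s for black.
def max_certainties(theta):
    c, s = math.cos(theta), math.sin(theta)
    px = max(c * c, s * s)        # best achievable X-outcome probability
    pw = (c + s) ** 2 / 2         # P outcome "white" (amplitude (c+s)/sqrt(2))
    pp = max(pw, 1 - pw)          # best achievable P-outcome probability
    return px, pp

results = [max_certainties(k * math.pi / 10000) for k in range(10000)]

# Nobody reaches (near-)certainty about both X and P...
assert all(not (px > 0.99 and pp > 0.99) for px, pp in results)

# ...and the combined certainty is bounded (by 1 + 1/sqrt(2), it turns out).
best_combined = max(px + pp for px, pp in results)
print(best_combined)  # ~1.7071
```

The scan over states shows certainty about X (px = 1) forces pp = 1/2 and vice versa, exactly the "maximal certainty is not complete certainty" point above.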
The more general statement of the HUP is a bound on the maximal predictability you can have given any pair of measurements. But if you’ve internalized the misty states then you kind of know this.
Scott Aaronson says the source of “all quantum weirdness” is the concept of “negative probability.” Is that what B is? A negative probability? Which cancels out/interferes with positive probabilities?
Great question, a lot of potential for confusion here. The short answer is no: the misty states are directly comparable to standard quantum states. In a standard quantum state (which is normally written as a vector containing numerical entries) probabilities arise by squaring the vector entries. The misty equivalent of this squaring rule is presented towards the end of Part II.
The numerical entries in a quantum state vector (which can actually be complex numbers as well as negative ones) are known as amplitudes. Thus the misty state formalism is also based on amplitudes, and the whole formalism is made possible because it turns out that various things we would do to vectors in quantum theory we can replace with what a computer scientist would call a “string rewriting rule”.
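As an illustration of that string-rewriting point, here is a minimal Python sketch of how the PETE box's rule (W → [W,B], B → [W,-B]) could be implemented as a rewrite on lists; the (sign, color) tuple encoding is my own choice for this sketch, not the book's notation.

```python
# A mist is a list of (sign, color) entries; a box rewrites each entry
# and we concatenate the results -- the string-rewriting flavor of
# applying a linear map.
PETE = {
    'W': [(+1, 'W'), (+1, 'B')],
    'B': [(+1, 'W'), (-1, 'B')],   # the minus sign enables cancellation
}

def apply_pete(mist):
    return [(s * s2, c2) for (s, c) in mist for (s2, c2) in PETE[c]]

def net_counts(mist):
    counts = {'W': 0, 'B': 0}
    for s, c in mist:
        counts[c] += s
    return counts

# One PETE box on a white ball: W -> [W, B]
print(net_counts(apply_pete([(+1, 'W')])))               # {'W': 1, 'B': 1}
# Two PETE boxes in a row: the signed B entries cancel, leaving W for sure.
print(net_counts(apply_pete(apply_pete([(+1, 'W')]))))   # {'W': 2, 'B': 0}
```

The second line is cancellation of positive and negative amplitudes in action: after two boxes the black entries annihilate and only white survives.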
A key feature of quantum theory is interference, which is the cancellation of positive and negative amplitudes, and it underpins a lot of the quantum phenomena I chose to demonstrate in the book.
So what’s the deal then with negative probabilities?
It turns out that you can do quantum mechanics in a completely different mathematical formulation that is not based at all on vectors of amplitudes. There is a precise mapping between regular quantum theory and each of these mathematically distinct formulations (there are many such consistent formulations, in fact). Because the mapping is rigorously well understood, you can choose whether to analyse a problem of interest in the standard formulation or in one of these other approaches.
Many of those other formulations have the feature that they do not compute the probability of seeing something by taking the square of anything. In fact they look much more like a regular probability theory, where you just sum up the probabilities of different alternatives to find the total probability, or multiply the numbers to find a joint probability of two events, and so on. (Probabilities of what, you may ask? Well, that depends on which of the options you choose; there is no “single best choice” unfortunately.) Of course, because these other formulations are equivalent to the standard quantum theory, they must be able to explain all the same “weird” phenomena. Typically this works because somehow, somewhere, some of the probabilities end up being negative, which is definitely not a feature of regular probability theory. Miraculously these formulations remain consistent because the negative probabilities always end up being assigned to unobservable events. They play a critical role, but in a sneaky fashion: whenever you compute the probability of something that can actually be observed, the negative probability contributions are always outweighed by positive ones of at least equal magnitude coming from somewhere else.
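To make "negative probabilities assigned to unobservable events" concrete, here is one standard example (my choice of example, not from the book): the Kirkwood-Dirac quasiprobability, which assigns joint "probabilities" to the outcomes of two incompatible measurements. The observable marginals come out as ordinary non-negative probabilities, while a joint entry can go negative.

```python
import numpy as np

# A real qubit state, and two measurements: Z (observe directly) and
# X (the PETE/Hadamard basis).  The Kirkwood-Dirac quasiprobability
# q(x, z) = <psi|z><z|x><x|psi> plays the role of a joint distribution.
theta = 3 * np.pi / 8
psi = np.array([np.cos(theta), np.sin(theta)])
z_states = [np.array([1.0, 0.0]), np.array([0.0, 1.0])]
x_states = [np.array([1.0, 1.0]) / np.sqrt(2), np.array([1.0, -1.0]) / np.sqrt(2)]

q = np.array([[(psi @ z) * (z @ x) * (x @ psi) for z in z_states]
              for x in x_states])

# The marginals reproduce the ordinary squared-amplitude probabilities...
assert np.allclose(q.sum(axis=0), [np.dot(z, psi) ** 2 for z in z_states])
assert np.allclose(q.sum(axis=1), [np.dot(x, psi) ** 2 for x in x_states])
assert np.isclose(q.sum(), 1.0)

# ...but a joint (never directly observed) entry is negative.
assert q.min() < 0
print(q)
```

Every number you could actually check in the lab (a row sum or a column sum) is an honest probability; only the joint entries, which no single experiment reveals, go negative.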
It’s debatable (and people do debate it) how much understanding these other formulations contribute to questions about “what is really going on”. In practice there are strengths and weaknesses of doing a calculation in one of these other formulations, and in my day job as a quantum optician it has proven useful to be adept at 2 or 3 of them.
Are imaginary/complex numbers implicit in your mist system, or do you dispense with them?
Quantum states in the standard formulation are vectors of complex numbers (“amplitudes”; see previous answer). However, it turns out that many “paradigmatic” quantum phenomena (double-slit style interference, the uncertainty principle, teleportation, Bell’s theorem, quantum computational speedup, …) all have at least one example wherein you can demonstrate them using only quantum states for which the amplitudes are real numbers. Moreover, they all have an example wherein the amplitudes are rational real numbers. These examples are “the real thing” (hah!) in as much as they do not need to drop/skip/lose any part of the phenomenon in question.
So, you can teach all of the key counterintuitive features of quantum theory without making the student learn anything about complex numbers. This is not to advocate dispensing with complex numbers per se; there are subtle (and I would claim quite deep) features of quantum theory which do depend on them. But I bet if you ask your average working quantum mechanic to come up with a feature that I cannot also demonstrate using only real-number-entried quantum states they will fail. I’ve been playing this game a long time :)
Fine, so right now you’re thinking that the answer to your question is that the misty states are simply reproducing those things that can be done using quantum states/vectors which have only real-numbered entries. While it certainly is true that the misty states can reproduce all those phenomena (as long as the amplitudes are rational real numbers), there is a little more to the story, and I think it’s interesting, though tangential to the goal of understanding the basics of quantum theory per se. Feel free to stop now!
I will explain two distinct things:
(A) Simulating complex quantum systems with real ones
One natural question, if you are a computer scientist or mathematician, is whether I lose calculational efficiency by using the misty state formalism. That is, if you are writing a calculation out on paper, or programming it into Python, will you cause a massive blowup in the size of your calculation by restricting yourself to this set of rules rather than allowing yourself the full mathematical formalism of quantum theory?
The answer to that question is no. There will be a slight increase, but not much. So, how would you deal with a calculation that had complex numbers in it? You can play a trick, though it has conceptual pitfalls. First let me reiterate that in the book I was extremely careful to clearly separate (i) the physical stuff in the world, i.e. the balls and the boxes, (ii) the observations you make, which are the input/output data you want to explain, and (iii) the calculations you draw on a piece of paper (using the misty states) to achieve that goal. Every ball drawn in the book falls into the category of either a sketch that is meant to be a real physical ball or observation thereof, or a “mathematical ball”, i.e. part of a calculation (that in principle could just live in my head if I was smart enough). Now, the trick if you want to do a calculation which supposedly requires complex amplitudes is that you can add a single fictional ball to the mist of any calculation you do. This is not a “real ball out there in the world”; it’s a mathematical trick. This extra ball is (mathematically) fixed to be white if we want the amplitude to be real and (mathematically) fixed to be black if we want it to be imaginary (every complex amplitude can be split into a real and imaginary part). I’m not going to explain in detail how it works. To make matters more confusing, if for some reason you were building a quantum computer and your gates (boxes) were restricted to the ones which only involve real numbers, you may want to create a physical qubit that mimics the fictional ball I just described. An example is here: https://arxiv.org/abs/quant-ph/0210187
At a physical level this fictional ball would be pretty implausible to believe existed and was the actual origin of why we use complex amplitudes, because every time the quantum state of any system in the whole universe changed it would need to “reach out” to the same, single fictional ball. Surprisingly (at least to me at the time) it is possible to replace that one fictional ball with many, and then let every system in the universe have its own fictional ball that it can interact with. This was shown here: https://arxiv.org/abs/0810.1923. More precisely, it was shown that this is possible if you do it for all the systems in a single setup. This made the fictional ball(s) slightly more plausible as reflecting “genuine physical objects”, though nobody really believed in them. I describe all these convolutions because recently a very nice new result shows that even these “local fictional balls” won’t work if you make them respect certain natural assumptions: https://arxiv.org/pdf/2101.10873.pdf
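For the mathematically inclined, the one-extra-ball trick is easy to state in linear algebra, which is roughly how those papers phrase it. A sketch under that framing (the code is my illustration; the cited papers do considerably more): stack the real and imaginary parts of a complex state into one real vector of twice the length, and replace each complex gate by a real orthogonal block matrix.

```python
import numpy as np

# Encode a complex state c as a real vector (Re c ; Im c): the extra
# block index plays the role of the single fictional real/imaginary ball.
def to_real(c):
    return np.concatenate([c.real, c.imag])

# A complex unitary U becomes the real orthogonal matrix
#   [[Re U, -Im U], [Im U, Re U]].
def to_real_gate(U):
    X, Y = U.real, U.imag
    return np.block([[X, -Y], [Y, X]])

U = np.array([[1, 0], [0, 1j]])             # a phase ("S") gate
psi = np.array([1, 1j]) / np.sqrt(2)

R = to_real_gate(U)
assert np.allclose(R @ R.T, np.eye(4))      # still a legal (orthogonal) evolution

r = R @ to_real(psi)
probs_real = r[:2] ** 2 + r[2:] ** 2        # squaring rule, summing over the flag
probs_complex = np.abs(U @ psi) ** 2
assert np.allclose(probs_real, probs_complex)
print(probs_complex)  # [0.5 0.5]
```

The measurement statistics agree exactly, at the cost of one extra two-valued degree of freedom shared by the whole calculation.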
(B) Quantum computational universality and the limited set of necessary boxes for mist-creation
In the book I used only a single “actually quantum” box, the PETE box. By this I mean it is the only box that has “mist-creating” properties. All the remaining boxes introduced are things that just shuffle colors around; they would be at home in a classical computer, for example. Only having to introduce a single new mist-erious thing is very nice pedagogically.
You may wonder about other mistchanging boxes. Of course they do exist. For example there is a oneball box for which the evolution would look like this:
B > [W,W,W,-B,-B,-B,-B]
W > [W,W,W,W,B,B,B]
Well, you may wonder, perhaps there is a box which does this:
B > [W,W,W,W,B,B,B]
W > [W,W,W,W,B,B,B]
and the answer is no, there isn’t. Working out which boxes are physically possible in the misty-state formalism is not at all obvious (particularly once you go to multi-ball boxes). In the standard quantum formalism the answer is very simple: they are the analogs of rigid rotations of complex-entried vectors. That observation resulted in probably the most trivial Nobel prize awarded in history. However, I am trying to teach the basics of quantum theory without teaching high-level complex geometry, so a downside of this approach is that the students are not really going to be able to use the geometrical insights that the working quantum mechanic relies on.
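In the standard formalism the test for a legal one-ball box is simple enough to code: write the two (normalized) output mists as columns of a matrix and check the columns are orthogonal unit vectors. Note the seven-ball box above only passes this test if the black balls in the B line carry minus signs, PETE-style; the matrices below are my encoding of the two examples.

```python
import numpy as np

# A one-ball box as a 2x2 matrix: rows are the W- and B-amplitudes of the
# output mist, columns are the inputs W and B.  A box is physically allowed
# exactly when the normalized matrix is orthogonal (unitary, in standard terms).
def is_allowed(box):
    M = np.array(box, dtype=float)
    M = M / np.linalg.norm(M, axis=0)       # normalize each output mist
    return np.allclose(M.T @ M, np.eye(2))

# W -> [4 whites, 3 blacks]; B -> [3 whites, 4 minus-blacks]: allowed.
box_ok = [[4, 3],
          [3, -4]]

# A box sending both W and B to [4 whites, 3 blacks]: not allowed.
box_bad = [[4, 4],
           [3, 3]]

print(is_allowed(box_ok))   # True
print(is_allowed(box_bad))  # False
```

The second box fails because its two columns are identical rather than orthogonal, which matches the verdict above that no such box exists.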
Now for the very interesting part, the genesis of the whole misty method: You may wonder whether my reliance on only the single PETE box is limiting, in the sense of (A) above: does it limit the calculations you could do, and the phenomena you can demonstrate? I just told you all of these (infinitely many) other mist-creating/destroying boxes exist. Yet in the book I used only one of them, the PETE box, plus some classical (“computer rules”) boxes.
The answer is that it is not limiting: every calculation can be done (to good-enough accuracy, and again, perhaps with a small overhead) using only PETE boxes and the classical boxes. This is a remarkable mathematical result due to Shi, leveraging another powerful result (I think due to Kitaev). There is a citation at the end of the book. A few years ago I was in the middle of pondering this result when I realized I was running late to give a talk at a math camp for 12-14 year olds which was being run in part by my friend PETE Shadbolt. I raced for the tube, and while on it thought about what I could explain to these kids that wasn’t the usual jargon-filled quantum fluff. And so here we are.
Your mist system is time-independent, right?
Time is there but I don’t make a big deal of its role. It’s there when you stack boxes, because you get sequential evolution that depends on the ordering of boxes the balls pass through. This is a very discretized time, more like a computer scientist’s conception, because I don’t say anything about how long it actually takes for a ball to emerge (except briefly in the “Archimedes” story). When we actually make quantum systems undergo the PETE box evolution, how long it takes (as measured by my watch), and what physical setup you need in order to make it happen, depend greatly on the type of physical system. Regardless of the physical implementation, the final change of misty state is identical. This is a beautiful feature of quantum theory: for many purposes these abstract mathematical objects can be unpinned from the details of the many different physical systems they describe.
Perhaps you want to know what happens if I set things up to make a PETE box, but then I stop it early, i.e. I do not wait until the full evolution has occurred.
Basically what happens is the evolution goes like this:
W > [W,W,W,...(m repetitions)...,W,B,B,...(n repetitions)...,B]
B > [W,W,W,...(n repetitions)...,W,-B,-B,...(m repetitions)...,-B]
If you let it evolve for only a very short time, m is much larger than n. The PETE box is implemented by choosing the physical time for which m=n.
But this is very ugly, because of all of these extra copies floating around. If m=n=100, why is that equivalent to the PETE box evolution:
W > [W,B]
B > [W,-B]
which we know and love?
The answer is that if n and m have any factors in common, you can divide them out. Ultimately this is because if you go back and look at the "squaring rule" for computing probabilities of what you observe, you will see it depends on ratios. That ratio dependency lets us do simplifications when using the mists which I didn't need to discuss in the book because they didn't come up for the examples I used. When I teach I often do introduce them.
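The ratio point is easy to check numerically. Assuming the squaring rule for a one-ball mist with m whites and n blacks (probability of observing white = m²/(m²+n²)), common factors visibly drop out:

```python
from math import gcd

# Squaring rule for a one-ball mist with m white entries and n black
# entries: the probability of observing white depends only on the ratio m : n.
def prob_white(m, n):
    return m * m / (m * m + n * n)

assert prob_white(100, 100) == prob_white(1, 1) == 0.5   # [Wx100, Bx100] ~ [W, B]
assert prob_white(200, 150) == prob_white(4, 3)          # 0.64 either way

# Dividing out the common factor is therefore always safe:
def simplify(m, n):
    g = gcd(m, n)
    return m // g, n // g

print(simplify(100, 100))  # (1, 1)
print(simplify(200, 150))  # (4, 3)
```

This is the sense in which the m=n=100 mist "is" the PETE output [W,B]: every probability you could compute from it is unchanged by the simplification.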
My final remark about the mists of time would be that the "arrow of time" is also skulking around because of the distinction between past and future when it comes to you making observations and “collapsing” the mist.
I assume your black/white system applies best to binary properties, like spin. Can it also account for continuous, time-dependent properties like position and momentum?
It can, because the misty formalism is “universal”, in as much as you can use it to do any quantum calculation with only a small overhead. I should reiterate that I am not advocating that we recast all of quantum theory into this formalism. The misty state picture is a good way of getting people to the heart of some nontrivial quantum theory without them having to absorb a bloatload of seemingly irrelevant math. But that math is not irrelevant if you actually want to work in the field; it makes many things much easier.
Ok, so how would you deal with a physical property like position that is not binary? You just replace the binary choice black/white with a continuous choice of colors, such as those of the visible spectrum (a rainbow as it were).
As you know, a real physical system has more than one physical property; perhaps it has a position AND a spin. How would that get represented? Let me use lower case letters like r,g,b for colors (red, green, blue) which are going to be possibilities within our continuous “rainbow” degree of freedom (like position or momentum). And I will continue to use B,W as the binary variables, for spin say. A single system can be in a misty state like one of these three options:
[rB,rW]
[rB,gB,bB]
[rB,gW]
These are very different states. In the first, the particle is definitely at “position” red; its spin is in a superposition of B and W. In the second, the particle definitely has spin B, but it’s in a superposition of three distinct locations r,g,b (which, if you’re a good experimentalist, these days might be several meters apart!). In the last one, the two degrees of freedom of the particle, its position and its spin, are entangled. It has neither a definite position nor a definite spin, but its spin and position have been correlated somehow.
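If you like matrices, there is a standard way (my addition, not the book's) to check which of these factorize: write the amplitudes as a table with one row per color and one column per spin value; the state splits into "color mist" times "spin mist" exactly when the table has rank 1, and is entangled otherwise.

```python
import numpy as np

# Rows = color (r, g, b); columns = spin (B, W).  Each entry is the
# amplitude of that color/spin combination in the mist.
states = {
    "[rB,rW]":    np.array([[1, 1], [0, 0], [0, 0]]),   # definite position
    "[rB,gB,bB]": np.array([[1, 0], [1, 0], [1, 0]]),   # definite spin
    "[rB,gW]":    np.array([[1, 0], [0, 1], [0, 0]]),   # entangled
}

for name, table in states.items():
    rank = np.linalg.matrix_rank(table)
    print(name, "entangled" if rank > 1 else "not entangled")
```

The first two tables have rank 1 (each is an outer product of a color mist and a spin mist); the third has rank 2, so no such factorization exists.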
Perhaps you will have noticed that the way we are dealing with multiple degrees of freedom is exactly the same as the way we dealt with the black/white physical properties of two different physical balls. There we would write things like [BW,WB] and so on. How can this be? How can I tell from staring at the mathematics (the misty state) whether we are talking about one system with two degrees of freedom, or two systems each with one degree of freedom?
The answer is, we can’t! And this is not a bug of the misty states, the exact same thing is true in standard quantum theory. The same mathematical method for “combining” the states of multiple systems (I urge you to go back and read the parable of the lunches in Part I) is used for combining the different degrees of freedom of a single physical system. Being a physics student can be a misery because of stuff like this. You need to know the context and notation that the math is being applied to, or you will screw everything up. And consequently make your Professors miserable.
The very last thing to say is that in quantum theory position and momentum of a single particle are the same degree of freedom in the sense just discussed. They are related by the continuous version of the PETE box. Recall my answer about the uncertainty principle? I gave a simple example of two measurements, X and P, one of which was to observe the color directly, the other to do a PETE box and observe. These two measurements are being performed on a single degree of freedom, the black/white color of a single ball. But they are distinct measurements, and as we saw they are “incompatible” in as much as if you are certain about the one measurement outcome you are maximally uncertain about the other. I chose the letters X and P because those are often used for position and momentum in quantum theory, and this was actually a very simple “binary position/momentum” example. (Didn’t want to confuse you by mentioning it there, but you opened yourself up to it with this question!). For the continuous case the state of definite position (“r” say) is a state of maximally uncertain momentum. Of course the math is a bit more messy, because the continuous PETE box needs to act on the mist of definite position/color “[r]” and turn it into a mist that contains an infinite number of colors […,r,g,b,…], which it turns out is the state of definite momentum (and maximally uncertain position).
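In standard language that continuous PETE box is the Fourier transform, and a discretized version is easy to play with (note that complex amplitudes do show up here in general; this sketch is in the standard formalism, not the mists):

```python
import numpy as np

# Discretized "continuous PETE box": the discrete Fourier transform.
# A state of definite position (a single color in the mist) is sent to a
# mist spread evenly over every color: definite momentum, maximally
# uncertain position.
N = 8
definite_position = np.zeros(N)
definite_position[2] = 1.0                 # "the ball is at color #2"

momentum_state = np.fft.fft(definite_position) / np.sqrt(N)

# Every position amplitude now has equal magnitude: a perfectly flat mist.
assert np.allclose(np.abs(momentum_state), 1 / np.sqrt(N))
print(np.abs(momentum_state) ** 2)         # uniform, each entry 1/8
```

Exactly as in the binary X/P example: total certainty about position buys total uncertainty about momentum, and vice versa.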
Quantum theory is kinda boring once you grok the binary B/W stuff I focus on in the book: after that it’s all the same stuff repeated over and over again, with just more degrees of freedom thrown in.
Does your book address the GRW model (or other spontaneous collapse theories)? The many worlds hypothesis? The pilot wave model? It from bit? QBism?
I tried very hard to make the book completely neutral interpretationally. Partly so it stands the test of time (I ain’t writing another one!), but primarily so that a student doesn’t inherit my biases. They should have their own crack at understanding things. I do, in fact, make some reference to the core conceptual underpinnings of most of those interpretations (without naming them), but I do not analyse them. The book is primarily about laying out a pedagogical method by which someone can learn quantum theory; its secondary purpose is to enthuse the student that something weird is out there which needs explaining. Of course I also picked topics to cover that elucidate quantitative constraints on any proposal for explaining “what’s really going on”: nothing worse than spending your time on an idea and then finding out it’s already ruled out by some known theorem.
What’s your favorite quantum interpretation, if any?
I don’t want you to inherit my biases :)
What’s your take on interpretations that make consciousness fundamental to reality?
Pretty skeptical, partly because I find pinning down rigorously what we mean by “reality” just as slippery as most people will agree it is to pin down rigorously what we mean by “consciousness”!
Do you think work on quantum computation might lead to a clearer understanding of QM?
Here is an annoying answer: It already has, because it brought in smart people and their mathematical tools from other fields.
I do think computational complexity (both classical and quantum) is already giving us a much deeper understanding about why certain processes do or do not occur easily in nature. But as for the kind of thing I suspect you’re angling towards (“we all live in a simulation”) I’m personally skeptical this kind of thinking will help us make progress.