The initial ambition of anyone who studies quantum mechanics is to truly *understand* it. What do we mean by that? What does it mean to truly *understand* something? Understanding quantum mechanics is probably quite different from understanding music or, say, understanding politics. In my humble view, understanding quantum mechanics amounts to understanding the wavefunction. Hence, the question reduces to: what does it mean to understand the wavefunction?

### The Copenhagen interpretation of quantum mechanics

Let us be clear from the outset: the wavefunction is *not* real. It can’t be: it *represents* reality—or at least some *aspects* of reality. Hence, the wavefunction is like, say, a velocity vector: the velocity vector is not real either, but it *represents* something real—the velocity of an object, to be precise.

A velocity vector is a mathematical object. As a vector, it has a magnitude and a direction. As such, it is a *mathematical* vector. However, it also has a physical dimension: we express the magnitude of this vector in *meter per second*. That’s why the velocity vector is a *physical* vector. Its physical dimension (m/s) tells us what *aspect* of reality it describes—a velocity, in this particular case. In physics, we will usually analyze physical quantities as a *function* in time and in space. We write: **v** = **v**(**x**, t). [The bold face we use for the velocity **v** (and for the position vector **x**) shows we are thinking of it as a *vector* quantity: we are interested in the magnitude as well as the direction.]

The wavefunction is a complex function—in a literal as well as in a mathematical sense. Complex functions relate some *argument* to two dimensions: a so-called *real* and a so-called *imaginary* dimension. The Copenhagen interpretation of quantum mechanics maintains these two dimensions are mathematical dimensions only: they are just two numbers. According to the Copenhagen interpretation of quantum mechanics, we should not try to associate any *physical* dimension with these two numbers. Why not? Because it supposedly leads to inconsistencies.

In my view, these inconsistencies are only paradoxes, so they can be understood and solved. A paradox is an *apparent* contradiction, not a *real* one. For example, the wavefunction of an electron has a weird 720° symmetry. Real objects in space—such as a car, your wife, or an elementary particle such as an electron—have a 360° symmetry: if you rotate them over 360°, you get the same object. However, if your wavefunction describes an electron going through a Stern-Gerlach apparatus, for example, and then you imagine rotating that apparatus over 360° and doing the experiment again, you do *not* get the same wavefunction. You get *almost* the same wavefunction, but not *quite* the same. Most academics therefore conclude the wavefunction cannot represent anything real, because objects with a 720° symmetry don’t exist.

In my humble view, this is eminently stupid and shortsighted. The object below—for which I have to credit a brilliant video game designer—has a 720° symmetry, and for the same reasons as our wavefunction: it is connected, so to speak, to the observer.

I also paste a link here to an illustration of Dirac’s belt trick, which shows you exactly the same thing. 🙂 But I am getting ahead of myself here. Let’s first cover some basics. 🙂

### Math is a language – and language is always ambiguous

Physicists like to generalize. They talk about states and state vectors, or bosons and fermions, or wavefunctions, and forget to warn the reader they are actually talking about something very specific. The wavefunction for an electron orbital, or for a hypothetical particle traveling in free space, or for a photon, or for one of the states in an *n*-state system—and I can list many more *instances* of the wavefunction—are all very different beasts, and trying to cram them all together in one generic lecture on quantum math will not necessarily further your understanding.

For example, I found it rather shocking to find out, after all of the lectures on the behavior of bosons (force carriers, as opposed to *matter*-particles), that *the* archetypal boson—the photon—always has its spin up or down, never zero, although the theoretical spin-one boson has *three* possible spin states: −1, 0, and +1. In other words, the theoretical spin-one boson doesn’t exist, and so you wonder why physicists even talk about it.

Another thing that bothers me is that (real or possible) *physical states* are represented by so-called state *vectors*: the term suggests a resemblance with real vectors in three-dimensional space—think of vectors representing a direction, a force, a velocity, or whatever other reality has a magnitude, a direction, and something to grab onto—and… Well… No. State vectors are very different beasts too. In fact, some of the addition and multiplication rules—and the idea of *base* states—are encouragingly familiar concepts, but you almost never feel like you get a satisfactory description of what a state actually *is*.

This Survivor’s Guide will, hopefully, give you some informal idea of what the physicists are talking about, which might be helpful in terms of… Well… Surviving a course on quantum mechanics… 🙂

### States

Dirac invented the bra-ket notation for *initial* versus *final* states. The initial state might just be: *I don’t have a clue*. In that case, you’d probably write it like |ψ〉 or |*s*〉. The final state might really be final, which means you’ve made an observation and so you’ve established the position of a particle, or you measured its linear or angular momentum, or its energy, or whatever else you wanted to know about it. Those states are final because the atomic or sub-atomic scale ensures your measurement will usually destroy whatever was there. Such a state will usually be written as 〈*x*|, 〈*↑*|, 〈E|, 〈p|, 〈1| or whatever else looks a bit more specific than those Greek letters. To be specific, the 〈*x*|, 〈*↑*|, 〈E|, 〈p|, 〈1| might mean: we’ve intercepted our particle at *x*, its spin is up, its energy is E, its momentum is p, or it’s in state 1 (whatever state 1 might be). However, a final state can also be intermediary. In fact, an initial state might also be intermediary, so we can write something like this:

〈*x*|1〉〈1|*s*〉

This probably means something like: our particle was at *s*, went to 1, and ended up at *x*. The combination of an initial and a final state, such as 〈*x*|1〉 or 〈1|*s*〉 here, is associated with an *amplitude* which, for all practical purposes, is just a complex number, and so we’re effectively multiplying two complex numbers here. To find what? Well… The amplitude for our particle to go from *s* to *x* via 1.
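As a quick numerical illustration (the amplitudes below are made-up numbers, not the output of any actual propagator), multiplying two amplitudes multiplies their magnitudes and adds their phases:

```python
import cmath

# Hypothetical amplitudes, chosen just to illustrate the rule:
# <1|s> : amplitude to go from the initial state s to state 1
# <x|1> : amplitude to go from state 1 to position x
amp_1_s = cmath.rect(0.8, cmath.pi / 6)  # magnitude 0.8, phase 30 degrees
amp_x_1 = cmath.rect(0.5, cmath.pi / 4)  # magnitude 0.5, phase 45 degrees

# The amplitude to go from s to x via 1 is the product <x|1><1|s>:
amp_via_1 = amp_x_1 * amp_1_s

print(abs(amp_via_1))          # ~0.4, i.e. 0.8 * 0.5
print(cmath.phase(amp_via_1))  # ~5*pi/12, i.e. pi/6 + pi/4
```

So the probability (the absolute square of the amplitude) for the combined event is the product of the probabilities for each leg, but the phases also add up, and that is where interference comes from.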

Now you’ll want to know the numbers: how do we calculate 〈*x*|1〉 and 〈1|*s*〉? Good question. We have a *propagator* function for that. In fact, there is a whole *zoo* of propagator functions out there, and it’s pretty hard to make sense of them. So you get stuck almost immediately. That makes quantum mechanics hard: even the simplest of ideas—the idea of a particle traveling from one place to another in a given time—is incredibly complicated in quantum mechanics. But let’s give it a try.

### Feynman’s propagator

According to Feynman’s *path integral formulation* of quantum theory, a particle can travel from A to B following any path, as illustrated below. It’s even worse than that: for each path, the *time* that is needed to go from A to B can vary. So we have all of these *random walks*, so to speak, and Feynman tells us we should sum up all of the amplitudes for these various paths to get *the* amplitude for the particle to go from A to B. The Wikipedia article on this approach has a wonderful animation which shows how that works, and how that actually yields a pretty consistent picture! However, for a better understanding, it’s probably best to combine it with a thorough reading of Feynman’s Lecture on the Principle of Least Action. For each path, we have an *action* integral, which is going to give us the *action* for that path. The concept of physical action (*Wirkung* in German) is not very well known despite it being a rather logical concept: its physical dimension is that of force times distance times time. That’s equivalent to some energy (force times distance) being available for some time or, alternatively, some momentum (force times time) being available over some distance. Planck’s quantum of action is, effectively, the natural unit of action. Let’s quote Feynman here:

“Here is how it works: Suppose that for all paths, *S* is very large compared to *ħ*. One path contributes a certain amplitude. For a nearby path, the phase is quite different, because with an enormous *S* even a small change in *S* means a completely different phase—because *ħ* is so tiny. So nearby paths will normally cancel their effects out in taking the sum—except for one region, and that is when a path and a nearby path all give the same phase in the first approximation (more precisely, the same action within *ħ*). Only those paths will be the important ones.

In the limiting case in which Planck’s constant *ħ* goes to zero, the correct quantum-mechanical laws can be summarized by simply saying: ‘Forget about all these probability amplitudes. The particle does go on a special path, namely, that one for which *S* does *not* vary in the first approximation.’ That’s the relation between the principle of least action and quantum mechanics.”

So how does it work *exactly*? As mentioned above, the action integral is going to give us *S* for each path in space and in time, and the *amplitude* for that path will be proportional to *e*^{i·S/ħ}, so we’ll write that amplitude as *a*·*e*^{i·S/ħ}. The coefficient *a* is, obviously, a *normalization* constant. OK. That’s enough on this for now! 🙂
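To see the least-action mechanism at work, here is a toy sketch (my own illustration, not a real propagator computation): we parametrize a family of paths by a single deviation d from the least-action path and take the action to be quadratic in d, which is what a Taylor expansion around the stationary path gives (no linear term, because *S* is stationary there). The values of S0 and k are assumptions, chosen just to make the effect visible.

```python
import cmath

S0 = 1.0    # action of the least-action path, in units of hbar
k = 50.0    # assumed curvature of the action around that path
hbar = 1.0  # we work in natural units where hbar = 1

# One amplitude e^(i*S/hbar) per path, with S(d) = S0 + k*d^2:
deviations = [i * 0.01 for i in range(-300, 301)]
amplitudes = [cmath.exp(1j * (S0 + k * d**2) / hbar) for d in deviations]

total = sum(amplitudes)
far = sum(a for d, a in zip(deviations, amplitudes) if abs(d) > 0.5)

# Paths near d = 0 share (almost) the same phase and add up coherently;
# paths far from the least-action path oscillate rapidly and cancel out.
print(abs(total), abs(far))
```

The magnitude of the total sum is dominated by the near-stationary paths: the far-away slice largely cancels itself out, which is exactly Feynman’s point about *S* being large compared to *ħ*.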

**Note**: The story of how quantum mechanics becomes classical mechanics in the limiting case in which Planck’s constant *ħ* goes to zero is related to another one, although I should do a better job of pointing out *how exactly*. Look at the following super-obvious equation in classical mechanics:

*x*·p_{x} − p_{x}·*x* = 0

In fact, this equation is so super-obvious that it’s almost meaningless. [In case you wonder, *x* is just the position and p_{x} is the momentum in the *x*-direction.] Almost. It’s super-obvious because multiplication is *commutative* (for real as well as for complex numbers). However, when going from classical to quantum mechanics, replacing *x* and p_{x} by the position and momentum *operators*, we get an entirely different result. You can verify the following yourself:

*x*·p_{x} − p_{x}·*x* = i·ħ

As Feynman puts it: “If Planck’s constant were zero, the classical and quantum results would be the same, and there would be no quantum mechanics to learn!” Let me make two remarks here:

1. We should not put any dot (·) between our operators, because they do *not* amount to multiplying one by another. We just apply operators successively. Hence, commutativity is *not* what we should expect.

2. Also note that, when doing the same calculations for the equivalent *x*·p_{y} − p_{y}·*x* expression, we *do* get zero: position and momentum operators along *different* axes do commute.
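Both remarks can be checked numerically. The sketch below is a quick finite-difference check of my own, with ħ set to 1 and an arbitrary smooth test function: it applies the position operator and the momentum operator p = −i·ħ·d/dx successively, in both orders.

```python
h = 1e-5     # step for the central-difference derivative
hbar = 1.0   # natural units

def psi(x, y):
    # Arbitrary smooth test function; any well-behaved psi will do.
    return (x**2 + 0.5 * y**3) * (1 + 0.1 * x * y)

def p_x(f):
    # Momentum operator along x: -i*hbar*d/dx, by central difference.
    return lambda x, y: -1j * hbar * (f(x + h, y) - f(x - h, y)) / (2 * h)

def p_y(f):
    # Momentum operator along y: -i*hbar*d/dy.
    return lambda x, y: -1j * hbar * (f(x, y + h) - f(x, y - h)) / (2 * h)

x0, y0 = 1.3, 0.7
x_psi = lambda x, y: x * psi(x, y)  # position operator applied to psi

# Same axis: (x p_x - p_x x) psi = i*hbar*psi, so the difference is NOT zero.
comm_same = x0 * p_x(psi)(x0, y0) - p_x(x_psi)(x0, y0)
print(comm_same)  # close to 1j * hbar * psi(x0, y0)

# Different axes: (x p_y - p_y x) psi = 0, so these operators DO commute.
comm_diff = x0 * p_y(psi)(x0, y0) - p_y(x_psi)(x0, y0)
print(comm_diff)  # close to 0
```

Note that the non-zero result does not depend on the test function: whatever ψ you pick, the same-axis commutator spits back i·ħ times that very ψ, which is why we can write the operator identity without any ψ at all.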

### First Principles

Unfortunately, there are no First Principles in quantum mechanics. Those rules for adding or multiplying amplitudes are just mathematical rules: there is no real *physics* behind them. So it’s not like classical mechanics, or electromagnetic theory, where you get a very limited number of equations (Newton’s Laws in classical mechanics, and Maxwell’s equations for electromagnetism) and then you write 200 pages on their consequences.

No. It’s not like that. In quantum mechanics, you constantly flip between rough *heuristic *arguments and a ton of incredibly complicated mathematical treatments of the same. The heuristic argument provides some rationale for whatever it is that you are trying to do, while the mathematical treatment tries to show you it might make some sense.

You’ll say: what could First Principles possibly look like? I’d say we should have some consistent idea of what a photon and an electron actually *are*, and then we can discuss why and how they interact the way they do. You’ll say: “Oh. You’re just another crazy wanting to present some physical interpretation of the wavefunction.” And… Well… Yes. I mean… Look at *any* equation involving some wavefunction: it will also involve physical constants and whatever *physical* information we have about our object: its mass or its energy (these are equivalent, obviously), its (linear or angular) momentum, its position in space and time, and… Well… Whatever else you can think of. But then we say the (components of the) wavefunction itself have no *physical* dimension? They’re just some mathematical construct? Do a dimensional analysis of the equation below:

∇^{2}φ − (1/*c*^{2})·∂^{2}φ/∂t^{2} = (m^{2}·*c*^{2}/ħ^{2})·φ

All of the terms in this equation have a 1/m^{2} dimension if… Well… If the real and imaginary part of φ have no physical dimension. *Come on!* That doesn’t make sense: (the components of) φ *must* have some *physical* dimension too.

So the question is… We’re modeling *what* per square meter (1/m^{2}) here? Personally – but that’s just an uninformed *opinion*, of course – I think it’s an energy flow. Why? I don’t know. I need to work it out. But the m^{2}·*c*^{2}/ħ^{2} factor on the right-hand side of the equation definitely *re-scales* stuff in terms of natural units. Why? Because it’s the (inverse) square of the *Compton radius* of an electron: *a* = ħ/(m·*c*).
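A quick numerical check, using CODATA values for ħ, the electron mass, and *c*:

```python
hbar = 1.054571817e-34  # reduced Planck constant, J*s
m_e = 9.1093837015e-31  # electron mass, kg
c = 2.99792458e8        # lightspeed, m/s

# The Compton radius a = hbar/(m*c):
a = hbar / (m_e * c)
print(a)  # ~3.86e-13 m

# The m^2*c^2/hbar^2 factor is just 1/a^2, with dimension 1/m^2:
factor = m_e**2 * c**2 / hbar**2
print(factor)  # ~6.7e24 per square meter, i.e. 1/a^2
```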

Is the Compton radius a natural unit? In my *flywheel model of an electron*, it surely is. So… Well… What’s the model?

### The crazy stuff

*My* physical interpretation of the wavefunction is inspired by the mathematical similarity between the E = m·*a*^{2}·ω^{2} formula (the energy of *two* oscillators in a particular 90° phase combination) and the E = m·*c*^{2} formula. If this were to represent something real, then we need to give some meaning to the *c* = *a*·ω identity that comes out of it. Now, if we assume, just for fun, that E and m are the energy and mass of an electron, then the *de Broglie* relations suggest we should equate ω to E/ħ. As for *a*, the *Compton scattering radius* of the electron (ħ/(m·*c*)) would be a more likely candidate than, say, the Bohr radius, or the Lorentz radius. Why? Because we’re not looking at an electron in orbit around a nucleus (Bohr radius), and we’re also not looking at the size of the *charge* itself (classical electron radius), because we assume the charge is *pointlike*. We get:

*a*·ω = [ħ/(m·*c*)]·[E/ħ] = E/(m·*c*) = m·*c*^{2}/(m·*c*) = *c*

*Wow!* Did we just *prove* something? No. We don’t prove anything here. We only showed that our E = m·*a*^{2}·ω^{2} = m·*c*^{2} equation *might* (note the emphasis: *might*) make sense. Let me show you something else. If this *flywheel model* of an electron makes sense, then we can, obviously, also calculate a *tangential velocity* for our charge. The tangential velocity is the product of the *radius* and the *angular* velocity: *v* = *r*·ω = *a*·ω = *c*. So that’s great *visuals* too. But where did we get our *Compton scattering radius* from?
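This tangential velocity is easy to verify numerically, using CODATA values for the constants:

```python
hbar = 1.054571817e-34  # reduced Planck constant, J*s
m_e = 9.1093837015e-31  # electron mass, kg
c = 2.99792458e8        # lightspeed, m/s

E = m_e * c**2        # rest energy of the electron
omega = E / hbar      # angular frequency, from the de Broglie relation
a = hbar / (m_e * c)  # Compton radius

v = a * omega  # tangential velocity in the flywheel picture
print(v / c)   # = 1 up to rounding: the pointlike charge goes around at lightspeed
```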

Good question. *Great* question, actually. It comes out of the *Great Depths *of all of the quantum math, which I don’t want to explain to you because I don’t quite master it. However, I am happy to note that – in my search for First Principles – the formula is *consistent *with my *intuitive model *of an electron, which is… Well… A pointlike electric charge rotating around some center, as illustrated below.

As for the mechanism, I’ve detailed that *ad nauseam* in a bunch of papers already, so I am not going to repeat that here. The basics are easy: our rotating charge should also have some angular momentum, right? In fact, the Stern-Gerlach experiment tells us the angular momentum should be equal to ± ħ/2. Is it? To check, we need a formula for the angular mass, aka the moment of inertia. Let’s try the one for a rotating disk: I = (1/2)·m·*a*^{2}. The formula for the angular momentum itself is just L = I·ω. Combining both gives us the following result:

L = ħ/2 = I·ω = (1/2)·m·*a*^{2}·ω = (1/2)·(E/*c*^{2})·*a*^{2}·(E/ħ) ⇔ *a* = ħ/(m·*c*)
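Again, a numerical check (CODATA values): solve L = (1/2)·m·*a*^{2}·ω = ħ/2 for *a* and compare the result with ħ/(m·*c*):

```python
hbar = 1.054571817e-34  # reduced Planck constant, J*s
m_e = 9.1093837015e-31  # electron mass, kg
c = 2.99792458e8        # lightspeed, m/s

E = m_e * c**2    # rest energy
omega = E / hbar  # angular frequency from the de Broglie relation

# Solve L = (1/2)*m*a^2*omega = hbar/2 for the radius a:
a = ((hbar / 2) / (0.5 * m_e * omega)) ** 0.5

compton = hbar / (m_e * c)
print(a / compton)  # = 1 up to rounding: the radius is the Compton radius
```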

Did we just *get* the formula for the Compton scattering radius out of our model? Well… Yes. It is a rather Grand Result, I think. So… Well… It *might* make sense, but… Well… *Does* it, really?

There are several issues to be solved. For example, the *idea* of a force assumes some idea of *inertia*. If there is no inertia, a force will just give some object an *infinite* *momentum* in *zero time*. That doesn’t make sense! So… Well… Perhaps the electron is just a charged version of the neutrino. 🙂

### Certainty and Uncertainty

I find that a lot of the Uncertainty in quantum mechanics is actually suspiciously certain. For example, we know an electron will always have its spin up or down, *in any direction along which we choose to measure it*, and the value of the angular momentum will, accordingly, be measured as plus or minus ħ/2. That doesn’t sound uncertain to me. In fact, I feel it sounds remarkably *certain*: we know that the electron will be in either of those two *states*, and we also know that these two states are separated by ħ, Planck’s quantum of action, *exactly*.

Of course, the corollary of this is that the idea of the direction of the angular momentum is a rather fuzzy concept. As Feynman convincingly demonstrates, it is ‘never completely along any direction’. Why? Well… Perhaps it can be explained by the idea of precession?

In fact, the idea of precession might also explain the weird 720° symmetry of the wavefunction. Hmm… Now *that* is an idea to look into! 🙂

[To be continued.]