# A Survivor’s Guide to Quantum Mechanics?

When modeling electromagnetic waves, the notion of left versus right circular polarization is quite clear and fully integrated in the mathematical treatment. In contrast, quantum math sticks to the very conventional idea that the imaginary unit (i) is – always! – a counter-clockwise rotation by 90 degrees. We all know that –i would do just as well as an imaginary unit as i, because the definition of the imaginary unit only requires that its square be equal to –1, and (–i)² is also equal to –1.

So we actually have two imaginary units: i and –i. However, physicists stubbornly think there is only one direction for measuring angles, and that is counter-clockwise. That’s a mathematical convention, Professor: it’s something in your head only. It is not real. Nature doesn’t care about our conventions and, therefore, I feel the spin ‘up’ and spin ‘down’ states should correspond to the two mathematical possibilities: if the ‘up’ state is represented by some complex function, then the ‘down’ state should be represented by its complex conjugate.
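To make the point concrete, here is a minimal numerical sketch (the numbers and variable names are mine, just for illustration): both roots of x² = −1 behave identically as imaginary units, and conjugating a wavefunction simply reverses the direction of rotation.

```python
import numpy as np

# Both roots of x² = −1 qualify equally well as "the" imaginary unit:
assert (1j)**2 == -1 and (-1j)**2 == -1

# Multiplying by i rotates a complex number counter-clockwise by 90°;
# multiplying by −i rotates it clockwise by 90°:
z = 1 + 0j
assert np.isclose(1j * z, 1j)    # (1, 0) → (0, 1): counter-clockwise
assert np.isclose(-1j * z, -1j)  # (1, 0) → (0, −1): clockwise

# An 'up' state a·exp(iθ) and its conjugate a·exp(−iθ) trace the same
# circle in opposite directions:
theta = np.linspace(0, 2 * np.pi, 100)
up = np.exp(1j * theta)
down = np.conj(up)
assert np.allclose(down, np.exp(-1j * theta))
```

Nothing deep here, of course: it just shows the two choices are mirror images of each other, which is the whole point.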

This ‘additional’ rule wouldn’t change the basic quantum-mechanical rules – which are written in terms of state vectors in a Hilbert space (and, yes, a Hilbert space is as unreal as it sounds: its rules just say you should separate cats and dogs while adding them – which is very sensible advice, of course). However, it would, most probably (just my intuition – I need to prove it), avoid these crazy 720-degree symmetries which inspire the likes of Penrose to say there is no physical interpretation of the wavefunction.

Oh… As for the title of my post… I think it would be a great title for a book – because I’ll need some space to work it all out. 🙂

# Quantum math: garbage in, garbage out?

This post is basically a continuation of my previous one but – as you can see from its title – it is much more aggressive in its language, as I was inspired by a very thoughtful comment on my previous post (albeit on my other site, where I had posted the same). Another advantage is that it avoids all of the math. 🙂 It’s… Well… I admit it: it’s just a rant. 🙂 [Those who wouldn’t appreciate the casual style of what follows, can download my paper on it – but that’s much longer and also has a lot more math in it – so it’s a much harder read than this ‘rant’.]

My previous post was actually triggered by an attempt to re-read Feynman’s Lectures on Quantum Mechanics, but in reverse order this time: from the last chapter to the first. [In case you doubt, I did follow the correct logical order when working my way through them for the first time because… Well… There is no other way to get through them otherwise. 🙂 ] But then I was looking at Chapter 20. It’s a Lecture on quantum-mechanical operators – so that’s a topic which, in other textbooks, is usually tackled earlier on. When re-reading it, I realize why people quickly turn away from the topic of physics: it’s a lot of mathematical formulas which are supposed to reflect reality but, in practice, few – if any – of the mathematical concepts are actually being explained. Not in the first chapters of a textbook, not in its middle ones, and… Well… Nowhere, really. Why? Well… To be blunt: I think most physicists themselves don’t really understand what they’re talking about. In fact, as I have pointed out a couple of times already, Feynman himself admits so much:

“Atomic behavior appears peculiar and mysterious to everyone—both to the novice and to the experienced physicist. Even the experts do not understand it the way they would like to.”

So… Well… If you’d be in need of a rather spectacular acknowledgement of the shortcomings of physics as a science, here you have it: if you don’t understand what physicists are trying to tell you, don’t worry about it, because they don’t really understand it themselves. 🙂

Take the example of a physical state, which is represented by a state vector, which we can combine and re-combine using the properties of an abstract Hilbert space. Frankly, I think the term is very misleading, because it doesn’t actually describe a physical state. Why? Well… If we look at this so-called physical state from another angle, then we need to transform it using a complicated set of transformation matrices. You’ll say: that’s what we need to do when going from one reference frame to another in classical mechanics as well, isn’t it?

Well… No. In classical mechanics, we’ll describe the physics using geometric vectors in three dimensions and, therefore, the base of our reference frame doesn’t matter: because we’re using real vectors (such as the electric or magnetic field vectors E and B), our orientation vis-à-vis the object – the line of sight, so to speak – doesn’t matter.

In contrast, in quantum mechanics, it does: Schrödinger’s equation – and the wavefunction – have only two degrees of freedom, so to speak: their so-called real and imaginary dimensions. Worse, physicists refuse to give those two dimensions any geometric interpretation. Why? I don’t know. As I show in my previous posts, it would be easy enough, right? We know both dimensions must be perpendicular to each other, so we just need to decide if both of them are going to be perpendicular to our line of sight. That’s it. We’ve only got two possibilities here which – in my humble view – explain why the matter-wave is different from an electromagnetic wave.

I actually can’t quite believe the craziness when it comes to interpreting the wavefunction: we get everything we’d want to know about our particle through these operators (momentum, energy, position, and whatever else you’d need to know), but mainstream physicists still tell us that the wavefunction is, somehow, not representing anything real. It might be because of that weird 720° symmetry – which, as far as I am concerned, confirms that those state vectors are not the right approach: you can’t represent a complex, asymmetrical shape by a ‘flat’ mathematical object!

Huh? Yes. The wavefunction is a ‘flat’ concept: it has two dimensions only, unlike the real vectors physicists use to describe electromagnetic waves (which we may interpret as the wavefunction of the photon). Those have three dimensions, just like the mathematical space we project on events. Because the wavefunction is flat (think of a rotating disk), we have those cumbersome transformation matrices: each time we shift position vis-à-vis the object we’re looking at (das Ding an sich, as Kant would call it), we need to change our description of it. And our description of it – the wavefunction – is all we have, so that’s our reality. However, because that reality changes as per our line of sight, physicists keep saying the wavefunction (or das Ding an sich itself) is, somehow, not real.

Frankly, I do think physicists should take a basic philosophy course: you can’t describe what goes on in three-dimensional space if you’re going to use flat (two-dimensional) concepts, because the objects we’re trying to describe (e.g. non-symmetrical electron orbitals) aren’t flat. Let me quote one of Feynman’s famous lines on philosophers: “These philosophers are always with us, struggling in the periphery to try to tell us something, but they never really understand the subtleties and depth of the problem.” (Feynman’s Lectures, Vol. I, Chapter 16)

Now, I love Feynman’s Lectures but… Well… I’ve gone through them a couple of times now, so I do think I have an appreciation of the subtleties and depth of the problem. And I tend to agree with some of the smarter philosophers: if you’re going to use ‘flat’ mathematical objects to describe three- or four-dimensional reality, then such an approach will only get you where we are right now, and that’s a lot of mathematical mumbo-jumbo for the poor uninitiated. Consistent mumbo-jumbo, for sure, but mumbo-jumbo nevertheless. 🙂 So, yes, I do think we need to re-invent quantum math. 🙂 The description may look more complicated, but it would make more sense.

I mean… If physicists themselves have had continued discussions on the reality of the wavefunction for almost a hundred years now (Schrödinger published his equation in 1926), then… Well… Then the physicists have a problem. Not the philosophers. 🙂 As to what that new description might look like, see my papers on viXra.org. I firmly believe it can be done. This is just a hobby of mine, but… Well… That’s where my attention will go over the coming years. 🙂 Perhaps quaternions are the answer but… Well… I don’t think so – for reasons I’ll explain later. 🙂

Post scriptum: There are many nice videos on Dirac’s belt trick or, more generally, on 720° symmetries, but this links to one I particularly like. It clearly shows that the 720° symmetry requires, in effect, a special relation between the observer and the object that is being observed. It is, effectively, like there is a leather belt between them or, in this case, we have an arm between the glass and the person who is holding the glass. So it’s not like we are walking around the object (think of the glass of water) and making a full turn around it, so as to get back to where we were. No. We are turning it around by 360°! That’s a very different thing than just looking at it, walking around it, and then looking at it again. That explains the 720° symmetry: we need to turn it around twice to get it back to its original state. So… Well… The description is more about us and what we do with the object than about the object itself. That’s why I think the quantum-mechanical description is defective.

# Should we reinvent wavefunction math?

Preliminary note: This post may cause brain damage. 🙂 If you haven’t worked yourself through a good introduction to physics – including the math – you will probably not understand what this is about. So… Well… Sorry. 😦 But if you have… Then this should be very interesting. Let’s go. 🙂

If you know one or two things about quantum math – Schrödinger’s equation and all that – then you’ll agree the math is anything but straightforward. Personally, I find the most annoying thing about wavefunction math is those transformation matrices: every time we look at the same thing from a different direction, we need to transform the wavefunction using one or more rotation matrices – and that gets quite complicated!

Now, if you have read any of my posts on this or my other blog, then you know I firmly believe the wavefunction represents something real or… Well… Perhaps it’s just the next best thing to reality: we cannot know das Ding an sich, but the wavefunction gives us everything we would want to know about it (linear or angular momentum, energy, and whatever else we have an operator for). So what am I thinking of? Let me first quote Feynman’s summary interpretation of Schrödinger’s equation (Lectures, III-16-1):

“We can think of Schrödinger’s equation as describing the diffusion of the probability amplitude from one point to the next. […] But the imaginary coefficient in front of the derivative makes the behavior completely different from the ordinary diffusion such as you would have for a gas spreading out along a thin tube. Ordinary diffusion gives rise to real exponential solutions, whereas the solutions of Schrödinger’s equation are complex waves.”

Feynman further formalizes this in his Lecture on Superconductivity (Feynman, III-21-2), in which he refers to Schrödinger’s equation as the “equation for continuity of probabilities”. His analysis there is centered on the local conservation of energy, which makes me think Schrödinger’s equation might be an energy diffusion equation. I’ve written about this ad nauseam in the past, and so I’ll just refer you to one of my papers here for the details, and limit this post to the basics, which are as follows.

The wave equation (so that’s Schrödinger’s equation in its non-relativistic form, which is an approximation that is good enough) is written as:

∂ψ(x, t)/∂t = i·(1/2)·(ħ/meff)·∇²ψ(x, t) − i·(1/ħ)·V·ψ(x, t)

The resemblance with the standard diffusion equation (shown below) is, effectively, very obvious:

∂φ(x, t)/∂t = D·∇²φ(x, t) + S

As Feynman notes, it’s just that imaginary coefficient that makes the behavior quite different. How exactly? Well… You know we get all of those complicated electron orbitals (i.e. the various wavefunctions that satisfy the equation) out of Schrödinger’s differential equation. We can think of these solutions as (complex) standing waves. They basically represent some equilibrium situation, and the main characteristic of each is their energy level. I won’t dwell on this because – as mentioned above – I assume you master the math. Now, you know that – if we want to interpret these wavefunctions as something real (which is surely what we want to do!) – the real and imaginary component of a wavefunction will be perpendicular to each other. Let me copy the animation for the elementary wavefunction ψ(θ) = a·e^(−i∙θ) = a·e^(−i∙(E/ħ)·t) = a·cos[(E/ħ)∙t] − i·a·sin[(E/ħ)∙t] once more:

So… Well… That 90° angle makes me think of the similarity with the mathematical description of an electromagnetic wave. Let me quickly show you why. For a particle moving in free space – with no external force fields acting on it – there is no potential (U = 0) and, therefore, the Vψ term – which is just the equivalent of the sink or source term S in the diffusion equation – disappears. Therefore, Schrödinger’s equation reduces to:

∂ψ(x, t)/∂t = i·(1/2)·(ħ/meff)·∇²ψ(x, t)

Now, the key difference with the diffusion equation – let me write it for you once again: ∂φ(x, t)/∂t = D·∇²φ(x, t) – is that Schrödinger’s equation gives us two equations for the price of one. Indeed, because ψ is a complex-valued function, with a real and an imaginary part, we get the following equations:

1. Re(∂ψ/∂t) = −(1/2)·(ħ/meff)·Im(∇²ψ)
2. Im(∂ψ/∂t) = (1/2)·(ħ/meff)·Re(∇²ψ)

Huh? Yes. These equations are easily derived from noting that two complex numbers a + i∙b and c + i∙d are equal if, and only if, their real and imaginary parts are the same. Now, the ∂ψ/∂t = i∙(1/2)∙(ħ/meff)∙∇²ψ equation amounts to writing something like this: a + i∙b = i∙(c + i∙d). Now, remembering that i² = −1, you can easily figure out that i∙(c + i∙d) = i∙c + i²∙d = −d + i∙c. [Now that we’re getting a bit technical, let me note that meff is the effective mass of the particle, which depends on the medium. For example, an electron traveling in a solid (a transistor, for example) will have a different effective mass than in an atom. In free space, we can drop the subscript and just write meff = m.] 🙂 OK. Onwards! 🙂
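For what it’s worth, the split can be checked numerically. The sketch below is my own illustration: it takes an elementary plane wave in natural units (ħ = m = 1, an assumption for convenience), uses the free-particle dispersion relation ω = ħk²/(2m), and verifies both equations.

```python
import numpy as np

hbar, m = 1.0, 1.0          # natural units (assumed, for illustration only)
k = 2.0                     # wave number
w = hbar * k**2 / (2 * m)   # dispersion relation: ω = ħ·k²/(2m)

x = np.linspace(0.0, 10.0, 500)
t = 0.7
psi = np.exp(1j * (k * x - w * t))   # elementary wavefunction

dpsi_dt = -1j * w * psi     # ∂ψ/∂t, evaluated analytically
lap_psi = -(k**2) * psi     # ∇²ψ, evaluated analytically

# 1. Re(∂ψ/∂t) = −(1/2)·(ħ/m)·Im(∇²ψ)
assert np.allclose(dpsi_dt.real, -0.5 * (hbar / m) * lap_psi.imag)
# 2. Im(∂ψ/∂t) = (1/2)·(ħ/m)·Re(∇²ψ)
assert np.allclose(dpsi_dt.imag, 0.5 * (hbar / m) * lap_psi.real)
```

So the single complex equation does, effectively, encode the two coupled real equations.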

The equations above make me think of the equations for an electromagnetic wave in free space (no stationary charges or currents):

1. ∂B/∂t = −∇×E
2. ∂E/∂t = c²·∇×B

Now, these equations – and, I must therefore assume, the other equations above as well – effectively describe a propagation mechanism in spacetime, as illustrated below:

You know how it works for the electromagnetic field: it’s the interplay between circulation and flux. Indeed, circulation around some axis of rotation creates a flux in a direction perpendicular to it, and that flux causes this, and then that, and it all goes round and round and round. 🙂 Something like that. 🙂 I will let you look up how it goes, exactly. The principle is clear enough. Somehow, in this beautiful interplay between linear and circular motion, energy is borrowed from one place and then returns to the other, cycle after cycle.

Now, we know the wavefunction consists of a sine and a cosine: the cosine is the real component, and the sine is the imaginary component. Could they be equally real? Could each represent half of the total energy of our particle? I firmly believe they do. The obvious question then is the following: why wouldn’t we represent them as vectors, just like E and B? I mean… Representing them as vectors (I mean real vectors – something with a magnitude and a direction, in a real vector space – as opposed to these state vectors from a Hilbert space) would show they are real, and there would be no need for cumbersome transformations when going from one representational base to another. In fact, that’s why vector notation was invented (sort of): we don’t need to worry about the coordinate frame. It’s much easier to write physical laws in vector notation because… Well… They’re the real thing, aren’t they? 🙂

What about dimensions? Well… I am not sure. However, because we are – arguably – talking about some pointlike charge moving around in those oscillating fields, I would suspect the dimension of the real and imaginary component of the wavefunction will be the same as that of the electric and magnetic field vectors E and B. We may want to recall these:

1. E is measured in newton per coulomb (N/C).
2. B is measured in newton per coulomb divided by m/s, so that’s (N/C)/(m/s).

The weird dimension of B is because of the weird force law for the magnetic force. It involves a vector cross product, as shown by Lorentz’ formula:

F = qE + q(v×B)

Of course, it is only one force (one and the same physical reality), as evidenced by the fact that we can write B as the following vector cross-product: B = (1/c)∙ex×E, with ex the unit vector pointing in the x-direction (i.e. the direction of propagation of the wave). [Check it, because you may not have seen this expression before. Just take a piece of paper and think about the geometry of the situation.] Hence, we may associate the (1/c)∙ex× operator, which amounts to a rotation by 90 degrees, with the s/m dimension. Now, multiplication by i also amounts to a rotation by 90°. Hence, if we can agree on a suitable convention for the direction of rotation here, we may boldly write:

B = (1/c)∙ex×E = (1/c)∙iE
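The cross-product part of this relation is easy to sanity-check numerically. The sketch below is my own illustration (the field magnitude is an arbitrary pick): for a wave propagating along x with E along y, B = (1/c)∙ex×E comes out along z, perpendicular to E, with magnitude E/c.

```python
import numpy as np

c = 299792458.0                  # speed of light, m/s
ex = np.array([1.0, 0.0, 0.0])   # unit vector along the direction of propagation
E = np.array([0.0, 5.0, 0.0])    # E along the y-axis (magnitude arbitrary, N/C)

B = (1.0 / c) * np.cross(ex, E)  # B = (1/c)·ex×E

assert np.allclose(B, [0.0, 0.0, 5.0 / c])                   # B points along z
assert np.isclose(np.dot(B, E), 0.0)                         # B is perpendicular to E
assert np.isclose(np.linalg.norm(B), np.linalg.norm(E) / c)  # |B| = |E|/c
```

So the ex× operation does exactly what the text says: it rotates E by 90° in the plane perpendicular to the direction of propagation.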

This is, in fact, what triggered my geometric interpretation of Schrödinger’s equation about a year ago now. I have had little time to work on it, but think I am on the right track. Of course, you should note that, for an electromagnetic wave, the magnitudes of E and B reach their maximum, minimum and zero point simultaneously (as shown below). So their phase is the same.

In contrast, the phase of the real and imaginary component of the wavefunction is not the same, as shown below.

In fact, because of the Stern-Gerlach experiment, I am actually more thinking of a motion like this:

But that shouldn’t distract you. 🙂 The question here is the following: could we possibly think of a new formulation of Schrödinger’s equation – using vectors (not state vectors – objects from an abstract Hilbert space – but real vectors) rather than complex algebra?

I think we can, but then I wonder why the inventors of the wavefunction – Heisenberg, Born, Dirac, and Schrödinger himself, of course – never thought of that. 🙂

Hmm… I need to do some research here. 🙂

Post scriptum: You will, of course, wonder how and why the matter-wave would be different from the electromagnetic wave if my suggestion that the dimension of the wavefunction component is the same is correct. The answer is: the difference lies in the phase difference and then, most probably, the different orientation of the angular momentum. Do we have any other possibilities? 🙂

# The flywheel model of an electron

In one of my previous posts, I developed a flywheel model for an electron – or for charged matter-particles in general. You can review the basics in the mentioned post or, else, download a more technical paper from this link. Here I just want to present some more general remarks about the conceptual difficulties. Let’s recall the basics of the model. We think of an electron as a point charge moving about some center, so that’s the green dot in the animation below. We can then analyze its movement in terms of two perpendicular oscillations, i.e. the sine (blue) and cosine (red) functions.

The idea is that space itself – by some mechanism we’d need to elaborate – provides a restoring force when our charge moves away from the center. So it wants to go back to r = (x, y) = 0, but it never actually does because of these restoring forces, which are perpendicular to each other and just make it spin around. This may sound crazy but the original inspiration for this – the mathematical similarity between the E = m·c² and E = m·a²·ω² formulas – is intriguing enough to pursue the idea. Let me show you.

Think of an electron with energy E (the equivalent mass of this electron is, of course, m = E/c²). Now look at the following. If we substitute ω for E/ħ (the frequency of the matter-wave) and a for ħ/(m·c) (i.e. the Compton scattering radius of an electron), we can write:

a·ω = [ħ/(m·c)]·[E/ħ] = E/(m·c) = m·c²/(m·c) = c

We get the same when equating the E = m·a²·ω² and E = m·c² formulas:

m·a²·ω² = m·c² ⇔ a·ω = c

Wow! Did we just prove something? No. Not really. We only showed that our E = m·a²·ω² = m·c² equation might make sense. Note that the a·ω = c product is the tangential velocity vtangential = r·ω = a·ω = c.

Our rotating charge should also have some angular momentum, right? In fact, the Stern-Gerlach experiment tells us the angular momentum should be equal to ± ħ/2. Is it? We will need a formula for the angular mass, aka the moment of inertia. Let’s try the one for a rotating disk: I = (1/2)·m·a². The formula for the angular momentum itself is just L = I·ω. Combining both gives us the following result:

L = ħ/2 = I·ω = (1/2)·m·a²·ω = (1/2)·(E/c²)·a²·(E/ħ) ⇔ a = ħ/(m·c)

Did we just get the formula for the Compton scattering radius out of our model? Well… Yes. It is a rather Grand Result, I think. So… Well… It might make sense, but… Well… Does it, really?
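Both results are easy to verify numerically. The sketch below is my own check, using CODATA values for the electron: it confirms that a·ω = c and that the rotating-disk formula then gives L = ħ/2 exactly.

```python
# Electron values (CODATA, approximate)
hbar = 1.054571817e-34   # reduced Planck constant, J·s
m = 9.1093837015e-31     # electron mass, kg
c = 299792458.0          # speed of light, m/s

E = m * c**2             # rest energy
a = hbar / (m * c)       # reduced Compton radius, a = ħ/(m·c) ≈ 3.86e-13 m
w = E / hbar             # angular frequency, ω = E/ħ

# Tangential velocity a·ω equals c:
assert abs(a * w - c) / c < 1e-12

# Angular momentum of the disk model equals ħ/2:
I = 0.5 * m * a**2       # moment of inertia of a rotating disk
L = I * w
assert abs(L - hbar / 2) / (hbar / 2) < 1e-12
```

Of course, both identities follow algebraically from the definitions of a and ω, so this is a consistency check rather than a proof – just like the text says.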

First, we should note that the math is not as easy as it might seem at first: we would need to use relativistically correct formulas everywhere. I made a start at that, and it’s quite encouraging. For example, I showed in one of my previous posts that the classical E = m·v²/2 formula for the kinetic energy of a mass on a spring effectively morphs into the relativistically correct E = mv·c² = m0·γ·c² formula. So it looks like the annoying 1/2 factors in the classical formulas effectively disappear in a relativistically correct analysis. However, there is a lot more to be checked than just the math for one oscillator, and I haven’t done that yet.

The key issue is the interpretation of the mass factor. Our pointlike charge can only travel at light speed if its rest mass is equal to zero. But that is a problem for the model, because the idea of accelerations or decelerations as a result of a restoring force is no longer consistent: if there is no rest mass, even the tiniest force on it will give it an infinite acceleration or deceleration. Hence, it is tempting to assume the charge must have some mass, and the concepts of the electromagnetic and/or the effective mass spring to mind. However, I don’t immediately see how the specifics of this could be worked out. So we’re a bit stuck here. Still, it is tempting to try to think of it like a photon, for which we can effectively state that all of its energy – and the equivalent mass – is in the oscillation: E = ħ·ω and m = E/c² = ħ·ω/c².

Let’s think of a few other questions. What if the energy of our electron increases? Our a·ω = c relation tells us that a = c/ω = c·ħ/E should decrease. Does that make any sense? We may think of it like this. The E = m·c² equation tells us that the ratio between the energy and the mass of a particle is constant: E/m = c². So we can write:

E/m = c² = a²·ω² = a²·E²/ħ² ⇔ a·E/ħ = c

This tells us the same: a is inversely proportional to E, and the constant of inverse proportionality is ħ·c. Hence, our disk becomes smaller but spins faster. What happens to the angular mass and the angular momentum? The angular momentum L = ħ/2 = I·ω must remain the same, and you can check it does. Hence, the angular mass I effectively diminishes as ω = E/ħ goes up.

Hence, it all makes sense – intuitively, that is. In fact, our model is basically an adaptation of the more general idea of electromagnetic mass: we’re saying the energy in the (two-dimensional) oscillation gives our electron the equivalent mass that we measure in real-life experiments. Having said that, the idea of a pointlike charge with zero rest mass – accelerating and decelerating in two perpendicular directions – remains a strange one.

Let us think about the nature of the restoring force. The oscillator model is, effectively, based on the assumption we have a linear force here. To be precise, the restoring force is thought of as being directly proportional to the distance from the equilibrium: F = −k·x. The proportionality constant k is the stiffness of our spring. It just comes with the system. So what’s the equivalent of k in our model here? For a non-relativistic spring, it is easy to show that k – while a constant – will always be equal to m·ω². Hence, if we put a larger mass on the spring, then the frequency of the oscillation must go down, and vice versa. But what mass concept should we apply here? Before we further explore this question, let us look at those relativistic corrections. Why? Well… I think it’s about time I show you how they affect the analysis, right? Let me gratefully acknowledge an obviously brilliant B.A. student who did an excellent paper on relativistic springs (link and references here), from which I copy the illustration below.

The graphs show what happens to the trajectory and the velocity when speeds go up all the way to the relativistic limit. The light grey is the classical (non-relativistic) case: we recognize the sinusoidal motion and velocity function. See how these shapes change when velocities go to the relativistic limit (see the grey and (for velocities reaching the speed of light) the black curves). The sinusoidal shape of the functions disappears: the velocity very rapidly changes between −c and +c. In fact, the velocity function starts resembling a square wave function.

This is quite interesting, but… Well… It doesn’t make us much wiser, and it surely does not answer our question: how should we interpret the stiffness coefficient here? The relativistically correct force law is given below, but it doesn’t tell us anything new: it just tells us that the restoring force is going to be directly proportional to the distance from the equilibrium (zero) point:

F = dp/dt = d(mv·v)/dt = mv·dv/dt = mv·a = −k·x

However, we may think of this, perhaps: when we analyze a mechanical (or any other typical) oscillator, we think of k as being given. But for our flywheel model, we find that the value of k is going to depend on the energy and the amplitude of the oscillation. To be precise, if we just take the E = m·a²·ω² = m·c² and k = m·ω² formulas for granted, we can write k as a function of the energy:

E = k·a² ⇔ k = E/a² = (E·E²)/(c²·ħ²) ⇔ k = E³/(c²·ħ²)
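A quick consistency check of that chain of identities, again with electron values (my own illustration, and note it takes the E = k·a² energy formula of the model – without the classical 1/2 factor – at face value):

```python
hbar = 1.054571817e-34   # J·s
m = 9.1093837015e-31     # electron mass, kg
c = 299792458.0          # m/s

E = m * c**2             # rest energy
a = hbar * c / E         # radius: a = ħ/(m·c) = ħ·c/E
w = E / hbar             # ω = E/ħ

k1 = m * w**2            # oscillator stiffness, k = m·ω²
k2 = E**3 / (c**2 * hbar**2)   # the k = E³/(c²·ħ²) expression

assert abs(k1 - k2) / k1 < 1e-12   # both expressions for k agree
assert abs(E - k1 * a**2) / E < 1e-12   # and E = k·a² holds
```

The E³ makes the point numerically too: double the energy and the equivalent stiffness goes up by a factor of eight, while the radius halves.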

It is a funny formula: it tells us the stiffness – or whatever the equivalent is in this model of ours – increases with the cube of the energy. It is a weird formula, but it is consistent with the other conclusion above: if the energy goes up, the radius becomes smaller. It’s because space becomes stickier. 🙂

But space is space, right? And its defining characteristic – the constant speed of light – is… Well… It is what it is: it just tells us that the ratio between the energy and mass for any particle is always the same: E/m = c².

What else does it tell us? Well… If our model is correct, it also tells us the tangential velocity of our charge will always be equal to c. In that sense, then, it tells us space is space. There is one more relation that we might mention here:

a·ω = c ⇔ ω = c/a

The energy defines the radius in our model (remember the a = c·ħ/E relation). Now, the relationship above tells us that a, in turn, defines the frequency, and in a rather particular way: the frequency is inversely proportional to the radius, and the proportionality coefficient is, once again, c. 🙂

# Do we need mass?

You’ll say: of course, we do! Not too much of it, of course, but some mass is good, right? 🙂 Well… I guess so. Let me rephrase my question: do we need the concept of mass in theoretical physics?

Huh? I must be joking, right? No. It’s a serious question. Most of my previous posts revolved around the concept of mass. What is mass? What’s the meaning of Einstein’s E = m·c² formula? If you’ve read any of my previous posts, you’ll know that I am thinking of mass now as some kind of oscillation – not (or not only) in spacetime, but an oscillation of spacetime. A two-dimensional oscillation, to be precise. So… Well… If mass is an oscillation of something, then it’s energy: some force over some distance. Hence, it is only logical to ask whether we need the concept of mass at all.

Think of it. The E = m·c² formula relates two variables only. It’s not like a force law or something. No. It says we can express mass in terms of energy units, and vice versa. In fact, if we’d use natural units for time and distance, so that c = 1, the E = m·c² formula reduces to E = m. So the energy concept is good enough, right? Instead of measuring the mass of our objects in kg, we’ll measure them in joule or – for small objects – in electronvolt. To be precise, we should say: we’ll measure them in J/c², or in eV/c². In fact, physicists do that already – for stuff at the atomic or sub-atomic scale, which is… Well… Most of what they write about, right? 🙂
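As an illustration of the conversion, here is the calculation for the electron (CODATA values), which gives the familiar 0.511 MeV figure:

```python
m = 9.1093837015e-31   # electron mass, kg
c = 299792458.0        # speed of light, m/s
e = 1.602176634e-19    # joule per electronvolt

E_joule = m * c**2     # rest energy in joule, ≈ 8.187e-14 J
E_eV = E_joule / e     # same energy in electronvolt

# The electron 'mass' as physicists usually quote it: ≈ 0.511 MeV (i.e. MeV/c²)
assert abs(E_eV - 510998.95) < 1.0
```

So, indeed, the kg never really enters the discussion at this scale: the eV/c² does all the work.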

If you think about this for a while, you might object to this by saying we need the mass concept in a lot of other formulas and laws, such as Newton’s Law: F = m·a. But that’s not very valid as an objection: we can still replace the m in this formula by E/c², and we’re done, right? So Newton’s Law would look like this: F = (E/c²)·a. You may say: this doesn’t look as nice. But looks shouldn’t matter here, right? 🙂

Because you’re so used to using mass, you might say: mass is a measure of inertia (resistance to a change in motion), so that’s its meaning. Well… Yes and no. What Newton’s Law actually tells us is that there is a proportionality between (1) the force on an object, and (2) its acceleration. And that proportionality coefficient is m, so we should re-write Newton’s Law as F/a = m. But then… Well… We can just use something else, right? Why m? We can just write: F/a = E/c². 🙂

You think I am joking, right? We surely need it somewhere, no? Well… No. Or… Well… I am not so sure, let’s say. 🙂 Think of the following. I don’t need to know the mass of an object to calculate its acceleration. I only need to know its trajectory in spacetime. In other words: I just need to know when it’s where. Huh? Yes. Think of the formulas you learned in high school: the distance traveled by an object with acceleration a is given by s = (1/2)·a·t². Hence, a = 2·s/t². I don’t need to know the mass. I can calculate the acceleration a = 2·s/t² from the time and distance traveled, and then – if I would be interested in that coefficient (m) – I know m will be equal to m = F/a. But so it’s just a coefficient of proportionality. Nothing more, nothing less.
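The point is easy to illustrate with made-up numbers (all values below are hypothetical, picked just to make the arithmetic come out round):

```python
# Kinematics only: recover the acceleration from when-it's-where data.
# No mass needed: s = (1/2)·a·t²  ⇔  a = 2·s/t²
t = 4.0       # s
s = 24.0      # m traveled from rest under constant acceleration
a = 2 * s / t**2
assert a == 3.0   # m/s² – obtained from trajectory data alone

# Mass only appears as the proportionality coefficient m = F/a
# once a (hypothetical) force measurement F is brought in:
F = 15.0      # N
m = F / a
assert m == 5.0   # kg – just a coefficient, nothing more
```

So the mass only shows up at the very end, as a ratio of two other measured quantities.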

Oh! But what if you don’t know F? Then you need the mass to calculate F, right? Well… No. I need to know the kinetic energy of the object, or its momentum, or whatever else. I don’t need that enigmatic mass concept. That’s metaphysics, so that’s philosophy. Not physics. 🙂

Huh? Are you serious?

I actually am. Einstein’s formula tells us we really don’t need the concept of mass anymore. E/c2 is just as good as a measure of inertia, and we can use the same E/c2 in the gravitational law or in whatever other law or formula involving m. So much for the kg unit as a so-called fundamental unit in the S.I. system of units: they should scrap it. 🙂

And too bad I spent so much time (see all my previous posts) on an innovative theory of mass… 🙂

[…]

Now that we’re talking fundamental units and concepts, let me give you something else to think about. In the table below, I have a force (F) over some distance (s) during some time (t). As you know, the product of a force, time and distance is the total amount of action (Wirkung). Action is the physical dimension of Planck’s constant, which is the quantum of action. The concept of action is one that, unfortunately, is not being taught in high schools: it only pops up in more advanced (read: more abstract) texts (if you’re interested, check my post on it). Why is that unfortunate? Well… I think it’s really interesting because it answers a question I had as a high school student: why do we need two conservation laws? One for energy and one for momentum? What I write below might explain it: the action concept is a higher-level concept that combines energy as well as momentum – sort of, that is. 🙂 Check it out.

The table below shows that the same amount of action (1000 N·m·s) over the same distance (10 meter in this case) – but with different force and time (see below) – will result in the same momentum (100 N·s). In contrast, the same amount of action (1000 N·m·s) over the same time (5 seconds) – but with a different force over a different distance – will result in the same (kinetic) energy (200 N·m = 200 J).
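The table's numbers can be reproduced in a few lines of code. The specific (force, time) and (force, distance) combinations below are my own illustrative picks, all consistent with an action of 1000 N·m·s:

```python
# Numerical check: fix the action F*s*t at 1000 N*m*s. Holding the
# distance fixed (10 m), different (force, time) pairs give the same
# momentum F*t; holding the time fixed (5 s), different (force,
# distance) pairs give the same energy F*s.

ACTION = 1000.0  # N*m*s

# Same distance (10 m), different force and time -> same momentum (100 N*s)
s = 10.0
for F, t in [(100.0, 1.0), (50.0, 2.0), (20.0, 5.0)]:
    assert F * s * t == ACTION
    print(f"F={F} N, t={t} s -> momentum F*t = {F * t} N*s")

# Same time (5 s), different force and distance -> same energy (200 J)
t = 5.0
for F, s in [(40.0, 5.0), (100.0, 2.0), (20.0, 10.0)]:
    assert F * s * t == ACTION
    print(f"F={F} N, s={s} m -> energy F*s = {F * s} J")
```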

So… Well… I like to think that (kinetic) energy and (linear) momentum are two manifestations of action – two sides of the same coin, so to speak:

1. The concept of momentum sort of abstracts away from distance: it’s a projection of action on the time axis, so to speak.
2. In contrast, energy abstracts away from the concept of time: it’s a projection of some amount of action in space.

Conversely, action can be thought of as (1) energy being available over a specific amount of time or, alternatively, as (2) a certain amount of momentum being available over a specific distance.

OK. That’s it for today. I hope you enjoyed it!

Post scriptum: In case you wonder, I do know about the experimental verification of the so-called Higgs field in CERN’s LHC accelerator six years ago (July 2012), and the award of the Nobel prize to the scientists who had predicted its existence (including Peter Higgs and François Englert). As far as I understand the Higgs theory (I don’t know a thing about it, actually), I note mass is being interpreted as some scalar field. I am sure there must be something about it that I am not catching here. 🙂

# The nature of Nature

One of the fundamental questions for any freshman studying physics is the following: if an electron orbits around a nucleus, the various accelerations and decelerations should cause it to emit electromagnetic radiation. Hence, it should gradually lose energy and, therefore, the orbital cannot be stable. Therefore, the atom cannot be stable. So we have a problem. 🙂

A related but different question is: why don’t the electron and the nucleus simply crash into each other? They attract each other, very strongly, right? Well… Yes. But, as I said, that’s a related but different question. Let me first try to handle the first one – as good as I can. 🙂

So… Well… It's a simple question but – as you know by now – the science of physics seldom gives us simple answers to simple questions. Worse, I've studied physics for many years now – admittedly, in my own stubborn, critical and generally limited way 🙂 – and I feel the answer I am getting is not only complicated but also not very real. So… Well… We may have to admit that we probably do not quite understand what is really going on.

This lack of understanding is nothing to be ashamed of, as great physicists such as Richard Feynman (and others) acknowledge: “Atomic behavior appears peculiar and mysterious to everyone—both to the novice and to the experienced physicist. Even the experts do not understand it the way they would like to.” So… Well… If you’d be in need of a rather spectacular acknowledgement of the shortcomings of physics as a science, there you have it: physicists don’t understand their own science, it seems. 🙂 But let’s go beyond that. Let’s talk about the wavefunction because… Well… You know it’s supposed to describe the electron, right? So what is it?

Well… Unfortunately, physics textbooks won't tell you what the wavefunction is. They'll tell you it's a mathematical construct. A solution to some differential equation (Schrödinger's equation), to be precise. 😦 However, they will – from time to time, at least – tell you what it isn't. For example, Feynman's most precise description of the model of an electron – or an electron orbital, I should say – might be the one he offers when, while deriving the electron orbitals from Schrödinger's equation, he says what the wavefunction is surely not:

“The wave function Ψ(r) for an electron in an atom does not describe a smeared-out electron with a smooth charge density. The electron is either here, or there, or somewhere else, but wherever it is, it is a point charge.” (Feynman’s Lectures, Vol. III, p. 21-6)

So… Well… That’s not too bad as an explanation. 🙂 But… Well… While fairly precise, I’d think we can improve on Feynman’s language. For starters, we should distinguish the concept of an electron and the concept of its charge. When the electron is in some stable configuration – i.e. in an orbital as described by its wavefunction Ψ(r) – the idea of the electron combines both the orbital and the point charge. Let’s be precise here:

1. The charge is what, when probing, we’ll effectively find “here, there, or somewhere else” in the space that is being described by our wavefunction Ψ(r).
2. As for the electron… Well… We know that – by applying operators to the wavefunction – we’ll not only get information about its position, but also about its linear or angular momentum, its energy, and whatever other characteristic of the electron that we’re describing. In that sense, we might say that the wavefunction completely describes the electron and that, therefore, the electron is not the point charge itself, but the orbital, as described by the wavefunction, with its point charge somewhere.

In short, for all practical purposes, we might say that the electron is the wavefunction, and vice versa. 🙂 Indeed, when studying quantum mechanics, one effectively does end up equating the particle with its wavefunction, not with its charge. And rightly so! An elementary particle – be it an electron or a quark – is more than just its charge: it has energy, momentum (linear or angular), occupies some space and – in the case of quarks – has a color too! 🙂

But that still doesn't answer the simple question I started out with: the electrons – or the point charges in those orbitals – don't emit radiation. Why not? Well… If I'd be your professor, and you'd be sitting for an exam in front of me, then I'd expect you to start talking about the Uncertainty Principle, wavefunctions, energy states and what have you. But I am not your professor (I am not a professor at all, in fact), and so I don't want to hear that answer. To be precise, I don't like that answer because, just like Feynman, I don't quite understand it the way I would like to understand it! So… What other answer can we think of? Can we think of something that is, perhaps, more intuitive?

I think we can. I, for one, am thinking, once more, of that profound statement that Einstein made back in 1916, when explaining his relativity theory to a broader audience:

“Physical objects are not in space, but these objects are spatially extended. In this way, the concept “empty space” loses its meaning.”

In fact, I’d go one step further and say: objects create their own space.

Huh? Yes. Think of the planets – including Earth – going around the Sun. Einstein's general relativity theory tells us they are in their own space. Indeed, Einstein told us we should not think of gravitation as a force: the masses involved just curve the space-time fabric around them in such a way that these planets just keep going and going around and around. They are basically moving in free space: their path just happens to be curved from our perspective. If we wouldn't impose this abstract (or mathematical, I should say) rectangular Cartesian coordinate space on our description of them, but accept that this system of large objects creates its own space, we'd understand why it's stable: it doesn't need any energy from the outside to keep going. Nor does it radiate anything out.

Let me emphasize this point: they are in their own space because they don't radiate anything out. And, I should add, nor do they absorb any energy from the outside. Of course, you've heard about gravitational waves – and most notably the one detected by the LIGO Lab last year – but note that that gravitational wave was created when two black holes spectacularly merged. That's because black holes do emit radiation, as a result of which they do lose mass and, therefore, that system of large objects became unstable. Of course, if we'd detonate all of the atomic bombs we've built, we might also cause our planetary system to become unstable, but you'll understand that's a different discussion altogether. 🙂

So… Well… I like to think a wavefunction for an orbital represents the same: we’re looking at a charge that moves around in its own space. In our Cartesian reference frame, this looks like a terribly complicated oscillation. In fact, the oscillation is not only complicated but also – literally – complex, because we’re keeping track of two dimensions simultaneously: the real and imaginary component of the wavefunction. Both are equally real, of course, in a physical sense (and we can argue about what that means, exactly, but not about the statement itself). But so… Well… It’s just a spacetime blob. The charge itself just moves around along a geodesic in its spacetime, and that’s why it doesn’t emit or absorb any energy from the outside. 🙂

Of course, the question now becomes: if an electron orbital is nothing but a weird blob of curved spacetime – in which our charge moves around like a planet moves around in a planetary system – then what’s causing the curvature of space? For our planetary system, we know it’s mass.

So… Well… What can I say? Well… What’s mass? Energy has an equivalent mass, and mass has an equivalent energy. In my previous posts, I look at mass as an oscillation itself and, as I show in one of my papers, that might allow us to interpret Schrödinger’s wave equation as an energy diffusion equation, and the wavefunction itself as a self-contained and self-sustaining gravitational wave. So… Well… If the wavefunction represents a blob of energy – some two-dimensional oscillation – then… Well… Then it could create its own space, right? Just like our Sun and the planets create their own space, in which they move without absorbing or radiating any energy away. In other words, they move in a straight line in their own space. I am tempted to think our pointlike charge must also be moving in a straight line in its own space because… Well… It would, effectively, be emitting radiation otherwise. 🙂

So what’s the nature of Nature, then? Well… All is movement, it seems. Panta rhei ! 🙂 And… Well… I’ll let you do the philosophy now. For example, if objects create their own space, how should we think of their interactions? 🙂

# Mass as a two-dimensional oscillation (2)

This post basically further develops my speculative thoughts about the real meaning of the E = m·c² formula. However, I'll use the relativistically correct formulas for the calculations this time, so it may look somewhat more complicated. Still, I think you should be able to digest it relatively easily, as none of the math is exceedingly difficult.

My previous post explored the similarity between the formula for the energy of a harmonic oscillator and the E = m·c² formula. Now, there is another formula that sort of resembles it: the E = m·v²/2 formula for the kinetic energy. Could we relate them somehow and – in the process – gain a better understanding of Einstein's famous formula? I think we can, and I want to show you how. In fact, in this post, I will try to relate all three.

We should first note that the E = m·v²/2 formula is a non-relativistic one. It is only correct if we assume the mass – defined as a measure of inertia, remember? – to be constant, which we know isn't true. As an object accelerates and gains kinetic energy, its effective mass will increase. In fact, the relativistically correct formula for the kinetic energy just calculates it as the difference between (1) the total energy (which is given by the E = m·c² formula, always) and (2) its rest energy, so we write:

K.E. = E − E₀ = mv·c² − m₀·c² = m₀·γ·c² − m₀·c² = m₀·c²·(γ − 1)

The γ in this formula is, of course, the ubiquitous Lorentz factor. Hence, the correct formula for the kinetic energy is m₀·c²·(γ − 1). We shouldn't use that m·v²/2 formula. Still, the two formulas are remarkably similar: there is a squared velocity (v² and c²) and some factor (1/2 versus γ − 1). Why the squared velocity? That's child's play, right? Yep, I effectively wrote a post on that for my kids. We have a force that acts on some object over some time and over some distance, and so that force is going to do some work. While it's child's play, we're calculating a path or line integral here: W = ∫F·dx = m·∫(dv/dt)·v·dt = m·∫v·dv = m·v²/2.
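For the record, here is a quick numerical comparison of the two kinetic energy formulas: they agree at everyday speeds and diverge spectacularly near c. The test speeds below are arbitrary:

```python
import math

C = 299_792_458.0  # speed of light, m/s

def lorentz_gamma(v: float) -> float:
    """The ubiquitous Lorentz factor gamma = 1/sqrt(1 - v^2/c^2)."""
    return 1.0 / math.sqrt(1.0 - (v / C) ** 2)

def kinetic_energy_relativistic(m0: float, v: float) -> float:
    """K.E. = m0*c^2*(gamma - 1), the relativistically correct formula."""
    return m0 * C**2 * (lorentz_gamma(v) - 1.0)

def kinetic_energy_classical(m0: float, v: float) -> float:
    """K.E. = m0*v^2/2, valid only for v << c."""
    return 0.5 * m0 * v**2

m0 = 1.0  # rest mass in kg, arbitrary
for v in (1.0e6, 0.5 * C, 0.99 * C):
    ke_rel = kinetic_energy_relativistic(m0, v)
    ke_cls = kinetic_energy_classical(m0, v)
    print(f"v = {v:.3e} m/s: relativistic {ke_rel:.3e} J vs classical {ke_cls:.3e} J")
```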

Child's play? Perhaps, but many kids don't know what a vector dot product is (the F·dx), and they also don't realize we can only solve this because we assume the mass m to be constant (i.e. not a function of the velocity v). So… Well… In our flywheel model of an electron, we've been using a non-relativistic formula, but we've calculated the tangential speed as being equal to c. A recipe for disaster, right? 🙂 Can we re-do the calculations? We can. You can google a zillion publications on relativistic harmonic oscillators but I took the derivation below from a fairly simple one I'd recommend. The only correction we'll do here is to use the relativistically correct expression of Newton's force law: the force equals the time rate of change of the (relativistic) momentum p = mv·v = γ·m₀·v. So we write:

F = dp/dt = –k·x, with p = mv·v = γ·m₀·v

Multiplying both sides with v = dx/dt yields the following expression:

v·(dp/dt) = –k·x·(dx/dt) ⇔ d(mv·c²)/dt = –d[(1/2)·k·x²]/dt ⇔ mv·c² + (1/2)·k·x² = constant

Now, when we combine two oscillators – think of the metaphor of a frictionless V-twin engine, as illustrated below 🙂 – then we know that – because of the 90° angle between the two cylinders – the motion of one piston will be equal to x = a∙cos(ω∙t), while the motion of the other is given by y = a∙cos(ω∙t − π/2) = a∙sin(ω∙t).

Now how do we calculate the total energy in this system? Should we distinguish the x– and y–components of the total momentum p? We can easily do that. Look at the animation below, and you'll immediately understand that we can easily calculate the velocities in the x– and the y–direction: vx = dx/dt = −a·ω·sin(ω∙t) and vy = dy/dt = a·ω·cos(ω∙t). The sum of the squares of both then gives us the tangential velocity v: v² = a²∙ω²∙sin²(ω∙t) + a²∙ω²∙cos²(ω∙t) = a²∙ω² ⇔ v = a∙ω.

But how do we add energies here? It's a tricky question: we have potential energy in one oscillator, and then in the other, and these energies are being transferred from one to the other through the flywheel, so to speak. So there is kinetic energy there too. Can we just add it all? Let us think about our perpetuum mobile once more, noting that the equilibrium position for the piston is at the center of each cylinder. When it goes past that position, extra pressure will build up and eventually push the piston back. When it is below that position, pressure is below equilibrium and will, therefore, also want to pull the piston back. The dynamics are as follows:

• When θ is zero, the air in cylinder 1 is fully compressed, and the piston will return to the equilibrium position (x = 0) as θ goes to 90°. The flywheel will transfer energy to cylinder 2, where the piston goes from the equilibrium position to full compression. Cylinder 2 borrows energy, and will want to return to its equilibrium position.
• When θ is 90°, the air in cylinder 2 is fully compressed, and the piston will return to the equilibrium position (y = 0) as θ goes to 180°. The flywheel will transfer energy back to cylinder 1, where the piston goes past the equilibrium position to create a vacuum. The piston in cylinder 1 borrows energy, and will want to return to its equilibrium position.
• When θ is 180°, the piston in cylinder 1 is fully extended, and will want to return to equilibrium because the pressure is lower than when in equilibrium. As θ goes from 180° to 270°, the piston in cylinder 1 does effectively return to equilibrium and, through the flywheel, pushes the piston in cylinder 2 past the equilibrium to create vacuum. The piston in cylinder 2 borrows energy, and will want to return to equilibrium.
• Finally, between 270° and 360°, the piston in cylinder 2 returns to equilibrium and, through the flywheel, causes the piston in cylinder 1 to compress air. The piston in cylinder 1 borrows energy, and will want to return to equilibrium.
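As an aside, the V-twin kinematics above – x = a·cos(ω·t) and y = a·sin(ω·t) – can be checked numerically: the tangential speed v = a·ω is indeed constant. The values of a and ω below are arbitrary:

```python
import math

# Check of the 90-degree V-twin kinematics: with x = a*cos(w*t) and
# y = a*sin(w*t), we get vx^2 + vy^2 = a^2*w^2 at all times, i.e. a
# constant tangential speed v = a*w. a and w are arbitrary choices.

a, w = 2.0, 3.0
for k in range(8):
    t = k * 0.25
    vx = -a * w * math.sin(w * t)  # vx = dx/dt
    vy = a * w * math.cos(w * t)   # vy = dy/dt
    v = math.hypot(vx, vy)         # tangential speed sqrt(vx^2 + vy^2)
    assert math.isclose(v, a * w)
print("tangential speed is constant:", a * w)
```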

It is a funny thing. Where is the energy in this system? Energy is not supposed to be thought of as being directional but, here, direction clearly matters! We need to think about averages here: kinetic energy is a non-directional (scalar) quantity, but it's a function of velocity, and velocity is directional. If we have two directions only (x and y), then we can write: 〈vx²〉 = 〈vy²〉 = [〈vx²〉 + 〈vy²〉]/2 = 〈v²〉/2. So this gives us a clue, but we won't make things too complicated here. Think of it like this. While transferring energy from one piston to the other, the crankshaft will rotate with a constant angular velocity: linear motion becomes circular motion, and vice versa. So what is the total energy in the system? What if we would want to use it? What can we take from it? You'd agree we would have to take it from the flywheel, right? The usable energy is in the flywheel.

Let's have a look at that energy conservation law we derived above: mv·c² + (1/2)·k·x² = constant. The usable energy in the flywheel is the E = mv·c² term. This, and my previous post, suggests we may interpret the mass of an electron as a two-dimensional oscillation. In fact, I think my previous post is an easier read because I use the classical (non-relativistic) formulas there. This post, hopefully, demonstrates that a relativistically correct mathematical treatment doesn't alter the picture that I've been trying to offer. 🙂
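To close the loop, we can integrate the relativistic oscillator F = dp/dt = –k·x (with p = γ·m₀·v) numerically and verify that mv·c² + (1/2)·k·x² stays constant, which follows directly from multiplying the force law by v. A sketch in arbitrary natural-ish units (c = m₀ = k = 1):

```python
import math

# Integrate the relativistic oscillator dp/dt = -k*x, with p = gamma*m0*v,
# using classic RK4, and check that m_v*c^2 + k*x^2/2 is conserved.
# All constants (c = m0 = k = 1) are arbitrary illustrative choices.

c, m0, k = 1.0, 1.0, 1.0

def deriv(x: float, p: float) -> tuple[float, float]:
    gamma = math.sqrt(1.0 + (p / (m0 * c)) ** 2)
    v = p / (gamma * m0)      # velocity recovered from relativistic momentum
    return v, -k * x          # (dx/dt, dp/dt)

def energy(x: float, p: float) -> float:
    gamma = math.sqrt(1.0 + (p / (m0 * c)) ** 2)
    return gamma * m0 * c**2 + 0.5 * k * x**2  # m_v*c^2 + k*x^2/2

x, p, dt = 1.0, 0.0, 1e-3
E0 = energy(x, p)
for _ in range(20_000):       # one RK4 step per iteration
    k1x, k1p = deriv(x, p)
    k2x, k2p = deriv(x + 0.5 * dt * k1x, p + 0.5 * dt * k1p)
    k3x, k3p = deriv(x + 0.5 * dt * k2x, p + 0.5 * dt * k2p)
    k4x, k4p = deriv(x + dt * k3x, p + dt * k3p)
    x += dt * (k1x + 2 * k2x + 2 * k3x + k4x) / 6
    p += dt * (k1p + 2 * k2p + 2 * k3p + k4p) / 6

print("relative energy drift:", abs(energy(x, p) - E0) / E0)
```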

Of course, the more difficult thing is to go beyond this metaphor and explain how exactly this business of borrowing energy from, and returning it to, space would actually work. 🙂 So that would be a proper ‘ring theory’ of matter. 🙂