Understanding semiconductors, lasers and other technical stuff

I wrote a lot of papers but most of them – if not all – deal with very basic stuff: the meaning of uncertainty (just statistical indeterminacy because we have no information on the initial condition of the system), the Planck-Einstein relation (how Planck’s quantum of action models an elementary cycle or an oscillation), and Schrödinger’s wavefunctions (the solutions to his equation) as the equations of motion for a pointlike charge. If anything, I hope I managed to restore a feeling that quantum electrodynamics is not essentially different from classical physics: it just adds the element of quantization – of energy, momentum, magnetic flux, et cetera.

Importantly, we also talked about what photons and electrons actually are, and that electrons are pointlike but not dimensionless: their magnetic moment results from an internal current and, hence, spin is something real – something we can explain in terms of a two-dimensional perpetual current. In the process, we also explained why electrons take up some space: they have a radius (the Compton radius). So that explains the quantization of space, if you want.
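In case you want to check the arithmetic behind that, here is a minimal numerical sketch (in Python, using CODATA values; the script is mine and only illustrates the orders of magnitude): it computes the Compton radius a = ħ/(m·c) and the magnetic moment of a pointlike charge circling at lightspeed at that radius, which works out to the Bohr magneton q·ħ/(2m).

```python
import math

# Minimal sketch of the ring current (Zitterbewegung) picture of the electron.
# CODATA values; this only illustrates the orders of magnitude discussed above.
hbar = 1.054571817e-34    # reduced Planck constant (J*s)
m_e  = 9.1093837015e-31   # electron mass (kg)
c    = 2.99792458e8       # speed of light (m/s)
q_e  = 1.602176634e-19    # elementary charge (C)

a  = hbar / (m_e * c)          # Compton radius (m)
f  = c / (2 * math.pi * a)     # frequency of the circulating charge (Hz)
I  = q_e * f                   # the ring current (A)
mu = I * math.pi * a**2        # magnetic moment = current times loop area (J/T)

print(f"Compton radius a   = {a:.4e} m")     # about 3.86e-13 m
print(f"Ring current I     = {I:.4e} A")     # about 19.8 A
print(f"Magnetic moment mu = {mu:.4e} J/T")  # about 9.27e-24 J/T
print(f"Bohr magneton      = {q_e * hbar / (2 * m_e):.4e} J/T")
```

The small anomaly in the magnetic moment is, of course, a separate story.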

We also talked about fields and told you that – because matter-particles do have a structure – we should have a dynamic view of the fields surrounding them. Potential barriers – or their corollary: potential wells – should, therefore, not be thought of as static fields. They result from one or more charges moving around and these fields, therefore, vary in time. Hence, a particle breaking through a ‘potential wall’ or coming out of a potential ‘well’ is just using an opening, so to speak, which corresponds to a classical trajectory.

We, therefore, have the guts to say that some of what you will read in a standard textbook is plain nonsense. Richard Feynman, for example, starts his lecture on a current in a crystal lattice by writing this: “You would think that a low-energy electron would have great difficulty passing through a solid crystal. The atoms are packed together with their centers only a few angstroms apart, and the effective diameter of the atom for electron scattering is roughly an angstrom or so. That is, the atoms are large, relative to their spacing, so that you would expect the mean free path between collisions to be of the order of a few angstroms—which is practically nothing. You would expect the electron to bump into one atom or another almost immediately. Nevertheless, it is a ubiquitous phenomenon of nature that if the lattice is perfect, the electrons are able to travel through the crystal smoothly and easily—almost as if they were in a vacuum. This strange fact is what lets metals conduct electricity so easily; it has also permitted the development of many practical devices. It is, for instance, what makes it possible for a transistor to imitate the radio tube. In a radio tube electrons move freely through a vacuum, while in the transistor they move freely through a crystal lattice.” [The italics are mine.]

It is nonsense because it is not the electron that is traveling smoothly, easily or freely: it is the electrical signal, and – no ! – that is not to be equated with the quantum-mechanical amplitude. The quantum-mechanical amplitude is just a mathematical concept: it does not travel through the lattice in any physical sense ! In fact, it does not even travel through the lattice in a logical sense: the quantum-mechanical amplitudes are to be associated with the atoms in the crystal lattice, and describe their state – i.e. whether or not they have an extra electron or (if we are analyzing electron holes in the lattice) if they are lacking one. So the drift velocity of the electron is actually very low, and the way the signal moves through the lattice is just like in the game of musical chairs – but with the chairs on a line: all players agree to kindly move to the next chair for the new arrival so the last person on the last chair can leave the game to get a beer. So here it is the same: one extra electron causes all other electrons to move. [For more detail, we refer to our paper on matter-waves, amplitudes and signals.]
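To put a rough number on that drift velocity: a simple back-of-the-envelope calculation (the numbers for the wire are my own illustrative assumptions, not anything from Feynman’s lecture) shows the electrons themselves crawl along at a small fraction of a millimeter per second, even though the signal travels at a sizable fraction of lightspeed.

```python
# Back-of-the-envelope drift velocity of conduction electrons in a copper wire.
# Illustrative assumptions: 1 A through a 1 mm^2 cross-section.
I = 1.0                     # current (A)
A = 1.0e-6                  # cross-section (m^2), i.e. 1 mm^2
q = 1.602176634e-19         # elementary charge (C)
n = 8.5e28                  # free-electron density of copper (electrons per m^3)

v_drift = I / (n * q * A)   # v = I/(n*q*A)
print(f"drift velocity = {v_drift * 1000:.3f} mm/s")  # about 0.07 mm/s
```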

But so, yes, we have not said much about semiconductors, lasers and other technical stuff. Why not? Not because it should be difficult: we already cracked the more difficult stuff (think of an explanation of the anomalous magnetic moment, the Lamb shift, or one-photon Mach-Zehnder interference here). No. We are just lacking time ! It is, effectively, going to be an awful lot of work to rewrite those basic lectures on semiconductors – or on lasers or other technical matters which attract students in physics – so as to show why and how the mechanics of these things actually work: not approximately, but exactly – and, more importantly, why and how these phenomena can be explained in terms of something real: actual electrons moving through the lattice at lower or higher drift speeds within a conduction band (and what that conduction band actually is).

The same goes for lasers: we talk about induced emission and all that, but we need to explain what that might actually represent – while avoiding the usual mumbo-jumbo about bosonic behavior and other useless generalizations of properties of actual matter- and light-particles – properties that can be reasonably explained in terms of the structure of these particles, instead of invoking quantum-mechanical theorems or other dogmatic or canonical a priori assumptions.

So, yes, it is going to be hard work – and I am not quite sure if I have sufficient time or energy for it. I will try, and so I will probably be offline for quite some time while doing that. Be sure to have fun in the meanwhile ! 🙂

Quantum physics: The Guide

A few days ago, I mentioned I felt like writing a new book: a sort of guidebook for amateur physicists like me. I realized that it is actually fairly easy to do. I have three very basic papers – one on particles (both light and matter), one on fields (QED), and one on the quantum-mechanical toolbox (amplitude math and all of that). But then there is a lot of nitty-gritty to be written about the technical stuff, of course: self-interference, superconductors, the behavior of semiconductors (as used in transistors), lasers, and so many other things – and all of the math that comes with it. However, for that, I can refer you to Feynman’s three volumes of lectures, of course. In fact, I should: it’s all there. So… Well… That’s it, then. I am done with the QED sector. Here is my summary of it all (links to the papers on Phil Gibbs’ site):

Paper I: Quantum behavior (the abstract should enrage the dark forces)

Paper II: Probability amplitudes (quantum math)

Paper III: The concept of a field (why you should not bother about QFT)

Paper IV: Survivor’s guide to all of the rest (keep smiling)

Paper V: Uncertainty and the geometry of the wavefunction (the final!)

The last paper is interesting because it shows statistical indeterminism is the only real indeterminism. We can, therefore, use Bell’s Theorem to prove our theory is complete: there is no need for hidden variables, so why should we bother trying to prove or disprove that they exist?

Jean Louis Van Belle, 21 October 2020

Note: As for the QCD sector, that is a mess. We might have to wait another hundred years or so to see the smoke clear up there. Or, who knows, perhaps some visiting alien(s) will come and give us a decent alternative for the quark hypothesis and quantum field theories. One of my friends thinks so. Perhaps I should trust him more. 🙂

As for Phil Gibbs, I should really thank him for being one of the smartest people on Earth – and for his site, of course. Brilliant forum. Does what Feynman wanted everyone to do: look at the facts, and think for yourself. 🙂

The concept of a field

I ended my post on particles as spacetime oscillations saying I should probably write something about the concept of a field too, and why and how many academic physicists abuse it so often. So I did that, but it became a rather lengthy paper, and so I will refer you to Phil Gibbs’ site, where I post such stuff. Here is the link. Let me know what you think of it.

As for how it fits in with the rest of my writing, I already jokingly rewrote two of Feynman’s introductory Lectures on quantum mechanics (see: Quantum Behavior and Probability Amplitudes). I consider this paper to be the third. 🙂

Post scriptum: Now that I am talking about Richard Feynman – again ! – I should add that I really think of him as a weird character. I think he himself got caught in that image of the ‘Great Teacher’ while, at the same time (and, surely, as a Nobel laureate), he also had to be seen as a ‘Great Guru.’ Read: a Great Promoter of the ‘Grand Mystery of Quantum Mechanics’ – while he probably knew classical electromagnetism combined with the Planck-Einstein relation can explain it all… Indeed, his lecture on superconductivity starts off as an incoherent ensemble of ‘rocket science’ pieces, to then – in the very last paragraphs – manipulate Schrödinger’s equation (and a few others) to show superconducting currents are just what you would expect in a superconducting fluid. Let me quote him:

“Schrödinger’s equation for the electron pairs in a superconductor gives us the equations of motion of an electrically charged ideal fluid. Superconductivity is the same as the problem of the hydrodynamics of a charged liquid. If you want to solve any problem about superconductors you take these equations for the fluid [or the equivalent pair, Eqs. (21.32) and (21.33)], and combine them with Maxwell’s equations to get the fields.”

So… Well… Looks like he too is all about impressing people with ‘rocket science models’ first, and then he simplifies it all to… Well… Something simple. 😊

Having said that, I still like Feynman more than modern science gurus, because the latter usually don’t get to the simplifying part. :-/

Particles as spacetime oscillations

My very first publication on Phil Gibbs’ site – The Quantum-Mechanical Wavefunction as a Gravitational Wave – reached 500+ downloads. I find that weird, because I warn the reader in the comments section that some of these early ideas do not make sense. Indeed, while my idea of modelling an electron as a two-dimensional oscillation has not changed, the essence of the model did. My theory of matter is based on the idea of a naked charge – with zero rest mass – orbiting around some center, and the energy in its motion – a perpetual current ring, really – is what gives matter its (equivalent) mass. This is Wheeler’s idea of ‘mass without mass’. The force is, therefore, definitely not gravitational.

It cannot be: the force has to grab onto something, and all it can grab onto is the naked charge. The force must, therefore, be electromagnetic. So I now look at that very first paper as an immature essay. However, I leave it there because that paper does ask all of the right questions, and I should probably revisit it – because the questions I get on my last paper on the subject – De Broglie’s Matter-Wave: Concept and Issues, which gets much more attention on ResearchGate than on Phil Gibbs’ site (so it is more serious, perhaps) – are quite similar to the ones I try to answer in that very first paper: what is the true nature of the matter-wave? What is that fundamental oscillation?

I have been thinking about this for many years now, and I may never be able to give a definite answer to the question, but yesterday night some thoughts came to me that may or may not make sense. And so to be able to determine whether they might, I thought I should write them down. So that is what I am going to do here, and you should not take it very seriously. If anything, they may help you to find some answers for yourself. So if you feel like switching off because I am getting too philosophical, please do: I myself wonder how useful it is to try to interpret equations and, hence, to write about what I am going to write about here – so I do not mind at all if you do too!

That is too much already as an introduction, so let us get started. One of my more obvious reflections yesterday was this: the nature of the matter-wave is not gravitational, but it is an oscillation in space and in time. As such, we may think of it as a spacetime oscillation. In any case, physicists often talk about spacetime oscillations without any clear idea of what they actually mean by it, so we may as well try to clarify it in this very particular context here: the explanation of matter in terms of an oscillating pointlike charge. Indeed, the first obvious point to make is that any such perpetual motion may effectively be said to be a spacetime oscillation: it is an oscillation in space – and in time, right?

As such, a planet orbiting some star – think of the Earth orbiting our Sun – may be thought of as a spacetime oscillation too ! Am I joking? No, I am not. Let me elaborate this idea. The concept of a spacetime oscillation implies we think of space as something physical, as having an essence of sorts. We talk of a spacetime fabric, a (relativistic) aether or whatever other term comes to mind. The Wikipedia article on aether theories quotes Robert B. Laughlin as follows in this regard: “It is ironic that Einstein’s most creative work, the general theory of relativity, should boil down to conceptualizing space as a medium when his original premise [in special relativity] was that no such medium existed [..] The word ‘ether’ has extremely negative connotations in theoretical physics because of its past association with opposition to relativity. This is unfortunate because, stripped of these connotations, it rather nicely captures the way most physicists actually think about the vacuum.”

I disagree with that. I do not think about the vacuum in such terms: the vacuum is the Cartesian mathematical 3D space in which we imagine stuff to exist. We should not endow this mathematical space with any physical qualities – with some essence. Mathematical concepts are mathematical concepts only. It is the difference between size and distance. Size is physical: an electron – any physical object, really – has a size. But the distance between two points is a mathematical concept only.

The confusion arises from us expressing both in terms of the physical distance unit: a meter, or a pico- or femtometer – whatever is appropriate for the scale of the things that we are looking at. So it is the same thing when we talk about a point: we need to distinguish a physical point – think of our pointlike charge here – and a mathematical point. That should be the key to understanding matter-particles as spacetime oscillations – if we would want to understand them as such, that is – which is what we are trying to do here. So how should we think of this? Let us start with matter-particles. In our realist interpretation of physics, we think of matter-particles as consisting of charge – in contrast to, say, photons, the particles of light, which (also) carry energy but no charge. Let us consider the electron, because the structure of the proton is very different and may involve a different force: a strong force – as opposed to the electromagnetic force that we are so familiar with. Let me use an animated gif from the Wikipedia Commons repository to recapture the idea of such a (two-dimensional) oscillation.

Think of the green dot as the pointlike charge: it is a physical point moving in a mathematical space – a simple 2D plane, in this case. So it goes from here to there, and here and there are two mathematical points only: points in the 3D Cartesian space which – as H.A. Lorentz pointed out when criticizing the new theories – is a notion without which we cannot imagine any idea in physics. So we have a spacetime oscillation here alright: an oscillation in space, and in time. Oscillations in space are always oscillations in time, obviously – because the idea of an oscillation implies the idea of motion, and the idea of motion always involves the notion of space as well as the notion of time. So what makes this spacetime oscillation different from, say, the Earth orbiting around the Sun?

Perhaps we should answer this question by pointing out the similarities first. A planet orbiting around the sun involves perpetual motion too: there is an interplay between kinetic and potential energy, both of which depend on the distance from the center. Indeed, Earth falls into the Sun, so to speak, and its kinetic energy gets converted into potential energy and vice versa. However, the centripetal force is gravitational, of course. The centripetal force on the pointlike charge is not: there is nothing at the center pulling it. But – Hey ! – what is pulling our planet, exactly? We do not believe in virtual gravitons traveling up and down between the Sun and the Earth, do we? So the analogy may not be so bad, after all ! It is just a very different force: its structure is different, and it acts on something different: a charge versus mass. That’s it. Nothing more. Nothing less.

Or… Well… Velocities are very different, of course, but even there distinctions are, perhaps, less clear-cut than they appear to be at first. The pointlike charge in our electron has no mass and, therefore, moves at lightspeed. The electron itself, however, acquires mass and, therefore, moves at a fraction of lightspeed only in an atomic or molecular orbital. And much slower in a perpetual current in superconducting material. [Yes. When thinking of electrons in the context of superconduction, we have an added complication: we should think of electron pairs (Cooper pairs) rather than individual electrons, it seems. We are not quite sure what to make of this – except to note electrons will also want to lower their energy by pairing up in atomic or molecular orbitals, and we think the nature of this pairing must, therefore, be the same.]
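To put some numbers on those velocities: in the ground state of a hydrogen atom, the electron’s orbital velocity is the fine-structure constant times lightspeed – less than 1% of c, but still many orders of magnitude above the drift velocities in a wire. A quick check (my own sketch again):

```python
# Orbital velocity of the electron in the ground state of hydrogen: v = alpha*c.
alpha = 0.0072973525693     # fine-structure constant
c     = 2.99792458e8        # speed of light (m/s)

v = alpha * c
print(f"v = alpha*c = {v:.3e} m/s ({v / c:.2%} of lightspeed)")  # about 2.19e6 m/s, 0.73% of c
```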

Did we clarify anything? Maybe. Maybe not. Saying that an electron is a pointlike charge and a two-dimensional oscillation, or saying that it’s a spacetime oscillation itself, appears to be a tautology here, right? Yes. You are right. So what’s the point, then?

We are not sure, except for one thing: when defining particles as spacetime oscillations, we definitely do not need the idea of virtual particles. That’s rubbish: an unnecessary multiplication of concepts. So I think that is some kind of progress we got out of these rather difficult philosophical reflections, and that is useful, I think. To illustrate this point, you may want to think of the concept of heat. When there is heat, there is no empty space. There is no vacuum anymore. When we heat a space, we fill it with photons. They bounce around and get absorbed and re-emitted all of the time. In fact, we, therefore, also need matter to imagine a heated space. Hence, space here is no longer the vacuum: it is full of energy, but this energy is always somewhere – and somewhere specifically: it’s carried by a photon, or (temporarily) stored as an electron orbits around a nucleus in an excited state (which amounts to the same as saying it is being stored by an atom or some molecular structure consisting of atoms). In short, heat is energy but it is being ‘transmitted’ or ‘transported’ through space by photons. Again, the point is that the vacuum itself should not be associated with energy: it is empty. It is a mathematical construct only.

We should try to think this through – even further than we already did – by thinking about how photons – or heat radiation – would disturb perpetual currents: in an atom, obviously (the electron orbitals), but also perpetual superconducting currents at the macro-scale: unless the added heat from the photons is continuously taken away by the supercooling helium or whatever is used, radiation or heat will literally bounce the electrons into a different physical trajectory, so we should effectively associate excited energy states with different patterns of motion: a different oscillation, in other words. So it looks like electrons – or electrons in atomic/molecular orbitals – do go from one state into another (excited) state and back again but, in whatever state they are, we should think of them as being in their own space (and time). So that is the nature of particles as spacetime oscillations then, I guess. Can we say anything more about it?

I am not sure. At this moment, I surely have nothing more to say about it. Some more thinking about how superconduction – at the macro-scale – might actually work could, perhaps, shed more light on it: is there an energy transfer between the two electrons in a Cooper pair? An interplay between kinetic and potential energy? Perhaps the two electrons behave like coupled pendulums? If they do, then we need to answer the question: how, exactly? Is there an exchange of (real) photons, or is the magic of the force the same: some weird interaction in spacetime which we cannot meaningfully analyze any further, but which not only gives space some physicality but also causes us to think of it as being discrete, somehow. Indeed, an electron is an electron: it is a whole. Thinking of it as a pointlike charge in perpetual motion does not make it less of a whole. Likewise, an electron in an atomic orbital is a whole as well: it just occupies more space. But both are particles: they have a size. They are no longer pointlike: they occupy a measurable space: the Cartesian (continuous) mathematical space becomes (discrete) physical space.

I need to add another idea here – or another question for you, if I may. If superconduction can only occur when electrons pair up, then we should probably think of the pairs as some unit too – and a unit that may take up a rather large space. Hence, the idea of a discrete, pointlike, particle becomes somewhat blurred, right? Or, at the very least, it becomes somewhat less absolute, doesn’t it? 🙂

I guess I am getting lost in words here, which is probably worse than getting ‘lost in math’ (I am just paraphrasing Sabine Hossenfelder here) but, yes, that is why I am writing a blog post rather than a paper here. If you want equations, read my papers. 🙂 Oh – And don’t forget: fields are real as well. They may be relative, but they are real. And it’s not because they are quantized (think of (magnetic) flux quantization in the context of superconductivity, for example) that they are necessarily discrete – that we have field packets, so to speak. I should do a blog post on that. I will. Give me some time. 🙂

Post scriptum: What I wrote above on there not being any exchange of gravitons between an orbiting planet and its central star (or between double stars or whatever gravitational trajectories out there), does not imply I am ruling out their existence. I am a firm believer in the existence of gravitational waves, in fact. We should all be firm believers because – apart from some marginal critics still wondering what was actually being measured – the LIGO detections are real. However, whether or not these waves involve discrete lightlike particles – like photons and, in the case of the strong force, neutrinos – is a very different question. Do I have an opinion on it? I sure do. It is this: when matter gets destroyed or created (remember the LIGO detections involved the creation and/or destruction of matter as black holes merge), gravitational waves must carry some of the energy, and there is no reason to assume that the Planck-Einstein relation would not apply. Hence, we will have energy packets in the gravitational wave as well: the equivalent of photons (and, most probably, of neutrinos), in other words. All of this is, obviously, very speculative. Again, just think of this whole blog post as me freewheeling: the objective is, quite simply, to make you think as hard as I do about these matters. 🙂

As for my remark on the Cooper pairs being a unit or not, that question may be answered by thinking about what happens if Cooper pairs are broken, which is a topic I am not familiar with, so I cannot say anything about it.

Bell’s No-Go Theorem

I’ve been asked a couple of times: “What about Bell’s No-Go Theorem, which tells us there are no hidden variables that can explain quantum-mechanical interference in some kind of classical way?” My answer to that question is quite arrogant, because it’s the answer Albert Einstein would give when younger physicists would point out that his objections to quantum mechanics (which he usually expressed as some new thought experiment) violated this or that axiom or theorem in quantum mechanics: “Das ist mir wur(sch)t.”

In English: I don’t care. Einstein never lost the discussions with Heisenberg or Bohr: he just got tired of them. Like Einstein, I don’t care either – because Bell’s Theorem is what it is: a mathematical theorem. Hence, it respects the GIGO principle: garbage in, garbage out. In fact, John Stewart Bell himself – one of the third-generation physicists, we may say – had always hoped that some “radical conceptual renewal”[1] might disprove his conclusions. We should also remember Bell kept exploring alternative theories – including Bohm’s pilot wave theory, which is a hidden variables theory – until his death at a relatively young age. [J.S. Bell died from a cerebral hemorrhage in 1990 – the year he was nominated for the Nobel Prize in Physics. He was just 62 years old then.]

So I never really explored Bell’s Theorem. I was, therefore, very happy to get an email from Gerard van der Ham, who seems to have the necessary courage and perseverance to research this question in much more depth and, yes, relate it to a (local) realist interpretation of quantum mechanics. I actually still need to study his papers, and analyze the YouTube video he made (which looks much more professional than my videos), but this is promising.

To be frank, I got tired of all of these discussions – just like Einstein, I guess. The difference between realist interpretations of quantum mechanics and the Copenhagen dogmas is just a factor 2 or π in the formulas, and Richard Feynman famously said we should not care about such factors (Feynman’s Lectures, III-2-4). Modern physicists fudge them away consistently. They’ve done much worse than that, actually. :-/ They are not interested in truth. Convention, dogma, indoctrination – non-scientific historical stuff – seems to prevent them from that. And modern science gurus – the likes of Sean Carroll or Sabine Hossenfelder etc. – play the age-old game of being interesting: they pretend to know something you do not know or – if they don’t – that they are close to getting the answers. They are not. They have them already. They just don’t want to tell you that because, yes, it’s the end of physics.

Form and substance

Philosophers usually distinguish between form and matter, rather than form and substance. Matter, as opposed to form, is then what is supposed to be formless. However, if there is anything that physics – as a science – has taught us, it is that matter is defined by its form: in fact, it is the form factor which explains the difference between, say, a proton and an electron. So we might say that matter combines substance and form.

Now, we all know what form is: it is a mathematical quality—like the quality of having the shape of a triangle or a cube. But what is (the) substance that matter is made of? It is charge. Electric charge. It comes in various densities and shapes – that is why we think of it as being basically formless – but we can say a few more things about it. One is that it always comes in the same unit: the elementary charge—which may be positive or negative. Another is that the concept of charge is closely related to the concept of a force: a force acts on a charge—always.

We are talking elementary forces here, of course—the electromagnetic force, mainly. What about gravity? And what about the strong force? Attempts to model gravity as some kind of residual force, and the strong force as some kind of electromagnetic force with a different geometry but acting on the very same charge, have not been successful so far—but we should immediately add that mainstream academics never focused on it either, so the result may be commensurate with the effort made: nothing much.

Indeed, Einstein basically explained gravity away by giving us a geometric interpretation for it (general relativity theory) which, as far as I can see, confirms it may be some residual force resulting from the particular layout of positive and negative charge in electrically neutral atomic and molecular structures. As for the strong force, I believe the quark hypothesis – which basically states that partial (non-elementary) charges are, somehow, real – has led mainstream physics into the dead end it finds itself in now. Will it ever get out of it?

I am not sure. It does not matter all that much to me. I am not a mainstream scientist and I have the answers I was looking for. These answers may be temporary, but they are the best I have for the time being. The best quote I can think of right now is this one:

‘We are in the words, and at the same time, apart from them. The words spin out, spin us out, over a void. There, somewhere between us, some words form some answer for some time, allowing us to live more fully in the forgetting face of nonexistence, in the dissolving away of each other.’ (Jacques Lacan, in Jeremy D. Safran (2003), Psychoanalysis and Buddhism: an unfolding dialogue, p. 134)

That says it all, doesn’t it? For the time being, at least. 🙂

Post scriptum: You might think explaining gravity as some kind of residual electromagnetic force should be impossible, but explaining the attractive force between like charges inside a nucleus was pretty difficult as well, until someone came up with a relatively simple idea based on the idea of ring currents. 🙂

The proton radius and mass

Our alternative realist interpretation of quantum physics is pretty complete but one thing that has been puzzling us is the mass density of a proton: why is it so massive as compared to an electron? We simplified things by adding a factor in the Planck-Einstein relation. To be precise, we wrote it as E = 4·h·f. This allowed us to derive the proton radius from the ring current model:

r_p = 4·ħ/(m_p·c) ≈ 0.84 × 10⁻¹⁵ m

This felt a bit artificial. Writing the Planck-Einstein relation using an integer multiple of h or ħ (E = n·h·f = n·ħ·ω) is not uncommon. You should have encountered this relation when studying the black-body problem, for example, and it is also commonly used in the context of Bohr orbitals of electrons. But why is n equal to 4 here? Why not 2, or 3, or 5 or some other integer? We do not know: all we know is that the proton is very different. A proton is, effectively, not the antimatter counterpart of an electron—a positron. While the proton is much smaller – 459 times smaller, to be precise – its mass is 1,836 times that of the electron. Note that we have the same 1/4 factor here because the mass and Compton radius are inversely proportional:

r_e/r_p = (ħ/(m_e·c))/(4·ħ/(m_p·c)) = (1/4)·(m_p/m_e) ≈ 1,836/4 ≈ 459
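A quick numerical check of the two relations above (my own script; the n = 4 factor remains the assumption under discussion):

```python
# Proton radius from the ring current model with E = 4*h_bar*omega, i.e. r_p = 4*h_bar/(m_p*c),
# and the radius ratio r_e/r_p = m_p/(4*m_e). The factor 4 is the assumption under discussion.
hbar = 1.054571817e-34    # reduced Planck constant (J*s)
c    = 2.99792458e8       # speed of light (m/s)
m_p  = 1.67262192369e-27  # proton mass (kg)
m_e  = 9.1093837015e-31   # electron mass (kg)

r_p = 4 * hbar / (m_p * c)    # proton radius (m)
r_e = hbar / (m_e * c)        # electron Compton radius (m)

print(f"r_p       = {r_p * 1e15:.3f} fm")   # about 0.841 fm, close to the measured 0.84 fm
print(f"r_e / r_p = {r_e / r_p:.0f}")       # about 459
print(f"m_p / m_e = {m_p / m_e:.0f}")       # about 1836 = 4 x 459
```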

This doesn’t look all that bad but it feels artificial. In addition, our reasoning involved an unexplained difference – a mysterious but exact SQRT(2) factor, to be precise – between the theoretical and experimentally measured magnetic moment of a proton. In short, we assumed some form factor must explain both the extraordinary mass density as well as this SQRT(2) factor but we were not quite able to pin it down, exactly. A remark on a video on our YouTube channel inspired us to think some more – thank you for that, Andy! – and we think we may have the answer now.

We now think the mass – or energy – of a proton combines two oscillations: one is the Zitterbewegung oscillation of the pointlike charge (which is a circular oscillation in a plane) while the other is the oscillation of the plane itself. The illustration below is a bit horrendous (I am not so good at drawings) but might help you to get the point. The plane of the Zitterbewegung (the plane of the proton ring current, in other words) may oscillate itself between +90 and −90 degrees. If so, the effective magnetic moment will differ from the theoretical magnetic moment we calculated, and it will differ by that SQRT(2) factor.

[Illustration: proton oscillation – the plane of the ring current itself oscillating between +90 and −90 degrees]

Hence, we should rewrite our paper, but the logic remains the same: we just have a much better explanation now of why we should apply the energy equipartition theorem.
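To make the SQRT(2) remark a bit more tangible: if the effective magnetic moment is the root-mean-square projection of the in-plane moment while the plane of the ring current swings between +90 and −90 degrees, you get a 1/√2 factor exactly, because the average of cos²θ is then 1/2 – which is just the energy equipartition idea again. A small numerical check (my own sketch, assuming the tilt angle sweeps at a constant rate; whether this is really the right way to picture the proton is, of course, the open question here):

```python
import math

# RMS projection of the ring current's magnetic moment while its plane oscillates.
# Assumption: the tilt angle theta sweeps uniformly between -90 and +90 degrees,
# so the average of cos(theta)^2 is 1/2 and the RMS projection is 1/sqrt(2).
N = 1_000_000
thetas = [-math.pi / 2 + math.pi * k / N for k in range(N)]
mean_cos2 = sum(math.cos(t) ** 2 for t in thetas) / N

print(f"<cos^2 theta>      = {mean_cos2:.6f}")             # about 0.5
print(f"RMS of cos(theta)  = {math.sqrt(mean_cos2):.6f}")  # about 0.7071
print(f"1/sqrt(2)          = {1 / math.sqrt(2):.6f}")      # 0.707107
```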

Mystery solved! 🙂

Post scriptum (9 August 2020): The solution is not as simple as you may imagine. When combining the idea of some other motion with the ring current, we must remember that the speed of light – the presumed tangential speed of our pointlike charge – cannot change. Hence, the radius must become smaller. We also need to think about distinguishing two different frequencies, and things quickly become quite complicated.

The geometry of the matter-wave

Yesterday, I was to talk for about 30 minutes to some students who are looking at classical electron models as part of an attempt to model what might be happening to an electron moving through a magnetic field. Of course, I only had time to discuss the ring current model, and even then it inadvertently turned into a two-hour presentation. Fortunately, they were polite and no one dropped out—although it was an online Google Meet. In fact, they reacted quite enthusiastically, and so we all enjoyed it a lot. So much so that I adjusted the presentation a bit the next morning (which added even more time to it, unfortunately) so as to add it to my YouTube channel. So this is the link to it, and I hope you enjoy it. If so, please like it—and share it! 🙂

Oh! Forgot to mention: in case you wonder why this video is different from the others, see my Tweet on Sean Carroll’s latest series of videos hereunder. That should explain it.

[Embedded Tweet on Sean Carroll’s latest series of videos]

Post scriptum: I got the usual question, of course: if an electron is a ring current, then why doesn’t it radiate its energy away? The easy answer is: an electron is an electron and it doesn’t—for the same reason that an electron in an atomic orbital or a Cooper pair in a superconducting loop of current does not radiate energy away. The more difficult answer is a bit mysterious: it has got to do with flux quantization and, most importantly, with the Planck-Einstein relation. I cannot be too long here (this is just a footnote in a blog post) but the following elements should be noted:

1. The Planck-Einstein law embodies a (stable) wavicle: a wavicle respects the Planck-Einstein relation (E = h·f) as well as Einstein’s mass-energy equivalence relation (E = mc²). A wavicle will, therefore, carry energy but it will also pack one or more units of Planck’s quantum of action. Both the energy as well as this finite amount of physical action (Wirkung in German) will be conserved—cycle after cycle.

2. Hence, equilibrium states should be thought of as electromagnetic oscillations without friction. Indeed, it is the frictional element that explains the radiation of, say, an electron going up and down in an antenna and radiating some electromagnetic signal out. To add to this rather intuitive explanation, I should also remind you that it is the accelerations and decelerations of the electric charge in an antenna that generate the radio wave—not the motion as such. So one should, perhaps, think of a charge going round and round as moving like in a straight line—along some geodesic in its own space. That’s the metaphor, at least.

3. Technically, one needs to think in terms of quantized fluxes and Poynting vectors and energy transfers from kinetic to potential (and back) and from ‘electric’ to ‘magnetic’ (and back). In short, the electron really is an electromagnetic perpetuum mobile ! I know that sounds mystical (too) but then I never promised I would take all of the mystery away from quantum physics ! 🙂 If there were no mystery left, I would not be interested in physics. 😉 On the quantization of flux for superconducting loops: see, for example, http://electron6.phys.utk.edu/qm2/modules/m5-6/flux.htm. There is other stuff you may want to dig into too, like my alternative Principles of Physics, of course ! 🙂 A quick numerical footnote to points 1 and 3 follows below.
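The numbers below are my own sketch (CODATA values): the E = h·f and E = mc² relations fix the electron’s rest energy, the frequency of its presumed internal cycle, and the duration of one such cycle, while the magnetic flux quantum for a superconducting loop – h/(2e), the factor 2 reflecting the pairing of the electrons – comes out at about 2 × 10⁻¹⁵ Wb.

```python
# Numerical footnote to points 1 and 3: the electron as a wavicle, and the flux quantum.
h   = 6.62607015e-34     # Planck constant (J*s)
m_e = 9.1093837015e-31   # electron mass (kg)
c   = 2.99792458e8       # speed of light (m/s)
q_e = 1.602176634e-19    # elementary charge (C)

E     = m_e * c**2       # rest energy, E = m*c^2 (J)
f     = E / h            # Planck-Einstein frequency, f = E/h (Hz)
T     = 1 / f            # duration of one cycle (s)
phi_0 = h / (2 * q_e)    # magnetic flux quantum of a superconducting loop (Wb)

print(f"E    = {E:.4e} J ({E / q_e / 1e6:.3f} MeV)")  # about 0.511 MeV
print(f"f    = {f:.4e} Hz")                           # about 1.24e20 Hz
print(f"T    = {T:.4e} s")                            # about 8.09e-21 s
print(f"Phi0 = {phi_0:.4e} Wb")                       # about 2.07e-15 Wb
```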