The nature of antimatter (and dark matter too!)

The electromagnetic force has a built-in asymmetry: the magnetic field lags the electric field by a phase shift of 90 degrees. We can use complex notation to write the E and B vectors as functions of each other. Indeed, the Lorentz force on a charge is equal to F = qE + q(v×B). Hence, if we know the electric field E, then we also know the magnetic field B: B is perpendicular to E, and its magnitude is 1/c times the magnitude of E. We may, therefore, write:

B = –iE/c

The minus sign in the B = –iE/c expression is there because we need to combine several conventions here. Of course, there is the classical (physical) right-hand rule for E and B, but we also need to combine the right-hand rule for the coordinate system with the convention that multiplication by the imaginary unit amounts to a counterclockwise rotation by 90 degrees. Hence, the minus sign is necessary for the consistency of the description. It ensures that we can associate the a·e^(iE·t/ħ) and a·e^(−iE·t/ħ) functions with left- and right-handed spin (angular momentum), respectively.
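
For readers who like to check conventions numerically, here is a minimal sketch of our own (not part of the original argument) showing how multiplication by −i acts as a clockwise rotation by 90 degrees, which is all the B = –iE/c relation encodes: B is E rotated by a quarter turn and scaled by 1/c.

```python
import numpy as np

c = 299_792_458.0  # speed of light (m/s)

# Represent field vectors in the plane transverse to propagation as complex numbers.
E = 1.0 + 0.0j       # electric field along the "real" axis (arbitrary units)
B = -1j * E / c      # the B = -iE/c relation quoted above

print(np.angle(E, deg=True))           # 0.0   -> E along the real axis
print(np.angle(B, deg=True))           # -90.0 -> B lags E by a quarter turn
print(np.isclose(abs(B) * c, abs(E)))  # True  -> |B| = |E|/c
```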

Now, we can easily imagine an antiforce: an electromagnetic antiforce would have a magnetic field which precedes the electric field by 90 degrees, and we can do the same for the nuclear force (EM and nuclear oscillations are 2D and 3D oscillations, respectively). It is just an application of Occam’s Razor: the mathematical possibilities in the description (notations and equations) must correspond to physical realities, and vice versa (one-to-one). Hence, to describe antimatter, all we have to do is to put a minus sign in front of the wavefunction. [Of course, we should also take the opposite of the charge(s) of its matter counterpart, and please note we have a possible plural here (charges) because we think of neutral particles (e.g. neutrons, or neutral mesons) as consisting of opposite charges.] This is just the principle which we already applied when working out the equation for the neutral antikaon (see Annex IV and V of the above-referenced paper):

Don’t worry if you do not understand too much of the equations: we just put them there to impress the professionals. 🙂 The point is this: matter and antimatter are each other’s opposite, literally: the wavefunctions a·e^(iE·t/ħ) and –a·e^(iE·t/ħ) add up to zero, and they correspond to opposite forces too! Of course, we also have light-particles, so we have antiphotons and antineutrinos too.

We think this explains the rather enormous amount of so-called dark matter and dark energy in the Universe (the Wikipedia article on dark matter says it accounts for about 85% of the total mass/energy of the Universe, while the article on the observable Universe puts it at about 95%!). We did not say much about this in our YouTube talk about the Universe, but we think we understand things now. Dark matter is called dark because it does not appear to interact with the electromagnetic field: it does not seem to absorb, reflect or emit electromagnetic radiation and is, therefore, difficult to detect. That should not be a surprise: antiphotons would not be absorbed or emitted by ordinary matter. Only anti-atoms (think of an antihydrogen atom as an antiproton and a positron here) would do so.

So did we explain the mystery? We think so. 🙂

We will conclude with a final remark/question. The opposite spacetime signature of antimatter is, obviously, equivalent to a swap of the real and imaginary axes. This begs the question: can we, perhaps, dispense with the concept of charge altogether? Is geometry enough to understand everything? We are not quite sure how to answer this question but we do not think so: a positron is a positron, and an electron is an electron – the sign of the charge (positive and negative, respectively) is what distinguishes them! We also think charge is conserved, at the level of the charges themselves (see our paper on matter/antimatter pair production and annihilation).

We, therefore, think of charge as the essence of the Universe. But, yes, everything else is sheer geometry! 🙂

The End of Science?

There are two branches of physics. The nicer branch studies equilibrium states: simple laws, stable particles (electrons and protons, basically), the expanding (oscillating?) Universe, etcetera. This branch includes the study of dynamical systems which we can only describe in terms of probabilities or approximations: think of kinetic gas theory (thermodynamics) or, much simpler, hydrodynamics (the flow of water, Feynman, Vol. II, chapters 40 and 41), about which Feynman writes this:

“The simplest form of the problem is to take a pipe that is very long and push water through it at high speed. We ask: to push a given amount of water through that pipe, how much pressure is needed? No one can analyze it from first principles and the properties of water. If the water flows very slowly, or if we use a thick goo like honey, then we can do it nicely. You will find that in your textbook. What we really cannot do is deal with actual, wet water running through a pipe. That is the central problem which we ought to solve some day, and we have not.” (Feynman, I-3-7)

Still, we believe first principles do apply to the flow of water through a pipe. The second branch of physics, in contrast, studies non-stable particles: transients (charged kaons and pions, for example) or resonances (very short-lived intermediate energy states). The physicists who study these must be commended, but they resemble econometricians modeling input-output relations: if they are lucky, they will get some kind of mathematical description of what goes in and what goes out, but the math does not tell them how stuff actually happens. It leads one to think about the difference between a theory, a calculation and an explanation. Simplifying somewhat, we can represent such input-output relations by thinking of a process A operating on some state |ψ⟩ to produce some other state |ϕ⟩, which we write like this:

⟨ϕ|A|ψ⟩

A is referred to as a Hermitian matrix if the process is reversible. Reversibility looks like time reversal, which can be represented by taking the complex conjugate: ⟨ϕ|A|ψ⟩* = ⟨ψ|A†|ϕ⟩. We put a minus sign in front of the imaginary unit, so we have –i instead of i in the wavefunctions (or i instead of –i with respect to the usual convention for denoting the direction of rotation). Processes may not be reversible, in which case we talk about symmetry-breaking: CPT-symmetry is always respected, so if T-symmetry (time) is broken, CP-symmetry is broken as well. There is nothing magical about that.
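
As a purely illustrative check of our own (not part of the original text), the bra-ket identity quoted above is easy to verify numerically for arbitrary states and an arbitrary operator:

```python
import numpy as np

rng = np.random.default_rng(1)

# Arbitrary two-component states and an arbitrary complex operator A.
psi = rng.normal(size=2) + 1j * rng.normal(size=2)
phi = rng.normal(size=2) + 1j * rng.normal(size=2)
A = rng.normal(size=(2, 2)) + 1j * rng.normal(size=(2, 2))

amp = phi.conj() @ A @ psi                    # <phi|A|psi>
amp_reversed = psi.conj() @ A.conj().T @ phi  # <psi|A-dagger|phi>

print(np.isclose(np.conj(amp), amp_reversed))  # True: <phi|A|psi>* = <psi|A-dagger|phi>

# A Hermitian operator is its own conjugate transpose (A-dagger = A):
H = A + A.conj().T
print(np.allclose(H, H.conj().T))              # True
```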

Physicists found the description of these input-output relations can be simplified greatly by introducing quarks (see Annex II of our paper on ontology and physics). Quarks have partial charge and, more generally, mix physical dimensions (mass/energy, spin or (angular) momentum). They create some order – think of it as some kind of taxonomy – in the vast zoo of (unstable) particles, which is great. However, we do not think there was a need to give them some kind of ontological status: unlike plants or insects, partial charges do not exist.

We also think the association between forces and (virtual) particles is misguided. Of course, one might say forces are being mediated by particles (matter-particles or light-particles), because particles effectively pack energy and angular momentum. Light-particles (photons and neutrinos) differ from matter-particles (electrons, protons) in that they carry no charge, but they do carry electromagnetic and/or nuclear energy, and force and energy are, therefore, being transferred through particle reactions, elastically or non-elastically. However, we think it is important to clearly separate the notions of fields and particles: they are governed by the same laws (conservation of charge, energy, (linear and angular) momentum and – last but not least – (physical) action), but their nature is very different.

W.E. Lamb (1995), nearing the end of his very distinguished scientific career, wrote about “a comedy of errors and historical accidents”, but we think the business is rather serious: we have reached the End of Science. We have solved Feynman’s U = 0 equation. All that is left is engineering: solving practical problems and inventing new stuff. That should be exciting enough. 🙂

Post scriptum: I added an Annex (III) to my paper on ontology and physics, with what we think of as a complete description of the Universe. It is abstruse but fun (we hope!): we basically add a description of events to Feynman’s U = 0 (un)worldliness formula. 🙂

On the Universe, Alien Life and the End of Science

I was a bit bored today (Valentine’s Day but no Valentine playing for me), and so I did a video on the Universe and the possibility of Life elsewhere. It is simple (I managed to limit it to 40 minutes!) but it deals with all of the Big Questions: fundamental forces and distance scales; the geometric approach to gravity and the curvature of the Universe; Big Bang(s) and – who knows? – an oscillating Universe; and, yes, Life here and, perhaps, elsewhere. Enjoy ! The corresponding paper is available on ResearchGate.

PS: I’ve also organized my thoughts on quarks in a (much more) moderate annex to my paper on ontology and physics. Quite a productive Valentine’s Day – despite the absence of a Valentina ! 🙂 JL

Ontology and physics

One sometimes wonders what keeps amateur physicists awake. Why is it that they want to understand quarks and wave equations, or delve into complicated math (perturbation theory, for example)? I believe it is driven by the same human curiosity that drives philosophy. Physics stands apart from other sciences because it examines the smallest of smallest – the essence of things, so to speak.

Unlike practitioners of other sciences (the human sciences in particular, perhaps), physicists also seek to reduce the number of concepts, rather than multiply them – even if, sadly enough, they do not always do a good job at that. However, generally speaking, physics and math may, effectively, be considered to be the King and Queen of Science, respectively.

The Queen is an eternal beauty, of course, because Her Language may mean anything. Physics, in contrast, talks specifics: physical dimensions (force, distance, energy, etcetera), as opposed to mathematical dimensions – which are mere quantities (scalars and vectors).

Science differs from religion in that it seeks to experimentally verify its propositions. It measures rather than believes. These measurements are cross-checked by a global community and, thereby, establish a non-subjective reality. The question of whether reality exists outside of us, is irrelevant: it is a category mistake (Ryle, 1949). It is like asking why we are here: we just are.

All is in the fundamental equations. An equation relates a measurement to Nature’s constants. Measurements – energy/mass, or velocities – are relative. Nature’s constants do not depend on the frame of reference of the observer and we may, therefore, label them as being absolute. This corresponds to the difference between variables and parameters in equations. The speed of light (c) and Planck’s quantum of action (h) are the parameters in E/m = c² and E = hf, respectively.

Feynman (II-25-6) is right that the Great Law of Nature may be summarized as U = 0, but also that “this simple notation just hides the complexity in the definitions of symbols” – it is just a trick. It is like talking of the night “in which all cows are equally black” (Hegel, Phänomenologie des Geistes, Vorrede, 1807). Hence, the U = 0 equation needs to be separated out. I would separate it out as:

We imagine things in 3D space and one-directional time (Lorentz, 1927, and Kant, 1781). The imaginary unit operator (i) represents a rotation in space. A rotation takes time. Its physical dimension is, therefore, s/m or -s/m, as per the mathematical convention in place (Minkowski’s metric signature and counter-clockwise evolution of the argument of complex numbers, which represent the (elementary) wavefunction).

Velocities can be linear or tangential, giving rise to the concepts of linear versus angular momentum. Tangential velocities imply orbitals: circular and elliptical orbitals are closed. Particles are pointlike charges in closed orbitals. We are not sure if non-closed orbitals might correspond to some reality: linear oscillations are field particles, but we do not think of lines as non-closed orbitals. The curvature of real space (the Universe we live in) suggests we should, but we are not sure such thinking is productive (efforts to model gravity as a residual force have failed so far).

Space and time are innate or a priori categories (Kant, 1781). Elementary particles can be modeled as pointlike charges oscillating in space and in time. The concept of charge could be dispensed with if there were no lightlike particles: photons and neutrinos, which carry energy but no charge. The oscillating charge is pointlike but may have a finite (non-zero) physical dimension, which explains the anomalous magnetic moment of the free (Compton) electron. However, it only appears to have a non-zero dimension when the electromagnetic force is involved (the proton has no anomalous magnetic moment, and its radius is about 3.35 times smaller than the calculated radius of the pointlike charge inside of an electron). Why? We do not know: elementary particles are what they are.
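
To put a number on the 3.35 factor: if the calculated radius of the pointlike charge is taken to be the classical electron radius (an assumption on our part), the arithmetic works out as follows.

```python
import scipy.constants as const

# Classical (Thomson) electron radius: e^2 / (4*pi*eps0 * m_e * c^2)
r_charge = const.e**2 / (4 * const.pi * const.epsilon_0 * const.m_e * const.c**2)
r_proton = 0.841e-15  # measured proton charge radius (m)

print(r_charge)             # ~2.82e-15 m
print(r_charge / r_proton)  # ~3.35
```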

We have two forces: electromagnetic and nuclear. One of the most remarkable things is that the E/m = c² relation holds for both electromagnetic and nuclear oscillations, or combinations thereof (superposition theorem). Combined with the oscillator model (E = ma²ω² = mc² and, therefore, c = aω), this makes us think of c² as modeling an elasticity or plasticity of space. Why two oscillatory modes only? In 3D space, we can only imagine oscillations in one, two and three dimensions (line, plane, and sphere). The idea of four-dimensional spacetime is not relevant in this context.
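
Written out, with the Planck-Einstein relation added as a small extra step of our own, the c = aω claim also pins down the radius of the oscillation:

$$E = ma^2\omega^2 = mc^2 \;\Rightarrow\; a\omega = c, \qquad \omega = \frac{E}{\hbar} = \frac{mc^2}{\hbar} \;\Rightarrow\; a = \frac{c}{\omega} = \frac{\hbar}{mc}$$

i.e. the radius comes out as the reduced Compton radius of the particle.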

Photons and neutrinos are linear oscillations and, because they carry no charge, travel at the speed of light. Electrons and muon-electrons (and their antimatter counterparts) are 2D oscillations packing electromagnetic and nuclear energy, respectively. The proton (and antiproton) pack a 3D nuclear oscillation. Neutrons combine positive and negative charge and are, therefore, neutral. Neutrons may or may not combine the electromagnetic and nuclear force: their size (more or less the same as that of the proton) suggests the oscillation is nuclear.  

                              | 2D oscillation              | 3D oscillation
 electromagnetic force        | e± (electron/positron)      | orbital electron (e.g. 1H)
 nuclear force                | μ± (muon-electron/antimuon) | p± (proton/antiproton)
 composite                    | pions (π±/π0)?              | n (neutron)? D+ (deuteron)?
 corresponding field particle | γ (photon)                  | ν (neutrino)

The theory is complete: each theoretical/mathematical/logical possibility corresponds to a physical reality, with spin distinguishing matter from antimatter for particles with the same form factor.

When reading this, my kids might call me and ask whether I have gone mad. Their doubts and worry are not random: the laws of the Universe are deterministic (our macro-time scale introduces probabilistic determinism only). Free will is real, however: we analyze and, based on our analysis, we determine the best course to take when taking care of business. Each course of action is associated with an anticipated cost and return. We do not always choose the best course of action because of past experience, habit, laziness or – in my case – an inexplicable desire to experiment and explore new territory.

PS: I’ve written this all out in a paper, of course. 🙂 I also did a 30 minute YouTube video on it. Finally, I got a nice comment from an architect who wrote an interesting paper on wavefunctions and wave equations back in 1996 – including thoughts on gravity.

A Zitterbewegung model of the neutron

As part of my ventures into QCD, I quickly developed a Zitterbewegung model of the neutron, as a complement to my first sketch of a deuteron nucleus. The math of orbitals is interesting: whatever field you have, one can model it using a coupling constant between the proportionality coefficient of the force and the charge it acts on. That ties in nicely with my earlier thoughts on the meaning of the fine-structure constant.
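
For reference – and assuming the coupling constant meant here is the usual fine-structure constant – it is just a dimensionless ratio of Nature's constants:

```python
import scipy.constants as const

# Fine-structure constant: alpha = e^2 / (4*pi*eps0*hbar*c)
alpha = const.e**2 / (4 * const.pi * const.epsilon_0 * const.hbar * const.c)

print(alpha)      # ~0.0072974
print(1 / alpha)  # ~137.036
```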

My realist interpretation of quantum physics focuses on explanations involving the electromagnetic force only, but the matter-antimatter dichotomy still puzzles me very much. Also, the idea of virtual particles is no longer anathema to me, but I still want to model them as particle-field interactions and the exchange of real (angular or linear) momentum and energy, with a quantization of momentum and energy obeying the Planck-Einstein law.

The proton model will be key. We cannot explain it in the typical ‘mass without mass’ model of zittering charges: we get a 1/4 factor in the explanation of the proton radius, which is impossible to get rid of unless we assume some ‘strong’ force comes into play. That is why I prioritize a ‘straight’ attack on the electron and the proton-electron bond in a primitive neutron model.

The calculation of forces inside a muon-electron and a proton is an interesting exercise: it is the only thing which explains why an electron annihilates a positron while electrons and protons can live together (the ‘antimatter’ nature of charged particles only shows because of the opposite spin directions of the fields – so it is only when the ‘structure’ of matter-antimatter pairs is different that they will not annihilate each other).

[…]

In short, 2021 will be an interesting year for me. The intent of my last two papers (on the deuteron model and the primitive neutron model) was to think of energy values: the energy value of the bond between electron and proton in the neutron, and the energy value of the bond between proton and neutron in a deuteron nucleus. But, yes, the more fundamental work remains to be done !

Cheers – Jean-Louis

The electromagnetic deuteron model

In my ‘signing off’ post, I wrote I had enough of physics but that my last(?) ambition was to “contribute to an intuitive, realist and mathematically correct model of the deuteron nucleus.” Well… The paper is there. And I am extremely pleased with the result. Thank you, Mr. Meulenberg. You sure have good intuition.

I took the opportunity to revisit Yukawa’s nuclear potential and demolish his modeling of a new nuclear force without a charge to act on. Looking back at the past 100 years of physics history, I now start to think that was the decisive destructive moment in physics: that 1935 paper, which started off all of the hype on virtual particles, quantum field theory, and a nuclear force that could not possibly be electromagnetic plus – totally not done, of course ! – utter disregard for physical dimensions and the physical geometry of fields in 3D space or – taking retardation effects into account – 4D spacetime. Fortunately, we have hope: the 2019 fixing of SI units puts physics firmly back onto the road to reality – or so we hope.

Paolo Di Sia’s and my paper shows one gets very reasonable energy and separation distances for nuclear bonds and inter-nucleon distances when assuming the presence of magnetic and/or electric dipole fields arising from deep electron orbitals. The model shows one of the protons pulling the ‘electron blanket’ from another proton (the neutron) towards its own side so as to create an electric dipole moment, just like a valence electron in a chemical bond. So it is like water, then? Water is a polar molecule, but we do not necessarily need to start with polar configurations when trying to expand this model so as to inject some dynamics into it (spherically symmetric orbitals are probably easier to model). Hmm… Perhaps I need to look at the thermodynamical equations for dry versus wet water once again… Phew ! Where to start?

I have no experience – and very little math, actually – with modeling molecular orbitals. So I should, perhaps, contact a friend from a few years ago – living in Hawaii and pursuing more spiritual matters too – who did just that a long time ago: orbitals using Schroedinger’s wave equation (I think Schroedinger’s equation is relativistically correct – the naysayers just misinterpret the concept of ‘effective mass’). What kind of wave equation are we looking at? One that integrates inverse-square and inverse-cube force field laws arising from charges and the dipole moments they create while moving. [Hey! Perhaps we can relate these inverse-square and inverse-cube fields to the second- and third-order terms in the binomial development of the relativistic mass formula (see the section on kinetic energy in my paper on one of Feynman’s more original renderings of Maxwell’s equations) but… Well… Probably best to start by seeing how Feynman got those field equations out of Maxwell’s equations. It is a bit buried in his development of the Liénard and Wiechert equations, which are written in terms of the scalar and vector potentials φ and A instead of the E and B vectors, but it should all work out.]
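
For completeness, the binomial development of the relativistic mass formula referred to above is just:

$$m_v = \frac{m_0}{\sqrt{1 - v^2/c^2}} = m_0\left(1 + \frac{1}{2}\frac{v^2}{c^2} + \frac{3}{8}\frac{v^4}{c^4} + \cdots\right)$$

so the kinetic energy (m_v − m_0)c² starts with the classical ½m_0v² term, and the higher-order terms are the relativistic corrections one might – speculatively – try to match to inverse-square and inverse-cube field components.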

If the nuclear force is electromagnetic, then these ‘nuclear orbitals’ should respect the Planck-Einstein relation. So we can calculate frequencies and radii of orbitals now, right? The use of natural units and imaginary units to represent rotations/orthogonality in space might make calculations easy (B = iE). Indeed, with the 2019 revision of SI units, I might need to re-evaluate the usefulness of natural units (I always stayed away from them because they ‘hide’ the physics in the math: they make abstraction of the physical dimensions).

Hey ! Perhaps we can model everything with quaternions, using imaginary units (i and j) to represent rotations in 3D space so as to ensure consistent application of the appropriate right-hand rules always (special relativity gets added to the mix, so we probably need to relate the (ds)² = (dx)² + (dy)² + (dz)² – (dct)² expression to Hamilton’s q = a + ib + jc + kd then). Using vector equations throughout and thinking of h as a vector when using the E = hf and h = pλ Planck-Einstein relations (something with a magnitude and a direction) should do the trick, right? [In case you wonder how we can write f as a vector: angular frequency is a vector too. The Planck-Einstein relation is valid for both linear as well as circular oscillations: see our paper on the interpretation of the de Broglie wavelength.]
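
A minimal quaternion sketch of our own, just to illustrate why the i, j, k units encode the right-hand rules consistently – the Hamilton product is all one needs:

```python
def qmul(p, q):
    """Hamilton product of quaternions given as (a, b, c, d) = a + bi + cj + dk."""
    a1, b1, c1, d1 = p
    a2, b2, c2, d2 = q
    return (a1*a2 - b1*b2 - c1*c2 - d1*d2,
            a1*b2 + b1*a2 + c1*d2 - d1*c2,
            a1*c2 - b1*d2 + c1*a2 + d1*b2,
            a1*d2 + b1*c2 - c1*b2 + d1*a2)

i = (0, 1, 0, 0)
j = (0, 0, 1, 0)

print(qmul(i, j))  # (0, 0, 0, 1)  -> i*j = k
print(qmul(j, i))  # (0, 0, 0, -1) -> j*i = -k: the order encodes the handedness
print(qmul(i, i))  # (-1, 0, 0, 0) -> i^2 = -1
```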

Oh – and while special relativity is there because of Maxwell’s equations, gravity (general relativity) should be left out of the picture. Why? Because we would like to explain gravity as a residual very-far-field force, and trying to integrate gravity inevitably leads one to analyze particles as ‘black holes.’ Not nice, philosophically speaking. In fact, any 1/rⁿ field inevitably leads one to think of some kind of black hole at the center, which is why thinking of fundamental particles in terms of ring currents and dipole moments makes so much sense ! [We need nothingness and infinity as mathematical concepts (limits, really) but they cannot possibly represent anything real, right?]

The consistent use of the Planck-Einstein law to model these nuclear electron orbitals should probably involve multiples of h to explain their size and energy: E = nhf rather than E = hf. For example, when calculating the radius of an orbital of a pointlike charge with the energy of a proton, one gets a radius that is only 1/4 of the proton radius (0.21 fm instead of 0.82 fm, approximately). To make the radius fit that of a proton, one has to use the E = 4hf relation. Indeed, for the time being, we should probably continue to reject the idea of using fractions of h to model deep electron orbitals. I also think we should avoid superluminal velocity concepts.
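
The 1/4 factor mentioned above is easy to reproduce: assuming E = hf and c = aω (see earlier), the radius of a pointlike charge carrying the proton’s energy comes out at the proton’s reduced Compton radius, roughly a quarter of the measured charge radius.

```python
import scipy.constants as const

# Radius of an oscillation carrying the proton's rest energy,
# assuming E = hf (i.e. omega = m*c^2/hbar) and c = a*omega:
a = const.hbar / (const.m_p * const.c)  # reduced Compton radius of the proton
r_p = 0.84e-15                          # measured proton charge radius (m)

print(a)        # ~2.1e-16 m = 0.21 fm
print(r_p / a)  # ~4, hence the E = 4hf suggestion
```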

[…]

This post sounds like madness? Yes. And then, no! To be honest, I think of it as one of the better Aha! moments in my life. 🙂

Brussels, 30 December 2020

Post scriptum (1 January 2021): Lots of stuff coming together here ! 2021 will definitely see the Grand Unified Theory of Classical Physics becoming somewhat more real. It looks like Mills is going to make a major addition/correction to his electron orbital modeling work and, hopefully, manage to publish the gist of it in the eminent mainstream Nature journal. That makes a lot of sense: to move from an atom to an analysis of nuclei or complex three-particle systems, one should combine singlet and doublet energy states – if only to reduce three-body problems to two-body problems. 🙂 I still do not buy the fractional use of Planck’s quantum of action, though. Especially now that we got rid of the concept of a separate ‘nuclear’ charge (there is only one charge: the electric charge, and it comes in two ‘colors’): if Planck’s quantum of action is electromagnetic, then it comes in wholes or multiples. No fractions. Fractional powers of distance functions in field or potential formulas are OK, however. 🙂

The complementarity of wave- and particle-like viewpoints on EM wave propagation

In 1995, W.E. Lamb Jr. wrote the following on the nature of the photon: “There is no such thing as a photon. Only a comedy of errors and historical accidents led to its popularity among physicists and optical scientists. I admit that the word is short and convenient. Its use is also habit forming. Similarly, one might find it convenient to speak of the “aether” or “vacuum” to stand for empty space, even if no such thing existed. There are very good substitute words for “photon”, (e.g., “radiation” or “light”), and for “photonics” (e.g., “optics” or “quantum optics”). Similar objections are possible to use of the word “phonon”, which dates from 1932. Objects like electrons, neutrinos of finite rest mass, or helium atoms can, under suitable conditions, be considered to be particles, since their theories then have viable non-relativistic and non-quantum limits.”[1]

The opinion of a Nobel Prize laureate carries some weight, of course, but we think the concept of a photon makes sense. As the electron moves from one (potential) energy state to another – from one atomic or molecular orbital to another – it builds an oscillating electromagnetic field which has an integrity of its own and, therefore, is not only wave-like but also particle-like.

We, therefore, dedicated the fifth chapter of our re-write of Feynman’s Lectures to a dual analysis of EM radiation (and, yes, this post is just an announcement of the paper so you are supposed to click the link to read it). It is, basically, an overview of a rather particular expression of Maxwell’s equations which Feynman uses to discuss the laws of radiation. I wonder how to – possibly – ‘transform’ or ‘transpose’ this framework so it might apply to deep electron orbitals and – possibly – proton-neutron oscillations.


[1] W.E. Lamb Jr., Anti-photon, in: Applied Physics B volume 60, pages 77–84 (1995).

Cold fusion

I thought I should stop worrying about physics, but then I got an impromptu invitation to a symposium on low-energy nuclear reactions (LENR) and I got all excited about it. The field of LENR was, and still is, often referred to as cold fusion which, after initial enthusiasm, got a not-so-good name because of… More than one reason, really. Read the Wikipedia article on it, or just google and read some other blog articles (e.g. Scientific American’s guest blog on the topic is a pretty good one, I think).

The presentations were very good (especially those on the experimental results and the recent involvement of some very respectable institutions in addition to the usual suspects and, sadly, some fly-by-night operators too), and the follow-on conversation with one of the co-organizers convinced me that the researchers are serious, open-minded and – while not quite being able to provide all of the answers we are all seeking – very ready to discuss them seriously. Most, if not all, experiments involve transmutations of nuclei triggered by low-energy inputs such as low-energy radiation (irradiation and transmutation of palladium by, say, a now-household 5 mW laser beam is just one of the examples). One experiment even triggered a current just by adding plain heat which, as you know, is nothing but very low-energy (infrared) radiation, although I must admit this was one I would like to see replicated en masse before believing it to be real (the equipment was small and simple, and so the experimenters could easily have shared it with other labs).

When looking at these experiments, the comparison that comes to mind is that of an opera singer shattering crystal with his or her voice: some frequency in the sound causes the material to resonate at, yes, its resonant frequency (most probably an enormous but integer multiple of the sound frequency), and then the energy builds up – like when you give a child on a swing an extra push at just the right moment every time – as the amplitude becomes larger and larger, till the breaking point is reached. Another comparison is the failing of a suspension bridge when external vibrations (think of the rather proverbial soldier regiment here) cause similar resonance phenomena. So, yes, it is not unreasonable to believe that one could induce neutron decay – and, thereby, release the binding energy between the proton and the electron – by some low-energy stimulation, provided the frequencies are harmonic.
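
To make the swing analogy a bit more tangible, here is a toy numerical sketch of our own: a driven, undamped oscillator pushed exactly at its resonant frequency, whose response grows far beyond the size of each individual push.

```python
import numpy as np

# x'' + w0^2 * x = F0 * cos(w0 * t): tiny pushes, exactly on resonance.
w0, F0, dt = 1.0, 0.01, 0.001
x, v, x_max = 0.0, 0.0, 0.0

for n in range(int(100 * 2 * np.pi / dt)):  # ~100 driving periods
    t = n * dt
    a = -w0**2 * x + F0 * np.cos(w0 * t)    # restoring force plus drive
    v += a * dt                             # semi-implicit (symplectic) Euler step
    x += v * dt
    x_max = max(x_max, abs(x))

print(x_max)  # ~3: some 300 times the size of each individual push (F0 = 0.01)
```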

The problem with the comparison – and for the LENR idea to be truly useful – is this: one cannot see any net production of energy here. The strain or stress that builds up in the crystal glass is a strain induced by the energy in the sound wave (which is why the singing demos usually include amplifiers to attain the required power/amplitude ratio, i.e. the required decibels). In addition, the breaking of crystal or a suspension bridge typically involves a weaker link somewhere, or some directional aspect (so that would be the equivalent of an impurity in a crystal structure, I guess), but that is a minor point, and a point that is probably easier to tackle than the question on the energy equation.

LENR research has probably advanced far enough now (the first series of experiments started in 1989) to slowly start focusing on the whole chain of these successful experiments: what is the equivalent, in these low-energy reactions, of the nuclear fuel in high-energy fission or fusion experiments? And, if it can be clearly identified, the researchers need to show that the energy that goes into the production of this fuel is much less than the energy you get out of it by burning it (and, of course, with ‘burning’ I mean the decay reaction here). [In case you have heard about Randell Mills’ hydrino experiments, he should show the emission spectrum of these hydrinos. Otherwise, one might think he is literally burning hydrogen. Attracting venture capital and providing scientific proof are not mutually exclusive, are they? In the meanwhile, I hope that what he is showing is real, in the way all LENR researchers hope it is real.]

LENR research may also usefully focus on getting the fundamental theory right. The observed anomalous heat and/or transmutation reactions cannot be explained by mainstream quantum physics (I am talking QCD here, so that’s QFT, basically). That should not surprise us: one does not need quarks or gluons to explain high-energy nuclear processes such as fission or fusion, either! My theory is, of course, typically simplistically simple: the energy that is being unlocked is just the binding energy between the nuclear electron and the protons, in the neutron itself or in a composite nucleus, the simplest of which is the deuteron nucleus. I talk about that in my paper on matter-antimatter pair creation/annihilation as a nuclear process but you do not need to be an adept of classical or realist interpretations of quantum mechanics to understand this point. To quote a motivational writer here: it is OK for things to be easy. 🙂

So LENR theorists just need to accept they are not mainstream – yet, that is – and come out with a more clearly articulated theory on why their stuff works the way it does. For some reason I do not quite understand, they come across as somewhat hesitant to do so. Fears of being frozen out even more by the mainstream? Come on guys ! You are coming out of the cold anyway, so why not be bold and go all the way? It is a time of opportunities now, and the field of LENR is one of them, both theoretically as well as practically speaking. I honestly think it is one of those rare moments in the history of physics where experimental research may be well ahead of theoretical physics, so they should feel like proud trailblazers!

Personally, I do not think it will replace big classical nuclear energy plants anytime soon but, in a not-so-distant future, it might yield many very useful small devices: lower energy and, therefore, lower risk also. I also look forward to LENR research dealing the fatal blow to standard theory by confirming we do not need perturbation and renormalization theories to explain reality. 🙂

Post scriptum: If low-energy nuclear reactions are real, mainstream (astro)physicists will also have to rework their stories on cosmogenesis and the (future) evolution of the Universe. The standard story may well be summed up in the brief commentary of the HyperPhysics entry on the deuteron nucleus:

The stability of the deuteron is an important part of the story of the universe. In the Big Bang model it is presumed that in early stages there were equal numbers of neutrons and protons since the available energies were much higher than the 0.78 MeV required to convert a proton and electron to a neutron. When the temperature dropped to the point where neutrons could no longer be produced from protons, the decay of free neutrons began to diminish their population. Those which combined with protons to form deuterons were protected from further decay. This is fortunate for us because if all the neutrons had decayed, there would be no universe as we know it, and we wouldn’t be here!
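
As a quick sanity check on the 0.78 MeV figure in that quote: it is simply the neutron mass minus the proton and electron masses, expressed as energy.

```python
import scipy.constants as const

MeV = const.e * 1e6  # joules per MeV

threshold = (const.m_n - const.m_p - const.m_e) * const.c**2 / MeV
print(threshold)  # ~0.78 MeV: minimum energy to turn a proton plus an electron into a neutron
```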

If low-energy nuclear reactions are real – and I think they are – then the standard story about the Big Bang is obviously bogus too. I am not necessarily doubting the reality of the Big Bang itself (the ongoing expansion of the Universe is a scientific fact so, yes, the Universe must have been much smaller and (much) more energy-dense a long time ago), but the standard calculations on proton-neutron reactions taking place, or not, at cut-off temperatures/energies above/below 0.78 MeV do not make sense anymore. One should, perhaps, think more in terms of how matter-antimatter ratios might or might not have evolved (and, of course, one should keep an eye on the electron-proton ratio, but that should work itself out because of charge conservation) to correctly calculate the early evolution of the Universe, rather than focusing so much on proton-neutron ratios.

Why do I say that? Because neutrons do appear to consist of a proton and an electron – rather than of quarks and gluons – and they continue to decay and then recombine again, so these proton-neutron reactions must not be thought of as some historic (discontinuous) process.

[…] Hmm… The more I look at the standard stories, the more holes I see… This one, however, is very serious. If LENR and/or cold fusion is real, then it will also revolutionize the theories on cosmogenesis (the evolution of the Universe). I instinctively like that, of course, because – just like quantization – I had the impression the discontinuities are there, but not quite in the way mainstream physicists – thinking more in terms of quarks and gluons rather than in terms of stuff that we can actually measure – portray the whole show.