Just delete “vacuum fluctuations”

How do you build a tower? One layer of bricks at a time. But before you lay down the next layer, you need to make sure the current layer has been laid properly. Otherwise, the whole thing may come tumbling down.

The same is true in physics. Before you base your ideas on previous ideas, you need to check that those previous ideas are correct. Otherwise, you would be misleading yourself and others, and the new theories may not be able to make successful predictions.

Physics is a science, which means that we should only trust previous ideas after they have been tested through comparison with physical observations. Unfortunately, there are some ideas that cannot be checked so easily. Obviously, one should then be very careful about basing new ideas on such unchecked ideas. Some people blame the current lack of progress in fundamental physics on this problem. They say we need to go back and check whether we have made a mistake somewhere. I think I know where this problem is.

Over the centuries of physics research, many tools have been developed to aid the formulation of theories. These tools include things like differential calculus in terms of which equations of motion can be formulated, and Hamiltonians and Lagrangians, to name a few.

Now, I see that some people claim that most of these tools won’t work for the formulation of a fundamental theory that combines gravity with quantum theory. It is stated that a minimum measurement uncertainty, imposed by the Planck scale, would render the formulation of equations of motion and Lagrangians at this scale impossible. Why is that? Well, it is claimed that the uncertainty at such small distance scales is large enough to allow tiny black holes to pop in and out of existence, creating havoc with spacetime at such small scales. This argument is the reason why people consider the Planck scale as a fundamental scale beneath which our traditional notions of physics and spacetime break down.
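For concreteness, the Planck scale invoked in this argument is conventionally built from the constants ħ, G and c. A quick numerical sketch (the constant values are approximate CODATA figures, and the joule-to-GeV conversion factor is included purely for illustration):

```python
import math

# Approximate CODATA values, assumed for illustration
hbar = 1.054571817e-34  # reduced Planck constant, J*s
G = 6.67430e-11         # Newton's gravitational constant, m^3 kg^-1 s^-2
c = 2.99792458e8        # speed of light, m/s

# Planck length: the distance scale at which quantum-gravity effects are claimed to appear
l_P = math.sqrt(hbar * G / c**3)

# Planck energy, converted from joules to GeV (1 GeV = 1.602176634e-10 J)
E_P_GeV = math.sqrt(hbar * c**5 / G) / 1.602176634e-10

print(f"Planck length: {l_P:.3e} m")       # roughly 1.6e-35 m
print(f"Planck energy: {E_P_GeV:.3e} GeV")  # roughly 1.2e19 GeV
```

The only point of the numbers is to show how far removed this scale is from anything directly measurable: around 10^19 GeV, some fifteen orders of magnitude above LHC energies.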

But why does uncertainty lead to black holes popping in and out of existence? It comes from an unchecked idea based on the Heisenberg uncertainty principle: the claim that the principle allows particles to pop in and out of existence, and that such particles can have larger energies provided their existence is short enough. This hypothetical process is generally referred to as “vacuum fluctuations.” However, there is no conclusive experimental confirmation of the process of vacuum fluctuations. Therefore, any idea based on vacuum fluctuations is an idea based on an unchecked idea.

Previously, I have explained that the Heisenberg uncertainty principle is not a fundamental principle of quantum physics, but instead comes from Fourier theory. As such, the uncertainty principle represents a prohibition, not a license. It imposes restrictions on what can exist. Yet people somehow decided that it allows things to exist in violation of other principles, such as energy conservation. This is an erroneous notion with no experimental confirmation.
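The Fourier-theoretic origin can be sketched in two lines. For any normalized wave function and its Fourier transform, the widths of the two conjugate distributions satisfy

```latex
\Delta x \,\Delta k \;\geq\; \tfrac{1}{2},
```

and substituting the de Broglie relation \(p = \hbar k\) gives the familiar

```latex
\Delta x \,\Delta p \;\geq\; \frac{\hbar}{2}.
```

Read this way, the inequality is a lower bound on how sharply two conjugate quantities can simultaneously be defined – a prohibition – and says nothing about particles being created.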

Hence, the vacuum does not fluctuate! There are no particles popping in and out of existence in the vacuum. There is nothing in our understanding of the physical world that has been experimentally confirmed which needs the concept of vacuum fluctuations.

Now, if we get rid of this notion of vacuum fluctuations, several issues in fundamental physics will simply disappear. For example, the black hole information paradox. A key ingredient of this paradox is the idea that black holes will evaporate due to Hawking radiation. The notion of Hawking radiation is another unchecked idea, which is based on …? You guessed it: vacuum fluctuations! So if we just get rid of this silly notion of vacuum fluctuations, the black hole information paradox will evaporate, instead of the black holes.


One of the main objectives for the Large Hadron Collider (LHC) was to solve the problem of naturalness. More precisely, the standard model contains a scalar field, the Higgs field, that does not have a mechanism to stabilize its mass. Radiative corrections are expected to cause the mass to grow all the way to the cut-off scale, which is assumed to be the Planck scale. If the Higgs boson has a finite mass far below the Planck scale (as was found to be the case), then it seems that there must exist some severe fine tuning giving cancellations among the different orders of the radiative corrections. Such a situation is considered to be unnatural. Hence, the concept of naturalness.
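The quadratic sensitivity behind this can be written down schematically in one line. For the dominant top-quark loop (a textbook estimate; the precise coefficient depends on the regularization scheme):

```latex
\delta m_H^2 \;\sim\; -\,\frac{3 y_t^2}{8\pi^2}\,\Lambda^2 ,
```

so if the cut-off Λ is taken at the Planck scale, the correction overwhelms the observed mass-squared of the roughly 125 GeV Higgs by something like thirty orders of magnitude, unless it is cancelled with extraordinary precision.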

It was believed, with a measure of certainty, that the LHC would give answers to the problem of naturalness, telling us how nature maintains the mass of the Higgs far below the cut-off scale. (I also held to such a conviction, as I recently discovered reading some old comments I made on my blog.)

Now, after the LHC has completed its second run, the expectation that it would provide answers to the naturalness problem has been met with some disappointment (to put it mildly). What are we to conclude from this? There are those who say that the lack of naturalness in the standard model is not a problem. It is just the way it is. The requirement of naturalness, it is said, is an unjustified appeal to beauty.

No, no, no, it has nothing to do with beauty. At best, beauty is just a guide that people sometimes use to select the best option among a plethora of options. It falls in the same category as Occam’s razor.

On the other hand, naturalness is associated more with the understanding of scale physics. The way scales govern the laws of nature is more than just an appeal to beauty. It provides us with a means to guess what the dominant behavior of a phenomenon would be like, even when we don’t have an understanding of the exact details. As a result, when we see a mechanism that deviates from our understanding of scale physics, it gives a strong hint that there are some underlying mechanisms that we have not yet uncovered.

For example, in the standard model, the masses of the elementary particles range over several orders of magnitude. We cannot predict these mass values. They are dimensionful parameters that we have to measure. There is no fundamental scale parameter close to the masses that can give any indication of where they come from. Our understanding of scale physics tells us that there must be some mechanism that gives rise to these masses. To say that these masses are produced by the Yukawa couplings to the Higgs field does not provide the required understanding. It replaces one mystery with another. Why would such Yukawa couplings vary over several orders of magnitude? Where did they come from?
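To make the range concrete: at tree level, the standard model relates each fermion mass to its Yukawa coupling through the Higgs vacuum expectation value, m_f = y_f v/√2 with v ≈ 246 GeV. A small sketch (the masses are approximate PDG-like values, assumed for illustration):

```python
import math

V_HIGGS = 246.0  # Higgs vacuum expectation value in GeV (approximate)

# Approximate fermion masses in GeV, assumed for illustration
masses = {"electron": 0.000511, "muon": 0.1057, "tau": 1.777, "top": 172.7}

# Tree-level relation m_f = y_f * v / sqrt(2)  =>  y_f = sqrt(2) * m_f / v
yukawas = {name: math.sqrt(2) * m / V_HIGGS for name, m in masses.items()}

for name, y in yukawas.items():
    print(f"{name:8s} y = {y:.3e}")
```

The top coupling comes out close to 1 while the electron’s is of order 10^-6 – nearly six orders of magnitude apart, with no parameter in the theory explaining why.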

So the naturalness problem, which is part of a bigger mystery related to the mass scales in the standard model, still remains. The LHC does not seem to be able to give us any hints to solve this mystery. Perhaps another larger collider will.

Mopping up

The particle physics impasse prevails. That is my impression, judging from the battles raging on the blogs.

Among these, I recently saw an interesting comment by Terry Bollinger on a blog post by Sabine Hossenfelder. According to Terry, the particle physics research effort already lost its way (missed the right turnoff) in the 1970s. This opinion agrees with the apparent slowdown in progress since the 1970s. Apart from the fact that neutrinos have mass, we have not learned much more about fundamental physics since the advent of the standard model in that decade.

However, some may argue that the problem started even earlier, perhaps just after the Second World War, because that was when the world woke up to the importance of fundamental physics. That was the point where vanity, rather than curiosity, became the driving force of research. The result was an increase in weird science – crazy predictions aimed more at drawing attention than at increasing understanding.

Be that as it may. (I’ve written about that in my book.) The question is, what to do about that? There are some concepts in fundamental physics that are taken for granted, yet have never been established as scientific fact through a proper scientific process. One such concept pointed out by Terry is the behaviour of spacetime at the Planck scale.

Today the Planck scale is referred to as if it were established scientific fact, when in fact it is a hypothetical scale. The physical existence of the Planck scale has not been, and probably cannot be, confirmed through scientific experiments, at least not with our current capabilities. Chances are it does not exist.

The existence of the Planck scale is based on some other concepts that are also not scientific facts. One is the notion of vacuum fluctuations, a concept that is often invoked to come up with exotic predictions. What about the vacuum is fluctuating? It follows from a very simple calculation that the particle number of the vacuum state is exactly zero with zero uncertainty. So it seems that the notion of vacuum fluctuations is not as well understood as is generally believed.
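The simple calculation referred to here can be written out in a few lines. With the number operator \(\hat{n} = a^\dagger a\) and the defining property of the vacuum state, \(a\lvert 0\rangle = 0\):

```latex
\langle 0|\hat{n}|0\rangle = \langle 0|a^\dagger a|0\rangle = 0,
\qquad
\langle 0|\hat{n}^2|0\rangle = \langle 0|a^\dagger a\, a^\dagger a|0\rangle = 0,
```

so the variance \(\Delta n^2 = \langle \hat{n}^2\rangle - \langle \hat{n}\rangle^2 = 0\): the particle number of the vacuum state is exactly zero, with zero uncertainty.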

Does it mean that we are doomed to wander around in a state of confusion? No, we just need to return to the basic principles of the scientific method.

So I propose a mopping-up exercise. We need to go back to what we understand according to the scientific method and then test those parts we are not sure about using scientific experiments and observations. Those aspects that are not testable in a scientific manner need to be treated on a different level.

For instance, the so-called measurement problem involves aspects that are in principle not testable. As such, they belong to the domain of philosophy and should not be incorporated into our scientific understanding. There are things we can never know in a scientific manner and it is pointless to make them prerequisites for progress in our understanding of the physical world.

The importance of falsifiability

Many years ago, while I was still a graduate student studying particle physics, my supervisor Bob was very worried about supersymmetry. He was particularly worried that it would become the accepted theory without ever being properly tested.

In those days, it was almost taken for granted that supersymmetry was the correct theory. Since he came from the technicolour camp, Bob did not particularly like supersymmetry. Unfortunately, at that point, the predictions of the technicolour models did not agree with experimental observations, so technicolour was not seriously considered a viable theory. Supersymmetry, on the other hand, had enough free parameters that it could sidestep any detrimental experimental results. This ability to dodge results and keep hiding made supersymmetry look like a theory that could never be ruled out. Hence my supervisor’s concern.

Today the situation is much different. As the Large Hadron Collider accumulated data, it could systematically rule out progressively larger energy ranges where the supersymmetric particles could hide. Eventually, there was simply no place left to hide. At least those versions of supersymmetry that rely on a stable superpartner at the electroweak scale have been ruled out. For most particle physicists, this seems to indicate that supersymmetry as a whole has been ruled out. But of course, there are still those who cling to the idea.

So, in hindsight, supersymmetry was falsifiable after all. For me, this whole process exemplifies the importance of falsifiability. Imagine that supersymmetry could have kept on hiding. How would we know whether it is right? The reason why so many physicists believed it must be right is that it is “so beautiful.” Does beauty in this context imply that a theory must be correct? Evidently not. There is no alternative to experimental testing for knowing whether a scientific theory is correct.

This brings me to another theory that is believed to be true simply because it is considered so beautiful that it must be correct. I’m talking about string theory. In this case, there is a very serious issue with the falsifiability of the theory. String theory addresses physics at the hypothetical Planck scale. However, there does not exist any conceivable way to test physics at this scale.

Just to avoid any confusion about what I mean by falsifiable: there are those who claim that string theory is falsifiable, just not practically testable. Well, that is missing the point, isn’t it? The reason for falsifiability is to know whether the theory is right. It does not help if it is “in principle” falsifiable, because then we still won’t be able to know whether it is right. The only useful form of falsifiability is when one can physically test the theory. Otherwise it is not interesting from a scientific point of view.

Having said that, I do not think one should dictate to people what they are allowed to research. We may agree about whether it is science or not, but if somebody wants to investigate something that we do not currently consider as scientific, then so be it. Who knows, one day that research may somehow lead to research that is falsifiable.

There is of course the whole matter of whether such non-falsifiable research should be allowed to receive research funding. However, the matter of how research should be funded is a whole topic on its own. Perhaps for another day.