Particle physics impasse

Physics is the study of the physical universe. As a science, it involves a process consisting of two components. The theoretical component strives to construct theoretical models for the physical phenomena that we observe. The experimental component tests these theoretical models and explores the physical world for more information about phenomena.

Progress in physics is enhanced when many physicists using different approaches tackle the same problem. The diversity in the nature of the problems needs to be met by a diversity of perspectives. This diversity is reflected in the literature. The same physical phenomenon is often studied via different approaches, using different mathematical formulations. Some of them may turn out to produce the same results, but some may differ in their predictions. Experimental work can then be used to select among them.

That is all fine and dandy for physics in general, but the situation is a bit more complicated for particle physics. Perhaps one can trace all these complications to the fact that particle physics is running out of observable energy space.

What do I mean by that? Progress in particle physics is (to some extent at least) measured by our understanding of the fundamental mechanisms of nature at progressively higher energy scales. Today, we understand these fundamental mechanisms fairly well up to the electroweak scale (at about 200 GeV). They are described by the Standard Model, which was established during the 1970s. So, for the past four decades, particle physicists have tried to extend that understanding beyond the electroweak scale. Various theoretical ideas were proposed; prominent among them was the idea of supersymmetry. Then a big experiment, the Large Hadron Collider (LHC), was constructed to test these ideas above the electroweak scale. It discovered the Higgs boson, which was the last remaining particle predicted by the Standard Model. But no supersymmetry. In fact, none of the other ideas panned out at all. So there is a serious back-to-the-drawing-board situation going on in particle physics.

The problem is, the LHC did not discover anything else that could give a hint at what is going on up there. Or did it? There will be another run to accumulate more data, and the data still needs to be analyzed. Perhaps something can still emerge. Who knows? However, even if some new particle is lurking within the data, it would be difficult to see. Such particles tend to be more unstable at those higher energies, leading to very broad peaks. To make things worse, there is much more background noise. This makes it difficult, even unlikely, that such particles can be identified at these higher energies. At some point, no experiment would be able to observe such particles anymore.
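To get a feel for why broad peaks are so hard to pick out, here is a deliberately crude back-of-the-envelope sketch. All the numbers are made up for illustration; the only real inputs are the standard rule of thumb that the statistical significance of a bump scales roughly as S/√B, and the fact that the background B grows with the width of the mass window one has to search.

```python
import math

def bump_significance(signal_events, width_gev, bg_per_gev):
    """Toy estimate of a resonance bump's significance.

    Assumes the signal is collected in a mass window of about twice
    the resonance width, and that the background under the peak is
    flat, so it scales linearly with the window size.
    """
    window = 2 * width_gev
    background = bg_per_gev * window
    return signal_events / math.sqrt(background)

# Same number of signal events in both cases; only the width differs.
narrow = bump_significance(100, width_gev=2, bg_per_gev=50)
broad = bump_significance(100, width_gev=50, bg_per_gev=50)

print(f"narrow peak: {narrow:.1f} sigma")  # a clear bump
print(f"broad peak:  {broad:.1f} sigma")   # lost in the background
```

With these toy numbers, the narrow resonance stands out at roughly 7σ while the identical signal smeared over a broad peak sits at under 2σ, invisible against the background fluctuations. That is the qualitative problem facing heavy, short-lived particles.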

The interesting thing about the situation is the backlash that one reads about in the media. Particle physicists are arguing among themselves about the reasons for the current situation and what the way forward should be. There are those who say that the proposed models were all a bunch of harebrained ideas that were then hyped, and that we should not build any new colliders until we have done some proper theoretical work first.

See, the problem with building new colliders is the cost involved. It is not like other fields of physics, where a local funding organization can support several experimental groups. These colliders require several countries to pitch in to cover the cost. (OK, particle physics is not the only field with such big-ticket experiments.)

The combined effect of the unlikelihood of observing new particles at higher energies and the cost of building new colliders to reach those energies creates an impasse in particle physics. Although theorists may come up with marvelous new theories for the mechanisms above the electroweak scale, it may be impossible to see whether these theories are correct. Perhaps the last energy scale below which we will be able to understand the fundamental mechanisms in a scientific manner will turn out to be the electroweak scale.

Glad I did not stay in particle physics.

The importance of falsifiability

Many years ago, while I was still a graduate student studying particle physics, my supervisor Bob was very worried about supersymmetry. He was particularly worried that it would become the accepted theory without ever being properly tested.

In those days, it was almost taken for granted that supersymmetry was the correct theory. Since he came from the technicolour camp, Bob did not particularly like supersymmetry. Unfortunately, at that point, the predictions of the technicolour models did not agree with experimental observations, so technicolour was not seriously considered a viable theory. Supersymmetry, on the other hand, had enough free parameters that it could sidestep any detrimental experimental results. This ability to dodge such results and constantly hide itself made supersymmetry look like a theory that could never be ruled out. Hence my supervisor’s concern.

Today the situation is much different. As the Large Hadron Collider accumulated data, it could systematically rule out progressively larger energy ranges where the supersymmetric particles could hide. Eventually, there was simply no place to hide anymore. At least those versions of supersymmetry that rely on a stable superpartner at the electroweak scale have been ruled out. For most particle physicists, this seems to indicate that supersymmetry as a whole has been ruled out. But of course, there are still those who cling to the idea.

So, in hindsight, supersymmetry was falsifiable after all. For me, this whole process exemplifies the importance of falsifiability. Imagine that supersymmetry could have kept on hiding. How would we know whether it is right? The reason why so many physicists believed it must be right is that it is “so beautiful.” Does beauty in this context imply that a theory must be correct? Evidently not. There is no alternative to experimental testing for knowing whether a scientific theory is correct.

This brings me to another theory that is believed to be true simply because it is considered so beautiful that it must be correct. I’m talking about string theory. In this case, there is a very serious issue with the falsifiability of the theory. String theory addresses physics at the hypothetical Planck scale. However, there does not exist any conceivable way to test physics at this scale.

Just to avoid any confusion about what I mean by falsifiable: there are those who claim that string theory is falsifiable; it is just not practically possible to test it. Well, that is missing the point now, isn’t it? The point of falsifiability is to know whether the theory is right. It does not help if the theory is falsifiable only “in principle,” because then we still won’t be able to know whether it is right. The only useful form of falsifiability is one where we can physically perform the test. Otherwise, it is not interesting from a scientific point of view.

Having said that, I do not think one should dictate to people what they are allowed to research. We may agree about whether it is science or not, but if somebody wants to investigate something that we do not currently consider as scientific, then so be it. Who knows, one day that research may somehow lead to research that is falsifiable.

There is of course the whole matter of whether such non-falsifiable research should be allowed to receive research funding. However, the matter of how research should be funded is a whole topic on its own. Perhaps for another day.