Transcending the impasse, part VIII

… or not?

In this final posting in the series on transcending the impasse in fundamental physics, we need to consider the possibility that we may never be able to transcend the impasse. Perhaps this is it as far as our scientific understanding of fundamental physics is concerned. Perhaps our ability to probe deeper into the unknown ends here.

Why would that be? Perhaps the theory that correctly explains what happens above the electroweak scale would require observations at an energy scale too high to reach with any conceivable collider. Without such observations, the theory may forever remain a hypothesis and never become part of our scientifically established knowledge.

It seems that collider physics has run its course. The contributions to our scientific knowledge made with the aid of colliders are truly remarkable. But at increasingly higher energies, collider physics runs into a number of serious challenges. At such high energies, a collider needs to be very large and extremely expensive. As a result, it becomes impractical and financially unjustifiable.

Even if such a large, expensive collider does become a reality, the challenges do not end there. The scattering events produced in such a collider become increasingly complex. Already at the Large Hadron Collider, the scattering events look like the hair on a drag queen’s wig. The amount of data produced in such events is formidable, and the rate at which the data is generated becomes unmanageable.

Even if one can handle that much data, one finds that the signal is swamped by background noise. At those high energies, particles are more unstable, which means that their resonance peaks are very broad and relatively low. So it becomes that much harder to see a new particle popping up in the scattering data.

There are suggestions for how scientific observations could support high-energy physics without the use of colliders. One such suggestion is based on astronomical observations, since some astronomical events generate very high energies. However, such events are unpredictable, and the information that can be extracted from them is very limited compared to what is possible with the detectors of colliders.

Another suggestion is to use high-precision measurements at lower energies. It becomes a metrology challenge to measure properties of matter ever more accurately and to use the results to infer what happens at high energies.

Whether any of these suggestions will eventually increase our knowledge of fundamental physics remains to be seen. But I am not holding my breath.

Perhaps this sounds like that old story about the 19th-century physicists who predicted the end of physics even before the discoveries of relativity and quantum mechanics. Well, I think the idea of a steady increase in our physical understanding continuing in perpetuity is equally ludicrous. At some point, we will see a slow-down in the growth of our understanding of fundamental physics, and even of physics in general. However, applied physics and engineering can proceed unabated.

We are already seeing a slow-down in the growth of our understanding of fundamental physics. Many fields of physics are already mostly devoted to applied physics. Very little is added in terms of new fundamental understanding of our physical universe. So, perhaps the impasse is simply an inevitable stage in the development of human culture, heralding the maturity of our knowledge about the universe in which we live.


Transcending the impasse, part VII

Vanity in physics

In this penultimate posting in the series on transcending the impasse in fundamental physics, I’ll address an issue that I consider to be one of the major reasons for the impasse, if not the main reason. It is a topic that I feel very passionate about and one that I’ve written about in my book. It is a very broad topic with various aspects that can be addressed, so I can see it becoming a spin-off series of its own.

Stated briefly, without ranting too much, one can bring this issue into the context of the scientific method itself. As remarkable as the scientific method is, with all the successes associated with it, if the very foundation on which it is based starts to erode, the whole edifice in all its glory will come tumbling down.

Now what is this foundation of the scientific method that could be eroded away? Well, the scientific method shares with capitalism and democracy the property of being a self-regulating feedback system. Each of these mechanisms is based on a driving force found in human nature that makes it work. For democracy, it is the reaction to the conditions one finds oneself in under the prevailing authorities. For capitalism, it is basically greed and the need for material possessions. For the scientific method, it is curiosity and the need for knowledge and understanding.

So, the basic assumption is that those involved in the scientific process, the scientists, are driven by their curiosity. This has to a large extent been the case for centuries, and the scientific knowledge accumulated through this process is thanks to that curiosity.

However, during the past century, things started to change. At some point, due to some key event or perhaps as a result of various minor events, the fundamental driving force for scientists started to change. Instead of being internally motivated by their curiosity, they became externally motivated by … vanity!

Today, one gets the impression that researchers are far more concerned about their egos than about the knowledge they create. To support this statement, I could provide numerous examples. But instead of doing that, I’ll focus on only one aspect: how this vanity issue contributes to the current impasse. Perhaps I’ll provide and discuss those examples in follow-up posts.

In the aftermath of the disappointing lack of results from the Large Hadron Collider (LHC), some people blamed other prominent researchers for their ludicrously exotic proposals and predictions, none of which survived the observations of the LHC.

Why would highly respected physicists make such ludicrous predictions? The way I see it, it was a gamble with high stakes. Chances were that these predictions would not pan out. But if one of them had been confirmed by the LHC, the return on investment would have been extremely high. The person who made the prediction would have become extremely famous, not only among physicists but probably also among the general public. It would probably have ensured a Nobel prize. Hence, all the needs of vanity would have been satisfied instantly.

What about knowledge? Surely, if a prediction had turned out to be correct, it would have implied a significant increase in our knowledge. True, but now one should look at the reality. None of these exotic predictions succeeded. This situation is not really surprising, probably not even to the people who made the predictions, because they probably knew the probability of success to be extremely low. In that context, the motivation for making the predictions was never about the increase in knowledge. It was purely aimed at vanity.

An extreme example is one physicist, who shall remain unnamed. He is known for making random predictions at a remarkable rate. It is obvious to everybody that he is not making these predictions because he expects them to work out. It is simply an attempt to be the first to have made a specific prediction, on the off-chance that one of them comes true. Then, presumably, he would receive all the vanity rewards that he so desperately craves.

It might have been amusing were it not for the fact that this deplorable situation is adversely affecting progress in physics, and probably in science in general, although I don’t have extensive experience in other fields of science. The observable effect in fundamental physics is a significant slowdown in progress that is stretching over several decades.


Naturalness

One of the main objectives for the Large Hadron Collider (LHC) was to solve the problem of naturalness. More precisely, the standard model contains a scalar field, the Higgs field, that does not have a mechanism to stabilize its mass. Radiative corrections are expected to cause the mass to grow all the way to the cut-off scale, which is assumed to be the Planck scale. If the Higgs boson has a finite mass far below the Planck scale (as was found to be the case), then it seems that there must exist some severe fine tuning giving cancellations among the different orders of the radiative corrections. Such a situation is considered to be unnatural. Hence, the concept of naturalness.
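The fine-tuning argument can be made quantitative with a standard back-of-the-envelope estimate (a textbook illustration, not taken from this post): the dominant one-loop correction to the Higgs mass squared, from the top-quark loop with Yukawa coupling \(y_t\), grows quadratically with the cut-off \(\Lambda\),

```latex
\delta m_H^2 \;\sim\; -\frac{y_t^2}{8\pi^2}\,\Lambda^2 .
```

With \(\Lambda \sim M_{\mathrm{Planck}} \approx 10^{19}\,\mathrm{GeV}\) and the observed \(m_H \approx 125\,\mathrm{GeV}\), the bare mass and the corrections would have to cancel to roughly one part in \((m_H/\Lambda)^2 \sim 10^{-34}\). It is this seemingly miraculous cancellation that is considered unnatural.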

It was believed, with a measure of certainty, that the LHC would give answers to the problem of naturalness, telling us how nature maintains the mass of the Higgs far below the cut-off scale. (I also held to such a conviction, as I recently discovered reading some old comments I made on my blog.)

Now that the LHC has completed its second run, the hope that it would provide answers to the naturalness problem has been met with some disappointment (to put it mildly). What are we to conclude from this? There are those who say that the lack of naturalness in the standard model is not a problem; it is just the way it is. The requirement for naturalness, they state, is an unjustified appeal to beauty.

No, no, no, it has nothing to do with beauty. At best, beauty is just a guide that people sometimes use to select the best option among a plethora of options. It falls in the same category as Occam’s razor.

On the other hand, naturalness is associated with an understanding of scale physics. The way scales govern the laws of nature is more than just an appeal to beauty: it gives us a means to guess the dominant behavior of a phenomenon even when we don’t understand the exact details. As a result, when we see a mechanism that deviates from our understanding of scale physics, it is a strong hint that there are underlying mechanisms we have not yet uncovered.

For example, in the standard model, the masses of the elementary particles range over several orders of magnitude. We cannot predict these mass values. They are dimensionful parameters that we have to measure. There is no fundamental scale parameter close to these masses that could indicate where they come from. Our understanding of scale physics tells us that there must be some mechanism that gives rise to them. To say that the masses are produced by the Yukawa couplings to the Higgs field does not provide the required understanding; it replaces one mystery with another. Why would the Yukawa couplings vary over several orders of magnitude? Where did they come from?
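To illustrate the point (again a textbook relation, not from this post): in the standard model a fermion mass is set by its Yukawa coupling \(y_f\) and the Higgs vacuum expectation value \(v \approx 246\,\mathrm{GeV}\),

```latex
m_f \;=\; \frac{y_f\, v}{\sqrt{2}} ,
```

so the electron (\(m_e \approx 0.511\,\mathrm{MeV}\)) requires \(y_e \sim 3\times 10^{-6}\) while the top quark (\(m_t \approx 173\,\mathrm{GeV}\)) requires \(y_t \sim 1\): a spread of nearly six orders of magnitude that the model itself does not explain.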

So the naturalness problem, which is part of a bigger mystery related to the mass scales in the standard model, still remains. The LHC does not seem to be able to give us any hints to solve this mystery. Perhaps another larger collider will.

The importance of falsifiability

Many years ago, while I was still a graduate student studying particle physics, my supervisor Bob was very worried about supersymmetry. He was particularly worried that it would become the accepted theory without ever being properly tested.

In those days, it was almost taken for granted that supersymmetry was the correct theory. Since he came from the technicolour camp, Bob did not particularly like supersymmetry. Unfortunately, at that point, the predictions of the technicolour models did not agree with experimental observations, so technicolour was not seriously considered a viable theory. Supersymmetry, on the other hand, had enough free parameters that it could sidestep any detrimental experimental results. This ability to dodge the results and constantly hide made supersymmetry look like a theory that could never be ruled out. Hence my supervisor’s concern.

Today the situation is much different. As the Large Hadron Collider accumulated data, it could systematically rule out progressively larger energy ranges where the supersymmetric particles could hide. Eventually, there was simply no place to hide anymore. At least those versions of supersymmetry that rely on a stable superpartner at the electroweak scale have been ruled out. For most particle physicists, this seems to indicate that supersymmetry as a whole has been ruled out. But of course, there are still those who cling to the idea.

So, in hindsight, supersymmetry was falsifiable after all. For me, this whole process exemplifies the importance of falsifiability. Imagine that supersymmetry could have kept on hiding. How would we know whether it is right? The reason why so many physicists believed it must be right is that it is “so beautiful.” Does beauty in this context imply that a theory must be correct? Evidently not. There is no alternative to experimental testing for knowing whether a scientific theory is correct.

This brings me to another theory that is believed to be true simply because it is considered so beautiful that it must be correct. I’m talking about string theory. In this case, there is a very serious issue with the falsifiability of the theory. String theory addresses physics at the hypothetical Planck scale. However, there is no conceivable way to test physics at this scale.

Just to avoid any confusion about what I mean by falsifiable: there are people who claim that string theory is falsifiable; it is just not practically possible to test it. Well, that is missing the point, isn’t it? The reason for falsifiability is to know whether the theory is right. It does not help if the theory is falsifiable only “in principle,” because then we still won’t be able to know whether it is right. The only useful form of falsifiability is when one can physically test the theory. Otherwise, it is not interesting from a scientific point of view.

Having said that, I do not think one should dictate to people what they are allowed to research. We may disagree about whether it is science or not, but if somebody wants to investigate something that we do not currently consider scientific, then so be it. Who knows, one day that research may somehow lead to research that is falsifiable.

There is of course the whole matter of whether such non-falsifiable research should be allowed to receive research funding. However, the matter of how research should be funded is a whole topic on its own. Perhaps for another day.

Particle physics blues

The Large Hadron Collider (LHC) recently completed its second run. While the existence of the Higgs boson was confirmed during the first run, the outcome from the second run was … well, shall we say somewhat less than spectacular. In view of the fact that the LHC carries a pretty hefty price tag, this rather disappointing state of affairs is producing a certain degree of soul searching within the particle physics community. One can see that from the discussions here and here.

CMS detector at LHC (from wikipedia)

So what went wrong? Judging from the discussions, one may guess it could be a combination of things. Perhaps it is all the hype that accompanies some of the outlandish particle physics predictions. Or perhaps it is the overly esoteric theoretical nature of some of the physics theories. String theory seems to be singled out as an example of a mathematical theory without any practical predictions.

Perhaps the reason for the current state of affairs in particle physics is none of the above. Reading the above-mentioned discussions, one gets the picture from those who are close to the fire. Sometimes it helps to step away and look at the situation from a little distance. Could it be that, while these particle physicists vehemently analyze all the wrong ideas and failed approaches that emerged over the past few decades (even starting to question one of the foundations of the scientific method: falsifiability), they are missing the elephant in the room?

The field of particle physics has been around for a while. It has a long history of advances: from uncovering the structure of the atom, to revealing the constituents of protons and neutrons. The culmination is the Standard Model of Particle Physics – a truly remarkable edifice of our current understanding.

So what now? What’s next? Well, the standard model does not include gravity. So there is still a strong effort to come up with a theory that unifies gravity with the other forces currently included in the standard model. That is the main motivation behind string theory. There’s another issue: the standard model lacks something called naturalness. The main motivation for the LHC was to address this problem. Unfortunately, the LHC has not been able to solve it, and it seems unlikely that it, or any other collider, ever will. Perhaps that alludes to the real issue.

Could it be that particle physics has reached the stage where the questions that need answers cannot be answered through experiments anymore? The energy scales where the answers to these questions would be observable are just too high. If this is indeed the case, it would mark the end of particle physics as we know it. It would enter a stage of unverifiable philosophy. One may be able to construct beautiful mathematical theories to address the remaining questions. But one would never know whether these theories are correct.

What then?