A post mortem for string theory

So string theory is dead. But why? What went wrong to cause its demise? Or, more importantly, why did it not succeed?

We don’t remember those theories that did not succeed. Perhaps we remember those that were around for a long time before they were shown to be wrong, like Newton’s corpuscular theory of light or Ptolemy’s epicycles. Some theories that unsuccessfully tried to explain things that we still don’t understand are also still remembered, like the different models for grand unification. But all those different models that people proposed for the electro-weak theory are gone. We only remember the successful one which is now part of the standard model.

Feynman said at some point that he did not like to read the literature on theories that could not explain something successfully, because it might mislead him. However, I think we can learn something generic about how to approach challenges in our fundamental understanding by looking at the unsuccessful attempts. It is important not to be seduced by the appealing ideas of such failed attempts, but to scrutinize them for their flaws and learn from that.

Previously, I have emphasized the importance of a guiding principle for our endeavors to understand the fundamental aspects of our universe. I believe that one of the reasons why string theory failed is that it has a flawed guiding principle. It is based on the idea that, instead of particles, the universe is made up of strings. Since strings are extended objects with a certain scale (the Planck scale), they provide a natural cut-off, removing those pesky infinities.

The problem is, when you invent something to replace something else, you presume that there was something that needed replacing. In other words, did we need particles in the first place? The answer is no. Quantum field theory, the formalism in which the successful standard model is formulated, does not impose the existence of particles. It merely requires localized interactions.

But what about the justification for extended objects based on getting rid of the infinities? I’ve written about these infinities before and explained that they are to be expected in any realistic formulation of fundamental physics and that some contrivance to get rid of them does not make sense.

So, the demise of a theory based on a flawed guiding principle is not surprising. What we learn from this post mortem is that it is important to be very careful when we impose guiding principles. Although such principles are not scientifically testable, the notions on which we base such principles should be.

In memoriam: string theory

Somebody once explained that when a theory is shown to be wrong, its proponents keep on believing in it. It is only when they pass away that the younger generation can move on.

None of this applies to string theory. For a theory to be shown to be wrong, there must be something to test. The mathematical construct that is currently associated with string theory is not in any form that can be subjected to scientific testing.

What was shown to be wrong is supersymmetry, which is a prerequisite for the currently favored version of string theory, superstring theory. (The non-supersymmetric version of string theory fell into disfavor decades ago.) The Large Hadron Collider did not see the particles predicted by supersymmetry. Well, to be honest, there is a small chance that it will see something in the third run, which has just started, but I get the feeling that people are not exactly holding their breath. I'm willing to say supersymmetry is dead, and therefore so is superstring theory.

Another reason why things are different with string theory is that its proponents found a way to extend the activity in string theory beyond their own careers. They get a younger generation of physicists addicted to it, so that this new generation of string theorists will go on working on it and popularizing it. What a horrible thing to do!

Why would the current string theorists mislead a younger generation of physicists to work on a failed idea? Legacy! Most of these current string theorists have spent their entire careers working on this topic. Some of them got very famous for it. Now they want to ensure that they are remembered for something that worked and not for something that failed. So it all comes down to vanity, which I’ve written about before.

String theory was already around when I was still a student several decades ago. I could have decided to pursue it as a field of study at that point. What would I have had to show for it now? Nothing! No accomplishments! A wasted career!

There was a time when you couldn’t get a position in a physics department unless you were a string theorist. As a result, there is a vast population of string theorists sitting in faculty positions. It is no wonder that they still maintain such a strong influence in physics even though the theory they work on is dead.

Adrift in theory space

It is downright depressing to think that after all the effort to understand the overlap between gravity and quantum physics there is still no scientific theory that explains the situation. For several decades a veritable crowd of physicists worked on this problem and the best they have are conjectures that cannot be tested experimentally. The manpower that has been spent on this topic must be phenomenal. How is it possible that they are not making progress?

I do understand that it is a difficult problem. However, the quantum properties of nature were also a difficult problem, and so was the particle zoo that led to quantum field theory. And what about gravity, which was effectively solved single-handedly by one person? There must be another reason why the current challenge is evidently so much more formidable, or why the efforts to address the challenge are not successful.

It could be that we really have reached the end of science as far as fundamental physics is concerned. For a long time it was argued that the effects of the overlap between gravity and quantum physics would only show at energy scales much higher than any particle collider could achieve. As a result, there is a lack of experimental observations that can point the way. However, with the increase in our understanding of quantum physics, which led to the notion of entanglement, it has become evident that it should be possible to perform experiments in which masses are entangled, leading to scenarios where gravity comes into confrontation with quantum physics at energy levels easily achievable with current technology. We should see results of such experiments in the not-too-distant future.

Another reason for the lack of progress is of a more cultural nature. Physics as a cultural activity has gone through some changes, which I believe may be responsible for the lack of progress. I have written before about the problem with vanity and do not want to discuss that again here. Instead, I want to discuss the effect of the current physics culture on progress in fundamental physics.

The study of fundamental physics differs from other fields in physics in that it does not have an underlying well-established theory in terms of which one can formulate the current problem. In other fields of physics, you always have more fundamental physical theories in terms of which you can model the problem under investigation. So how does one approach problems in fundamental physics? You basically need to make a leap into theory space, hoping that the theory you end up with successfully describes the problem that you are studying. But theory space is vast, and the number of directions you can leap into is infinite. You need something to guide you.

In the past, this guidance often came in the form of experimental results. However, there are cases where progress in fundamental physics was made without the benefit of experimental results. A prominent example is Einstein's theory of general relativity. How did he do it? He spent a long time thinking about the problem until he came up with some guiding principles. He realized that gravity and acceleration are interchangeable.

So, if you want to make progress in fundamental physics and you don't have experimental results to guide you, then you need a guiding principle to show you which direction to take in theory space. What are the guiding principles of the current effort? For string theory, it is the notion that fundamental particles are strings rather than points. But why would that be the case? It seems to be a rather ad hoc choice for a guiding principle. One justification is the fact that it seems to avoid some of the infinities that often appear in theories of fundamental physics. However, these infinities are mathematical artifacts of such theories that are to be expected when the theory must describe an infinite number of degrees of freedom. Using some mathematical approach to avoid such infinities, we may end up with a theory that is finite, but such an approach only addresses the mathematical properties of the theory and has nothing to do with physical reality. So, it does not serve as a physical guiding principle. After all the effort that has been poured into string theory, without having achieved success, one should perhaps ponder whether the starting assumption is not where the problem lies.

The problem with such a large effort is the investment that is being made. Eventually the investment is just too large to abandon. A large number of very intelligent people have spent their entire careers on this topic. They have reached prominence in the broader field of physics and simply cannot afford to give it up now. As a result, they drag most of the effort in fundamental physics, including a large number of young physicists, along with them on this failed endeavor.

There are other theories, such as loop quantum gravity, that try to find a description of fundamental physics. These theories, together with string theory, all have in common that they rely heavily on highly sophisticated mathematics. In fact, the "progress" in these theories often takes the form of mathematical theorems. It does not look like physics anymore. Instead of physical guiding principles, they are using sets of mathematical axioms as their guiding principles.

To make things worse, physicists working on these fundamental aspects are starting to contemplate deviating from the basics of the scientific method. They judge the validity of their theories on various criteria that have nothing to do with the scientific approach of testing predictions against experimental observations. Hence the emergence of non-falsifiable notions such as the multiverse.

In view of these distortions that are currently plaguing the prevailing physics culture, I am not surprised at the lack of progress in fundamental physics. The remarkable understanding of our physical world that humanity has gained has come through the healthy application of the scientific method. No alternative has made any comparable progress.

What I am proposing is that we go back to the basics. First and foremost, we need to establish the scientific method as the only approach to follow. And then, we need to discuss physical guiding principles that can show the way forward in our current effort to understand the interplay between gravity and quantum physics.


The importance of falsifiability

Many years ago, while I was still a graduate student studying particle physics, my supervisor Bob was very worried about supersymmetry. He was particularly worried that it would become the accepted theory without ever being properly tested.

In those days, it was almost taken for granted that supersymmetry was the correct theory. Since he came from the technicolour camp, Bob did not particularly like supersymmetry. Unfortunately, at that point, the predictions of the technicolour models did not agree with experimental observations, so technicolour was not seriously considered as a viable theory. Supersymmetry, on the other hand, had enough free parameters that it could sidestep any detrimental experimental results. This ability to dodge such results and constantly hide itself made supersymmetry look like a theory that could never be ruled out. Hence my supervisor's concern.

Today the situation is much different. As the Large Hadron Collider accumulated data, it could systematically rule out progressively larger energy ranges where the supersymmetric particles could hide. Eventually, there was simply no place to hide anymore. At least those versions of supersymmetry that rely on a stable superpartner at the electroweak scale have been ruled out. For most particle physicists, this seems to indicate that supersymmetry as a whole has been ruled out. But of course, there are still those who cling to the idea.

So, in hindsight, supersymmetry was falsifiable after all. For me, this whole process exemplifies the importance of falsifiability. Imagine that supersymmetry could keep on hiding. How would we know if it is right? The reason why so many physicists believed it must be right is that it is "so beautiful." Does beauty in this context imply that a theory must be correct? Evidently not. There is no alternative to experimental testing for knowing whether a scientific theory is correct.

This brings me to another theory that is believed to be true simply because it is considered so beautiful that it must be correct. I'm talking about string theory. In this case there is a very serious issue with the falsifiability of the theory. String theory addresses physics at the hypothetical Planck scale. However, there does not exist any conceivable way to test physics at this scale.

Just to avoid any confusion about what I mean by falsifiable: there are those who claim that string theory is falsifiable, just not practically testable. Well, that is missing the point, isn't it? The reason for falsifiability is to know whether the theory is right. Falsifiability "in principle" does not help, because then we still won't be able to know if the theory is right. The only useful form of falsifiability is when one can physically test the theory. Otherwise it is not interesting from a scientific point of view.

Having said that, I do not think one should dictate to people what they are allowed to research. We may agree about whether it is science or not, but if somebody wants to investigate something that we do not currently consider as scientific, then so be it. Who knows, one day that research may somehow lead to research that is falsifiable.

There is of course the whole matter of whether such non-falsifiable research should be allowed to receive research funding. However, the matter of how research should be funded is a whole topic on its own. Perhaps for another day.

Particle physics blues

The Large Hadron Collider (LHC) recently completed its second run. While the existence of the Higgs boson was confirmed during the first run, the outcome from the second run was … well, shall we say somewhat less than spectacular. In view of the fact that the LHC carries a pretty hefty price tag, this rather disappointing state of affairs is producing a certain degree of soul searching within the particle physics community. One can see that from the discussions here and here.

CMS detector at LHC (from wikipedia)

So what went wrong? Judging from the discussions, one may guess it could be a combination of things. Perhaps it is all the hype that accompanies some of the outlandish particle physics predictions. Or perhaps it is the overly esoteric theoretical nature of some of the physics theories. String theory seems to be singled out as an example of a mathematical theory without any practical predictions.

Perhaps the reason for the current state of affairs in particle physics is none of the above. Reading the above-mentioned discussions, one gets the picture from those that are close to the fire. Sometimes it helps to step away and look at the situation from a little distance. Could it be that, while these particle physicists vehemently analyze all the wrong ideas and failed approaches that emerged over the past few decades (even starting to question one of the foundations of the scientific method: falsifiability), they are missing the elephant in the room?

The field of particle physics has been around for a while. It has a long history of advances: from uncovering the structure of the atom, to revealing the constituents of protons and neutrons. The culmination is the Standard Model of Particle Physics, a truly remarkable edifice of current understanding.

So what now? What's next? Well, the standard model does not include gravity. So there is still a strong effort to come up with a theory that would unify gravity with the other forces currently included in the standard model. That is the main motivation behind string theory. There's another issue: the standard model lacks something called naturalness. The main motivation for the LHC was to address this problem. Unfortunately, the LHC has not been able to solve the issue, and it seems unlikely that it, or any other collider, ever will. Perhaps that alludes to the real issue.

Could it be that particle physics has reached the stage where the questions that need answers cannot be answered through experiments anymore? The energy scales where the answers to these questions would be observable are just too high. If this is indeed the case, it would mark the end of particle physics as we know it. It would enter a stage of unverifiable philosophy. One may be able to construct beautiful mathematical theories to address the remaining questions. But one would never know whether these theories are correct.

What then?