The deceptive lure of a final theory

There has been this nagging feeling that something is not quite right with the current flavor of fundamental physics theories. I’m not just talking about string theory. All the attempts currently being pursued share a salient property which, until recently, I could not quite put my finger on. One thing that is quite obvious is that the level of mathematics they entail is of an extremely sophisticated nature. That in itself is not quite where the problem lies, although it does have something to do with it.

Then, recently, I looked at a 48-page write-up of somebody’s ideas for a fundamental theory to unify gravity and quantum physics. (It identifies the need for the “analytic continuation of spinors,” and I thought it might be related to something I’ve worked on recently.) It was while reading through the introductory parts of this manuscript that it struck me what the problem is.

Take the standard model of particle physics as a case in point. It is a collection of theories (quantum chromodynamics or QCD, and the electroweak theory) formulated in the language of quantum field theory. So there is a separation between the formalism (quantum field theory) and the physics (QCD, etc.). The formalism was originally developed for quantum electrodynamics. It incorporates physics principles that had previously been established as scientific principles. In other words, those principles regarded as established scientific knowledge are built into the formalism. The speculative parts are the various models that can be formulated in terms of the formalism. They are not cast in stone, and the formalism is powerful enough to allow different models. Eventually some of these models passed various experimental tests and thus became established theories, which we now call the standard model.

What the formalism of quantum field theory does not allow is the incorporation of general relativity, or some equivalent that would let us formulate models for quantum theories of gravity. So it is natural to think that fundamental physicists should be spending their efforts on an even more powerful formalism that would allow model building that addresses the question of gravity. However, when you take a critical look at the theoretical attempts currently being worked on, you see that this is not the case. Instead, the models and the formalisms are the same thing. The established scientific knowledge and the speculative parts are mixed together in highly complex mathematical theories. Does such an approach have any hope of success?

Why do people do that? I think it is because they are aiming high. They hope that what they come up with will be the last word in fundamental physics. It is the ambitious dream of a final theory. They don’t want to bother with models built on some general formalism in terms of which one can formulate various different models, models that may eventually be referred to as merely “the standard model.” That is just too modest.

Another reason is the view, which seems to exist among those working on fundamental physics, that nature dictates the mathematics that must be used to model it. In other words, they seem to think that the correct theory can have only one possible mathematical formalism. If that were true, the chances that we have already invented that formalism, or that we would select the correct approach by chance, are extremely small.

But can it work? I don’t think there is any reasonable chance that some random venture into theory space will miraculously turn out to be the right guess. Theory space is just too big. In the manuscript I read, one can see that the author makes various ad hoc decisions in the mathematical modeling. Some of these guesses produce familiar features that resemble something about the physical world as we understand it, which then gives some indication that this is the “right path” to follow. However, mathematics is an extremely versatile and diverse language. One can easily be misled by something that looked like the “right path” at some point. String theory is an excellent example in this regard.

So what would be a better approach? We need a powerful formalism in terms of which we can formulate various different quantum theories that incorporate gravity. The formalism can have incorporated into it as much of the established scientific principles as possible. That will make it easier to propose models that already satisfy those principles. The speculation is then confined to the modeling part.

The benefit of such an approach is that it unifies the different attempts: a common formalism makes it easier to borrow ideas from other attempts that seem to have worked. In this way, the community of fundamental physics can work together to make progress. Hopefully the theories thus formulated will make predictions that can be tested with physical experiments, or perhaps astronomical observations, allowing them to become scientific theories. Chances are that a successful theory that incorporates gravity and at the same time covers all of particle physics as we understand it today will still not be the “final theory.” It may still be just a “standard model.” But it will represent progress in understanding, which is more than we can say for what is currently going on in fundamental physics.

Diversity of ideas

The prevailing “crisis in physics” has led some people to suggest that physicists should only follow a specific path in their research. It creates the impression that one person is trying to tell the entire physics community what they are allowed to do and what not. Speculative ideas are not to be encouraged. The entire physics research methodology needs to be reviewed.

Unfortunately, it does not work like that. One of the key underlying principles of the scientific method is the freedom that all people involved in it have to do whatever they like. It is the agreement between these ideas and what nature says that determines which ideas work and which do not. How one comes up with the ideas should not be restricted in any way.

This freedom is important, because nature is resourceful. From the history of science we learn that the ideas that turned out to be right were arrived at in all sorts of ways. If one starts to restrict the way these ideas are generated, one may end up empty-handed.

Due to this diversity in the ways nature works, we need a diversity of perspectives to find the solutions. It is like a search algorithm in a vast energy landscape: one needs numerous diverse starting points to have any hope of finding the global minimum.
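The search analogy can be made concrete with a toy sketch. Everything below (the landscape function, the step size, the number of starts) is an arbitrary illustration, not a claim about theory space: a single greedy search usually gets trapped in a local minimum, while many diverse starting points make finding the global one almost certain.

```python
import math
import random

def local_search(f, x, step=0.1, iters=200):
    """Greedy descent: repeatedly move to a neighboring point if it lowers f."""
    for _ in range(iters):
        for cand in (x - step, x + step):
            if f(cand) < f(x):
                x = cand
    return x

def multi_start(f, starts):
    """Run a local search from each starting point and keep the best result."""
    return min((local_search(f, s) for s in starts), key=f)

# A rugged "energy landscape" with many local minima; its global
# minimum sits near x = -0.3 with f(x) close to -1.9.
f = lambda x: x * x + 2.0 * math.sin(5.0 * x)

random.seed(0)
starts = [random.uniform(-10.0, 10.0) for _ in range(200)]
x_best = multi_start(f, starts)
```

A randomly chosen single start lands in the global basin only a few percent of the time here; two hundred diverse starts find it reliably. The analogy to a diversity of theoretical perspectives is, of course, only an analogy.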

Having said that, one does find that there are some guiding principles that have proven useful in selecting among various ideas. One is Occam’s razor, which suggests starting with the simplest explanation first. Nature seems to be minimalist. If we are trying to find an underlying system to explain a certain phenomenology, then the underlying system needs to be rich enough to produce the level of complexity that one observes in the phenomenology. However, it should not be too rich, leading to too much complexity. For example, by conjuring up extra dimensions to explain what we see, we produce too much complexity. Chances are, therefore, that we don’t need them.

Another principle, which is perhaps less well known, is the minimum disturbance principle. It suggests that when we find that something is wrong with our current understanding, it does not make sense to throw everything away and rebuild the whole understanding from scratch. Just fix that which is wrong.

Now, there are examples in the history of science where the entire edifice of existing theory in a particular field was replaced to solve a problem. However, this only happens when observations that contradict the current theory start to accumulate. In other words, when there is a crisis.

Do we have that kind of crisis at the moment? I don’t think so. The problem is not that the existing standard model of particle physics has all these predictions that contradict observations. The problem is precisely the opposite: it is very good at making predictions that agree with what we can observe. We don’t seem to see anything that can tell us what to do next. So the effort to see what we can improve may well be beyond our capability.

The current crisis in physics may be because we are nearing the end of observable advances in our fundamental understanding. We may come up with new ideas, but we may be unable to get any more hints from experimental observation. In the end we may not even be able to test these new ideas. This problem starts to encroach on what we see as the scientific method. Can we compromise on it?

That is a topic for another day.


One of the main objectives for the Large Hadron Collider (LHC) was to solve the problem of naturalness. More precisely, the standard model contains a scalar field, the Higgs field, that does not have a mechanism to stabilize its mass. Radiative corrections are expected to cause the mass to grow all the way to the cut-off scale, which is assumed to be the Planck scale. If the Higgs boson has a finite mass far below the Planck scale (as was found to be the case), then it seems that there must exist some severe fine-tuning producing cancellations among the different orders of the radiative corrections. Such a situation is considered to be unnatural. Hence the concept of naturalness.
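To put a rough number on it, the textbook estimate of the dominant one-loop correction to the Higgs mass-squared (from the top-quark loop) grows with the square of the cut-off:

```latex
\delta m_H^2 \;\sim\; -\frac{3\,y_t^2}{8\pi^2}\,\Lambda^2 .
```

With the top Yukawa coupling $y_t \approx 1$ and the cut-off at the Planck scale, $\Lambda \sim 10^{19}\,\mathrm{GeV}$, this correction is of order $10^{36}\,\mathrm{GeV}^2$, while the observed value is $m_H^2 \approx (125\,\mathrm{GeV})^2 \approx 1.6\times 10^{4}\,\mathrm{GeV}^2$. Keeping the physical mass that light then requires cancellations of roughly one part in $10^{32}$.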

It was believed, with a measure of certainty, that the LHC would give answers to the problem of naturalness, telling us how nature maintains the mass of the Higgs far below the cut-off scale. (I also held such a conviction, as I recently discovered when reading some old comments on my blog.)

Now, after the LHC has completed its second run, the notion that it would provide answers to the naturalness problem has been met with some disappointment (to put it mildly). What are we to conclude from this? There are those who say that the lack of naturalness in the standard model is not a problem; it is just the way it is. The requirement of naturalness, it is said, is an unjustified appeal to beauty.

No, no, no, it has nothing to do with beauty. At best, beauty is just a guide that people sometimes use to select the best option among a plethora of options. It falls in the same category as Occam’s razor.

On the other hand, naturalness is associated more with the understanding of scale physics. The way scales govern the laws of nature is more than just an appeal to beauty. It provides us with a means to guess what the dominant behavior of a phenomenon would be like, even when we don’t have an understanding of the exact details. As a result, when we see a mechanism that deviates from our understanding of scale physics, it gives a strong hint that there are some underlying mechanisms that we have not yet uncovered.

For example, in the standard model the masses of the elementary particles range over several orders of magnitude. We cannot predict these mass values. They are dimensionful parameters that we have to measure. There is no fundamental scale parameter close to these masses that can give any indication of where they come from. Our understanding of scale physics tells us that there must be some mechanism that gives rise to them. To say that the masses are produced by the Yukawa couplings to the Higgs field does not provide the required understanding; it replaces one mystery with another. Why would such Yukawa couplings vary over several orders of magnitude? Where do they come from?
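The tree-level relation in the standard model makes the point explicit. Each fermion mass is tied to its Yukawa coupling and the Higgs vacuum expectation value:

```latex
m_f = \frac{y_f\, v}{\sqrt{2}}, \qquad v \approx 246~\mathrm{GeV}.
```

So the hierarchy of masses is merely relabeled as a hierarchy of couplings: $y_t \approx 1$ for the top quark ($m_t \approx 173\,\mathrm{GeV}$), but $y_e \approx 3\times 10^{-6}$ for the electron ($m_e \approx 0.511\,\mathrm{MeV}$), with no explanation for the six orders of magnitude in between.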

So the naturalness problem, which is part of a bigger mystery related to the mass scales in the standard model, still remains. The LHC does not seem to be able to give us any hints to solve this mystery. Perhaps another larger collider will.

Mopping up

The particle physics impasse prevails. That is my impression, judging from the battles raging on the blogs.

Among these, I recently saw an interesting comment by Terry Bollinger on a blog post by Sabine Hossenfelder. According to Terry, the particle physics research effort lost track (missed the right turnoff) already in the 1970s. This opinion is in agreement with the apparent slowdown in progress since then. Apart from the fact that neutrinos have mass, we have not learned much more about fundamental physics since the advent of the standard model in the 1970s.

However, some may argue that the problem started even earlier, perhaps just after the Second World War, because that was when the world woke up to the importance of fundamental physics. That was the point where vanity replaced curiosity as the driving force of research. The result was an increase in weird science: crazy predictions more interested in drawing attention than in increasing understanding.

Be that as it may. (I’ve written about that in my book.) The question is: what to do about it? There are some concepts in fundamental physics that are taken for granted, yet have never been established as scientific fact through a proper scientific process. One such concept, pointed out by Terry, is the behaviour of spacetime at the Planck scale.

Today the Planck scale is referred to as if it were established scientific fact, when in fact it is a hypothetical scale. The physical existence of the Planck scale has not been, and probably cannot be, confirmed through scientific experiments, at least not with our current capability. Chances are it does not exist.
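For context, the Planck scale is nothing more than the dimensional combination of $\hbar$, $G$ and $c$:

```latex
\ell_P = \sqrt{\frac{\hbar G}{c^3}} \approx 1.6\times 10^{-35}~\mathrm{m},
\qquad
E_P = \sqrt{\frac{\hbar c^5}{G}} \approx 1.2\times 10^{19}~\mathrm{GeV}.
```

Dimensional analysis guarantees that these numbers exist; it does not guarantee that anything physical happens there.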

The existence of the Planck scale is based on some other concepts that are also not scientific facts. One is the notion of vacuum fluctuations, a concept often invoked to come up with exotic predictions. What about the vacuum is fluctuating? It follows from a very simple calculation that the particle number of the vacuum state is exactly zero, with zero uncertainty. So it seems that the notion of vacuum fluctuations is not as well understood as is generally believed.
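The simple calculation alluded to here is one line. For a single mode with number operator $\hat N$, the vacuum is an eigenstate:

```latex
\hat{N} = \hat{a}^\dagger \hat{a}, \qquad \hat{a}\,|0\rangle = 0
\;\;\Rightarrow\;\;
\langle 0|\hat{N}|0\rangle = 0, \qquad
\langle 0|\hat{N}^2|0\rangle = 0,
```

so the variance $(\Delta N)^2 = \langle \hat{N}^2\rangle - \langle \hat{N}\rangle^2 = 0$. Whatever “fluctuates” in the vacuum, it is not the particle number.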

Does it mean that we are doomed to wander around in a state of confusion? No, we just need to return to the basic principles of the scientific method.

So I propose a mopping-up exercise. We need to go back to what we understand according to the scientific method, and then test those parts that we are not sure about with scientific experiments and observations. Those aspects that are not testable in a scientific manner need to be treated on a different level.

For instance, the so-called measurement problem involves aspects that are in principle not testable. As such, they belong to the domain of philosophy and should not be incorporated into our scientific understanding. There are things we can never know in a scientific manner and it is pointless to make them prerequisites for progress in our understanding of the physical world.

Particle physics blues

The Large Hadron Collider (LHC) recently completed its second run. While the existence of the Higgs boson was confirmed during the first run, the outcome from the second run was … well, shall we say somewhat less than spectacular. In view of the fact that the LHC carries a pretty hefty price tag, this rather disappointing state of affairs is producing a certain degree of soul searching within the particle physics community. One can see that from the discussions here and here.

CMS detector at LHC (from wikipedia)

So what went wrong? Judging from the discussions, one may guess it could be a combination of things. Perhaps it is all the hype that accompanies some of the outlandish particle physics predictions. Or perhaps it is the overly esoteric theoretical nature of some of the physics theories. String theory seems to be singled out as an example of a mathematical theory without any practical predictions.

Perhaps the reason for the current state of affairs in particle physics is none of the above. Reading the above-mentioned discussions, one gets the picture from those that are close to the fire. Sometimes it helps to step away and look at the situation from a little distance. Could it be that, while these particle physicists vehemently analyze all the wrong ideas and failed approaches that emerged over the past few decades (even starting to question one of the foundations of the scientific method: falsifiability), they are missing the elephant in the room?

The field of particle physics has been around for a while. It has a long history of advances, from uncovering the structure of the atom to revealing the constituents of protons and neutrons. The culmination is the Standard Model of Particle Physics – a truly remarkable edifice of our current understanding.

So what now? What’s next? Well, the standard model does not include gravity, so there is still a strong effort to come up with a theory that would unite gravity with the other forces currently included in the standard model. That is the main motivation behind string theory. There is another issue: the standard model lacks something called naturalness. The main motivation for the LHC was to address this problem. Unfortunately, the LHC has not been able to solve it, and it seems unlikely that it, or any other collider, ever will. Perhaps that alludes to the real issue.

Could it be that particle physics has reached the stage where the questions that need answers cannot be answered through experiments anymore? The energy scales where the answers to these questions would be observable are just too high. If this is indeed the case, it would mark the end of particle physics as we know it. It would enter a stage of unverifiable philosophy. One may be able to construct beautiful mathematical theories to address the remaining questions. But one would never know whether these theories are correct.

What then?