The standard model turned 50

The essence of the standard model of particle physics was formulated in an article by Steven Weinberg in 1967, but the term “standard model” was apparently first introduced by him in a talk in 1973. Regardless of whether it is its name or its formulation that marks its origin, the standard model is by now at least 50 years old. The fact that we still call it the standard model means that there has not been any improvement for the past 50 years. Otherwise the improved model would have become the new “standard model.”

Steven Weinberg (my academic grandfather)

Depending on one’s perspective, this longevity can be seen either as a good thing or as a bad thing. The fact that it has survived this long is a testimony to how well it works. In fact, the only clear shortcoming is that it still contains the neutrinos as massless particles. We know that neutrinos have relatively small but nonzero masses, yet there is no established formulation that incorporates neutrino masses into the standard model. Nevertheless, despite this shortcoming, the standard model is still an amazing triumph in the scientific endeavor to understand how the physical world works.

On the other hand, there are several notions about what an improved standard model should include. One obvious idea is that gravity should be included in the standard model as a fourth force. However, if we treat gravity as a force in the same way that the other forces in the standard model are treated, are we not taking a step backward? Didn’t Einstein say that gravity is not a force? It is the result of curved spacetime. Ironically, naive attempts to describe gravity as a force in terms of quantum field theory have led to some insurmountable problems. So, I don’t think it is a good idea. Then of course there were the attempts associated with string theory. Enough said about that.

The more I think about it, the less I am convinced that gravity needs to be quantized. What it comes down to is whether it would be possible to entangle the curvature of spacetime with a superposition of different mass density distributions. From a purely formal point of view, one can treat the stress-energy tensor as a quantum observable. The expectation value of this stress-energy tensor observable, evaluated for a state representing a superposition of mass density distributions, would still give a well-defined “classical” stress-energy tensor in the Einstein field equation. Hence, no entanglement and no need for a theory of quantum gravity. Well, I am not completely sure about it, because I haven’t done the actual calculation yet.
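Written out, this is just the standard semiclassical ansatz (stated here schematically, not derived): the classical Einstein tensor is sourced by the quantum expectation value of the stress-energy tensor,

\[
G_{\mu\nu} = \frac{8\pi G}{c^4}\,\langle \psi|\hat{T}_{\mu\nu}|\psi\rangle ,
\]

so even if \(|\psi\rangle\) is a superposition of different mass density distributions, the right-hand side is a single well-defined classical tensor field.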

Other things that are believed to be missing in the standard model are dark energy and dark matter. There we still have much work to do before we can even start to think about changing the standard model. I am not as knowledgeable about everything associated with these concepts as I would like to be, but I still need to be convinced that we really need such exotic explanations. In my view, what is being observed could come down to nothing more than complications in the calculations.

So, apart from the neutrino masses, I would not be surprised if the standard model in its current form is pretty much as good as it can get. We may be celebrating many more years … decades … centuries of the standard model.

The deceptive lure of a final theory

There has been this nagging feeling that something is not quite right with the current flavor of fundamental physics theories. I’m not just talking about string theory. All the attempts that are currently being pursued share a salient property, which, until recently, I could not quite put my finger on. One thing that is quite obvious is that the mathematics they entail is of an extremely sophisticated nature. That in itself is not quite where the problem lies, although it does have something to do with it.

Then, recently, I looked at a 48-page write-up of somebody’s ideas concerning a fundamental theory to unify gravity and quantum physics. (It identifies the need for the “analytic continuation of spinors,” and I thought it might be related to something that I’ve worked on recently.) It was while I read through the introductory parts of this manuscript that it struck me what the problem is.

Take the standard model of particle physics as a case in point. It is a collection of theories (quantum chromodynamics or QCD, and the electroweak theory) formulated in the language of quantum field theory. So, there is a separation between the formalism (quantum field theory) and the physics (QCD, etc.). The formalism was originally developed for quantum electrodynamics. It contains some physics principles that have previously been established as scientific principles. In other words, those principles which are regarded as established scientific knowledge are built into the formalism. The speculative parts are the various models that can be formulated in terms of the formalism. They are not cast in stone, but the formalism is powerful enough to allow different models. Eventually some of these models passed various experimental tests and thus became established theories, which we now call the standard model.

What the formalism of quantum field theory does not allow is the incorporation of general relativity, or some equivalent, that would allow us to formulate models for quantum theories of gravity. So it is natural to think that fundamental physicists should be spending their efforts on an even more powerful formalism that would allow model building that addresses the question of gravity. However, when one takes a critical look at the theoretical attempts that are currently being worked on, one sees that this is not the case. Instead, the models and the formalisms are the same thing. The established scientific knowledge and the speculative stuff are mixed together in highly complex mathematical theories. Does such an approach have any hope of success?

Why do people do that? I think it is because they are aiming high. They have the hope that what they come up with will be the last word in fundamental physics. It is the ambitious dream of a final theory. They don’t want to bother with models that are built on some general formalism in terms of which one can formulate various different models, and which may eventually be referred to as “the standard model.” That is just too modest.

Another reason is the view, which seems to exist among those working on fundamental physics, that nature dictates the mathematics that needs to be used to model it. In other words, they seem to think that the correct theory can only have one possible mathematical formalism. If that were true, the chances that we have already invented that formalism, or that we may by chance select the correct approach, are extremely small.

But can it work? I don’t think there is any reasonable chance that some random venture into theory space could miraculously turn out to be the right guess. Theory space is just too big. In the manuscript I read, one can see that the author makes various ad hoc decisions in terms of the mathematical modeling. Some of these guesses seem to produce familiar aspects that resemble something about the physical world as we understand it, which then gives some indication that it is the “right path” to follow. However, mathematics is an extremely versatile and diverse language. One can easily be misled by something that looked like the “right path” at some point. String theory is an excellent example in this regard.

So what would be a better approach? We need a powerful formalism in terms of which we can formulate various different quantum theories that incorporate gravity. The formalism can have incorporated into it as many of the established scientific principles as possible. That will make it easier to present models that already satisfy those principles. The speculation is then left for the modeling part.

The benefit of such an approach is that it unifies the different attempts: a common formalism makes it easier to borrow ideas from other attempts that seem to have worked. In this way, the community of fundamental physics can work together to make progress. Hopefully the theories thus formulated will be able to make predictions that can be tested with physical experiments, or perhaps astronomical observations, that would allow them to become scientific theories. Chances are that a successful theory that incorporates gravity and at the same time covers all of particle physics as we understand it today will still not be the “final theory.” It may still be just a “standard model.” But it will represent progress in understanding, which is more than can be said for what is currently going on in fundamental physics.

Diversity of ideas

The prevailing “crisis in physics” has led some people to suggest that physicists should only follow a specific path in their research. It creates the impression that one person is trying to tell the entire physics community what they are allowed to do and what they are not. Speculative ideas are not to be encouraged. The entire physics research methodology needs to be reviewed.

Unfortunately, it does not work like that. One of the key underlying principles of the scientific method is the freedom that all people involved in it have to do whatever they like. It is the agreement between these ideas and what nature says that determines which ideas work and which do not. How one comes up with the ideas should not be restricted in any way.

This freedom is important, because nature is resourceful. From the history of science we learn that the ways in which people arrived at the ideas that turned out to be right differ in all sorts of ways. If one starts to restrict the way these ideas are generated, one may end up empty-handed.

Due to this diversity in the ways nature works, we need a diversity of perspectives to find the solutions. It is like a search algorithm in a vast energy landscape. One needs numerous diverse starting points to have any hope of finding the global minimum.
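As a toy illustration of the point (nothing more than a sketch, with a made-up one-dimensional “landscape”), a multi-start search only reliably finds the global minimum when its starting points are spread widely enough:

```python
import math
import random

def local_search(f, x, steps=1000, step_size=0.1):
    """Greedy local search: accept only downhill moves."""
    val = f(x)
    for _ in range(steps):
        candidate = x + random.uniform(-step_size, step_size)
        c_val = f(candidate)
        if c_val < val:
            x, val = candidate, c_val
    return x, val

def multi_start_minimize(f, starts, **kwargs):
    """Run a local search from every starting point and keep the best result."""
    return min((local_search(f, x, **kwargs) for x in starts), key=lambda r: r[1])

# A rugged one-dimensional landscape with many local minima.
f = lambda x: math.sin(5 * x) + 0.1 * (x - 2) ** 2

single_start = [0.0]                                           # one starting point: easily trapped
diverse_starts = [random.uniform(-10, 10) for _ in range(50)]  # many diverse starting points

print(multi_start_minimize(f, single_start))
print(multi_start_minimize(f, diverse_starts))
```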

Having said that, one does find that there are some guiding principles that have proven useful in selecting among various ideas. One is Occam’s razor. It suggests that one start with the simplest explanation first. Nature seems to be minimalist. If we are trying to find an underlying system to explain a certain phenomenology, then the underlying system needs to be rich enough to produce the level of complexity that one observes in the phenomenology. However, it should not be too rich, leading to too much complexity. As an example, by conjuring up extra dimensions to explain what we see, we produce too much complexity. Therefore, chances are that we don’t need them.

Another principle, which is perhaps less well known, is the minimum-disturbance principle. It suggests that when we find that something is wrong with our current understanding, it does not make sense to throw everything away and build up the whole understanding from scratch. Just fix that which is wrong.

Now, there are examples in the history of science where the entire edifice of existing theory in a particular field is changed to solve a problem. However, this only happens when the observations that contradict the current theory start to accumulate. In other words, when there is a crisis.

Do we have such a kind of crisis at the moment? I don’t think so. The problem is not that the existing standard model of particle physics makes all these predictions that contradict observations. The problem is precisely the opposite. It is very good at making predictions that agree with what we can observe. We don’t seem to see anything that can tell us what to do next. So, the effort to see what we can improve may well be beyond our capability.

The current crisis in physics may be because we are nearing the end of observable advances in our fundamental understanding. We may come up with new ideas, but we may be unable to get any more hints from experimental observation. In the end we may not even be able to test these new ideas. This problem starts to enter the domain of what we regard as the scientific method. Can we compromise it?

That is a topic for another day.

Naturalness

One of the main objectives of the Large Hadron Collider (LHC) was to solve the problem of naturalness. More precisely, the standard model contains a scalar field, the Higgs field, that does not have a mechanism to stabilize its mass. Radiative corrections are expected to cause the mass to grow all the way to the cut-off scale, which is assumed to be the Planck scale. If the Higgs boson has a finite mass far below the Planck scale (as was found to be the case), then it seems that there must exist some severe fine-tuning giving cancellations among the different orders of the radiative corrections. Such a situation is considered to be unnatural. Hence the concept of naturalness.
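Schematically (a textbook one-loop estimate, with the dimensionless couplings lumped into a factor of order one), the correction to the Higgs mass squared grows with the square of the cut-off \(\Lambda\),

\[
\delta m_H^2 \sim \frac{\Lambda^2}{16\pi^2},
\]

so if \(\Lambda\) is taken to be the Planck scale, keeping the physical mass at \(m_H \approx 125\ \text{GeV}\) requires the bare mass and the corrections to cancel over roughly 30 orders of magnitude.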

It was believed, with a measure of certainty, that the LHC would give answers to the problem of naturalness, telling us how nature maintains the mass of the Higgs far below the cut-off scale. (I also held to such a conviction, as I recently discovered reading some old comments I made on my blog.)

Now, after the LHC has completed its second run, the notion that it would provide answers to the naturalness problem has been met with some disappointment (to put it mildly). What are we to conclude from this? There are those who say that the lack of naturalness in the standard model is not a problem. It is just the way it is. The requirement of naturalness, it is said, is an unjustified appeal to beauty.

No, no, no, it has nothing to do with beauty. At best, beauty is just a guide that people sometimes use to select the best option among a plethora of options. It falls in the same category as Occam’s razor.

On the other hand, naturalness is associated more with the understanding of scale physics. The way scales govern the laws of nature is more than just an appeal to beauty. It provides us with a means to guess what the dominant behavior of a phenomenon would be like, even when we don’t have an understanding of the exact details. As a result, when we see a mechanism that deviates from our understanding of scale physics, it gives a strong hint that there are some underlying mechanisms that we have not yet uncovered.

For example, in the standard model the masses of the elementary particles range over several orders of magnitude. We cannot predict these mass values. They are dimensionful parameters that we have to measure. There is no fundamental scale parameter close to the masses that can give any indication of where they come from. Our understanding of scale physics tells us that there must be some mechanism that gives rise to these masses. To say that these masses are produced by the Yukawa couplings to the Higgs field does not provide the required understanding. It replaces one mystery with another. Why would such Yukawa couplings vary over several orders of magnitude? Where do they come from?
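To make the point concrete (using nothing beyond the standard tree-level relation), each fermion mass is set by its Yukawa coupling \(y_f\) and the Higgs vacuum expectation value \(v \approx 246\ \text{GeV}\),

\[
m_f = \frac{y_f\, v}{\sqrt{2}},
\]

so the electron mass corresponds to \(y_e \sim 3\times 10^{-6}\) while the top quark mass corresponds to \(y_t \sim 1\). The hierarchy of masses is simply traded for an equally unexplained hierarchy of couplings.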

So the naturalness problem, which is part of a bigger mystery related to the mass scales in the standard model, still remains. The LHC does not seem to be able to give us any hints to solve this mystery. Perhaps another larger collider will.

Mopping up

The particle physics impasse prevails. That is my impression, judging from the battles raging on the blogs.

Among these, I recently saw an interesting comment by Terry Bollinger on a blog post by Sabine Hossenfelder. According to Terry, the particle physics research effort lost its way (missed the right turnoff) already in the 1970s. This opinion is in agreement with the apparent slowdown in progress since then. Apart from the fact that neutrinos have mass, we have not learned much more about fundamental physics since the advent of the standard model in the 1970s.

However, some may argue that the problem started even earlier, perhaps just after the Second World War, because that was when the world woke up to the importance of fundamental physics. That was the point where vanity became more important than curiosity as the driving force for research. The result was an increase in weird science – crazy predictions more intent on drawing attention than on increasing understanding.

Be that as it may. (I’ve written about that in my book.) The question is, what to do about it? There are some concepts in fundamental physics that are taken for granted, yet have never been established as scientific fact through a proper scientific process. One such concept pointed out by Terry is the behaviour of spacetime at the Planck scale.

Today the Planck scale is referred to as if it were established scientific fact, whereas it is in fact a hypothetical scale. The physical existence of the Planck scale has not been, and probably cannot be, confirmed through scientific experiments, at least not with our current capabilities. Chances are it does not exist.
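For reference (and this is purely a dimensional-analysis construction, which is exactly the point), the Planck length and energy are obtained by combining \(\hbar\), \(c\) and \(G\):

\[
\ell_P = \sqrt{\frac{\hbar G}{c^3}} \approx 1.6\times 10^{-35}\ \text{m},
\qquad
E_P = \sqrt{\frac{\hbar c^5}{G}} \approx 1.2\times 10^{19}\ \text{GeV}.
\]

Nothing in this combination of constants guarantees that anything physical actually happens at that scale.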

The existence of the Planck scale is based on some other concepts that are also not scientific facts. One is the notion of vacuum fluctuations, a concept that is often invoked to come up with exotic predictions. What about the vacuum is fluctuating? It follows from a very simple calculation that the particle number of the vacuum state is exactly zero with zero uncertainty. So it seems that the notion of vacuum fluctuations is not as well understood as is generally believed.
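The simple calculation referred to here is presumably the following (for a single field mode with annihilation operator \(\hat{a}\) and number operator \(\hat{n} = \hat{a}^\dagger\hat{a}\)): since \(\hat{a}|0\rangle = 0\),

\[
\langle 0|\hat{n}|0\rangle = 0,
\qquad
\langle 0|\hat{n}^2|0\rangle = 0,
\qquad
\Delta n = \sqrt{\langle \hat{n}^2\rangle - \langle \hat{n}\rangle^2} = 0 .
\]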

Does it mean that we are doomed to wander around in a state of confusion? No, we just need to return to the basic principles of the scientific method.

So I propose a mopping-up exercise. We need to go back to what we understand according to the scientific method and then test those parts that we are not sure about using scientific experiments and observations. Those aspects that are not testable in a scientific manner need to be treated on a different level.

For instance, the so-called measurement problem involves aspects that are in principle not testable. As such, they belong to the domain of philosophy and should not be incorporated into our scientific understanding. There are things we can never know in a scientific manner and it is pointless to make them prerequisites for progress in our understanding of the physical world.