A post mortem for string theory

So string theory is dead. But why? What went wrong to cause its demise? Or, more importantly, why did it not succeed?

We don’t generally remember theories that did not succeed. Perhaps we remember those that were around for a long time before they were shown to be wrong, like Newton’s corpuscular theory of light or Ptolemy’s epicycles. Some theories that unsuccessfully tried to explain things we still don’t understand are also remembered, like the different models for grand unification. But all the models that people proposed for the electroweak interaction are gone. We only remember the successful one, which is now part of the standard model.

Feynman once said that he did not like to read the literature on theories that failed to explain something, because it might mislead him. However, I think we can learn something generic about how to approach challenges in our fundamental understanding by looking at the unsuccessful attempts. The important thing is not to be seduced by the appealing ideas of such failed attempts, but to scrutinize them for their flaws and learn from that.

Previously, I have emphasized the importance of a guiding principle for our endeavors to understand the fundamental aspects of our universe. I believe that one of the reasons string theory failed is that it has a flawed guiding principle. It is based on the idea that, instead of particles, the universe is made up of strings. Since strings are extended objects with a characteristic scale (the Planck scale), they provide a natural cut-off, removing those pesky infinities.

The problem is, when you invent something to replace something else, it presupposes that there was something that needed replacing. In other words, did we need particles in the first place? The answer is no. Quantum field theory, the formalism in terms of which the successful standard model is formulated, does not impose the existence of particles. It merely requires localized interactions.

But what about the justification for extended objects based on getting rid of the infinities? I’ve written about these infinities before and explained that they are to be expected in any realistic formulation of fundamental physics and that some contrivance to get rid of them does not make sense.

So, the demise of a theory based on a flawed guiding principle is not surprising. What we learn from this post mortem is that it is important to be very careful when we impose guiding principles. Although such principles are not scientifically testable, the notions on which we base such principles should be.

In memoriam: string theory

Somebody once explained that when a theory is shown to be wrong, its proponents will keep on believing in it. It is only when they pass away that the younger generation can move on.

None of this applies to string theory. For a theory to be shown wrong, there must be something concrete to put to the test. The mathematical construct currently associated with string theory is not in any form that can be subjected to scientific testing.

What was shown to be wrong is supersymmetry, which is a prerequisite for the currently favored version of string theory – superstring theory. (The non-supersymmetric version of string theory fell into disfavor decades ago.) The Large Hadron Collider did not see the particles predicted by supersymmetry. Well, to be honest, there is a small chance that it will see something in the third run, which has just started, but I get the feeling that people are not exactly holding their breath. I’m willing to say supersymmetry is dead, and therefore so is superstring theory.

Another reason why things are different with string theory is that its proponents found a way to extend the post mortem activity in string theory beyond their own careers. They got a younger generation of physicists addicted to it, so that this new generation of string theorists would go on working on it and popularizing it. What a horrible thing to do!

Why would the current string theorists mislead a younger generation of physicists into working on a failed idea? Legacy! Most of these current string theorists have spent their entire careers working on this topic. Some of them got very famous for it. Now they want to ensure that they are remembered for something that worked and not for something that failed. So it all comes down to vanity, which I’ve written about before.

String theory was already around when I was still a student several decades ago. I could have decided to pursue it as a field of study at that point. What would I have had to show for it now? Nothing! No accomplishments! A wasted career!

There was a time when you couldn’t get a position in a physics department unless you were a string theorist. As a result, there is a vast population of string theorists sitting in faculty positions. It is no wonder that they still maintain such a strong influence in physics even though the theory they work on is dead.

Those quirky fermions

All of the matter in the universe is made of fermions. For this reason, they are among the most abundant things in the universe. Fermions have been a topic of investigation for a long time, and we have learned much about them. However, what we do know about them is encapsulated in the formalisms with which we deal with them in our theories. Does that mean we understand them?

Let’s think about the way we treat fermions in our theories. Basically, we represent them in terms of creation and annihilation operators, which are used to formulate the interactions in which they take part. These operators are distinguished from those for bosons by the anti-commutation relations that they obey.
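
For concreteness, these are just the standard textbook relations (with the indices i and j labelling the degrees of freedom, such as momentum and spin):

$$\{\hat{a}_i,\hat{a}_j^\dagger\} = \delta_{ij}, \qquad \{\hat{a}_i,\hat{a}_j\} = \{\hat{a}_i^\dagger,\hat{a}_j^\dagger\} = 0,$$

whereas the corresponding boson operators obey commutation relations, $[\hat{b}_i,\hat{b}_j^\dagger] = \delta_{ij}$ and $[\hat{b}_i,\hat{b}_j] = [\hat{b}_i^\dagger,\hat{b}_j^\dagger] = 0$.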

To the uninitiated, all this must sound like a bunch of gobbledygook. What are the physical manifestations of all these operators? There are none! These operators are just mathematical entities in the formalism for our theories. Although these theories are quite successful, they do not reveal the physical machinery at work on the inside. Or do they?

Although a creation operator does not by itself represent any physical process, it distinguishes different scenarios with different arrangements of fermions. Starting with a given scenario, I can apply a fermion creation operator to produce a new scenario that contains one additional fermion. Then I can apply the operator again; provided that I am not trying to add another fermion with the same degrees of freedom, it will produce yet another new scenario.

Here is the strange thing. If I change the order in which I add the two additional fermions, I get a scenario that is different from the one obtained with the previous order. Contrast this with the situation for bosons: provided that I don’t try to add bosons with the same degrees of freedom, the order in which I add them doesn’t matter. What this tells us is that bosons with different degrees of freedom don’t affect each other. (We need to be careful about the concepts of time-like or space-like separations, but for the sake of this argument, we’ll assume all the bosons or fermions are space-like separated.)
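
In symbols (again, nothing beyond the standard relations above): starting from any state $|\psi\rangle$ and adding two fermions with different degrees of freedom $i \neq j$,

$$\hat{a}_i^\dagger\,\hat{a}_j^\dagger\,|\psi\rangle = -\,\hat{a}_j^\dagger\,\hat{a}_i^\dagger\,|\psi\rangle,$$

whereas for bosons $\hat{b}_i^\dagger\,\hat{b}_j^\dagger\,|\psi\rangle = \hat{b}_j^\dagger\,\hat{b}_i^\dagger\,|\psi\rangle$. Swapping the order flips the sign of the fermionic state but leaves the bosonic one unchanged.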

The fact that the order in which we place fermions in our scenario (even when they are space-like separated) makes a difference tells us something physical about fermions. They must be global entities. The entire universe seems to “know” about the existence of each and every fermion in it.

How can that be possible? I can think of one way: topological defects. This is not a new idea. It pops up quite often in various fields of physics.

Topological defect

Why would a topological defect explain the apparent global nature of fermions? It is because topological defects of all kinds can be identified with the aid of an integral that computes the winding number of the defect. This type of integral is evaluated over a (hyper)surface that encloses the defect. In other words, it is the field values far away from the defect that enter the integral, not the field value at the defect itself. Therefore, knowledge about the defect is encoded in the entire field. This suggests that fermions can behave as global entities if they are topological defects. It is just a hypothesis; it needs more careful investigation.
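
As a simple illustration (the textbook example of a phase defect in a two-dimensional complex field, not anything specific to fermions): if the field carries a phase $\theta$, the winding number of a defect is

$$n = \frac{1}{2\pi}\oint_C \nabla\theta \cdot \mathrm{d}\boldsymbol{\ell},$$

where $C$ is any closed contour surrounding the defect. Only the field far from the core enters the integral, yet the integer $n$ it returns identifies the defect; higher-dimensional defects are characterized by analogous surface integrals.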

Why not an uncertainty principle?

It may seem strange that, among the fundamental physical principles for quantum physics that I have proposed recently, there is none associated with uncertainty. What would be the reason? Since we refer to it as the Heisenberg uncertainty principle, it should qualify as one of the fundamental principles of quantum physics, right? Well, that is just it. Although it qualifies as a principle, it is not fundamental.

It may help to consider how these concepts developed. At first, quantum mechanics was introduced as an improvement of classical mechanics. Therefore, quantities like position and momentum played an important role.

A prime example of a system in classical mechanics is the harmonic oscillator. Think of a metal ball hanging from a spring. When it is pulled down and let go, the ball starts oscillating, moving periodically up and down. This behavior is classically described in terms of a Hamiltonian that contains the position and momentum of the metal ball.

In the quantum version of this system, the momentum is replaced by the wave vector times the reduced Planck constant. But position and the wave vector are conjugate Fourier variables. That is the origin of the uncertainty. It also leads to non-commutation when position and momentum are represented as operators. Suitably scaled sums and differences of these two operators behave as lowering and raising operators for the quanta of energy in the system: the one reduces the energy in discrete steps and the other increases it in discrete steps.
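
In standard notation, with $\hat{p} = \hbar\hat{k}$, the Fourier relationship between position and wave vector gives

$$[\hat{x},\hat{p}] = i\hbar, \qquad \Delta x\,\Delta p \geq \frac{\hbar}{2},$$

and the usual ladder operators are the scaled combinations $\hat{a} = \sqrt{\tfrac{m\omega}{2\hbar}}\bigl(\hat{x} + \tfrac{i}{m\omega}\hat{p}\bigr)$ and its Hermitian conjugate $\hat{a}^\dagger$, which remove and add quanta of energy $\hbar\omega$.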

It was then found that quantum mechanics can also be used to improve classical field theory. But there are several differences. Oscillations in fields are not represented as displacements in position that are exchanged for momentum. Instead, the oscillations manifest in terms of the field strength. So, to develop a quantum theory of fields, one starts with the lowering and raising operators, which are now called creation and annihilation operators, or ladder operators. Their sum and difference produce a pair of operators that are analogous to the position and momentum operators of the harmonic oscillator. In this context, they are called quadrature operators. They display the same qualitative behavior as the position and momentum operators: they represent conjugate Fourier variables and therefore again produce an uncertainty and a non-commutation. The full development of quantum field theory is far more involved than what I have described here, but my focus here is only on the origin of the uncertainty.
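
For a single mode, the usual convention for the quadrature operators is

$$\hat{X} = \frac{\hat{a} + \hat{a}^\dagger}{\sqrt{2}}, \qquad \hat{P} = \frac{\hat{a} - \hat{a}^\dagger}{i\sqrt{2}}, \qquad [\hat{X},\hat{P}] = i, \qquad \Delta X\,\Delta P \geq \frac{1}{2}.$$

Nothing is being displaced in space here; the quadratures describe the field strength, yet the same Fourier-conjugate structure, and hence the same uncertainty, reappears.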

So, in summary, uncertainty emerges as an inevitable consequence of the Fourier relationship between conjugate variables. In the case of mechanical systems, these conjugate variables come about because of the quantum relationship between momentum and wave vector. In the case of fields, the conjugate variables come from the ladder operators, leading to properties analogous to those found in the formal description of the harmonic oscillator. Hence, uncertainty is not a fundamental property in quantum physics.

The nerds and the jocks, the saga continues

Recently, after reading another blog, I was reminded of this issue. There are jocks and there are nerds. The jocks are popular and influential. They like to run the show and order others around. Nerds, on the other hand, are not popular. They are not good at running the show, but they make everything else run smoothly. They tend to be the backroom boys and the behind-the-scenes people who make sure things work.

Image from Revenge of the Nerds movie

The one place where the nerds used to hold their own was the academic world. They are exceptionally good at figuring out how things work, and therefore they thrived in the sciences. Much of what we know about the physical world is thanks to the nerds who passionately, tenaciously and meticulously studied physical phenomena.

That was how things were up until roughly the Second World War. Then their knowledge started to have a big enough impact that they appeared on the radar screen of the jocks. So the jocks said to themselves, “Wait a minute, what is going on here? Why are we not aware of this?” And so the jocks started to infiltrate the academic scene.

Today the situation is very different. The jocks are running the show in the academic world. They are involved in academic research. The most prominent academics are, with almost no exception, jocks.

Make no mistake, the jocks are not stupid. They are good enough to maintain successful academic programs. In fact, the way academia currently works has to a large extent been invented by the jocks. The funding process, the way academics are recruited, and even the way publications are evaluated and judged for suitability are based on methods typical of the way jocks would run things. It’s all based on popularity, impact and influence.

However, the jocks are not as good at academic research as the nerds are. The consequences can be seen in the lack of progress in fundamental research. You see, jocks are more concerned about their egos, and they are only doing this research thing for the fame and glory that first popped onto their radar around the time of the Second World War. They are not primarily interested in gaining an understanding. No, it is all about the glory. Ostensibly, the goal is still to gain the understanding, and the reward of fame and glory comes for achieving it. However, when the reward and the goal are not one and the same thing, it is always possible to reap the reward without achieving the goal. This is something I call rewardism.

For the nerds, the understanding itself is the reward. Anything less is simply not good enough. Sure, it is good to receive recognition, but that is not the reason for getting up in the morning.

So, the more I think about the situation in fundamental physics, the more convinced I become that the lack of progress is at least partly due to the bloated egos of the people running the show there. There may still be some nerds actively trying to figure out how nature works, but they are marginalized to the point of being totally ignored. Instead, we have all these people with their crazy predictions and unjustified inventions, and it has reached the point where they even consider dispensing with the scientific method itself.

I don’t see how this will ever change. Perhaps several generations need to pass to weed out the jocks by depriving them of the fame and glory that they were hoping for. Then the nerds can come back and pick up where they left off. Who knows? I won’t be around by then.
