Guiding principles I: substructure

Usually the principles of physics are derived from successful scientific theories. For instance, Lorentz invariance, which can be seen as the underlying principle on which special relativity is based, was originally derived from Maxwell’s equations. As we learn more about the universe and how it works, we discover more principles. These principles serve to constrain any new theories that we try to formulate to describe what we don’t yet understand.

It turns out that the physics principles we have uncovered so far don’t seem to constrain theories enough. There are still vastly different ways to formulate new theories. So we need to do something that is very dangerous: we need to guess some additional physics principles to guide us in the formulation of such new theories. Chances are that any random guess would send us down a random path in theory space with very little chance of being the right one. An example is string theory, where the random guess was that the fundamental objects are strings. It has kept a vast number of researchers busy for decades without success.

Instead of making a random guess, we can try to see whether our existing theories already give us some hints about what such a guiding principle should be. So, I’ll share my thoughts on this for what they are worth. I’ll start with what our current theories tell us about substructure.

The notion of a substructure can already be identified in the work of Huygens, Fresnel and others on interference. That work revealed that light is a wave. The physical quantity that is observed is the intensity, which is always positive. However, we need to break the intensity apart into amplitudes that can take negative values to allow destructive interference. In this very simple sense, the amplitude (which is often modeled as a complex-valued function) serves as a substructure for that which is observed.
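To make this concrete, here is the textbook relation between intensity and amplitudes for two superposed contributions (a standard identity, not tied to any particular experiment):

$$I = |A_1 + A_2|^2 = |A_1|^2 + |A_2|^2 + 2\,\mathrm{Re}\!\left(A_1 A_2^*\right).$$

The cross term can be negative, and for $A_2 = -A_1$ the total intensity vanishes entirely, even though the individual intensities $|A_1|^2$ and $|A_2|^2$ are both positive. The cancellation lives entirely in the amplitude substructure.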

[Figure: Interference]

It is not a big leap from interference in classical light to interference in quantum systems. Here the observation is interpreted as a probability, which is also a positive quantity. In quantum mechanics, the notion of a probability is given a substructure in the form of a probability amplitude, which can be negative (or complex) to allow interference phenomena.
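As a small numerical sketch (my own toy example, assuming a balanced two-path setup with an amplitude of 1/2 per path, so that the probabilities at the two outputs sum to one), the detection probability comes from squaring a sum of complex probability amplitudes:

```python
import numpy as np

def p_detect(phase):
    """Detection probability at one output of a balanced two-path setup."""
    a1 = 0.5                          # amplitude for path 1
    a2 = 0.5 * np.exp(1j * phase)     # amplitude for path 2, with a relative phase
    return abs(a1 + a2) ** 2          # probability = |sum of amplitudes|^2

print(p_detect(0.0))        # 1.0 -- constructive interference
print(p_detect(np.pi))      # 0.0 -- destructive: the amplitudes cancel exactly
print(p_detect(np.pi / 2))  # 0.5 -- intermediate case
```

The probability itself is always positive; the cancellation happens one level down, in the amplitudes.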

The concept of a substructure is today perhaps mostly associated with the notion of constituent particles. We now know that the proton is not a fundamental particle, but that it has a substructure consisting of fundamental particles called quarks, bound together via the strong force. Although it is not currently believed to be the case, these quarks may also have some substructure. However, that substructure may take a different form from the one we find in protons.

A new idea that is emerging is that spacetime itself may have a substructure. Ever since the advent of general relativity, we have known that spacetime is affected by gravity. In our current formulation of particle physics, spacetime is the backdrop on which all the particle fields perform their dance. But when gravity is added, spacetime joins the dance. That makes the formulation of fundamental theories very complicated: the difference between the particles and spacetime becomes blurred. This suggests that spacetime itself may have a substructure, which would combine the two different ways to look at substructure. On the one hand, spacetime may be divided into two parts, perhaps to separate chirality, much in the way intensity separates into an amplitude and its complex conjugate. On the other hand, the separation of spacetime may give some substructure to the particle fields, which would then be described in terms of fluctuations in spacetime’s substructure.

Caution is necessary here. Even if these ideas turn out to be valid, they still leave much detail unspecified. It may not be enough to regard the idea of substructure as a physics principle. The important thing is to keep to the standard practice in physics: mathematics is merely used to formulate and model the physical universe. It does not tell us something new about the universe unless that is somehow already logically encoded in what we start off with.

Perhaps an example would help to explain what I mean. Einstein formulated general relativity (GR) after he figured out the equivalence principle. So everything that we can learn from GR follows as an inevitable logical consequence of this principle. It tells us that the mass-energy distribution curves spacetime, but it does not tell us how this happens. In other words, the mechanism by which mass curves spacetime is not known, because it is not a logical consequence of the equivalence principle.

So, the idea is to come up with a general mathematical formalism that is powerful enough to model this kind of scenario without trying to dictate the physics. Remember, quantum field theory is a formalism in terms of which different models for the dynamics of particle physics can be formulated. It does not dictate the dynamics but allows anything to be modeled. Another example is differential geometry, which allows the formulation of GR but does not dictate it. Part of the reason why string theory fails is that it is a mathematical formulation that also dictates the dynamics. The formulation of a quantum theory of gravity requires a flexible formalism that does not dictate the dynamics.

In defense of particle physics experiments

As a theorist, I may have misled some people into thinking that I don’t care much for experimental work. In particle physics, there tends to be a clear separation between theorists and experimentalists, with the phenomenologists sitting in between. Other fields of physics don’t have such sharp separations. However, most physicists lean toward one of the two.

Physics is a science. As such, it follows the scientific method. That implies that both theory and experiment are important. In fact, they are absolutely essential!

There are people who advocate not only the suspension of experimental work in particle physics, but even a change in the methodology of particle physics. What methodology in particle physics needs to be changed? Hopefully nothing related to the scientific method! To maintain the scientific method in particle physics, people need to keep on doing particle physics experiments.

[Figure: CMS detector at the LHC]

There was a time when I also thought that the extreme expense of particle physics experiments was not justified by the results obtained from the Large Hadron Collider (LHC). However, as somebody explained to me, the results of the LHC are not so insignificant. If you think about it, the “lack of results” is a consequence of the bad theories that the theorists came up with. So by stopping the experimental work due to this “lack of results,” you would be punishing the experimentalists for the bad work of the theorists. More importantly, the experimentalists are doing precisely what they should be doing in support of the scientific method: ruling out the nonsense theories that the theorists came up with. I think they’ve done more than just that. Hopefully, the theorists will do better in future, so that the experimentalists can obtain more positive results.

I should also mention the experimental work that is currently being done on neutrinos. It is a part of particle physics that we still do not understand well. These results may open the door for significant improvements in our theoretical understanding of particle physics.

So, please keep on doing experimental work in particle physics. If there are any methodological changes needed in particle physics, they are limited to the way theorists are doing their work.

A post mortem for string theory

So string theory is dead. But why? What went wrong to cause its demise? Or, more importantly, why did it not succeed?

We don’t remember the theories that did not succeed. Perhaps we remember those that were around for a long time before they were shown to be wrong, like Newton’s corpuscular theory of light or Ptolemy’s epicycles. Some theories that unsuccessfully tried to explain things we still don’t understand are also remembered, like the different models for grand unification. But all those different models that people proposed for the electroweak theory are gone. We only remember the successful one, which is now part of the standard model.

Feynman said at some point that he did not like to read the literature on theories that could not explain something successfully, because it might mislead him. However, I think we can learn something generic about how to approach challenges in our fundamental understanding by looking at the unsuccessful attempts. It is important not to be deceived by the seductive ideas of such failed attempts, but to scrutinize them for their flaws and learn from that.

Previously, I have emphasized the importance of a guiding principle for our endeavors to understand the fundamental aspects of our universe. I believe that one of the reasons why string theory failed is that it has a flawed guiding principle. It is based on the idea that, instead of particles, the universe is made up of strings. Since strings are extended objects with a certain scale (the Planck scale), they provide a natural cut-off, removing those pesky infinities.

The problem is, when you invent something to replace something else, you presuppose that there was something that needed replacing. In other words, did we need particles in the first place? The answer is no. Quantum field theory, which is the formalism in terms of which the successful standard model is formulated, does not impose the existence of particles. It merely requires localized interactions.

But what about the justification for extended objects based on getting rid of the infinities? I’ve written about these infinities before and explained that they are to be expected in any realistic formulation of fundamental physics and that some contrivance to get rid of them does not make sense.

So, the demise of a theory based on a flawed guiding principle is not surprising. What we learn from this post mortem is that it is important to be very careful when we impose guiding principles. Although such principles are not scientifically testable, the notions on which we base such principles should be.

In memoriam: string theory

Somebody once explained that when a theory is shown to be wrong, its proponents will keep on believing in it. It is only when they pass away that the younger generation can move on.

None of this applies to string theory. For a theory to be shown to be wrong, there must be something definite to test. The mathematical construct that is currently associated with string theory is not in any form that can be subjected to scientific testing.

What was shown to be wrong is supersymmetry, which is a prerequisite for the currently favored version of string theory – superstring theory. (The non-supersymmetric version of string theory fell into disfavor decades ago.) The Large Hadron Collider did not see the particles predicted by supersymmetry. Well, to be honest, there is a small chance that it will see something in the third run, which has just started, but I get the feeling that people are not exactly holding their breath. I’m willing to say supersymmetry is dead, and therefore so is superstring theory.

Another reason why things are different with string theory is that its proponents found a way to extend the post mortem activity in string theory beyond their own careers. They get a younger generation of physicists addicted to it, so that this new generation of string theorists will go on working on it and popularizing it. What a horrible thing to do!

Why would the current string theorists mislead a younger generation of physicists to work on a failed idea? Legacy! Most of these current string theorists have spent their entire careers working on this topic. Some of them got very famous for it. Now they want to ensure that they are remembered for something that worked and not for something that failed. So it all comes down to vanity, which I’ve written about before.

String theory was already around when I was still a student several decades ago. I could have decided to pursue it as a field of study at that point. What would I have had to show for it now? Nothing! No accomplishments! A wasted career!

There was a time when you couldn’t get a position in a physics department unless you were a string theorist. As a result, there is a vast population of string theorists sitting in faculty positions. It is no wonder that they still maintain such a strong influence in physics even though the theory they work on is dead.

Those quirky fermions

All of the matter in the universe is made of fermions. For this reason, they are among the most abundant things in the universe. Fermions have been the topic of investigation for a long time, and we have learned much about them. However, what we do know about them is encapsulated in the formalisms with which we handle them in our theories. Does that mean we understand them?

Let’s think about the way we treat fermions in our theories. Basically, we represent them in terms of creation and annihilation operators, which are used to formulate the interactions in which they take part. These operators are distinguished from those for bosons by the anti-commutation relations that they obey.
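For reference, these are the standard textbook relations (with $\delta_{ij}$ non-zero only for operators carrying the same degrees of freedom). Fermionic operators anti-commute,

$$\{a_i, a_j^\dagger\} = \delta_{ij}, \qquad \{a_i^\dagger, a_j^\dagger\} = \{a_i, a_j\} = 0,$$

while bosonic operators obey commutation relations instead,

$$[b_i, b_j^\dagger] = \delta_{ij}, \qquad [b_i^\dagger, b_j^\dagger] = [b_i, b_j] = 0.$$

In particular, $a_i^\dagger a_j^\dagger = -a_j^\dagger a_i^\dagger$: swapping the order of two fermionic creation operators flips the sign of the state.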

To the uninitiated, all this must sound like a bunch of gobbledygook. What are the physical manifestations of all these operators? There are none! These operators are just mathematical entities in the formalism of our theories. Although these theories are quite successful, they do not reveal the physical machinery at work on the inside. Or do they?

Although a creation operator does not by itself represent any physical process, it distinguishes different scenarios with different arrangements of fermions. Starting with a given scenario, I can apply a fermion creation operator to produce a new scenario that contains one additional fermion. Then I can apply a creation operator again; provided that I am not trying to add another fermion with the same degrees of freedom, it will produce yet another new scenario.

Here is the strange thing. If I change the order in which I add the two additional fermions, I get a scenario that differs from the previous one by a sign. I can contrast this with the situation for bosons. Provided that I don’t try to add bosons with the same degrees of freedom, the order in which I add them doesn’t matter. What this tells us is that bosons with different degrees of freedom don’t affect each other. (We need to be careful about the concepts of time-like or space-like separations, but for the sake of this argument, we’ll assume all bosons or fermions are space-like separated.)
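Here is a small numerical sketch of this (my own toy two-mode construction, using the Jordan–Wigner representation of fermionic operators, with bosonic modes truncated to two levels for the comparison):

```python
import numpy as np

# Two fermionic modes via the Jordan-Wigner construction.
I2 = np.eye(2)
Z = np.array([[1.0, 0.0], [0.0, -1.0]])        # Pauli Z: the "string" enforcing anti-commutation
raise_op = np.array([[0.0, 0.0], [1.0, 0.0]])  # takes |0> to |1> on a single mode

c1_dag = np.kron(raise_op, I2)   # create a fermion in mode 1
c2_dag = np.kron(Z, raise_op)    # create a fermion in mode 2 (note the Z string)

vacuum = np.array([1.0, 0.0, 0.0, 0.0])   # the empty two-mode state |00>

state_12 = c2_dag @ c1_dag @ vacuum   # add to mode 1 first, then mode 2
state_21 = c1_dag @ c2_dag @ vacuum   # add to mode 2 first, then mode 1

print(state_12)   # [0, 0, 0, -1]: both modes occupied, with a minus sign
print(state_21)   # [0, 0, 0,  1]: same occupation, opposite sign
assert np.allclose(state_12, -state_21)

# Bosonic modes (truncated to two levels) carry no string, so the order is irrelevant.
b1_dag = np.kron(raise_op, I2)
b2_dag = np.kron(I2, raise_op)
assert np.allclose(b1_dag @ b2_dag @ vacuum, b2_dag @ b1_dag @ vacuum)
```

The two orderings produce the same occupation but opposite signs, which is exactly the anti-commutation property in action; the bosonic pair at the end shows no such sensitivity to ordering.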

The fact that the order in which we place fermions in our scenario (even when they are space-like separated) makes a difference tells us something physical about fermions. They must be global entities. The entire universe seems to “know” about the existence of each and every fermion in it.

How can that be possible? I can think of one way: topological defects. This is not a new idea. It pops up quite often in various fields of physics.

[Figure: Topological defect]

Why would a topological defect explain the apparent global nature of fermions? It is because all kinds of topological defects can be identified with the aid of an integral that computes the winding number of the defect. This type of integral is evaluated over a (hyper)surface that encloses the topological defect. In other words, it is the field values far away from the defect that are included in the integral, and not the field value at the defect itself. Therefore, knowledge about the defect is encoded in the entire field. This suggests that fermions can behave as global entities if they are topological defects. This is just a hypothesis. It needs more careful investigation.
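To make the winding-number idea concrete, here is a numerical sketch (my own toy example, assuming a planar vortex field $\phi(x, y) = x + iy$ with a defect at the origin). The winding number $w = \frac{1}{2\pi}\oint d\theta$ is computed entirely from the field's phase along a loop, never from the field value at the defect itself:

```python
import numpy as np

def winding_number(center, radius, n_points=1000):
    """Accumulate the phase change of phi = x + i*y around a circular loop."""
    t = np.linspace(0, 2 * np.pi, n_points, endpoint=False)
    x = center[0] + radius * np.cos(t)
    y = center[1] + radius * np.sin(t)
    phase = np.angle(x + 1j * y)                        # phase of the field on the loop
    dphase = np.diff(np.concatenate([phase, phase[:1]]))
    dphase = np.mod(dphase + np.pi, 2 * np.pi) - np.pi  # map each step into (-pi, pi]
    return dphase.sum() / (2 * np.pi)                   # total phase change / 2*pi

print(winding_number(center=(0, 0), radius=1.0))   # ~1: the loop encloses the defect
print(winding_number(center=(3, 0), radius=1.0))   # ~0: the defect lies outside the loop
```

A loop that encloses the defect returns $w \approx 1$ and one that does not returns $w \approx 0$, even though neither computation ever touches the defect point: the information about the defect is carried by the field far away from it, which is the global behavior suggested above.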