Transcending the impasse, part VIII

… or not?

In this final posting in the series on transcending the impasse in fundamental physics, we need to consider the possibility that we may never be able to transcend it. Perhaps this is it as far as our scientific understanding of fundamental physics is concerned. Perhaps our ability to probe deeper into the unknown ends here.

Why would that be? Perhaps the theory that correctly explains what happens above the electroweak scale would need to be tested against observations at an energy scale too high to reach with any conceivable collider. Without such observations, the theory would remain a hypothesis and never become part of our scientifically established knowledge.

It seems that collider physics has run its course. The contributions to our scientific knowledge made with the aid of colliders are truly remarkable. But, at increasingly higher energies, collider physics runs into a number of serious challenges. At such high energies, a collider needs to be very large and extremely expensive. As a result, it becomes impractical and financially unjustifiable.

Even if such a large, expensive collider does become a reality, the challenges do not end there. The scattering events produced in such a collider become increasingly complex. Already at the Large Hadron Collider, the scattering events look more like the hair on a drag queen’s wig. The amount of data produced in such events is formidable, and the rate at which the data is generated becomes unmanageable.

Even if one can handle that much data, one finds that the signal is swamped by background noise. At those high energies, the particles produced are more unstable, which means that their resonance peaks are very broad and relatively low. It becomes that much harder to see a new particle popping up in the scattering data.

There are suggestions for how scientific observations could support high-energy physics without the use of colliders. One such suggestion is based on astronomical observations, since some astronomical events generate very high energies. However, such events are unpredictable, and the information that can be extracted from them is very limited compared to what is possible with the detectors of colliders.

Another suggestion is to use high-precision measurements at lower energies. It becomes a metrology challenge to measure properties of matter ever more accurately and to use the results to infer what happens at high energies.

Whether any of these suggestions will eventually be able to increase our knowledge of fundamental physics remains to be seen. But I would not be holding my breath.

Perhaps this sounds like that old story about the 19th-century physicists who predicted the end of physics even before the discoveries of relativity and quantum mechanics. Well, I think the idea of a steady increase in our physical understanding continuing in perpetuity is equally ludicrous. At some point, we will see a slow-down in the growth of our understanding of fundamental physics, and even of physics in general. However, applied physics and engineering can proceed unabated.

We are already seeing a slow-down in the growth of our understanding of fundamental physics. Many fields of physics are already mostly devoted to applied physics; very little is added in terms of new fundamental understanding of our physical universe. So, perhaps the impasse is simply an inevitable stage in the development of human culture, heralding the maturity of our knowledge about the universe in which we live.


Transcending the impasse, part VII

Vanity in physics

In this penultimate posting in the series on transcending the impasse in fundamental physics, I’ll address an issue that I consider to be one of the major reasons for the impasse, if not the main reason. It is a topic that I feel very passionate about and one that I’ve written about in my book. It is a very broad topic with various aspects that can be addressed. So, I can see this topic becoming a spin-off series on its own.

Stating it briefly, without ranting too much, one can bring this issue into the context of the scientific method itself. As remarkable as the scientific method is with all the successes associated with it, if the very foundation on which it is based starts to erode, the whole edifice in all its glory will come tumbling down.

Now what is this foundation of the scientific method that could be eroded away? Well, the scientific method shares with capitalism and democracy the property of being a self-regulating feedback system. Each of these mechanisms is based on a driving force found in human nature that makes it work. For democracy, it is people’s reaction to the conditions that the authorities impose on them. For capitalism, it is basically greed and the need for material possessions. For the scientific method, it is curiosity and the need for knowledge and understanding.

So, the basic assumption is that those involved in the scientific process, the scientists, are driven by their curiosity. This has largely been the case for centuries, and it is thanks to this curiosity that we have the scientific knowledge accumulated through this process.

However, during the past century, things started to change. At some point, due to some key event or perhaps as a result of various minor events, the fundamental driving force for scientists started to change. Instead of being internally motivated by their curiosity, they became externally motivated by … vanity!

Today, one gets the impression that researchers are far more concerned about their egos than about the knowledge they create. To support this statement, I could provide numerous examples. But instead of doing that, I’ll focus on only one aspect: how this vanity issue causes and sustains the current impasse. Perhaps I’ll provide and discuss those examples in follow-up posts.

In the aftermath of the disappointing lack of results from the Large Hadron Collider (LHC), some people blamed prominent researchers for their ludicrously exotic proposals and predictions, none of which survived the observations of the LHC.

Why would highly respected physicists make such ludicrous predictions? The way I see it, it was a gamble with high stakes. Chances were that these predictions would not pan out. But if one of them did receive confirmation from the LHC, the return on investment would have been extremely high. The person who made the prediction would have become famous not only among physicists, but probably also among the general public. It would probably have ensured a Nobel Prize. Hence, all the cravings of vanity would have been satisfied instantly.

What about knowledge? Surely, if a prediction turned out to be correct, it would imply a significant increase in our knowledge. True, but now one should look at the reality. None of these exotic predictions succeeded. This situation is not really surprising, probably not even to the people who made the predictions, because they presumably knew that the probability of success was extremely low. In that context, the motivation for making the predictions was never about an increase in knowledge. It was purely aimed at vanity.

An extreme example is one physicist, who shall remain unnamed. He is known for making random predictions at a remarkable rate. It is obvious to everybody that he is not making these predictions because he expects them to work out. It is simply an attempt to be the first to have made a specific prediction, in the off-chance that one of them comes true. Then, presumably, he would receive all the vanity rewards that he so desperately craves.

It might have been amusing, were it not for the fact that this deplorable situation is adversely affecting progress in physics, and probably in science in general, although I don’t have such extensive experience in other fields of science. The observable effect in fundamental physics is a significant slowdown in progress that is stretching over several decades.


Transcending the impasse, part VI

A little bit of meta-physics

Anyone who has read some of my previous posts may know that I’m not a big fan of philosophy. However, I admit that philosophy can sometimes have some benefits. It occurs to me that, if we want to transcend the impasse in fundamental physics, we may need to take a step back, stand outside the realm of science, and view our activities a bit more critically.

Yeah well flippiefanus, what do you think all the philosophers of science are doing? OK, maybe I’m not going to be jumping so deeply into the fray. Only a tiny little step, just enough to say something about the meta-physics of those aspects most pertinent to the problem.

So what is most pertinent to the problem? Someone said that we need to go back and make sure that we sort out the mistakes and misconceptions. That idea resonates with me. However, given the diverse nature of humans, that will inevitably happen anyway. The problem is that if somebody finds something that seems incorrect in our current understanding, it is generally very difficult to convince people that it needs to be corrected.

What I want to propose here is a slightly different approach. We need to get rid of the clutter.

Clutter in our theory space

There is such a large amount of clutter in our way of looking at the physical world. Much of this clutter is a kind of curtain that we use to hide our ignorance behind. I guess it is human to try to hide one’s ignorance, and what better way to do that than by dumping a lot of befuddling nonsense over it.

Take for instance quantum mechanics. One often hears about quantum weirdness or the statement that nobody can really understand quantum physics. This mystery that anything quantum represents is one such curtain that people draw over their ignorance. I don’t think that it is impossible to understand quantum mechanics. It is just that we don’t like what we learn.

So what I propose is a minimalist approach. The idea is to identify the core of our understanding of a phenomenon and to put everything else in the proper perspective without cluttering it with nonsense. The idea of minimalism resonates with Occam’s razor, which states that the simplest explanation is probably the correct one.

To support the idea of minimalism in physics, we can remind ourselves that scientific theories are constructs that we compile in our minds to help us make sense of the physical world. One should be wary of confusing the two. That opens up the possibility that there may always be multiple theoretical constructs that successfully describe the same physical phenomena. Minimalism tells us to look for the simplest one among them. Those that are more complicated may contain unnecessary clutter that will inevitably just confuse us later.

To give a concrete example of this situation, we can think of the current so-called measurement problem. Previously, I explained that one can avoid any issues related to the measurement problem and the enigma of quantum collapse by adopting the many-worlds interpretation. This choice enforces the principle of minimalism by selecting the simplest interpretation. Thereby, we get rid of the unnecessary clutter of quantum collapse.

This example is somewhat beyond science, because the interpretations of quantum mechanics are not (currently?) a scientific topic. However, there are other examples where we can also apply the minimalist principle. Perhaps I’ll write about that some other day.


Transcending the impasse, part V

Beauty as a guiding principle

Proceeding with the series on transcending the impasse in fundamental physics, I’d like to address some of the issues that have been proposed as reasons for the current impasse. One such issue is the method by which theorists come up with their theories in fundamental physics. Sabine Hossenfelder, for example, feels strongly that one should not use beauty in the mathematics as a guide to what could be a potential theoretical explanation for fundamental phenomena.

What am I talking about? Perhaps the idea that beauty can have anything to do with fundamental physics sounds ridiculous anyway. Well, beauty, as they say, lies in the eyes of the beholder. To a theoretical physicist, the notion of beauty may refer to a different experience than it does to an artist or a lover. Salient aspects of beauty that would be relevant for all those who experience it may include things like symmetry, balance, consistency, and so on.

However, it is not my intention here to philosophize about beauty and what it is. The fact of the matter is that physicists do sometimes use their notion of beauty to guide them in how they construct their theories, or in what they consider to be the correct theory. One example that springs to mind is Paul Dirac’s relativistic equation for the electron. It is said that Dirac was guided in its derivation by the beauty of the mathematics.

Paul Dirac, who apparently used beauty as a guide to derive the relativistic electron equation

The issue of whether one should use beauty, or for that matter anything else, as a guide in the construction of fundamental theories reveals a deeper issue at stake here. First, we need to identify a difference between fundamental theoretical physics and other fields of physics. I hasten to add that this is not to be interpreted as a distinction between what is inferior and what is superior.

Other fields of physics usually have some underlying, scientifically established physical theory in terms of which investigations are (or can be) done. For example, in classical optics, the fundamental theory is electromagnetism. If all else fails, one can always start with Maxwell’s equations and derive the theoretical description of a phenomenon rigorously from them. If the phenomenon includes quantum effects, one may need to fall back on quantum electrodynamics (QED) for this purpose.

In fundamental physics, one does not have this luxury. In most cases, one is lucky to have some experimental results to work with. Sometimes, the only guide is a nagging feeling that the current theories are not adequate. This is the case with quantum gravity. There are some conceptual arguments why general relativity cannot explain everything, but there are no experimental observations showing that something is missing.

How does one approach such a problem? One needs some form of inspiration. Different people tend to use different forms of inspiration. Some use the beauty in mathematics as their inspiration. Perhaps too many theorists have done that and ended up with unsuccessful theories. Hence, the reaction against it.

The point is, we need to remember what it takes to arrive at a scientifically established physical theory. Regardless of what method, form of inspiration, or guiding principle one uses, the resulting theory can only become a scientific theory once it has survived experimental testing. In other words, the theory must be able to make predictions that can be compared with actual observations and then be shown to agree with those observations.

So, in the end, whatever method theorists use to produce their theories is of no consequence, as long as it can succeed as a scientific theory. To put restrictions on the guiding principles, be it beauty or whatever else, makes no sense. Instead, one should allow the diversity of perspectives and freedom in thought to come up with potential theoretical explanations, and leave it to the rigors of the scientific method to sort out the successful theoretical descriptions from those that are to be discarded.

I do not believe that the use of beauty as a guiding principle is responsible for the current impasse in fundamental physics. That dubious honor belongs to a much more inimical phenomenon. But that is a topic for another day.


Transcending the impasse, part IV

Planck’s constant

It all started with the work of Max Planck. He famously introduced the notion that the energy absorbed or emitted during an interaction is proportional to the frequency of the field being absorbed or emitted. The proportionality constant h is today considered a fundamental constant of nature. In honor of Max Planck, it is called Planck’s constant.

Max Planck, the father of quantum mechanics

The reason why we need to look at Planck’s constant for transcending the impasse in physics is that there seems to be some confusion about the role it plays in quantum mechanics. The confusion manifests in two aspects of quantum mechanics.

One of these aspects is related to the transition from quantum to classical physics, which we have considered before. It is assumed that one should recover classical physics from quantum physics by simply taking the limit where Planck’s constant goes to zero. Although this assumption is reasonable, it depends on where the constant shows up. One may think that the presence of Planck’s constant in expressions should be unambiguous. That turns out not to be the case.

An example is the commutation relation for spin operators. Often one finds that the commutator produces the spin operators multiplied by Planck’s constant. According to this practice, the limit where Planck’s constant goes to zero would imply that spin operators must commute in the classical theory, which is obviously not correct. Spin operators are the generators of three-dimensional rotations, which obey the same algebraic structure in classical theories as they do in quantum theories.
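As a quick sanity check of the point above, here is a small sketch of my own (in units where ħ = 1, using spin-1/2 as the smallest example): the rotation algebra closes with no factor of Planck’s constant in sight, so any ħ that appears in the commutator is just a choice of units for spin.

```python
import numpy as np

# Spin-1/2 operators in units where hbar = 1 (half the Pauli matrices).
sx = np.array([[0, 1], [1, 0]], dtype=complex) / 2
sy = np.array([[0, -1j], [1j, 0]], dtype=complex) / 2
sz = np.array([[1, 0], [0, -1]], dtype=complex) / 2

def comm(a, b):
    return a @ b - b @ a

# The rotation algebra [S_x, S_y] = i S_z closes with no factor of hbar;
# writing [S_x, S_y] = i*hbar*S_z merely reflects a choice to measure
# spin in units of hbar, so taking hbar -> 0 says nothing about rotations.
print(np.allclose(comm(sx, sy), 1j * sz))  # True
```

The same check passes for the cyclic permutations, which is the full algebra of three-dimensional rotations.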

So when should there be a factor of Planck’s constant and when not? Perhaps a simple way to see it is that, if one finds that a redefinition of the quantities in an expression can be used to remove Planck’s constant from that expression, then it should not be there in the first place.

Using this approach, one can consider what happens in the Hamiltonian or Lagrangian of a theory. Remember that both of these are divided by Planck’s constant, in the unitary evolution operator and the path integral, respectively. One also finds that the quantization of the fields in these theories always contains a factor of the square root of Planck’s constant. If we pull this factor out of the definition of the fields and make it explicit in the expression for the theory, we find that Planck’s constant cancels for all the free-field terms (the kinetic term and the mass term). The only terms in either the Hamiltonian or the Lagrangian where Planck’s constant remains are the interaction terms. This brings us full circle to the reason why Max Planck introduced the constant in the first place: Planck’s constant is specifically associated with interactions.
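This rescaling argument can be checked symbolically. The sketch below is my own toy illustration, using a schematic scalar field with a φ⁴ interaction (the symbols `dphi`/`dchi` stand in for the field gradient); after absorbing √ħ into the field, ħ survives only in the interaction term.

```python
import sympy as sp

hbar, m, lam = sp.symbols('hbar m lambda', positive=True)
phi, dphi = sp.symbols('phi dphi')    # the field and a stand-in for its gradient
chi, dchi = sp.symbols('chi dchi')    # the rescaled field

# Lagrangian density divided by hbar, as it appears in the path integral,
# for a schematic scalar field with a phi^4 interaction.
L_over_hbar = (sp.Rational(1, 2) * dphi**2
               - sp.Rational(1, 2) * m**2 * phi**2
               - lam / 24 * phi**4) / hbar

# Absorb sqrt(hbar) into the field definition: phi -> sqrt(hbar) * chi.
rescaled = sp.expand(L_over_hbar.subs({phi: sp.sqrt(hbar) * chi,
                                       dphi: sp.sqrt(hbar) * dchi}))
print(rescaled)
# The free-field (kinetic and mass) terms come out independent of hbar,
# while the interaction term keeps a single factor of hbar.
```

The printed expression is dchi²/2 − m²·chi²/2 − ħ·λ·chi⁴/24, in line with the claim that ħ attaches itself to interactions only.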

So if one sets Planck’s constant to zero in a theory, the result is that it removes all the interactions. It leads to a free-field theory without interactions, which is indistinguishable from a classical theory. Interactions are responsible for the changes in the number of particles, and that is where all the quantum effects that we observe come from.

The other confusion about Planck’s constant is related to the uncertainty principle. Here, the role that Planck’s constant plays is to relate two quantities: on the one hand, the conjugate variable on phase space, and on the other hand, the Fourier variable. Without this relationship, one still recovers the same uncertainty relationships between Fourier variables in classical theories, but not between conjugate variables in phase space. Planck’s relationship transfers the uncertainty relationship between Fourier variables to conjugate variables on phase space. So, the uncertainty relationship is not a fundamental quantum mechanical principle. No, it is the Planck relationship that deserves that honor.
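To illustrate the point numerically, here is a toy computation of my own: the width product of a classical Gaussian profile and its Fourier spectrum comes out as Δx·Δk = 1/2, a purely classical statement about Fourier pairs; it is only Planck’s relation p = ħk that converts it into the quantum relation Δx·Δp = ħ/2.

```python
import numpy as np

# A classical Gaussian field profile and its Fourier spectrum.
x = np.linspace(-20, 20, 4001)
dstep = x[1] - x[0]
psi = np.exp(-x**2 / 2)

def width(u, f, step):
    # Standard deviation of the normalized intensity |f|^2.
    w = np.abs(f)**2
    w = w / (w.sum() * step)
    mean = (u * w).sum() * step
    return np.sqrt(((u - mean)**2 * w).sum() * step)

k = np.linspace(-20, 20, 4001)
# Fourier transform evaluated by direct numerical integration.
psi_k = np.array([(psi * np.exp(-1j * kk * x)).sum() * dstep for kk in k])

dx = width(x, psi, dstep)
dk = width(k, psi_k, dstep)
print(dx * dk)  # close to 0.5: the classical Fourier uncertainty bound
```

Nothing quantum enters this calculation; multiplying Δk by ħ to get Δp is where Planck’s relationship does its work.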


Transcending the impasse, part III

Many-worlds interpretation

In my series on the impasse in physics and how to transcend it, I previously discussed the issue of classical vs quantum physics. Here, I want to talk about the interpretations of quantum mechanics.

There is much activity and debate on these interpretations. Part of it is related to the measurement problem. Is there such a thing as quantum collapse? How does it work?

David Mermin once said in an article in Physics Today that new interpretations are added every year and none has ever been ruled out. If this is true, then it indicates that the interpretations of quantum mechanics are not part of science, and therefore also not part of physics.

I am not going to say one should not work on such interpretations and try to make sense of what is going on, but the scientific method does not seem to help us here. Perhaps people will eventually come up with experiments to determine how nature works. I’ve seen some proposals, but they are usually associated with some new mechanisms, which in my view are unlikely to be correct.

It occurs to me that while we cannot say which of the interpretations is correct, we may just as well pick one and work with that. So I pick the simplest one, and when I want to figure out how things will work out in one of these experiments, I can just consider how things will work according to this interpretation. If such a prediction turns out to be wrong, it would show that this interpretation (and all those that made the same prediction) is wrong after all.

The simplest interpretation, according to me, is the many-worlds interpretation. It is simple because it does not require the weird, unexplained notion of quantum collapse. People don’t like it, because it seems to require such a large number of different worlds. For that reason, it is also associated with the idea of a multiverse.

Hugh Everett III, the person who invented the many-worlds interpretation

Well no, those ideas are anyway misleading. In quantum mechanics, all interactions are described by unitary evolution. The picture that it represents is that there is a set of states that the universe can take on. One can think of each such state as a different description of the world. Hence “many worlds.” However, the actual state of the universe is a quantum superposition of all the possible worlds. In the superposition each world is associated with a complex probability amplitude. It means that some worlds are more likely than others. During interactions these probability amplitudes change.

That is the whole idea of unitary evolution. All the possibilities are already present right from the start. The only thing that interactions do is change the probability amplitudes that are associated with the different worlds. During the evolution in time, the different worlds in the superposition can experience constructive or destructive interference, which changes their probability amplitudes, making some less or more likely than they were before.

The number of worlds (the number of terms in the superposition) stays the same. They don’t increase as a result of interactions. How many such worlds are there? Well, if we look at the properties of the set of such basis states, the number is often assumed to be countably infinite. However, it may turn out to be uncountably infinite, having what is called the cardinality of the continuum.
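A deliberately tiny sketch of my own, with just two “worlds,” shows both claims above: a unitary interaction redistributes the amplitudes through interference, but the number of basis terms and the total probability stay fixed.

```python
import numpy as np

# Two "worlds" as basis states; the toy universe is a superposition
# with complex probability amplitudes.
state = np.array([1, 1], dtype=complex) / np.sqrt(2)

# A unitary interaction (here a Hadamard-like rotation) only redistributes
# the amplitudes via interference; it cannot change the number of basis
# terms or the total probability.
U = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)
state = U @ state

print(np.abs(state)**2)  # [1. 0.]: destructive interference has made
                         # the second world (for now) impossible
```

Both worlds are still there as basis states; only the amplitude of the second has been driven to zero by destructive interference, and a further interaction could revive it.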

What is more, these different worlds are not distinct, unique worlds. One can redefine the basis set of worlds by forming different superpositions of the worlds in the original set to get a new set in which the worlds now look different.

How does all this relate to what we see? The dynamics of the universe causes the interferences due to the unitary evolution to favor a small set of worlds that look very similar. This coherence in what the world looks like is a result of the constructive interference produced by the dynamics.

So the world that we see at a macroscopic level is not just one of these worlds. It is, in a sense, a conglomeration of all those worlds with large probability amplitudes. However, the differences among all these worlds are so small that we cannot notice it at a macroscopic level.

OK, not everything I said here can be confirmed in a scientific way. I cannot even prove that the many-worlds interpretation is correct. However, by thinking of it in this way, one can at least form an idea of it that makes sense.


Transcending the impasse, part II

Classical vs quantum

It is a strange thing. Why the obsession with something that in the end comes down to a rather artificial distinction. Nature is the way it is. There is no dualism in nature. The distinction we make between classical and quantum is just an artifact of the theoretical model we build to understand nature. Or is it?

Well there is a history. It started with Einstein’s skepticism about quantum mechanics. Together with some co-workers, he eventually came up with a very good argument to justify the idea that quantum mechanics must be incomplete. At least, it seemed like a good argument until it was eventually shown to be wrong. It was found that the idea that quantum mechanics is incomplete and needs some extra hidden variables does not agree with experimental observations. The obsession with the distinction between what is classical and what is quantum is a remnant of this debate that originated with Einstein.

Today, we have a very successful formalism, which is simply called quantum mechanics, and can be used to model quantum phenomena. Strictly speaking, there are different versions of the quantum mechanics formalism, but they are all equivalent. The choice of specific formalism is usually based on convenience and personal taste.

Though Einstein’s issues with quantum mechanics may have been resolved, the mystery of what it really means remains. Therefore, many people are trying to probe deeper to find out why quantum mechanics works the way it does. However, despite all the probing, nothing seems to be discovered that disagrees with the quantum mechanics formalism, which is by now almost a hundred years old. The strange concepts, such as entanglement, discord, and contextuality, that have been distilled from quantum physics, turn out to be aspects that are already built into the quantum mechanics formalism. So, in effect all the probing merely comes down to an attempt to understand the implications of the formalism. We do not uncover any new physics.

But now a new understanding is rearing its ugly head. It turns out that the quantum mechanics formalism is not only successful in situations where we are clearly dealing with quantum physics. It is equally successful in situations where the physical phenomena are clearly classical. The consequence is that many of the so-called quintessential quantum properties are actually properties of the formalism, and they are for that reason also present when one applies the formalism to classical scenarios.

I’ll give two examples. The first is the celebrated concept of entanglement. It has been shown that the non-separability which signals entanglement is also present in classical optical fields. The difference is that, in classical fields, it is restricted to local properties and cannot be separated over a distance as in the quantum case. This classical non-separability displays many of the features that were traditionally associated with quantum entanglement. Many people now impose a dogmatic restriction on the use of the term entanglement, reserving it for those cases where it is clearly associated with quantum phenomena.
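To make the idea of classical non-separability concrete, here is a small sketch of my own (the mode labels are illustrative assumptions, not from the text): it tests whether a two-degree-of-freedom classical field, polarization combined with spatial mode, factorizes, by looking at the rank of its coefficient matrix.

```python
import numpy as np

# Coefficient matrix c[i, j] of a classical paraxial beam written as
# sum_ij c[i, j] (polarization_i) x (spatial_mode_j).  Think of H/V
# polarization and two Hermite-Gauss modes, HG10 and HG01.
c_vector = np.array([[1.0, 0.0], [0.0, 1.0]]) / np.sqrt(2)  # e.g. H*HG10 + V*HG01
c_product = np.array([[1.0, 1.0], [1.0, 1.0]]) / 2          # (H+V)*(HG10+HG01)

def schmidt_rank(c):
    # One non-negligible singular value means the field factorizes into
    # (polarization) x (spatial mode); more than one means the field is
    # non-separable in these two degrees of freedom.
    return int(np.sum(np.linalg.svd(c, compute_uv=False) > 1e-12))

print(schmidt_rank(c_vector), schmidt_rank(c_product))  # 2 1
```

The first beam is non-separable in exactly the formal sense used for entanglement, yet both degrees of freedom belong to one classical beam at one location, which is the point made above.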

It does not serve the scientific community well to be dogmatic. It reminds us of the dogmatism that prevailed shortly after the advent of quantum mechanics. For a long while, any questioning of this dogma was simply not tolerated. It has led to a stagnation in progress in the understanding of quantum physics. Eventually, through the work of dissidents such as J. S. Bell, this stagnation was overthrown.

The other example is where certain properties of quasi-probability distributions are used as an indication of the quantum nature of a state. For instance, in the case of the Wigner distribution, the presence of negative values in the function is used as such an indication of its quantum nature. But nothing prevents one from using the Wigner distribution for classical fields. One can, for instance, consider the mode profiles of classical optical beams. Some of these mode profiles produce Wigner distributions that take on negative values at certain points. Obviously, it would be misleading to use this as an indication of a quantum nature. So, to avoid this situation, one needs to impose the dogmatic restriction that this indication can only be used in those cases where the Wigner distribution is computed for a quantum state. But then the indication becomes somewhat circular, doesn’t it?
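As a sketch of this point (my own toy computation, in dimensionless units), the Wigner distribution of the first Hermite-Gauss mode profile, a perfectly classical beam shape, is negative at the origin.

```python
import numpy as np

# Wigner distribution W(x, p) = (1/pi) * Int psi(x+y) psi(x-y) e^{2ipy} dy
# for the first Hermite-Gauss mode profile,
# psi(x) = sqrt(2) * x * exp(-x^2/2) / pi^(1/4), a classical beam shape.
y = np.linspace(-10, 10, 2001)
ystep = y[1] - y[0]

def psi(x):
    return np.sqrt(2) * x * np.exp(-x**2 / 2) / np.pi**0.25

def wigner(x, p):
    # Direct numerical integration of the Wigner integral (psi is real).
    integrand = psi(x + y) * psi(x - y) * np.exp(2j * p * y)
    return np.real(integrand.sum() * ystep) / np.pi

print(wigner(0.0, 0.0))  # approximately -1/pi, i.e. about -0.318
```

The negative value appears even though nothing quantum went into the calculation, which is exactly why negativity alone cannot certify a quantum nature.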

It occurs to me that the fact that we can use the quantum mechanics formalism in classical scenarios provides us with an opportunity to question our understanding of what it truly means to be quantum. What are the fundamental properties of nature that unambiguously identify a scenario as a quantum phenomenon? Through a process of elimination, we may be able to arrive at such unambiguous properties. That may help us see that the difference between the quantum nature of things and the classical nature of things is perhaps not as big as we thought.
