Non-commutation

It is widely believed that the non-commutation of operators is a characteristic property of quantum mechanics. So much so that axiomatic mathematical structures have been developed specifically to capture this non-commuting nature, in the hope of providing the ideal formalism in which quantum physics can be modeled.

Is quantum physics the exclusive scenario in which non-commuting operators are found? Is the non-commutative nature of these operators in quantum mechanics a fundamental property of nature?

No, one can also define operators in classical theories and find that they are non-commuting. And, no, this non-commuting property is not fundamental. It is a consequence of more fundamental properties.

To illustrate these statements, I’ll use a well-known classical theory: Fourier optics. It is a linear theory in which the propagation of a beam of light is represented in terms of an angular spectrum of plane waves. The angular spectrum is obtained by computing the two-dimensional Fourier transform of the complex function representing the optical beam profile on some transverse plane.

The general propagation direction of such a beam of light, which is the same thing as the expectation value of its momentum, can be calculated as the first moment of the angular spectrum. The equivalent first moment of the optical beam profile gives us the expectation value of the beam's position. Both of these calculations can be represented formally as operators, and these two operators do not commute. Therefore, the non-commutation of operators has nothing to do with quantum mechanics.
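This is easy to check numerically. The sketch below is a minimal one-dimensional illustration, not a full Fourier-optics simulation: it represents the position operator as multiplication by x, the momentum operator acting on the beam profile as -i d/dx, and evaluates their commutator on a Gaussian profile (the profile and the evaluation point are arbitrary choices).

```python
import cmath

h = 1e-5  # finite-difference step for the derivative

def f(x):
    # a Gaussian beam profile (illustrative choice)
    return cmath.exp(-x * x)

def D(g):
    # central-difference approximation of the derivative
    return lambda x: (g(x + h) - g(x - h)) / (2 * h)

def X(g):
    # "position" operator: multiply the profile by x
    return lambda x: x * g(x)

def P(g):
    # "momentum" operator on the profile: -i d/dx
    return lambda x: -1j * D(g)(x)

x0 = 0.7  # an arbitrary evaluation point
commutator = X(P(f))(x0) - P(X(f))(x0)
print(commutator / f(x0))  # approximately 1j, i.e. [X, P] = i
```

Nothing quantum entered this calculation; the commutator comes out of the purely classical description of the beam.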

So what is going on here? Two operators associated with quantities that are Fourier conjugate variables are inevitably non-commuting. The non-commuting property is therefore an inevitable result of Fourier theory. Quantum mechanics inherits this property because the Planck relation converts the phase-space variables, momentum and position, into Fourier conjugate variables.

So, is Fourier analysis then the fundamental property? Well, no. There is a more fundamental property. The reason why Fourier conjugate variables lead to non-commuting operators is that the bases associated with these conjugate variables are mutually unbiased.

We can again think of Fourier optics to understand this. The basis of the angular spectrum consists of the plane waves. The basis of the beam profile consists of the points on the transverse plane. Since a plane wave has the same amplitude at all points in space, its overlap with any point on the transverse plane gives a result with the same magnitude. Hence, these two bases are mutually unbiased.
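In a discretized sketch this is easy to verify. The toy example below (with N sample points standing in for the continuous transverse plane, an assumption made purely for illustration) checks that every vector of the discrete Fourier basis has the same overlap magnitude with every point of the position basis:

```python
import cmath
import math

N = 8  # number of sample points on the transverse plane (arbitrary choice)

def fourier_vec(k):
    # normalized discrete plane wave: e_k[n] = exp(2*pi*i*k*n/N) / sqrt(N)
    return [cmath.exp(2j * math.pi * k * n / N) / math.sqrt(N) for n in range(N)]

# The overlap of plane wave k with the point-basis vector at site m is just
# |e_k[m]|, which equals 1/sqrt(N) for every k and m.
overlaps = [abs(fourier_vec(k)[m]) for k in range(N) for m in range(N)]
print(min(overlaps), max(overlaps))  # both equal 1/sqrt(N) up to rounding
```

Every overlap has the same magnitude regardless of which plane wave and which point are chosen, which is exactly the mutual-unbiasedness condition.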

Although Fourier theory always leads to such mutually unbiased bases, not all mutually unbiased bases are produced by a Fourier relationship. Another example is found in Lie algebras. Consider, for instance, the Lie algebra associated with three-dimensional rotations, which can be represented by the three Pauli matrices. If we determine the eigenbases of the three Pauli matrices, we'll see that they are mutually unbiased. These three matrices do not commute. So, we can make a general statement.

Two operators are maximally non-commuting if and only if their eigenbases are mutually unbiased

The reason for the term “maximally” is to take care of those cases where some degree of non-commutation is seen even when the bases are not completely unbiased.
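Both halves of the statement can be checked directly for the Pauli matrices. The sketch below hard-codes their well-known eigenbases, confirms that every cross-basis overlap has squared magnitude 1/2 (mutual unbiasedness), and confirms that σx and σy do not commute:

```python
import math

s = 1 / math.sqrt(2)
# Eigenbases of the three Pauli matrices (standard closed forms)
bases = {
    "x": [(s, s), (s, -s)],
    "y": [(s, 1j * s), (s, -1j * s)],
    "z": [(1, 0), (0, 1)],
}

def overlap2(u, v):
    # squared magnitude of the inner product <u|v>
    return abs(u[0].conjugate() * v[0] + u[1].conjugate() * v[1]) ** 2

cross = [overlap2(u, v)
         for a in bases for b in bases if a != b
         for u in bases[a] for v in bases[b]]
print(cross)  # every entry equals 0.5: the eigenbases are mutually unbiased

sx = ((0, 1), (1, 0))
sy = ((0, -1j), (1j, 0))

def matmul(A, B):
    # 2x2 matrix product
    return tuple(tuple(sum(A[i][k] * B[k][j] for k in range(2))
                       for j in range(2)) for i in range(2))

# sigma_x sigma_y = i sigma_z while sigma_y sigma_x = -i sigma_z, so the
# commutator [sigma_x, sigma_y] = 2i sigma_z is nonzero.
print(matmul(sx, sy) == matmul(sy, sx))  # False
```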

Although the Pauli matrices are ubiquitous in quantum theory, they are not found only in quantum physics. Since they represent three-dimensional rotations, they also appear in purely classical scenarios. Therefore, their non-commutation has nothing to do with quantum physics per se. Of course, as we already showed, the same is true for Fourier analysis.

So, if we are looking for some fundamental principles that would describe quantum physics exclusively, then non-commutation would be a bad choice. The hype about non-commutation in quantum physics is misleading.

Infinities

There is a notion in the quest for a fundamental understanding of the physical world that I wish to challenge. It is the idea that infinities are bad and should be avoided at all costs. This idea seems to be one of the main justifications for string theory. Perhaps that is why so many people still believe that string theory is the best candidate for the fundamental theory that would explain everything.

Leave aside the idea that we'll ever have such a theory of everything. In my view, any fundamental understanding of nature, as represented in terms of some mathematical formalism, would necessarily incorporate and successfully deal with infinities. The idea that one can impose some fundamental cutoff scale that would render all calculations finite is, in my view, contrived. The Planck scale is a hypothetical thing that has never been, and will never be, established as a scientific fact.

It is true that we never observe an infinity in any experiment. So all predicted observables must be rendered as finite values. That does not mean that the formalisms that produce these quantities would be free of such infinities. It only means that such infinities must cancel when the formalism is used to calculate physical observable quantities.

Infinities appear in calculations of fundamental physics because the degrees of freedom at the fundamental level occupy sets with infinitely many elements. The number of elements in such a set is its cardinality.

A simple example shows how such an infinity turns up. Consider the real line. It has an infinite number of points; it is said to have the cardinality of the continuum. Now consider a very simple function that is equal to one at every point on the real line. When we integrate this function, we effectively measure the length of the real line, which is infinite. Mathematical calculations involving infinities present serious challenges, because they tend to give ambiguous answers. Infinite cardinal numbers obey cardinal arithmetic, in which various operations applied to a cardinal number give back the same cardinal number. Therefore, mathematicians avoid functions that would give infinities. Most functions on the real line would integrate to an infinite result. Discarding all such functions, we end up with a much smaller set of functions that are well defined in the sense that they integrate to finite results.
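A crude numerical sketch makes this concrete (an illustration only: a midpoint rule with a growing cutoff L standing in for the whole real line). The integral of the constant function grows without bound as the cutoff increases, while the integral of a Gaussian settles at the finite value sqrt(pi):

```python
import math

def integrate(f, a, b, n=100_000):
    # composite midpoint rule on [a, b]
    h = (b - a) / n
    return sum(f(a + (i + 0.5) * h) for i in range(n)) * h

for L in (10.0, 100.0, 1000.0):
    const = integrate(lambda x: 1.0, -L, L)               # grows like 2L
    gauss = integrate(lambda x: math.exp(-x * x), -L, L)  # approaches sqrt(pi)
    print(L, const, gauss)
```

The constant function belongs to the discarded set: its "integral over the real line" is whatever the cutoff makes it. The Gaussian belongs to the well-defined set that survives.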

In fundamental physics, we deal not only with functions on the real line, but with functions on multiple real lines. In fact, we end up with functions on infinitely many real lines, which makes it even harder to avoid infinities. Imagine a function on the real line that integrates to a finite number, say 2; the integral of a product of infinitely many such functions would produce 2 to the power of infinity. So, results that involve infinities are ubiquitous in calculations in fundamental physics.

One way people try to avoid these infinities is a process called regularization, whereby the degrees of freedom are somehow reduced or the integrals are truncated with some cutoff. At the end, the quantity that represents the cutoff or the reduction in degrees of freedom is allowed to grow back to infinity. Provided that this quantity has cancelled during the calculation, the limit will not change the answer. One can argue that regularization is a way to apply ordinary arithmetic (the arithmetic of finite numbers) to cardinal numbers. Hence, one can forgo the regularization process and just keep careful track of the cardinal numbers in the calculation, ensuring that, in the normal process of performing the calculation as if everything were finite, these cardinal numbers cancel. Sometimes, however, it is necessary to perform a limit process at the end, in which the answer becomes finite when the quantities that represent these cardinal numbers are allowed to go to infinity.
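Here is a toy version of this cancellation (purely illustrative; the two integrals are hypothetical and not drawn from any physical model). Each quantity diverges logarithmically with the cutoff L, yet their difference stays finite as the cutoff is removed:

```python
import math

# I1(L) = integral from 0 to L of dx/(x+1) = ln(L + 1)         -> infinity
# I2(L) = integral from 0 to L of dx/(x+2) = ln(L + 2) - ln 2  -> infinity
# The divergent pieces cancel in the difference, which tends to ln 2.
for L in (1e2, 1e4, 1e6):
    i1 = math.log(L + 1)
    i2 = math.log(L + 2) - math.log(2)
    print(L, i1 - i2)  # approaches ln 2 (about 0.693) as L grows
```

The pattern is the same as in regularized field-theory calculations: the cutoff-dependent parts cancel, and the limit of removing the cutoff leaves a finite answer.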

This type of operation is what I would expect to see in a fundamental theory, not some contrivance that magically prevents any cardinal numbers from entering the calculations in the first place.

Vanity and formalism

During my series on Transcending the impasse, I wrote about Vanity in Physics. I also addressed the issue of Physics vs Formalism in a previous post. Neither of these two aspects is conducive to advances in physics. So, when one encounters the confluence of the two, things turn truly inimical. Recently, I heard of such a situation.

In an attempt to make advances in fundamental physics, the physics community has turned to mathematics, or at least to something that looks like mathematics. The belief seems to be that some exceptional mathematical formalism will lead us to a unique understanding of the fundamentals of nature.

Obviously, based on what I've written before, this approach is not ideal. However, we need to understand that the challenges in fundamental physics are different from those in other fields of physics. In the latter, there is always some well-established underlying theory in terms of which new phenomena are studied. The underlying theory usually comes with a thoroughly developed formalism. New phenomena may require a refinement of the formalism, but one can always check that any improvements or additions are consistent with the underlying theory.

With fundamental physics, the situation is different. There is no underlying theory. So, the whole thing needs to be invented from scratch. How does one do that?

We can take a leaf out of the book of previous examples from the history of physics. A good example is the development of general relativity. Today there are well-established formalisms for general relativity. (Note the use of the plural. It will become important later.) How did Einstein know what formalism to use for the development of general relativity? He realized that spacetime is curved and therefore needed a formalism that can handle curved spacetime metrics. How did he know that spacetime is curved? He figured it out with the aid of some simple heuristic arguments. These arguments led him to conceive of a fundamental principle that guided him in the development of the theory.

That is a success story. Now compare it with what is going on today. Different formalisms are being developed. The "fundamental principle" is simply to get a formalism that can handle curved spacetime in the context of a quantum field theory, so that the curvature of spacetime can somehow be represented by the exchange of particles. As such, it goes back to the old notions, existing before general relativity, that regarded gravity as a force. According to our understanding of general relativity, gravity is not a force. But let's leave that for now.

There do not seem to be any new physics principles guiding the development of these new formalisms. Here I exclude all those so-called "postulates" that have been presented for quantum mechanics, because those postulates are of a mathematical nature. They may provide a basis for quantum mechanics as a mathematical formalism, but not for the physics associated with quantum phenomena.

So, if there is no fundamental principle driving the current effort to develop new formalisms for fundamental physics, then what is driving it? What motivates people to spend all the effort in this formidable exercise?

Recent revelations gave me a clue. There was some name-calling going on among some of the most prominent researchers in the field. The proponents of one formalism would denounce some other formalism. It is as if we are watching a game show to see which formalism will "win" at the end of the day. However, the fact that there are different approaches should be seen as a good thing. It provides the diversity that improves the chances of success, and more than one of these approaches may turn out to be successful. Here again, the history of science provides an example: the formalisms of Heisenberg and Schroedinger both turned out to be correct descriptions of quantum physics. Moreover, there is more than one formalism in terms of which general relativity can be expressed.

So what, then, is really the reason for this name-calling among proponents of the different approaches to formalisms for fundamental physics? It seems to be that deviant new motivation for doing physics: vanity! It is not about gaining a new understanding; that is secondary. It is all about being the one who comes up with the successful theory and then reaping all the fame and glory.

The problem with vanity is that it does not directly address the goal. Vanity is a reward that can be acquired without achieving the goal. Therefore, it is not the optimal motivation for uncovering an understanding of fundamental physics. I see this as one of the main reasons for the lack of progress in fundamental physics.

The seasons are opposite in the Northern and Southern hemisphere. So, while the Northern hemisphere is moving into autumn (or “fall”), we are having spring down here in the Southern hemisphere.

Therefore, I am glad to see the leaves coming out on the trees. The world is beautiful. It makes the neighborhood look like an urban jungle.

I have a favorite tree in my garden. Not sure what kind of tree it is. Perhaps some kind of maple tree? In autumn its leaves turn red. That makes me remember Canada.

Most trees don’t turn red. Some even remain green right through winter.

For some reason, I was worried that it would die during winter. Therefore, I was very happy to see that it is sprouting new leaves.

The role of mathematics in physics

Recently, the number of preprints in the arXiv under quantum physics that contain theorems with proofs has increased drastically. I've also noticed that some journals in this field tend to publish more such papers, even though they are not ostensibly mathematical physics journals. This seems to suggest that theoretical physics needs to look like mathematics in order to be taken seriously.

Theorems with proofs are not science. Physics, which is a science, is about getting agreement between predictions and experimental observations. So, what is the role of mathematics in physics?

For the physicist, mathematics is a tool, often an indispensable tool, but still, just a tool. When Feynman invented his version of quantum field theory in terms of the path integral, he provided a means to compute predictions for the scattering amplitudes in particle physics that can be compared with the results from high energy particle physics experiments. That was the whole point of this formulation. From a mathematical perspective, the path integral formulation was a bit crude to say the least. It presented a significant challenge to come up with a rigorous formulation of the measure theory that would be suitable for the notion of a path integral.

These days, there seems to be much criticism of quantum field theory. Haag's theorem indicates some inconsistencies in the interaction picture. I also saw that Ed Witten takes issue with the process of quantization used in quantum field theory because of some inconsistencies, and that he tries to solve these problems with some concepts taken from string theory.

I think these criticisms are missing the point. The one thing that you can take from quantum field theory is this: it works! There is very good agreement between the predictions of the standard model and the results from high-energy physics experiments. So, if anybody thinks that quantum field theory needs to be reformulated or replaced by a better formulation, then they are missing the point. The physics is only concerned with having some mathematical procedure to compute predictions, regardless of whether that procedure is a bit crude or not. It is just a tool. Mathematicians may then ask themselves: why does it work?

Mathematics is extremely flexible. There is usually more than one way to represent physical reality in terms of mathematical models. Often these different formulations are completely equivalent as far as experimental predictions are concerned. For this reason, one should realize that physical reality is not intrinsically mathematical. Or, stated differently, the math is not real (as Hossenfelder would like us to believe). Mathematical models exist in our minds. They are merely the way we represent the physical world so that we can do calculations. If we come up with a crude model that serves the purpose of performing successful calculations, then there are probably several other less crude ways to do the same calculations. However, it is the amusement of the mathematician to ponder such alternatives. As far as the physicist is concerned, such alternatives are of lesser importance.

Having said that, there is one possible justification for a physicist to be concerned about the more rigorous formulation of mathematical models. That has to do with progress beyond the current understanding. It may be possible that a more rigorous formulation of our current models may point the way forward. However, here the flexibility of mathematics produces such a diverse array of possibilities that this line of argument is probably not going to be of much use.

Consider another example from the history of physics. Newtonian mechanics was developed into a very rigorous format with the aid of Hamiltonian mechanics. And yet, none of that gave any indication of the direction in which special and general relativity would take us. The mathematics turned out to be completely different.

So, I don’t think that we should rely on more rigor in our mathematical models to point the way forward in physics. For progress in physics, we need to focus on physics. As always, mathematics will merely be the tool to do it. For that reason, I tend to ignore all these preprints with their theorems and proofs.