Non-commutation

It is believed that the non-commutation of operators is a characteristic property of quantum mechanics. So much so that axiomatic mathematical structures have been developed specifically to represent this non-commuting nature, with the aim of providing the ideal formalism in terms of which quantum physics can be modeled.

Is quantum physics the exclusive scenario in which non-commuting operators are found? Is the non-commutative nature of these operators in quantum mechanics a fundamental property of nature?

No, one can also define operators in classical theories and find that they are non-commuting. And, no, this non-commuting property is not fundamental. It is a consequence of more fundamental properties.

Diffraction pattern

To illustrate these statements, I’ll use a well-known classical theory: Fourier optics. It is a linear theory in which the propagation of a beam of light is represented in terms of an angular spectrum of plane waves. The angular spectrum is obtained by computing the two-dimensional Fourier transform of the complex function representing the optical beam profile on some transverse plane.

The general propagation direction of such a beam of light, which is the same thing as the expectation value of its momentum, can be calculated with the aid of the angular spectrum as its first moment. An equivalent first moment of the optical beam profile gives us the expectation value of the beam’s position. Both these calculations can be represented formally as operators. And, these two operators do not commute. Therefore, the non-commutation of operators has nothing to do with quantum mechanics.
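
To make this concrete, here is a minimal numerical sketch (my own illustration, not part of the original argument) of these two first-moment calculations for a one-dimensional beam profile; the Gaussian profile and its assumed displacement and tilt are example values.

```python
import numpy as np

# Sketch: expectation values of transverse position and wave number (momentum)
# of a beam profile, computed as first moments. All parameter values are
# illustrative assumptions.
N = 1024
x = np.linspace(-10, 10, N, endpoint=False)     # transverse coordinate
dx = x[1] - x[0]

x0, k0 = 1.5, 2.0                               # assumed displacement and tilt
g = np.exp(-(x - x0)**2) * np.exp(1j * k0 * x)  # Gaussian beam profile

# First moment of the intensity profile: expectation value of position.
w_x = np.abs(g)**2
mean_x = np.sum(x * w_x) / np.sum(w_x)

# Angular spectrum via FFT; its first moment: expectation value of wave number.
G = np.fft.fft(g)
k = 2 * np.pi * np.fft.fftfreq(N, d=dx)
w_k = np.abs(G)**2
mean_k = np.sum(k * w_k) / np.sum(w_k)

print(mean_x, mean_k)   # approximately 1.5 and 2.0
```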

So what is going on here? It is inevitable that two operators associated with quantities that are Fourier conjugate variables will be non-commuting. Therefore, the non-commuting property is an inevitable result of Fourier theory. Quantum mechanics inherits this property because the Planck relationship converts the phase space variables, momentum and position, into Fourier conjugate variables.
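
For reference, here is the one-line calculation behind this statement (my addition, using nothing beyond calculus): if X is multiplication by x and K = -i d/dx is the operator whose eigenfunctions are the plane waves of the angular spectrum, then for any profile f(x)

$$ (XK - KX)\,f(x) = -\,i\,x\,f'(x) + i\,\frac{d}{dx}\bigl[x\,f(x)\bigr] = i\,f(x), $$

so $[X,K] = i$, entirely within Fourier theory and without any reference to Planck's constant.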

So, is Fourier analysis then the fundamental property? Well, no. There is a more fundamental property. The reason why Fourier conjugate variables lead to non-commuting operators is that the bases associated with these conjugate variables are mutually unbiased.

We can again think of Fourier optics to understand this. The basis of the angular spectrum consists of the plane waves. The basis of the beam profile consists of the points on the transverse plane. Since a plane wave has the same amplitude at all points in space, its overlap with any point on the transverse plane gives a result with the same magnitude. Hence, these two bases are mutually unbiased.
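
A quick way to see the discrete version of this statement is to look at the unitary DFT matrix, whose columns play the role of the plane waves; the check below is my own illustration.

```python
import numpy as np

# The unitary DFT matrix: column j is the j-th discrete "plane wave".
# Every entry has the same magnitude 1/sqrt(N), i.e. every plane wave
# overlaps every "position" basis vector with equal magnitude -- the
# defining property of mutually unbiased bases.
N = 8
F = np.fft.fft(np.eye(N), norm="ortho")
print(np.allclose(np.abs(F), 1 / np.sqrt(N)))   # True
```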

Although Fourier theory always leads to such mutually unbiased bases, not all mutually unbiased bases are produced by a Fourier relationship. Another example is found with Lie algebras. For example, consider the Lie algebra associated with three-dimensional rotations. This algebra is spanned by three matrices called the Pauli matrices. We can determine the eigenbases of the three Pauli matrices and we'll see that they are mutually unbiased. These three matrices do not commute. So, we can make a general statement:

Two operators are maximally non-commuting if and only if their eigenbases are mutually unbiased

The reason for the term “maximally” is to take care of those cases where some degree of non-commutation is seen even when the bases are not completely unbiased.
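
Both parts of the Pauli example above are easy to check numerically; the following verification sketch is my own addition.

```python
import numpy as np

# Check that the eigenbases of the three Pauli matrices are mutually
# unbiased (all squared overlaps equal 1/2) and that the matrices do
# not commute with one another.
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.array([[1, 0], [0, -1]], dtype=complex)
paulis = [sx, sy, sz]

# Eigenvector matrices (columns are the eigenbasis vectors).
bases = [np.linalg.eigh(s)[1] for s in paulis]

for i in range(3):
    for j in range(i + 1, 3):
        overlaps = np.abs(bases[i].conj().T @ bases[j])**2
        print("commute:", np.allclose(paulis[i] @ paulis[j], paulis[j] @ paulis[i]),
              "| all overlaps = 1/2:", np.allclose(overlaps, 0.5))
# Prints "commute: False | all overlaps = 1/2: True" for each pair.
```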

Although the Pauli matrices are ubiquitous in quantum theory, they are not only found in quantum physics. Since they represent three-dimensional rotations they are also found in purely classical scenarios. Therefore, their non-commutation has nothing to do with quantum physics per se. Of course, as we already showed, the same is true for Fourier analysis.

So, if we are looking for some fundamental principles that would describe quantum physics exclusively, then non-commutation would be a bad choice. The hype about non-commutation in quantum physics is misleading.


Infinities

There is a notion in the quest for a fundamental understanding of the physical world that I wish to challenge. It is the idea that infinities are bad and should be avoided at all cost. It seems to be one of the main justifications for string theory. Perhaps that is why so many people still believe that string theory is the best candidate for the fundamental theory that would explain everything.

Leave aside the idea that we’ll ever have such a theory of everything. In my view, any fundamental understanding of nature, as represented in terms of some mathematical formalism, would necessarily incorporate and successfully deal with infinities. The idea that one can impose some fundamental cutoff scale that would render all calculations finite is, in my view, contrived. The Planck scale is a hypothetical thing that has never been, and will never be, established as a scientific fact.

It is true that we never observe an infinity in any experiment. So all predicted observables must be rendered as finite values. That does not mean that the formalisms that produce these quantities would be free of such infinities. It only means that such infinities must cancel when the formalism is used to calculate physical observable quantities.

Infinities appear in calculations of fundamental physics because the degrees of freedom at the fundamental level occupy sets with infinitely many elements. The number of elements is represented by the cardinality of these spaces.

A simple example of how such an infinity shows up is the real line. It has an infinite number of points; it is said to have the cardinality of the continuum. Now consider a very simple function which is equal to one at every point on the real line. When we integrate this function, we effectively measure the length of the real line, which is infinite. Mathematical calculations involving infinities present serious challenges, because they tend to give ambiguous answers. These infinite cardinal numbers obey cardinal arithmetic, in which many operations applied to a cardinal number simply give back the same cardinal number. Therefore mathematicians prefer to avoid functions that would give infinities. Most functions on the real line integrate to an infinite result. Discarding all such functions, we end up with a much smaller set of functions that are well defined in the sense that they integrate to finite results.
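
For illustration (my addition), the divergent integral and the absorbing behaviour of cardinal arithmetic mentioned above can be written as

$$ \int_{-\infty}^{\infty} 1\, dx = \infty, \qquad \aleph_0 + \aleph_0 = \aleph_0, \qquad \mathfrak{c}\cdot\mathfrak{c} = \mathfrak{c}, $$

so that adding or multiplying such infinities just gives the same infinity back.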

In fundamental physics, we not only deal with functions on the real line, but with functions on multiple real lines. In fact, we end up with functions on infinitely many real lines. It becomes even harder to avoid infinities in this case. You can imagine that if I had a function on the real line that integrates to a finite number, say 2, then the integral of a product of infinitely many such functions would produce 2 to the power of infinity. So, results that involve infinities are ubiquitous in calculations in fundamental physics.
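
Written out (my addition), the argument in the previous paragraph is: if $\int f(x)\,dx = 2$, then

$$ \int \prod_{n=1}^{N} f(x_n)\; dx_1 \cdots dx_N \;=\; 2^{N} \;\longrightarrow\; \infty \quad \text{as } N \to \infty. $$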

One way that people try to avoid these infinities is with a process called regularization. It is a process whereby the number of degrees of freedom is somehow reduced or the integrals are truncated with some cutoff. At the end, the quantity that represents the cutoff or the reduction in degrees of freedom is allowed to grow back to infinity. Provided that this quantity has somehow cancelled during the calculation, this limit will not change the answer. One can argue that the process of regularization is a way to apply ordinary arithmetic (the arithmetic of finite numbers) to cardinal numbers. Hence, one can forgo the regularization process and just keep careful track of the cardinal numbers in the calculation to ensure that, in the normal process of performing the calculation as if everything is finite, these cardinal numbers cancel. Sometimes, however, it is necessary to perform a limit process at the end, in which the answer becomes finite when the quantities that represent these cardinal numbers are allowed to go to infinity.
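
As a toy illustration of this procedure (my own example, not one from the original text), consider two integrals that each diverge, truncated at a cutoff $\Lambda$:

$$ \int_0^{\Lambda} \frac{dk}{k+a} - \int_0^{\Lambda} \frac{dk}{k+b} = \ln\frac{\Lambda + a}{a} - \ln\frac{\Lambda + b}{b} \;\xrightarrow{\;\Lambda \to \infty\;}\; \ln\frac{b}{a}. $$

The cutoff-dependent pieces cancel, and the limit taken at the end leaves a finite answer, exactly as described above.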

This type of operation is what I would expect to see in a fundamental theory, not some contrivance that magically prevents all cardinal numbers from entering the calculations in the first place.

Vanity and formalism

During my series on Transcending the impasse, I wrote about Vanity in Physics. I also addressed the issue of Physics vs Formalism in a previous post. Neither of these two aspects is conducive to advances in physics. So, when one encounters the confluence of these aspects, things really turn inimical. Recently, I heard of such a situation.

In an attempt to make advances in fundamental physics, the physics community has turned to mathematics, or at least something that looks like mathematics. The belief seems to be that some exceptional mathematical formalism will lead us to a unique understanding of the fundamentals of nature.

Obviously, based on what I’ve written before, this approach is not ideal. However, we need to understand that the challenges in fundamental physics are different from those in other fields of physics. For the latter, there is always some well-established underlying theory in terms of which the new phenomena are studied. The underlying theory usually comes with a thoroughly developed formalism. The new phenomena may require a refinement of the formalism, but one can always check that any improvements or additions are consistent with the underlying theory.

With fundamental physics, the situation is different. There is no underlying theory. So, the whole thing needs to be invented from scratch. How does one do that?

Albert Einstein

We can take a leaf out of the book of previous examples from the history of physics. A good example is the development of general relativity. Today there are well-established formalisms for general relativity. (Note the use of the plural. It will become important later.) How did Einstein know what formalism to use for the development of general relativity? He realized that spacetime is curved and therefore needed a formalism that can handle curved spacetime metrics. How did he know that spacetime is curved? He figured it out with the aid of some simple heuristic arguments. These arguments led him to conceive of a fundamental principle that would guide him in the development of the theory.

That is a success story. Now compare it with what is going on today. There are different formalisms being developed. The “fundamental principle” is simply to get a formalism that can handle curved spacetime in the context of a quantum field theory, so that the curvature of spacetime can somehow be represented by the exchange of particles. As such, it goes back to the old notions, existing before general relativity, that regarded gravity as a force. According to our understanding of general relativity, gravity is not a force. But let’s leave that for now.

There does not seem to be any new physics principle guiding the development of these new formalisms. Here I exclude all those so-called “postulates” that have been presented for quantum mechanics, because those postulates are of a mathematical nature. They may provide a basis for quantum mechanics as a mathematical formalism, but not for the physics associated with quantum phenomena.

So, if there is no fundamental principle driving the current effort to develop new formalisms for fundamental physics, then what is driving it? What motivates people to spend all the effort in this formidable exercise?

Recent revelations gave me a clue. There has been some name-calling going on among some of the most prominent researchers in the field. The proponents of one formalism would denounce some other formalism. It is as if we are watching a game show to see which formalism will “win” at the end of the day. However, the fact that there are different approaches should be seen as a good thing. It provides the diversity that improves the chances for success. More than one of these approaches may turn out to be successful. Here again, an example from the history of science can be provided. The formalisms of Heisenberg and Schroedinger both turned out to be correct descriptions of quantum physics. Moreover, there is more than one formalism in terms of which general relativity can be expressed.

So what, then, is really the reason for this name-calling among proponents of the different approaches to developing formalisms for fundamental physics? It seems to be that deviant new motivation for doing physics: vanity! It is not about gaining a new understanding. That is secondary. It is all about being the one who comes up with the successful theory and then reaping all the fame and glory.

The problem with vanity is that it does not directly address the goal. Vanity is a reward that can be acquired without achieving the goal. Therefore, it is not the optimal motivation for uncovering an understanding of fundamental physics. I see this as one of the main reasons for the lack of progress in fundamental physics.


Einstein, Podolski, Rosen

Demystifying quantum mechanics VI

When one says that one wants to demystify quantum mechanics, it may create the false impression that there is nothing strange about quantum mechanics. Well, that would be a misleading notion. Quantum mechanics does have a counterintuitive aspect (perhaps even more than one). However, that does not mean that quantum mechanics needs to be mysterious. We can still understand this aspect and accept it as part of nature, even though we don’t experience it in everyday life.

The counterintuitive aspect of quantum mechanics is perhaps best revealed by the phenomenon of quantum entanglement. But before I discuss quantum entanglement, it may be helpful to discuss some of the historical development of this concept. Therefore, I’ll focus on an apparent paradox that Einstein, Podolski and Rosen (EPR) presented.

They proposed a simple experiment to challenge the idea that one cannot measure the position and momentum of a particle with arbitrary accuracy, due to the Heisenberg uncertainty principle. In the experiment, an unstable particle would be allowed to decay into two particles. Then, one would measure the momentum of one of the particles and the position of the other particle. Due to the conservation of momentum, one can relate the momentum of the one particle to that of the other. The idea is that one should be able to make the respective measurements as accurately as possible, so that the combined information would give one the position and momentum of one particle more accurately than the Heisenberg uncertainty principle should allow.
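
In symbols (my paraphrase of the EPR reasoning, assuming for simplicity that the unstable particle is at rest), conservation of momentum gives

$$ p_1 + p_2 = 0 \quad\Rightarrow\quad p_2 = -\,p_1, $$

so a momentum measurement on particle 1 appears to fix $p_2$ without touching particle 2, while $x_2$ is measured directly on particle 2, seemingly giving both $x_2$ and $p_2$ more accurately than $\Delta x_2\,\Delta p_2 \ge \hbar/2$ allows.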

Previously, I explained that the Heisenberg uncertainty principle has a perfectly understandable foundation, which has nothing to do with quantum mechanics apart from the de Broglie relationship, which links momentum to the wave number. However, what the EPR trio revealed in their hypothetical experiment is a concept which, at the time, was quite shocking, even for those people that thought they understood quantum mechanics. This concept eventually led to the notion of quantum entanglement. But, I’m getting ahead of myself.
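
(For reference, the foundation referred to above is just the Fourier bandwidth theorem combined with the de Broglie relation $p = \hbar k$:

$$ \Delta x\,\Delta k \ge \tfrac{1}{2} \quad\xrightarrow{\;p\,=\,\hbar k\;}\quad \Delta x\,\Delta p \ge \tfrac{\hbar}{2}. ) $$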

John Bell

The next development came from John Bell, who also did not quite buy into all this quantum mechanics. So, to try and understand what would happen in the EPR experiment, he derived the statistics that one can expect to observe in such an experiment. The result was an inequality, which shows that, under some apparently innocuous assumptions, the measurement results, when combined in a particular way, must always give a value smaller than a certain maximum. These “innocuous” assumptions were: (a) that there is a unique reality, and (b) that there are no nonlocal interactions (“spooky action at a distance”).
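
The best-known version is the CHSH form of the inequality (due to Clauser, Horne, Shimony and Holt, quoted here for concreteness rather than taken from the original post): with correlation functions $E$ measured for two settings on each side,

$$ S = E(a,b) - E(a,b') + E(a',b) + E(a',b'), \qquad |S| \le 2 $$

under the two assumptions above, whereas quantum mechanics allows values of $|S|$ up to $2\sqrt{2}$.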

It took a while before an actual experiment that tested the EPR paradox could be performed. However, eventually such experiments were performed, notably by Alain Aspect in 1982. He used the polarization of light instead of position and momentum, but the same principle applies. And guess what? When he combined the measurement results as proposed for the Bell inequality, he found that they violated the Bell inequality!

So, what does this imply? It means that at least one of the assumptions made by Bell must be wrong. Either the physical universe does not have a unique reality, or nonlocal interactions are allowed. The problem with the latter is that it would then also contradict special relativity. So, we have to conclude that there is no unique reality.

It is this lack of a unique reality that lies at the heart of an understanding of the concept of quantum entanglement. More about that later.


What is your aim?

The endless debate about where fundamental physics should be going proceeds unabated. As can be expected, this soul-searching exercise includes many discussions of a philosophical nature. The ideas of Popper and Kuhn are reassessed for the gazillionth time. Where is all this leading us?

The one thing I often identify in these discussions is the narrow-minded view people have of the diversity of humanity. Philosophers and physicists alike come up with all sorts of ways to describe what science is supposed to be and what methodologies are supposed to be followed. However, they miss the fact that none of these “extremely good ideas” has any reasonable probability of being successful in the long run.

Why am I so pessimistic? Because humanity has the ability to corrupt almost anything that you can come up with. Those structures and systems in our cultures that actually do work are not the result of some “bright individuals” who decided on some sunny day to suck some good ideas out of their thumbs. No, these structures have evolved into the forms that they have today over a long time. They work because they have been tested over generations by people trying to corrupt them with their devious ideas. (It reminds me that cultural anthropology is, in my view, one of the most underrated fields of study. Scientific knowledge of how cultures evolve would help many governments to make better decisions.)

The scientific method is one such cultural system that has evolved over many centuries. The remarkable scientific and technological knowledge that we possess today stands as clear evidence of the robustness of this method. There is not much, if anything, to be improved in this system.

However, we do need to understand that one cannot obtain all possible knowledge with the scientific method. It does have limitations, but these limitations are not failings of the method that can be improved upon. They lie in the nature of knowledge itself. The simple fact is that there are things that we cannot know with any scientific certainty.

What is your reward?

So, the current problem in fundamental science is not something that can be overcome by “improving” the scientific method. The problem lies elsewhere. According to my understanding, it has one of two possible causes, which I have discussed previously. It is either because people have lost their true curiosity in favor of vanity, or because our knowledge is running into a wall that cannot be penetrated by the scientific method.

While the latter has no solution, the former may be overcome if people realize that a return to curiosity, instead of vanity, as the driving force behind scientific research may help to adjust their focus and achieve progress. Short-term extravagant research results do not always provide the path to more knowledge. They are mainly designed to increase some individual’s impact with the aim of obtaining fame and glory. The road to true knowledge may sometimes lead through mundane avenues that seem boring to the general public. Only the truly passionate researcher with no interest in fame and glory would follow that avenue. However, it may be just what is needed to make the breakthrough that would advance fundamental physics.
