Vanity and formalism

During my series on Transcending the impasse, I wrote about Vanity in Physics. I also addressed the issue of Physics vs Formalism in a previous post. Neither of these two aspects is conducive to advances in physics. So, when one encounters the confluence of the two, things turn truly inimical. Recently, I heard of such a situation.

In an attempt to make advances in fundamental physics, the physics community has turned to mathematics, or at least something that looks like mathematics. The belief seems to be that some exceptional mathematical formalism will lead us to a unique understanding of the fundamentals of nature.

Obviously, based on what I’ve written before, this approach is not ideal. However, we need to understand that the challenges in fundamental physics are different from those in other fields of physics. For the latter, there is always some well-established underlying theory in terms of which the new phenomena are studied. The underlying theory usually comes with a thoroughly developed formalism. The new phenomena may require a refinement of the formalism, but one can always check that any improvements or additions are consistent with the underlying theory.

With fundamental physics, the situation is different. There is no underlying theory. So, the whole thing needs to be invented from scratch. How does one do that?

Albert Einstein

We can take a leaf out of the book of previous examples from the history of physics. A good example is the development of general relativity. Today there are well-established formalisms for general relativity. (Note the use of the plural. It will become important later.) How did Einstein know what formalism to use for the development of general relativity? He realized that spacetime is curved and therefore needed a formalism that can handle curved spacetime metrics. How did he know that spacetime is curved? He figured it out with the aid of some simple heuristic arguments. These arguments led him to conceive of a fundamental principle that would guide him in the development of the theory.

That is a success story. Now compare it with what is going on today. There are different formalisms being developed. The “fundamental principle” is simply to get a formalism that can handle curved spacetime in the context of a quantum field theory, so that the curvature of spacetime can somehow be represented by the exchange of particles. As such, it goes back to the old notion, predating general relativity, that regarded gravity as a force. According to our understanding of general relativity, gravity is not a force. But let’s leave that for now.

There do not seem to be any new physics principles guiding the development of these new formalisms. Here I exclude all those so-called “postulates” that have been presented for quantum mechanics, because those postulates are of a mathematical nature. They may provide a basis for quantum mechanics as a mathematical formalism, but not for the physics associated with quantum phenomena.

So, if there is no fundamental principle driving the current effort to develop new formalisms for fundamental physics, then what is driving it? What motivates people to spend all this effort on such a formidable exercise?

Recent revelations gave me a clue. There was some name-calling going on among some of the most prominent researchers in the field. The proponents of one formalism would denounce some other formalism. It is as if we are watching a game show to see which formalism will “win” at the end of the day. However, the fact that there are different approaches should be seen as a good thing. It provides the diversity that improves the chances for success. More than one of these approaches may turn out to be successful. Here again, the history of science provides an example. The formalisms of Heisenberg and Schrödinger both turned out to be correct descriptions of quantum physics. Moreover, there is more than one formalism in terms of which general relativity can be expressed.

So what then is really the reason for this name-calling among proponents of the different approaches to developing formalisms for fundamental physics? It seems to be that deviant new motivation for doing physics: vanity! It is not about gaining a new understanding. That is secondary. It is all about being the one that comes up with the successful theory and then reaping all the fame and glory.

The problem with vanity is that it does not directly address the goal. Vanity is a reward that can be acquired without achieving the goal. Therefore, it is not the optimal motivation for uncovering an understanding of fundamental physics. I see this as one of the main reasons for the lack of progress in fundamental physics.


The role of mathematics in physics

Recently, the number of preprints on the arXiv under quantum physics that contain theorems with proofs has increased drastically. I’ve also noticed that some journals in this field tend to publish more such papers, even though they are not ostensibly mathematical physics journals. It seems to suggest that theoretical physics needs to look like mathematics in order to be taken seriously.

Theorems with proofs are not science. Physics, which is a science, is about getting agreement between predictions and experimental observations. So, what is the role of mathematics in physics?

Mathematics

For the physicist, mathematics is a tool, often an indispensable tool, but still just a tool. When Feynman invented his version of quantum field theory in terms of the path integral, he provided a means to compute predictions for the scattering amplitudes in particle physics that can be compared with the results from high-energy particle physics experiments. That was the whole point of this formulation. From a mathematical perspective, the path integral formulation was, to say the least, a bit crude. It presented a significant challenge to come up with a rigorous formulation of the measure theory that would be suitable for the notion of a path integral.

These days, there seems to be much criticism of quantum field theory. Haag’s theorem indicates some inconsistencies in the interaction picture. I also saw that Ed Witten takes issue with the process of quantization used in quantum field theory because of some inconsistencies, and that he tries to solve these problems with some concepts taken from string theory.

I think these criticisms are missing the point. The one thing that you can take from quantum field theory is this: it works! There is very good agreement between the predictions of the standard model and the results from high-energy physics experiments. So, if anybody thinks that quantum field theory needs to be reformulated or replaced by a better formulation, then they are missing the point. The physics is only concerned with having some mathematical procedure to compute predictions, regardless of whether that procedure is a bit crude or not. It is just a tool. Mathematicians may then ask themselves: why does it work?

Mathematics is extremely flexible. There is usually more than one way to represent physical reality in terms of mathematical models. Often these different formulations are completely equivalent as far as experimental predictions are concerned. For this reason, one should realize that physical reality is not intrinsically mathematical. Or, stated differently, the math is not real (as Hossenfelder would like us to believe). Mathematical models exist in our minds. They are merely the way we represent the physical world so that we can do calculations. If we come up with a crude model that serves the purpose of performing successful calculations, then there are probably several other, less crude ways to do the same calculations. However, it is the amusement of the mathematician to ponder such alternatives. As far as the physicist is concerned, such alternatives are of lesser importance.

Having said that, there is one possible justification for a physicist to be concerned about the more rigorous formulation of mathematical models. It has to do with progress beyond our current understanding. A more rigorous formulation of our current models may point the way forward. However, here the flexibility of mathematics produces such a diverse array of possibilities that this line of argument is probably not going to be of much use.

Consider another example from the history of physics. Newtonian mechanics was developed into a very rigorous format with the aid of Hamiltonian mechanics. And yet, none of that gave any indication of the direction that special and general relativity took us in. The mathematics turned out to be completely different.

So, I don’t think that we should rely on more rigor in our mathematical models to point the way forward in physics. For progress in physics, we need to focus on physics. As always, mathematics will merely be the tool to do it. For that reason, I tend to ignore all these preprints with their theorems and proofs.

Quantum teleportation

One of the most iconic quantum phenomena is quantum teleportation. But the reason why it is so iconic has nothing to do with the idea behind “beam me up, Scotty.”  In quantum teleportation, it is only the state of matter that is being transferred and not the matter itself. Usually, it is the state of light (a photon) that is being teleported. Quantum teleportation is iconic because it involves a mechanism that reveals a truly quantum nature.

How does it work? The state to be teleported is represented by photons that are specially prepared for the purpose. You can think of some light source that produces photons having specific properties that represent their state. We shall label one such photon as A. The resource that mediates the teleportation process is a different set of photons representing an entangled state. This entangled state consists of a pair of entangled photons, which we label as B and C, respectively. To perform the process of teleportation, all we need to do is to make a joint measurement of photons A and B. It is the nature of this joint measurement that makes the process of quantum teleportation possible. The information that we obtain from this measurement tells us what transformation to perform on C to reproduce the state of A. Sometimes, we would not need to make any transformation; the state of C would already be that of A.

So, let’s look a little more carefully at the nature of the joint measurement. What do we mean by a joint measurement? To understand what it means, we need to discuss the state of photon A. There are many different possible states that this photon can have. All such states are collected into a set that we call a Hilbert space. Any of the states in this set can be represented as a superposition of a small set of states that we call a basis. One way to determine the state of a photon is to measure how much of each of these basis elements is required to make up the state of the photon. Such measurements are called projective measurements.
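To make this concrete, here is a minimal numpy sketch, using polarization as a stand-in for whatever property defines the photon’s state. The basis labels H and V and the particular superposition are my own illustrative choices, not anything taken from a specific experiment:

```python
import numpy as np

H = np.array([1, 0], dtype=complex)   # basis element |H>
V = np.array([0, 1], dtype=complex)   # basis element |V>

# An arbitrary pure state: a superposition of the basis elements.
psi = (2 * H + 1j * V) / np.sqrt(5)

# Projective measurement: how much of each basis element makes up psi.
amp_H = np.vdot(H, psi)               # amplitude <H|psi>
amp_V = np.vdot(V, psi)               # amplitude <V|psi>

print(abs(amp_H)**2, abs(amp_V)**2)   # probabilities 0.8 and 0.2, summing to 1
```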

To understand joint measurements, we just need to generalize our understanding of projective measurements a bit. What the measurement instrument in a teleportation experiment sees is not just A, but A and B together. The Hilbert space for the combination of the states of these two photons consists of all the combinations of all the states from their respective Hilbert spaces. One can produce a basis for the combined Hilbert space by combining the elements of the respective bases. There are different ways to do that, including some that would cause the elements of the combined basis to be entangled states. That is the key to quantum teleportation. One needs to make projective measurements of the combined state in a basis where the elements are themselves entangled.
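Continuing the sketch above, one such entangled basis for the combined Hilbert space is the Bell basis, built here with the standard textbook construction (again purely illustrative):

```python
import numpy as np

H = np.array([1, 0], dtype=complex)
V = np.array([0, 1], dtype=complex)

# Product basis for the combined Hilbert space: all combinations of the
# single-photon basis elements (four elements for two photons).
HH, HV = np.kron(H, H), np.kron(H, V)
VH, VV = np.kron(V, H), np.kron(V, V)

# An alternative basis for the same space: the Bell basis. Each element is
# an entangled state; it cannot be written as a product of two
# single-photon states.
phi_p = (HH + VV) / np.sqrt(2)
phi_m = (HH - VV) / np.sqrt(2)
psi_p = (HV + VH) / np.sqrt(2)
psi_m = (HV - VH) / np.sqrt(2)

bell = np.array([phi_p, phi_m, psi_p, psi_m])
# The Bell basis is orthonormal: bell @ bell.conj().T is the identity.
print(np.allclose(bell @ bell.conj().T, np.eye(4)))  # True
```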

Why would projective measurements in terms of an entangled basis cause teleportation? This mechanism is what makes teleportation an amazing process. It involves the multiple-reality nature of the quantum world. The entangled resource state can be interpreted in terms of such multiple realities. What the joint measurement does is to knit these multiple realities together with those presented by the input state A. But the latter is just one state (one reality); therefore, in the ideal case, only one of the realities of the resource state will survive the measurement process: the one where C has the same state as A. In a less ideal case, bits and pieces of A will be distributed over different realities. In that case, one can reconfigure the different realities with the aid of a unitary transformation on C, such that A becomes associated with just one reality in which C would then have the same state as A. The outcome of the joint measurement tells us which unitary transformation to perform to achieve the necessary reconfiguration.
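Here is a minimal numpy sketch of the whole protocol under the same illustrative assumptions as before (a phi+ resource pair and the standard Pauli correction table; a real experiment would differ in many details). For each possible outcome of the joint measurement on A and B, the corresponding unitary transformation recovers the state of A on photon C:

```python
import numpy as np

H = np.array([1, 0], dtype=complex)
V = np.array([0, 1], dtype=complex)
I2 = np.eye(2)
X = np.array([[0, 1], [1, 0]], dtype=complex)   # Pauli corrections on C
Z = np.array([[1, 0], [0, -1]], dtype=complex)

# Bell basis for the joint (A, B) measurement.
bell = {
    "phi+": (np.kron(H, H) + np.kron(V, V)) / np.sqrt(2),
    "phi-": (np.kron(H, H) - np.kron(V, V)) / np.sqrt(2),
    "psi+": (np.kron(H, V) + np.kron(V, H)) / np.sqrt(2),
    "psi-": (np.kron(H, V) - np.kron(V, H)) / np.sqrt(2),
}
# Which unitary to apply to C for each joint-measurement outcome
# (this table is for the phi+ resource state chosen below).
correction = {"phi+": I2, "phi-": Z, "psi+": X, "psi-": Z @ X}

# State to teleport (photon A) and the entangled resource (photons B, C).
alpha, beta = 0.6, 0.8j
psi_A = alpha * H + beta * V
resource_BC = (np.kron(H, H) + np.kron(V, V)) / np.sqrt(2)   # phi+ pair

total = np.kron(psi_A, resource_BC)          # full three-photon state (A, B, C)

for outcome, b in bell.items():
    # Project photons A and B onto this Bell state: contract the first two
    # indices of the 2x2x2 tensor with the conjugated Bell element.
    psi_C = np.tensordot(b.conj().reshape(2, 2),
                         total.reshape(2, 2, 2), axes=([0, 1], [0, 1]))
    psi_C = correction[outcome] @ psi_C
    psi_C /= np.linalg.norm(psi_C)           # each outcome has probability 1/4
    print(outcome, np.allclose(psi_C, psi_A))  # True for every outcome
```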

How does one make projective measurements in terms of an entangled basis? That is challenging, but people have identified at least two ways to do it. The first, and the one most often used, is the Hong-Ou-Mandel effect. It is accomplished with the aid of a beamsplitter, causing a quantum interference effect. If two photons are observed simultaneously from the two output ports, it signals the detection of a special entangled state called a Bell state, which implies a successful teleportation. The benefit of this method is that it does not require any unitary transformation of C.
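One can check that, of the four Bell states, only the antisymmetric one (psi- in the earlier notation) produces such simultaneous detections, by tracking the bosonic creation operators through the beamsplitter. The following sketch does this bookkeeping symbolically; the 50:50 beamsplitter sign convention, the mode labels a, b, c, d, and the dictionary representation of two-photon states are all my own choices:

```python
import numpy as np
from collections import defaultdict
from itertools import product

# 50:50 beamsplitter: input modes a, b map to output modes c, d (one common
# phase convention). Polarization is untouched.
def split(op):
    port, pol = op
    s = 1 / np.sqrt(2)
    if port == "a":
        return [(("c", pol), s), (("d", pol), s)]
    return [(("c", pol), s), (("d", pol), -s)]

# Apply the beamsplitter to a two-photon state, written as a dict mapping a
# pair of creation operators (port, polarization) to its amplitude.
def transform(state):
    out = defaultdict(complex)
    for (op1, op2), amp in state.items():
        for (n1, a1), (n2, a2) in product(split(op1), split(op2)):
            key = tuple(sorted((n1, n2)))   # bosons: operators commute
            out[key] += amp * a1 * a2
    return {k: v for k, v in out.items() if abs(v) > 1e-12}

s = 1 / np.sqrt(2)
psi_minus = {(("a", "H"), ("b", "V")): s, (("a", "V"), ("b", "H")): -s}
psi_plus  = {(("a", "H"), ("b", "V")): s, (("a", "V"), ("b", "H")):  s}

# psi-: every surviving term has one photon in c AND one in d (coincidence).
print(transform(psi_minus))
# psi+: the photons bunch into the same output port, so no coincidences.
print(transform(psi_plus))
```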

Another way to perform a joint measurement is with the aid of the inverse of a process that produces entangled states. In quantum optics, most entangled photon states are produced with the aid of a nonlinear optical process called parametric down-conversion. The inverse process is parametric up-conversion (also called sum-frequency generation). While down-conversion converts a single incoming photon into two photons that are entangled so as to maintain energy and momentum conservation, the up-conversion process takes two incoming photons and combines them into one photon. A successful up-conversion implies a projection onto an entangled state that maintains energy and momentum conservation. Therefore, it can also be used for quantum teleportation. However, the process of up-conversion is very inefficient.


A mechanism for quantum collapse

It is argued that the current impasse in fundamental physics is at least partially caused by the fact that there does not exist a credible understanding of the process of quantum collapse. This argument presupposes an interpretation of quantum mechanics that incorporates quantum collapse. Therein lies a dilemma: interpretations of quantum mechanics are generally non-scientific because, except for a few special cases, they are not falsifiable. Therefore, even if we were able to come up with a mechanism for quantum collapse, it would not form part of our scientific understanding, because it would not allow falsification.

However, there are interpretations of quantum mechanics that do not involve quantum collapse. They present a possible way out of this dilemma. Whatever reason one would have to introduce the notion of quantum collapse in the first place must somehow be reproduced in those interpretations of quantum mechanics that do not explicitly contain it. In other words, the observable phenomena that suggest a collapse mechanism must somehow be reproduced in interpretations without quantum collapse. The explanation of any mechanism that would reproduce such phenomena would therefore provide a kind of mechanism for quantum collapse without quantum collapse. In this case, the falsification takes on the lesser form of a retrodiction, where the observed phenomena are explained in terms of a successful theory. That successful theory is standard quantum mechanics. In other words, what we’ll do is to use standard quantum mechanics without introducing quantum collapse to explain those phenomena that seem to require quantum collapse.

If we do not want to introduce quantum collapse, we inevitably employ an interpretation of quantum mechanics that does not involve quantum collapse. The one that we use is the so-called many worlds interpretation. However, we first need to review it to remove some misconceptions. In effect, we’ll use a modified version of the many worlds interpretation.

The many worlds interpretation is often associated with a multiverse that is produced by the constant branching of a universe due to the quantum interactions that take place in that universe. This notion is misleading. Taking a good hard look at the quantum mechanics formalism, one should realize that it does not support the idea of a multiverse produced by constant branching. Instead, there is just one universe, but with an infinite multitude of “realities” that are associated with the infinite number of elements in the basis of the Hilbert space of this universe. The number of these realities never changes: there is no branching. Instead, the basis always consists of an infinite number of discrete elements. The only effect of quantum interactions is to change their relative probabilities. So, the many worlds (or the multiple realities) correspond to the terms in the superposition of all these basis elements forming the state of the universe. This state evolves in a unitary fashion that incorporates all the interactions that take place in the universe and thereby produces a constant variation over time in the coefficients of the terms in the superposition.
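A toy numpy sketch of this picture, in which a small finite basis stands in for the infinite set of realities and a random Hermitian generator stands in for the interactions (both are placeholders of my own choosing):

```python
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(seed=1)
n = 8                                   # toy stand-in for infinitely many realities

# State of the "universe": a superposition over a fixed set of basis elements.
coeffs = rng.normal(size=n) + 1j * rng.normal(size=n)
coeffs /= np.linalg.norm(coeffs)

# Interactions: unitary evolution generated by some Hermitian operator.
G = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
U = expm(-1j * (G + G.conj().T))

evolved = U @ coeffs
print(evolved.size == n)                          # no branching: still n terms
print(np.isclose(np.linalg.norm(evolved), 1.0))   # total probability preserved
print(np.abs(evolved)**2)                         # only the weights have changed
```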

With this picture in mind, we can consider what it means to have quantum collapse, or to understand any physical process that seems to suggest quantum collapse. As a first example of such a physical process, we use a historically relevant example presented by Albert Einstein during the 5th Solvay conference, which was held in Brussels in 1927. [For a transcript of the discussions at the 5th Solvay conference, see G. Bacciagaluppi and A. Valentini, Quantum Theory at the Crossroads: Reconsidering the 1927 Solvay Conference, Cambridge University Press (2009); arXiv:quant-ph/0609184.] At this conference, which played a prominent role in the development of quantum mechanics and the Copenhagen interpretation, Einstein described a scenario where an electron propagates toward a screen, on which it is registered as a single point of absorption. He then presented two possible ways to view the process that takes place, exemplifying the problem of maintaining energy conservation without introducing action at a distance. This example is an apt demonstration of one of the key problems with the notion of quantum collapse.

Attendees of the Fifth Solvay Conference

Before we deal with the understanding of this process in the context of multiple realities, we first need to remove an unfortunate tacit assumption that we find in the discussions of such experimental scenarios. These discussions mention a particle. What particle? Or perhaps one should ask: what is meant by the term particle? Does it refer to a localized lump of matter (or a dimensionless point) traveling on a world line? Or is it a more abstract notion associated with a finite mass and a discrete charge, without localization? I suspect that the former is implied in these discussions, because it would help to explain the localization of the observed absorption. However, what we see is a localized absorption and not a particle. The notion of something that is itself localized is not the only possible explanation for the localization of an absorption process. The latter can simply be the result of a localized mechanism for the absorption process. In the scenario, the screen is assumed to be a photographic material, presumably consisting of little silver crystals that can register the electron.

So, a slightly different picture from the one put forward by Einstein emerges. The electron is now represented by a wave function (not a particle) that propagates toward the screen. The screen contains numerous little crystals that can register the electron. However, assuming that the wave function represents only one electron, one would find only one such absorption event (otherwise we would have the problem of violating conservation principles). In terms of multiple realities, many of these crystals could serve to perform the absorption process. Each reality would correspond to a different crystal receiving the electron.

But how does the situation within a particular reality manage to localize the electron wave function without causing action at a distance? Remember that the different realities are associated with the different basis elements in the Hilbert space. The basis of the Hilbert space is not unique. One can define infinitely many different bases, each of which is related to any other basis by a unitary transformation. In general, there is nothing special about any particular basis. In other words, the separation of the wave function of the universe into a superposition of different realities can be changed via a unitary transformation into another set of realities, each of which is a combination of the previous set of realities. However, when it comes to a specific interaction process, such as the absorption of an electron by a specific crystal, there is a special basis that clarifies the process. This basis corresponds to the measurement basis of the crystal.

To understand what this measurement basis is, one can determine what electron wave function would have been radiated by the crystal if the absorption process were inverted as the adjoint process. At the fundamental level, all processes are invertible. It is just that the probability for the required initial conditions to produce a specific inverted process may be vanishingly small. We can nevertheless assume that these required conditions exist, so that they would produce the radiated wave function. The conjugate of the radiated wave function would describe the ideal measurement basis element for the absorption of an electron by this crystal. In effect, nature transforms the basis (the realities) into the measurement basis for the absorption of electrons by the crystals. All the different realities now correspond to different basis elements, each of which is associated with the absorption of the electron by a different crystal.

Obviously, all these basis elements are localized at their associated crystals. This localization is accomplished purely through a unitary transformation, without any funny action at a distance. The way that the unitary transformation accomplishes this localization is through constructive and destructive interference among the elements of any other basis that may initially have represented the multiple realities.
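A small numpy illustration of this point, with the discrete Fourier transform playing the role of the unitary that relates a delocalized basis to the localized measurement basis (the one-dimensional grid of crystals and the chosen coefficients are entirely schematic):

```python
import numpy as np

n = 64                                       # toy line of crystals on the screen
F = np.fft.fft(np.eye(n), norm="ortho")      # unitary DFT: columns are plane waves

# Each delocalized basis element (a plane wave) has equal magnitude at every
# crystal; nothing about it singles out a location.
print(np.allclose(np.abs(F[:, 3])**2, 1 / n))        # True

# The electron's wave function: a superposition of a few plane waves,
# written out as amplitudes over the crystal positions.
psi = 0.6 * F[:, 2] + 0.64j * F[:, 3] + 0.48 * F[:, 5]

# Re-expressing the state in the localized measurement basis (here simply
# the coordinate basis, one element per crystal) is a unitary change of
# description: the localization of each element arises from constructive
# and destructive interference among the plane waves, not from any action
# at a distance.
probs = np.abs(psi)**2                       # weight of each "crystal" reality
print(np.isclose(probs.sum(), 1.0))          # one electron: total probability 1
print(probs.argmax())                        # the crystal most likely to register it
```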

The different realities in the localized measurement basis have different coefficients associated with them in the superposition. Some of these coefficients would be larger than others, indicating which crystal is most likely to absorb the electron. It may be that one specific reality has a coefficient that completely dominates. In that case, although there are multiple realities, one specific reality would dominate. This dominant reality may be the one that we perceive as the single reality in our experience.

Hopefully, this understanding gives a feasible picture of the process whereby quantum collapse seems to take place. More can be said about this topic, but since this discussion has already been rather long, I’ll postpone further discussions for another day.


Testable proposals for the measurement problem

In a previous post, I made the statement: “Currently, there are no known experimental conditions that can distinguish between different interpretations of quantum mechanics.” Well, that is not exactly true. Perhaps one can argue that no experiment has yet been performed that conclusively ruled out or confirmed any of the interpretations of quantum mechanics. But the fact is that, recently, there have been some experimentally testable proposals. Still, I’m not holding my breath.

Recently, seeing one such proposal, I remembered that I also knew about another testable proposal, made by Lajos Diósi and Roger Penrose. The reason I forgot about it is probably that it seems to have some serious problems. At some point, during a conversation I had with Lajos, I told him I had a stupid question to ask him: does quantum collapse travel at the speed of light? His response was: that is not a stupid question. So, I concluded that it is not something that any of the existing collapse models can handle correctly. In fact, I don’t think any such model will ever be able to handle it in a satisfactory manner.

Thinking back to those discussions and the other bits and pieces I’ve read about the measurement problem, I tend to reconfirm my conviction that the simplest interpretation of quantum mechanics (and therefore the one most likely to be correct) is the so-called Many Worlds interpretation of quantum mechanics. However, the more I think about it, the more I believe that “many worlds” is a misnomer. It is not about many worlds or many universes that are constantly branching off to become disjoint universes.

Perhaps one can instead call it the “multiple reality” interpretation. But how would multiple realities be different from “many worlds”? The fact is that these realities are not disjoint, but form part of the unitary whole of a single universe. These realities can be combined into arbitrary superpositions. What is more, these realities are not branching off to produce more realities. The number of realities remains the same for all time. (There are actually an infinite number of them, but the cardinality of the set remains the same.) The interactions merely change the relative complex probability amplitudes of all the realities.

Anyway, I just thought I should clear this up. I don’t see myself ever writing a publication on this topic.
