Transcending the impasse, part IV

Planck’s constant

It all started with the work of Max Planck. He famously introduced the notion that the energy absorbed or emitted during an interaction is proportional to the frequency of the field being absorbed or emitted. The proportionality constant h is today considered a fundamental constant of nature. In honor of Max Planck, it is called Planck’s constant.
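In symbols, this is the familiar Planck relation

\[ E = h\nu , \]

where \nu is the frequency of the absorbed or emitted field and h is Planck’s constant.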

Max Planck, the father of quantum mechanics

The reason why we need to look at Planck’s constant in order to transcend the impasse in physics is that there seems to be some confusion about the role it plays in quantum mechanics. The confusion manifests itself in two aspects of quantum mechanics.

One of these aspects is related to the transition from quantum to classical physics, which we have considered before. It is assumed that one should recover classical physics from quantum physics by simply taking the limit where Planck’s constant goes to zero. Although this assumption is reasonable, it depends on where the constant shows up. One may think that the presence of Planck’s constant in expressions should be unambiguous. That turns out not to be the case.

An example is the commutation relation for spin operators. Often one finds that the commutator produces the spin operators multiplied by Planck’s constant. According to this practice, the limit where Planck’s constant goes to zero would imply that spin operators must commute in the classical theory, which is obviously not correct. Spin operators are the generators of three-dimensional rotations, which obey the same algebraic structure in classical theories as they do in quantum theories.

So when should there be a factor of Planck’s constant and when not? Perhaps a simple way to see it is this: if a redefinition of the quantities in an expression can be used to remove Planck’s constant from that expression, then it should not have been there in the first place.
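To make the spin example concrete (this is the standard textbook form of the relation, quoted here only as an illustration), the commutator is often written as

\[ [\hat{S}_i, \hat{S}_j] = i\hbar\, \epsilon_{ijk} \hat{S}_k . \]

Redefining the operators as \hat{S}_i = \hbar \hat{s}_i removes the constant completely,

\[ [\hat{s}_i, \hat{s}_j] = i\, \epsilon_{ijk} \hat{s}_k , \]

which is precisely the sense in which it should not have been there in the first place.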

Using this approach, one can consider what happens in the Hamiltonian or Lagrangian of a theory. Remember that both of these are divided by Planck’s constant, in the unitary evolution operator and in the path integral, respectively. One also finds that the quantization of the fields in these theories always contains a factor of the square root of Planck’s constant. If one pulls this factor out of the definition of the fields and makes it explicit in the expression for the theory, one finds that Planck’s constant cancels in all the free-field terms (the kinetic and mass terms). The only terms in either the Hamiltonian or the Lagrangian where Planck’s constant remains are the interaction terms. This brings us full circle to the reason why Max Planck introduced the constant in the first place. Planck’s constant is specifically associated with interactions.
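As a rough sketch of this bookkeeping (using a real scalar field with a quartic self-interaction purely as an illustration, not as any particular model), the path-integral phase is exp(iS/\hbar) with

\[ \frac{S}{\hbar} = \int \mathrm{d}^4x \left[ \frac{1}{2\hbar}(\partial\phi)^2 - \frac{\mu^2}{2\hbar}\phi^2 - \frac{\lambda}{4!\,\hbar}\phi^4 \right] , \]

where \mu denotes the inverse Compton wavelength of the field. Pulling the square root of Planck’s constant out of the field, \phi = \sqrt{\hbar}\,\tilde{\phi}, gives

\[ \frac{S}{\hbar} = \int \mathrm{d}^4x \left[ \frac{1}{2}(\partial\tilde{\phi})^2 - \frac{\mu^2}{2}\tilde{\phi}^2 - \frac{\lambda\hbar}{4!}\tilde{\phi}^4 \right] . \]

The kinetic and mass terms are now free of Planck’s constant; only the interaction term retains it.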

So if one sets Planck’s constant to zero in a theory, the result is that all the interactions are removed. One is left with a free-field theory without interactions, which is indistinguishable from a classical theory. Interactions are responsible for the changes in the number of particles, and that is where all the quantum effects that we observe come from.

The other confusion about Planck’s constant is related to the uncertainty principle. Here, the role that Planck’s constant plays is to relate two kinds of quantities: on the one hand, the conjugate variables on phase space, and on the other hand, the Fourier variables. Without this relationship, one still recovers the usual uncertainty relationships between Fourier variables in classical theories, but not between conjugate variables on phase space. Planck’s relationship transfers the uncertainty relationship between Fourier variables to the conjugate variables on phase space. So the uncertainty relationship is not itself a fundamental quantum mechanical principle. It is the Planck relationship that deserves that honor.
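Schematically, the Fourier relationship between a coordinate x and its Fourier variable, the wave number k, already gives

\[ \Delta x\, \Delta k \geq \tfrac{1}{2} \]

in any wave theory, classical or quantum. It is the Planck (de Broglie) relationship p = \hbar k that turns this into the phase-space statement

\[ \Delta x\, \Delta p \geq \tfrac{\hbar}{2} . \]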


Transcending the impasse, part III

Many-worlds interpretation

In my series on the impasse in physics and how to transcend it, I previously discussed the issue of classical vs quantum physics. Here, I want to talk about the interpretations of quantum mechanics.

There is much activity and debate on these interpretations. Part of it is related to the measurement problem. Is there such a thing as quantum collapse? How does it work?

David Mermin once said in an article in Physics Today that new interpretations are added every year and none has ever been ruled out. If this is true, then it indicates that the interpretations of quantum mechanics are not part of science, and therefore also not part of physics.

I am not going to say one should not work on such interpretations and try to make sense of what is going on, but the scientific method does not seem to help us here. Perhaps people will eventually come up with experiments to determine how nature works. I’ve seen some proposals, but they are usually associated with some new mechanisms, which in my view are unlikely to be correct.

It occurs to me that, while we cannot say which of the interpretations is correct, we may just as well pick one and work with it. So I pick the simplest one, and when I want to figure out how things will work out in one of these experiments, I can simply consider how they would work according to this interpretation. If such a prediction turns out to be wrong, it would show that this interpretation (and all those that made the same prediction) is wrong after all.

The simplest interpretation, as far as I am concerned, is the many-worlds interpretation. It is simple because it does not require the weird, unexplained notion of quantum collapse. People don’t like it because it seems to require such a large number of different worlds. For that reason it is also associated with the idea of a multiverse.

Hugh Everett III, the person who invented the many-worlds interpretation

Well no, those ideas are misleading anyway. In quantum mechanics, all interactions are described by unitary evolution. The picture it presents is that there is a set of states that the universe can take on. One can think of each such state as a different description of the world, hence “many worlds.” However, the actual state of the universe is a quantum superposition of all the possible worlds. In this superposition, each world is associated with a complex probability amplitude, which means that some worlds are more likely than others. During interactions these probability amplitudes change.

That is the whole idea of unitary evolution. All the possibilities are already present right from the start. The only thing that interactions do is to change the probability amplitudes that are associated with the different worlds. During the evolution in time, the different worlds in the superposition can experience constructive or destructive interference, which changes their probability amplitudes, making some less or more likely than they were before.
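In symbols (the notation |w_n\rangle for the basis of worlds is chosen here purely for illustration), the state of the universe is

\[ |\Psi(t)\rangle = \sum_n c_n(t)\, |w_n\rangle , \]

and unitary evolution only reshuffles the complex amplitudes,

\[ c_n(t) = \sum_m U_{nm}(t)\, c_m(0) , \]

without adding or removing terms; the interference among the amplitudes is what makes some worlds more or less likely.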

The number of worlds (the number of terms in the superposition) stays the same. They don’t increase as a result of interactions. How many such worlds are there? Well, if we look at the properties of the set of such basis states, the number is often assumed to be countably infinite. However, it may turn out to be uncountably infinite, having what is called the cardinality of the continuum.

What is more, these different worlds are not distinct, unique worlds. One can redefine the basis set of worlds by forming different superpositions of the worlds in the original set to get a new set in which the worlds now look different.
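For instance, a new basis of worlds can be written as

\[ |w^{\prime}_m\rangle = \sum_n V_{mn}\, |w_n\rangle , \]

with V any unitary matrix (again, the notation is only illustrative); the same state of the universe then appears as a superposition over a completely different-looking set of worlds.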

How does all this relate to what we see? The dynamics of the universe causes the interferences due to the unitary evolution to favor a small set of worlds that look very similar. This coherence in what the world looks like is a result of the constructive interference produced by the dynamics.

So the world that we see at a macroscopic level is not just one of these worlds. It is, in a sense, a conglomeration of all those worlds with large probability amplitudes. However, the differences among all these worlds are so small that we cannot notice them at a macroscopic level.

OK, not everything I said here can be confirmed in a scientific way. I cannot even prove that the many-worlds interpretation is correct. However, by thinking of it in this way, one can at least form an idea of it that makes sense.


Transcending the impasse, part II

Classical vs quantum

It is a strange thing. Why the obsession with something that, in the end, comes down to a rather artificial distinction? Nature is the way it is. There is no dualism in nature. The distinction we make between classical and quantum is just an artifact of the theoretical models we build to understand nature. Or is it?

Well, there is a history. It started with Einstein’s skepticism about quantum mechanics. Together with some co-workers, he eventually came up with a very good argument to justify the idea that quantum mechanics must be incomplete. At least, it seemed like a good argument until it was eventually shown to be wrong. It was found that the idea that quantum mechanics is incomplete and needs some extra hidden variables does not agree with experimental observations. The obsession with the distinction between what is classical and what is quantum is a remnant of this debate that originated with Einstein.

Today, we have a very successful formalism, which is simply called quantum mechanics, and can be used to model quantum phenomena. Strictly speaking, there are different versions of the quantum mechanics formalism, but they are all equivalent. The choice of specific formalism is usually based on convenience and personal taste.

Though Einstein’s issues with quantum mechanics may have been resolved, the mystery of what it really means remains. Therefore, many people are trying to probe deeper to find out why quantum mechanics works the way it does. However, despite all the probing, nothing seems to have been discovered that disagrees with the quantum mechanics formalism, which is by now almost a hundred years old. The strange concepts, such as entanglement, discord, and contextuality, that have been distilled from quantum physics turn out to be aspects that are already built into the quantum mechanics formalism. So, in effect, all the probing merely comes down to an attempt to understand the implications of the formalism. We do not uncover any new physics.

But now a new understanding is rearing its ugly head. It turns out that the quantum mechanics formalism is not only successful in situations where we are clearly dealing with quantum physics. It is equally successful in situations where the physical phenomena are clearly classical. The consequence is that many of the so-called quintessential quantum properties are actually properties of the formalism, and are for that reason also present when the formalism is applied to classical scenarios.

I’ll give two examples. The first is the celebrated concept of entanglement. It has now been shown that the non-separability which signals entanglement is also present in classical optical fields. The difference is that in classical fields it is restricted to local properties and cannot be separated over a distance as in the quantum case. This classical non-separability displays many of the features that were traditionally associated with quantum entanglement. Many people now impose a dogmatic restriction on the use of the term entanglement, reserving it for those cases where it is clearly associated with quantum phenomena.
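A commonly cited example of such a field (stated here schematically, not as any specific experiment) is a vector beam whose polarization and spatial profile cannot be factorized,

\[ \mathbf{E}(x,y) \propto \hat{\mathbf{e}}_H\, \psi_1(x,y) + \hat{\mathbf{e}}_V\, \psi_2(x,y) , \]

where \hat{\mathbf{e}}_H and \hat{\mathbf{e}}_V are orthogonal polarization vectors and \psi_1 and \psi_2 are orthogonal spatial modes. The two degrees of freedom are non-separable in exactly the formal sense of entanglement, yet they belong to a single, locally defined classical field.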

It does not serve the scientific community well to be dogmatic. It reminds us of the dogmatism that prevailed shortly after the advent of quantum mechanics. For a long while, any questioning of this dogma was simply not tolerated. It has led to a stagnation in progress in the understanding of quantum physics. Eventually, through the work of dissidents such as J. S. Bell, this stagnation was overthrown.

The other example is where certain properties of quasi-probability distributions are used as an indication of the quantum nature of a state. For instance, in the case of the Wigner distribution, the presence of negative values in the function is used as such an indication of its quantum nature. However, nothing prevents one from using the Wigner distribution for classical fields. One can, for instance, consider the mode profiles of classical optical beams. Some of these mode profiles produce Wigner distributions that take on negative values at certain points. Obviously, it would be misleading to use this as an indication of a quantum nature. So, to avoid this situation, one needs to impose the dogmatic restriction that this indication can only be used in those cases where the Wigner distribution is computed for a quantum state. But then the indication becomes somewhat circular, doesn’t it?
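As a concrete illustration (using the first-order Hermite–Gaussian mode profile as an example, in dimensionless position and spatial-frequency variables), the corresponding Wigner distribution is

\[ W_1(x,p) = \frac{1}{\pi}\left[ 2(x^2+p^2) - 1 \right] e^{-(x^2+p^2)} , \]

which is negative everywhere inside the circle x^2+p^2 < 1/2 and reaches -1/\pi at the origin, even though the mode profile itself is that of a perfectly classical optical beam.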

It occurs to me that the fact that we can use the quantum mechanics formalism in classical scenarios provides us with an opportunity to question our understanding of what it truly means to be quantum. What are the fundamental properties of nature that indicate scenarios that can be unambiguously identified as quantum phenomena? Through a process of elimination we may be able to arrive at such unambiguous properties. That may help us to see that the difference between the quantum nature of things and the classical nature of things is perhaps not as big as we thought.


Transcending the impasse, part I

The current impasse in fundamental physics stifles progress. The rate of advances in our understanding has slowed down. Although several exotic predictions have been made in recent years, none of them seems to be correct. Have we reached the end of our ability to learn more about the universe we live in?

It has been suggested that the way forward is to go back and fix what is wrong. Is there then something wrong with what we’ve learned before? Apparently yes. We are biased by what we think we know. It misleads us to conjure up theories that cannot work.

How is this possible? Would such misconceptions not have been ruled out by experimental observation? That’s the problem. Much of what we think we know never got tested by experimental observations.

As an example, one is reminded of all the aspects of quantum physics that are not currently understood. Yes, we know enough about quantum mechanics (the mathematical formalism) to do calculations. The problem is that we then go and interpret what we see. That part cannot be tested by experiments.

For example, in certain interpretations of quantum mechanics it is believed that the wave function collapses to produce (or because of) the result we observed. Nobody really knows how this works. This is the measurement problem, which is currently a hot topic in quantum foundations.

But is this even science? How is this going to help us move forward? It occurs to me that these types of problems require us to step out of this struggle and get some distance from it. I said elsewhere that wisdom is the path to knowledge. Perhaps we need to get the metaphysics right before we will be able to get the physics right. We need to separate that which we can learn from a scientific approach from that which cannot be investigated scientifically.

Perhaps there is not such a clear-cut distinction between those aspects of quantum physics that can and cannot be studied scientifically. However, it is not difficult to see where we are bound to waste much time with potentially limited or no advances.

In the following posts, I intend to address some specific aspects of the current impasse and how it impacts our current understanding. Although I’m not a fan of philosophy, some of these discussions may touch on some philosophical aspects of the topic – the metaphysics – insofar as it may show us the way.

Wisdom is the path to knowledge

As a physicist, I cherish the freedom that comes with the endeavor to uncover new knowledge about our physical world. However, it irks me when people include things in physics that do not qualify.

Physics is a science. As a science, it follows the scientific method. What this means is that, while one can use any conceivable method to come up with ideas for explaining the physical world, only those ideas that work survive to become scientific knowledge. How do we know that it works? We go and look! That means we make observations and perform experiments.

That is the scientific method. It has been like that for more than a few centuries. And it is still the way it is today. All this talk about compromising on the basics of the scientific method is annoying. If we start to compromise, then eventually we’ll end up compromising on our understanding of the physical world. The scientific method works the way it works because that is the only way we can know that our ideas work.

Some people want to go further and put restrictions on how one should come up with these ideas, or on what kind of ideas should be allowed the potential to become scientific knowledge even before they have been tested. There is the idea of falsifiability, as proposed by Karl Popper. It may be a useful idea, but sometimes it is difficult to say in advance whether an idea would be falsifiable. So, I don’t think one should be too exclusive. However, sometimes it is quite obvious that an idea can never be tested.

For example, the interior of a black hole cannot be observed in a way that will give us scientific knowledge about what is going on inside it. Nobody who has entered a black hole can come back with the experimental or observational evidence to tell us that the theory works. So, any theory about the inside of a black hole can never constitute scientific knowledge.

Now there is this issue of the interpretations of quantum mechanics. In a broader sense, it is included under the current studies of the foundations of quantum mechanics. A particular problem that is much talked about within this field is the so-called measurement problem. The question is: are these scientific topics? Will it ever be possible to test interpretations of quantum mechanics experimentally? Will we be able to study the foundations of quantum mechanics experimentally? Some aspects of it perhaps? What about the measurement problem? Are these topics to be included in physics, or is it perhaps better to just include them under philosophy?

Does philosophy ever lead to knowledge? No, probably not. However, it helps one to find the path to knowledge. If philosophy is considered to embody wisdom (it is the love of wisdom after all), then wisdom must be the path to knowledge. Part of this wisdom is also to know which paths do not lead to knowledge.

It then follows that one should probably not even include the study of the foundations of quantum mechanics under philosophy, because it is not about discovering which paths will lead to knowledge. It tries to achieve knowledge itself, even if it does not always follow the scientific method. Well, we argued that such an approach cannot lead to scientific knowledge. I guess a philosophical viewpoint would then tell us that this is not the path to knowledge after all.

Diversity of ideas

The prevailing “crisis in physics” has led some people to suggest that physicists should only follow a specific path in their research. It creates the impression that one person is trying to tell the entire physics community what they are allowed to do and what not. Speculative ideas are not to be encouraged. The entire physics research methodology needs to be reviewed.

Unfortunately, it does not work like that. One of the key underlying principles of the scientific method is the freedom that all people involved in it have to do whatever they like. It is the agreement between these ideas and what nature says that determines which ideas work and which do not. How one comes up with the ideas should not be restricted in any way.

This freedom is important, because nature is resourceful. From the history of science we learn that the ways people got those ideas that turned out to be right differ in all sorts of ways. If one starts to restrict the way these ideas are generated, one may end up empty handed.

Due to this diversity in the ways nature works, we need a diversity in perspectives to find the solutions. It is like a search algorithm in a vast energy landscape. One needs numerous diverse starting points to have any hope to find the global minimum.

Having said that, one does find that there are some guiding principles that have proven useful in selecting among various ideas. One is Occam’s razor. It suggests that one starts with the simplest explanation first. Nature seems to be minimalist. If we are trying to find an underlying system to explain a certain phenomenology, then the underlying system needs to be rich enough to produce the level of complexity that one observes in the phenomenology. However, it should not be too rich, leading to too much complexity. As an example, conjuring up extra dimensions to explain what we see produces too much complexity. Therefore, chances are that we don’t need them.

Another principle, which is perhaps less well known, is the minimum disturbance principle. It suggests that, when we find that something is wrong with our current understanding, it does not make sense to throw everything away and build up the whole understanding from scratch. Just fix that which is wrong.

Now, there are examples in the history of science where the entire edifice of existing theory in a particular field was changed to solve a problem. However, this only happens when observations that contradict the current theory start to accumulate. In other words, when there is a crisis.

Do we have such a kind of crisis at the moment? I don’t think so. The problem is not that the existing standard model of particle physics has all these predictions that contradict observations. The problem is precisely the opposite. It is very good at making predictions that agree with what we can observe. We don’t seem to see anything that can tell us what to do next. So, the effort to see what we can improve may well be beyond our capability.

The current crisis in physics may be because we are nearing the end of observable advances in our fundamental understanding. We may come up with new ideas, but we may be unable to get any more hints from experimental observation. In the end, we may not even be able to test these new ideas. This problem starts to enter the domain of what we see as the scientific method. Can we compromise on it?

That is a topic for another day.

Naturalness

One of the main objectives for the Large Hadron Collider (LHC) was to solve the problem of naturalness. More precisely, the standard model contains a scalar field, the Higgs field, that does not have a mechanism to stabilize its mass. Radiative corrections are expected to cause the mass to grow all the way to the cut-off scale, which is assumed to be the Planck scale. If the Higgs boson has a finite mass far below the Planck scale (as was found to be the case), then it seems that there must exist some severe fine tuning giving cancellations among the different orders of the radiative corrections. Such a situation is considered to be unnatural. Hence, the concept of naturalness.
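Schematically (with numerical factors and signs suppressed, just to show the structure of the problem), the radiative corrections shift the Higgs mass as

\[ m_H^2 \simeq m_0^2 + \frac{c}{16\pi^2}\,\Lambda^2 , \]

where \Lambda is the cut-off scale and c is a combination of couplings of order one. With \Lambda at the Planck scale and the observed Higgs mass near 125 GeV, the bare parameter m_0^2 has to cancel the correction to roughly thirty orders of magnitude, which is the severe fine tuning referred to above.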

It was believed, with a measure of certainty, that the LHC would give answers to the problem of naturalness, telling us how nature maintains the mass of the Higgs far below the cut-off scale. (I also held to such a conviction, as I recently discovered reading some old comments I made on my blog.)

Now, after the LHC has completed its second run, the hope that it would provide answers to the naturalness problem has been met with some disappointment (to put it mildly). What are we to conclude from this? There are those saying that the lack of naturalness in the standard model is not a problem. It is just the way it is. It is stated that the requirement for naturalness is an unjustified appeal to beauty.

No, no, no, it has nothing to do with beauty. At best, beauty is just a guide that people sometimes use to select the best option among a plethora of options. It falls in the same category as Occam’s razor.

On the other hand, naturalness is associated more with the understanding of scale physics. The way scales govern the laws of nature is more than just an appeal to beauty. It provides us with a means to guess what the dominant behavior of a phenomenon would be like, even when we don’t have an understanding of the exact details. As a result, when we see a mechanism that deviates from our understanding of scale physics, it gives a strong hint that there are some underlying mechanisms that we have not yet uncovered.

For example, in the standard model, the masses of the elementary particles range over several orders of magnitude. We cannot predict these mass values. They are dimensionful parameters that we have to measure. There is no fundamental scale parameter close to the masses that can give any indication of where they come from. Our understanding of scale physics tells us that there must be some mechanism that gives rise to these masses. To say that these masses are produced by the Yukawa couplings to the Higgs field does not provide the required understanding. It replaces one mystery with another. Why would such Yukawa couplings vary over several orders of magnitude? Where did they come from?
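To spell this out (these are the standard tree-level relations, quoted only for orientation), each fermion mass is set by its Yukawa coupling and the Higgs vacuum expectation value v \approx 246 GeV,

\[ m_f = \frac{y_f\, v}{\sqrt{2}} , \]

so a top quark at about 173 GeV requires y_t \approx 1, while an electron at 0.511 MeV requires y_e \approx 3\times 10^{-6}. The hierarchy of masses is simply relabelled as a hierarchy of dimensionless couplings.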

So the naturalness problem, which is part of a bigger mystery related to the mass scales in the standard model, still remains. The LHC does not seem to be able to give us any hints to solve this mystery. Perhaps another larger collider will.