Wisdom is the path to knowledge

As a physicist, I cherish the freedom that comes with the endeavor to uncover new knowledge about our physical world. However, it irks me when people include things in physics that do not qualify.

Physics is a science. As a science, it follows the scientific method. What this means is that, while one can use any conceivable method to come up with ideas for explaining the physical world, only those ideas that work survive to become scientific knowledge. How do we know that it works? We go and look! That means we make observations and perform experiments.

That is the scientific method. It has been like that for more than a few centuries. And it is still the way it is today. All this talk about compromising on the basics of the scientific method is annoying. If we start to compromise, then eventually we’ll end up compromising on our understanding of the physical world. The scientific method works the way it works because that is the only way we can know that our ideas work.

Some people want to go further and put restrictions on how one should come up with these ideas, or on what kind of ideas should be allowed any potential to become scientific knowledge even before they have been tested. There is the idea of falsifiability, as proposed by Karl Popper. It may be a useful idea, but sometimes it is difficult to say in advance whether an idea would be falsifiable. So, I don’t think one should be too exclusive. However, sometimes it is quite obvious that an idea can never be tested.

For example, the interior of a black hole cannot be observed in a way that would give us scientific knowledge about what is going on inside it. Nobody who has entered a black hole can come back with the experimental or observational evidence to tell us that the theory works. So, any theory about the inside of a black hole can never constitute scientific knowledge.

Now there is this issue of the interpretations of quantum mechanics. In a broader sense, it is included under the current studies of the foundations of quantum mechanics. A particular problem that is much talked about within this field is the so-called measurement problem. The question is: are these scientific topics? Will it ever be possible to test interpretations of quantum mechanics experimentally? Will we be able to study the foundations of quantum mechanics experimentally? Some aspects of it perhaps? What about the measurement problem? Are these topics to be included in physics, or is it perhaps better to just include them under philosophy?

Does philosophy ever lead to knowledge? No, probably not. However, it helps one to find the path to knowledge. If philosophy is considered to embody wisdom (it is the love of wisdom after all), then wisdom must be the path to knowledge. Part of this wisdom is also to know which paths do not lead to knowledge.

It then follows that one should probably not even include the study of the foundations of quantum mechanics under philosophy, because it is not about discovering which paths will lead to knowledge. It tries to achieve knowledge itself, even if it does not always follow the scientific method. Well, we argued that such an approach cannot lead to scientific knowledge. I guess a philosophical viewpoint would then tell us that this is not the path to knowledge after all.

Diversity of ideas

The prevailing “crisis in physics” has led some people to suggest that physicists should only follow a specific path in their research. It creates the impression that one person is trying to tell the entire physics community what they are allowed to do and what they are not. Speculative ideas are not to be encouraged. The entire physics research methodology needs to be reviewed.

Unfortunately, it does not work like that. One of the key underlying principles of the scientific method is the freedom that all people involved in it have to do whatever they like. It is the agreement between these ideas and what nature says that determines which ideas work and which do not. How one comes up with the ideas should not be restricted in any way.

This freedom is important, because nature is resourceful. From the history of science we learn that the ways people got those ideas that turned out to be right differ in all sorts of ways. If one starts to restrict the way these ideas are generated, one may end up empty-handed.

Due to this diversity in the ways nature works, we need a diversity of perspectives to find the solutions. It is like a search algorithm in a vast energy landscape. One needs numerous diverse starting points to have any hope of finding the global minimum.
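As a toy illustration of this point (the landscape function and all parameters here are my own invention, not anything from the text), a random-restart search over a rugged landscape succeeds largely because of the diversity of its starting points:

```python
import math
import random

def multi_start_search(f, dim, n_starts=200, n_steps=500, step=0.1):
    """Random-restart local search: many diverse starting points improve
    the odds that at least one lands in the global minimum's basin."""
    best_x, best_val = None, float("inf")
    for _ in range(n_starts):
        x = [random.uniform(-10, 10) for _ in range(dim)]  # diverse start
        val = f(x)
        for _ in range(n_steps):
            cand = [xi + random.gauss(0.0, step) for xi in x]  # small local move
            cand_val = f(cand)
            if cand_val < val:  # accept only downhill moves
                x, val = cand, cand_val
        if val < best_val:
            best_x, best_val = x, val
    return best_x, best_val

def rugged(x):
    """A landscape with many local minima; its global minimum value is 0."""
    return sum(xi * xi - 10.0 * math.cos(2.0 * math.pi * xi) + 10.0 for xi in x)

best_x, best_val = multi_start_search(rugged, dim=2)
```

A single starting point would usually get trapped in whichever local minimum is nearest; the restarts are what give the search any chance at the global one.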

Having said that, one does find that there are some guiding principles that have proven useful in selecting among various ideas. One is Occam’s razor. It suggests that one start with the simplest explanation first. Nature seems to be minimalist. If we are trying to find an underlying system to explain a certain phenomenology, then the underlying system needs to be rich enough to produce the level of complexity that one observes in the phenomenology. However, it should not be too rich, leading to too much complexity. For example, conjuring up extra dimensions to explain what we see produces too much complexity. Chances are that we don’t need them.

Another principle, which is perhaps less well known, is the minimum disturbance principle. It suggests that when we find that something is wrong with our current understanding, it does not make sense to throw everything away and build up the whole understanding from scratch. Just fix that which is wrong.

Now, there are examples in the history of science where the entire edifice of existing theory in a particular field is changed to solve a problem. However, this only happens when the observations that contradict the current theory start to accumulate. In other words, when there is a crisis.

Do we have such a kind of crisis at the moment? I don’t think so. The problem is not that the existing standard model of particle physics has all these predictions that contradict observations. The problem is precisely the opposite. It is very good at making predictions that agree with what we can observe. We don’t seem to see anything that can tell us what to do next. So, the effort to see what we can improve may well be beyond our capability.

The current crisis in physics may be because we are nearing the end of observable advances in our fundamental understanding. We may come up with new ideas, but we may be unable to get any more hints from experimental observation. In the end we may not even be able to test these new ideas. This problem starts to enter the domain of what we see as the scientific method. Can we compromise it?

That is a topic for another day.

Naturalness

One of the main objectives for the Large Hadron Collider (LHC) was to solve the problem of naturalness. More precisely, the standard model contains a scalar field, the Higgs field, that does not have a mechanism to stabilize its mass. Radiative corrections are expected to cause the mass to grow all the way to the cut-off scale, which is assumed to be the Planck scale. If the Higgs boson has a finite mass far below the Planck scale (as was found to be the case), then it seems that there must exist some severe fine tuning giving cancellations among the different orders of the radiative corrections. Such a situation is considered to be unnatural. Hence, the concept of naturalness.
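In the standard textbook estimate, the dominant one-loop contribution to the Higgs mass-squared comes from the top quark (with Yukawa coupling y_t) and grows with the square of the cut-off:

```latex
\delta m_H^2 \;\sim\; -\frac{3\,|y_t|^2}{8\pi^2}\,\Lambda^2 .
```

With the cut-off at the Planck scale (about 10^19 GeV) and the observed Higgs mass of about 125 GeV, the bare mass and the corrections would have to cancel to roughly one part in 10^34, which is the fine tuning referred to above.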

It was believed, with a measure of certainty, that the LHC would give answers to the problem of naturalness, telling us how nature maintains the mass of the Higgs far below the cut-off scale. (I also held to such a conviction, as I recently discovered reading some old comments I made on my blog.)

Now, after the LHC has completed its second run, it seems that the notion that it would provide answers for the naturalness problem is confronted with some disappointment (to put it mildly). What are we to conclude from this? There are those saying that the lack of naturalness in the standard model is not a problem. It is just the way it is. It is stated that the requirement for naturalness is an unjustified appeal to beauty.

No, no, no, it has nothing to do with beauty. At best, beauty is just a guide that people sometimes use to select the best option among a plethora of options. It falls in the same category as Occam’s razor.

On the other hand, naturalness is associated more with the understanding of scale physics. The way scales govern the laws of nature is more than just an appeal to beauty. It provides us with a means to guess what the dominant behavior of a phenomenon would be like, even when we don’t have an understanding of the exact details. As a result, when we see a mechanism that deviates from our understanding of scale physics, it gives a strong hint that there are some underlying mechanisms that we have not yet uncovered.

For example, in the standard model, the masses of the elementary particles range over several orders of magnitude. We cannot predict these mass values. They are dimensionful parameters that we have to measure. There is no fundamental scale parameter close to the masses that can give any indication of where they come from. Our understanding of scale physics tells us that there must be some mechanism that gives rise to these masses. To say that these masses are produced by the Yukawa couplings to the Higgs field does not provide the required understanding. It replaces one mystery with another. Why would such Yukawa couplings vary over several orders of magnitude? Where did they come from?
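To make this concrete: each fermion mass in the standard model is set by its Yukawa coupling y_f and the Higgs vacuum expectation value v ≈ 246 GeV,

```latex
m_f = \frac{y_f\, v}{\sqrt{2}} ,
\qquad y_t \approx 1 ,
\qquad y_e \approx 3\times 10^{-6} ,
```

so the hierarchy of masses, from the electron up to the top quark, is simply relabelled as a hierarchy of dimensionless couplings spanning some six orders of magnitude.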

So the naturalness problem, which is part of a bigger mystery related to the mass scales in the standard model, still remains. The LHC does not seem to be able to give us any hints to solve this mystery. Perhaps another larger collider will.

Mopping up

The particle physics impasse prevails. That is my impression, judging from the battles raging on the blogs.

Among these, I recently saw an interesting comment by Terry Bollinger to a blog post by Sabine Hossenfelder. According to Terry, the particle physics research effort lost track (missed the right turnoff) already in the 70’s. This opinion is in agreement with the apparent slowdown in progress since the 70’s. Apart from the fact that neutrinos have mass, we have not learned much more about fundamental physics since the advent of the standard model in the 70’s.

However, some may argue that the problem started even earlier. Perhaps just after the Second World War, because that was when the world woke up to the importance of fundamental physics. That was the point where vanity replaced curiosity as the driving force for research. The result was an increase in weird science – crazy predictions more intent on drawing attention than on increasing understanding.

Be that as it may. (I’ve written about that in my book.) The question is, what to do about that? There are some concepts in fundamental physics that are taken for granted, yet have never been established as scientific fact through a proper scientific process. One such concept pointed out by Terry is the behaviour of spacetime at the Planck scale.

Today the Planck scale is referred to as if it is established scientific fact, whereas it is in fact a hypothetical scale. The physical existence of the Planck scale has not been, and probably cannot be, confirmed through scientific experiments, at least not with our current capabilities. Chances are it does not exist.

The existence of the Planck scale is based on some other concepts that are also not scientific facts. One is the notion of vacuum fluctuations, a concept that is often invoked to come up with exotic predictions. What about the vacuum is fluctuating? It follows from a very simple calculation that the particle number of the vacuum state is exactly zero with zero uncertainty. So it seems that the notion of vacuum fluctuations is not as well understood as is generally believed.
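The simple calculation goes as follows. For a single mode, the number operator is built from the annihilation operator, and the vacuum state is annihilated by that operator, so both the mean and the variance of the particle number vanish:

```latex
\hat{N} = \hat{a}^\dagger \hat{a}, \qquad \hat{a}\,|0\rangle = 0
\quad\Rightarrow\quad
\langle 0|\hat{N}|0\rangle = 0, \qquad
\langle 0|\hat{N}^2|0\rangle = 0, \qquad
\Delta N = \sqrt{\langle\hat{N}^2\rangle - \langle\hat{N}\rangle^2} = 0 .
```

The second expectation value vanishes because the rightmost annihilation operator acts directly on the vacuum.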

Does it mean that we are doomed to wander around in a state of confusion? No, we just need to return to the basic principles of the scientific method.

So I propose a mopping-up exercise. We need to go back to what we understand according to the scientific method and then test those parts that we are not sure about using scientific experiments and observations. Those aspects that are not testable in a scientific manner need to be treated on a different level.

For instance, the so-called measurement problem involves aspects that are in principle not testable. As such, they belong to the domain of philosophy and should not be incorporated into our scientific understanding. There are things we can never know in a scientific manner and it is pointless to make them prerequisites for progress in our understanding of the physical world.

A brave new quantum world

They say one votes through one’s actions. Where you spend your money is where you cast your vote. For example, if you want to support the recycling effort, then you would buy only products that somehow support the recycling effort.

Would it be possible that someone may in this way cast a vote in favour of some human endeavour while not supporting that endeavour? Yes, that is often the case (I think) when it comes to earning your money. People find themselves in work situations where they are effectively supporting the activities of the organization that they work for even though in their hearts they are not really in favour of the activities associated with the organization.

For a long while I was under the impression that this situation is valid for the current so-called quantum revolution. There are many people doing research in this field, but I was doubtful whether they all really believe that this “revolution” is a real thing. There is a large amount of hype surrounding the expectations of these technologies. Most people working in this field must be aware of the fact that not all the promises are realistic.

Perhaps part of this notion was based on my own skepticism about this field. I was thinking that most, if not all, of these new quantum technologies are just activities being pursued for the sake of research funding and ego trips.

Well, now I’m starting to form a different picture of the situation. The difference comes from seeing the efforts made by commercial companies. The way that such companies approach the challenges is very different from the way that academic researchers do it. While the academic approach is often purely for the wow factor of what is being achieved, industry must do this in a sustainable way. When they market a product, it must work according to specification and keep on working for a reasonable time after being sold. As a result, industry is far more serious when it comes to the design and implementation of these systems than any academic researcher would ever be.

So it was when I saw the seriousness with which a commercial enterprise is addressing the challenges of quantum computing that I realized this is not going to be something that will just blow over after some time. We are very likely to see a world where quantum computers enter our lives in the not too distant future. Yes, there are still challenges, but they are not insurmountable. What the specific details of the technology are going to be I cannot tell you, but I can see that the way quantum computing is being addressed gives it a very high chance of success.

Are you ready for that? How is that going to change our lives? That remains to be seen.

Physics vs formalism

This is something I just have to get off my chest. It’s been bugging me for a while now.

Physics is the endeavour to understand the physical world. Mathematics is a powerful tool employed in this endeavour. It often happens that specific mathematical procedures are developed for specific scenarios found in physics. These developments then often lead to dedicated mathematical methods, even special notations, that we call formalisms.

The idea of a formalism is that it makes it easier for us to investigate physical phenomena belonging to a specific field. An example is quantum mechanics. The basic formalism was developed almost a hundred years ago. Since then, many people have investigated various sophisticated aspects of this formalism and placed it on a firm foundation. Books are dedicated to it and university courses are designed to teach students all the intricate details.

One can think of it almost like a kitchen appliance with a place to put in some ingredients, a handle to crank, and a slot at the bottom where the finished product will emerge once the process is completed. Beautiful!

So does this mean that we don’t need to understand what we are doing anymore? We simply need to put the initial conditions into the appropriate slot, the appropriate Hamiltonian into its special slot and crank away. The output should then be guaranteed to be the answer that we are looking for.

Well, it is like the old saying: garbage in, garbage out. If you don’t know what you are doing, you may be putting the wrong things in. The result would be a mess from which one cannot learn anything.

Actually, the situation is even more serious than this. For all the effort that has gone into developing the formalism (and I’m not only talking about quantum mechanics), it remains a human construct of what is happening in the real physical world. It inevitably still contains certain prejudices, left over as a legacy of the perspectives of the people that initially came up with it.

Take the example of quantum mechanics again. It is largely based on an operator called the Hamiltonian. As such, it displays a particular prejudice. It is manifestly non-relativistic. Moreover, it assumes that we know the initial state at a given time, for all space. We then use the Hamiltonian approach to evolve the state in time to see what one would get at some later point in time. But what if we know the initial state for all time, but not for all space and we want to know what the state looks like at other regions in space? An example of such a situation is found in the propagation of a quantum state through a random medium.
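To spell out the standard picture in question: given the state on a constant-time slice, the Hamiltonian generates its evolution to later times. A minimal sketch for a two-level system (the Hamiltonian matrix below is an arbitrary illustration, with hbar = 1):

```python
import numpy as np

# An arbitrary two-level Hamiltonian (hbar = 1); Hermitian by construction.
H = np.array([[1.0, 0.5],
              [0.5, -1.0]])

def evolve(psi0, H, t):
    """Return |psi(t)> = exp(-i H t) |psi(0)> via eigendecomposition."""
    E, V = np.linalg.eigh(H)  # H = V diag(E) V^dagger
    U = V @ np.diag(np.exp(-1j * E * t)) @ V.conj().T
    return U @ psi0

psi0 = np.array([1.0, 0.0], dtype=complex)  # initial state, known at t = 0
psi_t = evolve(psi0, H, t=2.0)              # state at a later time; the norm is preserved
```

The machinery takes data on a time slice and marches forward in time; it offers no comparable handle for the complementary problem, where the state is known on a spatial boundary for all time.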

Those who are dead set on the standard formal quantum mechanics procedure would try to convince you that the Hamiltonian formalism would still give you the right answer. Perhaps one can use some fancy manipulations of the input state in special cases to get situations where the Hamiltonian approach would work for this problem. However, even in such cases, the process becomes awkward and far from efficient. The result would also be difficult to interpret. But why would you want to do it this way in the first place? Is it so important that we always use the established formalism?

Perhaps you think we have no choice, but that is not true. We understand enough of the fundamental physics to come up with an efficient mathematical model for the problem, even though the result would not be recognizable as the standard formalism. Have we become so lazy in our thinking that we don’t want to employ our understanding of the fundamental physics anymore? Or have we lost our understanding of the basics to the point that we cannot do calculations unless we use the established formalism?

What would you rather sacrifice: the precise physical understanding or the established mathematical formalism? If you choose to sacrifice the former rather than the latter, then you are not a physicist, then you are a formalist! In physics, the physical understanding should always be paramount! The formalism is merely a tool with which we strive to increase our understanding. If the formalism is not appropriate for the problem, or does not present us with the most efficient way to do the computation, then by all means cast it aside without a second thought.

Focus on the physics, not on the formalism! There I’ve said it.

Art in research

Does it help to apply some form of creativity in scientific research? Stated differently, does creativity have any role to play in scientific research? I would like to think so.

At first one may think that creativity is only associated with the act of conjuring up things that don’t really exist. A painter paints a landscape scene and applies creativity to render the trees and the clouds in interesting ways. As such, they are different from the trees and clouds in the real scene. Insofar as the artist employs creativity, the result becomes different from reality.

If this is what creativity produces, then it would have no place in scientific research, because in this context we are not interested in anything that would deviate from reality. But creativity is not only about representing that which doesn’t exist. It can also be associated with a much more abstract activity.

When a theoretical researcher tries to come up with a model that describes an aspect of physical reality, he or she needs to create something that has not existed before. It is not initially known whether this model gives the correct description of reality. In that sense, one does not know whether it represents anything that is real. One would know that only after the model has been tested. But before that step can be taken, one needs to create the model. For this first step, the researcher is required to employ creativity.

The act of creating such a model is an act of bringing into existence something that has not existed before. The inspiration for this model may be obtained from other similar models or from models in unrelated fields of study. In the same way, artists get inspiration from the works of other artists. Regardless of the source of inspiration, the resulting model is novel in one way or another. That is where the creativity lies.

So, art and science are not that different after all. Both require the same mental faculties. Perhaps they just call it by different names.