Physics vs formalism

This is something I just have to get off my chest. It’s been bugging me for a while now.

Physics is the endeavour to understand the physical world. Mathematics is a powerful tool employed in this endeavour. It often happens that specific mathematical procedures are developed for specific scenarios found in physics. These developments then often lead to dedicated mathematical methods, even special notations, that we call formalisms.

The idea of a formalism is that it makes it easier for us to investigate physical phenomena belonging to a specific field. An example is quantum mechanics. The basic formalism was developed almost a hundred years ago. Since then, many people have investigated various sophisticated aspects of this formalism and placed it on a firm foundation. Books are dedicated to it and university courses are designed to teach students all the intricate details.

One can think of it almost like a kitchen appliance with a place to put in some ingredients, a handle to crank, and a slot at the bottom where the finished product will emerge once the process is completed. Beautiful!

So does this mean that we don’t need to understand what we are doing anymore? We simply need to put the initial conditions into the appropriate slot, the appropriate Hamiltonian into its special slot and crank away. The output should then be guaranteed to be the answer that we are looking for.

Well, it is like the old saying: garbage in, garbage out. If you don’t know what you are doing, you may be putting the wrong things in. The result would be a mess from which one cannot learn anything.

Actually, the situation is even more serious than this. For all the effort that has gone into developing the formalism (and I’m not only talking about quantum mechanics), it remains a human construct of what is happening in the real physical world. It inevitably still contains certain prejudices, left over as a legacy of the perspectives of the people that initially came up with it.

Take the example of quantum mechanics again. It is largely based on an operator called the Hamiltonian. As such, it displays a particular prejudice. It is manifestly non-relativistic. Moreover, it assumes that we know the initial state at a given time, for all space. We then use the Hamiltonian approach to evolve the state in time to see what we would get at some later point in time. But what if we know the initial state for all time, but not for all space, and we want to know what the state looks like in other regions of space? An example of such a situation is found in the propagation of a quantum state through a random medium.
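To make the contrast a bit more concrete, here is a rough sketch (the notation is generic and not tied to any particular system). The Hamiltonian formalism starts from a state given over all space at one instant and evolves it in time; for a time-independent Hamiltonian,

\[ i\hbar\,\partial_t\,\lvert\psi(t)\rangle = \hat{H}\,\lvert\psi(t)\rangle \quad\Rightarrow\quad \lvert\psi(t)\rangle = e^{-i\hat{H}t/\hbar}\,\lvert\psi(0)\rangle . \]

In a propagation problem one instead knows the state on an input plane, say \(z=0\), for all time, and wants it on planes further along, which calls for an evolution equation in \(z\) rather than in \(t\), schematically

\[ i\,\partial_z\,\psi = \hat{K}\,\psi , \]

where \(\hat{K}\) stands for whatever generates translations along \(z\) in the problem at hand (in free-space paraxial optics, for instance, it involves the transverse Laplacian). The roles of time and a spatial coordinate are effectively interchanged, which is not what the standard formalism was set up to handle.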

Those that are completely sold on the standard formal quantum mechanics procedure would try to convince you that the Hamiltonian formalism would still give you the right answer. Perhaps one can use some fancy manipulations of the input state in special cases to get situations where the Hamiltonian approach would work for this problem. However, even in such cases, the process becomes awkward and far from efficient. The result would also be difficult to interpret. But why would you want to do it this way in the first place? Is it so important that we always use the established formalism?

Perhaps you think we have no choice, but that is not true. We understand enough of the fundamental physics to come up with an efficient mathematical model for the problem, even though the result would not be recognizable as the standard formalism. Did we become so lazy in our thoughts that we don’t want to employ our understanding of the fundamental physics anymore? Or did we lose our understanding of the basics to the point that we cannot do calculations unless we use the established formalism?

What would you rather sacrifice: the precise physical understanding or the established mathematical formalism? If you choose to sacrifice the former rather than the latter, then you are not a physicist; you are a formalist! In physics, the physical understanding should always be paramount! The formalism is merely a tool with which we strive to increase our understanding. If the formalism is not appropriate for the problem, or does not present us with the most efficient way to do the computation, then by all means cast it aside without a second thought.

Focus on the physics, not on the formalism! There I’ve said it.

Art in research

Does it help to apply some form of creativity in scientific research? Stated differently, does creativity have any role to play in scientific research? I would like to think so.

At first one may think that creativity is only associated with the act of conjuring up things that don’t really exist. A painter paints a landscape scene and applies creativity to render the trees and the clouds in interesting ways. As such, they are different from the trees and clouds in the real scene. Insofar as the artist employs creativity, the result becomes different from reality.

If this is what creativity produces, then it would have no place in scientific research, because in this context, we are not interested in anything that would deviate from reality. But creativity is not only about representing that which doesn’t exist. It can also be associated with a much more abstract activity.

When a theoretical researcher tries to come up with a model that describes an aspect of physical reality, he or she needs to create something that has not existed before. It is not initially known whether this model gives the correct description of reality. In that sense, one does not know whether it represents anything that is real. One would know that only after the model has been tested. But before that step can be taken, one needs to create the model. For this first step, the researcher is required to employ creativity.

The act of creating such a model is an act of bringing into existence something that has not existed before. The inspiration for this model may be obtained from other similar models or from other models in unrelated fields of study. In the same way, artists get inspiration from the works of other artists. Regardless of the source of inspiration, the resulting model is novel in one way or another. That is where the creativity lies.

So, art and science are not that different after all. Both require the same mental faculties. Perhaps they just call it by different names.

Particle physics impasse

Physics is the study of the physical universe. As a science, it involves a process consisting of two components. The theoretical component strives to construct theoretical models for the physical phenomena that we observe. The experimental component tests these theoretical models and explores the physical world for more information about phenomena.

Progress in physics is enhanced when many physicists using different approaches tackle the same problem. The diversity in the nature of problems needs to be confronted by a diversity of perspectives. This diversity is reflected in the literature. The same physical phenomenon is often studied with different approaches, using different mathematical formulations. Some of them may turn out to produce the same results, but some may differ in their predictions. The experimental work can then be used to make a selection among them.

That is all fine and dandy for physics in general, but the situation is a bit more complicated for particle physics. Perhaps the reason for all these complications is that particle physics is running out of observable energy space.

What do I mean by that? Progress in particle physics is (to some extent at least) indicated by understanding the fundamental mechanisms of nature at progressively higher energy scales. Today, we understand these fundamental mechanisms to a fairly good degree up to the electroweak scale (at about 200 GeV). It is described by the Standard Model, which was established during the 1970s. So, for the past four decades, particle physicists have tried to extend that understanding beyond this scale. Various theoretical ideas were proposed, prominent among them the idea of supersymmetry. Then a big experiment, the Large Hadron Collider (LHC), was constructed to test these ideas above the electroweak scale. It discovered the Higgs boson, which was the last outstanding particle predicted by the Standard Model. But no supersymmetry. In fact, none of the other ideas panned out at all. So there is a serious back-to-the-drawing-board situation going on in particle physics.

The problem is, the LHC did not discover anything else that could give a hint at what is going on up there, or did it? There will be another run to accumulate more data. The data still needs to be analyzed. Perhaps something can still emerge. Who knows? However, even if some new particle is lurking within the data, it becomes difficult to see. Such particles tend to be more unstable at those higher energies, leading to very broad peaks. To make things worse, there is so much more background noise. This makes it difficult, even unlikely, that such particles can be identified at these higher energies. At some point, no experiment would be able to observe such particles anymore.

The interesting thing about the situation is the backlash that one reads about in the media. The particle physicists are arguing among themselves about the reason for the current situation and what the way forward should be. There are those that say that the proposed models were all a bunch of harebrained ideas that were then hyped, and that we should not build any new colliders until we have done some proper theoretical work first.

See, the problem with building new colliders is the cost involved. It is not like other fields of physics where the local funding organization can support several experimental groups. These colliders require several countries to pitch in to cover the cost. (OK, particle physics is not the only field with such big ticket experiments.)

The combined effect of the unlikelihood of observing new particles at higher energies and the cost involved in building new colliders at higher energies creates an impasse in particle physics. Although they may come up with marvelous new theories for the mechanisms above the electroweak scale, it may be impossible to see whether these theories are correct. Perhaps the last energy scale below which we will be able to understand the fundamental mechanisms in a scientific manner will turn out to be the electroweak scale.

Glad I did not stay in particle physics.

How far away is that star?

On a clear night, far away from the city lights, one can look up and enjoy the beauty of the starry sky. This display must have enticed people for as long as people have existed, and I’m sure the question has often come up: how far away are those stars?

Well, there is an interesting tale of discovery related to the progression of measuring sticks that allow us to determine the distances to astronomical objects. Part of this tale is how Edwin Hubble discovered that the universe is expanding.

The realization that we live in an expanding universe complicates the answer to the question of how far away astronomical objects are. Apart from the fact that the distances change, there is also the issue of what distance we observe at a given point in time. If I use the apparent brightness of a star with a known absolute brightness, then one may think (at least I would have) that the implied distance is between us (the earth) and the location of the star at the time the light was emitted. This is not the case.

Diagram of light from a star or galaxy propagating to be observed on earth

The above diagram tries to explain what happens. The black dots represent a star or galaxy (the source of the light) at different locations in an expanding universe. The blue dot is the earth, which is kept at a fixed location in the expanding universe. The red circles represent the expanding sphere of light after being emitted by the source at some point in the past. Assuming that the universe expands uniformly, we see that the source always remains at the center of the expanding sphere. Moreover, since the observed apparent brightness is given by the total emitted power divided by the total surface area of the sphere, the associated distance is the distance from the earth to the current location of the source. This is called the proper distance to the source.
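In terms of a simple formula, this is just the inverse-square law, with the sphere evaluated at the moment of observation (a naive sketch that ignores redshift corrections to the received power):

\[ F = \frac{L}{4\pi d^2} , \]

where \(F\) is the observed apparent brightness (flux), \(L\) is the total emitted power, and \(d\) is the radius of the light sphere when it reaches us, i.e. the proper distance from the earth to the current location of the source.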

Amazing, we are able to know the distance to an object at its current location even if we cannot see that object now. Who knew?

Neutrino dust

It is the current understanding that the universe came into being in a hot big bang event. All matter initially existed as a very hot “soup” (or plasma) of charged particles – protons and electrons. Neutral atoms (mostly hydrogen) only appeared after the soup cooled off a bit. At that point, the light that was produced by the thermal radiation of the hot matter had a chance to escape being directly re-absorbed.

Much of that light is still around today. We call it the microwave background radiation, because today that light has turned into microwave radiation as a result of being extremely red-shifted toward low frequencies. This extreme red-shift is caused by the expansion of the universe that has taken place since the origin of the microwave background radiation.
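Quantitatively, the wavelength of the light is stretched by the same factor by which the universe has expanded since the light was released (the numbers below are the usual rough values):

\[ \lambda_{\mathrm{obs}} = (1+z)\,\lambda_{\mathrm{emit}} , \qquad 1+z = \frac{a_{\mathrm{now}}}{a_{\mathrm{emit}}} , \]

where \(a\) is the scale factor of the universe. For the microwave background \(z \approx 1100\), so thermal radiation emitted at roughly 3000 K now looks like thermal radiation at about 2.7 K, which peaks in the microwave region.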

It is reasonable to assume that the very energetic conditions that existed during the big bang would have caused some of the hydrogen nuclei (protons) to combine in a fusion process to form helium nuclei. At the same time, some of the protons are converted to neutrons. The weak interaction mediates this process and it produces a neutrino, the lightest matter particle (fermion) that we know of.

So what happened to all these neutrinos? They were emitted at the same time as, or even before, the light that became the microwave background radiation. Since neutrinos are so light, their velocities are close to the speed of light. While the expansion of the universe causes the light to be red-shifted, it also causes the neutrinos, which have a small mass, to be slowed down. (Light never slows down; it always propagates at the speed of light.) Eventually these neutrinos are so slow that they are effectively stationary with respect to the local region in space. At this point they become dust, drifting along aimlessly in space.
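One way to see why the expansion slows them down: the momentum of a freely propagating particle red-shifts with the scale factor, just like that of a photon, so (in a rough sketch)

\[ p \propto \frac{1}{a} , \qquad v = \frac{p c^2}{\sqrt{p^2 c^2 + m^2 c^4}} . \]

For a photon (\(m=0\)) the speed remains \(c\) and only the energy drops, but for a massive neutrino the velocity falls toward zero as the momentum is red-shifted away. That is the slow-down described above.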

Still, since they do have mass, the neutrinos will be attracted by massive objects like the galaxies. So, the moment their velocities fall below the escape velocity of a nearby galaxy, they will become gravitationally bound to that galaxy. However, since they do not interact very strongly with matter, they will keep on orbiting these galaxies. So the neutrino dust will become clouds of dust in the vicinity of galaxies.

Hubble Space Telescope observes diffuse starlight in Galaxy Cluster Abell S1063. Credit: NASA, ESA, and M. Montes (University of New South Wales)

Could the neutrino dust be the dark matter that we are looking for? Due to their small mass and the ratio of protons to neutrons in the universe, it is unlikely that there would be enough neutrinos to account for the missing mass attributed to dark matter. The ordinary neutrino dust would contribute to the effect of dark matter, but may not solve the whole problem.
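A rough back-of-the-envelope version of this statement uses the standard estimate for the abundance of relic neutrinos (a slightly different counting from the proton-to-neutron argument above, but it leads to the same conclusion; the numbers are approximate):

\[ \Omega_\nu h^2 \approx \frac{\sum m_\nu}{93~\mathrm{eV}} , \]

so even if the neutrino masses add up to a few tenths of an eV, the relic neutrinos contribute only a fraction of a percent of the critical density, while the dark matter requires roughly \(\Omega h^2 \approx 0.12\). Ordinary neutrino dust therefore falls short by one to two orders of magnitude.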

There are some speculations that the three known neutrinos may not be the only neutrinos that exist. Some theories also consider the possibility that an additional sterile neutrino exists. These sterile neutrinos could have large masses. For this reason, they have been considered as candidates for the dark matter. How these heavy neutrinos would have been produced is not clear, but, if they were produced during the big bang, they would also have undergone the same slow-down and eventually been converted into dust. So, it could be that there are a lot of them drifting around aimlessly through space.

Interesting, don’t you think?

The importance of falsifiability

Many years ago, while I was still a graduate student studying particle physics, my supervisor Bob was very worried about supersymmetry. He was particularly worried that it would become the accepted theory without ever being properly tested.

In those days, it was almost taken for granted that supersymmetry was the correct theory. Since he came from the technicolour camp, Bob did not particularly like supersymmetry. Unfortunately, at that point, the predictions of the technicolour models did not agree with experimental observations. So it was not seriously considered as a viable theory. Supersymmetry, on the other hand, had enough free parameters that it could sidestep any detrimental experimental results. This ability to dodge results and constantly hide itself made supersymmetry look like a theory that could never be ruled out. Hence my supervisor’s concern.

Today the situation is much different. As the Large Hadron Collider accumulated data, it could systematically rule out progressively larger energy ranges where the supersymmetric particles could hide. Eventually, there was simply no place to hide anymore. At least those versions of supersymmetry that rely on a stable superpartner that must exist at the electroweak scale have been ruled out. For most particle physicists this seems to indicate that supersymmetry as a whole has been ruled out. But of course, there are still those that cling to the idea.

So, in hindsight, supersymmetry was falsifiable after all. For me this whole process exemplifies the importance of falsifiability. Imagine that supersymmetry could keep on hiding. How would we know if it is right? The reason why so many physicists believed it must be right is because it is “so beautiful.” Does beauty in this context imply that a theory must be correct? Evidently not. There is no alternative to experimental testing to know whether a scientific theory is correct.

This brings me to another theory that is believed to be true simply because it is considered so beautiful that it must be correct. I’m talking of string theory. In this case there is a very serious issue about the falsifiability of the theory. String theory addresses physics at the Planck scale. However, there does not exist any conceivable way to test physics at this scale.

Just to avoid any confusion about what I mean by falsifiable: There are those people that claim that string theory is falsifiable. It is just not practically possible to test it. Well, that is missing the point now, isn’t it? The reason for falsifiability is to know if the theory is right. It does not help if it is “in principle” falsifiable, because then we won’t be able to know if it is right. The only useful form of falsifiability is when one can physically test it. Otherwise it is not interesting from a scientific point of view.

Having said that, I do not think one should dictate to people what they are allowed to research. We may agree about whether it is science or not, but if somebody wants to investigate something that we do not currently consider as scientific, then so be it. Who knows, one day that research may somehow lead to research that is falsifiable.

There is of course the whole matter of whether such non-falsifiable research should be allowed to receive research funding. However, the matter of how research should be funded is a whole topic on its own. Perhaps for another day.

Particle physics blues

The Large Hadron Collider (LHC) recently completed its second run. While the existence of the Higgs boson was confirmed during the first run, the outcome from the second run was … well, shall we say somewhat less than spectacular. In view of the fact that the LHC carries a pretty hefty price tag, this rather disappointing state of affairs is producing a certain degree of soul searching within the particle physics community. One can see that from the discussions here and here.

CMS detector at the LHC (from Wikipedia)

So what went wrong? Judging from the discussions, one may guess it could be a combination of things. Perhaps it is all the hype that accompanies some of the outlandish particle physics predictions. Or perhaps it is the overly esoteric theoretical nature of some of the physics theories. String theory seems to be singled out as an example of a mathematical theory without any practical predictions.

Perhaps the reason for the current state of affairs in particle physics is none of the above. Reading the above-mentioned discussions, one gets the picture from those that are close to the fire. Sometimes it helps to step away and look at the situation from a little distance. Could it be that, while these particle physicists vehemently analyze all the wrong ideas and failed approaches that emerged over the past few decades (even starting to question one of the foundations of the scientific method: falsifiability), they are missing the elephant in the room?

The field of particle physics has been around for a while. It has a long history of advances: from uncovering the structure of the atom, to revealing the constituents of protons and neutrons. The culmination is the Standard Model of Particle Physics – a truly remarkable edifice of our current understanding.

So what now? What’s next? Well, the standard model does not include gravity. So there is still a strong effort to come up with a theory that would unify gravity with the other forces currently included in the standard model. That is the main motivation behind string theory. There’s another issue. The standard model lacks something called naturalness. The main motivation for the LHC was to address this problem. Unfortunately, the LHC has not been able to solve the issue and it seems unlikely that it, or any other collider, ever will. Perhaps that alludes to the real issue.

Could it be that particle physics has reached the stage where the questions that need answers cannot be answered through experiments anymore? The energy scales where the answers to these questions would be observable are just too high. If this is indeed the case, it would mark the end of particle physics as we know it. It would enter a stage of unverifiable philosophy. One may be able to construct beautiful mathematical theories to address the remaining questions. But one would never know whether these theories are correct.

What then?