lundi 27 février 2017

(Celebrating fifty years of) electroweak symmetry breaking theory

This is the third fragment of my Lover's dictionary of Spectral Physics, for the new entry:


Electroweak symmetry breaking theory 

The Standard Model wins all the battles 

Yes but only those requiring a limited weapon finesse ;-)
a grand unified theory building gamer troll



This important part of the Standard Model is fifty years old this year 2017 and it is still undefeated experimentally by LHC Run 2. Yet it is far from having been thoroughly tested in its full Standard Model version, as you will read below:

Spontaneous symmetry breaking occurs when the ground state or vacuum, or equilibrium state of a system does not share the underlying symmetries of the theory. It is ubiquitous in condensed matter physics, associated with phase transitions. Often, there is a high-temperature symmetric phase and a critical temperature below which the symmetry breaks spontaneously. A simple example is crystallization. If we place a round bowl of water on a table, it looks the same from every direction, but when it freezes the ice crystals form in specific orientations, breaking the full rotational symmetry. The breaking is spontaneous in the sense that, unless we have extra information, we cannot predict in which directions the crystals will line up... In 1960, Nambu [12] pointed out that gauge symmetry is broken in a superconductor when it goes through the transition from normal to superconducting, and that this gives a mass to the plasmon, although this view was still quite controversial in the superconductivity community (see also Anderson [13]). Nambu suggested that a similar mechanism might give masses to elementary particles... The next year, with Jona-Lasinio [14], he proposed a specific model, though not a gauge theory... 
The model had a significant feature, a massless pseudoscalar particle, which Nambu and Jona-Lasinio tentatively identified with the pion. To account for the non-zero (though small) pion mass, they suggested that the chiral symmetry was not quite exact even before the spontaneous symmetry breaking. Attempts to apply this idea to symmetry breaking of fundamental gauge theories however ran into a severe obstacle, the Goldstone theorem... the spontaneous breaking of a continuous symmetry often leads to the appearance of massless spin-0 particles. The simplest model that illustrates this is the Goldstone model [15]... 
The appearance of the... massless spin-zero Nambu–Goldstone bosons was believed to be an inevitable consequence of spontaneous symmetry breaking in a relativistic theory; this is the content of the Goldstone theorem. That is a problem because such massless particles, if they had any reasonable interaction strength, should have been easy to see, but none had been seen...  
This problem was obviously of great concern to all those who were trying to build a viable gauge theory of weak interactions. When Steven Weinberg came to spend a sabbatical at Imperial College in 1961, he and Salam spent a great deal of time discussing the obstacles. They developed a proof of the Goldstone theorem, published jointly with Goldstone [16]...
Spontaneous symmetry breaking implied massless spin-zero bosons, which should have been easy to see but had not been seen. On the other hand adding explicit symmetry-breaking terms led to non-renormalizable theories predicting infinite results. Weinberg commented ‘Nothing will come of nothing; speak again’, a quotation from King Lear. Fortunately, however, our community was able to speak again...
The {Goldstone theorem} argument fails in the case of a gauge theory, for quite subtle reasons ... {its} proof is valid, but there is a hidden assumption which, though seemingly natural, is violated by gauge theories. This was discovered independently by three groups, first Englert and Brout from Brussels [19], then Higgs from Edinburgh [20, 21] and finally Guralnik, Hagen and myself from Imperial College [22]. All three groups published papers in Physical Review Letters during the summer and autumn of 1964... 
The 1964 papers from the three groups attracted very little attention at the time. Talks on the subject were often greeted with scepticism. By the end of that year, the mechanism was known, and Glashow’s (and Salam and Ward’s) SU(2) × U(1) model was known. But, surprisingly perhaps, it still took three more years for anyone to put the two together. This may have been in part at least because many of us were still thinking primarily of a gauge theory of strong interactions, not weak...
In early 1967, I did some further work on the detailed application of the mechanism to models with larger symmetries than U(1), in particular on how the symmetry breaking pattern determines the numbers of massive and massless particles [23]. I had some lengthy discussions with Salam on this subject, which I believed helped to renew his interest in the subject. A unified gauge theory of weak and electromagnetic interactions of leptons was first proposed by Weinberg later that year [24]. Essentially the same model was presented independently by Salam in lectures he gave at Imperial College in the autumn of 1967 — he called it the electroweak theory. (I was not present because I was in the United States, but I have had accounts from others who were.) Salam did not publish his ideas until the following year, when he spoke at a Nobel Symposium [25], largely perhaps because his attention was concentrated on the development in its crucial early years of his International Centre for Theoretical Physics in Trieste. Weinberg and Salam both speculated that their theory was renormalizable, but they could not prove it. An important step was the working out by Faddeev and Popov of a technique for applying Feynman diagrams to gauge theories [26]. Renormalizability was finally proved by a young student, Gerard ’t Hooft [27], in 1971, a real tour de force using methods developed by his supervisor, Martinus Veltman, especially the computer algebra programme Schoonship. 
In 1973, the key prediction of the electroweak theory, the existence of the neutral current interactions — those mediated by Z0 — was confirmed at CERN [28]...The next major step was the discovery of the W and Z particles at CERN in 1983 [29, 30]... 
In 1964, or 1967, the existence of a massive scalar boson had been a rather minor and unimportant feature. The important thing was the mechanism for giving masses to gauge bosons and avoiding the appearance of massless Nambu–Goldstone bosons. But after 1983, the Higgs boson began to assume a key importance as the only remaining undiscovered piece of the standard-model jigsaw — apart that is from the last of the six quarks, the top quark. The standard model worked so well that the Higgs boson, or something else doing the same job, more or less had to be present. Finding the boson was one of the main motivations for building the Large Hadron Collider (LHC) at CERN. Over a period of more than twenty years, the two great collaborations, ATLAS and CMS, have designed, built and operated their two huge and massively impressive detectors. As is by now well known, their efforts were rewarded in 2012 by the unambiguous discovery of the Higgs boson by each of the two detectors [31, 32].
History of electroweak symmetry breaking, T.W.B. Kibble (2015)
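A minimal sketch, in the simplest Abelian case, of the Goldstone model and the 1964 mechanism that Kibble recounts (standard textbook material with conventions chosen for illustration, not part of Kibble's text):

```latex
% Goldstone model: a complex scalar with a "Mexican hat" potential
\mathcal{L} \;=\; \partial_\mu\phi^{*}\,\partial^{\mu}\phi \;-\; V(\phi),
\qquad V(\phi) \;=\; \mu^{2}\,\phi^{*}\phi \;+\; \lambda\,(\phi^{*}\phi)^{2},
\quad \mu^{2}<0 .
% The vacuum sits at |\phi| = v/\sqrt{2} with v = \sqrt{-\mu^{2}/\lambda}.
% Expanding \phi = (v + h + i\chi)/\sqrt{2} gives
%   m_h^2 = 2\lambda v^2  (massive scalar),   m_\chi = 0  (Nambu-Goldstone boson).
% Gauging the U(1) with D_\mu = \partial_\mu - i e A_\mu, the would-be Goldstone
% \chi is absorbed by the gauge field, which acquires the mass m_A = e v.
```

In the electroweak theory the same pattern, applied to an SU(2) × U(1) doublet, gives masses to the W and Z while leaving the photon massless.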

I think it is fair to complete the previous experimental success story of the electroweak symmetry breaking theory with the following facts:
... in computing the theoretical predictions [of the Standard Model], one should include also the strong interactions, so the model is really the gauge theory of the group U(1)×SU(2)×SU(3). Here we shall present only a list of the most spectacular successes in the electroweak sector:
...
• The discovery of charmed particles at SLAC in 1974–1976. Their characteristic property is to decay predominantly into strange particles.
• A necessary condition for the consistency of the Model is that ∑i Qi = 0 inside each family. When the τ lepton was discovered, the b and t quarks were predicted with the right electric charges.
...
• The t-quark was seen at LEP through its effects in radiative corrections before its actual discovery at Fermilab.
• An impressive series of experiments has tested the Model at a level such that the weak interaction radiative corrections are important.
John Iliopoulos, 2016
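The consistency condition quoted in the second bullet above can be checked in a couple of lines of exact arithmetic (a sketch for one family; the factor 3 counts quark colours):

```python
from fractions import Fraction as F

# Electric charges of the first standard-model family, in units of |e|.
up, down = F(2, 3), F(-1, 3)      # u and d quarks, each in 3 colours
electron, neutrino = F(-1), F(0)  # charged lepton and its neutrino

# Anomaly-cancellation condition: the charges sum to zero inside each family.
family_charge_sum = 3 * (up + down) + electron + neutrino
print(family_charge_sum)  # 0
```

The same sum vanishes for the (c, s, µ, νµ) and (t, b, τ, ντ) families, which is why the discovery of the τ lepton called for the b and t quarks with exactly these charges.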



And now, for a nice outlook on the 125 GeV Higgs boson discovery, let us read an eminent supervisor of the TeV scale physics exploration using hadron colliders:

The most succinct summary we can give is that the data from the ATLAS and CMS experiments are developing as if electroweak symmetry is broken spontaneously through the work of elementary scalars, and that the emblem of that mechanism is the standard-model Higgs boson... 
As one measure of the progress the discovery of the Higgs boson represents, let us consider some of the questions I posed before the LHC experiments ... 
1. What is the agent that hides the electroweak symmetry? Specifically, is there a Higgs boson? Might there be several? 
To the best of our knowledge, H(125) displays the characteristics of a standard model Higgs boson, an elementary scalar. Searches will continue for other particles that may play a role in electroweak symmetry breaking. 
2. Is the “Higgs boson” elementary or composite? How does the Higgs boson interact with itself? What triggers electroweak symmetry breaking? 
We have not yet seen any evidence that H(125) is other than an elementary scalar. Searches for a composite component will continue. The Higgs-boson self-interaction is almost certainly out of the reach of the LHC; it is a very challenging target for future, very-high-energy, accelerators. We don’t yet know what triggers electroweak symmetry breaking. 
3. Does the Higgs boson give mass to fermions, or only to the weak bosons? What sets the masses and mixings of the quarks and leptons? 
The experimental evidence suggests that H(125) couples to tt, bb, and τ+τ−, so the answer is probably yes. All these are third-generation fermions, so even if the evidence for these couplings becomes increasingly robust, we will want to see evidence that H couples to lighter fermions. The most likely candidate, perhaps in High-Luminosity LHC running, is the Hµµ coupling, which would already show that the third generation is not unique in its relation to H. Ultimately, to show that spontaneous symmetry breaking accounts for the electron mass, and thus enables compact atoms, we will want to establish the Hee coupling. That is extraordinarily challenging because of the minute branching fraction...
10. What lessons does electroweak symmetry breaking hold for unified theories of the strong, weak, and electromagnetic interactions? 
Establishing that scalar fields drive electroweak symmetry breaking will encourage the already standard practice of using auxiliary scalars to hide the symmetries that underlie unified theories. 
To close, I offer a revised list of questions to build on what our first look at the Higgs boson has taught us. Issues Sharpened by the Discovery of H (125) 
1. How closely does H(125) hew to the expectations for a standard-model Higgs boson? Does H have any partners that contribute appreciably to electroweak symmetry breaking? 
2. Do the HZZ and HWW couplings indicate that H(125) is solely responsible for electroweak symmetry breaking, or is it only part of the story? 
3. Does the Higgs field give mass to fermions beyond the third generation? Does H(125) account quantitatively for the quark and lepton masses? What sets the masses and mixings of the quarks and leptons? 
4. What stabilizes the Higgs-boson mass below 1 TeV? 
5. Does the Higgs boson decay to new particles, or via new forces? 
6. What will be the next symmetry recognized in Nature? Is Nature supersymmetric? Is the electroweak theory part of some larger edifice? 
7. Are all the production mechanisms as expected? 
8. Is there any role for strong dynamics? Is electroweak symmetry breaking related to gravity through extra spacetime dimensions? 
9. What lessons does electroweak symmetry breaking hold for unified theories of the strong, weak, and electromagnetic interactions? 
10. What implications does the value of the H(125) mass have for speculations that go beyond the standard model? ... for the range of applicability of the electroweak theory?
In the realms of refined measurements, searches, and theoretical analysis and imagination, great opportunities lie before us! 
Electroweak Symmetry Breaking in Historical Perspective, Chris Quigg (2015)
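A back-of-the-envelope illustration of Quigg's point about the Hee coupling: at tree level the fermionic partial widths of the Higgs scale as the squared fermion mass, so the ee branching fraction can be estimated from the µµ one (the numerical inputs below are the PDG lepton masses and the approximate SM value BR(H→µµ) ≈ 2.2×10⁻⁴):

```python
# Tree level: Gamma(H -> f fbar) is proportional to m_f^2, so
# BR(H -> ee) ≈ BR(H -> mumu) * (m_e / m_mu)^2  (phase-space corrections are negligible).
m_e, m_mu = 0.000511, 0.10566   # electron and muon masses in GeV
br_mumu = 2.2e-4                # approximate SM branching fraction for H -> mumu
br_ee = br_mumu * (m_e / m_mu) ** 2
print(f"BR(H -> ee) ~ {br_ee:.1e}")  # of order 5e-9: minute indeed
```

A branching fraction of a few parts per billion is what makes establishing the Hee coupling "extraordinarily challenging" even for future machines.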


Now what about the role geometry plays in the game? It may be relevant to go once more to the historical review by Iliopoulos:

The construction of the Standard Model, which became gradually the Standard Theory of elementary particle physics, is, probably, the most remarkable achievement of modern theoretical physics.... as we intend to show, the initial motivation was not really phenomenological. It is one of these rare cases in which a revolution in physics came from theorists trying to go beyond a simple phenomenological model, not from an experimental result which forced them to do so. This search led to the introduction of novel symmetry concepts which brought geometry into physics...
At the beginning of the twentieth century the development of the General Theory of Relativity offered a new paradigm for a gauge theory. The fact that it can be written as the theory invariant under local translations was certainly known to Hilbert, hence the name of Einstein–Hilbert action. The two fundamental forces known at that time, namely electromagnetism and gravitation, were thus found to obey a gauge principle. It was, therefore, tempting to look for a unified theory... 
The transformations of the vector potential in classical electrodynamics are the first example of an internal symmetry transformation, namely one which does not change the space–time point x. However, the concept, as we know it today, belongs really to quantum mechanics. It is the phase of the wave function, or that of the quantum fields, which is not an observable quantity and produces the internal symmetry transformations. The local version of these symmetries are the gauge theories we study here. The first person who realised that the invariance under local transformations of the phase of the wave function in the Schrödinger theory implies the introduction of an electromagnetic field was Vladimir Aleksandrovich Fock in 1926, just after Schrödinger wrote his equation... 
In 1929 Hermann Klaus Hugo Weyl extended this work to the Dirac equation. In this work he introduced many concepts which have become classic, such as the Weyl two-component spinors and the vierbein and spin-connection formalism. Although the theory is no longer scale invariant, he still used the term gauge invariance, a term which has survived ever since.
Naturally, one would expect non-Abelian gauge theories to be constructed following the same principle immediately after Heisenberg introduced the concept of isospin in 1932. But here history took a totally unexpected route.  
The first person who tried to construct the gauge theory for SU(2) is Oskar Klein who, at an obscure conference in 1938, presented a paper with the title: On the theory of charged fields. The most amazing part of this work is that he follows an incredibly circuitous road: he considers general relativity in a five-dimensional space and compactifies à la Kaluza–Klein. Then he takes the limit in which gravitation is decoupled. In spite of some confused notation, he finds the correct expression for the field strength tensor of SU(2). He wanted to apply this theory to nuclear forces by identifying the gauge bosons with the new particles which had just been discovered (in fact the muons), misinterpreted as the Yukawa mesons in the old Yukawa theory in which the mesons were assumed to be vector particles. He considered massive vector bosons and it is not clear whether he worried about the resulting breaking of gauge invariance.
The second work in the same spirit is due to Wolfgang Pauli who, in 1953, in a letter to Abraham Pais, developed precisely this approach: the construction of the SU(2) gauge theory as the flat space limit of a compactified higher-dimensional theory of general relativity...  
It seems that the fascination which general relativity had exerted on this generation of physicists was such that, for many years, local transformations could not be conceived independently of general coordinate transformations. Yang and Mills were the first to understand that the gauge theory of an internal symmetry takes place in a fixed background space which can be chosen to be flat, in which case general relativity plays no role...
In particle physics we put the birth of non-Abelian gauge theories in 1954, with the fundamental paper of Chen Ning Yang and Robert Laurence Mills. It is the paper which introduced the SU(2) gauge theory and, although it took some years before interesting physical theories could be built, it is since that date that non-Abelian gauge theories became part of high energy physics. It is not surprising that they were immediately named Yang–Mills theories. Although the initial motivation was a theory of the strong interactions, the first semi-realistic models aimed at describing the weak and electromagnetic interactions. In fact, following the line of thought initiated by Fermi, the theory of electromagnetism has always been the guide to describe the weak interactions... 
Gauge invariance requires the conservation of the corresponding currents and a zero mass for the Yang–Mills vector bosons. None of these properties seemed to be satisfied for the weak interactions. People were aware of the difficulty, but had no means to bypass it. The mechanism of spontaneous symmetry breaking was invented a few years later in 1964... The synthesis of Glashow’s 1961 model with the mechanism of spontaneous symmetry breaking was made in 1967 by Steven Weinberg, followed a year later by Abdus Salam... Many novel ideas have been introduced in this paper, mostly connected with the use of the spontaneous symmetry breaking which became the central point of the theory.
Gauge theories contain three independent worlds. The world of radiation with the gauge bosons, the world of matter with the fermions and the world of BEH scalars. In the framework of gauge theories these worlds are essentially unrelated to each other. Given a group G the world of radiation is completely determined, but we have no way to know a priori which and how many fermion representations should be introduced; the world of matter is, to a great extent, arbitrary.  
This arbitrariness is even more disturbing if one considers the world of BEH scalars. Not only their number and their representations are undetermined, but their mere presence introduces a large number of arbitrary parameters into the theory. Notice that this is independent of our computational ability, since these are parameters which appear in our fundamental Lagrangian. What makes things worse, is that these arbitrary parameters appear with a wild range of values. From the theoretical point of view, an attractive possibility would be to connect the three worlds with some sort of symmetry principle. Then the knowledge of the vector bosons will determine the fermions and the scalars and the absence of quadratically divergent counterterms in the fermion masses will forbid their appearance in the scalar masses. We shall call such transformations supersymmetry transformations and we see that a given irreducible representation will contain both fermions and bosons. It is not a priori obvious that such supersymmetries can be implemented consistently, but in fact they can.  
... supersymmetric field theories have remarkable renormalisation properties [57] which make them unique. In particular, they offer the only field theory solution of the hierarchy problem. Another attractive feature refers to grand unification. The presence of the supersymmetric particles modifies the renormalisation group equations and the effective coupling constants meet at high scales...   
An interesting extension consists of considering gauge supersymmetry transformations, i.e. transformations whose infinitesimal parameters — which are anticommuting spinors — are also functions of the space–time point x... 
The miraculous cancellation of divergences we find in supersymmetry theories raises the hope that the supersymmetric extension of general relativity will give a consistent quantum field theory. In fact local supersymmetry, or “supergravity”, is the only field theoretic extension of the Standard Model which addresses the issue of quantum gravity...

N=8 supergravity promised to give us a truly unified theory of all interactions, including gravitation and a description of the world in terms of a single fundamental multiplet. The main question is whether it defines a consistent field theory. At the moment we have no clear answer to this question, although it sounds rather unlikely. In some sense N = 8 supergravity can be viewed as the end of a road, the road of local quantum field theory. The usual response of physicists whenever faced with a new problem was to seek the solution in an increase of the symmetry. This quest for larger and larger symmetry led us to the standard model, to grand unified theories and then to supersymmetry, to supergravity and, finally, to the largest possible supergravity, that with N=8. In the traditional framework in which we are working, that of local quantum field theory, there exists no known larger symmetry scheme...
Id.

I let the reader compare the above last claims of Iliopoulos about supergravity with the following statement by Connes about the potential bonus offered by his geometric perspective, in order to appreciate who sticks most closely to the two guidelines which both helped in making the Standard Theory: i) a phenomenological approach in which the introduction of every new concept is motivated by the search for a consistent theory which agrees with experiment, and ii) mathematical consistency.
... the point of view adopted in this essay is to try to understand from a mathematical perspective, how the perplexing combination of the Einstein-Hilbert action coupled with matter, with all the subtleties such as the Brout-Englert-Higgs sector, the V-A and the see-saw mechanisms etc.. can emerge from a simple geometric model. The new tool is the spectral paradigm and the new outcome is that geometry does emerge from purely Hilbert space and operator considerations, i.e. on the stage where Quantum Mechanics happens. The idea that group representations as operators in Hilbert space are relevant to physics is of course very familiar to every particle theorist since the work of Wigner and Bargmann. That the formalism of operators in Hilbert space encompasses the variable geometries which underlie gravity is the leitmotiv of our approach. In order to estimate the potential relevance of this approach to Quantum Gravity, one first needs to understand the physics underlying the problem of Quantum Gravity.... Quoting from [40]: “Quantization of gravity is inevitable because part of the metric depends upon the other fields whose quantum nature has been well established”. Two main points are that the presence of the other fields forces one, due to renormalization, to add higher derivative terms of the metric to the Lagrangian and this in turn introduces at the quantum level an inherent instability that would make the universe blow up. This instability is instantly fatal to an interacting quantum field theory. Moreover primordial inflation prevents one from fixing the problem by discretizing space at a very small length scale. What our approach permits is to develop a “particle picture” for geometry and a careful reading of this paper should hopefully convince the reader that this particle picture stays very close to the inner workings of the Standard Model coupled to gravity.
For now the picture is limited to the “one-particle” description and there are deep purely mathematical reasons to develop the many particles picture.
Alain Connes
(still draft version February 21, 2017)

Beyond the somewhat vain comparison of the respective merits of both approaches to unifying the standard model interactions with gravitation at the Planck scale, one can't help noticing how different their geometrical premises are. On the one side, there is supergravity as the boldest symmetric extension of local quantum gauge field theories on traditional but higher-dimensional spacetimes, with the hope to quantize gravity. On the other side, one contemplates an original reformulation and slight but radical extension of spacetime in a framework derived from quantum mechanics, with the full Standard Model theory emerging from an action principle inspired by general relativity.

As a consequence, the grand unification scheme present in both approaches nevertheless follows quite distinct paths. In the evocative words of some bold pioneers of spectral noncommutative phenomenology:

... at the higher [unification scale Λ]... it is not the particle spectrum that changes, but the geometry of spacetime itself. We shall assume that the (commutative) Riemannian geometry of spacetime is only a low energy approximation of a – not yet known – noncommutative geometry. Being noncommutative, this geometry has radically different short distance properties and is expected to produce quite a different renormalisation flow... At energies below Λ, this noncommutativity manifests itself only in its mild, almost commutative version through the gauge- and Higgs-fields of the standard model, which are magnetic-like fields accompanying the gravitational field
Spectral action and big desert, Marc Knecht, Thomas Schucker (2006)

To insist now on the foresights, one also finds two very different landscapes. Roughly speaking:

- Focussing on a solution to the naturalness problem of the Brout-Englert-Higgs scalar boson, supersymmetry predicts a new superparticle spectrum: from the knowledge of the vector bosons it would determine the fermions and the scalars, and the absence of quadratically divergent counterterms in the fermion masses would forbid their appearance in the scalar masses. One can then hopefully envision a supergravity theory amenable to quantizing gravitation.
- Looking for a geometric understanding of the electroweak symmetry breaking, the spectral noncommutative framework distills the full scalar and vector boson spectra from the knowledge of the spin one-half fermion particle spectrum of the current Standard Model, minimally completed with three right-handed Majorana neutrinos (required to explain neutrino oscillations with a type I seesaw mechanism). Its operator theoretic formalism develops a “particle picture” for geometry that stays very close to the inner workings of the Standard Model coupled to gravity, and it already makes it possible to describe a volume-quantized 4D spacetime with a Euclidean signature, translating phenomenologically into mimetic dark energy and dark matter models.


Considering the fact that no experimental evidence for supersymmetric particles has been found yet, one may then appreciate, from a heuristic point of view, the potential relevance of the spectral noncommutative geometrization of the Standard Model leading to a minimal Pati-Salam extension. The latter indeed provides a unification of electroweak and strong gauge interactions pretty close in its particle spectrum to the non-supersymmetric minimal SO(10) models consistent with current neutrino oscillation data, physics that goes beyond the Standard Model (thus not under the scope of Iliopoulos's review) and that also comes with a leptogenesis scenario able to explain the asymmetry between matter and antimatter.

At last, one may add the following from a more consistent* effective field theory perspective.
The spectral standard model's post-diction of the 125 GeV mass of the Higgs boson that breaks the electroweak symmetry requires its very small mixing with a "big" Higgs brother responsible for a Pati-Salam symmetry breaking at around 10^12 GeV, consistent with a seesaw mechanism able to explain the known data on left-handed neutrinos. Even if the naturalness problem is not settled here, it is phenomenologically encouraging that the Higgs boson already discovered may talk with a very high seesaw scale, well motivated as a natural effective field theory to explain the very low mass of active neutrinos. The ultra-heavy singlet scalar could also help to unitarise the theory in the sub-Planckian regime where inflation happens. Last but not least, one may be reminded that, provided the arbitrary mass scale in the spectral action is made dynamical by introducing a dilaton field, the resulting action is almost identical to the one proposed for making the standard model scale invariant; it has the same low-energy limit as the Randall-Sundrum model and, remarkably, all desirable features with correct signs for the relevant terms are obtained uniquely and without any fine-tuning.
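The seesaw arithmetic invoked above can be sketched in two lines; the Dirac mass below is a hypothetical illustrative value of electroweak order, and only the orders of magnitude matter:

```python
# Type-I seesaw: for M_R >> m_D the light-neutrino mass is m_nu ≈ m_D^2 / M_R.
m_D = 7.0    # hypothetical Dirac mass in GeV (electroweak order)
M_R = 1e12   # heavy Majorana scale in GeV (the Pati-Salam breaking scale quoted above)
m_nu_eV = (m_D ** 2 / M_R) * 1e9  # convert GeV to eV
print(f"m_nu ~ {m_nu_eV:.2f} eV")  # ~0.05 eV, the atmospheric mass scale
```

A 10^12 GeV Majorana scale thus lands the active-neutrino masses in the sub-eV range suggested by oscillation data, which is why such a high seesaw scale is regarded as natural.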

Whatever the path chosen by space-time-matter-radiation to cool down to today's cosmological background temperature, one may conclude that the spectrum of particles required for an electroweak symmetry breaking theory consistent with energies beyond the TeV scale has not been fully probed yet. Whether this search will bring a novel symmetry concept to tame the feared quantum instabilities of the Higgs scalar, and whether it will require bringing noncommutative geometry into physics to do so, only the future will tell; but maybe the past lying in the dark sky already knows...



* about the role of consistency in theory choice I would like to offer the following thoughts that seem to me particularly relevant at the present time for obvious reasons:

One of the most interesting questions in philosophy of science is how to determine the quality of a theory. Given the data, how can we infer a “best explanation” for the data? This often goes by the name “Inference to Best Explanation” (IBE) [1, 2, 3]. The wide variety of claims for important criteria is a measure of how difficult it is to come up with a clear and general algorithm for choosing between theories. Some even claim that it is intrinsically not possible to come up with a methodology of deciding.

... in our discussion of IBE criteria... we must first ask ourselves what is non-negotiable. Falsifiability is clearly something that can be haggled over. Simplicity is subject to definitional uncertainty, and furthermore has no universally accepted claim to preeminence. Naturalness, calculability, unifying ability, predictivity, etc. are also subject to preeminence doubts...

What is non-negotiable is consistency. A theory shown definitively to be inconsistent does not live another day. It might have its utility, such as Newton’s theory of gravity for crude approximate calculations, but nobody would ever say it is a better theory than Einstein’s theory of General Relativity.

Consistency has two key parts to it. The first is that what can and has been computed must be consistent with all known observational facts. As Murray Gell-Mann said about his early graduate student years, “Suddenly, I understood the main function of the theoretician: not to impress the professors in the front row but to agree with observation [10].” Experimentalists of course would not disagree with this non-negotiable requirement of observational consistency. If you cannot match the data what are you doing, they would say?


However, theorists have a more nuanced approach to establishing observational consistency. They often do not spend the time to investigate all the consequences of their theories. Others do not want to “mop up” someone else’s theory, so they are not going to investigate it either. We often get into a situation of a new theory being proposed that solves one problem, but looks like it might create dozens of other incompatibilities with the data, which nobody wants to be bothered to compute. Furthermore, the implications might be extremely difficult to compute.

Sometimes there must be suspended judgment in the competition between excellent theories and observational consequences. Lord Kelvin claimed Darwin’s evolution ideas could not be right because the sun could not burn long enough to enable long-term evolution over millions of years that Darwin knew was needed. Darwin rightly ignored such arguments, deciding to stay on the side of geologists who said the earth appeared to be millions of years old [11]. Of course we know now that Kelvin made a bad inference because he did not know about the fusion source of burning within the sun that could sustain its heat output for billions of years.

A second part to consistency is mathematical consistency. There are numerous examples in the literature of subtle mathematical consistency issues that need to be understood in a theory. Massive gauge theories looked inconsistent for years until the Higgs mechanism was understood. Some gauge theories you can dream up are “anomalous” and inconsistent. Some forms of string theory are inconsistent unless there are extra spatial dimensions. Extra time dimensions appear to violate causality, even when one tries to demand it from the outset, thereby rendering the theory inconsistent. Theories with ghosts, which may not be obvious upon first inspection, give negative probabilities of scattering.
Mathematical consistency is subtle and hard at times, and like observational consistency there is no theorem that says it can be established to comfortable levels by theorists on time scales convenient to humans. Sometimes the inconsistency is too subtle for the scientists to see right off. Other times the calculability of the mathematical consistency question is too difficult to give a definitive answer, and it is a “coin flip” whether the theory is ultimately consistent or not. For example, pseudomoduli potentials that could cause a runaway problem are incalculable in some interesting dynamically broken supersymmetric theories [12].

It is not controversial that observational consistency and mathematical consistency are non-negotiable; however, the due diligence given to them in theory choice is often lacking. The establishment of observational consistency or mathematical consistency can remain in an embryonic state for years while research dollars flow and other IBE criteria become more motivational factors in research and inquiry, and the consistency issues become taken for granted.

This is one of the themes of Gerard ‘t Hooft’s essay “Can there be physics without experiments?”. He reminds the reader that some of the grandest theories are investigations of the nature of spacetime at the Planck scale, which is many orders of magnitude beyond where we currently have direct experimental probes. If this is to continue as a physics enterprise it “may imply that we should insist on much higher demands of logical and mathematical rigour than before.” Despite the weakness of the verb tense employed, it is an incontestable point. It is in these Planckian theories, such as string theory and loop quantum gravity, that the lack of consistency rigor is so plainly unacceptable. However, the cancer of lax attention to consistency can spread fast in an environment where theories and theorists are feted before vetted.

(2012)
Added on February 28


This long retrospective analysis of the already fifty-year-old story of the electroweak symmetry breaking mechanism has been carried out in the light of the experimental discovery of the 125 GeV resonance at LHC Run 1, and through the prism of its geometrization with a tentative noncommutative bias, to uncover a new spectrum of bright colours entangled in the pale glow of beyond-the-Standard-Model physics.

As reported above, Iliopoulos explains nicely in his review how Yang and Mills succeeded in providing the first geometric setting for quantum non-abelian gauge fields, interpreting the latter as internal symmetries in a fixed background space where general relativity plays no role (even if it inspired them). It is hard to miss the reversal, and the more extensive move, operated by the spectral noncommutative paradigm of Connes and Chamseddine, who have patiently built and polished a mathematically and experimentally coherent geometric spectral Standard Model in which the internal symmetries appear naturally as a slight refinement of the algebraic rules of coordinates (different from supersymmetry).

Yang–Mills theories were first criticized by Pauli, as their quanta had to be massless in order to maintain gauge invariance. The theory was thus set aside for a while, until the concept of particles acquiring mass through symmetry breaking in massless theories was discovered, triggering a significant restart of Yang–Mills theory studies.

As far as spectral geometric models are concerned, they are at best marginally quoted in reviews and rarely considered seriously. What major advance will prompt significant interest in the physics community is hard to anticipate. One can hope that the already established connection of some mimetic gravity models with a possible quantization of the volume of spacetime will light the fire for a new kind of investigation of the cosmological standard model's dark sector…

To come back to the ground, another obstacle to a more extensive study of spectral models is the emptiness of their expected spectrum of new fundamental particles to discover with man-made accelerators; but then, this is also the perspective sketched by the study of minimal yet realistic grand unified SO(10) or recent SMASH models, all accommodating the full spectrum of low-energy phenomenology (with the exception of a very light axion).

Hopefully there is more to search for with nuclear reactors and hadron or lepton colliders than new elementary particles! A lot of physicists are involved in flavour mixing, for instance. It could be that noncommutative geometry offers a fresh look here too.

For the theorist, a criticism of spectral noncommutative geometry might come from the prejudice against models that do not provide a solution to the naturalness problem. Maybe this requirement could be suspended for a while, pending a more extensive study of the fine-tuning "parameters" (coming from new degrees of freedom like a singlet scalar and right-handed neutrinos) computable from the spectral action principle or required to make it mathematically coherent. Indeed, these parameters, involved in the renormalisation flow, would have values constrained over the full energy spectrum: from the low-energy scale up to the unification one, in order to tame the quantum mass corrections to the Higgs boson, and also at the intermediate seesaw scale, to accommodate left-handed neutrino masses and the leptogenesis cosmological scenario. If such a scenario were miraculously possible, it could help to uncover some new hidden symmetry from possible accidental cancellations in the quadratic divergences of some extended versions of the Standard Model Higgs sector...
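To make the "quantum mass corrections to the Higgs boson" above a little more concrete, here is a back-of-the-envelope sketch, assuming the standard one-loop top-quark-loop estimate δm_H² ≈ (3 y_t²/8π²) Λ² for the quadratic sensitivity of the Higgs mass to a cutoff Λ; the numerical inputs are the usual rounded values, not a computation specific to the spectral model.

```python
import math

# Hedged sketch: dominant top-loop quadratic sensitivity of the Higgs mass,
# delta m_H^2 ~ (3 y_t^2 / 8 pi^2) * Lambda^2 (textbook one-loop estimate).
v, m_t, m_H = 246.0, 173.0, 125.0          # GeV (rounded inputs)
y_t = math.sqrt(2) * m_t / v               # top Yukawa coupling, ~1
Lambda = 1e15                              # GeV, a unification-scale cutoff
delta_mH2 = 3 * y_t**2 / (8 * math.pi**2) * Lambda**2
tuning = delta_mH2 / m_H**2                # ratio of correction to (125 GeV)^2
```

With a 10¹⁵ GeV cutoff the correction overshoots the physical m_H² by more than twenty orders of magnitude, which is the fine-tuning the paragraph alludes to and which an accidental cancellation would have to tame.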



vendredi 24 février 2017

(The inception of spectral) noncommutative geometry, its calculus and functional action principle (to model spacetime?)

This is the second fragment of my Lover's dictionary of Spectral Physics, the entry is of course:


Noncommutative Geometry

Plato derives the knowledge of ideas from body by abstraction and cutting away, leading us by various steps in mathematical discipline from arithmetic to geometry, thence to astronomy, and setting harmony above them all. For things become geometrical by the accession of magnitude to quantity; solid, by the accession of profundity to magnitude; astronomical, by the accession of motion to solidity; harmonical, by the accession of sound to motion. 

Plato alleges that God forever geometrizes... meanwhile Connes and Chamseddine are computing what other children of Archimedes have not finished measuring.
 Folklore


The geometric concepts have first been formulated and exploited in the framework of Euclidean geometry. This framework is best described using Euclid’s axioms (in their modern form by Hilbert). These axioms involve the set X of points p ∈ X of the geometric space as well as families of subsets: the lines and the planes for 3-dimensional geometry. Besides incidence and order axioms one assumes that an equivalence relation (called congruence) is given between segments, i.e., pairs of points (p,q), p,q ∈ X, and also between angles, i.e., triples of points (a,O,b); a,O,b ∈ X. These relations eventually allow us to define the length |(p,q)| of a segment and the size of an angle (a,O,b). The geometry is uniquely specified once these two congruence relations are given. They of course have to satisfy a compatibility axiom: up to congruence a triangle with vertices a,O,b ∈ X is uniquely specified by the angle (a,O,b) and the lengths of (a,O) and (O,b)... Besides the completeness or continuity axiom, the crucial one is the axiom of unique parallels. The efforts of many mathematicians trying to deduce this last axiom from the others led to the discovery of non-Euclidean geometry...
The introduction by Descartes of coordinates in geometry was at first an act of violence (cf. Ref. 2). In the hands of Gauss and Riemann it allowed one to extend considerably the domain of validity of geometric ideas. In Riemannian geometry the space X is an n-dimensional manifold. Locally in X a point p is uniquely specified by giving n real numbers x1,...,xn which are the coordinates of p. The various coordinate patches are related by diffeomorphisms. The geometric structure on X is prescribed by a (positive definite) quadratic form, gµν dxµ dxν, (1.4) which specifies the length of tangent vectors... This allows, using integration, to define the length of a path γ... The analog of the lines of Euclidean or non-Euclidean geometry are the geodesics. The analog of the distance between two points p,q ∈ X is given by the formula, d(p,q) = Inf Length(γ)... where γ varies among all paths with γ(0) = p, γ(1) = q... The obtained notion of “Riemannian space” has been so successful that it has become the paradigm of geometric space. There are two main reasons behind this success. On the one hand this notion of Riemannian space is general enough to cover the above examples of Euclidean and non-Euclidean geometries and also the fundamental example given by space-time in general relativity (relaxing the positivity condition of (1.4)). On the other hand it is special enough to still deserve the name of geometry, the point being that through the use of local coordinates all the tools of the differential and integral calculus can be brought to bear...
Besides its success in physics as a model of space-time, Riemannian geometry plays a key role in the understanding of the topology of manifolds, starting with the Gauss Bonnet theorem, the theory of characteristic classes, index theory, and the Yang Mills theory. 
Thanks to the recent experimental confirmations of general relativity from the data given by binary pulsars there is little doubt that Riemannian geometry provides the right framework to understand the large scale structure of space-time. 
The situation is quite different if one wants to consider the short scale structure of space-time. We refer to Refs. 5 and 6 for an analysis of the problem of the coordinates of an event when the scale is below the Planck length. In particular there is no good reason to presume that the texture of space-time will still be the 4-dimensional continuum at such scales.  
In this paper we shall propose a new paradigm of geometric space which allows us to incorporate completely different small scale structures. It will be clear from the start that our framework is general enough. It will of course include ordinary Riemannian spaces but it will treat the discrete spaces on the same footing as the continuum, thus allowing for a mixture of the two. It also will allow for the possibility of noncommuting coordinates. Finally it is quite different from the geometry arising in string theory but is not incompatible with the latter since supersymmetric conformal field theory gives a geometric structure in our sense whose low energy part can be defined in our framework and compared to the target space geometry. 
It will require the most work to show that our new paradigm still deserves the name of geometry. We shall need for that purpose to adapt the tools of the differential and integral calculus to our new framework. This will be done by building a long dictionary which relates the usual calculus (done with local differentiation of functions) with the new calculus which will be done with operators in Hilbert space and spectral analysis, commutators... The first two lines of the dictionary give the usual interpretation of variable quantities in quantum mechanics as operators in Hilbert space. For this reason and many others (which include integrality results) the new calculus can be called the “quantized calculus”, but the reader who has seen the word “quantized” overused so many times may as well drop it and use “spectral calculus” instead.
Alain Connes 
Received 4 April 1995; accepted for publication 7 June 1995
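The distance formula d(p,q) = Inf Length(γ) quoted above can be illustrated numerically. A minimal sketch, assuming the round metric ds² = dθ² + sin²θ dφ² on the unit 2-sphere (my choice of example, not one from the paper): compare the length of a quarter of the equator with a detour between the same endpoints.

```python
import numpy as np

# d(p,q) = Inf Length(gamma): lengths of two paths on the unit 2-sphere
# with metric ds^2 = dtheta^2 + sin(theta)^2 dphi^2, same equatorial endpoints.
t = np.linspace(0.0, 1.0, 20001)

def length(theta, phi):
    dtheta = np.gradient(theta, t)
    dphi = np.gradient(phi, t)
    f = np.sqrt(dtheta**2 + np.sin(theta)**2 * dphi**2)
    return float(np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(t)))  # trapezoid rule

# geodesic candidate: a quarter of the equator, length pi/2
L_geodesic = length(np.full_like(t, np.pi/2), (np.pi/2) * t)
# a wiggly detour with the same endpoints
L_detour = length(np.pi/2 + 0.3*np.sin(np.pi*t), (np.pi/2) * t)
```

The equator arc evaluates to π/2 and every perturbed path comes out longer, which is exactly the infimum characterization of geodesic distance.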

... we shall build our notion of geometry, in a very similar but somehow dual manner [to the Riemann's concept], on the pair (A, ds) of the algebra A of coordinates and the infinitesimal length element ds. For the start we only consider ds as a symbol, which together with A generates an algebra (A, ds). The length element ds does not commute with the coordinates, i.e. with the functions f on our space, f ∈ A. But it does satisfy non trivial relations. 
... we shall write down the axioms of geometry as the presentation of the algebraic relations between A and ds and the representation of those relations in Hilbert space. In order to compare different geometries, i.e. different representations of the algebra (A, ds) generated by A and ds, we shall use the following action functional,
(14) Trace(ϕ(ds/ℓp))
where ℓp is the Planck length and ϕ is a suitable cutoff function which will cut off all eigenvalues of ds larger than ℓp. We shall show in [CC] that for a suitable choice of the algebra A, the above action will give Einstein gravity coupled with the Lagrangian of the standard U(1)×SU(2)×SU(3) model of Glashow Weinberg Salam. The algebra will not be C(M) with M a (compact) 4-manifold but a non commutative refinement of it which has to do with the quantum group refinement of the Spin covering of SO(4).
1 → Z/2 → Spin(4) → SO(4) → 1.  
Amazingly, in this description the group of gauge transformations of the matter fields arises spontaneously as a normal subgroup of the generalized diffeomorphism group Aut(A). It is the non commutativity of the algebra A which gives for free the group of gauge transformations of matter fields as a (normal) subgroup of the group of diffeomorphisms.
What the present paper shows is that one should consider the internal gauge symmetries as part of the diffeomorphism group of the non commutative geometry, and the gauge bosons as the internal fluctuations of the metric. It follows then that the action functional should be of purely gravitational nature. We state the principle of spectral invariance, stronger than the invariance under diffeomorphisms, which requires that the action functional only depends on the spectral properties of D = ds⁻¹ in H. This is verified by the action,
I = Trace(ϕ(ds/ℓp)) + ⟨Dψ, ψ⟩
for any nice function ϕ from R*+ to R. We shall show in [CC] that this action gives the SM Lagrangian coupled with gravity. It would seem at first sight that the algebra A has disappeared from the scene when one writes down the above action; the point is that it is still there because it imposes the constraints [[D, a], b⁰] = 0 ∀ a, b ∈ A and Σ a⁰i[D, a¹i]...[D, a⁴i] = γ coming from the axioms [required to define the spectral calculus and the volume form]. It is important at this point to note that the integrality, n ∈ N, of the dimension of a non commutative geometry appears to be essential to define the [algebraic formulation of a differential form called a] Hochschild cycle c ∈ Zn and in turn the chirality γ. This is very similar to the obstruction which appears when one tries to apply dimensional regularization to chiral gauge theories.
(Submitted on 8 Mar 1996)
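The counting behind Trace(ϕ(ds/ℓp)) can be seen in miniature. A toy sketch, assuming an operator D with integer spectrum (a hypothetical stand-in for ds⁻¹, not the actual geometry of the paper) and a Gaussian cutoff ϕ: Poisson summation then makes the trace track the leading Weyl-type asymptotics √π·Λ up to exponentially small corrections.

```python
import numpy as np

# Toy version of Trace(phi(D/Lambda)) for a hypothetical D with integer
# eigenvalues and a Gaussian cutoff phi(x) = exp(-x^2). By Poisson
# summation the sum equals sqrt(pi)*Lambda up to terms ~exp(-pi^2 Lambda^2).
Lambda = 50.0
n = np.arange(-100000, 100001)            # truncated integer spectrum of D
trace = np.exp(-(n / Lambda)**2).sum()    # Trace(phi(D/Lambda))
weyl = np.sqrt(np.pi) * Lambda            # leading asymptotics
```

The point of the exercise: a spectral cutoff function turns a trace over eigenvalues into a volume-like quantity growing with the cutoff scale, which is the germ of how the spectral action reproduces local geometric terms.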

This leads us to the postulate that: 
The symmetry principle in noncommutative geometry is invariance under the group Aut(A). 
We now apply these ideas to derive a noncommutative geometric action unifying gravity with the standard model. The algebra is taken to be A = C(M)⊗AF where the algebra AF is finite dimensional, AF = ℂ ⊕ ℍ ⊕ M3(ℂ), and ℍ ⊂ M2(ℂ) is the algebra of quaternions...
A is a tensor product which geometrically corresponds to a product space, an instance of spectral geometry for A is given by the product rule
H = L2(M, S) ⊗ HF , D = ∂M ⊗ 1 + γ5 ⊗ DF
where (HF, DF) is a spectral geometry on AF, while both L2(M, S) and the Dirac operator ∂M on M are as above. The group Aut(A) of diffeomorphisms falls in equivalence classes under the normal subgroup Int(A) of inner automorphisms. In the same way the space of metrics has a natural foliation into equivalence classes. The internal fluctuations of a given metric are given by the formula,
D = D0 + A + JAJ⁻¹,   A = Σ ai[D0, bi] , ai, bi ∈ A and A = A*...
For Riemannian geometry these fluctuations are trivial. 
The hypothesis which we shall test in this letter is that there exists an energy scale Λ in the range 10¹⁵−10¹⁹ GeV at which we have a geometric action given by the spectral action...
We now describe the internal geometry. The choice of the Dirac operator and the action of AF in HF comes from the restrictions that these must satisfy: 
J² = 1 , [J, D] = 0,        [a, Jb*J⁻¹] = 0 ,        [[D, a], Jb*J⁻¹] = 0 ∀ a, b. (4)
We can now compute the inner fluctuations of the metric, which are operators of the form A = Σ ai[D, bi]. This, with the self-adjointness condition A = A*, gives U(1), SU(2) and U(3) gauge fields as well as a Higgs field...
It is a simple exercise to compute the square of the Dirac operator ... This can be cast into the elliptic operator form [7]: 
P = D² = −(gµν ∂µ∂ν · 1I + Aµ ∂µ + B)
where 1I, Aµ and B are matrices of the same dimensions as D. Using the heat kernel expansion for Tr(e−tP) ... we can show that ... a very lengthy but straightforward calculation ... gives for the bosonic action ... {the standard model action coupled to Einstein and Weyl gravity, plus higher order non-renormalizable interactions suppressed by powers of the inverse of the mass scale in the theory}...
We ... adopt Wilson’s viewpoint of the renormalization group approach to field theory [9] where the spectral action is taken to give the bare action with bare quantities ... at a cutoff scale Λ which regularizes the action; the theory is assumed to take a geometrical form.
The renormalized action receives counterterms of the same form as the bare action but with physical parameters ... The renormalization group equations ... yield relations between the bare quantities and the physical quantities with the addition of the cutoff scale Λ. Conditions on the bare quantities would translate into conditions on the physical quantities. The renormalization group equations of this system were studied by Fradkin and Tseytlin [10]; the system is known to be renormalizable, but non-unitary [11] due to the presence of a spin-two ghost (tachyon) pole near the Planck mass. We shall not worry about non-unitarity (see, however, reference 12), because in our view at the Planck energy the manifold structure of space-time will break down and must be replaced with a genuinely noncommutative structure.
Relations between the bare gauge coupling constants as well as equations (3.19) have to be imposed as boundary conditions on the renormalization group equations [9]. The bare mass of the Higgs field is related to the bare value of Newton’s constant, and both have quadratic divergences in the limit of infinite cutoff Λ... 
There are some relations between the bare quantities. The renormalized action will have the same form as the bare action but with physical quantities replacing the bare ones. The relations among the bare quantities must be taken as boundary conditions on the renormalization group equations governing the scale dependence of the physical quantities. These boundary conditions imply that the cutoff scale is of order ∼ 10¹⁵ GeV and sin²θw ∼ 0.21, which is off by ten percent from the true value. We also have a prediction of the Higgs mass in the interval 170−180 GeV. There is ... a stronger disagreement where Newton’s constant comes out to be too large... Incidentally, the problem that Newton’s constant comes out too large is also present in string theory, where unification of gauge couplings and Newton’s constant also occurs [15]. These results must be taken as an indication that the spectrum of the standard model has to be altered as we climb up in energy. The change may happen at low energies (just as in supersymmetry ...) or at some intermediate scale. This could also be taken as an indication that the concept of space-time as a manifold breaks down and the noncommutativity of the algebra must be extended to include the manifold part.
(Submitted on 11 Jun 1996)
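The boundary conditions quoted above sit on top of a standard computation that is easy to sketch: the one-loop running of the three gauge couplings in the plain Standard Model. The following is a rough illustration with approximate inputs (round-number values of 1/αᵢ at MZ, textbook one-loop beta coefficients), not the detailed analysis of the paper; it shows why the relevant scale comes out near 10¹⁵ GeV, and that the couplings almost, but not exactly, meet at a single point.

```python
import numpy as np

# One-loop running of inverse gauge couplings in the plain Standard Model:
# 1/alpha_i(mu) = 1/alpha_i(MZ) - (b_i / 2 pi) * ln(mu / MZ),
# with SU(5)-normalized hypercharge. Inputs are rounded, illustrative values.
MZ = 91.19                                   # GeV
inv_alpha = np.array([59.0, 29.6, 8.5])      # 1/alpha_{1,2,3} at MZ (approx.)
b = np.array([41/10, -19/6, -7])             # one-loop SM beta coefficients

def crossing_scale(i, j):
    """Scale mu at which couplings i and j become equal (one loop)."""
    t = 2*np.pi * (inv_alpha[i] - inv_alpha[j]) / (b[i] - b[j])
    return MZ * np.exp(t)

mu_12 = crossing_scale(0, 1)    # alpha_1 = alpha_2: ~1e13 GeV
mu_23 = crossing_scale(1, 2)    # alpha_2 = alpha_3: ~1e17 GeV
```

The two crossing scales differ by several orders of magnitude: in the pure Standard Model there is no single unification point, which is the quantitative content of the remark that "the spectrum of the standard model has to be altered as we climb up in energy."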


The notion of spectral geometry has deep roots in pure mathematics. They have to do with the understanding of the notion of (smooth) manifold. While this notion is simple to define in terms of local charts i.e. by glueing together open pieces of finite dimensional vector spaces, it is much more difficult and instructive to arrive at a global understanding ... What one does is to detect global properties of the underlying space with the goal of characterizing manifolds... At the beginning of the 80’s, motivated by numerous examples of noncommutative spaces arising naturally in geometry from foliations or in physics from the Brillouin zone in the work of Bellissard on the quantum Hall effect, I realized that specifying an unbounded representative of the Fredholm operator was giving the right framework for spectral geometry ...
Over the years this new [noncommutative geometric paradigm of spectral nature] has been considerably refined ... The noncommutative geometry dictated by physics is the product of the ordinary 4-dimensional continuum by a finite noncommutative geometry which appears naturally from the classification of finite geometries of KO-dimension equal to 6 modulo 8 (cf. [15, 18]). The compatibility of the model with the measured value of the Higgs mass was demonstrated in [20] due to the role in the renormalization of the scalar field already present in [19].
In [21, 22], with Chamseddine and Mukhanov we gave the conceptual explanation of the finite noncommutative geometry from Clifford algebras and obtained a higher form of the Heisenberg commutation relations between p and q, whose irreducible Hilbert space representations correspond to 4-dimensional spin geometries. The role of p is played by the Dirac operator and the role of q by the Feynman slash of coordinates using Clifford algebras. The proof that all spin geometries are obtained relies on deep results of immersion theory and ramified coverings of the sphere. The volume of the 4-dimensional geometry is automatically quantized by the index theorem and the spectral model, taking into account the inner automorphisms due to the noncommutative nature of the Clifford algebras, gives Einstein gravity coupled with the slight extension of the standard model which is a Pati-Salam model. This model was shown in our joint work with A. Chamseddine and W. van Suijlekom [24, 25] to yield unification of coupling constants.
The quantization of the volume implies that the bothersome cosmological leading term of the spectral action is now quantized and thus no longer appears in the variation of the spectral action. Thus, provided one understands how to reinstate all the fine details of the finite geometry (the one encoded by the Clifford algebras), such as the nuance on the grading and the number of generations, the variation of the spectral action will reproduce the Einstein equations coupled with matter.
Alain Connes 
Draft version from February 21, 2017

mercredi 22 février 2017

(How and when) Spectral Physics (was born) or first fragment of a Lover's Dictionary on...

Spectral Physics (draft for the core entry)



... in the beginning of the year 1666 (at which time I applyed my self to the grinding of Optick glasses of other figures then Sphericall) I procured me a triangular glasse Prisme to try therewith the celebrated phænomena of colours. And in order thereto having darkned my chamber & made a small hole in my window-shuts to let in a convenient quantity of the sun's light, I placed my Prism at its entrance that it might be thereby refracted to the opposite wall. It was at first a very pleasing divertisement to view the vivid & intense colours produced thereby; but after a while applying my selfe to consider them more circumspectly, I became surprized to see them in an oblong form, which according to the received lawes of refraction I expected should have been circular. 
They were terminated at the sides with streight lines, but at the ends the decay of light was so graduall that it was difficult to determine justly what was their figure, yet they seemed semicircular. 
Comparing the length of this Coloured Spectrum with its bredth I found it about five times greater, a disproportion soe extravagant that it excited me to a more then ordinary curiosity of examining from whence it might proceed; I could scarce think that the various thicknesse of the glasse, or the termination with shaddow or darknesse could have any influence on light to produce such an effect, yet I thought it not amisse to examine first those circumstances, & soe tryed what would happen by transmitting light through parts of the glasse of divers thicknesses, or through holes in the window of divers bignesses, or by setting the Prism without, so that the light might passe through it & bee refracted before it was terminated by the hole: but I found none of those circumstances materiall. The fashion of the colours was in all these cases the same.
Isaac Newton
Trinity Coll Cambridge. Feb. 6. 1671/2


I envision spectral physics as a scientific endeavour based on a set of experimental and conceptual spectroscopes to scrutinise, and merge into a coherent picture, the macroscopic and microscopic features of the phenomenological world that physicists probe thanks to telescopes, high energy accelerators, extremely low temperature devices, very high magnetic fields, etc., and confront with heuristic tools like quantum mechanics, thermal physics or general relativity, while mathematicians formalise the computations confirmed by nature with theories like Euclidean geometry, calculus, Fourier analysis, Riemannian manifolds and their noncommutative extensions with a proper spectral calculus.