How I learned to stop worrying (and love?) the quadratic divergences (of the Higgs scalar field)

The phenomenologist is not afraid of quadratic divergences
This post is a continuation of the previous one.
... it is phenomenologically clear that quadratic divergences need to be ignored in the Standard Model, and this is widely recognized in the literature: it is enough to mention that the renormalisation group (RG) evolution plots in refs. [13] ... and [8a, 8b, 9] included evolution of the mass term, but only logarithmic corrections were taken into account and considered as relevant to “real” physics. 
As for explanations, the hope may be that the “hidden symmetry”... could provide a new tool for the resolution of the hierarchy problem, since the symmetry would protect these relations, in particular, leaving no room for quadratic divergences. In fact, though not sufficiently appreciated, the idea that the apparent conformal symmetry of the Standard Model at the classical level could forbid the generation of quadratic corrections at the quantum level has been discussed in the literature – it is best expressed in [11], where even a concrete quantization scheme is suggested. This idea is also studied in [24]... or very recently in [47a, 47b] and, in a context related to the neutrino mass mechanism, in [48]... 
Usually, classical conformal symmetry of the Standard Model is broken softly by mass terms and seriously by (logarithmic) quantum corrections, giving rise to non-vanishing beta-functions. Our [observations] imply that the only role of the beta-functions is to drive the theory away from the UV point – but exactly there the approximate conformal symmetry is actually enhanced: in the scalar sector the beta-function is vanishing and the interaction is also vanishing. The theory looks even more conformal than one could expect. And this is further supported by the extreme flatness of the effective potential (... it is clear that the height of the barrier is seven orders of magnitude lower than the naive M_Pl⁴, while the mass of the scalar mode at the Planckian minimum is instead higher by many orders of magnitude than the naive M_Pl, so that it can actually be ignored) – and all this is just an experimental fact(!), following from the well-established properties of the Standard Model itself, with no reference to any kind of “new physics”, to say nothing of quantum gravity and string theory: the Planck scale appears ... just from the study of the RG evolution of the Standard Model itself(!). The only assumption is to neglect the quadratic quantum corrections – but given not just the classical conformal symmetry of [11], but its further enhancement by (1) at the “starting point” in the ultraviolet, one can hardly be surprised that they should be neglected in an appropriate quantization scheme. In our view, it is now a clear challenge for string theory or whatever is the UV completion of the Standard Model to make such a scheme natural.  
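As a reminder of where such estimates come from (a schematic sketch of the standard RG-improved form of the potential, not a formula taken from the quoted paper), at large field values h the Standard Model potential is governed by the running self-coupling:

\[
V_{\rm eff}(h)\;\simeq\;\frac{\lambda_{\rm eff}(h)}{4}\,h^4
\qquad\Longrightarrow\qquad
\frac{V_{\rm barrier}}{M_{\rm Pl}^4}\;\sim\;\frac{\lambda_{\rm eff}(h_{\rm max})}{4}\left(\frac{h_{\rm max}}{M_{\rm Pl}}\right)^{4},
\]

where h_max is the field value at the top of the barrier; the near-vanishing of λ_eff (and of its beta-function) in the far ultraviolet is exactly what makes the barrier so shallow compared with the naive M_Pl⁴.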
As we mentioned, within ordinary quantum field theory one option would be to look for a formulation, where the Higgs scalars are actually Goldstones of spontaneously broken conformal symmetry, which get relatively small masses due to the explicit breaking of this symmetry by beta-functions, as implied by the analogy with a similar situation in [24].
(Submitted on 1 Sep 2014 (v1), last revised 2 Oct 2014 (this version, v2))

Neither is the Wilsonian-educated theorist (?)
... we revisited the hierarchy problem, i.e., the stability of the mass of a scalar field against large radiative corrections, from the Wilsonian renormalisation group (RG) point of view. We first saw that quadratic divergences can be absorbed into the position of the critical surface m²_c(λ), and the scaling behavior of RG flows around the critical surface is determined only by logarithmic divergences. The subtraction of the quadratic divergences is unambiguously fixed by the critical surface. In other words, the subtraction is interpreted as taking a new coordinate of the space of parameters such that m²_new = m² − m²_c(λ). These arguments gave a natural interpretation of the subtractions, and another justification for the subtracted theories as in [3a, 3b, 4, 5]. The fine-tuning problem, i.e., the hierarchy between the physical scalar mass and the cutoff scale, is then reduced to the problem of taking the bare mass parameter close to the critical surface when taking the continuum limit. It has nothing to do with the quadratic divergences in the theory. Therefore the quadratic divergences are not the real issue of the hierarchy problem. If we are considering a low-energy effective theory with an effective cutoff, the subtraction of the quadratic divergences corresponds to taking a boundary condition at the effective cutoff scale. Hence it has nothing to do with the dynamics at a lower energy scale, and when such divergences appear in radiative corrections, we can simply subtract them. 
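To make the mechanism concrete, here is a toy numerical sketch (my own illustration in Python, not code or equations from the paper; the flow equations and the coefficients A, B, C are purely indicative). Two bare masses are run down from the cutoff with a toy one-loop Wilsonian flow; each picks up a shift of order λΛ²/16π², but their difference, the analogue of m²_new = m² − m²_c(λ), only drifts logarithmically:

import numpy as np

Lambda = 1.0e16                      # toy cutoff (GeV, say)
lam0   = 0.1                         # quartic coupling at the cutoff
A = B = 1.0 / (16 * np.pi**2)        # illustrative one-loop coefficients
C = 3.0 / (16 * np.pi**2)

def run_down(m2_bare, mu_ir=1.0e3, steps=200_000):
    # Euler integration of the toy flow from mu = Lambda down to mu = mu_ir:
    #   d m^2 / d ln(mu) = A*lam*mu^2 + B*lam*m^2   (quadratic + logarithmic pieces)
    #   d lam / d ln(mu) = C*lam^2                  (logarithmic only)
    t = np.linspace(np.log(Lambda), np.log(mu_ir), steps)
    dt = t[1] - t[0]                 # negative: we flow toward the infrared
    m2, lam = m2_bare, lam0
    for ti in t[:-1]:
        mu = np.exp(ti)
        m2, lam = m2 + (A * lam * mu**2 + B * lam * m2) * dt, lam + C * lam**2 * dt
    return m2

m2_a = 0.0                           # reference bare mass
m2_b = (1.0e4)**2                    # bare mass shifted by (10 TeV)^2, tiny next to Lambda^2

ir_a, ir_b = run_down(m2_a), run_down(m2_b)
print("IR mass from zero bare mass (cutoff-sized shift):", ir_a)
print("difference of the two IR masses                 :", ir_b - ir_a)
print("bare difference                                 :", m2_b - m2_a)

With these illustrative numbers each flow is shifted by roughly λΛ²/(32π²) ≈ 3×10²⁸, while the difference moves by only a couple of percent of its bare value: the quadratic piece lives entirely in the position of the critical surface, and the scaling around it is governed by the logarithmic running alone.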
We also considered another type of the hierarchy problem. If a theory consists of multiple physical scales, e.g., the weak scale m_W and the GUT scale m_GUT besides the cutoff scale Λ, the mass of the lower scale m_W receives large radiative corrections δm²_W ∼ m²_GUT log(Λ)/(16π²) through the logarithmic divergences. Such a mixing of physical mass scales is interpreted as a mixing of relevant operators along the RG flows. Unlike the first type of the hierarchy problem, the mass of the larger scale m_GUT cannot simply be disposed of by a subtraction. In order to solve such a mixing problem, we need to suppress the mixing by some additional conditions. Of course, if these two scalars are extremely weakly coupled, the mixing can be suppressed. If the couplings are not so weak, we need to cancel the mixing by symmetries or some nontrivial dynamics... 
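To see why this second type is the serious one, put indicative numbers into the estimate above (an order-of-magnitude illustration assuming an O(1) mixing coupling and a logarithm of order 10, not numbers taken from the paper):

\[
\delta m_W^2 \;\sim\; \frac{m_{\rm GUT}^2}{16\pi^2}\,\log\Lambda
\;\sim\; \frac{(10^{16}\ {\rm GeV})^2}{16\pi^2}\times 10
\;\approx\; 10^{31}\ {\rm GeV}^2 ,
\]

roughly 27 orders of magnitude above the observed (100 GeV)² ≈ 10⁴ GeV². Unless the mixing coupling is tiny or cancelled by a symmetry, the heavy scale destabilizes the weak scale, quadratic divergences or not.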
Let us comment on the scheme dependence of the subtraction. In this paper, we fixed one scheme to perform RG transformations. Then the critical surface is unambiguously determined. If we change the scheme... the RG transformations are changed, and so, accordingly, is the position of the critical surface. But the definition of the bare parameters is correlated with the choice of the scheme. Hence a shift of the position of the critical surface by a change of scheme does not mean an ambiguity of the critical surface. Rather it corresponds to changing coordinates of the theory space. In this sense, the subtraction of the position of the critical surface from the bare mass is performed for each fixed scheme without any ambiguity.  
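Schematically (my shorthand, not the paper's notation, with δ(λ)Λ² standing for whatever cutoff-sized shift a change of scheme induces in the mass coordinate):

\[
m^2 \;\mapsto\; m^2 + \delta(\lambda)\,\Lambda^2,
\qquad
m^2_c(\lambda) \;\mapsto\; m^2_c(\lambda) + \delta(\lambda)\,\Lambda^2,
\qquad\Longrightarrow\qquad
m^2_{\rm new} = m^2 - m^2_c(\lambda)\ \ \text{unchanged},
\]

since the bare mass and the critical surface are defined within the same scheme, their difference does not care which scheme was chosen.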
What is the meaning of the coefficients of various terms in the bare action? In the investigation of the RG flows, we encountered two kinds of quantities, the mass parameter m² and the subtracted mass (m² − m²_c(λ)). The issue of the fine-tuning problem, i.e., the stability against the quadratic divergences, is related to which quantity we should consider to be a physical parameter. In the renormalization-group-improved field theory, as we studied in this paper, the subtracted one is considered to be physical. The mass parameter itself depends on a choice of coordinates of the theory space, and changes scheme by scheme. 
We are thus left with the second type of the hierarchy problem, namely a mixing of the weak scale with another physical scale like m_GUT. We classify possible ways out of it:
1. SM up to Λ;
2. New physics around TeV, but nothing beyond up to Λ;
3. New physics at a higher scale, but extremely weakly coupled with the SM;
4. New physics at a higher scale with nontrivial dynamics or symmetries.
The first possibility is to consider a model without any further physical scale up to the cutoff scale Λ. The Planck scale may play the role of a cutoff scale for the SM. As we saw in this paper, the quadratic divergence of the cutoff order can be simply subtracted and it does not cause any physical effect. In the second possibility, we introduce a new scale which may be coupled with the SM, but suppose that the new scale is not so large compared with the weak scale. Then even if the mixing is not so small, the weak scale does not receive large radiative corrections. Various kinds of TeV-scale models fall into this category. Some examples are the νMSM [9] and the classically conformal TeV-scale B−L extended model [10]. The third one is to consider a very large physical scale, but with the mutual coupling suppressed to be very small. The final possibilities include a supersymmetric GUT, but the low-energy theory of broken supersymmetry must be supplemented with the second type of scenario. If we worry about quadratic divergences, the first three categories need fine-tunings against the cutoff scale and are excluded by the naturalness condition. Hence, most model building beyond the SM has been restricted to the last category. Once we admit that quadratic divergences are not the real issue of the hierarchy problem, our possibilities for model construction broaden. 
(Submitted on 4 Jan 2012 (v1), last revised 6 Jul 2012 (this version, v2))


What about Wilson himself?
... of the papers by Wilson I read while in graduate school, the most exciting by far was this one about the renormalization group. Toward the end of the paper Wilson discussed how to formulate the notion of the “continuum limit” of a field theory with a cutoff. Removing the short-distance cutoff is equivalent to taking the limit in which the correlation length (the inverse of the renormalized mass) is infinitely long compared to the cutoff — the continuum limit is a second-order phase transition. Wilson had finally found the right answer to the decades-old question, “What is quantum field theory?” And after reading his paper, I knew the answer, too! This Wilsonian viewpoint led to further deep insights mentioned in the paper, for example that an interacting self-coupled scalar field theory is unlikely to exist (i.e. have a continuum limit) in four spacetime dimensions. 
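In formulas (a schematic restatement tying this back to the critical-surface language above, not Preskill's own notation): with lattice spacing a playing the role of the inverse cutoff and m_R the renormalized mass, the continuum limit requires

\[
\frac{\xi}{a} \;=\; \frac{1}{m_R\,a}\;\longrightarrow\;\infty ,
\]

i.e. the correlation length measured in cutoff units must diverge, which is precisely the statement that the bare parameters are tuned to a critical point, a second-order phase transition.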
Wilson’s mastery of quantum field theory led him to another crucial insight in the 1970s which has profoundly influenced physics in the decades since — he denigrated elementary scalar fields as unnatural. I learned about this powerful idea from an inspiring 1979 paper not by Wilson, but by Lenny Susskind. That paper includes a telltale acknowledgment: “I would like to thank K. Wilson for explaining the reasons why scalar fields require unnatural adjustments of bare constants.” 

Susskind, channeling Wilson, clearly explains a glaring flaw in the standard model of particle physics — ensuring that the Higgs boson mass is much lighter than the Planck (i.e., cutoff) scale requires an exquisitely careful tuning of the theory’s bare parameters. Susskind proposed to banish the Higgs boson in favor of Technicolor, a new strong interaction responsible for breaking the electroweak gauge symmetry, an idea I found compelling at the time. Technicolor fell into disfavor because it turned out to be hard to build fully realistic models, but Wilson’s complaint about elementary scalars continued to drive the quest for new physics beyond the standard model, and in particular bolstered the hope that low-energy supersymmetry (which eases the fine tuning problem) will be discovered at the Large Hadron Collider. Both dark energy (another fine tuning problem) and the absence so far of new physics beyond the Higgs boson at the LHC are prompting some soul searching about whether naturalness is really a reliable criterion for evaluating success in physical theories. Could Wilson have steered us wrong? ...
Posted on June 18, 2013 by preskill on the blog Quantum Frontiers

I would answer "no" to the last question of John Preskill. Wilson's ideas are still precious for analysing the Higgs naturalness problem in fundamental physics, in particular for making the connection between the classical Standard Model Lagrangian and the spectral action in the noncommutative-geometric paradigm. 

Wilson himself modestly recognizes some blunders:
In the early 1970’s, I committed several blunders that deserve a brief mention. The blunders all occurred in the same article [27]: a 1971 article about the possibility of applying the renormalization group to strong interactions, published before the discovery of asymptotic freedom. My first blunder was not recognizing the theoretical possibility of asymptotic freedom. In my 1971 article, my intent was to identify all the distinct alternatives for the behavior of the Gell-Mann–Low function β(g), which is negative for small g in the case of asymptotic freedom. But I ignored this possibility. The only examples I knew of such beta functions were positive at small coupling; it never occurred to me that gauge theories could have negative beta functions for small g. Fortunately, this blunder did not delay the discovery of asymptotic freedom, to my knowledge. The articles of Gross and Wilczek [6] and Politzer [7] soon established that asymptotic freedom was possible, and ‘t Hooft had found a negative beta function for a non-Abelian gauge theory even earlier [2]. 
The second blunder concerns the possibility of limit cycles, discussed in Sect. III.H of [27]. A limit cycle is an alternative to a fixed point. In the case of a discrete renormalization group transformation ... a limit cycle occurs whenever a specific input Hamiltonian H* is reproduced only after several iterations of the transformation T, such as three or four iterations, rather than after a single iteration ... In the article, I discussed the possibility of limit cycles for the case of “at least two couplings”, meaning that the renormalization group has at least two coupled differential equations: see [27]. But it turns out that a limit cycle can occur even if there is only one coupling constant g in the renormalization group, as long as this coupling can range all the way from –∞ to +∞. Then all that is required for a limit cycle is that the renormalization group β function β(g) is never zero, i.e., always positive or always negative over the whole range of g. This possibility will be addressed further in the next section, where I discuss a recent and very novel suggestion that QCD may have a renormalization group limit cycle in the infrared limit for the nuclear three-body sector, but not for the physical values of the up and down quark masses. Instead, these masses would have to be adjusted to place the deuteron exactly at threshold for binding, and the di-neutron also [28]. 
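A standard one-coupling illustration (a textbook-style sketch, not an example taken from Wilson's article [27]) makes the point explicit:

\[
\beta(g)\;=\;\frac{dg}{d\ln\mu}\;=\;-\,(1+g^2)
\qquad\Longrightarrow\qquad
g(\mu)\;=\;\tan\!\Big(\arctan g_0-\ln\frac{\mu}{\mu_0}\Big),
\]

the beta-function never vanishes, the coupling decreases monotonically, runs off to −∞, reappears at +∞, and the flow repeats itself every time ln(μ/μ₀) advances by π: a limit cycle with a single coupling, exactly as described above.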
The final blunder was a claim that scalar elementary particles were unlikely to occur in elementary particle physics at currently measurable energies unless they were associated with some kind of broken symmetry [23]... The claim was that it would be unnatural for such particles to have masses small enough to be detectable soon. But this claim makes no sense when one becomes familiar with the history of physics. There have been a number of cases where numbers arose that were unexpectedly small or large. An early example was the very large distance to the nearest star as compared to the distance to the Sun, as needed by Copernicus, because otherwise the nearest stars would have exhibited measurable parallax as the Earth moved around the Sun. Within elementary particle physics, one has unexpectedly large ratios of masses, such as the large ratio of the muon mass to the electron mass. There is also the very small value of the weak coupling constant. In the time since my paper was written, another set of unexpectedly small masses was discovered: the neutrino masses. There is also the riddle of dark energy in cosmology, with its implication of possibly an extremely small value for the cosmological constant in Einstein’s theory of general relativity. This blunder was potentially more serious, if it caused any subsequent researchers to dismiss possibilities for very large or very small values for parameters that now must be taken seriously. But I want to point out here that there is a related lesson from history that, if recognized in the 1960’s, might have shortened the struggles of the advocates of quarks to win respect for their now accepted idea. The lesson from history is that sometimes there is a need to consider seriously a seemingly unlikely possibility... 
(Submitted on 28 Dec 2004 (v1), last revised 24 Feb 2005 (this version, v2))

It could be that what fundamental physics needs today is to consider seriously the seemingly unlikely possibility of solving the naturalness problem of the Higgs with an almost-commutative fine structure of spacetime; if recognized, this might shorten the struggles of the advocates of noncommutative spectral models of particle physics...
