Thursday, August 25, 2016

On Her Majesty's Secret Service

In praise of the noncommutative description of the current phenomenological world

The proper spectral M(atrix)geometric theory to model the dynamics of spacetime and matter has not been completed yet. But an outer voice tells me that it could be the true Jacob's ladder from the electroweak scale to some grand gauge unification. The spectral action principle says a lot and brings us closer to the secret of the "great mother". I, at all events, am convinced that She does play dice.
The hyper-augmented ;-) blogger

With due respect to what we owe to the father of (the most famous quote about) quantum mechanics
Die Quantenmechanik ist sehr achtung-gebietend. Aber eine innere Stimme sagt mir, daß das doch nicht der wahre Jakob ist. Die Theorie liefert viel, aber dem Geheimnis des Alten bringt sie uns kaum näher. Jedenfalls bin ich überzeugt, daß der nicht würfelt.
Quantum mechanics is certainly imposing. But an inner voice tells me that it is not yet the real thing. The theory says a lot, but does not really bring us any closer to the secret of the 'old one'. I, at any rate, am convinced that He is not playing at dice.
 Albert Einstein, in a Letter to Max Born (4 December 1926)

Waiting for the proper ear to listen to how Nature performs the quantum trick that we definitely see
Since our early childhood we know in our bones that in order to interact with an object we have either to go to it or to throw something at it. Yet, contrary to all our daily experience, Nature is nonlocal: there are spatially separated systems that exhibit nonlocal correlations. In recent years this led to new experiments, deeper understanding of the tension between quantum physics and relativity and to proposals for disruptive technologies...
Many physicists feel uneasy with nonlocality [Some conclude that it must be realism that is faulty. But I don't see in which sense this could save locality. Moreover, realism is often confused with determinism, an uninteresting terminology issue, see ... Non-realism: deep thought or a soft option?]. A part of the uneasiness comes from a confusion between nonlocal correlations and nonlocal signalling. The latter means the possibility to signal at arbitrarily fast speeds, a clear contradiction with relativity. However, the nonlocal correlations of quantum physics are nonsignalling. This should remove some of the uneasiness. Furthermore, note that in a nonsignalling world, correlations can be nonlocal only if the measurement results are not pre-determined. Indeed, if the results were predetermined (and accessible with future theories and technologies), then one could exploit nonlocal correlations to signal. This fact has recently been used to produce bit strings with proven randomness [4]. This is fascinating because it places quantum nonlocality no longer at the center of a debate full of susceptibilities and prejudice, but as a resource for future quantum technologies. We'll come back to this, but beforehand let us present a few recent experimental tests of quantum nonlocality.
The pioneering experiment by Clauser [5] suffered from a few loopholes, but these have since been separately closed [6, 7, (*)]. Still, correlations cry out for explanations, as emphasized by Bell [8]. Everyone confronted with nonlocal correlations feels that the two systems somehow influence each other (e.g. Einstein's famous spooky action at a distance). This is also the way textbooks describe the process: a first measurement triggers a collapse of the entire state vector, hence modifying the state at the distant side. In recent years these intuitions have been taken seriously, leading to new experimental tests. If there is an influence from Alice to Bob, this influence has to propagate faster than light, as existing experiments have already demonstrated violation of Bell's inequality between space-like separated regions [9]. But a faster-than-light speed can only be defined with respect to a hypothetical universal privileged reference frame, such as the one in which the cosmic background radiation is isotropic. The basic idea is that if correlations are due to some "hidden influence" that propagates at finite speed, then, if the two measurements are sufficiently well synchronized in the hypothetical privileged frame, the influence doesn't arrive on time and one shouldn't observe nonlocal correlations. There remains the problem that one doesn't know a priori the privileged frame in which one should synchronize the measurements. This difficulty was recently circumvented by taking advantage of the Earth's 24-hour rotation, thus setting stringent lower bounds on the speed of these hypothetical influences [10]. Hence, nonlocal correlations happen without one system influencing the other. In another set of experiments the two observers, Alice and Bob, were set in motion in opposite directions in such a way that each, in his own inertial reference frame, felt that he had performed his measurement first and could thus not have been influenced by his partner [11, 12].
Hence, quantum correlations happen without any time-ordering...
To conclude let us come to the conceptual implications. In modern quantum physics entanglement is fundamental; furthermore, space is irrelevant - at least in quantum information science space plays no central role and time is a mere discrete clock parameter. In relativity space-time is fundamental and there is no place for nonlocal correlations. To put the tension in other words: no story in space-time can tell us how nonlocal correlations happen, hence nonlocal quantum correlations seem to emerge, somehow, from outside space-time.
N. Gisin (Submitted on 8 Dec 2009)

(*) a personal update: Closing the Door on Einstein and Bohr's Quantum Debate by Alain Aspect.
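An aside from the blogger: as a toy illustration of the nonlocal yet nonsignalling correlations Gisin discusses, one can evaluate the CHSH combination for the spin singlet state, whose quantum correlator for measurement angles a and b is E(a,b) = -cos(a-b), while any local hidden-variable model obeys |S| ≤ 2. This is a minimal sketch of the standard textbook computation, not of any particular experiment:

```python
import math

def E(a, b):
    """Singlet-state correlator for spin measurements along angles a and b."""
    return -math.cos(a - b)

# Optimal CHSH settings (Alice: a, a'; Bob: b, b')
a, ap = 0.0, math.pi / 2
b, bp = math.pi / 4, -math.pi / 4

# CHSH combination; local hidden variables would give |S| <= 2
S = E(a, b) + E(a, bp) + E(ap, b) - E(ap, bp)
print(abs(S))  # 2*sqrt(2) ~ 2.828, above the local bound of 2
```

The value 2√2 is Tsirelson's bound, the maximal quantum violation, which is what the loophole-free experiments cited above approach.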

Tuesday, August 23, 2016

Casino Royal

Place your bets, no more bets (on the geometry of spacetime)?
From Gerard 't Hooft's recorded message at the "event adjudicating the bet on SUSY first made in 2000 and then in 2011" (to quote yesterday's Not Even Wrong post by Peter Woit):
In addition to my being skeptical about the expectation that supersymmetry will one day be an elementary ingredient of particle theories, I would find it even more unlikely that the mass values would be just small enough to allow detection of such a symmetry in the next range of experiments. That really sounds a bit too much like wishful thinking. I do know the arguments, I just think they are far too optimistic. However, let me end with a positive note. You see, supersymmetry refers to the spin of particles, a property involving rotations in space. If proven true - contrary to my expectations - supersymmetry would be the first major modification of our views of space and time since Einstein's theory of general relativity in 1915...

I find it interesting that this Nobel Prize winner underlines through SUSY the need for a better geometric vision. I guess he has in mind his research program on the quantum description of black holes and his recent results(*) in "removing" the firewall problem with "highly non-trivial topological space-time features at the Planck scale".
Time for another bet, with a different perspective, on the proper new geometric tools to build from the multi-TeV data already understood? A more interesting question, in my opinion, is what could be the crucial experiment to untie the abstract knots of space-time-matter models.

//The part of the post above appeared first as a comment on Resonaances.

*The results of 't Hooft on the entanglement of Hawking particles antipodal on the black hole horizon have been reported on my blog first here, then here, where the curious reader is now offered a refreshing late-summer dive into strange waters to get away from the current heat wave. To say it more explicitly, these older but updated posts are today more than ever an invitation to travel through possibly the most illuminating analysis of the quantum gravity physics of a black hole, offering a tentative Einstein-Rosen bridge quantum crossing, without threat from any firewall but with a topological twist ;-).

//last edit August 24, 2016

Monday, August 22, 2016

The world is not enough

... according to our standard way of looking at it
Leaving aside for a while my tentative defence and illustration of spectral noncommutative geometry, I choose today to focus on a specific Standard Model extension, called SMASH by its authors, which I regard as worth reporting because it discusses in some detail a quite minimal inflation scenario relying on a complex singlet scalar with an ultra-high-scale vacuum expectation value involved both in the seesaw mechanism and in Peccei-Quinn symmetry breaking:
... new physics beyond the Standard Model (SM) is needed to achieve a complete description of Nature. First of all, there is overwhelming evidence, ranging from the cosmic microwave background (CMB) to the shapes of the rotation curves of spiral galaxies, that nearly 26% of the Universe is made of yet unidentified dark matter (DM) [1]. Moreover, the SM cannot generate the primordial inflation needed to solve the horizon and flatness problems of the Universe, as well as to explain the statistically isotropic, Gaussian and nearly scale invariant fluctuations of the CMB [2]. The SM also lacks enough CP violation to explain why the Universe contains a larger fraction of baryonic matter than of antimatter. Aside from these three problems at the interface between particle physics and cosmology, the SM suffers from a variety of intrinsic naturalness issues. In particular, the neutrino masses are disparagingly smaller than any physics scale in the SM and, similarly, the strong CP problem states that the θ-parameter of quantum chromodynamics (QCD) is constrained from measurements of the neutron electric dipole moment to lie below an unexpectedly small value.
In this Letter we show that these problems may be intertwined in a remarkably simple way, with a solution pointing to a unique new physics scale around 10¹¹ GeV. The SM extension we consider consists just of the KSVZ axion model [3, 4] and three right-handed (RH) heavy SM-singlet neutrinos [One may alternatively choose the DFSZ axion model. The inflationary predictions in this model stay the same, but the window in the axion mass will move to larger values [46]. Importantly, in this case the PQ symmetry is required to be an accidental rather than an exact symmetry in order to avoid the overclosure of the Universe due to domain walls [61].]. This extra matter content was recently proposed in [6], where it was emphasized that in addition to solving the strong CP problem, providing a good dark matter candidate (the axion), explaining the origin of the small SM neutrino masses (through an induced seesaw mechanism) and the baryon asymmetry of the Universe (via thermal leptogenesis), it could also stabilize the effective potential of the SM at high energies thanks to a threshold mechanism [7, 8]. This extension also leads to successful primordial inflation by using the modulus of the KSVZ SM singlet scalar field [9]. Adding a cosmological constant to account for the present acceleration of the Universe, this Standard Model Axion Seesaw Higgs portal inflation (SMASH) model offers a self-contained description of particle physics from the electroweak scale to the Planck scale and of cosmology from inflation until today...

We extend the SM with a new complex singlet scalar field σ and two Weyl fermions Q and Q̄ in the 3 and 3̄ representations of SU(3)c and with charges −1/3 and 1/3 under U(1)Y. With these charges, Q and Q̄ can decay into SM quarks, which ensures that they will not become too abundant in the early Universe. We also add three RH fermions Ni. The model is endowed with a new Peccei-Quinn (PQ) global U(1) symmetry, which also plays the role of lepton number. The charges under this symmetry are: q(1/2), u(−1/2), d(−1/2), L(1/2), N(−1/2), E(−1/2), Q(−1/2), Q̄(−1/2), σ(1); and the rest of the SM fields (e.g. the Higgs) are uncharged. The new Yukawa couplings are: L ⊃ −[Fij Li H Nj + (1/2) Yij σ Ni Nj + y Q̄σQ + yQdi σQ̄di + h.c.]. The first two terms realise the seesaw mechanism once σ acquires a vacuum expectation value (VEV) ⟨σ⟩ = vσ/√2, giving a neutrino mass matrix of the form mν = −F Y⁻¹ Fᵀ v²/(√2 vσ), with v = 246 GeV. The strong CP problem is solved as in the standard KSVZ scenario, with the role of the axion decay constant, fA, played by vσ = fA. Due to non-perturbative QCD effects, the angular part of σ = (ρ + vσ) exp(iA/fA)/√2, the axion field A, gains a potential with an absolute minimum at A = 0. At energies above the QCD scale, the axion-gluon coupling is L ⊃ −(αS/8π)(A/fA) G G̃, solving the strong CP problem when ⟨A⟩ relaxes to zero. The latest lattice computation of the axion mass gives mA = (57.2 ± 0.7)(10¹¹ GeV/fA) µeV [23].
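An aside from the blogger, to get a feel for the numbers: a minimal order-of-magnitude sketch of the seesaw neutrino mass and of the axion mass implied by the formulas just quoted. The Yukawa values F and Y below are my own illustrative choices, not the authors' fit:

```python
import math

v = 246.0            # electroweak VEV in GeV
v_sigma = 1e11       # PQ-breaking scale in GeV, the SMASH benchmark
F, Y = 0.01, 1.0     # illustrative Yukawa values (blogger's assumption, not a fit)

# Seesaw estimate from m_nu ~ F^2 v^2 / (sqrt(2) Y v_sigma), converted GeV -> eV
m_nu_eV = F**2 * v**2 / (math.sqrt(2) * Y * v_sigma) * 1e9
print(f"m_nu ~ {m_nu_eV:.3f} eV")   # sub-eV, the right ballpark for light neutrinos

# Axion mass from the quoted lattice result m_A = 57.2 (1e11 GeV / f_A) micro-eV
def axion_mass_ueV(f_A_GeV):
    return 57.2 * (1e11 / f_A_GeV)

print(f"m_A(f_A = v_sigma) = {axion_mass_ueV(v_sigma):.1f} micro-eV")
```

With Yukawas of the order of those of the lighter SM fermions, a single 10¹¹ GeV scale indeed yields both sub-eV neutrinos and a ~50 µeV axion.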

The scalar sector of the model has the potential 
V(H,σ) = λH (H†H − v²/2)² + λσ (|σ|² − vσ²/2)² + 2λHσ (H†H − v²/2)(|σ|² − vσ²/2). (1)
In the unitary gauge, there are two scalar fields that could drive inflation: h, the neutral component of the Higgs doublet Hᵗ = (0, h)/√2, and the modulus of the new singlet, ρ = √2|σ|.
... inflation in SMASH is mostly driven by ρ, with a non-minimal coupling 2×10⁻³ ≲ ξσ ≲ 1. The upper bound on ξσ ensures that the scale of perturbative unitarity breaking is at MP (provided that also ξH ≲ 1), whereas the lower bound on ξσ corresponds to a tensor-to-scalar ratio r ≲ 0.07 (as constrained by the Planck satellite and the BICEP/Keck array [1, 29]). Neglecting ξH, predictive slow-roll inflation in SMASH in the Einstein frame can be described by a single canonically normalized field χ with potential

V(χ) = (λ/4) ρ(χ)⁴ (1 + ξσ ρ(χ)²/MP²)⁻² , (2)
where λ can be either λσ or λ̃σ = λσ − λHσ²/λH, with the second case being possible only if λHσ < 0, corresponding to an inflationary valley in a mixed direction in the (ρ, h) plane. The field χ is the solution of Ω² dχ/dρ ≃ (bΩ² + 6ξσ²ρ²/MP²)^(1/2), where Ω² ≃ 1 + ξσ ρ²/MP² is the Weyl transformation into the Einstein frame and b = 1 for λ = λσ, or b = 1 + |λHσ/λH| ∼ 1 for λ = λ̃σ. The value of b determines the angle in field space described by the inflationary trajectory: h²/ρ² ≃ b − 1. The predictions in the case λ = λσ (or b → 1) for r vs the scalar spectral index ns are shown in FIG. 1 for various values of ξσ. The running of ns is in the ballpark of 10⁻⁴–10⁻³, which may be probed e.g. by future observations of the 21 cm emission line of hydrogen [30]. These values of the primordial parameters are perfectly compatible with the latest CMB data, and the amount of inflation that is produced solves the horizon and flatness problems. Given the current bounds on r and ns, fully consistent (and predictive) inflation in SMASH occurs if 10⁻¹³ ≲ λ ≲ 10⁻⁹.

FIG. 1. The tensor-to-scalar ratio, r, vs the scalar spectral index, ns, at k = 0.002 Mpc⁻¹ for the SMASH inflationary potential (2), assuming λHσ ≪ λH. The color coded contours represent current observational constraints at 68% and 95% CL from [1]. The threading of thin continuous lines indicates the number of e-folds N from the time the scale k = 0.002 Mpc⁻¹ exits the horizon to the end of inflation. Lines of constant ξσ are shown dotted. The thick black line takes into account the fact that after inflation the Universe enters a radiation era. The line identified as "quartic inflation" shows the predictions, as N varies, for a purely quartic monomial potential (ξσ → 0), which is ruled out by the data.
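An aside from the blogger: the caption's statement that pure quartic inflation is ruled out is easy to check with textbook slow-roll formulas. A minimal sketch in Planck units (MP = 1), assuming the standard N ≈ 60 e-folds before the end of inflation:

```python
# Slow-roll analysis of V = (lambda/4) phi^4 in Planck units (M_P = 1):
#   epsilon = 8 / phi^2,  eta = 12 / phi^2
#   inflation ends when epsilon = 1  ->  phi_end^2 = 8
#   N e-folds before the end:  phi_N^2 = 8*N + phi_end^2
N = 60
phi_sq = 8 * N + 8.0

eps = 8 / phi_sq
eta = 12 / phi_sq
n_s = 1 - 6 * eps + 2 * eta   # scalar spectral index
r = 16 * eps                  # tensor-to-scalar ratio

print(f"n_s = {n_s:.3f}, r = {r:.3f}")  # r ~ 0.26, well above the r < 0.07 bound
```

The result r ≈ 0.26 sits far outside the Planck/BICEP contours, which is why SMASH needs the non-minimal coupling ξσ to flatten the potential.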

For the measured central values of the Higgs and top quark masses, the Higgs quartic coupling of the SM becomes negative at h = ΛI ∼ 10¹¹ GeV [31]. If no new physics changes this behaviour, Higgs inflation is not viable, since it requires a positive potential at Planckian field values...
Remarkably, the Higgs portal term ∝ λHσ  in (1) allows absolute stability (even when the corresponding low-energy SM potential would be negative if extrapolated to large h) via the threshold-stabilisation mechanism of [7, 8, 22]. In SMASH, instabilities could also originate in the ρ direction due to quantum corrections from the RH neutrinos and KSVZ fermions. For λHσ >0, absolute stability requires  
λH, λσ > 0 for h < √2 Λh and λ̃H, λ̃σ > 0 for h > √2 Λh , (3)
where we define Λh² = λHσ vσ²/λH, λ̃H = λH − λHσ²/λσ and λ̃σ = λσ − λHσ²/λH. Instead, for λHσ < 0, the stability condition is λ̃H, λ̃σ > 0 for all h [33].
An analysis based on two-loop renormalization group (RG) equations for the SMASH couplings and one-loop matching with the SM [22] shows that stability can be achieved for δ ≡ λHσ²/λσ between 10⁻³ and 10⁻¹, depending on mt ... The Yukawas must satisfy the bound 6y⁴ + ∑ᵢ Yᵢᵢ⁴ ≲ 16π² λσ/log(30MP/(√(2λσ) vσ)). It will prove convenient to define SMASH benchmark units
λ₁₀ = λσ/10⁻¹⁰; δ₃ = δ/0.03; v₁₁ = vσ/(10¹¹ GeV). (4)...

For λHσ > 0, the PQ symmetry is restored nonthermally after inflation and then spontaneously broken again before reheating. On the other hand, for λHσ < 0 and efficient reheating, the restoration and breaking are thermal. In the phase transition, which happens at a critical temperature Tc ≳ λσ^(1/4) vσ, a network of cosmic strings is formed. Its evolution leads to a low-momentum population of axions that together with those arising from the realignment mechanism [43–45] constitute the dark matter in SMASH. Requiring that all the DM is made of axions restricts the symmetry breaking scale to the range 3×10¹⁰ GeV ≲ vσ ≲ 1.2×10¹¹ GeV, which translates into the mass window 50 µeV ≲ mA ≲ 200 µeV, (6) updating the results of [46] with the latest axion mass data [23]. The main uncertainty now arises from the string contribution [46, 47], which is expected to be diminished in the near future [48, 49]. Importantly, the SMASH axion mass window (6) will be probed in the upcoming decade by axion dark matter direct detection experiments such as CULTASK [50], MADMAX [51], and ORPHEUS [52], see also [23, 53] and FIG. 3 for our estimates of their future sensitivity...
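An aside from the blogger: the quoted mass window follows directly from combining the lattice relation mA = 57.2 (10¹¹ GeV/fA) µeV with the allowed range of vσ = fA. A minimal sketch using only numbers quoted in the text:

```python
# SMASH axion dark-matter mass window, from the quoted relations:
#   m_A = 57.2 * (1e11 GeV / f_A) micro-eV,  with f_A = v_sigma,
#   and 3e10 GeV <~ v_sigma <~ 1.2e11 GeV (all values from the text)
def axion_mass_ueV(f_A_GeV):
    return 57.2 * (1e11 / f_A_GeV)

m_min = axion_mass_ueV(1.2e11)  # largest f_A -> lightest axion
m_max = axion_mass_ueV(3e10)    # smallest f_A -> heaviest axion

print(f"{m_min:.0f} micro-eV <~ m_A <~ {m_max:.0f} micro-eV")
```

One recovers roughly 48-190 µeV, consistent with the rounded 50-200 µeV window (6) quoted by the authors.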
FIG. 3. SMASH predictions for the axion-photon coupling (thick solid horizontal line) with current bounds on axion DM (ADMX,BRF) and prospects for next generation axion dark matter experiments, such as ADMX2(3) [54], CULTASK [50], MADMAX [51], ORPHEUS [52], X3 [55], and the helioscope IAXO [56].

The origin of the baryon asymmetry of the Universe is explained in SMASH from thermal leptogenesis [57]. This requires massive RH neutrinos acquiring equilibrium abundances and then decaying when production rates become Boltzmann suppressed. If λHσ < 0, then T_reheating > Tc for stable models in the DM window (5). The RH neutrinos become massive after the PQ phase transition, and those with masses Mi < Tc retain an equilibrium abundance. The stability bound on the Yukawas Yii enforces Tc > M1, so that at least the lightest RH neutrino stays in equilibrium. Moreover, the annihilations of the RH neutrinos tend to be suppressed with respect to their decays. This allows for vanilla leptogenesis from the decays of a single RH neutrino, which demands M1 ≳ 5×10⁸ GeV [58, 59]. However, for vσ as in (5), this is just borderline compatible with stability. Nevertheless, leptogenesis can occur with a mild resonant enhancement [60] for a less hierarchical RH neutrino spectrum, which relaxes the stability bound and ensures that all the RH neutrinos remain in equilibrium after the phase transition.

Friday, August 19, 2016

Spectre (quantum reality is full of phantoms from the unfinished past)

Reality bites irony too
On August 4, 2008 Alain Connes wrote a post entitled Irony in the blog Noncommutative geometry:

In a rather ironical manner the first Higgs mass that is now excluded by the Tevatron latest results is precisely 170 GeV, namely the one that was favored in the NCG interpretation of the Standard Model, from the unification of the quartic Higgs self-coupling with the other gauge couplings and making the "big desert" hypothesis, which assumes that there is no new physics (besides the neutrino mixing) up to the unification scale.

Lubos Motl commented on it in the following way:
thanks for this amazing speed and integrity. I am sure it wouldn't be matched by any other author of unusual and unexpected predictions in physics I know of... 
I wonder whether you appreciate the special role of the value 170 GeV. It's the value for which the Higgs quartic self-interaction is high enough for the RG running to push it to a divergent value - the Landau pole - at an accessible energy scale, namely the GUT scale.

[The Standard Model] has certain couplings and they simply must be allowed to be any real number.

Only the requirement of enhanced symmetries or the absence of anomalies or divergences or ghosts at special points are legitimate reasons to pick preferred values of the masses and other parameters of the Standard Model in any quantum field (-like) theory

So even if a particular language makes it harder (or impossible) to write the Standard Model with generic values of the couplings in your (or another) formalism, it can't be viewed as a trusted prediction because it is always numerology that depends on the "language" i.e. the particular NCG reformulation of the Standard Model.

So QFT itself can't answer such questions, e.g. the values of the couplings, not even when one rewrites it with new symbols... 
Now, when you have a sufficient anti-NCG momentum this week, it may be a great idea for someone like me to ask you to learn the rest of the string/M massive tower that you have been neglecting so far...  
So I would like to boldly use the opportunity to invite you to throw away a young-man's maverickness for a while, to learn string/M-theory this and next week (or month) ... and to solve all the remaining open problems of string theory by this Christmas, including a non-perturbative universal definition of the theory applicable across the configuration space (landscape), the vacuum selection problem, and the cosmological constant problem...
Thanks, you can surely do it. I admire you and I have met way too many smart people who admire you even qualitatively more than I do so all this stuff is surely realistic.
All the best

Jacques Distler reacted to Lubos' claim about a Landau pole at the GUT scale for the 170 GeV prediction:
I am hesitant to wade into this, but I believe that's wrong. 170 GeV came out of an RG analysis, alright. But, rather than diverging at the GUT scale, the Higgs quartic self-coupling "unifies" with the gauge couplings at that scale (the precise formula is in their paper, or in my blog post). 
There are plenty of problems with their scenario, but the Higgs self-coupling hitting its Landau pole at the GUT scale is not one of them.
August 6, 2008 at 6:56 AM
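An aside from the blogger on the disputed running: a toy one-loop RG keeping only the quartic self-coupling (and ignoring the gauge and top-Yukawa contributions that matter in the real SM, so this is only a caricature of the effect under debate, not either party's computation) does place a Landau pole for a 170 GeV Higgs at an intermediate scale:

```python
import math

# Toy one-loop running of a pure quartic coupling: beta = 3*lambda^2 / (2*pi^2).
# The solution lambda(mu) = lambda0 / (1 - (3*lambda0/(2*pi^2)) * ln(mu/mu0))
# blows up (Landau pole) at ln(mu/mu0) = 2*pi^2 / (3*lambda0).
v = 246.0                      # electroweak VEV in GeV
mH = 170.0                     # the disputed Higgs-mass value in GeV
lam0 = mH**2 / (2 * v**2)      # tree-level quartic, m_H^2 = 2*lambda*v^2
log_pole = 2 * math.pi**2 / (3 * lam0)
mu_pole = mH * math.exp(log_pole)

print(f"lambda0 = {lam0:.3f}, Landau pole near {mu_pole:.2e} GeV")
```

In this caricature the pole lands around 10¹⁴ GeV, i.e. in the GUT neighbourhood; whether the full running diverges there or instead "unifies" with the gauge couplings is precisely the point Distler and Motl are arguing about.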

Motl replied:
Your article didn't help me to increase my belief that it should be possible to predict from a new, different, equivalent formulation of a field theory relationships that cannot be extracted from the old-fashioned definitions of the theory.
... in heterotic string theory, all these couplings come from some worldsheet correlators that share a "common ancestor". So there could perhaps be a "unification" of quartic couplings with gauge couplings at the string scale. But these things are not model-independent, not even in big classes of string vacua, so I don't believe that they could be universal in all "good" theories sharing the same low-energy SM limit.
August 6, 2008 at 9:26 AM
 Distler did also:
... At low energies (E≪Λ), everything can be phrased in the language of an effective QFT. What they have is something completely different from a local QFT, for energies above the GUT scale.
August 6, 2008 at 1:01 PM

Fedele Lizzi eventually remarked:
Alain will certainly remember that in various incarnations of the model, starting from his work with Lott, and in particular before the spectral action, the mass of the Higgs went as far up as 250 GeV if I remember well, if not higher. 
The spectral action has been consistent in favouring a more or less "light" Higgs, and some sort of desert. This is probably a consequence of the requirement of the unification of coupling constants, which makes the model akin to SU(5).

The latest version of the model of Chamseddine, Connes and Marcolli (ACM)^2 is certainly the most powerful and coherent one, but it assumes an almost commutative geometry all the way up to a very high scale, and as a consequence that the renormalization group analysis can be done in the usual way. I still find astonishing the fact that a model comes up with a Higgs mass "in the right ball park" from purely geometric considerations.

As a first thing, we should judge how unchangeable the Higgs mass prediction is in the (ACM)^2 model. The model is complicated, and I cannot say I have mastered all of its details, but I think it is fairly solid. That is its strength; it may become its problem. So its enhancement cannot come from the inside. It probably requires a wider change. My personal bias is to think that probably already at scales which would lie in the big desert, some effects of the fact that spacetime is not describable by ordinary geometry should show. Probably the fact that neutrino masses are so small hints at a different mechanism, beyond see-saw.

In 2012, exactly one month after the official announcement of the discovery of a Higgs-boson-like 125 GeV resonance at the LHC, Connes wrote in his blog post:
Since 4 years ago I thought that there was an unavoidable incompatibility between the spectral model and experiment... Now 4 years have passed and we finally know that the Brout-Englert-Higgs particle exists and has a mass of around 125 GeV... this certainly slowed down quite a bit the interest in the spectral model since there seemed to be no easy way out and whatever one would try would not succeed in lowering the Higgs mass {prediction}. The reason for this post today is that this incompatibility has now finally been resolved in a fully satisfactory manner in a joint work with my collaborator Ali Chamseddine, the paper is now on arXiv at 1208.1030

What is truly remarkable is that there is no need to modify the spectral model in any way, it had already the correct ingredients and our mistake was to have neglected the role of a real scalar field which was already present and whose couplings (with the Higgs field in particular) were already computed in 2010 as one can see in 1004.0464. This completely changes the perspective on the spectral model, all the more because the above scalar field has been independently suggested by several groups as a way of stabilizing the Standard Model in spite of the low experimental Higgs mass. So, after this fruitful interaction with experimental results, it is fair to conclude that there is a real chance that the spectral approach to high energy physics is on the right track for a geometric unification of all known forces including gravity...

Then two years later, on November 9, 2014, came probably the most unexpected and rewarding result of the spectral noncommutative geometric program. As advertised by its leader:
The purpose of this post is to explain a recent discovery that we did with my two physicists collaborators Ali Chamseddine and Slava Mukhanov. We wrote a long paper Geometry and the Quantum: Basics which we put on the arXiv, but somehow I feel the urge to explain the result in non-technical terms...

In particle physics there is a well accepted notion of particle which is the same as that of irreducible representation of the Poincaré group. It is thus natural to expect that the notion of particle in Quantum Gravity will involve irreducible representations in Hilbert space, and the question is "of what?". What we have found is a candidate answer which is a degree 4 analogue of the Heisenberg canonical commutation relation [p,q] = iħ. The degree 4 is related to the dimension of space-time. The role of the operator p is now played by the Dirac operator D. The role of q is played by the Feynman slash of real fields, so that one applies the same recipe to spatial variables as one does to momentum variables. The equation is then of the form E(Z[D,Z]⁴) = γ, where γ is the chirality and where the E of an operator is its projection on the commutant of the gamma matrices used to define the Feynman slash.
Our main results then are that:

1) Every spin 4-manifold M (smooth compact connected) appears as an irreducible representation of our two-sided equation.
2) The algebra generated by the slashed fields is the algebra of functions on M with values in A = M₂(ℍ) ⊕ M₄(ℂ), which is exactly the slightly noncommutative algebra needed to produce gravity coupled to the Standard Model minimally extended to an asymptotically free theory.
3) The only constraint on the Riemannian metric of the 4-manifold is that its volume is quantized, which means that it is an integer (larger than 4) in Planck units.
The great advantage of 3) is that, since the volume is quantized, the huge cosmological term which dominates the spectral action is now quantized and no longer interferes with the equations of motion, which, as a result of our many years of collaboration with Ali Chamseddine, give back the Einstein equations coupled with the Standard Model.

The big plus of 2) is that we finally understand the meaning of the strange choice of algebras that seems to be privileged by nature: it is the simplest way of replacing a number of coordinates by a single operator. Moreover, as the result of our collaboration with Walter van Suijlekom, we found that the slight extension of the Standard Model to a Pati-Salam model given by the algebra M₂(ℍ) ⊕ M₄(ℂ) greatly improves things from the mathematical standpoint while moreover making the model asymptotically free!

To get a mental picture of the meaning of 1), I will try an image which came gradually while we were working on the problem of realizing all spin 4-manifolds with arbitrarily large quantized volume as a solution to the equation.

"The Euclidean space-time history unfolds to macroscopic dimension from the product of two 4-spheres of Planckian volume as a butterfly unfolds from its chrysalis."

Last year (August 11, 2015) Connes announced his latest physical progress obtained with Ali Chamseddine and Walter van Suijlekom:
From the spectral action principle, the dynamics and interactions are described by the spectral action, tr(f(D/Λ)), where Λ is a cutoff scale and f an even and positive function. In the present case, it can be expanded using heat kernel methods,
tr(f(D/Λ)) ∼ F₄ Λ⁴ a₀ + F₂ Λ² a₂ + F₀ a₄ + O(Λ⁻²),
where F₄, F₂, F₀ are coefficients related to the function f and the aₖ are Seeley-deWitt coefficients, expressed in terms of the curvature of M and (derivatives of) the gauge and scalar fields. This action is interpreted as an effective field theory for energies lower than Λ.
One important feature of the spectral action is that it gives the usual Pati–Salam action with unification of the gauge couplings...  
Normalizing {the terms from the scale-invariant part F₀a₄ in the spectral action for the spectral Pati–Salam model} to give the Yang–Mills Lagrangian demands a common normalization of the gauge kinetic terms, which requires gauge coupling unification...
Since it is well known that the SM gauge couplings do not meet exactly, it is crucial to investigate the running of the Pati–Salam gauge couplings beyond the Standard Model and to find a scale Λ where there is grand unification: gR(Λ) = gL(Λ) = g(Λ).

This would then be the scale at which the spectral action is valid as an effective theory. There is a hierarchy of three energy scales: the SM scale, an intermediate mass scale mR where symmetry breaking occurs and which is related to the neutrino Majorana masses (10¹¹–10¹³ GeV), and the GUT scale Λ.

In the paper, we analyze the running of the gauge couplings according to the usual (one-loop) RG equation. As mentioned before, depending on the assumptions on {the Dirac operator which operates within the noncommutative fine structure of spacetime}, one may vary to a limited extent the scalar particle content ... {nevertheless} we establish grand unification for all of the scenarios, with a unification scale of the order of 10¹⁶ GeV, thus confirming the validity of the spectral action at the corresponding scale.
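An aside from the blogger: the statement that the SM gauge couplings do not meet exactly is easy to reproduce with standard one-loop running. This is a sketch with textbook beta coefficients and PDG-like input values, not the Pati-Salam analysis of the paper:

```python
import math

# One-loop SM running: alpha_i^{-1}(mu) = alpha_i^{-1}(MZ) - b_i/(2*pi) * ln(mu/MZ),
# with SU(5)-normalized hypercharge; inputs are standard PDG-like numbers.
MZ = 91.19                       # GeV
alpha_inv = [59.0, 29.6, 8.45]   # U(1)_Y (GUT norm), SU(2)_L, SU(3)_c at MZ
b = [41 / 10, -19 / 6, -7]       # one-loop beta coefficients

def crossing_scale(i, j):
    """Scale (in GeV) where couplings i and j become equal."""
    t = 2 * math.pi * (alpha_inv[i] - alpha_inv[j]) / (b[i] - b[j])
    return MZ * math.exp(t)

mu12 = crossing_scale(0, 1)   # alpha1 = alpha2
mu23 = crossing_scale(1, 2)   # alpha2 = alpha3
print(f"alpha1=alpha2 near {mu12:.1e} GeV, alpha2=alpha3 near {mu23:.1e} GeV")
```

The pairwise crossings land around 10¹³ GeV and 10¹⁷ GeV respectively, several orders of magnitude apart: there is no single SM unification point, which is why the running has to be redone with the Pati-Salam particle content above mR.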

As one can see, it took more than the one year wished for by Lubos for Connes and his mathematician and physicist colleagues to propose some tentative solutions to the vacuum selection problem and the cosmological constant one (o_~)

mercredi 17 août 2016

The spy who loved me (neither I ;-)

Or how Lubos Motl shows us that he does not love noncommutative geometry either ;-)
It happens that Lubos went so far as to write two posts about noncommutative geometry last July. They were motivated by the writings of another blogger and scientist named Florin Moldoveanu, whose current research on quantum theory led him to take an interest in the spectral noncommutative geometrisation (SNcG) of physics. I have already made some comments about Lubos' posts on my blog, here for the first post and there for the second one (I put some comments on his own blog too). 

Today I choose to offer a personal collection of Lubos' thoughts with some new comments, so the reader may form a picture of some possible failings of the spectral Standard Model. My digest may also appear as a deliberately malicious, or merely naïve, interpretation of Lubos' philosophically emotional objections, mirroring some shortcomings of a particular string theory practitioner. 

Connes and collaborators claim to have something clearly different from the usual rules of quantum field theory (or string theory). The discovery of a new framework that would be "on par" with quantum field theory or string theory would surely be a huge one, just like the discovery of additional dimensions of the spacetime of any kind. Except that we have never been shown what the Connes' framework actually is, how to decide whether a paper describing a model of this kind belongs to Connes' framework or not. And we haven't been given any genuine evidence that the additional dimensions of Connes' type exist.
...they have made some truly extraordinary claims that have excited me as well. I can't imagine how could I be unexcited at least once; but I also can't imagine that I would preserve my excitement once I see that there's no defensible added value in those ideas...
In 2006, for example, Chamseddine, Connes, and Marcolli have released their standard model with neutrino mixing that boldly predicted the mass of the Higgs boson as well. The prediction was 170 GeV which is not right, as you know: the Higgs boson of mass 125 GeV was officially discovered in July 2012... 
But in August 2012, one month after the 125 GeV Higgs boson was discovered, Chamseddine and Connes wrote a preprint about the resilience of their spectral standard model. A "faux pas" would probably be more accurate but "resilience" sounded better... In that paper, they added some hocus pocus arguments claiming that because of some additional singlet scalar field σ that was previously neglected, the Higgs prediction is reduced from 170 GeV to 125 GeV.
I can't make sense of the technical details – and I am pretty sure that it's not just due to the lack of effort, listening, or intelligence. There are things that just don't make sense. Connes and his co-author claim that the new scalar field σ which they consider a part of their "standard model" is also responsible for the Majorana neutrino masses...
Now, this just sounds extremely implausible because the origin of the small neutrino masses is very likely to be in the phenomena that occur at some very high energy scale near the GUT scale – possibly grand unified physics itself. The seesaw mechanism produces good estimates for the neutrino masses, m_ν ∼ m_h²/m_GUT.
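A quick back-of-envelope check of the seesaw estimate Lubos quotes, from this blogger: taking m_h ∼ 125 GeV for the Dirac mass scale and an assumed m_GUT ∼ 10¹⁶ GeV,

```python
# Back-of-envelope check of the seesaw estimate m_nu ~ m_h^2 / m_GUT.
m_h = 125.0      # Higgs mass, GeV
m_GUT = 1e16     # assumed grand-unification scale, GeV

m_nu_GeV = m_h**2 / m_GUT
m_nu_eV = m_nu_GeV * 1e9   # 1 GeV = 1e9 eV
print(f"m_nu ~ {m_nu_eV:.2e} eV")   # ~1.6e-3 eV, i.e. the meV scale
```

which indeed lands in the meV ballpark suggested by oscillation data, so the estimate does support tying the Majorana scale to physics near the GUT scale.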

So how could one count the scalar field responsible for these tiny masses to the "Standard Model" which is an effective theory for the energy scales close to the electroweak scale or the Higgs mass m_h ∼ 125 GeV? If the Higgs mass and neutrino masses are calculable in Connes' theory, the theory wouldn't really be a standard model but a theory of everything and it should work near the GUT scale, too.
The claim that one may relate these parameters that seemingly boil down to very different physical phenomena – at very different energy scales – is an extraordinary statement that requires extraordinary evidence. If the statement were true or justifiable, it would be amazing by itself. But this is the problem with non-experts like Connes. He doesn't give any evidence because he doesn't even realize that his statement sounds extraordinary – it sounds (and probably is) incompatible with rather basic things that particle physicists know (or believe to know) 
I don't believe one can ever get correct predictions out of a similar framework, except for cases of good luck. But my skepticism about the proposal is much stronger than that. I don't really believe that there exists any new "framework" at all. 
What are Connes et al. actually doing when they are constructing new theories? They are rewriting some/all terms in a Lagrangian using some new algebraic symbols, like a "star-product" on a specific noncommutative geometry. But is it a legitimate way to classify quantum field theories? You know, a star-product is just a bookkeeping device. It's a method to write down classical theories of a particular type.

But the quantum theory at any nonzero couplings isn't really "fully given by the classical Lagrangian". It should have some independent definition. If you allow the quantum corrections, renormalization, subtleties with the renormalization schemes etc., I claim that you just can't say whether a particular theory is or is not a theory of the Connes' type. The statement "it is a theory of Connes' type" is only well-defined for classical field theories and probably not even for them...

There are many detailed questions that Connes can't quite answer that show that he doesn't really know what he's doing. One of these questions is really elementary: Is gravity supposed to be a part of his picture? Does his noncommutative compactification manifold explain the usual gravitational degrees of freedom, or just some polarizations of the graviton in the compact dimensions, or none? ...
Again, I want to mention the gap between the "physical beef" and "artefacts of formalism". The physical beef includes things like the global symmetries of a physical theory. The artefacts of formalism include things like "whether some classical Lagrangian may be written using some particular star-product". Connes et al. just seem to be extremely focused on the latter, the details of the formalism. They just don't think as physicists.
... even if the theories of Connes' type were a well-defined subset of quantum field theories, I think that it would be irrational to dramatically focus on them. It would seem just a little bit more natural to focus on this subset than to focus on quantum field theories whose all dimensions of representations are odd and the fine-structure constant (measured from the electron-electron low-energy scattering) is written using purely odd digits in the base-10 form. ;-) You may perhaps define this subset but why would you believe that belonging to this subset is a "virtue"?

I surely don't believe that "the ability to write something in Connes' form" is an equally motivated "virtue" as an "additional enhanced symmetry" of a theory.   
This discussion is a somewhat more specific example of the thinking about the "ultimate principles of physics". In quantum field theory, we sort of know what the principles are. We know what theories we like or consider and why. The quantum field theory principles are constructive. The principles we know in string theory – mostly consistency conditions, unitarity, incorporation of massless spin-two particles (gravitons) – are more bootstrapy and less constructive. We would like to know more constructive principles of string theory that make it more immediately clear why there are 6 maximally decompactified supersymmetric vacua of string/M-theory, and things like that. That's what the constantly tantalizing question "what is string theory" means.  
But whenever we describe some string theory vacua in a well-defined quantitative formalism, we basically return to the constructive principles of quantum field theory. Constrain the field/particle content and the symmetries. Some theories – mostly derivable from a Lagrangian and its quantization – obey the conditions. There are parameters you may derive. And some measure on these parameter spaces.

Connes basically wants to add principles such as "a theory may be written using a Lagrangian that may be written in a Connes form". I just don't believe that principles like that matter in Nature because they don't really constrain Nature Herself but only what Nature looks like in a formalism...
Even though some of my objections are technical while others are "philosophically emotional" in some way, I am pretty sure that most of the people who have thought about the conceptual questions deeply and successfully basically agree with me. This is also reflected by the fact that Connes' followers are a restricted group and I think that none of them really belongs to the cream of the theoretical high-energy physics community. Because the broader interested public should have some fair idea about what the experts actually think, it seems counterproductive for non-experts ... to write about topics they're not really intellectually prepared for.
Lubos Motl (Sunday, July 10, 2016)

This post is a perverse teaser to the one to come...

mardi 16 août 2016

Goldfinger (and The man with the golden gun)

If you want to quit particle physics, what after : Finance or Data Science?
Here are a few hints gleaned from the comment section of the post "After the hangover" by the blogger with a golden gun for killing overhyped science news

From Flakmeister (Semi-retired, 20 years experience as a professional Higgs Boson Hunter and other beasts at CERN, BNL, SLAC and FNAL.... Worked on Wall St. 2005-08 modelling CDOs so I had a front row seat for what was coming)
Well, call me gobsmacked, a fundamental scalar exists at a mass which is clearly suggesting to us that the next new physics threshold may be forever beyond our reach... 
At Snowmass 2001, you would have garnered strange looks if you asked what if SM and nothing else at the LHC. Looking up the chimney and seeing nothing but blue sky so to speak. It would seem the famous "No Lose Theorem" of a TeV scale collider paid off with the lowest possible jackpot...
20 March 2013 at 20:24

I think the field is in big trouble as the feared outcome of a SM Higgs and a desert appears to be materializing. I remember discussions with Marcela and Howie H. (and other theorists) about the Higgs being all there is "all the way up the chimney" and noticing that it was anathema to them. We are victims of our own success. 
I left {high energy physics} HEP (some might say was pushed out) in 2005. I am now in charge of Quantitative Analytics for the US operations of a top 10 bank by assets. There is a dearth of people that really understand how to organize and process data in the financial world and there are opportunities out there for those willing to make the shift.
This new career is a joke compared to what I used to do; however, HEP was never going to pay me $300,000 a year for working 20 hrs a week... 
By the way, I co-authored a pretty famous paper on Higgs searches that most here would have known about. It was quite a result at the time and was borne out by data.
8 August 2016 at 17:29

An anonymous asks...
Hi Flakmeister, ... what do you mean by "This new career is a joke compared to what I used to do". Can you explain? This might be very useful for many particle physicists who are looking to make the switch from physics to Wall Street/ data analysis. 
8 August 2016 at 23:10
Flakmeister answers
By joke, I refer to "what qualifies as a quality analysis" and what it takes to satisfy management. The 60-hour work week is now a distant memory except for maybe 2 or 3 weeks a year. The challenge now is to explain basic statistical techniques and results to people without any quantitative background. The really advanced actuarial techniques for OpRisk such as LDA are now frowned upon by regulators (a reflection of the dearth of skill available to review the analysis internally and externally). 
The main difference is that the politics in a large financial institution is brutal because merit plays a small role in who decides what. There are no shortage of characters who are clueless bullies trying to climb the ladder or maintain their little fiefdom, sometimes at your expense.

I think the golden age of shifting to the financial sector from HEP is over, at least for the theorists, that being said, people with good quantitative skills are always in demand. Don't underestimate the value of understanding the basics of data presentation in the real world.

If anyone is thinking of making the change, start by immersing yourself in learning fixed income and the associated concepts. Any HEP-Ex Ph.D. should be able to learn the basics in 2 weeks. The data is basically simple time-series derived n-tuples (relational database entries). One simple test of whether you know what you are doing is to use Excel to compute your mortgage payment (interest and principal) from first principles. Also learn how to bullshit your way with SQL, R, SAS, Excel for the interview. You can figure it out on the fly as needed.
9 August 2016 at 18:00
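For the curious reader, Flakmeister's "simple test" can just as well be passed in Python as in Excel. A minimal sketch from this blogger of the annuity formula and its amortisation check (the loan figures are of course made up):

```python
# The "mortgage test" from first principles: the fixed payment on
# principal P at monthly rate r over n months solves the annuity formula
#   payment = P * r / (1 - (1 + r)**-n).
def monthly_payment(principal, annual_rate, years):
    r = annual_rate / 12.0
    n = years * 12
    return principal * r / (1.0 - (1.0 + r) ** -n)

P, rate, years = 300_000.0, 0.04, 30
pay = monthly_payment(P, rate, years)
print(f"payment: {pay:.2f}")            # roughly $1432 a month

# Sanity check: amortise month by month; the balance should reach ~0.
balance = P
for _ in range(years * 12):
    interest = balance * rate / 12.0    # interest accrued this month
    balance += interest - pay           # rest of the payment is principal
print(f"final balance: {balance:.6f}")  # ~0, up to rounding
```

The amortisation loop is the whole point of the exercise: it verifies the closed-form payment by construction rather than by trusting a spreadsheet function.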

... there is serious politics in HEP-Ex, but at the end of the day a few egos get bruised and someone whines a bit that their version of an analysis was not the headliner or they did not get a certain piece of a hardware project. It's hardball but no one gets carried out on a stretcher. 
In the corporate world, it's downright gladiatorial. In HEP, a Ph.D. is the entrance fee, which for better or worse does imply a certain level of scholarship and academic acumen; in the corporate world, MBAs are a dime-a-dozen and thuggery more than accomplishment is your ticket to the C-suite. 
People get shit-canned for the mere reason that a new hire two levels up has been brought in and feels they need to make "changes". In a meeting, if you don't know who is to be scapegoated for some failure, it is likely that it is you in the crosshairs. 
Another difference is that compensation is not always commensurate with skill or responsibility. Overpaid mediocre HEP'sters don't exist. There is no shortage of clowns in banks whose primary ability is to deflect blame and be a sycophant. 
10 August 2016 at 17:34

Real politics is not about junior people. If pissing off a PhysCo means you didn't get a tenure track position interview, you were a marginal candidate to begin with. I know plenty of people that pissed off lots of people that now have tenure in no small part because they had real merit.

There is a huge difference between being "chucked to the curb" at 30 after a post-doc and being 50 and getting dumped in a re-organization. The former is likely to quickly find a job that will likely double their income; the latter is likely toast and will have to take a 30% pay cut at a new institution.

I do agree that the corporate-like structures that HEP experiments have morphed into are similar to the financial world. One big difference is that an LHC-type experiment is really a republic consisting of institutions whereas a bank is an autocratic bureaucracy. For example, you don't work for ATLAS, you work on ATLAS.

Bank stocks were an incredible investment at one time as they can be insanely profitable. Going forward there are serious headwinds from collapsing interest rates. Banks live off of the spread, and those spreads have all but collapsed.
11 August 2016 at 16:53

From Disputationist (another commentator)
How to get a Data Science job as an HEP PhD 
... an HEP PhD alone is not enough to get you a data-science job, but you just need a few weeks/months of additional preparation. Roughly 50% of data scientists have PhDs - it's something the industry highly prefers for some reason, although it's not a sufficient condition. 
This is what you need to study and put on your resume:
  • Machine Learning - Read a couple of textbooks. Start with "Learning from Data" by Abu-Mostafa, a very concise and easy read. Next read Elements of Statistical Learning by Hastie for a deeper look into many algorithms. Concurrently, do a couple of projects to demonstrate your ML skills - Kaggle has straightforward problems to work on. You'll need to read some blog posts/tutorials to get a decent ranking. Anywhere within top 10-20% will be impressive and give you a lot to talk about during interviews.
  • Basic computer science - Learn Python if you don't know it already. Read up on the common algorithms and data structures - sorting, search, trees, linked lists etc. Practice some coding problems on HackerRank and read a book like "Cracking the Coding Interview"
  • Stats/probability - Read a couple of intro stats/probability books and work out some of the problems
The first several interviews will be a learning experience, but if you put some effort into the above, you'll get a DS job after a few months that will pay 100-130K at first, and if you change companies every year or so you can very quickly get to 200K. And the work can be a lot more interesting and meaningful than finance. The first company may be lame, but after that you will be in high demand and you can pick a company/product/domain that you think is meaningful and interesting. Feel free to ask more questions 
11 August 2016 at 17:44
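To give a flavour of the workflow Disputationist recommends practising, here is a bare-bones classification exercise from this blogger in pure Python (standard library only): toy data, a train/test split, a 1-nearest-neighbour classifier and an accuracy score. Real preparation would of course use scikit-learn and a Kaggle dataset; this is just the skeleton of the logic.

```python
import math
import random

# Toy binary classification: two Gaussian blobs, 1-nearest-neighbour.
random.seed(0)

def make_point(label):
    # Blobs centred at (0, 0) for label 0 and (3, 3) for label 1.
    c = 0.0 if label == 0 else 3.0
    return (random.gauss(c, 1.0), random.gauss(c, 1.0), label)

data = [make_point(i % 2) for i in range(200)]
random.shuffle(data)
train, test = data[:150], data[150:]   # hold out 25% for testing

def predict(x, y):
    # Label of the closest training point (1-NN, Euclidean distance).
    nearest = min(train, key=lambda p: math.hypot(p[0] - x, p[1] - y))
    return nearest[2]

accuracy = sum(predict(x, y) == label for x, y, label in test) / len(test)
print(f"test accuracy: {accuracy:.2f}")   # well above the 0.5 baseline
```

The held-out test set is the one habit worth internalising from day one: the score that matters is always the one on data the model never saw.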

lundi 15 août 2016


Citius, Altius, Fortius : the motto of particle physics?

Robert Oppenheimer showed the way ;-)

Robert Oppenheimer, 1958
“At the Institute for Advanced Study in Princeton, Dr J Robert Oppenheimer jumped for me, the arm outstretched and the hand extended toward the ceiling. ‘What do you read in my jump?’ he asked,” wrote Halsman. “’Your hand pointed upward,’ I hazarded, ‘maybe you were trying to show a new direction, a new objective.’” But the theoretical physicist denied any symbolism. “’No,’ said Dr Oppenheimer, laughing, ‘I was simply reaching.’” 
(Credit: Philippe Halsman/Magnum Photos)

There might be no sign of new physics beyond the standard model below 10²⁰ eV, and the neutrino sector appears to be the first (most reliable?) solid experimental window available onto the unknown. The IceCube experiment, encompassing a cubic kilometer of ice, provided the first evidence for high-energy astrophysical neutrinos (the specific sources have not been clearly identified yet). As the sensitivity of neutrino detectors roughly scales with their volume, one can wonder what the expectation could be for a detector the size of the Moon? Huge, one can guess...
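How huge? A crude estimate from this blogger, assuming (purely for illustration) that the Askaryan technique effectively probes the top ~10 m of radio-transparent regolith over the visible hemisphere:

```python
import math

# Back-of-envelope: compare IceCube's instrumented cubic kilometre with
# the lunar regolith layer probed by the Askaryan technique. The 10 m
# effective depth is an illustrative assumption, not a measured figure.
R_moon_km = 1737.0
regolith_depth_km = 0.010                        # assumed effective depth
hemisphere_area = 2.0 * math.pi * R_moon_km**2   # visible hemisphere, km^2

lunar_volume = hemisphere_area * regolith_depth_km   # km^3
icecube_volume = 1.0                                 # km^3

print(f"effective lunar volume: {lunar_volume:.1e} km^3")
print(f"ratio to IceCube: {lunar_volume / icecube_volume:.0e}")
# Roughly 1e5 cubic kilometres: some five orders of magnitude above IceCube.
```

Even this naive scaling explains why the lunar technique targets the extreme energies, above 10²⁰ eV, where fluxes are far too low for any terrestrial detector.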

The lunar Askaryan technique is a method to study the highest-energy cosmic rays, and their predicted counterparts, the ultra-high-energy neutrinos. By observing the Moon with a radio telescope, and searching for the characteristic nanosecond-scale Askaryan pulses emitted when a high-energy particle interacts in the outer layers of the Moon, the visible lunar surface can be used as a detection area. Several previous experiments, at Parkes, Goldstone, Kalyazin, Westerbork, the ATCA, Lovell, LOFAR, and the VLA, have developed the necessary techniques to search for these pulses, but existing instruments have lacked the necessary sensitivity to detect the known flux of cosmic rays from such a distance. This will change with the advent of the Square Kilometre Array. The SKA will be the world’s most powerful radio telescope. To be built in southern Africa, Australia and New Zealand during the next decade, it will have an unsurpassed sensitivity over the key 100 MHz to few-GHz band. We introduce a planned experiment to use the SKA to observe the highest-energy cosmic rays and, potentially, neutrinos. The estimated event rate will be presented, along with the predicted energy and directional resolution. Prospects for directional studies with phase 1 of the SKA will be discussed, as will the major technical challenges to be overcome to make full use of this powerful instrument. Finally, we show how phase 2 of the SKA could provide a vast increase in the number of detected cosmic rays at the highest energies, and thus to provide new insight into their spectrum and origin.
Projected 90%-confidence limits on the ultra-high-energy (UHE) neutrino flux from 1,000 hours of observations with the Square Kilometre Array (SKA) radio telescope. Predictions are shown for neutrino fluxes from the so-called "top-down models" involving the production of UHE neutrinos in the Early Universe from kinks (Lunardini & Sabancilar (2012), dash-dotted) and cusps (Berezinsky et al. (2011), dot-dash-dotted) in cosmic strings, and also for the neutrino flux produced in interactions of UHE cosmic rays with the Cosmic Microwave Background radiation - "cosmogenic neutrinos" (Allard et al. (2006), shaded). Limits set by other experiments - the Pierre Auger Observatory (Aab et al. 2015), RICE (Kravchenko et al. 2012) and ANITA (Gorham et al. 2010, 2012) - are also shown.

The problem of searching for the highest-energy cosmic rays and neutrinos in the Universe is reviewed. Possibilities for using the radio method for detecting particles of energies above the Greisen–Zatsepin–Kuzmin (GZK) cut-off are analyzed. The method is based on the registration of coherent Cherenkov radio emission produced by cascades of the most energetic particles in radio-transparent lunar regolith. The Luna-26 space mission to be launched in the near future involves the Lunar Orbital Radio Detector (LORD). The potentialities of the LORD space instrument to detect radio signals from showers initiated by ultrahigh-energy particles interacting with lunar regolith are examined. Comprehensive Monte Carlo calculations were carried out within the energy range of 10²⁰ to 10²⁵ eV, taking into account physical properties of the Moon such as its density, lunar-regolith radiation length, radio-wave absorption length, refraction index, reflection from the lower regolith boundary, and orbit altitude of a lunar satellite. The design of the LORD space instrument and its scientific potentialities for registration of low-intensity cosmic-ray particle fluxes of energies above the GZK cut-off up to 10²⁵ eV are discussed as well. The designed LORD module (including the antenna, amplification, and data-acquisition systems) is now under construction. The LORD space experiment will make it possible to obtain important information on the highest-energy particles in the Universe, to verify modern models for the origin and the propagation of ultrahigh-energy particles. It is expected that the LORD space experiment will surpass in its aperture and detection capability the majority of well-known current and proposed experiments that deal with the detection of both ultrahigh-energy cosmic rays and neutrinos. 
The future prospects for the study of ultrahigh-energy particles by orbital radio detectors are also considered, namely multi-satellite lunar systems and space missions to the largest icy planets of the solar system.
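For readers unfamiliar with the GZK cut-off invoked in these abstracts, its order of magnitude follows from simple kinematics: a cosmic-ray proton photo-produces a Δ(1232) resonance on a CMB photon. A back-of-envelope sketch of the threshold:

```python
# GZK threshold estimate: a proton photo-produces a Delta(1232) on a CMB
# photon when the invariant mass squared reaches m_Delta^2. For a
# head-on collision, E_p = (m_Delta^2 - m_p^2) / (4 * E_gamma).
m_delta = 1.232e9   # Delta(1232) mass, eV
m_p = 0.938e9       # proton mass, eV
E_gamma = 6e-4      # typical CMB photon energy at 2.7 K, eV

E_threshold = (m_delta**2 - m_p**2) / (4.0 * E_gamma)
print(f"GZK threshold ~ {E_threshold:.1e} eV")
# ~3e20 eV for a head-on hit; averaging over angles and the CMB spectrum
# brings the effective cut-off down to the canonical ~5e19 eV.
```

This is why the abstracts speak of "energies above the GZK cut-off": protons above roughly 10²⁰ eV cannot travel cosmological distances, and the neutrinos produced in those very interactions are the cosmogenic flux the lunar detectors hunt.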

The Moon provides a huge effective detector volume for ultrahigh energy cosmic neutrinos, which generate coherent radio pulses in the lunar surface layer due to the Askaryan effect. In light of presently considered lunar missions, we propose radio measurements from a Moon-orbiting satellite. First systematic Monte Carlo simulations demonstrate the detectability of Askaryan pulses from neutrinos with energies above 10²⁰ eV, i.e. near and above the interesting GZK limit, at the very low fluxes predicted in different scenarios.

E²-weighted flux of ultrahigh energy cosmic neutrinos (UHECν). Solid (color) curves show the projected detection limits from Eq. (4), based on one year of satellite measurements with a beam-filling antenna for frequencies of 100 MHz (lower set of curves) and 1000 MHz (upper set of curves). Within each set, the curves from top to bottom are for satellite altitudes H of 100, 250 and 1000 km, respectively. Dashed lines show predicted fluxes from the GZK process [3] (consistent with the Waxman-Bahcall bound 5×10⁻⁸ [33]), Z-bursts and Topological Defects (TD) [30]. Thin solid lines show current flux limits from ANITA-lite [9], RICE [34], GLUE [15] and FORTE [28]. Dotted lines show predicted sensitivities for ANITA [9], LOFAR [17] and LORD [18].
(Submitted on 10 Apr 2006 (v1), last revised 15 Feb 2007 (this version, v2))