Next station: naturalness, terminus cosmos

What if the cosmological constant were a nonlocal quantum residue of the discreteness of spacetime geometry ... just as mimetic dark matter is a nonlocal noncommutative consequence of the quantisation of spacetime volume?


Dear Ehrenfest!

... I have once again perpetrated something in gravitation theory which puts me in some danger of being committed to a madhouse.
Einstein's letter to Paul Ehrenfest, February 4th, 1917



The previous post gave the readers of this blog an(other) opportunity to watch a presentation of the new conceptual framework envisioned by the geometer Alain Connes and his closest physicist collaborator Ali Chamseddine, with the help of the cosmologist Sacha Mukhanov, to show how the standard model of quantum matter-radiation interactions emerges from the discreteness of spacetime formulated through a spectral noncommutative geometric equation. While the physical consequences of this breakthrough, which leads to an understanding of dark matter as a form of mimetic gravity, are analysed in detail by Chamseddine in a very recent review article, its impact on the cosmological constant is more elusive, not to speak of the quantisation of spacetime dynamics.

Looking for more insight into what could be a heuristic hypothesis about the discreteness of spacetime volume, I cannot help but talk about, or rather quote, the causal set approach to quantum gravity:

The evidence ... points to a cosmological constant of magnitude Λ ≈ 10⁻¹²⁰κ⁻², and this raises two puzzles: [I prefer the word puzzle or riddle to the word problem, which suggests an inconsistency, rather than merely an unexplained feature of our theoretical picture.] Why is Λ so small without vanishing entirely, and why is it so near to the critical density ρcritical = 3H²...
Is the latter just a momentary occurrence in the history of the universe (which we are lucky enough to witness), or has it a deeper meaning? Clearly both puzzles would be resolved if we had reason to believe that Λ ≈ H² always. In that case, the smallness of Λ today would merely reflect the large age of the cosmos. But such a Λ would conflict with our present understanding of nucleosynthesis in the early universe and of “structure formation” more recently. (In the first case, the problem is that the cosmic expansion rate influences the speed with which the temperature falls through the “window” for synthesizing the light nuclei, and thereby affects their abundances. According to {the Friedmann equations} a positive Λ at that time would have increased the expansion rate, which however is already somewhat too big to match the observed abundances. In the second case, the problem is that a more rapid expansion during the time of structure formation would tend to oppose the enhancement of density perturbations due to gravitational attraction, making it difficult for galaxies to form.) But neither of these reasons excludes a fluctuating Λ with typical magnitude |Λ| ∼ H² but mean value <Λ> = 0. The point now is that such fluctuations can arise as a residual, nonlocal quantum effect of discreteness, and specifically of the type of discreteness embodied in the causal set...
In order to explain this claim, I will need to review some basic aspects of causet theory. [5] According to the causal set hypothesis, the smooth manifold of general relativity dissolves, near the Planck scale, into a discrete structure whose elements can be thought of as the “atoms of spacetime”. These atoms can in turn be thought of as representing “births”, and as such, they carry a relation of ancestry that mathematically defines a partial order, x ≺ y. Moreover, in our best dynamical models [6], the births happen sequentially in such a way that the number n of elements plays the role of an auxiliary time-parameter. (In symbols, n ∼ t.) [It is an important constraint on the theory that this auxiliary time-label n should be “pure gauge” to the extent that it fails to be determined by the physical order-relation ≺. That is, it must not influence the dynamics, this being the discrete analog of general covariance.] Two basic assumptions complete the kinematic part of the story by letting us connect up a causet with a continuum spacetime. One posits first, that the underlying microscopic order ≺ corresponds to the macroscopic relation of before and after, and second, that the number of elements N comprising a region of spacetime equals the volume of that region in fundamental (i.e. Planckian) units. (In slogan form: geometry = order + number.) The equality between number N and volume V is not precise however, but subject to Poisson fluctuations, whence instead of N = V, we can write only
N∼V±√V.                     (5) 
(These fluctuations express a “kinematical randomness” that seems to be forced on the theory by the noncompact character of the Lorentz group.) To complete the causet story, one must provide a “dynamical law” governing the birth process by which the causet “grows” (the discrete counterpart of {the Einstein} equation...). This we still lack in its quantum form, but for heuristic purposes we can be guided by the classical sequential growth (CSG) models referred to above; and this is what I have done in identifying n as a kind of time-parameter...   
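The Poisson statistics behind (5) can be illustrated with a minimal numerical sketch (mine, not Sorkin's; the volume value is purely illustrative): sprinkling elements into a region of four-volume V yields an element count N that fluctuates around V by √V.

```python
import numpy as np

rng = np.random.default_rng(0)

# A causal set is obtained by "sprinkling" elements into a spacetime region
# via a Poisson process: the number N of elements in a region of four-volume V
# (in Planckian units) is Poisson-distributed with mean V, hence N ~ V ± √V.
V = 1.0e6                                # illustrative volume in Planck units
N = rng.poisson(lam=V, size=100_000)     # many sprinklings of the same region

print(f"mean(N) = {N.mean():.0f}   (V       = {V:.0f})")
print(f"std(N)  = {N.std():.0f}     (sqrt(V) = {np.sqrt(V):.0f})")
```

The standard deviation comes out at √V ≈ 1000, which is the content of the "kinematical randomness" mentioned above.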
We can now appreciate why one might expect a theory of quantum gravity based on causal sets to lead to a fluctuating cosmological constant. Let us assume that at sufficiently large scales the effective theory of spacetime structure is governed by a gravitational path-integral, which at a deeper level will of course be a sum over causets. That n plays the role of time in this sum suggests that it must be held fixed, which according to (5) corresponds to holding V fixed in the integral over 4-geometries. If we were to fix V exactly, we’d be doing “unimodular gravity”, in which setting it is easy to see that V and Λ are conjugate to each other in the same sense as energy and time are conjugate in nonrelativistic quantum mechanics. [This conjugacy shows up most obviously in the Λ-term in the gravitational action-integral, which is simply 
−Λ ∫ √−g d⁴x = −ΛV .   (6)
It can also be recognized in canonical formulations of unimodular gravity [7], and in the fact that (owing to (6)) the “wave function” Ψ(³g; Λ) produced by the unrestricted path-integral with parameter Λ is just the Fourier transform of the wave function Ψ(³g; V) produced at fixed V.] In analogy to the ∆E∆t uncertainty relation, we thus expect in quantum gravity to obtain
∆Λ ∆V ∼ ℏ  (7) 
Remember now that, even with N held exactly constant, V still fluctuates, following (5), between N + √N and N − √N; that is, we have N ∼ V ± √V ⇒ V ∼ N ± √N, or ∆V ∼ √V. In combination with (7), this yields for the fluctuations in Λ the central result
∆Λ ∼ 1/√V                     (8)
Finally, let us assume that, for reasons still to be discovered, the value about which Λ fluctuates is strictly zero: <Λ> = 0. (This is the part of the Λ puzzle we are not trying to solve...) A rough and ready estimate identifying the spacetime volume with the Hubble scale H⁻¹ then yields V ∼ (H⁻¹)⁴ ⇒ Λ ∼ 1/√V ∼ H² ∼ ρcritical (where I’ve used that Λ = ∆Λ = Λ − <Λ> since <Λ> = 0). In other words, Λ would be “ever-present” (at least in 3+1 dimensions)...
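The orders of magnitude in this estimate are worth checking explicitly. Here is a back-of-the-envelope computation (mine, not part of the quoted text), in Planck units with a rough Hubble radius of 10⁶⁰ Planck lengths:

```python
# Back-of-the-envelope check of Λ ∼ 1/√V ∼ H², in Planck units (ℏ = κ = 1).
H_inv = 1e60            # today's Hubble radius H⁻¹ in Planck units, roughly
V = H_inv ** 4          # past four-volume ~ (H⁻¹)⁴
dLambda = V ** -0.5     # predicted fluctuation magnitude ΔΛ ~ 1/√V
H2 = H_inv ** -2        # the scale H² ~ ρ_critical

print(f"ΔΛ ~ {dLambda:.1e},  H² ~ {H2:.1e}")   # both ~ 1e-120
```

Both numbers land at ~10⁻¹²⁰, the observed magnitude quoted at the start of the excerpt.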
In trying to develop (8) into a more comprehensive model, we not only have to decide exactly which spacetime volume ‘V’ refers to, we also need to interpret the idea of a varying Λ itself. Ultimately the phenomenological significance of V and Λ would have to be deduced from a fully developed theory of quantum causets, but until such a theory is available, the best we can hope for is a reasonably plausible scheme which realizes (8) in some recognizable form.  
As far as V is concerned, it pretty clearly wants to be the volume to the past of some hypersurface, but which one? If the local notion of “effective Λ at x” makes sense, and if we can identify it with the Λ that occurs in (8), then it seems natural to interpret V as the volume of the past of x, or equivalently (up to Poisson fluctuations) as the number of causet elements which are ancestors of x: 
V = volume(past(x)).
One could imagine other interpretations... but this seems as simple and direct as any... 
As far as Λ is concerned, the problems begin with Einstein's equation itself, whose divergence implies (at least naively... ) that Λ = constant. The model of [2] and [3] addresses this difficulty... we are forced to modify the Friedmann equations... The most straightforward way of doing so is to retain only one of them, or possibly some other linear combination... Then our dynamical scheme is just 
3(ȧ/a)² = ρ + ρΛ                (9a)
2ä/a + (ȧ/a)² = −(p + pΛ)   (9b)
with ρΛ = Λ and pΛ = −Λ − Λ̇/3H. Finally, to complete our model and obtain a closed system of equations, we need to specify Λ as a (stochastic) function of V, and we need to choose it so that ∆Λ ∼ 1/√V. But this is actually easy to accomplish, if we begin by observing that (with κ = ℏ = 1) Λ = S/V ≈ S/N can be interpreted as the action per causet element that is present even when the spacetime curvature vanishes. (As one might say, it is the action that an element contributes just by virtue of its existence.†) Now imagine that each element contributes (say) ±ℏ to S, with a random sign. Then S is just the sum of N independent random variables, and we have S/ℏ ∼ ±√N ∼ ±√(V/ℓ⁴), where ℓ ∼ √(ℏκ) is the fundamental time/length of the underlying theory, which thereby enters our model as a free phenomenological parameter. This in turn implies, as desired, that
Λ = S/V ∼ ±(ℏ/ℓ²)/√V       (10)
We have thus arrived at an ansatz that, while it might not be unique, succeeds in producing the kind of fluctuations we were seeking. Moreover, it lends itself nicely to simulation by computer...    
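Indeed, the random-sign ansatz behind (10) takes only a few lines to simulate. Here is a toy version (mine, not the calibrated simulations of [2] and [3]), with ℏ = ℓ = 1 so that Λ ∼ ±1/√N:

```python
import numpy as np

rng = np.random.default_rng(42)

# Each causet element contributes ±ℏ to the action S with a random sign
# (ℏ = ℓ = 1 here), so S is a sum of N independent ±1 terms and
# Λ = S/V ≈ S/N fluctuates with typical magnitude 1/√N ~ 1/√V.
def lambda_samples(N, trials, rng):
    # S = (#plus − #minus) = 2·Binomial(N, 1/2) − N
    S = 2.0 * rng.binomial(N, 0.5, size=trials) - N
    return S / N

for N in (10**2, 10**4, 10**6):
    lam = lambda_samples(N, trials=2000, rng=rng)
    print(f"N = {N:>7}:  std(Λ) ≈ {lam.std():.2e},  1/√N = {N**-0.5:.2e}")
```

As the causet grows, the spread of Λ = S/N shrinks like 1/√N, which is exactly the scaling (8) the model was built to produce.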
An extensive discussion of the simulations can be found in [3] and [2]. The most important finding was that... the absolute value of Λ follows ρradiation very closely during the era of radiation dominance, and then follows ρmatter when the latter dominates. Secondly, the simulations confirmed that Λ fluctuates with a “coherence time” which is O(1) relative to the Hubble scale. Thirdly, a range of present-day values of ΩΛ is produced, and these are O(1) when ℓ² = O(κ). (Notice in this connection that the variable Λ of our model cannot simply be equated to the observational parameter Λobs that gets reported on the basis of supernova observations, for example, because Λobs results from a fit to the data that presupposes a constant Λ, or if not constant then a deterministically evolving Λ with a simple “equation of state”. It turns out that correcting for this tends to make large values of ΩΛ more likely [3].) Fourthly, the Λ-fluctuations affect the age of the cosmos (and the horizon size), but not too dramatically. In fact they tend to increase it more often than not. Finally, the choice of (9a) for our specific model seems to be “structurally stable” in the sense that the results remain qualitatively unchanged if one replaces (9a) by some linear combination thereof with (9b), as discussed above...
Heuristic reasoning rooted in the basic hypotheses of causal set theory predicted Λ ∼ ±1/√V, in agreement with current data. But a fuller understanding of this prediction awaits the ... new ... “quantum causet dynamics”... Meanwhile, a reasonably coherent phenomenological model exists, based on simple general arguments. It is broadly consistent with observations but a fuller comparison is needed. It solves the “why now” problem: Λ is “ever-present”. It predicts further that pΛ ≈ −ρΛ (w ≈ −1) and that Λ has probably changed its sign many times in the past. [It also tends to favor the existence of something, say a “sterile neutrino”, to supplement the energy density at nucleosynthesis time. Otherwise, we might have to assume that ΩΛ had fluctuated to an unusually small value at that time. It also carries the implication that “large extra dimensions” will not be observed at the LHC...] The model contains a single free parameter of order unity that must be neither too big nor too small. [Unless we want to try to make sense of imaginary time (= quantum tunneling?) or to introduce new effects to keep the right hand side of (9) positive (production of gravitational waves? onset of large-scale spatial curvature or “buckling”?).] In principle the value of this parameter is calculable, but for now it can only be set by hand.
In this connection, it’s intriguing that there exists an analog condensed matter system the “fluid membrane”, whose analogous parameter is not only calculable in principle from known physics, but might also be measurable in the laboratory! [9]...
In itself the smallness of Λ is a riddle and not a problem. But in a fundamentally discrete theory, recovery of the continuum is a problem, and I think that the solution of this problem will also explain the smallness of Λ. (The reason is that if Λ were to take its “natural”, Planckian value, the radius of curvature of spacetime would also be Planckian, but in a discrete theory such a spacetime could no more make sense than a sound wave with a wavelength smaller than the size of an atom. Therefore the only kind of spacetime that can emerge from a causet or other discrete structure is one with Λ≪1.) One can also give good reasons why the emergence of a manifold from a causet must rely on some form of nonlocality. The size of Λ should also be determined nonlocally then, and this is precisely the kind of idea realized in the above model. 
One pretty consequence of this kind of nonlocality is a certain restoration of symmetry between the very small and the very big. Normally, we think of G (gravity) as important on large scales, with ℏ (quantum) important on small ones. But we also expect that on still smaller scales G regains its importance once again and shares it with ℏ (quantum gravity). If the concept of an ever-present Λ is correct then symmetry is restored, because ℏ rejoins G on the largest scales in connection with the cosmological constant.
Finally, let me mention a “fine tuning” that our model has not done away with, namely the tuning of the spacetime dimension to d = 4. In any other dimension but 4, Λ could not be “ever-present”, or rather it could not remain in balance with matter. Instead, the same crude estimates that above led us to expect Λ ∼ H², lead us in other dimensions to expect either matter dominance (d>4) or Λ-dominance (d<4). Could this be a dynamical reason favoring 3+1 as the number of noncompact dimensions?...[10]...
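The dimensional bookkeeping behind this last remark can be spelled out in a few lines (my sketch, not Sorkin's): with V ∼ H⁻ᵈ, the fluctuation (8) gives Λ ∼ H^(d/2), while the total density tracked by the Friedmann equation scales as H². As the universe expands and H decreases, the term with the smaller exponent falls off more slowly and comes to dominate.

```python
# With V ~ H^(-d), eq. (8) gives Λ ~ V^(-1/2) ~ H^(d/2), while the Friedmann
# equation makes the total density scale as H².  As the universe expands
# (H → 0), the term with the SMALLER exponent decays more slowly and wins.
verdicts = {}
for d in (3, 4, 5):
    lam_exp = d / 2        # Λ ~ H^(d/2)
    rho_exp = 2            # ρ ~ H²
    if lam_exp == rho_exp:
        verdicts[d] = "balance (ever-present Λ)"
    elif lam_exp < rho_exp:
        verdicts[d] = "Λ-dominance"       # Λ decays more slowly than ρ
    else:
        verdicts[d] = "matter dominance"  # Λ decays faster than ρ
    print(f"d = {d}:  Λ ~ H^{lam_exp},  ρ ~ H^{rho_exp}  →  {verdicts[d]}")
```

Only d = 4 makes the two exponents coincide, reproducing the trichotomy stated in the text.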
A last word
The cosmological constant is just as constant as Hubble’s constant.

Rafael D. Sorkin (Perimeter Institute and Syracuse University)
(Submitted on 9 Oct 2007)
