A Correlation Between the Higgs Mass and Dark Matter

Depending on the value of the Higgs mass, the Standard Model acquires an unstable region at large Higgs field values due to RG running of couplings, which we evaluate at 2-loop order. For currently favored values of the Higgs mass, this renders the electroweak vacuum only metastable, with a long lifetime. We argue on statistical grounds that the Higgs field would be highly unlikely to begin in the small-field metastable region in the early universe, and thus some new physics should enter in the energy range of order, or lower than, the instability scale to remove the large-field unstable region. We assume that Peccei-Quinn (PQ) dynamics enters to solve the strong CP problem and, for a PQ-scale in this energy range, may also remove the unstable region. We allow the PQ-scale to scan and argue, again on statistical grounds, that its value in our universe should be of order the instability scale, rather than (significantly) lower. Since the Higgs mass determines the instability scale, which is argued to set the PQ-scale, and since the PQ-scale determines the axion properties, including its dark matter abundance, we are led to a correlation between the Higgs mass and the abundance of dark matter. We find the correlation to be in good agreement with current data.


1. Introduction
Recent LHC results are consistent with the predictions of the Standard Model, including the presence of a new boson that appears to be the Higgs particle with a mass m_H ≈ 125-126 GeV [1,2] (more recent measurements are summarized in [3]). With the Higgs at this mass, the Standard Model is well behaved up to very high energies if we evolve its couplings under the renormalization group (RG) equations. By no means does this imply that the Standard Model will be valid at these very high energies; in fact there are good phenomenological reasons, such as dark matter, the strong CP problem, baryogenesis, inflation, and the hierarchy problem, to think it will be replaced by new physics at much lower energies, say O(TeV). But it is logically possible, albeit unlikely, that the Standard Model, or at least the Higgs sector, will persist to these very high energies and that the explanation of these phenomena will be connected to physics at these high, or even higher, energy scales.
So at what energy scale must the Standard Model break down? Obviously new physics must enter by the Planck scale M_Pl, where quantum gravity requires the introduction of new degrees of freedom. However, the RG running of the Higgs self-coupling λ can dictate the need for new physics at lower energies, depending on the starting value of λ. The Higgs mass is related to the self-coupling by m_H = √(2λ) v_EW, where the Higgs VEV is v_EW ≈ 246 GeV. For moderate to high values of the Higgs mass, the initial value of λ, defined at energies of order the electroweak scale, is large enough that it never passes through zero upon RG evolution. On the other hand, for small enough values of the Higgs mass, the self-coupling passes through zero at a sub-Planckian energy, which we denote by E_*, primarily due to the negative contribution to the beta function from the top quark, and the potential acquires an unstable region at large field values [4,5]. The latter occurs for a light Higgs, as has been observed. One finds that this renders the electroweak vacuum only metastable, with a long lifetime. However, we will argue in this paper that it is highly unlikely for the Higgs field in the early universe to begin in the metastable region, as that would require relatively small field values as initial conditions. Instead it would be much more likely to begin at larger field values, placing it in the unstable region. Hence, the energy scale E_* sets the maximum energy scale for new physics beyond the Standard Model to enter.
There are many possible choices for the new physics. One appealing possibility is supersymmetry, which alters the running of the Higgs self-coupling due to the presence of many new degrees of freedom, likely entering at much lower energies, conceivably O(TeV). In addition to possibly stabilizing the Higgs potential, supersymmetry can also alleviate the hierarchy problem, improve unification of gauge couplings, and fit beautifully into fundamental physics such as string theory. So it is quite appealing from several perspectives. It is conceivable, however, that even if supersymmetry exists in nature, it is spontaneously broken at very high energies, and in such a scenario we would be forced to consider other possible means to stabilize the Higgs potential.
One intriguing possibility that we examine in this paper is to utilize dynamics associated with the solution of the strong CP problem: the problem that the CP violating term ∼ θ ε^{μναβ} G^a_{μν} G^a_{αβ} in the QCD Lagrangian is experimentally constrained to have coefficient |θ| ≲ 10^{−10}, which is highly unnatural. The leading solution involves new Peccei-Quinn (PQ) dynamics [7], involving a new complex scalar field φ and a new global U(1) symmetry that is spontaneously broken at some energy scale f_PQ. This leads to a new light scalar field known as the axion [8,9]. Since it is bosonic, the φ field adds a positive contribution to the effective coupling λ for the Higgs, potentially removing the unstable region, depending on the scale f_PQ. This elegant mechanism to remove the unstable region was included in the very interesting reference [10], where this and other mechanisms were discussed, and was a source of motivation for the present work (also related are [11-13]).
In the present paper, we would like to take this elegant mechanism for vacuum stability and push it forward in several respects. Firstly, as already mentioned, we will argue on statistical grounds why the metastable vacuum requires stabilization. Secondly, we will allow the PQ-scale to scan and argue, again on statistical grounds, why it should be of order the instability scale E_*, rather than orders of magnitude lower. Finally, we will furnish a correlation between the Higgs mass and the axion dark matter abundance and use the latest LHC [1,2] and cosmological data [6] to examine the validity of this proposal. The outcome of this series of arguments and computations is presented in Figure 1, which is the primary result of this work.
The outline of our paper is as follows. In Section 2, we examine the running of the Standard Model couplings at 2-loop order. In Section 3, we examine the metastability of the Standard Model vacuum and argue that it is statistically unfavorable for the Higgs to begin in this region. In Section 4, we include Peccei-Quinn dynamics to remove the Higgs instability and argue why f_PQ ∼ E_*. In Section 5, we relate the PQ-scale to the axion dark matter abundance, which furnishes a correlation between the Higgs mass and the abundance of dark matter. Finally, in Section 6, we compare the correlation to data and discuss our results.

Figure 1: The dashed-blue curves are for m_t = 173.1 ± 0.7 GeV, with the upper value on the right and the lower value on the left. The red vertical lines indicate the measured Higgs mass range m_H = 125.7 ± 0.6 GeV from combining ATLAS and CMS data [1,2]. The green horizontal lines indicate the range of the measured dark matter to baryon density ratio ζ = Ω_DM/Ω_B, where Ω_DM = 0.229 ± 0.015 and Ω_B = 0.0458 ± 0.0016, from WMAP7 data [6].

2. Standard Model RG Evolution
We begin with a reminder of the structure of the Higgs sector of the Standard Model. The Higgs field Φ is a complex doublet with Lagrangian

ℒ = |∂_μ Φ|² + m² |Φ|² − λ |Φ|⁴.

In the unitary gauge, we expand around the VEV as Φ = (0, v_EW + h)/√2, where in our convention v_EW ≈ 246 GeV. The associated Higgs mass is m_H = √(2λ) v_EW in terms of the starting value of λ, normally defined around the Z boson mass. At higher energies, the self-coupling undergoes RG evolution due to vacuum fluctuations from self-interaction, fermion interactions, and gauge interactions. Defining β_λ ≡ dλ/dt with t = ln(μ/μ_0), the associated 1-loop beta function (suppressing external leg corrections for now) is

16π² β_λ = 24λ² + 12λ y_t² − 6 y_t⁴ − 3λ (3g² + g′²) + (3/8)(2g⁴ + (g² + g′²)²),

where the only fermion Yukawa coupling we track is that of the top quark, y_t, since it is by far the largest. For sufficiently large Higgs mass, the positive self-interaction term ∼ +λ² is large enough to keep the beta function positive, or only slightly negative, so that λ avoids running negative at sub-Planckian energies. For sufficiently small Higgs mass, the negative top quark contribution ∼ −y_t⁴ can dominate and cause the beta function to go negative, in turn causing λ to pass through zero at a sub-Planckian energy, which we denote by E_*. The top quark Yukawa coupling itself runs towards small values at high energies, with 1-loop beta function

16π² β_{y_t} = y_t ((9/2) y_t² − 8 g_s² − (9/4) g² − (17/12) g′²),

which is quite sensitive to the value of the strong coupling g_s. To compute the evolution of couplings and the quantity E_* = E_*(m_H, m_t, ...) accurately, we do the following: (i) starting with couplings defined at the Z mass, we perform proper pole matching and running up to the top mass, (ii) we include external leg corrections (and the associated wave function renormalization), (iii) we simultaneously solve the 5 beta function differential equations for the 5 important couplings λ, y_t, g′, g, g_s, and (iv) we include the full 2-loop beta functions for the Standard Model; these are presented in the Appendix (see [14,15] for more information).
In our numerics, we use particular values of the couplings g′, g, g_s derived from the best fit values. In our final analysis, we will allow for three different values of m_t = y_t v_EW/√2, namely, the central value and 1-sigma variations m_t = 173.1 ± 0.7 GeV, and we will explore a range of m_H = √(2λ) v_EW, with v_EW = 246.22 GeV. Performing the RG evolution leads to the energy dependent renormalized coupling λ(μ). A plot of λ(μ) is given in Figure 2 for three Higgs mass values, namely, m_H = 116 GeV (lower curve), m_H = 126 GeV (middle curve), and m_H = 130 GeV (upper curve), with the top mass fixed to the central value m_t = 173.1 GeV. This shows clearly that for the lighter Higgs masses the coupling passes through zero at a sub-Planckian energy scale E_* and then remains negative. Furthermore, since the coupling runs only logarithmically with energy, the value of E_* can change by orders of magnitude if the starting values of the couplings change by relatively small amounts. The domain μ > E_* involves a type of "attractive force" with negative potential energy density, as we now examine in more detail.
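To make the mechanics concrete, here is a minimal numerical sketch of the 1-loop truncation of this system (the paper's actual analysis is 2-loop with pole matching). The MS-bar boundary values at μ = m_t are approximate figures assumed for illustration, and the resulting E_* should only be read as an order-of-magnitude, 1-loop estimate, which sits below the 2-loop value.

```python
import math

# 1-loop SM beta functions for x = (lam, yt, g, g', gs), t = ln(mu/mt).
def betas(x):
    lam, yt, g, gp, gs = x
    k = 1.0 / (16 * math.pi ** 2)
    b_lam = k * (24 * lam**2 + 12 * lam * yt**2 - 6 * yt**4
                 - 3 * lam * (3 * g**2 + gp**2)
                 + (3 / 8) * (2 * g**4 + (g**2 + gp**2) ** 2))
    b_yt = k * yt * (4.5 * yt**2 - 8 * gs**2 - 2.25 * g**2 - (17 / 12) * gp**2)
    return (b_lam, b_yt, k * (-19 / 6) * g**3, k * (41 / 6) * gp**3, k * (-7) * gs**3)

def rk4_step(x, h):
    shift = lambda a, b, c: tuple(ai + c * bi for ai, bi in zip(a, b))
    k1 = betas(x); k2 = betas(shift(x, k1, h / 2))
    k3 = betas(shift(x, k2, h / 2)); k4 = betas(shift(x, k3, h))
    return tuple(xi + (h / 6) * (a + 2 * b + 2 * c + d)
                 for xi, a, b, c, d in zip(x, k1, k2, k3, k4))

mt = 173.1                                 # GeV
x = (0.127, 0.937, 0.648, 0.358, 1.167)    # approximate MS-bar values at mt
t, h = 0.0, 0.01
while x[0] > 0 and t < math.log(1e19 / mt):   # run up until lambda crosses zero
    x = rk4_step(x, h)
    t += h
E_star = mt * math.exp(t)
print(f"1-loop estimate of the instability scale: E_* ~ {E_star:.2g} GeV")
```

The same loop, with the full 2-loop beta functions of the Appendix and proper threshold matching, reproduces the analysis described above.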

3. Metastability and Probability
If we think of the field value h as being the typical energy pushed into a scattering process at energy μ, then we can translate the RG evolution of the couplings into an effective potential. Using λ(μ) and replacing μ → h, we obtain the (RG improved) effective potential at high energies (h ≫ v_EW) (see [16] for a precise analysis):

V_eff(h) ≈ (1/4) λ(h) G(h)⁴ h⁴,

where the wave function renormalization factor is given in terms of the anomalous dimension γ by G(t) = exp(−∫₀ᵗ γ(t′) dt′), and we replace t → ln(h/μ_0). Hence for a Higgs mass in the range observed by the LHC, the effective potential V_eff goes negative at a field value h ≈ E_* that is several orders of magnitude below the Planck scale, as can be deduced from the behavior of λ(μ) with m_H = 126 GeV in Figure 2.
We could plot V_eff(h) directly; however the factor of h⁴ makes it vary by many orders of magnitude as we explore a large field range. Instead a schematic of the resulting potential will be more illuminating for the present discussion in order to highlight the important features, as given in Figure 3. The plot is not drawn to scale; the 3 energy scales satisfy the hierarchy v_EW ≪ E_* ≪ M_Pl for a Higgs mass as indicated by LHC data, m_H ∼ 125-126 GeV. Note that the local maximum in the potential occurs at a field value that is necessarily very close to E_* (only slightly smaller), and so we shall discuss these 2 field values interchangeably.
In this situation, the electroweak vacuum is only metastable. Its quantum mechanical tunneling rate can be estimated by Euclideanizing the action and computing the associated bounce action S_0. This leads to the following probability of decaying in time T_U through a bubble of size R [17]:

p ∼ (T_U/R)⁴ e^{−S_0}, with S_0 = 8π²/(3|λ(μ = 1/R)|).

The computation of the rate is rather involved, and we shall not pursue the details here. Suffice it to say that, for the central values of the Higgs mass and top mass from LHC data, it is found that the lifetime of the electroweak vacuum is longer than the present age of the universe [18,19].
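Plugging in illustrative numbers shows why the suppression is so strong; the value of λ at the bounce scale and the bubble size below are rough assumptions for illustration, not the detailed computation of [18,19].

```python
import math

# p ~ (T_U / R)^4 * exp(-S0), with bounce action S0 = 8 pi^2 / (3 |lambda|).
lam = -0.01                       # assumed lambda near the bounce scale
mu = 1e10                         # GeV; bubble size R ~ 1/mu
T_U = 4.3e17 * 1.52e24            # age of universe: seconds -> GeV^-1

S0 = 8 * math.pi ** 2 / (3 * abs(lam))
log10_p = 4 * math.log10(T_U * mu) - S0 / math.log(10)
print(f"S0 ~ {S0:.0f}, log10(p) ~ {log10_p:.0f}")   # hugely negative -> metastable
```

The large bounce action overwhelms the enormous spacetime-volume prefactor, which is why the lifetime vastly exceeds the age of the universe.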

Advances in High Energy Physics
Figure 3: Schematic of the effective potential V_eff as a function of the Higgs field h. This is not drawn to scale; for a Higgs mass in the range indicated by LHC data, the hierarchy is v_EW ≪ E_* ≪ M_Pl, where each of these 3 energy scales is separated by several orders of magnitude.
It is conceivable that a metastable electroweak vacuum is an acceptable situation. However, here we would like to present an argument that such a situation is statistically disfavored. We imagine that, in the very early universe, the Higgs field was randomly distributed in space. For instance, during cosmological inflation the Higgs field could have been frozen at some value as the universe rapidly expands (if there is high scale inflation) until after inflation, when the field will oscillate; its initial value could plausibly have been random and uniformly distributed. If this is the case, then what is the probability that the Higgs field began in the metastable region h ≲ E_*, rather than the unstable region h ≳ E_*? The answer depends on the allowed domain the Higgs can explore. Here we estimate the allowed domain to be Planckian, that is, 0 < h < M_Pl, but our argument only depends on the upper value being much larger than E_*. Naively, this would lead to a probability ∼ E_*/M_Pl; however we should recall that the Higgs is a complex doublet, composed of 4 real scalars, and each one would need to satisfy h ≲ E_* in the early universe to be in the metastable region. Hence, we estimate the probability as

Prob(Higgs begins in metastable region) ∼ (E_*/M_Pl)⁴.

For example, if we describe the physics in Coulomb gauge, then we have both the modulus of the Higgs field h and angular modes θ_a, with a = 1, 2, 3. From this point of view, it seems most reasonable to take the probability density weighted by an appropriate Jacobian factor associated with transforming from Cartesian field coordinates to such radial plus angular coordinates. This Jacobian scales as ∼ h³ and so again will lead to the probability growing like the fourth power of the energy. Another way to put it is to say that there is much more field space available at large Higgs field values than at small values. This seems reasonable, especially if one imagines initial conditions laid down by inflation.
The number of states in the Hilbert space whose typical Higgs value is large is much greater than the number of states in the Hilbert space whose typical Higgs value is small. One might reach a different perspective in, say, unitary gauge, where the angular modes appear as the longitudinal modes of the W± and Z bosons. However, the unitary gauge is not a useful way of describing physics above the electroweak scale. So we consider the above point of view with multiple scalars to be more physically reasonable.
So, for instance, for m_H ≈ 125.5 GeV and m_t = 173.1 GeV, we have E_* ∼ 10^11 GeV, leading to a probability ∼ (10^11 GeV / 10^19 GeV)⁴ = 10^{−32}, which indicates that the chance of randomly landing in the metastable region in the early universe is exceedingly small. Instead it is far more likely to land in the unstable region indicated in Figure 3. Here the effective potential is negative, leading to a catastrophic runaway instability, perhaps to a new VEV that is close to Planckian. This would in turn lead to a plethora of problems for the formation of complex structures, so we can safely assume such a regime is uninhabitable and irrelevant. This leads us to examine a scenario in which new physics enters and removes this problem.
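The arithmetic behind this estimate, using the scales quoted above, is a one-liner:

```python
E_star, M_Pl = 1e11, 1e19      # GeV, the scales quoted above
prob = (E_star / M_Pl) ** 4    # all 4 real Higgs components must start small
print(prob)                    # ~ 1e-32
```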

4. Peccei-Quinn Dynamics and Distribution
One of the phenomenological reasons for new physics beyond the Standard Model is the fine tuning of the CP violating term in the QCD Lagrangian. The following dimension 4 operator is gauge invariant and Lorentz invariant and should be included in the QCD Lagrangian with a dimensionless coefficient θ:

Δℒ = (θ g_s² / 32π²) G^a_{μν} G̃^{a μν}, with G̃^{a μν} ≡ (1/2) ε^{μναβ} G^a_{αβ}.

From bounds on the electric dipole moment of the neutron, this term is experimentally constrained to satisfy |θ| ≲ 10^{−10}, which requires extreme fine tuning. There appears to be no statistical explanation of this fine tuning if it were purely random, since a small but moderate value of θ would have very little consequence for the formation of complex structures. Instead this requires a dynamical explanation, which we take to be due to a new global symmetry, known as a Peccei-Quinn (PQ) symmetry [7], involving a new heavy complex scalar field φ. This field undergoes spontaneous symmetry breaking at a scale f_PQ, and in the resulting effective field theory, the quantity θ is essentially promoted to the angular degree of freedom in φ = ρ e^{iθ}/√2, a light scalar field known as the axion [8,9]. The zero temperature Lagrangian for φ includes a symmetry breaking potential for φ and a QCD instanton generated sinusoidal potential for θ:

ℒ = |∂φ|² − λ_ρ (|φ|² − f_PQ²/2)² − Λ⁴ (1 − cos θ).

If we expand around the φ-field's VEV at low energies, we see that the angular component of φ, the axion, is very light, with a mass m_a = Λ²/f_PQ (where Λ is of order the QCD scale), and θ will be dynamically driven to zero, solving the strong CP problem. Since we will require f_PQ to be a very high energy scale (compared to, say, the electroweak scale), the radial mode ρ of φ is very heavy, with mass ∼ f_PQ. Hence at low energies, the radial mode is essentially irrelevant; it can be integrated out and, apart from a possible renormalization of the Standard Model couplings, can be ignored. However, at energies approaching f_PQ, the radial mode cannot be ignored.
The φ field couples to various Standard Model particles in any realization of the PQ symmetry, including the Higgs field through interactions of the type ∼ λ_hρ Φ†Φ φ†φ (where λ_hρ is a dimensionless coupling). This causes an alteration in the effective coupling λ at an energy scale of order m_ρ, where the new field becomes dynamical.
Since φ is bosonic, it generally leads to a positive increase in λ, either through tree-level corrections or through loop corrections: as long as λ_hρ is not very small, it makes a significant and rapid change in the β-function for λ of the form Δβ_λ ∼ λ_hρ². This is because, in the cases of interest, λ is otherwise very small in the vicinity of this effect turning on, as seen in Figure 2. So even a small positive change in its β-function can cause a rapid change and stabilization of the effective potential, leading to a threshold boost in λ at the scale at which this new degree of freedom becomes active, a point that was included nicely in [10]. This conclusion can only be avoided by an atypically tiny coupling between the Higgs and the PQ-field.
In the most common case, then, this leads to a reduction or removal of the unstable region, depending on the scale f_PQ relative to the instability scale E_*, assuming O(1) couplings between the Peccei-Quinn dynamics and the Higgs sector. The more precise statement is that the mass of the radial field is m_ρ = √λ_ρ f_PQ. This really sets the scale at which a correction to the β-function becomes active.
We obviously require f_PQ to be in the range m_ρ = √λ_ρ f_PQ ≲ E_* in order for the new physics to prevent the effective potential V_eff(h) from having a large negative regime (note that a small negative dip is statistically allowable for h, but not a large-field dip). But since E_* is very large, this leaves several orders of magnitude of uncertainty in the value of f_PQ. In other words, it would be sufficient for f_PQ ≪ E_* in order to remove the unstable region. However, here we would like to present a statistical argument that f_PQ ∼ E_* is much more likely. We shall take λ_ρ = O(1) in the following discussion to illustrate the idea, though it is simple to generalize the argument. There are indications that the PQ-scale may be associated with GUT or Planckian physics, and indeed typical realizations of the QCD-axion in string theory suggest that f_PQ is much closer to the GUT or Planck scale M_Pl [20], rather than a more intermediate scale, such as ∼10^11 GeV. In some landscape, we can imagine f_PQ scanning over different values. For lack of more detailed knowledge, we can imagine that it scans on, say, a uniform distribution in the range 0 < f_PQ < M_Pl. If this is the case, then f_PQ will be as small as is required but would not be significantly smaller, as that would be even rarer in the landscape. By placing f_PQ on a uniform distribution, the probability that it will be small enough to alleviate the instability is roughly

Prob(f_PQ ≲ E_*) ∼ E_*/M_Pl,

where almost all of the phase space pushes f_PQ ∼ E_*, rather than orders of magnitude lower. It is important to note that, for E_* ≪ M_Pl, as arises from the measured Standard Model couplings, the probability in (11) is small but still much greater than the probability in (7). Hence it is much more likely to have an atypically small PQ-scale and no constraint on the initial Higgs field than an atypically small Higgs field and no constraint on the PQ-scale. We now examine the cosmological consequences of f_PQ ∼ E_*.
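A toy Monte Carlo makes the scanning argument explicit: conditioning a uniform prior on f_PQ < E_* leaves a distribution with no preference for small values, so the viable draws pile up within a factor of a few of E_* (the scales are illustrative).

```python
import random

M_Pl, E_star = 1e19, 1e11
random.seed(0)

# A uniform draw on (0, M_Pl) conditioned on f < E_star is simply a uniform
# draw on (0, E_star), so we sample the conditional distribution directly.
accepted = [random.uniform(0, E_star) for _ in range(100_000)]
frac_decade = sum(f > E_star / 10 for f in accepted) / len(accepted)
print(f"P(f_PQ < E_star) = {E_star / M_Pl:.0e}")
print(f"fraction of viable draws within a decade of E_star: {frac_decade:.2f}")
```

About 90% of the viable draws lie within one decade of E_*, which is the sense in which "almost all of the phase space pushes f_PQ ∼ E_*."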

5. Axion Dark Matter
The light scalar axion particle is neutral, is very stable, and acts as a form of dark matter. The computation of its abundance is nontrivial and has been studied in many papers, including [21,22]. The final result for the axion abundance is essentially controlled by the scale f_PQ. Its value is normally measured by the quantity Ω_DM ≡ ρ_DM/ρ_crit, where ρ_DM is the energy density in axion dark matter and ρ_crit is the so-called "critical density" of the universe, defined through the Friedmann equation as ρ_crit = 3H²/(8πG). Tracking the nontrivial temperature dependence of the axion potential and redshifting to late times leads to an expression for Ω_DM of the schematic form

Ω_DM h² ∼ ξ ⟨θ_i²⟩ (f_PQ / 10^12 GeV)^{7/6},

where the Hubble parameter is represented as H = 100 h km/s/Mpc and the CMB temperature T enters the full expression through the redshifting. The coefficient ξ is an O(0.1-1) fudge factor due to uncertainty in the detailed QCD effects that set the axion mass and its temperature dependence. In our numerics, we have taken ξ = 0.5 as a representative value. It is quite possible that the true value may be smaller than this, such as ξ ≈ 0.15 as taken in [22], but other effects, including contributions from string-axions, can potentially push the true value to be larger [23]. Also, θ_i is the initial angle in the early universe (which later redshifts towards zero, solving the strong CP problem). Here we take ⟨θ_i²⟩ = π²/3, which comes from allowing θ_i to be uniformly distributed in the domain −π < θ_i < π and then spatial averaging. Another interesting possibility arises if inflation occurs after Peccei-Quinn symmetry breaking, allowing θ_i to be homogeneous and possibly small, as studied in [24,25]. The latter scenario is subject to various constraints, including bounds on isocurvature fluctuations [26], and will not be our focus here.
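The scaling with f_PQ can be made concrete with a short function; the overall normalization here is an assumption on our part (the O(1) coefficient of the schematic formula is absorbed into the prefactor), so only the trend with f_PQ should be trusted.

```python
import math

def omega_dm_h2(f_pq_gev, xi=0.5, theta2=math.pi ** 2 / 3):
    # Schematic misalignment abundance ~ xi * <theta_i^2> * (f_PQ/1e12 GeV)^(7/6);
    # xi = 0.5 and the unit overall normalization are illustrative assumptions.
    return xi * theta2 * (f_pq_gev / 1e12) ** (7 / 6)

for f in (1e10, 1e11, 1e12):
    print(f"f_PQ = {f:.0e} GeV  ->  Omega_DM h^2 ~ {omega_dm_h2(f):.3f}")
```

The steep f_PQ^{7/6} growth is what converts the prediction f_PQ ∼ E_*, with E_* set by m_H, into a definite abundance.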
The quantity Ω_DM is slightly inconvenient for expressing the main results, for the following two reasons: (i) in a flat universe (as we are assuming) it is bounded to satisfy Ω_DM < 1, which obscures the fact that a priori the dark matter abundance could be enormous, and (ii) it is manifestly time dependent (due to h and T), which requires some choice of physical time to compare different universes. To avoid these complications, we prefer to compute the dark matter density in units of the baryon density. Fixing the baryon to photon ratio at the measured value fixes the baryon density, with Ω_B,0 ≈ 0.046 from observation. From this, we define the (unbounded and time independent) measure of dark matter as

ζ ≡ ρ_DM/ρ_B = Ω_DM/Ω_B.

Observations show that the dark matter density parameter is nonzero in our universe, although its particular particle properties (whether axion or WIMP, etc.) are still unknown. The observational evidence for dark matter comes from a range of sources, including the CMB, lensing, galaxy rotation and clustering, structure formation, and baryon acoustic oscillations, and is very compelling; for example, see [27-32], and its abundance has been measured quite accurately. Hence our prediction for the value of ζ (coming from setting f_PQ ∼ E_*, with E_* determined by m_H) can be compared to observation; see Figure 1.
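Numerically, the measured ratio ζ and its uncertainty follow from the quoted WMAP7 values by standard propagation of uncorrelated errors:

```python
import math

o_dm, s_dm = 0.229, 0.015     # Omega_DM and its 1-sigma error [6]
o_b, s_b = 0.0458, 0.0016     # Omega_B and its 1-sigma error [6]

zeta = o_dm / o_b
s_zeta = zeta * math.sqrt((s_dm / o_dm) ** 2 + (s_b / o_b) ** 2)
print(f"zeta = {zeta:.2f} +/- {s_zeta:.2f}")   # ~ 5.0 +/- 0.4
```

This is the green horizontal band shown in Figure 1.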

6. Results and Discussion
6.1. Comparison with Data. Let us summarize our argument: holding other parameters fixed, the Higgs mass m_H determines the instability scale E_*, which we evaluate at 2-loop order. We have argued on statistical grounds in Section 3 why the scale of new physics should not be larger than E_* and in Section 4 why the scale of new physics should not be (significantly) smaller than E_*, leading to f_PQ ∼ E_*. Since f_PQ determines the dark matter abundance ζ in (15), this establishes a correlation between m_H and ζ. The result was displayed earlier in the paper in Figure 1. The solid-blue curve is for the central value of the top mass m_t = 173.1 GeV, and the dashed-blue curves are for m_t = 173.1 ± 0.7 GeV. We compare this prediction to the latest LHC and cosmological data. Firstly, we have taken the ATLAS value m_H = 126.0 ± 0.4 (stat) ± 0.4 (syst) GeV [1] and the CMS value m_H = 125.3 ± 0.4 (stat) ± 0.5 (syst) GeV [2] and produced our own combined value of m_H = 125.7 ± 0.6 GeV, which is indicated by the red vertical lines. Secondly, we have taken the WMAP7 data, plus other observations, for the dark matter abundance Ω_DM = 0.229 ± 0.015 and the baryon abundance Ω_B = 0.0458 ± 0.0016 [6] and combined them to obtain ζ, which is indicated by the green horizontal lines. The predicted correlation between the Higgs mass m_H and the dark matter abundance ζ in Figure 1 displays good agreement with current data.
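As a cross-check of the combined Higgs mass, a naive inverse-variance average of the two quoted measurements (stat and syst added in quadrature per experiment, correlations ignored) reproduces the central value; the ±0.6 GeV used above is a more conservative error than this naive combination gives.

```python
import math

# (central value, total error) with stat and syst combined in quadrature
atlas = (126.0, math.hypot(0.4, 0.4))   # [1]
cms = (125.3, math.hypot(0.4, 0.5))     # [2]

w = [1 / s ** 2 for _, s in (atlas, cms)]
m_comb = (w[0] * atlas[0] + w[1] * cms[0]) / sum(w)
s_comb = 1 / math.sqrt(sum(w))
print(f"combined m_H = {m_comb:.1f} +/- {s_comb:.1f} GeV")
```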

6.2. Precision and Uncertainties.
Improved accuracy in testing this scenario will come from several experimental directions. This includes measuring the Higgs mass m_H to better precision, as well as the top mass m_t and the strong coupling α_s (which we set to the central value α_s = 0.1184), while the current accuracy in ζ is quite good. A theoretical uncertainty surrounds the specific choice of f_PQ relative to E_*. Here we have taken f_PQ ∼ E_*, due to a statistical argument that allowed the scale f_PQ to scan, leading to the conclusion that it should be as small as required, but no smaller, an argument that is similar to the argument for the magnitude of the cosmological constant [33]. One might argue that a factor of a few smaller may be required to properly alleviate the instability [10], which would lead to a slight lowering of the blue curves in Figure 1, but a small negative dip is tolerable statistically, which makes f_PQ ∼ E_* plausible.
Related to this uncertainty is the particular prior distribution for f_PQ, which we assumed to be uniform. The expectation of a flat distribution is plausible for the cosmological constant if one allows both positive and negative values, making zero not special. In the case of f_PQ, it is necessarily positive, so f_PQ ∼ 0 is arguably a special part of the distribution. This may render the true distribution nonuniform. However, as long as the distribution does not vanish in the f_PQ → 0 limit faster than (f_PQ)³, then our arguments go through. In other words, the probability of an atypically small f_PQ and no constraint on the initial Higgs field would still be larger than the probability of an atypically small Higgs field and no constraint on f_PQ. Also, one may question whether the uniform distribution assumed for the initial values of each of the 4 components of the Higgs field is reasonable. Since we have a sufficiently limited understanding of the early universe, including a measure problem for inflation, any such assumptions could be called into question. However, since the metastable region occupies such a tiny fraction of the volume of field space, roughly ∼(10^{−8})⁴ = 10^{−32} or so, an alteration in prior probabilities would need to be quite drastic to change the conclusions.
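The (f_PQ)³ criterion can be checked directly. For a prior p(f) ∝ f^n on (0, M_Pl), integrating gives P(f_PQ < E_*) = (E_*/M_Pl)^(n+1), so the PQ route stays more probable than the small-initial-Higgs route, ∼(E_*/M_Pl)⁴, for any n < 3 (with n = 3 the marginal case):

```python
ratio = 1e11 / 1e19          # E_*/M_Pl for the illustrative scales above
p_higgs = ratio ** 4         # probability of a small initial Higgs field

for n in (0, 1, 2, 3):
    p_pq = ratio ** (n + 1)  # P(f_PQ < E_*) for prior p(f) ~ f^n
    verdict = "more likely" if p_pq > p_higgs else "marginal or disfavored"
    print(f"n = {n}: P = {p_pq:.0e} -> PQ route {verdict}")
```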

6.3. Outlook. An important test of this scenario involves unravelling the nature of dark matter directly. The QCD-axion is actively being searched for in a range of experiments, including ADMX [34], with no positive detection so far. But the regime of parameter space in which the axion can be the dark matter will be explored in coming years. If an axion is discovered, it will be important to unravel its particular properties, including its couplings to other fields. An explicit embedding of the discovered version (popular models include KSVZ [35,36] and DFSZ [37,38]) into the Higgs stability analysis would be important. Searches such as ADMX rely upon the axion being all or most of the dark matter, so a related verification would be the associated lack of discovery of WIMPs, or other dark matter candidates, in direct or indirect searches; at the least, these forms of dark matter should comprise a relatively small fraction of the total.
The discovery of the Higgs boson at the LHC is a final confirmation of the Standard Model, but it leaves the scale at which the theory breaks down unclear. Here we have investigated the possibility that the theory, or at least the Higgs sector, remains intact up to the scale at which the Higgs potential runs negative, which would lead to a runaway instability at large field values. By introducing Peccei-Quinn dynamics, we can potentially solve the strong CP problem, remove the unstable region, and obtain roughly the correct amount of dark matter, due to a collection of statistical arguments that sets f_PQ ∼ E_*. This is remarkably minimal but does still leave questions regarding unification, baryogenesis, inflation, the hierarchy problem, and so forth. It is conceivable that unification can still occur at higher energies through the introduction of new degrees of freedom, that the physics of baryogenesis and inflation is associated with such high scale physics [39,40], and that the hierarchy problem has no dynamical explanation. Alternatively, the LHC or other experiments may discover new degrees of freedom at much lower energies, which would radically alter this picture. Currently all such issues remain largely unclear, requiring much more guidance from experiment and observation.