Exploring new models in all detail with SARAH

I give an overview of the features the Mathematica package SARAH provides to study new models. In general, SARAH can handle a wide range of models beyond the MSSM coming with additional chiral superfields, extra gauge groups, or distinctive features like Dirac gaugino masses. All of these models can be implemented in a compact form in SARAH and are easy to use: SARAH extracts all analytical properties of the given model like two-loop renormalization group equations, tadpole equations, mass matrices and vertices. Also one- and two-loop corrections to tadpoles and self-energies can be obtained. For numerical calculations SARAH can be interfaced with other tools to get the mass spectrum, to check flavour or dark matter constraints, to test the vacuum stability, or to perform collider studies. In particular, the interface to SPheno allows a precise prediction of the Higgs mass in a given model, comparable to the MSSM precision, by incorporating the important two-loop corrections. I show in detail, using the example of the B-L-SSM, how SARAH together with SPheno, HiggsBounds/HiggsSignals, FlavorKit, Vevacious, CalcHep, MicrOmegas, WHIZARD, and MadGraph can be used to study all phenomenological aspects of a model. Even though I concentrate in this manuscript on the analysis of supersymmetric models, most features are also available in the non-supersymmetric case.

However, the negative results from SUSY searches at the LHC as well as the measured Higgs mass of about 125 GeV [29,30] put large pressure on the simplest scenarios. Wide regions of the parameter space which had been considered as natural before the LHC started have been ruled out. This has caused increasing interest in non-minimal SUSY models. Models beyond the MSSM can provide many advantages compared to the MSSM, not only by addressing the two issues mentioned so far. A more complete list of good reasons to take a look at extensions of the MSSM is: • Naturalness: the Higgs mass in SUSY is not a free parameter like in the SM. In the MSSM the tree-level mass is bounded from above by m_h^{tree} < m_Z. Thus, about one third of the Higgs mass has to be generated radiatively to explain the observation. Heavy SUSY masses are needed to obtain sufficiently large loop corrections, i.e. a soft version of the hierarchy problem appears again. The need for large loop corrections is significantly softened if F- or D-terms are present which already push the tree-level mass [31][32][33][34][35].
• SUSY searches: the negative results from all SUSY searches at the LHC have put impressive limits on the sparticle masses. However, the different searches are based on certain assumptions like a stable, neutral and colourless lightest SUSY particle (LSP), a sufficiently large mass splitting between the SUSY states, and so on. As soon as these conditions no longer hold, as in models with broken R-parity, the limits become much weaker [36][37][38]. Also in scenarios with compressed spectra, where the SUSY states are nearly degenerate, the strong limits often do not apply [39][40][41][42][43][44][45][46][47][48][49][50].
• Strong CP-problem: the strong CP problem remains an open question not only in the SM but also in the MSSM. In principle, the same solution exists for both models to explain the smallness of the Θ term in QCD: the presence of a broken Peccei-Quinn (PQ) symmetry [64]. In their supersymmetric version, PQ models predict not only an axion but also an axino, which could be another DM candidate [65][66][67][68][69][70][71]. In general, the phenomenological aspects of axion-axino models are often even richer, in particular if the DFSZ version is considered [72,73]: the minimal, self-consistent supersymmetric DFSZ-axion model needs in total three additional superfields compared to the MSSM [74].
• µ-problem: the superpotential of the MSSM involves one parameter with the dimension of a mass: the µ-term. This term is not protected by any symmetry, i.e. its natural values would be either exactly 0 or O(M_GUT). However, both extreme values are ruled out by phenomenological considerations; the optimal size of this parameter is comparable to the electroweak scale. This can be obtained if the µ-term is actually not a fundamental parameter but is generated dynamically. For instance, in singlet extensions an effective µ-term appears as a consequence of SUSY breaking and is therefore naturally O(M_SUSY) [31,75].
• Top-down approach: starting with a GUT or string theory, it is not necessarily clear that only the gauge sector and particle content of the MSSM are present at the low scale. Realistic UV completions often come with much additional matter close to the SUSY scale. In many cases also additional neutral and even charged gauge bosons are predicted [76][77][78][79].
• R-symmetry: if one considers R-symmetric models, Majorana gaugino masses are forbidden.
To give masses to the gauginos in these models, a coupling to a chiral superfield in the adjoint representation is needed. This gives rise to Dirac masses for the gauginos, which are in agreement with the R-symmetry. Dirac gauginos are also attractive because they can weaken LHC search bounds [121][122][123] and flavour constraints [124][125][126].
Despite the large variety and flexibility of SUSY, many dedicated public computer tools like SoftSUSY [127][128][129], SPheno [130,131], Suspect [132], Isajet [133][134][135][136][137][138] or FeynHiggs [139,140] are restricted to the simplest realization of SUSY, the MSSM, or small extensions of it. Therefore, more generic tools are needed to allow the study of non-minimal SUSY models with the same precision as the MSSM. This precision is needed to confront also these models with the strong limits from SUSY searches, flavour observables, dark matter observations, and Higgs measurements. The most powerful tool in this direction is the Mathematica package SARAH [141][142][143][144][145]. SARAH has been optimized for an easy, fast and exhaustive study of non-minimal SUSY models. While the first version of SARAH focused on the derivation of the tree-level properties of a SUSY model, i.e. mass matrices and vertices, and on interfacing this information to Monte-Carlo (MC) tools, with the second version the calculation of one-loop self-energies as well as two-loop renormalization group equations (RGEs) was automated. With version 3, SARAH became the first 'spectrum-generator-generator': all analytical information derived by SARAH can be exported to Fortran code which provides a fully-fledged spectrum generator based on SPheno. This functionality has later been extended by the FlavorKit [146] interface which allows a modular implementation of new flavour observables based on the tool chain FeynArts/FormCalc-SARAH-SPheno. Also different methods to calculate the two-loop corrections to the Higgs states in a non-minimal model are available today with the SPheno modules generated by SARAH: the radiative contributions to the CP-even scalar masses at the two-loop level can be obtained either using the effective potential approach [147] based on the generic results given in Ref. [16], or via a fully diagrammatic calculation [148].
Both calculations provide Higgs masses with a precision which is otherwise only available for the MSSM. Beginning with SARAH 4, the package is no longer restricted to SUSY models but can also handle a general, renormalizable quantum field theory, providing nearly the same features as for SUSY models. Today, SARAH can be used for SUSY and non-SUSY models to write model files for CalcHep/CompHep [149,150], FeynArts/FormCalc [151,152], WHIZARD/O'Mega [153,154] as well as in the UFO format [155] which can be handled for instance by MadGraph 5 [156], GoSam [157], Herwig++ [158][159][160], and Sherpa [161][162][163]. The modules created by SARAH for SPheno calculate the full one-loop and partially two-loop corrected mass spectrum, branching ratios and decay widths of all states, and many flavour and precision observables. Also an easy link to HiggsBounds [164,165] and HiggsSignals [166] exists. Another possibility to get a tailor-made spectrum generator for a non-minimal SUSY model based on SARAH is the tool FlexibleSUSY [167]. Finally, SARAH can also produce model files for Vevacious [168]. The combination SARAH-SPheno-Vevacious provides the possibility to find the global minimum of the one-loop effective potential of a given model and parameter point.
The range of models covered by SARAH is very broad. SARAH and its different interfaces have been successfully used to study many different SUSY scenarios: singlet extensions with and without CP violation [169][170][171][172][173][174][175][176][177][178][179], triplet extensions [180,181], models with R-parity violation [182][183][184][185][186][187][188], different kinds of seesaw mechanisms [59,[61][62][63][189][190][191][192][193][194], models with extended gauge sectors at intermediate scales [195][196][197][198] or the SUSY scale [35,[199][200][201][202][203], models with Dirac gauginos [110,112,[204][205][206] or vector-like states [207], and even more exotic extensions [208][209][210][211]. In addition, SARAH can also be very useful for studies in the context of the MSSM which can't be done with any other public tool out of the box. That is for instance the case if new SUSY-breaking mechanisms are considered [212][213][214][215][216][217][218][219] or if the presence of charge- and colour-breaking minima is checked [220,221]. For the NMSSM, despite the presence of specialized tools like NMSSMTools [222], SoftSUSY [223] or NMSSMCalc [224], the SPheno version created by SARAH is the only code providing two-loop corrections beyond O(α_S(α_t + α_b)) which does not rely on MSSM approximations [225]. Also the full one-loop corrections to all SUSY states in the NMSSM have first been derived with SARAH [226]. This paper is organized as follows: in the next section an overview of the models supported by SARAH is given. In sec. 3, I'll discuss the possible analytical calculations which can be done with SARAH, and list the possible outputs of the derived information for further evaluation. The main part of this manuscript is a detailed example of how SARAH can be used to study all phenomenological aspects of a model. That's done in sections 4-8: in sec. 4 the implementation of the B-L-SSM in SARAH is described, in sec.
5 it is discussed how the model can be understood at the analytical level in Mathematica. The SPheno output with all its features is presented in sec. 6. In sec. 7 I'll show how other tools can be used together with SARAH and SPheno to study, for instance, the dark matter and collider phenomenology. In sec. 8 different possibilities to perform parameter scans are presented. I summarize in sec. 9. Throughout the paper and in the given examples I will focus mainly on SUSY models, but many statements apply one-to-one to non-SUSY models as well.

Input needed by SARAH to define a model
SARAH is optimized for the handling of a wide range of SUSY models. The basic idea of SARAH is to give the user the possibility to implement models in an easy, compact and straightforward way. Most tasks needed to obtain the Lagrangian are fully automated; it is sufficient to define just the fundamental properties of the model. The necessary input to completely define the gauge eigenstates with all their interactions is:

1. Global symmetries
2. Gauge symmetries
3. Chiral superfields
4. Superpotential

SARAH then automates many steps to derive the Lagrangian from this input:

1. All interactions of matter fermions and the F-terms are derived from the superpotential
2. All vector boson and gaugino interactions as well as D-terms are derived from gauge invariance
3. All gauge fixing terms are derived by demanding that scalar-vector mixing vanishes in the kinetic terms
4. All ghost interactions are derived from the gauge fixing terms
5. All soft-breaking masses for scalars and gauginos as well as the soft-breaking counterparts to the superpotential couplings are added automatically

Of course, the Lagrangian of the gauge eigenstates is not the final aim. Usually one is interested in the mass eigenstates after gauge symmetry breaking. To perform the necessary rotations to the new eigenstates, the user has to provide some more information:

1. Definition of the fields which get a vacuum expectation value (VEV) to break gauge symmetries
2. Definition of which vector bosons, scalars and fermions mix among each other
Using this information, all necessary redefinitions and field rotations are done by SARAH. Also the gauge fixing terms are derived for the new eigenstates and the ghost interactions are added. For all eigenstates plenty of information can be derived by SARAH, as explained in sec. 3. Before coming to that, I'll give more details on what kinds of models and features are supported by SARAH.

Supported models and features
As we have seen in the introduction, there are many possibilities to go beyond the widely studied MSSM. Each approach modifies one or the other sector of the model. In general, possible changes compared to the MSSM are: (i) using other global symmetries to extend the set of allowed couplings, (ii) adding chiral superfields, (iii) extending the gauge sector, (iv) giving VEVs to particles other than the Higgs doublets, (v) adding Dirac masses for gauginos, (vi) considering non-canonical terms like non-holomorphic soft SUSY-breaking interactions or Fayet-Iliopoulos D-terms. All of these directions can in principle be taken with SARAH, and I'll briefly discuss what is possible in the different sectors and which steps are performed by SARAH to get the Lagrangian. Of course, extending the gauge sector or adding Dirac masses for gauginos inevitably comes with an extended matter sector as well. Thus, often several new effects appear together; these combinations are also covered by SARAH.

Global symmetries
SARAH can handle an arbitrary number of global symmetries which are either Z_N or U(1) symmetries. Also a continuous R-symmetry U(1)_R is possible. Global symmetries are used in SARAH mainly for three different purposes. First, they help to constrain the allowed couplings in the superpotential. However, SARAH doesn't strictly forbid terms in the superpotential which violate a global symmetry; it only prints a warning to point out the potential contradiction. The reason is that such a term might be included on purpose, with the approximate symmetry explaining its tininess. Second, global symmetries can affect the soft-breaking terms written down by SARAH. SARAH always tries to generate the most general Lagrangian and includes also soft masses of the form m² φ_i φ*_j for two scalars φ_i, φ_j with identical charges. However, these terms are dropped if they are forbidden by a global symmetry. By the same consideration, Dirac gaugino mass terms are written down or not. Finally, global symmetries are crucial for the output of model files for MicrOmegas to calculate the relic density: for this output at least one unbroken discrete global symmetry must be present. By modifying the global symmetries one can already go beyond the MSSM without changing the particle content: choosing a Z_3 (Baryon triality) instead of R-parity [227][228][229][230][231], lepton-number violating terms would be allowed while the proton is still stable. SARAH comes not only with R-parity violating models based on Baryon triality, but also with a variant with Baryon number violation but conserved Lepton number.
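The charge bookkeeping behind such symmetry checks is simple; the following is a minimal sketch in Python with purely illustrative Z_3 charges (not the actual Baryon triality assignments, and not SARAH's internal data structures):

```python
# Toy sketch of the check behind SARAH's warning for symmetry-violating
# terms: a Z_N symmetry allows a term iff the charges of all involved
# superfields sum to zero modulo N.  Field names and Z_3 charges below
# are purely illustrative.

def conserves_zn(charges, n):
    """Return True if a term built from fields with the given Z_N
    charges is invariant, i.e. the charges sum to 0 mod N."""
    return sum(charges) % n == 0

# toy Z_3 charges for a few MSSM superfields
z3 = {"L": 1, "Hu": 2, "Q": 0, "d": 2, "u": 1}

# a lepton-number violating bilinear L.Hu is allowed by these charges ...
allowed_LHu = conserves_zn([z3["L"], z3["Hu"]], 3)
# ... while the baryon-number violating trilinear u.d.d is forbidden
allowed_udd = conserves_zn([z3["u"], z3["d"], z3["d"]], 3)
```

With these toy charges the lepton-number violating term passes the check while the baryon-number violating one fails, mirroring the Baryon triality situation described above.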

Gauge sector
Gauge groups The gauge sector of a SUSY model in SARAH is fixed by defining a set of vector superfields. SARAH is not restricted to three vector superfields like in the MSSM, but many more gauge groups can be defined. To improve the power in dealing with gauge groups, SARAH has linked routines from the Mathematica package Susyno [232]. SARAH together with Susyno takes care of all group-theoretical calculations: the Dynkin and Casimir invariants are calculated, and the needed representation matrices as well as Clebsch-Gordan coefficients are derived. This is not only done for U(1) and SU(N) gauge groups, but also SO(N), Sp(2N) and exceptional groups can be used. For all Abelian groups also a GUT normalization can be given. This factor usually comes from considerations about the embedding of a model in a larger symmetry group like SU(5) or SO(10). If a GUT normalization is defined for a group, it will be used in the calculation of the RGEs. The soft-breaking terms for a gaugino λ of a gauge group A are usually included as

L_{SB,λ} = 1/2 λ^a_A λ^a_A M_A + h.c.  (2.1)

I'm using here and in the following capital letters A, B to label the gauge groups and small letters a, b, c to label the generators, vector bosons and gauginos of a particular gauge group. The kinetic part of the vector superfield contributes

L_V = −1/4 F^a_{A,µν} F^{A,a,µν} + i λ̄^a_A σ̄^µ D_µ λ^a_A  (2.2)

The field strength tensor is defined as

F^a_{A,µν} = ∂_µ A^a_ν − ∂_ν A^a_µ + g_A f^{abc} A^b_µ A^c_ν  (2.3)

and the covariant derivative is

D_µ λ^a_A = ∂_µ λ^a_A + g_A f^{abc} A^b_µ λ^c_A  (2.4)

Here, f^{abc} is the structure constant of the gauge group A. Plugging eq. (2.3) into the first term of eq. (2.2) leads to self-interactions of three and four gauge bosons. In general, the procedure to obtain the Lagrangian from the vector and chiral superfields is very similar to Ref. [233]. Interested readers might check this reference for more details.
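The group-theory data mentioned above is easy to illustrate numerically. The following Python sketch (an illustration, not Susyno's actual interface) builds the SU(2) fundamental generators and extracts the structure constants entering the covariant derivative of eq. (2.4):

```python
import numpy as np

# Numerical sketch of the group-theory data Susyno supplies, for SU(2):
# fundamental generators T^a = sigma^a/2, the structure constants
# f^{abc} from [T^a, T^b] = i f^{abc} T^c, and the Dynkin index S(r).
sigma = [np.array([[0, 1], [1, 0]], dtype=complex),
         np.array([[0, -1j], [1j, 0]], dtype=complex),
         np.array([[1, 0], [0, -1]], dtype=complex)]
T = [s / 2 for s in sigma]

def f_abc(a, b, c):
    """Extract f^{abc} from the commutator, using the normalization
    Tr(T^a T^b) = delta^{ab}/2 to project onto T^c."""
    comm = T[a] @ T[b] - T[b] @ T[a]
    return (2 * np.trace(comm @ T[c]) / 1j).real

# Dynkin index of the fundamental: Tr(T^a T^a) = S(r), here 1/2
S_fund = np.trace(T[0] @ T[0]).real
```

For SU(2) one recovers f^{abc} = ε^{abc} and S(fundamental) = 1/2, the values entering the RGE and threshold formulas later in the text.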
Gauge interactions of matter fields Vector superfields usually don't come alone; matter fields are present as well. I'm going to discuss the possibilities to define chiral superfields in sec. 2.2.4. Here, I assume that a number of chiral superfields are present and discuss the gauge interactions which are taken into account for those. First, the D-terms stemming from the auxiliary component of the vector superfield are calculated. These terms cause four-scalar interactions and read

L_D = −1/2 Σ_A g²_A Σ_a ( Σ_{ij} φ*_i T^a_{Ar,ij} φ_j )²  (2.5)

Here, the sum is over all scalars i, j in the model, and T^a_{Ar} are the generators of the gauge group A for an irreducible representation r. For Abelian groups the T^a_{Ar} simplify to the charges Q^A_φ of the different fields. In addition, Abelian gauge groups can come with another feature: a Fayet-Iliopoulos D-term [234]:

L_FI = ξ_A D_A  (2.6)

This term can optionally be included in SARAH for any U(1).
The other gauge-matter interactions are those stemming from the kinetic terms

L_kin = (D_µ φ_i)* (D^µ φ_i) + i ψ̄_i σ̄^µ D_µ ψ_i  (2.7)

with covariant derivatives D_µ ≡ ∂_µ − i g_A V^{A,a}_µ T^a_{Ar}. The SUSY counterparts of these interactions are those between gauginos and matter fermions and scalars:

L_{λψφ} = −√2 g_A ( φ*_i T^a_{Ar} λ^a_A ψ_i + h.c. )  (2.8)

Gauge kinetic mixing The terms mentioned so far cover all gauge interactions which are possible in the MSSM. These are derived for any other SUSY model in exactly the same way. However, there is another subtlety which arises if more than one Abelian gauge group is present. In that case terms of the form

−1/4 κ_{AB} F^{A,µν} F^B_{µν}  (2.9)

are allowed for field strength tensors F_µν of two different Abelian groups A, B [235]. κ is in general an n × n matrix if n Abelian groups are present. SARAH fully includes the effect of kinetic mixing independently of the number of Abelian groups. For this purpose SARAH is not working with field strength interactions like eq. (2.9) but performs a rotation to bring the field strengths into diagonal form. That's done by a redefinition of the vector Υ carrying all Abelian gauge fields V^x_µ:

Υ → √κ Υ  (2.10)

This rotation has an impact on the interactions of the gauge bosons with matter fields. In general, the interaction of a particle φ with all Abelian gauge fields can be expressed by

Θ^T_φ G̃ Υ  (2.11)

Θ_φ is a vector containing the charges Q^x_φ of φ under all U(1) groups x, and G̃ is an n × n diagonal matrix carrying the gauge couplings of the different groups. After the rotation according to eq. (2.10) the interaction part can be expressed by

Θ^T_φ G Υ  (2.12)

with a general n × n matrix G which is no longer diagonal. In that way, the effect of gauge kinetic mixing has been absorbed into 'off-diagonal' gauge couplings. That means the covariant derivative in SARAH reads

D_µ φ = ( ∂_µ − i Σ_{xy} Q^x_φ g_{xy} V^y_µ ) φ  (2.13)

x, y run over all U(1) groups, and g_{xy} are the entries of the matrix G. Gauge-kinetic mixing is not only included in the interactions with vector bosons, but also in the derivation of the D-terms.
Therefore, the D-terms for the Abelian sector in SARAH read

L_{D,U(1)} = −1/2 Σ_x ( Σ_i Σ_y g_{xy} Q^y_{φ_i} φ*_i φ_i )²  (2.14)

while the non-Abelian D-terms keep the standard form of eq. (2.5). Finally, also 'off-diagonal' gaugino masses are introduced. The soft-breaking part of the Lagrangian then reads

L_{SB,λ,U(1)} ⊃ Σ_{xy} 1/2 λ_x λ_y M_{xy} + h.c.  (2.15)

SARAH takes the off-diagonal gaugino masses to be symmetric: M_{xy} = M_{yx}.
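The absorption of kinetic mixing into off-diagonal couplings can be sketched numerically. In the following toy example (all numbers illustrative), the matrix square root of κ is built from its eigendecomposition and applied to a diagonal coupling matrix, which then develops the off-diagonal entries g_xy described above:

```python
import numpy as np

# Numerical sketch of absorbing U(1) kinetic mixing into off-diagonal
# gauge couplings: after the field redefinition Upsilon -> sqrt(kappa)
# Upsilon, the diagonal coupling matrix picks up off-diagonal entries.
# kappa and the couplings are toy numbers.
kappa = np.array([[1.0, 0.1],
                  [0.1, 1.0]])      # symmetric kinetic-mixing matrix
G_diag = np.diag([0.36, 0.55])      # toy diagonal couplings

# matrix square root of kappa via its eigendecomposition
w, V = np.linalg.eigh(kappa)
sqrt_kappa = V @ np.diag(np.sqrt(w)) @ V.T

# the interaction Theta^T G_diag Upsilon becomes Theta^T G Upsilon
# with a general, non-diagonal coupling matrix G:
G = G_diag @ np.linalg.inv(sqrt_kappa)
```

Even a 10% kinetic mixing generates off-diagonal couplings at the percent level here, which is why SARAH keeps the full matrix G in the covariant derivative and in the D-terms.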

Gauge fixing sector
All terms written down so far lead to a Lagrangian which is invariant under a general gauge transformation. To break this invariance one can add 'gauge fixing' terms to the Lagrangian. The general form of these terms is

L_GF = − Σ_A 1/(2 ξ_A) |F^a_A|²  (2.16)

Here, F^a_A is usually a function involving partial derivatives of the gauge bosons V^{A,a}_µ. SARAH uses R_ξ gauge. That means that for an unbroken gauge symmetry, the gauge fixing terms are

F^a_A = ∂^µ V^a_{A,µ}  (2.17)

For broken symmetries, the gauge fixing terms are chosen in such a way that the mixing terms between vector bosons and scalars disappear from the Lagrangian. This usually generates terms of the form

F^a_A = ∂^µ V^a_{A,µ} + i ξ_A M_A G_A  (2.18)

Here, G_A is the Goldstone boson of the vector boson V^A_µ with mass M_A. From the gauge fixing part, the interactions of the ghost fields η̄^a_A are derived by

L_Ghost = − η̄^a_A ( δ_BRST F^a_A )  (2.19)

Here, δ_BRST denotes the operator of a BRST transformation. All steps to get the gauge fixing parts and the ghost interactions are done completely automatically by SARAH and adjusted to the gauge groups in the model.

Matter sector
There can be up to 99 chiral superfields in a single SUSY model in SARAH. All superfields can come with an arbitrary number of generations and can transform as any irreducible representation with respect to the defined gauge groups. In the handling of fields in non-fundamental representations of a symmetry, SARAH distinguishes whether the corresponding symmetry gets broken or not: for unbroken symmetries it is convenient to work with fields which transform as vectors under the symmetry with the appropriate length. For instance, a 6 under SU(3)_c is taken to be

φ_α,  α = 1, 2, . . . , 6  (2.20)

i.e. it carries one charge index. In contrast, non-fundamental fields under a broken gauge symmetry are represented by tensor products of the fundamental representation. For instance, a 3 under SU(2)_L is taken to be

φ_{ab},  a, b = 1, 2  (2.21)

Thus, the triplet can be given as usual as a 2 × 2 matrix. For Abelian gauge groups the charges of superfields can not only be defined as real numbers, but also as variables. All interactions are then expressed keeping these charges as free parameters.
For all chiral superfields SARAH adds the soft-breaking masses. For fields appearing in N generations, these are treated as hermitian N × N matrices. As written above, also soft terms mixing two scalars are included if allowed by all symmetries. Hence, the soft-breaking mass terms read in general

L_{SB,φ} = − Σ_{ij} δ̃_{ij} m²_{ij} φ*_i φ_j  (2.22)

Note that i, j label different scalar fields; generation indices are not shown. δ̃_{ij} is 1 if the fields φ_i and φ_j have exactly the same transformation properties under all local and global symmetries, and 0 otherwise.

Models with Dirac Gauginos
Another feature which has become popular in the last years are models with Dirac gauginos. In these models mass terms m^D_{φ_i A} λ_A ψ_i between gauginos λ_A and the fermionic component ψ_i of a chiral superfield φ̂_i in the adjoint representation of the gauge group A are present. In addition, also new D-terms are introduced in these models [99]. Thus, the new terms in the Lagrangian are of the form

L_DG = − m^D_{φ_i A} λ^a_A ψ^a_i − √2 m^D_{φ_i A} φ^a_i D^a_A + h.c.  (2.23)

D^a_A is the auxiliary component of the vector superfield of the group A. To allow for Dirac mass terms, these models always come with an extended matter sector: to generate Dirac mass terms for all MSSM gauginos, at least one singlet, one triplet under SU(2) and one octet under SU(3) must be added. Furthermore, models with Dirac gauginos generate also new structures in the RGEs [236]. All of this is fully supported in SARAH. If Dirac masses for gauginos are explicitly turned on in SARAH, it will check for all allowed combinations of vector and chiral superfields which can generate Dirac masses and which are consistent with all symmetries. For instance, in models with several gauge singlets, the bino might even get several Dirac mass terms.

Superpotential, soft-terms and non-canonical interactions
The matter interactions in SUSY models are usually fixed by the superpotential and the soft SUSY-breaking terms. SARAH fully supports all renormalizable terms in the superpotential

W = c_L L^i φ̂_i + c_M µ^{ij} φ̂_i φ̂_j + c_T Y^{ijk} φ̂_i φ̂_j φ̂_k  (2.24)

and generates the corresponding soft-breaking terms

L_{SB,W} = − ( c_L t^i φ_i + c_M B^{ij} φ_i φ_j + c_T T^{ijk} φ_i φ_j φ_k + h.c. )  (2.25)

c_L, c_M, c_T are real coefficients. All parameters are treated by default in the most general way by taking them as complex tensors of appropriate order and dimension. If identical fields are involved in the same coupling, SARAH also derives the symmetry properties of the parameter. As discussed below, SARAH can also handle to some extent non-renormalizable terms with four superfields in the superpotential

W ⊃ c_W W^{ijkl} φ̂_i φ̂_j φ̂_k φ̂_l  (2.26)

From the superpotential, all the F-terms

L_F = − Σ_i | ∂W̃ / ∂φ_i |²  (2.27)

and interactions of matter fermions

L_Y = − 1/2 Σ_{ij} ( ∂²W̃ / ∂φ_i ∂φ_j ) ψ_i ψ_j + h.c.  (2.28)

are derived. Here W̃ is the superpotential W with all superfields φ̂_i replaced by their scalar components φ_i, and ψ_i is the fermionic component of that superfield. Usually, the F- and D-terms and the soft-breaking terms for chiral and vector superfields fix the full scalar potential of the model. However, in some cases also non-canonical terms should be studied, for instance non-holomorphic soft terms of the form

L_{SB,NH} = − T'^{ijk} φ_i φ_j φ*_k + h.c.  (2.29)

These can be added as well, and they are taken into account in the calculation of the vertices and masses and, as a consequence, also in all loop calculations. However, they are not included in the calculation of the RGEs because of the lack of generic results in the literature.
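The F-term rule above is just differentiation of the superpotential with respect to the scalar fields. As a toy cross-check (with illustrative couplings and field values, and SU(2) contractions suppressed), the analytic F-term of an NMSSM-like singlet can be compared against a numerical derivative:

```python
# Toy cross-check of the F-term rule: for an NMSSM-like superpotential
# W = lam*S*Hu*Hd + kap/3 * S^3 the singlet F-term is
# F_S = dW/dS = lam*Hu*Hd + kap*S^2.  All numbers are illustrative.
lam, kap = 0.6, 0.3

def W(S, Hu, Hd):
    return lam * S * Hu * Hd + kap / 3.0 * S**3

def F_S(S, Hu, Hd):
    """Analytic derivative dW/dS."""
    return lam * Hu * Hd + kap * S**2

# compare against a central finite difference of W
S0, Hu0, Hd0, eps = 1.2, 0.7, -0.4, 1e-6
numeric = (W(S0 + eps, Hu0, Hd0) - W(S0 - eps, Hu0, Hd0)) / (2 * eps)
```

SARAH performs exactly this kind of differentiation symbolically, for the full tensor structure of eqs. (2.24) and (2.26).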

Symmetry breaking and VEVs
All gauge symmetries can also be broken. This is in general done by decomposing a complex scalar into its real components and a VEV:

φ = 1/√2 ( v + φ_R + i φ_I )  (2.30)

Assigning a VEV to a scalar is not restricted to colourless and neutral particles; also models with spontaneous colour or charge breaking (CCB) can be studied with SARAH. Explicit CP violation in the Higgs sector is possible as well. There are two possibilities to define it: either a complex phase is added,

φ = 1/√2 e^{i η} ( v + φ_R + i φ_I )  (2.31)

or a VEV for the CP-odd component is defined,

φ = 1/√2 ( v_R + φ_R + i (v_I + φ_I) )  (2.32)

Both options are possible in SARAH, even if the first one might often be preferred.
In the case of an extended gauge sector, additional gauge bosons are present. Depending on the quantum numbers of the states which get a VEV, these gauge bosons might mix with the SM ones. This mixing is fully supported by SARAH. There is no restriction on whether the additional gauge bosons are ultra-light (dark photons) or much heavier (Z′, W′ bosons).
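The mixing of a SM gauge boson with a heavy partner can be illustrated with a toy 2×2 mass-squared matrix (all numbers below are illustrative, not the B-L-SSM prediction):

```python
import numpy as np

# Toy sketch of Z-Z' mass mixing after symmetry breaking: a symmetric
# 2x2 mass-squared matrix for (Z, Z') with a small VEV-induced
# off-diagonal entry.  Entries are illustrative numbers in GeV^2.
m2 = np.array([[91.2**2,   150.0**2],
               [150.0**2, 2500.0**2]])

eig, rot = np.linalg.eigh(m2)          # eigenvalues in ascending order
m_Z, m_Zp = np.sqrt(eig)
# mixing angle between gauge and mass eigenstates
theta = abs(np.arctan(rot[1, 0] / rot[0, 0]))
```

For a heavy Z' the mixing angle comes out tiny and the light eigenvalue is pushed slightly below its unmixed value, the typical pattern constrained by electroweak precision data.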

Mixing in matter sector
Mixing between gauge eigenstates into new mass eigenstates appears not only in the gauge but also in the matter sector. In general the mixing is induced via bilinear terms in the Lagrangian between gauge eigenstates. These bilinear terms can either be a consequence of gauge symmetry breaking or they can correspond to bilinear superpotential or soft terms. In general, four kinds of bilinear terms can show up in the matter part of the Lagrangian:

L ⊃ − φ† m_C φ − 1/2 ϕ^T m_R ϕ − 1/2 Ψ_0^T m_M Ψ_0 − Ψ_1^T m_D Ψ_2 + h.c.  (2.33)

Here, φ, ϕ, Ψ_x (x = 0, 1, 2) are vectors whose components are gauge eigenstates. φ are complex and ϕ are real scalars; Ψ_0, Ψ_1 and Ψ_2 are Weyl spinors. The rotation of the complex scalars φ to mass eigenstates happens via a unitary matrix U which diagonalizes the matrix m_C:

U m_C U† = m^{diag}_C  (2.34)

For real scalars the rotation is done via a real matrix Z which diagonalizes m_R:

Z m_R Z^T = m^{diag}_R  (2.35)

For fermions we have to distinguish whether the bilinear terms are symmetric or not. In the symmetric case the gauge eigenstates are rotated to Majorana fermions; the mass matrix m_M is then diagonalized by one unitary matrix. In the second case, two unitary matrices are needed to transform Ψ_1 and Ψ_2 differently. This results in Dirac fermions, and both matrices together diagonalize the mass matrix m_D.
There is no restriction in SARAH on how many states mix. The most extreme case is the one with spontaneous charge, colour and CP violation, where all fermions, all scalars and all vector bosons mix among each other. Even the resulting huge mass matrices would be derived by SARAH. Phenomenologically more relevant models can still have, for instance, a neutralino sector mixing seven to ten states. That's handled without any problem by SARAH. Information about the calculation of the mass matrices in SARAH is given in sec. 3.3.
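The two fermion cases can be sketched numerically with toy matrices: a Dirac mass matrix is diagonalized by two unitary matrices (an SVD), while a symmetric Majorana matrix needs a single mixing matrix, with negative eigenvalues absorbed by a phase (a Takagi factorization; for a real symmetric matrix, an eigendecomposition plus a sign fix suffices):

```python
import numpy as np

# Toy sketch of the two fermion diagonalization cases.
# Dirac: m_D = U diag(s) Vh, i.e. two unitary matrices from an SVD.
m_D = np.array([[100.0,  20.0],
                [ 10.0, 500.0]])
U, s, Vh = np.linalg.svd(m_D)

# Majorana: a real symmetric, seesaw-like matrix; one eigenvalue comes
# out negative and its sign is absorbed by a phase i, giving positive
# masses and a single unitary mixing matrix Z with m_M = Z diag(m) Z^T.
m_M = np.array([[  0.0,  80.0],
                [ 80.0, 1000.0]])
w, Zr = np.linalg.eigh(m_M)
phase = np.diag(np.where(w < 0, 1j, 1.0 + 0j))
Z = Zr @ phase
masses = np.abs(w)
```

This is the standard numerical treatment; SARAH derives the analytic mass matrices and leaves the numerical diagonalization to the generated SPheno code.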

Superheavy particles
Extensions of the MSSM cannot only be present at the SUSY scale but can also appear at much higher scales. Such superheavy states then have only indirect effects on the SUSY phenomenology compared to the MSSM: they alter the RGE evolution and give a different prediction for the SUSY parameters. In addition, they can also induce important higher-dimensional operators. SARAH provides features to explore models with superheavy states: it is possible to change stepwise the set of RGEs which is used to run the parameters numerically with SPheno. In addition, the most important thresholds are included at the scale M_T at which the fields of mass M are integrated out. These are the corrections to the gauge couplings and gaugino masses [237], of the form

g_A → g_A ( 1 ∓ g²_A/(16π²) S_A(r) ln(M²/M²_T) )  (2.38)
M_{λ_A} → M_{λ_A} ( 1 ∓ g²_A/(8π²) S_A(r) ln(M²/M²_T) )  (2.39)

S_A(r) is the Dynkin index of a superfield transforming as representation r with respect to the gauge group A. When evaluating the RGEs from the low to the high scale the contribution is positive; when running down, it is negative. Eqs. (2.38)-(2.39) assume that the mass splitting between the components of the chiral superfield which is integrated out is negligible. That's often a good approximation for very heavy states. Nevertheless, SARAH can also take into account the mass splitting among the components if necessary. Also higher-dimensional operators can be initialized which give rise to terms like eq. (2.26). However, those are only partially supported in SARAH: only the RGEs are calculated for these terms, and the resulting interactions between two fermions and two scalars are included in the Lagrangian, while the six-scalar interactions are not taken into account. This approach is for instance sufficient to work with the Weinberg operator needed for neutrino masses [238,239].
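The effect of such a threshold on the running is easy to sketch: below M_T the heavy superfield is integrated out and the one-loop coefficient b loses that field's Dynkin-index contribution. The following toy one-loop evolution (all couplings, b coefficients and scales illustrative) shows the stepwise running SPheno performs:

```python
import math

# Sketch of stepwise one-loop running across a threshold at M_T.
# All numbers (couplings, b coefficients, scales) are illustrative.

def run_alpha_inv(alpha_inv, b, mu_from, mu_to):
    """One-loop evolution: d(1/alpha)/d ln(mu) = -b/(2*pi)."""
    return alpha_inv - b / (2 * math.pi) * math.log(mu_to / mu_from)

alpha_inv_mZ = 59.0                # toy 1/alpha at m_Z
b_low, delta_b = 6.6, 1.0          # heavy field adds delta_b above M_T
m_Z, M_T, M_GUT = 91.2, 1.0e10, 2.0e16

# run m_Z -> M_T with the light spectrum, then switch to the full set
# of states for M_T -> M_GUT
alpha_inv_MT  = run_alpha_inv(alpha_inv_mZ, b_low, m_Z, M_T)
alpha_inv_GUT = run_alpha_inv(alpha_inv_MT, b_low + delta_b, M_T, M_GUT)
```

Changing M_T shifts the high-scale prediction, which is exactly why the stepwise change of the RGE set matters for GUT-scale boundary conditions.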

Checks of implemented models
After the initialization of a model SARAH provides functions to check the (self-)consistency of the model. The function CheckModel performs the following checks:

Does the particle content cause gauge anomalies? Gauge anomalies are caused by triangle diagrams with three external gauge bosons and internal fermions [240]. The corresponding condition for all SU(N)_A groups to be anomaly free is

Σ_i Tr[ T^a_{Ar_i} { T^b_{Ar_i}, T^c_{Ar_i} } ] = 0

Again, the T^a_{Ar}(ψ_i) are the generators for a fermion ψ_i transforming as irreducible representation r under the gauge group SU(N)_A, and the sum is taken over all chiral superfields. In the Abelian sector several conditions have to be fulfilled, depending on the number of U(1) gauge groups:

Σ_i Q^x_{ψ_i} Q^y_{ψ_i} Q^z_{ψ_i} = 0   for all combinations of U(1)'s x, y, z

The mixed condition involving Abelian and non-Abelian groups is

Σ_i Q^x_{ψ_i} S_A(r_i) = 0

Finally, the conditions involving gravity G are

Σ_i Q^x_{ψ_i} = 0

If one of these conditions is not fulfilled, a warning is printed by SARAH. If some U(1) charges were defined as variables, the conditions on these variables for anomaly cancellation are printed.
Does the particle content lead to the Witten anomaly? SARAH checks that there is an even number of SU(2) doublets. This is necessary for a model to be free of the Witten anomaly [241]. Are all terms in the (super)potential in agreement with global and local symmetries? As mentioned above, SARAH doesn't forbid the inclusion of terms in the superpotential which violate global or gauge symmetries. However, it prints a warning if this happens.
Are there other terms allowed in the (super)potential by global and local symmetries? SARAH will print a list of renormalizable terms which are allowed by all symmetries but which haven't been included in the model file.
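The Abelian anomaly and Witten conditions above are easy to verify numerically. A toy check for one MSSM generation plus the two Higgsinos (with the standard hypercharge assignments assumed for illustration) could read:

```python
# Numerical check of the Abelian anomaly conditions for one MSSM
# generation plus the two Higgsinos: the cubic U(1)_Y^3 and the
# U(1)_Y-gravity sums must vanish, and the number of SU(2) doublets
# must be even (Witten anomaly).
fields = {               # name: (hypercharge Y, SU(2) dim, SU(3) dim)
    "Q":  ( 1/6, 2, 3),
    "u":  (-2/3, 1, 3),
    "d":  ( 1/3, 1, 3),
    "L":  (-1/2, 2, 1),
    "e":  ( 1.0, 1, 1),
    "Hu": ( 1/2, 2, 1),
    "Hd": (-1/2, 2, 1),
}

def anomaly_sum(power):
    """Sum Y^power over all Weyl fermions, weighted by multiplicity."""
    return sum(y**power * d2 * d3 for (y, d2, d3) in fields.values())

cubic   = anomaly_sum(3)   # U(1)_Y^3 triangle
gravity = anomaly_sum(1)   # U(1)_Y-gravity-gravity
# Witten anomaly: count SU(2) doublets (colour counts as multiplicity)
doublets = sum(d3 for (y, d2, d3) in fields.values() if d2 == 2)
```

Both anomaly sums vanish and the doublet count (6) is even, so this spectrum would pass CheckModel; dropping a single field from the dictionary makes the sums non-zero, which is the situation SARAH warns about.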

Calculations and output
Within its natural Mathematica environment, SARAH can perform many calculations for a model at the analytical level. For an exhaustive numerical analysis, one of the dedicated interfaces to other tools is usually the best approach. In this section I give an overview of what SARAH calculates itself and how this information is linked to other codes.

Renormalization group equations
SARAH calculates the SUSY RGEs at the one- and two-loop level. In general, the β-function of a parameter c is parametrized by
\[ \beta_c \equiv \frac{dc}{dt} = \frac{1}{16\pi^2}\,\beta_c^{(1)} + \frac{1}{(16\pi^2)^2}\,\beta_c^{(2)} \,, \]
where \beta_c^{(1)} and \beta_c^{(2)} are the coefficients at one- and two-loop level. For the gauge couplings the generic one-loop expression is rather simple and reads
\[ \beta_g^{(1)} = g^3 \left( S(R) - 3\,C(G) \right) . \]
S(R) is the Dynkin index of the gauge group summed over all chiral superfields charged under that group, and C(G) is the Casimir of the adjoint representation of the group. The two-loop expressions are more complicated and are skipped here; they are for instance given in Ref. [242].
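The one-loop gauge equation can be solved in closed form and cross-checked against a direct numerical integration. The following sketch (my own illustration, not generated code) uses b = S(R) − 3C(G); for SU(3) in the MSSM, S(R) = 6 from three generations of q, u^c, d^c and C(G) = 3, so b₃ = −3:

```python
import math

def run_gauge_one_loop(g0, b, t):
    """Analytic one-loop solution of dg/dt = b g^3/(16 pi^2), t = ln(Q/Q0)."""
    inv_g2 = 1.0/g0**2 - 2.0*b*t/(16.0*math.pi**2)
    return 1.0/math.sqrt(inv_g2)

def rk4_gauge(g0, b, t, steps=1000):
    """Numerical cross-check with a simple Runge-Kutta integration."""
    beta = lambda g: b*g**3/(16.0*math.pi**2)
    h, g = t/steps, g0
    for _ in range(steps):
        k1 = beta(g); k2 = beta(g + 0.5*h*k1)
        k3 = beta(g + 0.5*h*k2); k4 = beta(g + h*k3)
        g += h*(k1 + 2*k2 + 2*k3 + k4)/6.0
    return g

# MSSM SU(3): S(R) = 6, C(G) = 3
b3 = 6 - 3*3
```

This is conceptually what the NDSolve output of SARAH does for the full coupled two-loop system.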
The starting point for the calculation of the RGEs of the superpotential terms in SARAH are the anomalous dimensions γ of all superfields. These can also be parametrized by
\[ \gamma_{\hat\phi_i}^{\hat\phi_j} = \frac{1}{16\pi^2}\,\gamma^{(1)} + \frac{1}{(16\pi^2)^2}\,\gamma^{(2)} \,. \]
I want to stress again that i, j are not generation indices but label the different fields. Generic formulae for the one- and two-loop coefficients γ^{(1)}, γ^{(2)} are given in Ref. [242] as well. SARAH includes the case of an anomalous dimension matrix with off-diagonal entries, i.e. \hat\phi_i \neq \hat\phi_j. This is for instance necessary in models with vector-like quarks, where the superpotential contains both a Y_u \hat u \hat q \hat H_u and a Y_U \hat U \hat q \hat H_u term: the anomalous dimension \gamma_{\hat u \hat U} is not vanishing but receives already at one loop contributions ∝ Y_u Y_U.
From the anomalous dimensions it is straightforward to get the β-functions of the superpotential terms: for a generic superpotential of the form of eqs. (2.24) and (2.26), the coefficients β^{(x)} are, up to constant factors, obtained by contracting each parameter with the anomalous dimensions of the involved superfields, e.g. for a trilinear coupling
\[ \beta^{(x)}_{Y^{ijk}} \sim Y^{ijp}\,\gamma^{(x)\,k}_{p} + Y^{ipk}\,\gamma^{(x)\,j}_{p} + Y^{pjk}\,\gamma^{(x)\,i}_{p} \,. \]
In the soft-breaking sector SARAH also includes all standard terms, i.e. linear (t), bilinear (B), trilinear (T) and quartic (Q) scalar couplings, soft scalar masses m² and gaugino masses M. The generic expressions for the B's, T's, m²'s and M's up to two loops are again given in Ref. [242], which is used by SARAH. The β-function of the linear soft-term t is calculated using Ref. [243]. For the quartic soft-term Q the approach of Ref. [244] is adopted: a second quantity \bar\gamma is defined from the anomalous dimensions, and the β-functions for Q can then be expressed in terms of γ and \bar\gamma. In principle, the same approach can also be used for the B and T terms as long as no gauge singlet exists in the model. Because of this restriction, SARAH uses the more general expressions of Ref. [242].
The running of the Fayet-Iliopoulos D-term ξ receives two contributions. The first part is already fixed by the running of the gauge coupling of the Abelian group; the second part, β^{(x)}_ξ, is known even to three loops [245,246]. SARAH has implemented the one- and two-loop results, which are rather simple and can be written in terms of the traces σ₁ and σ₃ which are also used to express the β-functions of the soft scalar masses at one and two loops, see for instance Ref. [242].
Finally, the β-functions for the gaugino mass parameters are
\[ \beta_M = \frac{1}{16\pi^2}\,\beta_M^{(1)} + \frac{1}{(16\pi^2)^2}\,\beta_M^{(2)} \,, \]
where the expressions for β_M are given in Refs. [242,243,247]. β_M^{(1)} actually has a rather simple form similar to that of the gauge couplings. One finds
\[ \beta_M^{(1)} = 2 g^2 \left( S(R) - 3\,C(G) \right) M \,. \]
Therefore, the running of the gaugino masses is strongly correlated with that of the gauge couplings: at one loop the ratio M/g² is RG invariant. Thus, for a GUT model the hierarchy of the running gaugino masses is the same as that of the gauge couplings.
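The one-loop correlation can be made explicit numerically: integrating β_g and β_M together, the ratio M/g² stays constant along the running. A small sketch (my own toy numbers, using b = −3 as for SU(3) in the MSSM):

```python
import math

def run_gauge_gaugino(g0, M0, b, t, steps=400):
    """RK4 integration of the coupled one-loop system
    dg/dt = b g^3 k,  dM/dt = 2 b g^2 M k,  with k = 1/(16 pi^2)."""
    k = 1.0/(16.0*math.pi**2)
    f = lambda g, M: (b*g**3*k, 2.0*b*g**2*M*k)
    h, g, M = t/steps, g0, M0
    for _ in range(steps):
        k1 = f(g, M)
        k2 = f(g + 0.5*h*k1[0], M + 0.5*h*k1[1])
        k3 = f(g + 0.5*h*k2[0], M + 0.5*h*k2[1])
        k4 = f(g + h*k3[0], M + h*k3[1])
        g += h*(k1[0] + 2*k2[0] + 2*k3[0] + k4[0])/6.0
        M += h*(k1[1] + 2*k2[1] + 2*k3[1] + k4[1])/6.0
    return g, M
```

Running from the TeV scale up to a GUT-like scale, M/g² is preserved to numerical precision, which is the statement made in the text.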
The expressions presented in the early works of Refs. [242,243,247] did not actually cover all possibilities and are not sufficient for all possible SUSY models which can be implemented in SARAH. Therefore, SARAH has also implemented some more recent results from the literature. In the case of several U(1)'s, gauge-kinetic mixing can arise if the groups are not orthogonal. Substitution rules to translate the results of Ref. [242] to those including gauge-kinetic mixing were presented in Ref. [248] and have been implemented in SARAH. For instance, to include gauge-kinetic mixing in the running of the gauge couplings and gaugino masses, eqs. (3.2) and (3.16) can be used after promoting the couplings and masses to matrices: G and M are matrices carrying the gauge couplings and gaugino masses of all U(1) groups, see also sec. 2.2, and I introduce V_{\hat\phi} = G^T Q_{\hat\phi}. The sums run over all chiral superfields \hat\phi. Similar rules for all other terms involving gauge couplings and gaugino masses appearing in the β-functions are presented in Ref. [248] and used by SARAH.
Furthermore, the changes in the RGEs in the presence of Dirac gaugino mass terms are known today at the two-loop level, see Ref. [236]. SARAH makes use of Ref. [236] to obtain the β-functions for the new mass parameters as well as to include the new contributions to the RGEs of tadpole terms in the presence of Dirac gauginos. The β-function of a Dirac mass term m_D^{\hat\phi\lambda}\,\Psi\lambda_i is related to the anomalous dimension of the involved chiral superfield \hat\phi, whose fermionic component is Ψ, and to the running of the corresponding gauge coupling. The tadpole term receives two new contributions: from the Fayet-Iliopoulos terms discussed above and from terms mimicking B-insertions. (Another method to deal with gauge-kinetic mixing was proposed in Ref. [249].) Thus, the only missing piece is β_D, which is also calculated by SARAH up to two loops based on Ref. [236].
Finally, the set of SUSY RGEs is completed by using the results of Refs. [250,251] to obtain the gauge dependence of the running of the VEVs. As a consequence, the β-functions for the VEVs consist of two parts which are calculated independently by SARAH: γ_S, the anomalous dimension of the scalar φ which receives the VEV v_φ, and \bar\gamma_S, which absorbs the gauge-dependent parts vanishing in Landau gauge. All details about this calculation and the generic results for \bar\gamma_S^{(x)} are given in Refs. [250,251].
I want to mention that SARAH provides the same accuracy for the RGEs of non-SUSY models by making use of the generic results of Refs. [252][253][254][255]. These results are completed by Ref. [256] to cover gauge-kinetic mixing, and again by Refs. [250,251] to include the gauge dependence of the running VEVs also in the non-SUSY case.
Output The RGEs calculated by SARAH are output in different formats: (i) they are written in the internal SARAH format to the output directory, (ii) they are included in the LaTeX output in a much more readable format, (iii) they are exported into a format which can be used with NDSolve of Mathematica to solve the RGEs numerically within Mathematica, (iv) they are exported into Fortran code which is used by SPheno.

Tadpole equations
During the evaluation of a model, SARAH calculates 'on the fly' all minimum conditions of the tree-level potential, the so-called tadpole equations. In the case of no CP violation, complex scalars are decomposed as
\[ S_i = \frac{1}{\sqrt{2}}\left( v_i + \phi_i + i\,\sigma_i \right) , \]
and the expressions
\[ T_i \equiv \frac{\partial V}{\partial \phi_i} = 0 \]
are calculated; these are equivalent to \partial V/\partial v_i. For models with CP violation in the Higgs sector, i.e. where either complex phases appear between the real scalars or where the VEVs have an imaginary part, SARAH calculates the minimum conditions with respect to the CP-even and CP-odd components, T_{\phi_i} \equiv \partial V/\partial\phi_i and T_{\sigma_i} \equiv \partial V/\partial\sigma_i. The set of all tadpole equations then consists of both T_{\phi_i} and T_{\sigma_i}. Output The tadpole equations are exported into LaTeX format as well as into Fortran code used by SPheno. This ensures that all parameter points evaluated by SPheno at least sit at a local minimum of the scalar potential. Moreover, the tadpole equations are included in the model files for Vevacious, which is used to find all possible solutions of them with respect to the different VEVs.
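To illustrate the idea behind solving tadpole equations for free parameters, here is a minimal sketch (my own toy potential, one real scalar; not SARAH code): the minimum condition T = dV/dv = 0 is solved for the mass parameter, which is what SPheno does for the chosen set of free parameters.

```python
def tadpole(v, m2, lam, eps=1e-4):
    """T = dV/dv for the toy potential V = m2 v^2/2 + lam v^4/4,
    evaluated by a central finite difference."""
    V = lambda x: 0.5*m2*x**2 + 0.25*lam*x**4
    return (V(v + eps) - V(v - eps))/(2.0*eps)

def solve_tadpole_for_m2(v, lam):
    """Solve T(v) = 0 for the mass parameter m2 (toy analogue of solving
    the tadpole equations with respect to the free parameters)."""
    return -lam*v**2
```

After substituting the solution back, the chosen VEV is by construction an extremum of the potential, mirroring how SPheno guarantees that every evaluated point sits at a stationary point.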

Masses and mass matrices
SARAH uses the definition of the rotations defined in the model file to calculate the mass matrices for particles which mix. The mass matrices for scalars are calculated by
\[ M_S = \frac{\partial^2 V}{\partial \phi_i\, \partial \phi_j^*} \,, \]
where φ can be either real or complex, i.e. the resulting M_S corresponds to m_C or m_R of eq. (2.33). In the mass matrices of states which include Goldstone bosons, also the R_ξ-dependent terms are included.
The mass matrices for fermions are calculated analogously from the second derivative of the Lagrangian with respect to the Weyl spinors Ψ^x and Ψ^y, with x = y = 0 for Majorana fermions and x = 1, y = 2 for Dirac fermions. SARAH calculates the mass matrices of all states which are rotated to mass eigenstates during the evaluation of a model. In addition, it checks if there are particles for which gauge and mass eigenstates are identical; in that case, it also calculates the expressions for the masses of these states.
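The rotation to mass eigenstates that SARAH defines, Z m Z^T for a real symmetric matrix, can be made concrete for the simplest 2×2 case (an illustrative sketch with toy numbers, e.g. a two-state scalar sector):

```python
import math

def diagonalize_sym_2x2(m11, m12, m22):
    """Diagonalize a real symmetric 2x2 mass matrix by a rotation Z M Z^T:
    returns the mixing angle and the two mass eigenvalues."""
    theta = 0.5*math.atan2(2.0*m12, m11 - m22)
    c, s = math.cos(theta), math.sin(theta)
    m1 = c*c*m11 + 2.0*c*s*m12 + s*s*m22
    m2 = s*s*m11 - 2.0*c*s*m12 + c*c*m22
    return theta, m1, m2
```

The rotation preserves trace and determinant, a quick sanity check SARAH users often apply to generated mass matrices.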
Output The tree-level masses and mass matrices are also exported to LaTeX files as well as to Fortran code for SPheno. In addition, they are used in the Vevacious output to enable the calculation of the one-loop effective potential. The mass matrices can also be exported to the CalcHep model files if the user wants to calculate the masses internally with CalcHep instead of using them as input.

Vertices
Vertices are not automatically calculated during the initialization of a model, as is done for mass matrices and tadpole equations. However, the calculation can be started very easily. In general, SARAH is optimized for the extraction of three- and four-point interactions with renormalizable operators. That means, usually only the following generic interactions are taken into account: interactions of two fermions or two ghosts with one scalar or vector boson (FFS, FFV, GGS, GGV), interactions of three or four scalars or vector bosons (SSS, SSSS, VVV, VVVV), as well as interactions of two scalars with one or two vector bosons (SSV, SSVV) or of two vector bosons with one scalar (SVV). In this context, vertices not involving fermions are calculated as partial derivatives of the Lagrangian with respect to the involved fields η_i, which are either scalars, vector bosons, or ghosts. The results are expressed by a coefficient C, which is a Lorentz invariant, and a Lorentz factor Γ which involves γ^μ, p^μ, or g^{μν}. Vertices involving Dirac fermions are first expressed in terms of Weyl fermions, and the two resulting vertices are calculated separately. Taking two Dirac fermions F_a = (Ψ¹_a, Ψ²*_a), F_b = (Ψ¹_b, Ψ²*_b) and distinguishing the two cases of fermion-vector and fermion-scalar couplings, the vertices are decomposed into left- and right-handed parts using the projection operators P_{L,R}.
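The statement that purely bosonic vertices are derivatives of the Lagrangian can be illustrated numerically. For a toy quartic potential V = λ(v+h)⁴/4 expanded around the VEV, the trilinear h³ vertex is the third derivative at h = 0, which equals 6λv (my own toy example; this is not how SARAH computes vertices internally, which is fully symbolic):

```python
def third_derivative(f, x, h=0.5):
    """Central finite-difference estimate of f'''(x); exact (up to rounding)
    for polynomials up to degree four."""
    return (f(x + 2.0*h) - 2.0*f(x + h) + 2.0*f(x - h) - f(x - 2.0*h))/(2.0*h**3)

lam, v = 0.13, 246.0
V = lambda hf: 0.25*lam*(v + hf)**4   # toy quartic potential around the VEV

hhh_vertex = third_derivative(V, 0.0)  # expected: 6*lam*v
```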
The user can either calculate specific vertices for a particular set of external states, or call functions which let SARAH derive all existing interactions from the Lagrangian. The first option might be useful to check the exact structure of single vertices, while the second one is needed to obtain all vertices for writing model files for other tools.
Output The vertices are exported into many different formats. They are saved in the SARAH internal format and can be written to LaTeX files. The main purpose, however, is the export into formats which can be used with other tools. SARAH writes model files for FeynArts, WHIZARD/OMEGA, CalcHep/CompHep, as well as in the UFO format. The UFO format is supported by MadGraph, Herwig++ and Sherpa. Thus, by the output of the vertices into these different formats, SARAH provides an implementation of a given model in a wide range of HEP tools. In addition, SARAH also generates Fortran code to implement all vertices in SPheno.
One- and two-loop corrections to tadpoles and self-energies

One-loop corrections
SARAH calculates the analytical expressions for the one-loop corrections to the tadpoles and the one-loop self-energies of all particles. For states which are a mixture of several gauge eigenstates, the self-energy matrices are calculated. For doing that, SARAH works with gauge eigenstates as external particles but uses mass eigenstates in the loops. The calculations are performed in the DR-bar scheme using 't Hooft gauge; in the case of non-SUSY models SARAH switches to the MS-bar scheme. This approach is a generalization of the procedure applied in Ref. [257] to the MSSM. In this context, the following results are obtained: • the self-energies Π of scalars and scalar mass matrices, • the self-energies Σ^L, Σ^R, Σ^S of fermions and fermion mass matrices, • the transverse self-energies Π^T of massive vector bosons. The approach to calculate the loop corrections is as follows: all possible generic diagrams at the one-loop level shown in Fig. 1 are included in SARAH. Each generic amplitude is parametrized by a symmetry factor, a colour factor, the involved couplings and a loop function; 'Symmetry' and 'Colour' are real factors. The loop functions are expressed by the standard Passarino-Veltman integrals A₀ and B₀ and some related functions, which are defined in Ref. [257].
As a first step to obtain the loop corrections, SARAH generates all possible Feynman diagrams with all field combinations possible in the considered model. The second step is to match these diagrams to the generic expressions. All calculations are done without any assumptions, and always the most general case is taken. For instance, the generic expression for a purely scalar contribution to the scalar self-energy involves the trilinear couplings of the external scalar to the two internal ones together with the loop function B₀. In the case of an external charged Higgs φ⁺ = ((H⁻_d)*, H⁺_u) together with down- and up-squarks in the loop, the correction to the charged Higgs mass matrix is built from the charged Higgs-sdown-sup vertex in which the rotation matrices of the charged Higgs are replaced by the identity matrix to obtain the projection on the gauge eigenstates. One can see that all possible combinations of internal generations are included, i.e. effects like flavour mixing are completely covered. Also, the entire p² dependence is kept.
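The finite parts of the simplest Passarino-Veltman functions are short enough to sketch directly. Below is a minimal Python implementation (my own conventions for the finite, renormalization-scale-dependent parts; only the p² = 0 limit of B₀ is implemented), using the identity B₀(0, x, y) = (A₀(x) − A₀(y))/(x − y):

```python
import math

def A0(m2, Q2):
    """One-point function, finite part: A0 = m^2 (1 - log(m^2/Q^2))."""
    if m2 == 0.0:
        return 0.0
    return m2*(1.0 - math.log(m2/Q2))

def B0(p2, m12, m22, Q2):
    """Two-point function in the p^2 = 0 limit:
    B0(0, x, y) = (A0(x) - A0(y))/(x - y), and -log(m^2/Q^2) for equal masses."""
    assert p2 == 0.0, "only the p^2 = 0 limit is implemented in this sketch"
    if abs(m12 - m22) < 1e-12*max(m12, m22):
        return -math.log(m12/Q2)
    return (A0(m12, Q2) - A0(m22, Q2))/(m12 - m22)
```

The equal-mass branch is the smooth limit of the general formula, which is a useful consistency check when implementing loop functions numerically.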
Output The one-loop expressions are saved in the SARAH internal Mathematica format and can be included in the LaTeX output. In addition, all self-energies and one-loop tadpoles are exported into Fortran code for SPheno. This enables SPheno to calculate the loop-corrected masses of all particles as discussed below.

Two-loop corrections
It is even possible to go beyond one loop with SARAH and to calculate two-loop contributions to the self-energies of real scalars. There are two equivalent approaches implemented in the SPheno interface of SARAH to perform these calculations: an effective potential approach, and a diagrammatic approach with vanishing external momenta. Because of the very complicated form of the results, there is no output of the corresponding expressions in Mathematica or LaTeX format; the results are only included in the Fortran code for numerical evaluation. I'll discuss both calculations in a bit more detail.
Effective potential calculation The first calculation of the two-loop self-energies is based on the effective potential approach. The starting point of the calculation are the generic results for the two-loop effective potential given in Ref. [16]. These have been translated to four-component notation and implemented in SARAH. When SARAH creates the SPheno output, it writes down the amplitudes for all two-loop diagrams which don't vanish in the gaugeless limit; in this limit contributions from broken gauge groups are ignored. The remaining generic diagrams which are included are shown in Fig. 2. Using these diagrams includes, for instance, all two-loop contributions which are also taken into account in the MSSM. To get the values of the two-loop self-energies and two-loop tadpoles, the derivatives of the potential with respect to the VEVs are taken numerically, as proposed in Ref. [258]. Two possibilities for this differentiation are implemented in SARAH/SPheno: (i) a fully numerical procedure which takes the derivative of the full effective potential with respect to the VEVs; (ii) a semi-analytical procedure which takes the derivatives of the loop functions with respect to the involved masses analytically, but computes the derivatives of the masses and couplings with respect to the VEVs numerically. More details about both methods and their numerical differences are given in Ref. [147].
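The difference between methods (i) and (ii) can be illustrated with a one-loop toy example (my own Coleman-Weinberg-like potential for a single real scalar; the actual SPheno routines work with the full two-loop potential): both routes to d²V/dv² must agree.

```python
import math

Q2 = 500.0**2   # renormalization scale squared (toy choice)

def m2_of_v(v, m2=-8000.0, lam=0.13):
    """Field-dependent mass squared of the toy scalar (positive for large v)."""
    return m2 + 3.0*lam*v**2

def V1(x):
    """One-loop effective-potential contribution of one real scalar, x = m^2 > 0."""
    return x**2/(64.0*math.pi**2)*(math.log(x/Q2) - 1.5)

def d2V_numerical(v, h=1e-3):
    """Method (i): fully numerical second derivative w.r.t. the VEV."""
    f = lambda y: V1(m2_of_v(y))
    return (f(v + h) - 2.0*f(v) + f(v - h))/h**2

def d2V_semianalytic(v, h=1e-3):
    """Method (ii): analytic derivatives of the loop function w.r.t. the mass,
    numerical derivatives of the mass w.r.t. the VEV, combined by the chain rule."""
    dV1  = lambda x: x*(math.log(x/Q2) - 1.0)/(32.0*math.pi**2)
    d2V1 = lambda x: math.log(x/Q2)/(32.0*math.pi**2)
    x    = m2_of_v(v)
    dm2  = (m2_of_v(v + h) - m2_of_v(v - h))/(2.0*h)
    d2m2 = (m2_of_v(v + h) - 2.0*m2_of_v(v) + m2_of_v(v - h))/h**2
    return d2V1(x)*dm2**2 + dV1(x)*d2m2
```

In practice the semi-analytic route is numerically more stable because the large cancellations happen inside analytically known functions, which mirrors the discussion in Ref. [147].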
Diagrammatic calculation A fully diagrammatic calculation of the two-loop contributions to scalar self-energies with SARAH-SPheno became available with Ref. [148]. In this setup a set of generic expressions first derived in Ref. [148] is used. All two-loop diagrams shown in Fig. 3 are included in the limit p² = 0; these are again the diagrams which don't vanish in the gaugeless limit. The results of Ref. [148] have the advantage that the expressions, which are derived from the effective potential, are much simpler than taking the limit p² → 0 of the other two-loop functions available in the literature [259]. The diagrammatic method gives results completely equivalent to the effective potential calculation but is usually numerically more robust.
The need for both calculations Since both calculations are based on completely independent implementations and use different approaches, they are very useful for cross-checks. For the MSSM and NMSSM both calculations reproduce exactly the results obtained by widely used routines based on Refs. [260][261][262][263][264][265]. However, for non-minimal SUSY models there are no reference results available to compare with. Thus, the only possibility to cross-check the results is within SPheno itself, by comparing the two different methods.
Output The two-loop expressions for the effective potential, the tadpoles and the self-energies are at the moment only exported to SPheno to calculate the loop-corrected mass spectrum.

Loop corrected mass spectrum
The information about the one- and two-loop corrections to the one- and two-point functions introduced above can be used to calculate the loop-corrected mass spectrum. Sticking to the approach of Ref. [257], the renormalized mass matrices (or masses) are related to the tree-level mass matrices (or masses) and the self-energies as follows.

Loop corrected masses
Real scalars For a real scalar φ, the one-loop and in some cases also two-loop self-energies are calculated by SPheno. The loop-corrected mass matrix squared is related to the tree-level mass matrix and the self-energy via
\[ m^{2,(1L)}_\phi(p^2) = m^{2,(T)}_\phi - \Pi_\phi(p^2) \,. \]
The on-shell condition for an eigenvalue M²_{φ_i}(p²) of the loop-corrected mass matrix reads
\[ \mathrm{Det}\left[ p^2\,\mathbf{1} - m^{2,(1L)}_\phi(p^2) \right] = 0 \,. \qquad (3.35) \]
A stable solution of eq. (3.35) for each eigenvalue M²_{φ_i}(p² = M²_{φ_i}) is usually found via an iterative procedure. In this approach one has to be careful how m^{2,(T)}_φ is defined: this is the tree-level mass matrix where the parameters are taken at the minimum of the effective potential evaluated at the same loop level at which the self-energies are known. The physical masses are associated with the eigenvalues M²_{φ_i}(p² = M²_{φ_i}). In general, the rotation matrix is slightly different for each eigenvalue because of the p² dependence of the self-energies. The convention of SARAH and SPheno is that the rotation matrix of the lightest eigenvalue is used in all further calculations and in the output.
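For a single scalar the iterative solution of the on-shell condition reduces to a fixed-point iteration M² = m²_tree − Π(M²). A minimal sketch with an invented self-energy (toy numbers, purely for illustrating the iteration, not a physical Π):

```python
def pole_mass_squared(m2_tree, Pi, tol=1e-10, iterations=100):
    """Solve the on-shell condition M^2 = m^2_tree - Pi(M^2) by fixed-point
    iteration, starting from the tree-level value."""
    M2 = m2_tree
    for _ in range(iterations):
        M2_new = m2_tree - Pi(M2)
        if abs(M2_new - M2) < tol*abs(M2_new):
            return M2_new
        M2 = M2_new
    raise RuntimeError("on-shell iteration did not converge")

# toy self-energy with mild momentum dependence
Pi_toy = lambda p2: -500.0 + 0.01*p2
```

The iteration converges quickly whenever the momentum dependence of Π is mild, which is the generic situation for the scalar self-energies discussed here.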
Complex scalars For a complex scalar η the one-loop corrected mass matrix squared is related to the tree-level mass matrix and the one-loop self-energy via
\[ m^{2,(1L)}_\eta(p^2) = m^{2,(T)}_\eta - \Pi_\eta(p^2) \,, \]
and the same on-shell condition, eq. (3.35), as for real scalars is used.
Vector bosons For vector bosons similarly simple expressions hold as for scalars: the one-loop masses of real or complex vector bosons V are obtained from the tree-level masses and the transverse self-energy Π^T_V evaluated at the pole.

Majorana fermions The one-loop mass matrix of a Majorana fermion χ is related to the tree-level mass matrix m^{(T)}_χ and the different parts of the self-energies Σ^S, Σ^L and Σ^R. The eigenvalues of (m^{(1L)}_χ)† m^{(1L)}_χ are then used in eq. (3.35) to get the pole masses.

Renormalization procedure
I have explained so far how SPheno calculates the one- and two-loop self-energies and how these are related to the loop-corrected masses. Now it is time to put this into a more global picture by describing step by step the entire renormalization procedure SPheno uses: 1. Everything starts with calculating the running parameters at the renormalization scale Q = M_SUSY from the given input parameters. The parameters can either be given directly at M_SUSY as input, or they are fixed by some GUT conditions and an RGE running is performed. M_SUSY itself can either be a fixed value or can be chosen dynamically. It is common to choose the geometric mean of the stop masses, because this usually minimizes the scale dependence of the Higgs mass prediction.
2. Not all parameters are fixed by the input; some parameters are kept free. These parameters are arranged in such a way that all further calculations are done at the minimum of the potential. For this purpose the tadpole equations T_i are solved at tree level with respect to these free parameters.
3. As soon as all running parameters are known at the SUSY scale, they are used to calculate the tree-level mass spectrum. 4. The pole mass of the Z boson together with its self-energy is used to get the tree-level value of the electroweak VEV v. 5. v and the running value of tan β are used to get the tree-level VEVs v_u, v_d. Note, in this step it is assumed that two Higgs doublets are present in the SUSY model which give mass to up- and down-type quarks as well as to leptons and gauge bosons. 6. Now all tree-level parameters are known, and the tree-level masses and rotation matrices are re-calculated using the obtained values.
7. Tree-level masses, rotation matrices and parameters are used to get all vertices at tree level. The vertices and masses are then plugged into the expressions for the one- and two-loop corrections to the tadpoles, δt_i. The conditions to work at the minimum of the effective potential are
\[ T_i + \delta t_i = 0 \,. \qquad (3.40) \]
These equations are again solved for the same set of parameters as at tree level.
8. The self-energies of all particles are calculated at the highest available loop level as explained above. Note, these calculations involve purely tree-level parameters, not the ones obtained from eq. (3.40).
9. Eqs. (3.34)-(3.39) are used to get the loop-corrected mass matrices for all particles. Now the parameters coming from the loop-corrected tadpoles are used to express the tree-level mass matrices. All calculations are iterated until the on-shell condition is satisfied for all masses.

Thresholds
So far, I haven't mentioned another subtlety: in general, the running SM parameters depend on the SUSY masses. The reason is the thresholds used to match the running parameters to the measured ones; these thresholds change when the mass spectrum changes. Therefore, the above procedure is iterated until the entire loop-corrected mass spectrum has converged. The calculation of the thresholds is also dynamically adjusted by SARAH to include all new-physics contributions. The general procedure to obtain the running gauge and Yukawa couplings at M_Z is as follows: 1. The first step is the calculation of α^{DR-bar}(M_Z) and α_S^{DR-bar}(M_Z) via threshold corrections of the form
\[ \alpha^{\overline{DR}}(M_Z) = \frac{\alpha(M_Z)}{1 - \Delta\alpha(M_Z)} \,, \qquad \Delta\alpha \propto \sum_i c_i \log\frac{m_i}{M_Z} \,, \]
where the sum runs over all particles i which are not present in the SM and which are either charged or coloured. The coefficients c_i depend on the charge or colour representation, on the generic type of the particle (scalar, fermion, vector), and on its degrees of freedom (real or complex boson, Majorana or Dirac fermion).
2. The next step is the calculation of the running Weinberg angle Θ_W^{DR-bar} and the electroweak VEV v. For that the one-loop corrections δM_Z² and δM_W² to the Z and W masses are needed, and an iterative procedure is applied with Θ_W^{DR-bar} = Θ_W^{SM} in the first iteration, together with
\[ \sin^2\Theta_W \cos^2\Theta_W = \frac{\pi\,\alpha}{\sqrt{2}\, M_Z^2\, G_F\, (1 - \delta_r)} \,. \]
Here, G_F is the Fermi constant and δ_r contains the self-energy corrections to the W and Z masses as well as δ_VB, the corrections to the muon decay μ → e ν̄_i ν_j, which are also calculated at one loop. The ρ parameter is calculated at the full one-loop level as well, and the known two-loop SM corrections are added.
3. With the obtained information the running gauge couplings at M_Z are fixed: the electroweak couplings follow from α^{DR-bar} and Θ_W^{DR-bar}, and g_3 from α_S^{DR-bar}. 4. The running Yukawa couplings are also calculated in an iterative way. The starting point are the running fermion masses in the DR-bar scheme obtained from the pole masses given as input; the two-loop parts of this matching are taken from Refs. [266,267]. These masses are matched to the eigenvalues of the loop-corrected fermion mass matrices, in which the pure QCD and QED corrections are dropped from the self-energies Σ̂. Inverting this relation gives the running tree-level mass matrix m^{(T)}. Since the self-energies also depend on the Yukawa matrices, this calculation has to be iterated until a stable point is reached. Optionally, the constraint that the CKM matrix is reproduced can also be included in the matching.
Output The calculation of the loop corrected mass spectrum and the thresholds is included in the SPheno output.

Decays and branching ratios
The calculation of decay widths and branching ratios can be done using the interface between SARAH and SPheno. The SPheno modules created by SARAH calculate all two-body decays of SUSY and Higgs states as well as of additional gauge bosons. In addition, the three-body decays of a fermion into three other fermions and of a scalar into another scalar and two fermions are included. In the Higgs sector, possible decays into two SUSY particles, leptons and massive gauge bosons are calculated at tree level. For two quarks in the final state, the dominant QCD corrections due to gluons are included [268]. The loop-induced decays into two photons and two gluons are fully calculated at leading order (LO), with the dominant next-to-leading-order corrections known from the MSSM. For the LO contributions all charged and coloured states in the given model are included in the loop, i.e. new contributions arising in a model beyond the MSSM are fully covered at one loop. In addition, final states with off-shell gauge bosons (ZZ*, WW*) are included in the Higgs decays. The only missing piece is the γZ channel: the corresponding loops are not yet derived by SARAH and the partial width is set to zero.
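The bookkeeping behind branching ratios is simple: each partial width is computed and normalized to the total. A hedged sketch (toy Yukawa values; the tree-level width formula for a scalar decaying into a fermion pair with coupling L = −(y/√2) S f f̄ is the textbook result, not SPheno's actual generated code):

```python
import math

def width_scalar_to_ff(m_scalar, m_f, y, n_c=1):
    """Tree-level width for S -> f fbar:
    Gamma = N_c y^2 m_S beta^3 / (16 pi), beta = sqrt(1 - 4 m_f^2/m_S^2)."""
    if 2.0*m_f >= m_scalar:
        return 0.0   # kinematically closed channel
    beta = math.sqrt(1.0 - 4.0*m_f**2/m_scalar**2)
    return n_c*y**2*m_scalar*beta**3/(16.0*math.pi)

def branching_ratios(widths):
    """BR_i = Gamma_i / Gamma_total, as assembled from the partial widths."""
    total = sum(widths.values())
    return {k: w/total for k, w in widths.items()}
```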
In contrast to other spectrum generators, the SPheno modules generated by SARAH perform an RGE running to the mass scale of the decaying particle. This should give a more accurate prediction of the decay widths and branching ratios. However, the user can also turn off this running and always use the parameters as calculated at M_SUSY in all decays, as is done by other codes.
Output All necessary routines to calculate the two-and three-body decays are included by default in the Fortran output for SPheno.

Higgs coupling ratios
With the discovery of the Higgs boson at the LHC and the precise measurements of its mass and couplings to other particles, a new era of high-energy physics has started. Today, many SUSY models not only have to be confronted with the exclusion limits from direct searches, but they also have to reproduce the Higgs properties correctly. The agreement with respect to the mass can easily be read off a spectrum file; for the rates this is usually not so easy. One can parametrize how 'SM-like' the couplings of a particular scalar are by considering the ratio
\[ r_{\phi XY} = \frac{c^{SUSY}_{\phi XY}}{c^{SM}_{h XY}} \,. \]
Here, c^{SUSY}_{\phi XY} is the calculated coupling between a scalar φ and two SM particles X and Y for a particular parameter point in a particular model. This coupling is normalized to the SM expectation for the same interaction. Nowadays, all r_{\phi XY} are constrained to be rather close to 1 if φ is to be associated with the SM Higgs. SARAH uses the information already available from the calculation of the decays to also obtain values for r_{\phi XY}. Of course, the γZ channel is missing here as well, and r_{\phi\gamma Z} is therefore always set to 0.
Output All necessary routines to calculate the Higgs coupling ratios are included by default in the Fortran output for SPheno.

Constraints on new-physics scenarios come not only from direct searches and the Higgs mass observation, but also from the measurement of processes which happen only very rarely in the SM and/or which are known to a very high accuracy. These are in particular flavour-violating observables. When using the SPheno output of SARAH, routines for the calculation of many quark and lepton flavour violating (QFV, LFV) observables are generated.

Lepton flavour violation
The radiative decays of a lepton into another lepton and a photon (Br(l_i → l_j γ)) and the purely leptonic three-body decays of leptons (Br(l → 3l')) are included. Also flavour-violating decays of the Z boson (Br(Z → l l')) are tested by SARAH/SPheno. Moreover, there are also semi-hadronic observables in the output: μ-e conversion in nuclei (CR(μ − e, N)), where the considered nuclei are N = Al, Ti, Sr, Sb, Au, Pb, as well as decays of τ's into pseudo-scalars, Br(τ → l + P) with P = π, η, η′.
The approach SARAH takes to generate the routines for all these observables is similar to the one used for the loop calculations needed for the radiative corrections to the masses: generic formulas for the Wilson coefficients of the relevant operators are implemented; SARAH generates all possible one-loop diagrams and uses the generic expressions to get their amplitudes. In this context, not only all possible particles in the loops are included, but also all different propagators of the penguin diagrams are considered. Thus, not only the photonic penguins, which are often considered dominant in many processes, are taken into account: also all Higgs, Z and, if existing, Z′ penguins are generated. After the calculation of the Wilson coefficients, these are combined to calculate the observables. This can easily be done by using expressions from the literature, which are usually model independent.
With the development of the FlavorKit [146] interface, all information to calculate flavour observables is no longer hard-coded in SARAH but provided by external files. This makes it possible for the user to extend the list of flavour observables when necessary. The FlavorKit is an automatization of the procedure presented in Ref. [269] to implement B⁰_{s,d} → l l̄ in SARAH and SPheno. Users interested in the internal calculation might take a look at these two references.
Some other observables which are measured with high precision are also calculated by the combination SARAH-SPheno: electric dipole moments (EDMs), anomalous magnetic moments of leptons ((g − 2)_l) and δρ.

Output When generating SPheno code with SARAH, the above-listed flavour and precision observables are included in the Fortran code. In addition, SARAH also writes LaTeX files with all contributions to the Wilson coefficients from every possible diagram.

Fine-Tuning
A measure for the electroweak fine-tuning was introduced in Refs. [270,271]:
\[ \Delta \equiv \max\,\mathrm{Abs}\big[\Delta_\alpha\big] \,, \qquad \Delta_\alpha \equiv \frac{\partial \ln M_Z^2}{\partial \ln \alpha} \,, \]
where α is a set of independent parameters. ∆_α⁻¹ gives an estimate of the accuracy to which the parameter α must be tuned to get the correct electroweak breaking scale [272]. With this definition the fine-tuning of a given model depends on the choice of which parameters are considered fundamental and at which scale they are defined. By default SARAH takes the scale at which the SUSY-breaking parameters are set: in models where SUSY is broken by gravity this corresponds to the scale of grand unification (GUT scale), while for models with gauge-mediated SUSY breaking (GMSB) the messenger scale would be used. For simplicity, I call both M_Boundary. The choice of the set of parameters α is made by the user.
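The measure is easy to evaluate numerically once M_Z²(α) is available. A hedged sketch (the tree-level relation M_Z² = 2(−m²_{H_u} − |μ|²) is the large-tan β MSSM limit, used here only to illustrate the measure, not SARAH's full numerical setup):

```python
import math

def mz2_toy(mHu2, mu2):
    """Toy tree-level relation M_Z^2 = 2(-m_Hu^2 - |mu|^2)."""
    return 2.0*(-mHu2 - mu2)

def fine_tuning(alpha, mz2_of_alpha, rel=1e-5):
    """Delta_alpha = |d ln M_Z^2 / d ln alpha| via a symmetric finite
    difference in ln(alpha)."""
    up, dn = alpha*(1.0 + rel), alpha*(1.0 - rel)
    return abs(math.log(mz2_of_alpha(up)/mz2_of_alpha(dn))/math.log(up/dn))
```

For |μ|² much larger than M_Z², the measure grows like 2|μ|²/M_Z², i.e. heavier Higgsinos imply a larger tuning, which is the qualitative statement behind the naturalness discussion in the introduction.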
To calculate the fine-tuning in practice, an iteration of the RGEs between M_SUSY and M_Boundary is performed using the full two-loop RGEs. In each iteration one of the fundamental parameters is slightly varied, and the running parameters at M_SUSY are calculated. These parameters are used to solve the tadpole equations numerically with respect to all VEVs and to re-calculate the Z-boson mass. To give an even more accurate estimate, one-loop corrections to the Z mass stemming from Π^T_Z can also be included.
Output A fine-tuning calculation is optionally included in the Fortran output for SPheno.

Summary
SARAH derives a lot of information about a given model, which can be used in different interfaces to study the model in all detail. In general, one can get (i) LaTeX files, (ii) a spectrum generator based on SPheno, (iii) model files for different HEP tools.

LaTeX All analytical information derived about a model can be exported to LaTeX files. These files provide the following information in a human-readable format: (i) a list of all superfields as well as component fields for all eigenstates; (ii) the superpotential and important parts of the Lagrangian, like soft-breaking and gauge-fixing terms added by SARAH; (iii) all mass matrices and tadpole equations; (iv) the full two-loop RGEs; (v) analytical expressions for the one-loop self-energies and tadpoles; (vi) all interactions and the corresponding Feynman diagrams; (vii) details about the implementation in SARAH. Separate files are also generated for the flavour observables, showing all contributing diagrams with their amplitudes.

Spectrum Generator SARAH 3 has been the first 'spectrum-generator-generator': using the derived information about the mass matrices, tadpole equations, vertices, loop corrections and RGEs of a given model, SARAH writes Fortran source code for SPheno. Based on this code the user gets a fully functional spectrum generator for a new model. The features of a spectrum generator created in this way are discussed in the following sections.

Example - Part I: The B-L-SSM and its implementation in SARAH
In this section and the subsequent ones I'll discuss the implementation of the B-L-SSM in SARAH and how all phenomenological aspects of this model can be studied. The B-L-SSM is for sure not the simplest extension of the MSSM, but it provides many interesting features. There are some subtleties in its implementation which won't show up in singlet extensions, for instance.
I hope that the examples presented in the following sections show how even such a complicated model can be studied with very high precision without too much effort. Applying the same methodology to other SUSY or non-SUSY models should be straightforward. However, I can't discuss here all topics which might be interesting and useful for some models. In particular, I won't show how models with threshold scales are implemented. SARAH supports thresholds at which heavy particles get integrated out and at which the gauge symmetries either change or not. Users interested in that topic might take a look at the manual as well as at the implementations of the seesaw models or the left-right symmetric models. A brief summary of the general approach to include thresholds is also given in Appendix A.5.
In the first part of this section I'll give a short introduction to the B-L-SSM, before I come to the tools which are going to be used. The implementation of the B-L-SSM in SARAH is discussed in sec. 4.4 and its evaluation is shown in sec. 4.5. The next section explains what can be done with the model using just Mathematica: it is shown how the LaTeX output is generated, how mass matrices and tadpole equations can be handled with Mathematica, and how RGEs are calculated and solved. Sec. 6 is about the interface between SARAH and SPheno; the mass spectrum calculation is explained, as well as what else can be obtained with SPheno: decays and branching ratios, flavour and precision observables, and the fine-tuning. Afterwards, we include more tools in our study in sec. 7: HiggsBounds/HiggsSignals to test Higgs constraints, Vevacious to check the vacuum stability, MicrOmegas to calculate the dark matter relic density and direct detection rates, WHIZARD/O'Mega to produce monojet events, and MadGraph to make a simple dilepton analysis. Sec. 8 is about parameter scans and how the different tools can be interfaced.

The model
Supersymmetric models with an additional U (1) B−L gauge symmetry at the TeV scale have recently received considerable attention: they can explain neutrino data, they might help to understand the origin of R-parity and its possible spontaneous violation [273][274][275][276][277], and they have a rich phenomenology [278]. In the R-parity conserving case, these models come with a new Higgs which mixes with the MSSM one [199], they provide new dark matter candidates [279], and can have an impact on the Higgs decays [280]. In both cases of broken and unbroken R-parity these models have interesting consequences for LHC searches [281][282][283][284].

Particle content and superpotential
We study the minimal supersymmetric model in which the SM gauge sector is extended by a U(1)_B−L gauge group and in which R-parity is not broken by sneutrino VEVs. This model is called the B-L-SSM. In this model the matter sector of the MSSM is extended by three generations of right-handed neutrino superfields ν̂^c and by two fields, η̂ and η̄̂, which are responsible for B − L breaking. These fields carry lepton number 2 and are called 'bileptons'. The chiral superfields and their quantum numbers are summarized in Tab. 1. The superpotential of the B-L-SSM is given in eq. (4.2), where i, j are generation indices and all colour and isospin indices are suppressed. The first line is identical to the MSSM, and all new terms are collected in the second line of eq. (4.2). After B − L breaking a Majorana mass term ∝ Y^x v_η for the right-handed neutrinos is generated. This term causes a mass splitting between the CP-even and CP-odd parts of the complex sneutrinos.
The additional soft-breaking terms compared to the MSSM are those corresponding to the new fields and superpotential couplings. The electroweak and B − L symmetries are broken to U(1)_em by VEVs of the neutral Higgs doublet components and of the bileptons. In analogy to the MSSM definition tan β = v_u/v_d, we call the ratio of the two bilepton VEVs tan β' = v_η/v_η̄. As mentioned above, the Majorana mass term causes a mass splitting between the CP-even and CP-odd parts of the right sneutrinos. This makes it necessary to define separate CP-even and CP-odd sneutrino states ν̃^R_i and ν̃^I_i, with i = 1, 2, 3 the generation index of the sneutrinos.

Gauge kinetic mixing
The particle content of the B-L-SSM gives rise to gauge-kinetic mixing because the matrix of summed charge products, with indices A and B running over all U(1) groups and φ over all superfields, is not diagonal. For the particle content of Tab. 1 its off-diagonal entries do not vanish. Here, N contains the GUT normalization of the two Abelian gauge groups, for which we take √(3/5) for U(1)_Y and √(3/2) for U(1)_B−L. From eq. (4.8) we see that even if gauge-kinetic mixing vanishes at one scale, it is induced again by RGE running and must therefore be included. However, as long as the two Abelian gauge groups are unbroken, we can perform a change of basis. This freedom is used to go to a basis in which electroweak precision data is respected in a simple way: by choosing a triangular form of the gauge coupling matrix, the bilepton contributions to the Z mass vanish. The relations between the gauge couplings in the two bases are given in Ref. [285].
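The rotation to the triangular basis can be made explicit with a few lines of numpy. The coupling values below are invented, and which off-diagonal entry is set to zero is a convention; the sketch only illustrates that an orthogonal redefinition of the two U(1) gauge fields removes one mixing coupling while leaving the physical combination G Gᵀ unchanged:

```python
import numpy as np

# Illustrative 2x2 Abelian gauge-coupling matrix in the (Y, B-L) basis;
# the off-diagonal entries are generated by RGE running (kinetic mixing).
G = np.array([[0.46, 0.05],
              [0.03, 0.55]])

# A change of basis of the two U(1) gauge fields acts as G -> G @ O with O
# orthogonal. Choose O such that the lower-left entry of G @ O vanishes
# (triangular form: the B-L charge then couples to only one boson).
n = np.hypot(G[1, 0], G[1, 1])
O = np.array([[ G[1, 1], G[1, 0]],
              [-G[1, 0], G[1, 1]]]) / n
G_tri = G @ O

g1, gtilde = G_tri[0, 0], G_tri[0, 1]  # diagonal and mixing coupling
gBL = G_tri[1, 1]                      # B-L coupling in the new basis
```

Since O is orthogonal, G_tri @ G_tri.T equals G @ G.T, i.e. all physical quantities built from the coupling matrix are unchanged by the rotation.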

Mass eigenstates
After electroweak and B − L breaking and also because of gauge-kinetic mixing there is a mixing between the neutral SUSY particles from the MSSM and the new sector.
Neutral gauge bosons: In the gauge sector, the neutral gauge bosons B, W³ and B' mix. This gives three mass eigenstates γ, Z and Z', which are related to the gauge eigenstates by a rotation matrix that can be expressed by two angles Θ_W and Θ'_W. The entire mixing between the B − L and the SM gauge bosons is encoded in Θ'_W, which can be approximated by [286]

tan 2Θ'_W ≈ 2g̃ √(g₁² + g₂²) / (g̃² + 16 (x/v)² g_BL² − g₁² − g₂²) ,

with g̃ the off-diagonal gauge coupling, v² = v_u² + v_d² and x² = v_η² + v_η̄².

Neutral Higgs bosons: In the Higgs sector, the CP-even and CP-odd parts of the doublets mix with the corresponding CP eigenstates of the bileptons. This leads to four physical scalar Higgs particles and four pseudo-scalars. Two pseudo-scalars are physical; the other two are the Goldstone bosons of the Z and Z'.
Neutralinos: There are seven neutralinos in the model, which are admixtures of the three neutral gauginos, the two Higgsinos and the two bileptinos.
Neutrinos and sneutrinos: There are six Majorana neutrinos which are superpositions of the left- and right-handed neutrino gauge eigenstates. Similarly, the sneutrinos are admixtures of the left and right sneutrino gauge eigenstates. Because of the mass splitting due to the Majorana mass term, the CP-even and CP-odd parts mix separately, and there are 12 real mass eigenstates.

Constrained model
The B-L-SSM has 55 additional parameters, including all phases, compared to the general MSSM. Therefore, an organizing principle which relates these parameters and reduces the number of free ones is often helpful. We choose a CMSSM-like setup inspired by minimal supergravity and set the boundary conditions at the GUT scale, taken to be the scale of gauge coupling unification. We assume that gauge-kinetic mixing is absent at the GUT scale but gets induced below it; thus, at the GUT scale the off-diagonal couplings vanish and the relations g₁^GUT = g_YY^GUT and g_BL^GUT = g_BB^GUT hold. In minimal supergravity all scalar and gaugino soft masses unify at the GUT scale, i.e. there are only two free parameters to fix all soft masses: m₀ and M_{1/2}, with all scalar soft-mass matrices set to m₀² 1 (1 being the identity matrix) and all gaugino masses to M_{1/2}. We also assume that no gauge-kinetic mixing is present in the gaugino sector at this scale, i.e. the off-diagonal gaugino mass MassBBp vanishes.
For the trilinear soft-breaking terms we use, in analogy to the CMSSM, the relation T_i = A₀ Y_i with a free parameter A₀. The remaining soft terms B_µ and B_µ' are not taken as input; instead, the tadpole equations are, if not stated otherwise, solved with respect to µ, B_µ, µ' and B_µ'. The advantage of this choice is that these parameters don't enter the RGE evaluation of all other terms, so they can be considered independently.
To fix the gauge part of the B − L sector, we need two more input parameters: the mass of the Z' and the ratio tan β' of the bilepton VEVs. Finally, there are two more Yukawa-like matrices, Y_ν and Y_x. Thus, the full set of input parameters consists of m₀, M_{1/2}, A₀, tan β, tan β', M_{Z'}, the signs of µ and µ', and the matrices Y_ν and Y_x. The masses of the light neutrinos in this model are proportional to Y_ν; this gives strong constraints, and we can often neglect Y_ν because its entries must be tiny.
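For later parameter scans (sec. 8) it can be convenient to collect the input set in one place. The dictionary below is purely illustrative: the key names and numerical values are my own choices and not the SARAH/SPheno Les Houches conventions.

```python
# Free parameters of the constrained B-L-SSM as described in the text.
# Names and example values are illustrative, not tool conventions.
blssm_input = {
    "m0":       1000.0,   # universal scalar mass at M_GUT [GeV]
    "M12":      1500.0,   # universal gaugino mass at M_GUT [GeV]
    "A0":      -1500.0,   # universal trilinear coupling [GeV]
    "TanBeta":    10.0,   # v_u / v_d
    "TanBetaP":    1.05,  # v_eta / v_etabar (close to 1, see text)
    "MZp":      2500.0,   # Z' mass [GeV]
    "SignMu":      1,     # signs of mu and mu'; their absolute values
    "SignMuP":     1,     #   follow from the tadpole equations
    "Yx": [[0.4, 0.0, 0.0], [0.0, 0.4, 0.0], [0.0, 0.0, 0.4]],
    "Yv": [[0.0, 0.0, 0.0], [0.0, 0.0, 0.0], [0.0, 0.0, 0.0]],  # tiny, neglected
}
```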

Setup
We want to study all aspects of the B-L-SSM: the mass spectrum, decays, flavour observables, Higgs constraints, dark matter, vacuum stability and collider physics. For this purpose, we make use not only of SARAH but also interface it to several other public tools. To simplify the following presentation, I'm going to assume that all tools are installed in the same directory $PATH. For SARAH itself it is sufficient to download and extract the package; X.Y.Z has to be replaced by the version which was downloaded, and a symbolic link from the directory SARAH to SARAH-X.Y.Z can be created. There is no compilation necessary: SARAH can directly be used with any Mathematica version between 7 and 10.
Installation of all other tools: For the installation of all other tools, please check the manuals or readme files of these tools. Global symmetries: SARAH supports Z_N as well as U(1) global symmetries, which are defined in the array Global. Usually, one considers the MSSM with conserved R-parity; this discrete symmetry is defined via a corresponding entry in Global. First, the kind of the symmetry is defined (Z[N] with integer N, or U[1]). Note that Z_N symmetries are always understood as multiplicative symmetries. The second entry gives a name to the symmetry. For a U(1) there is one specific name which can be used to define R-symmetries: RSymmetry; in that case, the R-charges of the SUSY coordinates are considered as well. There are two possibilities to define the charges of SUSY fields with respect to a global symmetry: 1. if, in the definition of the vector or chiral superfields (explained below), only one quantum number is given per superfield per global symmetry, this number is used for the superfield itself as well as for its component fields.
2. if a list with three entries is given as the charge of a vector or chiral superfield, the following convention applies: for chiral superfields, the first entry is the charge of the superfield, the second one of the scalar component, the third one of the fermionic component. For vector superfields, the second entry refers to the gaugino, the third to the gauge boson.
With these conventions, a suitable definition of the global symmetries assigns R-parity ±1 to the corresponding states. In principle, the B-L-SSM has no fundamental global symmetry: R-parity is just a remnant of the broken U(1)_B−L. However, it turns out to be helpful to keep the standard definition of R-parity, since we can use this Z₂ to get the relic density with MicrOmegas. Sometimes one also uses matter parity to have an additional Z₂ in models with gauged B − L [288,289].
Gauge symmetries: The next step in defining a SUSY model is to fix the gauge sector. This is done by adding one entry to the array Gauge for each gauge group; for SUSY models, the corresponding vector superfield is set automatically for each entry. Taking the SM gauge groups as an example, the entries are structured as follows: first, the names of the gauge fields are given, and the second entry defines the kind of the group. The third entry gives a name to the gauge group and the fourth entry fixes the name of the corresponding gauge coupling. The fifth entry defines whether the group will be broken later. The last entry sets the global charges of the gauginos and vector bosons of the corresponding vector superfield, using the conventions explained above: here, the vector superfields as well as their fermionic components get R-parity −1, while the spin-1 states have R-parity +1.
For the B-L-SSM only a fourth entry in Gauge is needed. We call the group Bp, i.e. the vector boson will be named VBp by SARAH and the gaugino fBp, and for the gauge coupling we choose the name gBL. Since this is the second U(1) besides hypercharge, SARAH will generate two off-diagonal gauge couplings g1BL and gBL1 stemming from kinetic mixing, see also sec. 2. The soft masses for the gauginos which are added by SARAH are MassB, MassWB, MassG and MassBp, and as a consequence of gauge-kinetic mixing MassBBp appears as well.
Chiral superfields: Chiral superfields in SARAH are defined via the array SuperFields. The conventions are as follows: first, a name for the superfield is given, in the second entry the number of generations is set, and in the third entry the names of the isospin components are defined. Afterwards, the transformations under the different gauge groups are given, and the last entries set the charges under the global symmetries. For Abelian groups the charge itself is given; for non-Abelian groups the dimension is given as an integer, where conjugated representations are negative. If the dimension is not unique, the Dynkin labels can be given in addition. The treatment of higher-dimensional representations is different for groups which get broken and those which stay unbroken: 1. Representations with respect to groups which get broken: in that case it is convenient to define higher-dimensional representations as tensor products of the fundamental representation.
2. Representations with respect to groups which do not get broken: in that case the representation is treated as a vector with the appropriate dimension. To do this, SARAH makes use of the generators as well as the Clebsch-Gordan coefficients provided by Susyno.
We have to deal here only with doublets under a broken SU (2), i.e. we need a vector of length two for the isospin components.
The particle content of the MSSM consists of the squark superfields q, d and u, the lepton superfields l and e, as well as the two Higgs doublet superfields Hd and Hu. We have three generations of all matter fields and the usual charges with respect to the SM gauge groups, and the fields are defined accordingly in SuperFields. The SARAH conventions for the component fields of a superfield are: scalars start with S and fermions with F, e.g. the left down-squark is called SdL and the right lepton FeR. The soft masses for the scalars are mq2, ml2, mHd2, etc.
In the B-L-SSM, we first have to define the U(1)_B−L charges of all MSSM fields. SARAH includes a factor 1/2 for all U(1) charges, as is usually done by convention for the hypercharge. Therefore, quark superfields have B − L charge ±1/6 and lepton superfields carry ±1/2. The other changes are the definition of the right sneutrino superfield vR, which comes in three generations and is a gauge singlet under the SM gauge groups, and of the bileptons C1 and C2, which are also SM singlets and come with B − L charge ±1 according to the conventions just mentioned. We see here that we don't have to define the charge indices and their contraction: all of this is done automatically by SARAH, and a term like Yu u.q.Hu gets interpreted internally with all gauge indices contracted. Eigenstates: In the MSSM and the B-L-SSM we have to deal with two sets of eigenstates. First, the gauge eigenstates, which will always be present; these states are called GaugeES by default. Second, the mass eigenstates after symmetry breaking, which we call EWSB. Both sets are defined in the array NameOfStates. Sometimes it might be useful to also use intermediate states to make step-wise rotations from the gauge to the final matter eigenstates; therefore, the length of NameOfStates is not restricted. In the B-L-SSM we have to extend DEFINITION[EWSB][VEVs] to also give VEVs to the bileptons; here, we use the same normalization as for the electroweak VEVs. As mentioned in sec. 4.1.1, the sneutrinos also split into their real and imaginary parts because of the Majorana mass term. We can define this splitting in DEFINITION[EWSB][VEVs] as well, keeping 0 for the VEVs; adding a non-zero VEV for these particles would allow one to study spontaneous R-parity violation. (Note: one has to be careful with the rotations of charged vector bosons like W±: the mass matrix in the basis (W¹, W²) is diagonal, and the rotation matrix obtained from diagonalizing it would just be the identity matrix.)
In that case it is inevitable to give the standard parametrization explicitly. Here, i = 1, 2, 3 is the generation index for the gauge eigenstates d̃_L, d̃_R; j = 1, . . . , 6 is the generation index for the mass eigenstates d̃; Z^D is the rotation matrix, and α is a colour index. In the case of charginos, first the two basis vectors are defined before the names for the mass eigenstates and the rotation matrices follow. Thus, line 105 leads to relations in which the two rotation matrices U⁻ (Um) and U⁺ (Up) diagonalize the chargino mass matrix. In principle, one can also add the definition of Dirac spinors for the gauge eigenstates in exactly the same way. However, since we are not interested in performing calculations for gauge eigenstates, I skip this part here. Interested readers can take a look at Appendix C.1 to see the entire model file of the B-L-SSM implementation in SARAH.

Parameter definitions
When the changes in B-L-SSM.m are done, the model is ready to be used with SARAH. While in principle all calculations in Mathematica can already be performed, the different outputs need some more information; these are mostly formal points. The possible options to define the properties of a parameter in the file parameters.m are the following: • Description: defines a string to describe the parameter.
• OutputName: defines a string which is used for the parameter in the different outputs. No special characters should be used to be compatible with C++ and Fortran.
• Real: defines if a parameter should be considered as real (True) or complex (False). Default is False.
• Form: can be used for matrices to define if it is diagonal (Diagonal) or a scalar (Scalar). By default no assumption is made.
• LaTeX: defines the name of the parameter used in the LaTeX output. Standard LaTeX language should be used.
• GUTnormalization: defines a GUT normalization for an Abelian gauge coupling.
• Dependence: defines a dependence on other parameters which should always be used.
• DependenceOptional: defines a dependence which is optionally used in analytical calculations.
• DependenceNum: defines a dependence which is used in numerical calculations.
• DependenceSPheno: defines a dependence which is just used by SPheno.
• MatrixProduct: can be used to express a matrix as product of two other matrices. This can be used for instance to relate the CKM matrix to the quark rotation matrices.
• LesHouches: defines the position of the parameter in a Les Houches spectrum file. For matrices just the name of the block is defined, for scalars the block and an entry has to be given: {block, number}.
• Value: assigns a numerical value to the parameter.
Many of the above definitions are just optional and are often not needed. I'll show some parts of parameters.m to define the properties of new parameters in the B-L-SSM, or to change properties of MSSM parameters. However, I won't show here all changes compared to the MSSM but pick just some important and interesting cases. The full list of changes is given in Appendix C.2.
Gauge sector: In the gauge sector we have three new gauge couplings: the one corresponding to the new B − L gauge group and the two couplings induced by gauge-kinetic mixing. For all three couplings we define an output name, a LaTeX name and the position in the Les Houches file, which will become important later.
The output names are chosen as gYB, gBY and gBL, and gBL is put at position {gauge, 4} of the Les Houches file. Gauge couplings are by default taken to be real, i.e. it is not necessary to define this explicitly. We also used here √(3/2) as the GUT normalization of the B − L charge; the GUT normalizations of the two off-diagonal couplings are automatically set by SARAH. Similarly, the properties of the two new gaugino mass parameters are set, see Appendix C.2.
We have seen in eq. (4.11) how the γ−Z−Z' rotation matrix can be parametrized by two angles.
For this we use the Dependence statement for the parameter ZZ, which labels the rotation matrix in the neutral gauge sector, and we explicitly set this rotation matrix to real (Real -> True). The reason why we also add an output name is that this matrix shows up internally in SPheno when it diagonalizes the gauge boson mass matrix.
Since the rotation matrix is completely defined by the two rotation angles, it is not necessary to include it in a spectrum file; therefore, LesHouches -> None is used. In eq. (4.12) an approximation for the new angle Θ'_W was given. It might sometimes be useful to use this approximation in numerical calculations in Mathematica, and therefore it is included. However, the better method for high-precision calculations with SPheno is to use the numerical result for the rotation matrix calculated by SPheno when diagonalizing the vector boson mass matrix. To translate this matrix into the rotation angle, we use Θ'_W = arccos |Z^{γZZ'}_{33}|. SPheno calculates this value, uses it internally to get the vertices, and writes it into the block ANGLES as entry 10 of the spectrum file. The needed definitions are Real -> True, DependenceSPheno -> ArcCos[Abs[ZZ[3,3]]], OutputName -> TWp and LesHouches -> {ANGLES, 10}.
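The translation between the numerically obtained rotation matrix and the angle can be illustrated with a small toy example. The parametrization below is just one orthogonal matrix built from two angles with the same qualitative structure as eq. (4.11); its explicit form and the numerical angle values are illustrative assumptions:

```python
import numpy as np

def rot(thW, thWp):
    """Toy (B, W3, B') -> (gamma, Z, Z') rotation built from two angles;
    an illustrative parametrization, not the exact eq. (4.11)."""
    cw, sw = np.cos(thW), np.sin(thW)
    cp, sp = np.cos(thWp), np.sin(thWp)
    return np.array([[ cw, sw * cp, sw * sp],
                     [-sw, cw * cp, cw * sp],
                     [0.0,     -sp,      cp]])

ZZ = rot(0.49, 0.02)

# SPheno-style translation of the numerical matrix back into the angle:
thetaWp = np.arccos(abs(ZZ[2, 2]))
```

Because the (3,3) entry of the matrix is cos Θ'_W in this parametrization, the arccos of its absolute value recovers the mixing angle directly.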

Rotations in matter sector
In the matter sector we have to define new rotation matrices for the CP-even and CP-odd sneutrinos and for the neutrinos. The definitions are very short and just include the descriptions, the LaTeX commands, the Les Houches and output names. For the neutrino rotation matrix the entries read LaTeX -> "U^V", LesHouches -> UVMIX, OutputName -> UV; similarly, ZVR and ZVI are set. In addition, some rotation matrices change compared to the MSSM. There is no need to modify the definition of the neutralino rotation matrix because no assumptions are made for it in the MSSM. However, the rotation matrices for the scalar and pseudo-scalar Higgs fields are parametrized in the MSSM by two angles, and this parametrization is used to express dependences optionally applied in the MSSM. Since these rotation matrices have grown to 4 × 4 matrices in the B-L-SSM, we can no longer make use of that. Therefore, we take the definitions of the MSSM and overwrite the dependences by None (e.g. Value -> None, LesHouches -> PSEUDOSCALARMIX, OutputName -> ZA for the pseudo-scalar mixing matrix). We referred to these in parameters.m of the B-L-SSM by using the same string as Description, but overwrote locally the dependences which changed.

Particles definitions
We turn now to the particles. To define the properties of any particle present in the model, several options are available which can be put in particles.m: • Description: a string for defining the particle.
• PDG: defines the PDG numbers of all generations.
• PDG.IX: defines a nine-digit number for the particle, supporting the proposal of Refs. [290,291]. By default, the entries of PDG are used in the output.
• ElectricCharge: defines the electric charge of a particle in units of e. This information is exported to the CalcHep/CompHep and UFO model files.
• Width: can be used to define a fixed numerical value for the width.
• Mass: defines how MC tools obtain a numerical value for the mass of the particle. (To switch to the new PDG.IX numbering scheme, one has to set UsePDGIX = True either at the beginning of a SARAH session or in the model files.) A numerical value can be given; the keyword Automatic assigns that SARAH derives the tree-level expression for the mass from the Lagrangian, which is then evaluated using the values of the other parameters.
The keyword LesHouches assigns that this mass is calculated numerically by a spectrum generator like SPheno and provided via a Les Houches spectrum file. This is usually the preferred method because loop corrections are included as well.
• OutputName: defines the name used in the different outputs for other codes.
• LaTeX: defines the name of the particle used in the L A T E X output.
• FeynArtsNr: the number assigned to the particle in the FeynArts output. • LHPC: defines the column and colour used for the particle in the steering file for the LHPC Spectrum Plotter. All colours available in gnuplot can be used.
• Goldstone: for each massive vector boson the name of the corresponding Goldstone boson is given.

The properties of gauge eigenstates can be defined in the array ParticleDefinitions[GaugeES]. However, since calculations are rarely done for these states, it is often sufficient to give just the LaTeX names; of course, all other information can also be set, as for the mass eigenstates, if demanded. For the Z' and its ghost, for instance, the entries include Mass -> LesHouches, ElectricCharge -> 0, LaTeX -> "{Z'}" and OutputName -> "Zp", respectively Mass -> Automatic, LaTeX -> "\\eta^{Z'}" and OutputName -> "gZp". For the first two PDG entries of the pseudo-scalars we used 0: this means that these two states are not physical, but the Goldstone bosons of the massive neutral gauge bosons. In a similar way, the additional PDGs for the neutralinos Chi and neutrinos Fv are defined, see Appendix C.3. Running the check function performs all checks listed in sec. 2.3. With our implementation above there will be some messages. The last message is caused because it is not obvious from the Lagrangian that there is no γ−Z' mixing: the relations between the rotation angles Θ_W, Θ'_W and all five involved gauge couplings (g_YY, g_BY, g_YB, g_BB, g₂) are not known at this stage. Hence, it is not obvious from the Lagrangian that these terms cancel. The reason for the first two messages is that SARAH found terms in the Lagrangian of the mass eigenstates which seem to cause a mixing between ν̃^R and ν̃^I as well as between h and A_h. The origin of this is that we didn't define terms like B_µ, B_µ' or Y_ν explicitly as real; therefore, SARAH considers them to be complex. In the complex case, a mixing between CP-even and CP-odd scalars would occur unless specific relations among the phases are satisfied. This mixing would be missed in our implementation so far.
We just have to keep in mind that the model is supposed to be used for the CP-conserving case, or only for parameters which satisfy the conditions under which the mixing between CP-even and CP-odd states vanishes. Of course, one can also use the entries in parameters.m to define the parameters explicitly as real. Even if a study of CP violation is clearly beyond the scope of this example, I want to show briefly what has to be changed to incorporate it. First, the definition of the VEVs has to be changed by introducing relative phases between the scalars.

However, this is less common and mainly supposed to be used for generating Vevacious model files.
After introducing the phases, also the definition of the rotations must be changed to respect the mixing between CP-even and CP-odd states. The rotation matrices and mass eigenstates in particles.m and parameters.m have to be adjusted accordingly: the matrices ZVR and ZH would no longer be real, the states Sv need twelve PDGs, and hh includes two Goldstones and six physical scalars. Also the assignments of the Goldstones have to be adjusted in particles.m. However, as I said, this is beyond the scope of the discussion here. We now start to study the CP-conserving version of this model in detail.
5 Example - Part II: Masses, vertices, tadpoles and RGEs with Mathematica

After the implementation of the model is done and all consistency checks are passed, we can generate a pdf file to get a first overview of the B-L-SSM. The .tex files generated by SARAH include all information about the model, e.g. particle content and superpotential, as well as everything SARAH derives, like RGEs, masses and vertices for specific eigenstates. To get this output, one first has to run the command ModelOutput, which tells SARAH what it has to calculate and for which eigenstates. We are just interested in the eigenstates after EWSB:

ModelOutput [ EWSB ];
This command always calculates all vertices for the considered eigenstates. Loop corrections and RGEs are not included by default; to include them, the user can choose IncludeRGEs -> True and IncludeLoopCorrections -> True. This information will then also be included in the LaTeX files. In principle, one can also use options of ModelOutput to directly generate the LaTeX output and all other outputs for the different codes by setting WriteTeX -> True, WriteFeynArts -> True, WriteCHep -> True, WriteWHIZARD -> True, WriteUFO -> True. However, we won't make use of this here but discuss each output separately.
When SARAH is done with all calculations, one can run the LaTeX output command. We used here the option that not only the information about the model and its physics is included, but also details about the implementation in SARAH are attached to the pdf. Additional options which are available are: • FeynmanDiagrams: defines whether the Feynman diagrams should be included in the output; by default they are. To draw the Feynman diagrams, SARAH makes use of the LaTeX package feynmf [292]. • ShortForm: defines whether a shorter notation for the vertices should be used, by not using a separate equation environment for each vertex and by skipping the Feynman diagrams. When SARAH is done with the output, the .tex files are stored in the corresponding output directory. The main file, which can be compiled with pdflatex, is B-L-SSM EWSB.tex. In case the Feynman diagrams are included, the compilation is a bit more involved because mpost has to be run for each diagram after the first run of pdflatex; afterwards, a second run of pdflatex is needed. SARAH provides a shell script MakePDF.sh which takes care of that; the only additional step needed is to make the script executable if it is not.

Particles and parameters of the B-L-SSM in SARAH
The reason why we have chosen the option WriteSARAH -> True is that this includes helpful information about the implementation: the names of all particles and parameters in SARAH are given, and it is shown how these objects are called in LaTeX and in the output for other codes. This output is given for all eigenstates. However, to follow the subsequent examples only the mass eigenstates after EWSB are necessary, so I skip the output for the other eigenstates and show here only the corresponding tables for the mass eigenstates.
Particles: The whole list of fermions, scalars, vector bosons and ghosts is given in Tabs. 2-5. One sees not only the names of each particle used at the different stages, but also which indices the particles carry. In the case of fermions, the Dirac spinors together with their Weyl components are listed. Alternatively, an overview of all particles can be obtained directly during the Mathematica session. Table 2. Fermions in the B-L-SSM after electroweak symmetry breaking.
Table 3. Scalars in the B-L-SSM after electroweak symmetry breaking.
Table 4. Vector bosons in the B-L-SSM after electroweak symmetry breaking.
Table 5. Ghost particles in the B-L-SSM after electroweak symmetry breaking.
Parameters: All parameters which are present at some stage in the B-L-SSM are listed in Tab. 6. This includes not only the fundamental parameters like gauge couplings, superpotential couplings and soft-breaking terms, but also rotation matrices and angles, VEVs, as well as auxiliary parameters which just show up via dependences defined in parameters.m. One can also get the entire list of parameters during the Mathematica session. Table 6. All parameters in the B-L-SSM with their names used internally by SARAH as well as the names for the LaTeX and other outputs.

Extracting mass matrices, tadpole equations and vertices
We can finally do some physics. At first, we want to study the analytical properties of the B-L-SSM within Mathematica. For this purpose I'll give some examples of how to extract mass matrices, vertices or tadpole equations and how to deal with them. To improve the readability, I'll give the input in Mathematica format, but the output of Mathematica will be translated into LaTeX. If the user just wants to see the expressions without modifying them, it is also possible to generate the entire LaTeX output for the model and read the pdf, as shown in the previous section.

Tadpole equations
We start with the tadpole equations. All four minimum conditions are returned via
In addition, we introduced new parameters for $|\mu|^2$ and $|\mu'|^2$. There are mainly two reasons for this: (i) in that way we can solve the equations for $|\mu|^2$; (ii) if X and B[X] appear in the equations, Mathematica interprets B[X] as a function of X instead of as an independent parameter. The consequence is that it cannot solve the equations. To circumvent this, a replacement like B[X] -> BX is necessary.
We are just interested for the moment in the solutions for $|\mu|^2$ and $|\mu'|^2$. They read
For a better understanding, we can make the approximation of vanishing kinetic mixing ($g_{YB} = 0$): in this limit, as we will see below, the vector boson masses are given by $M_Z^2 = \tfrac{1}{4}(g_1^2 + g_2^2)v^2$ and the analogous expression for $M_{Z'}$ in terms of $g_B$ and $x$. In addition, we make the replacements $v_d \to v\cos\beta$, $v_u \to v\sin\beta$, $x_1 \to x\cos\beta'$, $x_2 \to x\sin\beta'$ and express $\beta$, $\beta'$ through $\tan\beta$ (TB) and $\tan\beta'$ (TBp). We find quite simple expressions: the first one is just the one of the MSSM. Thus, any new contribution to $\mu$ comes only from gauge kinetic mixing. The equation for $|\mu'|$ looks very similar. However, the large ratio between $M_Z$ and $M_{Z'}$ gives a much stronger constraint: for radiative symmetry breaking $\tan\beta'$ is usually restricted to be close to 1 to minimize the negative contributions. Fixing numerical values for the remaining parameters, we get the plot shown in Fig. 6. We see that the values of $|\mu'|$ drop quickly and the entire area with $\tan\beta' > 1.1$ is ruled out.
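The qualitative behaviour just described can be sketched numerically. The snippet below assumes an MSSM-like form of the solution for $|\mu'|^2$ in the limit of vanishing kinetic mixing; both the formula and all numerical values are illustrative assumptions, not SARAH output.

```python
def mu_prime_sq(m2_eta, m2_etabar, MZp, tbp):
    """|mu'|^2 from an assumed MSSM-like form of the tadpole solution in the
    limit of vanishing kinetic mixing; formula and numbers are illustrative."""
    t2 = tbp ** 2
    return (m2_eta - m2_etabar * t2) / (t2 - 1.0) - MZp ** 2 / 2.0

# invented soft masses (GeV^2) and a 2.5 TeV Z'
m2_eta, m2_etabar, MZp = 2000.0 ** 2, 1000.0 ** 2, 2500.0
print(mu_prime_sq(m2_eta, m2_etabar, MZp, 1.05))  # positive: a solution exists
print(mu_prime_sq(m2_eta, m2_etabar, MZp, 1.5))   # negative: tan(beta') too large
```

With these sample inputs, $|\mu'|^2$ turns negative already slightly above $\tan\beta' = 1$, mirroring the behaviour of Fig. 6.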

Mass Matrices
We turn now to the mass matrices. First, I check the approximation for the vector boson masses in the limit of vanishing gauge-kinetic mixing used above. The mass matrix of the neutral gauge bosons is returned via MassMatrix[VectorBoson], where all three possibilities (VectorBoson=VP, VectorBoson=VZ, VectorBoson=VZP) return the same mass matrix. This matrix is block-diagonal for $g_{BY} = g_{YB} = 0$, with an upper $2\times 2$ block which is identical to the SM one. Thus, we see that gauge-kinetic mixing is an important effect in this model because it leads to $Z$-$Z'$ mixing already at tree-level.
In the scalar sector, the mass matrix of the CP-odd scalars is printed via MassMatrix[Ah] and reads
with the gauge-fixing contributions $m^2(Z)\,\xi_Z$ and $m^2(Z')\,\xi_{Z'}$. We see that the matrix is block-diagonal, i.e. there is no mixing between the CP-odd components of the Higgs doublets and the bileptons. However, this statement is only strictly true at tree-level; therefore, we kept the mixing of both sectors in the model definition. In the last line, we replaced $a x^* + a^* x \to 2\,\Re(a x)$. The outcome is this handy list of eigenvalues: we find the expected two massless modes, and the physical pseudo-scalars have at tree-level the same expressions as in the MSSM after replacing the corresponding VEVs and $B$-terms for the $B-L$ sector.
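As a quick numerical illustration of the last statement, one can evaluate the MSSM-like tree-level relation $m_A^2 = B_\mu(\tan\beta + \cot\beta)$ and its $B-L$ analogue; the input numbers below are invented for illustration only.

```python
import math

def mA2(B, tb):
    """MSSM-like tree-level pseudo-scalar mass squared: m_A^2 = B (tb + 1/tb)."""
    return B * (tb + 1.0 / tb)

# invented sample values: B-terms in GeV^2, tan(beta) and tan(beta')
mA_doublet  = math.sqrt(mA2(1.0e6, 10.0))  # MSSM-like state from B_mu
mA_bilepton = math.sqrt(mA2(4.0e6, 1.1))   # B-L analogue from B_mu'
print(mA_doublet, mA_bilepton)
```

Note that for $\tan\beta' \simeq 1$ the bilepton pseudo-scalar mass is essentially $\sqrt{2 B_{\mu'}}$.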
The scalar mass matrix is given by

MassMatrix[hh]
and is a bit more complicated. We parametrize it by
One can see that this matrix is in general not block-diagonal, i.e. there is already a mixing between the MSSM doublets and the bileptons at tree-level. However, all terms $m_{\phi_i\phi_j}$ with $i = d,u$ and $j = \eta,\bar\eta$ are proportional to $g_1 g_{BY} + g_{YB} g_B$, i.e. this mixing is only present if gauge-kinetic mixing is taken into account. That's another reason why gauge-kinetic mixing is in general a very important effect in this model. We can also try to estimate the size of this mixing. For this purpose, we plug the solution of the tadpole equations into the Higgs mass matrix and fix some numerical values: $g_1 = 0.36$, $g_2 = 0.63$, $g_{BL} = 0.5$, $\mu = \mu' = 1$ TeV, $B_\mu = B_{\mu'} = 1$ where NUMBER=1,2,3 should be used to get all three plots shown in Fig. 7. We see in these plots that the mixing is especially large when both states are close in mass, and it can easily be O(10%) and more. We can now turn to the sfermion sector. The matrices there are usually quite lengthy, so I just want to pick out one important effect which can be seen, for instance, in the diagonal entries of the charged sleptons.
The entry corresponding to $\tilde\tau_L$ in the $6\times 6$ mass matrix of the charged sleptons (Se) is shown via

MassMatrix[Se][[3, 3]]
We can rewrite the terms a bit by first assuming that only third-generation Yukawas are non-zero. For this purpose, we expand the sum and set all entries of $Y_e$ to zero except the (3,3) one. In addition, we assume for simplicity that gauge-kinetic mixing vanishes and use the usual replacements for the VEVs. The entry then reads
The important point is the appearance of $M_{Z'}$, which gives large negative contributions because the lower limit on this mass is about 2.5 TeV. Thus, $\tan\beta'$ must be close to 1 to minimize this term and to prevent tachyonic sleptons.
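The size of this effect can be sketched with an order-of-magnitude estimate. I assume here that the new $B-L$ D-term in the slepton masses scales like $M_{Z'}^2 \cos 2\beta'/2$; the precise sign and charge factor are model conventions and are deliberately left out, so the sketch only illustrates why $\tan\beta'$ near 1 keeps the contribution small.

```python
import math

def bl_dterm_size(MZp, tbp):
    """Order-of-magnitude sketch of the new B-L D-term in the slepton masses,
    assumed to scale like MZp^2 * |cos(2 beta')| / 2; the precise sign and
    charge factor are model conventions and deliberately omitted here."""
    return 0.5 * MZp ** 2 * abs(math.cos(2.0 * math.atan(tbp)))

MZp = 2500.0  # GeV, roughly the lower limit quoted in the text
for tbp in (1.0, 1.05, 1.3):
    print(tbp, bl_dterm_size(MZp, tbp))
```

The contribution vanishes for $\tan\beta' = 1$ but already reaches $O(\text{TeV}^2)$ for $\tan\beta' = 1.3$, which is why larger values quickly produce tachyonic states.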
The last mass matrix we want to check is the one of the neutralinos. This mass matrix is returned by

MassMatrix[Chi]
and reads  The upper 4 × 4 block is the one known from the MSSM. The lower 3 × 3 block is the counterpart in B − L sector of this model. Here, we make a similar observation as for the scalar Higgs mass matrix: both blocks are only coupled if gauge-kinetic mixing is taken into account.
In the same way, all other mass matrices of the model can be checked with SARAH, and interesting observations can be made, like the mass splitting between the CP-even and CP-odd sneutrinos.

Vertices
Single vertices We continue with the vertices and show how they can be handled in SARAH. Let's assume one is interested in the coupling between two up-quarks and the neutral CP-even scalars: we find here also projections on the third gauge eigenstate, which come from the $\hat\nu\hat\eta\hat\nu$-term in the superpotential. In general, many vertices get modified with respect to the MSSM, and a discussion of all effects is far beyond the scope of this manuscript. I just want to pick out one more vertex, the electron-Z interaction:

Vertex[{bar[Fe], Fe, VZ}]
We find that this vertex receives important modifications due to the $Z$-$Z'$ mixing: working in the triangle basis, the contributions from $g_{BY}$ vanish. However, compared to the SM expectation the coupling still gets modified by the presence of $\sin\Theta'_W$. This gives strong constraints on the mixing angle $\Theta'_W$, and the same is of course the case for any $Z$-interaction in this model.
All vertices It has already been mentioned that it is also possible to calculate all vertices at once. The command to do this is

MakeVertexList[EWSB]
This creates lists with all vertices of the model. Checking which vertices are modified compared to the MSSM, it turns out that this is the case for any vertex. That's of course not surprising because of the D-term contributions, but it underlines the importance of this effect again.

Analytical results
The full two-loop RGEs of the B-L-SSM are calculated just via

CalcRGEs[];
The options for CalcRGEs are: • TwoLoop: defines if two-loop RGEs should be calculated. This is done by default.
• ReadLists: defines, if the results from previous calculations should be read instead of calculating the RGEs again.
• VariableGenerations: defines if the generations of some particles should be treated as free parameters. The RGEs are then expressed in terms of NumberGenerations[X], where X is the name of the superfield.
• NoMatrixMultiplication: can be set if the RGEs should not be expressed in terms of matrix multiplication but by using sums over indices.
• IgnoreAt2Loop: can be used to define parameters which should be put to zero in the two-loop calculation.
• WriteFunctionsToRun: defines, if a file should be written to evaluate the RGEs numerically in Mathematica. This is done by default and we are going to make use of it.
When the calculation is finished, the full two-loop RGEs are saved in different arrays:
• Gij: anomalous dimensions of all chiral superfields
• BetaMuij: bilinear superpotential parameters ($\mu$, $\mu'$)
• BetaTijk: trilinear soft-breaking parameters ($T_d$, $T_e$, $T_u$, $T_x$, $T_\nu$)
• BetaBij: bilinear soft-breaking parameters ($B_\mu$, $B_{\mu'}$)
• Betam2ij: soft-breaking scalar squared masses
• BetaVEVs: VEVs
Inspecting, for instance, one of the Yukawa β-functions, one sees that gauge-kinetic mixing is also taken into account here: in the limit $g_B = g_{BY} = g_{YB} = 0$ the well-known MSSM result is reproduced. Maybe more interesting are the new features in the gauge sector. To get all one-loop RGEs at once, we can execute
The expressions for $g_2$ and $g_3$ are just the MSSM results, but $g_1$ gets modified by gauge-kinetic mixing. Solving these equations analytically is no longer possible, but we are going to study them numerically.
Finally, we can also check analytical expressions for the soft-breaking terms. We do this using the example of the bilepton masses, which are given in the last two entries of Betam2ij. Even more interesting is the difference between both β-functions: we see from eq. (5.8) that a large mass splitting between both masses is needed to get radiative symmetry breaking. Thus, starting with the same values at the GUT scale, the differences in the β-functions are crucial for whether $B-L$ is broken or not. In the difference, $\sigma_{1,4}$ and $\sigma_{1,1}$ appear, which are abbreviations for often-appearing traces over scalar masses. These can be found in TraceAbbr. If one starts with a model in which all scalars unify, as we have in mind according to sec. 4.2, one gets $\sigma_{1,1} = \sigma_{1,4} = 0$. This is an RGE invariant and always holds. Thus, the only possibility to get a large mass splitting are large values of $Y_x$, $T_x$ and the soft masses $m^2_{\tilde\nu}$, $m^2_\eta$. We will come back to this when we study the model with SPheno.

Numerical results
The RGEs for the gauge couplings demand a closer look. However, the analytical expressions are already a bit complicated at one-loop, and it is hard to learn something from them. Before turning to the full numerical analysis of the entire set of RGEs with SPheno, there is the possibility to study the RGEs in Mathematica first: CalcRGEs creates a file which contains all β-functions in a format which can be used with NDSolve in Mathematica to solve the RGEs numerically. This file is loaded via
<< "$PATH/SARAH/Output/B-L-SSM/RGEs/RunRGEs.m"
In addition, a function is provided to perform the RGE running:

RunRGEs[values, start, finish, Options]
The input is:
• values: all non-zero values of the parameters at the scale where the running starts
• start, finish: the scales where the running starts and ends
Since the running of $g_1$, $g_2$ and $g_3$ at one-loop is the same as in the MSSM in the limit of vanishing gauge kinetic mixing, we should find the same unification at about $2\cdot 10^{16}$ GeV. This can easily be tested. Here, I used as input for the SM gauge couplings the DR values which are found when including the thresholds discussed in sec. 3.6.3 with SUSY states of about 1 TeV. The plot created via these two commands is shown in Fig. 8. We can now use this value, demand a strict unification ($g_1 = g_2 = g_3 = g_B$) and run the RGEs down again. The plot is shown on the right in Fig. 8. The hierarchy is similar to the CMSSM, but the $\tilde B'$ soft mass is smaller than all other gaugino masses, so this particle might provide a new dark matter candidate; I'll come back to that in the next section. We also see that the off-diagonal gaugino mass term runs negative, similar to the off-diagonal gauge coupling. In the limit of vanishing kinetic mixing it is easy to explain the hierarchy of the gaugino masses: combining eq. (3.2) and eq. (3.16) we see that the value $M_i/g_i^2$ is constant at one-loop.
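The running just performed with RunRGEs can be mimicked with a short numeric sketch using the analytic one-loop solution. The β coefficients are the standard MSSM ones, and the starting values are assumed DR-bar inputs at 1 TeV for illustration, not SPheno output; kinetic mixing is neglected.

```python
import math

# one-loop MSSM beta coefficients for the GUT-normalized g1, g2, g3
b = {"g1": 33.0 / 5.0, "g2": 1.0, "g3": -3.0}

# illustrative DR-bar starting values at mu0 = 1 TeV (assumed, not SPheno output)
g0 = {"g1": 0.468, "g2": 0.639, "g3": 1.06}
mu0 = 1.0e3  # GeV

def run(name, mu):
    """Analytic one-loop solution: 1/g^2(mu) = 1/g0^2 - b/(8 pi^2) ln(mu/mu0)."""
    inv = 1.0 / g0[name] ** 2 - b[name] / (8.0 * math.pi ** 2) * math.log(mu / mu0)
    return 1.0 / math.sqrt(inv)

# coarse scan for the scale where g1 meets g2
mgut = min((10 ** (14 + 0.01 * i) for i in range(401)),
           key=lambda mu: abs(run("g1", mu) - run("g2", mu)))
ggut = run("g1", mgut)

# M_i/g_i^2 is one-loop RGE invariant, so a unified M_1/2 at mgut maps onto
# low-scale gaugino masses M_i = (g_i^2 / g_GUT^2) * M_1/2 (here at 1 TeV)
M12 = 1000.0  # GeV, assumed
masses = {k: g0[k] ** 2 / ggut ** 2 * M12 for k in g0}
print(f"mgut = {mgut:.2e} GeV, g_GUT = {ggut:.2f}, masses = {masses}")
```

With these inputs the crossing of $g_1$ and $g_2$ lands near $10^{16}$ GeV with $g_{GUT}\simeq 0.7$, and the familiar gaugino hierarchy $M_3 > M_2 > M_1$ follows directly from the invariance of $M_i/g_i^2$.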
Plugging in the numbers from eq. (5.45), we get the gaugino mass terms.

Example - Part III: Mass spectrum, decays, flavour observables and fine-tuning with SPheno

Calculating the mass spectrum with SPheno
We start now to make use of the different outputs SARAH provides to use the derived information about a model with other tools. Maybe the most important interface is the one to SPheno, which yields a very flexible, fully functional and highly precise spectrum generator for the model under consideration. A similar functionality to get a tailor-made spectrum generator based on SARAH became available with FlexibleSUSY 26. Before we can use SPheno, we have to provide an additional input file for SARAH. I'll start with a description of what this file is supposed to do.

Defining the boundary conditions
In general, there are two different kinds of SPheno versions the user can create, which need a different amount of input: • 'GUT version': in a GUT version of SPheno, RGE running between the electroweak, SUSY and GUT scales is supported. The user can define appropriate boundary conditions at each of these three scales. Furthermore, threshold effects can optionally be included by adding scales at which heavy particles are integrated out. Finally, the user can define a condition which has to be satisfied to identify the GUT scale. The most common choice is the unification scale of the gauge couplings, but other choices like Yukawa unification are possible. In addition, these versions also include the possibility to define the entire input at the SUSY scale and to skip the RGE running to the GUT scale.
• 'Low scale' version: in a low-scale version no RGE running is included; the SPheno version expects all free parameters to be given at the SUSY scale.
I concentrate on the first option because we are interested in a GUT model. Actually, the input for the second version is much shorter and can easily be derived from the information given here. The file to define the properties should be called SPheno.m and must be located in
The exact meaning of this definition will become clear when we discuss the Les Houches input in sec. 6.1.3. In addition, it would also be possible to use the array EXTPAR to define input which is given via the block EXTPAR in the Les Houches file. This might be useful to remove parameters not present in the MSSM from MINPAR; in that case the corresponding lines in SPheno.m would read
This definition is especially important for $\tan\beta$ and $\tan\beta'$ because we will use trigonometric functions with these parameters as arguments. If not defined as real, Fortran might return NaNs.
Tadpole equations We choose to solve the tadpole equations with respect to $\mu$, $B_\mu$, $\mu'$ and $B_{\mu'}$:
Renormalization scale In the very first iteration, when the stop masses are unknown, SPheno needs a crude first guess of the renormalization scale. We choose $m_0^2 + 4M_{1/2}^2$; however, a constant like 1 TeV could also be used. The two lines to define both scales for the B-L-SSM are
GUT condition The condition for the GUT scale in our model is $g_1 = g_2$. However, because of gauge kinetic mixing one has to be careful to choose the correct $g_1$: in the running the general $2\times 2$ gauge coupling matrix is used, i.e. for the check of the GUT scale it is necessary to rotate it to the triangle form:
SPheno.m
ConditionGUTscale = (g1*gBL - g1BL*gBL1)/Sqrt[gBL^2 + gBL1^2] == g2;
We demand that gauge-kinetic mixing vanishes at the GUT scale, so this condition simplifies to g1 == g2. Nevertheless, one should keep the full form to stabilize the numerics in the first few iterations.
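The rotation to the triangle basis used in ConditionGUTscale can be sketched explicitly. One consistent convention is to multiply the $2\times 2$ coupling matrix from the right by an orthogonal matrix chosen such that the (2,1) entry vanishes; the (1,1) entry then reproduces exactly the expression in SPheno.m, and the physical combination $G G^T$ is unchanged.

```python
import math

def triangle_basis(g1, g1BL, gBL1, gBL):
    """Rotate the 2x2 U(1) coupling matrix G = [[g1, g1BL], [gBL1, gBL]] from
    the right by an orthogonal matrix chosen such that the (2,1) entry vanishes.
    Only G.G^T enters physical quantities and is invariant under this rotation,
    so the triangle basis is just a convenient convention."""
    n = math.hypot(gBL, gBL1)
    c, s = gBL / n, -gBL1 / n
    G = [[g1, g1BL], [gBL1, gBL]]
    O = [[c, -s], [s, c]]
    return [[sum(G[i][k] * O[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

# invented sample couplings, only to exercise the rotation
Gt = triangle_basis(0.46, -0.06, 0.05, 0.55)
print(Gt)  # Gt[1][0] vanishes; Gt[0][0] equals the ConditionGUTscale expression
```

One checks easily that Gt[0][0] equals (g1*gBL - g1BL*gBL1)/Sqrt[gBL^2 + gBL1^2] and that the row norms of $G G^T$ are preserved.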
Boundary conditions Now, we define the boundary conditions at the GUT scale. First, we make sure again that the U(1) coupling matrix is in the triangle form. In order to further stabilize the numerics in the first iterations, where the unification might not yet be very good, we average $g_1$ and $g_2$ before we set the couplings in the $B-L$ sector and the off-diagonal ones. All other entries just parametrize the boundary conditions from eqs. (…) in an obvious form. At the SUSY scale we rotate the gauge couplings again to the triangle basis, because the (2,1) entry won't be zero any more due to RGE effects. In addition, we calculate $v_\eta$ and $v_{\bar\eta}$ from the input values of $\tan\beta'$ and $M_{Z'}$. Finally, the input parameters for $Y_x$ and $Y_\nu$ are used here. Since $Y_x$ and $Y_\nu$ are matrices, it is not possible to define them via MINPAR or EXTPAR. Therefore, LHInput[x] is used. With this command, SPheno expects the parameters to be given via the blocks YXIN and YNUIN in the Les Houches file 27 (27 The convention is that the block name of the output is used together with the suffix IN). By convention, the following decays are included that way: (i) all two-body decays of SUSY particles, Higgs states, the top quark and additional vector bosons; (ii) three-body decays of SUSY fermions into three other fermions, and decays of SUSY scalars into another scalar and two fermions.
Precision We are mostly going to neglect neutrino masses in the following. However, if a study of the neutrino phenomenology should be included, it might be necessary to calculate the masses of the neutrino eigenstates with a higher precision than is usually done in SPheno. The reason is the potentially large hierarchy between left- and right-neutrinos. Details about this are given in Appendix A.3.

Obtaining and running the SPheno code
To obtain the SPheno output we run in Mathematica after SARAH is loaded and the B-L-SSM is initialized the command

MakeSPheno[];
The different options are: • ReadLists->True: can be used if all vertices and RGEs have already been calculated for the model and the former results should be used.
• InputFile: defines the name of the SPheno input file. By default SPheno.m is used.
• StandardCompiler: defines the compiler which should be set as the standard in the Makefile. The default is gfortran.
• IncludeFlavorKit: can be set to False to disable the output of flavour observables based on FlavorKit.
When executing MakeSPheno, SARAH first calculates all the information it needs, i.e. it is not necessary that the user has calculated the vertices or RGEs before. When SARAH is done, the source code for SPheno is stored in $SPATH/SARAH/Output/B-L-SSM/EWSB/SPheno/. The compilation of this code is done as follows: one enters the directory of the SPheno installation, creates a new sub-directory and copies the code into it:
$ cd $PATH/SPHENO
$ mkdir BLSSM
$ cp $PATH/SARAH/Output/B-L-SSM/EWSB/SPheno/* BLSSM/
Afterwards, the code is compiled via
$ make Model=BLSSM
and a new executable SPhenoBLSSM is created, which we will use in the following.
2 : in principle, it is possible to define in SPheno.m different boundary conditions for the GUT input, for instance if different SUSY-breaking mechanisms should be studied. This flag can be used to choose between them. Since we haven't made use of that, this flag has no effect here.

Setting the input for SPheno
5 : if put to 1, CP violation is allowed and the phase of the CKM matrix is included.
6 : if put to 1, flavour violation is allowed and all off-diagonal entries in soft- or superpotential parameters can receive non-zero values. If put to 0, the CKM matrix is taken to be the identity matrix.
12 : this flag can be given optionally to fix the SUSY scale to a constant value.
The block SMINPUTS contains the most important values of the SM parameters, like the Fermi constant $G_F$, the strong coupling constant $\alpha_S(M_Z)$, the pole mass of the Z boson, and the third-generation fermion masses $m_t$, $m_b$, $m_\tau$. Other parameters can be set as well, as explained in the SLHA write-ups, but this is usually not necessary.
Table 7. Input parameters for the example points EP1 and EP2. In addition, we use sign(µ) = sign(µ′) = 1 and $Y_\nu = 0$.
We turn now to the input to fix the parameter point we want to study. I have chosen two points as examples, EP1 and EP2, with slightly different input. All necessary input parameters are given in Tab. 7. The neutrino Yukawa coupling is highly constrained by neutrino data, and we can ignore it here for our purposes; it becomes important if the user wants to study lepton-flavour violation, for instance. The input is strongly motivated by the observations we made at the analytical level: we need small $\tan\beta'$ and large $m_0$, $A_0$, $Y_x$ to break $B-L$ radiatively.
• Fine-tuning: the calculation of the fine-tuning is skipped when setting flag 550 to 0

Calculating the spectrum and Higgs couplings with SPheno
When the input file is filled with numbers we can run the point as explained above. The entire output including all parameters, masses, branching ratios and low-energy observables is saved by default in SPheno.spc.BLSSM. This file is rather lengthy and contains a lot of information. I will pick some parts of it and discuss them.
Sometimes it is convenient to work with in- and output files with other names. In that case, the names can be passed as arguments to SPhenoBLSSM. For instance, we can make two input files for the points EP1 and EP2 and run them via
The entire output is written to the files given as second argument, i.e. SPheno.spc.BLSSM_EP1 and SPheno.spc.BLSSM_EP2.
The running parameters and mass spectrum for EP1 First, we check whether gauge coupling unification at about $10^{16}$ GeV remains. From the block gaugeGUT we see that the unification scale is a bit higher than in the MSSM; that's a consequence of the large $m_0$ and $M_{1/2}$ we used to get radiative $B-L$ breaking. We also see that there is no strict unification here because of a small offset in $g_3$. This behaviour is known from the MSSM as well, and one assumes that higher-order corrections and threshold corrections from super-heavy particles are responsible for an exact unification once they are taken into account. Note that $g_1$ in this block is the value without GUT normalization; it is related to the $g_1$ used in sec. 5.3.2 by a factor of $\sqrt{5/3}$. We also find an off-diagonal coupling of −0.11 and $g_B$ close to 0.55, as expected from our estimates with Mathematica, see eq. (5.45). Many more blocks appear after the one for the gauge couplings and contain all other running parameters at the SUSY scale. We just want to take a look at two other blocks: BL and MSOFT. The first one contains the soft terms, VEVs and $\mu'$ of the $B-L$ sector, the second one the soft Higgs and gaugino mass terms of the MSSM part.

3.57502541 E +0# M3
We see here that $m^2_{H_u}$ is negative, as expected to break the electroweak symmetry. However, both soft terms for the bileptons are actually positive, and one might wonder if $B-L$ is really broken radiatively. It is broken, because the full condition is not that a soft term has to be negative, but that the mass matrix in the limit of vanishing VEVs has a negative eigenvalue. For the lower $2\times 2$ block of the scalar mass matrix this condition reads
$(m^2_\eta + |\mu'|^2)(m^2_{\bar\eta} + |\mu'|^2) - |B_{\mu'}|^2 < 0$   (6.1)
Because of the large value of $B_{\mu'}$ the condition is fulfilled for this point, and $B-L$ is broken. Another observation is that the blino soft mass $M_{\tilde B'}$ is lighter than the bino one, as we already expected from sec. 5.3.2. Actually, $M_{\tilde B'}$ is also smaller than the other gaugino masses and the µ-term not shown here. So one would expect that the lightest neutralino is a blino; however, we will see below that this is not the case.
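Condition (6.1) is easy to check numerically; the numbers below are invented for illustration and show how a large $B_{\mu'}$ triggers breaking even with positive soft masses.

```python
def bl_broken(m2_eta, m2_etabar, mu_p2, B_mu_p):
    """Eq. (6.1): B-L is broken radiatively if the 2x2 bilepton mass matrix
    around the origin has a negative eigenvalue, i.e. a negative determinant."""
    return (m2_eta + mu_p2) * (m2_etabar + mu_p2) - B_mu_p ** 2 < 0

# invented numbers in GeV^2: both soft terms positive, but a large B_mu'
print(bl_broken(1.0e6, 2.0e6, 0.5e6, 3.0e6))  # True: broken despite m^2 > 0
print(bl_broken(1.0e6, 2.0e6, 0.5e6, 1.0e6))  # False: B_mu' too small
```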
After all the blocks with running parameters, all masses are printed in the block MASS. The squarks (Su and Sd) are very heavy, and also the charged sleptons (Se) have masses well above 1 TeV. However, there are some light CP-even sneutrinos (SvRe), while the CP-odd ones are much heavier (SvIm). This mass splitting between the CP eigenstates of the sneutrinos is another interesting and important aspect of the B-L-SSM. Scrolling down, we see that all other SUSY states like neutralinos (Chi) and charginos (Cha) are actually heavier than $\tilde\nu^R_1$. Thus, the LSP, and therefore the DM candidate, is the lightest CP-even sneutrino for this point. The hierarchy of the heavy neutrinos (Fv) reflects the input values of $Y_x$. We also see that the $Z'$ mass is not exactly identical to the input. The reason is that the input is taken to be the tree-level mass in the limit of vanishing gauge-kinetic mixing, while here the full one-loop mass including all mixing effects is shown. However, the differences are very small and one usually does not have to worry about them. In the Higgs sector, we have the lightest scalar (hh_1) in the mass range preferred by the measurements; also the second scalar (hh_2) is not much heavier. In contrast, the two heavier scalars as well as the two physical pseudo-scalars (Ah) and the charged Higgs (Hpm) all have masses of about 3 TeV. This is maybe a good moment to comment briefly on the importance of the loop corrections in this model. We can turn off the two-loop corrections via
Block SPhenoInput # SPheno specific input
...
The composition of the lightest neutralino is given in the block NMIX:
 1 3  2.32548331E-02  # Real(ZN(1,3),dp)
 1 4 -1.10389021E-02  # Real(ZN(1,4),dp)
 1 5 -2.68824741E-02  # Real(ZN(1,5),dp)
 1 6 -6.00307896E-02  # Real(ZN(1,6),dp)
 1 7  6.42452196E-02  # Real(ZN(1,7),dp)
So we see that the lightest state has a bino fraction of about 99% (given by $(0.995)^2$), a wino fraction of $O(10^{-4}\,\%)$ (given by $(1.9\cdot 10^{-3})^2$), a Higgsino fraction of roughly 0.07% (given by $(2.3\cdot 10^{-2})^2 + (1.1\cdot 10^{-2})^2$), a similar blino fraction (given by $(2.7\cdot 10^{-2})^2$), and finally a bileptino fraction of a bit less than 1% (given by $(6.0\cdot 10^{-2})^2 + (6.4\cdot 10^{-2})^2$). This is surprising, because we have seen above that the blino soft term is smaller than the bino one. Why isn't the LSP then a blino? To understand this, we can check the mixing of the second-lightest neutralino. Since the default convention of SPheno is that all Majorana masses are positive, some entries of the rotation matrix are imaginary. Hence, we find the composition of $\tilde\chi^0_2$ in the block IMNMIX:
 2 1 -6.79846118E-02  # Aimag(ZN(2,1))
At tree-level the lighter state is the doublet one and the heavier one the bilepton. Thus, there is a level crossing at one-loop compared to tree-level: the bilepton mass changes by nearly a factor of 3 when radiative corrections are included. Therefore, to have some trust in the mass prediction, the two-loop corrections for the bileptons are also crucial for this point. These corrections are even a bit larger than for the MSSM-like particle and give a push of about 10 GeV.
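The quoted fractions follow directly from squaring the entries of the corresponding row of $Z_N$. A small sketch (the wino entry is taken from the footnote; all values are rounded as printed in the output):

```python
# first row of the neutralino mixing matrix Z_N as quoted in the SPheno output
# (basis: bino, wino, higgsino_d, higgsino_u, blino, bileptino_1, bileptino_2)
row = [0.995, 1.9e-3, 2.33e-2, -1.10e-2, -2.69e-2, -6.00e-2, 6.42e-2]

fractions = {
    "bino":      row[0] ** 2,
    "wino":      row[1] ** 2,
    "higgsino":  row[2] ** 2 + row[3] ** 2,
    "blino":     row[4] ** 2,
    "bileptino": row[5] ** 2 + row[6] ** 2,
}
for name, frac in fractions.items():
    print(f"{name:10s} {100 * frac:.4f} %")
```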

Decay widths and branching
SPheno modules generated by SARAH do not only calculate the mass spectrum and effective couplings of the Higgs scalars, they also provide functions to calculate decays. To adjust the output of the decays, the important flags in the Les Houches input are
The width is very small. The reason is that it is suppressed twice: by the small blino admixture of the neutralino and by the small right-handed fraction of the neutrino.
The second-lightest neutralino consists mostly of $B-L$ states, i.e. its coupling to the second-lightest Higgs (the bilepton) is larger than to the lightest one. This explains why $\tilde\chi^0_2$ decays mostly into the second-lightest scalar and not into the lightest one, despite the larger phase space. Also the heavy neutrinos are expected to decay. Since these are mostly right-handed states, their coupling to the lightest neutralino and sneutrino is much larger than that of the light neutrinos. Thus, the width of these states is much larger than the width of the lightest neutralino, even if the involved particles seem similar. Another interesting topic in these models are the decays of the $Z'$: new decay channels can significantly alter the width of the $Z'$ and have an impact on the limits on $Z'$ masses from collider searches [200,283]. However, for our point EP1 we see that 98% of the final states are SM fermions. The reason is that the important channels into right-neutrinos are kinematically forbidden.

DECAY
These BRs are similar to the ones we got with SPheno, but they don't agree exactly. Also the expected width is smaller by about 10% than the one calculated by SPheno. The reason for this is the enhanced coupling of the Higgs to bottom quarks for this point, which was already visible from the HiggsBounds blocks discussed in the last subsection.

Flavour and precision observables
The SPheno modules written by SARAH already contain out of the box the routines to calculate many quark- and lepton-flavour violating observables. In addition, other observables like $(g-2)_\ell$ and $\delta\rho$ are calculated as well. We start with a short discussion of the results which can be obtained just by running SARAH and SPheno out of the box. In a second step, I show how the FlavorKit functionality [146] can be used to implement Wilson coefficients for new operators, and how to use these coefficients to calculate new observables with SPheno.

Observables out of the box
To turn the calculation of low-energy observables on, the Les Houches input file must contain:

Adding new observables
I show now how the FlavorKit functionality can be used to implement new observables in SPheno.
To make it even more interesting, we choose a process for which SPheno doesn't even know the Wilson coefficients: we decide to study the flavour-violating, radiative decays of the top quark. The process is interesting because it is highly suppressed in the SM by the GIM mechanism, but can receive large contributions in SUSY models [301]. The transition amplitude can be expressed as [302]
$\mathcal{M} = \bar u(p_q)\,[\,i\sigma^{\mu\nu} q_\nu (A_\gamma + B_\gamma \gamma_5)\,]\,u(p_t)\,\epsilon^*_\mu(q)$   (6.6)
with the quark momenta $p_q$ and $p_t$. The partial width can be expressed in terms of $A_\gamma$ and $B_\gamma$, and the BRs we are interested in are obtained by normalizing to the total width $\Gamma_{tot}$ of the top quark. One can also work in the chiral basis, using the effective Lagrangian with the corresponding operators. The relations between the two sets of coefficients are simple, and we are going to make use of them in the following by first calculating $A_L$, $A_R$ and translating them later into $A_\gamma$, $B_\gamma$ to calculate the partial width according to eq. (6.7).
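The translation and the width formula can be sketched as follows. The conversion $A_\gamma = (A_R + A_L)/2$, $B_\gamma = (A_R - A_L)/2$ follows from $P_{L,R} = (1 \mp \gamma_5)/2$; the overall normalization is the factor $((m_t^2 - m_q^2)/(2 m_t))^3/\pi$ quoted later for the Fortran routine, and all numerical inputs are invented.

```python
import math

def gamma_tqgamma(A_L, A_R, mt=173.0, mq=1.27):
    """Partial width for t -> q gamma following eq. (6.7). The chiral
    coefficients are translated via A_gamma = (A_R + A_L)/2 and
    B_gamma = (A_R - A_L)/2, which follows from P_{L,R} = (1 -+ gamma_5)/2.
    A_L and A_R carry dimension 1/mass; all numbers here are invented."""
    A_g = 0.5 * (A_R + A_L)
    B_g = 0.5 * (A_R - A_L)
    return ((mt ** 2 - mq ** 2) / (2.0 * mt)) ** 3 * (abs(A_g) ** 2 + abs(B_g) ** 2) / math.pi

width = gamma_tqgamma(1.0e-9, 2.0e-9)  # hypothetical coefficients in GeV^-1
br = width / 1.42                      # assumed total top width of about 1.4 GeV
print(width, br)
```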
Our to-do list is the following:

Filters = {};
This defines a new process called TopPhotonQ which involves two fermions and one vector boson (ConsideredProcess). The Fierz ordering of the external states is defined via FermionOrderExternal.
PreSARAH is absolutely agnostic concerning particle physics, and it always tries to calculate the most general case. For us this would mean that the results are a function of three masses: those of the two fermions and that of the vector boson. To make sure that the photon mass doesn't show up in the generic expression, we set NeglectMasses={3}, because the photon is the third particle in the list of all external states. The information in ExternalFields is not used by PreSARAH at all; PreSARAH just includes it in the output used for SARAH, so SARAH then knows what a 'top quark' and a 'photon' is. Also CombinationGenerations is not used by PreSARAH but just passed on to SARAH and SPheno. This list contains all combinations of external generation indices for which the coefficients are later calculated by SPheno. We need here (3,2) for top-charm and (3,1) for top-up operators. The two operators from above are called OTgQSL ($A_L$) and OTgQSR ($A_R$), and their expressions are given in FeynArts syntax using ec[3] for the polarization vector of the third particle and k[1] for the momentum of the first particle. The meaning of these symbols, and how to use for instance Dirac matrices in the definition of operators, is explained in the FeynArts manual and briefly summarized in the FlavorKit reference as well. Note that in the case of Wilson coefficients for QFV observables, SARAH automatically generates for each Wilson coefficient X another coefficient XSM which includes only the SM contributions. Finally, we tell PreSARAH that the output should be written into the file TopPhotonQ.m and that we don't want to filter out any diagrams (Filters={}).
We now run this file in Mathematica with PreSARAH, similarly to how we run models with SARAH. The output file is located in the PreSARAH output directory and has to be copied to the
NameObservables is an array containing all observables which should show up in the spectrum file. The first part of each entry gives the name of a variable, the second one the number used in the Les Houches block FlavorKitQFV, and the third one the comment which is used in the Les Houches file to make clear to which variable the shown number belongs. All operators which we need to calculate the observables are given in NeededOperators. As mentioned above, SARAH creates not only routines to calculate the Wilson coefficients including all new physics, but also the coefficients in the SM limit. For our purpose we need both sets of coefficients, because we want not only to calculate the BR but also to normalize it to the SM expectation calculated under the same assumptions. The name of another file is given at the end of the steering file. This file, TqGamma.f90, contains the 'body' of the Fortran routine to calculate the observable. With 'body' I mean that the head of the routine is automatically generated by SARAH; the user can start right away with initializing the variables needed for the calculation of the observable. The entire file TqGamma.f90 looks as follows:
TqGamma.f90
In the first three lines we initialize a few local variables we need. In the entire routine we loop over two iterations: in the first iteration we pick up the generation indices (3,1) for the top-up decay, in the second one the indices (3,2). These indices are saved in the variables gt1 and gt2, which are then used as arguments of the coefficients. We first express $A_\gamma$ and $B_\gamma$ in terms of $A_L$ and $A_R$, and do the same with the coefficients of the SM part only. In line 20, we calculate the overall normalization factor $((m_t^2 - m_q^2)/(2 m_t))^3/\pi$ and plug everything into eq. (6.7).
The variable width in line 22 is the partial width including all contributions; widthSM in line 23 is the partial width with SM contributions only. Our final observables are then calculated using the total width of the top (gTFu(3)) or by taking the ratio of both widths. We are now done with step (3) of the to-do list.
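The calculation performed in TqGamma.f90 can be mirrored in a few lines. Below is a minimal sketch in Python rather than Fortran; the coefficient values and the total top width are invented placeholders, and it is assumed that eq. (6.7) has the generic dipole form Γ = N(|A_L|² + |A_R|²) with the normalization factor N quoted in the text.

```python
import math

# Sketch of the partial-width calculation done in TqGamma.f90.
# ASSUMPTION: eq. (6.7) has the generic dipole form Gamma = N*(|AL|^2 + |AR|^2);
# all numerical inputs below are invented placeholders, not SPheno output.
mt, mq = 172.9, 1.27            # top and charm masses in GeV (illustrative)
AL, AR = 1e-9, 2e-9             # placeholder dipole coefficients

N = ((mt**2 - mq**2) / (2.0 * mt))**3 / math.pi   # normalization factor from the text
width = N * (abs(AL)**2 + abs(AR)**2)             # partial width with all contributions

gamma_top = 1.42                 # illustrative total top width in GeV, cf. gTFu(3)
BR = width / gamma_top           # branching ratio as formed in the routine
```

In SPheno the same structure is repeated once more with the SM-only coefficients to obtain widthSM, so that the ratio width/widthSM can be printed as well.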
Step (4) happens automatically if all files have been put into the correct directories. We can now generate the SPheno code again.

<< $PATH/SARAH.m;
Start["B-L-SSM"];
MakeSPheno[ReadLists -> True];
To save time, I used the option ReadLists->True, i.e. SARAH reads the list with all analytical results from the previous run. The output has to be copied again into the BLSSM subdirectory of SPheno and can be compiled as usual. After running SPheno with the input for EP1, we get the new entries in the block FlavorKitQFV of the spectrum file: SPheno.spc.BLSSM (EP1) ...
Obviously, the numbers and comments show up as expected. We see some deviations from the SM prediction. However, this is still far away from the experimental limits, which still allow branching ratios in the percent range [303]. To get large effects of this size, one might for instance search for points with light stops. One can also compare these numbers with the SM prediction in the literature. The partial width is strongly suppressed by the GIM mechanism and is therefore rather sensitive to the values of the CKM matrix as well as to the running quark masses in the loop. Therefore, the predicted rates in the SM come with a sizable theoretical error. In Ref. [302] SM predictions for BR(t → qγ) and other flavour-violating top decays were given. The number for the radiative decay into a charm quark reads BR(t → cγ) = (4.6 +1.2/−1.0 ± 0.4 +1.6/−0.5) × 10^−14 (6.14) and agrees with our calculation within errors. BR(t → uγ) is expected to be suppressed by an additional factor |V_ub/V_cb|² ≃ 7.9 · 10^−3, where V_qb are the entries of the CKM matrix. That is also similar to what we find.
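The quoted CKM suppression factor is easy to verify directly. A quick sketch using illustrative PDG-like CKM values (the exact inputs are assumptions, not taken from the SPheno run):

```python
# Check of the CKM suppression factor |V_ub/V_cb|^2 quoted in the text.
# The CKM values are illustrative PDG-like inputs, not from the spectrum file.
Vub = 3.65e-3
Vcb = 4.11e-2
suppression = (Vub / Vcb) ** 2   # expected to be close to 7.9e-3
```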

Getting the fine-tuning
One of the main motivations for SUSY was naturalness: it solves the hierarchy problem of the SM by stabilizing the unprotected Higgs mass. However, with the more and more severe limits on the SUSY masses, the question about the fine-tuning arises again. SARAH and SPheno provide functions to calculate the fine-tuning according to the measure defined earlier. The list FineTuningParameters contains the parameters which are varied at the GUT scale, together with a numerical coefficient to 'normalize' the fine-tuning. The factor 1/2 for m_0 is there because the boundary conditions at the GUT scale are given in terms of m_0². Note that the fine-tuning can not only be calculated with respect to the input parameters defined in MINPAR; also other parameters, which are for instance fixed by the tadpole equations, can be included. In principle, one can also calculate the fine-tuning with respect to SM parameters like the top Yukawa coupling (Yu[3,3]) or the strong coupling (g3) by including those in the list above.
After editing SPheno.m, it is necessary to re-generate the SPheno code with SARAH and to compile the new version. Exactly the same steps as in sec. 6.1.2 are used for that. When SPheno is compiled, the fine-tuning is calculated and included in the spectrum file if the corresponding flag is set in the Les Houches input file: The overall fine-tuning is given in the first entry, coming with number 0. All other entries list the fine-tuning with respect to the different parameters. This makes it obvious which parameters contribute most to the fine-tuning. In our example these are mainly M_1/2 and µ, which have a similar fine-tuning. Moreover, one sees that even the additional parameters from the B−L sector can have quite some impact on the fine-tuning. The reason is that the tadpole equations are coupled because of gauge-kinetic mixing. We can check this assumption by using the flag 60 0 # Include possible kinetic mixing to turn off gauge-kinetic mixing. The impact on the overall fine-tuning is moderately small. However, the contributions of µ and B_µ vanish in this limit, as expected. Also the fine-tuning with respect to M_1/2 becomes smaller, because the off-diagonal gauge couplings and gaugino masses no longer contribute to the running of the gauginos.
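The fine-tuning measure evaluated here is, in essence, a logarithmic derivative of m_Z² with respect to each input parameter, maximized over all parameters. A toy numerical sketch of such a Barbieri-Giudice-like measure follows; the quadratic toy relation for m_Z² and all coefficients are invented for illustration, the real dependence comes from the full RGE running in SPheno.

```python
# Toy illustration of a Barbieri-Giudice-like fine-tuning measure
# Delta_p = |d ln m_Z^2 / d ln p|, evaluated by central differences.
# The functional form of mz2 below is an invented placeholder.
def mz2(m0, m12):
    return 2.0 * m12**2 - 0.1 * m0**2 - 2.0 * 90.0**2   # hypothetical

def fine_tuning(p, f, h=1e-5):
    """Numerical |p * f'(p) / f(p)| via central differences."""
    dfdp = (f(p * (1 + h)) - f(p * (1 - h))) / (2.0 * h * p)
    return abs(p * dfdp / f(p))

delta_m12 = fine_tuning(500.0, lambda m12: mz2(100.0, m12))
delta_m0  = fine_tuning(100.0, lambda m0:  mz2(m0, 500.0))
delta = max(delta_m12, delta_m0)   # the overall fine-tuning is the maximum
```

In this toy setup the gaugino-mass-like parameter dominates the fine-tuning, mimicking the role of M_1/2 in the example above.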
BR Hplus.dat, BR t.dat, effC.dat). While SLHA files can be used with HiggsBounds for models with up to five neutral scalars, the separate files can be used with up to nine neutral and nine charged scalars. Since the second input works in more cases, I'm going to concentrate on it. First, I discuss how exclusion limits are checked with HiggsBounds; afterwards I show the usage of HiggsSignals.

HiggsBounds
In the same directory in which the SPheno spectrum file is located, all other input files for HiggsBounds and HiggsSignals are saved by SPheno as well. The (relative) path to this directory has to be given as the last argument to HiggsBounds when executing it. Thus, working from the directory $PATH, HiggsBounds is started via: All information needed to understand the output is already given in the file: HiggsBounds finds that the strongest constraints come from pp → h_1 → ZZ → 4l at CMS (chan=682), but the rate normalized to the exclusion limit (obsratio) is smaller than 1, i.e. the point is allowed (HBresult=1). A list with all processes which are implemented and were checked is also written to Key.dat in the same directory.
We are far away from any exclusion limit, i.e. we don't have to worry about small uncertainties in the Higgs masses because they won't change the overall result. For points closer to the exclusion boundary, one has to think more about this. In these cases it might be helpful to provide an input file MHall_uncertainties.dat which includes the uncertainties of the Higgs mass calculation; I'll give more details about this in the HiggsSignals part. If this file is provided, HiggsBounds runs several times, varying all Higgs masses in the range of their uncertainty, and checks for the strongest constraints.
Checking a light singlet We can also run the point EP2, which has a light singlet of just 79 GeV, and find that this point is also allowed by all Higgs searches because of the highly reduced coupling of this scalar to SM particles: HiggsBounds results.txt The dominant search channel comes from LEP, but the rate is just about half of the one needed to rule this point out.

HiggsSignals
HiggsSignals is the complement to HiggsBounds and checks how well a point reproduces the Higgs mass and rate measurements. The syntax is very similar to HiggsBounds; to run it with the data for our standard point, we have to call from the directory $PATH
$ ./HiggsSignals/HiggsSignals latestresults peak 2 effC 6 1 SPHENO/
It would also be possible to use absolute paths here. The first three arguments are different compared to HiggsBounds and have the following meaning: (i) which experimental data should be used (refers to the corresponding sub-directory in $PATH/HIGGSSIGNALS/Expt tables/), (ii) the χ² method (peak for peak-centred, mass for mass-centred, or both) 36 Also, the HiggsSignals output is rather self-explanatory. The important numbers are χ²_µ (csq(mu)) for the Higgs rates, χ²_m (csq(mh)) for the Higgs mass, and the combined χ²_tot (csq(tot)). The combined one is also translated into a p-value (Pvalue) 37. However, one warning appears in the terminal when running HiggsSignals in this way:
Optional data file SPHENO/MHall_uncertainties.dat not found. Using default values.
That means that the file MHall_uncertainties.dat was not found. That's not surprising, because it wasn't created by SPheno. This file contains the theoretical uncertainty of the Higgs mass prediction. Thus, to produce it, some estimate of the size of missing higher-order corrections to the Higgs masses is needed. This is something SPheno can't do automatically at the moment. However, HiggsSignals assumes no theoretical uncertainty if the file is missing, which is of course unrealistic. The theoretical uncertainty for the corrections included in the SARAH-SPheno interface is expected to be similar to the one in the MSSM with standard two-loop corrections. Thus, we put 3.0 GeV for the two light scalars and 1.0 GeV for all others 38. We create MHall_uncertainties.dat and put it in the SPheno directory where all other input files for HiggsBounds/HiggsSignals are stored. The content of MHall_uncertainties.dat reads Now, running HiggsSignals again, we find that the χ² values become slightly smaller. 37 The p-value in this context is the ratio of χ²_tot and the number of degrees of freedom (ndf). For ndf, HiggsSignals takes the number of observables. To change this and define the number of free parameters of the model, Nparam has to be set in the file usefulbits_HS.f90 in the HiggsSignals source code. 38 An estimate of this uncertainty could be obtained by varying the renormalization scale in SPheno in the range [Q/2, 2Q] and checking the impact on the masses.
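The p-value HiggsSignals quotes is, more precisely, the survival probability of a χ² distribution evaluated at χ²_tot for the given ndf. For an even number of degrees of freedom this has a simple closed form, which can be sketched directly (the χ² and ndf values plugged in are invented):

```python
import math

# Chi-square survival function for even ndf:
#   P(X > chi2) = exp(-chi2/2) * sum_{i=0}^{ndf/2-1} (chi2/2)^i / i!
# This is only an illustration of the statistics, not the HiggsSignals code.
def chi2_pvalue_even(chi2, ndf):
    assert ndf % 2 == 0, "closed form holds for even ndf only"
    x = chi2 / 2.0
    return math.exp(-x) * sum(x**i / math.factorial(i) for i in range(ndf // 2))

p = chi2_pvalue_even(80.0, 84)   # invented chi2_tot and number of observables
```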
The couplings of the SM-like Higgs to the SM fermions don't change significantly between EP1 and EP2, so χ²_µ also stays the same. However, the mass of the SM-like state is a bit closer to the best-fit value of the measurements; hence χ²_m has slightly decreased.

Checking the vacuum stability with Vevacious
The parameter points EP1 and EP2 have passed the first checks. The mass spectrum looks promising and they are consistent with all bounds from flavour and Higgs physics. As a next step we want to check if the points really have a stable vacuum: since SPheno found a solution of the tadpole equations, it is certain that the given parameters correspond at least to a local minimum of the scalar potential at which the set {v_d, v_u, x_1, x_2} of VEVs is non-zero. However, this doesn't ensure that this is also the global minimum. First, there might be a deeper minimum for other values of {v_d, v_u, x_1, x_2}. Those minima are in general ruled out because they would predict another mass for the Z boson. Another possibility is that other particles could receive VEVs as well. These can either be points with spontaneous R-parity violation, where the sneutrinos get VEVs, or points where charge and colour are broken by squark VEVs. The latter two possibilities, charge and colour breaking, are completely forbidden and would always rule a point out. However, the dangerous regions for charge or colour breaking are those where the trilinear soft-terms are large compared to the soft masses in the stop or stau sector [22,[304][305][306][307][308][309][310][311][312][313][314]]. This is not the case for the points EP1 and EP2, so we don't have to worry about that. Spontaneous R-parity violation is not completely forbidden and could lead to a different phenomenology. However, in our approach it is also very likely that the electroweak VEV changes at the global minimum where the sneutrinos gain non-zero VEVs; hence such a scenario is ruled out as well by the Z mass. We are going to check the stability of the vacuum both neglecting and including the possibility of sneutrino VEVs. For this purpose we use the package Vevacious [168].
Vevacious is a tool to check for the global minimum of the one-loop effective potential of a given model, allowing for a particular set of non-zero VEVs. For this purpose, Vevacious first finds all tree-level minima using HOM4PS2 [315]. Afterwards, it minimizes the one-loop effective potential starting from these minima using minuit [316]. If the input minimum turns out not to be the global one, the lifetime of the meta-stable vacuum can be calculated using CosmoTransitions [317]. Vevacious takes the tadpole equations, the polynomial part of the scalar potential, and all mass matrices as input. All of this information has to be expressed including all VEVs which should be tested. That means that in order to check for charge- and colour-breaking minima, the stops and staus have to show up in the potential and mass matrices, and the entire mixing triggered by these VEVs should be included. To take care of all that, the corresponding input files can be generated by SARAH as explained below.

Finding the global minimum without sneutrino VEVs
If one is just interested in the global minimum for the case that no other VEVs are allowed, it is straightforward to get the model files for Vevacious: the standard implementation of the model can be used together with the command MakeVevacious. MakeVevacious comes with some options which I list for completeness; however, we stick to the default settings.
• ComplexParameters: defines if specific parameters should be treated as complex. By default, all parameters are assumed to be real in the Vevacious output.
• IgnoreParameters: defines if a given set of parameters should be set to zero when writing the Vevacious model files.
• OutputFile: defines the name of the model file. By default, BLSSM.vin is used.
• Scheme: defines the renormalization scheme. By default, SARAH uses DR-bar for SUSY models and MS-bar for non-SUSY models.
One sees from the first option that the parameters are handled less generally in the Vevacious output than is usually done by SARAH. The reason is that the evaluation of a parameter point with Vevacious can be very time consuming; thus, making reasonable approximations can be an option to speed it up.
As soon as the model file is created, it is convenient to copy it to the model directory of the local Vevacious installation. In addition, one can also create a new subdirectory which contains the SPheno spectrum files of the B-L-SSM used as input for Vevacious, as well as the output written by Vevacious. The only changes we apply here are to give the paths to HOM4PS2 39 and CosmoTransitions 40 which are used by Vevacious. I'm going to assume here that these are installed in the same directory $PATH as all other tools. The other information needed is the location of the model and spectrum files, as mentioned above. After about 30 s, Vevacious is done with checking for the global minimum of the one-loop effective potential. Since it hasn't started CosmoTransitions to calculate the tunnelling time, the point is stable. This can also be seen in the file SPheno.spc.BLSSM, where a new block has been appended: If the user is interested in more information about all possible minima found at tree level and one loop, he/she can check the file Vevacious tree-level extrema.txt. This file contains all VEV combinations which are actually a minimum of the tree-level potential. The depth of the potential at each minimum is given at tree level and at one loop: Vevacious tree-level extrema.txt We can see from this file that there are actually four minima which are not related by a phase transformation of the VEVs. The minimum without symmetry breaking (all VEVs are zero) has a depth of 0 at tree level, as expected, but receives large loop corrections. Nevertheless, its depth is still much smaller than for all other combinations where at least one symmetry (electroweak or B−L) is broken. There is just one minimum where both symmetries are broken, and this corresponds to our input minimum. The full list of minima found at one loop is given in the file Vevacious loop-corrected minima.txt. All additional minima listed there are small variations of the tree-level ones.
We can do the same check for EP2 and find that this point is stable as well.

Checking for spontaneous R-parity violation
As mentioned above, even if Vevacious doesn't find a deeper minimum in the first check, one can't be completely sure that the point is stable: there is still the possibility that additional particles receive VEVs. We check here the possibility of spontaneous R-parity violation. For this purpose, it is necessary to create a new SARAH model file; we call it B-L-SSM RpV.m. The simplest way is to take our B-L-SSM.m file as a basis and apply the following changes: First, we change the name of the model to make sure that no files of the other implementation are overwritten. The main modification is to give VEVs to the left and right sneutrinos, as done by the changes in DEFINITION[EWSB][VEVs]. However, we don't consider the most general case where all three generations get VEVs, but restrict the VEVs to the third generation only. Even in this case we have to deal with a 6-dimensional parameter space; in the general case we would have 10 VEVs, and running Vevacious would take significantly longer. Therefore, it's always good to check which degrees of freedom can be rotated away. The other lines are a consequence of R-parity violation: the CP-even and CP-odd sneutrinos mix with the Higgs scalars, and the charged Higgs scalars mix with the charged sleptons. In the fermionic sector, the charginos mix similarly with the charged leptons, and the neutralinos with the neutrinos. We have to include this mixing because Vevacious checks not only the tree-level potential but also the one-loop effective potential, and this mixing gives additional contributions to the one-loop corrections.
If we are really just interested in the Vevacious output, we can skip the modifications of particles.m and parameters.m and come back directly to our study with Vevacious. The remaining steps are the same as for the R-parity-conserving case: (i) running B-L-SSM RpV with SARAH, (ii) running MakeVevacious[], (iii) copying the file to the Vevacious installation, (iv) creating a new initialization file VevaciousInitialization BLSSM RpV.xml with the location of the model file, (v) running Vevacious.
We find that both parameter points also pass this check. For example, for EP1 the Vevacious output reads:
... # Real(mv2(1,1),dp)
2 2 9.42973968E+05 # Real(mv2(2,2),dp)
3 3 8.52847167E+05 # Real(mv2(3,3),dp)
Nevertheless, we find that at the global minimum R-parity is broken by sneutrino VEVs: We see that the stability is labelled as 'long-lived but thermally excluded' ([0,0]=-2). This means that the point is long-lived at zero temperature but quickly decays if temperature effects are taken into account. In that case, the entry [0,2] shows at which temperature the tunnelling is likely to happen. Thus, this point is actually ruled out.
This example shows that the condition sometimes used in the literature to distinguish R-parity violation and conservation is not necessary. On the other hand, it is also possible to find points where this condition is fulfilled but R-parity is still unbroken at the global minimum [276], i.e. it is also not sufficient. Therefore, one shouldn't rely on such simple-minded conditions but always perform a numerical check to test the vacuum stability. The same statement holds for charge- and colour-breaking minima: analytical rules of thumb like those of [22,[304][305][306][307]], which are unfortunately still widely used in the literature, don't bear up against numerical checks and turn out to be pretty useless [220,221]. These conditions miss the large majority of points which actually suffer from an unstable electroweak vacuum.
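For concreteness, one common variant of the analytic thumb rules criticized here can be stated and evaluated in a few lines. All numerical inputs below are invented, and, as the text stresses, passing or failing this check says little about the true global minimum:

```python
# One traditional charge/colour-breaking 'thumb rule' for the stop sector:
#   |A_t|^2 < 3 (m_Q3^2 + m_U3^2 + m_Hu^2 + mu^2)
# All inputs (in GeV resp. GeV^2) are invented placeholders.
def ccb_thumb_rule(At, mQ3sq, mU3sq, mHusq, mu):
    """True if the traditional condition claims the vacuum is safe."""
    return At**2 < 3.0 * (mQ3sq + mU3sq + mHusq + mu**2)

mild  = ccb_thumb_rule(At=-2500.0,  mQ3sq=4.0e6, mU3sq=3.24e6, mHusq=1.0e6, mu=600.0)
large = ccb_thumb_rule(At=-10000.0, mQ3sq=4.0e6, mU3sq=3.24e6, mHusq=1.0e6, mu=600.0)
```

A numerical check with Vevacious replaces this binary criterion by an actual minimization of the one-loop effective potential, which is why the two can disagree so often.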

Calculating the dark matter properties with MicrOmegas
As a next step, we want to study the dark matter (DM) properties of the model using MicrOmegas.
MicrOmegas is a tool which not only calculates the relic density for one or more dark matter candidates, but also gives cross sections for direct and indirect DM searches. The CalcHep model files needed by MicrOmegas are generated by SARAH; the corresponding output command comes with the following options:
• FeynmanGauge: defines if Feynman gauge should be supported besides Landau gauge. This is done by default.
• CPViolation: defines if parameters should be handled as complex. By default, all parameters are treated as real, because CalcHep is not really optimized for the usage of complex parameters; this option should be used carefully.
• ModelNr: numbers the model files. SARAH starts by default with 1.
• CompHep: can be used to write model files in CompHep instead of CalcHep format.
• NoSplittingWith: defines particles for which SARAH does not decompose four-scalar interactions into pairs of two-scalar interactions with auxiliary fields if these fields are involved. Such a decomposition is usually done because of the implicit colour structure in CalcHep, which doesn't allow four-point interactions of coloured states. To keep the model files shorter, SARAH makes the same decomposition also for non-coloured states.
• NoSplittingOnly: defines particles for which SARAH does not decompose four-scalar interactions into pairs of two-scalar interactions with auxiliary fields if only the given fields are involved in the interaction.
• UseRunningCoupling: defines if α_S should run in the model files.
• SLHAinput: defines if parameter values should be read from a spectrum file.
• CalculateMasses: defines if tree-level masses should be calculated internally by CalcHep.
• RunSPhenoViaCalcHep: writes C code to run SPheno from the graphical interface of CalcHep in order to calculate the spectrum on the fly.
• DMcandidate1: sets the first DM candidate.
For our example we can stick to the default options. I'll just comment on two important switches which demand further explanation: Mass spectrum By using SLHAinput -> True, the model files are written such that CalcHep and MicrOmegas expect all input parameters to be provided in a spectrum file called SPheno.spc.BLSSM. CalcHep and MicrOmegas read this file and extract all important information from it using the SLHA+ functionality [318]. With the other options, MicrOmegas/CalcHep expects either all masses and rotation matrices to be given in the file vars.mdl (SLHAinput -> False, CalculateMasses -> False), or all fundamental parameters (soft terms, couplings and VEVs) as input, diagonalizing the mass matrices internally (SLHAinput -> False, CalculateMasses -> True).
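The SLHA+ functionality mentioned here reads the spectrum file block by block. The basic parsing logic can be sketched in a few lines; this is a toy reader for illustration, not the CalcHep implementation, and the sample snippet only imitates the style of SPheno.spc.BLSSM:

```python
def read_slha_blocks(text):
    """Toy SLHA reader: returns {block_name: {index_tuple: value}}."""
    blocks, current = {}, None
    for line in text.splitlines():
        line = line.split('#', 1)[0].strip()      # drop comments
        if not line:
            continue
        tokens = line.split()
        if tokens[0].upper() == 'BLOCK':
            current = tokens[1].upper()
            blocks[current] = {}
        elif current is not None:
            *idx, value = tokens
            blocks[current][tuple(int(i) for i in idx)] = float(value)
    return blocks

# Hypothetical snippet in the style of SPheno.spc.BLSSM
sample = """
Block MASS   # particle masses
   25   1.25091000E+02  # hh_1
   35   9.34713165E+02  # hh_2
"""
masses = read_slha_blocks(sample)['MASS']
```

A real reader additionally handles scale annotations (`Q=`), string-valued entries, and matrix-valued blocks, which this toy skips.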
Dark Matter candidates One can work with either one or two dark matter candidates in MicrOmegas.
The first DM candidate is the lightest of all states carrying a particular charge under a discrete symmetry. To define the symmetry and the charge, the option DMcandidate1->Value is used. There are two possibilities for Value: (i) when set to Default, the DM candidate is the lightest particle odd under the first Z_2 defined as a global symmetry; (ii) for any other choice, one gives first the name of the global symmetry and then the quantum number with respect to that symmetry: GlobalSymmetry == Charge.
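The selection logic behind DMcandidate1 -> Default can be illustrated as follows; all particle names, masses and charges in this sketch are invented:

```python
# Toy illustration of the Default choice: the DM candidate is the lightest
# state that is odd under the first Z2 global symmetry.
particles = {
    'Chi_1': (350.0, -1),    # (mass in GeV, Z2 charge) -- invented values
    'Chi_2': (610.0, -1),
    'Sd_1': (1200.0, -1),
    'hh_1':  (125.1, +1),
}
odd_states = [name for name, (m, q) in particles.items() if q == -1]
dm_candidate = min(odd_states, key=lambda n: particles[n][0])
```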
To use the model with MicrOmegas, a steering or 'main' file has to be provided in either Fortran or C and must be compiled. Examples for these files are delivered with MicrOmegas and called main.F and main.c. SARAH also writes two examples which can be used for the following calculations: • CalcOmega.cpp: calculates only the DM relic density Ωh² and prints the result on the screen and into a file called omg.out. • CalcOmega with DDetection.cpp: calculates the DM relic density Ωh² and, in addition, some direct detection rates: (i) spin-independent cross sections with protons and neutrons in pb, (ii) spin-dependent cross sections with protons and neutrons in pb, (iii) recoil events in the 10-50 keV region for 73Ge, 131Xe, 23Na and 127I nuclei. The output is also written into the file omg.out. Note that the syntax for the direct detection calculations has changed in MicrOmegas compared to earlier versions; SARAH also includes a file CalcOmega with DDetection old.cpp which is compatible with versions 2.X of MicrOmegas.
We are going to choose the second file, which includes the calculation of direct detection rates. In the first line of the output, the freeze-out temperature and the relic density are given. We find that this point falls into the preferred 2σ region 0.1153 < Ω_CDM h² < 0.1221 (7.5) obtained by combining Planck, WMAP polarization, high-resolution CMB data, and baryon acoustic oscillation results [319]. The important channels contributing to the annihilation follow in the next lines. This point is a bit boring, because annihilation into two bileptons makes up 98% of the entire annihilation; all other individual channels are not printed because they are below 1%. This threshold can be changed in CalcOmega with DDetection.cpp by lowering the cut: double cut = 0.01; // cut-off for channel output The same information is also written to the file omg.out, whose style is inspired by the SLHA format: omg.out The values shown for the direct detection rates can be compared with experimental limits. For this purpose it is helpful to multiply these values by a factor of 10^−36 to get the rates in cm², which is the unit usually used to present direct detection limits in the (m_DM, σ) plane.
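The unit conversion and the 2σ relic-density check just described can be sketched explicitly; the spin-independent cross section used here is an invented value, not MicrOmegas output:

```python
# 1 pb = 1e-36 cm^2, so MicrOmegas' direct-detection output in pb is
# multiplied by 1e-36 to compare with limits in the (m_DM, sigma) plane.
sigma_SI_pb = 2.3e-9                  # invented spin-independent cross section in pb
sigma_SI_cm2 = sigma_SI_pb * 1e-36

# 2-sigma window of eq. (7.5)
def in_planck_window(omega_h2):
    return 0.1153 < omega_h2 < 0.1221

ok_EP1 = in_planck_window(0.118)      # an EP1-like value inside the window
ok_EP2 = in_planck_window(26.6)       # the EP2 value from the screen output below
```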
We can do the same for EP2. This point has a neutralino LSP, i.e. MicrOmegas has to compile many channels again and we have to wait some time for the results. The output on the screen looks less promising:
Xf=2.15e+01 Omega h^2=2.66e+01
# Channels which contribute to 1/(omega) more than 1%.
# Relative contributions in % are displayed
Despite the many different channels which contribute to the annihilation, the relic density is much too high. This is not surprising: it is well known that for a neutralino LSP particular conditions are often needed to fulfil the relic density bounds. Either a charged particle close in mass, a resonance, or a large Higgsino fraction is needed. This holds not only for a bino LSP in the CMSSM but also for a blino LSP in the constrained B-L-SSM, as we have it here [279].

Monojet events with WHIZARD
We change topics again and enter the wide field of collider studies with Monte Carlo (MC) tools. This is nothing that can be addressed in detail in this manuscript: tools like CalcHep, WHIZARD, MadGraph, Herwig++ or Sherpa are very powerful and offer a nearly unlimited number of possibilities. Therefore, I'm just going to show with two examples how the output of SARAH can be used together with WHIZARD and MadGraph to perform simple studies. As soon as a model is implemented in these tools and works fine for one study, it can be used in the same way as all models delivered with the different tools. Thus, to become more familiar with these tools, one can check the many examples and tutorials which can be found online.
Actually, there is one big advantage when working with model files produced by SARAH: the chosen MC tool needs not only the model files containing all vertices, but also numerical values for all parameters. This can be a delicate task, especially in supersymmetric models with a lot of parameters and rotation matrices. When using numerical values for these parameters obtained with another code, one always has to make sure that the conventions used in the model files and those of the spectrum generator are identical. This problem is absent when working with model files produced by SARAH and spectrum files generated with a SPheno version also produced by SARAH: in that case, the implementations of the model in the MC tool and in SPheno are based on a single model file in SARAH, so the same conventions are certainly used in both parts. The vertex list passed to WHIZARD can be restricted, for instance to include only FFV and FFS vertices. This shortcut is very helpful for our purposes here to get results quickly. However, it has to be used carefully to make sure that no relevant vertices are dropped. In case all vertices should be kept, there is another possibility to speed up compilation a bit: usually, SARAH splits the entire list of vertices into pieces containing 150 vertices and writes a separate file for each part. Especially for SSSS and SSS interactions, even 150 vertices can produce a large file which needs some time to compile. Thus, for complicated models with lengthy vertex expressions, it might be helpful to go for even fewer couplings per file. That is done with the option MaximalCouplingsPerFile -> X with some integer X; a good choice for the full model files of the B-L-SSM is 50 or less. There are some more flags which can be used to adjust the WHIZARD output. The full list of options is: • MaximalCouplingsPerFile: defines the maximal number of vertices per file.
• WriteOmega: defines if the model files for O'Mega should be written.
• WriteWHIZARD: defines if the model files for WHIZARD should be written.
• WOModelName: defines the name of the model in the output.
• Version: defines for which version of WHIZARD the files are generated. By default, 2.2.0 is used.
• ReadLists: defines if the information from a former evaluation should be used.

Compiling the model files
After the interface has completed, the generated files are stored in the directory If WHIZARD has not been installed globally in the home directory of the current user, WHIZARD won't be able to find the binaries. Thus, the WO_CONFIG environment variable is used to point explicitly to the binaries. By default, the configure script would install the compiled model into .whizard in the home directory of the user. If the user wants to have several WHIZARD installations, or installs WHIZARD locally, it might be better to provide a model just for one installation. For these cases the installation path can be defined via the --prefix option of the configure script. More information on the available options is shown with the command
./configure --help
The configure script also prints another important piece of information, namely the name of the model which is used to load it in WHIZARD: . . . The model files produced by SARAH are supposed to be used with WHIZARD 2.x. The possibility to patch these files for use with WHIZARD 1.x does exist in principle; however, I won't go into detail here and highly recommend using version 2.

Parameter values from SPheno
WHIZARD is able to read Les Houches files for the MSSM generated by public spectrum generators using the SLHA conventions. However, WHIZARD does not provide a possibility to read spectrum files which go beyond that. Therefore, to link WHIZARD and SPheno, all SPheno modules created by SARAH write the information about the parameters and masses into an additional file. This file is written in the WHIZARD-specific format and can be read directly by WHIZARD. In our example the file is called WHIZARD.par.BLSSM and it is written to the same directory as the standard spectrum file. One just has to make sure that the corresponding flag is turned on in the Les Houches input for SPheno to get this output:

Sindarin input and running WHIZARD
WHIZARD comes with its own steering language called Sindarin. With Sindarin, all settings to define a process in a specific model, to apply cuts, and even to make plots can be put into one single input file. The input file BLSSM monojet.sin for our example of monojets at the LHC might look as follows: BLSSM monojet.sin 1 model = blssm_sarah First, we set the model and tell WHIZARD where it finds the spectrum file written by SPheno. In general, the SPheno file contains non-zero and different masses for all SM fermions. However, to group fermions together into one object, they have to have the same masses. Therefore, we explicitly set all first- and second-generation quark masses to zero in lines 5-8. Afterwards, we can combine these quarks, their antiparticles and the gluon into one object called parton. For the final state we define another object jet which consists of the same particles. When we now define a process involving parton and jet, WHIZARD will generate all non-vanishing subprocesses at parton level. A name for the process (monojet) is given; this name is used in the following to refer to this process. Thus, one can also define several processes in one file, treat them separately, and run one after the other. We save this file in the root directory of WHIZARD ($PATH/WHIZARD). However, running it in the same directory would create a mess, because WHIZARD produces several output files. Therefore, we create a new sub-directory which will contain the entire WHIZARD output:
$ cd $PATH/WHIZARD
$ mkdir run_BLSSM_monojet
$ cd run_BLSSM_monojet
$ ./../bin/whizard ../BLSSM_monojet.sin
The last line runs the executable whizard in the binary directory on our Sindarin input file. Note that we didn't move BLSSM monojet.sin to the sub-directory run_BLSSM_monojet; the reason is that we might want to clean this directory by rm * in order to make a new run with other settings.
After some time, WHIZARD is done and has created a pdf including both plots shown in Fig. 9. The output directory also includes all events in WHIZARD's native format called evx. To turn on the output of other formats, one can add the corresponding flags to the Sindarin input file, where <format> can for instance be lhef to get files in the Les Houches accord event format. For a complete list of all supported formats, I refer to the WHIZARD manual.  In the first line we import the model into MadGraph. The option -modelname is used to keep the names of the particles as given in the model files. By default, MadGraph will try to use its default naming conventions. However, this would fail for this model because there are more than two CP-even scalars, so h3 cannot be used as the name for the CP-odd one, as MadGraph wants to do 42 . We define a multi-particle called p which consists of all light quarks. We can skip the gluon because it won't contribute to our process. The muon is the second lepton and therefore called e2, and the anti-muon accordingly e2bar. Thus, in the third line we generate the process pp → µμ. The output for MadEvent is written to a new subdirectory ppMuMu, and we close MadGraph when it is done via exit.
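The MadGraph session described above would then read roughly as follows; the name of the UFO model directory is an assumption, while the particle names and the output directory are taken from the text:

```
import model BLSSM -modelname      # model directory name assumed
define p = u d s c u~ d~ s~ c~     # multi-particle of all light quarks
generate p p > e2 e2bar            # pp -> mu+ mu-
output ppMuMu                      # write the MadEvent directory
exit
```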

Dilepton analysis with MadGraph
After MadGraph has created the output for MadEvent and finished, we can enter the new subdirectory ppMuMu. The important settings to generate events are done via the files in the Cards directory: the file param_card.dat is used to give the input for all parameters, and run_card.dat controls the event generation. In the latter file, the user can for instance set the beam type and energy, define the renormalization scale, apply cuts, and fix the number of events.
We want to use, of course, the spectrum file as written by SPheno. However, there is one caveat: MadEvent has problems with reading the HiggsBounds-specific blocks in the SPheno spectrum file (HiggsBoundsInputHiggsCouplingsFermions and HiggsBoundsInputHiggsCouplingsBosons). If these blocks are included, MadEvent won't accept the file. Therefore, we either modify the output by hand and delete these blocks, or we re-generate the file after changing the options in the Les Houches input file, where the HiggsBounds blocks are disabled by the corresponding flag. The remaining settings require small modifications of the run card: we want to generate one million events, and we want to apply a cut on the invariant mass of the lepton pair to get rid of the Z peak.
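In recent SARAH-generated SPheno versions the HiggsBounds output is controlled by a flag in Block SPhenoInput; to my knowledge it is flag 76, but this should be checked against the version in use:

```
Block SPhenoInput    # SPheno specific input
 76 0                # Write HiggsBounds blocks: 0 = off (flag number assumed)
```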

The corresponding line in run_card.dat reads

200 = mmll ! min invariant mass of l+l- (same flavour) lepton pair

We are now ready to generate the events. This can either be done again in the interactive mode by starting

$ ./bin/madevent

or we can directly start the event generation with

$ ./bin/generate_events 0 0

The two 0's are given as arguments because we don't want to make any further modifications of the param or run card, and we also don't want to run Pythia or any detector simulation. When starting MadEvent in that way, a long list of warnings appears on the screen. The reason is that the UFO model files written by SARAH can in general handle complex parameters. However, SPheno only prints the real parts if we don't turn on CP violation; the zeros for all imaginary parts are not given explicitly in the spectrum file. Thus, MadEvent doesn't find an input for the imaginary parts and takes them to be zero, as it should, but it prints a warning for each parameter where this happens. Hence, we don't have to worry about these many warnings. MadEvent will give a status update in a new browser window. When it is done, the events are saved in the Les Houches event format and can be processed further.
We are just going to make a plot to check if the Z' peak shows up. This can for instance be done with MadAnalysis [320], which I assume here to be installed as well in $PATH. We make another short input file called plotMuMu.txt and save it in $PATH/MADANALYSIS. In the first line of plotMuMu.txt we import the unweighted events generated by MadGraph, which are saved by default in the LHE format. In the second line, we make a histogram of the invariant mass of the muon pair in the mass range of 500 to 3000 GeV using 100 bins; for the x-axis we use a log scale (logX). Finally, everything is submitted to be evaluated by MadAnalysis, and the output directory should be called ppMuMu. We then run MadAnalysis on that file. The output of MadAnalysis is stored in $PATH/MADANALYSIS/ppMuMu and contains also the plot shown in Fig. 10 with the expected peak at 2.5 TeV.
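The MadAnalysis input described above might look like the following sketch; the path to the event file and the run directory name are assumptions, and the lepton labels may have to be defined first depending on the MadAnalysis version:

```
import $PATH/MADGRAPH/ppMuMu/Events/run_01/unweighted_events.lhe.gz
plot M(mu+ mu-) 100 500 3000 [logX]
submit ppMuMu
```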

Example -Part V: Making scans
We have learned in the last sections how SARAH can be used together with other tools to study all aspects of a model. Of course, it is often not sufficient to consider just one single parameter point. SUSY models like the B-L-SSM have, even in their constrained version, a large parameter space which needs to be explored. Thus, at some point one has to start making scans to check many different points. I'll discuss two possibilities for performing scans: the first one uses only functions the Linux bash provides together with simple scripts 43 . That might be sufficient to quickly check the dependence of a few observables on a single parameter. Afterwards, I'll introduce the Mathematica package SSP, which is a dedicated tool for more sophisticated scans.

Using shell scripts
Let's assume that one is just interested in the dependence of the two lightest Higgs masses on tan β in a small range starting from our parameter point EP1. In principle, one doesn't need any additional software for such a small scan; Linux provides everything that's needed. For this purpose we create a file called LesHouches.in.BLSSM_Template, which is the input file for EP1 with just one change: we replace the input value for tan β by a unique string. A small script then loops over the tan β values, runs SPheno for each input file, and collects the masses in the files results_h1.dat and results_h2.dat. To plot the results we prepare a gnuplot steering file gnuplot_mh.txt, whose last lines read

set key off
plot "results_h1.dat", "results_h2.dat"
exit

I used here basic gnuplot commands to adjust the output format (line 1), the name of the output file (TBpMH.eps), the labels for the axes (lines 3 & 4), to disable the legend (line 5), and to plot the content of the two files with our data (line 6). We run gnuplot on that file

$ gnuplot gnuplot_mh.txt

and get the plot shown in Fig. 11. There are many possibilities to improve this approach: the script that runs SPheno can easily be extended to run other codes as well; the scans can be varied by playing with seq; more observables can be stored; the appearance of the plot can be improved by using the full power gnuplot provides to polish the layout; and so on. However, I think there is no need to reinvent the wheel again and again. There are public tools which can be used for scanning and plotting. I'll now briefly discuss SSP, which is one of these tools.
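The scan loop itself can be written with standard shell tools. The following self-contained sketch demonstrates the substitution step: the file names and the placeholder string TBETA are hypothetical, and the actual SPheno call and the grep for the Higgs masses are only indicated as comments, since they depend on the local installation:

```shell
#!/bin/bash
# Create a toy template with the placeholder string TBETA for the tan(beta) input
printf 'Block MINPAR  # Input parameters\n 3   TBETA   # TanBeta\n' > LesHouches.in.BLSSM_Template

for tb in $(seq 5 5 20); do                        # scan tan(beta) = 5,10,15,20
  sed "s/TBETA/$tb/" LesHouches.in.BLSSM_Template > LesHouches.in.BLSSM
  # ./bin/SPhenoBLSSM LesHouches.in.BLSSM          # run the spectrum generator
  # grep "hh_1" SPheno.spc.BLSSM >> results_h1.dat # collect the lightest Higgs mass
  grep "TanBeta" LesHouches.in.BLSSM               # show the substituted line
done
```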

Making scans with SSP
A tool which is optimized for parameter scans using SPheno and the other tools discussed so far, is the Mathematica package SSP (SARAH Scan and Plot). SSP provides functions for simple random or grid scans, but can also make use of intrinsic Mathematica functions to sample the parameter space or to include constraints directly during the scan.  Of course, $PATH has to be replaced everywhere by the installation directory of the different tools.
The absolute path to the executable has to be defined for SPheno, and the names of the in- and output files have to be given. Also the path to the executable of MicrOmegas is set; the names of the spectrum file used as input and of the output file written by MicrOmegas are the other information necessary to include MicrOmegas in the scan. In addition, one can define that MicrOmegas should only calculate the relic density if a specific particle is the LSP. In that case the PDG number has to be given, i.e. either 1000022 for a neutralino LSP or 1000012 for a CP-even sneutrino LSP. We use here ALL to calculate the relic density for any particle. One could also use

DEFAULT[DarkMatterCandidate] = 1000022 | 1000012;

to consider just a subset of particles. The lines below give the commands to run HiggsBounds and HiggsSignals as explained in sec. 7.1. Finally, to run Vevacious, the path to the executable as well as the desired initialization file have to be given, as done in the last two lines.

Defining a scan
A second input file defines the scan we want to make. SARAH also writes templates for this file during the SPheno output, which can be used as a starting point; the file names of these templates start with SSP_Template. We call the file for our example here BLSSM_TBpMZp.m. The different parts are: at the very beginning, the file which contains the information about the installation of the codes involved in the scans is loaded. This is the file we have set up in the first step. Then, identifiers for all scans which we want to make are defined using the list RunScans. As said above, we perform two scans here, called MZpLinear and MZpTBpGrid, but there is in principle no limit on how many scans are done within a single file. By default, SSP always runs SPheno. We also want to include HiggsBounds and HiggsSignals here and therefore set the corresponding flags to True. To include MicrOmegas as well, it would just be necessary to set that flag to True, too. The output is stored in the sub-directories

$PATH/SSP/Output/MZpLinear
$PATH/SSP/Output/MZpTBpGrid

These directories contain not only the scan data saved in the Les Houches format (SpectrumFiles.spc) and Mathematica format (Data.m) but also the plots we wanted to see. I show these in Fig. 12 for the linear scan and in Fig. 13 for the grid scan.
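Schematically, the beginning of such a scan file might look like the following sketch. Only RunScans and the two scan identifiers are taken from the text; the settings file name and the DEFINITION entries are assumptions to be checked against the SSP templates:

```
(* BLSSM_TBpMZp.m -- illustrative sketch *)
LoadSettings = "DefaultSettings.m.BLSSM";      (* file with the tool paths, name assumed *)
RunScans = {MZpLinear, MZpTBpGrid};            (* identifiers for the two scans *)
DEFINITION[MZpLinear][IncludeHiggsBounds]  = True;
DEFINITION[MZpLinear][IncludeHiggsSignals] = True;
(* DEFINITION[MZpLinear][IncludeMicrOmegas] = True;  to add MicrOmegas as well *)
```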

Using data of a scan in Mathematica
Of course, it is also possible to use the results of scans performed by SSP later in Mathematica. For this purpose SSP provides the function MakeSubNum to translate the data saved in the SLHA or Mathematica format into a list of Mathematica substitutions. These substitutions can then be used to apply cuts, to extract points, or to make more plots.
To load and format the data of the grid scan, either of two commands can be used. With both options we get an array of substitutions called SubData which we can use. However, there is one caveat: the data files for large scans are huge. These files include all information calculated by SPheno and the other tools. Often, not all of this information is really needed, but one is only interested in the behaviour of a subset of parameters or observables. Therefore, it often saves a lot of time and memory to extract the information from the big files which is actually needed, and to store it in smaller files. This can be done under Linux, for instance, with a small shell script using again grep and the comments appearing in each line of the SPheno output. The smaller files can be loaded in the same way, and we can use them for instance to make some more plots. For instance, to make a contour plot of the doublet fraction of the second lightest Higgs in the (tan β', M_Z') plane, we can use the standard Mathematica plotting routines on the formatted data. The obtained plot is shown in Fig. 14. We can also apply some cuts and collect points with a neutralino mass below 500 GeV. We see that there is an infinite number of possibilities to study data within Mathematica using Select or other Mathematica commands.
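Such an extraction script could look as follows. This is a self-contained sketch that first builds a toy stand-in for a spectrum file; the comment strings to grep for ("hh_1", "Chi_1") are assumptions and have to be adapted to the actual SPheno output:

```shell
#!/bin/bash
# Toy stand-in for one SPheno spectrum file of a scan (real files are much larger)
printf 'Block MASS  # Mass spectrum\n   25   1.25E+02  # hh_1\n 1000022  4.80E+02  # Chi_1\n' > SpectrumFiles.spc

# Keep only the lines of interest, identified by the trailing SPheno comments
grep "# hh_1"  SpectrumFiles.spc >  small_data.dat
grep "# Chi_1" SpectrumFiles.spc >> small_data.dat

cat small_data.dat
```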

Summary
In the first part of this paper I have given an overview of what models SARAH can handle and what calculations it can do for these models. In addition, I have discussed to what other HEP tools the information derived by SARAH can be linked. In the second part I have discussed in great detail how all aspects of a SUSY model can be studied with SARAH and the related tools. For this purpose I chose the B-L-SSM as example. The implementation of the B-L-SSM in SARAH was explained, and it was shown what can be done within Mathematica to gain some understanding of the model. Afterwards, I have explained how SARAH in combination with SPheno is used to calculate the mass spectrum, decays, flavour and precision observables, and the fine-tuning. The next step was to check parameter points with HiggsBounds and HiggsSignals for their Higgs properties, with Vevacious for their vacuum stability, and with MicrOmegas for their dark matter relic density. I have given two short examples for collider studies using SARAH model files: first, monojet events were generated with WHIZARD; second, a simple dilepton analysis was done with MadGraph. Finally, I discussed possibilities to perform parameter scans using either shell scripts or SSP. This manuscript hopefully shows how helpful SARAH can be to study models beyond the SM or MSSM. Because of the very high level of automatization, the user can quickly get results with a precision which is otherwise only available for the MSSM. Of course, also the possibility to make mistakes is highly reduced compared to a home-brewed calculation. I hope that the detailed explanation of a specific example simplifies the first contact of interested users with the many different tools which are available today.
A Some more details I couldn't address all interesting topics in the main part. Therefore, I give in this appendix a few more details on selected topics.

A.1 Flags in SARAH model files
There are different flags to enable or disable distinctive features which might be present in some models: • AddDiracGauginos = True/False;, default: False, includes/excludes Dirac gaugino mass terms In addition, the different steps to derive the Lagrangian of the gauge eigenstates are also saved in different variables: • Superpotential: Superpotential • Fermion-scalar interactions coming from the superpotential: Wij

In some cases a numerically more precise calculation is needed to diagonalize mass matrices in SPheno. This is the case if the hierarchy in a mass matrix is very large; then double precision with about 15 digits might not be sufficient. The best example are models with R-parity violation where neutrinos and neutralinos mix. Another example are seesaw type-I like models where TeV-scale right-handed neutrinos mix with the left-handed ones. In these cases one has to go for quadruple precision, which gives a precision of about 32 digits. To enable quadruple precision for specific masses, two small changes are necessary: 1. In SPheno.m, used to set up the SPheno output, one has to define for which particles the higher precision is needed. This is done with the variable QuadruplePrecision, which accepts a list of mass eigenstates. If we just want to have the masses of the neutrinos, which are called Fv in the considered model, with higher precision, the corresponding line reads

QuadruplePrecision = {Fv};

By doing that, the routines necessary for a higher precision get compiled. To make sure that everything is consistent, it might be a good idea to re-compile the entire code after changing the Makefile:

cd $PATH/SPHENO
make cleanall
make Model=$NAME

A.4.1 Numerical solutions
In the main part of this paper we solved the tadpole equations for the SPheno output with respect to µ, µ', Bµ and Bµ', for which an analytical solution exists. This is not always the case. For instance, if one does not want to use tan β' and M_Z' as input, but to obtain x1 and x2 from the minimum conditions, an analytical solution does not exist. To solve the equations numerically and to define the initialization used by the Broyden routines employed for that, SPheno.m has to contain the following lines: the first line is the same as for the analytical approach and defines that the tadpole equations are to be solved with respect to µ, Bµ, x1, and x2 this time. Without the other two lines, Mathematica's function Solve would try to find an analytical solution but fail, and SARAH would then stop the output with an error message. However, due to the second line the attempt to solve the tadpole equations inside Mathematica is skipped. The third line assumes that µ, x1 and x2 are O(m0) and that Bµ is O(m0^2). These values are used in the numerical routines to initialize the calculation. Of course, other possible and reasonable choices would have been to relate µ and Bµ to the running soft-breaking terms of the Higgs, and x1 and x2 to µ', which is now used as input. Usually, the time needed to find the solution changes only slightly with the chosen initialization values as long as they are not completely off. Note that all choices above would only find the solution closest to the initialization values. However, the equations are cubic in the VEVs, and there will in general be many solutions. Thus, it would be necessary to check if the found vacuum is the global one or at least long-lived. This could be done for instance with Vevacious.

A.4.2 Assumptions and fixed solutions
Assumptions It is possible to define a list of replacements which SARAH applies when it tries to solve the tadpole equations. For instance, one could use such a list to approximate some matrices as diagonal and to assume that all parameters are real. That has, of course, no impact on our example because these matrices don't show up in the tadpole equations. However, in the R-parity violating case with sneutrino VEVs this might help to find analytical solutions which don't exist in the most general case.

A.5 Thresholds in SUSY models
It is possible to include threshold effects in the RGE evaluation with SPheno. I concentrate here on the simpler case where the gauge symmetry doesn't change. In that case SARAH can derive the RGEs for all scales from the RGEs of the highest scale above all thresholds as follows: • The number of generations of the fields which are supposed to be integrated out during the RGE evaluation is parametrized by new variables n_gen(Φ_i). All gauge group constants like the Dynkin index S(R) are expressed as functions of n_gen(Φ_i), which is dynamically adjusted during the SPheno run when the RGEs cross the different thresholds.
• The superpotential and soft-breaking couplings which involve the heavy fields are set to zero when a threshold is crossed. For instance, take a Yukawa-like coupling Y^ij_Φ Φ_i φ_j H which involves three generations of the heavy field Φ. At the threshold of Φ_k, the k-th row of Y_Φ is set to zero.
In addition, two assumptions have to be satisfied: (i) the difference between the masses of the scalar and fermionic component of the heavy superfield is negligible, i.e. the masses coming from superpotential interactions are much larger than the soft-breaking term; (ii) these masses are a consequence of bilinear terms in the superpotential. Both assumptions are fulfilled for instance for very heavy vector-like particles or for singlets which have a large Majorana mass.
Procedure There are two steps necessary to implement thresholds according to the above assumptions. First, small changes in the model file of the considered model are necessary: the heavy states have to be 'deleted' at the SUSY scale. This is done by adding the superfields to the array DeleteFields.

DeleteFields = {...};
This ensures that the heavy particles are not taken into account in the calculation of mass matrices, vertices, or loop corrections at the SUSY scale.