Computational Method for Global Sensitivity Analysis of Reactor Neutronic Parameters

The variance-based global sensitivity analysis technique is robust, has a wide range of applicability, and provides accurate sensitivity information for most models. However, it requires input variables to be statistically independent. A modification to this technique that allows one to deal with input variables that are blockwise correlated and normally distributed is presented. The focus of this study is the application of the modified global sensitivity analysis technique to calculations of reactor parameters that are dependent on groupwise neutron cross-sections. The main effort in this work is in establishing a method for a practical numerical calculation of the global sensitivity indices. The implementation of the method involves the calculation of multidimensional integrals, which can be prohibitively expensive to compute. Numerical techniques specifically suited to the evaluation of multidimensional integrals, namely, Monte Carlo and sparse grid methods, are used, and their efficiency is compared. The method is illustrated and tested on a two-group cross-section dependent problem. In all the cases considered, the results obtained with sparse grids achieved much better accuracy while using a significantly smaller number of samples. This aspect is addressed in a mini-study, and a preliminary explanation of the results obtained is given.


Introduction
The apportioning of uncertainty in the output of a model (numerical or otherwise) to different sources of uncertainty in the model input is known as sensitivity analysis [1], and the associated quantitative values are known as sensitivity indices. The sensitivity indices can be used to rank the input variables of the model, based on the influence they have on the output. It thus becomes possible to recognize the probabilistically insignificant/unessential input variables that exert little influence on the output. This allows for the reduction of the dimensionality of the problem by fixing the unessential input variables, whilst more experiments, computations, research, and so forth can be done to determine the essential input variables with a higher degree of accuracy.
The focus of this study will be on global sensitivity analysis (GSA), which explores the full phase space of input parameters, as opposed to local sensitivity analysis (LSA) methods that are usually based on derivatives and analyse the behaviour of the model output around a chosen point. The implementation of GSA can be achieved by using either variance- [1][2][3] or entropy- [4,5] based methods. In our study, we will use Sobol's variance-based method [3]. This method is referred to as "variance-based" because, within the framework of this approach, the uncertainty of the output is characterized by its (output) variance. Sobol's method is robust, has a wide range of applicability, and, as stated in [6,7], provides accurate sensitivity information for most models. However, Sobol's method is defined for mutually independent input variables that are uniformly distributed. A modification of the method which allows one to deal with input variables that are blockwise correlated and normally distributed is presented in this work.
The modified method can then be applied to nuclear reactor calculations. Many reactor parameters of interest (such as the neutron multiplication factor, decay heat, reaction rates, etc.) are dependent on neutron cross-sections. These cross-sections are often described by only their first two statistical moments and are assumed to be normally distributed [8]. The uncertainties associated with the cross-sections are propagated to the final result of the calculated reactor parameters, and the uncertainty in a calculated reactor parameter can be apportioned to the different sources of uncertainty in the neutron cross-sections.
In this paper, we will present the method of global sensitivity analysis that will address the previous limitations and take into account the previously mentioned assumptions, with an emphasis on the numerical/calculational aspects in implementing the method. The rest of the paper is organised in the following way. Section 2 contains two major parts: in the beginning, we give theoretical background and some mathematical derivations for the method we present, and in the second part of the section, we discuss its practical numerical implementation. The theory description is supported by two appendices: Appendix A is used to summarize the definitions and properties of the functional ANOVA decomposition, and Appendix B provides explanations concerning the sparse grid integration method. In Section 3, we describe the particularities of our implementation of the proposed method and the problem we use to test and characterise the method, as well as the results obtained. Finally, Section 4 is used to present our conclusions.

Method
2.1. Definitions and Assumptions. Consider a problem in which some important reactor parameters, such as the neutron multiplication factor and the decay heat, depend on multigroup or few-group neutron cross-sections. We will use Y to denote the reactor parameter of interest and X_i (i = 1, 2, . . ., d) to denote the cross-sections. The dependence of the parameter of interest on the cross-sections can be written as a model

Y = f(X_1, X_2, . . ., X_d), (1)

where X_i are called inputs and Y is called the output or response. Model (1) is generally nonlinear and, in practice, often calculated numerically.
The cross-sections can be gathered in a column vector X = (X_1, X_2, . . ., X_d)^T, where the symbol "T" denotes the operation of transposing a row to a column. If input X is a random vector with a joint probability density function p(x) = p(x_1, x_2, . . ., x_d), then the response Y is a random variable with the expected value E[Y] and the variance Var[Y] defined as

E[Y] = ∫ f(x) p(x) dx,  Var[Y] = ∫ (f(x) − E[Y])^2 p(x) dx, (2)

correspondingly. Note that we will use, as is the rule in statistics, a capital letter to denote a random variable and a lowercase letter to denote its values (realizations).
In this work, we will assume that the cross-sections are random variables distributed according to the normal law with known means and covariances. The multivariate normal distribution for the probability Pr[X_i < x_i : i = 1, . . ., d] is characterized by the probability density function [9]

p(x) = (2π)^{−d/2} det(Σ)^{−1/2} exp(−(1/2)(x − µ)^T Σ^{−1} (x − µ)), (3)

where X is the column vector of random variables, µ = E[X] is the column vector of their expected values, and Σ = E[(X − µ)(X − µ)^T] is the covariance matrix.

Block-Correlated Random Variables.
Let us assume that the input vector X can be partitioned into Γ subsets of variables, that is, X = (X_1, X_2, . . ., X_Γ), and that the random vectors X_α and X_β from this partitioning are mutually independent for α ≠ β (α, β = 1, 2, . . ., Γ).
Using the definition of a covariance matrix, one can show [9] that in this case Σ_αβ = Σ_βα = 0 for α ≠ β. Hence, the covariance matrix becomes block diagonal, that is, Σ = diag(Σ_11, Σ_22, . . ., Σ_ΓΓ), where Σ_αα is the covariance matrix of X_α (α = 1, 2, . . ., Γ). The inverse of a block diagonal matrix is another block diagonal matrix, composed of the inverses of each block, that is, Σ^{−1} = diag(Σ_11^{−1}, Σ_22^{−1}, . . ., Σ_ΓΓ^{−1}). Moreover, taking into account that for block diagonal matrices det(Σ) = ∏_{α=1}^{Γ} det(Σ_αα), one can write the expression for the joint probability density function defined in (3) in a form that reflects the block independence of the variables:

p(x) = ∏_{α=1}^{Γ} p(x_α), (4)

where p(x_α) is the joint probability density function of a subset α and d_α = dim(x_α) is the number of variables in X_α.
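To make the blockwise factorization concrete, the following sketch (in Python rather than the Fortran 90 used in Section 3, and with small illustrative covariance blocks of our own choosing) checks numerically that, for Σ = diag(Σ_11, Σ_22), the determinant is the product of the block determinants and the inverse is assembled from the block inverses:

```python
# Sketch (not the paper's code): for a block-diagonal covariance
# Sigma = diag(S1, S2), the inverse and determinant factor blockwise,
# so the joint normal pdf is a product of per-block pdfs.

def inv2(m):
    """Inverse of a 2x2 matrix [[a, b], [c, d]]."""
    (a, b), (c, d) = m
    det = a * d - b * c
    return [[d / det, -b / det], [-c / det, a / det]]

def det2(m):
    (a, b), (c, d) = m
    return a * d - b * c

def matmul(a, b):
    n = len(a)
    return [[sum(a[i][k] * b[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

S1 = [[4.0, 1.0], [1.0, 3.0]]   # covariance block of subset X_1 (illustrative)
S2 = [[2.0, 0.5], [0.5, 1.0]]   # covariance block of subset X_2 (illustrative)

# det(Sigma) = det(S1) * det(S2) for Sigma = diag(S1, S2)
det_sigma = det2(S1) * det2(S2)

# Sigma^{-1} = diag(S1^{-1}, S2^{-1}): assemble the 4x4 inverse blockwise
I1, I2 = inv2(S1), inv2(S2)
inv_sigma = [
    [I1[0][0], I1[0][1], 0.0, 0.0],
    [I1[1][0], I1[1][1], 0.0, 0.0],
    [0.0, 0.0, I2[0][0], I2[0][1]],
    [0.0, 0.0, I2[1][0], I2[1][1]],
]

# check: Sigma times its blockwise inverse must be the identity
Sigma = [
    [S1[0][0], S1[0][1], 0.0, 0.0],
    [S1[1][0], S1[1][1], 0.0, 0.0],
    [0.0, 0.0, S2[0][0], S2[0][1]],
    [0.0, 0.0, S2[1][0], S2[1][1]],
]
prod = matmul(Sigma, inv_sigma)
```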

Global Sensitivity Analysis.
The variance-based global sensitivity analysis method aims to quantify the relative importance of each input parameter in the response variance. It involves the calculation of the global sensitivity indices, sometimes called Sobol's sensitivity indices [2,10].
In order to describe the global sensitivity indices, let us introduce the following notations: let {1, 2, . . ., d} be the set of input variable indices and let u be an arbitrary subset of it. Hence, X_u is the subset of variables whose indices are in u, whereas X_−u are the complementary variables, that is, the variables with indices not in u. The notation |u| will be used for the cardinality of the set u. The variables X_i from the nonoverlapping sets u and −u constitute the input vector X = (X_u, X_−u)^T.
Let us consider a subset X_u of the input variables. Two types of sensitivity indices of the model response to the input random variables X_u can be introduced:
(i) the main effect sensitivity index S_Xu, which describes the fraction of the variance of the output Y that is expected to be removed if the true values of the variables X_u become known;
(ii) the total sensitivity index S_Xu^tot, which can be interpreted as the fraction of the variance of the output Y that is expected to remain if the true values of the variables X_−u become known.
In other words, S_Xu represents the effect due to X_u alone, and S_Xu^tot represents the contribution of X_u to the variance, including all the interactions of these variables with the other variables.
The definition of the sensitivity indices and their theoretical justification come from functional ANOVA (analysis of variance). In Appendix A, we summarize the formulae of the functional ANOVA decomposition, assuming that the inputs are independent random variables with arbitrary continuous distributions.
Sobol [2,3] introduced an alternative way of calculating the sensitivity indices by sampling directly from f(x), that is, without passing through the ANOVA decomposition. Sobol's alternative formulae are valid for uniformly distributed, independent random variables. Generalizing this result for continuous independent random variables with an arbitrary probability density function p(x) = p(x_1) ⋯ p(x_d), one can write:

f_∅ = ∫ f(x) p(x) dx,  D = ∫ f^2(x) p(x) dx − f_∅^2, (5)

D_Xu = ∫∫ f(x_u, x_−u) f(x_u, x′_−u) p(x) p(x′_−u) dx dx′_−u − f_∅^2, (6)

D_Xu^tot = D − D_X−u. (7)

Here, the prime symbol over a variable (e.g., as in x′_u) means that this variable has to be sampled independently from the corresponding marginal distribution (p(x′_u) in this case) of its unprimed analogue. Using the results from (5)-(7), the global sensitivity indices can be calculated as ratios:

S_Xu = D_Xu / D,  S_Xu^tot = D_Xu^tot / D. (8)

Note that f_∅ and D correspond to the output mean and the output variance introduced in (2). The independence condition for the input variables can be relaxed. As discussed in [11], it is not necessary that all variables be mutually independent: this result holds when assuming independent blocks of input variables X_α instead of single independent input variables X_i. Thus, if the subsets of variables from X_u and X_−u are mutually independent, that is, p(x) = p(x_u)p(x_−u), the sensitivity analysis formulas (6) and (7) are still applicable. Moreover, as one can see from (5), the formula for the output variance does not explicitly involve any particular subset of input variables. As a result, the variance of the output, D, can be calculated with the method presented here even in the case when all input variables are correlated. Since the variance is used to characterise the uncertainty in the output due to the uncertainty of the input, the method from this paper can be used for uncertainty analysis regardless of whether the normally distributed inputs are correlated or not.
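The double-sampling estimators behind these formulae can be sketched as follows. This is a minimal Python illustration, not the paper's Fortran 90 implementation; the test model Y = X_1 + 2X_2 with independent standard normal inputs is our own choice, for which S_X1 = S_X1^tot = 1/5 analytically (the model has no interactions):

```python
import random

def sobol_indices(f, d, u, sample, n):
    """Monte Carlo estimates of (S_u, S_u_tot) for a subset u of {0, ..., d-1},
    assuming mutually independent inputs drawn by sample()."""
    mean = mean_sq = corr_u = corr_cu = 0.0
    for _ in range(n):
        x = [sample() for _ in range(d)]    # x sampled from p(x)
        xp = [sample() for _ in range(d)]   # independent resample x'
        y = f(x)
        # (x_u, x'_{-u}): resample the complement -> correlates through x_u only
        y_u = f([x[i] if i in u else xp[i] for i in range(d)])
        # (x'_u, x_{-u}): resample u itself -> main effect of the complement
        y_cu = f([xp[i] if i in u else x[i] for i in range(d)])
        mean += y
        mean_sq += y * y
        corr_u += y * y_u
        corr_cu += y * y_cu
    mean /= n
    var = mean_sq / n - mean * mean                     # output variance D
    s_u = (corr_u / n - mean * mean) / var              # main effect D_u / D
    s_u_tot = 1.0 - (corr_cu / n - mean * mean) / var   # total effect 1 - D_{-u}/D
    return s_u, s_u_tot

# test model of our own choosing: Y = X1 + 2*X2, X_i ~ N(0, 1)
random.seed(2024)
model = lambda x: x[0] + 2.0 * x[1]
s1, s1_tot = sobol_indices(model, 2, {0}, lambda: random.gauss(0.0, 1.0), 100_000)
```

With Var[X1] = 1 and Var[2 X2] = 4, the estimates should both land near the analytic value 0.2.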
As follows from the previous description, the evaluation of the sensitivity indices requires the calculation of the integrals in (5)-(7), which can be written in the following general form:

I_deff[g] = ∫ g(x) p(x) dx, (9)

where I_deff[·] is the integration operator, g(x) represents the function being integrated, d_eff = dim(x) is the effective dimensionality of the integral, and p(x) is the joint probability density function of x. For instance, in integral (6), the function is g(x) = f(x_u, x_−u) f(x_u, x′_−u), and the effective dimensionality is d_eff = d + d_−u, where d_−u = dim(x_−u).

Standard Normal Law Representation.
Though the blockwise representation (4) of the joint probability density function (3) allows one to exploit the independence of the different subsets of variables, it gives no information about the practical way of calculating a sensitivity index. It is convenient to rewrite the expression in the so-called standard form in order to simplify future numerical evaluations of the global sensitivity indices. Since the covariance matrices are both symmetric and positive definite, for each Σ_αα there is a nonsingular matrix P_αα such that Σ_αα = P_αα P_αα^T (Cholesky factorization). Consider the linear transformation z_α = P_αα^{−1}(x_α − µ_α). For any α, it leads to

p(x_α) dx_α = (2π)^{−d_α/2} exp(−(1/2) z_α^T z_α) dz_α, (10)

and one can show that E[z_α] = 0, Cov[z_α] = I_α, where I_α is the d_α × d_α identity matrix. Combining the blocks z = (z_1, z_2, . . ., z_Γ), the joint probability density function can be written in the standard form:

p(z) = ∏_{i=1}^{d} (2π)^{−1/2} exp(−z_i^2/2), (11)

where p(x)dx = p(z)dz. The new standard random variables Z_i (i = 1, 2, . . ., d) have zero mean and standard deviations equal to one, and are not correlated, that is, Z_i ∼ N(0, 1).
Representation (11) can now be used for the calculation of the sensitivity indices: the variables Z_i can be sampled individually from N(0, 1), and the corresponding x-points can be calculated as

x_α = µ_α + P_αα z_α, (12)

where α goes over all subsets of X. Nevertheless, in order to simplify the sampling procedure, to allow the use of a single calculational path, and to make a wider range of numerical integration techniques suitable for solving the problem, we perform one more transformation, from the normally distributed variables to uniformly distributed ones. Consider the following coordinate-wise change of variable from z_i ∈ R to s_i ∈ (0, 1):

s_i = Φ(z_i), (13)

where Φ(·) is the cumulative distribution function for the normal distribution. From the properties of Φ, the differentials are related by

ds_i = p(z_i) dz_i, (14)

so that each s_i is uniformly distributed on (0, 1). Applying this transformation coordinate-wise (i.e., for i = 1, 2, . . ., d_eff) and introducing h(s) = g(x[z(s)]) gives the representation of the integral (9) in the form

I_deff[g] = ∫_{[0,1]^deff} h(s) ds. (15)

Here, z_i = Φ^{−1}(s_i), where Φ^{−1}(·) is the inverse cumulative distribution function for the normal distribution, called the probit function, and x(z) is defined by (12).
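The chain of transformations s → z → x described above can be sketched as follows. This is an illustrative Python fragment with a hypothetical 2 × 2 covariance matrix (not taken from the paper); by construction, the centre of the unit hypercube must map to the mean vector µ:

```python
# Sketch of the sampling chain: uniform s in (0,1)^d -> standard normal z via
# the probit function -> correlated x = mu + P z, where Sigma = P P^T.
import math
from statistics import NormalDist

def cholesky2(sigma):
    """Cholesky factor P (lower triangular) of a 2x2 covariance matrix."""
    p11 = math.sqrt(sigma[0][0])
    p21 = sigma[1][0] / p11
    p22 = math.sqrt(sigma[1][1] - p21 * p21)
    return [[p11, 0.0], [p21, p22]]

def s_to_x(s, mu, P):
    """Map a point of the unit hypercube to a correlated normal sample."""
    z = [NormalDist().inv_cdf(si) for si in s]   # probit: z_i = Phi^{-1}(s_i)
    return [mu[i] + sum(P[i][j] * z[j] for j in range(2)) for i in range(2)]

mu = [1.0, 2.0]
sigma = [[4.0, 1.0], [1.0, 3.0]]   # hypothetical covariance matrix
P = cholesky2(sigma)

# the hypercube centre (0.5, 0.5) maps to z = (0, 0) and hence to x = mu
x_center = s_to_x([0.5, 0.5], mu, P)
```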

Numerical Calculation of Sensitivity Indices
2.4.1. Numerical Quadratures. The integral in (15) can be approximated with a quadrature (sometimes called a cubature in the literature) that can be written in the following general form:

I_deff[g] ≈ Σ_{n=1}^{N} w_n h(s_n), (16)

where w_n are method-dependent quadrature weights, h(s_n) are samples of the integrand at the method-dependent nodes s_n ∈ [0, 1]^deff, and N is the number of samples. The integral in (15) is multidimensional, and, therefore, special numerical techniques that can cope with the curse of dimensionality are required to calculate it. Monte Carlo (including quasi-Monte Carlo) and sparse grid integration methods are suitable for this task and will be considered in our paper. Below, we briefly introduce these methods and discuss their implementation in our work.

Monte Carlo and Quasi-Monte Carlo Quadratures.
In the case of the traditional Monte Carlo method, the integral is sampled on a set of d_eff-dimensional pseudo-random points s_n, uniformly distributed in the unit hypercube [0, 1]^deff. In the case of quasi-Monte Carlo, so-called low-discrepancy sequences of quasi-random points (also uniformly distributed in [0, 1]^deff) are used for integration. For both traditional Monte Carlo and quasi-Monte Carlo, the weights w_n are point-independent and equal, that is, w_n = 1/N. The quasi-Monte Carlo quadratures have a higher asymptotic convergence rate and often outperform the traditional Monte Carlo quadrature in practical applications [12].
There is a strong similarity between the traditional Monte Carlo and quasi-Monte Carlo quadratures, except for the type of sampling points (pseudo-random or quasi-random) and the way of error estimation. The error estimation will be done in the same way for both quadratures (see the discussion later). Hence, in this paper, both the traditional Monte Carlo and the quasi-Monte Carlo quadratures will be referred to as Monte Carlo quadratures.
In this work, we follow Sobol's recommendations [3] on the implementation of the Monte Carlo quadratures for the calculation of the sensitivity indices. In particular, sampling is done from the hypercube [0, 1]^{2d} instead of [0, 1]^deff and, in order to improve the accuracy of the estimation in (15), the function f(x) − c_0 is evaluated instead of f(x) in (5)-(7), where c_0 ≈ f_∅.
Our estimation of the accuracy of the Monte Carlo quadratures is based on a so-called randomization procedure [13]. This procedure consists of calculating R independent estimates, I_N^r, of the integral (15). The approximation to the integral (15) is then calculated as the average of the independent estimates, that is,

Ī_RN = (1/R) Σ_{r=1}^{R} I_N^r, (17)

and the error of this approximation is characterized by the sample standard deviation of the mean, defined as

ε_RN = √( Σ_{r=1}^{R} (I_N^r − Ī_RN)^2 / (R(R − 1)) ). (18)

Each estimate I_N^r is based on an independent sequence of N quasi- or pseudo-random points, where each new sequence of points is obtained from the initial one by a random modulo 1 shift [13].
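The randomization procedure can be sketched as follows. This is an illustrative one-dimensional Python example of our own: a simple equispaced point set stands in for a quasi-random sequence, and the integrand (∫₀¹ s² ds = 1/3) and point counts are chosen only for the demonstration:

```python
# R independent random shifts modulo 1 of one deterministic point set give
# R estimates; their mean approximates the integral and their sample
# standard deviation estimates the integration error.
import math
import random

def shifted_estimate(h, base_points, shift):
    """Equal-weight (1/N) quadrature estimate on the shifted point set."""
    return sum(h((p + shift) % 1.0) for p in base_points) / len(base_points)

def randomized_quadrature(h, base_points, r):
    estimates = [shifted_estimate(h, base_points, random.random()) for _ in range(r)]
    mean = sum(estimates) / r
    # sample standard deviation of the mean over the R replicates
    var = sum((e - mean) ** 2 for e in estimates) / (r - 1)
    return mean, math.sqrt(var / r)

random.seed(7)
base = [n / 64 for n in range(64)]   # stand-in for a low-discrepancy point set
est, err = randomized_quadrature(lambda s: s * s, base, 100)
```

Here `est` approximates 1/3 and `err` gives a statistical error bar on that estimate.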

Sparse Grid Quadratures.
A sparse grid H_{ℓ,deff} is a set of d_eff-dimensional points, which is generated using the Smolyak construction [14] and is based on a chosen sequence of univariate quadrature formulas Q_l, where l ≥ 0 is the accuracy level of Q_l (see Appendix B for details). When applied to the integration of multivariate functions, the Smolyak construction is a multidimensional quadrature Q_{ℓ,deff} based on a tensor product of the one-dimensional quadratures Q_l, which are combined in a special way in order to optimize the quadrature convergence rate [15,16]. The sequence of univariate quadrature formulae Q_l leads to a sequence of sparse grid quadratures with an increasing sparse grid accuracy level ℓ ≥ 0.
Q_{ℓ,deff}[h(s)] is a linear functional that depends on h(s) through the function values at the set H_{ℓ,deff}, and the number of terms N in (16) is defined by its cardinality. The sparse grid points s_n ∈ H_{ℓ,deff} and the quadrature weights w_n can be calculated using the procedure described in Appendix B.
If H_{ℓ,deff} ⊂ H_{ℓ+1,deff}, the quadrature is called nested. Nested quadratures permit the reuse of function values from previous levels, thus making the integration less computationally expensive. Quadrature rules are said to be open when they do not include points on the boundary and closed otherwise. Points on the boundary (i.e., s_i = 0 or s_i = 1 for some i = 1, 2, . . ., d_eff) represent a problem for the numerical integration in (9), as the transformation s_i → z_i leads to infinities at these points. Hence, only nested and strictly open sparse grid quadratures will be used in this work.
The sequence of sparse grid quadratures naturally leads to a formula for a practical estimation of the integration error,

ε_ℓ = |Q_{ℓ,deff}[h(s)] − Q_{ℓ−1,deff}[h(s)]|, (19)

although this estimation is usually quite conservative. Note that sparse grids are often defined on the hypercube s* ∈ [−1, 1]^deff. In this case, they can easily be mapped to the unit hypercube [0, 1]^deff using the transformation of variables s_i = (s*_i + 1)/2. When this mapping is performed, all sparse grid quadrature weights w_n have to be divided by a factor of 2^deff to compensate for the change in volume.
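As a one-dimensional sketch of this mapping (our own illustration, using the classical 3-point Gauss-Legendre rule, whose nodes and weights are tabulated), the nodes are mapped by s = (s* + 1)/2 and each weight is halved; the resulting rule then integrates polynomials up to degree 5 exactly over [0, 1]:

```python
# Map a quadrature rule from the reference interval [-1, 1] to [0, 1]:
# nodes s = (s* + 1)/2, weights w = w*/2 (division by 2^{d_eff} in general).
import math

# 3-point Gauss-Legendre rule on [-1, 1] (weights sum to 2, the interval length)
nodes_ref = [-math.sqrt(3.0 / 5.0), 0.0, math.sqrt(3.0 / 5.0)]
weights_ref = [5.0 / 9.0, 8.0 / 9.0, 5.0 / 9.0]

nodes = [(s + 1.0) / 2.0 for s in nodes_ref]
weights = [w / 2.0 for w in weights_ref]   # now the weights sum to 1

# exact for s^3 on [0, 1], whose integral is 1/4
integral = sum(w * s ** 3 for w, s in zip(weights, nodes))
```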

Inversion of the Standard Normal Cumulative Distribution Function.
According to the methodology discussed in the previous section, each sample vector s_n, generated with either the Monte Carlo or the sparse grid techniques, requires transformation to the corresponding z_n vector.
The traditional way to generate normally distributed points in conventional Monte Carlo is to sample from the uniform distribution and then use the so-called Box-Muller transformation [17]. Unfortunately, it is not recommended [18] for use with quasi-Monte Carlo and is not suitable for use with sparse grids. An alternative way is to sample from the uniform distribution and then use the inverse of the standard normal cumulative distribution function. It is recommended to use Moro's inversion algorithm [19], which is reported to be faster than the Box-Muller approach and has good accuracy in both the central region and the tails of the normal distribution [18].
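As a stand-in for Moro's algorithm (which we do not reproduce here), the sketch below uses Python's statistics.NormalDist.inv_cdf, based on Wichura's AS241 inversion, to illustrate the inverse-CDF approach and its round-trip accuracy in both the central region and the tails:

```python
# Inverse-CDF sampling: z = Phi^{-1}(u) maps uniform u in (0, 1) to N(0, 1).
from statistics import NormalDist

nd = NormalDist()

# round-trip check Phi(Phi^{-1}(u)) = u at central and tail probabilities
probes = (1e-6, 0.025, 0.5, 0.975, 1.0 - 1e-6)
max_err = max(abs(nd.cdf(nd.inv_cdf(u)) - u) for u in probes)

# a familiar reference value: the 97.5% quantile of N(0, 1)
z975 = nd.inv_cdf(0.975)
```

The round-trip error stays near machine precision even at u = 10⁻⁶, which is the property that makes inversion preferable to Box-Muller for quasi-random and sparse grid points.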

Algorithms for Calculation of Global Sensitivity Indices.
Algorithms 1 and 2 provide examples of how to calculate the sensitivity indices based on a Monte Carlo quadrature and a sparse grid quadrature, respectively. Note that these algorithms are given for the sake of illustration and do not contain details about possible memory management or performance enhancements.

Test Problem Description.
The OECD LWR UAM (OECD: Organization for Economic Co-operation and Development; LWR: light water reactor; UAM: Uncertainty Analysis in Modelling) benchmark [8] seeks to determine the uncertainty in LWR system calculations at all stages of coupled reactor physics/thermal hydraulics calculations. The benchmark specification consists of three phases, where the first phase is the neutronic phase.
The neutronic phase involved obtaining multigroup microscopic cross-section libraries. These libraries would then be used to calculate few-group macroscopic cross-sections, which are to be used in criticality (steady-state) stand-alone calculations. One of the reactors chosen as a reference LWR for the benchmark was the Peach Bottom reactor. The two-group cross-sections considered in this study are listed in Table 1; they are assumed to be independent and normally distributed, and their mean values (given in the third column of Table 1) correspond to the vector µ used in our methodology. The covariance matrix is shown in Table 2, where the diagonal terms are the percentage relative standard deviations and the off-diagonal terms are the correlation coefficients.
The global sensitivity analysis methodology discussed in Section 2 was applied to nuclear reactor calculations. The reactor parameter of interest chosen for this study is the infinite neutron multiplication factor, k∞, modelled as a function of the two-group cross-sections [22], where the traditional notation for macroscopic cross-sections is used (see Table 1). To illustrate our methodology, three cases were considered (all three cases are shown in Table 2): one with a diagonal covariance matrix, another with a block-diagonal covariance matrix, and the last with the full covariance matrix.
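As a hedged sketch of such a model, the fragment below assumes the standard two-group infinite-medium expression for k∞, with the groupwise absorption taken as capture plus fission; the cross-section values used are hypothetical placeholders chosen for illustration, not the benchmark values of Table 1:

```python
# Assumed two-group infinite-medium model (a common textbook form, used here
# only as an illustration of a cross-section dependent response):
#   k_inf = (nu_sf1 + nu_sf2 * s12 / sa2) / (sa1 + s12),
# where sa_g = sc_g + sf_g is the absorption cross-section of group g,
# s12 is the fast-to-thermal scattering (removal) cross-section, and
# nu_sf_g is the production cross-section of group g.
def k_infinity(sc1, sf1, nu_sf1, sc2, sf2, nu_sf2, s12):
    sa1 = sc1 + sf1   # fast-group absorption
    sa2 = sc2 + sf2   # thermal-group absorption
    return (nu_sf1 + nu_sf2 * s12 / sa2) / (sa1 + s12)

# hypothetical macroscopic cross-sections (cm^-1), plausible in order of magnitude
k_inf = k_infinity(sc1=0.005, sf1=0.002, nu_sf1=0.005,
                   sc2=0.03, sf2=0.05, nu_sf2=0.12, s12=0.02)
```

With these placeholder values the medium is supercritical (k∞ > 1), as expected for a thermal system dominated by thermal fission production.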
In the first case (hereafter referred to as Case A), it was assumed that the input parameters (cross-sections) are not correlated, and the covariance matrix consisted of only the diagonal entries, highlighted in bold, while all other elements of the matrix were set to zero.
In the second case (hereafter referred to as Case B), we assumed a test block-diagonal covariance matrix. The test matrix was artificially constructed based on the 2-group covariance matrix from [21] in such a way that the input variables can be partitioned into three mutually independent subsets, with the elements in the off-diagonal blocks set to zero, that is, the terms highlighted in italic are set to zero. It should be noted that the elements of the first subset correspond to those terms that contribute to the absorption cross-section. The elements of the second subset correspond to those terms that contribute to the production of neutrons, and the last subset corresponds to the removal of neutrons from the fast group to the thermal group.
For the last case (hereafter referred to as Case C), it was assumed that all the input parameters (cross-sections) are correlated with one another, and the full covariance matrix was used, that is, all entries highlighted in bold, italic, and non-italic.
It should be emphasised that neither of the first two examples (Cases A and B) pretends to reflect physical reality, but both the cross-section values and the elements of the test covariance matrix are of a plausible order of magnitude (close to the values given in [21]); hence, this example is representative and suitable for testing our method. The results and conclusions will therefore be given in order to characterise the method presented and not the neutron multiplication properties of the Peach Bottom reactor.

Method Implementation.
A Fortran 90 program was written to implement all the steps of the methodology outlined in Section 2. The program was subdivided into blocks of code, where each block had an input to be evaluated to give an expected output and corresponded to step(s) along the calculational path of the methodology. The testing, verification, and validation of the program were done for each block of code using test functions for which the corresponding results could be evaluated analytically. Pseudo-random points were generated using the Fortran intrinsic subroutine random_number(). In implementing quasi-Monte Carlo, a Sobol quasi-random number generator written by J. Burkardt [23] was used. Furthermore, a randomization procedure was used in estimating the integration error for the Monte Carlo quadratures, by considering R = 100 independent sequences with N = 10^6 samples in each sequence.
The implementation of the sparse grid quadratures was greatly facilitated by subroutines written by J. Burkardt [23]. Different open sparse grid quadrature rules, such as the Fejer, Gauss-Patterson, and Gauss-Legendre rules, were applied (note that closed rules were also tested and, as expected, numerical problems at the boundary points were encountered). The Gauss-Legendre quadrature outperformed the other rules in terms of the computational time needed to achieve a given accuracy for the cases considered, and its results will be reported up to a sparse grid level of ℓ = 4. The conservative procedure defined by (19) was used in estimating the integration error for the sparse grid quadratures and is reported in this paper.
Variations were introduced into the neutron cross-sections by using the standardizing transformation explained in (12), that is, x(z) = µ + Pz, where P is the extended Cholesky-decomposed neutron cross-section covariance matrix, and z is obtained by applying Moro's inversion to the samples required by each of the implemented quadratures. Finally, in order to improve the accuracy of the Monte Carlo estimation of integral (15), a variance reduction technique [3], which consists of sampling the function f(x) − c_0 instead of f(x) in (5)-(7), where c_0 ≈ f_∅, was used.

The computed output variances are reported in Table 3 for Cases A, B, and C. Though the output variance is the natural result of variance-based sensitivity analysis, the standard deviation is preferred in the literature because it allows an intuitive interpretation as the error bar for the value of the analysed parameter. The standard deviations are calculated as the square root of the variance, √D, and reported in Table 3 in parentheses for all cases and each quadrature. The uncertainty of the multiplication factor (expressed in relative units as 100% × δk/k), which we obtained in Case C, was 0.51% for each of the three quadratures, and this result is in good agreement with the value of 0.49% reported in [21].
The computed sensitivity indices for each of the variables (cross-sections) in Case A and for each subset in Case B are given in Table 4. No sensitivity analysis was performed for Case C, since any sensitivity indices computed with our method would be meaningless because all input parameters are correlated in this case.
Considering the results of Case A, the input variable with the greatest influence on the infinite neutron multiplication factor is the thermal neutron production, νΣ_f^2, and the input variable with the least influence is the fast neutron fission, Σ_f^1. This is in line with what we anticipated, given that the infinite neutron multiplication factor is highly dependent on the number of neutrons produced in the system. Since the system being considered is thermal, the thermal neutron production should account for most of the neutrons produced, and the effect of fast neutron fission was not expected to be significant.
Considering the results of Case B, the subset corresponding to the neutron production had the greatest influence on the infinite neutron multiplication factor. The subset {Σ_s^{1→2}}, which corresponds to the fast neutron removal, was the least influential. It should be noted that the value of the sensitivity index for {Σ_s^{1→2}} is different in Cases A and B. This is because the off-diagonal terms of the covariance matrix influenced the results for {Σ_s^{1→2}}. In other words, due to the off-diagonal terms in the correlation matrix, Cases A and B define different problems.

Error Analysis.
The results for Cases A and B, in Tables 3 and 4, were obtained by using a high number of samples with all three numerical quadratures (N = 10^6, R = 10^2 in the case of the Monte Carlo quadratures and ℓ = 4, N = 56785 in the case of the sparse grid quadrature), and these results are taken as the reference. There is very good agreement of the computed uncertainties and sensitivity indices between all three quadratures. The accuracy obtained for the reference results is much better than the accuracy needed to draw practical conclusions concerning the contribution of the uncertainties of the different cross-sections. By this we mean that the accuracy of the sensitivity index estimation has to be, at least, sufficient to discriminate between the contributions of different inputs and should also be able to discriminate between S_Xu^tot and S_Xu for a given input X_u.
Therefore, an error estimation study was done in order to determine the influence of the number of samples on the absolute and relative quadrature errors of the computed sensitivity indices, where the relative quadrature error is given by

δ = (ε / S) × 100%, (20)

where S is the computed sensitivity index and ε, the absolute quadrature error, is given by either (18) or (19). This study helps in determining the number of samples needed to obtain a good estimation of the sensitivity indices with the different numerical methods. For the Monte Carlo methods, three different sample sizes were considered: N = 10^2, N = 10^4, and N = 10^6. In all cases, the number of independent sequences R was taken as 10^2. For the sparse grid, levels ℓ = 1 to ℓ = 4 were considered.
It was observed in both cases that the results obtained for S_Xu^tot and S_Xu are statistically similar for all subsets of the input variables, for all three numerical methods that were used. This implies that the interaction effects can be neglected. It was also observed that the integration error for S_Xu^tot was smaller than for S_Xu in all the cases; hence, from now on, we will only consider S_Xu^tot. When considering Case A, it was observed that increasing N by a factor of 100 resulted, as expected, in a reduction of the integration error by a factor of approximately 10 for all the computed total sensitivity indices when using traditional Monte Carlo. The results for quasi-Monte Carlo showed that increasing N from 10^2 to 10^4, and subsequently from 10^4 to 10^6, resulted in a decrease of the integration error by factors of about 30 and 40, respectively, for all the computed total sensitivity indices. For the sparse grid, a level change from ℓ = 2 to ℓ = 3 and from ℓ = 3 to ℓ = 4 both resulted in a decrease of the integration error by a factor of about 3.
The maximal absolute and relative errors for the total sensitivity indices computed with different numbers of samples are reported in Table 5 for the Monte Carlo quadratures and in Table 6 for the sparse grid quadrature. These maximal absolute errors are obtained by taking the maximal absolute error over all the sensitivity indices for a given case, a given number of samples, and a given quadrature. The maximal relative error is obtained in the same way.
As one can see from Table 5, a relatively small number of samples (N × R = 100 × 100) in the case of Monte Carlo gave fairly good accuracy (about 2%) in the estimation of the total sensitivity indices.
It should be noted that levels ℓ = 0 and ℓ = 1 for the sparse grid were not considered in the error estimation. This is because for level ℓ = 0 the abscissa consists of only one point and the variance is zero; hence, the total sensitivity index is undefined. For the same reason, the application of (19) cannot give reasonable results for level ℓ = 1. However, looking at Table 6, it can be seen that the maximal difference between the results obtained for levels ℓ = 1 and ℓ = 2 is smaller than 2 · 10^{−5}, and the maximal relative quadrature error obtained when moving from level ℓ = 1 to ℓ = 2 is smaller than 3.8 · 10^{−2}%. Hence, this shows that, for both cases, level ℓ = 1, which contains only 29 points, is sufficient to estimate the total sensitivity indices with very good accuracy.
The relatively small number of sparse grid points needed for an accurate estimation of the sensitivity indices, as well as the absence of interactions between the input variables (discussed earlier), was unexpected. This result can be explained as follows: the uncertainty in the cross-sections is so small that only the vicinity of the cross-section mean values contributes to the integrals used in the estimation of the sensitivity indices. In this vicinity, the neutron multiplication factor, which is used as the example output, can be approximated by a nearly linear function.
A small numerical experiment was performed to clarify this aspect. The standard deviations given in Table 1 were multiplied by arbitrary factors between 1 and 10 in a first phase, and between 1 and 20 in a second phase, and the sensitivity indices were recalculated. These factors were chosen to make the effect of a wider distribution more prominent without introducing a significant nonphysical effect due to negative cross-section values in the left tail of the distributions. It was observed that, as the standard deviations increase, interaction effects appear, that is, $S^{\text{tot}}_{X_u}$ becomes statistically different from $S_{X_u}$. Furthermore, a larger number of points (higher levels) is needed to achieve the same accuracy as in the reference case. These results support our assumption on the origin of the good performance of the sparse grid quadrature. A more thorough study was subsequently performed to confirm this conclusion, and its results are reported in [24].
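The effect observed in this ministudy, namely interactions emerging as the input spread grows, can be reproduced on a toy model. The sketch below uses the Jansen pick-freeze estimators of the first-order and total Sobol indices with a hypothetical multiplicative output (loosely reminiscent of a multiplication factor written as a product of factors); the model, means, and standard deviations are illustrative choices, not the paper's data:

```python
import numpy as np

def jansen_sobol(f, mu, sigma, n=1 << 16, seed=0):
    """First-order (S) and total (S_tot) Sobol indices for independent
    normal inputs, estimated with the Jansen pick-freeze formulas."""
    rng = np.random.default_rng(seed)
    d = len(mu)
    A = rng.normal(mu, sigma, size=(n, d))
    B = rng.normal(mu, sigma, size=(n, d))
    fA, fB = f(A), f(B)
    V = np.var(np.concatenate([fA, fB]))
    S, S_tot = [], []
    for i in range(d):
        ABi = A.copy()
        ABi[:, i] = B[:, i]          # replace column i of A with column i of B
        fABi = f(ABi)
        S.append((V - 0.5 * np.mean((fB - fABi) ** 2)) / V)
        S_tot.append(0.5 * np.mean((fA - fABi) ** 2) / V)
    return np.array(S), np.array(S_tot)

# hypothetical multiplicative output, a toy stand-in for the reactor model
f = lambda x: x[:, 0] * x[:, 1]

for sig in (0.05, 1.0):
    S, S_tot = jansen_sobol(f, mu=[1.0, 1.0], sigma=[sig, sig])
    print(f"sigma={sig}: S={S.round(3)}, S_tot={S_tot.round(3)}, "
          f"interaction={(S_tot - S).round(3)}")
```

For the small spread the model is effectively linear and $S^{\text{tot}}$ and $S$ coincide; for the large spread the interaction term $x_1 x_2$ contributes noticeably and the two indices separate, mirroring the behaviour described above.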

Conclusions
In this paper, the global variance-based sensitivity and uncertainty analysis of reactor parameters dependent on few-group or multigroup neutron cross-sections was discussed. It was assumed that the cross-sections are normally distributed random variables with known means and correlation matrices, that they can be partitioned into statistically independent blocks of variables, and that this partitioning allows one to formulate scientifically and practically sound sensitivity analysis problems. The theoretical and mathematical aspects of the calculation of the global sensitivity indices under these assumptions were discussed. The problem of the practical numerical calculation of the variance-based global sensitivity indices was then addressed; namely, different options for numerical integration were considered. A consistent overall procedure for the calculation of the sensitivity indices was proposed and described.
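As a minimal sketch of the sampling assumption above, blockwise-correlated normal inputs can be drawn block by block, using a Cholesky factor per block, with the blocks mutually independent. The block structure, means, and covariances below are invented purely for illustration and are not the paper's data:

```python
import numpy as np

rng = np.random.default_rng(42)

# hypothetical blocks: (mean vector, covariance matrix) for each
# statistically independent block of cross-sections
blocks = [
    (np.array([1.5, 0.3]), np.array([[4e-4, 1e-4], [1e-4, 9e-4]])),
    (np.array([0.02]),     np.array([[1e-6]])),
]

def sample(n):
    """Draw n samples; each block uses its own Cholesky factor,
    and different blocks are sampled independently."""
    cols = []
    for mu, cov in blocks:
        L = np.linalg.cholesky(cov)          # cov = L @ L.T
        z = rng.standard_normal((n, len(mu)))
        cols.append(mu + z @ L.T)            # correlated normal block
    return np.hstack(cols)

x = sample(100000)
print(np.cov(x, rowvar=False).round(5))  # off-block covariances ~ 0
```

The sample covariance reproduces each block's matrix while the cross-block entries vanish, which is exactly the block-diagonal structure the method assumes.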
The method was successfully implemented in practice and tested on a problem involving two-group assembly-homogenised cross-sections as input variables. The performance of different numerical integration techniques was assessed on a reactor problem with arbitrary, but plausible, two-group cross-sections and covariance matrices. The different implementations gave consistent results for the test problem under consideration. The implementation based on sparse grid quadrature demonstrated the best accuracy, with as few as a few dozen samples.
This good performance of sparse grid integration was not expected, and a dedicated ministudy was performed to explain its origin as well as the absence of interactions in the obtained sensitivity indices. The results of this study confirmed our hypothesis that the observed behaviour can be explained by the very small cross-section uncertainty. Nevertheless, this conclusion still has to be supported by a theoretical explanation.
From the methodological point of view, the method presented in this paper is applicable to problems with an arbitrary number of input variables. Nevertheless, one has to be cautious when dealing with multivariate problems in order to escape the curse of dimensionality. In this work, the applicability of our method to a few-group problem was demonstrated; its applicability to multigroup reactor problems will be the topic of future studies.
where $|\mathbf{l}|_1 = \sum_{i=1}^{d} l_i$, and the multi-index $\mathbf{l} = (l_1, l_2, \ldots, l_d) \in \mathbb{Z}^d$ contains the accuracy level of the one-dimensional quadrature (B.4) for each dimension. The tensor product $\otimes$ in (B.5) can be calculated as

$$\left(Q_{l_1} \otimes \cdots \otimes Q_{l_d}\right)[\varphi(\mathbf{x})] = \sum_{j_1} \cdots \sum_{j_d} w^{l_1}_{j_1} \cdots w^{l_d}_{j_d}\, \varphi\!\left(x^{l_1}_{j_1}, \ldots, x^{l_d}_{j_d}\right), \tag{B.6}$$

where the tensor product of the quadrature weights $w^{l_i}_{j_i}$ is replaced by the ordinary product, since they are real numbers. As one can see from the structure of (B.5) and (B.6), the quadrature $Q_{\ell,d}[\varphi(\mathbf{x})]$ is a linear functional that depends on $\varphi$ through its values at a finite set of points. This set of points is called a "sparse grid" and is denoted by $H_{\ell,d}$. A sparse grid is defined as the union

$$H_{\ell,d} = \bigcup_{\ell+1 \le |\mathbf{l}|_1 \le \ell+d} \left(H_{l_1} \times \cdots \times H_{l_d}\right). \tag{B.7}$$

For nested one-dimensional sets ($H_{l_i} \subset H_{l_i+1}$), the corresponding sparse grids are also nested, $H_{\ell,d} \subset H_{\ell+1,d}$, and (B.7) can be simplified, yielding

$$H_{\ell,d} = \bigcup_{|\mathbf{l}|_1 = \ell+d} \left(H_{l_1} \times \cdots \times H_{l_d}\right). \tag{B.8}$$

The integral (B.2) can now be approximated by the sum

$$Q_{\ell,d}[\varphi(\mathbf{x})] = \sum_{\mathbf{x}^{\mathbf{l}}_{\mathbf{j}} \in H_{\ell,d}} w^{\mathbf{l}}_{\mathbf{j}}\, \varphi\!\left(\mathbf{x}^{\mathbf{l}}_{\mathbf{j}}\right), \tag{B.9}$$

where the multidimensional knots $\mathbf{x}^{\mathbf{l}}_{\mathbf{j}} = (x^{l_1}_{j_1}, x^{l_2}_{j_2}, \ldots, x^{l_d}_{j_d})$ can be constructed based on (B.7) and (B.8). The formulae for the quadrature weights $w^{\mathbf{l}}_{\mathbf{j}}$ in (B.9) can be obtained in analytical form only in a few particular cases; in all other cases, the weights can be either precalculated or calculated online using (B.5) and (B.6).
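A minimal sketch of this construction might look as follows. It uses nested one-dimensional midpoint/trapezoid rules on [0, 1] and an equivalent difference form of the Smolyak sum, in which the quadrature is assembled from tensor products of difference rules $\Delta_l = Q_l - Q_{l-1}$; the one-dimensional rules and the test integrand are illustrative choices, not those of the paper:

```python
from itertools import product

def points_1d(l):
    """Nested 1D abscissas on [0, 1]: the midpoint at level 0,
    2**l + 1 equispaced trapezoid points for l >= 1."""
    if l == 0:
        return [0.5]
    n = 2 ** l
    return [i / n for i in range(n + 1)]

def weights_1d(l):
    """Quadrature weights matching points_1d (midpoint / composite trapezoid)."""
    if l == 0:
        return [1.0]
    n = 2 ** l
    h = 1.0 / n
    return [h / 2 if i in (0, n) else h for i in range(n + 1)]

def delta_1d(l):
    """Difference rule Delta_l = Q_l - Q_{l-1}, stored as {point: weight}."""
    rule = {x: w for x, w in zip(points_1d(l), weights_1d(l))}
    if l > 0:
        for x, w in zip(points_1d(l - 1), weights_1d(l - 1)):
            rule[x] = rule.get(x, 0.0) - w
    return rule

def smolyak(level, dim):
    """Sparse grid quadrature {point_tuple: weight}, summing tensorised
    difference rules over all multi-indices l with |l|_1 <= level."""
    grid = {}
    for l in product(range(level + 1), repeat=dim):
        if sum(l) > level:
            continue
        deltas = [delta_1d(li) for li in l]
        for combo in product(*[d.items() for d in deltas]):
            pt = tuple(x for x, _ in combo)
            w = 1.0
            for _, wi in combo:
                w *= wi
            grid[pt] = grid.get(pt, 0.0) + w
    return grid

grid = smolyak(level=2, dim=3)
approx = sum(w * (x1 + 2 * x2 + 3 * x3) for (x1, x2, x3), w in grid.items())
print(len(grid), approx)  # 25 grid points (vs 125 for the full tensor grid); 3.0
```

Because the one-dimensional sets are nested, the union of the tensor grids collapses as in (B.8), and the sparse grid needs far fewer points than the full tensor product while remaining exact for this linear integrand.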

3.3. Computed Uncertainty and Sensitivity. The uncertainty of the multiplication factor k∞, computed in terms of the variance D, is given in Table 3.

Table 2: Covariance matrix [21]. Values in bold correspond to Case A, values in bold and non-italic correspond to Case B, and the full covariance matrix corresponds to Case C.

Table 3: Estimated uncertainty of the infinite multiplication factor in terms of variance and standard deviation (given in parentheses).

Table 4: Estimated sensitivities of the infinite multiplication factor to different cross-sections (Case A) or their subsets (Case B).

Table 5: Maximal error of the Monte Carlo quadratures.

Table 6: Maximal error of the sparse grid quadrature.