The OECD/NEA Uncertainty Analysis in Modeling (UAM) expert group organized and launched the UAM benchmark. Its main objective is to perform uncertainty analysis in light water reactor (LWR) predictions at all modeling stages. In this paper, multigroup microscopic cross-section uncertainties are propagated through the DRAGON (version 4.05) lattice code in order to perform uncertainty analysis on
The significant increase in the capacity of new computational technology has made it possible to switch to a newer generation of complex codes, which are capable of representing in detail the feedback between core thermal-hydraulics and neutron kinetics. The coupling of advanced, best-estimate (BE) models is recognized as an efficient way of addressing the multidisciplinary nature of reactor accidents with complex interfaces between disciplines. However, code predictions are uncertain due to several sources of uncertainty, such as code model uncertainties as well as uncertainties in plant, material, and fuel parameters. Therefore, it is necessary to investigate the uncertainty of the results if useful conclusions are to be obtained from BE codes.
In the current procedure for light water reactor analysis, during the first stage of the neutronic calculations, the so-called lattice code is used to calculate the neutron flux distribution over a specified region of the reactor lattice by deterministically solving the transport equation. Lattice calculations use nuclear libraries as basic input data, describing the properties of nuclei and the fundamental physical relationships governing their interactions (e.g., cross-sections, half-lives, decay modes and decay radiation properties,
In the major nuclear data libraries (NDLs) created around the world, the evaluation of nuclear data uncertainty is included as data covariance matrices. The covariance data files provide the estimated variance of the individual data as well as any correlation that may exist. The uncertainty evaluations are developed using information from experimental cross-section data, integral data (critical assemblies), and nuclear models and theory. The covariance is given with respect to pointwise cross-section data and/or with respect to resonance parameters. Thus, if such uncertainties are to be propagated through deterministic lattice calculations, a processing method/code must be used to convert the energy-dependent covariance information into a multigroup format. For example, the ERRORJ module of NJOY99 or the PUFF-IV code is able to process the covariance of cross-sections, including resonance parameters, and to generate any desired multigroup correlation matrix.
Among the different approaches to performing uncertainty analysis, the one based on statistical techniques begins with the treatment of the uncertain code input parameters as random variables. Thereafter, values of these parameters are selected according to a random or quasi-random sampling strategy and then propagated through the code in order to assess the output uncertainty of the corresponding calculations. This framework has been widely accepted by many scientific disciplines not only because of its solid statistical foundations, but also because it is affordable in practice and relatively easy to implement thanks to the tremendous advances in computing capabilities. In this paper, the microscopic cross-sections of certain isotopes of various elements, belonging to the 172-group DRAGLIB library format, are considered as normal random variables. Two different DRAGLIBs are created, one based on JENDL-4 and the other on ENDF/B-VII.1 data, because a large number of isotopic covariance matrices have been compiled for these two major NDLs [
The sampling strategy preferred for the current study is quasi-random Latin Hypercube Sampling (LHS). This technique allows much better coverage of the input uncertainties than simple random sampling (SRS) because it densely stratifies across the range of each input probability distribution. In fact, LHS was created in the field of safety analysis of nuclear reactors [
In the next sections, the multigroup microscopic cross-section uncertainties computed with ERRORJ are shown for some important nuclides. Thereafter, a deeper review of how to perform a statistical uncertainty analysis is presented, with emphasis on a methodology developed to properly sample the scattering kernel and the fission spectrum. This allows a correct uncertainty propagation through the lattice code, since the neutron balance is preserved in the transport equation. Finally, results of the uncertainty analyses are shown for the test case and discussed.
The uncertainty information in the major NDLs is included in the so-called “covariance files” within the ENDF-6 formalism. The following covariance files are defined:
data covariances obtained from parameter covariances and sensitivities (MF30),
data covariances for number of neutrons per fission (MF31),
data covariances for resonance parameters (MF32),
data covariances for reaction crosssections (MF33),
data covariances for angular distributions (MF34),
data covariances for energy distributions (MF35),
data covariances for radionuclide production yields (MF39),
data covariances for radionuclide production crosssections (MF40).
To propagate nuclear data uncertainties in reactor lattice calculations, it is necessary to begin by converting the energy-dependent covariance information in ENDF format into multigroup form. This task can be performed conveniently with the latest updates of NJOY99 by means of the ERRORJ module. In particular, ERRORJ is able to process the covariance data of the Reich-Moore resolved resonance parameters, the unresolved resonance parameters, the
In the presence of narrow resonances, GROUPR handles self-shielding through the use of the Bondarenko model [
The most important input parameters to ERRORJ are the smooth weighting function
In this section, results of the ERRORJ module are shown from Figures
Covariance plot for
JENDL-3.3
ENDF/B-VII.1
Covariance plot for
JENDL-4
ENDF/B-VII.1
Covariance plot for
JENDL-4
ENDF/B-VII.1
Covariance plot for
Covariance plot for
Covariance plot for
Covariance plot for
Each of the following figures contains three main plots. The plot on the right corresponds to the value of a certain reaction cross-section, while the plot at the top corresponds to the relative variance (i.e., the variance of the cross-section divided by the actual value of the cross-section at a certain energy group). These two plots are presented in multigroup format as a function of energy (eV). Finally, the plot at the center represents the correlation that exists among the 172 energy groups for that type of reaction.
From the isotopic composition of the TMI-1 exercise,
As seen in the previous figures, for each cross-section of a given nuclide, the variability of the interaction probability at a certain energy group is related to the interaction probabilities at other energy groups, since the same measuring equipment was used when determining such probabilities. This correlation can be studied through the self-reaction covariance matrix. In the same way, the variability of the interaction probability at a certain energy group for one type of reaction is also related to the interaction probability of a second type of reaction at the same energy group, for the same reason as above. This correlation can be studied through the multi-reaction covariance matrix.
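The correlation matrices visualized in the plots above are obtained from the self- or multi-reaction covariance matrices by normalizing each entry with the group standard deviations. A minimal sketch of that normalization (the function name is illustrative):

```python
from math import sqrt

def correlation_from_covariance(cov):
    """Convert a (multigroup) covariance matrix into a correlation
    matrix: corr[i][j] = cov[i][j] / (sigma_i * sigma_j), where
    sigma_g is the square root of the diagonal entry of group g."""
    n = len(cov)
    sigma = [sqrt(cov[g][g]) for g in range(n)]
    return [[cov[i][j] / (sigma[i] * sigma[j]) for j in range(n)]
            for i in range(n)]
```

The same normalization applies whether the two indices run over energy groups of one reaction (self-reaction) or over groups of two different reactions (multi-reaction).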
It should be noted that in the modern JENDL libraries, covariances for
Relative standard deviation as a function of background cross-section at the resonance groups for the
Relative standard deviation as a function of background cross-section at the resonance groups for the
Regarding the ENDF/B-VII.1 resonant uncertainties, only a dependence of the absolute covariances was observed, leaving the relative terms intact for any temperature and/or dilution condition. This is an important issue because, as will be seen in Section
The first step of the standard statistical framework is to identify from the code inputs the most important uncertain parameters defined as
Scheme of statistical uncertainty analysis [
Once a sample of the code output has been taken, a statistical inference of the output population parameters is performed. In recent years, it has become common in the field of nuclear reactor safety to use the theory of nonparametric tolerance limits for the assessment of code output uncertainty. This approach, proposed by Gesellschaft für Anlagen- und Reaktorsicherheit (GRS) [
For example, if the 5th and 95th percentiles of the population are to be inferred with 95% confidence, a sample size of 93 elements is required. It should be noted that this analysis is based solely on the number of samples and applies to any kind of PDF the output may follow. Also, since the input space is only used as an indirect way to sample the output space, the use of nonparametric tolerance limits is independent of the number of uncertain input parameters. When the code output comprises several variables that depend on each other, the uncertainty assessment should be based on the theory of multivariate tolerance limits. Wald [
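The 93-element figure quoted above is commonly derived from the first-order, two-sided Wilks formula, in which the sample minimum and maximum bound a tolerance interval of 95% content at 95% confidence. A minimal search for the smallest such sample size (the function name is illustrative):

```python
def wilks_min_samples(content: float, confidence: float) -> int:
    """Smallest sample size n for which the sample minimum and maximum
    form a two-sided nonparametric tolerance interval covering at
    least `content` of the population with probability `confidence`
    (first-order Wilks formula)."""
    n = 2
    while True:
        # probability that [min, max] of n samples covers `content`
        achieved = 1.0 - content**n - n * (1.0 - content) * content**(n - 1)
        if achieved >= confidence:
            return n
        n += 1
```

With `content = 0.95` and `confidence = 0.95` the search returns the classical GRS sample size of 93; the result depends only on these two probabilities, never on the number of uncertain inputs.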
Other authors have derived the minimum sample size for multivariate nonparametric tolerance limits, such as the equation presented by Scheffé and Tukey [
The simplest sampling procedure for developing a mapping from input space to output space is through SRS. In this procedure, each sample element is generated independently from all other sample elements; however, there is no assurance that a sample element will be generated from any particular subset of the input space. In particular, important subsets with low probabilities but high consequences are likely to be missed if the sample is not large enough [
LHS can be viewed as a compromise, since it is a procedure that incorporates many of the desirable features of random and stratified sampling. LHS is done according to the following scheme to generate a sample of size
Coverage of a probability space formed by a uniform and a normal distribution using LHS, for a sample size of 10 elements.
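The LHS scheme described above can be sketched for independent standard-normal variables as follows (a sketch using only the standard library; names are illustrative):

```python
import random
from statistics import NormalDist

def lhs_standard_normal(n_samples: int, n_vars: int, seed: int = 42):
    """Latin Hypercube Sample for independent standard-normal inputs:
    the [0, 1] probability range of each variable is split into
    n_samples equiprobable strata, one uniform draw is taken inside
    each stratum, the strata are shuffled independently per variable,
    and the probabilities are mapped through the normal quantile."""
    rng = random.Random(seed)
    sample = []
    for _ in range(n_vars):
        # exactly one point per stratum guarantees dense coverage
        probs = [(k + rng.random()) / n_samples for k in range(n_samples)]
        rng.shuffle(probs)  # random pairing between the variables
        sample.append([NormalDist().inv_cdf(p) for p in probs])
    return sample  # n_vars lists of n_samples values each

draws = lhs_standard_normal(10, 2)
```

Each variable's ten values land one per decile of its distribution, which is exactly the stratification that SRS cannot guarantee for small samples.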
In the field of computational experiments, the concept of tolerance limits applied to code uncertainty assessment remains valid even if the input space is sampled with LHS. This is because the theory does not assume any parametric distribution of the code output space and is founded only on the ranking of a statistically significant number of samples. Therefore, since the theory is independent of the dimensionality of the input space, it does not matter how the input space is sampled as long as the minimum sample size requirement is fulfilled. In other words, LHS is used to cover the input space better and thereby handle the code nonlinearities better, in order to infer more realistic output percentiles than the ones SRS might infer for the same sample size and the same level of confidence. For example, the use of LHS applied to the inference of code output tolerance limits in a nonparametric way can be found in [
Since uncertainty analysis in this work is performed on both
It is common that thermal upscattering is not present and thus,
The DRAGON code is the result of an effort made at
Advanced lattice codes essentially feature self-shielding models capable of representing distributed and mutual resonance shielding effects, leakage models with space-dependent isotropic or anisotropic streaming effects, availability of the method of characteristics, and burnup calculations with energy-resolved reaction rates. The advanced self-shielding models available in DRAGON version 4.05 are based on two main approaches: equivalence in dilution or subgroup models. State-of-the-art resonance self-shielding calculations with such models require dilution-dependent microscopic cross-sections for all resonant reactions, and for more than 10 specific dilutions. Ultra-fine multigroup cross-section data are also required in the resolved energy domain. Thus, the cross-section library energy structure should comprise at least 172 groups. Since these capabilities require information that is not currently available in, for example, the WIMS-formatted library, a nuclear data library production system was written by Hébert [
The management of a cross-section library requires capabilities to add, remove, or replace an isotope, and to reconfigure the burnup data without recomputing the complete library. For these purposes, DRAGR was developed by Hébert [
The DRAGON code solves the multigroup criticality equation at the pin-cell level using collision probability theory, and at the fuel-assembly level by means of the method of characteristics. In its integro-differential form, the zero-level transport-corrected multigroup equation is given by
The left-hand side of (
The scattering source can be expanded as
Let us analyze the
Since uncertainties are only given for the isotropic scattering reaction
In the nominal case of the transport-corrected version, a degree of linear anisotropy can be taken into account by modifying the diagonal of the scattering matrix as follows:
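A sketch of this diagonal modification, assuming the standard transport-correction convention in which the linearly anisotropic component (the group-average scattering cosine, mubar, times the total scattering cross-section of the group) is subtracted from both the total cross-section and the in-group scattering term (names and the group-wise convention are assumptions for illustration):

```python
def transport_correct(sigma_total, scat_matrix, mubar):
    """Transport-correction sketch: for each group g, subtract
    mubar[g] * Sigma_s,g (the linearly anisotropic component) from the
    total cross-section and from the diagonal, in-group element of the
    isotropic scattering matrix.  scat_matrix[g][h] is the scattering
    cross-section from group g to group h."""
    n = len(sigma_total)
    sigma_s = [sum(scat_matrix[g]) for g in range(n)]  # total scattering from g
    total_tr = [sigma_total[g] - mubar[g] * sigma_s[g] for g in range(n)]
    scat_tr = [row[:] for row in scat_matrix]          # copy, then correct diagonal
    for g in range(n):
        scat_tr[g][g] -= mubar[g] * sigma_s[g]
    return total_tr, scat_tr
```

Because the same quantity is removed from both sides of the balance, the net neutron balance of the transport equation is unchanged by the correction.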
As shown before in Section
If it is considered that any nondiagonal element of the scattering matrix is isotropic (i.e.,
Equation (
If a sample is to be drawn for the different spectrum groups, the perturbed spectrum must be carefully renormalized to unity. In the statistical uncertainty approach, this can be achieved by dividing each perturbed group term of the spectrum by the sum of all perturbed group terms. For example, for a certain sample, this can be illustrated as follows:
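The renormalization just described (each perturbed group term divided by the sum of all perturbed group terms) can be sketched as:

```python
def renormalize_spectrum(chi):
    """Rescale a perturbed multigroup fission spectrum so that its
    group values sum to one again, preserving the neutron balance in
    the transport equation."""
    total = sum(chi)
    return [x / total for x in chi]

# hypothetical perturbed, unnormalized spectrum for one sample
chi_norm = renormalize_spectrum([0.62, 0.30, 0.12])
```

Without this step, every sampled spectrum would inject or remove fictitious fission neutrons, biasing the propagated uncertainty.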
For our study, the multigroup microscopic cross-sections of certain isotopes are treated as random variables following a normal PDF. Therefore, for each cross-section of a given nuclide, the nominal cross-section value at each energy group corresponds to the mean value. Since the LHS methodology described in the previous section assumes that the different variables are independent, the Latin hypercube procedure developed by Iman and Conover [
A final total correlation matrix needs to be computed in order to evaluate all the individual self- and mutual-reaction correlation matrices. This corresponds to a square matrix of size 172 × (number of individual correlation matrices). Before starting the sampling procedure, the total correlation matrix should be positive definite. If not, the negative eigenvalues contained in the diagonal of the
For each nuclide, the procedure for correlated variables begins by taking an LHS sample based on the individual group variances, and assuming that the group crosssection values are independent, for example:
If the correlation matrix of
In the end,
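The correlated-sampling idea of this section can be sketched compactly in the spirit of the Iman and Conover procedure (not their exact algorithm: here the reference scores are plain correlated normals from a Cholesky factor of the target matrix, which therefore must be positive definite; names are illustrative):

```python
import numpy as np

def impose_correlation(independent_sample, target_corr, seed=0):
    """Rearrange each column of an (n x k) independent sample so that
    its rank-correlation structure approaches target_corr: generate
    correlated normal reference scores via a Cholesky factor and
    reorder each input column to match the ranks of the corresponding
    score column.  The marginal values are left untouched."""
    x = np.asarray(independent_sample, dtype=float)
    n, k = x.shape
    chol = np.linalg.cholesky(target_corr)         # needs positive definiteness
    rng = np.random.default_rng(seed)
    scores = rng.standard_normal((n, k)) @ chol.T  # correlated reference scores
    out = np.empty_like(x)
    for j in range(k):
        ranks = np.argsort(np.argsort(scores[:, j]))  # rank of each score
        out[:, j] = np.sort(x[:, j])[ranks]           # reorder column to those ranks
    return out
```

Because only the ordering of each column changes, an LHS sample keeps its stratified marginals while acquiring the desired inter-group and inter-reaction correlations.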
Since ERRORJ can only evaluate one dilution at a time, a methodology was developed in this work to shield the cross-section covariances at all dilutions and temperatures. Because ERRORJ provides both the relative and the absolute covariance matrices, only one evaluation is necessary, at one temperature and one dilution (i.e., infinite dilution and 273 K). Afterwards, it is only required to multiply the cross-section values at each energy group by the relative multigroup covariance matrix. This scheme is exemplified in Figure
DRAGLIB statistical perturbation system.
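One way to read the reuse scheme above: a single set of relative perturbation factors (one per energy group, sampled from the relative covariances of the single infinite-dilution evaluation) is applied to the shielded cross-sections of every dilution/temperature condition. A sketch with a hypothetical data layout (a list of group cross-sections per condition; all names and values are illustrative):

```python
def apply_relative_perturbation(shielded_xs, rel_factors):
    """Apply one set of sampled relative perturbation factors (one per
    energy group) to the shielded cross-sections of every
    dilution/temperature condition, exploiting the observation that
    the relative covariances do not depend on those conditions."""
    return {condition: [sigma * f for sigma, f in zip(groups, rel_factors)]
            for condition, groups in shielded_xs.items()}

perturbed = apply_relative_perturbation(
    {"infinite_dilution": [2.0, 4.0], "sigma0_10b": [1.0, 3.0]},  # hypothetical barns
    [1.02, 0.97],  # e.g. 1 + (normal draw) * (relative standard deviation)
)
```

This avoids rerunning ERRORJ for each of the many dilution and temperature points of the library.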
For moderators and some other materials, only
The TMI-1 test case corresponds to a
Test case geometry.
Parameter  Value (mm) 

Pellet diameter  9.39 
Cladding thickness  0.673 
Rod outside diameter  10.992 
Rod pitch  14.427 
The nominal solution to this exercise is shown in Tables

Macroscopic cross-sections and diffusion coefficient computed for the fast group (nominal calculation).
k-infinity  Diffusion coefficient (cm)  NUSIGF (1/cm)  Absorption (1/cm)  Scattering (ingroup) (1/cm)  Scattering (outgroup) (1/cm)

JENDL-4  1.40851  1.45121  0.00672  0.00855  0.47355  0.01917
ENDF/B-VII.1  1.40373  1.45209  0.00677  0.00859  0.47514  0.01911
Macroscopic cross-sections and diffusion coefficient computed for the thermal group (nominal calculation).
Diffusion coefficient (cm)  NUSIGF (1/cm)  Absorption (1/cm)  Scattering (ingroup) (1/cm)  Scattering (outgroup) (1/cm)  

JENDL-4  0.47952  0.13048  0.07683  1.23170  0
ENDF/B-VII.1  0.48583  0.13086  0.07730  1.21337  0
The final sample of 450 elements is statistically sufficient to cover 95% of the output space formed by the different homogenized macroscopic cross-sections,
Then, uncertainty results for
Uncertainty analysis of
Max. value  Min. value  Mean  Std. dev.  Rel. std. dev. (%)
JENDL-4  1.47408  1.36896  1.40101  0.01532  1.094
ENDF/B-VII.1  1.41076  1.38967  1.40236  0.00250  0.178
Uncertainty analysis of homogenized macroscopic cross-sections (fast group, JENDL-4).
Parameter  Min. value (1/cm)  Max. value (1/cm)  Mean (1/cm) 


NUSIGF  0.00679  0.00719  0.00697 

Absorption  0.00861  0.00895  0.00878 

Scattering (ingroup)  0.46812  0.47424  0.47120 

Scattering (outgroup)  0.01826  0.01864  0.01851 

Uncertainty analysis of homogenized macroscopic cross-sections (thermal group, JENDL-4).
Parameter  Min. value (1/cm)  Max. value (1/cm)  Mean (1/cm) 


NUSIGF  0.13188  0.14736  0.13744 

Absorption  0.07938  0.08202  0.08074 

Scattering (ingroup)  0.99676  0.99820  0.99734 

Scattering (outgroup)  0  0  0
Uncertainty analysis of fast and thermal diffusion coefficients (JENDL-4).
Min. value (cm)  Max. value (cm)  Mean (cm)  Std. dev. (cm)
Fast diffusion coefficient  1.42150  1.49531  1.45501  0.01118 
Thermal diffusion coefficient  0.58332  0.58582  0.58472  0.00042 
Uncertainty analysis of homogenized macroscopic cross-sections (fast group, ENDF/B-VII.1).
Parameter  Min. value (1/cm)  Max. value (1/cm)  Mean (1/cm) 


NUSIGF  0.00689  0.00716  0.006974 

Absorption  0.00868  0.00888  0.00879 

Scattering (ingroup)  0.46901  0.47385  0.47127 

Scattering (outgroup)  0.01847  0.01859  0.01852 

Uncertainty analysis of homogenized macroscopic cross-sections (thermal group, ENDF/B-VII.1).
Parameter  Min. value (1/cm)  Max. value (1/cm)  Mean (1/cm) 


NUSIGF  0.136721  0.13750  0.13710 

Absorption  0.08014  0.08103  0.08077 

Scattering (ingroup)  0.99708  0.99742  0.99732 

Scattering (outgroup)  0  0  0
Uncertainty analysis of fast and thermal diffusion coefficients (ENDF/B-VII.1).
Min. value (cm)  Max. value (cm)  Mean (cm)  Std. dev. (cm)
Fast diffusion coefficient  1.42330  1.48890  1.45488  0.01123 
Thermal diffusion coefficient  0.58439  0.58470  0.58474  0.00005 
The correlation matrices among the different output parameters are shown, respectively, in Figures
Correlation matrix of the output parameters based on JENDL-4 data.
Correlation matrix of the output parameters based on ENDF/B-VII.1 data.
As can be appreciated from the previous study, the computed uncertainties in the output parameters are much higher for the JENDL-4 case than for the ENDF/B-VII.1 case. For example, the standard deviation of the JENDL-4 NuSigmaFission cross-section is 78 times larger than its ENDF/B-VII.1 counterpart. In a previous sensitivity study applied to a
172-group relative variances computed with ERRORJ for the
It can be seen that, up to 1000 eV, the uncertainties based on JENDL-4 data are much larger than those based on ENDF/B-VII.1. This creates a large sampling variability of the
100 LHS samples taken from the
A big difference is observed in the spread of the samples for thermal energies and almost up to the last resonant energies. The large relative variances in JENDL-4 for the thermal groups (~7%), compared to the small relative variances in ENDF/B-VII.1 (~0.5%), together with variance differences of up to a factor of 10 at the resonances, cause this huge sampling variability between the two libraries.
Since the uncertainties included in JENDL-4 for
In this paper, a statistical uncertainty analysis was performed on lattice calculations using the DRAGON v4.05 code. The input uncertainty space corresponded to the microscopic cross-sections of the different nuclides of the DRAGLIB library. This work is one of the first attempts to process, in multigroup format, uncertainties from modern nuclear libraries such as JENDL-4 and ENDF/B-VII.1 so that they can be applied to the uncertainty assessment of lattice calculations. Thus, confidence in the results of advanced lattice codes can be obtained through the use of a statistical uncertainty analysis.
By comparing the obtained
The results obtained in this work are important because they demonstrate that it is feasible to statistically perturb and propagate basic uncertainty data through lattice calculations with current computational technology. This is also the first step toward developing an integral statistical uncertainty methodology for nuclear reactor predictions using advanced models, since the lattice code outputs are used as inputs to the core simulators. Further studies may include a global and nonparametric sensitivity analysis, where the correlation between the different microscopic and macroscopic cross-sections can be assessed. Also, geometrical uncertainties as well as state-variable uncertainties can be included.
Uncertainty analysis applied to lattice calculations is very important for trusting LWR core designs, because the computation of the homogenized and energy-collapsed macroscopic cross-sections is the first step in the modeling of LWRs. Therefore, confidence in the subsequent calculation of the effective neutron multiplication factor is directly bounded by the computed uncertainties of the lattice code output parameters.
Fast fission factor
Resonance escape probability
Thermal utilization factor
Thermal fission factor
Removal macroscopic cross-section (1/cm)
Fast downscattering macroscopic cross-section (1/cm)
Thermal upscattering macroscopic cross-section (1/cm)
Fast absorption macroscopic cross-section (1/cm)
Thermal absorption macroscopic cross-section (1/cm)
Fast Nu-sigma-fission macroscopic cross-section (1/cm)
Thermal Nu-sigma-fission macroscopic cross-section (1/cm)
Scalar neutron flux at the energy group
Transport-corrected total macroscopic cross-section at the energy group
Transport-corrected scattering macroscopic cross-section at the energy group
Capture microscopic cross-section, from the
Fission microscopic cross-section, from the
Nubar at the energy group
Mubar at the energy group
Normalized fission spectrum at the energy group