How can spatially distributed uncertainties be included in FEA and in parameter estimation for model updating?

Owing to the manufacturing process, composite materials and others show considerable uncertainties in wall thickness, fluctuations in material properties, and other parameters, which are spatially distributed over the structure. These uncertainties have a random character and can therefore not be reduced by any kind of mesh refinement within the FE model. What is needed is a suitable statistical approach that describes the parameter variation, captures the statistics of the process, and accounts for the correlation between the parameters spatially distributed over the structure. The paper presents a solution for a spatially correlated simulation of parameter distributions, owing to the manufacturing process or other causes, that is suitable for inclusion in the FEA. The parameter estimation methods used in updating algorithms for FE models depend on the choice of weighting matrices that have to be determined a priori. The weighting matrices are in most cases assumed by engineering judgement of the analyst carrying out the updating procedure, based on his assessment of the uncertainty of the chosen parameters and of the measured and calculated results. With the statistical description of the spatial distribution at hand, we can calculate a parameter weighting matrix for a Bayesian estimator. Furthermore, it can be shown in principle that with model updating it is possible to improve the probabilistic parameter distribution itself.


Introduction
If a structure has been modelled by a FE model, we can conduct a Finite Element Analysis (FEA). Ordinarily, the accuracy of any vibration prediction or measurement depends almost entirely on the skill of the practitioner and the quality of the model and the parameters used. Our findings suggest that in a number of cases we have to take into account spatially statistically varying coefficients in the model, which cannot be cured by local or global mesh enrichment by means of refinement. To this end we are looking for a statistical description of FE model parameters, so as to generate realisations with characteristics that fit the fluctuations owing to the manufacturing process, e.g. These descriptions can then be employed within the FEA in Monte Carlo simulations.

After FE discretisation, the linear dynamic vibration problem is represented by the following system of differential equations,

Mẍ(t) + Cẋ(t) + Kx(t) = f(t),    (1)

which leads to a matrix eigenvalue problem and the orthogonality relations of the natural modes, with Λ = diag(λ_ei) containing the eigenvalues (squares of the circular frequencies) and the corresponding eigenmodes.

In brief, well-known problems in FE modelling can be summed up as errors from:
- model projection (3D, shell, plate, beam, etc.)
- simplifications, omissions
- uncertainties (manufacturing, material, loading, constraints, multi-physics, etc.)
- discretisation, truncation of the modal space
- model reduction
- numerics, systematic errors of the methods
- subjective errors (model preparation, application of methods, etc.)

Problems with the measured data arise, for instance, from:
- noise corruption of data
- subjective errors, equipment errors and limitations
- product and measurement variability
- matching of data
To exemplify the solution for random spatial parameter distributions presented in this paper, let us consider composite plates. These lightweight materials with high strength and stiffness are formed by reinforcing fibres in a polymer matrix. However, manufacturing leads to uncertainties that cannot be reduced by sheer mesh refinement or more detailed modelling. Our carbon fibre-reinforced plates consist of nine layers (carbon fibre fabrics, polyester insulation fleeces, and the resin) and show, owing to the manufacturing process, remarkable variations in the thickness of the layers (see, e.g., Fig. 1) and thus in the total thickness as well (see Fig. 2), which influence the dynamics of such thin-walled structures. In this paper we take the distribution of the total wall thickness as an example of randomly spatially distributed parameters. This example of the carbon fibre-reinforced plates is not supposed to represent an extensive scientific evaluation of the method; its purpose is illustration. The first step in our investigation is to determine a variogram function from measurements at a number of sample points in a representative section of the structure. From that, we are able to calculate a correlated statistical parameter distribution that fits the real distribution of the whole structure better. Such a probabilistic description of the parameter uncertainties, if we can obtain it, can be used in simulations to determine realisations of parameter distributions closer to the real uncertainties of the manufacturing process, or to calculate the weighting matrices for parameter estimators in model-updating algorithms, for example. Furthermore, one objective of our research is to improve the probabilistic parameter description itself within the updating procedure by utilising measurement and FE results.
We then obtain not only parameter changes from the model-updating procedure, but consequently also improved statistics of the uncertainties.

Stochastic modelling
For a statistical description of a parameter θ varying over the structure, say the thickness of the thin-walled structure, θ is regarded as a random number with the probability density f(θ). To begin with, we assume that the parameters at different locations (with the co-ordinates x, y in the mid-surface D of the undeformed structure, for the plate thickness) are statistically independent. Under this assumption an empirical distribution function for the parameters can be determined from the sample {θ_1, θ_2, ..., θ_n}, sorted in ascending order θ_1 ≤ θ_2 ≤ ... ≤ θ_n, namely

F̂_n(θ) = (number of sample values θ_i ≤ θ) / n.

Figure 2 shows the measured thickness distribution and Fig. 3 depicts the empirical distribution function for two carbon fibre composite plates, evincing that the distribution of the thickness can be regarded as Gaussian (the dotted line indicates the standard normal distribution), which is defined by the probability density function

f(θ) = 1/(√(2π) ŝ) exp(−(θ − θ̂)²/(2ŝ²)),

where θ̂ is the empirical mean and ŝ² the empirical variance, respectively.
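The empirical distribution function and the empirical moments are straightforward to compute. The sketch below (plain Python; the thickness sample in millimetres is hypothetical and merely stands in for the measured data of Figs 2 and 3) evaluates F̂_n together with the empirical mean θ̂ and variance ŝ².

```python
def empirical_cdf(sample):
    """Empirical distribution function F_n(t) = (#{theta_i <= t}) / n."""
    ordered = sorted(sample)
    n = len(ordered)
    def F(t):
        return sum(1 for x in ordered if x <= t) / n
    return F

# Hypothetical thickness sample (mm); real data would come from the plates.
sample = [2.81, 2.90, 2.95, 3.00, 3.02, 3.05, 3.10, 3.17]
F = empirical_cdf(sample)

mean = sum(sample) / len(sample)                                   # theta-hat
var = sum((x - mean) ** 2 for x in sample) / (len(sample) - 1)     # s-hat^2
```

Plotting F̂_n against the normal distribution function with these moments gives the kind of comparison shown in Fig. 3.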
Corresponding to the empirically determined normal distribution, a parameter distribution in the FE model can be generated by means of pseudo-random numbers for the knots, or assigned element-wise. If we do so for the two sample plates (with the empirical data determined above), we obtain an uncorrelated distribution (Fig. 4) that does not reflect the spatially correlated character of the real thickness errors shown in Fig. 2. Therefore, this is not a suitable way to describe random spatial parameter distributions. For that reason, a procedure and algorithm for the calculation of a spatially correlated probabilistic distribution has been developed. The work has benefited much from the excellent book by Cressie [4]. Provided the description of the spatial distribution of the random value θ depends on the (deterministic) co-ordinates of the domain D (for the plates, the undeformed mid-surface), θ can be regarded as a stochastic process {θ(v) : v ∈ D}, where v (= (x, y)ᵀ for the plates) is the vector of the co-ordinates. Supposing that {θ(v) : v ∈ D} is a stationary Gaussian process, we can introduce for the vector θ(v) = (θ(v_1), ..., θ(v_n))ᵀ the notation

θ(v) ~ N(θ_0 1_n, Σ_0),    (6)

where θ_0 is the mean (expectation), 1_n a unit vector, and Σ_0 the co-variance matrix with the elements Cov(θ(v_i), θ(v_j)) (i, j = 1, ..., n), respectively. The full description of Eq. (6) is given by the n-dimensional normal density

f(θ(v)) = (2π)^(−n/2) (det Σ_0)^(−1/2) exp(−½ (θ(v) − θ_0 1_n)ᵀ Σ_0^(−1) (θ(v) − θ_0 1_n)).    (7)

For a stationary process {θ(v) : v ∈ D} of second order, as the Gaussian process represents, the co-variogram and the semi-variogram

C(h) = Cov(θ(v_i), θ(v_j)),   γ(h) = ½ Var(θ(v_i) − θ(v_j)) = C(0) − C(h)

can be derived (see, e.g., [4]), where h = ||v_i − v_j|| is the Euclidean norm of the distance; if C(h) and γ(h) depend only on h, they are called isotropic. The implication of this assumption is that the co-variance matrix Σ_0 can be parameterised and determined following a parametric concept. An admissible parametric assumption has to guarantee that the co-variance matrix based on it is positive definite. In accordance with [4], we follow a procedure of valid parameterisation for the co-variance matrix.
As possible parametric variogram assumptions, several models have been proposed, for example the spherical, exponential, or Gaussian models (Eqs (14a) to (14f)). The distribution parameters of the assumed Gaussian process for the spatial uncertainties (the mean θ_0 and the parameters η of the corresponding assumption for the co-variance matrix Σ_0, as well as the assumption itself) have to be estimated on the basis of measurements gathered in a representative area. On this point, we are still in a trial-and-error state. To the best of the authors' knowledge, there is not much theory available for mechanical structures that assists in the selection of suitable areas, the necessary size, and the data density. Unfortunately, to make it more difficult, the process is influenced by measurement errors and other parameter uncertainties as well. We start from an empirical (non-parametric) estimation of the semi-variogram with data taken from the representative area, obtained by

γ̂(h) = 1/(2|N(h)|) Σ_{(v_i, v_j) ∈ N(h)} (θ(v_i) − θ(v_j))²,    (15)

where the notation |N(h)| in Eq. (15) means the number of distinct pairs (v_i, v_j) at distance h. With the help of the empirical variogram (and its graphical representation), it is possible to select a fitting type for the parametric variogram assumption (e.g., from Eqs (14a) to (14f) or others), but this can prove troublesome in some cases and is not plain sailing. If the number of data points included in Eq. (15) is not small, it can take some computation time to obtain the estimates. The parameters in η of the variogram assumption can then be determined, for instance, with the help of a least-squares estimator, minimising

F(η) = Σ_k (γ̂(h_k) − γ(h_k; η))².

The computation of the parameters in η is carried out by iteratively minimising the non-linear function F(η) with regard to the parameters in η (e.g., by utilisation of a Gauss-Newton method).
The results of the variogram adaptation for two carbon-fibre sample plates (of the same size but different manufacturing time) are depicted in Fig. 5. The parameter C(0) needed for the parameterisation of the co-variance matrix can be computed from the empirical co-variogram estimator

Ĉ(h) = 1/|N(h)| Σ_{(v_i, v_j) ∈ N(h)} (θ(v_i) − θ̄)(θ(v_j) − θ̄),

with h set to zero. Once the parameters have been calculated and the co-variance matrix has been computed in conjunction with the chosen parametric assumption, the mean θ_0 of the spatial distribution can be estimated by a maximum likelihood method. For the Gaussian distributed parameters from Eq. (7), we can construct a negative logarithmic likelihood function

L(θ_0) = ½ ln det Σ_0 + ½ (θ(v) − θ_0 1_n)ᵀ Σ_0^(−1) (θ(v) − θ_0 1_n) + const,

where θ(v) = (θ(v_1), ..., θ(v_n))ᵀ denotes the vector of measured parameter values. The likelihood estimate θ̂ can be obtained by minimising L(θ_0) with regard to θ_0. The necessary condition for a local minimum at θ_0 = θ̂ requires that the first derivative vanishes; in the vicinity of θ̂ the function L(θ_0) has to be smooth. From Eq. (20) a likelihood estimate of the mean θ_0 is obtained as (see [4])

θ̂ = (1_nᵀ Σ_0^(−1) θ(v)) / (1_nᵀ Σ_0^(−1) 1_n).

Then, with the statistically correlated description of the varying structural parameters at hand, their distribution can be simulated in a more natural way. To this end we have to generate random vectors θ(v) = (θ(v_1), ..., θ(v_n))ᵀ as realisations of the n-dimensional normal distribution with the density function defined by Eq. (7). A simple method for generating the spatially correlated components with regard to the co-ordinates consists of a matrix decomposition of Σ_0. For the model assumptions in Eqs (8) to (13), the random value vector θ(v) has to satisfy

E(θ(v)) = θ_0 1_n,   Cov(θ(v)) = Σ_0.    (22)

The co-variogram function C(h_ij) is calculated from the parametric assumption obtained from the measurements in the representative area. The n × n co-variance matrix Σ_0 so determined has to meet the requirements of being symmetric and positive definite.
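The closed-form likelihood estimate of the mean amounts to a single weighted average and can be written in a few lines (NumPy sketch; `gls_mean` is our name for this helper, not a library function):

```python
import numpy as np

def gls_mean(theta, cov):
    """theta_hat = (1^T Sigma0^-1 theta) / (1^T Sigma0^-1 1),
    the likelihood estimate of the spatial mean (cf. Cressie [4])."""
    theta = np.asarray(theta, float)
    ones = np.ones(len(theta))
    w = np.linalg.solve(cov, ones)   # Sigma0^-1 * 1_n, without explicit inversion
    return float(w @ theta) / float(w @ ones)
```

For uncorrelated data (Σ_0 proportional to the identity), this reduces to the arithmetic mean; correlation down-weights clustered, redundant measurement points.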
The Cholesky decomposition of Σ_0 is given by

Σ_0 = L Lᵀ,

with the lower triangular matrix L. Despite the positive definite character of Σ_0, the Cholesky decomposition can cause some numerical trouble. To avoid this, a singular value (spectral) decomposition can be used instead. The singular value decomposition of Σ_0 may be written as

Σ_0 = U S Vᵀ,   S = diag(s_1, s_2, ..., s_n),

and since the matrix Σ_0 is symmetric, the orthogonal matrix U is equal to V, thus Σ_0 = U S Uᵀ. The simulation of the correlated normally distributed random value vector satisfying Eq. (22) can then be obtained by

θ(v) = θ_0 1_n + L ε   or   θ(v) = θ_0 1_n + U S^(1/2) ε,

where ε = (ε_1, ε_2, ..., ε_n)ᵀ is a generated vector of standardised independent normally distributed values (mean E(ε_i) = 0, variance E(ε_i²) = 1). With the procedure presented here, which is based on parameters measured in a representative part of the structure, it is possible, for our example of the thin-walled carbon fibre-reinforced plates, to simulate a more realistic thickness distribution, as shown in Fig. 6. Comparing the distributions depicted in Fig. 4 (uncorrelated) and Fig. 6 (correlated), the latter is apparently closer to the original distribution shown in Fig. 2.
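A minimal sketch of the simulation step follows (NumPy; the isotropic exponential covariance C(h) = c_0 e^(−h/ℓ) and the correlation length ℓ are illustrative assumptions, not the model actually fitted in Fig. 5):

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_field(coords, theta0, c0, corr_len):
    """Draw theta(v) = theta0 * 1_n + L * eps with Sigma0 = L L^T (Cholesky),
    for an assumed isotropic exponential covariance C(h) = c0 * exp(-h/corr_len)."""
    coords = np.asarray(coords, float)
    # Matrix of pairwise Euclidean distances h_ij = ||v_i - v_j||
    h = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
    cov = c0 * np.exp(-h / corr_len)
    L = np.linalg.cholesky(cov)          # fails if cov is not positive definite
    eps = rng.standard_normal(len(coords))
    return theta0 + L @ eps
```

If the Cholesky factorisation breaks down for a near-singular Σ_0, the spectral route applies instead: with `U, s, _ = np.linalg.svd(cov)` the draw becomes `theta0 + U @ (np.sqrt(s) * eps)`.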
Such distributions can be utilised in Monte Carlo and other statistical approaches for the computation of the dynamic behaviour of the structure within the FE model. The co-variance matrix Σ_0 is a by-product of this simulation process. Hence, the idea is at hand: why not use it in a Bayes estimator for model updating? Owing to the non-linear relation between the result data and the parameter data, iterative model-updating methods are employed to update the FE model in several iteration steps. The estimators are based on the minimisation of a cost function in each iteration. Typically, these parameter estimators depend on the choice of weighting matrices that have to be determined a priori. The weighting matrices are in most cases assumed by engineering judgement of the analyst conducting the updating procedure, based on his assessment of the uncertainty of the parameters chosen and of the measured and calculated results. It has been shown by several authors in the literature that the success of updating strongly depends on the choice of the weighting matrices, both in the numerics of the process and in the results. A more natural approach to cope with the uncertainties in parameters and results is a statistical one. In a Bayesian-like parameter estimation process, we start from an assumed distribution for the parameters and the measurement errors; usually a normal distribution is considered. If such a distribution is known, we can determine the weighting matrices as the inverses of the co-variance matrices. With those matrices, we can try to calculate parameter changes that should improve our FE model and, hopefully, bring it closer to reality.

Basics of model improvement
For any model improvement by so-called indirect identification, we need a sensible starting model (FE model) that is close enough to the dynamic behaviour of the real structure. Furthermore, we need model parameters that are suitable and representative for reducing the model error and sufficiently sensitive to produce numerical results. Next, we need the results from an FEA and an EMA to obtain the eigenfrequencies and eigenvectors to be employed as result values in the model improvement. With the objective of obtaining better model parameters, we utilise as parameter estimator the cost function

P(θ) = (z_ex − z(θ))ᵀ W_ex (z_ex − z(θ)) + (θ − θ^(0))ᵀ W_θ (θ − θ^(0)),    (28)

which is minimal in the linear case for the optimal model parameters θ^(opt) ∈ R^{mp,1}. The vector θ^(0) contains the start values of the chosen parameters, z_ex ∈ R^{m,1} is the vector of the measured results, and z(θ) ∈ R^{m,1} is the vector of results calculated using the parameters in θ, respectively. W_ex ∈ R^{m,m} is a positive definite weighting matrix for the differences between measured and calculated results. W_θ ∈ R^{mp,mp} is a weighting matrix as well, weighting the parameter differences with regard to the start values in the so-called regularisation term. In many publications it is stressed that the proper choice of the weighting matrices is crucial for the resulting parameters θ^(opt) and the numerical solution; our experience complies with that. Whereas for the measurement error some assumptions can be found to construct the matrix W_ex, for W_θ in most cases only knowledge and experience about the character of the parameter uncertainties can help. Some useful support, besides experience, can be drawn from a variety of methods for model assessment and error localisation. In a more deterministic setting, Eq. (28) is called the extended weighted least-squares method. The properties of Eq. (28), especially if some terms are neglected, are not covered in this paper; for that purpose we refer to [1-3].
Equation (28) can be expanded in a Taylor series, truncated after the second-order term, with the Hessian matrix H_{θ^(0)}. For the optimal parameters θ = θ^(opt), the first partial derivative of P(θ) with regard to θ becomes zero, so that we can write

∂P/∂θ |_{θ = θ^(opt)} = 0.    (30)

The relations between the parameters in θ and the results in z(θ) are in general non-linear. Hence, Eq. (30) has to be used iteratively. That means we start from θ^(0) and compute in each iteration step a minimum of the penalty function (28) with respect to the parameters in θ, which are supposed to converge towards θ^(opt) in several steps, θ^(0) → θ^(j) → θ^(opt). Thus, Eq. (30) gives a local optimum for each iteration step and, after linearisation z(θ) ≈ z(θ^(j)) + S(θ^(j)) (θ − θ^(j)), suggests the following iteration scheme for the updated parameters:

θ^(j+1) = θ^(j) + (Sᵀ(θ^(j)) W_ex S(θ^(j)) + W_θ)^(−1) [Sᵀ(θ^(j)) W_ex (z_ex − z(θ^(j))) − W_θ (θ^(j) − θ^(0))],    (32)

where S(θ^(j)) ∈ R^{m,mp} is the sensitivity matrix

S(θ^(j)) = ∂z/∂θ |_{θ = θ^(j)},    (33)

which needs a form of "controllability" for numerically stable behaviour (the rank of S(θ^(j)) must be at least equal to the number of parameters). For the termination of the iteration a number of criteria are available, which are not dealt with in this paper.

If we consider the parameters and measured results with their inherent statistics, the a priori distribution can in most cases be regarded as normal. In this case, the mean of the parameters is given by E(θ) = θ^(0), respectively E(Δθ) = 0, and the co-variance matrix is defined by Cov(θ) = Cov(Δθ) = E(Δθ_i Δθ_k) = Σ_0 (in short notation, θ ~ N(θ^(0), Σ_0)). For the output values, the mean is then given by E(Δz) = E(ε) = 0 (with the measurement error ε), and the co-variance matrix can be written as Cov(Δz) = E(ε_i ε_k) = V. For θ^(opt) the calculated output values become optimal as well, z(θ^(opt)) = z^(opt), so that z_ex = z^(opt) + ε.
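One step of the iteration scheme (32) might be sketched as follows (NumPy; `z_fun` and `S_fun` are hypothetical callables returning z(θ) and the sensitivity matrix S(θ), which in practice come from the FE model):

```python
import numpy as np

def update_step(theta_j, theta_0, z_ex, z_fun, S_fun, W_ex, W_th):
    """One Gauss-Newton step of the weighted least-squares updating scheme,
    under the linearisation z(theta) ~ z(theta_j) + S * (theta - theta_j)."""
    S = S_fun(theta_j)
    r = z_ex - z_fun(theta_j)                 # residual of the results
    A = S.T @ W_ex @ S + W_th                 # regularised normal-equation matrix
    b = S.T @ W_ex @ r - W_th @ (theta_j - theta_0)
    return theta_j + np.linalg.solve(A, b)
```

For a linear model z(θ) = Hθ the scheme converges in a single step; the non-linear eigenvalue problem requires recomputing S(θ^(j)) in every iteration.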
Furthermore, it will be assumed that there is no correlation between the measurement error and the estimated parameter changes, i.e. E(Δθ_i ε_k) = 0 (in reality they are not completely independent, since the measurements are used in the updating process).
The conditional density function of z_ex for given θ is f(z_ex|θ) = N(z(θ), V), and with the a priori density function f(θ) = N(θ^(0), Σ_0), the so-called a posteriori density function can be expressed by

f(θ|z_ex) = f(z_ex|θ) f(θ) / ∫ f(z_ex|θ) f(θ) dθ.    (34a)

The denominator in Eq. (34a) (the integral) is the marginal density function of z_ex, which is constant for a given realisation of z_ex, so that we can write for the a posteriori density function

f(θ|z_ex) = C f(z_ex|θ) f(θ),    (34b)

where C is a scaling constant. Following a maximum likelihood approach, it is straightforward to see from Eq. (34) that the maximum of the density function is obtained by vectors θ that minimise

J(θ) = (z_ex − z(θ))ᵀ V^(−1) (z_ex − z(θ)) + (θ − θ^(0))ᵀ Σ_0^(−1) (θ − θ^(0)).    (35)

Equation (35) is analogous to Eq. (28) if the weighting matrices there are replaced by the inverses of the corresponding co-variance matrices. Such an estimator is then called a Bayesian estimator. For the iterative employment of the Bayes estimator, Eqs (29) to (32) are valid as well for the determination of new parameter results. The inversion of the co-variance matrix can be done more efficiently for larger matrices with the help of recursive formulas. In both procedures, there is no guarantee that the iteration will not get stuck in a local minimum. The best way to avoid this consists in a good parameter choice, error localisation, and model checking before the actual model updating is started.
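The cost function (35) minimised by the Bayesian estimator is simple to evaluate once V and Σ_0 are known (NumPy sketch, up to additive constants; `z_fun` is again a hypothetical callable for z(θ)):

```python
import numpy as np

def neg_log_posterior(theta, theta_0, z_ex, z_fun, V, Sigma0):
    """J(theta) = (z_ex - z(theta))^T V^-1 (z_ex - z(theta))
                + (theta - theta_0)^T Sigma0^-1 (theta - theta_0),
    i.e. the negative log a-posteriori density up to constants (Eq. (35))."""
    dz = z_ex - z_fun(theta)
    dth = theta - theta_0
    # solve() instead of explicit inversion of the co-variance matrices
    return float(dz @ np.linalg.solve(V, dz) + dth @ np.linalg.solve(Sigma0, dth))
```

Monitoring this quantity over the iterations of Eq. (32) gives a direct convergence check for the Bayes estimator.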
In the vector z(θ) of the output quantities, physically different quantities could be deployed, even within one updating run, to provide richer information about the structure itself. Typically, one does without this, because the scaling of the various quantities can cause considerable problems in the sensitivity matrix and leads to numerical difficulties, which rapidly outweigh the expected benefits. In many applications, the vector z(θ) is therefore confined to the natural frequencies and eigenvectors as output quantities. However, the necessary differentiation of the eigenvectors also leads to considerable additional computational effort, which is often out of all proportion to the possible improvement of the results. With the eigenfrequencies as output quantities, the sensitivity matrix of Eq. (33) can be expressed by the derivatives of the eigenvalues with respect to the parameters.

The carbon fibre-reinforced square plates we have taken as examples are modelled in an FE model with special semiloof shell elements referred to the mid-surface. With the manufactured plates, an experimental modal analysis has been carried out. Table 1 contains the measured eigenfrequencies, the eigenfrequencies calculated with the co-ordinate-correlated thickness distribution (for a single realisation and averaged over 50 realisations), and one example of model updating where the calculated Σ_0 has been applied. The MAC values range from 0.95 to 0.75. It can be noticed that the results for a single realisation of a thickness distribution are quite close to the measured values (this is due to the small changes in the thickness and does not hold true in the general case).

Outlook and conclusions
With the procedure proposed, a better statistical description of spatially distributed uncertainties can be employed in simulations and model-updating algorithms. The distribution calculated as described can be based on a representative area of measured parameters of a more complex structure. However, this implies tackling the combined influence of other uncertainties as well.
Other difficulties can arise in obtaining acceptable measurements in representative areas, and in the selection as well as the size of the areas themselves. Once the distribution has been gained from the representative area, we can set up a distribution assumption for the structure as a whole. With the methods and procedures presented, improved values for the spatially correlated parameter distribution θ̂(v) = (θ̂(v_1), ..., θ̂(v_n))ᵀ and for the distribution parameters θ_0 and η can be computed iteratively (Eqs (39) and (40)). The combined algorithm reads:

0. Start with θ_0, η_0, θ^(0); k := 0.
1. Compute a new distribution for θ^(k) in accordance with the method presented in Section 2, using the distribution parameters θ_k and η_k.
2. Compute S(θ^(k)) (Eq. (33)) and the quantities X̄_k and X_k in accordance with Eq. (37).
3. Iterate new estimates for θ^(k+1) using Eq. (32); if convergence is obtained, go to 4.
4. Start with θ_l = θ_k, η_l = η_k; l := 0.
5. Iterate new distribution parameters θ_{l+1} and η_{l+1} using Eq. (40); if Eq. (39) converges to a minimum, then set k := k + 1, θ_{k+1} = θ_{l+1}, η_{k+1} = η_{l+1}, and go to 1.

The whole problem is beset with difficulties both in the algorithm and in the numerics. Accordingly, it remains a fruitful area for research; within the framework of this paper the topic cannot be tackled in any detail.
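The control flow of the algorithm above can be expressed as a skeleton (plain Python; all three callables are placeholders, i.e. assumptions standing in for the correlated simulation of Section 2, the parameter update of Eq. (32), and the distribution re-estimation of Eqs (39) and (40)):

```python
def combined_updating(mean0, eta0, simulate, update_model, refit_distribution,
                      max_outer=10):
    """Skeleton of the combined scheme: alternate (a) model updating on a
    correlated parameter realisation and (b) re-estimation of the
    distribution parameters, until the re-fit reports convergence."""
    mean, eta = mean0, eta0
    for _ in range(max_outer):
        field = simulate(mean, eta)           # step 1: correlated realisation
        field = update_model(field)           # steps 2-3: iterate Eq. (32)
        mean, eta, converged = refit_distribution(field, mean, eta)  # steps 4-5
        if converged:
            break
    return mean, eta
```

The outer loop thus returns not only updated parameters but also improved distribution statistics, in line with the objective stated in the introduction.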