The posterior density of structural parameters conditioned on the measurements is obtained by a differential evolution adaptive Metropolis (DREAM) algorithm. The surface of the formal log-likelihood measure is studied under measurement-error uncertainty to illustrate the problem of equifinality. To overcome this problem, the first two derivatives of the log-likelihood measure are used to formulate a new informal likelihood measure that improves the accuracy of the estimator. The proposed measure also reduces the standard deviation (uncertainty range) of the posterior samples. The benefit of the proposed approach is demonstrated by simulations identifying structural parameters from limited output data and noise-polluted measurements.
1. Introduction
Recent years have witnessed growing interest in Bayesian estimation for structural parametric systems as a means of quantifying inevitable uncertainties, such as measurement error and structural model error, as reviewed by Simoen et al. [1]. In particular, Beck and Au [2] used Laplace's method of asymptotic approximation to obtain a posterior PDF over a small-dimensional parameter space. To solve higher-dimensional problems, Muto and Beck [3] developed an adaptive Markov chain Monte Carlo (MCMC) simulation for Bayesian model updating. Gibbs sampling and transitional Markov chain Monte Carlo (TMCMC) were used by Ching and Chen [4] to obtain the posterior PDF of parameters. Cheung and Beck [5] used a hybrid Monte Carlo method, known as the Hamiltonian Markov chain, to solve higher-dimensional model updating problems. Huhtala and Bossuyt [6] explored a Bayesian inference framework to solve the inverse problem of locating structural damage. An et al. [7] proposed statistical model parameter estimation using Bayesian inference for the case where parameters are correlated and the observed data are noisy. Green [8] used a novel MCMC algorithm, data annealing, which is similar to simulated annealing, for the Bayesian identification of a nonlinear dynamic system.
The difficulty of Bayesian estimation lies in the efficiency with which the posterior samples in the Markov chain converge to the acceptable model set. Moreover, because of noise-corrupted measurements, the prediction error lies on a hypersurface of a multidimensional parameter space, so the probability density surface of the posterior sequences has multiple regions of attraction and numerous local optima. This inevitably yields a biased estimator (whether the maximum likelihood, ML, or the maximum a posteriori, MAP, estimator). This problem is known as "equifinality" [9–11]. The surfaces of the prediction error under formal likelihood measures, namely the maximum log-likelihood (ML), are studied here. From these fitness surfaces, it can be concluded that the formal likelihood measure underestimates or overestimates the uncertainty intervals of the posterior samples, because several candidate models in the neighborhood of the estimator also attain high likelihood values.
In this paper, the bias between the ML/MAP estimator and the actual value is derived via a Taylor expansion. It is found that the gradient and Hessian matrix of the likelihood measure link the biased estimator to the actual value; they are therefore used to improve the accuracy of the posterior samples. The parameter estimation problem is posed as a two-step strategy. In the first step, the MAP/ML estimator is obtained with the formal Bayesian likelihood measure using the differential evolution adaptive Metropolis (DREAM) algorithm. In the second step, a new fitness measure is proposed, which can be seen as an informal likelihood measure within the framework of generalized likelihood uncertainty estimation (GLUE) [12–14]. Numerical examples of a linear structural system are presented, with which the effectiveness and efficiency of the proposed method are investigated.
2. Problem Statement
2.1. Least Squares (LS) Estimator for the Inverse Problem
Let $y^m(t)$ denote the measured response at each time step ($t=1,\dots,N_t$) and $\hat{y}(\hat{x},t)$ denote the output of a candidate model. The difference between the measured response and the model output is defined as the residual error, $e_j(\hat{x},t)=y_j^m(t)-\hat{y}_j(\hat{x},t)$, where $j=1,\dots,m$ and $m$ is the number of outputs. The common approach to the inverse problem is to force the residual vector as close to zero as possible by tuning the model parameter vector $\hat{x}$. Thus, the fitness measure can be defined as follows:
$$\mathrm{LS}(\hat{x})=-\frac{1}{m\,N_t}\sum_{j=1}^{m}\sum_{t=1}^{N_t}e_j(\hat{x},t)^2,\qquad \hat{x}^{*}=\arg\max \mathrm{LS}(\hat{x}).\tag{1}$$
This is an $n$-dimensional optimization problem that maximizes the measure in (1) (equivalent to minimizing the sum of squared residuals). Such a measure, however, provides only a point estimate of the optimal value $\hat{x}^{*}$. To quantify the uncertainty of the estimator, one would instead estimate the underlying posterior PDF of the parameters, $P(\hat{x}(\theta)\mid y^m)$, which is the goal of the Bayesian probabilistic framework.
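As a concrete illustration, the measure in (1) takes only a few lines of code (a minimal sketch; the function name and the $(m, N_t)$ array convention are our assumptions, not the paper's):

```python
import numpy as np

# Minimal sketch of the least-squares fitness measure of Eq. (1).
# `y_meas` and `y_model` are assumed to be (m, Nt) arrays holding the
# measured and predicted outputs; the measure is maximized (at zero)
# when every residual vanishes.
def ls_measure(y_meas, y_model):
    e = y_meas - y_model            # residual e(x, t) for each output j
    m, nt = e.shape
    return -np.sum(e**2) / (m * nt)
```

Maximizing `ls_measure` over the parameter vector that generated `y_model` is then an ordinary $n$-dimensional optimization problem.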
2.2. Bayes Estimate Using Formal Log-Likelihood (LL) Measures
In the Bayesian estimation framework, the model set $M$ is a class of probabilistic models, each of which predicts the response of the actual system. The identification problem is to infer the plausibility of each candidate model through a posterior density conditioned on the measured data, $P(\hat{x}(\theta)\mid y^m)$; it is not a quest for the true structural parameters. $\hat{x}(\theta)$ is a stochastic parameter vector defining each possible model in the model set. The model set $M$ is defined by the random parameters $\hat{x}(\theta)=[\hat{x}_1(\theta),\hat{x}_2(\theta),\dots,\hat{x}_n(\theta)]\in\mathbb{R}^n$ ($\theta\in\Omega$, where $\theta$ denotes the random variables in the probability space $\Omega$), where $n$ is the number of parameters of model $M_m\in M$ and $N_s$ is the number of stochastic samples. The initial plausibility of each model parameterized by $\hat{x}(\theta_k)$ ($k=1,2,\dots,N_s$) is defined by a prior density function, $P_0(\hat{x}(\theta)\mid M)$. The updated plausibility of the I/O model follows from Bayes' theorem:
$$P(\hat{x}(\theta)\mid y^m,M)=\frac{P(y^m\mid \hat{x}(\theta),M)\cdot P_0(\hat{x}(\theta)\mid M)}{P(y^m\mid M)}.\tag{2}$$
$P(y^m\mid \hat{x}(\theta),M)$ is the likelihood measure, $L(\hat{x}(\theta))$. If the measurement error is assumed Gaussian with constant variance $\sigma_j^2$ (for the $j$th observed response), the posterior PDF in (2) becomes
$$P(\hat{x}(\theta)\mid y^m,M)=c^{-1}\cdot P_0\cdot\prod_{j=1}^{m}\left(\sqrt{2\pi}\,\sigma_j\right)^{-N_t}\exp\!\left(-\sum_{j=1}^{m}\frac{1}{2\sigma_j^2}\sum_{t=1}^{N_t}\left(y_j^m(t)-\hat{y}_j(\hat{x}(\theta),t)\right)^2\right),\tag{3}$$
where $c$ is the evidence of the model class, $c=P(y^m\mid M)=\int P(y^m\mid \hat{x}(\theta),M)\cdot P(\hat{x}(\theta)\mid M)\,d\theta$, a high-dimensional integral. The difficulty in estimating the posterior PDF is precisely the approximation of this model evidence, and overcoming this challenge is the purpose of the Metropolis-Hastings (MH) algorithm. For simplicity, (2) is rewritten as $p(\theta)=L(\theta)\cdot p_0(\theta)/c=f(\theta)/c$. The MH algorithm generates the posterior PDF in four steps. (1) Start with initial samples $\theta_0$ and compute the likelihood measure $f(\theta_0)$. (2) Produce updated samples $\theta^{*}$ from the jumping distribution $q(\theta_1,\theta_2)$, the probability of returning a value $\theta_2$ given a previous value $\theta_1$; the restriction on the jumping distribution is symmetry, $q(\theta_1,\theta_2)=q(\theta_2,\theta_1)$. (3) Compute the acceptance ratio between the updated candidate $\theta^{*}$ and the current posterior sample $\theta_{t-1}$: $\alpha(\theta)=p(\theta^{*})/p(\theta_{t-1})=f(\theta^{*})/f(\theta_{t-1})=L(\theta^{*})/L(\theta_{t-1})$. (4) If $\alpha\geq 1$, accept the candidate sample $\theta^{*}$; if the jump decreases the density ($\alpha<1$), accept it only with probability $\alpha$ and otherwise keep the current sample $\theta_{t-1}$. The acceptance probability is
$$\alpha(\theta)=\min\!\left(\frac{L(\theta^{*})}{L(\theta_{t-1})},\,1\right).\tag{4}$$
It is clear that the advantage of the MH algorithm lies in the fact that the model evidence is not needed to compute the acceptance ratio, since the constant $c$ cancels out. The transition of samples generates a Markov chain $(\theta_0,\theta_1,\dots,\theta_k,\dots)$. Following a burn-in period, the Markov chain approaches its stationary distribution, and the samples after burn-in converge to the posterior PDF $p(\theta)$ of (2). From (4), it can be seen that the Bayesian estimate relies on the likelihood measure $P(y^m\mid \hat{x}(\theta),M)$. It is more convenient to use the logarithm of the likelihood measure, $\mathrm{LL}(\hat{x}(\theta))$, rather than the likelihood function itself:
$$\mathrm{LL}(\hat{x}(\theta))=-\frac{N_t}{2}\ln(2\pi)-\frac{N_t}{2}\sum_{j=1}^{m}\ln\sigma_j^2-\frac{1}{2}\sum_{j=1}^{m}\sigma_j^{-2}\sum_{t=1}^{N_t}\left(y_j^m(t)-\hat{y}_j(\hat{x}(\theta),t)\right)^2.\tag{5}$$
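In practice, the acceptance test of (4) is evaluated in log space using (5), so the likelihood ratio never overflows. A minimal sketch (the function names, the per-output `sigma` array, and the RNG argument are our assumptions):

```python
import numpy as np

# Gaussian log-likelihood in the form of Eq. (5); `sigma` holds the assumed
# noise standard deviation of each of the m outputs.
def log_likelihood(y_meas, y_model, sigma):
    e = y_meas - y_model                          # (m, Nt) residual matrix
    m, nt = e.shape
    return (-0.5 * nt * np.log(2.0 * np.pi)
            - 0.5 * nt * np.sum(np.log(sigma**2))
            - 0.5 * np.sum(np.sum(e**2, axis=1) / sigma**2))

# Metropolis rule of Eq. (4): accept with probability min(1, L_new / L_old),
# which in log space is a comparison against the log of a uniform draw.
def mh_accept(ll_new, ll_old, rng):
    return ll_new - ll_old > np.log(rng.uniform())
```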
Either the log-likelihood measure in (5) or the least-squares measure in (1) obeys the rule of "goodness-of-fit," because only models with high likelihood values are accepted by the MH method.
2.3. The Surface of the Likelihood Measures
To illustrate the problem of "equifinality," the surface of the commonly used likelihood measure, $\mathrm{LL}(\hat{x}(\theta))$ of (5), is simulated while identifying the stiffness parameters of a 2-DOF linear dynamic system. The state-space form of the system is
$$\begin{bmatrix}\dot{v}_1(t)\\ \dot{v}_2(t)\end{bmatrix}=\begin{bmatrix}0 & I\\ -M^{-1}K & -M^{-1}C\end{bmatrix}\begin{bmatrix}v_1(t)\\ v_2(t)\end{bmatrix}+\begin{bmatrix}0\\ -I\end{bmatrix}\Gamma^{T}u(t),\tag{6}$$
where $M$, $C$, and $K$ are the mass, damping, and stiffness matrices, $I$ is the $n\times n$ identity matrix, and $\Gamma=[1,1,\dots,1]^{T}$ is the $n\times 1$ position vector. $v_1(t)$ and $v_2(t)$ are the state vectors representing displacement and velocity, respectively, and $u(t)$ is the input of the system. The system output is the acceleration, assumed to be contaminated by Gaussian white noise $w_j(t)\sim N(0,\sigma_j(\theta))$, $j=1,\dots,m$. The measured output vector is thus
$$y(t)=\begin{bmatrix}-M^{-1}K & -M^{-1}C\end{bmatrix}\begin{bmatrix}v_1(t)\\ v_2(t)\end{bmatrix}-\Gamma^{T}u(t)+w(\sigma(\theta),t).\tag{7}$$
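For reference, the $M$ and $K$ matrices of such a system can be assembled as follows, assuming a shear-building (chain) model in which each stiffness couples adjacent floors (the paper does not spell out the assembly; the function name is ours):

```python
import numpy as np

# Assemble lumped mass and tridiagonal stiffness matrices for an n-DOF
# shear-building model: k_i connects floor i to floor i-1 (floor 0 = ground).
def shear_building_matrices(masses, stiffnesses):
    n = len(masses)
    M = np.diag(np.asarray(masses, dtype=float))
    K = np.zeros((n, n))
    for i in range(n):
        K[i, i] = stiffnesses[i] + (stiffnesses[i + 1] if i + 1 < n else 0.0)
        if i + 1 < n:
            K[i, i + 1] = K[i + 1, i] = -stiffnesses[i + 1]
    return M, K
```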
The mass and stiffness of each DOF are set to 100 kg and 1000 N/m, respectively. Equation (7) includes Rayleigh damping $C$ (Mita [15]), where the damping ratio of the first two modes ($\zeta_r$) is set to 5%:
$$C=\alpha M+\beta K,\qquad \zeta_r=\frac{\alpha}{2\omega_r}+\frac{\beta\omega_r}{2},\quad r=1,2.\tag{8}$$
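Equation (8) gives two linear equations in $(\alpha,\beta)$ once the first two circular frequencies are known. A minimal sketch (obtaining the frequencies from the generalized eigenproblem $Kv=\omega^2 Mv$ is our assumption):

```python
import numpy as np

# Solve Eq. (8) for the Rayleigh coefficients so that the first two modes
# both attain the target damping ratio zeta.
def rayleigh_coefficients(M, K, zeta=0.05):
    lam = np.sort(np.linalg.eigvals(np.linalg.solve(M, K)).real)
    w1, w2 = np.sqrt(lam[:2])               # first two circular frequencies
    A = np.array([[1.0 / (2.0 * w1), w1 / 2.0],
                  [1.0 / (2.0 * w2), w2 / 2.0]])
    return np.linalg.solve(A, np.array([zeta, zeta]))   # alpha, beta
```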
The parameter domain is meshed at 5% deviations of the true value. The output acceleration (acc.) of (7) with different noise levels was used in the simulation, where the noise level (nl.) is defined through $\sigma_j=\mathrm{nl.}\times\sigma_{\mathrm{acc.}}$. The contour plots of the likelihood measure (5) for the noise-free and different noise-level scenarios are shown in Figure 1.
Contour plots of the likelihood measure in the no-noise, 10% noise, 30% noise, and 100% noise scenarios. ("∘" denotes the actual value; "*" denotes the MAP/ML estimator.)
From Figure 1, we find that only when the measurement error is ignored is the center of the posterior sample model set (the ML/MAP estimator) unbiased; once noise is taken into account, the solution with maximum PDF is biased. Moreover, there are many local optima in the search neighborhood of the optimal solution. It can be concluded that, in the presence of measurement error, the common likelihood measures (1) and (5) are too weak to solve the problem of equifinality. The bias of the estimator grows with the number of parameter dimensions and with the noise level. It is therefore necessary to improve the identified ML/MAP estimator.
3. The Proposed Accuracy-Improving Method
3.1. The First Two Derivatives of the Likelihood Measure
With a Taylor expansion, the likelihood measure can be written as
$$L(\hat{x}(\theta))=L(\hat{x}(\theta_o))+L'(\hat{x}(\theta_o))\Delta\hat{x}+\frac{1}{2}\Delta\hat{x}^{T}L''(\hat{x}(\theta_o))\Delta\hat{x}+o(\Delta\hat{x}^2),\tag{9}$$
where $\Delta\hat{x}$ denotes $\hat{x}(\theta)-\hat{x}(\theta_o)$. Differentiating (9) with respect to $\hat{x}(\theta)$ and ignoring the higher-order terms, one obtains
$$G(\hat{x}(\theta))=G(\hat{x}(\theta_o))+\Delta\hat{x}\,H(\hat{x}(\theta_o)),\tag{10}$$
where $G(\hat{x}(\theta))$, the first derivative of the likelihood measure, is the gradient of $L(\hat{x}(\theta))$, and $H(\hat{x}(\theta))$, the second derivative, is the Hessian matrix of $L(\hat{x}(\theta))$. $G(\hat{x}(\theta_o))$ and $H(\hat{x}(\theta_o))$ are the gradient and Hessian evaluated at the true value, $\hat{x}(\theta_o)$. Since $G(\hat{x}(\theta_{\mathrm{ML}}))=0$, it follows that
$$\hat{x}(\theta_{\mathrm{ML}})=\hat{x}(\theta_o)-G(\hat{x}(\theta_o))H(\hat{x}(\theta_o))^{-1}.\tag{11}$$
As seen in (11), the bias $\hat{x}(\theta_{\mathrm{ML}})-\hat{x}(\theta_o)$ equals the negative product of the gradient and the inverse Hessian at the actual value. Equation (11) is therefore used in this study to formulate the informal likelihood measure. Let $d(\hat{x}(\theta))=\hat{x}(\theta)-\hat{x}(\theta_{\mathrm{ML}})$ denote the offset between each posterior sample and the MAP/ML estimator. The proposed Bayesian updating of the posterior samples using the DREAM algorithm is divided into two steps. The first step finds the ML/MAP estimator, $\hat{x}(\theta_{\mathrm{ML}})$. The second step searches for the optimal point $\hat{x}(\theta^{*})$ of the proposed criterion in the neighborhood of the ML/MAP estimator. The informal likelihood measure can be written as
$$L^{*}(\hat{x}(\theta))=\frac{1}{\left\|d(\hat{x}(\theta))-G(\hat{x}(\theta))H(\hat{x}(\theta))^{-1}\right\|+\varepsilon},\qquad \hat{x}(\theta^{*})=\arg\max L^{*},\tag{12}$$
where $\varepsilon$ denotes the credible range, which can be chosen by the user as the stopping criterion.
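The measure of (12) is straightforward to evaluate once the gradient and Hessian of the log-likelihood are available. A minimal sketch (the callable `grad`/`hess` arguments and the Euclidean norm are our assumptions; for a symmetric Hessian, $H^{-1}G$ equals $GH^{-1}$):

```python
import numpy as np

# Informal likelihood of Eq. (12): peaks where the offset from the ML point
# matches the bias predicted by Eq. (11).
def informal_likelihood(x, x_ml, grad, hess, eps=1e-6):
    d = x - x_ml                                    # d(x) = x - x_ML
    correction = np.linalg.solve(hess(x), grad(x))  # H(x)^{-1} G(x)
    return 1.0 / (np.linalg.norm(d - correction) + eps)
```

For an exactly quadratic log-likelihood the Taylor expansion (9) is exact, so $d-H^{-1}G$ vanishes identically and the measure saturates at $1/\varepsilon$; for the real measure it is largest near the actual value.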
3.2. Illustration of the Proposed Likelihood Measure
From (12), it can be found that the extreme points of the likelihood measure $L^{*}(\hat{x}(\theta))$ are the actual value, $\hat{x}(\theta_o)$, and the MAP/ML estimator, $\hat{x}(\theta_{\mathrm{ML}})$: the former is the null point of (11), and the latter satisfies $G(\hat{x}(\theta_{\mathrm{ML}}))=0$. Since the MAP/ML estimator $\hat{x}(\theta_{\mathrm{ML}})$ has already been obtained in the first step, once the distribution of the posterior samples becomes stationary, the posterior samples on the Markov chains in the second step converge into the neighborhood of the actual value under the proposed likelihood measure. The accuracy of the estimator is thus improved, and the uncertainty range of the posterior samples is narrowed around the actual value. To verify the proposed likelihood measure (12), the surface and contour for the 2-DOF linear system, simulated in the 100% noise scenario of Section 2.3, are shown in Figure 2. As seen in Figure 2, the accuracy of the posterior samples is improved compared with the MAP estimator (compare Figure 1(b)).
The surface and contour plot of the proposed likelihood measures.
3.3. Two Steps of the DREAM-Based Bayesian Estimation
3.3.1. Step 1: MAP Estimator Using the DREAM Algorithm
The DREAM algorithm combines the DE mutation strategy with the updating of Markov chains. In the transition of the Markov chains, the posterior samples are updated by a revised DE mutation strategy. Let $\Delta\hat{x}^{i}(\theta)=\hat{x}_{k+1}^{i}(\theta)-\hat{x}_{k}^{i}(\theta)$ denote the jump between the updated state ($k+1$) and the current state ($k$) of the $i$th Markov chain. The samples are updated as
$$\Delta\hat{x}^{i}=\left(I_d+e_d\right)\gamma_{\delta,d}\left[\sum_{j=1}^{\delta}\hat{x}_{k}^{r_1(j)}-\sum_{n=1}^{\delta}\hat{x}_{k}^{r_2(n)}\right]+\varepsilon_r,\tag{13}$$
where $\varepsilon_r$ is a small random vector drawn from $N_d(0,\hat{\Sigma})$; $\delta$ is the number of chosen pairs; and $r_1(j)$ and $r_2(n)$ are distinct random integers chosen from the set $\{1,2,\dots,i-1,i+1,\dots,N_s\}$. The term $I_d$ is the $d$-dimensional identity matrix, and $e_d$ is a small random vector drawn from a uniform distribution to ensure ergodicity of the samples on the Markov chain. The scaling factor $\gamma$ is determined by the values of $\delta$ and $d$, where $d$ is the parameter dimension. The DREAM algorithm also applies the DE crossover strategy in the $d$ dimensions of the current sample, $\hat{x}_{k}^{i}(\theta)$, and the updated posterior sample, $\hat{x}_{k+1}^{i}(\theta)$; a trial sample $\hat{x}_{k+1}^{ij}(\theta)$ is generated as
$$\hat{x}_{k+1}^{ij}(\theta)=\begin{cases}\hat{x}_{k}^{ij}(\theta), & \text{if } u_{\mathrm{nd}}^{j}\leq 1-\mathrm{CR},\\ \hat{x}_{k+1}^{ij}(\theta), & \text{otherwise},\end{cases}\tag{14}$$
where $j=1,2,\dots,d$; $u_{\mathrm{nd}}^{j}$ is the $j$th independent random number uniformly distributed on $[0,1]$; and $\mathrm{CR}$ is a user-defined crossover probability. DREAM accepts the candidate state $\hat{x}_{k+1}(\theta)$ with probability $\min(1,\alpha(\hat{x}_{k+1}(\theta),\hat{x}_{k}(\theta)))$ and keeps the current state $\hat{x}_{k}(\theta)$ otherwise; the MH acceptance probability $\alpha(\hat{x}_{k+1}(\theta),\hat{x}_{k}(\theta))$ is computed by (4). The posterior samples are updated by (13) and (14) and approach a stationary distribution after a burn-in period. Convergence of the posterior samples is checked by the Gelman-Rubin criterion [16]. The $\hat{R}$-statistic is computed using the last 50% of the samples in each chain.
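The mutation and crossover of (13)-(14) can be sketched as follows (a simplified sketch: the scaling factor $\gamma=2.38/\sqrt{2\delta d}$ and the small-perturbation magnitudes are common DREAM defaults assumed here, not values taken from the paper):

```python
import numpy as np

# One DREAM proposal for chain i per Eqs. (13)-(14): a DE jump built from
# `delta` randomly chosen chain pairs, followed by binomial crossover.
def dream_proposal(chains, i, delta=1, cr=0.85, rng=None):
    rng = np.random.default_rng() if rng is None else rng
    n, d = chains.shape
    others = [j for j in range(n) if j != i]
    picks = rng.choice(others, size=2 * delta, replace=False)
    r1, r2 = picks[:delta], picks[delta:]
    gamma = 2.38 / np.sqrt(2.0 * delta * d)     # standard DE scaling factor
    e = rng.uniform(-1e-4, 1e-4, size=d)        # ergodicity perturbation e_d
    eps_r = rng.normal(0.0, 1e-6, size=d)       # small noise term eps_r
    jump = (1.0 + e) * gamma * (chains[r1].sum(axis=0)
                                - chains[r2].sum(axis=0)) + eps_r
    candidate = chains[i] + jump
    keep = rng.uniform(size=d) <= 1.0 - cr      # crossover: keep current coords
    candidate[keep] = chains[i, keep]
    return candidate
```

The candidate returned here would then be accepted or rejected by the Metropolis rule of (4).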
The convergence criterion $\hat{R}$ is
$$\hat{R}=\frac{k+1}{k}\cdot\frac{\hat{\sigma}^{2}}{W}-\frac{N_s-1}{k\,N_s},\tag{15}$$
where $k$ is the number of parallel chains whose retained samples are used and $N_s$ is the number of retained samples in each chain; $W=\sum_{i=1}^{k}s_i^{2}/k$, where $s_i^{2}$ is the within-sequence variance; the posterior variance is estimated as $\hat{\sigma}^{2}=((N_s-1)/N_s)W+(1/N_s)B$, where $B$ is the between-sequence variance, computed as $B=N_s\sum_{i=1}^{k}(\bar{\hat{x}}_i-\bar{\hat{x}})^{2}/(k-1)$.
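The statistic of (15) can be sketched directly from a $(k, N_s)$ array of retained samples (the degrees-of-freedom conventions are our assumptions):

```python
import numpy as np

# Gelman-Rubin diagnostic of Eq. (15): k chains, Ns retained samples each
# (the paper retains the last 50% of every chain).
def gelman_rubin(chains):
    k, ns = chains.shape
    W = chains.var(axis=1, ddof=1).mean()        # mean within-chain variance
    B = ns * chains.mean(axis=1).var(ddof=1)     # between-chain variance
    sigma2 = (ns - 1) / ns * W + B / ns          # pooled variance estimate
    return (k + 1) / k * sigma2 / W - (ns - 1) / (k * ns)
```

For well-mixed chains the statistic is close to 1; the paper declares convergence in each dimension when it drops below 1.2.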
3.3.2. Step 2: Density Updating of the Samples That Satisfy the Proposed Criteria
When $\hat{R}^{j}$ in (15) is less than 1.2 for each dimension $j$, the posterior samples on the MC chains have converged to a stationary distribution. The MAP estimator, $x_{\mathrm{MAP}}$, is the sample with maximum posterior PDF, and the standard deviation of the posterior samples, $\sigma_x(\theta)$, is obtained as well. The algorithm then enters the second step. The gradient and Hessian matrix of the posterior samples within the bounds $x_{\mathrm{MAP}}\mp 3\sigma_x(\theta)$ are calculated at each iteration, and the proposed informal likelihood measure (12) is used for updating the posterior samples. The estimation procedure stops once the prescribed precision, $\varepsilon$, is satisfied. The gradient and Hessian matrix at each sample are
$$G(\hat{x}(\theta_j))=\left[\frac{\partial L}{\partial \hat{x}_1},\frac{\partial L}{\partial \hat{x}_2},\dots,\frac{\partial L}{\partial \hat{x}_n}\right],\qquad H(\hat{x}(\theta_j))=\begin{bmatrix}\dfrac{\partial^2 L}{\partial \hat{x}_1^2} & \dfrac{\partial^2 L}{\partial \hat{x}_1\partial \hat{x}_2} & \cdots & \dfrac{\partial^2 L}{\partial \hat{x}_1\partial \hat{x}_n}\\ \dfrac{\partial^2 L}{\partial \hat{x}_2\partial \hat{x}_1} & \dfrac{\partial^2 L}{\partial \hat{x}_2^2} & \cdots & \dfrac{\partial^2 L}{\partial \hat{x}_2\partial \hat{x}_n}\\ \vdots & \vdots & \ddots & \vdots\\ \dfrac{\partial^2 L}{\partial \hat{x}_n\partial \hat{x}_1} & \dfrac{\partial^2 L}{\partial \hat{x}_n\partial \hat{x}_2} & \cdots & \dfrac{\partial^2 L}{\partial \hat{x}_n^2}\end{bmatrix},\tag{16}$$
where $j$ denotes the $j$th posterior sample and $n$ is the parameter dimension. The diagonal and off-diagonal elements of $H(\hat{x}(\theta_j))$ are evaluated by central differences:
$$H_{l,l}(\hat{x}(\theta_j))=\frac{L(\hat{x}(\theta_j)+\Delta\hat{x}_l(\theta_j))-2L(\hat{x}(\theta_j))+L(\hat{x}(\theta_j)-\Delta\hat{x}_l(\theta_j))}{\Delta\hat{x}_l(\theta_j)^2},$$
$$H_{l,l'}(\hat{x}(\theta_j))=\frac{1}{4\Delta\hat{x}_l(\theta_j)\Delta\hat{x}_{l'}(\theta_j)}\bigl[L(\hat{x}(\theta_j)+\Delta\hat{x}_l(\theta_j)+\Delta\hat{x}_{l'}(\theta_j))-L(\hat{x}(\theta_j)+\Delta\hat{x}_l(\theta_j)-\Delta\hat{x}_{l'}(\theta_j))-L(\hat{x}(\theta_j)-\Delta\hat{x}_l(\theta_j)+\Delta\hat{x}_{l'}(\theta_j))+L(\hat{x}(\theta_j)-\Delta\hat{x}_l(\theta_j)-\Delta\hat{x}_{l'}(\theta_j))\bigr].\tag{17}$$
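The central-difference formulas of (16)-(17) can be sketched for an arbitrary scalar log-likelihood (the function names and step sizes are our assumptions):

```python
import numpy as np

# Central-difference gradient (Eq. (16)) and Hessian (Eq. (17)) of a scalar
# log-likelihood function `ll` evaluated at x.
def fd_gradient(ll, x, h=1e-5):
    g = np.zeros_like(x)
    for l in range(x.size):
        dl = np.zeros_like(x); dl[l] = h
        g[l] = (ll(x + dl) - ll(x - dl)) / (2.0 * h)
    return g

def fd_hessian(ll, x, h=1e-4):
    n = x.size
    H = np.zeros((n, n))
    f0 = ll(x)
    for l in range(n):
        dl = np.zeros(n); dl[l] = h
        H[l, l] = (ll(x + dl) - 2.0 * f0 + ll(x - dl)) / h**2
        for lp in range(l + 1, n):
            dp = np.zeros(n); dp[lp] = h
            H[l, lp] = H[lp, l] = (
                ll(x + dl + dp) - ll(x + dl - dp)
                - ll(x - dl + dp) + ll(x - dl - dp)) / (4.0 * h**2)
    return H
```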
3.4. Identification Procedures
The procedure of the proposed posterior density estimation is as follows.
Step 1.
Use the Latin hypercube sampling (LHS) method to generate $N_s$ sequences for the initial states of the MC chains, respecting the prescribed limits of the search space. The likelihood measure of each sample is obtained by (5).
Step 2.
Update the posterior sample of the Markov chain by mutation strategy using (13) and by the crossover probability using (14). Calculate the density for the updated samples.
Step 3.
The Metropolis acceptance (4) is used for choosing the accepted posterior samples.
Step 4.
Repeat Steps 2 and 3; after the burn-in period, calculate the convergence criterion (15) for each dimension of the structural parameter vector. If the convergence criterion of the MC chains is met ($\hat{R}^{j}<1.2$), obtain the MAP estimator and the standard deviation of the samples; otherwise, return to Step 2.
Step 5.
Calculate the gradient and Hessian matrix at each sample within the interval $[x_{\mathrm{MAP}}\mp 3\sigma_x(\theta)]$ by (16) and (17).
Step 6.
The informal likelihood measure as in (12) is used for the transition of the posterior samples till the predefined stop condition is satisfied.
4. Numerical Study
4.1. Identification of a 5-DOF LTI System
Numerical simulation of a 5-DOF LTI system was carried out to verify the proposed method. The structural system is simulated as (6) and the measured signal is as (7). The input was an El-Centro wave lasting 40 s and the sampling frequency was 100 Hz (Figure 3). The measurement noise of the 5th DOF in the 100% noise level scenarios is shown in Figure 3. Table 1 shows the structural properties of the dynamic system.
Structural properties.

| Property | Location | Value |
| --- | --- | --- |
| Stiffness (N/m) | Levels 1–5 | 5.0 × 10^3 |
| Mass (kg) | Levels 1–4 | 50 |
| Mass (kg) | Level 5 | 45 |
| Damping ratio ζ1,2 | First two modes | 0.05 |
Input and simulated measured error.
The influence of the limited availability of measurements on the proposed method is also assessed. In the "full output" scenario, measurements of all DOFs are available, whereas in the "partial output" case only data from DOFs 3 and 5 are available. The mass is assumed known; hence, an $n$-DOF system with $m$ available measurements is described by a model set whose parameter vector of interest is
$$x(\theta)=\left[k_1(\theta),\dots,k_n(\theta),\zeta_1(\theta),\zeta_2(\theta)\right].\tag{18}$$
The parameters of the DREAM algorithm were set as follows: the number of Markov chain samples (Ns) was 20, the crossover probability (CR) was 0.85, and the number of sample pairs (δ) was 5. The search domain was taken to be 0.5–2.0 times the true value. The identified results are exhibited in Table 2.
Identified results of structural parameters.

| Parameter | Measure | Full, no noise | Full, 30% noise | Full, 100% noise | Partial, no noise | Partial, 30% noise | Partial, 100% noise |
| --- | --- | --- | --- | --- | --- | --- | --- |
| k1 | Error | 0.000 | 0.224 | 0.561 | 0.000 | 0.241 | 0.932 |
| k1 | Cov. | 0.000 | 0.484 | 1.657 | 0.000 | 1.244 | 3.339 |
| k2 | Error | 0.000 | 0.239 | 0.487 | 0.000 | 0.395 | 0.498 |
| k2 | Cov. | 0.000 | 0.358 | 1.137 | 0.000 | 1.852 | 3.039 |
| k3 | Error | 0.000 | 0.383 | 0.603 | 0.000 | 0.516 | 0.705 |
| k3 | Cov. | 0.000 | 0.628 | 2.234 | 0.000 | 1.402 | 3.394 |
| k4 | Error | 0.000 | 0.353 | 0.472 | 0.000 | 0.447 | 0.819 |
| k4 | Cov. | 0.000 | 0.417 | 1.322 | 0.000 | 0.921 | 2.898 |
| k5 | Error | 0.000 | 0.473 | 0.681 | 0.000 | 0.486 | 0.691 |
| k5 | Cov. | 0.000 | 0.443 | 1.475 | 0.000 | 1.893 | 2.323 |
| ζ1 | Error | 0.000 | 0.398 | 0.581 | 0.000 | 0.695 | 1.044 |
| ζ1 | Cov. | 0.000 | 0.319 | 2.322 | 0.000 | 2.145 | 3.549 |
| ζ2 | Error | 0.000 | 0.494 | 0.726 | 0.000 | 0.701 | 0.993 |
| ζ2 | Cov. | 0.000 | 0.897 | 2.397 | 0.000 | 2.423 | 4.197 |

The "Error" is in %; the "Cov." (the ratio of standard deviation to mean) is in %.
From Table 2, it can be concluded that the proposed method finds the optimal value $\hat{x}(\theta^{*})$ in both the "full outputs" and "partial outputs" scenarios, even at large noise levels. Ignoring measurement noise, the proposed method identifies the structural parameters without bias, and the coefficient of variation (cov.) of the posterior samples is zero, which demonstrates that the inverse problem in the noise-free scenario is deterministic. When the acceleration of every DOF is measured, the maximum relative error ranges from 0.494% in the 30% noise case to 0.726% in the 100% noise case. When only the measurements of the 3rd and 5th DOFs are available, the maximum relative error ranges from 0.701% in the 30% noise case to 1.044% in the 100% noise case. The credible range of the posterior samples is described by the coefficient of variation of the MC sequences; clearly, the parametric uncertainty grows with increasing measurement error. In the "full outputs" scenario, the maximum cov. of the posterior samples rises from 0.897% at the 30% noise level to 2.397% in the 100% noise case; correspondingly, in the partial-outputs scenario, the maximum cov. rises from 2.423% to 4.197%.
The convergence progress of the MC samples for each dimension of the structural parameter vector, in the partial-output scenario with 100% noise, is shown in Figure 4. The probability density of the posterior samples in the Markov chains becomes stationary after about 1500 iterations, as clearly shown in Figure 4(b). The proposed likelihood measure, used in the transition of posterior samples to find the optimal $\hat{x}(\theta^{*})$, takes effect after 2500 iterations. The posterior uncertainty range with 95% reliability can be obtained from the posterior samples of the model class, which express the plausibility of each I/O system. Figure 5 shows the 95% uncertainty range of the response at each time step, incorporating the identified standard deviation of the prediction error (the first 10 s of the time history are shown). The percentage of the response with 100% measurement error that falls within the uncertainty range accounting for prediction error is 94.76%.
Identification progress for stiffness of the 5th floor (partial output: 100% noise).
95% uncertainty ranges for acceleration of the 5th DOF (100% noise: partial output).
4.2. Identification of a 10-DOF LTI System
To show the advantage of the proposal, the identification results of a 10-DOF LTI system using the improved measure proposed in (12) are compared with the estimates obtained using the formal log-likelihood measure (5). For simplicity, the mass of each floor is assumed to be 50 kg, the stiffness of each DOF is 5000 N/m, and the damping ratio of the first two mode shapes is assumed to be 0.05. The input excitation is the same as in the simulation of the 5-DOF LTI system. White noise is added to the measured response simulated by (7), with noise levels of 10%, 30%, and 100%. In the scenario of partial measurements, only the even DOFs (2nd, 4th, 6th, 8th, and 10th) are assumed available for Bayesian updating. The results are shown in Figure 6.
Identified results using the proposed and traditional likelihood measures (gray bars: traditional likelihood measure; blue bars: proposed likelihood measure).
Case 1: full outputs
Case 2: partial outputs
In the case of "full outputs," the maximum relative error of $\hat{x}_{\mathrm{MAP}}$ using the proposed method grows from zero (noise-free) to 0.617% at the 10% noise level, 1.175% at 30%, and 1.766% at 100%. Correspondingly, the maximum relative error of $\hat{x}_{\mathrm{MAP}}$ using the traditional likelihood measure increases from zero in the noise-free case to 0.754% at 10%, 1.999% at 30%, and 6.491% at 100% noise. The improvement is also clear in the partial-outputs scenario: the maximum relative error of $\hat{x}_{\mathrm{MAP}}$ using the proposed method increases from zero in the noise-free case to 1.479% at 10%, 2.175% at 30%, and 3.543% at 100% noise, whereas with the traditional method it rises from zero to 2.621% at 10%, 5.974% at 30%, and 8.451% at 100% noise. Thus, both the minimum and maximum relative errors of the mean posterior samples in the model set are reduced by the proposed method. Moreover, Figure 6 shows that with (12) the parametric uncertainty (the cov.) becomes smaller than that obtained with the formal log-likelihood measure (5). For instance, at the 100% noise level with partial outputs, the maximum coefficient of variation of the MC samples using the proposed method is 6.198%, compared with 11.09% for the traditional method. Comparing the identification results in Figure 6, it is clear that the accuracy of the estimator using the proposed likelihood measure is improved.
5. Conclusions
To improve the accuracy of the MAP estimator obtained by the traditional likelihood measure in DREAM-based structural identification, the gradient and Hessian of the log-likelihood measure are used to formulate a generalized likelihood measure for the density transition of the Markov chains. Compared with the formal likelihood function, both the relative error of the MAP estimator and the uncertainty range of the posterior samples become smaller with the proposed method. Numerical simulations of 5-DOF and 10-DOF LTI systems demonstrate its effectiveness in solving identification problems with high noise levels and missing measurement data. In conclusion, DREAM-based Bayesian estimation using the proposed improvement has the potential to solve the problem of "equifinality," especially at large levels of measurement error.
Conflict of Interests
The authors declare that there is no conflict of interests regarding the publication of this paper.
Acknowledgment
This work was supported in part by a Grant for the project “Implementation of Shelter Guidance System for Commuters Who Are Unable to Return Home Based on Structural Health Monitoring of Tall Buildings after Large-Scale Earthquake” (FY2013–2016, PI: A. Mita) from the Japan Science and Technology Agency.
References
[1] E. Simoen, G. De Roeck, and G. Lombaert, "Dealing with uncertainty in model updating for damage assessment: a review."
[2] J. L. Beck and S.-K. Au, "Bayesian updating of structural models and reliability using Markov chain Monte Carlo simulation."
[3] M. Muto and J. L. Beck, "Bayesian updating and model class selection for hysteretic structural models using stochastic simulation."
[4] J. Ching and Y.-C. Chen, "Transitional Markov chain Monte Carlo method for Bayesian model updating, model class selection and model averaging."
[5] S. H. Cheung and J. L. Beck, "Bayesian model updating using hybrid Monte Carlo simulation with application to structural dynamic models with many uncertain parameters."
[6] A. Huhtala and S. Bossuyt, "A Bayesian approach to vibration based structural health monitoring with experimental verification."
[7] D. An, J.-H. Choi, and N. H. Kim, "Identification of correlated damage parameters under noise and bias using Bayesian inference."
[8] P. L. Green, "Bayesian system identification of a nonlinear dynamical system using a novel variant of Simulated Annealing."
[9] K. J. Beven, "A manifesto for the equifinality thesis."
[10] J. R. Stedinger, R. M. Vogel, and S. U. Lee, "Appraisal of the generalized likelihood uncertainty estimation (GLUE) method."
[11] M. Sadegh and J. A. Vrugt, "Approximate Bayesian computation in hydrologic modeling: equifinality of formal and informal approaches."
[12] K. Beven and J. Freer, "Equifinality, data assimilation, and uncertainty estimation in mechanistic modelling of complex environmental systems using the GLUE methodology."
[13] Y. Zhang, H. H. Liu, and J. Houseworth, "Modified generalized likelihood uncertainty estimation (GLUE) methodology for considering the subjectivity of likelihood measure selection."
[14] Z. Li, Q. Chen, Q. Xu, and K. Blanckaert, "Generalized likelihood uncertainty estimation method in uncertainty analysis of numerical eutrophication models: take bloom as an example."
[15] A. Mita.
[16] A. Gelman and D. B. Rubin, "Inference from iterative simulation using multiple sequences."