The inverse problem of using measurements to estimate unknown parameters of a system arises often in engineering practice and scientific research. This paper proposes a Collage-based parameter inversion framework for a class of partial differential equations. The Collage method is used to convert the parameter estimation inverse problem
into a minimization problem of a function of several variables after the partial differential equation is approximated by a differential dynamical system. Numerical schemes for solving this minimization problem are then proposed, including grid approximation and ant colony optimization. The proposed schemes are applied to a parameter estimation problem for the Belousov-Zhabotinskii equation, and the results show that the proposed approximation method is efficient for partial differential equations that are either linear or nonlinear with respect to the unknown parameters. At worst, the presented method provides an excellent starting point for traditional inversion methods, which require a good initial guess.
1. Introduction
In industrial and engineering applications there are broad classes of inverse problems, which can be described as problems of working backwards from measurements to estimates of parameter values [1, 2]. In this paper we concentrate on the following partial differential equation (PDE) with m unknown parameters:
$\frac{\partial u}{\partial t} = f(u, Du, x, t, \lambda_1, \ldots, \lambda_m), \qquad u(x, t_0) = u_0(x),$
where u=(u1,…,un) and f=(f1,f2,…,fn) are n-dimensional vector functions, and Du=(D(1)u,D(2)u,…,D(K)u) is a K-dimensional vector consisting of spatial partial derivatives of first or higher order involved in (1.1). The detailed description on (1.1) will be given in the next section. The parameter estimation problem of (1.1) can be phrased as follows.
Let u¯(x,t) be a target solution. Find parameters λ1,…,λm such that (1.1) admits u¯(x,t) as a solution or an approximate solution, where fi (1≤i≤n) may be linear or nonlinear with respect to the unknown parameters.
Most numerical methods for solving this kind of inverse problem rely on numerous executions of the forward problem, each time with different parameter values; the numerical method for the forward problem must therefore be fast. This paper proposes a new framework for solving the above parameter inversion problem by a numerical approximation based on the Collage method, with the aim of avoiding repeated solution of the forward problem.
In the proposed framework the Collage method is used to convert the parameter inversion problem into a function optimization problem. The motivation for our treatment comes from the use of contraction maps in fractal-based approximation methods such as fractal interpolation [3–5] and fractal image compression [6, 7]. The mathematical methods that underlie fractal image compression were first introduced to inverse problems for ODEs by Kunze and Vrscay [8], who set up a framework for solving inverse problems based on the Picard contraction map associated with ODEs. This framework has been successfully applied to further inverse problems for ODEs (see [9–11]). Recently, Kunze et al. [12] developed a Collage-based approach for PDE inverse problems, in which boundary value inverse problems were solved by means of the Lax-Milgram representation theorem and a generalized Collage theorem. Deng et al. [13] proposed a framework for solving parameter estimation problems for reaction-diffusion equations using an approximate Picard contraction map and the Collage method. In that framework, the fixed point of the contractive Picard integral operator is viewed as an approximation of the target solution u¯(x,t), and the inverse problem becomes one of finding the unknown parameters that define the Picard operator P by minimizing the squared Collage distance d2(u¯,Pu¯).
In the frameworks proposed in [8, 13], the stationarity conditions ∂d2(u¯,Pu¯)/∂λk=0 yield a set of linear equations under the assumption that the vector field is linear with respect to the unknown parameters. These algorithms are simple in both concept and form. However, the stationarity conditions yield a nonlinear system when the vector field is nonlinear with respect to the unknown parameters, and such a nonlinear system is very difficult to solve.
Differing from [8, 13], in this paper the parameter estimation problem of (1.1) is viewed as a global minimization problem for the function F(λ1,λ2,…,λm) determined by the squared Collage distance d2(u¯,W¯u¯), and both grid approximation and ant colony optimization are proposed for solving this minimization problem. The methods presented in this paper are suitable for complicated parameter estimation problems, such as those in which f is nonlinear with respect to the unknown parameters or involves a large number of them.
The structure of this paper is as follows. In Section 2, we provide a simple review of the Collage method for inverse problems of ODEs, and give the theoretical framework for converting the parameter estimation problem of (1.1) into a minimization problem of a function of several variables. In Section 3, we describe an algorithm for computing the function of several variables determined by the squared Collage distance. In Section 4, the grid approximation and ant colony optimization schemes for parameter estimation are applied with our method in order to solve a parameter estimation problem for the Belousov-Zhabotinskii equation.
2. Formulation from Parameter Estimation to Minimization Problem
In this section we keep technical details to a minimum; the reader is referred to [8, 13] for the full mathematical details. The framework presented in this paper is an extension of the Picard contraction mapping method for a class of inverse problems of ordinary differential equations [8]; its theoretical basis is the Collage theorem [14].
Proposition 2.1.
(Collage theorem) Let (V,d) be a complete metric space, and let P be a contractive map on V with fixed point u* and contraction factor cP∈[0,1). Then
$d(u, u^*) \le \frac{1}{1 - c_P}\, d(u, Pu), \quad \forall u \in V.$
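As a quick numerical illustration of Proposition 2.1 (this sketch is ours, using an arbitrarily chosen affine contraction, not an example from the sources):

```python
# Numerical illustration of Proposition 2.1 on the complete metric space
# (R, |.|): P(u) = 0.5*u + 1 is contractive with factor c_P = 0.5 and
# fixed point u_star = 2, so d(u, u*) <= d(u, Pu) / (1 - c_P).
c_P = 0.5
P = lambda u: c_P * u + 1.0
u_star = 2.0                      # P(2) = 2, the fixed point

us = [0.0, 3.0, 10.0]
true_errors = [abs(u - u_star) for u in us]
collage_bounds = [abs(u - P(u)) / (1.0 - c_P) for u in us]
```

For an affine contraction the bound holds with equality; in general it shows that making the collage distance d(u,Pu) small forces u toward the fixed point u*, which is what licenses minimizing the collage distance instead of the (unknowable) true error.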
In [8], a framework for solving inverse problems of ODEs by the Collage theorem was set up: given a target u(t), seek an ODE initial value problem u̇=f(u,t), u(t0)=u0 that admits u(t) as either a solution or an approximate solution, where f is restricted to a class of functional forms, for example, affine or quadratic. Associated with the initial value problem is the Picard integral operator P:
$(Pu)(t) = u_0 + \int_{t_0}^{t} f(u(s), s)\, ds.$
It is well known that, subject to appropriate conditions on f, the operator P is contractive over an appropriate Banach function space V. By taking u as the target solution, the approximate vector field g(u,t) of f(u,t) associated with the operator P is found by minimizing the squared Collage distance d2(u,Pu).
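A minimal sketch of this ODE collage procedure (our own illustration, not one of the examples of [8]): the target u(t)=e^{2t} on [0,1] solves u̇=λu, u(0)=1, with λ=2, and a coarse grid search over λ recovers the true value by minimizing the squared collage distance.

```python
import numpy as np

# Collage-based parameter estimation for the ODE u' = lam*u, u(0) = 1.
# The target u(t) = exp(2t) is a solution for lam = 2; we recover lam by
# minimizing the squared collage distance ||u - Pu||^2 over a grid, where
# (Pu)(t) = 1 + lam * int_0^t u(s) ds is the Picard operator.
t = np.linspace(0.0, 1.0, 2001)
dt = t[1] - t[0]
u = np.exp(2.0 * t)

# cumulative trapezoidal approximation of int_0^t u(s) ds
iu = np.concatenate(([0.0], np.cumsum(0.5 * (u[1:] + u[:-1]) * np.diff(t))))

def collage_sq(lam):
    resid = u - (1.0 + lam * iu)          # u - Pu on the time grid
    return np.sum(resid ** 2) * dt        # squared L2 collage distance

lams = np.linspace(0.0, 4.0, 401)         # candidate parameters, step 0.01
best = lams[np.argmin([collage_sq(l) for l in lams])]
```

No forward problem is ever solved here: the Picard operator is applied to the target itself, and the minimization runs over the parameter alone.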
Now we turn to discuss the parameter estimation problem of (1.1) by the use of the Collage method. Firstly some basic assumptions on (1.1) are listed as follows.
(x,t)∈Ω×[t0,T], where Ω⊂RN is a bounded region, and t0 and T are constants satisfying t0<T.
u(x,t) is a vector function of the form u(x,t)=(u1(x,t),u2(x,t),…,un(x,t)), where ui(x,·) is differentiable and ui(·,t)∈Cα(Ω); here α is the highest order of the spatial partial derivatives involved in (1.1).
fi(u,Du,x,t,λ1,…,λm)(1≤i≤n) are, for the moment, continuous.
The exact solution u*(x,t) of the system (1.1) exists uniquely.
By replacing the term Du(x,t) of (1.1) with Du*(x,t), we gain an approximate dynamical model of (1.1):
$\frac{\partial u}{\partial t} = f(u, Du^*, x, t, \lambda_1, \ldots, \lambda_m), \qquad u(x, t_0) = u_0(x),$
and the solution of (2.3) satisfies the equivalent integral equation
$u(x,t) = u_0(x) + \int_{t_0}^{t} f(u, Du^*, x, s, \lambda_1, \ldots, \lambda_m)\, ds.$
Define the Picard operator W associated with the model (2.4) as follows:
$(Wu)(x,t) = u_0(x) + \int_{t_0}^{t} f(u, Du^*, x, s, \lambda_1, \ldots, \lambda_m)\, ds.$
It is clear that Wu*=u*. The parameter estimation problem of (1.1) will be converted into a minimization problem based on (2.5) by the Collage method.
In [13] we showed that, subject to appropriate conditions on the vector field f, the Picard operator W is contractive on a complete space V¯ of functions supported over the domain Ω×[t0,T]. The space V¯ is equipped with the norm
$\|u\| = \left(\int_{t_0}^{T} \|u(x,t)\|_2^2\, dt\right)^{1/2}, \quad u \in \bar V,$
where
$\|u(x,t)\|_2 = \left(\sum_{i=1}^{n} \|u_i(x,t)\|_{L^2}^2\right)^{1/2}, \qquad \|u_i(x,t)\|_{L^2} = \left(\int_\Omega u_i^2(x,t)\, dx\right)^{1/2}, \quad i = 1, 2, \ldots, n.$
Let
$d(u, v) = \|u - v\|, \quad \forall u, v \in \bar V.$
Then an interesting inequality is obtained (see [13] for details):
$d(\bar u, u^*) \le \frac{c_{\bar W}}{1 - c_W}\, d(D\bar u, Du^*) + \frac{1}{1 - c_W}\, d(\bar u, \bar W \bar u),$
where 0<cW¯,cW<1 are two constants and
$\bar W \bar u = u_0(x) + \int_{t_0}^{t} f(\bar u(x,s), D\bar u(x,s), x, s, \lambda_1, \ldots, \lambda_m)\, ds.$
Note that the metric d(Du,Dv) is defined similarly to (2.6)–(2.9) for d(u,v); the only difference between d(u,v) and d(Du,Dv) is the dimension.
In the inequality (2.10), the true approximation error d(u¯,u*) is bounded by the spatial derivative approximation error d(Du¯,Du*) and the Collage distance d(u¯,W¯u¯). It is clear that d(Du¯,Du*)=0 when u¯=u*, so one can estimate the unknown parameters λ1,…,λm by minimizing the squared Collage distance d2(u¯,W¯u¯). In many practical problems, however, the target function u¯(x,t) is generated by interpolating observational or experimental data points u(xi,tj), collected at various locations xi and times tj. The case u¯(x,t)≠u*(x,t) therefore requires further discussion before the minimization of the squared Collage distance can be applied to practical problems.
Proposition 2.2.
Let u(x,t) be such that u(x,·) is differentiable, and let ∂D(i)u/∂t(x,t) be continuous for i=1,2,…,K. Then
$\|Du(x,t) - Du_0(x)\| \le C\, (T - t_0)^{3/2}, \quad (x,t) \in \Omega \times [t_0, T],$
where Du0(x)=Du(x,t0), and C is a positive constant.
Proof.
It follows from the differential mean-value theorem that, for i=1,…,K,

$D^{(i)}u(x,t) - D^{(i)}u(x,t_0) = \frac{\partial D^{(i)}u(x,\xi_i)}{\partial t}\, (t - t_0), \quad \xi_i \in [t_0, t].$
From the continuity assumption on ∂D(i)u/∂t(x,t), we have that
$\left|\frac{\partial D^{(i)}u(x,\xi_i)}{\partial t}\right| \le M_i,$
where
$M_i = \sup_{(x,t) \in \Omega \times [t_0, T]} \left|\frac{\partial D^{(i)}u}{\partial t}(x,t)\right|.$
We find from the definition of the norm ∥·∥ that
$\|Du(x,t) - Du_0(x)\|^2 = \|Du(x,t) - Du(x,t_0)\|^2 = \int_{t_0}^{T} \left[\sum_{i=1}^{K} \int_{\Omega} \left(\frac{\partial D^{(i)}u(x,\xi_i)}{\partial t}\right)^2 (t - t_0)^2\, dx\right] dt \le C^2 (T - t_0)^3,$

where $C = \sqrt{\frac{S_\Omega}{3} \sum_{i=1}^{K} M_i^2}$; here $S_\Omega$ is the area (or volume) of the domain Ω. Thus the inequality (2.12) holds.
Proposition 2.3.
Let u*(x,t) and u¯(x,t) be the exact solution and the target solution of (1.1), respectively. Assume that ∂D(i)u*/∂t(x,t) and ∂D(i)u¯/∂t(x,t) are continuous for i=1,2,…,K. Then there exists a positive constant C′ such that
$d(D\bar u, Du^*) \le C' (T - t_0)^{3/2} + d(D\bar u_0(x), Du_0(x)),$
where Du¯0(x)=Du¯(x,t0),Du0(x)=Du*(x,t0).
Proof.
Firstly, from Proposition 2.2, there are two positive constants C1′ and C2′ such that
$\|D\bar u(x,t) - D\bar u_0(x)\| \le C_1' (T - t_0)^{3/2}, \qquad \|Du^*(x,t) - Du_0(x)\| \le C_2' (T - t_0)^{3/2}.$
We have that
$d(D\bar u, Du^*) = \|D\bar u(x,t) - Du^*(x,t)\| \le \|D\bar u(x,t) - D\bar u(x,t_0)\| + \|Du^*(x,t) - Du^*(x,t_0)\| + \|D\bar u(x,t_0) - Du^*(x,t_0)\| \le (C_1' + C_2')(T - t_0)^{3/2} + d(D\bar u(x,t_0), Du^*(x,t_0)).$
Letting C′=C1′+C2′, we obtain the result of Proposition 2.3.
The following theorem follows immediately from the inequality (2.10) and Proposition 2.3.
Theorem 2.4.
Let u*(x,t) and u¯(x,t) be the exact solution and the target solution of (1.1), respectively. Denote u¯(x,t0) by u¯0(x) and u*(x,t0) by u0(x). Assume that ∂D(i)u*/∂t(x,t) and ∂D(i)u¯/∂t(x,t) are continuous for i=1,2,…,K. Then
$d(\bar u, u^*) \le C_1 (T - t_0)^{3/2} + C_2\, d(D\bar u_0(x), Du_0(x)) + C_3\, d(\bar u, \bar W \bar u),$
where
$C_1 = \frac{c_{\bar W} C'}{1 - c_W}, \qquad C_2 = \frac{c_{\bar W}}{1 - c_W}, \qquad C_3 = \frac{1}{1 - c_W}.$
From Theorem 2.4, the true approximation error d(u¯,u*) is controlled by (T-t0)^{3/2}, the spatial derivative approximation error d(Du¯0(x),Du0(x)), and the Collage distance d(u¯,W¯u¯). For a given target solution u¯, the first two terms on the right-hand side of (2.20) are fixed, so the smallest upper bound on d(u¯,u*) associated with the inequality (2.20) is obtained by minimizing d(u¯,W¯u¯). Thus, Theorem 2.4 provides a theoretical basis for finding the unknown parameters of (1.1) by minimizing the squared Collage distance. At worst, the presented method provides an excellent starting point for traditional inversion methods.
In a real problem, it is important to make the error bound on d(u¯,u*) obtained from (2.20) as small as possible. The first term on the right-hand side of (2.20) poses no difficulty, since it approaches zero as T approaches t0. To guarantee the effectiveness of the proposed minimization method, it is necessary to construct the target solution u¯ from the known measurements of (1.1) in such a way that d(Du¯0(x),Du0(x)) is as small as possible. If the target solution u¯ satisfies Du¯0(x)=(D(1)u0(x),…,D(K)u0(x)), then the target function u¯ and the exact solution u* have the same spatial derivatives at the initial time t0, and d(Du¯0(x),Du0(x))=0. We then have from (2.20) that
$d(\bar u, u^*) \le C_1 (T - t_0)^{3/2} + C_3\, d(\bar u, \bar W \bar u).$
In general, the Hermite interpolation method can be used to construct the target solution u¯(x,t). When the exact solution u*(x,t) is given in the form of data points at (xi,tj), a small value of d(Du¯0(x),Du0(x)) can be expected if the spatial derivative values of the exact solution u*(x,t) at the initial time t0 are incorporated in the interpolation.
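As an illustrative sketch (ours, with hypothetical measured values and derivatives of an assumed exact profile u*(x,t0)=sin x), cubic Hermite interpolation matches both the values and the first spatial derivatives at the data nodes, which is precisely what makes d(Du¯0,Du0) small:

```python
import numpy as np
from scipy.interpolate import CubicHermiteSpline

# Hypothetical measurements at the initial time t0: values and first
# spatial derivatives of an assumed exact solution u*(x, t0) = sin(x)
# at a few sample points x_i.
x = np.linspace(0.0, 1.0, 6)
u_vals = np.sin(x)               # measured u*(x_i, t0)
u_x = np.cos(x)                  # measured spatial derivative at x_i

# Cubic Hermite interpolation reproduces both the values and the first
# derivatives at the nodes, so D(1)u_bar(x_i, t0) = D(1)u*(x_i, t0).
u_bar0 = CubicHermiteSpline(x, u_vals, u_x)

xx = np.linspace(0.0, 1.0, 101)
err = np.max(np.abs(u_bar0(xx) - np.sin(xx)))   # interpolation error
```

This matches first-order spatial derivatives only; for equations in which Du involves higher-order derivatives, Hermite data of correspondingly higher order would be needed.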
3. Algorithm for Function of Several Variables
Differing from the ideas proposed in [8, 13], the unknown parameters of (1.1) will be estimated by finding the minimum of the function of several variables determined by d2(u¯,W¯u¯). Let J be the vector function defined as follows:
$J(\bar u, D\bar u, x, t, \lambda_1, \ldots, \lambda_m) = \int_{t_0}^{t} f(\bar u, D\bar u, x, s, \lambda_1, \ldots, \lambda_m)\, ds,$
then
$d^2(\bar u, \bar W \bar u) = \|\bar u - u_0 - J(\bar u, D\bar u, x, t, \lambda_1, \ldots, \lambda_m)\|^2 = \int_{t_0}^{T} \sum_{i=1}^{n} \|\bar u^{(i)} - u_0^{(i)} - J_i(\bar u, D\bar u, x, t, \lambda_1, \ldots, \lambda_m)\|_{L^2}^2\, dt = \sum_{i=1}^{n} \int_{t_0}^{T} \left(\int_\Omega \left(\bar u^{(i)} - u_0^{(i)} - J_i(\bar u, D\bar u, x, t, \lambda_1, \ldots, \lambda_m)\right)^2 dx\right) dt,$
where u¯(i) and u0(i) denote the ith part of the vector function u¯ and u0, respectively. Let
$d_i^2 = \int_{t_0}^{T} \left(\int_\Omega \left(\bar u^{(i)} - u_0^{(i)} - J_i(\bar u, D\bar u, x, t, \lambda_1, \ldots, \lambda_m)\right)^2 dx\right) dt.$
We have that
$d^2(\bar u, \bar W \bar u) = \sum_{i=1}^{n} d_i^2.$
Obviously, a function of unknown parameters λ1,…,λm will be obtained by computing the integrals involved in d2(u¯,W¯u¯). The obtained function is denoted by F(λ1,…,λm) throughout the rest of this paper, that is,
$F(\lambda_1, \ldots, \lambda_m) = \sum_{i=1}^{n} d_i^2.$
Example 3.1.
To demonstrate the above algorithm, we consider the Belousov-Zhabotinskii equation
$\frac{\partial u}{\partial t} = u(1 - u - rv) + lrv + \frac{\partial^2 u}{\partial x^2}, \qquad \frac{\partial v}{\partial t} = mv - buv + \frac{\partial^2 v}{\partial x^2},$
where x∈Ω, t0≤t≤T. Suppose that (u¯(x,t), v¯(x,t)) is the target solution satisfying u¯(x,t0)=u0(x), v¯(x,t0)=v0(x), and that l, r, m, b are unknown parameters. Let
$g_1 = \int_{t_0}^{t} \bar u(x,s)\, ds, \qquad g_2 = \int_{t_0}^{t} (\bar u(x,s))^2\, ds, \qquad g_3 = \int_{t_0}^{t} \bar u(x,s)\, \bar v(x,s)\, ds,$
$g_4 = \int_{t_0}^{t} \frac{\partial^2 \bar u}{\partial x^2}(x,s)\, ds, \qquad g_5 = \int_{t_0}^{t} \bar v(x,s)\, ds, \qquad g_6 = \int_{t_0}^{t} \frac{\partial^2 \bar v}{\partial x^2}(x,s)\, ds.$
Then
$d_1^2 = \int_{t_0}^{T} \left(\int_\Omega (\bar u - u_0 - g_1 + g_2 + r g_3 - l r g_5 - g_4)^2\, dx\right) dt, \qquad d_2^2 = \int_{t_0}^{T} \left(\int_\Omega (\bar v - v_0 - m g_5 + b g_3 - g_6)^2\, dx\right) dt.$
Denoting d12,d22 by F1(l,r) and F2(m,b), respectively, we have that
F(l,r,m,b)=F1(l,r)+F2(m,b).
Let 〈·〉 denote the integral
$\langle f(x,t) \rangle = \int_{t_0}^{T}\!\!\int_\Omega f(x,t)\, dx\, dt,$
and
$A = \langle g_5^2 \rangle, \qquad B = \langle g_3 g_5 \rangle, \qquad C = \langle g_3^2 \rangle,$
$D = \langle g_5 (\bar u - u_0 - g_1 + g_2 - g_4) \rangle, \qquad E = \langle g_3 (\bar u - u_0 - g_1 + g_2 - g_4) \rangle, \qquad Q_1 = \langle (\bar u - u_0 - g_1 + g_2 - g_4)^2 \rangle,$
$G = \langle g_5 (\bar v - v_0 - g_6) \rangle, \qquad H = \langle g_3 (\bar v - v_0 - g_6) \rangle, \qquad Q_2 = \langle (\bar v - v_0 - g_6)^2 \rangle.$
Then
$F_1(l,r) = A l^2 r^2 - 2 B l r^2 + C r^2 - 2 D l r + 2 E r + Q_1, \qquad F_2(m,b) = A m^2 + C b^2 - 2 B m b - 2 G m + 2 H b + Q_2.$
4. Numerical Approximation Methods
From the previous section, the function of several variables obtained from the Collage method has the form of a sum, each term of which depends on only a few variables. Consequently, many parameter estimation problems for PDEs can be solved exactly, in the manner known from classical analysis. However, many problems can be solved only by approximate numerical methods when the function F(λ1,…,λm) is especially complicated, for example when the f associated with F is nonlinear. Approximate numerical methods are also suitable when the number of variables is large. In this paper we are interested in the grid approximation and ant colony optimization methods, and these will be applied to the estimation of the unknown parameters of (1.1).
Note that the ranges of unknown parameters may be assumed from the physical understanding of the problem and modified from the analysis of numerical approximation results. In this section we assume that (λ1,…,λm)∈S, where S is a bounded domain with the form
$S = [\lambda_1^{(\min)}, \lambda_1^{(\max)}] \times [\lambda_2^{(\min)}, \lambda_2^{(\max)}] \times \cdots \times [\lambda_m^{(\min)}, \lambda_m^{(\max)}].$
Thus, the continuous optimization problem associated with the parameter estimation of (1.1) can be phrased as
minF(λ1,λ2,…,λm),(λ1,λ2,…,λm)∈S.
Example 4.1.
We demonstrate the methods for the system (3.6) with assumptions: the domain Ω×[0,T]=[0,1.0]×[0,0.5], the parameter domain S=[0,1]×[0,1]×[0,1]×[0,1], the initial condition u0(x)=sinx,v0(x)=cosx, and the target solution u¯(x,t)=xt3+sinx,v¯(x,t)=x3t+cosx. By applying the algorithm presented in Section 3, the coefficients of (3.12) and (3.13) are obtained
A=0.0332,B=0.0140,C=0.0079,D=-0.0072,E=-0.0049,Q1=0.0035,G=0.0044,H=0.0069,Q2=0.0148.
The estimates of the unknown parameters l,r and m,b can be obtained by solving the optimization problems of F1(l,r) and F2(m,b), respectively.
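The coefficients above can be checked by direct numerical quadrature. The sketch below (ours) recomputes A=⟨g5²⟩ for the target v¯(x,t)=x³t+cos x, using the closed form g5=x³t²/2+t·cos x that follows from the definition of g5:

```python
import numpy as np

# Recomputing the coefficient A = <g5^2> of Example 4.1 by numerical
# quadrature.  For the target v_bar(x, t) = x^3*t + cos(x) on
# [0,1] x [0, 0.5], g5(x, t) = int_0^t v_bar(x, s) ds has the closed
# form x^3 * t^2 / 2 + t * cos(x).
x = np.linspace(0.0, 1.0, 401)[:, None]    # spatial grid (column)
t = np.linspace(0.0, 0.5, 401)[None, :]    # temporal grid (row)

g5 = x**3 * t**2 / 2.0 + t * np.cos(x)

def bracket(f):
    """<f> = int_0^0.5 int_0^1 f dx dt by the 2D trapezoidal rule."""
    w = np.ones_like(f)
    w[0, :] *= 0.5; w[-1, :] *= 0.5        # boundary weights in x
    w[:, 0] *= 0.5; w[:, -1] *= 0.5        # boundary weights in t
    dx = 1.0 / (f.shape[0] - 1)
    dt = 0.5 / (f.shape[1] - 1)
    return float(np.sum(w * f) * dx * dt)

A = bracket(g5 ** 2)    # approx 0.0332, in agreement with the value above
```

The remaining coefficients follow the same pattern, with each g_k tabulated on the grid and fed through the same quadrature.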
4.1. Grid Approximation
We firstly describe a partition scheme of the parameter domain S. For i=1,2,…,m, the intervals [λi(min),λi(max)] are partitioned with step hi=λi,j+1-λi,j,j=0,1,…,Ni-1, that is,
$\lambda_i^{(\min)} = \lambda_{i,0} < \lambda_{i,1} < \cdots < \lambda_{i,N_i} = \lambda_i^{(\max)}.$
Let λmin=(λ1,0,…,λm,0). We define the spatial grid GR(S) by the formula
$GR(S) = \left\{\lambda \in \mathbb{R}^m : \lambda = \lambda_{\min} + \sum_{i=1}^{m} k_i h_i e_i,\ k_i = 0, 1, \ldots, N_i,\ i = 1, \ldots, m\right\},$
where ei=(ei1,…,eim) are basis vectors satisfying eii=1,eij=0(i≠j),i,j=1,…,m.
With the above grid GR(S), the approximate estimate λ*=(λ1*,…,λm*)∈GR(S) of the unknown parameter vector of (1.1) is determined by
$F(\lambda^*) = \min_{\lambda \in GR(S)} F(\lambda).$
To test the effect of the grid approximation method, the minimization problems of (3.12) and (3.13) are solved with S=[0,1]×[0,1] and hi=0.01 (i=1,2); the results are shown in Figures 1 and 2. Note that the parameter estimation problem for (3.6) cannot be solved by the framework proposed in [13], because f is nonlinear with respect to the unknown parameters.
The grid approximation for the minimum of F1(l,r).
The grid approximation for the minimum of F2(m,b).
In Figure 1, the red point is the global minimum position of F1(l,r), where l*=0.18,r*=0.9 and F1(l*,r*)=2.8556e-04. Similarly, (m*,b*)=(0.14,0.01) is the global minimum point of F2(m,b) with a minimum F2(m*,b*)=0.0143 (see Figure 2).
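A sketch of the grid approximation itself (ours, using the rounded coefficients listed above; because of that rounding, the computed minimizer can differ slightly from the values read off Figure 1):

```python
import numpy as np

# Grid approximation of min F1(l, r) over S = [0,1] x [0,1] with step
# h = 0.01, using the rounded coefficients of Example 4.1.
A, B, C = 0.0332, 0.0140, 0.0079
D, E, Q1 = -0.0072, -0.0049, 0.0035

def F1(l, r):
    return (A * l**2 * r**2 - 2*B * l * r**2 + C * r**2
            - 2*D * l * r + 2*E * r + Q1)

ls = np.arange(0.0, 1.0 + 1e-12, 0.01)     # grid nodes in l
rs = np.arange(0.0, 1.0 + 1e-12, 0.01)     # grid nodes in r
L, R = np.meshgrid(ls, rs, indexing="ij")
vals = F1(L, R)
i, j = np.unravel_index(np.argmin(vals), vals.shape)
l_star, r_star = ls[i], rs[j]
```

The cost is one function evaluation per grid node; no forward PDE solve appears anywhere in the loop.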
Sometimes the stationarity conditions ∂F/∂λi=0 can be used to reduce the computational complexity. For example, it follows from ∂F1(l,r)/∂l=0 that
$l = \frac{1}{A}\left(B + \frac{D}{r}\right).$
The minimum of F1(l,r) can then be found by viewing F1(l,r) as a function of the single variable r; the result is shown in Figure 3.
The grid approximation for the minimum of F1(l,r) under the condition l=(1/A)(B+D/r), where F1(l*,r*)=2.8553e-04, l*=0.182, r*=0.89.
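The reduced one-dimensional search can be sketched as follows (our illustration, again with the rounded coefficients; points whose l falls outside S are discarded):

```python
import numpy as np

# One-dimensional reduction via the stationarity condition dF1/dl = 0,
# which gives l = (B + D/r)/A; F1 is then minimized over r alone
# (r = 0 is excluded to avoid the division by r).
A, B, C = 0.0332, 0.0140, 0.0079
D, E, Q1 = -0.0072, -0.0049, 0.0035

def F1(l, r):
    return (A * l**2 * r**2 - 2*B * l * r**2 + C * r**2
            - 2*D * l * r + 2*E * r + Q1)

rs = np.arange(0.01, 1.0 + 1e-12, 0.01)
ls = (B + D / rs) / A                    # optimal l for each fixed r
inside = (ls >= 0.0) & (ls <= 1.0)       # keep only points inside S
vals = np.where(inside, F1(ls, rs), np.inf)
k = int(np.argmin(vals))
l_star, r_star = ls[k], rs[k]
```

The substitution satisfies ∂F1/∂l = 0 exactly along the search curve, so only the r-direction remains to be explored.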
The initial positions of 30 particles.
The positions of 30 particles after 100 iteration processes with a global best solution F1(l*,r*)=2.8551e-04,l*=0.1734,r*=0.8823.
The global minimum of F1(l,r) for every iteration process.
4.2. Ant Colony Optimization Approximation
The ant colony optimization (ACO) algorithm was inspired by the observation of real ant colonies. Its inspiring source is the foraging behavior of real ants, which enables them to find the shortest paths between nest and food sources [15, 16]. Recently, ACO algorithms for continuous optimization problems have received increasing attention in swarm computation; many studies have shown that ACO algorithms have great potential for a wide range of optimization problems, including continuous optimization [17–22]. These ACO algorithms for continuous domains can be used directly to solve the minimization problem (4.2).
In [17], Shelokar et al. proposed a particle swarm optimization (PSO) method hybridized with an ant colony approach (PSACO) for the optimization of multimodal continuous functions; it applies PSO for global optimization and uses the ant colony mechanism to update the positions of particles so as to reach the feasible solution space rapidly (see [17] for details). As an example, the PSACO algorithm is applied to the minimization problem of (3.12), and the results in Figures 4, 5, and 6 are obtained.
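For orientation, a minimal global-best PSO sketch is given below (ours; this is plain PSO, not the hybrid PSACO algorithm of [17], and it only illustrates how a swarm method attacks the minimization of (3.12)):

```python
import numpy as np

# A minimal global-best PSO sketch for min F1 over S = [0,1]^2.  One
# particle is seeded at the grid solution (0.18, 0.9), so the swarm can
# only improve on that starting value.
rng = np.random.default_rng(0)
A, B, C, D, E, Q1 = 0.0332, 0.0140, 0.0079, -0.0072, -0.0049, 0.0035

def F1(p):                                  # p: array of (l, r) rows
    l, r = p[:, 0], p[:, 1]
    return (A * l**2 * r**2 - 2*B * l * r**2 + C * r**2
            - 2*D * l * r + 2*E * r + Q1)

n, w, c1, c2 = 30, 0.7, 1.5, 1.5            # swarm size, inertia, pulls
pos = rng.random((n, 2))                    # 30 particles in [0,1]^2
pos[0] = [0.18, 0.9]                        # seed with the grid minimizer
vel = np.zeros((n, 2))
pbest, pbest_val = pos.copy(), F1(pos)

for _ in range(100):
    g = pbest[np.argmin(pbest_val)]         # global best position
    vel = (w * vel + c1 * rng.random((n, 2)) * (pbest - pos)
           + c2 * rng.random((n, 2)) * (g - pos))
    pos = np.clip(pos + vel, 0.0, 1.0)      # keep particles inside S
    f = F1(pos)
    better = f < pbest_val
    pbest[better], pbest_val[better] = pos[better], f[better]

best_val = float(pbest_val.min())
```

Because F1 is cheap to evaluate, the swarm performs thousands of evaluations in negligible time, which is exactly the situation the Collage reformulation creates.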
Acknowledgment
This work is supported by the National Natural Science Foundation of China under Grant no. 50875104.
References

[1] V. Isakov, Inverse Problems for Partial Differential Equations, Springer, New York, NY, USA, 1997.
[2] B. Malengier and R. van Keer, "Parameter estimation in convection dominated nonlinear convection-diffusion problems by the relaxation method and the adjoint equation," Journal of Computational and Applied Mathematics, vol. 215, no. 2, pp. 477–483, 2008. doi:10.1016/j.cam.2006.03.050
[3] M. F. Barnsley, Fractals Everywhere, Academic Press, New York, NY, USA, 1988.
[4] M. F. Barnsley, "Fractal functions and interpolation," Constructive Approximation, vol. 2, no. 4, pp. 303–329, 1986. doi:10.1007/BF01893434
[5] X. Deng, H. Li, and X. Chen, "The symbol series expression and Hölder exponent estimates of fractal interpolation function," vol. 11, no. 3, pp. 507–523, 2009.
[6] Y. Fisher, Fractal Image Compression: Theory and Application, Springer, New York, NY, USA, 1995.
[7] A. E. Jacquin, "Image coding based on a fractal theory of iterated contractive image transformations," IEEE Transactions on Image Processing, vol. 1, no. 1, pp. 18–30, 1992.
[8] H. E. Kunze and E. R. Vrscay, "Solving inverse problems for ordinary differential equations using the Picard contraction mapping," Inverse Problems, vol. 15, no. 3, pp. 745–770, 1999.
[9] H. E. Kunze, J. E. Hicken, and E. R. Vrscay, "Inverse problems for ODEs using contraction maps and suboptimality of the 'collage method'," Inverse Problems, vol. 20, no. 3, pp. 977–991, 2004.
[10] H. Kunze and K. Heidler, "The collage coding method and its application to an inverse problem for the Lorenz system," Applied Mathematics and Computation, vol. 186, no. 1, pp. 124–129, 2007. doi:10.1016/j.amc.2006.07.093
[11] H. Kunze and S. Vasiliadis, "Using the collage method to solve ODEs inverse problems with multiple data sets," Nonlinear Analysis: Theory, Methods & Applications, in press. doi:10.1016/j.na.2009.01.167
[12] H. Kunze, D. La Torre, and E. R. Vrscay, "A generalized collage method based upon the Lax-Milgram functional for solving boundary value inverse problems," Nonlinear Analysis: Theory, Methods & Applications, in press. doi:10.1016/j.na.2009.01.160
[13] X. Deng, B. Wang, and G. Long, "The Picard contraction mapping method for the parameter inversion of reaction-diffusion systems," vol. 56, no. 9, pp. 2347–2355, 2008.
[14] M. F. Barnsley, V. Ervin, D. Hardin, and J. Lancaster, "Solution of an inverse problem for fractals and other sets," Proceedings of the National Academy of Sciences of the USA, vol. 83, no. 7, pp. 1975–1977, 1986. doi:10.1073/pnas.83.7.1975
[15] M. Dorigo, V. Maniezzo, and A. Colorni, "Ant system: optimization by a colony of cooperating agents," IEEE Transactions on Systems, Man, and Cybernetics, Part B, vol. 26, no. 1, pp. 29–41, 1996.
[16] M. Dorigo and C. Blum, "Ant colony optimization theory: a survey," Theoretical Computer Science, vol. 344, no. 2-3, pp. 243–278, 2005. doi:10.1016/j.tcs.2005.05.020
[17] P. S. Shelokar, P. Siarry, V. K. Jayaraman, and B. D. Kulkarni, "Particle swarm and ant colony algorithms hybridized for improved continuous optimization," Applied Mathematics and Computation, vol. 188, no. 1, pp. 129–142, 2007. doi:10.1016/j.amc.2006.09.098
[18] X.-M. Hu, J. Zhang, and Y. Li, "Orthogonal methods based ant colony search for solving continuous optimization problems," Journal of Computer Science and Technology, vol. 23, no. 1, pp. 2–18, 2008. doi:10.1007/s11390-008-9111-5
[19] K. Socha and M. Dorigo, "Ant colony optimization for continuous domains," European Journal of Operational Research, vol. 185, no. 3, pp. 1155–1173, 2008. doi:10.1016/j.ejor.2006.06.046
[20] K. Socha and M. Dorigo, "ACO for continuous and mixed-variable optimization," in Lecture Notes in Computer Science, vol. 3172, pp. 25–36, Springer, Berlin, Germany, 2004.
[21] R. Chelouah and P. Siarry, "Genetic and Nelder-Mead algorithms hybridized for a more accurate global optimization of continuous multiminima functions," European Journal of Operational Research, vol. 148, no. 2, pp. 335–348, 2003. doi:10.1016/S0377-2217(02)00401-0
[22] R. Chelouah and P. Siarry, "A hybrid method combining continuous tabu search and Nelder-Mead simplex algorithms for the global optimization of multiminima functions," European Journal of Operational Research, vol. 161, no. 3, pp. 636–654, 2005. doi:10.1016/j.ejor.2003.08.053