A novel modeling method for population dynamics is developed. Based on
the classical Lotka-Volterra model, we construct a new predator-prey model with unknown
parameters to simulate the behaviors of predator and prey. Using the approximation property
and the machine learning approach of artificial neural networks, a tuning algorithm for the unknown
parameters is obtained, and the model can be asymptotically stabilized about the factual predator-prey data
using a neural network controller. Numerical examples and analysis of the results are presented.
1. Introduction
A mathematical model of population is an abstract model that uses mathematical language to describe the behavior of a population and the relationships between populations. Mathematical models are extremely powerful because they usually enable explanations and predictions to be made about a population. Soon after Lotka and Volterra’s pioneering works [1, 2], many scholars established a large number of mathematical models to simulate the evolution of populations. Mathematical ecology has since produced momentous theoretical and applied results. In particular, in the 1970s, new theories and methods from differential equations and dynamical systems came to be widely used in the study of population genetics, neurobiology, epidemiology, immunology, physiology, environmental pollution, and so forth. See the monographs of Murray [3], Chen [4], Freedman [5], and Murray [6] for a detailed description of populations and for various mathematical methods for analyzing population models.
One of the most basic and important models is the Lotka-Volterra type model. In the last decades, the permanence, extinction, and global asymptotic stability of autonomous and nonautonomous Lotka-Volterra type models have been studied extensively; see, for example, [7–11] and the references therein. In addition, the books of Takeuchi [9], Gopalsamy [12], and Kuang [13] are good sources for the dynamical behavior of Lotka-Volterra models.
Unfortunately, we can never make a completely precise model of a population system, and there are always phenomena that we will not be able to model. Thus, there will always be model errors or model uncertainties. For example, when establishing a mathematical model, a number of secondary factors are ignored in order to keep the mathematical analysis tractable. However, with the rapid progress of science and technology, especially the fast development of computers and networks, more information on biological populations can be obtained. A large amount of this information shows that the existing models are not able to precisely simulate the developmental process of populations. In other words, the secondary factors ignored in the modeling process in fact play an important role in the development of populations. Therefore, the traditional models cannot simulate the real dynamics of populations, and the traditional modeling technique is not perfect.
On the other hand, we note that an artificial neural network (ANN), usually called a neural network (NN), is a mathematical or computational model that is inspired by the structural and/or functional aspects of biological neural networks. In most cases, an ANN is an adaptive system that changes its structure based on external or internal information that flows through the network during the learning phase. For this very reason, ANNs are found in almost every domain of applied science, such as pattern recognition, data classification, medicine, sales forecasting, industrial process control, customer research, data validation, risk management, and target marketing.
ANNs based on biological neural systems have been studied for years, and their capabilities for learning and adaptation, classification, function approximation, feature extraction, and more have made them extremely useful in signal processing and system identification applications. These are open-loop applications, and the theory of ANNs is very well developed in this area. The applications of ANNs in closed-loop feedback control systems have only recently been rigorously studied, and a foundation for neural networks in control has been provided in seminal results by Narendra et al. [14–17] and others. Several researchers have studied ANN control and managed to prove stability [18–20]. In particular, Jiang [21] presented a neural network control scheme for trajectory tracking of a nonlinear system, together with an iterative training law described by a positive definite discrete kernel. Hayakawa [22] considered a neural network hybrid adaptive control framework for nonlinear uncertain hybrid dynamical systems.
Motivated by these considerations, in this paper, we will propose a new modeling method, using machine learning of the ANN theory to amend Lotka-Volterra predator-prey model such that the revised model can better simulate or control the development of populations.
This paper is organized as follows. Using machine learning of the ANN theory, a new predator-prey model is introduced in Section 2. In Section 3, we give some functional-link neural network (FLNN) weight tuning algorithms for this model. In Section 4, some specific examples are given to illustrate our results. Finally, Section 5 presents the conclusions.
2. Model Description and Preliminaries
It is well known that Lotka-Volterra models are fundamental population models. For example, the following classical autonomous model is used to model the interaction of the prey and predator:
(1) dx1/dt = x1(b1 - a11x1 - a12x2), dx2/dt = x2(-b2 + a21x1 - a22x2),
where x1 and x2 represent the population densities of prey and predator at time t, respectively, and bi and aij (i,j=1,2) are nonnegative constants. The dynamic behaviors of model (1) are well understood. It has no periodic solution in the region ℝ+2={(x1,x2):x1≥0,x2≥0}; it has a saddle (0,0) and a stable node (b1/a11,0) under the condition b1a21≤b2a11, whereas it has two saddles (0,0) and (b1/a11,0) and one stable focus ((b1a22+b2a12)/(a11a22+a12a21),(b1a21-b2a11)/(a11a22+a12a21)) under the condition b1a21>b2a11, as shown in Figure 1.
The dynamical behavior of model (1): (a) a stable focus ((b1a22+b2a12)/(a11a22+a12a21),(b1a21-b2a11)/(a11a22+a12a21)); (b) a stable node (b1/a11,0).
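The phase-portrait classification of model (1) can be checked numerically. The following sketch (our illustration; the parameter values are hypothetical choices satisfying b1a21 ≤ b2a11, not taken from this paper) integrates model (1) with a classical fourth-order Runge-Kutta scheme and confirms convergence to the stable node (b1/a11, 0):

```python
# Numerical check of the phase portrait of model (1), using a classical
# fourth-order Runge-Kutta integrator.  The parameter values below are
# illustrative choices (not from the paper) satisfying b1*a21 <= b2*a11,
# the condition under which (b1/a11, 0) is a stable node.
b1, a11, a12 = 1.0, 1.0, 1.0
b2, a21, a22 = 2.0, 1.0, 1.0

def f(x1, x2):
    """Right-hand side of model (1)."""
    return (x1 * (b1 - a11 * x1 - a12 * x2),
            x2 * (-b2 + a21 * x1 - a22 * x2))

def rk4_step(x1, x2, h):
    k1 = f(x1, x2)
    k2 = f(x1 + h / 2 * k1[0], x2 + h / 2 * k1[1])
    k3 = f(x1 + h / 2 * k2[0], x2 + h / 2 * k2[1])
    k4 = f(x1 + h * k3[0], x2 + h * k3[1])
    return (x1 + h / 6 * (k1[0] + 2 * k2[0] + 2 * k3[0] + k4[0]),
            x2 + h / 6 * (k1[1] + 2 * k2[1] + 2 * k3[1] + k4[1]))

x1, x2 = 0.5, 0.5                 # positive initial densities
h = 0.01
for _ in range(int(50 / h)):      # integrate up to t = 50
    x1, x2 = rk4_step(x1, x2, h)

print(x1, x2)   # the trajectory approaches the node (b1/a11, 0) = (1, 0)
```

With these parameters the predator term -b2 + a21x1 - a22x2 stays negative, so x2 decays to zero and x1 settles at the carrying capacity b1/a11, matching case (b).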
Along with the development of technology, more information on biological population can be obtained. We find that the model (1) could not simulate the real dynamic behavior of predator and prey. Therefore, according to the traditional Lotka-Volterra predator-prey model, we construct a new predator-prey model to simulate the states of predator and prey. The model takes the following form:
(2) dx1/dt = x1(b1 - a11x1 - a12x2) + τ1, dx2/dt = x2(-b2 + a21x1 - a22x2) + τ2,
where τ=(τ1,τ2)T is an unknown function.
Next, we will use the machine learning of the ANN theory to approximate the unknown function τ, such that model (2) can better simulate the behaviors of predator and prey.
Let x=(x1,x2) be a solution of model (2). Suppose that the factual data of predator-prey is xd=(x1d,x2d)∈R+2. Then, we define the error function e(t) by
(3) e(t) = x - xd = (x1-x1d, x2-x2d)T.
Since xd is the factual data, it satisfies the following assumption.
(A1) The factual data is bounded, that is,
(4) ∥(xd, dxd/dt)∥ ≤ XB,
where XB is a known positive constant and
(5) dxd/dt = (dx1d/dt, dx2d/dt)T.
By differentiating (3) and invoking (2), it is seen that the model is expressed in terms of the tracking error as
(6) de/dt = -De + p(y) + τ,
where
(7) de/dt = (de1/dt, de2/dt)T, y = (x1d, x2d, e1, e2)T,
D = diag(0, b2) is a diagonal matrix, and
(8) p(y) = (p1(y), p2(y))T, where
p1(y) = (x1d + e1)[b1 - a11(x1d + e1) - a12(x2d + e2)] - dx1d/dt,
p2(y) = (x2d + e2)[a21(x1d + e1) - a22(x2d + e2)] - b2x2d - dx2d/dt.
Let S be a compact, simply connected subset of R4 and let p(·): S → R2 be continuous; C2(S) denotes the space of such functions. The universal approximation theorem states that, for the function p(y), there exists an ideal approximating weight W such that
(9) p(y) = WTϕ(y) + ɛ
with the approximation error bounded by
(10) ∥ɛ∥ < ɛY,
and ϕ(·): R4 → R2 is a basis set, which can be chosen as
(11) (1/(1+exp(-·)), 1/(1+exp(-·)))T
or as (x1-x1d, x2-x2d)T. Note as well that the ideal weight is unknown and not even unique. Assume that the ideal weight is constant and bounded, so that
(12) ∥W∥F ≤ WB
with the bound WB known, where ∥W∥F = sqrt(tr(WTW)) is the Frobenius norm of W and tr(·) is the matrix trace, that is, the sum of the diagonal elements. For more details, we refer to [23, 24].
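To make the approximation property (9) concrete, the following sketch (our illustration; the target function and all numerical choices are hypothetical) fits the output weights W of a one-layer functional-link network with fixed random logistic basis functions, as in (11), to a quadratic map of the same general shape as p(y) in (8), using least squares; the residual plays the role of ɛ:

```python
import numpy as np

rng = np.random.default_rng(0)

# A quadratic map p: R^4 -> R^2 of the same general shape as (8);
# the coefficients here are illustrative, not the paper's.
def p(y):
    y1, y2, y3, y4 = y[..., 0], y[..., 1], y[..., 2], y[..., 3]
    p1 = (y1 + y3) * (1.0 - 0.5 * (y1 + y3) - 2.0 * (y2 + y4))
    p2 = (y2 + y4) * (2.0 * (y1 + y3) - 0.2 * (y2 + y4)) - 2.0 * y2
    return np.stack([p1, p2], axis=-1)

# Fixed random logistic basis phi(y), as in (11); only W is tuned.
n_basis = 200
A = rng.normal(size=(4, n_basis))
b = rng.normal(size=n_basis)
phi = lambda y: 1.0 / (1.0 + np.exp(-(y @ A + b)))

# Least-squares fit of W on samples from a compact set S = [-1, 1]^4;
# per sample this realizes p(y) ~ W^T phi(y) as in (9).
Y = rng.uniform(-1.0, 1.0, size=(2000, 4))
W, *_ = np.linalg.lstsq(phi(Y), p(Y), rcond=None)

# The residual epsilon stays small on fresh test points (relative RMS).
Yt = rng.uniform(-1.0, 1.0, size=(500, 4))
eps = p(Yt) - phi(Yt) @ W
rel = np.sqrt((eps**2).mean()) / np.sqrt((p(Yt)**2).mean())
print(rel)
```

Only the output weights W are tuned while the basis stays fixed; this is what makes the tuning laws of Section 3 linear in the unknown parameters.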
Then, an estimate of function p(y) is given by
(13) p^(y) = W^Tϕ(y)
with W^ as the current actual values of the one-layer functional-link neural network (FLNN) controller weights as provided by the tuning algorithm to be specified. It is necessary to show how to tune the ANN weights W^ on-line so as to guarantee stable tracking. The tuning algorithm should modify the actual weights W^ so that they become close to the ideal weight W, which is unknown. We choose a general approximation-based controller, derived by setting
(14) τ(t) = -W^Tϕ(y) - Kve,
where Kve is an outer tracking-loop term with constant gain matrix Kv > 0. Using this controller, system (6) is rewritten in the following form:
(15) de/dt = -De - Kve + W~Tϕ(y) + ɛ,
where W~ = W - W^ is the weight estimation error.
3. FLNN Weight Tuning Algorithms for Model (2)
In this section, we give some FLNN weight tuning algorithms that guarantee the tracking stability of system (15). It is required to demonstrate that the tracking error e(t) is suitably small and that the FLNN weights W^ remain bounded, so that the control τ(t) is also bounded.
3.1. Ideal Case
In this subsection, we detail the behavior in the idealized case, where the net functional reconstruction error ɛ is zero. The following theorem derives the ANN control τ(t) that asymptotically stabilizes the predator-prey model about its desired trajectory xd(t).
Theorem 1.
Suppose that assumption (A1) holds and that ɛY in (10) is equal to zero. Let the unknown function p(y) be given by (9) and let the ANN weight tuning be provided by
(16) dW^/dt = Fϕ(y)eT,
where F = FT > 0 is a constant design parameter matrix. Then the tracking error e(t) goes to zero as t → ∞ and the weight estimates W^ are bounded.
Proof.
Under the ideal case, the error system is
(17) de/dt = -De - Kve + W~Tϕ(y).
Select the Liapunov function candidate
(18) V(t) = (1/2)eTe + (1/2)tr{W~TF-1W~}.
Calculating the derivative of V(t) along the error system (17), it follows that
(19) dV/dt = eT de/dt + tr{W~T F-1 dW~/dt} = -eT(D+Kv)e + tr{W~T(F-1 dW~/dt + ϕ(y)eT)}.
Since W~ = W - W^ and W is constant, (16) yields
(20) dW~/dt = -Fϕ(y)eT.
From this and (19), it follows that
(21) dV/dt = -eT(D+Kv)e.
Since V(t)>0 and dV/dt≤0, this shows stability in the sense of Liapunov so that e(t) and W~ (and hence W^) are bounded. Thus
(22) -∫0∞ dV < ∞.
Now
(23) d2V/dt2 = -2eT(D+Kv) de/dt
and the right-hand side of (17) verifies the boundedness of de/dt and hence of d2V/dt2, and therefore the uniform continuity of dV/dt. This allows us to invoke Barbalat’s Lemma in connection with (23) to conclude that dV/dt goes to zero as t → ∞ and hence that e(t) vanishes as t → ∞. The proof is complete.
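The decisive step in this proof, the exact cancellation of the trace term when the tuning law is substituted into (19), can be verified numerically. The following sketch (random illustrative values; the cancellation holds for any D, here taken as diag(0, 2) for concreteness) evaluates dV/dt from the error dynamics (17) and tuning law (20) and compares it with -eT(D+Kv)e:

```python
import numpy as np

rng = np.random.default_rng(1)

# Random instances of the quantities appearing in the proof
# (illustrative values, not fitted to any data).
e   = rng.normal(size=(2, 1))                 # tracking error
Wt  = rng.normal(size=(2, 2))                 # weight error W~ = W - W^
phi = rng.normal(size=(2, 1))                 # basis vector phi(y)
Kv  = np.array([[50.0, 30.0], [30.0, 50.0]])  # gain matrix, Kv = Kv^T > 0
D   = np.diag([0.0, 2.0])
F   = np.array([[0.3, 0.1], [0.1, 0.3]])      # F = F^T > 0

# Error dynamics (17) and the tuning law in the form (20).
de  = -D @ e - Kv @ e + Wt.T @ phi
dWt = -F @ phi @ e.T

# dV/dt computed directly from the Liapunov function (18) ...
dV = (e.T @ de).item() + np.trace(Wt.T @ np.linalg.inv(F) @ dWt)

# ... agrees with -e^T (D + Kv) e: the trace term cancels, giving (21).
rhs = (-e.T @ (D + Kv) @ e).item()
print(dV, rhs)
```

The identity rests on tr{W~Tϕ(y)eT} = eTW~Tϕ(y), so the two printed values agree up to floating-point rounding.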
For model (2), we have the following Theorem 2, which is a direct consequence of Theorem 1.
Theorem 2.
Suppose that assumption (A1) holds and ɛY in (10) is equal to zero. Further, let τ(t) = -W^Tϕ(y) - Kve, where W^ satisfies condition (16). Then, the error e(t) = x - xd → 0 as t → ∞.
Remark 3.
From Theorem 2, we note that the solution of model (2) goes to the factual data xd. So, model (2) can better simulate the dynamical behaviors of predator and prey.
3.2. The Nonideal Case
It has just been seen that there is no ANN functional approximation error under the ideal case. In this subsection, it will be seen that if the ANN approximation errors are not zero but bounded, then the tracking errors do not vanish but are bounded by small enough values to guarantee good tracking performance.
Theorem 4.
Suppose that assumption (A1) holds and that ɛY in (10) is a constant. Let the unknown function p(y) be given by (9) and let the NN weight tuning be provided by
(24) dW^/dt = Fϕ(y)eT,
where F = FT > 0 is a constant design parameter matrix. Then the tracking error e(t) is uniformly ultimately bounded and the weight estimates W^ are bounded. Moreover, e(t) may be kept as small as desired by increasing the gain Kv.
Proof.
Let the NN approximation property (9) hold for the function p(y) given in (8) with a given accuracy ɛY for all y in the compact set Sy={y:∥y∥<by} with by>XB.
Select the Liapunov function candidate
(25) V(t) = (1/2)eTe + (1/2)tr{W~TF-1W~}.
Calculating the derivative of V(t) along the error system (15), it follows that
(26) dV/dt = eT de/dt + tr{W~T F-1 dW~/dt} = -eT(D+Kv)e + tr{W~T(F-1 dW~/dt + ϕ(y)eT)} + eTɛ.
Since W~ = W - W^ and W is constant, (24) yields
(27) dW~/dt = -Fϕ(y)eT.
From this and (26), it follows that
(28) dV/dt ≤ -λmin∥e∥2 + ɛY∥e∥,
where λmin is the minimum eigenvalue of the symmetric part of the matrix (D+Kv). Since ɛY is constant, dV/dt ≤ 0 as long as
(29) ∥e∥ > ɛY/λmin =: L.
Let Sa = {e : e ∈ Sy, V(e) ≤ Ca}, where Ca is the maximum positive constant such that Sa ⊂ Sy, and let Sb = {e : e ∈ Sy, V(e) ≤ Cb, dV/dt > 0}, where Cb is the maximum positive constant such that Sb ⊂ Sy. It is obvious that Sb ⊂ Sa.
Suppose that the initial tracking error satisfies e(0) ∈ Sa. If e(0) ∈ Sa∖Sb, then dV/dt < 0 by the definitions of Sa and Sb. Therefore, V gradually decreases until e(t) enters Sb; inside Sb, V may increase again, by the definition of Sb. It follows that e(t) converges asymptotically to the boundary of Sb. Similarly, if e(0) ∈ Sb, we obtain the same result.
Now, let S = {e : ∥e∥ < L}. Then we choose an appropriate Kv and ɛ* < ɛY such that L < ρ for all ɛ < ɛ*, where ρ is the diameter of the set Sy. From this, we have S ⊂ Sy and Sb ⊂ S. Indeed, if there were a point e* ∈ Sb with e* ∉ S, then ∥e*∥ ≥ L, so dV/dt(e*) ≤ 0 by (29), which contradicts the definition of the set Sb.
From the above discussion, we get that if e(0) ∈ Sa, then e(t) converges asymptotically to the boundary of Sb. On the other hand, since Sb ⊂ S, it follows that ∥e∥ < L as t → ∞. Therefore, the tracking error e(t) is uniformly ultimately bounded, and the bound L may be kept as small as desired by increasing the gain Kv. This completes the proof.
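The mechanism behind this proof can be seen in a scalar caricature of (15) (a toy model of ours, not the paper's system): with de/dt = -kv·e + ɛ(t) and |ɛ(t)| ≤ ɛY, the error is driven into the ball of radius L = ɛY/kv from (29) and stays there:

```python
import math

# Scalar caricature of the nonideal error dynamics (15):
#   de/dt = -kv*e + eps(t),  |eps(t)| <= eps_Y.
# All values below are illustrative choices.
kv, eps_Y = 5.0, 0.5
L = eps_Y / kv                 # predicted ultimate bound, as in (29)

e, dt, t = 1.0, 1e-3, 0.0      # start well outside the bound
worst_late = 0.0               # largest |e| observed after the transient
while t < 20.0:
    eps = eps_Y * math.sin(t)  # any disturbance respecting the bound
    e += dt * (-kv * e + eps)  # forward-Euler step
    t += dt
    if t > 3.0:                # the e^(-kv*t) transient is gone by t = 3
        worst_late = max(worst_late, abs(e))

print(L, worst_late)           # worst_late settles just below L
```

Increasing kv (the analogue of enlarging Kv) shrinks L directly, which is exactly the last claim of Theorem 4.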
From Theorem 4, for model (2), we have the following Theorem 5.
Theorem 5.
Suppose that assumption (A1) holds and that ɛY in (10) is a constant. Further, let τ(t) = -W^Tϕ(y) - Kve, where W^ satisfies condition (24). Then, the tracking error e(t) = x - xd is uniformly ultimately bounded and the weight estimates W^ are bounded. Moreover, e(t) may be kept as small as desired by increasing the gain Kv.
Remark 6.
From Theorem 5, we get that the solution of model (2) stays close to the factual data xd. So, if the conditions of Theorem 5 hold, then model (2) can better simulate the behaviors of predator and prey.
4. Example and Numerical Simulation
In this paper, we proposed a new modeling method, using machine learning of the ANN theory to amend the traditional Lotka-Volterra model such that the revised model can better simulate or control the behaviors of population.
In order to verify the validity of our results, we consider the following model:
(30) dx1/dt = x1(b1 - 0.5x1 - 2x2) + τ1, dx2/dt = x2(-2 + 2x1 - 0.2x2) + τ2,
where τ=(τ1,τ2) is an unknown function.
If we choose b1=1.5 and τ=0 in model (30), it is clear that model (30) has a saddle (0,0) and a stable node (b1/a11,0)=(3,0), which are shown in Figures 2(a) and 3(a).
The trajectory of the prey x1 of model (30) with (a) b1=1.5 and τ=0; (b) b1=1.5, τ≠0, and xd=(1.5,0.5).
The trajectory of the predator x2 of model (30) with (a) b1=1.5 and τ=0; (b) b1=1.5, τ≠0, and xd=(1.5,0.5).
However, if the factual data of predator-prey is xd(t)=(x1d,x2d)=(1.5,0.5), then model (30) with b1=1.5 cannot simulate the real dynamics of the populations. In this case, according to Theorems 1 and 4, we can choose
(31) F = (0.3, 0.1; 0.1, 0.3), Kv = (50, 30; 30, 50),
and ϕ(y)=(x1-x1d,x2-x2d)T. Then, we note that the factual data xd(t) is asymptotically stable; that is, the tracking error e(t) goes to zero as t → ∞. In other words, if we choose τ(t)=-W^Tϕ(y)-Kve, then model (30) can better simulate (or control) the real behaviors of the populations, as shown in Figures 2(b) and 3(b).
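This first example can be reproduced with a few lines of code. The following sketch (forward-Euler integration; the initial state, zero initial weights, and step size are our own choices, while the model, F, Kv, and the basis are those of (30) and (31)) applies τ = -W^Tϕ(y) - Kve with the tuning law (16) and drives the tracking error down to a small residual:

```python
import numpy as np

# Forward-Euler simulation of model (30) with b1 = 1.5 under the FLNN
# controller (14) and tuning law (16).  F, Kv, and the basis are taken
# from (31); the initial state, zero initial weights, and the step size
# are our own illustrative choices.
b1, a11, a12 = 1.5, 0.5, 2.0
b2, a21, a22 = 2.0, 2.0, 0.2
F  = np.array([[0.3, 0.1], [0.1, 0.3]])
Kv = np.array([[50.0, 30.0], [30.0, 50.0]])
xd = np.array([1.5, 0.5])            # constant factual data

x  = np.array([1.0, 1.0])            # initial densities
W  = np.zeros((2, 2))                # FLNN weights, started at zero
dt = 1e-3

for _ in range(int(20.0 / dt)):      # integrate up to t = 20
    e   = x - xd                     # tracking error (3)
    phi = e                          # basis choice (x1-x1d, x2-x2d)^T
    tau = -W.T @ phi - Kv @ e        # controller (14)
    dx  = np.array([x[0] * (b1 - a11 * x[0] - a12 * x[1]),
                    x[1] * (-b2 + a21 * x[0] - a22 * x[1])]) + tau
    W  += dt * F @ np.outer(phi, e)  # tuning law (16)
    x  += dt * dx

err = np.linalg.norm(x - xd)
print(err)    # small residual tracking error, shrinking as Kv grows
```

Because the basis ϕ(y) = e vanishes at e = 0, the constant part of p(y) cannot be reproduced exactly, so a small residual consistent with the ultimate bound of Theorem 4 remains; it shrinks further as Kv is enlarged.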
Further, if we choose b1=3.0 and τ=0 in model (30), it is easy to demonstrate that model (30) has two saddles (0,0) and (6,0) and one stable focus (4.18,0.45) which are shown in Figures 4(a) and 5(a).
The trajectory of the prey x1 of model (30) with (a) b1=3.0 and τ=0; (b) b1=3.0, τ≠0, and xd=(2.5+0.2sin(t),2.0+0.2cos(t)).
The trajectory of the predator x2 of model (30) with (a) b1=3.0 and τ=0; (b) b1=3.0, τ≠0, and xd=(2.5+0.2sin(t),2.0+0.2cos(t)).
If a factual data of predator-prey is
(32) xd(t) = (x1d, x2d)T = (2.5 + 0.2sin(t), 2.0 + 0.2cos(t))T,
then model (30) with b1=3.0 cannot simulate the real dynamics of the populations. In this case, according to Theorems 1 and 4, we choose
(33) F = (1.5, 0.6; 0.6, 2.0), Kv1 = (300, 5; 10, 360),
and ϕ(y)=(x1-x1d,x2-x2d)T. Then, we note that the factual data xd(t) is asymptotically stable and has the asymptotic phase property. That is, if we choose τ(t)=-W^Tϕ(y)-Kve, then model (30) can better simulate (or control) the real behaviors of predator and prey, as shown in Figures 4(b) and 5(b). Moreover, the tracking error e(t) may be kept as small as desired by increasing the gain Kv, as shown in Figures 6(a) and 6(b).
The trajectory of the error system e(t): (a) e1(t) with Kv1 and Kv2; (b) e2(t) with Kv1 and Kv2, where Kv1 = (300, 5; 10, 360) and Kv2 = (500, 5; 10, 560).
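The second example can be sketched the same way (forward-Euler; the initial state and step size are our own choices, the rest follows (30), (32), and (33) with gain Kv1): the error settles into a small neighborhood of zero and stays there, illustrating the ultimate bound of Theorem 4:

```python
import numpy as np

# Forward-Euler simulation of model (30) with b1 = 3.0, the design
# matrices F and Kv1 from (33), and the time-varying factual data (32).
# Initial state and step size are our own illustrative choices.
b1, a11, a12 = 3.0, 0.5, 2.0
b2, a21, a22 = 2.0, 2.0, 0.2
F  = np.array([[1.5, 0.6], [0.6, 2.0]])
Kv = np.array([[300.0, 5.0], [10.0, 360.0]])   # this is Kv1 of (33)

x, W, dt = np.array([2.0, 1.5]), np.zeros((2, 2)), 1e-3
t, late_err = 0.0, 0.0            # worst ||e|| over the last 5 seconds

for _ in range(int(20.0 / dt)):
    xd  = np.array([2.5 + 0.2 * np.sin(t), 2.0 + 0.2 * np.cos(t)])
    e   = x - xd                  # tracking error (3)
    phi = e                       # basis choice (x1-x1d, x2-x2d)^T
    tau = -W.T @ phi - Kv @ e     # controller (14)
    dx  = np.array([x[0] * (b1 - a11 * x[0] - a12 * x[1]),
                    x[1] * (-b2 + a21 * x[0] - a22 * x[1])]) + tau
    W  += dt * F @ np.outer(phi, e)   # tuning law (24)
    x  += dt * dx
    t  += dt
    if t > 15.0:
        late_err = max(late_err, float(np.linalg.norm(e)))

print(late_err)   # e(t) stays within a small ultimate bound
```

Replacing Kv1 by the larger Kv2 of Figure 6 shrinks the observed bound further, in line with Theorem 4.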
5. Conclusion
As more information on biological populations becomes available, we find that the traditional models cannot simulate the real dynamics of populations; the traditional modeling technique is therefore not perfect. So, in this paper, based on the classical and most important Lotka-Volterra model, we developed a new modeling method, using machine learning from ANN theory to construct a new predator-prey model that simulates the states of predator and prey. From the function approximation property of neural networks and the factual predator-prey data, we proposed a neural network trajectory tracking strategy and obtained a tuning algorithm for the new model. That is, under general assumptions, we proved that the tracking error is uniformly ultimately bounded and that the corresponding ultimate bound can be made as small as desired by modifying the feedback gain matrix. So, the new model can better simulate or control the real behaviors of populations. Finally, numerical examples were presented to show that the proposed method is feasible and efficient.
Acknowledgments
This research has been partially supported by the Natural Science Foundation of Xinjiang (Grant no. 2011211B08), the Scientific Research Programs of Colleges in Xinjiang (Grant no. XJEDU2011S08), the National Natural Science Foundation of China (Grants nos. 11001235, 11271312, and 11261056), the China Postdoctoral Science Foundation (Grants nos. 20110491750 and 2012T50836).
[1] A. J. Lotka, Elements of Physical Biology, Williams and Wilkins, Baltimore, Md, USA, 1925.
[2] V. Volterra, “Variazioni e fluttuazioni del numero di individui in specie animali conviventi,” vol. 2, pp. 31–113, 1926.
[3] B. G. Murray, Academic Press, New York, NY, USA, 1979.
[4] L. S. Chen, Science Press, Beijing, China, 1988.
[5] H. I. Freedman, vol. 57 of Monographs and Textbooks in Pure and Applied Mathematics, Marcel Dekker, New York, NY, USA, 1980.
[6] J. D. Murray, Mathematical Biology, vol. 17 of Interdisciplinary Applied Mathematics, 3rd edition, Springer, New York, NY, USA, 2002.
[7] R. Redheffer, “Lotka-Volterra systems with constant interaction coefficients,” Nonlinear Analysis, vol. 46, no. 8, pp. 1151–1164, 2001.
[8] Y. Saito, “Permanence and global stability for general Lotka-Volterra predator-prey systems with distributed delays,” Nonlinear Analysis, vol. 47, no. 9, pp. 6157–6168, 2001.
[9] Y. Takeuchi, Global Dynamical Properties of Lotka-Volterra Systems, World Scientific Publishing, River Edge, NJ, USA, 1996.
[10] Z. Teng, Z. Li, and H. Jiang, “Permanence criteria in non-autonomous predator-prey Kolmogorov systems and its applications,” Dynamical Systems, vol. 19, pp. 171–194, 2004.
[11] J. Waldvogel, “The period in the Volterra-Lotka predator-prey model,” SIAM Journal on Numerical Analysis, vol. 20, no. 6, pp. 1264–1272, 1983.
[12] K. Gopalsamy, Stability and Oscillations in Delay Differential Equations of Population Dynamics, Kluwer Academic Publishers, Dordrecht, The Netherlands, 1992.
[13] Y. Kuang, Delay Differential Equations with Applications in Population Dynamics, vol. 191 of Mathematics in Science and Engineering, Academic Press, Boston, Mass, USA, 1993.
[14] K. S. Narendra, “Adaptive control using neural networks,” in W. T. Miller, R. S. Sutton, and P. J. Werbos (eds.), Neural Networks for Control, MIT Press, Cambridge, Mass, USA, pp. 115–142, 1991.
[15] K. S. Narendra and A. M. Annaswamy, “A new adaptive law for robust adaptation without persistent excitation,” IEEE Transactions on Automatic Control, vol. 32, no. 2, pp. 134–145, 1987.
[16] K. S. Narendra and K. Parthasarathy, “Identification and control of dynamical systems using neural networks,” IEEE Transactions on Neural Networks, vol. 1, no. 1, pp. 4–27, 1990.
[17] K. S. Narendra and K. Parthasarathy, “Gradient methods for the optimization of dynamical systems containing neural networks,” IEEE Transactions on Neural Networks, vol. 2, no. 2, pp. 252–262, 1991.
[18] S. Jagannathan, “Discrete-time CMAC NN control of feedback linearizable nonlinear systems under a persistence of excitation,” IEEE Transactions on Neural Networks, vol. 10, no. 1, pp. 128–137, 1999.
[19] Y.-G. Leu, W.-Y. Wang, and T.-T. Lee, “Robust adaptive fuzzy-neural controllers for uncertain nonlinear systems,” IEEE Transactions on Robotics and Automation, vol. 15, pp. 805–817, 1999.
[20] J.-X. Xu and Z. Qu, “Robust iterative learning control for a class of nonlinear systems,” Automatica, vol. 34, no. 8, pp. 983–988, 1998.
[21] P. Jiang and R. Unbehauen, “Iterative learning neural network control for nonlinear system trajectory tracking,” Neurocomputing, vol. 48, pp. 141–153, 2002.
[22] T. Hayakawa, W. M. Haddad, and K. Y. Volyanskyy, “Neural network hybrid adaptive control for nonlinear uncertain impulsive dynamical systems,” Nonlinear Analysis: Hybrid Systems, vol. 2, pp. 862–874, 2008.
[23] K. Hornik, “Approximation capabilities of multilayer feedforward networks,” Neural Networks, vol. 4, no. 2, pp. 251–257, 1991.
[24] F. L. Lewis, A. Yesildirak, and S. Jagannathan, Neural Network Control of Robot Manipulators and Nonlinear Systems, Taylor & Francis, 1999.