A Nonlinear Projection Neural Network for Solving Interval Quadratic Programming Problems and Its Stability Analysis

This paper presents a nonlinear projection neural network for solving interval quadratic programs subject to box-set constraints in engineering applications. Based on the saddle point theorem, the equilibrium point of the proposed neural network is proved to be equivalent to the optimal solution of the interval quadratic optimization problem. By employing a Lyapunov function approach, the global exponential stability of the proposed neural network is analyzed. Two illustrative examples are provided to show the feasibility and efficiency of the proposed method.


Introduction
In many engineering applications, including regression analysis, image and signal processing, parameter estimation, filter design, and robust control [1], it is necessary to solve the following quadratic programming problem:

    min f(x) = (1/2) x^T Q x + c^T x,  s.t.  x ∈ Ω,    (1.1)

where Q ∈ R^{n×n}, c ∈ R^n, and Ω is a convex set. When Q is a positive definite matrix, problem (1.1) is called a convex quadratic program. When Q is a positive semidefinite matrix, problem (1.1) is called a degenerate convex quadratic program. In general, the matrix Q is not precisely known and can only be enclosed in an interval, that is, Q̲ ≤ Q ≤ Q̄. Such a quadratic program with interval data is usually called an interval quadratic program.
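For a concrete sense of problem (1.1), the following sketch solves a small box-constrained convex QP by projected gradient descent. The matrix Q, vector c, and the bounds are illustrative assumptions, not data from this paper.

```python
import numpy as np

Q = np.array([[4.0, 1.0],
              [1.0, 3.0]])        # positive definite, so (1.1) is a convex QP
c = np.array([-8.0, -6.0])
l = np.array([0.0, 0.0])          # box constraint set Omega = {x : l <= x <= h}
h = np.array([5.0, 5.0])

def projected_gradient(Q, c, l, h, x0, step=0.1, iters=500):
    """Iterate x <- P_Omega(x - step * grad f(x)), with grad f(x) = Qx + c."""
    x = x0.astype(float)
    for _ in range(iters):
        x = np.clip(x - step * (Q @ x + c), l, h)   # gradient step, then box projection
    return x

x_star = projected_gradient(Q, c, l, h, np.zeros(2))
# for this data the unconstrained minimizer already lies inside the box,
# so x_star solves Qx + c = 0
```

For the step size, any value below 2 divided by the largest eigenvalue of Q makes the iteration a contraction on this convex problem.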
In recent years, several projection neural network approaches have been proposed for solving problem (1.1); see, for example, [2–15] and the references therein. In [2], Kennedy and Chua presented a primal network for solving the convex quadratic program. This network contains a finite penalty parameter, so it converges to an approximate solution only. To remove the penalty parameter, Xia [3, 4] proposed several primal projection neural networks for solving the convex quadratic program and its dual, and analyzed the global asymptotic stability of the proposed neural networks when the constraint set Ω is a box set. In order to solve the interval quadratic program, Ding and Huang [14] presented a new class of interval projection neural networks and proved that the equilibrium point of these neural networks is equivalent to the KT point of a class of interval quadratic programs. Furthermore, they gave some sufficient conditions ensuring the existence and global exponential stability of the unique equilibrium point of interval projection neural networks. To the best of the authors' knowledge, the work in [14] was the first to solve the interval quadratic program by a projection neural network. However, the interval quadratic program discussed in [14] is a quadratic program without constraints, which limits its use in practice, since constrained quadratic programs are far more common.
Motivated by the above discussion, in the present paper a new projection neural network for solving the interval quadratic programming problem with box-set constraints is presented. Based on the saddle point theorem, the equilibrium point of the proposed neural network is proved to be equivalent to the KT point of the interval quadratic program. By using the fixed point theorem, the existence and uniqueness of an equilibrium point of the proposed neural network are analyzed. By constructing a suitable Lyapunov function, a sufficient condition ensuring the existence and global exponential stability of the unique equilibrium point of the interval projection neural network is obtained.
This paper is organized as follows. Section 2 describes the system model and gives some necessary preliminaries. Section 3 gives the proof of the existence of the equilibrium point of the proposed neural network and discusses its global exponential stability. Section 4 provides two numerical examples to demonstrate the validity of the obtained results. Some conclusions are drawn in Section 5.

A Projection Neural Network Model
Consider the following interval quadratic programming problem:

    min f(x) = (1/2) x^T Q x + c^T x,  s.t.  Dx ∈ X,  Q̲ ≤ Q ≤ Q̄,    (2.1)

where X = {x ∈ R^n : l_i ≤ x_i ≤ h_i, i = 1, 2, ..., n} is a box set and D = (d_ij)_{n×n}. Introducing an auxiliary vector η ∈ X with η = Dx, the Lagrangian of (2.1) is

    L(x, η, u) = f(x) + u^T (η − Dx),    (2.2)

where u ∈ R^n is referred to as the Lagrange multiplier and η is the auxiliary variable. Based on the well-known saddle point theorem [1], x* is an optimal solution of (2.1) if and only if there exist u* and η* satisfying

    L(x, η, u*) ≥ L(x*, η*, u*) ≥ L(x*, η*, u),  for all x ∈ R^n, η ∈ X, u ∈ R^n.    (2.3)
By the first inequality in (2.3), taking x = x*, we have (u*)^T (η − η*) ≥ 0 for all η ∈ X. By the projection formulation [16], the above inequality can be equivalently represented as η* = P_X(η* − u*), where P_X(u) = (P_X(u_1), P_X(u_2), ..., P_X(u_n))^T is a projection function and, for i = 1, 2, ..., n,

    P_X(u_i) = { l_i,  u_i < l_i;   u_i,  l_i ≤ u_i ≤ h_i;   h_i,  u_i > h_i }.    (2.4)
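The componentwise projection in (2.4) is simply a clip of each coordinate onto the box. A minimal sketch, with assumed bounds l and h:

```python
import numpy as np

def P_X(u, l, h):
    """Componentwise box projection (2.4): clip each u_i into [l_i, h_i]."""
    return np.minimum(np.maximum(u, l), h)

l = np.array([-1.0, 0.0, 2.0])    # assumed lower bounds of the box X
h = np.array([1.0, 3.0, 4.0])     # assumed upper bounds
u = np.array([-2.5, 1.5, 5.0])
projected = P_X(u, l, h)          # each coordinate ends up inside [l_i, h_i]
```

Since the projection acts coordinatewise, it is nonexpansive and idempotent, properties used repeatedly in the stability analysis below.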
On the other hand, taking η = η* in the first inequality of (2.3) gives f(x*) − (u*)^T Dx* ≤ f(x) − (u*)^T Dx, for all x ∈ R^n. This implies that

    Qx* + c − Du* = 0.    (2.7)
Substituting u = D^{-1}(Qx + c), which follows from (2.7), into the equation Dx = P_X(Dx − u), we have

    Dx = P_X(Mx − C),    (2.8)

where

    M = D − D^{-1}Q,  C = D^{-1}c.    (2.9)

By the above discussion, we can obtain the following proposition.
Proposition 2.1. Let x* be a solution of the projection equation

    Dx = P_X(Mx − C);    (2.10)

then x* is an optimal solution of the problem (2.1).

In the following, we propose a neural network, which is said to be the interval projection neural network, for solving (2.1) and (2.10), whose dynamical equation is defined as follows:

    dx(t)/dt = P_X(Mx(t) − C) − Dx(t),    (2.11)

with the initial condition

    x(t_0) = x_0 ∈ R^n.    (2.12)
Figure 1 shows the architecture of the neural network (2.11).
    ‖x(t) − x*‖ ≤ c_0 ‖x_0 − x*‖ exp(−β(t − t_0)),  t ≥ t_0,

where β > 0 is a constant independent of the initial value x_0 and c_0 > 0 is a constant dependent on the initial value x_0. Here ‖·‖ denotes the 1-norm on R^n, that is,

    ‖x‖ = |x_1| + |x_2| + ··· + |x_n|.    (2.15)
Lemma 2.5 (see [16]). For any u, v ∈ R^n, ‖P_Ω(u) − P_Ω(v)‖ ≤ ‖u − v‖, where P_Ω(u) is a projection function on Ω, given by P_Ω(u) = arg min_{y∈Ω} ‖u − y‖.
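A forward-Euler discretization gives a simple way to simulate dynamics of the form dx/dt = P_X(Mx − C) − Dx. The instance below (Q, c, D, and the box X) is an illustrative assumption, not one of the paper's examples.

```python
import numpy as np

Q = np.array([[2.0, 0.0],
              [0.0, 2.0]])
c = np.array([-2.0, -4.0])
D = np.diag([2.0, 2.0])       # diagonal entries chosen so that q_ii < d_i^2
l = np.array([0.0, 0.0])
h = np.array([3.0, 3.0])

Dinv = np.linalg.inv(D)
M = D - Dinv @ Q              # M = D - D^{-1}Q as in (2.9)
C = Dinv @ c                  # C = D^{-1}c

def simulate(x0, dt=0.01, steps=5000):
    """Forward-Euler integration of dx/dt = P_X(Mx - C) - Dx."""
    x = x0.astype(float)
    for _ in range(steps):
        x = x + dt * (np.clip(M @ x - C, l, h) - D @ x)
    return x

x_eq = simulate(np.array([2.5, -0.5]))
# at convergence, x_eq satisfies the projection equation Dx = P_X(Mx - C)
```

The state settles at a point where the Euler update vanishes, i.e. a solution of the projection equation (2.10); for this data one coordinate converges to an interior value and the other is pinned by the box.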

Stability Analysis
In order to obtain the results in this paper, we make the following assumption for the neural network (2.11):

(H1) q̄_ii < d_i^2, i = 1, 2, ..., n.

Theorem 3.1. If the assumption (H1) holds, then the neural network (2.11) has a unique equilibrium point.

Proof. Define the mapping T(x) = D^{-1} P_X(Mx − C). By Definition 2.2, it is obvious that the neural network (2.11) has a unique equilibrium point if and only if T has a unique fixed point in R^n. In the following, by using the fixed point theorem, we prove that T has a unique fixed point in R^n. For any x, y ∈ R^n, by Lemma 2.5 and the assumption (H1), we can obtain that

    ‖T(x) − T(y)‖ ≤ (max_{1≤i≤n} ℓ_i) ‖x − y‖,    (3.2)

with ℓ_i < 1, i = 1, 2, ..., n. This implies that 0 < max_{1≤i≤n} ℓ_i < 1. Equation (3.2) shows that T is a contractive mapping, and hence T has a unique fixed point. This completes the proof.

Proposition 3.2. If the assumption (H1) holds, then for any x_0 ∈ R^n, there exists a solution with the initial value x(t_0) = x_0 for the neural network (2.11).
Proof. Let F = D(T − I), where I is the identity mapping; then the right-hand side of (2.11) can be written as F(x), and for any x, y ∈ R^n,

    ‖F(x) − F(y)‖ ≤ L ‖x − y‖    (3.3)

for some constant L > 0.
Equation (3.3) means that the mapping F is globally Lipschitz. Hence, for any x_0 ∈ R^n, there exists a solution with the initial value x(t_0) = x_0 for the neural network (2.11). This completes the proof. Proposition 3.2 establishes the existence of the solution of the neural network (2.11).
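The contraction argument behind Theorem 3.1 can be illustrated by Banach iteration of a map whose fixed points satisfy Dx = P_X(Mx − C); here T(x) = D^{-1}P_X(Mx − C) is one natural choice, and all numerical data are assumptions rather than the paper's examples.

```python
import numpy as np

Q = np.array([[2.0, 0.0],
              [0.0, 2.0]])
c = np.array([-2.0, -4.0])
D = np.diag([2.0, 2.0])
l = np.array([0.0, 0.0])
h = np.array([3.0, 3.0])

Dinv = np.linalg.inv(D)
M = D - Dinv @ Q
C = Dinv @ c

def T(x):
    """A map whose fixed points satisfy the projection equation Dx = P_X(Mx - C)."""
    return Dinv @ np.clip(M @ x - C, l, h)

# Banach iteration: if T is contractive (cf. estimate (3.2)), the iterates
# converge to the unique fixed point from any starting point.
x = np.array([10.0, -10.0])
for _ in range(200):
    x = T(x)
```

For this data the projection is nonexpansive and D^{-1} shrinks by a factor of one half, so each iteration at least halves the distance to the fixed point, exactly the mechanism exploited in the proof of Theorem 3.1.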

Theorem 3.3. If the assumption (H1) is satisfied, then the equilibrium point of the neural network (2.11) is globally exponentially stable.
Proof. By Theorem 3.1, the neural network (2.11) has a unique equilibrium point, which we denote by x*.
Consider the Lyapunov function V(t) = ‖x(t) − x*‖. Calculating the derivative of V(t) along the solution x(t) of the neural network (2.11), for t > t_0 we have

    dV(t)/dt = Σ_{i=1}^{n} sgn(x_i(t) − x_i*) dx_i(t)/dt.    (3.4)
Noting that dx/dt = P_X(Mx − C) − Dx and Dx* = P_X(Mx* − C), by Lemma 2.5, we have

    ‖P_X(Mx − C) − P_X(Mx* − C)‖ ≤ ‖M(x − x*)‖.    (3.5)
Mathematical Problems in Engineering 9
Hence,

    dV(t)/dt ≤ Σ_{i=1}^{n} μ_i |x_i(t) − x_i*|,    (3.6)

where the coefficients μ_i are determined by Q, D, and the projection estimate (3.5). By the assumption (H1), μ_i < 0; hence max_{1≤i≤n} μ_i < 0. Let ε* = min_{1≤i≤n} |μ_i|; then ε* > 0, and (3.6) can be rewritten as dV(t)/dt ≤ −ε* ‖x(t) − x*‖. It follows easily that ‖x(t) − x*‖ ≤ ‖x_0 − x*‖ exp(−ε*(t − t_0)) for all t > t_0. This shows that the equilibrium point x* of the neural network (2.11) is globally exponentially stable. This completes the proof.

Numerical Examples

Example 4.1. Consider a two-dimensional interval quadratic program of the form (2.1). The optimal solution of this quadratic program is (1, 2)^T under Q = Q̲ or Q = Q̄. It is easy to check that

    3.0 < q_11 < 3.1,    (4.1)

and that the assumption (H1) holds. By Theorems 3.1 and 3.3, the neural network (2.11) has a unique equilibrium point which is globally exponentially stable, and this unique equilibrium point (1, 2)^T is the optimal solution of the quadratic programming problem.
In the case Q = Q̲, Figure 2 reveals that the projection neural network (2.11) with random initial value (2.5, −0.5)^T converges to the unique equilibrium point (1, 2)^T, which is globally exponentially stable. In the case Q = Q̄, Figure 3 reveals that the projection neural network (2.11) with random initial value (−2.5, 3)^T converges to the same unique equilibrium point (1, 2)^T. These observations are in accordance with the conclusions of Theorems 3.1 and 3.3.

Example 4.2. Consider a three-dimensional interval quadratic program of the form (2.1). The optimal solution of this quadratic program is (1, 1, 1.5)^T under Q = Q̲ or Q = Q̄. It is easy to check that

    0.8 < q_11 < 0.9,  q_11 < d_1^2 = 1,  3.0 < q_22 < 3.1,  q_22 < d_2^2 = 4,  3.5 < q_33 < 3.6,  q_33 < d_3^2 = 4.    (4.2)
The assumption (H1) holds. By Theorems 3.1 and 3.3, the neural network (2.11) has a unique equilibrium point which is globally exponentially stable, and this unique equilibrium point (1, 1, 1.5)^T is the optimal solution of the quadratic programming problem.
In the case Q = Q̲, Figure 4 reveals that the projection neural network (2.11) with random initial value (−0.5, 0.6, −0.8)^T converges to the unique equilibrium point (1, 1, 1.5)^T, which is globally exponentially stable. In the case Q = Q̄, Figure 5 reveals that the projection neural network (2.11) with random initial value (0.8, −0.6, 0.3)^T converges to the same unique equilibrium point (1, 1, 1.5)^T. These observations are in accordance with the conclusions of Theorems 3.1 and 3.3.
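A numerical check of the exponential estimate in Definition 2.4 can be run on any instance of (2.11) by tracking the 1-norm error along an Euler trajectory; the data below are illustrative assumptions, not the data of Examples 4.1 or 4.2.

```python
import numpy as np

Q = np.array([[2.0, 0.0],
              [0.0, 2.0]])
c = np.array([-2.0, -4.0])
D = np.diag([2.0, 2.0])
l = np.array([0.0, 0.0])
h = np.array([3.0, 3.0])

Dinv = np.linalg.inv(D)
M = D - Dinv @ Q
C = Dinv @ c
x_star = np.array([1.0, 1.5])   # equilibrium of Dx = P_X(Mx - C) for this data

def error_trajectory(x0, dt=0.001, steps=20000):
    """Track the 1-norm error ||x(t) - x*|| (cf. (2.15)) along an Euler trajectory."""
    x = x0.astype(float)
    errs = []
    for _ in range(steps):
        errs.append(np.abs(x - x_star).sum())
        x = x + dt * (np.clip(M @ x - C, l, h) - D @ x)
    return np.array(errs)

errs = error_trajectory(np.array([2.5, -0.5]))
```

Plotting the logarithm of `errs` against time would show an approximately straight line, the signature of the exponential bound ‖x(t) − x*‖ ≤ c_0‖x_0 − x*‖exp(−β(t − t_0)).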

Conclusion
In this paper, we have developed a new projection neural network for solving interval quadratic programs. The equilibrium point of the proposed neural network is equivalent to the optimal solution of the interval quadratic program. A condition is derived which ensures the existence, uniqueness, and global exponential stability of the equilibrium point. The obtained results are valuable in both theory and practice for solving interval quadratic programs in engineering.

Figure 1: Architecture of the proposed neural network (2.11).

Figure 2: Convergence of the state trajectory of the neural network with random initial value (2.5, −0.5)^T; Q = Q̲ in this example.

Figure 3: Convergence of the state trajectory of the neural network with random initial value (−2.5, 3)^T; Q = Q̄ in this example.

Figure 4: Convergence of the state trajectory of the neural network with random initial value (−0.5, 0.6, −0.8)^T; Q = Q̲ in this example.

Figure 5: Convergence of the state trajectory of the neural network with random initial value (0.8, −0.6, 0.3)^T; Q = Q̄ in this example.
In [5, 6], Xia et al. presented a recurrent projection neural network for solving the convex quadratic program and the related piecewise-linear equation, and gave some conditions for exponential convergence. In [7, 8], Yang and Cao presented a delayed projection neural network for solving problem (1.1), and analyzed the global asymptotic stability and exponential stability of the proposed neural networks when the constraint set Ω is an unbounded box set.
In order to solve the degenerate convex quadratic program, Tao et al. [9] and Xue and Bian [10, 11] proposed two projection neural networks and proved that the equilibrium point of the proposed neural networks is equivalent to the KT point of the quadratic programming problem. In particular, in [10], the proposed neural network was shown to have complete convergence and finite-time convergence, and the nonsingular part of the output trajectory with respect to Q has an exponentially convergent rate. In [12, 13], Hu and Wang designed a general projection neural network for solving monotone linear variational inequalities and extended linear-quadratic programming problems, and proved that the proposed network is exponentially convergent when the constraint set Ω is a polyhedral set.

Thus, x* is an optimal solution of (2.1) if and only if there exist u* and η* such that (x*, u*, η*) satisfies (2.5) and (2.6). From (2.6), it follows that Dx = P_X(Dx − u). Hence, x* is an optimal solution of (2.1) if and only if there exists u* such that (x*, u*) satisfies

    Qx* + c − Du* = 0,  Dx* = P_X(Dx* − u*).
Definition 2.2. The point x* is said to be an equilibrium point of the interval projection neural network (2.11) if x* satisfies 0 = P_X(Dx* − D^{-1}Qx* − D^{-1}c) − Dx*, that is, Dx* = P_X(Mx* − C), where M = (m_ij)_{n×n} = D − D^{-1}Q, C = D^{-1}c, and D = (d_ij)_{n×n}.

Proposition 2.3. The point x* is an equilibrium point of the interval projection neural network (2.11) if and only if it is an optimal solution of the interval quadratic program (2.1).

Definition 2.4. The equilibrium point x* of the neural network (2.11) is said to be globally exponentially stable if the trajectory x(t) of the neural network (2.11) with the initial value x_0 satisfies ‖x(t) − x*‖ ≤ c_0 ‖x_0 − x*‖ exp(−β(t − t_0)) for all t ≥ t_0.