A One-Layer Recurrent Neural Network for Solving Pseudoconvex Optimization with Box Set Constraints

A one-layer recurrent neural network is developed to solve pseudoconvex optimization with box constraints. Compared with the existing neural networks for solving pseudoconvex optimization, the proposed neural network has a wider domain for implementation. Based on Lyapunov stability theory, the proposed neural network is proved to be stable in the sense of Lyapunov. By applying Clarke's nonsmooth analysis technique, finite-time convergence of the state to the feasible region defined by the constraint conditions is also established. Illustrative examples further confirm the correctness of the theoretical results.


Introduction
It is well known that nonlinear optimization problems arise in a broad variety of scientific and engineering applications, including optimal control, structure design, image and signal processing, and robot control. Most nonlinear programming problems have a time-varying nature and have to be solved in real time. One promising approach to solving nonlinear programming problems in real time is to employ recurrent neural networks based on circuit implementation.
It should be noticed that many nonlinear programming problems can be formulated as nonconvex optimization problems, and among nonconvex programs, pseudoconvex programs, as a special case, are more prevalent than other nonconvex programs. The pseudoconvex optimization problem has many applications in practice, such as fractional programming, computer vision, and production planning. Very recently, Liu et al. presented a one-layer recurrent neural network for solving pseudoconvex optimization subject to linear equality constraints in [1]; Hu and Wang proposed a recurrent neural network for solving pseudoconvex variational inequalities in [10]; Qin et al. proposed a new one-layer recurrent neural network for nonsmooth pseudoconvex optimization in [21].
Motivated by the works above, our objective in this paper is to develop a one-layer recurrent neural network for solving the pseudoconvex optimization problem subject to a box set constraint. The proposed network model is an improvement of the neural network model presented in [10]. To the best of our knowledge, few works treat the pseudoconvex optimization problem with a box set constraint.
Given a set $C \subset \mathbb{R}^n$, $\overline{\mathrm{co}}[C]$ denotes the closure of the convex hull of $C$.
Let $f : \mathbb{R}^n \to \mathbb{R}$ be a locally Lipschitz continuous function. Clarke's generalized gradient of $f$ at $x$ is defined by
\[
\partial f(x) = \mathrm{co}\Big\{ \lim_{k \to \infty} \nabla f(x_k) : x_k \to x,\ x_k \notin \Omega_f \cup M \Big\},
\]
where $\Omega_f \subset \mathbb{R}^n$ is the set of Lebesgue measure zero on which $\nabla f$ does not exist, and $M \subset \mathbb{R}^n$ is an arbitrary set with measure zero. The set-valued map $F(\cdot)$ is said to have a closed (convex, compact) image if for each $x \in X$, $F(x)$ is closed (convex, compact).
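As a short illustration (not taken from the paper), consider $f(x) = |x|$ on $\mathbb{R}$: the gradient exists everywhere except at the origin, and sequences $x_k \to 0$ from either side yield gradients $\pm 1$, so the definition gives
\[
\partial f(x) =
\begin{cases}
\{1\}, & x > 0,\\
[-1, 1], & x = 0,\\
\{-1\}, & x < 0.
\end{cases}
\]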
The remainder of this paper is organized as follows. In Section 2, the related preliminaries are given, and the problem formulation and the neural network model are described. In Section 3, the stability in the sense of Lyapunov and the finite-time convergence of the proposed neural network are proved. In Section 4, illustrative examples are given to show the effectiveness and the performance of the proposed neural network. Some conclusions are drawn in Section 5.

Model Description and Preliminaries
In this section, a one-layer recurrent neural network model is developed to solve pseudoconvex optimization with box constraints. Some definitions and properties concerning set-valued maps and nonsmooth analysis are also introduced.
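For reference, the box-constrained problem studied below, referred to in the text as problem (4), can be stated in the following standard form (a reconstruction consistent with the set $\Omega = \{x : l \le x \le h\}$ used in the examples):
\[
\begin{aligned}
\min\quad & f(x)\\
\text{s.t.}\quad & x \in \Omega = \{x \in \mathbb{R}^n : l \le x \le h\},
\end{aligned}
\tag{4}
\]
where $f : \mathbb{R}^n \to \mathbb{R}$ is the objective function and $l, h \in \mathbb{R}^n$ (with $l < h$ componentwise) are the lower and upper bounds of the box.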
Throughout this paper, the following assumptions on the optimization problem (4) are made.
(A1) The objective function $f(x)$ of problem (4) is pseudoconvex, regular, and locally Lipschitz continuous.
where $c > 0$ is a constant.
In the following, we develop a one-layer recurrent neural network for solving problem (4). The dynamic equation of the proposed neural network model is described by the differential inclusion system
\[
\dot{x}(t) \in -\partial f(x(t)) - \sigma g_{[l,h]}(x(t)), \tag{9}
\]
where $\sigma$ is a nonnegative constant and $g_{[l,h]}$ is a discontinuous function whose components are defined as
\[
g(x_i) =
\begin{cases}
1, & x_i > h_i,\\
[0,1], & x_i = h_i,\\
0, & l_i < x_i < h_i,\\
[-1,0], & x_i = l_i,\\
-1, & x_i < l_i.
\end{cases}
\]
The architecture of the proposed neural network model (9) is depicted in Figure 1.
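To make the dynamics concrete, below is a minimal Python sketch (not the paper's implementation) that integrates one single-valued selection of the inclusion (9) by forward Euler; the helper names g_box and simulate, the step size, and the choice of the selection 0 on the box boundary are illustrative assumptions.

import numpy as np

def g_box(x, l, h):
    # Componentwise discontinuous term g_[l,h](x): +1 above the box,
    # -1 below it, 0 strictly inside; on the boundary, the set-valued
    # definition is replaced by the single-valued selection 0.
    return (x > h).astype(float) - (x < l).astype(float)

def simulate(subgrad_f, x0, l, h, sigma=11.0, dt=1e-3, steps=20000):
    # Forward-Euler integration of one selection of the inclusion
    #   dx/dt in -∂f(x) - sigma * g_[l,h](x).
    x = np.asarray(x0, dtype=float).copy()
    traj = [x.copy()]
    for _ in range(steps):
        x = x + dt * (-subgrad_f(x) - sigma * g_box(x, l, h))
        traj.append(x.copy())
    return np.array(traj)

Any locally Lipschitz objective with a computable (sub)gradient can be passed as subgrad_f.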
Definition 6. $x^\ast \in \mathbb{R}^n$ is said to be an equilibrium point of the differential inclusion system (9) if $0 \in \partial f(x^\ast) + \sigma g_{[l,h]}(x^\ast)$; that is, there exist $\gamma^\ast \in \partial f(x^\ast)$ and $\eta^\ast \in g_{[l,h]}(x^\ast)$ such that $\gamma^\ast + \sigma \eta^\ast = 0$, where $\eta^\ast = -\sigma^{-1}\gamma^\ast$.
Next, we prove that when $t \ge t^\ast$, the trajectory stays in $\Omega$ after reaching $\Omega$. If this is not true, then there exists $t_1 > t^\ast$ such that the trajectory leaves $\Omega$ at $t_1$, and there exists $t_2 > t_1$ such that $x(t) \in \mathbb{R}^n \setminus \Omega$ for $t \in (t_1, t_2)$.
Consider the Lyapunov function $V$ defined in (33). Obviously, from (33), $V(x) \ge (1/2)\|x - x^\ast\|^2$, and evaluating the derivative of $V$ along the trajectories of system (9) gives $\dot{V}(x(t)) \le 0$. Equation (37) thus shows that the neural network system (9) is stable in the sense of Lyapunov. The proof is complete.
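Although the explicit form of (33) is not reproduced above, a candidate commonly used for this class of networks, and consistent with the quadratic lower bound $V(x) \ge (1/2)\|x - x^\ast\|^2$, is (an assumption, not necessarily the paper's choice)
\[
V(x) = f(x) - f(x^\ast) + \tfrac{1}{2}\|x - x^\ast\|^2,
\]
which satisfies the stated bound whenever $f(x) \ge f(x^\ast)$, as holds on $\Omega$ when $x^\ast$ is a minimizer of problem (4).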

Numerical Examples
In this section, two examples will be given to illustrate the effectiveness of the proposed approach for solving the pseudoconvex optimization problem.
Example 1. Consider the quadratic fractional optimization problem
\[
\min\; f(x) = \frac{x^{T} Q x + a^{T} x + a_0}{c^{T} x + c_0}, \qquad \text{s.t.}\; l \le x \le h, \tag{38}
\]
where $Q$ is an $n \times n$ matrix, $a, c \in \mathbb{R}^n$, and $a_0, c_0 \in \mathbb{R}$. It is easily verified that $Q$ is symmetric and positive definite, and consequently $f$ is pseudoconvex on $\Omega = \{x : l \le x \le h\}$, on which $c^{T} x + c_0 > 0$. The proposed neural network (9) is capable of solving this problem; neural network (9) associated with (38) is obtained by substituting the problem data into the state equation. The design parameter $\sigma$ is then estimated as $\sigma > 10.32$; let $\sigma = 11$ in the simulation.
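Because the numerical data of this example ($Q$, $a$, $c$, $a_0$, $c_0$, and the box bounds) is not reproduced above, the following is a hedged Python sketch of Example 1 with placeholder data chosen only to satisfy the stated structure (symmetric positive definite $Q$, positive denominator on the box); it reuses the simulate and g_box helpers sketched in Section 2 and is not the paper's simulation.

import numpy as np

# Hypothetical problem data (placeholders, not the paper's values).
Q = np.array([[2.0, 0.5],
              [0.5, 1.0]])          # symmetric positive definite
a = np.array([1.0, -1.0])
c = np.array([0.5, 0.5])
a0, c0 = 1.0, 4.0
l = np.array([-1.0, -1.0])          # lower bounds of the box
h = np.array([1.0, 1.0])            # upper bounds of the box

def grad_f(x):
    # Gradient of f(x) = (x'Qx + a'x + a0) / (c'x + c0);
    # the denominator c'x + c0 stays positive on the box above.
    num = x @ Q @ x + a @ x + a0
    den = c @ x + c0
    return ((2.0 * Q @ x + a) * den - num * c) / den**2

traj = simulate(grad_f, x0=np.array([2.0, -2.0]), l=l, h=h, sigma=11.0)
print(traj[-1])  # final state; it should have entered the box [l, h]

Starting from an initial point outside the box mirrors the finite-time convergence to the feasible region reported in Figures 2-4.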
We have simulated the dynamical behavior of the neural network using mathematical software with $\sigma = 11$. Figures 2, 3, and 4 display the state trajectories of this neural network from different initial values, which shows that the state variables converge to the feasible region in finite time. This is in accordance with the conclusion of Theorem 15. Meanwhile, it can be seen that the trajectories are stable in the sense of Lyapunov.
Example 2. In this problem (42), the objective function $f(x)$ is pseudoconvex; thus the proposed neural network is suitable for solving it. Neural network (9) associated with (42) can be described in the same way. Figures 5 and 6 display the state trajectories of this neural network from different initial values. It can be seen that these trajectories also converge to the feasible region in finite time, in accordance with the conclusion of Theorem 15, and it can be verified that the trajectories are stable in the sense of Lyapunov.

Conclusion
In this paper, a one-layer recurrent neural network has been presented for solving pseudoconvex optimization with box constraints. The neural network model has been described by a differential inclusion system. The constructed recurrent neural network has been proved to be stable in the sense of Lyapunov, and conditions ensuring finite-time convergence of the state to the feasible region have been obtained. The proposed neural network can be applied to a wide variety of optimization problems arising in engineering applications.