The Second-Order Differential Equation System with the Controlled Process for Variational Inequality with Constraints

In this paper, the variational inequality with constraints can be viewed as an optimization problem. Using the Lagrange function and the projection operator, equivalent operator equations for the variational inequality with constraints are obtained under certain conditions. Then, the second-order differential equation system with the controlled process is established for solving the variational inequality with constraints. We prove that any accumulation point of the trajectory of the second-order differential equation system is a solution to the variational inequality with constraints. In the end, one example with three different cases is solved by this differential equation system. The numerical results are reported to verify the effectiveness of the second-order differential equation system with the controlled process for solving the variational inequality with constraints.


Introduction
We consider the variational inequality with constraints, denoted by VIP(K, F): find x* ∈ K such that ⟨F(x*), y − x*⟩ ≥ 0, ∀y ∈ K, (1) where K = {x ∈ Ω | g(x) ≤ 0}, F: R^n → R^n is a monotone mapping, g: R^n → R^m is a convex and differentiable mapping, and Ω ⊆ R^n is a nonempty closed convex set. Variational inequality problems arise in physics, mechanics, economics, optimization, control, equilibrium models in transportation, and so forth. The finite-dimensional variational inequality is an active field with rich content. In the past 30 years, great progress has been made in the study of numerical algorithms for solving variational inequality problems, and many numerical methods have emerged, such as nonsmooth equation methods, smoothing methods, projection iteration methods, interior methods, multisplitting methods, homotopy approaches, Tikhonov regularization, and proximal point methods. The book by Facchinei and Pang [1] is the best-known reference giving a detailed numerical treatment of variational inequality problems and complementarity problems in mathematical programming, but it does not include differential equation approaches for solving variational inequality problems. The differential equation method for solving constrained nonlinear optimization problems was first proposed by Arrow and Hurwicz [2], who considered the equality-constrained optimization problem. Fiacco and McCormick [3] studied the constraint qualification by using the differential equation method. Evtushenko [4] also studied the equality-constrained problem at an early stage. Yamashita [5], Evtushenko et al. [6-10], and Pan [11] developed and improved the differential equation methods.
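As a concrete illustration of condition (1), the following minimal sketch numerically checks the VI inequality over sampled feasible points. The mapping F, the set K, and the solution x* below are our own toy assumptions (not the paper's example), chosen so that the solution is known in closed form.

```python
import numpy as np

# A minimal sketch (toy data of our own, not the paper's example):
# numerically check the VI condition <F(x*), y - x*> >= 0 over sampled
# feasible points y in K. Here F(x) = x - c is monotone and
# K = {x in R^2 : x >= 0, x1 + x2 <= 1}; x* = (0.5, 0.5) solves VIP(K, F).
rng = np.random.default_rng(1)
c = np.array([1.0, 1.0])
F = lambda x: x - c
x_star = np.array([0.5, 0.5])  # known solution of this toy instance

for _ in range(1000):
    y = rng.random(2)              # componentwise y >= 0
    y = y / max(1.0, y.sum())      # enforce y1 + y2 <= 1
    assert F(x_star) @ (y - x_star) >= -1e-12
```

Every sampled feasible point satisfies the inequality, which is consistent with x* being a solution of this toy instance of (1).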
In particular, Evtushenko and Zhadan have carried out a great deal of research on differential equation methods for nonlinear programming problems and constrained problems on general closed sets by using the stability theory of differential equations since 1973, which has enriched the differential equation method for nonlinear programming. Other scholars have also done much research on differential equation methods and have established a variety of differential equation systems to solve optimization problems. Brown et al. [12, 13] proposed a differential equation system based on the penalty function. Zhang and Constantinides [14] presented a differential equation system based on the Lagrange function to solve the equality-constrained problem. Since the nonlinear Lagrange function can be used to construct a dual algorithm for solving nonlinear optimization problems and the dual algorithm places no restriction on the feasibility of the original variables, many differential equation systems involve dual variables, such as those in the works of Zhou et al. [15] and Jin et al. [16]. Recently, Jin et al. [17] formulated differential systems based on the first and second derivatives of the approximate augmented Lagrangian; under suitable conditions, the asymptotic stability of the differential systems and the local convergence properties of their Euler discrete schemes were analyzed. Zhang [18] studied the differential equation method for quadratic programming.
At the same time, a series of neural network methods has been developed for solving variational inequality and complementarity problems. In fact, a neural network model is often described by a system of differential equations. Zhou et al. [19] transformed the nonlinear complementarity problem into an equivalent optimization problem and constructed a system of differential equations to solve it. On the basis of the projection operator, Gao et al. [20] presented a system of differential equations for solving variational inequality problems with linear and nonlinear constraints. He and Yang [21] studied a new neural network model for the nonsymmetric linear variational inequality, extending the classical neural network model for solving linear variational inequalities based on the projection operator and the contraction method. Liao et al. [22] studied the differential equation system for the nonlinear complementarity problem. Nazemi and Sabeghi [23] applied a gradient neural network model to efficiently solve the convex second-order cone constrained variational inequality problem, and in [24] they considered another neural network model to solve the same problem in a simpler way. Different from the above results, Antipin [25] considered first-order and second-order differential equation systems for minimizing a function on a simple set; the asymptotic and exponential stability of such processes was proved without using a Lyapunov function. Furthermore, Antipin [26] studied the differential equation system and the controlled proximal differential equation system for minimizing a function, and convergence theorems were obtained. Antipin and Antipin [27, 28] established the differential equation system and the controlled proximal differential equation system for solving the saddle problem and proved that the trajectory of the system converges monotonically in norm to one of the equilibrium points.
Antipin [29] considered the differential equation system with the controlled process for finding a fixed point of an extremal mapping and proved that the trajectory of the system converges to the solution at an exponential rate. Antipin [30] established differential equation systems and differential equation systems with the controlled process for the optimization problem, the saddle problem, and the extremal mapping, respectively; the corresponding convergence theorems were also proved. The variational inequality problem with coupled constraints and the fixed point of the extremal mapping with coupled constraints were studied by Antipin and Antipin [31, 32], where symmetric functions were introduced and differential equation systems with the controlled process with global convergence were proposed. Wang and Wang [33] established the first-order differential equation system to solve the variational inequality with constraints problem (1). We believe that research on differential equation methods for solving variational inequalities is still limited. In the past, the problem was first transformed into a smooth unconstrained optimization problem, and then the differential equation method was constructed by using the negative gradient of the unconstrained objective. Inspired by the work of Antipin, our approach is different: it starts directly from the nonsmooth system of equations to establish the differential equation method, which requires relatively weak conditions. At the same time, since mature numerical methods for differential equation systems exist, such as the Runge-Kutta method, using such numerical software to solve the variational inequality problem may achieve good results in practical computation. It is hoped that our research on the differential equation method for the variational inequality will contribute to the development of neural network methods for solving variational inequality problems.
In this paper, we establish the second-order differential equation system with the controlled process for solving the variational inequality with constraints problem (1). The remainder of this paper is organized as follows. In the next section, based on the Lagrange function and the projection operator, we prove a lemma that gives the equivalent operator equation for problem (1) under certain conditions. In Section 3, we establish the second-order differential equation system with the controlled process for the variational inequality with constraints (1) and prove that any accumulation point of the trajectory of the second-order differential equation system is a solution to problem (1). One example with three different cases is solved by the second-order differential equation system with the controlled process in Section 4, and the transient behaviors of the trajectories of this kind of differential equation system are illustrated for every case.

Preliminaries
The projection operator onto a convex set is quite useful in reformulating the variational inequality with constraints (1) as an equation.
Let C be a closed convex set. For every x ∈ R^n, there is a unique point x̄ ∈ C such that ‖x̄ − x‖ = min{‖y − x‖ : y ∈ C}. The point x̄ is the projection of x onto C, denoted by Π_C(x). The projection operator Π_C: R^n → C is well defined over R^n, and it is a nonexpansive mapping.
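The nonexpansiveness of Π_C can be checked numerically on a set with a closed-form projection. The following sketch (our own toy illustration) uses a box in R^5, for which the projection is a componentwise clip:

```python
import numpy as np

def project_box(x, lo, hi):
    # Euclidean projection onto the box C = {z : lo <= z <= hi},
    # a closed convex set for which Pi_C has the closed form clip(x, lo, hi).
    return np.clip(x, lo, hi)

rng = np.random.default_rng(0)
lo, hi = -np.ones(5), np.ones(5)
x, y = 3.0 * rng.normal(size=5), 3.0 * rng.normal(size=5)
px, py = project_box(x, lo, hi), project_box(y, lo, hi)
# Nonexpansiveness: ||Pi_C(x) - Pi_C(y)|| <= ||x - y||.
assert np.linalg.norm(px - py) <= np.linalg.norm(x - y) + 1e-12
```

The same inequality holds for any closed convex C; the box is used here only because its projection is trivial to compute.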

Complexity
Lemma 1 (see [34]). Let H be a real Hilbert space and let C ⊂ H be a closed convex set. For a given z ∈ H, u ∈ C satisfies the inequality ⟨u − z, v − u⟩ ≥ 0, ∀v ∈ C, if and only if u − Π_C(z) = 0.
In order to obtain the second-order differential equation system, we need to transform the variational inequality with constraints problem (1), which can be viewed as the following optimization problem: min f(y) subject to g(y) ≤ 0, y ∈ Ω, (4) where f(y) = ⟨F(x*), y − x*⟩ and f(y) ≥ 0. Define the Lagrange function of the minimization problem (4): L(x*, y, u) = f(y) + ⟨u, g(y)⟩, (5) where y and u are the primal and dual variables, respectively. Since x* is a minimizer of f(y), the pair (x*, u*) (under certain regularity conditions) is a saddle point of L(x*, y, u). That is, L(x*, x*, u) ≤ L(x*, x*, u*) ≤ L(x*, y, u*), (6) ∀y ∈ Ω and u ∈ R^m_+. System (6) can be represented in an equivalent operator form, which leads to the following lemma.

Lemma 2. Suppose that g: R^n → R^m is a differentiable mapping and Ω ⊂ R^n is a nonempty closed convex set. Then the pair (x*, u*) is a saddle point of the Lagrange function L(x*, y, u) if and only if (x*, u*) satisfies the following operator equations: x* = Π_Ω(x* − α(F(x*) + Jg(x*)^T u*)), u* = Π_+(u* + α g(x*)), where Π_+(·) and Π_Ω(·) are the operators that project a vector onto the nonnegative orthant R^m_+ and the set Ω, respectively, and α > 0 is a parameter.
Proof. The saddle point system (6) implies that (x*, u*) satisfies the following inequalities: ⟨∇_y L(x*, x*, u*), y − x*⟩ ≥ 0, ∀y ∈ Ω, and ⟨g(x*), u − u*⟩ ≤ 0, ∀u ∈ R^m_+. (10) If the function g is differentiable, we have ∇_y L(x*, x*, u*) = F(x*) + Jg(x*)^T u*, where Jg(x*) is the Jacobian of the mapping g at x*. Scaling by a parameter α > 0, the variational inequality system (10) can be written as ⟨x* − (x* − α(F(x*) + Jg(x*)^T u*)), y − x*⟩ ≥ 0, ∀y ∈ Ω, and ⟨u* − (u* + α g(x*)), u − u*⟩ ≥ 0, ∀u ∈ R^m_+.
With the help of projection operators and Lemma 1, the above system of variational inequalities is represented in the form of the operator equations of Lemma 2, where Π_+(·) and Π_Ω(·) are the operators that project a vector onto the nonnegative orthant R^m_+ and the set Ω, respectively. This completes the proof. □ Furthermore, if g: R^n → R^m is convex, we have for the second term in the first inequality of the variational inequality system (10) that ⟨Jg(x*)^T u*, y − x*⟩ = ⟨u*, Jg(x*)(y − x*)⟩ ≤ ⟨u*, g(y) − g(x*)⟩ for all y ∈ Ω. Thus, we can rewrite the variational inequality (10) accordingly as system (14).

Remark 1. When the function g is convex and differentiable, the relations in (6)-(14) are equivalent to each other.
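To make the operator equations of Lemma 2 concrete, the following sketch checks that a known saddle point has zero residual in both projection-type fixed-point equations. The instance below (F, g, Ω, x*, u*) is our own toy assumption, not the paper's example; Ω is the nonnegative orthant, so both projections reduce to componentwise maxima.

```python
import numpy as np

# Toy instance (our own illustration, not the paper's example):
# F(x) = x - (1,1), g(x) = x1 + x2 - 1, Omega = R^2_+.
# Its VI solution is x* = (0.5, 0.5) with multiplier u* = 0.5.
F = lambda x: x - np.array([1.0, 1.0])
g = lambda x: np.array([x[0] + x[1] - 1.0])
Jg = lambda x: np.array([[1.0, 1.0]])       # Jacobian of g

proj_omega = lambda x: np.maximum(x, 0.0)   # Pi_Omega onto R^2_+
proj_plus = lambda u: np.maximum(u, 0.0)    # Pi_+ onto R^m_+

x_star, u_star, alpha = np.array([0.5, 0.5]), np.array([0.5]), 0.7
# Residuals of the two projection-type fixed-point equations.
r_x = x_star - proj_omega(x_star - alpha * (F(x_star) + Jg(x_star).T @ u_star))
r_u = u_star - proj_plus(u_star + alpha * g(x_star))
assert np.linalg.norm(r_x) < 1e-12 and np.linalg.norm(r_u) < 1e-12
```

Both residuals vanish at the saddle point, and for any non-solution point at least one residual is nonzero, which is what makes these equations a usable stopping criterion.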
For simplicity, we denote ẍ = d^2x/dt^2 and ü = d^2u/dt^2. Similarly, the second-order differential equation system with the controlled process (16) can be represented in the form of variational inequalities (18)-(21). Subsequently, we assume that g, F, and Jg^T are Lipschitz continuous with the constants |g|, |F|, and |J|, respectively. In addition, we suppose that ‖u‖ ≤ C0, where C0 is a constant.
Under these conditions and (16), it is easy to show the following inequalities, since the projection operator is a nonexpansive mapping, where C = |F| + C0|J|. Now we discuss the convergence of the trajectories x(t), u(t), ẋ(t), and u̇(t) of the second-order differential equation system (16).

Theorem 1.
Suppose that the set of solutions Ω* of problem (1) is nonempty, F is monotone and Lipschitz continuous with the constant |F|, and g is convex, differentiable, and Lipschitz continuous with the constant |g|. Suppose that ‖u(t)‖ ≤ C0 and ‖u̇(t)‖ is bounded for all t, Jg^T is Lipschitz continuous with the constant |J|, and Ω ⊆ R^n is a closed convex set. Then any accumulation point of the trajectory x(t) of the second-order differential equation system (16) is a solution to the variational inequality with constraints problem (1).
Proof. Setting y = x* ∈ Ω* in the variational inequality (18), we obtain inequality (24). Applying (20), we obtain an estimate which implies inequality (26). Summing inequalities (24) and (26), we infer a combined estimate. By using inequality (23), we can bound this estimate further, where C = |F| + C0|J|. Since g is convex, we transform the resulting inequality by using inequality (14) into inequality (29). Letting y = x in the first inequality of (14) and adding the resulting inequality into (29), we obtain inequality (31). Setting v = u* in the variational inequality (19) and v = u + μ2ü + β2u̇ in the variational inequality (21), we deduce two further inequalities. Summing these two inequalities and taking (22) into account, we obtain an estimate which, in view of ⟨u, g(x*)⟩ ≤ 0 and ⟨u*, g(x*)⟩ = 0, reduces to inequality (34). Since F is monotone, we sum (31) and (34) to obtain inequality (35). Rearranging and using standard product-rule identities for the time derivatives, inequality (35) can be transformed into inequality (38), which in turn yields inequality (40). Let φ(x) = (1/2)‖x − x*‖^2 and ψ(u) = (1/2)‖u − u*‖^2; inequality (40) then yields a differential inequality (41) for φ and ψ. Since μ1 < (1/2)β1^2, μ2 < (1/2)β2^2, and α^2(C^2 + |g|^2) < 1/2, inequality (41) can be integrated from t0 to t. Since ‖u(t)‖ and ‖u̇(t)‖ are bounded for all t, we deduce that |(d/dt)ψ(u)| is bounded. Thus, there exists a constant B1 such that inequality (45) holds. By integrating (45), we obtain a bound involving a constant C1, and we conclude that φ(x) is bounded for all t. The function φ(x) is strongly convex, and it is well known that each of its Lebesgue sets is bounded. Thus, the trajectory x(t) is bounded. That is, there exists a constant B2 bounding x(t), and the integrals appearing in the estimate converge as t → ∞.
Assuming that there exists ε > 0 below which the integrand never falls, we obtain a contradiction to the convergence of the integrals. Hence, there exists a subsequence of time moments {t_i} along which the integrand tends to zero. Since x(t) and u(t) are bounded, x(t_i) and u(t_i) are bounded, and we can choose convergent subsequences x(t_{i_j}) and u(t_{i_j}) of x(t_i) and u(t_i).
Then there exist x′ and u′ such that x(t_{i_j}) → x′ and u(t_{i_j}) → u′. Considering system (16) along t_{i_j} and taking the limit as j → ∞, we conclude that x′ ∈ Ω* is a solution of the variational inequality with constraints (1) by Lemma 2.
This completes the proof.

Numerical Results
In this section, we test the example with three different cases by our differential equation system (16). The transient behaviors of the proposed differential equation system are demonstrated in each case. The numerical implementation is coded in Matlab R2019a running on a PC with an Intel i5 9400F 2.9 GHz CPU, and the ordinary differential equation solver adopted is ode45, which uses a Runge-Kutta (4, 5) formula.

Example 1. Consider the variational inequality with constraints problem whose mapping F involves an asymmetric positive definite matrix M. The parameter ρ is used to vary the degree of asymmetry and nonlinearity of F, and the data of the example, the matrix M and the vector q = (5.308, 0.008, −0.938, 1.024, −1.312)^T, are those considered in the work of Dang et al. [35]. Its solution is x* = (2, 2, 2, 2, 2)^T. We apply the second-order differential equation system (16) to solve the variational inequality in the three cases ρ < 1, ρ = 1, and ρ > 1.
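The paper's system (16) and the data of Example 1 are not reproduced in this text, so as a hedged sketch of the overall workflow (integrate a projected dynamical system with an adaptive Runge-Kutta solver, then read off the limit of the trajectory), the following code integrates the standard first-order projection dynamics for a small toy VI of our own, using solve_ivp with the RK45 method, SciPy's analogue of Matlab's ode45:

```python
import numpy as np
from scipy.integrate import solve_ivp

# Hedged sketch: not the paper's system (16). We integrate the classical
# first-order projection dynamics for a toy VI of our own
# (F(x) = x - (1,1), g(x) = x1 + x2 - 1, Omega = R^2_+),
# whose solution is x* = (0.5, 0.5) with multiplier u* = 0.5.
alpha = 0.5
F = lambda x: x - np.array([1.0, 1.0])
g = lambda x: np.array([x[0] + x[1] - 1.0])
JgT = np.array([[1.0], [1.0]])   # Jg(x)^T (constant for this affine g)

def rhs(t, z):
    x, u = z[:2], z[2:]
    dx = np.maximum(x - alpha * (F(x) + JgT @ u), 0.0) - x  # Pi_Omega(.) - x
    du = np.maximum(u + alpha * g(x), 0.0) - u              # Pi_+(.) - u
    return np.concatenate([dx, du])

sol = solve_ivp(rhs, (0.0, 60.0), np.zeros(3), method="RK45",
                rtol=1e-8, atol=1e-10)
# The trajectory settles at the toy solution x* = (0.5, 0.5).
assert np.allclose(sol.y[:2, -1], [0.5, 0.5], atol=1e-3)
```

For the paper's second-order controlled system, only the right-hand side `rhs` would change (with the state extended by ẋ and u̇); the solver call and the reading of the limiting trajectory stay the same.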
Case 1. When ρ < 1, we take ρ = 0.1, which means that the degree of asymmetry of F(x) is higher than the degree of nonlinearity of F(x). In this experiment, we set μ1 = μ2 = 2.02, β1 = β2 = 2.01, and α = 0.5 in the second-order differential equation system with the controlled process (16). Figure 1 describes the convergence behaviors of x(t), u(t), ẋ(t), and u̇(t) of the second-order differential equation system with the controlled process (16) from the given initial points for the example with ρ = 0.1.

Case 2.
We take ρ = 1, which means that the degree of asymmetry of F(x) is the same as the degree of nonlinearity of F(x). In this experiment, we let μ1 = μ2 = 2.02, β1 = β2 = 2.01, and α = 0.4 in the second-order differential equation system with the controlled process (16). The solution trajectories x(t), u(t), ẋ(t), and u̇(t) of the second-order differential equation system with the controlled process (16) from the given initial points for the example with ρ = 1 are shown in Figure 2.
Case 3. When ρ > 1, we take ρ = 10, which means that the degree of asymmetry of F(x) is lower than the degree of nonlinearity of F(x). In this experiment, we set μ1 = μ2 = 2.02, β1 = β2 = 2.01, and α = 0.06 in the second-order differential equation system with the controlled process (16). Figure 3 describes the convergence behaviors of x(t), u(t), ẋ(t), and u̇(t) of the second-order differential equation system with the controlled process (16) from the given initial points for the example with ρ = 10.
Fixing μ2 = 2.02, β2 = 2.01, and α = 0.5, the convergence time changes as shown in Table 1 when μ1 and β1 are varied. In this experiment, with μ2, β2, and α fixed, we found that the smaller μ1 and β1 are, the shorter the convergence time is. Furthermore, we also found that the change of β1 has a great influence on the rate of convergence when μ1 is fixed, while the change of μ1 has little effect on the rate of convergence when β1 is fixed. Fixing μ1 = 2.02, β1 = 2.01, and α = 0.5, the convergence time changes as shown in Table 2 when μ2 and β2 are varied. In this experiment, with μ1, β1, and α fixed, we found that the smaller μ2 and β2 are, the shorter the convergence time is. Furthermore, we also found that the change of β2 has a great influence on the rate of convergence when μ2 is fixed, while the change of μ2 has little effect on the rate of convergence when β2 is fixed.

Conclusions
In this paper, based on the Lagrange function and the projection operator, the second-order differential equation system with the controlled process is established for solving the variational inequality with constraints problem (1). The convergence of the trajectories for this kind of differential equation system is proved. Furthermore, we test one example with three different cases by the second-order differential equation system with the controlled process and illustrate the transient behaviors of the trajectories of this differential equation system in each case, which verifies the effectiveness of the second-order differential equation system with the controlled process for solving the variational inequality with constraints.
Using second-order differential equation systems to solve optimization problems is an interesting issue for further research.

Data Availability
The data used to support the findings of this study are available from the corresponding author upon reasonable request.

Disclosure
Some of the results in this paper were presented in the Proceedings of the 11th World Congress on Intelligent Control and Automation, 2014 (see https://ieeexplore.ieee.org/document/7052904).

Conflicts of Interest
The authors declare that they have no conflicts of interest.