A Globally Convergent Matrix-Free Method for Constrained Equations and Its Linear Convergence Rate

Proof. From (6), we have
$$\langle F(y_k), x_k - y_k\rangle \ge \sigma \alpha_k \|d_k\|^2 > 0. \tag{15}$$
For any $x^* \in X^*$, from (3) and the nonexpansiveness of the projection operator, it holds that
$$\begin{aligned}
\|x_{k+1}-x^*\|^2 &= \|P_C[x_k - \gamma\xi_k F(y_k)] - x^*\|^2 \\
&\le \|x_k - \gamma\xi_k F(y_k) - x^*\|^2 \\
&= \|x_k - x^*\|^2 - 2\gamma\xi_k\langle F(y_k), x_k - x^*\rangle + \gamma^2\xi_k^2\|F(y_k)\|^2.
\end{aligned} \tag{16}$$
By the monotonicity of the mapping $F(\cdot)$, we have
$$\begin{aligned}
\langle F(y_k), x_k - x^*\rangle &= \langle F(y_k), x_k - y_k\rangle + \langle F(y_k), y_k - x^*\rangle \\
&\ge \langle F(y_k), x_k - y_k\rangle + \langle F(x^*), y_k - x^*\rangle \\
&= \langle F(y_k), x_k - y_k\rangle.
\end{aligned} \tag{17}$$
Substituting (15) and (17) into (16), we have
$$\begin{aligned}
\|x_{k+1}-x^*\|^2 &\le \|x_k - x^*\|^2 - 2\gamma\xi_k\langle F(y_k), x_k - y_k\rangle + \gamma^2\xi_k^2\|F(y_k)\|^2 \\
&= \|x_k - x^*\|^2 - \gamma(2-\gamma)\frac{\langle F(y_k), x_k - y_k\rangle^2}{\|F(y_k)\|^2} \\
&\le \|x_k - x^*\|^2 - \gamma(2-\gamma)\frac{\sigma^2\alpha_k^2\|d_k\|^4}{\|F(y_k)\|^2},
\end{aligned} \tag{18}$$


Introduction
Let F : Rⁿ → Rⁿ be a continuous nonlinear mapping and C a nonempty closed convex set of Rⁿ. In this paper, we consider the problem of finding x ∈ C such that F(x) = 0. (1) Nonlinear constrained equations (1), denoted by CES(C, F), arise in various applications, for instance, ballistic trajectory computation and vibration systems [1], the power flow equations [2], chemical equilibrium systems [3], and so forth.
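As a small illustrative instance of problem (1) (hypothetical, not one of the paper's test problems), the snippet below builds a monotone affine mapping F(x) = Ax + b with A positive semidefinite and C the nonnegative orthant, then verifies that a constructed point solves CES(C, F):

```python
import numpy as np

rng = np.random.default_rng(0)

# A = M^T M is positive semidefinite, so F(x) = A x + b is monotone:
# <F(x) - F(y), x - y> = (x - y)^T A (x - y) >= 0.
M = rng.standard_normal((5, 5))
A = M.T @ M

# Choose a feasible point x_star >= 0 and set b so that F(x_star) = 0.
x_star = np.abs(rng.standard_normal(5))
b = -A @ x_star

F = lambda x: A @ x + b
project = lambda z: np.maximum(z, 0.0)  # P_C for C = [0, +inf)^n

# x_star lies in C and F(x_star) = 0, so x_star solves CES(C, F);
# equivalently, it is a fixed point of z -> P_C[z - F(z)].
print(np.linalg.norm(F(x_star)))                             # ~0 up to rounding
print(np.linalg.norm(project(x_star - F(x_star)) - x_star))  # ~0 up to rounding
```

The fixed-point check at the end is the standard characterization exploited by projection methods for such problems.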
In recent years, many numerical methods have been proposed to find a solution of nonsmooth CES(C, F), including trust region methods [4, 5], the Levenberg-Marquardt method [6], and projection methods [7][8][9]. Compared with the trust region and Levenberg-Marquardt methods, the projection method is more efficient for solving large-scale CES(C, F). Noting this, Wang et al. [7] proposed a projection method for solving CES(C, F), which possesses the global convergence property without differentiability. A drawback of this method is that it needs to solve a linear equation inexactly at each iteration, and its variants [8, 10] share this drawback.
It is well known that the spectral gradient method and the conjugate gradient method are two efficient methods for solving large-scale unconstrained optimization problems due to their simplicity and low storage. Recently, La Cruz and Raydan [11] successfully applied the famous spectral gradient method to solve unconstrained equations by using a merit function. Then, Zhang and Zhou [12] presented a spectral gradient projection method (SGP) for solving unconstrained monotone equations, which does not utilize any merit function. Later, the SGP was extended by Yu et al. [9] to solve monotone constrained equations. However, the study of conjugate gradient methods for large-scale (un)constrained equations is relatively rare. Cheng [13] proposed a PRP-type method (PRPT) for systems of monotone equations, which is a combination of the well-known PRP method and the hyperplane projection method, and the numerical results in [13] show that the PRPT method performs better than the SGP method in [12].
Different from the methods in [7, 8, 10], the methods in [9, 11, 12, 13] do not need to solve a linearized equation at each iteration; however, the latter do not investigate the convergence rate, and it is not even known whether they possess a linear convergence rate. In this paper, motivated by the projection methods in [7, 8, 10] and the gradient methods in [9, 12, 13], we propose a matrix-free method for solving nonlinear constrained equations, which can be viewed as a combination of the well-known PRP conjugate gradient method and the famous hyperplane projection method, and which possesses a linear convergence rate under standard conditions. The remainder of this paper is organized as follows. Section 2 describes the new method and presents its global convergence analysis. The linear convergence rate of the new method is established in Section 3. Numerical results are reported in Section 4. Finally, some final remarks are included in Section 5.

Algorithm and Convergence Analysis
Let X* denote the solution set of CES(C, F). Throughout this paper, we assume that X* is nonempty and F(·) is monotone; that is, ⟨F(x) − F(y), x − y⟩ ≥ 0 for all x, y ∈ Rⁿ. Now, we describe the matrix-free method for nonlinear constrained equations.
Algorithm 1. Consider the following.
Remark 4. Line search (6) differs from those of [12, 13]; it is well defined by the following lemma.
Proof. In fact, if d_k = 0, then from (10) we have ‖F(x_k)‖ = 0, which means that Algorithm 1 terminates with x_k being a solution of CES(C, F). Now, we consider d_k ≠ 0 for all k. For the sake of contradiction, we suppose that there exists k₀ ≥ 0 such that (6) is not satisfied for any nonnegative integer m. Letting m → ∞ and using the continuity of F(·) yields the limiting form of this inequality. On the other hand, from (10) we obtain a lower bound which, together with (12), contradicts the strict inequality satisfied by the line search constants. Therefore the assertion holds. This completes the proof.

By the monotonicity of the mapping F(·), we have (17). Substituting (15) and (17) into (16), we obtain (18), which shows that the sequence {x_k} is bounded. By (10), it holds that {d_k} is bounded, and so is {y_k}. Then, by the continuity of F(·), there exists a constant M > 0 such that ‖F(y_k)‖ ≤ M for all k. Therefore it follows from (18) that the sum over k of α_k²‖d_k‖⁴ is finite, which implies that assertion (14) holds. The proof is completed.
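Since the displayed box of Algorithm 1 is not reproduced above, the following Python sketch reconstructs the generic scheme analyzed in the proofs: a PRP-type search direction, a backtracking line search satisfying a condition of the form (6), and the hyperplane projection step x_{k+1} = P_C[x_k − γξ_k F(y_k)] with ξ_k = ⟨F(y_k), x_k − y_k⟩ / ‖F(y_k)‖². The parameter defaults and the exact form of d_k are assumptions for illustration, not the paper's precise Algorithm 1:

```python
import numpy as np

def solve_ces(F, project, x0, gamma=1.0, sigma=0.01, beta0=1.0, rho=0.5,
              tol=1e-8, max_iter=500):
    """Sketch of a PRP-type projection method for monotone CES(C, F)."""
    x = np.asarray(x0, dtype=float)
    Fx = F(x)
    d = -Fx
    for _ in range(max_iter):
        if np.linalg.norm(Fx) <= tol:
            break
        # Backtracking line search: alpha = beta0 * rho**m such that
        # -<F(x + alpha*d), d> >= sigma * alpha * ||d||^2  (a form of (6)).
        alpha = beta0
        while True:
            y = x + alpha * d
            Fy = F(y)
            if -np.dot(Fy, d) >= sigma * alpha * np.dot(d, d):
                break
            alpha *= rho
            if alpha < 1e-12:          # safeguard against stagnation
                break
        # Hyperplane projection step with relaxation factor gamma in (0, 2).
        xi = np.dot(Fy, x - y) / np.dot(Fy, Fy)
        x_new = project(x - gamma * xi * Fy)
        Fx_new = F(x_new)
        # PRP-type conjugate gradient parameter (a standard, assumed choice).
        beta_prp = np.dot(Fx_new, Fx_new - Fx) / np.dot(Fx, Fx)
        d = -Fx_new + beta_prp * d
        x, Fx = x_new, Fx_new
    return x

# Toy monotone instance: F(x) = x + sin(x) on C = [0, +inf)^n; solution x* = 0.
F = lambda x: x + np.sin(x)
project = lambda z: np.maximum(z, 0.0)   # projection onto the nonnegative orthant
x = solve_ces(F, project, x0=np.ones(5))
```

Note that the loop is matrix-free in the sense discussed in the paper: it uses only function values F(x), inner products, and the projection P_C, with no Jacobian or linear solve.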
Now, we prove the global convergence of Algorithm 1.
Theorem 7. Suppose that the conditions in Lemma 6 hold. Then the sequence {x_k} generated by Algorithm 1 converges globally to a solution of CES(C, F).
Proof. We consider the following two possible cases.

Convergence Rate
Therefore, there is a positive constant such that the stated linear-rate bound holds for all k. The proof is completed.

Numerical Results
In this section, we test Algorithm 1 and compare it with the projection method in [7] and the spectral gradient projection method in [9]. We use the following two simple problems to test the efficiency of the three methods.
By the monotonicity and the Lipschitz continuity of F(·), it is not difficult to show that the line search (6) is well defined with the constant 0.01. In our implementation, the trial steplength is computed by a parabolic model and safeguarded within [α_min, α_max], where α_min = 10⁻¹⁰ and α_max = 10¹⁰; this parabolic model is the same as the one described in [15]. We stop the iteration if the iteration number exceeds 1000 or the inequality ‖F(x_k)‖ ≤ 10⁻⁶ is satisfied. The method in [7] (denoted by WPM) is implemented with its parameters set to 0, 0, 0.95, 0.5, and 2.5, respectively. The method in [9] (denoted by YSGP) is implemented with parameter values 0.5, 0.01, and 0.001.
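The safeguarded parabolic model for the trial steplength can be sketched as follows; since the exact formula from [15] is not reproduced above, the standard one-dimensional quadratic interpolation of a merit function φ (fitting φ(0), φ′(0), and φ at the previous steplength) is an assumption used here for illustration:

```python
def parabolic_trial_step(phi0, dphi0, alpha_prev, phi_prev,
                         alpha_min=1e-10, alpha_max=1e10):
    """Trial steplength from a quadratic model of a merit function phi.

    Fits q(a) = phi0 + dphi0*a + c*a^2 through phi(0), phi'(0), and
    phi(alpha_prev), minimizes it, and clamps the result to
    [alpha_min, alpha_max] as in the safeguards described above.
    """
    denom = 2.0 * (phi_prev - phi0 - dphi0 * alpha_prev)
    if denom <= 0.0:                 # model not strictly convex: fall back
        alpha = alpha_max
    else:
        alpha = -dphi0 * alpha_prev ** 2 / denom
    return min(max(alpha, alpha_min), alpha_max)

# For phi(a) = (a - 2)^2: phi(0) = 4, phi'(0) = -4, phi(1) = 1,
# and the quadratic fit recovers the exact minimizer a = 2.
print(parabolic_trial_step(4.0, -4.0, 1.0, 1.0))  # 2.0
```

The clamp to [α_min, α_max] = [10⁻¹⁰, 10¹⁰] matches the safeguard bounds stated in the text.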
For Problem 11, the initial point is set as x₀ = (1, 1, . . . , 1)ᵀ, and Table 1 gives the numerical results of Algorithm 1, WPM, and YSGP for different dimensions, where Iter. denotes the number of iterations, Fn denotes the number of function evaluations, and CPU denotes the CPU time in seconds when the algorithm terminates. Table 2 lists the numerical results for Problem 12 with different initial points. The results in Tables 1 and 2 show that Algorithm 1 performs slightly better than YSGP in [9] and clearly better than WPM in [7], since it requires far fewer iterations or less CPU time than WPM and slightly fewer iterations or less CPU time than YSGP. So the proposed method is promising.

Conclusions
A globally convergent matrix-free method for solving constrained equations has been developed, which is not only derivative-free but also completely matrix-free. Consequently, it can be applied to large-scale nonsmooth constrained equations. We established global convergence without requiring differentiability of the equations and proved a linear convergence rate under standard conditions. We also reported numerical results showing the efficiency of the proposed method.
Numerical results indicate that the line search parameters influence the performance of the method, so the choice of these positive constants is a topic for future work.

Table 1 :
Numerical results with different dimensions of Problem 11.

Table 2 :
Numerical results with different initial points for Problem 12 with n = 64.