A Cauchy Point Direction Trust Region Algorithm for Nonlinear Equations

In this paper, a Cauchy point direction trust region algorithm is presented to solve nonlinear equations. The search direction is an optimal convex combination of the trust region direction and the Cauchy point direction, and it possesses the sufficient descent property and the automatic trust region property. The global convergence of the proposed algorithm is proven under some conditions. Preliminary numerical results demonstrate that the proposed algorithm is promising and has better convergence behavior than two existing algorithms for solving large-scale nonlinear equations.


Introduction
The system of nonlinear equations is one of the most important mathematical models, with extensive applications in chemical equilibrium problems [1], hydraulic circuits of power plants [2], L-shaped beam structures [3], and image restoration problems [4]. Some ℓ1-norm regularized optimization problems in compressive sensing [5] are also obtained by reformulating systems of nonlinear equations. In this paper, we consider the following nonlinear system of equations:

F(x) = 0, (1)

where F : R^n ⟶ R^n is continuously differentiable. Let

f(x) = (1/2)‖F(x)‖^2, (2)

where f : R^n ⟶ R. It is clear that the nonlinear system of equations (1) is equivalent to the following unconstrained optimization problem:

min_{x ∈ R^n} f(x). (3)

Many studies have developed numerical algorithms to solve the nonlinear system of equations, such as the conjugate gradient algorithm [6-9], the Levenberg-Marquardt algorithm [10-13], and the trust region algorithm [14-21]. The trust region method plays an important role in nonlinear optimization. It was proposed by Powell in [22] to solve nonlinear optimization problems by using an iterative structure. Recently, some trust region-based methods have been developed to solve nonlinear equations and have shown very promising results. For instance, Fan and Pan [16] discussed a regularized trust region subproblem with an adaptive trust region radius, which possesses quadratic convergence under the local error bound condition. Yuan et al. [17] proposed a trust region-based algorithm, in which the Jacobian is updated by the BFGS formula, for solving nonlinear equations. The algorithm showed global convergence with a quadratic convergence rate. This algorithm does not compute the Jacobian matrix in each iteration, which significantly reduces the computational burden.
Esmaeili and Kimiaei [19] studied an adaptive trust region algorithm with a moderate radius size by employing a nonmonotone technique, and global and quadratic local convergence was established. However, the maximal dimension of their test problems was only 300. Qi et al. [23] proposed an asymptotic Newton search direction method, an adequate combination of the projected gradient direction and the projected trust region direction, for solving box-constrained nonsmooth equations and proved global and quadratic local convergence. Motivated by this search direction, Yuan et al. [24] used the same idea to solve nonsmooth unconstrained optimization problems and also showed global convergence.
Inspired by the fact that the search direction of Qi et al. [23] speeds up the local convergence rate compared with the gradient direction, we propose an efficient Cauchy point direction trust region algorithm to solve (1) in this paper. The search direction is an optimal convex combination of the trust region direction and the Cauchy point direction with the automatic trust region property and the sufficient descent property, which can improve the numerical performance. We also show the global convergence of the proposed algorithm. Numerical results indicate that this algorithm has better convergence behavior than the classical trust region algorithm and the adaptive regularized trust region algorithm. However, the local convergence rate is not analyzed.

The remainder of this paper is organized as follows. In Section 2, we introduce our motivation and a new search direction. In Section 3, we give a Cauchy point direction trust region algorithm for solving problem (1). The global convergence of the proposed algorithm is proven in Section 4. Preliminary numerical results are reported in Section 5. We conclude our paper in Section 6. Throughout this paper, unless otherwise specified, ‖·‖ denotes the Euclidean norm of vectors or matrices.

Motivation and Search Direction
In this section, motivated by the definition of the Cauchy point, which is parallel to the steepest descent direction, we present a search direction based on the Cauchy point. Let Δ_k > 0 be the trust region radius.

Cauchy Point Direction.
Noting that J_k^T J_k is a positive semidefinite matrix, we define the Cauchy point direction at the current point x_k:

d_k^C = -c_k Δ_k (J_k^T F_k)/‖J_k^T F_k‖, (4)

where F_k = F(x_k), J_k = ∇F(x_k) is the Jacobian of F(x) or its approximation, and

c_k = min{1, ‖J_k^T F_k‖^3 / (Δ_k (J_k^T F_k)^T (J_k^T J_k)(J_k^T F_k))}, (5)

with c_k = 1 when the denominator vanishes, so that 0 < c_k ≤ 1. The following proposition gives a nice property of the Cauchy point direction d_k^C.

Proposition 1 (automatic trust region property).
Suppose that d_k^C is defined by (4); then, ‖d_k^C‖ ≤ Δ_k.
Proof. From (4), ‖d_k^C‖ = c_k Δ_k, and the result follows immediately from 0 < c_k ≤ 1. This completes the proof. □

From the definition of d_k^C and the above proposition, we can see that the Cauchy point direction d_k^C possesses both the descent property and the automatic trust region property.

2.2. Trust Region Direction. Let the trial step d_k^TR(Δ_k) be a solution of the following trust region subproblem:

min_{d ∈ R^n} m_k(d) = (1/2)‖F_k + J_k d‖^2, s.t. ‖d‖ ≤ Δ_k, (6)

where Δ_k is as defined above. From the famous result of Powell [22], we give the following lemma and omit its proof.
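To illustrate, the Cauchy point step (4) and (5) can be sketched in a few lines of Python/NumPy. This is a minimal sketch rather than the authors' MATLAB code, and the function name cauchy_point is our own:

```python
import numpy as np

def cauchy_point(J, F, delta):
    """Cauchy point step for m(d) = 0.5*||F + J d||^2 inside ||d|| <= delta.

    Uses the classical formula with g = J^T F and B = J^T J; returns
    (c, dC) with 0 < c <= 1, so that ||dC|| = c*delta <= delta holds
    automatically (Proposition 1).
    """
    g = J.T @ F                       # gradient of f at the current point
    gnorm = np.linalg.norm(g)
    if gnorm == 0.0:                  # stationary point: zero step
        return 1.0, np.zeros_like(g)
    gBg = np.dot(J @ g, J @ g)        # g^T (J^T J) g = ||J g||^2 >= 0
    if gBg <= 0.0:
        c = 1.0
    else:
        c = min(1.0, gnorm**3 / (delta * gBg))
    dC = -c * delta * g / gnorm
    return c, dC
```

By construction ‖d_k^C‖ = c_k Δ_k ≤ Δ_k, so the automatic trust region property needs no extra safeguard, and the step always reduces the model m_k.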

Lemma 1. Let d_k^TR(Δ_k) be a solution of the trust region subproblem (6); then,

f_k - m_k(d_k^TR(Δ_k)) ≥ (1/2)‖J_k^T F_k‖ min{Δ_k, ‖J_k^T F_k‖/‖J_k^T J_k‖}. (7)

As mentioned by Qi et al. [23] and Yuan et al. [24], when x_k is far from the optimal solution, the trust region direction d_k^TR(Δ_k) may not possess the descent property. However, it follows from the definition of the Cauchy point direction that d_k^C always possesses the descent property. Therefore, if the search direction is defined as a combination of d_k^C and d_k^TR, it will be a descent direction. Hence, we give a new search direction in a convex combination form below.
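Since the paper later obtains the trial step for subproblem (6) with the Steihaug method [27], a minimal Steihaug-CG sketch in Python/NumPy may be helpful. The names boundary_tau and steihaug are ours, and this is an illustration, not the authors' implementation:

```python
import numpy as np

def boundary_tau(d, p, delta):
    """Positive root tau of ||d + tau*p|| = delta."""
    dd, dp, pp = d @ d, d @ p, p @ p
    return (-dp + np.sqrt(dp * dp + pp * (delta * delta - dd))) / pp

def steihaug(J, F, delta, tol=1e-10):
    """Approximately solve min 0.5*||F + J d||^2 s.t. ||d|| <= delta
    by truncated conjugate gradients on B = J^T J."""
    n = J.shape[1]
    d = np.zeros(n)
    r = -(J.T @ F)                    # residual = -gradient at d = 0
    if np.linalg.norm(r) < tol:
        return d
    p = r.copy()
    for _ in range(2 * n):
        Bp = J.T @ (J @ p)            # B applied to the CG direction
        pBp = p @ Bp
        if pBp <= 0.0:                # zero curvature: step to the boundary
            return d + boundary_tau(d, p, delta) * p
        alpha = (r @ r) / pBp
        d_next = d + alpha * p
        if np.linalg.norm(d_next) >= delta:   # leaving the region: truncate
            return d + boundary_tau(d, p, delta) * p
        r_next = r - alpha * Bp
        if np.linalg.norm(r_next) < tol:
            return d_next
        beta = (r_next @ r_next) / (r @ r)
        d, r, p = d_next, r_next, r_next + beta * p
    return d
```

The returned step satisfies ‖d‖ ≤ Δ_k, and because the first CG direction is the steepest descent direction, it achieves at least the Cauchy decrease required by bounds of the type in Lemma 1.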

Search Direction.
Let λ_k be a solution of the following one-dimensional minimization problem:

min_{λ ∈ [0,1]} h_k(λ) = m_k(λ d_k^TR(Δ_k) + (1 - λ) d_k^C), (8)

and the search direction is defined as

d_k(Δ_k) = λ_k d_k^TR(Δ_k) + (1 - λ_k) d_k^C. (9)

For the above one-dimensional minimization problem, we can use the golden section method, the Fibonacci method, the parabolic method [25], etc. The idea of this search direction follows from Qi et al. [23]. However, in this paper we use the Cauchy point direction, which possesses the automatic trust region property, and obtain the optimal λ_k by applying the golden section method to problem (8).
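The golden section search used to obtain λ_k can be sketched as follows. This is a generic Python version for a unimodal h on [0, 1]; the tolerance and names are ours, and we assume, as in our reading of (8), that h_k(λ) is the quadratic model m_k evaluated along the segment between d_k^C and d_k^TR (quadratic, hence unimodal):

```python
import math

def golden_section(h, a=0.0, b=1.0, eps=1e-6):
    """Golden section search for a minimizer of a unimodal h on [a, b]."""
    rho = (math.sqrt(5.0) - 1.0) / 2.0   # golden ratio constant, ~0.618
    p = b - rho * (b - a)                # interior trial points
    q = a + rho * (b - a)
    hp, hq = h(p), h(q)
    while b - a > eps:
        if hp <= hq:                     # minimizer lies in [a, q]
            b, q, hq = q, p, hp
            p = b - rho * (b - a)
            hp = h(p)
        else:                            # minimizer lies in [p, b]
            a, p, hp = p, q, hq
            q = a + rho * (b - a)
            hq = h(q)
    return 0.5 * (a + b)
```

Each iteration shrinks the bracket by the factor 0.618 while reusing one of the two previous function values, so only one new evaluation of h_k is needed per step.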

New Algorithm
In this section, let d_k(Δ_k) be the search direction defined by (9). Then, the actual reduction is defined as

Ared_k = f(x_k) - f(x_k + d_k(Δ_k)), (10)

and the predicted reduction is

Pred_k = f(x_k) - m_k(d_k(Δ_k)). (11)

Thus, r_k is defined as the ratio between Ared_k and Pred_k:

r_k = Ared_k / Pred_k. (12)

We now list the detailed steps of the Cauchy point direction trust region algorithm in Algorithm 1.

Algorithm 1. Cauchy point direction trust region algorithm for solving (3).
Step 1. Given an initial point x_0 ∈ R^n and parameters 0 < μ_1 < μ_2 < 1, 0 < η_1 < 1 < η_2, δ ∈ (0, 1), ϵ > 0, and Δ_0 > 0, set k := 0.
Step 2. If the termination condition ‖J_k^T F_k‖ ≤ ϵ is satisfied at the iteration point x_k, then stop. Otherwise, go to Step 3.
Step 3. Solve the trust region subproblem (6) to obtain d_k^TR(Δ_k), and compute d_k^C and c_k from (4) and (5), respectively.
Step 4. Solve the one-dimensional minimization problem (8) by Algorithm 2 to obtain λ_k, and calculate the search direction d_k(Δ_k) from (9).
Step 5. Compute Ared_k, Pred_k, and r_k from (10)-(12), and update the trust region radius: if r_k < μ_1, set Δ_k := η_1 Δ_k; if μ_1 ≤ r_k < μ_2, set Δ_{k+1} := Δ_k; if r_k ≥ μ_2, set Δ_{k+1} := η_2 Δ_k.
Step 6. If r_k < μ_1, let x_{k+1} := x_k and return to Step 3. Otherwise, set x_{k+1} := x_k + d_k(Δ_k), let k := k + 1, and return to Step 2.
In Step 4 of Algorithm 1, we use the golden section algorithm given in Algorithm 2.
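Putting the pieces together, the whole of Algorithm 1 can be sketched in Python/NumPy. This is a simplified illustration under our own naming, not the authors' MATLAB code: a Gauss-Newton step truncated to the trust region stands in for the Steihaug solver, and the golden section search for λ_k is inlined:

```python
import numpy as np

def ctr_solve(F, J, x0, delta0=1.0, mu1=0.1, mu2=0.9,
              eta1=0.25, eta2=3.0, eps=1e-8, max_iter=200):
    """Cauchy point direction trust region sketch for F(x) = 0."""
    x, delta = np.asarray(x0, dtype=float), delta0
    for _ in range(max_iter):
        Fx, Jx = F(x), J(x)
        g = Jx.T @ Fx                          # gradient of f = 0.5*||F||^2
        gnorm = np.linalg.norm(g)
        if gnorm <= eps:                       # Step 2: stopping test
            break
        for _inner in range(60):               # Steps 3-6: inner loop
            # trust region step: Gauss-Newton step truncated to the ball
            dTR = np.linalg.lstsq(Jx, -Fx, rcond=None)[0]
            nTR = np.linalg.norm(dTR)
            if nTR > delta:
                dTR *= delta / nTR
            # Cauchy point step, (4)-(5)
            gBg = np.dot(Jx @ g, Jx @ g)
            c = 1.0 if gBg <= 0.0 else min(1.0, gnorm**3 / (delta * gBg))
            dC = -c * delta * g / gnorm
            # golden section search for lambda in [0, 1] on the model, (8)
            m = lambda d: 0.5 * np.dot(Fx + Jx @ d, Fx + Jx @ d)
            h = lambda lam: m(lam * dTR + (1.0 - lam) * dC)
            a, b = 0.0, 1.0
            rho = (np.sqrt(5.0) - 1.0) / 2.0
            while b - a > 1e-6:
                p, q = b - rho * (b - a), a + rho * (b - a)
                if h(p) <= h(q):
                    b = q
                else:
                    a = p
            lam = 0.5 * (a + b)
            d = lam * dTR + (1.0 - lam) * dC   # search direction, (9)
            # ratio test, (10)-(12)
            f0 = 0.5 * np.dot(Fx, Fx)
            Ftrial = F(x + d)
            ared = f0 - 0.5 * np.dot(Ftrial, Ftrial)
            pred = f0 - m(d)
            r = ared / pred if pred > 0.0 else 0.0
            if r < mu1:
                delta *= eta1                  # reject: shrink radius, retry
                continue
            x = x + d                          # accept the trial step
            if r >= mu2:
                delta *= eta2                  # very successful: enlarge
            break
    return x
```

On a toy system such as F(x) = (x_1 + x_2 - 3, x_1^2 + x_2^2 - 9), this sketch converges to a root in a handful of outer iterations.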
Step 1. Given ϵ′ > 0, set a_0 := 0, b_0 := 1, p_0 := a_0 + 0.382(b_0 - a_0), and q_0 := a_0 + 0.618(b_0 - a_0); compute h_k(p_0) and h_k(q_0); set j := 0.
Step 2. If the termination condition b_j - a_j ≤ ϵ′ is satisfied, stop and output λ_k := (a_j + b_j)/2. If h_k(p_j) ≤ h_k(q_j), go to Step 3. Otherwise, go to Step 4.
Step 3. Set a_{j+1} := a_j, b_{j+1} := q_j, q_{j+1} := p_j, and p_{j+1} := a_{j+1} + 0.382(b_{j+1} - a_{j+1}); compute h_k(p_{j+1}) and go to Step 5.
Step 4. Set a_{j+1} := p_j, b_{j+1} := b_j, p_{j+1} := q_j, and q_{j+1} := a_{j+1} + 0.618(b_{j+1} - a_{j+1}); compute h_k(q_{j+1}) and go to Step 5.
Step 5. Set j := j + 1 and go to Step 2.

Global Convergence
In this section, the global convergence of Algorithm 1 is proven. First, the following lemma follows easily from Lemma 1.

Lemma 2. Let d_k(Δ_k) be defined by (9), where λ_k is a solution of problem (8). Then,

Pred_k ≥ (1/2)‖J_k^T F_k‖ min{Δ_k, ‖J_k^T F_k‖/‖J_k^T J_k‖}. (16)

Proof. Since λ_k minimizes h_k(λ) over [0, 1] and λ = 1 gives d_k^TR(Δ_k), we have m_k(d_k(Δ_k)) ≤ m_k(d_k^TR(Δ_k)). It follows from the definition of Pred_k and Lemma 1 that Pred_k ≥ f_k - m_k(d_k^TR(Δ_k)), and hence the bound in (16) holds. This completes the proof. □

The next lemma indicates an important property of the Cauchy point direction d_k^C, namely, the sufficient descent property.
Lemma 3 (sufficient descent property). Let d_k^C be defined by (4). Then,

(J_k^T F_k)^T d_k^C = -c_k Δ_k ‖J_k^T F_k‖.

Proof. For any Δ_k, from (4), we have (J_k^T F_k)^T d_k^C = -c_k Δ_k (J_k^T F_k)^T (J_k^T F_k)/‖J_k^T F_k‖ = -c_k Δ_k ‖J_k^T F_k‖, and noting that 0 < c_k ≤ 1, the right-hand side is negative whenever J_k^T F_k ≠ 0. This completes the proof. □

To derive the global convergence of Algorithm 1, the following assumption is considered.

Assumption 1.
The Jacobian J(x) of F(x) is bounded; i.e., there exists a constant M > 0 such that ‖J(x)‖ ≤ M for any x ∈ R^n.
In the sequel, we show that Steps 3-6 of Algorithm 1 do not cycle in the inner loop infinitely.

Lemma 4. Suppose that Assumption 1 holds and x_k is not a stationary point of (3). Then, Algorithm 1 does not cycle infinitely between Step 3 and Step 6.

Proof. Since x_k is not a stationary point, there exists a constant ϵ_0 > 0 such that ‖J_k^T F_k‖ ≥ ϵ_0. Since 0 < c_k ≤ 1 and Assumption 1 holds, there exists a constant α > 0 such that (22) holds. Letting 0 < Δ_k ≤ (2/α^2)(1 - δ)ϵ_0 c_k, then from (4) and (22), we obtain (23), and noting (21), (24) follows; therefore, (13) holds. Next, we prove that (14) is also satisfied. By contradiction, assume that Steps 3-6 of Algorithm 1 do cycle in the inner loop infinitely; then, Δ_k ⟶ 0. It follows from (13) and Lemma 3 that there exists a constant β > 0 such that (26) holds. By the definition of d_k^TR(Δ_k), we have ‖d_k^TR‖ ≤ Δ_k; noting Proposition 1 and 0 < c_k ≤ 1, we further obtain (27). Thus, from (26) and (27), we have (28), and therefore r_k ⟶ 1, which means that r_k ≥ μ_2 when Δ_k is sufficiently small. By the updating rule of Δ_k in Step 5 of Algorithm 1, this contradicts Δ_k ⟶ 0. Hence, the cycling between Step 3 and Step 6 terminates finitely. This completes the proof. □

We can now obtain the global convergence of Algorithm 1 based on the above lemmas.

Theorem 1. Suppose that Assumption 1 holds and {x_k} is generated by Algorithm 1; then,

lim inf_{k⟶∞} ‖J_k^T F_k‖ = 0. (29)

Proof. Suppose that the conclusion is not true; then, there exists a constant ϵ_1 > 0 such that, for any k,

‖J_k^T F_k‖ ≥ ϵ_1. (30)

From Lemma 2, Assumption 1, and the above inequality, we have (31). Noting the updating rule of Δ_k, r_k ≥ μ_1, (31), and Lemma 4, we deduce (32), which yields a contradiction. This means that (30) cannot hold; i.e., the result (29) of the theorem holds, and the proof is completed. □

Numerical Results
In this section, the numerical results of the proposed algorithm for solving problem (1) are reported. We denote the proposed Algorithm 1 by CTR and compare it with two existing algorithms: the adaptive trust region algorithm of Fan and Pan [16] and the classical trust region algorithm [26], denoted by ATR and TTR, respectively. In the numerical experiments, all parameters of Algorithm 1 are chosen as follows: μ_1 = 0.1, μ_2 = 0.9, η_1 = 0.25, η_2 = 3, ϵ = 10^-5, c_0 = Δ_0 = 1, and δ = 0.9. We choose ϵ′ = 10^-6 in Algorithm 2. The Steihaug method [27] is employed to obtain the approximate trial step d_k^TR for solving the trust region subproblem (6). Each algorithm is terminated if the number of iterations exceeds 2000. All codes were implemented in MATLAB R2010b. The numerical experiments were performed on a PC with an Intel Pentium (R) Dual-Core CPU at 3.20 GHz and 2.00 GB of RAM, running the Windows 7 operating system.
We list the test problems in Table 1; they can also be found in [28]. In Tables 1-3, "No." denotes the number of the test problem, "Problem name" denotes the name of the test problem, "x_0" denotes the initial point, "n" denotes the dimension of the problem, "NI" is the total number of iterations, "NF" is the number of function evaluations, "‖F_k‖" is the final function value, and "λ_mean" denotes the mean value of λ. It is worth mentioning that "*" indicates that the algorithm stopped because the number of iterations exceeded the maximum while the termination condition was not yet satisfied.
Tables 2 and 3 show that the proposed algorithm solves these test problems efficiently and is competitive with the other two algorithms in NI and NF for most problems. For problems 1, 2, 16, 19, and 20, λ_mean is

Conclusions
In this paper, we presented a Cauchy point direction trust region algorithm for solving the nonlinear system of equations. This algorithm combines the trust region direction and the Cauchy point direction, which has the descent property and the automatic trust region property. The optimal convex combination parameter was determined by the golden section algorithm, an exact one-dimensional line search method.
The new algorithm was proven to have the descent property and the trust region property. We also showed the global convergence of the proposed algorithm under suitable conditions. More importantly, the numerical results demonstrated the faster convergence of the proposed algorithm over two existing methods, which makes it promising for solving large-scale nonlinear equations. However, the convergence rate of the proposed algorithm remains unclear. Therefore, establishing the theoretical convergence rate of the proposed algorithm and applying it to image restoration and compressive sensing problems will be among the topics of our future study.

Data Availability
The data used to support the findings of this study are included within the article.

Conflicts of Interest
The authors declare that there are no conflicts of interest regarding the publication of this paper.

Mathematical Problems in Engineering