Extension of Modified Polak-Ribière-Polyak Conjugate Gradient Method to Linear Equality Constraints Minimization Problems



Introduction
In this paper, we consider solving the following linear equality constrained optimization problem:
$$\min f(x) \quad \text{s.t.} \quad Ax = b, \tag{1}$$
where $f(x): \mathbb{R}^n \to \mathbb{R}$ is a smooth function and $A$ is an $m \times n$ matrix of rank $m$ ($m \le n$). In this paper, the feasible region $\Omega$ and the feasible direction set $D$ are defined, respectively, as follows:
$$\Omega = \{x \in \mathbb{R}^n : Ax = b\}, \qquad D = \{d \in \mathbb{R}^n : Ad = 0\}.$$
Taking the negative gradient as a search direction ($d_k = -\nabla f(x_k)$) is a natural way of solving unconstrained optimization problems. However, this approach does not work for constrained problems, since the gradient may not be a feasible direction. A basic technique to overcome this difficulty was initiated by Rosen [1] in 1960. To obtain a feasible search direction, Rosen projected the gradient onto the null space of $A$; that is,
$$d_k = -P g_k, \qquad P = I - A^T (A A^T)^{-1} A,$$
where $g_k = \nabla f(x_k)$. The convergence of Rosen's gradient projection method was proved by Du; see [2-4].
In fact, Rosen's gradient projection method is an extension of the steepest descent method. It is well known that the drawback of the steepest descent method is that it is prone to zig-zagging, especially when the graph of $f(x)$ has an "elongated" form. To overcome the zig-zagging, we use the conjugate gradient method to modify the projection direction.
It is well known that nonlinear conjugate gradient methods such as the Polak-Ribière-Polyak (PRP) method [5, 6] are very efficient for large-scale unconstrained optimization problems due to their simplicity and low storage. However, the PRP method does not necessarily satisfy the descent condition
$$g_k^T d_k \le -c \|g_k\|^2, \qquad c > 0.$$
Recently, Cheng [7] proposed a two-term modified PRP method (called TMPRP), in which the direction $d_k$ is given by
$$d_k = \begin{cases} -g_k, & k = 0, \\ -g_k + \beta_k^{PRP} \left( d_{k-1} - \dfrac{g_k^T d_{k-1}}{\|g_k\|^2} g_k \right), & k \ge 1, \end{cases} \tag{4}$$
where $\beta_k^{PRP} = g_k^T (g_k - g_{k-1}) / \|g_{k-1}\|^2$. This direction satisfies $g_k^T d_k = -\|g_k\|^2$, which is independent of any line search. The numerical results presented in Cheng [7] show some potential advantage of the TMPRP method. In fact, we can easily rewrite the above direction $d_k$ (4) in a three-term form:
$$d_k = -g_k + \beta_k^{PRP} d_{k-1} - \beta_k^{PRP} \frac{g_k^T d_{k-1}}{\|g_k\|^2} g_k.$$
In the past few years, researchers have paid increasing attention to conjugate gradient methods and their applications.
In the past few years, some researchers have also paid attention to equality constrained problems. Martínez et al. [30] proposed a spectral gradient method for linearly constrained optimization in which the search direction is obtained from a quadratic model of the objective; the direction can be computed by a quasi-Newton method in which the approximate Hessians satisfy a weak secant equation. The spectral choice of steplength is embedded into the Hessian approximation, and the whole process is combined with a nonmonotone line search strategy. C. Li and D. H. Li [31] proposed a feasible Fletcher-Reeves conjugate gradient method for solving the linear equality constrained optimization problem with exact line search. Their idea is to use the original Fletcher-Reeves conjugate gradient method to modify the Zoutendijk direction. The Zoutendijk direction is the feasible steepest descent direction; it is a solution of the following problem:
$$\min_d \ g_k^T d \quad \text{s.t.} \quad Ad = 0, \ \|d\| \le 1.$$
Li et al. [32] also extended the modified Fletcher-Reeves conjugate gradient method of Zhang et al. [33] to solve linear equality constrained optimization problems, combined with the Zoutendijk feasible direction method. Under some mild conditions, Li et al. [32] showed that the proposed method with an Armijo-type line search is globally convergent.
In this paper, we extend the two-term Polak-Ribière-Polyak (PRP) conjugate gradient method of Cheng [7] to solve the linear equality constrained optimization problem (1), combining it with the Rosen gradient projection method [1]. Under some mild conditions, we show that the method is globally convergent with an Armijo-type line search.
The rest of this paper is organized as follows. In Section 2, we propose the algorithm and prove that it generates feasible descent directions. In Section 3, we prove the global convergence of the proposed method. In Section 4, we give some improvements for the algorithm. In Section 5, we report some numerical results for the proposed method.

Algorithm and the Feasible Descent Direction
In this section, we propose a two-term Polak-Ribière-Polyak conjugate gradient projection method for solving the linear equality constrained optimization problem (1). The proposed method is a combination of the well-known Rosen gradient projection method and the two-term Polak-Ribière-Polyak (PRP) conjugate gradient method of Cheng [7].
The iterative process of the proposed method is given by
$$x_{k+1} = x_k + \alpha_k d_k, \tag{9}$$
and the search direction $d_k$ is defined by
$$d_k = \begin{cases} -P g_k, & k = 0, \\ -P g_k + \beta_k^{PRP} d_{k-1} - \beta_k^{PRP} \dfrac{g_k^T d_{k-1}}{\|P g_k\|^2} P g_k, & k \ge 1, \end{cases} \tag{10}$$
where
$$\beta_k^{PRP} = \frac{g_k^T (P g_k - P g_{k-1})}{\|P g_{k-1}\|^2}, \tag{11}$$
and $\alpha_k > 0$ is a steplength obtained by a line search.
For convenience, we call the method (9) and (10) the EMPRP method. We now prove that the direction $d_k$ defined by (10) and (11) is a feasible descent direction of $f$ at $x_k$.

Theorem 1. Suppose that $x_k \in \Omega$ and $P = I - A^T (A A^T)^{-1} A$. Let $d_k$ be defined by (10). If $P g_k \ne 0$, then $d_k$ is a feasible descent direction of $f$ at $x_k$.
Proof. From (9), (10), and the definition of $P$ (noting that $P$ is symmetric and idempotent, so $g_k^T P g_k = \|P g_k\|^2$), we have
$$g_k^T d_k = -\|P g_k\|^2 < 0. \tag{12}$$
This implies that $d_k$ provides a descent direction of $f$ at $x_k$.
In what follows, we show that $d_k$ is a feasible direction of $f$ at $x_k$. From (9), we have $A d_{k-1} = 0$ by induction, and
$$AP = A - A A^T (A A^T)^{-1} A = 0. \tag{13}$$
It follows from (10) and (13) that
$$A d_k = A \left( -P g_k + \beta_k^{PRP} d_{k-1} - \beta_k^{PRP} \frac{g_k^T d_{k-1}}{\|P g_k\|^2} P g_k \right) = 0. \tag{14}$$
When $k = 0$, we have
$$A d_0 = -A P g_0 = 0. \tag{15}$$
It is easy to get from (14) and (15) that $A d_k = 0$ for all $k$; that is, $d_k$ is a feasible direction.
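As a quick illustration of Theorem 1, the two identities established in the proof, $g_k^T d_k = -\|P g_k\|^2$ and $A d_k = 0$, can be checked numerically on a random instance (a sketch; the instance and dimensions are arbitrary):

```python
import numpy as np

# Check the descent and feasibility identities of Theorem 1 on a
# random problem: P = I - A^T (A A^T)^{-1} A, direction (10)-(11).
rng = np.random.default_rng(0)
m, n = 2, 5
A = rng.standard_normal((m, n))
P = np.eye(n) - A.T @ np.linalg.solve(A @ A.T, A)

g_prev = rng.standard_normal(n)
g = rng.standard_normal(n)
d_prev = -P @ g_prev                               # d_0 = -P g_0

Pg, Pg_prev = P @ g, P @ g_prev
beta = g @ (Pg - Pg_prev) / (Pg_prev @ Pg_prev)    # beta_k^PRP, eq. (11)
d = -Pg + beta * d_prev - beta * (g @ d_prev) / (Pg @ Pg) * Pg  # eq. (10)

print("descent residual |g^T d + ||Pg||^2| =", abs(g @ d + Pg @ Pg))
print("feasibility residual ||A d|| =", np.linalg.norm(A @ d))
```

Both residuals are zero in exact arithmetic and are of the order of machine precision in floating point.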
In the remainder of this paper, we always assume that $f(x)$ satisfies the following assumptions.

Assumption A. (i) The level set $\Omega_0 = \{x \in \Omega : f(x) \le f(x_0)\}$ is bounded.

(ii) The function $f: \mathbb{R}^n \to \mathbb{R}$ is continuously differentiable and bounded from below, and its gradient is Lipschitz continuous on an open ball $N$ containing $\Omega_0$; that is, there is a constant $L > 0$ such that
$$\|\nabla f(x) - \nabla f(y)\| \le L \|x - y\|, \quad \forall x, y \in N.$$

Since $\{f(x_k)\}$ is decreasing, it is clear that the sequence $\{x_k\}$ generated by Algorithm 2 is contained in $\Omega_0$. In addition, we get from Assumption A that there is a constant $\gamma > 0$ such that
$$\|\nabla f(x)\| \le \gamma, \quad \forall x \in \Omega_0.$$
Since the matrix $P$ is an orthogonal projection matrix, $\|P\| \le 1$, and therefore
$$\|P \nabla f(x)\| \le \gamma, \quad \forall x \in \Omega_0.$$
We state the steps of the algorithm as follows.
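The formal statement of Algorithm 2 does not survive this extraction; the following Python sketch is consistent with the iteration (9), the direction (10)-(11), and an Armijo-type line search of the form $f(x_k + \alpha d_k) \le f(x_k) - \delta \alpha^2 \|d_k\|^2$; the parameter names and default values are assumptions.

```python
import numpy as np

def emprp(f, grad, A, x0, delta=1e-6, rho=0.3, eps=1e-5, max_iter=10**5):
    """Sketch of the EMPRP method: projected two-term PRP directions (10)-(11)
    with an Armijo-type backtracking line search. x0 must be feasible."""
    n = len(x0)
    P = np.eye(n) - A.T @ np.linalg.solve(A @ A.T, A)  # projector onto null(A)
    x = np.asarray(x0, dtype=float)
    g = grad(x)
    Pg = P @ g
    d = -Pg                                            # d_0 = -P g_0
    for _ in range(max_iter):
        if np.linalg.norm(Pg) <= eps:                  # stationarity on Omega
            break
        # Armijo-type search: f(x + a d) <= f(x) - delta * a^2 * ||d||^2
        a, fx, dd = 1.0, f(x), d @ d
        while f(x + a * d) > fx - delta * a * a * dd:
            a *= rho
        x = x + a * d                                  # iteration (9)
        g_new = grad(x)
        Pg_new = P @ g_new
        if np.linalg.norm(Pg_new) <= eps:
            Pg = Pg_new
            break
        beta = g_new @ (Pg_new - Pg) / (Pg @ Pg)       # eq. (11)
        d = (-Pg_new + beta * d
             - beta * (g_new @ d) / (Pg_new @ Pg_new) * Pg_new)  # eq. (10)
        g, Pg = g_new, Pg_new
    return x

# Toy usage: minimize 0.5*||x||^2 subject to x1 + x2 = 1, feasible start (1, 0).
x = emprp(lambda z: 0.5 * z @ z, lambda z: z,
          np.array([[1.0, 1.0]]), np.array([1.0, 0.0]))
print(x)  # -> [0.5 0.5], the constrained minimizer
```

Since every direction satisfies $A d_k = 0$, each iterate stays on the constraint manifold, so feasibility never has to be restored.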

Global Convergence
In what follows, we establish the global convergence theorem of the EMPRP method for general nonlinear objective functions. We first give some important lemmas for the EMPRP method.
Lemma 3. Suppose that $x_k \in \Omega$ and $P = I - A^T (A A^T)^{-1} A$, and let $d_k$ be defined by (10) and (11). Then there exists a constant $c > 0$ such that
$$\|d_k\| \le c, \quad \forall k.$$

Proof. By the definition of $d_k$, and since $d_{k-1}$ lies in the null space of $A$ (so that $g_k^T d_{k-1} = (P g_k)^T d_{k-1} \le \|P g_k\| \, \|d_{k-1}\|$), we have
$$\|d_k\| \le \|P g_k\| + 2 \, |\beta_k^{PRP}| \, \|d_{k-1}\|.$$
On the other hand, it follows from (12) and (21), together with the Lipschitz continuity of the gradient and the bound $\|P \nabla f(x)\| \le \gamma$ on $\Omega_0$, that the right-hand side is uniformly bounded. Hence there exists a constant $c > 0$ such that $\|d_k\| \le c$ for all $k$.
We now establish the global convergence theorem of the EMPRP method for general nonlinear objective functions.
We now prove (31), namely $\liminf_{k \to \infty} \|P g_k\| = 0$, by considering the following two cases. If $\{\alpha_k\}$ is bounded away from zero along a subsequence, then (31) follows by summing the line search inequality (20), since $f$ is bounded from below. Otherwise, suppose on the contrary that there are a constant $\epsilon_0 > 0$ and an infinite index set $K$ with $\|P g_k\| \ge \epsilon_0$ and $\alpha_k \to 0$ for $k \in K$.

When $k \in K$ is sufficiently large, by the line search condition, $\rho^{-1} \alpha_k$ does not satisfy inequality (20). This means
$$f(x_k + \rho^{-1} \alpha_k d_k) - f(x_k) > -\delta \rho^{-2} \alpha_k^2 \|d_k\|^2. \tag{34}$$
By the mean-value theorem and inequality (17), there is a $\theta_k \in (0, 1)$ such that $x_k + \theta_k \rho^{-1} \alpha_k d_k \in N$ and
$$f(x_k + \rho^{-1} \alpha_k d_k) - f(x_k) = \rho^{-1} \alpha_k \nabla f(x_k + \theta_k \rho^{-1} \alpha_k d_k)^T d_k \le \rho^{-1} \alpha_k g_k^T d_k + L \rho^{-2} \alpha_k^2 \|d_k\|^2.$$
Substituting the last inequality into (34), for all $k \in K$ sufficiently large, we have from (12)
$$\|P g_k\|^2 = -g_k^T d_k < (L + \delta) \rho^{-1} \alpha_k \|d_k\|^2.$$
Since $\alpha_k \to 0$ on $K$ and $\{\|d_k\|\}$ is bounded by Lemma 3, the right-hand side tends to zero, which contradicts $\|P g_k\| \ge \epsilon_0$. The proof is then complete.

Improvement for Algorithm 2
In this section, we propose techniques for improving the efficiency of Algorithm 2 in practical computation, concerning the computation of projections and the stepsize of the Armijo-type line search.

Computing Projection.
In this paper, as in Gould et al. [34], instead of computing a basis for the null space of the matrix $A$, we choose to work directly with the matrix of constraint gradients, computing projections by normal equations. As the computation of the projection is a key step in the proposed method, following Gould et al. [34], this projection can be computed in an alternative way. Let $w = P g$. We can express this as
$$w = g - A^T v,$$
where $v$ is the solution of
$$A A^T v = A g. \tag{40}$$
Note that (40) is a normal equation. Since $A$ is an $m \times n$ matrix of rank $m$ ($m \le n$), $A A^T$ is a symmetric positive definite matrix. We use the Doolittle (LU) factorization of $A A^T$ to solve (40).
Since the matrix $A$ is constant, the factorization of $A A^T$ needs to be carried out only once, at the beginning of the iterative process. Using a Doolittle factorization $A A^T = L U$, the solution of (40) can be computed in the following form:
$$L y = A g, \qquad U v = y,$$
where $L$ and $U$ are the Doolittle factors of $A A^T$.
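The projection scheme above can be sketched as follows: factor $A A^T = LU$ once, then compute each projection $Pg = g - A^T v$ by two triangular solves. The Doolittle routine below is a textbook version without pivoting, which is adequate here because $A A^T$ is symmetric positive definite.

```python
import numpy as np

def doolittle_lu(M):
    """Doolittle factorization M = L U with unit lower-triangular L.
    No pivoting: safe for the symmetric positive definite matrix A A^T."""
    m = M.shape[0]
    L, U = np.eye(m), np.zeros_like(M, dtype=float)
    for i in range(m):
        for j in range(i, m):
            U[i, j] = M[i, j] - L[i, :i] @ U[:i, j]
        for j in range(i + 1, m):
            L[j, i] = (M[j, i] - L[j, :i] @ U[:i, i]) / U[i, i]
    return L, U

def project(g, A, L, U):
    """P g = g - A^T v, where (A A^T) v = A g is solved via L y = A g, U v = y."""
    y = np.linalg.solve(L, A @ g)   # L y = A g (triangular system)
    v = np.linalg.solve(U, y)       # U v = y (triangular system)
    return g - A.T @ v

A = np.array([[1.0, 2.0, 3.0],
              [0.0, 1.0, 1.0]])
L, U = doolittle_lu(A @ A.T)        # factor once; A is constant
g = np.array([1.0, -2.0, 0.5])
Pg = project(g, A, L, U)
print("||A (P g)|| =", np.linalg.norm(A @ Pg))  # ~0: Pg lies in null(A)
```

In a production code the two triangular systems would be solved by dedicated forward and back substitution rather than a general solver; the structure of the computation is the same.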
Let the initial stepsize of the Armijo line search be an approximation to the exact line search stepsize $\alpha_k^*$. It is not difficult to see that if the estimation errors involved are sufficiently small, then the resulting quantities are good estimates of $\alpha_k^*$. To improve the efficiency of the EMPRP method in practical computation, we utilize the following line search process.

Numerical Experiments
This section reports some numerical experiments. First, we test the EMPRP method and compare it with the Rosen gradient projection method [1] on low-dimensional problems. Second, we test the EMPRP method and compare it with the spectral gradient method of Martínez et al. [30] and the feasible Fletcher-Reeves conjugate gradient method of Li et al. [32] on high-dimensional problems. In the line search process, we set the parameters to $10^{-6}$, $0.3$, and $0.02$. The methods in the tables have the following meanings.

(i) "EMPRP" stands for the proposed two-term PRP conjugate gradient projection method.

(ii) "ROSE" stands for the Rosen gradient projection method in [1].

(iii) "SPCG" stands for the spectral gradient method in Martínez et al. [30].
(iv) "FFR" stands for the feasible modified Fletcher-Reeves conjugate gradient method with the Armijo-type line search in Li et al. [32].
We stop the iteration if the condition $\|P g_k\| \le \epsilon$ is satisfied, where $\epsilon = 10^{-5}$. If the number of iterations exceeds $10^5$, we also stop the iteration and declare a failure. All of the algorithms are coded in Matlab 7.0 and run on a personal computer with a 2.0 GHz CPU.

Numerical Comparison of EMPRP and ROSE.
We test the performance of the EMPRP and ROSE methods on the following test problems with given initial points. The results are listed in Table 1, where $n$ stands for the dimension of the tested problem and $m$ stands for the number of constraints. We report the following results: the CPU time Time (in seconds), the number of iterations Iter, the number of gradient evaluations Geval, and the number of function evaluations Feval.

Problem 1 (HS28 [35]). The function HS28 in [35] is defined as follows:
$$f(x) = (x_1 + x_2)^2 + (x_2 + x_3)^2 \quad \text{s.t.} \quad x_1 + 2x_2 + 3x_3 = 1,$$
with the initial point $x_0 = (-4, 1, 1)$. The optimal solution is $x^* = (0.5, -0.5, 0.5)$, with optimal function value $f(x^*) = 0$.
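The stated optimum of Problem 1 is easy to verify directly: $x^*$ satisfies the constraint and drives both squared terms to zero.

```python
import numpy as np

# Sanity check of Problem 1 (HS28): the stated optimal point is feasible
# and attains the stated optimal value f(x*) = 0.
def f(x):
    return (x[0] + x[1])**2 + (x[1] + x[2])**2

A = np.array([[1.0, 2.0, 3.0]])   # constraint x1 + 2*x2 + 3*x3 = 1
b = np.array([1.0])
x_star = np.array([0.5, -0.5, 0.5])

print(np.allclose(A @ x_star, b))  # True: feasibility
print(f(x_star) == 0.0)            # True: optimal value 0
```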
From Table 1, we can see that the EMPRP method performs better than the Rosen gradient projection method in [1], which implies that the EMPRP method can improve the computational efficiency of the Rosen gradient projection method for solving linear equality constrained optimization problems.

Numerical Comparison of EMPRP, FFR, and SPCG.
In this subsection, we test the EMPRP method and compare it with the spectral gradient method (called SPCG) of Martínez et al. [30] and the feasible Fletcher-Reeves conjugate gradient method (called FFR) of Li et al. [32] on the following high-dimensional problems with given initial points. The results are listed in Tables 2, 3, 4, 5, and 6. We report the following results: the CPU time Time (in seconds) and the number of iterations Iter.

Problem 6 (MAD6 [36]). Given a positive integer $n$, the objective function is defined in Asaadi [36], with the initial point $x_0 = (1, 0.6, 0.2, \ldots, 1 - 0.4(n-1))$, that is, $(x_0)_i = 1 - 0.4(i - 1)$ for $i = 1, \ldots, n$.
Stepsize. A drawback of the Armijo line search is the choice of the initial stepsize. If the initial stepsize is too large, then the procedure needs many more function evaluations; if it is too small, then the efficiency of the related algorithm is decreased. Therefore, we should choose an adequate initial stepsize at each iteration. In what follows, we propose a way to generate it. We first estimate the stepsize determined by the exact line search. Suppose for the moment that $f$ is twice continuously differentiable. We denote by $H(x)$ the Hessian of $f$ at $x$ and abbreviate $H(x_k)$ as $H_k$. Notice that the exact line search stepsize $\alpha_k$ satisfies
$$-g_k^T d_k = \left( \nabla f(x_k + \alpha_k d_k) - g_k \right)^T d_k \approx \alpha_k \, d_k^T H_k d_k. \tag{42}$$
This shows that the scalar $\tau_k \triangleq -g_k^T d_k / (d_k^T H_k d_k)$ is an estimation of $\alpha_k$. To avoid the computation of the second derivative, we further estimate $\tau_k$ by replacing the curvature term $d_k^T H_k d_k$ with a difference of gradients along $d_k$.

Process. If the inequality
$$f(x_k + |\tau_k| d_k) \le f(x_k) - \delta \, |\tau_k|^2 \|d_k\|^2 \tag{45}$$
holds, then we let $\alpha_k = |\tau_k|$. Otherwise, we let $\alpha_k$ be the largest scalar in the set $\{|\tau_k| \rho^i, \ i = 0, 1, \ldots\}$ such that inequality (45) is satisfied.
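The curvature estimate can be sketched as follows; the finite-difference surrogate for $d_k^T H_k d_k$ is an assumption standing in for the paper's exact formula, which is garbled in this extraction.

```python
import numpy as np

def initial_stepsize(grad, x, d, g, eps=1e-6):
    """Estimate the exact-line-search step tau = -g^T d / (d^T H d), with the
    curvature d^T H d approximated by a gradient difference along d
    (a hedged sketch; the paper's surrogate may differ in detail)."""
    curv = (grad(x + eps * d) - g) @ d / eps   # ~ d^T H(x) d
    if curv <= 0:
        return 1.0                             # fall back to a unit step
    return -(g @ d) / curv

# For the quadratic f(x) = 0.5 x^T x (g = x, H = I), the estimate recovers
# the exact minimizing step along d.
x = np.array([1.0, 0.0])
g = x.copy()
d = -g
t = initial_stepsize(lambda z: z, x, d, g)
print(t)  # close to 1.0, the exact line search step here
```

The estimate is then used as the trial step $|\tau_k|$ in the backtracking process (45).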

Table 1: Test results for Problems 1-5 with given initial points.

Table 2: Test results for Problem 6 with given initial points.