A Self-Adjusting Spectral Conjugate Gradient Method for Large-Scale Unconstrained Optimization



Introduction
Consider the following unconstrained optimization problem:

    min f(x),  x ∈ ℝⁿ,  (1)

where f : ℝⁿ → ℝ is a nonlinear smooth function whose gradient is available. Conjugate gradient methods are very efficient for solving (1), especially when the dimension n is large, and have the following iterative form:

    x_{k+1} = x_k + α_k d_k,  (2)

where α_k > 0 is a steplength obtained by a line search and d_k is the search direction defined by

    d_1 = −g_1,   d_k = −g_k + β_k d_{k−1}  for k ≥ 2,  (3)

where β_k is a scalar and g_k denotes the gradient of f at the point x_k.
There are at least six classical formulas for β_k, given below:

    β_k^{FR} = ‖g_k‖² / ‖g_{k−1}‖²,              β_k^{PR} = g_kᵀ y_{k−1} / ‖g_{k−1}‖²,
    β_k^{HS} = g_kᵀ y_{k−1} / (d_{k−1}ᵀ y_{k−1}),    β_k^{CD} = −‖g_k‖² / (d_{k−1}ᵀ g_{k−1}),
    β_k^{LS} = −g_kᵀ y_{k−1} / (d_{k−1}ᵀ g_{k−1}),   β_k^{DY} = ‖g_k‖² / (d_{k−1}ᵀ y_{k−1}),  (4)

where y_{k−1} = g_k − g_{k−1} and ‖·‖ denotes the Euclidean norm. Among these six methods, the HS, PR, and LS methods are especially efficient in real computations, but they may fail to converge globally for general functions. The FR, CD, and DY methods are globally convergent but perform much worse numerically. To combine the good numerical performance of the HS method with the nice global convergence property of the DY method, Dai and Yuan [1] proposed an efficient hybrid formula for β_k:

    β_k^{HSDY} = max{0, min{β_k^{HS}, β_k^{DY}}}.  (5)

Their studies suggested that the HSDY method (5) shares the HS method's advantage of avoiding the propensity for short steps [1]. They also proved that the HSDY method with the standard Wolfe line search produces a descent search direction at each iteration and converges globally. The descent condition can be crucial for the convergence analysis of conjugate gradient methods with inexact line searches [2, 3]. Further, several modified conjugate gradient methods [4–7] possess the sufficient descent property without any line search condition. Recently, Yu [8] proposed a spectral version of the HSDY method, in which the search direction takes the form d_k = −θ_k g_k + β_k^{HSDY} d_{k−1} for a suitable spectral parameter θ_k. Numerical experiments show that this simple preconditioning technique benefits its performance.
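To make the six scalars and the hybrid rule concrete, here is a minimal Python/NumPy sketch; the function names and sample vectors are illustrative and not part of the paper:

```python
import numpy as np

def beta_formulas(g, g_prev, d_prev):
    """The six classical beta_k choices, given the current gradient g,
    the previous gradient g_prev, and the previous direction d_prev."""
    y = g - g_prev  # y_{k-1} = g_k - g_{k-1}
    return {
        "FR": (g @ g) / (g_prev @ g_prev),
        "PR": (g @ y) / (g_prev @ g_prev),
        "HS": (g @ y) / (d_prev @ y),
        "CD": -(g @ g) / (d_prev @ g_prev),
        "LS": -(g @ y) / (d_prev @ g_prev),
        "DY": (g @ g) / (d_prev @ y),
    }

def beta_hsdy(g, g_prev, d_prev):
    """Dai-Yuan hybrid (5): beta = max{0, min{beta_HS, beta_DY}}."""
    b = beta_formulas(g, g_prev, d_prev)
    return max(0.0, min(b["HS"], b["DY"]))
```

The next search direction would then be d_k = −g_k + β_k d_{k−1}, or −θ_k g_k + β_k d_{k−1} in a spectral variant.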
In this paper, based on a new conjugate condition [9], we propose a new hybrid spectral conjugate gradient method, with β_k defined by (8) and the corresponding search direction by (12). A full description of the DS-HSDY method is formally given as follows.
The rest of the paper is organized as follows. In the next section, we show that the DS-HSDY method possesses a self-adjusting property. In Section 3, we establish its global convergence under the standard Wolfe line search conditions. Section 4 gives numerical results on a set of large-scale unconstrained test problems from CUTEr to illustrate the convergence and efficiency of the proposed method. Finally, a brief conclusion is given.

Self-Adjusting Property
In this section, we prove that the DS-HSDY method possesses a self-adjusting property. To begin with, we assume that g_k ≠ 0 for all k ≥ 1; otherwise, a stationary point has been found. We define the following two important quantities:

    t_k = ‖d_k‖² / (g_kᵀ d_k)²,    r_k = −g_kᵀ d_k / ‖g_k‖².

The quantity t_k measures the size of d_k, while r_k measures the descent degree of d_k. In fact, if r_k > 0, then d_k is a descent direction. Furthermore, if r_k ≥ c for some constant c > 0, then we have the sufficient descent condition

    g_kᵀ d_k ≤ −c ‖g_k‖².

On the other hand, it follows from (12) that g_kᵀ d_k can be expanded into the relation (18). Dividing both sides of (18) by (g_kᵀ d_k)² and using (7), we obtain (19). It then follows from (19) and the definitions of t_k and r_k that (20) holds. Additionally, we assume that there exist positive constants γ and γ̄ such that

    0 < γ ≤ ‖g_k‖ ≤ γ̄,  ∀k ≥ 1;  (21)

then we have the following result.

Theorem 2. Consider the method (2), (8), and (12), where d_k is a descent direction. If (21) holds, then there exist positive constants ξ₁, ξ₂, and ξ₃ such that t_k and r_k remain suitably bounded for all k; this is the self-adjusting property of the method.
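Using the definitions of t_k and r_k reconstructed above (they follow the standard self-adjusting analysis; the helper names and the threshold c are illustrative assumptions), the two quantities and the sufficient descent test can be sketched in Python:

```python
import numpy as np

def descent_quantities(g, d):
    """Compute the two quantities used in the self-adjusting analysis:
    t = ||d||^2 / (g^T d)^2   (size of d relative to its descent slope),
    r = -(g^T d) / ||g||^2    (descent degree of d)."""
    gd = g @ d
    t = (d @ d) / gd**2
    r = -gd / (g @ g)
    return t, r

def sufficient_descent(g, d, c=1e-4):
    """d satisfies the sufficient descent condition g^T d <= -c ||g||^2
    exactly when its descent degree r is at least c."""
    _, r = descent_quantities(g, d)
    return r >= c
```

For the steepest descent direction d = −g, both quantities reduce to t_k = 1/‖g_k‖² and r_k = 1, the ideal descent degree.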

Global Convergence
Throughout the paper, we assume that the following standard assumptions hold.

Assumption 1. (i) The level set Ω = {x ∈ ℝⁿ : f(x) ≤ f(x₁)} is bounded. (ii) In some neighborhood N of Ω, f is continuously differentiable and its gradient is Lipschitz continuous; that is, there exists a constant L > 0 such that ‖g(x) − g(y)‖ ≤ L‖x − y‖ for all x, y ∈ N.

Under Assumption 1 on f, we can obtain a useful lemma.
Lemma 4. Suppose that x₁ is a starting point for which Assumption 1 holds. Consider any method of the form (2), where d_k is a descent direction and α_k satisfies the weak Wolfe conditions. Then the Zoutendijk condition

    ∑_{k≥1} (g_kᵀ d_k)² / ‖d_k‖² < +∞  (31)

holds.

For the DS-HSDY method, we have the following global convergence result.
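For concreteness, the weak Wolfe conditions assumed in Lemma 4 can be checked as follows; this is a sketch under the usual parameter choice 0 < δ < σ < 1, with f and grad as placeholder callables:

```python
import numpy as np

def satisfies_wolfe(f, grad, x, d, alpha, delta=1e-4, sigma=0.5):
    """Check the (weak) Wolfe conditions at steplength alpha for a
    descent direction d (g^T d < 0):
      sufficient decrease: f(x + a d) <= f(x) + delta * a * g^T d,
      curvature:           grad(x + a d)^T d >= sigma * g^T d."""
    g0d = grad(x) @ d
    x_new = x + alpha * d
    armijo = f(x_new) <= f(x) + delta * alpha * g0d
    curvature = grad(x_new) @ d >= sigma * g0d
    return armijo and curvature
```

Any steplength accepted by this test satisfies the hypotheses of Lemma 4.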
Theorem 5. Suppose that x₁ is a starting point for which Assumption 1 holds. Consider the DS-HSDY method. If g_k ≠ 0 for all k ≥ 1, then every search direction is a descent direction, that is,

    g_kᵀ d_k < 0,  ∀k ≥ 1,  (32)

and

    lim inf_{k→∞} ‖g_k‖ = 0.  (33)

Proof. We first prove (32) by induction. From the formula for β_k^{SDY}, we obtain an expression for g_kᵀ d_k. Combining this expression with (10), applied with k replaced by k − 1, together with (40), (41), and the fact that σ < 1, we conclude that g_kᵀ d_k < 0. Thus, by induction, (32) holds.

We now prove (33) by contradiction and assume that there exists some constant ε > 0 such that

    ‖g_k‖ ≥ ε,  ∀k ≥ 1.

Dividing both sides of (44) by (g_kᵀ d_k)² and using (36) and (40), we obtain an upper bound on ‖d_k‖² / (g_kᵀ d_k)². In addition, combining this with the bounds established above, the quantities (g_kᵀ d_k)² / ‖d_k‖² are bounded away from zero, so their sum diverges. This contradicts the Zoutendijk condition (31). Hence we complete the proof.

Numerical Result
In this section, we compare the performance of the DS-HSDY method with the PRP+ method [11], the HSDY method [1], and the S-HSDY method [8]. The test problems are taken from CUTEr (http://hsl.rl.ac.uk/cuter-www/problems.html) with the standard initial points. All codes are written in double-precision Fortran and compiled with f77 (default compiler settings) on a PC (AMD Athlon XP 2500+ CPU, 1.84 GHz). Our line search subroutine computes α_k such that the Wolfe conditions (10) and (11) hold with δ = 10⁻⁴ and σ = 0.5. The results are reported in Tables 1, 2, 3, and 4 in the form NI/Nfg/T, where we report the dimension of the problem (n), the number of iterations (NI), the number of function and gradient evaluations (Nfg), and the CPU time (T) in units of 0.01 seconds.
Figure 1 shows the performance of the tested methods with respect to CPU time, evaluated using the performance profiles of Dolan and Moré [12]. That is, for each method, we plot the fraction P of problems for which the method is within a factor τ of the best time. The top curve corresponds to the method that solved the most problems in a time within a factor τ of the best time; the left side of the figure gives the percentage of test problems for which a method is the fastest. As we can see from Figure 1, the DS-HSDY method has the best performance, outperforming the S-HSDY method, the HSDY method, and the well-known PRP+ method.
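The profile values themselves are straightforward to compute; here is a minimal sketch, assuming a hypothetical matrix of per-problem CPU times (not the paper's data):

```python
import numpy as np

def performance_profile(times, taus):
    """Dolan-More performance profile.  times[p, s] is the cost of solver s
    on problem p (use np.inf for a failure).  Returns, for each tau in taus,
    the fraction of problems each solver solves within a factor tau of the
    best solver on that problem."""
    times = np.asarray(times, dtype=float)
    best = times.min(axis=1, keepdims=True)   # best time per problem
    ratios = times / best                      # performance ratios r[p, s]
    # rho_s(tau) = fraction of problems with ratio <= tau
    return np.array([[np.mean(ratios[:, s] <= tau)
                      for s in range(times.shape[1])] for tau in taus])
```

Plotting the rows of the returned array against τ reproduces curves like those in Figure 1.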

Conclusion
In this paper, we proposed an efficient hybrid spectral conjugate gradient method with a self-adjusting property. Under suitable assumptions, we established a global convergence result for the DS-HSDY method. Numerical results indicate that the proposed method is efficient for large-scale unconstrained optimization problems.

Figure 1: Performance profiles for CPU time.

Table 1: Numerical results for the PRP+ method.

Table 2: Numerical results for the HSDY method.

Table 3: Numerical results for the S-HSDY method.