A Modified Nonlinear Conjugate Gradient Method for Engineering Computation

A general criterion for the global convergence of the nonlinear conjugate gradient method is established, based on which the global convergence of a new modified three-parameter nonlinear conjugate gradient method is proved under some mild conditions. A large number of numerical experiments are executed and reported, showing that the proposed method is competitive and provides a viable alternative. Finally, one engineering example is analyzed for illustrative purposes.


Introduction
Unconstrained optimization methods are widely used in the fields of nonlinear dynamic systems and engineering computation to obtain numerical solutions of optimal control problems [1][2][3][4]. In this paper, we consider the unconstrained optimization problem: min f(x), x ∈ R^n, (1), where f : R^n → R is a continuously differentiable function.
The nonlinear conjugate gradient (CG) method is highly useful for solving this kind of problem because of its simplicity and its very low memory requirement [1]. The iterative formula of the CG methods is given by x_{k+1} = x_k + α_k d_k, (2), where α_k > 0 is the step length, obtained by carrying out some line search, such as an exact or inexact line search.
In practical computation, the exact line search is time-consuming and its workload is very large, so one usually adopts an inexact line search (see [5][6][7]). A major inexact line search is the strong Wolfe-Powell line search, which finds a step length α_k in (2) satisfying

f(x_k + α_k d_k) ≤ f(x_k) + δ α_k g_k^T d_k, (3)
|g(x_k + α_k d_k)^T d_k| ≤ σ |g_k^T d_k|, (4)

where 0 < δ < 1/2 and δ < σ < 1. In this paper, the modified Wolfe-Powell line search finds a step length α_k in (2) satisfying (3) together with a modified curvature condition (5). The search direction d_k is defined by

d_k = -g_k for k = 1; d_k = -g_k + β_k d_{k-1} for k ≥ 2, (6)

where g_k denotes the gradient ∇f(x_k) and β_k is a scalar chosen so that d_k becomes the k-th conjugate direction. There have been many well-known formulae for the scalar β_k, for example,

β_k^PRP = g_k^T y_{k-1} / ||g_{k-1}||^2 (Polak-Ribiere-Polyak [9], 1969),
β_k^DY = ||g_k||^2 / (d_{k-1}^T y_{k-1}) (Dai-Yuan [10], 1999),

and other formulae (e.g., [11][12][13]), where ||·|| is the Euclidean norm of vectors, y_{k-1} = g_k - g_{k-1}, and "T" stands for the transpose. These methods are generally regarded as very efficient conjugate gradient methods in practical computation.
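As a concrete illustration, the strong Wolfe-Powell conditions can be checked numerically. The following sketch is ours, not the paper's code; the parameter values δ = 0.1 and σ = 0.4 and the one-dimensional quadratic test function are illustrative choices only.

```python
def strong_wolfe_holds(f, grad, x, d, alpha, delta=0.1, sigma=0.4):
    """Check the strong Wolfe-Powell conditions (sufficient decrease and
    curvature) for a candidate step length alpha along direction d."""
    dot = lambda u, v: sum(a * b for a, b in zip(u, v))
    x_new = [xi + alpha * di for xi, di in zip(x, d)]
    gtd = dot(grad(x), d)                                      # g_k^T d_k (negative for descent)
    armijo = f(x_new) <= f(x) + delta * alpha * gtd            # condition (3)
    curvature = abs(dot(grad(x_new), d)) <= sigma * abs(gtd)   # condition (4)
    return armijo and curvature

# Example: f(x) = x^2 with the steepest-descent direction d = -f'(x)
f = lambda x: x[0] ** 2
grad = lambda x: [2 * x[0]]
ok = strong_wolfe_holds(f, grad, [1.0], [-2.0], 0.5)  # alpha = 0.5 reaches the minimizer
```

With alpha = 0.5 the step lands exactly on the minimizer, so both conditions hold; a full step alpha = 1.0 overshoots and violates the sufficient-decrease condition.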
In recent decades, in order to obtain a CG method that has not only good convergence properties but also excellent computational performance, many researchers have studied the CG method extensively and obtained improved methods with good properties [14][15][16][17][18][19][20]. Li and Feng [21] gave a modified CG method which generates a sufficient descent direction and showed its global convergence under the strong Wolfe-Powell conditions. Dai and Wen [22] gave a scaled conjugate gradient method and proved its global convergence under the strong Wolfe-Powell conditions. Al-Baali [23] proved that the FR method satisfies the sufficient descent condition and converges globally for general objective functions if the strong Wolfe-Powell line search is used. Dai and Yuan [24] also introduced a three-parameter formula (10) for β_k, with parameters satisfying 0 ≤ λ_k ≤ 1, 0 ≤ μ_k ≤ 1, and 0 ≤ ω_k ≤ 1 - μ_k. This formula includes the above classes of CG methods as extreme cases, and global convergence of the three-parameter CG method was proved under the strong Wolfe-Powell line search. When one of the parameters is fixed appropriately, the family reduces to the two-parameter family of conjugate gradient methods in [25]; fixing two of the parameters further reduces it to the one-parameter family in [26]. Therefore, the three-parameter family has the one-parameter family in [26] and the two-parameter family in [25] as its subfamilies.
In addition, some hybrid methods can also be regarded as special cases of the three-parameter family [24]. For many of the modified CG methods above, global convergence was obtained under the strong Wolfe-Powell line search; in this paper, we study the CG method further, and our main aim is to improve its numerical performance while preserving global convergence under a modified Wolfe-Powell line search. This paper is organized as follows. We first present a criterion for the global convergence of the CG method in the next section. In Section 3, we propose a new modified three-parameter conjugate gradient method and establish global convergence results for the corresponding algorithm under the modified Wolfe-Powell line search. Preliminary numerical results are contained in Section 4. One engineering example is analyzed for illustration in Section 5. Finally, conclusions appear in Section 6.

A Criterion for the Global Convergence of CG Method
In this section, we first adopt the following assumption, which is used commonly in the research literature.

The Global Convergence for the New Formula and Algorithm Frame
We use the maximum function to truncate at zero. Using this equality, we can rewrite the denominator of (27); when the stated condition holds, the denominator of β_k^new given by (27) reduces to the denominator of β_k^*. On the other hand, under the analogous condition, the numerator of (27) reduces to the numerator of β_k^*. From the above analysis, we can see that (27) is indeed an extension of (10). Due to the presence of the parameters λ_k, μ_k, and ω_k, the methods given by (2), (6), and (27) form a flexible family of conjugate gradient methods. The numerical results in Section 4 demonstrate the influence of these parameters on formula (27).
The result shows that the search direction satisfies the descent condition g_k^T d_k < 0; this condition may be crucial for the convergence analysis of any conjugate gradient method.

Lemma 6. Suppose that Assumption 1 holds and that {x_k} is given by (2) and (6), where α_k satisfies the modified Wolfe-Powell conditions (3) and (5), while β_k is computed by (27). Then, one has

Theorem 7. Suppose the objective function f(x) satisfies Assumption 1; consider methods (2) and (6), where β_k is given by (27). The result shows that the proposed algorithm with the modified Wolfe-Powell line search possesses global convergence.

Algorithm A.
Based on the discussion above, we can now describe the algorithm frame for solving the unconstrained optimization problem (1) as follows.
Step 1. If the stopping criterion ||g_k|| < ε_0 is satisfied, then stop; otherwise, go to Step 2.

Step 2. Compute the search direction d_k by (6), where β_k is computed by (27).

Step 3. Find a step length α_k satisfying the modified Wolfe-Powell conditions (3) and (5), and set x_{k+1} = x_k + α_k d_k.

Step 4. Set k := k + 1 and go to Step 1.
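The four steps above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the classical Dai-Yuan formula stands in for (27), whose exact form is not reproduced above, and a simple backtracking search on the sufficient-decrease condition (3) stands in for the full modified Wolfe-Powell line search.

```python
def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def cg_minimize(f, grad, x0, eps=1e-4, max_iter=5000):
    """Conjugate gradient frame (Steps 1-4), DY beta as a stand-in for (27)."""
    x = list(x0)
    g = grad(x)
    d = [-gi for gi in g]                      # initial direction: steepest descent
    for _ in range(max_iter):
        if dot(g, g) ** 0.5 < eps:             # Step 1: stopping criterion ||g_k|| < eps
            break
        if dot(g, d) >= 0:                     # safeguard: restart if not a descent direction
            d = [-gi for gi in g]
        # Step 3 (simplified): backtracking on the sufficient-decrease condition (3)
        alpha, delta, gtd = 1.0, 0.04, dot(g, d)
        while f([xi + alpha * di for xi, di in zip(x, d)]) > f(x) + delta * alpha * gtd:
            alpha *= 0.5
        x = [xi + alpha * di for xi, di in zip(x, d)]
        g_new = grad(x)
        y = [gn - gi for gn, gi in zip(g_new, g)]
        denom = dot(d, y)
        beta = dot(g_new, g_new) / denom if denom != 0 else 0.0  # DY beta, stand-in for (27)
        d = [-gn + beta * di for gn, di in zip(g_new, d)]        # Step 2: direction update (6)
        g = g_new                              # Step 4: k := k + 1
    return x

# Example: the sphere function, one of the paper's test problems
sphere = lambda x: sum(xi ** 2 for xi in x)
sphere_grad = lambda x: [2.0 * xi for xi in x]
x_star = cg_minimize(sphere, sphere_grad, [3.0, -2.0])
```

On the sphere function, the first backtracked step already reaches the minimizer at the origin, after which the stopping criterion of Step 1 fires.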
Here, x^* and f(x^*) are the optimal solution and the function value at the optimal solution, respectively. For each algorithm, the parameters are chosen as δ = 0.04 and σ = 0.5. All codes were written in MATLAB 7.5 and run on a Lenovo machine with a 1.90 GHz CPU, 2.43 GB of RAM, and the Windows XP operating system. The iteration stops when one of the following conditions holds: (1) ||g_k|| ≤ ε_0 = 10^-4 or (2) the number of iterations Itr > 5000. If condition (2) occurs, the method is deemed to have failed on the corresponding test problem, which is marked in the tables. For the first three test problems, we present experimental results to observe the behavior of the proposed and the DY (given by formula (10)) conjugate gradient algorithms for different λ_k, μ_k, and ω_k. Details of the parameter-setting schemes are given in Table 1. Numerical results for these test problems are listed in Tables 2, 3, 4, 5, 6, 7, and 8, respectively. Table 9 shows numerical results for the other test problems. Here, x_0 denotes the initial point of the test problem, and x and f(x) are the iterate and the function value at the final iteration, respectively.
Based on the sixteen parameter-setting schemes in Table 1, we compared algorithm A with the DY (given by formula (10)) conjugate gradient algorithm from different initial points for three test problems. It is easy to see that both algorithms, under the different schemes (parameter settings), are successful for the first and the second test problems, as listed in Tables 2, 3, and 4. From Tables 5, 6, 7, 8, and 9, we can see that algorithm A is more successful than the DY (given by formula (10)) conjugate gradient algorithm. For example, for the first three test problems under the different schemes, algorithm A achieved satisfactory iterates and function values at the final iteration from all initial points. Nevertheless, under some schemes, the DY (given by formula (10)) conjugate gradient algorithm could not find a satisfactory iterate and function value at the final iteration. From Tables 5-9, we can also see that the DY (given by formula (10)) conjugate gradient algorithm sometimes fails under certain schemes, whereas our algorithm failed only once. These results indicate that changes in the parameter values of formula (27) have little influence on the algorithm. We also present the Dolan and Moré [30] performance profiles for algorithm A and the DY method. Note that the performance ratio ρ_s(τ) is the fraction of the tested problems that solver s solves within a factor τ of the smallest cost. As we can see from Figure 1, algorithm A is superior to the DY method with respect to the absolute errors of f(x) versus f(x^*). Hence, compared with the DY (given by formula (10)) conjugate gradient algorithm, algorithm A has higher stability and adaptability. Therefore, algorithm A yields a better numerical performance than the DY (given by formula (10)) conjugate gradient algorithm. From the above analysis, we can conclude that algorithm A is competitive for solving unconstrained optimization problems.
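The performance-profile value ρ_s(τ) used above can be computed as in the following sketch; the cost matrix here is hypothetical illustration data, not the paper's measurements.

```python
def performance_profile(costs, solver, tau):
    """Dolan-More performance profile value rho_s(tau): the fraction of test
    problems that `solver` solves within a factor tau of the best cost.
    costs maps solver name -> list of per-problem costs (inf marks a failure)."""
    n = len(next(iter(costs.values())))
    best = [min(c[p] for c in costs.values()) for p in range(n)]
    ratios = [costs[solver][p] / best[p] for p in range(n)]
    return sum(r <= tau for r in ratios) / n

# Hypothetical cost data for two solvers on three problems
costs = {"A": [2.0, 3.0, 5.0], "DY": [4.0, 3.0, float("inf")]}
rho_A = performance_profile(costs, "A", 1.0)    # A attains the best cost on every problem
rho_DY = performance_profile(costs, "DY", 2.0)  # DY is within factor 2 on two of three
```

A solver whose profile curve lies above another's for all τ is the more robust and efficient one, which is how Figure 1 is read.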

Application to Engineering
In this section, we present a real example to illustrate the application of the algorithm proposed in this article. The example concerns the results of endurance tests on deep groove ball bearings. For illustrative purposes, we applied the real dataset of 23 observed failure times that was initially reported by Lieblein and Zelen [31] and later used by a number of authors, including Abouammoh and Alshingiti [32] and Krishna and Kumar [33].
where α > 0 and λ > 0 are the shape and scale parameters, respectively, and the number of observed failures is denoted as in the likelihood above.
Dey and Pradhan [34] obtained the MLE of the parameters as (α̂, λ̂) = (3.1835, 1.4329 × 10^-6). From the numerical results, we can see that our algorithm is a viable alternative for this real unconstrained optimization problem.
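Maximum likelihood estimation of this kind can be cast as unconstrained minimization of the negative log-likelihood. The sketch below is purely illustrative: since the paper's density is not reproduced above, it assumes a two-parameter Weibull model, uses a small hypothetical sample in place of the 23 bearing failure times, and applies a plain backtracking gradient descent rather than algorithm A.

```python
import math

def weibull_nll(params, data):
    """Negative log-likelihood of a two-parameter Weibull (shape a, scale b);
    this density is an illustrative assumption, not the paper's model."""
    a, b = params
    if a <= 0 or b <= 0:
        return float("inf")
    return -sum(math.log(a) - math.log(b) + (a - 1) * math.log(t / b) - (t / b) ** a
                for t in data)

def num_grad(f, x, h=1e-6):
    """Central-difference numerical gradient."""
    g = []
    for i in range(len(x)):
        xp, xm = list(x), list(x)
        xp[i] += h
        xm[i] -= h
        g.append((f(xp) - f(xm)) / (2 * h))
    return g

# Hypothetical failure times standing in for the 23 bearing observations
data = [18.0, 29.0, 33.0, 42.0, 46.0, 52.0, 68.0, 84.0, 105.0, 128.0]
nll = lambda p: weibull_nll(p, data)

x = [1.0, 50.0]                        # initial guess for (shape, scale)
for _ in range(500):                   # backtracking gradient descent
    g = num_grad(nll, x)
    step = 1.0
    while step > 1e-12 and nll([xi - step * gi for xi, gi in zip(x, g)]) > \
            nll(x) - 1e-4 * step * sum(gi * gi for gi in g):
        step *= 0.5
    x = [xi - step * gi for xi, gi in zip(x, g)]
```

Each accepted step satisfies a sufficient-decrease condition, so the negative log-likelihood decreases monotonically toward a local MLE.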

Conclusion
In this article, by modifying the scalar β_k, we have proposed a three-parameter family of conjugate gradient methods for solving large-scale unconstrained optimization problems. Global convergence of the proposed methods under the modified Wolfe-Powell line search and a general convergence criterion are established, respectively. Numerical results show that our algorithm is competitive for solving unconstrained optimization problems. Hence, the proposed method provides a viable alternative for problems arising from reliability data.
and α_k satisfies the modified Wolfe-Powell line search conditions. Then, either g_k = 0 holds for some k, or

lim inf_{k→∞} ||g_k|| = 0. (44)

Proof. By Theorem 4 and Lemma 6, Theorem 7 is proved.

Table 2: The numerical results of the sphere function for different schemes.

Table 3: The numerical results of the sphere function for different schemes.

Table 4: The numerical results of the Rastrigin function for different schemes.

Table 5: The numerical results of the Rastrigin function for different schemes.

Table 6: The numerical results of the Freudenstein and Roth function for different schemes.

Table 7: The numerical results of the Freudenstein and Roth function for different schemes.

Table 8: The numerical results of the Freudenstein and Roth function for different schemes.