
The conjugate gradient method is an efficient method for solving large-scale nonlinear optimization problems. In this paper, we propose a nonlinear conjugate gradient method that can be regarded as a hybrid of the DL and WYL conjugate gradient methods. The proposed method satisfies the sufficient descent condition under the Wolfe-Powell line search and is globally convergent for general functions. Our numerical results show that the proposed method is robust and efficient on the test problems.

The nonlinear conjugate gradient (CG) method has played a special role in solving large-scale nonlinear optimization problems due to the simplicity of its iterations and its very low memory requirements. Although the CG method is not among the fastest or most robust optimization algorithms available today for nonlinear problems, it remains very popular with engineers and mathematicians interested in solving large-scale problems. The nonlinear conjugate gradient method is an extension of the linear conjugate gradient method. The first linear conjugate gradient method was proposed by Hestenes and Stiefel in 1952 [

In this paper, we focus on solving the following nonlinear unconstrained optimization problem by the conjugate gradient method:
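To make the framework concrete, the following is a minimal sketch of a generic nonlinear CG iteration for minimizing a smooth function. It uses the classical Fletcher-Reeves formula for the parameter and an Armijo backtracking line search as a simplified stand-in for the Wolfe-Powell search; it is an illustrative sketch only, not the WYLDL method proposed in this paper.

```python
import numpy as np

def nonlinear_cg(f, grad, x0, tol=1e-6, max_iter=1000):
    """Generic nonlinear CG sketch: Fletcher-Reeves beta with an Armijo
    backtracking line search (a simplified stand-in for Wolfe-Powell)."""
    x = np.asarray(x0, dtype=float)
    g = grad(x)
    d = -g                                    # initial steepest-descent direction
    for _ in range(max_iter):
        if np.linalg.norm(g) <= tol:          # stop when the gradient is small
            break
        # Armijo backtracking line search
        alpha, rho, c1 = 1.0, 0.5, 1e-4
        while f(x + alpha * d) > f(x) + c1 * alpha * (g @ d):
            alpha *= rho
        x_new = x + alpha * d
        g_new = grad(x_new)
        beta = (g_new @ g_new) / (g @ g)      # Fletcher-Reeves formula
        d_new = -g_new + beta * d
        if g_new @ d_new >= 0:                # safeguard: restart if not a descent direction
            d_new = -g_new
        x, g, d = x_new, g_new, d_new
    return x
```

Different choices of the parameter beta (PR, HS, DL, WYL, and the hybrid studied here) all fit into this same iteration skeleton.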

Although all these methods are equivalent in the linear case, namely, when

From the structure of the above formulae

Generally speaking, methods with numerator

Since the PR method is considered one of the most efficient nonlinear conjugate gradient methods, much effort has been devoted to its convergence properties and its modifications. In [

Recently, Wei et al. [

The method with formula

In [

Recently, Dai and Liao [

Motivated by the above discussion, in this paper, we give the following formula to compute the parameter

The formula

The convergence properties of Algorithm

For conjugate gradient methods, during the iteration process, the gradient of the objective function is required. We make the following basic assumptions on the objective functions.

(i) The level set

(ii) In some neighborhood

Under the above assumptions of

For conjugate gradient methods, the sufficient descent condition is significant to the global convergence. We say the sufficient descent condition holds if there exists a constant
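In the standard notation of the CG literature, with $g_k = \nabla f(x_k)$ and search direction $d_k$, the sufficient descent condition takes the usual form: for some constant $c > 0$ independent of $k$,

```latex
g_k^{\mathsf T} d_k \le -c \, \| g_k \|^2 \qquad \text{for all } k \ge 0.
```

This is the standard statement of the condition; the specific constant depends on the method and line search under consideration.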

In [

Suppose that the sequence

We prove this result by induction. By using (

For the PR method, when a small step-length occurs,

(

Consider a method of form (

By Lemma

For nonlinear conjugate gradient methods, Dai et al. [

Suppose that Assumption

By Lemma

Suppose that Assumption

Firstly, note that

Then by (

Property

the sufficient descent condition;

Theorem

Suppose that Assumption

It follows from Lemmas

According to the above lemmas and theorems, we can prove the following convergence theorem for the WYLDL method.

Suppose that Assumption

We prove this theorem by contradiction. If

In this section, we report the performance of Algorithm

In order to assess the reliability of the WYLDL algorithm, we also tested it against the DL and WYL methods on the same problems. All these algorithms are terminated when

The comparison data comprise the number of iterations, function and gradient evaluations, and CPU time. To assess the performance of the WYLDL, WYL, and DL methods, we use the performance profile of Dolan and Moré [

Dolan and Moré [

As a baseline for comparisons, they compared the performance on problem

Suppose that a parameter

If all three methods failed to terminate successfully on a test problem, that problem was removed from the comparison. If one method fails while another terminates successfully, the performance ratio of the failed method is set to be
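The Dolan-Moré profile described above can be sketched as follows. This is illustrative code, not the authors' implementation: `T[p, s]` holds the cost (iterations, evaluations, or CPU time) of solver `s` on problem `p`, with `np.inf` recording a failure, matching the convention just described; problems on which every solver fails are assumed to have been removed beforehand.

```python
import numpy as np

def performance_profile(T, taus):
    """Return rho[s, i] = fraction of problems that solver s solves within a
    factor taus[i] of the best solver.  T: (n_problems, n_solvers) cost matrix
    with np.inf marking failures (their performance ratio is then infinite)."""
    best = T.min(axis=1, keepdims=True)   # best cost on each problem
    ratios = T / best                     # performance ratios r_{p,s}
    return np.array([[np.mean(ratios[:, s] <= tau) for tau in taus]
                     for s in range(T.shape[1])])
```

Plotting `rho[s]` against `taus` for each solver yields the performance-profile curves shown in the figures below; the solver whose curve lies highest is the most efficient (at small ratios) and most robust (at large ratios).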

Performance profiles based on iterations.

Performance profiles based on function and gradient evaluations.

Performance profiles based on CPU time.

From Figure

From Figure

The authors declare that there is no conflict of interests regarding the publication of this paper.

This research was supported by the Guangxi Universities Foundation Grant no. 2013BYB210.