The Hybrid BFGS-CG Method in Solving Unconstrained Optimization Problems



Introduction
The unconstrained optimization problem requires only the objective function:
\[
\min_{x \in \mathbb{R}^n} f(x), \tag{1}
\]
where $\mathbb{R}^n$ is an $n$-dimensional Euclidean space and $f : \mathbb{R}^n \to \mathbb{R}$ is continuously differentiable. Iterative methods are used to solve (1). On the $i$th iteration, an approximation point $x_i$ is available and the $(i+1)$th iterate is given by
\[
x_{i+1} = x_i + \alpha_i d_i, \tag{2}
\]
where $d_i$ denotes the search direction and $\alpha_i$ denotes the step size. The search direction must satisfy the relation $g_i^T d_i < 0$, where $g_i = \nabla f(x_i)$, which guarantees that $d_i$ is a descent direction of $f(x)$ at $x_i$. Different choices of $d_i$ and $\alpha_i$ yield different convergence properties. Generally, the first-order condition $\nabla f(x^*) = 0$ is used to check for local convergence to a stationary point $x^*$. There are many ways to calculate the search direction depending on the method used, such as the steepest descent method, the conjugate gradient (CG) method, the Newton-Raphson method, and quasi-Newton methods.
Different choices of the step size ensure that the sequence of iterates $\{x_i\}$ defined by (2) is globally convergent with some rate of convergence. There are two ways to determine the step size: the exact line search and the inexact line search. In the exact line search, $\alpha_i$ is calculated by the formula $\alpha_i = \arg\min_{\alpha > 0} f(x_i + \alpha d_i)$. However, it is difficult, and often impossible, to find this step size in practical computation. Hence, inexact line searches were proposed by earlier researchers such as Armijo [1], Wolfe [2, 3], and Goldstein [4] to overcome the problem. Recently, Shi proposed a new inexact line search rule similar to the Armijo line search and analysed its global convergence [5]. Shi also claimed that, among the well-known inexact line search procedures published by previous researchers, the Armijo rule is one of the most useful and the easiest to implement in computational calculations. The Armijo line search rule can be described as follows: given $s > 0$, $\beta \in (0, 1)$, and $\sigma \in (0, 1)$, choose
\[
\alpha_i = \max\{s, s\beta, s\beta^2, \ldots\} \quad \text{such that} \quad f(x_i) - f(x_i + \alpha_i d_i) \geq -\sigma \alpha_i g_i^T d_i. \tag{3}
\]
Then the sequence $\{x_i\}_{i=0}^{\infty}$ converges to the optimal point $x^*$ which minimises $f$ [6]. Hence, we use the Armijo line search in this research in association with the Broyden-Fletcher-Goldfarb-Shanno (BFGS) method and the new hybrid method.
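As a concrete illustration, the following is a minimal sketch of the Armijo backtracking rule (3) in Python/NumPy. The function and parameter names are our own; the defaults $s = 1$, $\beta = 0.5$, $\sigma = 0.1$ follow the values used in Section 4, and the backtrack cap is a practical safeguard of ours, not part of the rule.

```python
import numpy as np

def armijo_step(f, x, g, d, s=1.0, beta=0.5, sigma=0.1, max_backtracks=50):
    """Largest alpha in {s, s*beta, s*beta^2, ...} satisfying (3)."""
    alpha = s
    fx = f(x)
    for _ in range(max_backtracks):
        # Sufficient decrease: f(x) - f(x + alpha*d) >= -sigma * alpha * g^T d.
        if fx - f(x + alpha * d) >= -sigma * alpha * np.dot(g, d):
            return alpha
        alpha *= beta
    return alpha  # fall back to the smallest trial step
```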
This paper is organised as follows. In Section 2, we elaborate on the step size and the search directions used in this research; the BFGS and CG methods are also presented there. The new hybrid method and its convergence analysis are discussed in Section 3. The numerical results are explained in Section 4, and the paper ends with a short conclusion in Section 5.

The Search Direction
The different methods for solving unconstrained optimization problems are distinguished by the calculation of the search direction $d_i$ in (2). In this paper, we focus on the CG method and quasi-Newton methods. The CG method, introduced by [7], is useful for finding the minimum value of functions in unconstrained optimization problems. The search direction of the CG method is
\[
d_i = \begin{cases} -g_i, & i = 0, \\ -g_i + \beta_i d_{i-1}, & i \geq 1, \end{cases} \tag{4}
\]
where $g_i = \nabla f(x_i)$ and $\beta_i$ is known as the CG coefficient. There are many ways to calculate $\beta_i$, and some well-known formulas are
\[
\beta_i^{FR} = \frac{\|g_i\|^2}{\|g_{i-1}\|^2}, \qquad \beta_i^{PR} = \frac{g_i^T (g_i - g_{i-1})}{\|g_{i-1}\|^2}, \qquad \beta_i^{HS} = \frac{g_i^T (g_i - g_{i-1})}{d_{i-1}^T (g_i - g_{i-1})}, \tag{5}
\]
where $g_i$ and $g_{i-1}$ are the gradients of $f(x)$ at the points $x_i$ and $x_{i-1}$, respectively, $\|\cdot\|$ is the norm of a vector, and $d_{i-1}$ is the search direction of the previous iteration. The corresponding coefficients are known as Fletcher-Reeves (CG-FR) [7], Polak-Ribière (CG-PR) [8-11], and Hestenes-Stiefel (CG-HS) [12].
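The following is a minimal sketch of (4) and (5) in Python/NumPy; the helper names and the variant switch are our own.

```python
import numpy as np

def cg_coefficient(g, g_prev, d_prev, variant="FR"):
    """The classical CG coefficients from (5)."""
    y = g - g_prev                     # gradient difference g_i - g_{i-1}
    if variant == "FR":                # Fletcher-Reeves
        return np.dot(g, g) / np.dot(g_prev, g_prev)
    if variant == "PR":                # Polak-Ribiere
        return np.dot(g, y) / np.dot(g_prev, g_prev)
    if variant == "HS":                # Hestenes-Stiefel
        return np.dot(g, y) / np.dot(d_prev, y)
    raise ValueError(f"unknown variant: {variant}")

def cg_direction(g, g_prev=None, d_prev=None, variant="FR"):
    """Search direction (4); the first iteration uses steepest descent."""
    if d_prev is None:
        return -g
    return -g + cg_coefficient(g, g_prev, d_prev, variant) * d_prev
```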
In quasi-Newton methods, the search direction is the solution of the linear system
\[
H_i d_i = -g_i, \tag{6}
\]
where $H_i$ is an approximation of the Hessian. The initial matrix $H_0$ is chosen as the identity matrix and is subsequently updated by an update formula. A few update formulas are widely used, such as the Davidon-Fletcher-Powell (DFP), BFGS, and Broyden family formulas. This research uses the BFGS formula in both the classical algorithm and the new hybrid method. The BFGS update formula is
\[
H_i = H_{i-1} - \frac{H_{i-1} s_i s_i^T H_{i-1}}{s_i^T H_{i-1} s_i} + \frac{y_i y_i^T}{y_i^T s_i}, \tag{7}
\]
with $s_i = x_i - x_{i-1}$ and $y_i = g_i - g_{i-1}$. The Hessian approximation must fulfil the secant equation
\[
H_i s_i = y_i, \tag{8}
\]
which is required to hold for the updated matrix $H_i$. Note that it is only possible to fulfil the secant equation if
\[
s_i^T y_i > 0, \tag{9}
\]
which is known as the curvature condition.
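Here is a minimal sketch of the update (7) and the direction defined by (6) in Python/NumPy. The helper names are our own, and skipping the update when the curvature condition (9) fails is a common safeguard we add, not a step spelled out in the text.

```python
import numpy as np

def bfgs_update(H, s, y):
    """BFGS update (7): map H_{i-1} to H_i using s_i and y_i."""
    if np.dot(s, y) <= 0:              # curvature condition (9) violated:
        return H                       # keep the previous approximation
    Hs = H @ s
    return (H
            - np.outer(Hs, Hs) / np.dot(s, Hs)   # - H s s^T H / (s^T H s)
            + np.outer(y, y) / np.dot(y, s))     # + y y^T / (y^T s)

def quasi_newton_direction(H, g):
    """Solve the linear system (6), H d = -g, for the search direction."""
    return np.linalg.solve(H, -g)
```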

The New Hybrid Method
Modifications of the quasi-Newton method based on hybridization have already been introduced by previous researchers. One such study hybridizes the quasi-Newton and Gauss-Seidel methods to solve systems of linear equations [13]. Luo et al. [14] suggest a hybrid method that solves systems of nonlinear equations by combining the quasi-Newton method with chaos optimization. Han and Neumann [6] combine the quasi-Newton and Cauchy descent methods to solve unconstrained optimization problems, in what is recognised as the quasi-Newton-SD method. These modifications of the quasi-Newton method spawned the idea of hybridizing classical methods to yield a new hybrid method. Hence, this study proposes a new hybrid search direction that combines the search directions of the quasi-Newton and CG methods, yielding the new hybrid method known as the BFGS-CG method. The search direction for the BFGS-CG method is
\[
d_i = \begin{cases} -H_i^{-1} g_i, & i = 0, \\ -H_i^{-1} g_i + \eta \left( -g_i + \beta_i d_{i-1} \right), & i \geq 1, \end{cases} \tag{10}
\]
where $\eta > 0$ and $\beta_i = g_i^T d_{i-1} / g_i^T g_{i-1}$. Hence, the complete algorithms for the BFGS method, the CG-HS, CG-PR, and CG-FR methods, and the BFGS-CG method are given as Algorithms 1, 2, and 3, respectively.
Algorithm 1 (BFGS method).

Step 0. Given a starting point $x_0$ and $H_0 = I_n$, choose values for $s$, $\beta$, and $\sigma$, and set $i = 1$.
Step 1. Terminate if $\|g(x_{i+1})\| < 10^{-6}$ or $i \geq 10000$.
Step 2. Calculate the search direction by (6).
Step 3. Calculate the step size $\alpha_i$ by (3).
Step 4. Compute $s_i = x_i - x_{i-1}$ and $y_i = g_i - g_{i-1}$.
Step 5. Update $H_{i-1}$ by (7) to obtain $H_i$.
Step 6. Set $i = i + 1$ and go to Step 1.

Algorithm 2 (CG-HS, CG-PR, and CG-FR methods).

Step 0. Given a starting point $x_0$, choose values for $s$, $\beta$, and $\sigma$, and set $i = 1$.
Step 1. Terminate if $\|g(x_{i+1})\| < 10^{-6}$ or $i \geq 10000$.
Step 2. Calculate the search direction by (4) with the chosen CG coefficient.
Step 3. Calculate the step size $\alpha_i$ by (3).
Step 4. Compute $s_i = x_i - x_{i-1}$ and $y_i = g_i - g_{i-1}$.
Step 5. Set $i = i + 1$ and go to Step 1.

Algorithm 3 (BFGS-CG method).

Step 0. Given a starting point $x_0$ and $H_0 = I_n$, choose values for $s$, $\beta$, and $\sigma$, and set $i = 1$.
Step 1. Terminate if $\|g(x_{i+1})\| < 10^{-6}$ or $i \geq 10000$.
Step 2. Calculate the search direction by (10).
Step 3. Calculate the step size $\alpha_i$ by (3).
Step 4. Compute $s_i = x_i - x_{i-1}$ and $y_i = g_i - g_{i-1}$.
Step 5. Update $H_{i-1}$ by (7) to obtain $H_i$.
Step 6. Set $i = i + 1$ and go to Step 1.
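To make Algorithm 3 concrete, here is a minimal sketch of one iteration of the BFGS-CG method in Python/NumPy, reusing the armijo_step and bfgs_update helpers sketched earlier. The function name and the default $\eta$ are our own choices, and the expression for $\beta_i$ follows our reading of (10); this is an illustration, not code from the paper.

```python
import numpy as np

def bfgs_cg_step(f, grad, x, H, g_prev=None, d_prev=None, eta=0.1):
    """One pass through Steps 2-5 of Algorithm 3."""
    g = grad(x)
    d = np.linalg.solve(H, -g)              # quasi-Newton part -H^{-1} g
    if d_prev is not None:                  # i >= 1: add the CG part of (10)
        beta = np.dot(g, d_prev) / np.dot(g, g_prev)
        d = d + eta * (-g + beta * d_prev)
    alpha = armijo_step(f, x, g, d)         # Step 3: Armijo rule (3)
    x_new = x + alpha * d                   # the iteration (2)
    s, y = x_new - x, grad(x_new) - g       # Step 4: s_i and y_i
    return x_new, bfgs_update(H, s, y), g, d   # Step 5: update H by (7)
```

A driver would call this repeatedly, feeding the returned g and d back in as g_prev and d_prev, until $\|g\| < 10^{-6}$ or the iteration limit is reached.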
Based on Algorithms 1, 2, and 3, we assume that every search direction $d_i$ satisfies the descent condition
\[
g_i^T d_i < 0 \quad \text{for all } i \geq 0. \tag{11}
\]
If there exists a constant $c_1 > 0$ such that
\[
g_i^T d_i \leq -c_1 \|g_i\|^2 \tag{12}
\]
for all $i \geq 0$, then the search directions satisfy the sufficient descent condition, which can be proved in Theorem 6. Hence, we need to make a few assumptions on the objective function.
Assumption 4. Consider the following.
H1: The objective function $f$ is twice continuously differentiable.
H2: The level set $L = \{x \in \mathbb{R}^n : f(x) \leq f(x_0)\}$ is convex. Moreover, positive constants $c_1$ and $c_2$ exist satisfying
\[
c_1 \|z\|^2 \leq z^T G(x) z \leq c_2 \|z\|^2
\]
for all $z \in \mathbb{R}^n$ and $x \in L$, where $G(x)$ denotes the Hessian of $f$ at $x$. Then condition (12) holds for all $i \geq 0$.

Numerical Results
In this section, we use the test problems considered by Andrei [18], Michalewicz [19], and Moré et al. [20], listed in Table 1, to analyse the improvement of the BFGS-CG method over the BFGS and CG methods. Each test problem is run with dimensions varying from 2 to 1,000 variables, giving a total of 159 test problems. As suggested by [20], for each test problem the initial point $x_0$ is chosen progressively farther from the minimum point.
Doing so lets us test the global convergence properties and the robustness of our method. For the Armijo line search, we use $s = 1$, $\beta = 0.5$, and $\sigma = 0.1$. The stopping criteria are $\|g_i\| \leq 10^{-6}$ or the number of iterations exceeding its limit, which is set to 10,000. In our implementation, the numerical tests were performed on an Acer Aspire with the Windows 7 operating system, using Matlab 2012.
The performance results are shown in Figures 1 and 2 using the performance profile introduced by Dolan and Moré [21]. The performance profile measures how well each solver performs relative to the other solvers on a set of problems. In general, $\rho_s(\tau)$ is the fraction of problems on which the performance ratio of solver $s$ is at most $\tau$; thus, a solver with high values of $\rho_s(\tau)$, that is, one located towards the top right of the figure, is preferable.
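For reference, here is a minimal sketch of how such a profile can be computed, with our own naming; costs[s, p] holds the iteration count (or CPU time) of solver s on problem p, with np.inf marking a failure.

```python
import numpy as np

def performance_profile(costs, taus):
    """Dolan-More profile: rho_s(tau) = fraction of problems on which
    solver s's cost is within a factor tau of the best solver's cost.

    costs: array of shape (n_solvers, n_problems); np.inf marks failure.
    Returns an array of shape (n_solvers, len(taus)).
    """
    best = np.min(costs, axis=0)        # best cost achieved on each problem
    ratios = costs / best               # performance ratios r_{p,s}
    return np.array([[np.mean(ratios[s] <= tau) for tau in taus]
                     for s in range(costs.shape[0])])
```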
Figures 1 and 2 show that the BFGS-CG method has the best performance, since it solves 99% of the test problems, compared with the BFGS (84%), CG-HS (65%), CG-PR (80%), and CG-FR (75%) methods. Moreover, the BFGS-CG method is the fastest solver on approximately 68% of the test problems by iteration count and 52% by CPU time.

Conclusion
We have presented a new hybrid method for solving unconstrained optimization problems. The numerical results for a broad class of test problems show that the BFGS-CG method is efficient and robust in solving unconstrained optimization problems. We also note that, as the size and complexity of the problems increase, greater improvements can be realised by the BFGS-CG method. In future research we will try the BFGS-CG method with CG coefficients such as Fletcher-Reeves, Hestenes-Stiefel, and Polak-Ribière.

Figure 1: Performance profile in a log10 scale based on iteration.
Figure 2: Performance profile in a log10 scale based on CPU time.

Table 1: A list of problem functions.