Journal of Applied Mathematics, Volume 2012, Article ID 875494, doi:10.1155/2012/875494

Research Article

Accumulative Approach in Multistep Diagonal Gradient-Type Method for Large-Scale Unconstrained Optimization

Mahboubeh Farid,1 Wah June Leong,1 and Lihong Zheng2

1 Department of Mathematics, Universiti Putra Malaysia, 43400 Serdang, Selangor, Malaysia
2 School of Computing and Maths, Charles Sturt University, Mitchell, NSW 2795, Australia

Received 2 April 2012; Revised 10 May 2012; Accepted 16 May 2012; Published 10 July 2012

Copyright © 2012 Mahboubeh Farid et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

This paper focuses on developing diagonal gradient-type methods that employ an accumulative approach in multistep diagonal updating to determine a better Hessian approximation at each step. An interpolating curve is used to derive a generalization of the weak secant equation, which carries information about the local Hessian. The new parameterization of the interpolating curve in the variable space is obtained through the accumulative approach, via a norm weighting defined by two positive definite weighting matrices. We also note that the storage needed for all computations of the proposed method is just O(n). Numerical results show that the proposed algorithm is efficient and superior in comparison with some other gradient-type methods.

1. Introduction

Consider the unconstrained optimization problem:
(1.1) min f(x), x ∈ R^n,
where f: R^n → R is a twice continuously differentiable function. Gradient-type methods for solving (1.1) can be written as
(1.2) x_{k+1} = x_k - B_k^{-1} g_k,
where g_k and B_k denote the gradient and the Hessian approximation of f at x_k, respectively. Taking B_k = α_k I, Barzilai and Borwein (BB) [1] give
(1.3) α_{k+1} = (s_k^T y_k)/(s_k^T s_k),
which is derived by minimizing ||α_{k+1} s_k - y_k||^2 with respect to α, where s_k = x_{k+1} - x_k and y_k = g_{k+1} - g_k. Recently, some improved one-step gradient-type methods [2-5] in the frame of the BB algorithm were proposed for solving (1.1). There, B_k is a diagonal nonsingular approximation to the Hessian, and a new approximating matrix B_{k+1} is developed from the weak secant equation of Dennis and Wolkowicz [6]:
(1.4) s_k^T B_{k+1} s_k = s_k^T y_k.
In a one-step method, data from only one previous step are used to revise the current Hessian approximation. Later, Farid and Leong [7, 8] proposed multistep diagonal gradient methods inspired by the multistep quasi-Newton methods of Ford [9, 10]. In this multistep framework, a fixed-point approach for the interpolating polynomials is derived from data in several previous iterations (not only one previous step). The general multistep approach is based on the measurement of distances in the variable space, where the distance of every iterate is measured from one selected iterate. In this paper, we develop a multistep diagonal updating based on an accumulative approach for defining the new parameter values of the interpolating curve. In this approach, the distance is accumulated between consecutive iterates as they are traversed in the natural sequence. To measure the distance, we parameterize the interpolating polynomial through a norm defined by a positive definite weighting matrix, say M. The performance of the multistep method may therefore be significantly improved by defining the weighting matrix carefully. The rest of the paper is organized as follows.
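For concreteness, the BB iteration (1.2)-(1.3) can be sketched as follows. This is a minimal illustration in our own notation; the function name and the positivity safeguard on α (skipping the update when s_k^T y_k ≤ 0) are our additions, not part of the original method.

```python
import numpy as np

def bb_gradient_method(grad, x0, tol=1e-8, max_iter=500):
    """Barzilai-Borwein iteration (1.2)-(1.3) with B_k = alpha_k * I."""
    x = np.asarray(x0, dtype=float)
    g = grad(x)
    alpha = 1.0                          # initial scaling alpha_0
    for _ in range(max_iter):
        if np.linalg.norm(g) <= tol:
            break
        x_new = x - g / alpha            # x_{k+1} = x_k - B_k^{-1} g_k
        g_new = grad(x_new)
        s, y = x_new - x, g_new - g
        if s @ y > 0:                    # safeguard (ours): keep alpha positive
            alpha = (s @ y) / (s @ s)    # (1.3): minimizes ||alpha*s - y||^2
        x, g = x_new, g_new
    return x
```

On a strictly convex quadratic, for example f(x) = (1/2) x^T diag(1, ..., n) x with gradient diag(1, ..., n) x, the iteration drives the gradient norm below the tolerance.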
In Section 2, we discuss a new multistep diagonal updating scheme based on the accumulative approach. In Section 3, we establish the global convergence of the proposed method. In Section 4, numerical results and comparisons with the BB method and the one-step diagonal gradient method are reported. Conclusions are given in Section 5.

2. Derivation of the New Diagonal Updating via Accumulative Approach

This section states new implicit updates for the diagonal gradient-type method, using an accumulative approach to determine a better Hessian approximation at each iteration. In multistep diagonal updating methods, the weak secant equation (1.4) is generalized by means of interpolating polynomials, instead of employing data from just one previous iteration as in one-step methods. Our aim is to derive efficient strategies for choosing a suitable set of parameters to construct the interpolating curve, and to investigate the best norm for measuring the distances required to parameterize the interpolating polynomials. In general, the method obeys a recursive formula of the form
(2.1) x_{k+1} = x_k - α_k B_k^{-1} g_k,
where x_k is the kth iterate, α_k is a step length determined by a line search, B_k is an approximation to the Hessian in diagonal form, and g_k is the gradient of f at x_k. Consider a differentiable curve x(τ) in R^n. The derivative of g(x(τ)) at a point x(τ*) can be obtained by applying the chain rule:
(2.2) dg/dτ |_{τ=τ*} = G(x(τ*)) dx/dτ |_{τ=τ*},
where G denotes the Hessian of f. We wish to derive a relation that is satisfied by the diagonal Hessian approximation at x_{k+1}. If we assume that x(τ) passes through x_{k+1} and choose τ* so that
(2.3) x(τ*) = x_{k+1},
then we have
(2.4) G(x_{k+1}) dx/dτ |_{τ=τ*} = dg/dτ |_{τ=τ*}.
Since we use a two-step method in this paper, we use the information of the most recent points x_{k-1}, x_k, x_{k+1} and their associated gradients. Consider x(τ) as the interpolating vector polynomial of degree 2:
(2.5) x(τ_j) = x_{k+j-1}, j = 0, 1, 2.
The efficient selection of the distinct scalar values τ_j through the new approach is the main contribution of this paper and will be discussed later in this section. Let h(τ) be the interpolation for approximating the gradient vector:
(2.6) h(τ_j) = g(x_{k+j-1}), j = 0, 1, 2.
By denoting x(τ_2) = x_{k+1} and defining
(2.7) dx(τ_2)/dτ = r_k, dg(x(τ_2))/dτ = w_k,
we obtain the desired relation that is satisfied by the diagonal Hessian approximation at x_{k+1}.
Corresponding to this two-step approach, the weak secant equation is generalized as follows:
(2.8) r_k^T B_{k+1} r_k = r_k^T w_k.
Then, B_{k+1} can be obtained by an appropriately modified version of the one-step diagonal updating formula:
(2.9) B_{k+1} = B_k + ((r_k^T w_k - r_k^T B_k r_k)/tr(F_k^2)) F_k,
where F_k = diag((r_k^{(1)})^2, (r_k^{(2)})^2, ..., (r_k^{(n)})^2). We now construct an algorithm for finding the vectors r_k and w_k that improve the Hessian approximation. First, we derive strategies for choosing a suitable set of values τ_0, τ_1, and τ_2. The choice of {τ_j}_{j=0}^{2} is made so as to reflect distances between the iterates x_k in R^n, measured by a metric of the following general form:
(2.10) φ_M(z_1, z_2) = {(z_1 - z_2)^T M (z_1 - z_2)}^{1/2},
where M is a positive definite matrix. The values τ_j are determined via the so-called accumulative approach, in which the accumulated distances (measured by the metric φ_M) between consecutive iterates are used to define τ_j. This leads to the following definitions (where, without loss of generality, we take τ_1 to be the origin for the values of τ):
(2.11) τ_1 = 0, τ_0 = τ_1 - φ_M(x_k, x_{k-1}), τ_2 = τ_1 + φ_M(x_{k+1}, x_k).
Then, we can construct the set {τ_j}_{j=0}^{2} as follows:
(2.12) τ_0 = -||x_k - x_{k-1}||_M = -||s_{k-1}||_M, τ_2 = ||x_{k+1} - x_k||_M = ||s_k||_M,
so that r_k and w_k depend on the values of τ_j. Since the set {τ_j}_{j=0}^{2} measures distances, the interpolating polynomials must be parameterized via a norm defined by a positive definite matrix M. It is necessary to choose M with some care, since the quality of the Hessian approximation can be strongly influenced by the choice of M. Two choices for the weighting matrix M are considered in this paper. For the first choice, M = I, the norm ||·||_M reduces to the Euclidean norm, and we obtain the following τ_j values:
(2.13) τ_2 = ||s_k||_2, τ_1 = 0, τ_0 = -||s_{k-1}||_2.
The second choice of weighting matrix is M = B_k, where the current B_k is the diagonal approximation to the Hessian.
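The accumulative parameterization (2.11)-(2.14) is cheap to compute. The following sketch (function and argument names are ours) returns {τ_0, τ_1, τ_2} for either choice of weighting matrix, with a diagonal B_k stored as a 1-D array:

```python
import numpy as np

def accumulative_taus(s_prev, s_curr, B_diag=None):
    """Parameter values {tau_0, tau_1, tau_2} from (2.11)-(2.14).

    s_prev = x_k - x_{k-1}, s_curr = x_{k+1} - x_k.
    B_diag=None means M = I (Euclidean norm, (2.13)); otherwise M = B_k
    with B_k diagonal, stored as a 1-D array ((2.14)).
    """
    if B_diag is None:
        norm = lambda v: np.linalg.norm(v)
    else:
        norm = lambda v: np.sqrt(v @ (B_diag * v))  # ||v||_M = (v^T B_k v)^{1/2}
    tau1 = 0.0                    # tau_1 taken as the origin
    tau0 = tau1 - norm(s_prev)    # distance accumulated back to x_{k-1}
    tau2 = tau1 + norm(s_curr)    # distance accumulated forward to x_{k+1}
    return tau0, tau1, tau2
```

For example, with s_{k-1} = (3, 4) and s_k = (0, 2) under M = I, this yields τ_0 = -5, τ_1 = 0, τ_2 = 2.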
With these two choices, the measurement of the relevant distances is determined by the properties of the current quadratic approximation (based on B_k) to the objective function; for M = B_k,
(2.14) τ_2 = (s_k^T B_k s_k)^{1/2}, τ_1 = 0, τ_0 = -(s_{k-1}^T B_k s_{k-1})^{1/2}.
Since B_k is a diagonal matrix, it is not expensive to compute {τ_j}_{j=0}^{2} at each iteration. The quantity δ is defined as
(2.15) δ = (τ_2 - τ_1)/(τ_1 - τ_0),
and r_k and w_k are given by the following expressions:
(2.16) r_k = s_k - (δ^2/(1 + 2δ)) s_{k-1},
(2.17) w_k = y_k - (δ^2/(1 + 2δ)) y_{k-1}.
To safeguard against the possibility of r_k^T w_k being very small or very large, we require that the condition
(2.18) ε_1 ||r_k||_2^2 ≤ r_k^T w_k ≤ ε_2 ||r_k||_2^2
is satisfied (we use ε_1 = 10^{-6} and ε_2 = 10^{6}). If it is not, we set r_k = s_k and w_k = y_k. Moreover, the Hessian approximation B_{k+1} might not preserve positive definiteness at each step. One of the fundamental requirements in this paper is to maintain an "improved" positive definite Hessian approximation, which is also needed for computing the metric when M = B_k, since the weighting matrix defining the norm must be positive definite. To ensure that the updates remain positive definite, the scaling strategy proposed in [4] is applied. Hence, the new updating formula incorporating the scaling strategy is given by
(2.19) B_{k+1} = η_k B_k + ((r_k^T w_k - η_k r_k^T B_k r_k)/tr(F_k^2)) F_k,
where
(2.20) η_k = min((r_k^T w_k)/(r_k^T B_k r_k), 1).
This guarantees that the updated Hessian approximation remains positive definite. Finally, the new accumulative MD algorithm is outlined as follows.
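The steps (2.15)-(2.20) can be collected into a single diagonal update. The sketch below (our own naming; B is the diagonal of B_k stored as a 1-D array) applies the safeguard (2.18) and the scaling (2.20):

```python
import numpy as np

def accumulative_update(B, s, s_prev, y, y_prev, tau, eps1=1e-6, eps2=1e6):
    """One diagonal update via (2.15)-(2.20); B is the diagonal of B_k."""
    tau0, tau1, tau2 = tau
    delta = (tau2 - tau1) / (tau1 - tau0)            # (2.15)
    c = delta**2 / (1.0 + 2.0 * delta)
    r = s - c * s_prev                               # (2.16)
    w = y - c * y_prev                               # (2.17)
    rr = r @ r
    if not (eps1 * rr <= r @ w <= eps2 * rr):        # safeguard (2.18)
        r, w = s, y
    rBr = r @ (B * r)
    eta = min((r @ w) / rBr, 1.0)                    # scaling factor (2.20)
    F = r**2                                         # diagonal of F_k
    trF2 = np.sum(F**2)                              # tr(F_k^2)
    return eta * B + ((r @ w - eta * rBr) / trF2) * F  # (2.19)
```

Note that by the definition of η_k, the correction term in (2.19) is nonnegative, so the updated diagonal stays positive whenever B_k is positive and r_k^T w_k > 0.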

2.1. Accumulative MD Algorithm

Step 1.

Choose an initial point x_0 ∈ R^n and a positive definite matrix B_0 = I.

Let k:=0.

Step 2.

Compute g_k. If ||g_k|| ≤ ε, stop.

Step 3.

If k = 0, set x_1 = x_0 - g_0/||g_0||. If k = 1, set r_k = s_k and w_k = y_k, and go to Step 5.

Step 4.

If k ≥ 2 and M = I is considered, compute {τ_j}_{j=0}^{2} from (2.13).

Else, if M = B_k, compute {τ_j}_{j=0}^{2} from (2.14).

Compute δ_k, r_k, w_k, and η_k from (2.15), (2.16), (2.17), and (2.20), respectively.

If r_k^T w_k ≤ 10^{-4} ||r_k||_2 ||w_k||_2, set r_k = s_k and w_k = y_k.

Step 5.

Compute d_k = -B_k^{-1} g_k, and calculate α_k > 0 such that the Armijo condition [11] holds:

f(x_{k+1}) ≤ f(x_k) + σ α_k g_k^T d_k, where σ ∈ (0, 1) is a given constant.

Step 6.

Let x_{k+1} = x_k - α_k B_k^{-1} g_k, and update B_{k+1} by (2.19).

Step 7.

Set k := k + 1, and return to Step 2.
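The whole loop can be sketched as follows. This is a minimal illustration under our own naming, not the authors' implementation: the Armijo backtracking factor and the default σ = 0.1 are our choices (the paper's experiments use σ = 0.9), the update is applied after each step using the two most recent steps, and r_k^T w_k > 0 is assumed (e.g., strictly convex f).

```python
import numpy as np

def accumulative_md(f, grad, x0, M="B", eps=1e-4, max_iter=1000, sigma=0.1):
    """Sketch of the accumulative MD algorithm; M="I" or M="B" selects the metric."""
    x = np.asarray(x0, dtype=float)
    B = np.ones_like(x)                      # diagonal of B_0 = I (Step 1)
    g = grad(x)
    s_prev = y_prev = None
    for k in range(max_iter):
        if np.linalg.norm(g) <= eps:         # Step 2
            break
        if k == 0:
            x_new = x - g / np.linalg.norm(g)          # Step 3
        else:
            d = -g / B                                 # Step 5: d_k = -B_k^{-1} g_k
            alpha = 1.0                                # Armijo backtracking (ours)
            while f(x + alpha * d) > f(x) + sigma * alpha * (g @ d):
                alpha *= 0.5
            x_new = x + alpha * d
        g_new = grad(x_new)
        s, y = x_new - x, g_new - g
        if k <= 1:
            r, w = s, y                                # Step 3: one-step data
        else:                                          # Step 4: accumulative data
            norm = (lambda v: np.linalg.norm(v)) if M == "I" else \
                   (lambda v: np.sqrt(v @ (B * v)))
            delta = norm(s) / norm(s_prev)             # (2.15) with tau_1 = 0
            c = delta**2 / (1.0 + 2.0 * delta)
            r, w = s - c * s_prev, y - c * y_prev      # (2.16)-(2.17)
            if r @ w <= 1e-4 * np.linalg.norm(r) * np.linalg.norm(w):
                r, w = s, y
        rBr = r @ (B * r)
        eta = min((r @ w) / rBr, 1.0)                  # (2.20)
        F = r**2
        B = eta * B + ((r @ w - eta * rBr) / np.sum(F**2)) * F  # Step 6: (2.19)
        x, g = x_new, g_new
        s_prev, y_prev = s, y
    return x
```

On a diagonal strictly convex quadratic, the diagonal B quickly approximates the true Hessian diagonal, and the iteration terminates at the stopping criterion of Step 2.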

3. Convergence Analysis

This section is devoted to studying the convergence of the accumulative MD algorithm when applied to the minimization of a convex function. To begin, we state the following result, due to Byrd and Nocedal [12], for the step generated by the Armijo line search algorithm. Here and elsewhere, ||·|| denotes the Euclidean norm.

Theorem 3.1.

Assume that f is a strictly convex function. Suppose the Armijo line search algorithm is employed in such a way that, for any d_k with d_k^T g_k < 0, the step length α_k satisfies
(3.1) f(x_k + α_k d_k) ≤ f(x_k) + σ α_k g_k^T d_k,
where α_k ∈ [τ_l, τ_u] with 0 < τ_l < τ_u, and σ ∈ (0, 1). Then there exist positive constants ρ_1 and ρ_2 such that either
(3.2) f(x_k + α_k d_k) - f(x_k) ≤ -ρ_1 (d_k^T g_k)^2 / ||d_k||^2
or
(3.3) f(x_k + α_k d_k) - f(x_k) ≤ ρ_2 d_k^T g_k
is satisfied.

We can apply Theorem 3.1 to establish the convergence of some Armijo-type line search methods.

Theorem 3.2.

Assume that f is a strictly convex function. Suppose that the Armijo line search algorithm in Theorem 3.1 is employed with d_k chosen to obey the following conditions: there exist positive constants c_1 and c_2 such that
(3.4) -g_k^T d_k ≥ c_1 ||g_k||^2, ||d_k|| ≤ c_2 ||g_k||,
for all sufficiently large k. Then the iterates x_k generated by the line search algorithm have the property that
(3.5) liminf_{k→∞} ||g_k|| = 0.

Proof.

By (3.4), either (3.2) or (3.3) becomes
(3.6) f(x_k + α_k d_k) - f(x_k) ≤ -c ||g_k||^2,
for some positive constant c. Since f is strictly convex, it is also bounded below; (3.1) then implies that f(x_k + α_k d_k) - f(x_k) → 0 as k → ∞. Combined with (3.6), this implies that ||g_k|| → 0 as k → ∞, or at least
(3.7) liminf_{k→∞} ||g_k|| = 0.

To prove that the accumulative MD algorithm is globally convergent when applied to the minimization of a convex function, it is sufficient to show that the sequence {B_k} generated by (2.19)-(2.20) is bounded above and below for all finite k, so that the associated search direction satisfies condition (3.4). Since B_k is diagonal, it is enough to show that each element B_k^{(i)}, i = 1, ..., n, of B_k is bounded above and below by positive constants. The following theorem gives the boundedness of {B_k}.

Theorem 3.3.

Assume that f is a strictly convex function for which there exist positive constants m and M such that
(3.8) m ||z||^2 ≤ z^T ∇^2 f(x) z ≤ M ||z||^2,
for all x, z ∈ R^n. Let {B_k} be the sequence generated by the accumulative MD method. Then B_k is bounded above and below, for all finite k, by some positive constants.

Proof.

Let B_k^{(i)} be the ith diagonal element of B_k. Suppose B_0 is chosen such that ω_1 ≤ B_0^{(i)} ≤ ω_2, i = 1, ..., n, where ω_1 and ω_2 are some positive constants.

Case 1. If (2.18) is satisfied, we have
(3.9) B_1 = η_0 B_0, if r_0^T w_0 < r_0^T B_0 r_0; B_1 = B_0 + ((r_0^T w_0 - r_0^T B_0 r_0)/tr(F_0^2)) F_0, if r_0^T w_0 ≥ r_0^T B_0 r_0.
By (2.18) and the definition of η_k, one obtains
(3.10) ε_1/ω_2 ≤ η_0 ≤ 1.
Thus, if r_0^T w_0 < r_0^T B_0 r_0, then B_1 = η_0 B_0 satisfies
(3.11) ε_1 ω_1/ω_2 ≤ B_1^{(i)} ≤ ω_2.
On the other hand, if r_0^T w_0 ≥ r_0^T B_0 r_0, then
(3.12) B_1^{(i)} = B_0^{(i)} + ((r_0^T w_0 - r_0^T B_0 r_0)/tr(F_0^2)) (r_0^{(i)})^2,
where r_0^{(i)} is the ith component of r_0. Letting r_0^{(M)} be the largest component (in magnitude) of r_0, that is, (r_0^{(i)})^2 ≤ (r_0^{(M)})^2 for all i, it follows that ||r_0||^2 ≤ n (r_0^{(M)})^2, and with r_0^T w_0 ≥ r_0^T B_0 r_0, (3.12) becomes
(3.13) ω_1 ≤ B_0^{(i)} ≤ B_1^{(i)} ≤ ω_2 + n (ε_2 - ω_1) (r_0^{(M)})^4 / tr(F_0^2) ≤ ω_2 + n (ε_2 - ω_1).
Hence, B_1^{(i)} is bounded above and below, for all i, in both situations.

Case 2. If (2.18) is violated, the updating formula for B_1 becomes
(3.14) B_1^{(i)} = η_0 B_0^{(i)} + ((s_0^T y_0 - η_0 s_0^T B_0 s_0)/tr(E_0^2)) (s_0^{(i)})^2,
where s_0^{(i)} is the ith component of s_0, E_0 = diag((s_0^{(1)})^2, (s_0^{(2)})^2, ..., (s_0^{(n)})^2), and η_0 = min(1, s_0^T y_0/(s_0^T B_0 s_0)).

Because the definition of η_0 implies that s_0^T y_0 - η_0 s_0^T B_0 s_0 ≥ 0, this fact, together with the convexity property (3.8) and the definition of η_0, gives
(3.15) min(1, m/ω_2) ω_1 ≤ η_0 B_0^{(i)} ≤ B_1^{(i)} ≤ B_0^{(i)} + (M - ω_1) ||s_0||^2 (s_0^{(i)})^2 / tr(E_0^2).
Using a similar argument as above, that is, letting s_0^{(M)} be the largest component (in magnitude) of s_0, it follows that
(3.16) min(1, m/ω_2) ω_1 ≤ B_1^{(i)} ≤ ω_2 + n (M - ω_1).

Hence, in both cases, B_1^{(i)} is bounded above and below by some positive constants. Since the upper and lower bounds for B_1^{(i)} are independent of k, we can proceed by induction to show that B_k^{(i)} is bounded for all finite k.

4. Numerical Results

In this section, we examine the practical performance of our proposed algorithms in comparison with the BB method and the standard one-step diagonal gradient-type method (MD). The new algorithms are referred to as AMD1 and AMD2 when M = I and M = B_k are used, respectively. For all methods, we employ the Armijo line search [11] with σ = 0.9. All experiments in this paper were implemented on a PC with a Core Duo CPU using Matlab 7.0. For each run, the termination condition is ||g_k|| ≤ 10^{-4}, and all attempts to solve the test problems were limited to a maximum of 1000 iterations. The test problems are chosen from the Andrei [13] and Moré et al. [14] collections and are summarized in Table 1. Our experiments are performed on a set of 36 nonlinear unconstrained problems whose sizes vary from n = 10 to 10000 variables. Figures 1, 2, and 3 present the Dolan and Moré [15] performance profiles for all algorithms with respect to the number of iterations, the number of function calls, and the CPU time.

Table 1: Test problems and their dimensions.

Problem | Dimension | References
Extended Trigonometric, Penalty 1, Penalty 2 | 10, ..., 10000 | Moré et al. [14]
Quadratic QF2, Diagonal 4, Diagonal 5, Generalized Tridiagonal 1, Generalized Rosenbrock, Generalized PSC1, Extended Himmelblau, Extended Three Exponential Terms, Extended Block Diagonal BD1, Extended PSC1, Raydan 2, Extended Tridiagonal 2, Extended Powell, Extended Freudenstein and Roth, Extended Rosenbrock | 10, ..., 10000 | Andrei [13]
Extended Beale, Broyden Tridiagonal, Quadratic Diagonal Perturbed | 10, ..., 1000 | Moré et al. [14]
Diagonal 3, Generalized Tridiagonal 2, Almost Perturbed Quadratic, Tridiagonal Perturbed Quadratic, Full Hessian FH1, Full Hessian FH2, Raydan 1, EG2, Extended White and Holst | 10, ..., 1000 | Andrei [13]

Figure 1: Performance profile based on the number of iterations for all problems.

Figure 2: Performance profile based on the number of function calls.

Figure 3: Performance profile based on CPU time per iteration.

From Figure 1, we see that the AMD2 method is the top performer, being more successful than the other methods in terms of the number of iterations. Figure 2 shows that the AMD2 method requires the fewest function calls. From Figure 3, we observe that the AMD2 method is faster than the MD and AMD1 methods and needs reasonable time to solve large-scale problems when compared to the BB method. At each iteration, the proposed method does not require more storage than classic diagonal updating methods. Moreover, the higher-order accuracy in approximating the Hessian matrix of the objective function makes the AMD methods need fewer iterations and fewer function evaluations. The numerical results reported in Figures 1, 2, and 3 demonstrate clearly that the new method AMD2 shows significant improvements over BB, MD, and AMD1. Generally, M = B_k performs better than M = I, most probably because B_k is a better Hessian approximation than the identity matrix I.
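Performance profiles of the kind shown in Figures 1-3 can be computed as follows; this is a minimal sketch of the Dolan-Moré construction in our own notation (the function name and matrix layout are assumptions, not from the paper):

```python
import numpy as np

def performance_profile(T, taus):
    """Dolan-More performance profile from a cost matrix T.

    T[p, s] is the cost (e.g., iteration count) of solver s on problem p;
    np.inf marks a failure. Returns rho with rho[i, s] = fraction of
    problems on which solver s has performance ratio
    r_{p,s} = T[p, s] / min_s T[p, s] at most taus[i].
    """
    T = np.asarray(T, dtype=float)
    best = T.min(axis=1, keepdims=True)    # best cost per problem
    ratios = T / best                      # performance ratios r_{p,s}
    n_problems = T.shape[0]
    return np.array([(ratios <= t).sum(axis=0) / n_problems for t in taus])
```

The value ρ(1) is the fraction of problems on which a solver is the best performer, and ρ(τ) for large τ approaches the solver's overall success rate.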

5. Conclusion

In this paper, we propose a new two-step diagonal gradient method based on an accumulative approach for unconstrained optimization. The new parameterization for the multistep diagonal gradient-type method is developed by applying the accumulative approach to the interpolating curves that form the basis of the multistep approach. Numerical results show that the proposed method is suitable for solving large-scale unconstrained optimization problems and is more stable than other similar methods in practical computation. Moreover, the improvement our proposed methods bring comes at a storage and computational cost of only O(n) per iteration, whereas multistep quasi-Newton methods [9, 10] require about O(n^2).

References

1. J. Barzilai and J. M. Borwein, "Two-point step size gradient methods," IMA Journal of Numerical Analysis, vol. 8, no. 1, pp. 141-148, 1988.
2. M. A. Hassan, W. J. Leong, and M. Farid, "A new gradient method via quasi-Cauchy relation which guarantees descent," Journal of Computational and Applied Mathematics, vol. 230, no. 1, pp. 300-305, 2009.
3. W. J. Leong, M. A. Hassan, and M. Farid, "A monotone gradient method via weak secant equation for unconstrained optimization," Taiwanese Journal of Mathematics, vol. 14, no. 2, pp. 413-423, 2010.
4. W. J. Leong, M. Farid, and M. A. Hassan, "Scaling on diagonal quasi-Newton update for large-scale unconstrained optimization," Bulletin of the Malaysian Mathematical Sciences Society, vol. 35, no. 2, pp. 247-256, 2012.
5. M. Y. Waziri, W. J. Leong, M. A. Hassan, and M. Monsi, "A new Newton's method with diagonal Jacobian approximation for systems of nonlinear equations," Journal of Mathematics and Statistics, vol. 6, pp. 246-252, 2010.
6. J. E. Dennis Jr. and H. Wolkowicz, "Sizing and least-change secant methods," SIAM Journal on Numerical Analysis, vol. 30, no. 5, pp. 1291-1314, 1993.
7. M. Farid, W. J. Leong, and M. A. Hassan, "A new two-step gradient-type method for large-scale unconstrained optimization," Computers & Mathematics with Applications, vol. 59, no. 10, pp. 3301-3307, 2010.
8. M. Farid and W. J. Leong, "An improved multi-step gradient-type method for large scale optimization," Computers & Mathematics with Applications, vol. 61, no. 11, pp. 3312-3318, 2011.
9. J. A. Ford and I. A. Moghrabi, "Alternating multi-step quasi-Newton methods for unconstrained optimization," Journal of Computational and Applied Mathematics, vol. 82, no. 1-2, pp. 105-116, 1997.
10. J. A. Ford and S. Tharmlikit, "New implicit updates in multi-step quasi-Newton methods for unconstrained optimisation," Journal of Computational and Applied Mathematics, vol. 152, no. 1-2, pp. 133-146, 2003.
11. L. Armijo, "Minimization of functions having Lipschitz continuous first partial derivatives," Pacific Journal of Mathematics, vol. 16, pp. 1-3, 1966.
12. R. H. Byrd and J. Nocedal, "A tool for the analysis of quasi-Newton methods with application to unconstrained minimization," SIAM Journal on Numerical Analysis, vol. 26, no. 3, pp. 727-739, 1989.
13. N. Andrei, "An unconstrained optimization test functions collection," Advanced Modeling and Optimization, vol. 10, no. 1, pp. 147-161, 2008.
14. J. J. Moré, B. S. Garbow, and K. E. Hillstrom, "Testing unconstrained optimization software," ACM Transactions on Mathematical Software, vol. 7, no. 1, pp. 17-41, 1981.
15. E. D. Dolan and J. J. Moré, "Benchmarking optimization software with performance profiles," Mathematical Programming, Series A, vol. 91, no. 2, pp. 201-213, 2002.