Abstract and Applied Analysis, Volume 2013, Article ID 478407, doi:10.1155/2013/478407. Hindawi Publishing Corporation.

Research Article

A Newton-Like Trust Region Method for Large-Scale Unconstrained Nonconvex Minimization

Weiwei Yang, Yueting Yang, Chenhui Zhang, and Mingyuan Cao
School of Mathematics and Statistics, Beihua University, Jilin 132013, China

Received 8 June 2013; Accepted 4 September 2013; Published 21 October 2013
Academic Editor: Bo-Qing Dong

Copyright © 2013 Yang Weiwei et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

We present a new Newton-like method for large-scale unconstrained nonconvex minimization. A new straightforward limited memory quasi-Newton updating, based on a modified quasi-Newton equation, is deduced to construct the trust region subproblem; both function value and gradient information are used to construct the approximate Hessian. The global convergence of the algorithm is proved. Numerical results indicate that the proposed method is competitive and efficient on some classical large-scale nonconvex test problems.

1. Introduction

We consider the following unconstrained optimization problem:
(1) $\min_{x \in \mathbb{R}^n} f(x)$,
where $f : \mathbb{R}^n \to \mathbb{R}$ is continuously differentiable.

Trust region methods are robust, can be applied to ill-conditioned problems, and have strong global convergence properties. Another advantage of trust region methods is that the approximate Hessian of the trust region subproblem need not be positive definite. Trust region methods are therefore important and efficient for nonconvex optimization problems [6–8, 10, 12, 14]. For a given iterate $x_k \in \mathbb{R}^n$, the main computation of trust region algorithms is solving the following quadratic subproblem:
(2) $\min_{s \in \mathbb{R}^n} \phi_k(s) = g_k^T s + \frac{1}{2} s^T B_k s$, s.t. $\|s\| \le \Delta_k$,
where $g_k = \nabla f(x_k)$ is the gradient of $f(x)$ at $x_k$, $B_k$ is the true Hessian $\nabla^2 f(x_k)$ or its approximation, $\Delta_k > 0$ is a trust region radius, and $\|\cdot\|$ refers to the Euclidean norm on $\mathbb{R}^n$. For a trial step $s_k$ generated by solving the subproblem (2), the agreement between the predicted reduction and the true variation of the objective function is measured by means of the ratio
(3) $r_k = \dfrac{f(x_k) - f(x_k + s_k)}{\phi_k(0) - \phi_k(s_k)}$.
Then the trust region radius $\Delta_k$ is updated according to the value of $r_k$. Trust region methods that ensure at least a Cauchy (steepest-descent-like) decrease on each iteration satisfy an evaluation complexity bound of the same order under identical conditions. It follows that Newton's method globalized by trust region regularization satisfies the same $O(\varepsilon^{-2})$ evaluation upper bound; such a bound can also be shown to be tight, provided additionally that the Hessian on the path of the iterates for which pure Newton steps are taken is Lipschitz continuous.

Newton's method has been efficiently safeguarded to ensure its global convergence to first- and even second-order critical points in the presence of local nonconvexity of the objective, using line search, trust region, or other regularization techniques [9, 13]. Many variants of these globalization techniques have been proposed. They generally retain fast local convergence under some nondegeneracy assumptions, are often suitable for large-scale problems, and sometimes allow approximate rather than true Hessians to be employed. Solving large-scale problems requires expensive computation and storage, so many researchers have studied limited memory techniques. Limited memory techniques were first applied to line search methods. Liu and Nocedal [15, 16] proposed a limited memory BFGS method (L-BFGS) for solving unconstrained optimization and proved its global convergence. Byrd et al. gave compact representations of the limited memory BFGS and SR1 formulas, which made it possible to combine limited memory techniques with trust region methods. Considering that the L-BFGS updating formula uses only gradient information and ignores the available function value information, Yang and Xu deduced a modified quasi-Newton formula with a limited memory compact representation, based on the modified quasi-Newton equation with a vector parameter. Recently, some researchers have combined limited memory techniques with trust region methods for solving large-scale unconstrained and constrained optimization problems.

In this paper, we deduce a new straightforward limited memory quasi-Newton updating based on the modified quasi-Newton equation, which uses both available gradient and function value information, to construct the trust region subproblem. Then the corresponding trust region method is proposed for large-scale unconstrained nonconvex minimization. The global convergence of the new algorithm is proved under some appropriate conditions.

The rest of the paper is organized as follows. In the next section, we deduce a new straightforward limited memory quasi-Newton updating. In Section 3, a Newton-like trust region method for large-scale unconstrained nonconvex minimization is proposed and the convergence property is proved under some reasonable assumptions. Some numerical results are given in Section 4.

2. The Modified Limited Memory Quasi-Newton Formula

In this section, we deduce a straightforward limited memory quasi-Newton updating based on the modified quasi-Newton equation, which employs both gradients and function values to construct the approximate Hessian and thus compensates for the information discarded by limited memory techniques. We then apply the derived formula in a trust region method.

Consider the following modified quasi-Newton equation:
(4) $B_{k+1} s_k = \hat{y}_k$,
where $s_k = x_{k+1} - x_k$, $y_k = g_{k+1} - g_k$, $\hat{y}_k = \bigl(1 + \theta_k/(s_k^T y_k)\bigr) y_k = \lambda_k y_k$, and $\theta_k = 6(f(x_k) - f(x_{k+1})) + 3(g_k + g_{k+1})^T s_k$. The quasi-Newton updating matrix constructed by (4) achieves higher-order accuracy in approximating the Hessian. Based on (4), the modified BFGS (MBFGS) updating is
(5) $B_{k+1} = B_k + \dfrac{\hat{y}_k \hat{y}_k^T}{\hat{y}_k^T s_k} - \dfrac{(B_k s_k)(B_k s_k)^T}{s_k^T B_k s_k} = B_k + \lambda_k \dfrac{y_k y_k^T}{y_k^T s_k} - \dfrac{(B_k s_k)(B_k s_k)^T}{s_k^T B_k s_k}$.
For a twice continuously differentiable function, if $x_k$ converges to a point $x^*$ at which $g(x^*) = 0$ and $\nabla^2 f(x^*)$ is positive definite, then $\lim_{k \to \infty} \theta_k = 0$, and hence $\lim_{k \to \infty} \lambda_k = 1$. Moreover, for sufficiently large $k$, the MBFGS updating approaches the BFGS updating.
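The scalars $\theta_k$ and $\lambda_k$ involve only quantities already available at iteration $k$, so they are cheap to form. The following is a minimal NumPy sketch (the helper name `modified_scaling` is hypothetical, not from the paper):

```python
import numpy as np

def modified_scaling(fk, fk1, gk, gk1, sk):
    """theta_k and lambda_k of the modified quasi-Newton equation (4).
    Assumes s_k^T y_k != 0 (it is positive near a strong minimizer)."""
    yk = gk1 - gk
    theta = 6.0 * (fk - fk1) + 3.0 * (gk + gk1) @ sk
    lam = 1.0 + theta / (sk @ yk)
    return theta, lam
```

On a quadratic objective the third-order information vanishes, so $\theta_k = 0$ and $\lambda_k = 1$ exactly, which is consistent with the limit $\lambda_k \to 1$ near a strong minimizer.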

Formula (5) can then be rewritten in the straightforward form
(6) $B_{k+1} = B_k - a_k a_k^T + b_k b_k^T$,
where $a_k = B_k s_k / (s_k^T B_k s_k)^{1/2}$ and $b_k = \bigl(\lambda_k / (y_k^T s_k)\bigr)^{1/2} y_k$. Thus, $B_k$ can be recursively expressed as
(7) $B_k = B_0 + \sum_{i=0}^{k-1} (b_i b_i^T - a_i a_i^T) = B_0 + [b_0, b_1, \dots, b_{k-1}][b_0, b_1, \dots, b_{k-1}]^T - [a_0, a_1, \dots, a_{k-1}][a_0, a_1, \dots, a_{k-1}]^T$.
Let $Y_k = [b_0, b_1, \dots, b_{k-1}]$ and $S_k = [a_0, a_1, \dots, a_{k-1}]$. Then the above formula can be simply written as
(8) $B_k = B_0 + Y_k Y_k^T - S_k S_k^T$.
Formula (8) is called the whole memory quasi-Newton formula. For a given positive integer $m$ ($m$ is usually taken as $3, 5, 7, \dots$), if at the $k$th iteration ($k \ge m$) we use the last $m$ pairs $(s_{k-m}, y_{k-m}), \dots, (s_{k-1}, y_{k-1})$ to update the starting matrix $B_k^{(0)}$ $m$ times, then according to (8) we get the following limited memory MBFGS (L-MBFGS) formula:
(9) $B_k = B_k^{(m)} = B_k^{(0)} + Y_k Y_k^T - S_k S_k^T$,
where $Y_k = [b_{k-m}, \dots, b_{k-1}]$ and $S_k = [a_{k-m}, \dots, a_{k-1}]$; that is,
(10) $B_k = B_k^{(0)} + [b_{k-m}, \dots, b_{k-1}][b_{k-m}, \dots, b_{k-1}]^T - [a_{k-m}, \dots, a_{k-1}][a_{k-m}, \dots, a_{k-1}]^T$,
where $a_{k-m+j} = B_{k-m+j} s_{k-m+j} / (s_{k-m+j}^T B_{k-m+j} s_{k-m+j})^{1/2}$ and $b_{k-m+j} = \bigl(\lambda_{k-m+j} / (y_{k-m+j}^T s_{k-m+j})\bigr)^{1/2} y_{k-m+j}$ ($j = 0, 1, \dots, m-1$).
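As a quick sanity check, the rank-two form (6) can be verified numerically against the MBFGS update (5). This NumPy sketch uses arbitrary data with $y^T s > 0$; the value of $\lambda_k$ is a stand-in:

```python
import numpy as np

rng = np.random.default_rng(2)
M = rng.standard_normal((4, 4))
B = M @ M.T + 4.0 * np.eye(4)          # a symmetric positive definite B_k
s = rng.standard_normal(4)
y = s + 0.1 * rng.standard_normal(4)   # keeps y^T s > 0 here
lam = 1.05                             # stand-in value for lambda_k

# MBFGS update (5)
B_mbfgs = (B + lam * np.outer(y, y) / (y @ s)
             - np.outer(B @ s, B @ s) / (s @ B @ s))

# rank-two form (6): B_{k+1} = B_k - a a^T + b b^T
a = B @ s / np.sqrt(s @ B @ s)
b = np.sqrt(lam / (y @ s)) * y
B_rank2 = B - np.outer(a, a) + np.outer(b, b)

print(np.allclose(B_mbfgs, B_rank2))   # -> True
```

The two expressions agree term by term: $a_k a_k^T$ reproduces $(B_k s_k)(B_k s_k)^T / (s_k^T B_k s_k)$ and $b_k b_k^T$ reproduces $\lambda_k y_k y_k^T / (y_k^T s_k)$.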

Since the vectors $b_i$ and $a_i$ ($i = k-m, \dots, k-2$) can be obtained and saved from the previous iterations, we only need to compute the vectors $b_{k-1}$ and $a_{k-1}$ to obtain the limited memory quasi-Newton updating matrix. Suppose $B_k^{(0)} = I$; the computation of $b_{k-1}$ needs $3n + 3$ multiplications. Now consider the computation of $a_{k-1}$. If $B_{k-1}$ were stored and multiplied by $s_{k-1}$ directly, the process would need $n^2$ multiplications. In this paper, we compute the product $B_{k-1} s_{k-1}$ by (9):
(11) $B_{k-1} s_{k-1} = B_{k-1}^{(0)} s_{k-1} + Y_{k-1} Y_{k-1}^T s_{k-1} - S_{k-1} S_{k-1}^T s_{k-1} = B_{k-1}^{(0)} s_{k-1} + [b_{k-m-1}, \dots, b_{k-2}][b_{k-m-1}^T s_{k-1}, \dots, b_{k-2}^T s_{k-1}]^T - [a_{k-m-1}, \dots, a_{k-2}][a_{k-m-1}^T s_{k-1}, \dots, a_{k-2}^T s_{k-1}]^T$.
So we need $4mn$ multiplications to obtain $B_{k-1} s_{k-1}$. Let $\bar{a}_{k-1} = B_{k-1} s_{k-1}$; then $a_{k-1} = (s_{k-1}^T \bar{a}_{k-1})^{-1/2} \bar{a}_{k-1}$, which takes $2n + 1$ multiplications. Ignoring lower-order terms, a total of $(4m+5)n$ multiplications is needed to obtain $B_k$.

Notice that the only difference between the limited memory quasi-Newton method and the standard quasi-Newton method lies in the matrix updating. Instead of storing the matrix $B_k$, we store $m$ pairs of vectors $\{a_i, b_i\}$ that define $B_k$ implicitly. The product $B_k v$ or $v^T B_k v$ is obtained by performing a sequence of inner products involving $v$ and the $m$ most recent vector pairs $\{a_i, b_i\}$.

In the following, we discuss the computation of the products $B_k v$ and $v^T B_k v$ for $v \in \mathbb{R}^n$. As in (11), we need $4mn$ multiplications to obtain $B_k v$. If $B_k v$ has been computed, only one more inner product, that is, $n$ multiplications, is needed to obtain $v^T B_k v$. If $B_k v$ has not been computed, we compute $v^T B_k v$ directly by (9):
(12) $v^T B_k v = v^T B_k^{(0)} v + v^T Y_k Y_k^T v - v^T S_k S_k^T v = v^T B_k^{(0)} v + (Y_k^T v)^T (Y_k^T v) - (S_k^T v)^T (S_k^T v)$.
The whole computation requires only $(2m+1)n + 4m$ multiplications. Thus, $2mn$ multiplications are saved in contrast to the previous method.
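The implicit storage scheme behind (9), (11), and (12) can be sketched as a small operator class. This is an illustration, not the authors' code: the names are hypothetical, $B_k^{(0)} = I$ is hardcoded, and a curvature safeguard (not in the paper) skips updates for which the square roots would be undefined:

```python
import numpy as np
from collections import deque

class LMBFGSOperator:
    """Implicit B_k = B_k^(0) + Y_k Y_k^T - S_k S_k^T with B_k^(0) = I,
    stored as the last m vector pairs (a_i, b_i)."""

    def __init__(self, m=5):
        self.a_list = deque(maxlen=m)   # a_i, the columns of S_k
        self.b_list = deque(maxlen=m)   # b_i, the columns of Y_k

    def matvec(self, v):
        """B_k v as in (11): O(mn) work, no n-by-n matrix is formed."""
        out = v.copy()                  # B_k^(0) v with B_k^(0) = I
        for b in self.b_list:
            out += b * (b @ v)
        for a in self.a_list:
            out -= a * (a @ v)
        return out

    def quad(self, v):
        """v^T B_k v as in (12), without forming B_k v first."""
        return (v @ v
                + sum((b @ v) ** 2 for b in self.b_list)
                - sum((a @ v) ** 2 for a in self.a_list))

    def update(self, s, y, lam):
        """Append the new pair (a_{k-1}, b_{k-1}); the deque drops the
        oldest pair once m pairs are stored (limited memory).
        Assumes y^T s > 0."""
        b = np.sqrt(lam / (y @ s)) * y  # b_{k-1}
        a_bar = self.matvec(s)          # B_{k-1} s_{k-1} via (11)
        sBs = s @ a_bar
        if sBs <= 0:                    # safeguard, not in the paper:
            return                      # skip when curvature fails
        self.a_list.append(a_bar / np.sqrt(sBs))
        self.b_list.append(b)
```

The `deque` with `maxlen=m` implements the limited memory: appending the $(m+1)$st pair silently discards the oldest one.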

If we take $B_k^{(0)} = \gamma_k I$ and the quantities $v^T v$, $Y_k^T v$, $S_k^T v$ have been obtained and saved from the previous iteration, then by (12) only $2m + 1$ multiplications are needed to compute $v^T B_k v$, a considerable improvement over $(2m+1)n$.

Algorithm 1.

Compute and save $S_k$, $Y_k$.

For $j = 0, 1, \dots, m-1$:

Step 1. Compute $b_{k-m+j} = \bigl(\lambda_{k-m+j} / (y_{k-m+j}^T s_{k-m+j})\bigr)^{1/2} y_{k-m+j}$.

Step 2. Compute $a_{k-m+j} = B_{k-m+j} s_{k-m+j}$.

Step 3. Compute $a_{k-m+j} := (s_{k-m+j}^T a_{k-m+j})^{-1/2} a_{k-m+j}$.

Algorithm 2.

Compute $B_k v$, $v^T B_k v$.

Let $x_k$ be the current iterate; the vectors $a_{k-1}$, $b_{k-1}$, $g_k$ and the matrices $S_{k-1}$, $Y_{k-1}$ have been obtained in the previous iteration.

Step 1. Update $S_k$, $Y_k$.

Step 2. Compute $S_k^T v$, $Y_k^T v$.

Step 3. Compute $B_k v$ by (11); compute $v^T B_k v$ by (12).

We use the form of (9) to store $B_k$: instead of updating $B_k$ into $B_{k+1}$, we update $S_k$, $Y_k$ into $S_{k+1}$, $Y_{k+1}$.

3. Newton-Like Trust Region Method

In this section, we present a Newton-like trust region method for large-scale unconstrained nonconvex minimization.

Algorithm 3.

Step 0. Given $x_0 \in \mathbb{R}^n$, $\varepsilon > 0$, $\hat{\Delta} > 0$, $\Delta_0 \in (0, \hat{\Delta})$, $\eta \in [0, 1/4)$, and a matrix $B_0 \in \mathbb{R}^{n \times n}$. Compute $g_0 = \nabla f(x_0)$; set $k := 0$.

Step 1. If $\|g_k\| < \varepsilon$, then stop.

Step 2. Solve the subproblem (2) to obtain $s_k$.

Step 3. Compute
(13) $r_k = \dfrac{f(x_k) - f(x_k + s_k)}{\phi_k(0) - \phi_k(s_k)}$.

Step 4. Compute
(14) $x_{k+1} = \begin{cases} x_k + s_k, & \text{if } r_k > \eta, \\ x_k, & \text{otherwise}. \end{cases}$

Step 5. Update the trust region radius as follows:
(15) $\Delta_{k+1} = \begin{cases} \frac{1}{4}\Delta_k, & \text{if } r_k < \frac{1}{4}, \\ \min\{2\Delta_k, \hat{\Delta}\}, & \text{if } r_k > \frac{3}{4}, \\ \Delta_k, & \text{otherwise}. \end{cases}$

Step 6. Apply Algorithm 1 to update $S_k$, $Y_k$ into $S_{k+1}$, $Y_{k+1}$, thereby updating $B_k$ into $B_{k+1}$; set $k := k+1$ and go to Step 1.
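The outer loop of Algorithm 3 (Steps 1, 3, 4, and 5) can be sketched as follows. For brevity this illustration takes $B_k = I$, for which the subproblem (2) has the closed-form solution $s_k = -\min\{1, \Delta_k/\|g_k\|\}\,g_k$ standing in for a CG-Steihaug solve; the function names are assumptions of the sketch:

```python
import numpy as np

def trust_region_min(f, grad, x0, delta0=1.0, delta_max=10.0,
                     eta=0.1, eps=1e-8, maxit=500):
    """Sketch of the outer loop of Algorithm 3 with B_k = I."""
    x = np.asarray(x0, dtype=float)
    delta = delta0
    for _ in range(maxit):
        g = grad(x)
        gnorm = np.linalg.norm(g)
        if gnorm < eps:                      # Step 1
            break
        s = -min(delta / gnorm, 1.0) * g     # minimizer of (2) for B = I
        pred = -(g @ s + 0.5 * (s @ s))      # phi_k(0) - phi_k(s_k) > 0
        r = (f(x) - f(x + s)) / pred         # ratio (13)
        if r > eta:                          # acceptance rule (14)
            x = x + s
        if r < 0.25:                         # radius update (15)
            delta *= 0.25
        elif r > 0.75:
            delta = min(2.0 * delta, delta_max)
    return x
```

On a strongly convex quadratic this loop accepts every step and expands the radius until full steps are taken, after which it terminates at the minimizer.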

In Step 2, the CG-Steihaug algorithm is used to solve the subproblem (2), which makes the algorithm suitable for solving large-scale unconstrained optimization. In the solving process, the products $B_k v$ and $v^T B_k v$ are computed by Algorithm 2, so the whole computation of solving the subproblem requires only $O(n)$ multiplications.
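A compact version of the CG-Steihaug iteration, written so that the matrix enters only through a matvec callback such as Algorithm 2, may look as follows (a sketch under that assumption, not the paper's implementation):

```python
import numpy as np

def cg_steihaug(g, Bv, delta, tol=1e-10, maxit=200):
    """Steihaug truncated CG for subproblem (2): minimize
    g^T s + 0.5 s^T B s subject to ||s|| <= delta, where Bv(v)
    returns B @ v (e.g. computed by Algorithm 2)."""
    s = np.zeros_like(g)
    r = g.copy()                     # model gradient B s + g at s = 0
    if np.linalg.norm(r) < tol:
        return s
    d = -r
    for _ in range(maxit):
        Bd = Bv(d)
        dBd = d @ Bd
        if dBd <= 0:                 # negative curvature: go to the boundary
            return s + _boundary_tau(s, d, delta) * d
        alpha = (r @ r) / dBd
        if np.linalg.norm(s + alpha * d) >= delta:
            return s + _boundary_tau(s, d, delta) * d
        s = s + alpha * d
        r_new = r + alpha * Bd
        if np.linalg.norm(r_new) < tol:
            break
        d = -r_new + ((r_new @ r_new) / (r @ r)) * d
        r = r_new
    return s

def _boundary_tau(s, d, delta):
    """Positive root tau of ||s + tau d|| = delta."""
    a, b, c = d @ d, 2.0 * (s @ d), s @ s - delta ** 2
    return (-b + np.sqrt(b * b - 4.0 * a * c)) / (2.0 * a)
```

Because the approximate Hessian may be indefinite for nonconvex problems, the negative-curvature branch is essential: it steps to the trust region boundary instead of continuing the CG recurrence.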

To give the convergence result, we need the following assumptions.

Assumption 4.

(H1) The level set $\Omega = \{x \mid f(x) \le f(x_0)\}$ is contained in a bounded convex set.

(H2) The gradient of the objective function $f(x)$ is Lipschitz continuous in a neighborhood of $x^*$; that is, there is a constant $L > 0$ such that
(16) $\|g(x) - g(y)\| \le L \|x - y\|$, $\forall x, y \in \mathbb{R}^n$.

(H3) The solution $s_k$ of the subproblem (2) satisfies
(17) $\phi_k(0) - \phi_k(s_k) \ge \sigma \|g_k\| \min\Bigl\{\Delta_k, \dfrac{\|g_k\|}{\|B_k\|}\Bigr\}$,
where $\sigma \in (0, 1]$.

(H4) The solution $s_k$ of subproblem (2) satisfies
(18) $\|s_k\| \le \gamma \Delta_k$,
for some $\gamma \ge 1$.

Lemma 5.

Suppose that (H1) holds, $B_k$ is positive definite, and there exist constants $M_2 > M_1 > 0$ such that
(19) $M_1 \le \dfrac{(g_{k+1} - g_k)^T (x_{k+1} - x_k)}{\|x_{k+1} - x_k\|^2} \le M_2$, $\qquad M_1 \le \dfrac{\|g_{k+1} - g_k\|^2}{(g_{k+1} - g_k)^T (x_{k+1} - x_k)} \le M_2$,
for any $x_{k+1}, x_k \in \Omega$ with $x_{k+1} \ne x_k$. Then the matrices $\{B_k\}$ are uniformly bounded.

Proof.

From the Taylor expansion
(20) $f(x_{k+1}) = f(x_k) + g_k^T s_k + \frac{1}{2} s_k^T \nabla^2 f(x_k + t s_k) s_k$, $\quad t \in (0, 1)$,
we have
(21) $\bigl|2\bigl(f(x_k) - f(x_{k+1}) + g_k^T s_k\bigr)\bigr| = \bigl|s_k^T \nabla^2 f(x_k + t s_k) s_k\bigr|$, $\quad t \in (0, 1)$.
Then
(22) $|\theta_k| = \bigl|6(f(x_k) - f(x_{k+1})) + 3(g_k + g_{k+1})^T s_k\bigr| \le 3\bigl|s_k^T \nabla^2 f(x_k + t s_k) s_k - (g_{k+1} - g_k)^T s_k\bigr|$.
From (19), we obtain
(23) $|\theta_k| \le 6 M_2 \|s_k\|^2$.
It is obvious that
(24) $|\lambda_k| = \Bigl|1 + \dfrac{\theta_k}{s_k^T y_k}\Bigr| \le 1 + \dfrac{6 M_2}{M_1}$.
Thus,
(25) $b_k^T b_k \le \dfrac{|\lambda_k| \|y_k\|^2}{y_k^T s_k} \le M_2 \Bigl(1 + \dfrac{6 M_2}{M_1}\Bigr)$.

Since $\operatorname{Tr}(xy^T) = x^T y$ ($x, y \in \mathbb{R}^n$) and $\operatorname{Tr}(A + B) = \operatorname{Tr}(A) + \operatorname{Tr}(B)$ ($A, B \in \mathbb{R}^{n \times n}$), from (9) (in which $B_k^{(0)} = I$) we have
(26) $B_k = B_k^{(0)} + [b_{k-m}, \dots, b_{k-1}][b_{k-m}, \dots, b_{k-1}]^T - [a_{k-m}, \dots, a_{k-1}][a_{k-m}, \dots, a_{k-1}]^T$;
then, by (25) and the positive definiteness of $B_k$, we have
(27) $\operatorname{Tr}(B_k) = \operatorname{Tr}(B_k^{(0)}) + \sum_{j=0}^{m-1} \bigl(b_{k-m+j}^T b_{k-m+j} - a_{k-m+j}^T a_{k-m+j}\bigr) \le \operatorname{Tr}(B_k^{(0)}) + \sum_{j=0}^{m-1} b_{k-m+j}^T b_{k-m+j} \le n + m\Bigl(M_2 + \dfrac{6 M_2^2}{M_1}\Bigr)$.

By the definition of the Euclidean norm, $\|A\| = \sqrt{\rho(A^T A)}$ ($A \in \mathbb{R}^{m \times n}$); when $A \in \mathbb{R}^{n \times n}$ is symmetric, $\|A\| = \rho(A)$. Obviously, $B_k$ is a symmetric matrix. Suppose the eigenvalues of $B_k$ are $0 < \lambda_1 \le \lambda_2 \le \cdots \le \lambda_n$; then
(28) $\|B_k\| = \lambda_n \le \sum_{i=1}^n \lambda_i = \operatorname{Tr}(B_k) \le n + m\Bigl(M_2 + \dfrac{6 M_2^2}{M_1}\Bigr)$.
So $B_k$ is uniformly bounded.

Theorem 6.

Let $\eta = 0$ in Algorithm 3. Suppose that Assumption 4 holds and $\|B_k\| \le \beta$ for some constant $\beta$. Let the sequence $\{x_k\}$ be generated by Algorithm 3. Then one has
(29) $\liminf_{k \to \infty} \|g_k\| = 0$.

The proof, which is similar to that of Theorem 4.7 in the cited literature, is omitted.

4. Numerical Results

In this section, we apply Algorithm 3 to solve nonconvex programming problems and report preliminary numerical results. The proposed method is denoted by NLMTR. The contrast method, called NTR, is the same as NLMTR except that $B_k$ is updated by the BFGS formula. All tests are implemented in Matlab R2008a on a PC with a 2.00 GHz CPU and 2.00 GB RAM. The test problems for nonconvex unconstrained minimization are taken from Moré et al. and the CUTEr collection [26, 27]. These problems are listed in Table 1.

Problem Objective function
1 Gaussian function
2 Powell badly scaled function
3 Gulf function
4 Chebyquad function
5 Boundary value function
6 Broyden tridiagonal function
7 Separable cubic function
8 Arwhead function
9 Extended denschnb function
10 Extended denschnf function

All numerical results are listed in Table 2, in which iter stands for the number of iterations (which equals the number of gradient evaluations); nf stands for the number of objective function evaluations; Prob stands for the problem label; Dim stands for the number of variables of the tested problem; cpu denotes the CPU time for solving the problem; $\|g_k\|$ is the gradient norm at termination; and $f^*$ denotes the optimal value.

Numerical results for NLMTR and NTR.

Prob/Dim NLMTR NTR
iter/nf/gk/f*/cpu iter/nf/gk/f*/cpu

1/3 4/8/2.7405e-009/1.1279e-008/0.00 28/64/3.0591e-007/6.7392e-015/0.02
2/2 33/79/2.3479e+003/4.3276e-004/0.01 36/78/0.0028/0.0014/0.00
3/3 39/83/0.0014/9.5599e-005/0.01 76/170/9.8496e-011/3.9977e-012/0.02
4/5 33/77/4.5033e-006/1.5576e-012/0.01 9/20/3.6793e-011/8.3131e-023/0.00
5/10 47/98/0.0095/5.5443e-004/0.01 **
5/50 51/107/3.6715e-004/8.5719e-006/0.01 **
6/10 /0.1274/4.7049e-004/0.01 36/92/3.9484e-007/2.5660e-015/0.01
7/10 18/40/7.2477e-009/1.3136e-017/0.00 10/20/3.5034e-009/3.7240e-018/0.00
7/50 22/44/9.0075e-009/1.9726e-017/0.01 11/28/2.7054e-009/2.1585e-018/0.01

5/100 45/95/1.0124e-004/1.1843e-006/0.01 **
5/500 36/78/4.3127e-006/1.0217e-008/0.21 **
7/100 23/46/5.2647e-009/6.6238e-018/0.02 12/31/3.0016e-011/2.8353e-022/0.77
7/500 25/50/3.9054e-009/3.8097e-018/0.48 12/28/4.3635e-009/5.4178e-018/2.64
8/100 39/96/0.0210/1.8391e-005/0.02 13/33/1.3995e-011/-1.4211e-014/1.14
9/100 41/91/6.3678e-004/6.7291e-008/0.03 9/19/1.0376e-010/2.1414e-021/0.72
9/500 41/91/0.0014/3.3646e-007/0.29 11/24/6.1012e-010/4.7336e-020/12.09
10/100 40/90/0.0111/4.1508e-007/0.02 26/68/2.4382e-011/7.7829e-025/2.15
10/500 42/94/0.0247/2.0754e-006/0.33 18/49/1.2142e-007/2.1634e-017/14.44

5/1000 34/74/1.0801e-006/1.2890e-009/0.72 **
5/2000 32/70/2.7030e-007/1.6186e-010/2.65 **
5/5000 29/64/4.3275e-008/1.0388e-011/15.08 **
7/1000 25/50/5.7571e-009/8.2784e-018/1.82 10/21/3.6235e-009/3.9031e-018/15.85
7/2000 25/50/8.3098e-009/1.7247e-017/7.13 11/23/2.5187e-010/1.9410e-020/130.06
7/5000 26/52/9.1295e-009/2.0187e-017/80.52 **
9/1000 41/91/0.0020/6.7291e-007/1.10 11/24/1.3827e-009/2.5035e-019/43.52
9/2000 44/97/0.0028/1.3458e-006/3.63 8/23/5.3761e-010/3.6664e-020/112.23
9/5000 44/97/0.0045/3.3646e-006/22.28 **
10/1000 42/94/0.0350/4.1508e-006/1.19 17/49/1.6202e-007/3.8522e-017/44.47
10/2000 42/94/0.0494/8.3015e-006/4.49 14/45/3.6692e-007/1.9770e-016/197.92
10/5000 45/100/0.0782/2.0754e-005/23.59 **

** The algorithm fails.

We compare NLMTR with NTR. The trial step $s_k$ is computed by the CG-Steihaug algorithm. The matrix $B_k$ of NLMTR is updated by the straightforward modified L-MBFGS formula (9); we choose $\eta = 0.1$ and $m = 3$. The matrix $B_k$ of NTR is updated by the BFGS formula. The iteration is terminated when $\|g_k\| \le \varepsilon$ or $\|s_k\| \le \varepsilon$, where $\varepsilon = 10^{-8}$. The corresponding figures are listed in Table 2.

From Table 2, we can see that for small-scale problems the optimal values and gradient norms of NTR are more accurate than those of NLMTR. For middle-scale problems, the accuracy of NTR is higher, but the cpu time of NLMTR is shorter. For large-scale problems, the cpu time of NTR is much longer than that of NLMTR, and NTR fails on some problems, especially when n = 5000. Hence NLMTR is suitable for solving large-scale nonconvex problems.

Acknowledgments

This work is supported in part by the NNSF (11171003) of China, the Key Project of Chinese Ministry of Education (no. 211039), and Natural Science Foundation of Jilin Province of China (no. 201215102).

References

1. Powell, M. J. D., "A new algorithm for unconstrained optimization," in Rosen, J. B., Mangasarian, O. L., Ritter, K. (eds.), Nonlinear Programming, Academic Press, New York, NY, USA, 1970, pp. 31–65.
2. Nocedal, J., Yuan, Y.-X., "Combining trust region and line search techniques," in Yuan, Y. (ed.), Advances in Nonlinear Programming, vol. 14, Kluwer Academic, Dordrecht, The Netherlands, 1998, pp. 153–175.
3. Nocedal, J., Wright, S. J., Numerical Optimization, Springer, New York, NY, USA, 1999.
4. Conn, A. R., Gould, N. I. M., Toint, P. L., Trust-Region Methods, SIAM, Philadelphia, PA, USA, 2000.
5. Wu, H. P., Ni, Q., "A new trust region algorithm with a conic model," Numerical Mathematics, 30(1), 2008, pp. 57–67.
6. Toint, Ph. L., "Global convergence of a class of trust-region methods for nonconvex minimization in Hilbert space," IMA Journal of Numerical Analysis, 8(2), 1988, pp. 231–252.
7. Powell, M. J. D., Yuan, Y., "A trust region algorithm for equality constrained optimization," Mathematical Programming, 49(2), 1990, pp. 189–211.
8. Li, D.-H., Fukushima, M., "A modified BFGS method and its global convergence in nonconvex minimization," Journal of Computational and Applied Mathematics, 129(1-2), 2001, pp. 15–35.
9. Nesterov, Y., Polyak, B. T., "Cubic regularization of Newton method and its global performance," Mathematical Programming, 108(1), 2006, pp. 177–205.
10. Guo, Q., Liu, J.-G., "Global convergence of a modified BFGS-type method for unconstrained non-convex minimization," Journal of Applied Mathematics & Computing, 24(1-2), 2007, pp. 325–331.
11. Gratton, S., Sartenaer, A., Toint, P. L., "Recursive trust-region methods for multiscale nonlinear optimization," SIAM Journal on Optimization, 19(1), 2008, pp. 414–444.
12. Cartis, C., Gould, N. I. M., Toint, P. L., "On the complexity of steepest descent, Newton's and regularized Newton's methods for nonconvex unconstrained optimization problems," SIAM Journal on Optimization, 20(6), 2010, pp. 2833–2852.
13. Cartis, C., Gould, N. I. M., Toint, P. L., "Adaptive cubic regularisation methods for unconstrained optimization. Part I: motivation, convergence and numerical results," Mathematical Programming, 127(2), 2011, pp. 245–295.
14. Xue, D., Sun, W., He, H., "A structured trust region method for nonconvex programming with separable structure," Numerical Algebra, Control and Optimization, 3(2), 2013, pp. 283–293.
15. Nocedal, J., "Updating quasi-Newton matrices with limited storage," Mathematics of Computation, 35(151), 1980, pp. 773–782.
16. Liu, D. C., Nocedal, J., "On the limited memory BFGS method for large scale optimization," Mathematical Programming, 45(3), 1989, pp. 503–528.
17. Byrd, R. H., Nocedal, J., Schnabel, R. B., "Representations of quasi-Newton matrices and their use in limited memory methods," Mathematical Programming, 63(2), 1994, pp. 129–156.
18. Xu, C., Zhang, J., "A survey of quasi-Newton equations and quasi-Newton methods for optimization," Annals of Operations Research, 103, 2001, pp. 213–234.
19. Yang, Y. T., Xu, C. X., "A compact limited memory method for large scale unconstrained optimization," European Journal of Operational Research, 180(1), 2007, pp. 48–56.
20. Ni, Q., Yuan, Y.,
"A subspace limited memory quasi-Newton algorithm for large-scale nonlinear bound constrained optimization," Mathematics of Computation, 66(220), 1997, pp. 1509–1520.
21. Burdakov, O. P., Martínez, J. M., Pilotta, E. A., "A limited-memory multipoint symmetric secant method for bound constrained optimization," Annals of Operations Research, 117(1–4), 2002, pp. 51–70.
22. Wang, Z. H., "A limited memory trust region method for unconstrained optimization and its implementation," Mathematica Numerica Sinica, 27(4), 2005, pp. 395–404.
23. Gill, P. E., Murray, W., Saunders, M. A., "SNOPT: an SQP algorithm for large-scale constrained optimization," SIAM Review, 47(1), 2005, pp. 99–131.
24. Liu, H., Ni, Q., "New limited-memory symmetric secant rank one algorithm for large-scale unconstrained optimization," Transactions of Nanjing University of Aeronautics and Astronautics, 25(3), 2008, pp. 235–239.
25. Moré, J. J., Garbow, B. S., Hillstrom, K. E., "Testing unconstrained optimization software," ACM Transactions on Mathematical Software, 7(1), 1981, pp. 17–41.
26. Gould, N. I. M., Orban, D., Toint, P. L., "GALAHAD, a library of thread-safe Fortran 90 packages for large-scale nonlinear optimization," ACM Transactions on Mathematical Software, 29(4), 2003, pp. 353–372.
27. Benson, H. Y., "Cute models," http://orfe.princeton.edu/~rvdb/ampl/nlmodels/cute/