Mathematical Problems in Engineering, Hindawi. ISSN 1563-5147 (online), 1024-123X (print). doi:10.1155/2020/4519274, Article ID 4519274.

Research Article

Convergence Analysis of an Improved BFGS Method and Its Application in the Muskingum Model

Tianshan Yang (1,2), Pengyuan Li (3, https://orcid.org/0000-0002-3108-3510), Xiaoliang Wang (4, https://orcid.org/0000-0002-6926-1765), and Weijun Zhou (1)

(1) School of Business, Guangxi University, Nanning, Guangxi, China
(2) School of Finance and Economics, Nanning College for Vocational Technology, Nanning, Guangxi, China
(3) College of Mathematics and Information Science, Guangxi Center for Mathematical Research, Guangxi University, Nanning, Guangxi, China
(4) School of Mathematical Sciences, Dalian University of Technology, Dalian, Liaoning, China

Received 28 June 2020; Accepted 28 July 2020; Published 18 August 2020

Copyright © 2020 Tianshan Yang et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

The BFGS method is one of the most effective quasi-Newton algorithms for minimization problems. In this paper, an improved BFGS method with a modified weak Wolfe–Powell (MWWP) line search technique is used to solve convex minimization problems, and its convergence analysis is established. Seventy-four academic test problems and the Muskingum model are used in the numerical experiments. The numerical results show that our algorithm compares favourably with the usual BFGS algorithm in terms of the number of iterations and the time consumed, which indicates that our algorithm is effective and reliable.

Funding: Basic Ability Promotion Project of Guangxi Young and Middle-Aged Teachers (2020KY30018).
1. Introduction

With the development of the economy and society, a large number of optimization problems have emerged in the fields of economic management, aerospace, transportation, national defense, and so on. It is therefore both necessary and meaningful to analyse such problems and to find effective methods for solving them. Consider the optimization model

(1) \min_{x \in \mathbb{R}^n} f(x),

where f: \mathbb{R}^n \to \mathbb{R} and f \in C^2. To solve (1), the following iterative scheme is widely used: given a starting point x_0,

(2) x_{k+1} = x_k + \alpha_k d_k, \quad k = 0, 1, 2, \ldots,

where x_k is the current iteration point, x_{k+1} is the next iteration point, \alpha_k is the step length, and d_k is the search direction obtained by solving the quasi-Newton equation

(3) B_k d_k + g_k = 0,

where g_k denotes the gradient \nabla f(x_k) of f(x) at the point x_k, B_k is the quasi-Newton updating matrix (an approximation of the Hessian), and the sequence \{B_k\} satisfies the standard secant equation B_{k+1} s_k = y_k. The matrix B_k is updated by

(4) B_{k+1} = B_k - \frac{B_k s_k s_k^T B_k}{s_k^T B_k s_k} + \frac{y_k y_k^T}{s_k^T y_k},

where y_k = g_{k+1} - g_k, s_k = x_{k+1} - x_k, and B_0 is symmetric and positive definite.
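For reference, the rank-two update (4) is straightforward to code; the following is a minimal NumPy sketch of ours, not code from the paper:

```python
import numpy as np

def bfgs_update(B, s, y):
    """Standard BFGS update (4):
    B+ = B - (B s s^T B)/(s^T B s) + (y y^T)/(s^T y).
    B+ stays symmetric positive definite as long as s^T y > 0."""
    Bs = B @ s
    return B - np.outer(Bs, Bs) / (s @ Bs) + np.outer(y, y) / (s @ y)
```

By construction, the updated matrix satisfies the secant equation B_{k+1} s_k = y_k.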

Formula (4) is the famous standard BFGS update formula, which underlies one of the most effective quasi-Newton methods. For a convex function, under exact line search or certain special inexact line searches, the global convergence (see [1, 2]) and superlinear convergence (see [3, 4]) of the BFGS method have been established. For general functions, however, the BFGS method may fail under inexact line search techniques; this was proven by Dai [5], and Mascarenhas [6] showed that the BFGS method may fail to converge even under exact line search. Although the convergence theory of the BFGS method for general nonconvex functions has these shortcomings, its high efficiency and great numerical stability have motivated many scholars to study and improve the method. The improvements achieved are as follows.

Formula 1.

(See [7].) The BFGS update formula is modified to

(5) B_{k+1} = B_k - \frac{B_k s_k s_k^T B_k}{s_k^T B_k s_k} + \frac{\delta_k \delta_k^T}{s_k^T \delta_k},

where \delta_k = y_k + \left( \max\{0, -s_k^T y_k/\|s_k\|^2\} + \phi(\|g_k\|) \right) s_k and the function \phi satisfies (i) \phi(t) > 0 for all t > 0; (ii) \phi(t) = 0 if and only if t = 0; (iii) \phi(t) is bounded whenever t lies in a bounded set. Li and Fukushima [7] discussed its global convergence without the convexity assumption on f.

Formula 2.

(See [9].) The BFGS update formula is modified to

(6) B_{k+1} = B_k - \frac{B_k s_k s_k^T B_k}{s_k^T B_k s_k} + \frac{y_k^m (y_k^m)^T}{s_k^T y_k^m},

where y_k^m = y_k + (\rho_k/\|s_k\|^2) s_k and \rho_k = 2[f(x_k) - f(x_k + \alpha_k d_k)] + [g(x_k + \alpha_k d_k) + g(x_k)]^T s_k. Moreover, scholars [8, 13] have proven that this method performs better than the original BFGS method.

Formula 3.

(See [13].) The BFGS update formula is modified to

(7) B_{k+1} = B_k - \frac{B_k s_k s_k^T B_k}{s_k^T B_k s_k} + \frac{y_k^l (y_k^l)^T}{s_k^T y_k^l},

where y_k^l = y_k + \bar{A}_k s_k and \bar{A}_k = \left\{ 6[f(x_k) - f(x_k + \alpha_k d_k)] + 3[g(x_k + \alpha_k d_k) + g(x_k)]^T s_k \right\}/\|s_k\|^2. From the definition of \bar{A}_k, it is clear that the method uses both gradient and function value information. In addition, a modulated quasi-Newton method with superlinear convergence constructed from formula 3 is studied in [8].

Formula 4.

(See [14].) The BFGS update formula is modified to

(8) \tilde{B}_{k+1} = \tilde{B}_k - \frac{\tilde{B}_k s_k s_k^T \tilde{B}_k}{s_k^T \tilde{B}_k s_k} + \frac{y_k^* (y_k^*)^T}{s_k^T y_k^*},

where y_k^* = y_k + A_k s_k and A_k = \max\{\bar{A}_k, 0\}. The global convergence of this improved BFGS method (MBFGS) is discussed by Li et al. [14]. They also compared the above methods in numerical experiments, and the results show that the algorithm based on this update is superior to the other three.
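The correction term \bar{A}_k in (7) and (8) needs only two function values and two gradients per iteration. A small Python sketch (function names are ours); note that \bar{A}_k vanishes exactly for quadratic objectives, so the update then reduces to the standard one:

```python
import numpy as np

def modified_y(f, g, xk, xk1):
    """Compute y*_k = y_k + A_k s_k of update (8), where
    A_k = max(Abar_k, 0) and
    Abar_k = [6(f(x_k) - f(x_{k+1})) + 3(g(x_{k+1}) + g(x_k))^T s_k] / ||s_k||^2."""
    s = xk1 - xk
    y = g(xk1) - g(xk)
    Abar = (6.0 * (f(xk) - f(xk1)) + 3.0 * (g(xk1) + g(xk)) @ s) / (s @ s)
    return y + max(Abar, 0.0) * s
```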

In many optimization algorithms, the weak Wolfe–Powell (WWP) line search technique is often used to find the step length \alpha_k. The WWP line search is determined by

(9) f(x_k + \alpha_k d_k) \le f_k + j \alpha_k g_k^T d_k, \qquad g(x_k + \alpha_k d_k)^T d_k \ge \rho g_k^T d_k,

where j \in (0, 1/2), \alpha_k > 0, and \rho \in (j, 1).

In order to obtain further interesting properties of the WWP line search, many scholars have improved it. Yuan et al. [15] proposed a modified WWP (MWWP) line search technique and established the global convergence of the BFGS and PRP methods under it. Their improved line search is formulated as follows:

(10) f(x_k + \alpha_k d_k) \le f(x_k) + \beta \alpha_k g_k^T d_k + \alpha_k \min\{-\beta_1 g_k^T d_k, \beta \alpha_k \|d_k\|^2/2\},

(11) g(x_k + \alpha_k d_k)^T d_k \ge \theta g_k^T d_k + \min\{-\beta_1 g_k^T d_k, \beta \alpha_k \|d_k\|^2\},

where \beta \in (0, 1/2), \alpha_k > 0, \beta_1 \in (0, \beta), and \theta \in (\beta, 1). The detailed line search is elaborated in [15], and some research results based on it can be found in [16, 17]. The above discussion motivates us to seek an improved BFGS method with better numerical performance.
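For illustration, a step length satisfying conditions (10) and (11) can be found by a standard bracketing/bisection scheme. This sketch is our own (not from the paper), assumes d is a descent direction, and uses the paper's parameter values as defaults:

```python
import numpy as np

def mwwp_line_search(f, grad, x, d, beta=0.2, beta1=0.15, theta=0.75, max_iter=50):
    """Bracketing/bisection sketch of the MWWP conditions:
    (10) f(x+a d) <= f(x) + beta*a*g^T d + a*min(-beta1*g^T d, beta*a*||d||^2/2)
    (11) g(x+a d)^T d >= theta*g^T d + min(-beta1*g^T d, beta*a*||d||^2)."""
    fx, g = f(x), grad(x)
    gtd = g @ d            # must be negative (descent direction)
    dd = d @ d
    lo, hi, a = 0.0, np.inf, 1.0
    for _ in range(max_iter):
        sufficient = f(x + a * d) <= fx + beta * a * gtd \
            + a * min(-beta1 * gtd, beta * a * dd / 2.0)
        if not sufficient:
            hi = a                                  # step too long: shrink bracket
        elif grad(x + a * d) @ d < theta * gtd + min(-beta1 * gtd, beta * a * dd):
            lo = a                                  # step too short: grow bracket
        else:
            return a                                # both (10) and (11) hold
        a = 2.0 * a if np.isinf(hi) else 0.5 * (lo + hi)
    return a
```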

The remainder of this article is organized as follows. In Section 2, using update (8) and the MWWP line search technique, an algorithm for solving optimization problems is constructed. In Section 3, the convergence of the modified BFGS method is studied. In Section 4, the numerical results of the algorithm are reported. In the last section, the conclusion is presented.

2. Algorithm

The corresponding modified BFGS algorithm is called Algorithm 1 and can be presented as follows.

Algorithm 1.

Step 1: choose an initial point x_0 \in \mathbb{R}^n, \varepsilon \in (0, 1), \beta \in (0, 1/2), \beta_1 \in (0, \beta), and \theta \in (\beta, 1). Given an initial n \times n symmetric and positive definite matrix \tilde{B}_0, set k = 0.

Step 2: if \|g_k\| \le \varepsilon, stop. Otherwise, take the next step.

Step 3: solve B˜kdk+gk=0 to obtain dk.

Step 4: the step length αk is determined by (10) and (11).

Step 5: set the new iteration point x_{k+1} = x_k + \alpha_k d_k. Update \tilde{B}_k by (8), set k = k + 1, and go to Step 2.
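The steps above can be sketched in a few lines of Python. This is our illustrative reconstruction, not the authors' MATLAB code; for brevity, Step 4 here uses plain backtracking on condition (10) rather than a full MWWP bracketing search, and a curvature safeguard stands in for the positive-definiteness guarantee of Lemma 1:

```python
import numpy as np

def mbfgs(f, grad, x0, eps=1e-5, beta=0.2, beta1=0.15, max_iter=500):
    """Sketch of Algorithm 1 with update (8)."""
    x = np.asarray(x0, dtype=float)
    B = np.eye(x.size)                       # Step 1: B0 symmetric positive definite
    for _ in range(max_iter):
        g = grad(x)
        if np.linalg.norm(g) <= eps:         # Step 2: stopping test
            break
        d = np.linalg.solve(B, -g)           # Step 3: solve B d = -g
        a, gtd, dd = 1.0, g @ d, d @ d
        # Step 4 (simplified): backtrack until condition (10) holds
        while (f(x + a * d) > f(x) + beta * a * gtd
               + a * min(-beta1 * gtd, beta * a * dd / 2.0)) and a > 1e-12:
            a *= 0.5
        x_new = x + a * d                    # Step 5: new iterate
        s, y = x_new - x, grad(x_new) - g
        Abar = (6.0 * (f(x) - f(x_new)) + 3.0 * (grad(x_new) + g) @ s) / (s @ s)
        ystar = y + max(Abar, 0.0) * s       # y*_k of update (8)
        if s @ ystar > 1e-12 * (s @ s):      # safeguard: keep B positive definite
            Bs = B @ s
            B = B - np.outer(Bs, Bs) / (s @ Bs) \
                + np.outer(ystar, ystar) / (s @ ystar)
        x = x_new
    return x
```

On a convex quadratic, the iterates converge to the unique minimizer, as the theory in Section 3 predicts.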

Remark 1.

The step length \alpha_k generated by the proposed new line search technique yields good numerical performance, and a proof that the MWWP line search is well defined has been given in [15].

3. Convergence Analysis

The global convergence analysis of the improved BFGS method will be introduced in this section, and the following assumptions are needed.

Assumption 1.

(i) The level set F_0 = \{x \mid f(x) \le f(x_0)\} is bounded.

(ii) The objective function f(x) is convex on F_0.

(iii) f(x) is twice continuously differentiable and bounded below, with a Lipschitz continuous gradient g(x); that is, there exists a positive constant M such that

(12) \|g(x) - g(y)\| \le M \|x - y\|, \quad \forall x, y \in \mathbb{R}^n.

Next, we establish the global convergence. The positive definiteness of \tilde{B}_k is presented in the following lemma.

Lemma 1.

Let the sequence B˜k be generated by (8); then, the matrix B˜k is positive definite for all k.

Proof.

Induction is used to prove the positive definiteness of \tilde{B}_k. For k = 0, the matrix \tilde{B}_0 is positive definite by choice. Suppose \tilde{B}_k is positive definite for some k. By \tilde{B}_k d_k + g_k = 0 and (11), we have

(13) s_k^T y_k^* = s_k^T y_k + s_k^T A_k s_k \ge \theta g_k^T s_k + \alpha_k \min\{-\beta_1 g_k^T d_k, \beta \alpha_k \|d_k\|^2\} - s_k^T g_k + s_k^T A_k s_k \ge -(1 - \theta) g_k^T s_k > 0,

where the last inequality holds since -g_k^T d_k = d_k^T \tilde{B}_k d_k > 0, \min\{-\beta_1 g_k^T d_k, \beta \alpha_k \|d_k\|^2\} \ge 0, and s_k^T A_k s_k \ge 0. Therefore, the matrix \tilde{B}_{k+1} is positive definite. The proof is completed.

Lemma 2.

Let Assumption 1 hold, and let the sequence \{x_k, \alpha_k, d_k, g_k\} be generated by Algorithm 1. Then,

(14) \sum_{k=0}^{\infty} \frac{(g_k^T d_k)^2}{\|d_k\|^2} < \infty.

Proof.

By the MWWP line search condition (11) and the Lipschitz condition (12), we obtain

(15) M \alpha_k \|d_k\|^2 \ge (g_{k+1} - g_k)^T d_k \ge \theta g_k^T d_k + \min\{-\beta_1 g_k^T d_k, \beta \alpha_k \|d_k\|^2\} - g_k^T d_k \ge -(1 - \theta) g_k^T d_k.

Therefore, the following bound holds:

(16) \alpha_k \ge \frac{(1 - \theta)(-g_k^T d_k)}{M \|d_k\|^2}.

By (10) and Assumption 1 (iii), we have

(17) f(x_k) - f(x_k + \alpha_k d_k) \ge -\beta \alpha_k g_k^T d_k - \alpha_k \min\{-\beta_1 g_k^T d_k, \beta \alpha_k \|d_k\|^2/2\} \ge -(\beta - \beta_1) \alpha_k g_k^T d_k.

Summing these inequalities from k = 0 to \infty and using the fact that f is bounded below, we obtain

(18) \sum_{k=0}^{\infty} \alpha_k (-g_k^T d_k) < \infty.

Combining the above inequality with (16), we obtain (14). Therefore, Lemma 2 has been proven.

Remark 2.

It is obvious that \min\{-\beta_1 g_k^T d_k, \beta \alpha_k \|d_k\|^2\} can take one of two values, so the MWWP line search has two cases. In this paper, we discuss the case \min\{-\beta_1 g_k^T d_k, \beta \alpha_k \|d_k\|^2\} = \beta \alpha_k \|d_k\|^2.

Lemma 3.

Let \min\{-\beta_1 g_k^T d_k, \beta \alpha_k \|d_k\|^2\} = \beta \alpha_k \|d_k\|^2 and let Assumption 1 hold. Then, there exist positive constants l_1 and l_2 such that

(19) \|\tilde{B}_k s_k\| \le l_1 \|s_k\|,

(20) s_k^T \tilde{B}_k s_k \ge l_2 \|s_k\|^2,

hold for at least t/2 values of k \in \{1, 2, \ldots, t\} for any positive integer t.

Proof.

By A_k = \max\{\bar{A}_k, 0\}, if \bar{A}_k \le 0, then y_k^* = y_k, and Lemma 3 holds (see [18]).

If \bar{A}_k > 0, then y_k^* = y_k + \bar{A}_k s_k. The argument is similar to that of Yuan and Wei [18]. By the convexity of the objective function f(x), we obtain

(21) f_k - f_{k+1} \ge -g_{k+1}^T s_k, \qquad f_{k+1} - f_k \ge g_k^T s_k.

The above two inequalities and the definition of \bar{A}_k indicate that

(22) \bar{A}_k \le \frac{3 s_k^T y_k}{\|s_k\|^2}.

Then, we obtain

(23) \|y_k^*\| = \|y_k + \bar{A}_k s_k\| \le \|y_k\| + \bar{A}_k \|s_k\| \le 4 \|y_k\|.

Therefore, by the above analysis, it follows that

(24) \|y_k^*\|^2 \le 16 \|y_k\|^2 = 16 \|g_{k+1} - g_k\|^2 \le 16 M^2 \|s_k\|^2.

By the definition of y_k^*, it follows that

(25) s_k^T y_k^* = s_k^T (y_k + A_k s_k) \ge -(1 - \theta) g_k^T s_k + \alpha_k \min\{-\beta_1 g_k^T d_k, \beta \alpha_k \|d_k\|^2\} + s_k^T A_k s_k \ge \beta \|s_k\|^2,

where the last inequality uses \min\{-\beta_1 g_k^T d_k, \beta \alpha_k \|d_k\|^2\} = \beta \alpha_k \|d_k\|^2 together with \alpha_k^2 \|d_k\|^2 = \|s_k\|^2. Then, we have

(26) \frac{s_k^T y_k^*}{\|s_k\|^2} \ge \beta, \qquad \frac{\|y_k^*\|^2}{s_k^T y_k^*} \le \frac{16 M^2}{\beta}.

The proof of Theorem 2.1 of [18] then implies that Lemma 3 holds.

Based on the above conclusions, the global convergence is analysed in the following theorem.

Theorem 1.

If the conditions of Lemma 3 hold, then

(27) \liminf_{k \to \infty} \|g_k\| = 0.

Proof.

By Lemma 2, we obtain

(28) \lim_{k \to \infty} \frac{(g_k^T d_k)^2}{\|d_k\|^2} = 0.

Since \tilde{B}_k d_k + g_k = 0, this is equivalent to

(29) \lim_{k \to \infty} \frac{(d_k^T \tilde{B}_k d_k)^2}{\|d_k\|^2} = 0.

Combining (19) with (20), we obtain

(30) 0 \le l_2^2 \|d_k\|^2 \le \frac{(d_k^T \tilde{B}_k d_k)^2}{\|d_k\|^2}.

Thus, d_k \to 0 as k \to \infty along the indices of Lemma 3, and from g_k = -\tilde{B}_k d_k together with (19) and (20), we obtain

(31) l_2 \|d_k\| \le \|g_k\| \le l_1 \|d_k\|.

Therefore, (27) holds. The proof is complete.

4. Numerical Results

In this section, we study the numerical performance of the MBFGS-MWWP algorithm established in Section 2. To verify the algorithm's effectiveness, we divide the experiments into two parts: we first compare our algorithm with the standard BFGS method under the weak Wolfe–Powell line search technique (BFGS-WWP) on the 74 academic problems listed in Table 1, with the dimension varying from 300 to 2700, and then apply our algorithm to the Muskingum engineering model.

The test problems.

N0 | Test problem
1 | Extended Freudenstein and Roth function
2 | Extended trigonometric function
3 | Extended Rosenbrock function
4 | Extended White and Holst function
5 | Extended Beale function
6 | Extended penalty function
8 | Raydan 1 function
9 | Raydan 2 function
10 | Diagonal 1 function
11 | Diagonal 2 function
12 | Diagonal 3 function
13 | Hager function
14 | Generalized Tridiagonal-1 function
15 | Extended Tridiagonal-1 function
16 | Extended three exponential terms function
17 | Generalized Tridiagonal-2 function
18 | Diagonal 4 function
19 | Diagonal 5 function
20 | Extended Himmelblau function
21 | Generalized PSC1 function
22 | Extended PSC1 function
23 | Extended Powell function
24 | Extended block diagonal BD1 function
25 | Extended Maratos function
26 | Extended Cliff function
28 | Extended Wood function
29 | Extended Hiebert function
34 | Extended EP1 function
35 | Extended Tridiagonal-2 function
36 | BDQRTIC function (CUTE)
37 | TRIDIA function (CUTE)
39 | NONDIA function (CUTE)
40 | NONDQUAR function (CUTE)
41 | DQDRTIC function (CUTE)
42 | EG2 function (CUTE)
43 | DIXMAANA function (CUTE)
44 | DIXMAANB function (CUTE)
45 | DIXMAANC function (CUTE)
46 | DIXMAANE function (CUTE)
48 | Broyden Tridiagonal function
51 | EDENSCH function (CUTE)
52 | VARDIM function (CUTE)
53 | STAIRCASE S1 function
54 | LIARWHD function (CUTE)
55 | DIAGONAL 6 function
56 | DIXON3DQ function (CUTE)
57 | DIXMAANF function (CUTE)
58 | DIXMAANG function (CUTE)
59 | DIXMAANH function (CUTE)
60 | DIXMAANI function (CUTE)
61 | DIXMAANJ function (CUTE)
62 | DIXMAANK function (CUTE)
63 | DIXMAANL function (CUTE)
64 | DIXMAAND function (CUTE)
65 | ENGVAL1 function (CUTE)
66 | FLETCHCR function (CUTE)
67 | COSINE function (CUTE)
68 | Extended DENSCHNB function (CUTE)
69 | Extended DENSCHNF function (CUTE)
71 | BIGGSB1 function (CUTE)
4.1. Unconstrained Optimisation Problems

In this section, we compare Algorithm 1 with the BFGS-WWP algorithm on the 74 academic problems listed in Table 1. The codes are written in MATLAB R2014a and run on a PC with an Intel(R) Core(TM) i5-4210U CPU @ 1.70 GHz, 8.00 GB of RAM, and the Windows 10 operating system. The parameters are chosen as \beta = 0.2, \beta_1 = 0.15, \theta = 0.75, and \varepsilon = 10^{-5}. The numerical results and comparison are shown in Tables 2–6. The columns in Tables 2–6 have the following meaning:

N0: the index of the tested problem

Dim: the dimension of the tested problem

NI: the number of iterations consumed

NFG: the total number of function and gradient evaluations

CPU time: the time consumed by the corresponding algorithm, in seconds

The numerical results for problems 1–17.

N0 | Dim | MBFGS-MWWP: NI, NFG, CPU time | BFGS-WWP: NI, NFG, CPU time
13009260.12519261.0625
19007192.40637192.4375
1270072134.656172134.0131
23002795984.65632545474.0052
29006601381186.23127001489203.3942
2270071630.469171630.5030
330062814718.734467614729.2969
390010002429286.140610002598275.7656
32700100030865405.7969100029995390.2344
430073316369.9219733161010.9375
490010002371278.406310002421285.8281
42700100021105408.3906100021165375.0313
530013390.187515450.2031
590015464.031315463.8125
52700154674.1875154673.2188
63008330.09388330.1406
690013493.484413493.0625
6270041710.500141710.3906
73001984012.56251984012.5781
7900409823112.9531409823111.6563
7270088317684735.796988317684768.6094
830022480.406322480.3281
890031668.359431668.5156
8270057118294.218857118292.7812
93006160.062513280.1563
99006161.406313283.2656
9270061624.7813143063.6563
10300290.0313290
10900290.0469290.0469
102700290.1563290.1563
1130046940.609446940.5781
119006713618.03126613419.0021
11270082200422.890697196533.4688
1230038780.531338780.5469
12900469412.7031469412.6253
12270066134340.437566134343.1406
1330014340.156214340.2031
1390015423.968815424.0156
13270049153257.421949151250.4063
1430011250.281311250.3438
1490010232.859410232.5625
14270061525.109461523.5313
1530013320.328113320.3906
1590014343.812514343.9219
152700153772.0156153775.4844
163007180.06257180.0938
169006161.32816161.2521
16270061824.453161623.8906
17300981991.2656981991.3281
179008818223.98488718023.3281
17270082170415.312582171405.2813

The numerical results for problems 18–34.

N0 | Dim | MBFGS-MWWP: NI, NFG, CPU time | BFGS-WWP: NI, NFG, CPU time
183003100.03123100.0312
189003100.54683100.4844
1827003109.34383109.0032
19300390.03123100.0312
19900390.54683100.5312
1927003910.17183109.6254
2030011320.18758270.0625
2090013552.468710291.8281
202700164865.7812134045.4531
21300471220.5781501140.7187
219005814116.21015012513.2567
21270058149295.171842108207.6562
223006210.09377300.1093
229006211.39067301.5937
22270062125.156273029.0781
233001143931.85931133781.5937
2390015252342.890619062852.0468
2327002458541304.25212096931105.5625
24300943381.1562311610.2813
249009336122.9531171271.9375
2427002412974.76562513679.4375
2530068618238.781273419089.7031
2590010003200280.968710002465272.8751
25270096928675167.4062100027915395.4062
263004150.03124150.0312
269004150.60934150.6406
26270041510.468141510.1093
273007180.09377180.1563
2790013303.312513303.7031
2727002860139.20312860149.8593
28300882551.3593842451.4843
2890010328828.218710229527.3125
282700131328694.8125128330669.3593
293004150.03124150.0312
299004150.65624150.4218
29270041510.39064156.2968
303002074192.75212074192.8281
30900444893122.4375444893120.4687
30270086317284608.984386317284734.1093
3130011310.140611310.1562
3190013493.359313493.8281
312700165879.8751165891.4531
3230013440.156213450.2031
3290012333.046812333.2521
32270072223367.953183235435.4687
333004110.03124110.0312
339004110.71874110.6562
33270041112.515641112.7812
34300390390.0312
34900390.5625390.5121
34270041014.718741014.2031

The numerical results for problems 35–51.

N0 | Dim | MBFGS-MWWP: NI, NFG, CPU time | BFGS-WWP: NI, NFG, CPU time
353005130.06255130.0625
359005131.09375131.0781
35270051319.593751318.1093
36300451180.8751451160.9375
369004712213.59375514515.4375
36270064171324.656264167316.7968
373002515073.56252515073.7031
379005771159160.87515771159176.6251
37270086917414684.328186917414743.0937
383005150.06255150.0625
389005151.04685151.1718
38270051519.734351518.9218
3930019720.252219670.3281
3990023876.265621815.2521
39270034118173.112335120181.7812
4030048511627.218752010557.4375
409005061162143.34377311468198.0625
40270052812702713.421882916644167.6718
4130012340.171812340.1718
4190012342.906212343.0468
412700123456.5231123453.7968
4230018600.218716550.1875
429004210.07814210.0468
4227004210.29684210.3125
4330015380.265615380.2343
4390018445.406218444.6875
432700194694.0312204892.6562
4430029710.437527700.3752
449004712013.09378320422.3593
44270087203443.703199224494.0937
45300561310.7812631460.9531
459007517820.82819020925.4843
45270081190414.4218124265615.4062
46300701570.9375901961.67187
4690012127634.671811725832.5468
462700174401916.87521994161049.3437
473001102252.29681102252.3752
4790021944673.843721944668.9843
4727003467021940.34373467021948.3437
48300901831.6252901831.7031
489007515720.56257515720.0468
482700144291741.3751144291716.9375
493001973992.67181973993.0781
49900422849117.1562422849116.6562
49270088217664726.187588217664725.7031
503001853754.17181853754.0312
50900395795116.1562395795112.3906
50270085917204628.046885917204585.6251
5130024580.312524580.3125
5190021565.703130788.2523
5127003380163.78122772127.3593

The numerical results for problems 52–68.

N0 | Dim | MBFGS-MWWP: NI, NFG, CPU time | BFGS-WWP: NI, NFG, CPU time
5230031303130.0468
529003130.34373130.2968
5227003135.64063135.4062
533003146344.10934048125.4063
539009271860261.953110002004270.2188
532700100020045316.6406100020025250.6406
5430025780.328124740.2969
5490028887.328128857.1406
54270033107166.218137116183.0625
5530010240.140620420.2812
5590011262.906221445.2187
552700112650.96872348105.5123
563003116263.78123547104.6252
569009211846252.859310002002271.5312
562700100020025193.1251100020004871.1241
57300771991.2187962321.4531
579007291.64064014011.0781
57270073030.4531157565.8594
58300671440.8751731541.2656
5890010321429.843110321428.1875
5827002888124.14061964071044.9688
593003150.03123150.0313
5990011582.812511583.2656
59270024111107.312524108109.1093
60300711660.9687791731.1875
6090013329542.203118238450.2031
6027001904331024.2812182388947.9062
61300681720.937526870.4375
619007291.75219451.7656
61270073031.2812199082.3906
623003598775.71813919086.3125
629006251452182.96876181415168.5625
62270062214733372.6406156520809.6251
633001573712.51561663912.6562
6390023657167.640626759274.8906
63270054112162961.687549311142569.2656
643001984823.21312024853.2187
6490027662982.1562211125.4218
6427003129.93753129.1406
65300511141.0312481080.9062
659005112215.42184210911.6562
65270055126290.484477179379.0781
66300527118710.2968534121110.1875
6690010002415294.625110002442285.8281
662700100024945153.8281100025444975.5468
673006210.03126210
6790010040821.328112330.1562
67270010290.906213590.9843
6830011260.187511260.2031
6890011263.218712283.5156
682700122860.4375122852.7656

The numerical results for problems 69–74.

N0 | Dim | MBFGS-MWWP: NI, NFG, CPU time | BFGS-WWP: NI, NFG, CPU time
69300431420.687526810.3752
699006520017.890628897.5938
69270073222399.734334107170.4375
703003588479.2031501510.9218
709004415913.81257125019.2812
7027002637641469.703145152207.3438
713001633322.03121633342.0938
71900483975147.3593488986130.2656
712700100020005118.8751100020004951.7343
7230035800.828143940.7968
729005812818.26565712618.1875
72270096208535.140698210531.1562
733001332712.28121332711.7031
7390022545563.859322545560.2521
7327004719472572.96814438912325.0156
74300431120.7187431120.5625
7490018437357.359318437350.5781
7427002625291460.04682625291372.9375

For an intuitive comparison, we adopt the performance profile technique of Dolan and Moré [19] to show the performance of the different algorithms. Under this strategy, the higher a curve lies in the figure, the better the corresponding numerical results. Figures 1–3 show the NI, NFG, and CPU time performance profiles, respectively, of the new algorithm and the standard BFGS method. From the results shown in the figures, the NI, NFG, and CPU time of the algorithm constructed in this paper are generally better than those of the standard BFGS algorithm. Therefore, our algorithm is effective and reliable from this perspective.
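A performance profile of the kind plotted in Figures 1–3 can be computed in a few lines. This sketch follows the Dolan–Moré definition; the function name and the tau grid are our own choices:

```python
import numpy as np

def performance_profile(T, taus):
    """Dolan-More performance profile. T[i, j] is the cost of solver j on
    problem i (e.g. NI, NFG, or CPU time). Returns, for each solver, the
    fraction of problems solved within a factor tau of the best solver,
    as an array of shape (n_solvers, len(taus))."""
    ratios = T / T.min(axis=1, keepdims=True)        # performance ratios r_ij
    return np.array([[np.mean(ratios[:, j] <= tau) for j in range(T.shape[1])]
                     for tau in taus]).T
```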

Performance profiles of these methods (NI).

Performance profiles of these methods (NFG).

Performance profiles of these methods (CPU time).

4.2. Muskingum Model

In this section, the main task is to apply Algorithm 1 to parameter estimation for the Muskingum model [20], which is defined as follows:

(32) \min f(x_1, x_2, x_3) = \sum_{i=1}^{n-1} \left[ \left(1 - \frac{\Delta t}{6}\right) x_1 \left(x_2 I_{i+1} + (1 - x_2) Q_{i+1}\right)^{x_3} - \left(1 - \frac{\Delta t}{6}\right) x_1 \left(x_2 I_i + (1 - x_2) Q_i\right)^{x_3} - \frac{\Delta t}{2} (I_i - Q_i) + \frac{\Delta t}{2} \left(1 - \frac{\Delta t}{3}\right) (I_{i+1} - Q_{i+1}) \right]^2,

where n is the total number of time periods, I_i is the observed inflow discharge, Q_i is the observed outflow discharge, x_1 denotes the storage time constant, x_2 denotes the weight coefficient, x_3 denotes an extra parameter, and \Delta t is the time step at time t_i (i = 1, 2, \ldots, n). The observation data in the experiment come from the flood runoff process between Chenggouwan and Linqing of Nanyunhe in the Haihe Basin, Tianjin, China, and the initial point is x_0 = (0, 1, 1)^T. In addition, the time step \Delta t = 12 h is selected, and the detailed values of I_i and Q_i for the years 1960, 1961, and 1964 are given in [20].
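As a sanity check, objective (32) can be coded directly. This is our reconstruction of the formula (variable names ours), and the sample data in the usage below are made-up positive discharges, not the Chenggouwan–Linqing observations:

```python
import numpy as np

def muskingum_obj(x, I, Q, dt=12.0):
    """Objective (32) of the nonlinear Muskingum model, as reconstructed above.
    x = (x1, x2, x3); I and Q are the observed inflow/outflow series."""
    x1, x2, x3 = x
    W  = x2 * I[:-1] + (1.0 - x2) * Q[:-1]      # weighted discharge at t_i
    W1 = x2 * I[1:]  + (1.0 - x2) * Q[1:]       # weighted discharge at t_{i+1}
    r = ((1.0 - dt / 6.0) * x1 * W1 ** x3
         - (1.0 - dt / 6.0) * x1 * W ** x3
         - dt / 2.0 * (I[:-1] - Q[:-1])
         + dt / 2.0 * (1.0 - dt / 3.0) * (I[1:] - Q[1:]))
    return np.sum(r ** 2)
```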

Figures 4–6 and Table 7 imply the following conclusions: (1) similar to the BFGS method and the HIWO method, the MBFGS method, combined with the Muskingum model, obtains interesting experimental results; (2) the final points (x_1, x_2, and x_3) of the MBFGS method are competitive with those of other similar methods; (3) given the differences among the final points of these three methods, the Muskingum model may have several approximate optimal points.

Performance of Algorithm 1 in 1960.

Performance of Algorithm 1 in 1961.

Performance of Algorithm 1 in 1964.

Results of the three algorithms.

Algorithm | x1 | x2 | x3
BFGS | 10.8156 | 0.9826 | 1.0219
HIWO | 13.2813 | 0.8001 | 0.9933
MBFGS | 11.1832 | 0.9999 | 0.9996
5. Conclusion

In this paper, we study an improved BFGS method with the modified line search technique [14, 15] and mainly discuss its convergence for convex functions. The numerical results show that the proposed algorithm has better problem-solving capability than the standard BFGS algorithm based on the WWP line search. For further work, several points are worth considering: (i) whether the improved BFGS method also possesses convergence properties under other line search techniques; (ii) the combination of the line search conditions (10) and (11) with other quasi-Newton methods; (iii) similar applications to nonlinear conjugate gradient algorithms, especially the PRP method.

Data Availability

The data used to support the findings of this study are available in tables in this paper and also can be obtained from the corresponding author upon request.

Conflicts of Interest

The authors declare that there are no conflicts of interest regarding the publication of this paper.

Acknowledgments

This work was supported by the Basic Ability Promotion Project of Guangxi Young and Middle-Aged Teachers (No. 2020KY30018).

References

[1] C. G. Broyden, J. E. Dennis, and J. J. Moré, "On the local and superlinear convergence of quasi-Newton methods," IMA Journal of Applied Mathematics, vol. 12, no. 3, pp. 223–245, 1973.
[2] R. H. Byrd and J. Nocedal, "A tool for the analysis of quasi-Newton methods with application to unconstrained minimization," SIAM Journal on Numerical Analysis, vol. 26, no. 3, pp. 727–739, 1989.
[3] J. E. Dennis and J. J. Moré, "Quasi-Newton methods, motivation and theory," SIAM Review, vol. 19, no. 1, pp. 46–89, 1977.
[4] J. E. Dennis and J. J. Moré, "A characterization of superlinear convergence and its application to quasi-Newton methods," Mathematics of Computation, vol. 28, no. 126, pp. 549–560, 1974.
[5] Y.-H. Dai, "Convergence properties of the BFGS algorithm," SIAM Journal on Optimization, vol. 13, no. 3, pp. 693–701, 2002.
[6] W. F. Mascarenhas, "The BFGS method with exact line searches fails for non-convex objective functions," Mathematical Programming, vol. 99, no. 1, pp. 49–61, 2004.
[7] D. Li and M. Fukushima, "A modified BFGS method and its global convergence in nonconvex minimization," Journal of Computational and Applied Mathematics, vol. 129, no. 1-2, pp. 15–35, 2001.
[8] Z. Wei, G. Yu, G. Yuan, and Z. Lian, "The superlinear convergence of a modified BFGS-type method for unconstrained optimization," Computational Optimization and Applications, vol. 29, no. 3, pp. 315–332, 2004.
[9] J. Z. Zhang, N. Y. Deng, and L. H. Chen, "New quasi-Newton equation and related methods for unconstrained optimization," Journal of Optimization Theory and Applications, vol. 102, no. 1, pp. 147–167, 1999.
[10] D.-H. Li and M. Fukushima, "On the global convergence of the BFGS method for nonconvex unconstrained optimization problems," SIAM Journal on Optimization, vol. 11, no. 4, pp. 1054–1064, 2001.
[11] L. Zhang and H. Tang, "A hybrid MBFGS and CBFGS method for nonconvex minimization with a global complexity bound," Pacific Journal of Optimization, vol. 14, no. 4, pp. 693–702, 2018.
[12] W. Zhou, "A modified BFGS type quasi-Newton method with line search for symmetric nonlinear equations problems," Journal of Computational and Applied Mathematics, vol. 367, article 112454, 2020.
[13] Z. Wei, G. Li, and L. Qi, "New quasi-Newton methods for unconstrained optimization problems," Applied Mathematics and Computation, vol. 175, no. 2, pp. 1156–1188, 2006.
[14] X. Li, B. Wang, and W. Hu, "A modified nonmonotone BFGS algorithm for unconstrained optimization," Journal of Inequalities and Applications, vol. 2017, article 183, 2017.
[15] G. Yuan, Z. Wei, and X. Lu, "Global convergence of BFGS and PRP methods under a modified weak Wolfe–Powell line search," Applied Mathematical Modelling, vol. 47, pp. 811–825, 2017.
[16] G. Yuan, Z. Sheng, B. Wang, W. Hu, and C. Li, "The global convergence of a modified BFGS method for nonconvex functions," Journal of Computational and Applied Mathematics, vol. 327, pp. 274–294, 2018.
[17] G. Yuan, P. Li, and J. Lu, "The global convergence of the BFGS method with a modified WWP line search for nonconvex functions," Numerical Algorithms, in press.
[18] G. Yuan and Z. Wei, "Convergence analysis of a modified BFGS method on convex minimizations," Computational Optimization and Applications, vol. 47, no. 2, pp. 237–255, 2010.
[19] E. D. Dolan and J. J. Moré, "Benchmarking optimization software with performance profiles," Mathematical Programming, vol. 91, no. 2, pp. 201–213, 2002.
[20] A. Ouyang, L.-B. Liu, Z. Sheng, and F. Wu, "A class of parameter estimation methods for nonlinear Muskingum model using hybrid invasive weed optimization algorithm," Mathematical Problems in Engineering, vol. 2015, article 573894, 2015.
[21] A. Ouyang, Z. Tang, K. Li, A. Sallam, and E. Sha, "Estimating parameters of Muskingum model using an adaptive hybrid PSO algorithm," International Journal of Pattern Recognition and Artificial Intelligence, vol. 28, no. 1, pp. 1–29, 2014.
[22] Z. W. Geem, "Parameter estimation for the nonlinear Muskingum model using the BFGS technique," Journal of Irrigation and Drainage Engineering, vol. 132, no. 5, pp. 474–478, 2006.