We equivalently transform the sum of linear ratios programming problem into a bilinear programming problem. Then, using the linear characteristics of the convex and concave envelopes of the two-variable product function, we construct a linear relaxation of the bilinear programming problem, which determines a lower bound of the optimal value of the original problem. On this basis, a branch and bound algorithm for solving the sum of linear ratios programming problem is put forward, and the convergence of the algorithm is proved. Numerical experiments are reported to show the effectiveness of the proposed algorithm.
1. Introduction
We consider the sum of linear ratios programming problem in the following form:
\[
(\mathrm{GFP}):\quad
\begin{cases}
\min\ f(x)=\displaystyle\sum_{i=1}^{p}f_i(x)=\sum_{i=1}^{p}\dfrac{n_i(x)}{d_i(x)},\\[4pt]
\text{s.t. } Ax\le b,\ x\ge 0,
\end{cases}\tag{1}
\]
where the feasible domain D≜{x∈Rn∣Ax≤b, x≥0} is n-dimensional, nonempty, and bounded, A∈Rm×n, b∈Rm. Assume that ni(x)=ciTx+di≥0 and di(x)=eiTx+ri>0 on some rectangle X containing D, where ci, ei∈Rn, di, ri∈R, i=1,2,…,p, and 2≤p≪n.
Fractional programming is an important branch of nonlinear optimization and has attracted researchers' attention for several decades. The sum of linear ratios problem is a special class of fractional programming problem with wide applications, for example in investment, transportation scheduling, and economic benefit analysis [1–3]. From a research point of view, sum of ratios problems challenge theoretical analysis and computation because they possess multiple local optima that are not globally optimal, which makes finding a global solution difficult.
At present there exist a number of algorithms for globally solving sum of linear ratios problems. When p=2, Konno et al. [4] constructed a parametric simplex algorithm that can solve large-scale problems; when p=3, Konno and Abe [5] developed a parametric simplex algorithm and constructed an effective heuristic algorithm; for p>3, the problem considered in [6] is a sum of linear ratios problem with coefficients: using an equivalent transformation and a linearization technique, the original nonconvex program is reduced to a series of linear programming problems. Yanjun et al. [7] apply the linearization technique twice, exploiting properties of the exponential and logarithmic functions, to obtain a linear relaxation of the original problem. Benson [8] put forward a branch and bound algorithm that solves an equivalent concave minimization problem. Jiao and Feng [9] present a new pruning technique. In [10], the numerators and denominators of the ratios are not necessarily positive. In this paper, we present a new branch and bound algorithm for solving the sum of linear ratios problem and prove its convergence; numerical experiments are also reported.
This paper is organized as follows. In Section 2, we show how to convert the problem (GFP) into an equivalent problem (EP) by a transformation technique. In Section 3, the linear relaxation programming problem of (EP) is constructed. The branching process for the rectangle is given in Section 4. In Section 5, the branch and bound algorithm for globally solving (EP) is presented and its convergence is proved. In Section 6, some numerical results are given to show the effectiveness of the proposed algorithm. Finally, the conclusion is given.
2. Equivalent Transformation
Because the set D is nonempty and bounded, we can construct a rectangle X=[l,u] containing the feasible region of the problem (GFP), where l=(l1,l2,…,ln)T, u=(u1,u2,…,un)T, and lj and uj are the optimal values of the linear programming problems (2) and (3), respectively.
\[
\min\ x_j,\quad \text{s.t. } Ax\le b,\ x\ge 0,\tag{2}
\]
\[
\max\ x_j,\quad \text{s.t. } Ax\le b,\ x\ge 0.\tag{3}
\]
Firstly, we solve the following 2p linear programming problems:
\[
\min\ d_i(x),\ \ \text{s.t. } x\in D;\qquad
\max\ d_i(x),\ \ \text{s.t. } x\in D,\qquad i=1,2,\dots,p.\tag{4}
\]
Let xi1 and xi2 (i=1,2,…,p) denote the optimal solutions of the minimization and maximization problems in (4), respectively, and let l̄i and ūi (i=1,2,…,p) denote the corresponding optimal values. Obviously, xi1 and xi2 are feasible to (GFP). Set W=W∪{xi1, xi2 : i=1,2,…,p}, where W represents the set of currently known feasible solutions of the problem (GFP). Set
\[
H^0=\{y\in\mathbb{R}^p\mid l_i^0\le y_i\le u_i^0,\ i=1,2,\dots,p\},\qquad y=(y_1,y_2,\dots,y_p)^T,\tag{5}
\]
where li0=1/ui¯, ui0=1/li¯. Then the problem (GFP) is converted into an equivalent nonconvex programming problem:
\[
\mathrm{EP}(H^0):\quad
\begin{cases}
\min\ \varphi_0(x,y)=\displaystyle\sum_{i=1}^{p}y_i n_i(x)=\sum_{i=1}^{p}y_i\Bigl(\sum_{j=1}^{n}c_{ij}x_j+d_i\Bigr),\\[4pt]
\text{s.t. } \varphi_i(x,y)=y_i d_i(x)=y_i\Bigl(\displaystyle\sum_{j=1}^{n}e_{ij}x_j+r_i\Bigr)\ge 1,\quad i=1,2,\dots,p,\\[4pt]
\phantom{\text{s.t. }} x\in D\cap X,\ y\in H^0.
\end{cases}\tag{6}
\]
Theorem 1 (see [10]).
If (x*,y*) is a global optimal solution of the problem EP(H0), then x* is a global optimal solution of the problem (GFP), and for every i=1,2,…,p, when ni(x*)≥0, yi*=1/di(x*); conversely, if x* is a global optimal solution of the problem (GFP), then (x*,y*) is a global optimal solution of the problem EP(H0), where yi*=1/di(x*), i=1,2,…,p.
From Theorem 1, the problems (GFP) and EP(H0) are equivalent; their global optimal values are equal. Therefore, in order to solve (GFP), we only need to solve EP(H0) instead.
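To make this preprocessing concrete, the following sketch (our illustration, not the authors' MATLAB code) computes the box X=[l,u] of (2)-(3) and the bounds l̄i, ūi of (4), from which H0 is formed; the array names A, b (feasible region) and E, r (rows eiT and constants ri of the denominators) are assumed inputs.

```python
# A minimal sketch of the preprocessing in Section 2, assuming problem data
# are NumPy arrays: A (m x n), b (m,), E (p x n, rows e_i^T), r (p,).
import numpy as np
from scipy.optimize import linprog

def initial_boxes(A, b, E, r):
    """Return l, u (box X, problems (2)-(3)) and l0, u0 (box H0, via (4))."""
    m, n = A.shape
    p = E.shape[0]
    l, u = np.empty(n), np.empty(n)
    for j in range(n):
        c = np.zeros(n); c[j] = 1.0
        # min / max of x_j over D = {A x <= b, x >= 0}
        l[j] = linprog(c, A_ub=A, b_ub=b, bounds=[(0, None)] * n).fun
        u[j] = -linprog(-c, A_ub=A, b_ub=b, bounds=[(0, None)] * n).fun
    lbar, ubar = np.empty(p), np.empty(p)
    for i in range(p):
        # min / max of d_i(x) = e_i^T x + r_i over D
        lbar[i] = linprog(E[i], A_ub=A, b_ub=b, bounds=[(0, None)] * n).fun + r[i]
        ubar[i] = -linprog(-E[i], A_ub=A, b_ub=b, bounds=[(0, None)] * n).fun + r[i]
    return l, u, 1.0 / ubar, 1.0 / lbar   # l_i^0 = 1/ubar_i, u_i^0 = 1/lbar_i
```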
3. Linear Relaxation Technique
From Section 2, X=[l,u] and H0=[l0,u0] are rectangles; set
\[
\Omega^i=\{(x,y_i)\mid l\le x\le u,\ l_i^0\le y_i\le u_i^0\}=\Omega_1^i\times\Omega_2^i\times\cdots\times\Omega_n^i,\tag{7}
\]
where
\[
\Omega_j^i=\{(x_j,y_i)\mid l_j\le x_j\le u_j,\ l_i^0\le y_i\le u_i^0\},\quad j=1,2,\dots,n.\tag{8}
\]
In Ωji we have xj−lj≥0 and yi−li0≥0, so
\[
(x_j-l_j)(y_i-l_i^0)\ge 0,\quad j=1,2,\dots,n;\tag{9}
\]
expanding this gives xjyi≥li0xj+ljyi−ljli0, j=1,2,…,n.
Similarly, xj−uj≤0 and yi−ui0≤0, so
\[
(x_j-u_j)(y_i-u_i^0)\ge 0,\quad j=1,2,\dots,n;\tag{10}
\]
expanding this gives xjyi≥ui0xj+ujyi−ujui0, j=1,2,…,n. Let
\[
\theta_{ji}^{11}(x_j,y_i)=l_i^0x_j+l_jy_i-l_jl_i^0,\qquad
\theta_{ji}^{12}(x_j,y_i)=u_i^0x_j+u_jy_i-u_ju_i^0,\qquad j=1,2,\dots,n.\tag{11}
\]
Because xjyi≥θji11(xj,yi),xjyi≥θji12(xj,yi),j=1,2,…,n, we have the following result:
\[
x_jy_i\ge\max\{\theta_{ji}^{11}(x_j,y_i),\ \theta_{ji}^{12}(x_j,y_i)\},\quad j=1,2,\dots,n.\tag{12}
\]
Similarly, we have (xj−lj)(yi−ui0)≤0 and (xj−uj)(yi−li0)≤0, j=1,2,…,n; expanding them gives xjyi≤ui0xj+ljyi−ljui0 and xjyi≤li0xj+ujyi−ujli0. Let
\[
\theta_{ji}^{21}(x_j,y_i)=u_i^0x_j+l_jy_i-l_ju_i^0,\qquad
\theta_{ji}^{22}(x_j,y_i)=l_i^0x_j+u_jy_i-u_jl_i^0,\qquad j=1,2,\dots,n.\tag{13}
\]
Consequently,
\[
x_jy_i\le\min\{\theta_{ji}^{21}(x_j,y_i),\ \theta_{ji}^{22}(x_j,y_i)\},\quad j=1,2,\dots,n.\tag{14}
\]
From formulae (12) and (14), the following formula is obtained:
\[
\max\{\theta_{ji}^{11}(x_j,y_i),\ \theta_{ji}^{12}(x_j,y_i)\}\le x_jy_i\le\min\{\theta_{ji}^{21}(x_j,y_i),\ \theta_{ji}^{22}(x_j,y_i)\}.\tag{15}
\]
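As a quick sanity check of the sandwich inequality (15), the following small script (our illustration; the box endpoints are arbitrary) verifies numerically that the product xjyi lies between the two affine underestimators and the two overestimators over a box.

```python
# Numerical check of (15) on a sample box; theta11/theta12 underestimate x*y
# and theta21/theta22 overestimate it, exactly as derived in (9)-(14).
import numpy as np

def envelopes(x, y, lx, ux, ly, uy):
    theta11 = ly * x + lx * y - lx * ly   # from (x - lx)(y - ly) >= 0
    theta12 = uy * x + ux * y - ux * uy   # from (x - ux)(y - uy) >= 0
    theta21 = uy * x + lx * y - lx * uy   # from (x - lx)(y - uy) <= 0
    theta22 = ly * x + ux * y - ux * ly   # from (x - ux)(y - ly) <= 0
    return max(theta11, theta12), min(theta21, theta22)

rng = np.random.default_rng(0)
lx, ux, ly, uy = 1.0, 3.0, 0.2, 0.5       # arbitrary box endpoints
for _ in range(1000):
    x, y = rng.uniform(lx, ux), rng.uniform(ly, uy)
    lo, hi = envelopes(x, y, lx, ux, ly, uy)
    assert lo <= x * y <= hi + 1e-12
```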
In the problem EP(H0), let LB(·) and UB(·) denote, respectively, a lower bounding and an upper bounding function for the indicated bilinear term; then
\[
\mathrm{LB}(c_{ij}x_jy_i)=
\begin{cases}
c_{ij}\,\max\{\theta_{ji}^{11}(x_j,y_i),\ \theta_{ji}^{12}(x_j,y_i)\}, & c_{ij}\ge 0,\\
c_{ij}\,\min\{\theta_{ji}^{21}(x_j,y_i),\ \theta_{ji}^{22}(x_j,y_i)\}, & c_{ij}<0,
\end{cases}
\]
\[
\mathrm{UB}(e_{ij}x_jy_i)=
\begin{cases}
e_{ij}\,\min\{\theta_{ji}^{21}(x_j,y_i),\ \theta_{ji}^{22}(x_j,y_i)\}, & e_{ij}\ge 0,\\
e_{ij}\,\max\{\theta_{ji}^{11}(x_j,y_i),\ \theta_{ji}^{12}(x_j,y_i)\}, & e_{ij}<0.
\end{cases}\tag{16}
\]
From formula (16), we can obtain the linear relaxation programming problem REP(H0) of the problem EP(H0):
\[
\mathrm{REP}(H^0):\quad
\begin{cases}
\min\ \varphi_0^l(x,y)=\displaystyle\sum_{i=1}^{p}\Bigl(\sum_{j=1}^{n}\mathrm{LB}(c_{ij}x_jy_i)+d_iy_i\Bigr),\\[4pt]
\text{s.t. } \varphi_i^u(x,y)=\displaystyle\sum_{j=1}^{n}\mathrm{UB}(e_{ij}x_jy_i)+r_iy_i\ge 1,\quad i=1,2,\dots,p,\\[4pt]
\phantom{\text{s.t. }} x\in D\cap X,\ y\in H^0.
\end{cases}\tag{17}
\]
The optimal value of the problem REP(H0) is a lower bound of the optimal value of the problem EP(H0) in the feasible region D.
Obviously, the problem REP(H0) can equivalently be converted into the following linear programming problem LRP(H0):
\[
\mathrm{LRP}(H^0):\quad
\begin{cases}
\min\ f(x,y,t,s)=\displaystyle\sum_{i=1}^{p}\Bigl(\sum_{j=1}^{n}t_{ji}+d_iy_i\Bigr),\\[4pt]
\text{s.t. } \displaystyle\sum_{j=1}^{n}s_{ji}+r_iy_i\ge 1,\quad i=1,2,\dots,p,\\[4pt]
\phantom{\text{s.t. }} t_{ji}\ge c_{ij}\theta_{ji}^{11}(x_j,y_i),\ \ t_{ji}\ge c_{ij}\theta_{ji}^{12}(x_j,y_i), & c_{ij}\ge 0,\\
\phantom{\text{s.t. }} t_{ji}\ge c_{ij}\theta_{ji}^{21}(x_j,y_i),\ \ t_{ji}\ge c_{ij}\theta_{ji}^{22}(x_j,y_i), & c_{ij}<0,\\
\phantom{\text{s.t. }} s_{ji}\le e_{ij}\theta_{ji}^{21}(x_j,y_i),\ \ s_{ji}\le e_{ij}\theta_{ji}^{22}(x_j,y_i), & e_{ij}\ge 0,\\
\phantom{\text{s.t. }} s_{ji}\le e_{ij}\theta_{ji}^{11}(x_j,y_i),\ \ s_{ji}\le e_{ij}\theta_{ji}^{12}(x_j,y_i), & e_{ij}<0,\\
\phantom{\text{s.t. }} j=1,2,\dots,n,\ i=1,2,\dots,p,\\
\phantom{\text{s.t. }} x\in D\cap X,\ y\in H^0.
\end{cases}\tag{18}
\]
By solving the linear programming problem LRP(H0), we obtain its optimal value, which is a lower bound of the optimal value of the problem EP(H0) over the feasible region D.
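For illustration, the sketch below (our reading of (16)-(18), not the authors' implementation) assembles LRP(H) for scipy.optimize.linprog with the variable vector z=(x,y,t,s); C, d0 hold the numerator data (ni(x)=C[i]·x+d0[i]), E, r the denominator data, (l,u) the box X, and (l0,u0) the rectangle H under consideration.

```python
# Sketch: assemble and solve LRP(H) of (18) as one LP. t_ji and s_ji are free
# auxiliary variables; the sign of c_ij / e_ij picks which envelopes of (16)
# are imposed. Returns (lower bound LB(H), x part of an optimal solution).
import numpy as np
from scipy.optimize import linprog

def solve_lrp(A, b, C, d0, E, r, l, u, l0, u0):
    m, n = A.shape
    p = C.shape[0]
    N = n + p + 2 * p * n                       # z = [x, y, t, s]
    yi  = lambda i: n + i
    tji = lambda i, j: n + p + i * n + j
    sji = lambda i, j: n + p + p * n + i * n + j
    rows, rhs = [], []
    for k in range(m):                          # A x <= b
        row = np.zeros(N); row[:n] = A[k]; rows.append(row); rhs.append(b[k])
    for i in range(p):
        row = np.zeros(N)                       # sum_j s_ji + r_i y_i >= 1
        for j in range(n):
            row[sji(i, j)] = -1.0
        row[yi(i)] = -r[i]; rows.append(row); rhs.append(-1.0)
        for j in range(n):
            # each pair (a, g) encodes an envelope a*x_j + g*y_i - g*a
            pt = [(l0[i], l[j]), (u0[i], u[j])] if C[i, j] >= 0 \
                 else [(u0[i], l[j]), (l0[i], u[j])]
            for a, g in pt:                     # t_ji >= c_ij * envelope
                row = np.zeros(N)
                row[j] = C[i, j] * a; row[yi(i)] = C[i, j] * g
                row[tji(i, j)] = -1.0
                rows.append(row); rhs.append(C[i, j] * g * a)
            ps = [(u0[i], l[j]), (l0[i], u[j])] if E[i, j] >= 0 \
                 else [(l0[i], l[j]), (u0[i], u[j])]
            for a, g in ps:                     # s_ji <= e_ij * envelope
                row = np.zeros(N)
                row[j] = -E[i, j] * a; row[yi(i)] = -E[i, j] * g
                row[sji(i, j)] = 1.0
                rows.append(row); rhs.append(-E[i, j] * g * a)
    c_obj = np.zeros(N)                         # sum_i (sum_j t_ji + d_i y_i)
    for i in range(p):
        c_obj[yi(i)] = d0[i]
        for j in range(n):
            c_obj[tji(i, j)] = 1.0
    bounds = ([(l[j], u[j]) for j in range(n)] +
              [(l0[i], u0[i]) for i in range(p)] +
              [(None, None)] * (2 * p * n))
    res = linprog(c_obj, A_ub=np.array(rows), b_ub=np.array(rhs), bounds=bounds)
    if not res.success:                         # LB(H) = +inf when infeasible
        return np.inf, None
    return res.fun, res.x[:n]
```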
The Determination of the Upper Bound. From the lower-bounding step, solving LRP(H0) yields an optimal solution whose x part we denote by x̄*; let
\[
\bar y_i^{\,*}=\Bigl(\sum_{j=1}^{n}e_{ij}\bar x_j^{\,*}+r_i\Bigr)^{-1}.\tag{19}
\]
It is obvious that (x̄*,ȳ*) is a feasible solution of EP(H0). Therefore, φ0(x̄*,ȳ*) provides an upper bound for the global optimal value ν(H0) of the problem EP(H0).
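In code, this upper-bounding step is a one-liner on top of the LRP solution (same assumed data layout as in the sketches above):

```python
# Recover y via (19) from the x part of an LRP solution and evaluate phi_0;
# by Section 2, the pair (x, y) is feasible for EP(H0), hence an upper bound.
def upper_bound(x, C, d0, E, r):
    y = 1.0 / (E @ x + r)             # y_i = 1 / (e_i^T x + r_i)
    return float(y @ (C @ x + d0))    # phi_0(x, y) = sum_i y_i n_i(x)
```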
4. Branching
In this algorithm, the branching process is executed in the space Rp rather than in Rn. In general, when p≪n, the amount of computation decreases and the efficiency improves. Therefore, we branch on the rectangle H0 containing y, and each subrectangle obtained by branching is also p-dimensional. Set
\[
H=\{y\in\mathbb{R}^p\mid L_i\le y_i\le U_i,\ i=1,2,\dots,p\}\tag{20}
\]
where H denotes either the initial rectangle H0 or a subrectangle of it. The branching rule is as follows:
choose the longest side of H, that is, Us-Ls=max{Ui-Li:i=1,2,…,p};
let Vs=(Us+Ls)/2 and
\[
H^1=\prod_{i=1}^{s-1}[L_i,U_i]\times[L_s,V_s]\times\prod_{i=s+1}^{p}[L_i,U_i],\qquad
H^2=\prod_{i=1}^{s-1}[L_i,U_i]\times[V_s,U_s]\times\prod_{i=s+1}^{p}[L_i,U_i].\tag{21}
\]
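The rule translates directly into code (a sketch; L and U are assumed to be NumPy arrays holding the bounds of H):

```python
# Bisect H = [L, U] along its longest edge, returning the two halves of (21).
import numpy as np

def branch(L, U):
    s = int(np.argmax(U - L))         # index of the longest side
    V = 0.5 * (L[s] + U[s])           # midpoint V_s
    U1, L2 = U.copy(), L.copy()
    U1[s], L2[s] = V, V
    return (L.copy(), U1), (L2, U.copy())
```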
5. Algorithm and Its Convergence
The branch and bound algorithm of the problem (GFP) is stated as follows:
Step 1.
Choose ε≥0 and the initial rectangle H0={y∈Rp∣li0≤yi≤ui0, i=1,2,…,p}; find an optimal solution x0 and the optimal value LB(H0) by solving the problem LRP(H0). Set LB0=LB(H0), xc=x0, yic=(∑j=1neijxjc+ri)-1 for i∈{1,2,…,p}, and UB0=φ0(xc,yc).
If UB0-LB0≤ε, stop. (xc,yc) and xc are global ε-optimal solutions of problems EP(H0) and (GFP), respectively. Otherwise, set P0={H0}, F=⌀, k=1, and go to Step 2.
Step 2.
Set UBk=UBk-1. Subdivide Hk-1 into two p-dimensional rectangles Hk,1,Hk,2⊆Rp via the branching rule. Set F=F∪{Hk-1}.
Step 3.
For j=1,2, compute LB(Hk,j). If LB(Hk,j)≠+∞, find an optimal solution xk,j of the problem LRP(Hk,j). Set t=0.
Step 4.
Set t=t+1. If t>2, go to Step 6. Otherwise, continue.
Step 5.
If UBk≤LB(Hk,t), set F=F∪{Hk,t}; go to Step 4. Otherwise, set
\[
y_i^{k,t}=\Bigl(\sum_{j=1}^{n}e_{ij}x_j^{k,t}+r_i\Bigr)^{-1},\quad i\in\{1,2,\dots,p\}.\tag{22}
\]
Let UBk=min{UBk,φ0(xk,t,yk,t)}. If UBk<φ0(xk,t,yk,t), go to Step 4. If UBk=φ0(xk,t,yk,t), set xc=xk,t,(xc,yc)=(xk,t,yk,t). Let
\[
F=F\cup\{H\in P_{k-1}\mid \mathrm{UB}_k\le \mathrm{LB}(H)\}.\tag{23}
\]
Step 6.
Set Pk={H∣H∈(Pk-1∪{Hk,1,Hk,2}),H∉F}.
Step 7.
Set LBk=min{LB(H)∣H∈Pk}. Let Hk∈Pk satisfy LBk=LB(Hk).
If UBk-LBk≤ε, stop. (xc,yc) and xc are global ε-optimal solutions of the problems EP(H0) and (GFP), respectively. Otherwise, set k=k+1 and go to Step 2.
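A compact sketch of the main loop follows, reusing solve_lrp, upper_bound, and branch from the earlier sketches; the bookkeeping of Steps 4-6 (the set F, the stored feasible set W, and tie handling in Step 5) is simplified.

```python
# Simplified branch-and-bound driver for Steps 1-7: keep a list of active
# rectangles with their lower bounds, always branch the one with the least
# bound, and fathom rectangles whose bound exceeds the incumbent value.
import numpy as np

def branch_and_bound(A, b, C, d0, E, r, l, u, l0, u0, eps=1e-4):
    LB0, x0 = solve_lrp(A, b, C, d0, E, r, l, u, l0, u0)
    best_val, best_x = upper_bound(x0, C, d0, E, r), x0
    active = [(LB0, np.asarray(l0, float), np.asarray(u0, float))]
    while active:
        active.sort(key=lambda rec: rec[0])
        LB, Lk, Uk = active.pop(0)              # rectangle with least bound
        if best_val - LB <= eps:                # termination test of Step 7
            break
        for Ls, Us in branch(Lk, Uk):           # Step 2: bisect
            val, x = solve_lrp(A, b, C, d0, E, r, l, u, Ls, Us)
            if x is None or val > best_val:     # Step 5: prune
                continue
            ub = upper_bound(x, C, d0, E, r)    # Step 5: update incumbent
            if ub < best_val:
                best_val, best_x = ub, x
            active.append((val, Ls, Us))
        active = [rec for rec in active if rec[0] <= best_val]  # fathom, cf. (23)
    return best_val, best_x
```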
Next, the convergence of the algorithm is stated in the following theorem.
Theorem 2.
(a) If the algorithm is finite, (xc,yc) and xc are global ε-optimal solutions of the problems EP(H0) and (GFP), respectively.
(b) For k≥0, let xk denote the incumbent solution xc at the end of step k. If the algorithm is infinite, then {xk} is a feasible solution sequence, whose every accumulation point is a global optimal solution of the problem (GFP), and
\[
\lim_{k\to\infty}\mathrm{UB}_k=\lim_{k\to\infty}\mathrm{LB}_k=\nu.\tag{24}
\]
Proof.
(a) If the algorithm is finite, suppose without loss of generality that it terminates at iteration k (k≥0). Since (xc,yc) is obtained by solving the problem LRP(H) for some H⊆H0, with optimal solution xc, set
\[
y_i^{c}=\frac{1}{\sum_{j=1}^{n}e_{ij}x_j^{c}+r_i},\quad i\in\{1,2,\dots,p\},\tag{25}
\]
where xc is a feasible solution of the problem (GFP) and (xc,yc) is a feasible solution of the problem EP(H0). The algorithm terminates when UBk-LBk≤ε. From Steps 1, 2, and 5, this implies that φ0(xc,yc)-LBk≤ε; moreover, by construction of the algorithm, LBk≤ν. Since (xc,yc) is a feasible solution of the problem EP(H0), φ0(xc,yc)≥ν.
Taken together, it is implied that
\[
\nu\le\varphi_0(x^c,y^c)\le \mathrm{LB}_k+\varepsilon\le\nu+\varepsilon.\tag{26}
\]
Therefore,
\[
\nu\le\varphi_0(x^c,y^c)\le\nu+\varepsilon.\tag{27}
\]
From the formula yic=1/(∑j=1neijxjc+ri),i=1,2,…,p, we have
\[
f(x^c)=\varphi_0(x^c,y^c).\tag{28}
\]
From (27), this implies that
\[
\nu\le f(x^c)\le\nu+\varepsilon.\tag{29}
\]
The proof of (a) is complete.
(b) If the algorithm is infinite, it generates a sequence of incumbent solutions of the problem EP(H0), denoted {(xk,yk)}; for each k≥1, (xk,yk) is obtained by solving the problem LRP(Hk) for some Hk⊆H0, with optimal solution xk∈D. Set
\[
y_i^{k}=\frac{1}{\sum_{j=1}^{n}e_{ij}x_j^{k}+r_i},\quad i\in\{1,2,\dots,p\}.\tag{30}
\]
Then the sequence {xk} consists of feasible solutions of the problem (GFP).
Suppose that x̄ is an accumulation point of {xk}; assume without loss of generality that limk→∞xk=x̄. Since D is a compact set, x̄∈D. Furthermore, because {xk} is infinite, we may assume without loss of generality that Hk+1⊆Hk for each k; then, for some point ȳ∈Rp,
\[
\lim_{k\to\infty}H^k=\bigcap_{k}H^k=\{\bar y\}.\tag{31}
\]
Set H̄={ȳ}, and for each k let Hk={y∈Rp∣Lik≤yi≤Uik, i=1,2,…,p}. Since Hk+1⊆Hk⊆H0, from Step 5 we know that {LB(Hk)} is a nondecreasing sequence bounded above by ν; hence limk→∞LB(Hk) is a finite number and satisfies
\[
\lim_{k\to\infty}\mathrm{LB}(H^k)\le\nu.\tag{32}
\]
For each k, from Step 3, we know that LB(Hk) is equal to the optimal value of the problem LRP(Hk) and that xk is an optimal solution of this problem. From (31), we have
\[
\lim_{k\to\infty}L^{k}=\lim_{k\to\infty}U^{k}=\bar y,\qquad \text{that is, } \bar H=\{\bar y\}.\tag{33}
\]
Since limk→∞xk=x̄, Lik≤1/(∑j=1neijxjk+ri)≤Uik, and the function ∑j=1neijxj+ri is continuous, we obtain
\[
\frac{1}{\sum_{j=1}^{n}e_{ij}\bar x_j+r_i}=\bar y_i,\quad i=1,2,\dots,p.\tag{34}
\]
This implies that (x¯,y¯) is a feasible solution of the problem EP(H0). Therefore,
\[
\varphi_0(\bar x,\bar y)\ge \nu.\tag{35}
\]
Together with (32), we have
\[
\varphi_0(\bar x,\bar y)\ge\nu\ge\lim_{k\to\infty}\mathrm{LB}(H^k).\tag{36}
\]
Since the branching process is bisection and the branching process of rectangle is exhaustive, we have
\[
\lim_{k\to\infty}\mathrm{LB}(H^k)=\nu=\varphi_0(\bar x,\bar y).\tag{37}
\]
Therefore, (x¯,y¯) is a global optimal solution of the problem EP(H0). By Theorem 1, this implies that x¯ is a global optimal solution of the problem (GFP). For each k, since xk is the incumbent solution of the problem (GFP) at the end of step k, UBk=f(xk); by the continuity of f, we obtain that
\[
\lim_{k\to\infty}f(x^k)=f(\bar x).\tag{38}
\]
Since x¯ is a global optimal solution of the problem (GFP),
\[
f(\bar x)=\nu.\tag{39}
\]
Therefore, limk→∞UBk=ν. The proof is complete.
6. Numerical Experiment
The proposed algorithm is programmed in MATLAB 7.8 and run on a Pentium(R) 4 CPU at 3.20 GHz. In order to compare with the algorithm of [10], we solve the three test examples from [10].
Example 1 (see [10]).
We choose p=n=2; for each (x1,x2)∈R2, the numerators and denominators are
\[
\begin{aligned}
n_1(x_1,x_2)&=37x_1+73x_2+13, & n_2(x_1,x_2)&=63x_1-18x_2+39,\\
d_1(x_1,x_2)&=13x_1+13x_2+13, & d_2(x_1,x_2)&=13x_1+26x_2+13,
\end{aligned}\tag{40}
\]
and all (x1,x2)∈D satisfy
\[
5x_1-3x_2=3,\qquad 1.5\le x_1\le 3.\tag{41}
\]
Following our algorithm, we first solve the linear programming problems
\[
\min\ d_i(x),\ \ \text{s.t. } Ax\le b;\qquad
\max\ d_i(x),\ \ \text{s.t. } Ax\le b,\qquad i=1,2,\dots,p,\tag{42}
\]
whose optimal solutions are denoted by xi1 and xi2 (i=1,2); then
\[
W=W\cup\{x_{i1},x_{i2}: i=1,2,\dots,p\},\tag{43}
\]
where W represents the set of currently known feasible solutions of the problem (GFP), and the optimal values are denoted by l̄i and ūi (i=1,2); then the initial rectangle is
\[
H^0=[0.0096,\,0.0192]\times[0.0064,\,0.0140].\tag{44}
\]
By solving the linear relaxation programming problem LRP(H0), we obtain the optimal solution x0=[2.0016; 2.3360] and the optimal value LB(H0)=3.9743, which is a lower bound for the original problem. Set
\[
y_i^0=\Bigl(\sum_{j=1}^{n}e_{ij}x_j^0+r_i\Bigr)^{-1}.\tag{45}
\]
Then (x0,y0) is a feasible solution of EP(H0), and min{φ0(x0,y0), f(x) : x∈W}=4.9126 provides an upper bound for the global optimal value of the problem EP(H0). Next, we branch on the rectangle H0 associated with the current lower bound; our algorithm produces the rectangles
\[
H^{0,1}=[0.0096,\,0.0144]\times[0.0064,\,0.0140],\qquad
H^{0,2}=[0.0144,\,0.0192]\times[0.0064,\,0.0140].\tag{46}
\]
We solve the linear relaxation problem LRP over H0,1 and H0,2, respectively. In LRP(H0,1), the optimal solution and optimal value are [2.2524; 2.7540] and v=4.2345, so the lower bound of the original problem over H0,1 is LB(H0,1)=4.2345; the upper bound corresponding to this solution is 4.9617 (>4.9126), so the upper bound is unchanged. In LRP(H0,2), the optimal solution and optimal value are [1.8019; 2.0032] and v=4.5548, so the lower bound over H0,2 is LB(H0,2)=4.5548; the upper bound corresponding to this solution is 4.9323 (>4.9126), so the upper bound is again unchanged. We then repeatedly branch on the rectangle with the least lower bound until, at the 55th iteration, we obtain
\[
H^{55,1}=[0.0186,\,0.0189]\times[0.0135,\,0.0137].\tag{47}
\]
Solving the linear programming problem LRP over H55,1 gives the lower bound 4.9125, which satisfies the termination rule. Therefore, the optimal value and the optimal solution of the original problem are 4.9126 and x=[1.5000; 1.5000]; the lower bound 4.9125 is the approximate optimal value. The accuracy is ε=0.0001.
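For reference, Example 1's data can be fed to the sketches above as follows (an illustration; the equality constraint 5x1-3x2=3 and the bounds on x1 are rewritten as pairs of inequalities to fit the form Ax≤b):

```python
# Example 1 encoded for the earlier sketches; values taken from (40)-(41).
import numpy as np

A = np.array([[ 5.0, -3.0],      # 5x1 - 3x2 <= 3
              [-5.0,  3.0],      # 5x1 - 3x2 >= 3
              [ 1.0,  0.0],      # x1 <= 3
              [-1.0,  0.0]])     # x1 >= 1.5
b = np.array([3.0, -3.0, 3.0, -1.5])
C  = np.array([[37.0, 73.0], [63.0, -18.0]]); d0 = np.array([13.0, 39.0])
E  = np.array([[13.0, 13.0], [13.0, 26.0]]);  r  = np.array([13.0, 13.0])

l, u, l0, u0 = initial_boxes(A, b, E, r)   # gives H0 of (44)
val, x = branch_and_bound(A, b, C, d0, E, r, l, u, l0, u0, eps=1e-4)
```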
The above example has (n,p)=(2,2), where n denotes the number of variables; our algorithm reaches the required accuracy. In Example 2, where (n,p)=(3,3), and in Example 3, where (n,p)=(3,4), we still obtain good results. As n and p increase, the computational complexity grows; for instance, in Example 3 with (n,p)=(3,4) we can still quickly obtain the approximate optimal value and the optimal value with this paper's algorithm, but the performance is poorer than in the earlier examples. The results of Example 1 are shown in Table 1.
Table 1: Results for Example 1 under different accuracies.

ε        Approximate optimal value   Optimal value
0.01     4.9027                      4.9126
1.0e-3   4.9116                      4.9126
1.0e-4   4.9125                      4.9126
Example 2 (see [10]).
\[
\begin{aligned}
\min\ &\frac{3x_1+5x_2+3x_3+50}{3x_1+4x_2+5x_3+50}+\frac{3x_1+4x_2+50}{4x_1+3x_2+2x_3+50}+\frac{4x_1+2x_2+4x_3+50}{5x_1+4x_2+3x_3+50},\\
\text{s.t. } &2x_1+x_2+5x_3\le 10,\quad x_1+6x_2+2x_3\le 10,\\
&9x_1+7x_2+3x_3\ge 10,\quad x_1,x_2,x_3\ge 0.
\end{aligned}\tag{48}
\]
The optimal value is 2.8619.
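Example 2's data fit the same format as the sketch for Example 1 (an illustration; the ≥ constraint is multiplied by -1 to match Ax≤b):

```python
# Example 2 encoded for the earlier sketches; values taken from (48).
import numpy as np

A = np.array([[ 2.0,  1.0,  5.0],
              [ 1.0,  6.0,  2.0],
              [-9.0, -7.0, -3.0]])          # 9x1 + 7x2 + 3x3 >= 10, negated
b = np.array([10.0, 10.0, -10.0])
C  = np.array([[3.0, 5.0, 3.0], [3.0, 4.0, 0.0], [4.0, 2.0, 4.0]])
d0 = np.array([50.0, 50.0, 50.0])
E  = np.array([[3.0, 4.0, 5.0], [4.0, 3.0, 2.0], [5.0, 4.0, 3.0]])
r  = np.array([50.0, 50.0, 50.0])
```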
Example 3 (see [10]).
\[
\begin{aligned}
\min\ &\frac{4x_1+3x_2+3x_3+50}{3x_2+3x_3+50}+\frac{3x_1+4x_3+50}{4x_1+4x_2+5x_3+50}+\frac{x_1+2x_2+4x_3+50}{x_1+5x_2+5x_3+50}+\frac{x_1+2x_2+4x_3+50}{5x_2+4x_3+50},\\
\text{s.t. } &2x_1+x_2+5x_3\le 10,\quad x_1+6x_2+3x_3\le 10,\\
&9x_1+7x_2+3x_3\ge 10,\quad x_1,x_2,x_3\ge 0.
\end{aligned}\tag{49}
\]
The optimal value is 3.7109.
We choose ε=1.0e-4; the approximate optimal solutions satisfying this accuracy, together with the iteration counts and CPU running times, are then obtained. The results of our algorithm are shown in Table 2, and the results of [10] are shown in Table 3.
Table 2: Results of our algorithm (ε = 1.0e-4); (x1, x2, x3) is the optimal solution within accuracy (or one solution among the optimal solutions).

Example   x1       x2       x3       Approximate optimal value   Iterations   CPU (s)
1         1.5000   1.5000   —        4.9125                      113          201.626020
2         5.0000   0.0000   0.0000   2.8619                      12           28.294344
3         0.0000   1.6667   0.0000   3.7087                      5            4.190375
Table 3: Results of the algorithm in [10].

Example   x1       x2       x3      Approximate optimal value   Iterations   CPU (s)
1         3        4        —       5                           32           1.089285
2         0        3.3333   0       3.0029                      80           8.566259
3         0        0.625    1.875   4.0000                      58           2.968694
According to Tables 2 and 3, in Example 1, although the solution (3,4)T from [10] is feasible, its objective value 5 is larger than the value 4.9126 obtained by our algorithm; in Example 2, the solution (0,3.3333,0)T reported in [10] turns out to be infeasible; in Example 3, the value 4.0000 reported in [10] for the solution (0,0.625,1.875)T is actually 3.8384, but this is still larger than the value 3.7109 obtained by our algorithm.
From the above comparison, the optimal values found by our algorithm are smaller than those reported in [10], and, except for Example 1, the iteration counts for Examples 2 and 3 are much smaller as well. Although our running times are longer than those of [10], this is an acceptable price for obtaining more accurate optimal solutions.
In conclusion, our algorithm is feasible and effective and, to some degree, performs better than the algorithm of [10].
7. Conclusion
In this paper, the solution of the sum of linear ratios programming problem is discussed. The problem is equivalently transformed into a bilinear programming problem; then, using the linear characteristics of the convex and concave envelopes of the two-variable product function, a linear relaxation of the bilinear programming problem is constructed, which determines a lower bound of the optimal value of the original problem. On this basis, a branch and bound algorithm for solving the sum of linear ratios programming problem is proposed, and the convergence of the algorithm is proved. Numerical results show the effectiveness of the algorithm and that it improves on the computational results of [10].
Acknowledgments
This work is supported by the National Natural Science Foundation of China (11161001) and by a research project of Beifang University of Nationalities (2013XYZ025).
References

[1] H. Konno and H. Watanabe, "Bond portfolio optimization problems and their applications to index tracking: a partial optimization approach," Journal of the Operations Research Society of Japan, vol. 39, no. 3, pp. 295–306, 1996.
[2] J. E. Falk and S. W. Palocsay, "Optimizing the sum of linear fractional functions," in Recent Advances in Global Optimization, Princeton Series in Computer Science, pp. 221–258, Princeton University Press, Princeton, NJ, USA, 1992.
[3] R. Horst, P. M. Pardalos, and N. V. Thoai, Introduction to Global Optimization, vol. 48 of Nonconvex Optimization and Its Applications, Kluwer Academic Publishers, Dordrecht, The Netherlands, 2nd edition, 2000.
[4] H. Konno, Y. Yajima, and T. Matsui, "Parametric simplex algorithms for solving a special class of nonconvex minimization problems," Journal of Global Optimization, vol. 1, no. 1, pp. 65–81, 1991.
[5] H. Konno and N. Abe, "Minimization of the sum of three linear fractional functions," Journal of Global Optimization, vol. 15, no. 4, pp. 419–432, 1999.
[6] P.-P. Shen and C.-F. Wang, "Global optimization for sum of linear ratios problem with coefficients," Applied Mathematics and Computation, vol. 176, no. 1, pp. 219–229, 2006.
[7] W. Yanjun, S. Peiping, and L. Zhian, "A branch-and-bound algorithm to globally solve the sum of several linear ratios," Applied Mathematics and Computation, vol. 168, no. 1, pp. 89–101, 2005.
[8] H. P. Benson, "Solving sum of ratios fractional programs via concave minimization," Journal of Optimization Theory and Applications, vol. 135, no. 1, pp. 1–17, 2007.
[9] H. W. Jiao and Q. G. Feng, "Global optimization for sum of linear ratios problem using new pruning technique," Mathematical Problems in Engineering, vol. 2008, Article ID 646205, 2008.
[10] C.-F. Wang and P.-P. Shen, "A global optimization algorithm for linear fractional programming," Applied Mathematics and Computation, vol. 204, no. 1, pp. 281–287, 2008.