This paper presents a branch and bound algorithm for globally solving the sum of concave-convex ratios problem (P) over a compact convex set. First, problem (P) is converted into an equivalent problem (P1). The initial nonconvex programming problem is then reduced to a sequence of convex programming problems by a linearization technique. The proposed algorithm converges to a global optimal solution through the successive solution of this series of convex programs. Numerical examples are given to illustrate the feasibility of the proposed algorithm.
1. Introduction
We consider the following sum of concave-convex ratios programming problem:
(1)(P) v=max f(x)=∑i=1p fi(x)/gi(x), s.t. x∈X={x∈Rn∣Ax⩽b},
where p⩾2, fi, i=1,2,…,p, are concave and differentiable functions defined on Rn, gi, i=1,2,…,p, are convex and differentiable functions defined on Rn, X⊆Rn is a nonempty, compact convex set, and for each i=1,2,…,p, fi(x)>0 and gi(x)>0 for all x∈X.
Over the past years, various algorithms have been proposed for solving special cases of the fractional programming problem. For instance, algorithmic and computational results for single-ratio fractional programming can be found in [1, 2] and in the literature cited therein. At present, there exist a number of algorithms for globally solving the sum of ratios problem in which the numerators and denominators are affine functions or the feasible region is a polyhedron [3–5]. To our knowledge, four algorithms have been proposed for solving the nonlinear sum of ratios problem [6–9]. Freund and Jarre [10] present an interior-point approach for the solution of much more general problems with convex-concave ratios and convex constraints. Shen et al. [11] present a simplicial branch and duality bound algorithm for globally solving the sum of convex-convex ratios problem with a nonconvex feasible region.
In this paper, we develop a branch and bound algorithm for globally solving problem (P). First, although the branch and bound search involves rectangles defined in a space of dimension 3p, branching takes place in a space of only dimension p, where p is the number of ratios in the objective function of problem (P). Second, all subproblems that must be solved to implement the algorithm are convex programming problems, each of which is guaranteed to have an optimal solution. Finally, some examples are given to show that the proposed method can solve all of the test problems, finding globally optimal solutions within a prespecified tolerance. The algorithm of this paper was motivated by the seminal work of [12] on the generalized concave multiplicative programming problem.
The organization and content of this paper can be summarized as follows. In Section 2, we demonstrate how to convert problem (P) into an equivalent problem (P1). By using the convex envelope of a bilinear function and the special characteristics of the quadratic function, we illustrate how to generate the convex relaxation of problem (P1) in Section 3. In Section 4, the branch and bound algorithm for globally solving (P) is presented, together with its convergence properties and computational considerations for implementing it. Numerical examples demonstrating the effectiveness of the proposed algorithm are given in Section 5. Some concluding remarks are given in Section 6.
2. Equivalent Program
To globally solve problem (P), the branch and bound algorithm instead globally solves an equivalent problem (P1). In this section, we show how to convert problem (P) into the equivalent nonconvex programming problem (P1).
Let I={1,2,…,p}. For each i∈I, let hi(x)=√(fi(x))/gi(x). Then, we have the following result.
Proposition 1.
Let X′ be an open set containing X such that, for each i=1,2,…,p, fi(x)>0 and gi(x)>0 for all x∈X′. Then, for each i=1,2,…,p, the function hi(x) is semistrictly quasiconcave on X′.
Proof.
For any i∈I, since fi(x) is concave, positive, and differentiable on X′, it is easy to show that √(fi(x)) is concave and differentiable on X′. Since gi(x) is positive, convex, and differentiable on X′, it follows from Avriel et al. [13] that hi(x) is semistrictly quasiconcave on X′.
Proposition 2.
Let X′ be defined as in Proposition 1. For each i∈I, we consider the problem
(2)(Pi)Ui0=maxx∈Xhi(x).
Then, any local maximum is also a global maximum of problem (Pi).
Proof.
Since X is a convex set, the result follows directly from Proposition 1 and Theorem 3.37 of [13].
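Because every local maximum of (Pi) is global, Ui0 can be computed by any local search method for quasiconcave maximization. As an illustrative sketch only (not code from the paper), the following applies ternary search to a hypothetical one-dimensional ratio over X=[0.2, 1.8], which is unimodal by Propositions 1 and 2:

```python
import math

def h(x):
    # hypothetical toy ratio: f(x) = 2x - x^2 is concave and positive on X,
    # g(x) = x + 1 is convex and positive; h = sqrt(f)/g is semistrictly
    # quasiconcave, hence unimodal on the interval X
    return math.sqrt(2.0*x - x*x) / (x + 1.0)

a, b = 0.2, 1.8                      # X = [0.2, 1.8]
while b - a > 1e-8:                  # ternary search: valid because every
    m1 = a + (b - a) / 3.0           # local maximum is global (Proposition 2)
    m2 = b - (b - a) / 3.0
    if h(m1) < h(m2):
        a = m1
    else:
        b = m2
x_star = 0.5 * (a + b)               # maximizer x* = 1/2
U0 = h(x_star)                       # U^0 = 1/sqrt(3)
```

For this toy ratio the maximizer is x* = 1/2, so U0 = 1/√3 ≈ 0.5774.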
Therefore, Ui0 can be found by any number of convex programming algorithms. Let H0={u∈Rp∣0⩽ui⩽Ui0,i∈I}. Then, H0 is a full-dimensional rectangle in Rp. Let Φ:X×H0→R be defined for each (x,u)∈X×H0 by
(3)Φ(x,u)=∑i∈I[2ui√(fi(x))−ui²gi(x)].
For any u∈H0, define the problem (Su) by
(4)(Su)φ(u)=maxx∈XΦ(x,u).
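The role of the multiplier ui can be checked numerically in the single-ratio case: for fixed x, the concave quadratic u ↦ 2u√f − u²g attains its maximum f/g at u = √f/g, which is the identity behind the equivalence proved below. A quick sketch with arbitrary sample values (not from the paper):

```python
import math

f_val, g_val = 7.2, 3.1          # sample values f(x) > 0, g(x) > 0 at a fixed x

def phi_term(u):
    # single term of Phi in (3): 2*u*sqrt(f) - u^2*g
    return 2.0*u*math.sqrt(f_val) - u*u*g_val

u_star = math.sqrt(f_val)/g_val  # stationary point of the concave quadratic
ratio = f_val/g_val

# the maximum over u recovers the ratio, and no nearby u does better
assert abs(phi_term(u_star) - ratio) < 1e-12
assert all(phi_term(u_star + d) <= ratio + 1e-12 for d in (-0.5, -0.1, 0.1, 0.5))
```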
Definition 3 (see [14]).
Let X and Z be convex subsets of Rm and Rq, respectively. A real-valued function h defined on X×Z is biconcave if, for each fixed x¯∈X, h(x¯,z) is a concave function on Z and, for each fixed z¯∈Z, h(x,z¯) is a concave function on X.

The following result shows that, for every u∈H0, the value of φ(u) can be determined by solving a convex program.
Lemma 4.
The objective function Φ(x,u) of problem (Su) is biconcave on X×H0.
Proof.
For each i∈I, let Hi0={ui∈R∣0⩽ui⩽Ui0} and define Φi:X×Hi0→R by Φi(x,ui)=2ui√(fi(x))−ui²gi(x). Then, for every x∈X and u∈H0, we have Φ(x,u)=∑i=1pΦi(x,ui). Since X, H0, and Hi0 (i∈I) are convex sets, it suffices to show that, for every i∈I, Φi is biconcave on X×Hi0. Fix u^i∈Hi0. Then, Φi(x,u^i)=2u^i√(fi(x))−u^i²gi(x). Since fi(x) is concave and positive on X and u^i⩾0, the function 2u^i√(fi(x)) is concave on X. Since gi(x) is a positive convex function on X and u^i⩾0, −u^i²gi(x) is also concave on X. Therefore, Φi(x,u^i) is a concave function of x on X.
Now, let x^∈X be a fixed vector. For all ui∈Hi0, Φi(x^,ui)=2ui√(fi(x^))−ui²gi(x^). Since gi(x^)>0, it follows easily that Φi(x^,ui) is a concave function on Hi0. The proof is complete.
We now define the problem (P1) by
(5)(P1)v1=maxu∈H0φ(u)=maxu∈H0maxx∈XΦ(x,u).
Theorem 5.
The problem (P) is equivalent to the problem (P1) in the following sense: if u* is an optimal solution to the problem (P1) and if x* is a corresponding optimal solution of problem (Su) with u=u*, then x* is an optimal solution for problem (P). Moreover, the following relations hold:
(6)ui*=√(fi(x*))/gi(x*),i∈I,(7)v1=φ(u*)=f(x*)=v.
Conversely, if x* is an optimal solution of problem (P), then the vector u* defined by relation (6) is an optimal solution of problem (P1) and relation (7) holds.
Proof.
Let u* be a global optimal solution for problem (P1), and let x* solve problem (Su) with u=u*. Then, (x*,u*)∈X×H0 and
(8)φ(u*)=maxu∈H0φ(u)=maxx∈X∑i∈I[2ui*√(fi(x))−(ui*)²gi(x)]=∑i∈I[2ui*√(fi(x*))−(ui*)²gi(x*)].
It follows from the definition of φ and (8) that (x*,u*) is a global optimal solution to the problem
(9)(P2)maxx∈X,u∈H0Φ(x,u).
Therefore, u* is a global optimal solution to the problem
(10)maxu∈H0Φ(x*,u).
For each i∈I, define ri:R→R for each ui∈R by
(11)ri(ui)=2ui√(fi(x*))−ui²gi(x*).
Then, for all i∈I, since gi(x*)>0, ri(ui) is a strictly concave function, and the maximum of ri(ui) over ui∈R is attained uniquely at ui0=√(fi(x*))/gi(x*). By the definition of H0, (u0)T≜(u10,u20,…,up0)∈H0. The previous two statements imply that u0 is the unique optimal solution to (10). Therefore, u*=u0, and the objective function value of u* in (10) is
(12)Φ(x*,u*)=∑i∈I[2ui*√(fi(x*))−(ui*)²gi(x*)]=∑i∈I[2fi(x*)/gi(x*)−fi(x*)/gi(x*)]=∑i∈Ifi(x*)/gi(x*)=f(x*).
So, f(x*) is also the objective function value of (x*,u*) in problem (P2). Assume that there exists some x∈X such that f(x)>f(x*). Let ui=√(fi(x))/gi(x) (i∈I). Then, uT=(u1,u2,…,up)∈H0, and the objective function value of (x,u) in problem (P2) is
(13)Φ(x,u)=∑i∈I[2ui√(fi(x))−(ui)²gi(x)]=∑i∈Ifi(x)/gi(x)=f(x).
It follows from f(x)>f(x*) and (13) that (x*,u*) is not a global optimal solution to problem (P2), which is a contradiction. This implies that, for all x∈X, f(x)⩽f(x*); that is, x* is a global optimal solution for problem (P). From (8) and (12), φ(u*)=f(x*). Since f(x*)=v, this completes the proof of the first statement of the theorem.
Now suppose that x* is a global optimal solution for problem (P). Then, x*∈X and fi(x*)>0, gi(x*)>0 for all i∈I. We compute (u*)T≜(u1*,u2*,…,up*)∈H0 by (6). From the definition of φ,
(14)φ(u*)=maxx∈X{∑i∈I[2ui*√(fi(x))−(ui*)²gi(x)]}⩾∑i∈I[2ui*√(fi(x*))−(ui*)²gi(x*)]=∑i∈Ifi(x*)/gi(x*)=f(x*).
Suppose that φ(u1)>φ(u*) for some u1∈H0, and let x1 be a corresponding optimal solution of problem (Su) with u=u1. Since φ(u1)=Φ(x1,u1)⩽f(x1), this would give f(x1)>φ(u*)⩾f(x*)=v, contradicting the optimality of x*. Therefore, since u*∈H0, φ(u*)=v1, and u* is a global optimal solution for problem (P1).
3. Relaxation Problem for Problem (P1)
Let H={u∈Rp∣L⩽u⩽U} denote H0 or a subrectangle of H0 that is generated by the branch and bound algorithm, where L,U∈Rp and 0⩽Li<Ui for all i∈I. We consider the following problem:
(15)(P1(H))maxu∈Hφ(u).
For each i∈I, let t¯i=maxx∈X√(fi(x)), and let s¯i satisfy
(16)s¯i⩾maxx∈Xgi(x)>0.
Since, for every i∈I, fi(x) is a concave function on X, t¯i can be found by solving a convex programming problem. For each i∈I, s¯i can be chosen to be a sufficiently large positive number.
Now, consider the function G:H0→R which is given for any u∈H0 by
(17)(P(u))G(u)=max∑i=1p[2uiti−ui²si],
s.t. ti−√(fi(x))⩽0, i∈I,
−si+gi(x)⩽0, i∈I,
0⩽ti⩽t¯i, 0⩽si⩽s¯i, i∈I,
x∈X.
Theorem 6.
For each u∈H0, φ(u)=G(u). Moreover, if u∈H0 and (x*,t*,s*) is an optimal solution to problem (P(u)), then φ(u)=∑i∈I[2ui√(fi(x*))−ui²gi(x*)].
Proof.
Let u∈H0. By the definition of φ, for some x^∈X,
(18)φ(u)=∑i∈I[2ui√(fi(x^))−ui²gi(x^)].
For every i∈I, let
(19)t^i=√(fi(x^)), s^i=gi(x^).
Let t^T=(t^1,t^2,…,t^p) and s^T=(s^1,s^2,…,s^p). Then, (x^,t^,s^) is a feasible solution to problem (P(u)) with objective function value
(20)∑i∈I[2uit^i-ui2s^i]=φ(u),
where the equality follows from (18) and (19). Then, we have G(u)⩾φ(u). Thus, in order to prove the first statement of the theorem, it suffices to show that G(u)>φ(u) cannot hold.
Assume that G(u)>φ(u). Then, by the definition of G, there exists some feasible solution (x,t,s) of problem (P(u)) such that
(21)∑i∈I[2uiti−ui²si]>φ(u)=∑i∈I[2ui√(fi(x^))−ui²gi(x^)].
Since 0⩽ti⩽√(fi(x)) and gi(x)⩽si in problem (P(u)) and ui⩾0 for all i∈I, we have
(22)∑i∈I[2ui√(fi(x))−ui²gi(x)]⩾∑i∈I[2uiti−ui²si].
From (18), (21), and (22), we obtain
(23)∑i∈I[2ui√(fi(x))−ui²gi(x)]>φ(u).
Since x∈X, this is a contradiction with the definition of φ(u). Therefore, the assumption that G(u)>φ(u) is false. This implies that φ(u)=G(u), for each u∈H0.
Now, we show the second part of the theorem. Let (x*,t*,s*) be an optimal solution to problem (P(u)). Then, for each i∈I, 0⩽ti*⩽√(fi(x*)) and si*⩾gi(x*)>0. So, we have
(24)G(u)=∑i∈I[2uiti*−ui²si*]⩽∑i∈I[2ui√(fi(x*))−ui²gi(x*)].
Let t^T=(t^1,t^2,…,t^p) and s^T=(s^1,s^2,…,s^p), where, for each i∈I,
(25)t^i=√(fi(x*)), s^i=gi(x*).
Then, (x*,t^,s^) is a feasible solution for problem (P(u)), so that, by definition of G(u),
(26)G(u)⩾∑i∈I[2ui√(fi(x*))−ui²gi(x*)].
From (24) and (26), it follows that
(27)φ(u)=∑i∈I[2ui√(fi(x*))−ui²gi(x*)],
since φ(u)=G(u).
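The mechanism behind Theorem 6 is that, for fixed x and u, the objective of (P(u)) is increasing in each ti and decreasing in each si, so an optimal solution pushes ti up to √(fi(x)) and si down to gi(x), recovering Φ(x,u). A toy single-term grid search (all numbers are hypothetical):

```python
import math

# fixed x with hypothetical values f(x) = 4.0, g(x) = 2.0, and a fixed u
f_x, g_x, u = 4.0, 2.0, 0.8
t_bar, s_bar = math.sqrt(f_x), 5.0        # variable bounds: t <= sqrt(f(x)), s <= s_bar

def inner(t, s):
    # objective of (P(u)) restricted to this x: 2*u*t - u^2*s
    return 2.0*u*t - u*u*s

# grid search over the (t, s) box {0 <= t <= sqrt(f(x)), g(x) <= s <= s_bar}
best = max(inner(i*t_bar/50.0, g_x + j*(s_bar - g_x)/50.0)
           for i in range(51) for j in range(51))

# the maximum sits at t = sqrt(f(x)), s = g(x), which recovers Phi(x, u)
assert abs(best - (2.0*u*math.sqrt(f_x) - u*u*g_x)) < 1e-12
```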
It follows from Theorem 6 that problem (P1(H)) has the same optimal value v(H) as the following problem:
(28)(PE1(M))max∑i=1p[2uiti−ui²si],
s.t. ti−√(fi(x))⩽0, i∈I,
−si+gi(x)⩽0, i∈I,
(t,s,u)∈M, x∈X,
where M={(t,s,u)∈R3p∣0⩽ti⩽t¯i,0⩽si⩽s¯i,Li⩽ui⩽Ui,i∈I}.
In order to construct a relaxation of problem (PE1(M)), we use the concept of a concave envelope, which may be defined as follows.
Definition 7 (see [15]).
Let M⊆Rq be a compact, convex set, and let f:M→R be upper semicontinuous on M. Then, fM:M→R is called the concave envelope of f on M when
(i) fM(x) is a concave function on M;
(ii) fM(x)⩾f(x) for all x∈M;
(iii) there is no function w(x) satisfying (i) and (ii) such that w(x¯)<fM(x¯) for some point x¯∈M.
The convex envelope of a function f on M is defined in a similar manner.
Let, for each i∈I, Mi={(ui,ti,si)∈R3∣Li⩽ui⩽Ui, 0⩽ti⩽t¯i, 0⩽si⩽s¯i}. Then, for each i∈I, 0⩽Li<Ui, and the concave envelope qiMi(ui) of the quadratic function qi(ui)=ui² is given by
(29)qiMi(ui)=Ki(ui-Li)+Li2,
where Ki=Li+Ui. Let q_i(ui) represent the linear lower bounding function of qi(ui) over the interval Li⩽ui⩽Ui. Then, by the convexity of the function qi, the function q_i(ui) is given as follows:
(30)q_i(ui)=Ki(ui−(1/4)Ki).
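Both affine functions can be checked numerically against qi(ui)=ui² on a sample interval (the bounds below are arbitrary): the chord through the endpoints overestimates, the tangent at the midpoint underestimates, and both gaps peak at (U−L)²/4, as Lemma 8 below states.

```python
# sandwich q_(u) <= u^2 <= q^M(u) on [L, U] (eqs. (29)-(30)), with K = L + U
L, U = 0.5, 2.0
K = L + U

def q(u):        return u*u
def q_lower(u):  return K*(u - K/4.0)          # tangent at the midpoint K/2
def q_upper(u):  return K*(u - L) + L*L        # chord through (L, L^2), (U, U^2)

for i in range(101):
    u = L + i*(U - L)/100.0
    assert q_lower(u) - 1e-12 <= q(u) <= q_upper(u) + 1e-12

# the overestimation gap peaks at (U - L)^2/4, attained at the midpoint
gap = max(q_upper(L + i*(U - L)/100.0) - q(L + i*(U - L)/100.0) for i in range(101))
assert abs(gap - (U - L)**2/4.0) < 1e-6
```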
Lemma 8.
Consider the functions qi(ui), q_i(ui), and qiMi(ui) for any ui∈Hi=[Li,Ui], where i=1,2,…,p. Then, the following two statements are valid.
qiMi(ui) is an affine concave envelope of qi(ui) over Mi, and q_i(ui) is an affine function corresponding to a supporting hyperplane of the graph of qi(ui) over Mi, which is parallel to qiMi(ui). Moreover, we have
(31)q_i(ui)⩽qi(ui)⩽qiMi(ui),∀ui∈Hi;i=1,2,…,p.
When ωi=Ui−Li→0, the differences Δi1=qiMi−qi and Δi2=qi−q_i satisfy
(32)maxui∈HiΔi1=maxui∈HiΔi2=(1/4)(Ui−Li)²⟶0.
Proof.
Since Δi1(ui)=Ki(ui−Li)+Li²−ui² is a concave function of ui on Hi, it attains its maximum at the stationary point ui*=(1/2)Ki. Thus, it is not difficult to obtain
(33)Δi,max1=(1/4)(Ui−Li)².
On the other hand, since Δi2(ui)=ui²−Ki(ui−(1/4)Ki)=(ui−(1/2)Ki)² is a convex function of ui on Hi, it attains its maximum at an endpoint, ui=Li or ui=Ui. Thus,
(34)Δi,max2=(1/4)(Ui−Li)².
Obviously, when ωi=Ui−Li→0,
(35)maxui∈HiΔi1=maxui∈HiΔi2=(1/4)(Ui−Li)²⟶0.
This completes the proof.
Therefore, for all (ui,ti,si)∈Mi, we can obtain
(36)∑i=1p[2uiti−qiMi(ui)si]⩽∑i=1p[2uiti−ui²si]⩽∑i=1p[2uiti−q_i(ui)si],
that is,
(37)∑i=1p[2uiti−Kiuisi+(KiLi−Li²)si]⩽∑i=1p[2uiti−ui²si]⩽∑i=1p[2uiti−Kiuisi+(1/4)Ki²si].
For each i∈I, 0⩽Li<Ui, and, from Benson [16], the concave envelope hiMi(ui,ti) of the bilinear function hi(ui,ti)=uiti is given for each (ui,ti,si)∈Mi by
(38)hiMi(ui,ti)=min{t¯iui+Liti−t¯iLi, Uiti},
and the convex envelope hiMi(ui,si) of the bilinear function hi(ui,si)=uisi is given for each (ui,ti,si)∈Mi by
(39)hiMi(ui,si)=max{Lisi, s¯iui+Uisi−s¯iUi}.
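These envelope formulas can be sanity-checked by sampling: the concave envelope (38) must lie above uiti everywhere on Mi, and the convex envelope (39) must lie below uisi. A sketch with arbitrary bounds:

```python
import random

# envelopes of the bilinear terms u*t over [L,U] x [0,t_bar] (eq. (38))
# and u*s over [L,U] x [0,s_bar] (eq. (39)); the bounds are hypothetical
L, U, t_bar, s_bar = 0.3, 1.7, 2.5, 4.0

def over_ut(u, t):   # concave envelope: pointwise min of two overestimators
    return min(t_bar*u + L*t - t_bar*L, U*t)

def under_us(u, s):  # convex envelope: pointwise max of two underestimators
    return max(L*s, s_bar*u + U*s - s_bar*U)

random.seed(1)
for _ in range(1000):
    u = random.uniform(L, U)
    t = random.uniform(0.0, t_bar)
    s = random.uniform(0.0, s_bar)
    assert over_ut(u, t) >= u*t - 1e-12   # envelope lies above u*t
    assert under_us(u, s) <= u*s + 1e-12  # envelope lies below u*s
```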
Using (36)–(39), problem (PE1(M)) can be relaxed to the following convex program:
(40)(PR1(M))UB(M)=max∑i=1p[2ri−Kizi+(1/4)Ki²si],
s.t. ri⩽t¯iui+Liti−t¯iLi, ri⩽Uiti, i∈I,
zi⩾Lisi, zi⩾s¯iui+Uisi−s¯iUi, i∈I,
ti−√(fi(x))⩽0, −si+gi(x)⩽0, i∈I,
(t,s,u)∈M, x∈X.
Notice that the optimal value UB(M) of problem (PR1(M)) satisfies UB(M)⩾v(H). It is also easy to see that the feasible region of problem (PR1(M)) is a nonempty compact set. Since the objective function of problem (PR1(M)) is linear over this set, problem (PR1(M)) always has an optimal solution.
4. Algorithm and Convergence
To globally solve problem (P1), the algorithm to be presented uses a branch and bound approach. There are three fundamental processes in the algorithm: a branching process, a lower bounding process, and an upper bounding process.
4.1. Branching Rule
The algorithm performs a branching process in Rp that iteratively subdivides the p-dimensional rectangle H0 of problem (P1) into smaller rectangles that are also of dimension p. The branch and bound approach is based on partitioning the set H0 into subrectangles, each associated with a node of the branch and bound tree and with a relaxation subproblem over that subrectangle. These subrectangles are obtained by the branching process, which helps the branch and bound procedure identify a location in the feasible region of problem (P1) that contains a global optimal solution to the problem.
During each iteration of the algorithm, the branching process creates a more refined partition of the portion of H0 that cannot yet be excluded from consideration in the search for a global optimal solution of problem (P1). The initial partition consists simply of H0, since at the beginning of the branch and bound procedure no portion of H0 can yet be excluded from consideration.
During iteration k of the algorithm, k⩾1, the branching process is used to help create a new partition Qk+1. First, a screening procedure removes from Qk any rectangle that can, at this point of the search, be excluded from further consideration, and Qk+1 is temporarily set equal to the set of rectangles that remain. Later in iteration k, a rectangle Hk in Qk+1 is identified for further examination, and the branching process is invoked to subdivide Hk into two subrectangles. This subdivision is accomplished by a process called rectangular bisection.
Consider any node subproblem identified by the subrectangle H^, where H^ is defined as before. The branching rule is as follows [17].
Step 1.
Let j satisfy Ujk-1-Ljk-1=maxi∈I{Uik-1-Lik-1}.
Step 2.
Let vj satisfy min{Ujk-1-vj,vj-Ljk-1}=(1/2)(Ujk-1-Ljk-1), that is, vj=(1/2)(Ljk-1+Ujk-1).
Step 3.
Let
(41)H1k-1={u∈Rp∣Ljk-1⩽uj⩽vj,Lik-1⩽ui⩽Uik-1,i≠j},H2k-1={u∈Rp∣vj⩽uj⩽Ujk-1,Lik-1⩽ui⩽Uik-1,i≠j}.
The new partition Qk of the portion of H0 remaining under consideration is then given by Qk=Qk-1∖{Hk-1}⋃{H1k-1,H2k-1}.
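Steps 1–3 amount to bisecting the longest edge of the current rectangle at its midpoint; a minimal sketch (rectangles represented here, for illustration only, as lists of (Li, Ui) pairs):

```python
def bisect(rect):
    """Split a rectangle [(L_1,U_1),...,(L_p,U_p)] along its longest edge
    at the midpoint (Steps 1-3 of the branching rule)."""
    j = max(range(len(rect)), key=lambda i: rect[i][1] - rect[i][0])
    Lj, Uj = rect[j]
    v = 0.5*(Lj + Uj)
    left = rect[:j] + [(Lj, v)] + rect[j+1:]
    right = rect[:j] + [(v, Uj)] + rect[j+1:]
    return left, right

H0 = [(0.0, 2.0), (0.0, 1.0)]       # a p = 2 rectangle
A, B = bisect(H0)                   # splits the first (longest) edge
assert A == [(0.0, 1.0), (0.0, 1.0)] and B == [(1.0, 2.0), (0.0, 1.0)]
```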
4.2. Lower Bound and Upper Bound
The second fundamental process of the algorithm is the upper bounding process. For each rectangle H⊆Rp created by the branching process, this process gives an upper bound UB(H) for the optimal value v(H) of problem (P1(H)); that is,
(42)(P1(H))maxu∈Hφ(u).
For each rectangle H created by the branching process, from (40), UB(H) is found by solving a single convex program (PR1(H)).
During each iteration k⩾0, the upper bounding process computes an upper bound for the optimal value v1 of problem (P1). For each k⩾0, this upper bound UBk is given by
(43)UBk=max{UB(H)∣H∈Qk}.
The lower bounding process is the third fundamental process of the branch and bound algorithm. In each iteration of the algorithm, this process finds a lower bound for v1. For each k⩾0, this lower bound LBk is given by
(44)LBk=φ(u^k),
where u^k is the incumbent feasible solution for problem (P1); that is, among all optimal solutions (r,z,t,s,u,x) of problems of the form (PR1(H)) found through iteration k, u=u^k achieves the largest value of φ.
4.3. Branch and Bound Algorithm
Based on the results and algorithmic processes discussed in this section, the basic steps of the proposed global optimization algorithm are summarized in the following.
Step 0 (initialization). (i) Determine an optimal solution (r0,z0,t0,s0,u0,x0) and the optimal value UB(H0) to problem PR1(H0). Set UB0=UB(H0), LB0=φ(u0), and u^0=u0. (ii) Set Q0={H0} and k=1, and go to iteration k.
Iteration k.
Step k.1. If UBk-1=LBk-1, then terminate: u^k-1 is a global optimal solution for problem (P1), and v1=LBk-1. Problem (P) can then be solved via problem (Su) with u=u^k-1. If UBk-1≠LBk-1, continue.
Step k.2. Subdivide Hk-1 into two rectangles H1k-1 and H2k-1 via the rectangular bisection.
Step k.3. For each i=1,2, find an optimal solution (ri,k-1,zi,k-1,ti,k-1,si,k-1,ui,k-1,xi,k-1) and the optimal value UB(Hik-1) to problem PR1(Hik-1).
Step k.4. Set LBk=max{φ(u^k-1),φ(u1,k-1),φ(u2,k-1)}, and choose u^k so that LBk=φ(u^k).
Step k.5. Set Qk={Qk-1∖{Hk-1}}⋃{H1k-1,H2k-1}.
Step k.6. Delete from Qk all rectangles H such that UB(H)⩽LBk.
Step k.7. If Qk=∅, set UBk=LBk, set k=k+1, and go to iteration k. Otherwise, set UBk=max{UB(H)∣H∈Qk}. Choose a rectangle Hk∈Qk such that UB(Hk)=UBk, set k=k+1, and go to iteration k.
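The overall flow of Steps k.1–k.7 can be illustrated on a one-dimensional toy: maximize the concave quadratic 0.8u − u², using the tangent underestimator (30) of u² to produce the upper bounds and midpoint evaluations as incumbents. This is only a miniature of the scheme (the real algorithm solves the convex programs (PR1(H)) instead; the toy objective is an assumption for illustration):

```python
def f_toy(u):
    # stand-in for phi: concave, true maximum 0.16 at u = 0.4
    return 0.8*u - u*u

def upper_bound(L, U):
    # replace u^2 by its tangent underestimator q_(u) = K*(u - K/4), K = L + U
    # (eq. (30)); the bound 0.8*u - q_(u) is linear, so check the endpoints
    K = L + U
    bound = lambda u: 0.8*u - K*(u - K/4.0)
    return max(bound(L), bound(U))

def branch_and_bound(eps=1e-6):
    best = f_toy(0.5)                           # incumbent (Step 0)
    boxes = [(upper_bound(0.0, 1.0), 0.0, 1.0)]
    while boxes:
        boxes.sort()                            # box with the largest UB last
        ub, L, U = boxes.pop()
        if ub - best <= eps:                    # Step k.1: gap closed
            return best
        m = 0.5*(L + U)                         # Step k.2: rectangular bisection
        for a, b in ((L, m), (m, U)):
            best = max(best, f_toy(0.5*(a + b)))  # Step k.4: update incumbent
            child_ub = upper_bound(a, b)
            if child_ub > best + eps:           # Step k.6: prune dominated boxes
                boxes.append((child_ub, a, b))
    return best
```

With eps = 1e-6, the returned value agrees with the true maximum 0.16 to within the tolerance.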
4.4. Convergence
In this subsection, we give the global convergence of the above algorithm. By the construction of the algorithm, when the algorithm is finite, it either finds a global optimal solution for problem (P1(H)) or detects that problem (P1(H)) is infeasible. It is also possible for the algorithm to be infinite. We will discuss this case in the following.
Denote M0={(s,t,u)∈R3p∣0⩽si⩽s¯i,0⩽ti⩽t¯i,0⩽ui⩽Ui0,i∈I}. Suppose that M^ denotes M0 or a subrectangle of M0 that is generated by the branch and bound algorithm. Then, M^ may be written as
(45)M^=M^1×M^2×⋯×M^p,
where, for any i∈I,
(46)M^i={(si,ti,ui)∈R3∣0⩽si⩽s¯i,0⩽ti⩽t¯i,Li⩽ui⩽Ui},
where, for each i=1,2,…,p, Li and Ui are nonnegative scalars such that Li⩽Ui.
If the algorithm is infinite, then, since I is finite, by the rectangular bisection there exists an infinite sequence {Mk}k=1∞ of rectangles in R3p generated by the algorithm such that Mk+1⊆Mk and Mk+1 is formed from Mk. By Step 1 of the rectangular bisection process, for some fixed j0∈{1,2,…,p},
(47)Uj0k-Lj0k=maxi∈I{Uik-Lik},I={1,2,…,p},
where Mk={(s,t,u)∈R3p∣0⩽si⩽s¯i,0⩽ti⩽t¯i,Lik⩽ui⩽Uik} for all k. Next, let {Mk}k=1∞ be a sequence of rectangle of this type, and, for all k and any i∈I, let
(48)Mik={(si,ti,ui)∈R3∣0⩽si⩽s¯i,0⩽ti⩽t¯i,Lik⩽ui⩽Uik}.
Lemma 9.
For some subsequence K of {1,2,…}, the limit rectangle
(49)Mj0∞=⋂k∈KMj0k={(sj0,tj0,uj0)∣0⩽sj0⩽s¯j0,0⩽tj0⩽t¯j0,uj0=v¯}, for some scalar v¯,
is a rectangle in R3 parallel to the (sj0,tj0)-coordinate plane.
Proof.
By Lemma 5.4 in [17] and the rectangle bisection, there exists a subsequence K of {1,2,…} such that
(50)limk∈KLj0k=Lj0∈R,limk∈KUj0k=Uj0∈R,limk∈Kvj0k=vj0*∈{Lj0,Uj0},
where vj0k=(1/2)(Lj0k-1+Uj0k-1). So, either vj0*=Lj0 or vj0*=Uj0. Then,
(51)Mj0∞=⋂k∈KMj0k={(sj0,tj0,uj0)∣0⩽sj0⩽s¯j0,0⩽tj0⩽t¯j0,uj0=vj0*},
which is a rectangle in R3 parallel to the (sj0,tj0)-coordinate plane.
Theorem 10.
Suppose that the proposed algorithm is infinite, and let {Mk}k=1∞ be a sequence of rectangles in R3p generated by the algorithm such that, for each k=1,2,…, Mk+1⊂Mk. Let ψ(s,t,u)=∑i=1p[2uiti−(ui)²si]. Then, for some subsequence K of {1,2,…}:
(a) limk∈K{UB(Mk)−ψ(sk,tk,uk)}=0;
(b) any accumulation point (s¯,t¯,u¯,x¯) of the sequence {(s¯k,t¯k,u¯k,x¯k)}k=1∞ is a global optimal solution of problem (P1).
Proof.
Notice that, according to (38) and (39), it follows that, for each i∈I,
(52)ri=hiMi(ui,ti)=min{t¯iui+Liti-t¯iLi,Uiti},zi=hiMi(ui,si)=max{Lisi,s¯iui+Uisi-s¯iUi},
and problem PR1(Mk) may be rewritten as
(53)UB(Mk)=max Ψ(s,t,u),
s.t. ti−√(fi(x))⩽0, −si+gi(x)⩽0,
0⩽ti⩽t¯i, 0⩽si⩽s¯i, Lik⩽ui⩽Uik,
ri⩾0, zi⩾0, x∈X,
where Ψ(s,t,u)=∑i=1p[2hiMi(ui,ti)−KihiMi(ui,si)+(1/4)Ki²si].
By the algorithm, since {Mk}k=1∞ is infinite, we may choose a subsequence K of {1,2,…} such that, for each k∈K, UB(Mk)≠−∞. At the same time, without loss of generality, we may assume that {Mk}k∈K has the properties of Lemma 9. Since, for each k∈K, UB(Mk)≠−∞, by the upper bounding process,
(54)Ψ(sk,tk,uk)=UB(Mk).
By applying Lemma 9 repeatedly, we may assume that
(55)limk∈KMk=M1∞×M2∞×⋯×Mp∞=M∞,
where, for each i=1,2,…,p, Mi∞ is a rectangle in R3 parallel to the (si,ti)-coordinate plane. Let Z(Mk) denote the feasible domain of problem (PR1(Mk)). For each k∈K,
(56)UB(Mk)=max(s,t,u,x)∈Z(Mk)Ψ(s,t,u)⩾vM⩾ψ(s¯k,t¯k,u¯k)⩾ψ(sk,tk,uk),
where the equality follows from the definition of UB(Mk) in the upper bounding process (note that (sk,tk,uk,xk)∈Z(Mk)≠∅), the first inequality holds because the rectangle examined at iteration k attains the largest upper bound (Step k.7), so UB(Mk)⩾vM, the second inequality follows because (s¯k,t¯k,u¯k,x¯k)∈Z(Mk), and the third inequality holds by the choice of the incumbent solution in Step k.4 of the algorithm. For each k∈K, (sk,tk,uk,xk)∈Z(Mk)⊆Z(M); therefore, {(sk,tk,uk,xk)}k∈K has a convergent subsequence and, by (55), its limit point lies in M∞. Without loss of generality, assume that
(57)limk∈K(sk,tk,uk)=(s¯,t¯,u¯)∈M∞.
By the continuity of ψ(s,t,u) on M,
(58)limk∈Kψ(sk,tk,uk)=ψ(s¯,t¯,u¯).
For all k∈K, (sk,tk,uk)∈Mk; hence, by (57), Lemma 9, and Lemma 8 (the envelope gaps vanish as the ui-edges of Mk collapse),
(59)limk∈KΨ(sk,tk,uk)=ψ(s¯,t¯,u¯).
Combining (54), (58), and (59), we have
(60)limk∈KUB(Mk)=limk∈KΨ(sk,tk,uk)=ψ(s¯,t¯,u¯)=limk∈Kψ(sk,tk,uk).
Hence, limk∈K{UB(Mk)−ψ(sk,tk,uk)}=0, which confirms assertion (a).
We now prove (b). By the algorithm and (a), we obtain
(61)limk→∞LBk=limk→∞(∑i=1p[2u¯ikt¯ik−(u¯ik)²s¯ik])=vM.
Let (s¯,t¯,u¯,x¯) be an accumulation point of {(s¯k,t¯k,u¯k,x¯k)}k=1∞; then, for some K⊆{1,2,…},
(62)limk∈K(s¯k,t¯k,u¯k,x¯k)=(s¯,t¯,u¯,x¯).
By (62), since {∑i=1p[2u¯ikt¯ik-(u¯ik)2s¯ik]}k∈K is a subsequence of {∑i=1p[2u¯ikt¯ik-(u¯ik)2s¯ik]}k=1∞,
(63)limk∈K(∑i=1p[2u¯ikt¯ik-(u¯ik)2s¯ik])=vM.
From (62) and the continuity of the objective function of problem (P(M)),
(64)limk∈K(∑i=1p[2u¯ikt¯ik-(u¯ik)2s¯ik])=∑i=1p[2u¯it¯i-(u¯i)2s¯i].
From (63) and (64), we get
(65)∑i=1p[2u¯it¯i-(u¯i)2s¯i]=vM.
Since the feasible region Z(M) of problem (P(M)) is a closed set, (s¯,t¯,u¯,x¯)∈Z(M). It follows from (65) that (s¯,t¯,u¯,x¯) is a global optimal solution for problem (P(M)); the proof is complete.
By the algorithm, it may happen that, even after many iterations, Qk remains nonempty. However, by the convergence result, it follows that, for any ϵ>0,
(66)(UBk-∑i=1p[2u¯ikt¯ik-(u¯ik)2s¯ik])⩽ϵ
will hold for all sufficiently large k. In practice, it is recommended that the algorithm be terminated if, for some prechosen, relatively small value of ϵ>0, (66) holds. When termination occurs in this way, it is easy to show that x¯k is a global ϵ-optimal solution and f(x¯k) is a global ϵ-optimal value for problem (P), in the sense that x¯k∈X and f(x¯k)+ϵ⩾v.
5. Numerical Experiments
To verify the performance of the proposed global optimization algorithm, some test problems were solved. The test problems were coded in C++ and the experiments were conducted on a Pentium IV (3.06 GHz) microcomputer.
Example 11.
Prior to initiating the algorithm, we first determine a rectangle H0={(u1,u2)∣0.9354143⩽u1⩽1.561249, 0.0769230⩽u2⩽0.6666667}. Then, the problem (P1) is
(68)v1=maxφ(u1,u2)s.t.u∈H0,
where
(69)φ(u1,u2)=maxx∈X[2u1√(−x1²+3x1−x2²+3x2+3.5)−u1²(x1+1)+2u2√(x2)−u2²(x1²−2x1+x2²−8x2+20)].
Solving the linear relaxation (PR1(H0)) gives the initial upper bound UB0=5.793653 and the lower bound LB0=3.952468 with u1=1.181421, u2=0.2448056. Set ε=0.01. The algorithm finds a global ε-optimal value 4.060819 after 23 iterations at the global ε-optimal solution x*=(1,1.743823).
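As a consistency check (reading the two ratios off the expression in (69), i.e., f1=−x1²+3x1−x2²+3x2+3.5, g1=x1+1, f2=x2, g2=x1²−2x1+x2²−8x2+20), the objective value at the reported solution reproduces the reported ε-optimal value:

```python
x1, x2 = 1.0, 1.743823           # reported epsilon-optimal solution
r1 = (-x1**2 + 3*x1 - x2**2 + 3*x2 + 3.5) / (x1 + 1.0)
r2 = x2 / (x1**2 - 2*x1 + x2**2 - 8*x2 + 20.0)
# the sum of the two ratios matches the reported epsilon-optimal value
assert abs(r1 + r2 - 4.060819) < 1e-5
```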
Example 12 (see [11]).
Consider
(70)v=max f(x)=(−2x1−x2)/(x1+10)+(−2x2+10),
s.t. −x1²−x2²+3⩽0,
−x1²−x2²+8x2−14⩽0,
2x1+x2⩽6,
3x1+x2⩽8,
x1−x2⩽1,
x1⩾1, x2⩾1.
The computational results are reported in Table 1.

Table 1: Computational results of Example 12.

Method | ε | ε-optimal solution | ε-optimal value | Iterations | CPU time
[11] | 1.0E−5 | (1.0000, 1.4142) | 0.4856 | 90 | 6.9329 s
Ours | 1.0E−5 | (1.00000, 1.41422) | 0.48560 | 58 | 3.4545 s
Example 13.
In this example, we solve 6 different random instances:
(71)max (xTQ1x−c1)/(xTP1x)+(xTQ2x−c1)/(xTP2x),
s.t. ∑i=1nxi=1, Ax⩽b, 0⩽xi⩽1, i=1,2,…,n,
where Q1 and Q2 are negative semidefinite, P1 and P2 are positive definite, A is an m×n matrix, and all elements of Q1, Q2, P1, P2, A, and b are randomly generated in the range [0,1]. Table 2 summarizes our computational results. In Table 2, Ave. Iter is the average number of iterations, and Ave. CPU (s) is the average CPU time in seconds.
Table 2: Computational results of Example 13.

n | m | Ave. Iter | Ave. CPU (s)
5 | 3 | 69 | 7.8
5 | 7 | 76 | 8.9
5 | 10 | 84 | 9.6
5 | 20 | 99 | 23.5
10 | 3 | 261 | 59.6
15 | 3 | 329 | 78.3
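The paper does not state how the required definiteness of Q1, Q2, P1, and P2 is enforced for random data; one standard construction (an assumption here, not taken from the paper) is Q=−BBᵀ and P=BBᵀ+I with B drawn elementwise from [0,1], which guarantees negative semidefiniteness and positive definiteness, respectively:

```python
import random

def rand_mat(n, m):
    return [[random.random() for _ in range(m)] for _ in range(n)]

def mat_mul(A, B):
    return [[sum(a*b for a, b in zip(row, col)) for col in zip(*B)] for row in A]

def quad(x, M):  # x^T M x
    return sum(x[i]*M[i][j]*x[j] for i in range(len(x)) for j in range(len(x)))

random.seed(0)
n = 4
B = rand_mat(n, n)
S = mat_mul(B, [list(c) for c in zip(*B)])        # S = B B^T (positive semidefinite)
Q = [[-v for v in row] for row in S]              # -B B^T: negative semidefinite
P = [[S[i][j] + (1.0 if i == j else 0.0) for j in range(n)]
     for i in range(n)]                           # B B^T + I: positive definite

for _ in range(200):
    x = [random.uniform(-1.0, 1.0) for _ in range(n)]
    assert quad(x, Q) <= 1e-9                     # x^T Q x = -||B^T x||^2 <= 0
    assert quad(x, P) >= sum(v*v for v in x) - 1e-9  # x^T P x >= ||x||^2 > 0
```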
6. Conclusion
In this paper, we present a branch and bound algorithm for solving a class of fractional programming problems (P). To globally solve problem (P), we first convert it into an equivalent problem (P1); then, through a linearization method, we obtain a convex relaxation (PR1(H)) of problem (P1). In the algorithm, the branch and bound tree creates rectangular regions that belong to R3p, where p is the number of ratios in the objective function of problem (P1). However, the branching process takes place only in Rp, rather than R3p. In addition, all subproblems that must be solved to implement the algorithm are convex programming problems, each of which is guaranteed to have an optimal solution.
Conflict of Interests
The authors declare that there is no conflict of interests regarding the publication of this paper.
Acknowledgments
Project is supported by the Ph.D. Start-Up Fund of Natural Science Foundation of Guangdong Province, China (no. S2013040012506), Project Science Foundation of Guangdong University of Finance (no. 2012RCYJ005), and the Postdoctoral Fund of Shenyang Agricultural University (no. 770212025).
References
[1] T. Ibaraki, Parametric approaches to fractional programs.
[2] S. S. Chadha, Fractional programming with absolute-value functions.
[3] T. Kuno, A branch-and-bound algorithm for maximizing the sum of several linear ratios.
[4] P.-P. Shen and C.-F. Wang, Global optimization for sum of linear ratios problem with coefficients.
[5] H. P. Benson, On the global optimization of sums of linear fractional functions over a convex set.
[6] H. P. Benson, Using concave envelopes to globally solve the nonlinear sum of ratios problem.
[7] H. P. Benson, Global optimization algorithm for the nonlinear sum of ratios problem.
[8] C.-T. Chang, On the posynomial fractional programming problems.
[9] Y.-J. Wang and K.-C. Zhang, Global optimization of nonlinear sum of ratios problem.
[10] R. W. Freund and F. Jarre, Solving the sum-of-ratios problem by an interior-point method.
[11] P.-P. Shen, Y.-P. Duan, and Y.-G. Pei, A simplicial branch and duality bound algorithm for the sum of convex-convex ratios problem.
[12] H. P. Benson, Global maximization of a generalized concave multiplicative function.
[13] M. Avriel, W. E. Diewert, S. Schaible, and I. Zang, Generalized Concavity.
[14] R. Horst and N. V. Thoai, Decomposition approach for the global minimization of biconcave functions over polytopes.
[15] R. Horst and H. Tuy, Global Optimization: Deterministic Approaches.
[16] H. P. Benson, On the construction of convex and concave envelope formulas for bilinear and fractional functions on quadrilaterals.
[17] H. Tuy.