This paper is concerned with an efficient global optimization algorithm for solving a class of fractional programming problems (P), whose objective and constraint functions are all defined as sums of ratios of generalized polynomial functions. The proposed algorithm combines branch-and-bound search with two reduction operations, based on an equivalent monotonic optimization reformulation of (P). The proposed reduction operations offer the possibility of cutting away a large part of the currently investigated region in which no global optimal solution of (P) exists, and can thus be seen as an accelerating device for the solution algorithm of (P). Furthermore, numerical results show that these operations improve the computational efficiency, in both the number of iterations and the overall execution time of the algorithm, compared with other methods. Additionally, the convergence of the algorithm is established, and the computational issues that arise in implementing the algorithm are discussed. Preliminary indications are that the algorithm can be expected to provide a practical approach for solving problem (P), provided that the number of variables is not too large.
1. Introduction
Consider the following generalized polynomial fractional programs:
$$
(\mathrm{P})\colon\quad
\begin{cases}
\min\ g_0(y)=\sum_{j=1}^{p} c_j\,\dfrac{n_j(y)}{d_j(y)}\\[4pt]
\text{s.t.}\ g_m(y)\le 0,\quad m=1,2,\dots,M_0,\\
\phantom{\text{s.t.}}\ y\in Y=\{\,y \mid 0<y_i^{l}\le y_i\le y_i^{u}<\infty,\ i=1,\dots,n_0\,\},
\end{cases}
\tag{1}
$$
where
$$
\begin{aligned}
n_j(y)&=\sum_{t=1}^{\bar T_j}\bar\alpha_{jt}\prod_{i=1}^{n_0} y_i^{\bar\gamma_{jti}},\qquad
d_j(y)=\sum_{t=1}^{\hat T_j}\hat\alpha_{jt}\prod_{i=1}^{n_0} y_i^{\hat\gamma_{jti}},\qquad j=1,2,\dots,p,\\
g_m(y)&=\sum_{t=1}^{\tilde T_m}\tilde\alpha_{mt}\prod_{i=1}^{n_0} y_i^{\tilde\gamma_{mti}},\qquad m=1,\dots,M_0,
\end{aligned}
\tag{2}
$$
and cj, α¯jt, α^jt, α~mt, γ¯jti, γ^jti, and γ~mti are all arbitrary real numbers.
Problem (P) is worth studying because it frequently appears in many applications, including financial optimization, portfolio optimization, engineering design, manufacturing, and chemical equilibrium (see, e.g., [1–8]). On the other hand, many other nonlinear problems, such as quadratic programs, linear (or quadratic, polynomial) fractional programs [9–13], linear multiplicative programs [14–16], polynomial programs, and generalized geometric programs [17–20], can all be put into this form.
Problem (P) is obviously multiextremal, since its special cases, such as quadratic programs, linear fractional programs, and linear multiplicative programs, are known to be NP-hard problems [21]; it therefore falls into the domain of global optimization problems.
In recent decades, many solution algorithms have been developed to globally solve special cases of problem (P) (see, e.g., [9–14, 17–19, 22, 23]), but global optimization algorithms for the general form of (P) are scarce. Recently, using linear relaxation methods, Wang and Zhang [24], Shen and Yuan [25], and Jiao et al. [26] gave corresponding global optimization algorithms for finding the global minimum of (P). Also, Fang et al. [27] presented a canonical dual approach for minimizing the sum of a quadratic function and the ratio of two quadratic functions.
In this paper, we propose an efficient algorithm for globally solving problem (P). The goal of this research is fourfold. First, by introducing variables and using a suitable transformation, the original problem (P) is equivalently reformulated as a monotonic optimization problem (Q), based on the characteristics of problem (P): the objective function of (Q) is increasing, and all of its constraint functions can be expressed as differences of two increasing functions. Second, in order to obtain an efficient algorithm for solving problem (Q), two reduction operations are incorporated into the branch-and-bound framework to suppress the rapid growth of the branching tree and thereby enhance the solution procedure. In particular, the proposed reduction cut operation does not appear in other branch-and-bound methods (see [24, 25]) and is more easily implementable than the one in [28], because the latter (see (2.4) and (2.5) in [28]) requires solving nonlinear nonconvex programs, whereas the former only requires finding the roots of several strictly monotone equations in a single variable. Third, by directly applying the proposed algorithm, one can also obtain the essential upper and lower bounds on the denominator of each ratio in the objective function of problem (P); these bounds are tighter than the ones given by the Bernstein algorithm (see [24, 25]), so Assumption 1 in [24, 25] is not needed in this paper. Finally, numerical results show that the proposed algorithm is feasible and effective.
The paper is organized as follows. In Section 2, an equivalent reformulation of the original problem is given. Next, Section 3 presents and discusses the basic operations used for globally solving problem (P). The algorithm is stated and its convergence is shown in Section 4. In Section 5, the computational results are presented.
2. Equivalent Monotonic Reformulation
For the convenience of the following discussion, assume that there exist positive scalars Lj and Uj such that 0<Lj≤dj(y)≤Uj and nj(y)>0 for all y∈Y, for each j=1,2,…,p. In fact, Lj and Uj can be obtained by the algorithm proposed in this paper (see Section 5); therefore, define the set
$$
S=\{\,s\in\mathbb{R}^{p} \mid L_j\le s_j\le U_j,\ j=1,\dots,p\,\}.
\tag{3}
$$
Without loss of generality, assume that cj>0 for j=1,2,…,k and cj<0 for j=k+1,k+2,…,p. By introducing variables sj, j=1,…,p, problem (P) is then equivalent to the following problem:
$$
(\bar{\mathrm{P}})\colon\quad
\begin{cases}
\min\ f(y,s)=\sum_{j=1}^{p} c_j s_j^{-1} n_j(y)\\[2pt]
\text{s.t.}\ s_j-d_j(y)\le 0,\quad j=1,\dots,k,\\
\phantom{\text{s.t.}}\ d_j(y)-s_j\le 0,\quad j=k+1,\dots,p,\\
\phantom{\text{s.t.}}\ g_m(y)\le 0,\quad m=1,\dots,M_0,\\
\phantom{\text{s.t.}}\ y\in Y,\quad s\in S.
\end{cases}
\tag{4}
$$
Theorem 1.
If (y*,s*) is a global optimal solution for problem (P-), then sj*=dj(y*), j=1,2,…,p, and y* is a global optimal solution for problem (P). Conversely, if y* is a global optimal solution for problem (P), then (y*,s*) is a global optimal solution for problem (P-), where sj*=dj(y*), j=1,2,…,p.
Proof.
See Theorem 1 in [24]; the proof is omitted here.
In what follows, we show that problem (P-) can be transformed into a monotonic optimization problem such that the objective function is increasing and all the constrained functions are the difference of two increasing functions. To see how such a reformulation is possible, we first consider each constraint of (P-). Let
$$
\begin{aligned}
\hat\gamma_{ji}&=\min\{\hat\gamma_{jti},\,0 \mid t=1,\dots,\hat T_j\},&\quad &j=1,\dots,p,\ i=1,\dots,n_0,\\
\tilde\gamma_{mi}&=\min\{\tilde\gamma_{mti} \mid t=1,\dots,\tilde T_m\},&\quad &m=1,\dots,M_0,\ i=1,\dots,n_0.
\end{aligned}
\tag{5}
$$
For any y∈Y, s∈S, it follows from each constraint of (P-) that
$$
\begin{aligned}
(s_j-d_j(y))\prod_{i=1}^{n_0} y_i^{-\hat\gamma_{ji}}
&=s_j\prod_{i=1}^{n_0} y_i^{-\hat\gamma_{ji}}
-\sum_{t=1}^{\hat T_j}\hat\alpha_{jt}\prod_{i=1}^{n_0} y_i^{\hat\gamma_{jti}-\hat\gamma_{ji}},
&\quad &j=1,\dots,k,\\
(d_j(y)-s_j)\prod_{i=1}^{n_0} y_i^{-\hat\gamma_{ji}}
&=\sum_{t=1}^{\hat T_j}\hat\alpha_{jt}\prod_{i=1}^{n_0} y_i^{\hat\gamma_{jti}-\hat\gamma_{ji}}
-s_j\prod_{i=1}^{n_0} y_i^{-\hat\gamma_{ji}},
&\quad &j=k+1,\dots,p,\\
g_m(y)\prod_{i=1}^{n_0} y_i^{-\tilde\gamma_{mi}}
&=\sum_{t=1}^{\tilde T_m}\tilde\alpha_{mt}\prod_{i=1}^{n_0} y_i^{\tilde\gamma_{mti}-\tilde\gamma_{mi}},
&\quad &m=1,\dots,M_0.
\end{aligned}
\tag{6}
$$
By using the above notation, one can thus convert (P-) into the form
$$
(\mathrm{P1})\colon\quad
\begin{cases}
\min\ f(y,s)=\sum_{j=1}^{p} c_j s_j^{-1} n_j(y)\\[2pt]
\text{s.t.}\ s_j\prod_{i=1}^{n_0} y_i^{-\hat\gamma_{ji}}-\sum_{t=1}^{\hat T_j}\hat\alpha_{jt}\prod_{i=1}^{n_0} y_i^{r_{jti}}\le 0,\quad j=1,\dots,k,\\
\phantom{\text{s.t.}}\ \sum_{t=1}^{\hat T_j}\hat\alpha_{jt}\prod_{i=1}^{n_0} y_i^{r_{jti}}-s_j\prod_{i=1}^{n_0} y_i^{-\hat\gamma_{ji}}\le 0,\quad j=k+1,\dots,p,\\
\phantom{\text{s.t.}}\ \sum_{t=1}^{\tilde T_m}\tilde\alpha_{mt}\prod_{i=1}^{n_0} y_i^{r_{mti}}\le 0,\quad m=1,\dots,M_0,\\
\phantom{\text{s.t.}}\ y\in Y,\quad s\in S,
\end{cases}
\tag{7}
$$
where rjti=γ^jti-γ^ji≥0 for j=1,…,p and rmti=γ~mti-γ~mi≥0 for m=1,…,M0. Note that all the exponents in the constraints of problem (P1) are nonnegative. Thus, by applying the following exponent transformation
$$
y_i=\exp(\eta_i),\quad i=1,\dots,n_0,\qquad
s_j=\exp(\xi_j),\quad j=1,\dots,p,
\tag{8}
$$
to the formulation (P1), letting N=n0+p and z=(η,ξ)∈RN, and by changing the notation, an equivalent problem of problem (P1) can be then given by
$$
(\mathrm{P2})\colon\quad
\begin{cases}
\min\ \Phi_0(z)\\
\text{s.t.}\ \Phi_m(z)\le 0,\quad m=1,\dots,p+M_0,\\
\phantom{\text{s.t.}}\ z\in Z^0=\{\,z \mid z_i^{l}\le z_i\le z_i^{u},\ i=1,\dots,N\,\},
\end{cases}
\tag{9}
$$
where
$$
\begin{aligned}
\Phi_m(z)&=\sum_{t=1}^{T_m}\alpha_{mt}\exp\Bigl(\sum_{i=1}^{N}\gamma_{mti} z_i\Bigr),\quad m=0,\dots,p+M_0,\\
z_i^{l}&=\ln(y_i^{l})\le\eta_i\le\ln(y_i^{u})=z_i^{u},\quad i=1,\dots,n_0,\\
z_i^{l}&=\ln L_j\le\xi_j\le\ln U_j=z_i^{u},\quad j=i-n_0,\ j=1,\dots,p,\ i=n_0+1,\dots,N.
\end{aligned}
\tag{10}
$$
Next, we turn to consider the objective function of (P2). For convenience, for each m=0,1,…,p+M0, we assume, without loss of generality, that αmt>0 for t=1,…,Jm and αmt<0 for t=Jm+1,…,Tm. In addition, some notations are introduced as follows:
$$
\begin{aligned}
I_t^{+}&=\{\,i \mid \gamma_{0ti}>0,\ i=1,\dots,N\,\},\qquad
I_t^{-}=\{\,i \mid \gamma_{0ti}<0,\ i=1,\dots,N\,\},\\
L_t&=\sum_{i\in I_t^{-}}\gamma_{0ti} z_i^{u},\qquad
U_t=\sum_{i\in I_t^{-}}\gamma_{0ti} z_i^{l},\qquad t=1,\dots,J_0,\\
l_t&=\sum_{i\in I_t^{+}}\gamma_{0ti} z_i^{l},\qquad
u_t=\sum_{i\in I_t^{+}}\gamma_{0ti} z_i^{u},\qquad t=J_0+1,\dots,T_0.
\end{aligned}
\tag{11}
$$
Then, by introducing an additional vector ω=(ω1,…,ωT0)T∈RT0, we can convert the problem (P2) into
$$
(\mathrm{P3})\colon\quad
\begin{cases}
\min\ \sum_{t=1}^{J_0}\alpha_{0t}\exp\Bigl(\sum_{i\in I_t^{+}}\gamma_{0ti} z_i+\omega_t\Bigr)
+\sum_{t=J_0+1}^{T_0}\alpha_{0t}\exp\Bigl(\sum_{i\in I_t^{-}}\gamma_{0ti} z_i-\omega_t\Bigr)\\[4pt]
\text{s.t.}\ \sum_{t=1}^{J_m}\alpha_{mt}\exp\Bigl(\sum_{i=1}^{N}\gamma_{mti} z_i\Bigr)
+\sum_{t=J_m+1}^{T_m}\alpha_{mt}\exp\Bigl(\sum_{i=1}^{N}\gamma_{mti} z_i\Bigr)\le 0,\quad m=1,\dots,p+M_0,\\
\phantom{\text{s.t.}}\ \omega_t-\sum_{i\in I_t^{-}}\gamma_{0ti} z_i\ge 0,\quad t=1,\dots,J_0,\\
\phantom{\text{s.t.}}\ \omega_t+\sum_{i\in I_t^{+}}\gamma_{0ti} z_i\ge 0,\quad t=J_0+1,\dots,T_0,\\
\phantom{\text{s.t.}}\ L_t\le\omega_t\le U_t,\quad t=1,\dots,J_0,\qquad
-u_t\le\omega_t\le -l_t,\quad t=J_0+1,\dots,T_0,\\
\phantom{\text{s.t.}}\ z\in Z^0,
\end{cases}
\tag{12}
$$
where Z0 is defined in (P2).
Note that the objective function of (P3) is increasing and each constrained function is the difference of two increasing functions. The key equivalent result for problems (P2) and (P3) is given by the following Theorem 2.
Theorem 2.
z* is a global optimal solution for problem (P2) if and only if (z*,ω*) is a global optimal solution for problem (P3), where
$$
\omega_t^{*}=\sum_{i\in I_t^{-}}\gamma_{0ti} z_i^{*},\quad t=1,\dots,J_0,\qquad
\omega_t^{*}=-\sum_{i\in I_t^{+}}\gamma_{0ti} z_i^{*},\quad t=J_0+1,\dots,T_0.
\tag{13}
$$
Proof.
The proof of this theorem follows easily from the definitions of problems (P2) and (P3); therefore, it is omitted here.
From Theorem 2, for solving problem (P2), we may solve problem (P3) instead. In addition, it is easy to see that the global optimal values of problems (P2) and (P3) are equal. Let x=(z,ω)∈RN+T0 with z∈RN and ω∈RT0 and let
$$
n=N+T_0,\qquad M=p+M_0+T_0;
\tag{14}
$$
then, without loss of generality, by changing the notation, problem (P3) can be rewritten in the following form:
$$
(\mathrm{Q})\colon\quad
\min\{\,F_0(x) \mid F_m(x)=F_m^{+}(x)-F_m^{-}(x)\le 0,\ m=1,\dots,M,\ x\in X^0\,\},
\tag{15}
$$
where
$$
\begin{aligned}
F_0(x)&=\sum_{t=1}^{J_0}\alpha_{0t}\exp\Bigl(\sum_{i\in I_t^{+}}\gamma_{0ti} x_i+x_{N+t}\Bigr)
+\sum_{t=J_0+1}^{T_0}\alpha_{0t}\exp\Bigl(\sum_{i\in I_t^{-}}\gamma_{0ti} x_i-x_{N+t}\Bigr),\\
F_m^{+}(x)&=
\begin{cases}
\sum_{t=1}^{J_m}\alpha_{mt}\exp\bigl(\sum_{i=1}^{N}\gamma_{mti} x_i\bigr),& m=1,\dots,p+M_0,\\
x_m,& m=p+M_0+1,\dots,M,
\end{cases}\\
F_m^{-}(x)&=
\begin{cases}
-\sum_{t=J_m+1}^{T_m}\alpha_{mt}\exp\bigl(\sum_{i=1}^{N}\gamma_{mti} x_i\bigr),& m=1,\dots,p+M_0,\\
x_{m+N-p-M_0}-\sum_{i\in I_{m-p-M_0}^{-}}\gamma_{0(m-p-M_0)i}\,x_i,& m=p+M_0+1,\dots,p+M_0+J_0,\\
x_{m+N-p-M_0}+\sum_{i\in I_{m-p-M_0}^{+}}\gamma_{0(m-p-M_0)i}\,x_i,& m=p+M_0+J_0+1,\dots,M,
\end{cases}\\
X^0&=\{\,x\in\mathbb{R}^{n} \mid x_i^{l}\le x_i\le x_i^{u},\ i=1,\dots,n\,\}\\
&=\{\,x\in\mathbb{R}^{n} \mid z_i^{l}\le x_i\le z_i^{u},\ i=1,\dots,N;\ 
L_{i-N}\le x_i\le U_{i-N},\ i=N+1,\dots,N+J_0;\\
&\qquad\quad -u_{i-N}\le x_i\le -l_{i-N},\ i=N+J_0+1,\dots,n\,\}.
\end{aligned}
\tag{16}
$$
Based on the above discussion, to globally solve problem (P), it suffices to solve problem (Q); in what follows, a branch-reduce-bound (BRB) algorithm is developed for problem (Q).
3. Basic Operations
In order to globally solve problem (Q), the main idea of the BRB approach to be proposed consists of several basic operations: successively refined partitioning of the feasible set; estimation of a lower bound for the optimal value of the objective function over each subset generated by the partitions; and reduction operations that reduce the size of each partition subset without losing any feasible solution currently still of interest. Next, we establish the approach via the basic operations needed in a branch-and-bound scheme.
Let X=[a,b]={x∣ai≤xi≤bi,i=1,…,n} denote the rectangle or subrectangle of X0 generated by the algorithm. Consider the following subproblem:
$$
\mathrm{Q}(X)\colon\quad
\min\{\,F_0(x) \mid F_m(x)=F_m^{+}(x)-F_m^{-}(x)\le 0,\ m=1,\dots,M,\ x\in X\,\}.
\tag{17}
$$
3.1. Partition Rule
The critical element in guaranteeing convergence to a minimum of (Q) is the choice of a suitable partition strategy. In this paper, we choose the standard branching rule. This rule is sufficient to ensure convergence, since it drives all the intervals to singletons for all the variables associated with the term that yields the greatest discrepancy in the employed approximation, along any infinite branch of the branch-and-bound tree.
Consider any node subproblem identified by rectangle X. The procedure for dividing X into two subrectangles X+ and X- can be described as follows.
Let
$$
\tau=\arg\max\{\,b_i-a_i \mid i=1,\dots,n\,\},\qquad
\pi_\tau=\frac{a_\tau+b_\tau}{2}.
\tag{18}
$$
Let
$$
\begin{aligned}
X^{+}&=\{\,x \mid a_i\le x_i\le b_i,\ i=1,\dots,n,\ i\ne\tau;\ \pi_\tau\le x_\tau\le b_\tau\,\},\\
X^{-}&=\{\,x \mid a_i\le x_i\le b_i,\ i=1,\dots,n,\ i\ne\tau;\ a_\tau\le x_\tau\le\pi_\tau\,\}.
\end{aligned}
\tag{19}
$$
Through this branching rule, the rectangle X is partitioned into two subrectangles X+ and X-.
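The branching rule in (18)-(19) is straightforward to implement. The following is a minimal sketch in Python (the function and variable names are ours, not from the paper), assuming a box is stored as a pair of NumPy arrays (a, b):

```python
import numpy as np

def branch(a, b):
    """Bisect the box [a, b] along its longest edge, as in (18)-(19)."""
    a = np.asarray(a, dtype=float)
    b = np.asarray(b, dtype=float)
    tau = int(np.argmax(b - a))        # tau = argmax{b_i - a_i}
    pi_tau = 0.5 * (a[tau] + b[tau])   # midpoint of the longest edge
    b_minus = b.copy()
    b_minus[tau] = pi_tau              # X^-: a_tau <= x_tau <= pi_tau
    a_plus = a.copy()
    a_plus[tau] = pi_tau               # X^+: pi_tau <= x_tau <= b_tau
    return (a, b_minus), (a_plus, b)
```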
3.2. Lower Bound
For each rectangle X, we intend to compute a lower bound LB(X) of the optimal value of (Q) over X; that is, compute a number LB(X) such that
$$
\mathrm{LB}(X)\le\min\{\,F_0(x) \mid F_m(x)\le 0,\ m=1,\dots,M,\ x\in X\,\}.
\tag{20}
$$
To ensure convergence, this lower bound must be consistent in the sense that, for any infinite nested sequence of boxes Xk shrinking to a single point x*,
$$
\lim_{k\to+\infty}\mathrm{LB}(X^{k})=F_0(x^{*}).
\tag{21}
$$
Clearly, a lower bound is F0(a), and any bound such that
$$
\mathrm{LB}(X)\ge F_0(a)
\tag{22}
$$
will satisfy (21) since F0(x) is increasing.
Although the bound F0(a) (for a box X=[a,b]) is sufficient to guarantee convergence, tighter bounds are often necessary for the lower bounding procedure to achieve reasonable efficiency. For instance, the following procedure may give a better bound.
Consider the subproblem Q(X) and denote the optimal value of problem Q(X) by V[Q(X)]. Our main method for computing a valid lower bound LB(X) of V[Q(X)] over X⊆X0 is to solve the relaxation linear programming RLP(X) of Q(X) by using a linearization technique. This technique can be realized by underestimating every function F0(x) and Fm+(x) and by overestimating every function Fm-(x), for each m=1,…,p+M0. All the details of this linearization technique for generating the linear relaxation will be given in what follows. For this purpose, let us denote
$$
X_{0t}=
\begin{cases}
\sum_{i\in I_t^{+}}\gamma_{0ti} x_i+x_{N+t},& t=1,\dots,J_0,\\
\sum_{i\in I_t^{-}}\gamma_{0ti} x_i-x_{N+t},& t=J_0+1,\dots,T_0,
\end{cases}
\qquad
X_{mt}=\sum_{i=1}^{N}\gamma_{mti} x_i,\quad t=1,\dots,T_m;
\tag{23}
$$
then we have X0t∈[X0tl,X0tu] and Xmt∈[Xmtl,Xmtu] for any box X=[a,b], where
$$
X_{0t}^{l}=
\begin{cases}
\sum_{i\in I_t^{+}}\gamma_{0ti} a_i+a_{N+t},& t=1,\dots,J_0,\\
\sum_{i\in I_t^{-}}\gamma_{0ti} b_i-b_{N+t},& t=J_0+1,\dots,T_0,
\end{cases}
\qquad
X_{0t}^{u}=
\begin{cases}
\sum_{i\in I_t^{+}}\gamma_{0ti} b_i+b_{N+t},& t=1,\dots,J_0,\\
\sum_{i\in I_t^{-}}\gamma_{0ti} a_i-a_{N+t},& t=J_0+1,\dots,T_0,
\end{cases}
\tag{24}
$$
and for each m=1,…,p+M0,
$$
X_{mt}^{l}=\sum_{i=1}^{N}\gamma_{mti} a_i,\qquad
X_{mt}^{u}=\sum_{i=1}^{N}\gamma_{mti} b_i,\qquad t=1,\dots,T_m.
\tag{25}
$$
Additionally, let
$$
\begin{aligned}
A_{mt}&=\frac{\exp(X_{mt}^{u})-\exp(X_{mt}^{l})}{X_{mt}^{u}-X_{mt}^{l}},\\
\Delta_{mt}^{1}(x)&=A_{mt}(X_{mt}-X_{mt}^{l})+\exp(X_{mt}^{l})-\exp(X_{mt}),\\
\Delta_{mt}^{2}(x)&=\exp(X_{mt})-A_{mt}(X_{mt}-\ln A_{mt}+1),
\end{aligned}
\tag{26}
$$
where m=0,1,…,p+M0, t=1,…,Tm.
Thus, from Theorem 1 in [20], it follows that
$$
A_{mt}(X_{mt}-\ln A_{mt}+1)\le\exp(X_{mt})\le A_{mt}(X_{mt}-X_{mt}^{l})+\exp(X_{mt}^{l})
\tag{27}
$$
and that Δmt1(x) and Δmt2(x) satisfy
$$
\max_{x\in X}\Delta_{mt}^{1}(x)=\max_{x\in X}\Delta_{mt}^{2}(x)
=\exp(X_{mt}^{l})\,(1-W_{mt}+W_{mt}\ln W_{mt})\longrightarrow 0
\quad\text{as }\omega_{mt}\longrightarrow 0,
\tag{28}
$$
where
$$
\omega_{mt}=X_{mt}^{u}-X_{mt}^{l},\qquad
W_{mt}=\frac{\exp(\omega_{mt})-1}{\omega_{mt}}.
\tag{29}
$$
Next, we will give the relaxation linear functions of F0(x), Fm+(x), and Fm-(x) over X. Based on the above discussion, it is obvious that we have, for all x∈X,
$$
\begin{aligned}
F_0(x)&\ge\sum_{t=1}^{J_0}\alpha_{0t}\bigl(A_{0t}(X_{0t}-\ln A_{0t}+1)\bigr)
+\sum_{t=J_0+1}^{T_0}\alpha_{0t}\bigl(A_{0t}(X_{0t}-X_{0t}^{l})+\exp(X_{0t}^{l})\bigr)\triangleq \mathrm{LF}_0(x),\\
F_m^{+}(x)&\ge\sum_{t=1}^{J_m}\alpha_{mt}\bigl(A_{mt}(X_{mt}-\ln A_{mt}+1)\bigr)\triangleq \mathrm{LF}_m^{+}(x),\\
F_m^{-}(x)&\le-\sum_{t=J_m+1}^{T_m}\alpha_{mt}\bigl(A_{mt}(X_{mt}-X_{mt}^{l})+\exp(X_{mt}^{l})\bigr)\triangleq \mathrm{UF}_m^{-}(x),
\end{aligned}
\tag{30}
$$
where m=1,…,p+M0.
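The one-dimensional bounds in (26)-(27) are the only nonlinear ingredient of the relaxation: on an interval [Xl, Xu], exp is underestimated by the tangent of slope Amt and overestimated by the chord through the endpoints. A small hedged sketch (the names are ours) that can be checked numerically:

```python
import numpy as np

def exp_estimators(Xl, Xu):
    """Linear under- and overestimators of exp on [Xl, Xu], as in (26)-(27):
    A*(X - ln A + 1) <= exp(X) <= A*(X - Xl) + exp(Xl),
    where A = (exp(Xu) - exp(Xl)) / (Xu - Xl) is the chord slope."""
    A = (np.exp(Xu) - np.exp(Xl)) / (Xu - Xl)
    def under(X):
        return A * (X - np.log(A) + 1.0)   # tangent at the point X = ln A
    def over(X):
        return A * (X - Xl) + np.exp(Xl)   # chord through the endpoints
    return under, over

# sanity check on [0, 1]
under, over = exp_estimators(0.0, 1.0)
for X in np.linspace(0.0, 1.0, 101):
    assert under(X) <= np.exp(X) <= over(X)
```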
Consequently, we obtain the following linear programming RLP(X) as a linear relaxation of Q(X) over the partition set X:
$$
\mathrm{RLP}(X)\colon\quad
\begin{cases}
\min\ \mathrm{LF}_0(x)\\
\text{s.t.}\ \mathrm{LF}_m(x)\le 0,\quad m=1,\dots,M,\\
\phantom{\text{s.t.}}\ x\in X,
\end{cases}
\tag{31}
$$
where
$$
\mathrm{LF}_m(x)=
\begin{cases}
\mathrm{LF}_m^{+}(x)-\mathrm{UF}_m^{-}(x),& m=1,\dots,p+M_0,\\
F_m^{+}(x)-F_m^{-}(x),& m=p+M_0+1,\dots,M.
\end{cases}
\tag{32}
$$
An important property of RLP(X) is that its optimal value V[RLP(X)] satisfies
$$
V[\mathrm{RLP}(X)]\le V[\mathrm{Q}(X)].
\tag{33}
$$
Thus, from (33), the optimal value V[RLP(X)] of RLP(X) provides a valid lower bound for the optimal value V[Q(X)] of Q(X) over X⊆X0.
Based on the above result, for any rectangle X⊆X0, in order to obtain a lower bound LB(X) of the optimal value V[Q(X)] to subproblem Q(X), we may compute LB(X) such that
$$
\mathrm{LB}(X)=\max\{\,V[\mathrm{RLP}(X)],\,F_0(a)\,\},
\tag{34}
$$
where V[RLP(X)] is the optimal value of the problem RLP(X).
Clearly, LB(X) defined in (34) satisfies
$$
F_0(a)\le \mathrm{LB}(X)\le V[\mathrm{Q}(X)]
\tag{35}
$$
and is consistent; it provides a valid lower bound and guarantees convergence.
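Once the coefficients of LF0 and LFm have been assembled, any LP solver can be used for RLP(X); the paper itself adopts the simplex algorithm (see Section 5). A hedged sketch with SciPy's HiGHS backend (the interface and names are our assumptions):

```python
import numpy as np
from scipy.optimize import linprog

def lower_bound(c, mu0, A, mu, a, b, F0):
    """LB(X) = max{V[RLP(X)], F0(a)}, as in (34).
    RLP(X): min c.x + mu0  s.t.  A x + mu <= 0,  a <= x <= b;
    F0 is the (increasing) objective function of problem (Q)."""
    res = linprog(c, A_ub=A, b_ub=-np.asarray(mu),
                  bounds=list(zip(a, b)), method="highs")
    # an infeasible relaxation means the box holds no feasible point: prune it
    v_rlp = res.fun + mu0 if res.success else np.inf
    x_opt = res.x if res.success else None
    return max(v_rlp, F0(np.asarray(a))), x_opt
```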
3.3. Reduction Operations
Clearly, the smaller the rectangle X is, the tighter the lower bound LB(X) of Q(X) will be, and, therefore, the closer the feasible solution of (Q) will be to the corresponding optimal solution. To this end, the next results give two reduction operations, the reduction cut and the deleting technique, which reduce the size of the partitioned rectangle without losing any feasible solution currently still of interest.
(1) Reduction Cut. At a given stage of the branch-and-bound algorithm for (Q), for a rectangle [a,b] generated during the partitioning procedure and still of interest, let UB be the objective function value of the best feasible solution to problem (Q) found so far. Given an ɛ>0, we want to find a feasible solution x∈[a,b] of (Q) such that F0(x)≤UB-ɛ or else establish that no such x exists. The search for such x can then be restricted to the set F⋂[a,b], where
$$
F:=\{\,x \mid F_0(x)\le \mathrm{UB}-\varepsilon,\ F_m(x)\le 0,\ m=1,\dots,M\,\}.
\tag{36}
$$
The reduction cut is based on the monotonic structure of the problem (Q). The reduction cut aims at replacing the rectangle [a,b] with a smaller rectangle [a′,b′]⊂[a,b] without losing any point x∈F⋂[a,b], that is, such that F⋂[a′,b′]=F⋂[a,b]. The rectangle [a′,b′] satisfying this condition is denoted by redν[a,b] with ν=UB-ɛ. To illustrate how redν[a,b]=[a′,b′] is deduced by reduction cut, we first define the following functions.
Definition 3.
Given two boxes [a,b] and [a′,b′] with [a′,b′]⊆[a,b], for i=1,…,n, the functions φi(α) and ψi(β):[0,1]→Rn are defined by
$$
\varphi_i(\alpha)=b-\alpha(b_i-a_i)e^{i},\qquad
\psi_i(\beta)=a'+\beta(b_i-a_i')e^{i},
\tag{37}
$$
where ei is a unit vector with 1 at the ith position and 0 everywhere else, i=1,…,n.
From the functions Fm-(x), Fm+(x), and F0(x), we have the following result.
Theorem 4.
Let ɛ>0 be given and let ν=UB-ɛ. If F0(a)>ν or Fm+(a)-Fm-(b)>0 for some m∈{1,…,M}, then redν[a,b]=[a′,b′]=∅. Otherwise, redν[a,b]=[a′,b′] is given by
$$
a'=b-\sum_{i=1}^{n}\min_{m=1,\dots,M}\{\alpha_{mi}\}\,(b_i-a_i)e^{i},\qquad
b'=a'+\sum_{i=1}^{n}\min_{m=0,1,\dots,M}\{\beta_{mi}\}\,(b_i-a_i')e^{i}
\tag{38}
$$
satisfying
$$
\alpha_{mi}=
\begin{cases}
1, & \text{if } F_m^{-}(\varphi_i(1))\ge F_m^{+}(a),\\
\alpha \text{ with } F_m^{-}(\varphi_i(\alpha))=F_m^{+}(a), & \text{otherwise},
\end{cases}
\qquad
\beta_{mi}=
\begin{cases}
1, & \text{if } F_m^{+}(\psi_i(1))\le F_m^{-}(b),\\
\beta \text{ with } F_m^{+}(\psi_i(\beta))=F_m^{-}(b), & \text{otherwise},
\end{cases}
\tag{39}
$$
where, for m=0, the role of Fm+ is played by F0 and that of Fm-(b) by ν (this convention is used in (47) and (49) below).
Proof.
(i) By the increasing property of F0(x), Fm+(x), and Fm-(x), if F0(a)>ν, then F0(x)≥F0(a)>ν for every x∈[a,b]. If there exists m∈{1,…,M} such that Fm+(a)-Fm-(b)>0, then Fm(x)=Fm+(x)-Fm-(x)≥Fm+(a)-Fm-(b)>0 for every x∈[a,b]. In both cases, F⋂[a,b]=∅.
(ii) Given any point x∈[a,b] satisfying
$$
F_0(x)\le\nu,\qquad F_m^{+}(x)-F_m^{-}(x)\le 0,\quad m=1,\dots,M,
\tag{40}
$$
we will show that x∈[a′,b′]. Let
$$
\alpha_{m'i}=\min\{\alpha_{mi} \mid m=1,\dots,M\},\qquad
\beta_{m''i}=\min\{\beta_{mi} \mid m=0,1,\dots,M\}.
\tag{41}
$$
Firstly, we show that x≥a′. If x≱a′, then there exists an index i such that
$$
x_i<a_i'=b_i-\alpha_{m'i}(b_i-a_i),\quad\text{i.e.,}\quad
x_i=b_i-\alpha(b_i-a_i)\ \text{with}\ \alpha_{m'i}<\alpha\le 1.
\tag{42}
$$
We consider the following two cases.
Case 1. If αm′i=1, then, from (42), we have xi<ai′=bi-αm′i(bi-ai)=ai, contradicting x∈[a,b], which requires xi≥ai.
Case 2. If 0≤αm′i<1, the function Φm′i(α)=Fm′-(φi(α))-Fm′+(a) must be strictly decreasing in the single variable α over the interval [0,1]. Indeed, if Φm′i(α) were not strictly decreasing, it would have to be constant over [0,1]. In this case,
$$
\Phi_{m'i}(1)=\Phi_{m'i}(0)=F_{m'}^{-}(b)-F_{m'}^{+}(a)\ge 0.
\tag{43}
$$
It would then follow from the definition of αm′i that αm′i=1, contradicting 0≤αm′i<1.
Since the function Φm′i(α) is strictly decreasing, it follows from (42) and the definition of αm′i that Fm′-(b-(bi-xi)ei)-Fm′+(a)=Φm′i(α)<Φm′i(αm′i)=0. Hence, Fm′-(b-(bi-xi)ei)<Fm′+(a). In addition, since Fm′-(x) is an increasing function of the n-dimensional variable x and x≤b-(bi-xi)ei, we have
$$
F_{m'}^{-}(x)\le F_{m'}^{-}\bigl(b-(b_i-x_i)e^{i}\bigr)<F_{m'}^{+}(a),
\tag{44}
$$
contradicting Fm′-(x)≥Fm′+(x)≥Fm′+(a).
Based on the above discussion, we have x≥a′; that is, x∈[a′,b] in either case.
Secondly, we show from x∈[a′,b] that x≤b′, that is, x∈[a′,b′]. Suppose that x≰b′; then there exists some i such that
$$
x_i>b_i'=a_i'+\beta_{m''i}(b_i-a_i');
\tag{45}
$$
that is, there exists β such that
$$
x_i=a_i'+\beta(b_i-a_i'),\qquad \beta_{m''i}<\beta\le 1.
\tag{46}
$$
By the definition of βm′′i, there are the following two cases to consider.
Case 1. If βm′′i=1, then, from (45), we have xi>bi′=ai′+(bi-ai′)=bi, contradicting x∈[a′,b], which requires xi≤bi.
Case 2. If 0≤βm′′i<1, the function Ψm′′i(β)=Fm′′+(ψi(β))-Fm′′-(b) is strictly increasing in the single variable β. Indeed, if Ψm′′i(β) were not strictly increasing, it would have to be constant over [0,1]. In this case,
$$
F_0(a')-\nu\le 0
\tag{47}
$$
or
$$
\Psi_{m''i}(1)=\Psi_{m''i}(0)=F_{m''}^{+}(a')-F_{m''}^{-}(b)\le 0.
\tag{48}
$$
It would then follow from the definition of βm′′i that βm′′i=1, contradicting 0≤βm′′i<1.
Since the function Ψm′′i(β) is strictly increasing, (46) and the definition of βm′′i imply that
$$
F_0(\psi_i(\beta))-\nu>F_0(\psi_i(\beta_{0i}))-\nu=0
\tag{49}
$$
or
$$
F_{m''}^{+}(\psi_i(\beta))-F_{m''}^{-}(b)=\Psi_{m''i}(\beta)>\Psi_{m''i}(\beta_{m''i})=0.
\tag{50}
$$
Assume that (49) holds; then, from (46), we derive
$$
F_0\bigl(a'+(x_i-a_i')e^{i}\bigr)=F_0(\psi_i(\beta))>\nu.
\tag{51}
$$
It follows from x≥a′+(xi-ai′)ei and the monotonicity of F0(x) that
$$
F_0(x)\ge F_0\bigl(a'+(x_i-a_i')e^{i}\bigr)>\nu,
\tag{52}
$$
contradicting F0(x)≤ν.
If (50) holds, we obtain, from (46), that
$$
F_{m''}^{+}\bigl(a'+(x_i-a_i')e^{i}\bigr)=F_{m''}^{+}(\psi_i(\beta))>F_{m''}^{-}(b).
\tag{53}
$$
Since x≥a′+(xi-ai′)ei and Fm′′+(x) is increasing, we have
$$
F_{m''}^{+}(x)\ge F_{m''}^{+}\bigl(a'+(x_i-a_i')e^{i}\bigr)>F_{m''}^{-}(b),
\tag{54}
$$
contradicting Fm′′+(x)≤Fm′′-(x)≤Fm′′-(b).
From the above results, we must have x≤b′, that is, x∈[a′,b′] in both cases, and this ends the proof.
Remark 5.
αmi and βmi given in Theorem 4 must exist and be unique, since the functions F0(x), Fm+(x), and Fm-(x) are all continuous and increasing.
Remark 6.
In order to obtain redν[a,b], the computation of αmi and βmi is more easily implementable than that of (2.4) and (2.5) in [28]. This is because the latter requires solving nonlinear nonconvex programming problems, whereas the former only involves solving single-variable equations with strict monotonicity.
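Since Φ and Ψ are continuous and strictly monotone in a single variable, each αmi (or βmi) in (39) can be computed by simple bisection. A hedged sketch (assuming Φ is passed in as a callable; the names are ours):

```python
def monotone_root(Phi, tol=1e-10):
    """Compute alpha_mi in (39) for a continuous, strictly decreasing
    Phi(alpha) = F_m^-(phi_i(alpha)) - F_m^+(a) on [0, 1] by bisection.
    (For beta_mi, Psi is strictly increasing; pass Phi = lambda t: -Psi(t).)"""
    if Phi(1.0) >= 0.0:          # F_m^-(phi_i(1)) >= F_m^+(a): alpha_mi = 1
        return 1.0
    lo, hi = 0.0, 1.0            # Phi(0) >= 0 > Phi(1), so a root exists
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if Phi(mid) > 0.0:       # Phi decreasing: the root lies to the right
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)
```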
(2) Deleting Technique. For any x∈X=(Xi)n×1 with Xi=[ai,bi](i=1,…,n), without loss of generality, we assume that the relaxation linear problem RLP(X) can be rewritten as
$$
\mathrm{RLP}(X)\colon\quad
\begin{cases}
\min\ \sum_{i=1}^{n}\lambda_{0i}x_i+\mu_0\\
\text{s.t.}\ \sum_{i=1}^{n}\lambda_{ji}x_i+\mu_j\le 0,\quad j=1,\dots,M,\\
\phantom{\text{s.t.}}\ x\in X\subseteq X^0,
\end{cases}
\tag{55}
$$
and let UB be a known upper bound of the optimum of Q(X0). Define
$$
\mathrm{RL}_j=\sum_{i=1}^{n}\min\{\lambda_{ji}a_i,\lambda_{ji}b_i\}+\mu_j,\quad j=0,1,\dots,M,
\tag{56}
$$
$$
\rho_i=\frac{\mathrm{UB}-\mathrm{RL}_0+\min\{\lambda_{0i}a_i,\lambda_{0i}b_i\}}{\lambda_{0i}}
\quad\text{with }\lambda_{0i}\ne 0,
\tag{57}
$$
$$
\tau_{ji}=\frac{-\mathrm{RL}_j+\min\{\lambda_{ji}a_i,\lambda_{ji}b_i\}}{\lambda_{ji}}
\quad\text{with }\lambda_{ji}\ne 0,\quad j=1,\dots,M,
\tag{58}
$$
where i=1,…,n.
Theorem 7.
For any X=(Xi)n×1⊆X0, if RL0>UB, then there exists no optimal solution of the problem Q(X0) over X. Otherwise, if λ0h>0 and ρh<bh for some h∈{1,…,n}, then there is no optimal solution of the problem Q(X0) over the subrectangle Xa; conversely, if λ0h<0 and ρh>ah for some h∈{1,…,n}, then there is no optimal solution of Q(X0) over Xb, where
$$
\begin{aligned}
X^{a}&=(X_i^{a})_{n\times 1}\subseteq X^0\quad\text{with}\quad
X_i^{a}=
\begin{cases}
X_i,& i\ne h,\\
(\rho_h,b_h]\cap X_h,& i=h,
\end{cases}\\
X^{b}&=(X_i^{b})_{n\times 1}\subseteq X^0\quad\text{with}\quad
X_i^{b}=
\begin{cases}
X_i,& i\ne h,\\
[a_h,\rho_h)\cap X_h,& i=h.
\end{cases}
\end{aligned}
\tag{59}
$$
Proof.
The proof is similar to Theorem 2 in [27]; it is omitted here.
Theorem 8.
For any X=(Xi)n×1⊆X0, if RLj>0 for some j∈{1,…,M}, then there exists no feasible solution of problem Q(X0) over X. Otherwise, consider the following two cases: if there exists some index h∈{1,…,n} satisfying λjh>0 and τjh<bh for some j∈{1,…,M}, then there is no feasible solution of the problem Q(X0) over Xc; conversely, if λjh<0 and τjh>ah for some j∈{1,…,M} and h∈{1,…,n}, then there exists no feasible solution of the problem Q(X0) over Xd, where
$$
\begin{aligned}
X^{c}&=(X_i^{c})_{n\times 1}\subseteq X^0\quad\text{with}\quad
X_i^{c}=
\begin{cases}
X_i,& i\ne h,\\
(\tau_{jh},b_h]\cap X_h,& i=h,
\end{cases}\\
X^{d}&=(X_i^{d})_{n\times 1}\subseteq X^0\quad\text{with}\quad
X_i^{d}=
\begin{cases}
X_i,& i\ne h,\\
[a_h,\tau_{jh})\cap X_h,& i=h.
\end{cases}
\end{aligned}
\tag{60}
$$
Proof.
The proof is similar to Theorem 3 in [27]; it is omitted here.
By Theorems 7 and 8, we can give a new deleting technique to reject regions in which the global optimal solution of Q(X0) cannot exist. Let X=(Xi)n×1 with Xi=[ai,bi] (i=1,…,n) be any subrectangle of X0. The deleting technique is summarized as follows.
(S1) Optimality Rule. Compute RL0 in (56). If RL0>UB, let X=∅; otherwise, compute ρi(i=1,…,n) in (57). If λ0h>0 and ρh<bh, for some h∈{1,…,n}, then let bh=ρh and X=(Xi)n×1 with Xi=[ai,bi](i=1,…,n). If λ0h<0 and ρh>ah, for some h∈{1,…,n}, then let ah=ρh and X=(Xi)n×1 with Xi=[ai,bi](i=1,…,n).
(S2) Feasibility Rule. For any j=1,…,M, compute RLj in (56). If RLj>0, for some j∈{1,…,M}, then let X=∅; otherwise, compute τji in (58) (j=1,…,M,i=1,…,n). If λjh>0 and τjh<bh, for some j∈{1,…,M} and h∈{1,…,n}, then let bh=τjh and X=(Xi)n×1 with Xi=[ai,bi](i=1,…,n). If λjh<0 and τjh>ah, for some j∈{1,…,M} and h∈{1,…,n}, then let ah=τjh and X=(Xi)n×1 with Xi=[ai,bi](i=1,…,n).
This deleting technique provides a possibility to cut away all or a large part of the subrectangle X which is currently investigated by the algorithm procedure.
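Rules (S1) and (S2) act directly on the coefficient data of (55). A hedged sketch of one pass (a single sweep that does not recompute RLj after each cut; the names are ours):

```python
import numpy as np

def delete_rule(lam, mu, a, b, UB):
    """One pass of the deleting technique (S1)-(S2).
    lam[0], mu[0] describe the objective of (55); lam[j], mu[j] (j >= 1) the
    constraints.  Returns the reduced box (a, b), or None if X is discarded."""
    a = np.array(a, dtype=float)
    b = np.array(b, dtype=float)
    for j in range(len(mu)):
        RL = np.minimum(lam[j] * a, lam[j] * b).sum() + mu[j]  # RL_j in (56)
        bound = UB if j == 0 else 0.0
        if RL > bound:
            return None              # (S1): RL_0 > UB, or (S2): RL_j > 0
        for h in np.nonzero(lam[j])[0]:
            # rho_h in (57) for j = 0, tau_jh in (58) for j >= 1
            r = (bound - RL + min(lam[j][h] * a[h], lam[j][h] * b[h])) / lam[j][h]
            if lam[j][h] > 0 and r < b[h]:
                b[h] = r             # cut (r, b_h] away
            elif lam[j][h] < 0 and r > a[h]:
                a[h] = r             # cut [a_h, r) away
    return a, b
```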
4. Algorithm and Its Convergence
Now, a branch-reduce-bound (BRB) algorithm is developed to solve the problem (Q) based on the former discussion. This method needs to solve a sequence of (RLP) problems over partitioned subsets of X0.
The BRB algorithm is based on partitioning the rectangle X0 into subrectangles, each associated with a node of the branch-and-bound tree. Hence, at any stage k of the algorithm, suppose that we have a collection of active nodes denoted by Tk, where each node is associated with a rectangle X⊆X0. For each such node X=[a,b], we will compute a lower bound LB(X) of the optimal objective function value of (Q) via the optimal value of RLP(X) and F0(a), so the lower bound of the optimal value of (Q) at stage k is given by min{LB(X),∀X∈Tk}. We then select an active node and subdivide its associated rectangle into two subrectangles according to the branching rule described in Section 3.1. For each new node, we reduce it and then compute the lower bound as before; at the same time, if necessary, we update the upper bound UBk. Upon fathoming any nonimproving node, we obtain a collection of active nodes for the next stage, and this process is repeated until convergence is achieved.
4.1. Algorithm Statement
Step 1 (initialization).
Choose the convergence tolerance ɛ>0. Let P0={X0} and T0={X0}. If some feasible solutions are available, add them to F and let UB0=min{F0(x)∣x∈F}; otherwise, let F=∅ and UB0=+∞. Set k=0.
Step 2 (reduction).
(i) Apply the reduction cut described in Section 3.3 to each box [a,b]∈Pk. Let Pk′={redν[a,b]∣[a,b]∈Pk} with ν=UBk-ɛ.
(ii) If Pk′≠∅, then, for each box [a,b]∈Pk′, we use the deleting technique in Section 3.3 to cut away parts of it, and we denote the collection of remaining boxes by Pk′′.
Step 3 (bounding).
If Pk′′≠∅, then, for each X=[a,b]∈Pk′′, do the following.
Solve the problem RLP(X) to obtain the optimal solution x(X) and the optimal value V[RLP(X)]. If x(X) is feasible to problem (Q), then set F=F⋃{x(X)}. Let LB(X)=max{V[RLP(X)],F0(a)}.
If Fm(a)≤0 for every m=1,…,M, then set F=F⋃{a}.
If F≠∅, define the new upper bound UBk=min{F0(x)∣x∈F}, and the best known feasible point is denoted by xk=argmin{F0(x)∣x∈F}. Set Tk=(Tk∖Xk)⋃Pk′′.
Step 4 (convergence checking).
Set Tk+1=Tk∖{X|LB(X)>UBk-ɛ,X∈Tk}.
If Tk+1=∅, then stop: if UBk=+∞, the problem is infeasible; otherwise, UBk is the optimal value and xk is the optimal solution. Otherwise, select an active node Xk+1=argmin{LB(X)∣X∈Tk+1} for further consideration and let LBk+1=LB(Xk+1).
Step 5 (branching).
Divide Xk+1 into two new subrectangles using the branching rule and let Pk+1 be the collection of these two subrectangles. Set k=k+1 and return to Step 2.
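The overall control flow of Steps 1–5 can be summarized as follows. This is a hedged skeleton only: the callbacks reduce_box, bound, and branch stand for the reduction, bounding, and branching operations of Sections 3.1–3.3 and are assumptions of this sketch, not part of the paper.

```python
def brb(X0, reduce_box, bound, branch, eps=1e-6):
    """Skeleton of the BRB algorithm (Steps 1-5).
    reduce_box(X, nu) -> reduced box or None (reduction cut + deleting rule),
    bound(X)          -> (LB, feasible point or None, its objective value),
    branch(X)         -> [X_minus, X_plus]."""
    UB, best = float("inf"), None
    new_boxes, active = [X0], []            # P_0 and T_0
    while True:
        # Step 2: reduce the newly created boxes
        kept = [Y for Y in (reduce_box(X, UB - eps) for X in new_boxes)
                if Y is not None]
        # Step 3: bound each surviving box and update the incumbent
        for X in kept:
            LB, x_feas, val = bound(X)
            if LB != float("inf"):          # infeasible relaxation: drop box
                active.append((LB, X))
            if x_feas is not None and val < UB:
                UB, best = val, x_feas
        # Step 4: fathom nodes with LB > UB - eps; stop when none remain
        active = [(LB, X) for (LB, X) in active if LB <= UB - eps]
        if not active:
            return UB, best                 # UB = +inf means infeasible
        idx = min(range(len(active)), key=lambda i: active[i][0])
        _, Xk = active.pop(idx)             # node with the smallest bound
        new_boxes = branch(Xk)              # Step 5
```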
4.2. Convergence Analysis
In this subsection, we give the convergence of the proposed algorithm.
Theorem 9.
If the presented algorithm terminates after finitely many steps, then, when it stops, xk is a global optimal solution of problem (Q). Otherwise, along any infinite branch of the initial rectangle domain, an infinite sequence {Xs} of partitioned rectangles is produced, any accumulation point of which yields a global optimal solution of the initial problem (Q).
Proof.
Assume that the algorithm terminates finitely at some stage k≥0. Then, when the algorithm terminates, UBk-LBk≤ɛ. By Steps 2 and 4 of the algorithm, there exists a feasible solution xk of problem (Q) satisfying F0(xk)=UBk, which implies that F0(xk)-LBk≤ɛ. Let F0* be the optimal value of problem (Q); then, by the structure of the algorithm, LBk≤F0*. Since xk is a feasible solution of problem (Q), F0*≤F0(xk). Combining the above inequalities yields F0*≤F0(xk)≤LBk+ɛ≤F0*+ɛ; that is, F0*≤F0(xk)≤F0*+ɛ. Thus, xk is an ɛ-global optimal solution of problem (Q).
If the algorithm is infinite, it generates an infinite sequence {Xk} such that a subsequence {Xkl} of {Xk} satisfies Xkl+1⊂Xkl for l=1,2,…. Thus, it follows from [28, 29] that this rectangle subdivision is exhaustive. Hence, for every iteration k=0,1,2,…, by design of the algorithm, there is at least an infinite subsequence {LBkl} of {LBk} such that
$$
\mathrm{LB}_{k_l}\le\min_{x\in X}F_0(x),\qquad
X^{k_l}\in\arg\min_{X\in T_{k_l}}\mathrm{LB}(X),\qquad
x^{k_l}=x(X^{k_l})\in X^{k_l}\subseteq X^0.
\tag{61}
$$
Since {LBkl} is a nondecreasing sequence bounded above by minx∈DF0(x), where D is the feasible set of problem (Q), the limit liml→∞LBkl=:LB exists and LB≤minx∈XF0(x). Since {xkl} is an infinite sequence on the compact set X0, there exists a convergent subsequence {xq} of {xkl} satisfying limq→∞xq=x^, xq∈Xq, and LBq=LB(Xq)=LF0(xq), where {Xq} is a subsequence of {Xkl}. By Theorem 1 and Lemma 1 of [25], the linear subfunctions LFm (m=0,1,…,M) used in problem RLP(X) are strongly consistent on X0. Thus, limq→∞LBq=limq→∞LF0(xq)=LF0(limq→∞xq)=LF0(x^)=LB. All that remains is to show that x^∈D. Since X0 is a closed set, x^∈X0. Suppose that x^∉D. Then there exists some Fj, j∈{1,…,M}, such that Fj(x^)=δ>0. Since LFj is continuous, by Theorem 1 and Lemma 1 of [25], we have LFj(xq)→Fj(x^) as q→∞; that is, there exists qδ such that |LFj(xq)-Fj(x^)|<δ for q>qδ, and so, for q>qδ, LFj(xq)>0, which implies that problem RLP(Xq) is infeasible. This contradicts the assumption that xq=x(Xq) is the optimal solution of RLP(Xq). Therefore, x^∈D; that is, LB=F0(x^)=minx∈DF0(x).
5. Numerical Experiments
There are two computational issues that may arise in using the suggested implementations of the global algorithm.
The first computational issue concerns the fact that we need to obtain the positive scalars Lj and Uj such that 0<Lj≤dj(y)≤Uj for all y∈Y before using the suggested implementation of the algorithm (see Section 2). Actually, Lj and Uj are available through solving the following two problems:
$$
(\mathrm{PP1})\colon\quad
\begin{cases}
\min\ d_j(y)\\
\text{s.t.}\ g_m(y)\le 0,\quad m=1,2,\dots,M_0,\\
\phantom{\text{s.t.}}\ 0<y_i^{l}\le y_i\le y_i^{u}<\infty,\quad i=1,2,\dots,n_0,
\end{cases}
\qquad
(\mathrm{PP2})\colon\quad
\begin{cases}
\max\ d_j(y)\\
\text{s.t.}\ g_m(y)\le 0,\quad m=1,2,\dots,M_0,\\
\phantom{\text{s.t.}}\ 0<y_i^{l}\le y_i\le y_i^{u}<\infty,\quad i=1,2,\dots,n_0.
\end{cases}
\tag{62}
$$
Since problems (PP1) and (PP2) are special cases of the original problem (P), the values of Lj and Uj can be obtained directly by using the proposed algorithm, without requiring any other special procedure (see [24, 25]). Furthermore, the interval [Lj,Uj] obtained in this way is tighter than the one produced by the Bernstein algorithm in [24, 25] (see Table 1), so the convergence of the algorithm may be improved.
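In code, this amounts to two extra calls of the same solver per ratio, one for (PP1) and one for (PP2). A hedged sketch, assuming a routine brb_solve(f, constraints, bounds) that returns the global minimum of f over the feasible set (a hypothetical interface, not from the paper):

```python
def denominator_bounds(d_j, constraints, bounds, brb_solve):
    """L_j and U_j via (PP1)/(PP2): globally minimize and maximize d_j(y)."""
    L_j, _ = brb_solve(d_j, constraints, bounds)                    # (PP1)
    neg_max, _ = brb_solve(lambda y: -d_j(y), constraints, bounds)  # (PP2)
    return L_j, -neg_max
```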
Table 1. The numerical results for upper and lower bounds of dj(y).

Example      Reference   L1    U1         L2    U2
Example 1    BRB         2     3.25       4     13
             [25]        2     4          4     16
Example 2    BRB         2     3          4     12.38
             [25]        2     4          4     16
Example 3    BRB         3     17.337     2     4.67
             [25]        2     149.2961   3     601
Example 4    BRB         2     3          4     12.38
Example 5    BRB         2     4.67       3     17.337
The second computational issue concerns the lower bounding process. From Section 3, each lower bound in the algorithm is computed by solving a relaxation linear programming problem of the form (RLP). Here, we adopt the simplex algorithm to solve the relaxation linear programs, so the implementation of the proposed global algorithm depends on the simplex algorithm.
We now report our numerical experiments on five test examples and some randomly generated problems to demonstrate the performance of the proposed optimization algorithm. The algorithm is coded in Compaq Visual Fortran, and all test problems are run on a microcomputer with an Athlon(tm) 2.31 GHz CPU and 960 MB of RAM. The numerical results for all test problems are summarized in Tables 2 and 3; they show that the proposed algorithm can globally solve problem (P) effectively.
Table 2. The numerical results for Examples 1–5.

Example      Method   Optimal solution               Optimal value     Iter    Time      ε
Example 1    BRB      (1.0, 1.7438231783465)         −4.060819175      1765    1.97      10^−8
             [25]     (1.0, 1.743823132)             −4.060819161      2638    16.23     10^−8
Example 2    BRB      (1.618033989, 1.0)             1.16653785326     197     0.42      10^−8
             [25]     (1.618033989, 1.0)             1.166537841203    420     3.52      10^−8
Example 3    BRB      (2.698690689, 1.207585549)     −2.33221836       5835    8.18      10^−7
             [25]     (2.698690670, 1.20758556)      −1.96149893       15243   130.62    10^−7
Example 4    BRB      (1, 1)                         3.3333            64      0.1563    10^−3
             [24]     (1, 1)                         3.3333            262     2         10^−3
Example 5    BRB      (1, 1)                         5.5167            149     0.2       10^−3
             [24]     (1, 1)                         5.5167            280     1.6       10^−3
Table 3. The numerical results for the random problem.

n     m     Ave. Iter   Ave. L   Ave. CPU (s)
4     5     94          93       40.35
4     7     94          93       40.36
4     10    98          95       40.37
6     5     638         637      77.87
6     7     675         674      78.18
6     10    682         677      80.85
10    5     528         527      232.5
10    7     554         546      232.35
10    10    598         583      235.6
In Tables 1–3, the following notation is used for column headers: Iter: number of algorithm iterations; Time: execution time in seconds; ε: convergence tolerance. For row headers, BRB denotes the corresponding numerical results of the proposed BRB algorithm.
From Table 1, the upper and lower bounds of dj(y) obtained by the proposed method are tighter than those of [25]; that is, the computed intervals [L1,U1] and [L2,U2] are substantially narrower.
From Table 2, the numerical results show that the computational efficiency is clearly improved by the proposed algorithm, in both the number of iterations and the overall execution time, compared with the methods of [24, 25].
Additionally, to test our algorithm further, we consider the following randomly generated problem:
$$
(\text{Problem 1})\colon\quad
\begin{cases}
\min\ \dfrac{\sum_{i=1}^{q}p_{i1}\prod_{j\in N_{i1}}x_j^{\alpha_{ij1}}}{\sum_{i=1}^{q}p_{i2}\prod_{j\in N_{i2}}x_j^{\alpha_{ij2}}}
+\dfrac{\sum_{i=1}^{q}p_{i3}\prod_{j\in N_{i3}}x_j^{\alpha_{ij3}}}{\sum_{i=1}^{q}p_{i4}\prod_{j\in N_{i4}}x_j^{\alpha_{ij4}}}\\[8pt]
\text{s.t.}\ Ax\le b,\\
\phantom{\text{s.t.}}\ \sum_{i=1}^{n}x_i=10,\qquad 0<x_i\le 10,\quad i=1,\dots,n,
\end{cases}
\tag{68}
$$
where q is a positive integer, for each k=1,2,3,4, pik∈(1,2), Nik⊂{1,2,…,n} with 1≤|Nik|≤4, each element of Nik is randomly generated from {1,2,…,n}, and the exponents αijk are randomly generated from {1,2,3}. The elements of A∈Rm×n and b∈Rm are generated from uniform random numbers in the intervals (0,1) and (0,10), respectively.
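For reproducibility, random instances of Problem 1 can be generated directly from this description. A hedged sketch (the names are ours; note that |Nik|≤4 requires n≥4 when sampling without replacement):

```python
import numpy as np

def random_instance(n, m, q, rng=None):
    """One random instance of Problem 1: p_ik in (1,2), N_ik a subset of
    {0,...,n-1} with 1 <= |N_ik| <= 4, exponents alpha in {1,2,3},
    A in (0,1)^{m x n}, b in (0,10)^m."""
    rng = np.random.default_rng() if rng is None else rng
    p = rng.uniform(1.0, 2.0, size=(q, 4))
    N = [[rng.choice(n, size=rng.integers(1, 5), replace=False)
          for _ in range(4)] for _ in range(q)]
    alpha = [[rng.integers(1, 4, size=len(N[i][k])) for k in range(4)]
             for i in range(q)]
    A = rng.uniform(0.0, 1.0, size=(m, n))
    b = rng.uniform(0.0, 10.0, size=m)
    return p, N, alpha, A, b
```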
For the above test problem, the convergence tolerance parameter is set as ɛ=10-2 and q=3. Numerical results are summarized in Table 3, where the average number of iterations (Ave. Iter), the average number of list nodes (Ave. L), and the average CPU time in seconds (Ave. CPU (s)) are obtained by running the BRB algorithm 20 times on this problem.
It is seen from Table 3 that the size of n (the number of variables) is the main factor affecting the performance of the algorithm. This is mainly because much time must be spent computing the bounds of the introduced variables, and this cost grows as the number of variables increases. Owing to the constraint-function evaluations in the reduction operations and the linear relaxation, the CPU time also increases as m (the number of inequality constraints) increases, but not as sharply as with n.
Conflict of Interests
The authors declare that there is no conflict of interests regarding the publication of this paper.
Acknowledgment
The research is supported by the National Natural Science Foundation of China (11171094 and 11171368).
References

[1] C. D. Maranas, I. P. Androulakis, C. A. Floudas, A. J. Berger, and J. M. Mulvey, "Solving long-term financial planning problems via global optimization," Journal of Economic Dynamics and Control, vol. 21, no. 8-9, pp. 1405–1425, 1997.
[2] H. M. Markowitz, Portfolio Selection: Efficient Diversification of Investments, 2nd edition, Basil Blackwell, Oxford, UK, 1991.
[3] I. Quesada and I. E. Grossmann, "Alternative bounding approximations for the global optimization of various engineering design problems," in Global Optimization in Engineering Design, vol. 9 of Nonconvex Optimization and Its Applications, pp. 309–331, Kluwer Academic Publishers, Norwell, Mass, USA, 1996.
[4] I. M. Stancu-Minasian, Fractional Programming: Theory, Methods and Applications, vol. 409 of Mathematics and Its Applications, Kluwer Academic Publishers, Dordrecht, The Netherlands, 1997.
[5] H. Konno and H. Watanabe, "Bond portfolio optimization problems and their applications to index tracking: a partial optimization approach," Journal of the Operations Research Society of Japan, vol. 39, no. 3, pp. 295–306, 1996.
[6] N. K. Jha, "Geometric programming based robot control design," Computers & Industrial Engineering, vol. 29, no. 1–4, pp. 631–635, 1995.
[7] K. Das, T. K. Roy, and M. Maiti, "Multi-item inventory model with quantity-dependent inventory costs and demand-dependent unit cost under imprecise objective and restrictions: a geometric programming approach," Production Planning & Control, vol. 11, no. 8, pp. 781–788, 2000.
[8] J. C. Choi and D. L. Bricker, "Effectiveness of a geometric programming algorithm for optimization of machining economics models," Computers & Operations Research, vol. 23, no. 10, pp. 957–961, 1996.
[9] R. Yamamoto and H. Konno, "An efficient algorithm for solving convex-convex quadratic fractional problems," Journal of Optimization Theory and Applications, vol. 133, no. 2, pp. 241–255, 2007.
[10] H. P. Benson, "A simplicial branch and bound duality-bounds algorithm for the linear sum-of-ratios problem," European Journal of Operational Research, vol. 182, no. 2, pp. 597–611, 2007.
[11] P. Shen, Y. Chen, and Y. Ma, "Solving sum of quadratic ratios fractional programs via monotonic function," Applied Mathematics and Computation, vol. 212, no. 1, pp. 234–244, 2009.
[12] H. Jiao, Y. Guo, and P. Shen, "Global optimization of generalized linear fractional programming with nonlinear constraints," Applied Mathematics and Computation, vol. 183, no. 2, pp. 717–728, 2006.
[13] S.-J. Qu, K.-C. Zhang, and J.-K. Zhao, "An efficient algorithm for globally minimizing sum of quadratic ratios problem with nonconvex quadratic constraints," Applied Mathematics and Computation, vol. 189, no. 2, pp. 1624–1636, 2007.
[14] H. P. Benson, "Decomposition branch-and-bound based algorithm for linear programs with additional multiplicative constraints," Journal of Optimization Theory and Applications, vol. 126, no. 1, pp. 41–61, 2005.
[15] H.-S. Ryoo and N. V. Sahinidis, "Global optimization of multiplicative programs," Journal of Global Optimization, vol. 26, no. 4, pp. 387–418, 2003.
[16] S. Schaible and C. Sodini, "Finite algorithm for generalized linear multiplicative programming," Journal of Optimization Theory and Applications, vol. 87, no. 2, pp. 441–455, 1995.
[17] P. Shen, Y. Ma, and Y. Chen, "A robust algorithm for generalized geometric programming," Journal of Global Optimization, vol. 41, no. 4, pp. 593–612, 2008.
[18] S. Qu, K. Zhang, and F. Wang, "A global optimization using linear relaxation for generalized geometric programming," European Journal of Operational Research, vol. 190, no. 2, pp. 345–356, 2008.
[19] Y. Wang, K. Zhang, and Y. Gao, "Global optimization of generalized geometric programming," Computers & Mathematics with Applications, vol. 48, no. 10-11, pp. 1505–1516, 2004.
[20] P. Shen and K. Zhang, "Global optimization of signomial geometric programming using linear relaxation," Applied Mathematics and Computation, vol. 150, no. 1, pp. 99–114, 2004.
[21] T. Matsui, "NP-hardness of linear multiplicative programming and related problems," Journal of Global Optimization, vol. 9, no. 2, pp. 113–119, 1996.
[22] J. E. Falk and S. W. Palocsay, "Optimizing the sum of linear fractional functions," in Recent Advances in Global Optimization, Princeton Series in Computer Science, pp. 221–258, Princeton University Press, Princeton, NJ, USA, 1992.
[23] T. Kuno, "A branch-and-bound algorithm for maximizing the sum of several linear ratios," Journal of Global Optimization, vol. 22, no. 1–4, pp. 155–174, 2002.
[24] Y.-J. Wang and K.-C. Zhang, "Global optimization of nonlinear sum of ratios problem," Applied Mathematics and Computation, vol. 158, no. 2, pp. 319–330, 2004.
[25] P.-P. Shen and G.-X. Yuan, "Global optimization for the sum of generalized polynomial fractional functions," Mathematical Methods of Operations Research, vol. 65, no. 3, pp. 445–459, 2007.
[26] H. Jiao, Z. Wang, and Y. Chen, "Global optimization algorithm for sum of generalized polynomial ratios problem," Applied Mathematical Modelling, vol. 37, no. 1-2, pp. 187–197, 2013.
[27] S.-C. Fang, D. Y. Gao, R.-L. Sheu, and W. Xing, "Global optimization for a class of fractional programming problems," Journal of Global Optimization, vol. 45, no. 3, pp. 337–353, 2009.
[28] H. Tuy, F. Al-Khayyal, and P. T. Thach, "Monotonic optimization: branch and cut methods," in Essays and Surveys in Global Optimization, C. Audet, P. Hansen, and G. Savard, Eds., GERAD 25th Anniversary Series, vol. 7, pp. 39–78, Springer, New York, NY, USA, 2005.
[29] R. Horst and H. Tuy, Global Optimization: Deterministic Approaches, 2nd edition, Springer, Berlin, Germany, 1993.