This paper presents a global optimization algorithm for solving the signomial geometric programming (SGP) problem. In the algorithm, by straightforward algebraic manipulation of terms and a transformation of variables, the initial nonconvex programming problem (SGP) is first converted into an equivalent monotonic optimization problem, which is then reduced to a sequence of linear programming problems by a linearization technique. To improve computational efficiency, two range reduction operations are combined with the branch and bound procedure. The proposed algorithm converges to the global minimum of (SGP) through the successive solution of a series of linear programming relaxations. Finally, numerical results are reported to demonstrate the feasibility and effectiveness of the proposed method.
1. Introduction
The signomial geometric programming (SGP) problem can be formulated as the following nonlinear optimization problem:
(1) $(\mathrm{SGP}):\quad \min\ \Phi_0(y)\quad \text{s.t.}\quad \Phi_m(y)\le 0,\ m=1,\dots,M_0,\quad y\in\Omega^0,$
where
(2) $\Phi_m(y)=\sum_{t=1}^{T_m}\delta_{mt}\prod_{i=1}^{n_0} y_i^{\eta_{mti}},\ m=0,1,\dots,M_0,\qquad \Omega^0=\{y\in\mathbb{R}_{+}^{n_0}\mid 0< y_i^l\le y_i\le y_i^u<\infty,\ i=1,\dots,n_0\}.$

Here the $T_m$ are positive integers, and the $\delta_{mt}$ and $\eta_{mti}$ are arbitrary real coefficients and exponents, respectively. In general, the problem (SGP) is a nonlinear optimization problem with a nonconvex objective function and a nonconvex constraint set. As noted in [1, 2], many nonlinear programming problems may be restated as geometric programs with little additional effort by simple techniques such as a change of variables or straightforward algebraic manipulation of terms. In addition, the (SGP) problem has found a wide range of applications in production planning, location and distribution, risk management, chemical process design, and various engineering design situations [3–10]. Hence, it is worthwhile to develop good algorithms for solving (SGP).
The theory of (SGP) was initially developed over three decades ago by Duffin et al. [11–13] and has since been studied by a number of researchers. Local optimization approaches for solving the (SGP) problem fall broadly into three classes: first, successive approximation by posynomials, which has been the most popular [14]; second, the weaker type of duality developed by Passy and Wilde [15] to accommodate this class of nonlinear programs; third, general nonlinear programming methods [16]. Although local optimization methods for the (SGP) problem are plentiful, global optimization algorithms that exploit the structure of (SGP) are scarce. When the exponents ηmti in Φm(y) are positive integers or rational numbers, the authors of [8, 17–19] developed corresponding global solution methods for (SGP). For the case in which each ηmti is real, Maranas and Floudas [20] proposed a global optimization branch and bound algorithm based on an exponential variable transformation of (SGP) and a convex relaxation. Shen and Zhang [21] proposed a global optimization algorithm based on an exponential variable transformation of (SGP) and a linear relaxation. Recently, Shen et al. [22] presented a robust algorithm for the (SGP) problem by seeking an essential optimal solution, Wang et al. [23] developed a general algorithm for solving (SGP) problems with nonpositive degree of difficulty, and Qu et al. [24] proposed a global optimization algorithm using linear relaxation for the (SGP) problem.
In this paper we present a new global optimization algorithm for the (SGP) problem that uses several reduction operations and solves a sequence of linear programming problems over partitioned subsets. The proposed method uses a convenient transformation based on the structure of (SGP), so that the original problem is equivalently reformulated as a monotonic optimization problem (P); that is, the objective function is increasing and each constraint function can be written as the difference of two increasing functions. Compared with the methods reviewed above, the present approach has the following features. First, the proposed linear relaxation is built on the monotonic reformulation (P) and thus exploits more of the structure of the functions of (SGP); more importantly, the reduction operations adopted in our global optimization algorithm can cut away a large part of the region in which the global optimal solution of (SGP) does not exist, so the solution procedure is expected to be more efficient than the methods in [21, 25, 26]. Second, the problem investigated in this paper generalizes those of [8, 17–19]; furthermore, our method is computationally more convenient than the convex relaxation of [19], because the main work consists of solving linear programs and finding the zeros of strictly monotonic functions of one variable over the interval [0,1), both of which can be done very efficiently by existing methods, for example, the simplex method and bisection search. Third, numerical results and comparisons with other methods are reported to show the potential advantages of the proposed algorithm.
The remainder of this paper is organized as follows. The next section converts the (SGP) problem into a monotonic optimization problem. We discuss the rectangular branching operation, the lower bounding operation, and the reducing operations needed in our algorithm in Section 3. Section 4 incorporates this approach into an algorithm for solving (SGP) and shows the convergence property of the algorithm. In Section 5, we report the results of solving some numerical examples with the algorithm. A summary is presented in the last section.
2. Equivalent Problem
In order to convert (SGP) problem into an equivalent optimization problem (P), for each m=1,…,M0, i=1,…,n0, let us denote
(3) $\eta_{mi}=\min\{\eta_{mti}\mid t=1,\dots,T_m\},\qquad \gamma_{0ti}=\eta_{0ti},\qquad \gamma_{mti}=\eta_{mti}-\eta_{mi}.$
By multiplying both sides of each constraint inequality of (SGP) by $\prod_{i=1}^{n_0} y_i^{-\eta_{mi}}$ and by applying the exponential transformation
(4) $y_i=\exp(x_i),\quad i=1,\dots,n_0,$
to the formulation (SGP), we can obtain the following equivalent problem:
(5) $(\mathrm{SGP1}):\quad \min\ \sum_{t=1}^{T_0}\delta_{0t}\exp\Big(\sum_{i=1}^{n_0}\gamma_{0ti}x_i\Big)\quad \text{s.t.}\quad \sum_{t=1}^{T_m}\delta_{mt}\exp\Big(\sum_{i=1}^{n_0}\gamma_{mti}x_i\Big)\le 0,\ m=1,\dots,M_0,\quad x\in\Omega=\{x\in\mathbb{R}^{n_0}\mid x_i^l=\ln y_i^l\le x_i\le x_i^u=\ln y_i^u,\ i=1,\dots,n_0\}.$
Next, for convenience, for each m=0,1,…,M0, we assume, without loss of generality, that δmt>0 for t=1,…,Jm and δmt<0 for t=Jm+1,…,Tm, and some notation is introduced as follows:
(6) $I_t^{+}=\{i\mid \gamma_{0ti}>0,\ i=1,\dots,n_0\},\qquad I_t^{-}=\{i\mid \gamma_{0ti}<0,\ i=1,\dots,n_0\}.$
Thus, by using It+, It-, let us calculate
(7) $L_t=\sum_{i\in I_t^{-}}\gamma_{0ti}\ln y_i^u,\quad U_t=\sum_{i\in I_t^{-}}\gamma_{0ti}\ln y_i^l\quad \text{for each } t=1,\dots,J_0,$ and $l_t=\sum_{i\in I_t^{+}}\gamma_{0ti}\ln y_i^l,\quad u_t=\sum_{i\in I_t^{+}}\gamma_{0ti}\ln y_i^u\quad \text{for each } t=J_0+1,\dots,T_0.$
Then, by introducing additional variables $x_i$, $i=n_0+1,\dots,n$, with $n=n_0+T_0$, we can convert the problem (SGP1) into

(8) $(\mathrm{P}):\quad \min\ F_0(x)=\sum_{t=1}^{J_0}\delta_{0t}\exp\Big(\sum_{i\in I_t^{+}}\gamma_{0ti}x_i+x_{n_0+t}\Big)+\sum_{t=J_0+1}^{T_0}\delta_{0t}\exp\Big(\sum_{i\in I_t^{-}}\gamma_{0ti}x_i-x_{n_0+t}\Big)$
s.t. $\sum_{t=1}^{J_m}\delta_{mt}\exp\Big(\sum_{i=1}^{n_0}\gamma_{mti}x_i\Big)+\sum_{t=J_m+1}^{T_m}\delta_{mt}\exp\Big(\sum_{i=1}^{n_0}\gamma_{mti}x_i\Big)\le 0,\quad m=1,\dots,M_0,$
$x_{n_0+t}-\sum_{i\in I_t^{-}}\gamma_{0ti}x_i\ge 0,\quad t=1,\dots,J_0,$
$x_{n_0+t}+\sum_{i\in I_t^{+}}\gamma_{0ti}x_i\ge 0,\quad t=J_0+1,\dots,T_0,$
$x\in X^0,$

where

(9) $X^0=\{x\in\mathbb{R}^{n}\mid x_i^l\le x_i\le x_i^u,\ i=1,\dots,n\}=\{x\in\mathbb{R}^{n}\mid x_i^l\le x_i\le x_i^u,\ i=1,\dots,n_0;\ L_{i-n_0}\le x_i\le U_{i-n_0},\ i=n_0+1,\dots,n_0+J_0;\ -u_{i-n_0}\le x_i\le -l_{i-n_0},\ i=n_0+J_0+1,\dots,n\}.$
Additionally, for the sake of simplicity, let M=M0+T0; the problem (P) can be rewritten as the following form:
(10) $(\mathrm{P}):\quad \min\{F_0(x)\mid F_m^{+}(x)-F_m^{-}(x)\le 0,\ m=1,\dots,M,\ x\in X^0\},$
where
(11) $F_m^{+}(x)=\begin{cases}\sum_{t=1}^{J_m}\delta_{mt}\exp\big(\sum_{i=1}^{n_0}\gamma_{mti}x_i\big), & m=1,\dots,M_0,\\ 0, & m=M_0+1,\dots,M,\end{cases}$

(12) $F_m^{-}(x)=\begin{cases}-\sum_{t=J_m+1}^{T_m}\delta_{mt}\exp\big(\sum_{i=1}^{n_0}\gamma_{mti}x_i\big), & m=1,\dots,M_0,\\ x_{m+n_0-M_0}-\sum_{i\in I_{m-M_0}^{-}}\gamma_{0(m-M_0)i}\,x_i, & m=M_0+1,\dots,M_0+J_0,\\ x_{m+n_0-M_0}+\sum_{i\in I_{m-M_0}^{+}}\gamma_{0(m-M_0)i}\,x_i, & m=M_0+J_0+1,\dots,M.\end{cases}$
Note that each function F0(x),Fm+(x),Fm-(x) of problem (P) is increasing (i.e., a function f:Rn→R is said to be increasing if f(x)≤f(y) for all x,y∈Rn satisfying xi≤yi, i=1,…,n). Thus problem (P) is a monotonic optimization problem, and the key equivalent result for problems (SGP) and (P) is given by Theorem 1.
Theorem 1.
y*∈Rn0 is a global optimal solution for problem (SGP) if and only if x*∈Rn is a global optimal solution for problem (P), where
(13) $x_i^*=\begin{cases}\ln y_i^*, & i=1,\dots,n_0,\\ \sum_{j\in I_{i-n_0}^{-}}\gamma_{0(i-n_0)j}\ln y_j^*, & i=n_0+1,\dots,n_0+J_0,\\ -\sum_{j\in I_{i-n_0}^{+}}\gamma_{0(i-n_0)j}\ln y_j^*, & i=n_0+J_0+1,\dots,n.\end{cases}$
Proof.
The proof of this theorem follows easily from the definitions of problems (SGP) and (P); therefore, it is omitted here.
From Theorem 1, notice that, in order to solve problem (SGP), we may solve problem (P) instead. In addition, it is easy to see that the global optimal values of problems (SGP) and (P) are equal. From now on we therefore assume that the original problem (SGP) has been converted into problem (P), and we consider a general approach for solving problem (P).
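To make the reformulation of this section concrete, the following Python sketch computes, from the data $\delta_{mt}$, $\eta_{mti}$ and the bounds $y^l,y^u$, the exponents $\gamma_{mti}$ of (3), the index sets $I_t^{\pm}$ of (6), and the extended box $X^0$ of (7) and (9). The function name and data layout are hypothetical; this is a minimal illustration, not the authors' implementation.

```python
import numpy as np

def build_problem_P(delta, eta, yl, yu):
    """Sketch of the reformulation (SGP) -> (P) from Section 2.

    delta : list of 1-D arrays, delta[m][t] = delta_{mt}, m = 0,...,M0
    eta   : list of 2-D arrays, eta[m][t, i] = eta_{mti}
    yl,yu : 1-D arrays of lower/upper bounds on y (length n0)
    Terms are assumed ordered so that delta[m][:J_m] > 0 > delta[m][J_m:].
    """
    n0, M0 = len(yl), len(delta) - 1
    xl, xu = np.log(yl), np.log(yu)

    # gamma_{0ti} = eta_{0ti};  gamma_{mti} = eta_{mti} - eta_{mi}   (eq. (3))
    gamma = [eta[0].copy()]
    for m in range(1, M0 + 1):
        gamma.append(eta[m] - eta[m].min(axis=0, keepdims=True))

    # index sets I_t^+, I_t^- for the objective terms                (eq. (6))
    T0 = len(delta[0])
    J0 = int(np.sum(delta[0] > 0))
    I_plus = [np.where(gamma[0][t] > 0)[0] for t in range(T0)]
    I_minus = [np.where(gamma[0][t] < 0)[0] for t in range(T0)]

    # bounds for the additional variables x_{n0+t}                   (eq. (7))
    a, b = list(xl), list(xu)
    for t in range(J0):                  # x_{n0+t} in [L_t, U_t]
        a.append(np.sum(gamma[0][t, I_minus[t]] * xu[I_minus[t]]))
        b.append(np.sum(gamma[0][t, I_minus[t]] * xl[I_minus[t]]))
    for t in range(J0, T0):              # x_{n0+t} in [-u_t, -l_t]
        a.append(-np.sum(gamma[0][t, I_plus[t]] * xu[I_plus[t]]))
        b.append(-np.sum(gamma[0][t, I_plus[t]] * xl[I_plus[t]]))

    return gamma, I_plus, I_minus, np.array(a), np.array(b)
```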
3. Key Algorithm Processes
To globally solve the problem (P), a branch-reduce-bound (BRB) algorithm will be proposed. This algorithm proceeds according to the standard branch and bound scheme with three key processes: branching, reducing, and bounding.
The branching process consists in a successive rectangular partition of the initial box X0=[xl,xu] following an exhaustive subdivision rule, that is, a rule such that any infinite nested sequence of partition sets generated by the algorithm shrinks to a singleton. A commonly used exhaustive subdivision rule is the standard bisection.
The reducing process consists in applying reduction operations to reduce the size of the current partition set X=[a,b]⊂X0=[xl,xu]. The process aims at tightening the box containing the feasible portion currently still of interest.
The bounding process consists in using the linearization method to give a better lower bound.
Next, we describe these three processes in detail.
3.1. Lower Bound
At a given stage of the BRB algorithm for (P), let X=[a,b]⊂X0 be a rectangle during the partitioning procedure and still of interest; we intend to compute a lower bound LB(X) of the optimal value of (P) over X. Restrict the problem (P) to X:
(14) $P(X):\quad \min\{F_0(x)\mid F_m(x)\le 0,\ m=1,\dots,M,\ x\in X\}.$
Denote the optimal objective function value of problem P(X) by V[P(X)].
Since F0(x) is increasing, an obvious bound is LB(X)=F0(a); although very simple, this bound suffices to ensure convergence of the algorithm. However, the following procedure may give a better bound.
Our main method for computing a lower bound of V[P(X)] over X is to solve the relaxation linear programming of P(X). The linear relaxation of the problem P(X) can be realized by underestimating every function F0(x) and Fm+(x) and by overestimating every function Fm-(x), for each m=1,…,M0. All the details for generating the linear relaxation will be given in the following.
Denote
(15) $X_{0t}=\begin{cases}\sum_{i\in I_t^{+}}\gamma_{0ti}x_i+x_{n_0+t}, & t=1,\dots,J_0,\\ \sum_{i\in I_t^{-}}\gamma_{0ti}x_i-x_{n_0+t}, & t=J_0+1,\dots,T_0,\end{cases}\qquad X_{0t}^{l}=\begin{cases}\sum_{i\in I_t^{+}}\gamma_{0ti}a_i+a_{n_0+t}, & t=1,\dots,J_0,\\ \sum_{i\in I_t^{-}}\gamma_{0ti}b_i-b_{n_0+t}, & t=J_0+1,\dots,T_0,\end{cases}\qquad X_{0t}^{u}=\begin{cases}\sum_{i\in I_t^{+}}\gamma_{0ti}b_i+b_{n_0+t}, & t=1,\dots,J_0,\\ \sum_{i\in I_t^{-}}\gamma_{0ti}a_i-a_{n_0+t}, & t=J_0+1,\dots,T_0,\end{cases}$
$X_{mt}=\sum_{i=1}^{n_0}\gamma_{mti}x_i,\qquad X_{mt}^{l}=\sum_{i=1}^{n_0}\gamma_{mti}a_i,\qquad X_{mt}^{u}=\sum_{i=1}^{n_0}\gamma_{mti}b_i,\qquad t=1,\dots,T_m,$
where m=1,…,M0. In addition, let
(16) $A_{mt}=\frac{\exp(X_{mt}^{u})-\exp(X_{mt}^{l})}{X_{mt}^{u}-X_{mt}^{l}},\qquad \theta_{mt}(x)=\exp(X_{mt}),\qquad \overline{\theta}_{mt}(x)=A_{mt}\bigl(X_{mt}-X_{mt}^{l}\bigr)+\exp(X_{mt}^{l}),\qquad \underline{\theta}_{mt}(x)=A_{mt}\bigl(X_{mt}-\ln A_{mt}+1\bigr),$
where m=0,1,…,M0, t=1,…,Tm.
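As a small numerical illustration of (16) (hypothetical helper name `term_bounds`; a sketch, not part of the algorithm itself), the linear under- and overestimators of a single exponential term can be built and checked as follows.

```python
import numpy as np

def term_bounds(Xl, Xu):
    """Linear under/overestimators of exp(X) on [Xl, Xu] as in (16).

    theta_low(X) = A*(X - ln A + 1)  is the tangent line parallel to the chord;
    theta_up(X)  = A*(X - Xl) + exp(Xl)  is the chord (concave envelope).
    """
    A = (np.exp(Xu) - np.exp(Xl)) / (Xu - Xl)
    theta_low = lambda X: A * (X - np.log(A) + 1.0)
    theta_up = lambda X: A * (X - Xl) + np.exp(Xl)
    return theta_low, theta_up

# sandwich property (17): theta_low <= exp <= theta_up on [Xl, Xu]
lo, up = term_bounds(-1.0, 2.0)
X = np.linspace(-1.0, 2.0, 5)
print(np.all(lo(X) <= np.exp(X)), np.all(np.exp(X) <= up(X)))   # True True
```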
Theorem 2.
Consider the functions θmt(x), θ_mt(x), and θ¯mt(x), for any x∈X, where m=0,…,M0 and t=1,…,Tm. Then the following two statements are valid.
(1) The function $\overline{\theta}_{mt}(x)$ is the concave envelope of the function $\theta_{mt}(x)$ over $X$, and the function $\underline{\theta}_{mt}(x)$ is a supporting hyperplane of $\theta_{mt}(x)$ that is parallel to $\overline{\theta}_{mt}(x)$. Moreover, the functions $\theta_{mt}(x)$, $\underline{\theta}_{mt}(x)$, and $\overline{\theta}_{mt}(x)$ satisfy
(17) $\underline{\theta}_{mt}(x)\le \theta_{mt}(x)\le \overline{\theta}_{mt}(x),\quad \forall x\in X.$
(2) The differences $\Delta_{mt}^{1}(x)=\overline{\theta}_{mt}(x)-\theta_{mt}(x)$ and $\Delta_{mt}^{2}(x)=\theta_{mt}(x)-\underline{\theta}_{mt}(x)$ satisfy $\max_{x\in X}\Delta_{mt}^{1}(x)=\max_{x\in X}\Delta_{mt}^{2}(x)=\exp(X_{mt}^{l})\bigl(1-Z_{mt}+Z_{mt}\ln Z_{mt}\bigr)\to 0$ as $\omega_{mt}\to 0$, where
(18) $\omega_{mt}=X_{mt}^{u}-X_{mt}^{l},\qquad Z_{mt}=\frac{\exp(\omega_{mt})-1}{\omega_{mt}}.$
Proof.
The proof is similar to Theorem 1 in [21]; therefore, it is omitted here.
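Since the proof is only given by reference, it may help to record the short calculation behind statement (2), using the notation of (16) and (18) and the identities $A_{mt}=e^{X_{mt}^{l}}Z_{mt}$ and $Z_{mt}\,\omega_{mt}=e^{\omega_{mt}}-1$; this is an illustrative derivation, not a restatement of the argument in [21]. The gap $\Delta_{mt}^{2}$ is convex in $X_{mt}$ and vanishes at $X_{mt}=\ln A_{mt}$, so its maximum over $[X_{mt}^{l},X_{mt}^{u}]$ is attained at an endpoint; the gap $\Delta_{mt}^{1}$ is concave and maximal at $X_{mt}=\ln A_{mt}$. Evaluating,
\begin{align*}
\Delta_{mt}^{2}\big|_{X_{mt}=X_{mt}^{l}} &= e^{X_{mt}^{l}}-e^{X_{mt}^{l}}Z_{mt}\bigl(1-\ln Z_{mt}\bigr) = e^{X_{mt}^{l}}\bigl(1-Z_{mt}+Z_{mt}\ln Z_{mt}\bigr) = \Delta_{mt}^{2}\big|_{X_{mt}=X_{mt}^{u}},\\
\Delta_{mt}^{1}\big|_{X_{mt}=\ln A_{mt}} &= A_{mt}\bigl(\ln A_{mt}-X_{mt}^{l}\bigr)+e^{X_{mt}^{l}}-A_{mt} = e^{X_{mt}^{l}}\bigl(1-Z_{mt}+Z_{mt}\ln Z_{mt}\bigr),
\end{align*}
and both maxima tend to 0 as $\omega_{mt}\to 0$, since then $Z_{mt}\to 1$.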
Remark 3.
From Theorem 2, it follows that the functions $\underline{\theta}_{mt}(x)$ and $\overline{\theta}_{mt}(x)$ approximate the function $\theta_{mt}(x)$ arbitrarily closely as $\omega_{mt}\to 0$.
From Theorem 2, it is obvious that for all x∈X we have
(19) $F_0(x)\ge LF_0(x)=\sum_{t=1}^{J_0}\delta_{0t}\underline{\theta}_{0t}(x)+\sum_{t=J_0+1}^{T_0}\delta_{0t}\overline{\theta}_{0t}(x),\qquad F_m^{+}(x)\ge LF_m^{+}(x)=\sum_{t=1}^{J_m}\delta_{mt}\underline{\theta}_{mt}(x),\qquad F_m^{-}(x)\le UF_m^{-}(x)=-\sum_{t=J_m+1}^{T_m}\delta_{mt}\overline{\theta}_{mt}(x),$
where m=1,…,M0.
Consequently, we obtain the following linear programming RLP(X) as a linear relaxation of P(X) over the partition set X:
(20) $\mathrm{RLP}(X):\quad \min\ LF_0(x)\quad \text{s.t.}\quad LF_m^{+}(x)-UF_m^{-}(x)\le 0,\ m=1,\dots,M_0,\quad F_m^{+}(x)-F_m^{-}(x)\le 0,\ m=M_0+1,\dots,M,\quad x\in X.$
An important property of RLP(X) is that its optimal value V[RLP(X)] satisfies
(21)V[RLP(X)]≤V[P(X)],
and thus, from (21), the optimal value V[RLP(X)] of RLP(X) provides a valid lower bound for the optimal value V[P(X)] of P(X) over X.
Based on the above discussion, for any rectangle X, in order to obtain a lower bound LB(X) of the optimal value V[P(X)] of the problem P(X), we may compute LB(X) such that
(22)LB(X)=max{V[RLP(X)],F0(a)}.
Clearly, LB(X) defined in (22) satisfies
(23)F0(a)≤LB(X)≤V[P(X)]
and is consistent; hence it provides a valid lower bound and guarantees convergence of the algorithm.
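To indicate how the lower bounding step can be carried out in practice, the following sketch assembles a generic linear program of the form of RLP(X) and solves it with an off-the-shelf LP solver; the names `c`, `c0`, `A_ub`, `b_ub` and the use of SciPy are assumptions of this sketch, not the authors' simplex implementation.

```python
import numpy as np
from scipy.optimize import linprog

def lower_bound(c, c0, A_ub, b_ub, a, b, F0):
    """Lower bounding step (22) over the box X = [a, b].

    RLP(X) is assumed to be given in the generic affine form
        min  c^T x + c0   s.t.  A_ub x <= b_ub,  a <= x <= b,
    obtained by collecting the affine pieces of LF_0, LF_m^+ - UF_m^- and
    the linear constraints of (P).  F0 is the original objective of (P).
    """
    res = linprog(c, A_ub=A_ub, b_ub=b_ub,
                  bounds=list(zip(a, b)), method="highs")
    if not res.success:                # RLP(X) infeasible: the box can be discarded
        return np.inf, None
    v_rlp = res.fun + c0               # V[RLP(X)]
    return max(v_rlp, F0(a)), res.x    # LB(X) = max{V[RLP(X)], F0(a)}, eq. (22)
```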
3.2. Reduction Operations
Clearly, the smaller the rectangle X is, the tighter the lower bound LB(X) of P(X) will be, and therefore the closer the feasible solution of (P) will be to the optimal solution of (P). To this end, the following results give two reduction operations (reduction rules A and B) that reduce the size of the partitioned rectangle without losing any feasible solution currently still of interest.
3.2.1. Reduction Rule A
Rule A is based on the monotonic structure of the problem (P). At a given stage of the BRB algorithm for (P), for a rectangle X=[a,b] generated during the partitioning procedure and still of interest, let UB be the objective function value of the best feasible solution to problem (P) found so far. Given an ε>0, we want to find a feasible solution x∈X of (P) such that F0(x)≤UB-ε or else establish that no such x exists. The search for such an x can therefore be restricted to the set H∩[a,b], where
(24) $H:=\{x\mid F_0(x)\le UB-\varepsilon,\ F_m(x)\le 0,\ m=1,\dots,M\}.$
The reduction rule aims at replacing the rectangle [a,b] with a smaller rectangle [a′,b′]⊂[a,b] without losing any point x∈H∩[a,b], that is, such that H∩[a′,b′]=H∩[a,b]. The rectangle [a′,b′] satisfying this condition is denoted by redν[a,b] with
(25)ν=UB-ε.
To illustrate how redν[a,b]=[a′,b′] is deduced by this rule, we first define the following functions.
Definition 4.
Given two boxes [a,b] and [a′,b′] with [a′,b′]⊆[a,b], for i=1,…,n, m=1,…,M, the functions φmi(α), ψmi(α), and ψ0i(α):[0,1]→R are defined by
(26) $\varphi_{mi}(\alpha)=F_m^{-}\bigl(b-\alpha(b_i-a_i)e^{i}\bigr)-F_m^{+}(a),\qquad \psi_{mi}(\alpha)=F_m^{+}\bigl(a'+\alpha(b_i-a_i')e^{i}\bigr)-F_m^{-}(b),\qquad \psi_{0i}(\alpha)=F_0\bigl(a'+\alpha(b_i-a_i')e^{i}\bigr)-\nu,$
where $e^{i}$ denotes the $i$th unit vector of $\mathbb{R}^{n}$, that is, the vector with $e_i^{i}=1$ and $e_j^{i}=0$ for all $j\ne i$, and the functions $F_0(x)$, $F_m^{+}(x)$, and $F_m^{-}(x)$ are given in problem (P).
Clearly, the functions φmi(α), ψmi(α), and ψ0i(α) are either constant or strictly monotonic over the interval [0,1) from the properties of Fm-(x), Fm+(x), and F0(x). By using these functions, redν[a,b] can be given as follows.
Theorem 5.
(i) If F0(a)>ν or Fm+(a)-Fm-(b)>0 for some m=1,…,M, then redν[a,b]=[a′,b′]=∅.
(ii) If F0(a)≤ν and Fm+(a)-Fm-(b)≤0 for each m=1,…,M, then redν[a,b]=[a′,b′], where
(27) $a'=b-\sum_{i=1}^{n}\min_{m=1,\dots,M}\{\alpha_{mi}\}\,(b_i-a_i)\,e^{i},\qquad b'=a'+\sum_{i=1}^{n}\min_{m=0,1,\dots,M}\{\beta_{mi}\}\,(b_i-a_i')\,e^{i}$
are given by
(28) $\alpha_{mi}=\begin{cases}1, & \text{if } \varphi_{mi}(1)\ge 0,\\ \alpha \text{ with } \varphi_{mi}(\alpha)=0, & \text{otherwise},\end{cases}\qquad \beta_{mi}=\begin{cases}1, & \text{if } \psi_{mi}(1)\le 0,\\ \alpha \text{ with } \psi_{mi}(\alpha)=0, & \text{otherwise}.\end{cases}$
Proof.
(i) By the increasing property of F0(x), Fm+(x), and Fm-(x), if F0(a)>ν, then F0(x)≥F0(a)>ν for every x∈[a,b]. If there exists m∈{1,…,M} such that Fm+(a)-Fm-(b)>0, then Fm(x)=Fm+(x)-Fm-(x)≥Fm+(a)-Fm-(b)>0 for every x∈[a,b]. In both cases, H∩[a,b]=∅.
(ii) Given any point x∈[a,b] satisfying
(29)F0(x)≤ν,Fm+(x)-Fm-(x)≤0,m=1,…,M,
we will show that x∈[a′,b′]. Let
(30)αm′i=min{αmi∣m=1,…,M},βm′′i=min{βmi∣m=0,1,…,M}.
Firstly, we will show that x≥a′. If x≱a′, then there exists index i such that
(31) $x_i<a_i'=b_i-\alpha_{m'i}(b_i-a_i),\quad \text{that is,}\quad x_i=b_i-\alpha(b_i-a_i)\ \text{with}\ \alpha_{m'i}<\alpha\le 1.$
We consider the following two cases.
Case 1. If αm′i=1, then from (31) we have xi<ai′=bi-αm′i(bi-ai)=ai, contradicting x∈[a,b], which requires xi≥ai.
Case 2. If 0≤αm′i<1, then the function φm′i(α) must be strictly decreasing in the single variable α over the interval [0,1). Indeed, if φm′i(α) were not strictly decreasing, it would have to be constant over [0,1), since it is either constant or strictly monotonic. In this case, we have
(32)φm′i(1)=φm′i(0)=Fm′-(b)-Fm′+(a)≥0.
It follows from the definition of αm′i that αm′i=1, contradicting with 0≤αm′i<1.
Since the function φm′i(α) is strictly decreasing, it follows from (31) and the definition of αm′i that
(33)Fm′-(b-(bi-xi)ei)-Fm′+(a)=Fm′-(b-α(bi-ai)ei)-Fm′+(a)=φm′i(α)<φm′i(αm′i)=0;
hence,
(34)Fm′-(b-(bi-xi)ei)<Fm′+(a).
In addition, since Fm′-(x) is an increasing function in n-dimension variable x and x≤b-(bi-xi)ei, we have
(35)Fm′-(x)≤Fm′-(b-(bi-xi)ei)<Fm′+(a),
contradicting Fm′-(x)≥Fm′+(x)≥Fm′+(a).
Based on the above discussion, we have x≥a′; that is, x∈[a′,b] in either case.
Secondly, we show from x∈[a′,b] that
(36) $x\le b',\ \text{that is,}\ x\in[a',b'].$
Suppose that x≰b′; then there exists some i such that
(37)xi>bi′=ai′+βm′′i(bi-ai′);
that is, there exists α such that
(38)xi=ai′+α(bi-ai′),βm′′i<α≤1.
By the definition of βm′′i, there are the following two cases to consider.
Case 1. If βm′′i=1, then from (38) we have xi>bi′=ai′+(bi-ai′)=bi, contradicting x∈[a′,b], which requires xi≤bi.
Case 2. If 0≤βm′′i<1, then the function ψm′′i(α) is strictly increasing in the single variable α. Indeed, if ψm′′i(α) were not strictly increasing, it would have to be constant over the interval [0,1). In this case, we have
(39)ψ0i(1)=ψ0i(0)=F0(a′)-ν≤0,
or
(40)ψm′′i(1)=ψm′′i(0)=Fm′′+(a′)-Fm′′-(b)≤0.
It follows from the definition of βm′′i that βm′′i=1, contradicting 0≤βm′′i<1.
Since the function ψm′′i(α) is strictly increasing, it follows from (38) and the definition of βm′′i that
(41)F0(a′+α(bi-ai′)ei)-ν=ψ0i(α)>ψ0i(β0i)=0,
or
(42)Fm′′+(a′+α(bi-ai′)ei)-Fm′′-(b)=ψm′′i(α)>ψm′′i(βm′′i)=0.
Assume that (41) holds; we can derive from (38) that
(43)F0(a′+(xi-ai′)ei)=F0(a′+α(bi-ai′)ei)>ν.
It follows from x≥a′+(xi-ai′)ei and F0(x) increasing that
(44)F0(x)≥F0(a′+(xi-ai′)ei)>ν,
contradicting F0(x)≤ν.
If (42) holds, we obtain from (38) that
(45)Fm′′+(a′+(xi-ai′)ei)=Fm′′+(a′+α(bi-ai′)ei)>Fm′′-(b);
since x≥a′+(xi-ai′)ei and Fm′′+(x) is increasing, we have
(46)Fm′′+(x)≥Fm′′+(a′+(xi-ai′)ei)>Fm′′-(b).
This contradicts Fm′′+(x)≤Fm′′-(x)≤Fm′′-(b).
From the above results, we must have x≤b′, that is, x∈[a′,b′], in both cases, and this completes the proof.
Remark 6.
Clearly, for any i=1,…,n, αmi(m=1,…,M) and βmi(m=0,1,…,M) defined in Theorem 5 must exist and be unique, since the functions F0(x), Fm+(x), and Fm-(x) are all continuous and increasing.
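A minimal sketch (hypothetical helper names) of how the quantities αmi in Theorem 5 can be obtained in practice: since each φmi is continuous and either constant or strictly monotonic in α, its zero on [0,1) can be located by bisection whenever φmi(0)≥0 and φmi(1)<0; the βmi are handled analogously with the ψ functions.

```python
def bisect_zero(g, tol=1e-8):
    """Return alpha in [0, 1] with g(alpha) = 0, for a monotone continuous g
    whose values g(0) and g(1) have opposite signs (bisection search)."""
    lo, hi = 0.0, 1.0
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if g(lo) * g(mid) <= 0.0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

def alpha_mi(F_plus_m, F_minus_m, a, b, i):
    """alpha_{mi} of (28): 1 if phi_{mi}(1) >= 0, else the zero of phi_{mi}."""
    e_i = [0.0] * len(a); e_i[i] = 1.0
    phi = lambda alpha: F_minus_m([b[k] - alpha * (b[i] - a[i]) * e_i[k]
                                   for k in range(len(b))]) - F_plus_m(a)
    return 1.0 if phi(1.0) >= 0.0 else bisect_zero(phi)
```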
3.2.2. Reduction Rule B
For any x∈X=(Xi)n×1 with Xi=[ai,bi](i=1,…,n), without loss of generality, we assume the above relaxation linear problem RLP(X) can be rewritten as
(47) $\mathrm{RLP}(X):\quad \min\ \sum_{i=1}^{n}\lambda_{0i}x_i+t_0\quad \text{s.t.}\quad \sum_{i=1}^{n}\lambda_{ji}x_i+t_j\le 0,\ j=1,\dots,M,\quad x\in X\subseteq X^0.$
Let
(48) $RL_j=\sum_{i=1}^{n}\min\{\lambda_{ji}a_i,\lambda_{ji}b_i\}+t_j,\quad j=0,1,\dots,M,$
(49) $\rho_i=\frac{UB-RL_0+\min\{\lambda_{0i}a_i,\lambda_{0i}b_i\}}{\lambda_{0i}}\quad \text{with}\ \lambda_{0i}\ne 0,$
(50) $\tau_{ji}=\frac{-RL_j+\min\{\lambda_{ji}a_i,\lambda_{ji}b_i\}}{\lambda_{ji}}\quad \text{with}\ \lambda_{ji}\ne 0,$
where j=1,…,M, i=1,…,n.
Theorem 7.
For any rectangle X=(Xi)n×1⊆X0, if RL0>UB, then there exists no optimal solution of RLP(X0) over X. Otherwise, consider the following two cases: if there exists some h∈{1,…,n} satisfying λ0h>0 and ρh<bh, then there is no optimal solution of RLP(X0) over Xa; conversely, if λ0h<0 and ρh>ah for some h∈{1,…,n}, then there exists no optimal solution of RLP(X0) over Xb, where
(51) $X^{a}=(X_i^{a})_{n\times 1}\subseteq X^0\ \text{with}\ X_i^{a}=\begin{cases}X_i, & \text{if } i\ne h,\\ (\rho_h,b_h]\cap X_h, & \text{if } i=h,\end{cases}\qquad X^{b}=(X_i^{b})_{n\times 1}\subseteq X^0\ \text{with}\ X_i^{b}=\begin{cases}X_i, & \text{if } i\ne h,\\ [a_h,\rho_h)\cap X_h, & \text{if } i=h.\end{cases}$
Theorem 8.
For any rectangle X=(Xi)n×1⊆X0, if RLj>0 for some j∈{1,…,M}, then there exists no feasible solution of problem RLP(X0) over X. Otherwise, consider the following two cases: if there exist some index h∈{1,…,n} and j∈{1,…,M} satisfying λjh>0 and τjh<bh, then there is no feasible solution of the problem RLP(X0) over Xc; conversely, if λjh<0 and τjh>ah for some j∈{1,…,M} and h∈{1,…,n}, then there exists no feasible solution of the problem RLP(X0) over Xd, where
(52) $X^{c}=(X_i^{c})_{n\times 1}\subseteq X^0\ \text{with}\ X_i^{c}=\begin{cases}X_i, & \text{if } i\ne h,\\ (\tau_{jh},b_h]\cap X_h, & \text{if } i=h,\end{cases}\qquad X^{d}=(X_i^{d})_{n\times 1}\subseteq X^0\ \text{with}\ X_i^{d}=\begin{cases}X_i, & \text{if } i\ne h,\\ [a_h,\tau_{jh})\cap X_h, & \text{if } i=h.\end{cases}$
Proof.
The proofs of Theorems 7 and 8 are similar to those of Theorems 2 and 3 in [27], respectively; therefore, they are omitted here.
By Theorems 7 and 8, we can give a new reduction rule B to reject some regions in which the globally optimal solution of RLP(X0) does not exist. The computation procedure of this rule is summarized as follows.
Step 1.
Compute RL0 in (48). If RL0>UB, let X=∅; otherwise, compute ρi(i=1,…,n) in (49). If λ0h>0 and ρh<bh for some h∈{1,…,n}, then let bh=ρh and X=(Xi)n×1 with Xi=[ai,bi](i=1,…,n). If λ0h<0 and ρh>ah for some h∈{1,…,n}, then let ah=ρh and X=(Xi)n×1 with Xi=[ai,bi](i=1,…,n).
Step 2.
For any j=1,…,M, compute RLj in (48). If RLj>0 for some j∈{1,…,M}, then let X=∅; otherwise, compute τji in (50) (j=1,…,M,i=1,…,n). If λjh>0 and τjh<bh for some j∈{1,…,M} and h∈{1,…,n}, then let bh=τjh and X=(Xi)n×1 with Xi=[ai,bi](i=1,…,n). If λjh<0 and τjh>ah for some j∈{1,…,M} and h∈{1,…,n}, then let ah=τjh and X=(Xi)n×1 with Xi=[ai,bi](i=1,…,n).
Rule B provides a possibility to cut away all or a large part of the rectangle X which is currently investigated by the algorithm procedure.
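The following sketch illustrates Steps 1 and 2 above for a single affine row; the function name and data layout are assumptions of this illustration, not the authors' code. For the objective row of (47) the threshold is UB, and for a constraint row it is 0.

```python
import numpy as np

def reduce_box_rule_B(lam, t, a, b, upper=0.0):
    """One pass of reduction rule B for the row  sum_i lam[i]*x_i + t <= upper.

    For the objective row of RLP(X) take upper = UB (Step 1); for a constraint
    row take upper = 0 (Step 2).  Returns the tightened box (a, b), or None if
    the row already shows that no point of interest lies in [a, b].
    """
    lam = np.asarray(lam, dtype=float)
    a = np.asarray(a, dtype=float).copy()
    b = np.asarray(b, dtype=float).copy()
    RL = np.sum(np.minimum(lam * a, lam * b)) + t            # eq. (48)
    if RL > upper:
        return None                                          # whole box rejected
    for h in range(len(lam)):
        if lam[h] == 0.0:
            continue
        # rho_h of (49) (objective row) or tau_{jh} of (50) (constraint row)
        rho = (upper - RL + min(lam[h] * a[h], lam[h] * b[h])) / lam[h]
        if lam[h] > 0.0 and rho < b[h]:
            b[h] = rho        # x_h > rho cannot contain the sought solutions
        elif lam[h] < 0.0 and rho > a[h]:
            a[h] = rho        # x_h < rho cannot contain the sought solutions
    return a, b
```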
4. Algorithm and Its Convergence
In this section, a branch-reduce-bound (BRB) algorithm is developed to solve the problem (P) based on the former discussion. This method needs to solve a sequence of (RLP) problems over partitioned subsets of X0.
The BRB algorithm is based on partitioning the rectangle X0 into subrectangles, each associated with a node of the branch and bound tree. Hence, at any stage k of the algorithm, suppose that we have a collection of active nodes denoted by 𝒯k, each associated with a rectangle X⊆X0. For each such node X=[a,b], we will compute a lower bound LB(X) of the optimal objective function value of (P) via the optimal value of RLP(X) and F0(a), so the lower bound of the optimal value of (P) at stage k is given by min{LB(X)∣X∈𝒯k}. We then select an active node, subdivide its associated rectangle into two subrectangles according to the standard branching rule, reduce each new rectangle, and compute the lower bounds as before. At the same time, if necessary, we update the upper bound UBk. Upon fathoming any nonimproving node, we obtain the collection of active nodes for the next stage, and this process is repeated until convergence is obtained.
Algorithm 1.
Consider the following steps.
Step 0 (Initialization). Choose the convergence tolerance ε>0. Let 𝒫0={X0} and 𝒯0={X0}. If some feasible solutions are available, add them to H and let UB0=min{F0(x)|x∈H}; otherwise, let H=∅ and UB0=+∞. Set k=0.
Step 1 (Reduction). (i) Delete every box [a,b]∈𝒫k such that F0(a)>UBk-ε or Fm+(a)-Fm-(b)>0 for some m∈{1,…,M}, and still denote the remaining collection by 𝒫k. If 𝒫k≠∅, apply reduction rule A described in Theorem 5 of Section 3.2 to each box [a,b]∈𝒫k. Let 𝒫k′={redν[a,b]∣[a,b]∈𝒫k} with ν=UBk-ε.
(ii) If 𝒫k′≠∅, apply reduction rule B of Section 3.2.2 to each box [a,b]∈𝒫k′ that is currently under investigation, and denote the resulting collection by 𝒫k′′.
Step 2 (Bounding). If 𝒫k′′≠∅, do the following for each [a,b]∈𝒫k′′.
(i) Solve the problem RLP(X) to obtain the optimal solution x(X) and the optimal value V[RLP(X)]. Let LB(X)=max{V[RLP(X)],F0(a)}.
(ii) If Fm(a)≤0 for every m=1,…,M, then set H=H∪{a}.
(iii) If F0(b)>UB-ε, compute a point x^=a+θ(b-a) such that F0(a+θ(b-a))=UB-ε; otherwise, let x^=b.
(iv) If H≠∅, define the new upper bound UBk=min{F0(x)∣x∈H}, and the best known feasible point is denoted by x*=argmin{F0(x)∣x∈H}. Set 𝒯k=(𝒯k∖Xk)∪𝒫k′′.
Step 3 (Convergence Checking). Set 𝒯k+1=𝒯k∖{X∣LB(X)>UBk-ε,X∈𝒯k}.
If 𝒯k+1=∅, then stop: if UBk=+∞, the problem is infeasible; otherwise, UBk is the optimal value and x* is the optimal solution. Otherwise, select an active node Xk+1=argmin{LB(X)∣X∈𝒯k+1} for further consideration.
Step 4 (Branching). Divide Xk+1 into two new subrectangles using the standard branch rule and let 𝒫k+1 be the collection of these two subrectangles. Set k=k+1 and return to Step 1.
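A high-level skeleton of Algorithm 1 is sketched below in Python-like form; the callables `reduce_boxes`, `lower_bound`, `branch`, and `is_feasible` stand for the operations of Sections 3.1 and 3.2 and are placeholders, and boxes are assumed to be stored as pairs (a, b). This is a structural sketch under those assumptions, not the authors' Fortran code.

```python
import math

def BRB(X0, F0, eps, reduce_boxes, lower_bound, branch, is_feasible):
    """Skeleton of the branch-reduce-bound procedure of Algorithm 1."""
    UB, x_best = math.inf, None
    P = [X0]          # boxes produced by the latest branching (P_k)
    active = []       # active nodes, stored as (LB(X), X)
    while True:
        # Step 1 (reduction): rules A and B applied to the newest boxes
        P = reduce_boxes(P, UB - eps)
        # Step 2 (bounding): lower bounds, and upper-bound update at box corners
        for X in P:
            a, _ = X
            LB_X, _ = lower_bound(X)          # max{V[RLP(X)], F0(a)}, eq. (22)
            if is_feasible(a) and F0(a) < UB:
                UB, x_best = F0(a), a
            active.append((LB_X, X))
        # Step 3 (convergence checking): fathom nonimproving nodes
        active = [(lb, X) for (lb, X) in active if lb <= UB - eps]
        if not active:
            return UB, x_best     # UB = +inf means problem (P) is infeasible
        active.sort(key=lambda node: node[0])
        _, X_sel = active.pop(0)  # node with the smallest lower bound
        # Step 4 (branching): bisect the selected box into two subboxes
        P = branch(X_sel)
```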
Convergence Analysis. In this subsection, we establish the convergence of the proposed algorithm. Assume that the number of globally optimal solutions of (SGP) is finite. Then the proposed algorithm either terminates finitely at a globally optimal solution or generates an infinite sequence of iteration nodes. If the algorithm terminates at some iteration k, then obviously the point x* is a globally optimal solution and UB is the optimal value of problem (P). If the algorithm is infinite, its convergence is discussed as follows.
Theorem 9.
Assume that the above algorithm is infinite; then it generates an infinite sequence of iterations such that along any infinite branch-and-bound tree any accumulation point of the sequence {LBk} will be the global minimum of problem (P).
Proof.
Since the algorithm is infinite, it generates an infinite sequence {Xk} such that a subsequence {Xkl} of {Xk} satisfies Xkl+1⊂Xkl for l=1,2,…. In this case, for every iteration k=0,1,2,…, from [28, 29] there is at least an infinite subsequence {LBkl} of {LBk} such that
(53) $LB_{k_l}\le \min_{x\in \mathcal{X}}F_0(x),\qquad X^{k_l}\in \arg\min_{X\in \mathcal{T}_{k_l}}LB(X),\qquad x^{k_l}=x(X^{k_l})\in X^{k_l}\subseteq X^0,$
where $\mathcal{X}$ denotes the feasible region of problem (P). We see from [28–30] that $\{LB_{k_l}\}$ is a nondecreasing sequence bounded above by $\min_{x\in\mathcal{X}}F_0(x)$, which guarantees the existence of the limit $\lim_{l\to\infty}LB_{k_l}:=LB$ with $LB\le \min_{x\in\mathcal{X}}F_0(x)$. Since $\{x^{k_l}\}$ is an infinite sequence on a compact set, there exists a convergent subsequence $\{x^{q}\}$ of $\{x^{k_l}\}$ satisfying $\lim_{q\to\infty}x^{q}=\hat{x}$, $x^{q}\in X^{q}$, and $LB_q=LB(X^q)=LF_0(x^q)$, where $\{X^q\}$ is a subsequence of $\{X^{k_l}\}$. The linear functions $LF_j$ ($j=0,1,\dots,M$) used in the problem RLP(X) are strongly consistent on $X^0$. Thus, $\lim_{q\to\infty}LB_q=LB=F_0(\hat{x})$. All that remains is to show that $\hat{x}\in\mathcal{X}$. Since $X^0$ is a closed set, it follows that $\hat{x}\in X^0$. Suppose that $\hat{x}\notin\mathcal{X}$. Then there exists some $F_j$, $j\in\{1,\dots,M\}$, such that $F_j(\hat{x})=\delta>0$. Since $LF_j(x)$ is continuous, the sequence $\{LF_j(x^q)\}$ converges to $F_j(\hat{x})$ as $q\to\infty$. By the definition of convergence, there exists $q_\delta$ such that $|LF_j(x^q)-F_j(\hat{x})|<\delta$ for $q>q_\delta$, so for $q>q_\delta$ we have $LF_j(x^q)>0$, which implies that the problem RLP($X^q$) is infeasible. This contradicts the assumption that $x^q=x(X^q)$. Therefore, $\hat{x}\in\mathcal{X}$; that is, $LB=F_0(\hat{x})=\min_{x\in\mathcal{X}}F_0(x)$, and the proof is complete.
5. Numerical Results
To verify the performance of the proposed algorithm, we give computational results for ten test problems. The algorithm is coded in Compaq Visual Fortran, and the simplex method is applied to solve the relaxation linear programming problems. All test problems are run on a microcomputer with an Athlon(tm) 2.31 GHz CPU and 960 MB of RAM.
From the computational results, we can see that the proposed BRB algorithm can solve the problem (SGP) effectively. This illustrates the potential advantage of the proposed algorithm: not only is a feasible optimal solution obtained, but also less computational effort may be required for finding a better objective function value.
6. Conclusion
To globally solve the problem (SGP), a new branch-reduce-bound algorithm is proposed, based on an equivalent monotonic optimization problem and a linear relaxation method. The algorithm can attain the global minimum through the successive refinement of a linear relaxation and the subsequent solutions of a series of linear programming problems. To improve the convergence speed, two range reduction operations are proposed, which can cut away a large part of the region in which the global optimal solution of (SGP) does not exist. The convergence of the algorithm is proved and numerical results are reported to vindicate the feasibility and effectiveness of the proposed algorithm.
Conflict of Interests
The authors declare that there is no conflict of interests regarding the publication of this paper.
Acknowledgment
This research is supported by the National Natural Science Foundation of China (Grants 11171094 and 11171368).
References
[1] P. Hansen and B. Jaumard, "Reduction of indefinite quadratic programs to bilinear programs," Journal of Global Optimization, vol. 2, no. 1, pp. 41–60, 1992.
[2] C. S. Beightler and D. T. Phillips, Applied Geometric Programming, John Wiley & Sons, New York, NY, USA, 1976.
[3] M. Avriel and A. C. Williams, "An extension of geometric programming with applications in engineering optimization," Journal of Engineering Mathematics, vol. 5, no. 2, pp. 187–194, 1971.
[4] N. K. Jha, "Geometric programming based robot control design," Computers & Industrial Engineering, vol. 29, no. 1–4, pp. 631–635, 1995.
[5] T. R. Jefferson and C. H. Scott, "Generalized geometric programming applied to problems of optimal control. I. Theory," Journal of Optimization Theory and Applications, vol. 26, no. 1, pp. 117–129, 1978.
[6] J. G. Ecker, "Geometric programming: methods, computations and applications," SIAM Review, vol. 22, no. 3, pp. 338–362, 1980.
[7] R. Horst and H. Tuy, Global Optimization: Deterministic Approaches, Springer, Berlin, Germany, 2nd edition, 1993.
[8] C. A. Floudas and V. Visweswaran, "Quadratic optimization," in Handbook of Global Optimization, R. Horst and P. M. Pardalos, Eds., pp. 217–269, Kluwer Academic Publishers, 1995.
[9] Y. K. Sui, "The expansion of functions under transformation and its application to optimization," Computer Methods in Applied Mechanics and Engineering, vol. 113, no. 3-4, pp. 253–262, 1994.
[10] K. Das, T. K. Roy, and M. Maiti, "Multi-item inventory model with quantity-dependent inventory costs and demand-dependent unit cost under imprecise objective and restrictions: a geometric programming approach," Production Planning & Control, vol. 11, no. 8, pp. 781–788, 2000.
[11] R. J. Duffin and E. L. Peterson, "Duality theory for geometric programming," SIAM Journal on Applied Mathematics, vol. 14, pp. 1307–1349, 1966.
[12] R. J. Duffin, E. L. Peterson, and C. Zener, "The geometric inequality and the main lemma," in Geometric Programming: Theory and Application, pp. 115–140, John Wiley & Sons, New York, NY, USA, 1967.
[13] R. J. Duffin and E. L. Peterson, "Geometric programming with signomials," Journal of Optimization Theory and Applications, vol. 11, pp. 3–35, 1973.
[14] U. Passy, "Generalized weighted mean programming," SIAM Journal on Applied Mathematics, vol. 20, pp. 763–778, 1971.
[15] U. Passy and D. J. Wilde, "Generalized polynomial optimization," SIAM Journal on Applied Mathematics, vol. 15, pp. 1344–1356, 1967.
[16] K. O. Kortanek, X. Xu, and Y. Ye, "An infeasible interior-point algorithm for solving primal and dual geometric programs," Mathematical Programming, vol. 76, no. 1, pp. 155–181, 1997.
[17] H. D. Sherali and C. H. Tuncbilek, "A global optimization algorithm for polynomial programming problems using a reformulation-linearization technique," Journal of Global Optimization, vol. 2, no. 1, pp. 101–112, 1992.
[18] H. D. Sherali and C. H. Tuncbilek, "A reformulation-convexification approach for solving nonconvex quadratic programming problems," Journal of Global Optimization, vol. 7, no. 1, pp. 1–31, 1995.
[19] H. D. Sherali, "Global optimization of nonconvex polynomial programming problems having rational exponents," Journal of Global Optimization, vol. 12, no. 3, pp. 267–283, 1998.
[20] C. D. Maranas and C. A. Floudas, "Global optimization in generalized geometric programming," Computers & Chemical Engineering, vol. 21, no. 4, pp. 351–369, 1997.
[21] P. P. Shen and K. C. Zhang, "Global optimization of signomial geometric programming using linear relaxation," Applied Mathematics and Computation, vol. 150, no. 1, pp. 99–114, 2004.
[22] P. P. Shen, Y. Ma, and Y. Q. Chen, "A robust algorithm for generalized geometric programming," Journal of Global Optimization, vol. 41, no. 4, pp. 593–612, 2008.
[23] Y. J. Wang, T. Li, and Z. A. Liang, "A general algorithm for solving generalized geometric programming with nonpositive degree of difficulty," Computational Optimization and Applications, vol. 44, no. 1, pp. 139–158, 2009.
[24] S. J. Qu, K. C. Zhang, and F. S. Wang, "A global optimization using linear relaxation for generalized geometric programming," European Journal of Operational Research, vol. 190, no. 2, pp. 345–356, 2008.
[25] H. Tuy, "Effect of the subdivision strategy on convergence and efficiency of some global optimization algorithms," Journal of Global Optimization, vol. 1, no. 1, pp. 23–36, 1991.
[26] J.-M. Peng and Y.-X. Yuan, "Optimality conditions for the minimization of a quadratic with two quadratic constraints," SIAM Journal on Optimization, vol. 7, no. 3, pp. 579–594, 1997.
[27] P. P. Shen, X. D. Bai, and W. M. Li, "A new accelerating method for globally solving a class of nonconvex programming problems," Nonlinear Analysis: Theory, Methods & Applications, vol. 71, no. 7-8, pp. 2866–2876, 2009.
[28] R. Horst, P. M. Pardalos, and N. V. Thoai, Introduction to Global Optimization, Kluwer Academic Publishers, Dordrecht, The Netherlands, 2000.
[29] R. Horst and H. Tuy, Global Optimization: Deterministic Approaches, Springer, Berlin, Germany, 3rd edition, 2003.
[30] R. Horst, "Deterministic global optimization with partition sets whose feasibility is not known: application to concave minimization, reverse convex constraints, DC-programming, and Lipschitzian optimization," Journal of Optimization Theory and Applications, vol. 58, no. 1, pp. 11–37, 1988.
[31] S.-J. Qu, K.-C. Zhang, and Y. Ji, "A new global optimization algorithm for signomial geometric programming via Lagrangian relaxation," Applied Mathematics and Computation, vol. 184, no. 2, pp. 886–894, 2007.
[32] Y. J. Wang and Z. A. Liang, "A deterministic global optimization algorithm for generalized geometric programming," Applied Mathematics and Computation, vol. 168, no. 1, pp. 722–737, 2005.
[33] P. P. Shen and H. W. Jiao, "A new rectangle branch-and-pruning approach for generalized geometric programming," Applied Mathematics and Computation, vol. 183, no. 2, pp. 1027–1038, 2006.
[34] M. J. Rijckaert and X. M. Martens, "Comparison of generalized geometric programming algorithms," Journal of Optimization Theory and Applications, vol. 26, pp. 205–241, 1978.