Mathematical Problems in Engineering, Volume 2011, Article ID 409491, doi:10.1155/2011/409491

Research Article

A Branch-and-Reduce Approach for Solving Generalized Linear Multiplicative Programming

Chun-Feng Wang,¹,² San-Yang Liu,¹ and Geng-Zhong Zheng³

¹ Department of Mathematical Sciences, Xidian University, Xi'an 710071, China
² Department of Mathematics, Henan Normal University, Xinxiang 453007, China
³ School of Computer Science and Technology, Xidian University, Xi'an 710071, China

Academic Editor: Victoria Vampa

Received 15 March 2011; Accepted 12 May 2011

Copyright © 2011 Chun-Feng Wang et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

We consider a branch-and-reduce approach for solving generalized linear multiplicative programming. First, a new lower approximate linearization method is proposed; then, by using this linearization method, the initial nonconvex problem is reduced to a sequence of linear programming problems. Some techniques for improving the overall performance of the algorithm are presented. The proposed algorithm is proved to be convergent, and some experiments are provided to show the feasibility and efficiency of the algorithm.

1. Introduction

In this paper, the following generalized linear multiplicative programming problem is considered:
$$(\mathrm{P}):\quad \min\ \prod_{i=1}^{p_0}\bigl(c_{0i}^{T}x+d_{0i}\bigr)^{\gamma_{0i}}\quad \text{s.t.}\ \prod_{i=1}^{p_j}\bigl(c_{ji}^{T}x+d_{ji}\bigr)^{\gamma_{ji}}\le\beta_j,\ j=1,\ldots,m,\quad x\in X^0=[l,u]\subset\mathbb{R}^n,$$
where $c_{ji}=(c_{ji1},c_{ji2},\ldots,c_{jin})^{T}\in\mathbb{R}^n$, $d_{ji}\in\mathbb{R}$, $\gamma_{ji}\in\mathbb{R}$, $\beta_j>0$, and, for all $x\in X^0$, $c_{ji}^{T}x+d_{ji}>0$, $j=0,\ldots,m$, $i=1,\ldots,p_j$.

Since a large number of practical applications in various fields can be put into the form of problem (P), including VLSI chip design [1], decision tree optimization [2], multicriteria optimization [3], robust optimization [4], and so on, this problem has attracted considerable attention in the past years.

It is well known that the product of affine functions need not be (quasi)convex; thus, the problem can have multiple locally optimal solutions, many of which fail to be globally optimal, that is, problem (P) is multiextremal [5].

In the last decade, many solution algorithms have been proposed for globally solving special forms of (P). They can be generally classified as outer-approximation methods [6], decomposition methods [7], finite branch and bound algorithms [8, 9], and cutting plane methods [10]. However, global optimization algorithms for the general form (P) have been little studied. Recently, several algorithms were presented for solving problem (P) [11–15].

The aim of this paper is to provide a new branch-and-reduce algorithm for globally solving problem (P). Firstly, by using the property of the logarithmic function, we derive an equivalent problem (Q) of the initial problem (P), which has the same optimal solution as (P). Secondly, by utilizing the special structure of (Q), we present a new linear relaxation technique, which can be used to construct the linear relaxation programming problem for (Q). Finally, the initial nonconvex problem (P) is systematically converted into a series of linear programming problems, whose solutions approximate the globally optimal solution of (Q) arbitrarily closely through a successive refinement process.

The main features of this algorithm are as follows: (1) the problem investigated in this paper has a more general form than those in [11–15]; (2) a new linearization method for solving the problem (Q) is proposed; (3) the generated linear relaxation programming problems are embedded within a branch and bound algorithm without increasing the number of variables and constraints; (4) some techniques are proposed to improve the convergence speed of our algorithm.

This paper is organized as follows. In Section 2, an equivalent transformation and a new linear relaxation technique are presented for generating the linear relaxation programming problem (LRP) for (Q), which can provide a lower bound for the optimal value of (Q). In Section 3, in order to improve the convergence speed of our algorithm, we present a reducing technique. In Section 4, the global optimization algorithm is described in which the linear relaxation problem and reducing technique are embedded, and the convergence of this algorithm is established. Numerical results are reported to show the feasibility of our algorithm in Section 5.

2. Linear Relaxation Problem

Without loss of generality, assume that $\gamma_{ji}>0$ for $1\le i\le T_j$ and $\gamma_{ji}<0$ for $T_j+1\le i\le p_j$, $j=0,\ldots,m$.

By using the property of the logarithmic function, the equivalent problem (Q) of (P) can be derived, which has the same optimal solution as (P):
$$\min\ \phi_0(x)=\sum_{i=1}^{T_0}\gamma_{0i}\ln\bigl(c_{0i}^{T}x+d_{0i}\bigr)+\sum_{i=T_0+1}^{p_0}\gamma_{0i}\ln\bigl(c_{0i}^{T}x+d_{0i}\bigr)$$
$$\text{s.t.}\quad \phi_j(x)=\sum_{i=1}^{T_j}\gamma_{ji}\ln\bigl(c_{ji}^{T}x+d_{ji}\bigr)+\sum_{i=T_j+1}^{p_j}\gamma_{ji}\ln\bigl(c_{ji}^{T}x+d_{ji}\bigr)\le\ln\beta_j,\quad j=1,\ldots,m,$$
$$x\in X^0=[l,u]\subset\mathbb{R}^n.$$

Thus, for solving problem (P), we may solve its equivalent problem (Q) instead. Toward this end, we present a branch-and-reduce algorithm. In this algorithm, the principal aim is to construct linear relaxation programming problem (LRP) for (Q), which can provide a lower bound for the optimal value of (Q).

Suppose that $X=[\underline{x},\bar{x}]$ represents either the initial rectangle of problem (Q) or a modified rectangle defined for some partitioned subproblem in the branch and bound scheme. The problem (LRP) can be realized by underestimating every function $\phi_j(x)$ with a linear relaxation function $\phi_j^{l}(x)$ $(j=0,\ldots,m)$. The details of this linearization method for generating relaxations are given below.

Consider the function $\phi_j(x)$ $(j=0,\ldots,m)$. Let $\phi_{j1}(x)=\sum_{i=1}^{T_j}\gamma_{ji}\ln(c_{ji}^{T}x+d_{ji})$ and $\phi_{j2}(x)=\sum_{i=T_j+1}^{p_j}\gamma_{ji}\ln(c_{ji}^{T}x+d_{ji})$; then $\phi_{j1}(x)$ and $\phi_{j2}(x)$ are concave and convex, respectively.

First, we consider the function $\phi_{j1}(x)$. For convenience of expression, we introduce the following notations:
$$X_{ji}=c_{ji}^{T}x+d_{ji}=\sum_{t=1}^{n}c_{jit}x_t+d_{ji},\qquad \underline{X}_{ji}=\sum_{t=1}^{n}\min\{c_{jit}\underline{x}_t,\,c_{jit}\bar{x}_t\}+d_{ji},$$
$$\bar{X}_{ji}=\sum_{t=1}^{n}\max\{c_{jit}\underline{x}_t,\,c_{jit}\bar{x}_t\}+d_{ji},\qquad K_{ji}=\frac{\ln\bar{X}_{ji}-\ln\underline{X}_{ji}}{\bar{X}_{ji}-\underline{X}_{ji}},$$
$$f_{ji}(x)=\ln\bigl(c_{ji}^{T}x+d_{ji}\bigr)=\ln X_{ji},\qquad h_{ji}(x)=\ln\underline{X}_{ji}+K_{ji}\bigl(X_{ji}-\underline{X}_{ji}\bigr)=\ln\underline{X}_{ji}+K_{ji}\Bigl(\sum_{t=1}^{n}c_{jit}x_t+d_{ji}-\underline{X}_{ji}\Bigr).$$

By Theorem 1 in [15], we can derive the lower bound function $\phi_{j1}^{l}(x)$ of $\phi_{j1}(x)$ as follows:
$$\phi_{j1}^{l}(x)=\sum_{i=1}^{T_j}\gamma_{ji}h_{ji}(x)\le\sum_{i=1}^{T_j}\gamma_{ji}f_{ji}(x)=\phi_{j1}(x).$$
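The two ingredients of this bound, the interval of the affine form over the box and the chord of $\ln$ over that interval, can be sketched numerically as follows (Python is used only for illustration, since the paper's experiments were coded in Matlab, and the helper names are hypothetical):

```python
import math

def interval_bounds(c, d, lo, hi):
    """Range [X_lo, X_hi] of the affine form c^T x + d over the box [lo, hi]:
    each coordinate contributes its own min/max independently."""
    x_lo = d + sum(min(ci * l, ci * u) for ci, l, u in zip(c, lo, hi))
    x_hi = d + sum(max(ci * l, ci * u) for ci, l, u in zip(c, lo, hi))
    return x_lo, x_hi

def secant_log(x, x_lo, x_hi):
    """Chord of ln through (x_lo, ln x_lo) and (x_hi, ln x_hi).
    Since ln is concave, the chord underestimates ln on [x_lo, x_hi]."""
    k = (math.log(x_hi) - math.log(x_lo)) / (x_hi - x_lo)  # the slope K_ji
    return math.log(x_lo) + k * (x - x_lo)
```

Multiplying the chord by $\gamma_{ji}>0$ and summing preserves the underestimation, which is exactly the bound for $\phi_{j1}$.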

Second, we consider the function $\phi_{j2}(x)$ $(j=0,\ldots,m)$. Since $\phi_{j2}(x)$ is a convex function, by the property of convex functions we have
$$\phi_{j2}(x)\ge\phi_{j2}(x_{\mathrm{mid}})+\nabla\phi_{j2}(x_{\mathrm{mid}})^{T}(x-x_{\mathrm{mid}})=\phi_{j2}^{l}(x),$$
where $x_{\mathrm{mid}}=\frac{1}{2}(\underline{x}+\bar{x})$ and
$$\nabla\phi_{j2}(x)=\Biggl(\sum_{i=T_j+1}^{p_j}\frac{\gamma_{ji}c_{ji1}}{c_{ji}^{T}x+d_{ji}},\ \ldots,\ \sum_{i=T_j+1}^{p_j}\frac{\gamma_{ji}c_{jin}}{c_{ji}^{T}x+d_{ji}}\Biggr)^{T}.$$
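This tangent-plane underestimator can be sketched as follows (an illustrative Python sketch with hypothetical names, not the paper's Matlab implementation):

```python
import math

def convex_part_underestimator(gammas, cs, ds, lo, hi):
    """Tangent underestimator of phi_{j2}(x) = sum_i gamma_i * ln(c_i^T x + d_i)
    (every gamma_i < 0, so phi_{j2} is convex) taken at the box midpoint."""
    xmid = [(l + u) / 2.0 for l, u in zip(lo, hi)]

    def aff(c, d, x):
        return d + sum(ci * xi for ci, xi in zip(c, x))

    def phi2(x):
        return sum(g * math.log(aff(c, d, x)) for g, c, d in zip(gammas, cs, ds))

    # gradient of phi2 at xmid: component t is sum_i gamma_i * c_{it} / (c_i^T xmid + d_i)
    grad = [sum(g * c[t] / aff(c, d, xmid) for g, c, d in zip(gammas, cs, ds))
            for t in range(len(xmid))]
    fmid = phi2(xmid)

    def tangent(x):
        # by convexity, tangent(x) <= phi2(x) for every x
        return fmid + sum(gt * (xt - mt) for gt, xt, mt in zip(grad, x, xmid))

    return phi2, tangent
```

For instance, with a single term $-\ln(x+1)$ on $[1,3]$ the tangent at the midpoint touches the function there and lies below it at both endpoints.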

Finally, from (2.2) and (2.3), for all $x\in X$, we have
$$\phi_j^{l}(x)=\phi_{j1}^{l}(x)+\phi_{j2}^{l}(x)\le\phi_j(x).$$

Theorem 2.1.

For all $x\in X$, consider the functions $\phi_j(x)$ and $\phi_j^{l}(x)$, $j=0,\ldots,m$. Then the difference between $\phi_j(x)$ and $\phi_j^{l}(x)$ satisfies
$$\phi_j(x)-\phi_j^{l}(x)\to 0\quad\text{as }\|\bar{x}-\underline{x}\|\to 0,$$
where $\|\bar{x}-\underline{x}\|=\max\{\bar{x}_i-\underline{x}_i \mid i=1,\ldots,n\}$.

Proof.

Let $\Delta_1=\phi_{j1}(x)-\phi_{j1}^{l}(x)$ and $\Delta_2=\phi_{j2}(x)-\phi_{j2}^{l}(x)$. Since $\phi_j(x)-\phi_j^{l}(x)=\phi_{j1}(x)-\phi_{j1}^{l}(x)+\phi_{j2}(x)-\phi_{j2}^{l}(x)=\Delta_1+\Delta_2$, we only need to prove that $\Delta_1\to 0$ and $\Delta_2\to 0$ as $\|\bar{x}-\underline{x}\|\to 0$.

First, consider $\Delta_1$. By the definition of $\Delta_1$, we have
$$\Delta_1=\phi_{j1}(x)-\phi_{j1}^{l}(x)=\sum_{i=1}^{T_j}\gamma_{ji}\bigl(f_{ji}(x)-h_{ji}(x)\bigr).$$
Furthermore, by Theorem 1 in [15], we know that $f_{ji}(x)-h_{ji}(x)\to 0$ as $\|\bar{x}-\underline{x}\|\to 0$. Thus, $\Delta_1\to 0$ as $\|\bar{x}-\underline{x}\|\to 0$.

Second, consider $\Delta_2$. From the definition of $\Delta_2$, it follows that
$$\Delta_2=\phi_{j2}(x)-\phi_{j2}^{l}(x)=\phi_{j2}(x)-\phi_{j2}(x_{\mathrm{mid}})-\nabla\phi_{j2}(x_{\mathrm{mid}})^{T}(x-x_{\mathrm{mid}})$$
$$=\nabla\phi_{j2}(\xi)^{T}(x-x_{\mathrm{mid}})-\nabla\phi_{j2}(x_{\mathrm{mid}})^{T}(x-x_{\mathrm{mid}})\le\bigl\|\nabla^2\phi_{j2}(\eta)\bigr\|\,\|\xi-x_{\mathrm{mid}}\|\,\|x-x_{\mathrm{mid}}\|,$$
where $\xi,\eta$ are vectors satisfying $\phi_{j2}(x)-\phi_{j2}(x_{\mathrm{mid}})=\nabla\phi_{j2}(\xi)^{T}(x-x_{\mathrm{mid}})$ and $\nabla\phi_{j2}(\xi)-\nabla\phi_{j2}(x_{\mathrm{mid}})=\nabla^2\phi_{j2}(\eta)^{T}(\xi-x_{\mathrm{mid}})$, respectively. Since $\nabla^2\phi_{j2}(x)$ is continuous and $X$ is a compact set, there exists some $M>0$ such that $\|\nabla^2\phi_{j2}(x)\|\le M$. From (2.8), it follows that $\Delta_2\le M\|\bar{x}-\underline{x}\|^2$. Hence, $\Delta_2\to 0$ as $\|\bar{x}-\underline{x}\|\to 0$.

Combining the above, $\phi_j(x)-\phi_j^{l}(x)=\Delta_1+\Delta_2\to 0$ as $\|\bar{x}-\underline{x}\|\to 0$, and the proof is complete.

From Theorem 2.1, it follows that the function $\phi_j^{l}(x)$ approximates the function $\phi_j(x)$ arbitrarily closely as $\|\bar{x}-\underline{x}\|\to 0$.

Based on the above discussion, the linear relaxation programming problem (LRP) of (Q) over $X$ can be obtained as follows:
$$(\mathrm{LRP}):\quad \min\ \phi_0^{l}(x)\quad \text{s.t.}\ \phi_j^{l}(x)\le\ln\beta_j,\ j=1,\ldots,m,\quad x\in X=[\underline{x},\bar{x}]\subset\mathbb{R}^n.$$

Obviously, the feasible region of problem (Q) is contained in the feasible region of problem (LRP); thus, the minimum $V(\mathrm{LRP})$ of (LRP) provides a lower bound for the optimal value $V(\mathrm{Q})$ of problem (Q) over the rectangle $X$, that is, $V(\mathrm{LRP})\le V(\mathrm{Q})$.

3. Reducing Technique

In this section, we focus on how to construct a reducing technique that eliminates regions in which the global minimum of (Q) cannot exist.

Assume that $\mathrm{UB}$ is the currently known upper bound of the optimal value $\phi_0^{*}$ of problem (Q). Let
$$\alpha_t=\sum_{i=1}^{T_0}\gamma_{0i}K_{0i}c_{0it}+\bigl(\nabla\phi_{02}(x_{\mathrm{mid}})\bigr)_t,\quad t=1,\ldots,n,$$
$$T=\sum_{i=1}^{T_0}\gamma_{0i}\bigl[\ln\underline{X}_{0i}+K_{0i}d_{0i}-K_{0i}\underline{X}_{0i}\bigr]+\phi_{02}(x_{\mathrm{mid}})-\nabla\phi_{02}(x_{\mathrm{mid}})^{T}x_{\mathrm{mid}},$$
$$\rho_k=\mathrm{UB}-\sum_{t=1,\,t\ne k}^{n}\min\{\alpha_t\underline{x}_t,\,\alpha_t\bar{x}_t\}-T,\quad k=1,\ldots,n,$$
so that $\phi_0^{l}(x)=\sum_{t=1}^{n}\alpha_t x_t+T$. The reducing technique is derived in the following theorem.

Theorem 3.1.

For any subrectangle $X=(X_t)_{n\times 1}\subseteq X^0$ with $X_t=[\underline{x}_t,\bar{x}_t]$: if there exists some index $k\in\{1,2,\ldots,n\}$ such that $\alpha_k>0$ and $\rho_k<\alpha_k\bar{x}_k$, then there is no globally optimal solution of (Q) over $X^1$; if $\alpha_k<0$ and $\rho_k<\alpha_k\underline{x}_k$ for some $k$, then there is no globally optimal solution of (Q) over $X^2$, where
$$X^1=(X_t^1)_{n\times 1}\subseteq X,\quad\text{with } X_t^1=\begin{cases}X_t,&t\ne k,\\ \bigl(\rho_k/\alpha_k,\,\bar{x}_k\bigr]\cap X_t,&t=k,\end{cases}$$
$$X^2=(X_t^2)_{n\times 1}\subseteq X,\quad\text{with } X_t^2=\begin{cases}X_t,&t\ne k,\\ \bigl[\underline{x}_k,\,\rho_k/\alpha_k\bigr)\cap X_t,&t=k.\end{cases}$$

Proof.

First, we show that $\phi_0(x)>\mathrm{UB}$ for all $x\in X^1$. Consider the $k$th component $x_k$ of $x$. Since $x_k\in(\rho_k/\alpha_k,\bar{x}_k]$, it follows that $\rho_k/\alpha_k<x_k\le\bar{x}_k$. From $\alpha_k>0$, we have $\rho_k<\alpha_k x_k$. For all $x\in X^1$, by this inequality and the definition of $\rho_k$,
$$\mathrm{UB}-\sum_{t=1,\,t\ne k}^{n}\min\{\alpha_t\underline{x}_t,\,\alpha_t\bar{x}_t\}-T<\alpha_k x_k,$$
that is,
$$\mathrm{UB}<\sum_{t=1,\,t\ne k}^{n}\min\{\alpha_t\underline{x}_t,\,\alpha_t\bar{x}_t\}+\alpha_k x_k+T\le\sum_{t=1}^{n}\alpha_t x_t+T=\phi_0^{l}(x).$$
Thus, for all $x\in X^1$, we have $\phi_0(x)\ge\phi_0^{l}(x)>\mathrm{UB}\ge\phi_0^{*}$; that is, $\phi_0(x)$ is always greater than the optimal value $\phi_0^{*}$ of problem (Q). Therefore, there cannot exist a globally optimal solution of (Q) over $X^1$.

For all xX2, if there exists some k such that αk<0 and ρk<αkx̲k, from arguments similar to the above, it can be derived that there is no globally optimal solution of (Q) over X2.
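A simple one-pass application of this test can be sketched as follows (an illustrative Python sketch with hypothetical helper names; it is conservative in that it reuses the box minima computed before any shrinking, which can only cut less than a full re-evaluation would):

```python
def reduce_box(alpha, T, UB, lo, hi):
    """One-pass sketch of the Theorem 3.1 reduction.
    alpha, T: coefficients of the linear lower bound phi_0^l(x) = sum_t alpha_t x_t + T;
    UB: best known upper bound of the optimal value.
    Returns a shrunken box; hi[k] < lo[k] means the whole box can be discarded."""
    lo, hi = list(lo), list(hi)
    mins = [min(a * l, a * u) for a, l, u in zip(alpha, lo, hi)]
    s = sum(mins)
    for k, a in enumerate(alpha):
        rho = UB - (s - mins[k]) - T   # rho_k of the theorem
        if a > 0 and rho < a * hi[k]:
            hi[k] = min(hi[k], rho / a)   # (rho/a, hi_k] cannot hold a global optimum
        elif a < 0 and rho < a * lo[k]:
            lo[k] = max(lo[k], rho / a)   # [lo_k, rho/a) cannot hold a global optimum
    return lo, hi
```

For example, with $\phi_0^l(x)=x$ on $[0,1]$ and $\mathrm{UB}=0.5$, the test cuts the box to $[0,0.5]$, since any $x>0.5$ has $\phi_0^l(x)>\mathrm{UB}$.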

4. Algorithm and Its Convergence

In this section, based on the former results, we present a branch-and-reduce algorithm to solve the problem (Q). There are three fundamental processes in the algorithm procedure: a reducing process, a branching process, and an updating upper and lower bounds process.

Firstly, based on Section 3, when some conditions are satisfied, the reducing process can cut away a large part of the currently investigated feasible region in which the global optimal solution does not exist.

The second fundamental process iteratively subdivides the rectangle $X$ into two subrectangles. During each iteration of the algorithm, the branching process creates a more refined partition of the parts that cannot yet be excluded from the search for a globally optimal solution of problem (Q). In this paper, we choose a simple and standard bisection rule. This rule is sufficient to ensure convergence since it drives the intervals to shrink to singletons for all the variables along any infinite branch of the branch and bound tree. Consider any node subproblem identified by the rectangle $X=\{x\in\mathbb{R}^n \mid \underline{x}_i\le x_i\le\bar{x}_i,\ i=1,\ldots,n\}\subseteq X^0$. The branching rule is as follows.

Let $p=\arg\max\{\bar{x}_i-\underline{x}_i \mid i=1,\ldots,n\}$.

Let $\gamma=(\underline{x}_p+\bar{x}_p)/2$.

Let
$$\bar{X}=\{x\in\mathbb{R}^n \mid \underline{x}_i\le x_i\le\bar{x}_i,\ i\ne p,\ \underline{x}_p\le x_p\le\gamma\},\qquad \bar{\bar{X}}=\{x\in\mathbb{R}^n \mid \underline{x}_i\le x_i\le\bar{x}_i,\ i\ne p,\ \gamma\le x_p\le\bar{x}_p\}.$$

By this branching rule, the rectangle X is partitioned into two subrectangles X¯ and X¯¯.
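The bisection rule above can be sketched in a few lines (Python for illustration only; boxes are represented as lower/upper bound lists):

```python
def bisect_box(lo, hi):
    """Standard bisection: split the box along its longest edge at the midpoint."""
    p = max(range(len(lo)), key=lambda i: hi[i] - lo[i])   # branching index
    gamma = (lo[p] + hi[p]) / 2.0
    left = (list(lo), hi[:p] + [gamma] + hi[p + 1:])       # x_p in [lo_p, gamma]
    right = (lo[:p] + [gamma] + lo[p + 1:], list(hi))      # x_p in [gamma, hi_p]
    return left, right
```

Each split halves the longest edge, so along any infinite chain of subdivisions every edge length tends to zero, which is the exhaustiveness used in the convergence proof below.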

The third process is to update the upper and lower bounds of the optimal value of (Q). This process needs to solve a sequence of linear programming problems and to compute the objective function value of (Q) at the midpoint of the subrectangle X for the problem (Q). In addition, some bound tightening strategies are applied to the proposed algorithm.

The basic steps of the proposed algorithm are summarized as follows. In this algorithm, let $\mathrm{LB}(X^k)$ be the optimal value of (LRP) over the subrectangle $X=X^k$, and let $x^k=x(X^k)$ be a corresponding minimizer. Since $\phi_j^{l}(x)$ $(j=0,\ldots,m)$ is a linear function, for convenience of expression, assume that it is expressed as $\phi_j^{l}(x)=\sum_{t=1}^{n}a_{jt}x_t+b_j$, where $a_{jt},b_j\in\mathbb{R}$. Thus, we have
$$\min_{x\in X}\phi_j^{l}(x)=\sum_{t=1}^{n}\min\{a_{jt}\underline{x}_t,\,a_{jt}\bar{x}_t\}+b_j.$$
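This closed-form box minimum of a linear function, used in the deletion tests of Steps 3 and 4, can be sketched as (illustrative Python, hypothetical name):

```python
def box_min_linear(a, b, lo, hi):
    """Minimum over the box [lo, hi] of sum_t a_t x_t + b.
    Each term is minimized independently: at lo_t if a_t >= 0, else at hi_t."""
    return b + sum(min(at * l, at * u) for at, l, u in zip(a, lo, hi))
```

For example, minimizing $x_1-2x_2+3$ over $[0,1]^2$ gives $3+0-2=1$.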

4.1. Algorithm Statement

Step 1 (initialization).

Let the set of active nodes be $Q_0=\{X^0\}$, the upper bound $\mathrm{UB}=+\infty$, the set of feasible points $F=\emptyset$, the accuracy tolerance $\epsilon>0$, and the iteration counter $k=0$.

Solve the problem (LRP) for $X=X^0$. Let $\mathrm{LB}_0=\mathrm{LB}(X^0)$ and $x^0=x(X^0)$. If $x^0$ is a feasible point of (Q), then let $\mathrm{UB}=\phi_0(x^0)$ and $F=F\cup\{x^0\}$. If $\mathrm{UB}<\mathrm{LB}_0+\epsilon$, then stop: $x^0$ is an $\epsilon$-optimal solution of (Q). Otherwise, proceed to Step 2.

Step 2 (updating the upper bound).

Select the midpoint $x_{\mathrm{mid}}^k$ of $X^k$; if $x_{\mathrm{mid}}^k$ is feasible for (Q), then set $F=F\cup\{x_{\mathrm{mid}}^k\}$. Let the upper bound $\mathrm{UB}=\min\{\phi_0(x_{\mathrm{mid}}^k),\mathrm{UB}\}$ and the best known feasible point $x^{*}=\arg\min_{x\in F}\phi_0(x)$.

Step 3 (branching and reducing).

Use the branching rule to partition $X^k$ into two new subrectangles, and denote the set of new partition rectangles by $\bar{X}^k$. For each $X\in\bar{X}^k$, apply the reducing technique of Theorem 3.1 to reduce the box $X$, and compute the lower bound of $\phi_j(x)$ over the rectangle $X$. If there exists some $j\in\{1,\ldots,m\}$ such that $\min_{x\in X}\phi_j^{l}(x)>\ln\beta_j$, or if $\min_{x\in X}\phi_0^{l}(x)>\mathrm{UB}$, then the corresponding subrectangle $X$ is removed from $\bar{X}^k$, that is, $\bar{X}^k=\bar{X}^k\setminus\{X\}$, and we skip to the next element of $\bar{X}^k$.

Step 4 (bounding).

If $\bar{X}^k\ne\emptyset$, solve (LRP) to obtain $\mathrm{LB}(X)$ and $x(X)$ for each $X\in\bar{X}^k$. If $\mathrm{LB}(X)>\mathrm{UB}$, set $\bar{X}^k=\bar{X}^k\setminus\{X\}$; otherwise, update the best available solution $\mathrm{UB}$, $F$, and $x^{*}$ if possible, as in Step 2. The remaining partition set is now $Q_k=(Q_k\setminus\{X^k\})\cup\bar{X}^k$, and a new lower bound is $\mathrm{LB}_k=\inf_{X\in Q_k}\mathrm{LB}(X)$.

Step 5 (convergence checking).

Set $Q_{k+1}=Q_k\setminus\{X \mid \mathrm{UB}-\mathrm{LB}(X)\le\epsilon,\ X\in Q_k\}$. If $Q_{k+1}=\emptyset$, then stop: $\mathrm{UB}$ is the $\epsilon$-optimal value of (Q), and $x^{*}$ is an $\epsilon$-optimal solution. Otherwise, select an active node $X^{k+1}$ such that $X^{k+1}=\arg\min_{X\in Q_{k+1}}\mathrm{LB}(X)$, and set $x^{k+1}=x(X^{k+1})$. Set $k=k+1$, and return to Step 2.

4.2. Convergence Analysis

In this subsection, we give the global convergence properties of the above algorithm.

Theorem 4.1 (convergence).

The above algorithm either terminates finitely with a globally $\epsilon$-optimal solution, or generates an infinite sequence $\{x^k\}$ of which any accumulation point is a globally optimal solution of (Q).

Proof.

When the algorithm is finite, it terminates at some step $k_0$. Upon termination, it follows that $\mathrm{UB}-\mathrm{LB}_k\le\epsilon$. From Steps 1 and 5 of the algorithm, a feasible solution $x^{*}$ of problem (Q) has been found, and the relation $\phi_0(x^{*})-\mathrm{LB}_k\le\epsilon$ holds. Let $v$ denote the optimal value of problem (Q). By Section 2, we have $\mathrm{LB}_k\le v$. Since $x^{*}$ is a feasible solution of problem (Q), $\phi_0(x^{*})\ge v$. Combining the above, it follows that $v\le\phi_0(x^{*})\le\mathrm{LB}_k+\epsilon\le v+\epsilon$, and so $x^{*}$ is a globally $\epsilon$-optimal solution of problem (Q) in the sense that $v\le\phi_0(x^{*})\le v+\epsilon$.

When the algorithm is infinite, then by [5], a sufficient condition for a global optimization algorithm to converge to the global minimum is that the bounding operation is consistent and the selection operation is bound improving.

A bounding operation is called consistent if at every step any unfathomed partition element can be further refined, and if any infinitely decreasing sequence of successively refined partition elements satisfies
$$\lim_{k\to\infty}\bigl(\mathrm{UB}-\mathrm{LB}_k\bigr)=0,$$
where $\mathrm{LB}_k$ is the computed lower bound at stage $k$ and $\mathrm{UB}$ is the best upper bound at iteration $k$, not necessarily attained inside the same subrectangle as $\mathrm{LB}_k$. We now show that (4.9) holds.

Since the employed subdivision process is rectangle bisection, the process is exhaustive. Consequently, from Theorem 2.1 and the relation $V(\mathrm{LRP})\le V(\mathrm{Q})$, formula (4.9) holds, which implies that the employed bounding operation is consistent.

A selection operation is called bound improving if at least one partition element where the actual lower bound is attained is selected for further partition after a finite number of refinements. Clearly, the employed selection operation is bound improving because the partition element where the actual lower bound is attained is selected for further partition in the immediately following iteration.

From the above discussion and Theorem IV.3 in [5], the branch-and-reduce algorithm presented in this paper converges to the global minimum of (Q).

5. Numerical Experiments

In this section, some numerical experiments are reported to verify the performance of the proposed algorithm. The algorithm is coded in Matlab 7.1. The simplex method is applied to solve the linear relaxation programming problems. The test problems are run on a Pentium IV (3.06 GHz) microcomputer, and the convergence tolerance is set to $\epsilon=1.0\times 10^{-4}$ in our experiments.

Example 5.1 (see [12, 15]).

$$\min\ (x_1+x_2+1)^{2.5}(2x_1+x_2+1)^{1.1}(x_1+2x_2+1)^{1.9}$$
$$\text{s.t.}\ (x_1+2x_2+1)^{1.1}(2x_1+2x_2+2)^{1.3}\le 50,\quad 1\le x_1\le 3,\ 1\le x_2\le 3.$$
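As a quick sanity check of this reconstructed example (an illustrative Python snippet, not part of the paper's Matlab code), the objective at the reported optimum $(1,1)$ reduces to $3^{2.5}\cdot 4^{1.1}\cdot 4^{1.9}=3^{2.5}\cdot 4^{3}$, matching the tabulated optimal value:

```python
def f(x1, x2):
    # objective of Example 5.1, as reconstructed above
    return (x1 + x2 + 1) ** 2.5 * (2 * x1 + x2 + 1) ** 1.1 * (x1 + 2 * x2 + 1) ** 1.9

def g(x1, x2):
    # left-hand side of the constraint of Example 5.1
    return (x1 + 2 * x2 + 1) ** 1.1 * (2 * x1 + 2 * x2 + 2) ** 1.3

print(round(f(1.0, 1.0), 4))  # 997.6613
```

The point $(1,1)$ is also feasible, since $g(1,1)\approx 47.2\le 50$.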

Example 5.2 (see [15]).

$$\min\ (2x_1+x_2-x_3+1)^{-0.2}(2x_1-x_2+x_3+1)(x_1+2x_2+1)^{0.5}$$
$$\text{s.t.}\ (3x_1-x_2+1)^{0.3}(2x_1-x_2+x_3+2)^{-0.1}\le 10,$$
$$(1.2x_1+x_2+1)^{-1}(2x_1+2x_2+1)^{0.5}\le 12,$$
$$(x_1+x_2+2)^{0.2}(1.5x_1+x_2+1)^{-2}\le 15,$$
$$1\le x_1\le 2,\ 1\le x_2\le 2,\ 1\le x_3\le 2.$$

Example 5.3 (see [12, 15]).

$$\min\ (x_1+x_2+x_3)(2x_1+x_2+x_3)(x_1+2x_2+2x_3)$$
$$\text{s.t.}\ (x_1+2x_2+x_3)^{1.1}(2x_1+2x_2+x_3)^{1.3}\le 100,\quad 1\le x_1\le 3,\ 1\le x_2\le 3,\ 1\le x_3\le 3.$$

Example 5.4 (see [13, 16]).

$$\min\ (-x_1+2x_2+2)(4x_1-3x_2+4)(3x_1-4x_2+5)^{-1}(-2x_1+x_2+3)^{-1}$$
$$\text{s.t.}\ x_1+x_2\le 1.5,\quad x_1-x_2\le 0,\quad 0\le x_1\le 1,\ 0\le x_2\le 1.$$

Example 5.5 (see [11, 15]).

$$\min\ (2x_1+x_2+1)^{1.5}(2x_1+x_2+1)^{2.1}(0.5x_1+2x_2+1)^{0.5}$$
$$\text{s.t.}\ (x_1+2x_2+1)^{1.2}(2x_1+2x_2+2)^{0.1}\le 18,$$
$$(1.5x_1+2x_2+1)(2x_1+2x_2+1)^{0.5}\le 25,$$
$$1\le x_1\le 3,\ 1\le x_2\le 3.$$

The results for Examples 5.1–5.5 are summarized in Table 1, where the following notations are used in the column headers: Iter: number of algorithm iterations; Time: execution time in seconds.

Table 1: Computational results for Examples 5.1–5.5.

| Example | Method | Optimal solution | Optimal value | Iter | Time |
|---------|--------|------------------|---------------|------|--------|
| 5.1 | [12] | (1.0, 1.0) | 997.6612 | | |
| 5.1 | [15] | (1.0, 1.0) | 997.6613 | 5 | 0.0984 |
| 5.1 | ours | (1.0, 1.0) | 997.6613 | 1 | 0.0160 |
| 5.2 | [15] | (1.0, 2.0, 1.0) | 3.7127 | 10 | 0.2717 |
| 5.2 | ours | (1.0, 2.0, 1.0) | 3.7127 | 1 | 0.0150 |
| 5.3 | [12] | (1.0, 1.0, 1.0) | 60.0 | | |
| 5.3 | [15] | (1.0, 1.0, 1.0) | 60.0 | 1 | 0.0126 |
| 5.3 | ours | (1.0, 1.0, 1.0) | 60.0 | 1 | 0.0148 |
| 5.4 | [13] | (0.0, 0.0) | 0.5333333 | | |
| 5.4 | [16] | (0.0, 0.0) | 0.533333 | 16 | 0.05 |
| 5.4 | ours | (0.0, 0.0) | 0.5333 | 2 | 0.0221 |
| 5.5 | [11] | (1.0, 1.0) | 275.0742 | | |
| 5.5 | [15] | (1.0, 1.0) | 275.0743 | 1 | 0.0105 |
| 5.5 | ours | (1.0, 1.0) | 275.0743 | 1 | 0.0102 |

The results in Table 1 show that our algorithm is both feasible and efficient.

Acknowledgments

The authors are grateful to the responsible editor and the anonymous referees for their valuable comments and suggestions, which have greatly improved the earlier version of this paper. This paper is supported by the National Natural Science Foundation of China (60974082) and the Fundamental Research Funds for the Central Universities (K50510700004).

References

[1] M. C. Dorneich and N. V. Sahinidis, "Global optimization algorithms for chip design and compaction," Engineering Optimization, vol. 25, pp. 131–154, 1995.
[2] K. P. Bennett, "Global tree optimization: a non-greedy decision tree algorithm," Computing Sciences and Statistics, vol. 26, pp. 156–160, 1994.
[3] R. L. Keeney and H. Raiffa, Decisions with Multiple Objectives, Cambridge University, Cambridge, Mass, USA, 1993.
[4] J. M. Mulvey, R. J. Vanderbei, and S. A. Zenios, "Robust optimization of large-scale systems," Operations Research, vol. 43, no. 2, pp. 264–281, 1995.
[5] R. Horst and H. Tuy, Global Optimization: Deterministic Approaches, 2nd edition, Springer, Berlin, Germany, 1993.
[6] Y. Gao, C. Xu, and Y. Yang, "An outcome-space finite algorithm for solving linear multiplicative programming," Applied Mathematics and Computation, vol. 179, no. 2, pp. 494–505, 2006.
[7] H. P. Benson, "Decomposition branch-and-bound based algorithm for linear programs with additional multiplicative constraints," Journal of Optimization Theory and Applications, vol. 126, no. 1, pp. 41–61, 2005.
[8] H. S. Ryoo and N. V. Sahinidis, "Global optimization of multiplicative programs," Journal of Global Optimization, vol. 26, no. 4, pp. 387–418, 2003.
[9] T. Kuno, "A finite branch-and-bound algorithm for linear multiplicative programming," Computational Optimization and Applications, vol. 20, no. 2, pp. 119–135, 2001.
[10] H. P. Benson and G. M. Boger, "Outcome-space cutting-plane algorithm for linear multiplicative programming," Journal of Optimization Theory and Applications, vol. 104, no. 2, pp. 301–322, 2000.
[11] P. Shen and H. Jiao, "Linearization method for a class of multiplicative programming with exponent," Applied Mathematics and Computation, vol. 183, no. 1, pp. 328–336, 2006.
[12] H. Jiao, "A branch and bound algorithm for globally solving a class of nonconvex programming problems," Nonlinear Analysis: Theory, Methods & Applications, vol. 70, no. 2, pp. 1113–1123, 2009.
[13] P. Shen, X. Bai, and W. Li, "A new accelerating method for globally solving a class of nonconvex programming problems," Nonlinear Analysis: Theory, Methods & Applications, vol. 71, no. 7-8, pp. 2866–2876, 2009.
[14] X. G. Zhou and K. Wu, "A method of acceleration for a class of multiplicative programming problems with exponent," Journal of Computational and Applied Mathematics, vol. 223, no. 2, pp. 975–982, 2009.
[15] C. F. Wang and S. Y. Liu, "A new linearization method for generalized linear multiplicative programming," Computers & Operations Research, vol. 38, no. 7, pp. 1008–1013, 2011.
[16] N. V. Thoai, "A global optimization approach for solving the convex multiplicative programming problem," Journal of Global Optimization, vol. 1, no. 4, pp. 341–357, 1991.