Mathematical Problems in Engineering, Volume 2018, Article ID 9146309. doi:10.1155/2018/9146309

Research Article

Global Optimization for Generalized Linear Multiplicative Programming Using Convex Relaxation

Yingfeng Zhao and Ting Zhao
School of Mathematical Science, Henan Institute of Science and Technology, Xinxiang 453003, China

Received 10 February 2018; Accepted 1 April 2018; Published 10 May 2018

Academic Editor: Guillermo Cabrera-Guerrero

Copyright © 2018 Yingfeng Zhao and Ting Zhao. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Applications of generalized linear multiplicative programming (LMP) problems arise frequently in various areas of engineering practice and management science. In this paper, we present a simple global optimization algorithm for solving the linear multiplicative programming problem (LMP). The algorithm is developed by fusing a new convex relaxation method with the branch-and-bound scheme and some accelerating techniques. Global convergence and optimality of the algorithm are established, and extensive computational results are reported on a wide range of problems from the recent literature and from GLOBALLib. Numerical experiments show that the proposed algorithm with the new convex relaxation method is more efficient than the usual branch-and-bound algorithms that use linear relaxation for solving the LMP.

1. Introduction

This paper deals with finding global optimal solutions for generalized linear multiplicative programming problems of the form

\[
\text{(LMP)}:\quad
\begin{aligned}
\min\ & f_0(x)=\sum_{j=1}^{p_0}\phi_{0j}(x)\,\psi_{0j}(x)\\
\text{s.t.}\ & f_i(x)=\sum_{j=1}^{p_i}\phi_{ij}(x)\,\psi_{ij}(x)\le 0,\quad i=1,2,\ldots,M,\\
& f_i(x)=\sum_{j=1}^{p_i}\phi_{ij}(x)\,\psi_{ij}(x)\ge 0,\quad i=M+1,M+2,\ldots,N,\\
& x\in D=\{x\in\mathbb{R}^n \mid Ax\le b,\ x\ge 0\},
\end{aligned}\tag{1}
\]

where the ϕ_ij(x), ψ_ij(x) in the objective and constraints are linear functions of the general forms ϕ_ij(x) = Σ_{k=1}^{n} a_ijk x_k + b_ij and ψ_ij(x) = Σ_{k=1}^{n} c_ijk x_k + d_ij; the a_ijk, c_ijk, b_ij, and d_ij are arbitrary real numbers, i = 0, 1, 2, …, N, j = 1, 2, …, p_i, k = 1, 2, …, n; A ∈ R^{m×n} is a matrix, b ∈ R^m is a vector, and the set D ⊆ R^n is nonempty and bounded.
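For illustration, the value of a function of the form f_i(x) = Σ_j ϕ_ij(x)ψ_ij(x) is computed directly from the coefficient data. The following is a minimal sketch (with hypothetical coefficients, not taken from the paper's test problems):

```python
# Sketch: evaluating one f_i(x) = sum_j phi_ij(x) * psi_ij(x) of (1),
# where phi_ij(x) = a_ij^T x + b_ij and psi_ij(x) = c_ij^T x + d_ij.

def affine(coeffs, const, x):
    """Evaluate a^T x + b for plain Python lists."""
    return sum(a * xk for a, xk in zip(coeffs, x)) + const

def multiplicative_value(terms, x):
    """terms: list of ((a, b), (c, d)) pairs defining phi_ij, psi_ij."""
    return sum(affine(a, b, x) * affine(c, d, x) for (a, b), (c, d) in terms)

# Hypothetical instance with p_i = 2 terms in two variables:
terms = [(([1.0, 0.0], 2.0), ([0.0, 1.0], -3.0)),   # (x1 + 2)(x2 - 3)
         (([1.0, 1.0], 0.0), ([1.0, -1.0], 1.0))]   # (x1 + x2)(x1 - x2 + 1)
print(multiplicative_value(terms, [1.0, 2.0]))      # (3)(-1) + (3)(0) = -3.0
```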

Generally, the linear multiplicative programming problem (LMP) is a special case of nonconvex programming and is known to be NP-hard. It has attracted considerable attention in the literature because of its large number of practical applications in various fields of study, including financial optimization in Konno et al. [1], VLSI chip design in Dorneich and Sahinidis [2], data mining and pattern recognition in Bennett and Mangasarian [3], plant layout design in Quesada and Grossmann [4], marketing and service planning in Samadi et al. [5], robust optimization in Mulvey et al. [6], multiple-objective decision making in Benson [7], Keeney and Raiffa [8], and Geoffrion [9], location-allocation problems in Konno et al. [10], constrained bimatrix games in Mangasarian [11], three-dimensional assignment problems in Frieze [12], certain linear max-min problems in Falk [13], and many problems in engineering design, economic management, and operations research. Another reason this problem attracts so much attention is that, by suitable techniques, many mathematical programs, such as general quadratic programming, bilinear programming, linear multiplicative programming, and quadratically constrained quadratic programming, can be converted into the special form of LMP, as in Benson [14]. As shown in Tuy [15], the important class of sum-of-ratios fractional problems can also be encapsulated by LMP. Indeed, suppose that the f_i are convex and the g_i are concave and positive; since each 1/g_i(x) is then convex and positive, the sum-of-ratios fractional problem (with objective Σ_i f_i(x)/g_i(x)) reduces to LMP when the g_i and f_i are linear. When g_i and f_i take negative values, one can add a large real number to make them positive, thanks to the compactness of the constraint set, as in Dai et al. [16].

Moreover, from the algorithmic design point of view, a sum of products of two affine functions need not be convex (nor even quasi-convex), and hence the linear multiplicative programming problem (LMP) may have multiple local solutions that fail to be globally optimal. For these reasons, developing practical global optimization algorithms for the LMP has great theoretical and algorithmic significance.

In the past 20 years, many solution methods have been proposed for globally solving the linear multiplicative programming problem (LMP) and its special cases. These methods can mainly be classified as parameterization-based methods, outer-approximation methods, various branch-and-bound methods, decomposition methods, and cutting-plane methods, as in Konno et al. [17], Thoai [18], Shen and Jiao [19, 20], Wang et al. [21], Jiao et al. [22], and Shen et al. [23]. Most of these methods are designed only for problems in which the constraint functions are all linear, or under the assumption that all linear functions appearing in the multiplicative terms are nonnegative. Moreover, most branch-and-bound algorithms for this problem are based on a two-phase linear relaxation scheme, which consumes more computational time in the approximation process; for example, the algorithms in Zhao and Liu [24] and in Jiao et al. [22] both use two-phase relaxation methods, and a large amount of computational time is spent in the relaxation process. Compared with these algorithms, the main features of our new algorithm are fourfold. (1) The problem investigated in this paper is a multiplicative program with linear multiplicative constraints; it has a more general form than linear multiplicative programs with purely linear constraints. (2) The relaxation problem is a convex program that can be obtained with a one-step relaxation, which greatly improves the efficiency of the approximation process. (3) The condition ϕ_ij(x) > 0, ψ_ij(x) > 0 for x ∈ X is not required by our algorithm. (4) Extensive numerical experiments and comparisons on problems from the recent literature and from GLOBALLib are performed to test our algorithm for the LMP.

The rest of this paper is arranged in the following way. In Section 2, the construction process of the convex relaxation problem is detailed, which will provide a reliable lower bound for the optimal value of LMP. In Section 3, some key operations for designing a branch-and-bound algorithm and the global optimization algorithm for LMP are described. Convergence property of the algorithm is also established in this section. Numerical results are reported to show the feasibility and efficiency of our algorithm in Section 4 and some concluding remarks are reported in the last section.

2. Convex Relaxation of LMP

It is well known that a well-constructed relaxation brings great convenience to the design of branch-and-bound algorithms for global optimization problems. In this section, we show how to construct the convex relaxation programming problem (CRP) for (LMP). To this end, we first compute the initial variable bounds by solving the following linear programming problems:

\[
x_i^l=\min_{x\in D} x_i,\qquad x_i^u=\max_{x\in D} x_i,\quad i=1,2,\ldots,n;\tag{2}
\]

this yields an initial hyperrectangle X^0 = {x ∈ R^n | x_i^l ≤ x_i ≤ x_i^u, i = 1, 2, …, n}.

To introduce the convex relaxation of (LMP) over a subrectangle X ⊆ X^0, we further solve the following set of linear programming problems:

\[
l_{ij}=\min_{x\in D\cap X}\phi_{ij}(x),\quad u_{ij}=\max_{x\in D\cap X}\phi_{ij}(x),\quad
L_{ij}=\min_{x\in D\cap X}\psi_{ij}(x),\quad U_{ij}=\max_{x\in D\cap X}\psi_{ij}(x).\tag{3}
\]

Based on the construction of l_ij, u_ij, L_ij, and U_ij, it is not hard to see that

\[
\bigl(\phi_{ij}(x)-u_{ij}\bigr)\bigl(\psi_{ij}(x)-U_{ij}\bigr)\ge 0,\qquad
\bigl(\phi_{ij}(x)-l_{ij}\bigr)\bigl(\psi_{ij}(x)-L_{ij}\bigr)\ge 0,\tag{4}
\]
\[
\bigl(\phi_{ij}(x)-l_{ij}\bigr)\bigl(\psi_{ij}(x)-U_{ij}\bigr)\le 0,\qquad
\bigl(\phi_{ij}(x)-u_{ij}\bigr)\bigl(\psi_{ij}(x)-L_{ij}\bigr)\le 0.\tag{5}
\]

Expanding (4) and (5) and performing some equivalent transformations, we easily obtain a lower and an upper bound on each bilinear term; that is,

\[
\begin{aligned}
\phi_{ij}(x)\psi_{ij}(x)&\ge\max\bigl\{u_{ij}\psi_{ij}(x)+U_{ij}\phi_{ij}(x)-U_{ij}u_{ij},\
 l_{ij}\psi_{ij}(x)+L_{ij}\phi_{ij}(x)-L_{ij}l_{ij}\bigr\},\\
\phi_{ij}(x)\psi_{ij}(x)&\le\min\bigl\{u_{ij}\psi_{ij}(x)+L_{ij}\phi_{ij}(x)-u_{ij}L_{ij},\
 l_{ij}\psi_{ij}(x)+U_{ij}\phi_{ij}(x)-U_{ij}l_{ij}\bigr\}.
\end{aligned}\tag{6}
\]

To facilitate the narrative, for i = 0, 1, 2, …, N denote

\[
\begin{aligned}
\underline{g}_{ij1}(x)&\triangleq u_{ij}\psi_{ij}(x)+U_{ij}\phi_{ij}(x)-U_{ij}u_{ij}, &
\underline{g}_{ij2}(x)&\triangleq l_{ij}\psi_{ij}(x)+L_{ij}\phi_{ij}(x)-L_{ij}l_{ij},\\
\overline{g}_{ij1}(x)&\triangleq u_{ij}\psi_{ij}(x)+L_{ij}\phi_{ij}(x)-u_{ij}L_{ij}, &
\overline{g}_{ij2}(x)&\triangleq l_{ij}\psi_{ij}(x)+U_{ij}\phi_{ij}(x)-U_{ij}l_{ij};
\end{aligned}\tag{7}
\]

then conclusion (6) can be reformulated as

\[
\phi_{ij}(x)\psi_{ij}(x)\ge\max\bigl\{\underline{g}_{ij1}(x),\underline{g}_{ij2}(x)\bigr\}\triangleq\underline{g}_{ij}(x),\qquad
\phi_{ij}(x)\psi_{ij}(x)\le\min\bigl\{\overline{g}_{ij1}(x),\overline{g}_{ij2}(x)\bigr\}\triangleq\overline{g}_{ij}(x),\tag{8}
\]

respectively. With this, we obtain a lower bounding function \underline{g}_i(x) and an upper bounding function \overline{g}_i(x) for f_i(x), which satisfy \underline{g}_i(x) ≤ f_i(x) ≤ \overline{g}_i(x), i = 0, 1, 2, …, N, where

\[
\underline{g}_i(x)=\sum_{j=1}^{p_i}\underline{g}_{ij}(x),\qquad
\overline{g}_i(x)=\sum_{j=1}^{p_i}\overline{g}_{ij}(x),\quad i=0,1,2,\ldots,N.\tag{9}
\]
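The estimators in (6)-(8) are the classical McCormick bounds for a bilinear term. The following small sketch (illustrative only, not the authors' code) checks numerically that they bracket ϕψ for randomly drawn bounds and points:

```python
import random

def envelope_bounds(phi, psi, l, u, L, U):
    """Underestimator and overestimator (8) for the product phi*psi,
    given l <= phi <= u and L <= psi <= U (classical McCormick bounds)."""
    lower = max(u * psi + U * phi - U * u, l * psi + L * phi - L * l)
    upper = min(u * psi + L * phi - u * L, l * psi + U * phi - U * l)
    return lower, upper

random.seed(1)
for _ in range(1000):
    l, u = sorted(random.uniform(-5, 5) for _ in range(2))
    L, U = sorted(random.uniform(-5, 5) for _ in range(2))
    phi = random.uniform(l, u)
    psi = random.uniform(L, U)
    lo, hi = envelope_bounds(phi, psi, l, u, L, U)
    assert lo - 1e-9 <= phi * psi <= hi + 1e-9   # (8) always brackets the term
```

Note that the bounds remain valid when l, u, L, U are negative, which is why no sign restriction on ϕ_ij, ψ_ij is needed.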

So far, based on the above discussion, it is not hard to obtain the relaxation programming problem of (LMP) over X, which we formulate as follows:

\[
\text{(CRP)}:\quad
\begin{aligned}
\min\ & \underline{g}_0(x)=\sum_{j=1}^{p_0}\underline{g}_{0j}(x)\\
\text{s.t.}\ & \underline{g}_i(x)=\sum_{j=1}^{p_i}\underline{g}_{ij}(x)\le 0,\quad i=1,2,\ldots,M,\\
& \overline{g}_i(x)=\sum_{j=1}^{p_i}\overline{g}_{ij}(x)\ge 0,\quad i=M+1,M+2,\ldots,N,\\
& x\in D\cap X=\{x\in X \mid Ax\le b,\ x\ge 0\}.
\end{aligned}\tag{10}
\]

Remark 1.

As one can easily confirm, all functions appearing in the objective and constraints of (CRP) are well behaved: each \underline{g}_{ij} is the pointwise maximum of two affine functions and hence convex, while each \overline{g}_{ij} is the pointwise minimum of two affine functions and hence concave. The objective and the ≤-constraints are therefore convex and the ≥-constraints are concave, so (CRP) is a convex program that can be effectively solved by convex optimization toolboxes such as CVX.

Remark 2.

Both the lower and upper bounding functions \underline{g}_i(x) and \overline{g}_i(x) approximate f_i(x) arbitrarily closely as the diameter of the rectangle X converges to zero.
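A quick numerical check of Remark 2 (an illustrative sketch, not part of the paper's experiments): the worst-case gap between the upper and lower envelopes of a single bilinear term shrinks as the box is repeatedly halved.

```python
def envelope_gap(l, u, L, U):
    """Largest gap between the upper and lower envelopes (8) of phi*psi
    over phi in [l, u], psi in [L, U], sampled on a grid."""
    gap, n = 0.0, 50
    for i in range(n + 1):
        for j in range(n + 1):
            phi = l + (u - l) * i / n
            psi = L + (U - L) * j / n
            lower = max(u * psi + U * phi - U * u, l * psi + L * phi - L * l)
            upper = min(u * psi + L * phi - u * L, l * psi + U * phi - U * l)
            gap = max(gap, upper - lower)
    return gap

# Halving the box edge repeatedly drives the gap to zero (Remark 2):
gaps = [envelope_gap(0.0, 2.0 / 2**k, 0.0, 2.0 / 2**k) for k in range(6)]
assert all(g2 < g1 for g1, g2 in zip(gaps, gaps[1:]))
```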

3. Branch-and-Bound Algorithm and Its Convergence

As is well known, branch-and-bound algorithms use tree search strategies to implicitly enumerate all possible solutions to a given problem, applying pruning techniques to eliminate regions of the search space that cannot yield a better solution. Three algorithmic components of the branch-and-bound scheme can be specified to fine-tune the performance of the algorithm: the search strategy, the branching strategy, and the pruning rules. This section describes these components and the proposed algorithm for solving (LMP).

3.1. Key Operation

There are two important phases of any branch-and-bound algorithm: search phase and verification phase.

The choice of search strategy primarily impacts the search phase and has potentially significant consequences for the amount of computational time and memory required. In this paper, we choose the depth-first search (DFS) strategy with node ranking techniques, which substantially reduces storage requirements.

The choice of branching strategy determines how child nodes are generated from a subproblem. It has significant impacts on both the search phase and the verification phase: by branching appropriately at subproblems, the strategy can guide the algorithm toward optimal solutions. In this paper, to develop the proposed algorithm for (LMP), we adopt a standard range bisection approach, which is adequate to ensure global convergence of the proposed algorithm. The detailed process is described as follows.

For any region X = [x^l, x^u] ⊆ X^0, let r ∈ argmax{x_i^u − x_i^l | i = 1, 2, …, n} and x_mid = (x_r^l + x_r^u)/2; then the current region X can be divided into the two following subregions:

\[
\begin{aligned}
\overline{X}&=\{x\in\mathbb{R}^n \mid x_i^l\le x_i\le x_i^u,\ i\ne r;\ x_r^l\le x_r\le x_{mid}\},\\
\overline{\overline{X}}&=\{x\in\mathbb{R}^n \mid x_i^l\le x_i\le x_i^u,\ i\ne r;\ x_{mid}\le x_r\le x_r^u\}.
\end{aligned}\tag{11}
\]
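The bisection rule (11) can be sketched as follows (a hypothetical helper; boxes are stored as lists of (lower, upper) pairs):

```python
def bisect_longest_edge(box):
    """Split box = [(x1l, x1u), ..., (xnl, xnu)] along its longest
    edge at the midpoint, as in (11)."""
    r = max(range(len(box)), key=lambda i: box[i][1] - box[i][0])
    lo, hi = box[r]
    mid = (lo + hi) / 2.0
    left = box[:r] + [(lo, mid)] + box[r + 1:]
    right = box[:r] + [(mid, hi)] + box[r + 1:]
    return left, right

left, right = bisect_longest_edge([(0.0, 1.0), (0.0, 4.0)])
print(left, right)   # splits the second (longest) coordinate at 2.0
```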

Another critical aspect of a branch-and-bound algorithm is the choice of pruning rules used to exclude regions of the search space from exploration. The most common way to prune is to produce a lower and/or upper bound on the objective value of each subproblem and discard subproblems whose bound is no better than the incumbent solution value. For each partition subset X generated by the above branching operation, the bounding operation concentrates on estimating a lower bound LB(X) and an upper bound UB(X) for the optimal value of (LMP) over X. LB(X) can be obtained by solving the following convex relaxation programming problem:

\[
\text{CRP}(X):\quad
\begin{aligned}
\min\ & \underline{g}_0(x)=\sum_{j=1}^{p_0}\underline{g}_{0j}(x)\\
\text{s.t.}\ & \underline{g}_i(x)=\sum_{j=1}^{p_i}\underline{g}_{ij}(x)\le 0,\quad i=1,2,\ldots,M,\\
& \overline{g}_i(x)=\sum_{j=1}^{p_i}\overline{g}_{ij}(x)\ge 0,\quad i=M+1,M+2,\ldots,N,\\
& x\in D\cap X=\{x\in X \mid Ax\le b,\ x\ge 0\}.
\end{aligned}\tag{12}
\]

Moreover, since any feasible solution of (LMP) provides a valid upper bound on the optimal value, we can evaluate the original objective at the optimal solution of CRP(X) to determine an upper bound (when that solution is feasible for (LMP)) for the optimal value of (LMP) over X.

3.2. Branch-and-Bound Algorithm

Based on the former discussion, the presented algorithm for globally solving the LMP can be summarized as follows.

Step 0 (initialization).

Step 0.1. Set the iteration counter k := 0 and the initial partition set Ω^0 = {X^0}; set the set of active nodes X̃^0 = {X^0}, the upper bound UB = +∞, the set of feasible points F = ∅, and the convergence tolerance ϵ = 10^{-8}.

Step 0.2. Solve the initial convex relaxation problem (CRP) over the region X^0; if (CRP) is infeasible, then the original problem has no feasible solution. Otherwise, denote its optimal value and solution by f_opt^0 and x_opt^0, respectively. If x_opt^0 is feasible for (LMP), we obtain initial upper and lower bounds on the optimal value of (LMP): UB := f_0(x_opt^0) and LB := f_opt^0. If UB − LB < ϵ, the algorithm stops and x_opt^0 is a global optimal solution of (LMP); otherwise, proceed to Step 1.

Step 1 (branching).

Partition X^k into two new subrectangles according to the partition rule described in Section 3.1. Delete X^k from, and add the two new nodes to, the set of active nodes, still denoted X̃^k.

Step 2 (bounding).

For each subregion still of interest, X^{k,μ} ⊆ X^0, μ = 1, 2, obtain the optimal solution and value of the convex relaxation problem by solving CRP(X^{k,μ}). If LB(X^{k,μ}) > UB, delete X^{k,μ} from X̃^k. Otherwise, update the lower and upper bounds: LB = min{LB(X) | X ∈ X̃^k} and UB = min{UB, f_0(x^{k,μ}), μ = 1, 2}, the latter taken over those solutions x^{k,μ} that are feasible for (LMP).

Step 3 (termination).

If UB − LB ≤ ϵ, the algorithm stops and UB is the global optimal value of (LMP). Otherwise, set k := k + 1, select the active node with the smallest lower bound as the current node, and return to Step 1.
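The steps above can be sketched on a toy one-variable instance. The sketch below is illustrative only: it replaces the convex subproblem CRP(X) by a dense scan of the convex underestimator (adequate for this one-dimensional demonstration, not a certified bound in general), and it omits multiplicative constraints.

```python
import heapq

# Toy instance of (LMP) in one variable with no multiplicative constraints:
# minimize f(x) = x(1 - x) + (2x - 1)x = x^2 on [0, 1].  The solver only
# sees the factored form; the global minimum is 0, attained at x = 0.
TERMS = [((1.0, 0.0), (-1.0, 1.0)),   # x * (1 - x)
         ((2.0, -1.0), (1.0, 0.0))]   # (2x - 1) * x

def aff(fn, x):
    return fn[0] * x + fn[1]

def f(x):
    return sum(aff(p, x) * aff(q, x) for p, q in TERMS)

def rng(fn, a, b):
    va, vb = aff(fn, a), aff(fn, b)
    return min(va, vb), max(va, vb)

def underestimator(x, a, b):
    """Convex lower bounding function (9): for each term, the pointwise
    maximum of the two affine cuts in (7)-(8)."""
    total = 0.0
    for p, q in TERMS:
        l, u = rng(p, a, b)
        L, U = rng(q, a, b)
        phi, psi = aff(p, x), aff(q, x)
        total += max(u * psi + U * phi - U * u,
                     l * psi + L * phi - L * l)
    return total

def node_lb(a, b, samples=64):
    """Stand-in for CRP(X): a dense scan of the convex underestimator."""
    return min(underestimator(a + (b - a) * i / samples, a, b)
               for i in range(samples + 1))

def branch_and_bound(a, b, tol=1e-6):
    ub = min(f(a), f(b), f(0.5 * (a + b)))   # Step 0: initial incumbent
    heap = [(node_lb(a, b), a, b)]           # active nodes, best bound first
    while heap:
        lb, a, b = heapq.heappop(heap)
        if ub - lb <= tol:                   # Step 3: termination
            break
        m = 0.5 * (a + b)                    # Step 1: bisection
        ub = min(ub, f(m))                   # Step 2: update upper bound
        for lo, hi in ((a, m), (m, b)):
            clb = node_lb(lo, hi)
            if clb <= ub - tol:              # pruning rule
                heapq.heappush(heap, (clb, lo, hi))
    return ub

print(branch_and_bound(0.0, 1.0))   # 0.0 (the global minimum)
```

At the root node the underestimator dips to -0.5 at x = 0.5 while the incumbent is already 0, so the algorithm must branch; successive bisections shrink the boxes, the relaxation gap closes as in Remark 2, and the remaining nodes are pruned.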

3.3. Global Convergence of the Algorithm

The global convergence of the above algorithm for solving (LMP) is established in the following theorem.

Theorem 3.

The proposed algorithm either terminates within finitely many iterations or generates an infinite sequence {x^k}, every accumulation point of which is a global optimal solution of (LMP).

Proof.

If the proposed algorithm terminates finitely, suppose that it terminates at the k-th iteration, k ≥ 0. By the termination criterion,

\[
UB-LB\le\epsilon.\tag{13}
\]

By the upper bounding technique described in Step 2, UB = f_0(x^k), and hence

\[
f_0(x^k)-LB\le\epsilon.\tag{14}
\]

Let v_opt be the optimal value of (LMP); then, by Section 3.1 and Step 2, we know that

\[
UB=f_0(x^k)\ge v_{opt}\ge LB.\tag{15}
\]

Combining (14) and (15) yields

\[
v_{opt}\le f_0(x^k)\le LB+\epsilon\le v_{opt}+\epsilon,\tag{16}
\]

so x^k is a global ϵ-optimal solution of (LMP), and the proof of the first part is complete.

If the algorithm does not terminate finitely, then it generates an infinite sequence {x^k} of feasible solutions of (LMP) via solving (CRP). Since the feasible region of (LMP) is compact, the sequence {x^k} has a convergent subsequence; for definiteness and without loss of generality, assume lim_{k→∞} x^k = x^*. By the continuity of the linear functions,

\[
\lim_{k\to\infty}\phi_{ij}(x^k)=\phi_{ij}(x^*),\qquad
\lim_{k\to\infty}\psi_{ij}(x^k)=\psi_{ij}(x^*).\tag{17}
\]

Because the exhaustive bisection drives the diameters of the selected rectangles to zero, the bounds l_ij, u_ij, L_ij, U_ij computed over these rectangles satisfy

\[
\lim_{k\to\infty}l_{ij}=\lim_{k\to\infty}u_{ij}=\phi_{ij}(x^*),\qquad
\lim_{k\to\infty}L_{ij}=\lim_{k\to\infty}U_{ij}=\psi_{ij}(x^*).\tag{18}
\]

Consequently,

\[
\lim_{k\to\infty}UB^k=\lim_{k\to\infty}f_0(x^k)=f_0(x^*),\qquad
\lim_{k\to\infty}LB^k=\lim_{k\to\infty}\sum_{j=1}^{p_0}\underline{g}_{0j}(x^k)=f_0(x^*).\tag{19}
\]

Therefore lim_{k→∞}(UB^k − LB^k) = 0, which implies that x^* is a global optimal solution of (LMP).

4. Numerical Experiments

To test the efficiency and solution quality of the proposed algorithm, we performed computational experiments on a personal computer with an Intel Core i5 2.40 GHz processor and 4 GB of RAM. The code was written in Matlab 2014a, and all subproblems were solved using CVX.

We consider some instances of the linear multiplicative programming problem (LMP) from the recent literature, in Wang and Liang [25], Jiao [26], Shen et al. [23], Shen and Jiao [19, 20], and Jiao et al. [22], and from GLOBALLib at http://www.gamsworld.org/global/globallib.htm. Among them, Examples 1–6 are taken from the recent literature, and Examples 7–11 are taken from GLOBALLib, a collection of nonlinear programming models. The last example is a nonconvex mathematical programming problem with randomly generated linear multiplicative objective and constraint functions.

Example 1 (Shen et al. [23]).

\[
\begin{aligned}
\min\ & -4x_1^2-5x_2^2+x_1x_2+2x_1\\
\text{s.t.}\ & x_1-x_2\ge 0,\qquad \tfrac{1}{3}x_1^2-\tfrac{1}{3}x_2^2\le 1,\qquad \tfrac{1}{2}x_1x_2\le 1,\\
& 0\le x_1\le 3,\quad x_2\ge 0.
\end{aligned}\tag{20}
\]

Example 2 (Wang and Liang [25] and Jiao [26]).

\[
\begin{aligned}
\min\ & x_1^2+x_2^2\\
\text{s.t.}\ & 0.3x_1x_2\ge 1,\qquad 2\le x_1\le 5,\quad 1\le x_2\le 3.
\end{aligned}\tag{21}
\]

Example 3 (Jiao [26]).

\[
\begin{aligned}
\min\ & x_1^2+x_2^2-x_3^2\\
\text{s.t.}\ & 0.3x_1x_2+0.3x_2x_3+0.6x_1x_3\ge 4,\\
& 2\le x_1\le 5,\quad 1\le x_2\le 3,\quad 1\le x_3\le 3.
\end{aligned}\tag{22}
\]

Example 4 (Shen and Jiao [19, 20] and Jiao et al. [22]).

\[
\begin{aligned}
\min\ & x_1x_2-2x_1+x_2+1\\
\text{s.t.}\ & 8x_2^2-6x_1-16x_2\le -11,\qquad -x_2^2+3x_1+2x_2\le 7,\\
& 1\le x_1\le 2.5,\quad 1\le x_2\le 2.225.
\end{aligned}\tag{23}
\]

Example 5 (Shen and Jiao [19, 20] and Jiao et al. [22]).

\[
\begin{aligned}
\min\ & x_1\\
\text{s.t.}\ & \tfrac{1}{4}x_1+\tfrac{1}{2}x_2-\tfrac{1}{16}x_1^2-\tfrac{1}{16}x_2^2\le 1,\\
& \tfrac{1}{14}x_1^2+\tfrac{1}{14}x_2^2-\tfrac{3}{7}x_1-\tfrac{3}{7}x_2\le -1,\\
& 1\le x_1\le 5.5,\quad 1\le x_2\le 5.5.
\end{aligned}\tag{24}
\]

Example 6 (Jiao et al. [22]).

\[
\begin{aligned}
\min\ & (3x_1-3x_2+13)(x_1+x_2-1)\\
\text{s.t.}\ & -x_1+2x_2\le 8,\qquad -x_2\le -3,\qquad x_1+2x_2\le 12,\\
& x_1-2x_2\le -5,\qquad x_1\ge 0,\quad x_2\ge 0.
\end{aligned}\tag{25}
\]

Examples 1–6 are taken from the literature, where they were solved by branch-and-bound algorithms with linear relaxation techniques. The numerical experiments demonstrate that our method is more efficient than those methods, in the sense that our algorithm requires considerably fewer iterations and less CPU time for the same problems. Specific results for Examples 1–6 are listed in Table 1, where the notations in the headline have the following meanings: Exam.: serial number of the example in this paper; Ref.: source of the result in the references; Iter.: number of iterations; Time: CPU time in seconds; Prec.: precision used in the algorithm; Opt. val. and Opt. sol.: the optimal value and solution of the problem, respectively.

Table 1: Results of the numerical comparison experiments for Examples 1–6.

Exam. | Ref.                   | Opt. val.  | Opt. sol.                | Iter. | Time (s) | Prec.
1     | Shen et al. [23]       | -15.000    | (2.0, 1)                 | 1657  | 120.58   | 10^{-6}
1     | Ours                   | -15.0000   | (2.0000, 1.0000)         | 31    | 0.34449  | 10^{-8}
2     | Wang and Liang [25]    | 6.7780     | (2.00003, 1.66665)       | 44    | 0.18     | 10^{-4}
2     | Jiao [26]              | 6.77778    | (2.0, 1.666667)          | 58    | <1       | 10^{-8}
2     | Ours                   | 6.77778    | (2.0000, 1.6667)         | 1     | 0.0137   | 10^{-8}
3     | Jiao [26]              | -4.0       | (2.0, 1.0, 3.0)          | 43    | -        | 10^{-8}
3     | Ours                   | -4.0       | (2.0000, 1.0000, 3.0000) | 1     | 0.0214   | 10^{-8}
4     | Shen and Jiao [19, 20] | 0.0000     | (2.00, 1.00)             | 24    | -        | 10^{-3}
4     | Jiao et al. [22]       | 0.00000003 | (2.0000061, 1.0)         | 16    | 0.018    | 10^{-8}
4     | Ours                   | 0.0000     | (2.0000, 1.0000)         | 1     | 0.0351   | 10^{-8}
5     | Shen and Jiao [19, 20] | 1.1771     | (1.17709, 2.1772)        | 434   | 1        | 10^{-3}
5     | Jiao et al. [22]       | 1.17708    | (1.17709, 2.1772)        | 189   | 0.226    | 10^{-6}
5     | Ours                   | 1.1770     | (1.177088, 2.17718)      | 3     | 0.1534   | 10^{-8}
6     | Jiao et al. [22]       | 3.0000     | (0.0000, 4.0000)         | 25    | 0.750    | 10^{-8}
6     | Ours                   | 3.0000     | (0.0000, 4.0000)         | 1     | 0.01436  | 10^{-8}
Example 7 (st-qpk1).

\[
\begin{aligned}
\min\ & 2x_1-2x_1^2+2x_1x_2+3x_2-2x_2^2\\
\text{s.t.}\ & -x_1+x_2\le 1,\qquad x_1-x_2\le 1,\\
& -x_1+2x_2\le 3,\qquad 2x_1-x_2\le 3,\qquad x_1,x_2\ge 0.
\end{aligned}\tag{26}
\]

Example 8 (st-z).

\[
\begin{aligned}
\min\ & -x_1^2-x_2^2-x_3^2+2x_3\\
\text{s.t.}\ & x_1+x_2-x_3\le 0,\qquad -x_1+x_2-x_3\le 0,\\
& 12x_1+5x_2+12x_3\le 22.8,\qquad 12x_1+12x_2+7x_3\le 17.1,\\
& -6x_1+x_2+x_3\le 1.9,\qquad x_2\ge 0.
\end{aligned}\tag{27}
\]

Example 9 (ex5-4-2).

\[
\begin{aligned}
\min\ & x_1+x_2+x_3\\
\text{s.t.}\ & x_4+x_6\le 400,\qquad -x_4+x_5+x_7\le 300,\qquad -x_5+x_8\le 100,\\
& x_1-x_1x_6+833.333333333333\,x_4\le 83333.3333333333,\\
& x_2x_4-x_2x_7-1250x_4+1250x_5\le 0,\\
& x_3x_5-x_3x_8-2500x_5\le -1250000,\\
& 100\le x_1\le 10000,\qquad 1000\le x_2\le 10000,\qquad 1000\le x_3\le 10000,\\
& 10\le x_i\le 1000,\quad i=4,5,\ldots,8.
\end{aligned}\tag{28}
\]

Example 10 (st-qpc-m1).

\[
\begin{aligned}
\min\ & 10(x_1+x_2+x_3+x_4+x_5)-\bigl(0.34x_1^2+0.34x_2^2+0.35x_3^2+0.2x_4^2+0.99x_5^2\bigr)\\
& -2\bigl(0.28x_1x_2+0.22x_1x_3+0.24x_1x_4+0.51x_1x_5+0.23x_2x_3+0.24x_2x_4\\
&\qquad\ +0.45x_2x_5+0.22x_3x_4+0.34x_3x_5+0.38x_4x_5\bigr)\\
\text{s.t.}\ & x_1+x_2+2x_3+x_4+x_5\ge 10,\qquad 2x_1+3x_2+x_5\ge 8,\\
& x_2+4x_3-x_4+2x_5\ge 12,\qquad 8x_1-x_2-x_3+6x_4\ge 20,\\
& 2x_1+x_2+3x_3+x_4+x_5\le 30,\qquad x_1,x_2,x_3,x_4,x_5\ge 0.
\end{aligned}\tag{29}
\]

Example 11 (st-e26).

\[
\begin{aligned}
\min\ & -3x_1^2-5x_1-3x_2^2-5x_2\\
\text{s.t.}\ & 0.7x_1+x_2\le 6.3,\qquad 0.5x_1+0.8333x_2\le 6,\\
& x_1+0.6x_2\le 7.08,\qquad 0.1x_1+0.25x_2\le 1.35,\\
& 0\le x_1\le 10,\quad 0\le x_2\le 30.
\end{aligned}\tag{30}
\]

These five examples are taken from GLOBALLib; all of them are generalized linear multiplicative programs with different types of constraints. Computational results, together with some known results, are listed in Table 2, where the notations in the headline have the following meanings: Exam.: serial number of the example in this paper; Best sol. and Best val.: the best optimal solution and value currently known; Our sol. and Our val.: the optimal solution and value obtained by the algorithm described in this paper. The results summarized in the table show that our algorithm solves these instances of (LMP) effectively.

For Example 9, the solutions referred to in Table 2 are

\[
\begin{aligned}
x_{glob}&=(1026.9484,\ 1000,\ 5485,\ 265.0597,\ 280.58873,\ 134.9403,\ 284.47098,\ 380.5887),\\
x_{our}&=(1026.9484,\ 1000,\ 5485.282,\ 265.06,\ 280.589,\ 134.94,\ 284.471,\ 380.589).
\end{aligned}\tag{31}
\]

Results of numerical experiments 7–11.

Exam. | Best sol. | Our sol.                 | Best val.    | Our val.
7     | Unknown   | (1, 0)                   | Unknown      | 0
8     | Unknown   | (0.9, 0, 0.9)            | Unknown      | -1
9     | x_glob    | x_our                    | 7512.2301449 | 7512.2301446
10    | Unknown   | (0, 0, 0, 3.333, 26.6667)| Unknown      | -473.778
11    | Unknown   | (7.08, 0)                | Unknown      | -185.779196
Example 12 (random test).

\[
\begin{aligned}
\min\ & f(x)=\sum_{i=1}^{p}\bigl(a_{0i}^T x+b_{0i}\bigr)\bigl(c_{0i}^T x+d_{0i}\bigr)\\
\text{s.t.}\ & \sum_{i=1}^{p}\bigl(a_{1i}^T x+b_{1i}\bigr)\bigl(c_{1i}^T x+d_{1i}\bigr)\le 0,\\
& \sum_{i=1}^{p}\bigl(a_{2i}^T x+b_{2i}\bigr)\bigl(c_{2i}^T x+d_{2i}\bigr)\ge 0,\\
& x\in D=\{x\in\mathbb{R}^n \mid Ax\le b\},
\end{aligned}\tag{32}
\]

where the real numbers b_{0i} and d_{0i} are randomly generated in the range [-1, 1], b_{1i} is randomly generated in the interval [-1, 0], d_{1i}, b_{2i}, d_{2i} are randomly generated in the range [0, 1], and the real elements of a_{ji}, c_{ji}, A, and b are randomly generated in the range [0, 1].
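The random generation scheme just described can be sketched as follows (the function name and data layout are hypothetical, not the authors' Matlab code):

```python
import random

def random_instance(p, m, n, seed=0):
    """Generate one random instance of Example 12, assuming the stated
    ranges: b0, d0 in [-1, 1]; b1 in [-1, 0]; d1, b2, d2 in [0, 1];
    all vector and matrix entries in [0, 1]."""
    rnd = random.Random(seed)
    def vec(lo, hi, k):
        return [rnd.uniform(lo, hi) for _ in range(k)]
    # Each term is stored as (a, b, c, d) for (a^T x + b)(c^T x + d).
    obj  = [(vec(0, 1, n), rnd.uniform(-1, 1), vec(0, 1, n), rnd.uniform(-1, 1))
            for _ in range(p)]
    con1 = [(vec(0, 1, n), rnd.uniform(-1, 0), vec(0, 1, n), rnd.uniform(0, 1))
            for _ in range(p)]
    con2 = [(vec(0, 1, n), rnd.uniform(0, 1), vec(0, 1, n), rnd.uniform(0, 1))
            for _ in range(p)]
    A = [vec(0, 1, n) for _ in range(m)]
    b = vec(0, 1, m)
    return obj, con1, con2, A, b

obj, con1, con2, A, b = random_instance(p=5, m=10, n=10)
print(len(obj), len(A), len(A[0]))   # 5 10 10
```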

For this problem, we tested the groups of instances listed in Table 3 (ten parameter settings of different dimensions); for each group, 10 random instances were solved. The computational results are reported in Table 3, where the notations in the headline have the following meanings: Avr. iter.: average number of iterations of the algorithm; Std. dev.: standard deviation of the iteration counts; Avr. time: average CPU time in seconds; m and n denote the numbers of linear constraints and variables, respectively.

Numerical results of Example 12.

p  | m   | n   | Avr. iter. | Std. dev. | Avr. time (s)
5  | 10  | 10  | 5.7        | 28.2      | 0.5157
5  | 10  | 20  | 11.4       | 13.4      | 1.315
5  | 10  | 30  | 19.0       | 14.7      | 5.528
10 | 10  | 20  | 31.4       | 18.4      | 41.2140
10 | 20  | 40  | 35.7       | 17.7      | 43.541
20 | 30  | 40  | 49.6       | 19.5      | 75.208
20 | 30  | 50  | 56.1       | 22.3      | 88.233
30 | 50  | 80  | 117.0      | 14.8      | 168.213
50 | 80  | 100 | 327.0      | 24.5      | 216.209
50 | 100 | 150 | 530.0      | 58.5      | 436.139
5. Concluding Remarks

This paper presents a new relaxation method for designing a branch-and-bound algorithm for generalized linear multiplicative programming with linear multiplicative constraints. The relaxation problem is a convex program that is easily obtained with a one-step relaxation, and it has a better approximation effect than the usual two-phase linear relaxation method. The presented algorithm works efficiently without the nonnegativity restriction on the linear functions in the multiplicative terms, a restriction that is a necessary condition for most branch-and-bound algorithms described in the literature. Extensive numerical results on problems from the recent literature show that our method is feasible and effective for this class of problems.

Conflicts of Interest

The authors declare that there are no conflicts of interest.

Acknowledgments

This work was supported by the Science and Technology Key Project of the Education Department of Henan Province (14A110024 and 15A110023), the National Natural Science Foundation of Henan Province (152300410097), the Science and Technology Projects of Henan Province (182102310941), the Cultivation Plan of Young Key Teachers in Colleges and Universities of Henan Province (2016GGJS-107), the Higher School Key Scientific Research Projects of Henan Province (18A110019 and 17A110021), and the Major Scientific Research Projects of Henan Institute of Science and Technology (2015ZD07).

References

[1] H. Konno, H. Shirakawa, and H. Yamazaki, "A mean-absolute deviation-skewness portfolio optimization model," Annals of Operations Research, vol. 45, no. 1-4, pp. 205–220, 1993.
[2] M. C. Dorneich and N. V. Sahinidis, "Global optimization algorithms for chip layout and compaction," Engineering Optimization, vol. 25, no. 2, pp. 131–154, 1995.
[3] K. P. Bennett and O. L. Mangasarian, "Bilinear separation of two sets in n-space," Computational Optimization and Applications, vol. 2, no. 3, pp. 207–227, 1993.
[4] I. Quesada and I. E. Grossmann, "Alternative bounding approximations for the global optimization of various engineering design problems," in Global Optimization in Engineering Design, vol. 9 of Nonconvex Optimization and Its Applications, Kluwer Academic Publishers, Norwell, MA, 1996.
[5] F. Samadi, A. Mirzazadeh, and M. M. Pedram, "Fuzzy pricing, marketing and service planning in a fuzzy inventory model: a geometric programming approach," Applied Mathematical Modelling, vol. 37, no. 10-11, pp. 6683–6694, 2013.
[6] J. M. Mulvey, R. J. Vanderbei, and S. A. Zenios, "Robust optimization of large-scale systems," Operations Research, vol. 43, no. 2, pp. 264–281, 1995.
[7] H. P. Benson, "Vector maximization with two objective functions," Journal of Optimization Theory and Applications, vol. 28, no. 2, pp. 253–257, 1979.
[8] R. L. Keeney and H. Raiffa, Decisions with Multiple Objectives, Cambridge University Press, Cambridge, 1993.
[9] A. M. Geoffrion, "Solving bicriterion mathematical programs," Operations Research, vol. 15, pp. 39–54, 1967.
[10] H. Konno, P. T. Thach, and H. Tuy, Optimization on Low Rank Nonconvex Structures, vol. 15 of Nonconvex Optimization and Its Applications, Kluwer Academic Publishers, Dordrecht, The Netherlands, 1997.
[11] O. L. Mangasarian, "Equilibrium points of bimatrix games," Journal of the Society for Industrial and Applied Mathematics, vol. 12, pp. 778–780, 1964.
[12] A. M. Frieze, "A bilinear programming formulation of the 3-dimensional assignment problem," Mathematical Programming, vol. 7, no. 1, pp. 376–379, 1974.
[13] J. E. Falk, "A linear max-min problem," Mathematical Programming, vol. 5, pp. 169–188, 1973.
[14] H. P. Benson, "Global maximization of a generalized concave multiplicative function," Journal of Optimization Theory and Applications, vol. 137, no. 1, pp. 105–120, 2008.
[15] H. Tuy, Convex Analysis and Global Optimization, Springer International Publishing, 2nd edition, 2016.
[16] Y. Dai, J. Shi, and S. Wang, "Conical partition algorithm for maximizing the sum of dc ratios," Journal of Global Optimization, vol. 31, no. 2, pp. 253–270, 2005.
[17] H. Konno, Y. Yajima, and T. Matsui, "Parametric simplex algorithms for solving a special class of nonconvex minimization problems," Journal of Global Optimization, vol. 1, no. 1, pp. 65–81, 1991.
[18] N. V. Thoai, "A global optimization approach for solving the convex multiplicative programming problem," Journal of Global Optimization, vol. 1, no. 4, pp. 341–357, 1991.
[19] P. Shen and H. Jiao, "A new rectangle branch-and-pruning approach for generalized geometric programming," Applied Mathematics and Computation, vol. 183, no. 2, pp. 1027–1038, 2006.
[20] P. Shen and H. Jiao, "Linearization method for a class of multiplicative programming with exponent," Applied Mathematics and Computation, vol. 183, no. 1, pp. 328–336, 2006.
[21] C.-F. Wang, S.-Y. Liu, and P.-P. Shen, "Global minimization of a generalized linear multiplicative programming," Applied Mathematical Modelling, vol. 36, no. 6, pp. 2446–2451, 2012.
[22] H.-W. Jiao, S.-Y. Liu, and Y.-F. Zhao, "Effective algorithm for solving the generalized linear multiplicative problem with generalized polynomial constraints," Applied Mathematical Modelling, vol. 39, no. 23-24, pp. 7568–7582, 2015.
[23] P. Shen, Y. Duan, and Y. Ma, "A robust solution approach for nonconvex quadratic programs with additional multiplicative constraints," Applied Mathematics and Computation, vol. 201, no. 1-2, pp. 514–526, 2008.
[24] Y. Zhao and S. Liu, "An efficient method for generalized linear multiplicative programming problem with multiplicative constraints," SpringerPlus, vol. 5, article 1302, 2016.
[25] Y. Wang and Z. Liang, "A deterministic global optimization algorithm for generalized geometric programming," Applied Mathematics and Computation, vol. 168, no. 1, pp. 722–737, 2005.
[26] H. Jiao, "A branch and bound algorithm for globally solving a class of nonconvex programming problems," Nonlinear Analysis: Theory, Methods & Applications, vol. 70, no. 2, pp. 1113–1123, 2009.