Global Optimization for Solving Linear Multiplicative Programming Based on a New Linearization Method

This paper presents a new global optimization algorithm for solving a class of linear multiplicative programming (LMP) problems. First, a new linear relaxation technique is proposed. Then, to improve the convergence speed of the algorithm, two pruning techniques are presented. Finally, a branch-and-bound algorithm is developed for solving the LMP problem. The convergence of this algorithm is proved, and numerical experiments are reported to illustrate its feasibility and efficiency.

As a special case of nonconvex programming, the LMP problem has attracted increasing attention since the 1990s, for two reasons. The first is practical: LMP problems arise in a wide variety of applications, such as financial optimization [1], data mining/pattern recognition [2], plant layout design [3], VLSI chip design [4], and robust optimization [5]. The second is theoretical: LMP is NP-hard; it usually possesses multiple local optima that are not globally optimal. Finding its global optimal solution is therefore difficult, and effective methods are needed.
The purpose of this paper is to present an effective method for globally solving problem LMP. Compared with other algorithms, the main features of this algorithm are: (1) by exploiting the special structure of LMP, a new linear relaxation technique is presented, which is used to construct the linear relaxation programming (LRP) problem; (2) two pruning techniques are presented, which improve the convergence speed of the proposed algorithm; (3) the problem investigated in this paper has a more general form than those in [6-12]: it does not require the positivity conditions $c_i^{T}x+d_i>0$ and $e_i^{T}x+f_i>0$ on the linear factors of the objective; (4) numerical results and comparison with the methods in [8, 13-22] show that our algorithm performs as well as or better than those methods.
This paper is organized as follows. In Section 2, the new linear relaxation programming (LRP) problem for the LMP problem is proposed, which provides a lower bound for the optimal value of LMP. To improve the convergence speed of our algorithm, two pruning techniques are presented in Section 3. In Section 4, the global optimization algorithm is given, and its convergence is proved. Numerical experiments demonstrating the feasibility and efficiency of our algorithm are reported in Section 5.

Linear Relaxation Programming (LRP)
To solve problem LMP, the principal task is the construction of a lower bound for this problem and for its partitioned subproblems. A lower bound for the LMP problem and its partitioned subproblems can be obtained by solving a linear relaxation programming problem. To generate this linear relaxation, the strategy proposed in this paper is to underestimate the objective function $\Phi(x)$ with a linear function. The details of this procedure are given below.
First, we solve the $2n$ linear programming problems $\underline{x}_i^{0}=\min_{x\in D}x_i$, $\overline{x}_i^{0}=\max_{x\in D}x_i$, $i=1,\ldots,n$, and construct the rectangle $X^{0}=\{x\in\mathbb{R}^{n}:\underline{x}_i^{0}\le x_i\le\overline{x}_i^{0},\ i=1,\ldots,n\}$. Then the LMP problem can be rewritten over $X^{0}$. Let $X=[\underline{x},\overline{x}]$ be the initial rectangle $X^{0}$ or a subrectangle of $X^{0}$ generated by the proposed algorithm. Next, we show how to construct the linear relaxation programming problem LRP for LMP.
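The paper's linearization itself is new, but the role it plays can be illustrated with the classical McCormick bound: once each linear factor of a product term has known bounds over $X$, the product admits a linear underestimator. The sketch below uses McCormick's envelope and illustrative numbers, not the paper's own technique:

```python
# Hedged illustration: classical McCormick lower bound for a product u*v,
# given u in [ul, uu] and v in [vl, vu]. This plays the same structural
# role as a linear relaxation of one product term (NOT the paper's method).

def mccormick_lower(u, v, ul, uu, vl, vu):
    """Pointwise maximum of the two McCormick underestimators of u*v."""
    return max(ul * v + vl * u - ul * vl,
               uu * v + vu * u - uu * vu)

# Sanity check on a grid: the bound never exceeds the true product.
ok = True
for i in range(11):
    for j in range(11):
        u = 1.0 + 0.2 * i   # u ranges over [1, 3]
        v = -2.0 + 0.5 * j  # v ranges over [-2, 3]
        if mccormick_lower(u, v, 1.0, 3.0, -2.0, 3.0) > u * v + 1e-12:
            ok = False
print(ok)
```

Each of the two affine pieces follows from expanding $(u-\underline{u})(v-\underline{v})\ge 0$ and $(\overline{u}-u)(\overline{v}-v)\ge 0$, so the maximum of the two is still a valid underestimate.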

Pruning Technique
To improve the convergence speed of this algorithm, we present two pruning techniques, which can be used to eliminate the region in which the global optimal solution of LMP problem does not exist.
Assume that UB and LB are the current known upper and lower bounds on the optimal value $v$ of the problem LMP. Suppose that over $X=[\underline{x},\overline{x}]\subseteq X^{0}$ the objective admits linear bounding functions
$$\Phi^{l}(x)=\sum_{i=1}^{n}\alpha_i x_i+b\ \le\ \Phi(x)\ \le\ \Phi^{u}(x)=\sum_{i=1}^{n}\gamma_i x_i+c,$$
and, for each $t\in\{1,\ldots,n\}$, let
$$\beta_t=\mathrm{UB}-b-\sum_{i\ne t}\min\{\alpha_i\underline{x}_i,\alpha_i\overline{x}_i\},\qquad \sigma_t=\mathrm{LB}-c-\sum_{i\ne t}\max\{\gamma_i\underline{x}_i,\gamma_i\overline{x}_i\}.$$
The pruning techniques are derived in the following theorems.

Theorem 2. For any subrectangle $X=[\underline{x},\overline{x}]\subseteq X^{0}$, if there exists some index $t\in\{1,\ldots,n\}$ such that $\alpha_t>0$ and $\beta_t<\alpha_t\overline{x}_t$, then there is no globally optimal solution of the LMP problem over $X^{1}$; if $\alpha_t<0$ and $\beta_t<\alpha_t\underline{x}_t$ for some $t$, then there is no globally optimal solution of the LMP problem over $X^{2}$, where
$$X^{1}=\{x\in X:\beta_t/\alpha_t<x_t\le\overline{x}_t\},\qquad X^{2}=\{x\in X:\underline{x}_t\le x_t<\beta_t/\alpha_t\}.$$

Proof. For all $x\in X^{1}$, we first show that $\Phi(x)>\mathrm{UB}$. Consider the $t$th component $x_t$ of $x$. Since $x_t\in(\beta_t/\alpha_t,\overline{x}_t]$ and $\alpha_t>0$, we obtain $\alpha_t x_t>\beta_t$. For all $x\in X^{1}$, by this inequality and the definition of $\beta_t$, it follows that
$$\Phi^{l}(x)=\alpha_t x_t+\sum_{i\ne t}\alpha_i x_i+b>\beta_t+\sum_{i\ne t}\min\{\alpha_i\underline{x}_i,\alpha_i\overline{x}_i\}+b=\mathrm{UB}.$$
Thus, for all $x\in X^{1}$, we have $\Phi(x)\ge\Phi^{l}(x)>\mathrm{UB}\ge v$; that is, for all $x\in X^{1}$, $\Phi(x)$ is always greater than the optimal value $v$ of the problem LMP. Therefore, there cannot exist a globally optimal solution of the LMP problem over $X^{1}$.
Similarly, if there exists some $t$ such that $\alpha_t<0$ and $\beta_t<\alpha_t\underline{x}_t$, it can be derived that there is no globally optimal solution of the LMP problem over $X^{2}$.

Theorem 3. For any subrectangle $X=[\underline{x},\overline{x}]\subseteq X^{0}$, if there exists some index $t\in\{1,\ldots,n\}$ such that $\gamma_t>0$ and $\sigma_t>\gamma_t\underline{x}_t$, then there is no globally optimal solution of the LMP problem over $X^{3}$; if $\gamma_t<0$ and $\sigma_t>\gamma_t\overline{x}_t$ for some $t$, then there is no globally optimal solution of the LMP problem over $X^{4}$, where
$$X^{3}=\{x\in X:\underline{x}_t\le x_t<\sigma_t/\gamma_t\},\qquad X^{4}=\{x\in X:\sigma_t/\gamma_t<x_t\le\overline{x}_t\}.$$

Proof. First, we show that, for all $x\in X^{3}$, $\Phi(x)<\mathrm{LB}$. Consider the $t$th component $x_t$ of $x$. Since $x_t<\sigma_t/\gamma_t$ and $\gamma_t>0$, we have $\gamma_t x_t<\sigma_t$. For all $x\in X^{3}$, by this inequality and the definition of $\sigma_t$, it follows that
$$\Phi^{u}(x)=\gamma_t x_t+\sum_{i\ne t}\gamma_i x_i+c<\sigma_t+\sum_{i\ne t}\max\{\gamma_i\underline{x}_i,\gamma_i\overline{x}_i\}+c=\mathrm{LB}.$$
Thus, for all $x\in X^{3}$, we have $v\ge\mathrm{LB}>\Phi^{u}(x)\ge\Phi(x)$. Therefore, there cannot exist a globally optimal solution of the LMP problem over $X^{3}$. For all $x\in X^{4}$, if there exists some $t$ such that $\gamma_t<0$ and $\sigma_t>\gamma_t\overline{x}_t$, an argument similar to the above shows that there is no globally optimal solution of the LMP problem over $X^{4}$.
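The pruning test against the upper bound can be sketched in code, assuming a linear underestimator of the explicit form $\Phi^{l}(x)=\sum_i \alpha_i x_i + b$ over the box (the coefficient values below are hypothetical, chosen only for illustration):

```python
# Sketch of the upper-bound pruning test: remove the part of the box on
# which the linear underestimator Phi_l(x) = sum_i alpha[i]*x[i] + b
# provably exceeds the incumbent upper bound UB, so the true objective
# (which is >= Phi_l there) cannot attain the global minimum.

def prune_above_ub(alpha, b, lo, hi, UB):
    """Shrink box [lo, hi]; returns the (possibly smaller) box."""
    lo, hi = list(lo), list(hi)
    n = len(alpha)
    # Minimum contribution of each coordinate to Phi_l over the box
    # (computed once on the input box, which keeps the cuts conservative).
    mins = [min(alpha[i] * lo[i], alpha[i] * hi[i]) for i in range(n)]
    for t in range(n):
        beta = UB - b - (sum(mins) - mins[t])
        if alpha[t] > 0 and beta < alpha[t] * hi[t]:
            # Phi_l > UB whenever x_t > beta/alpha_t: cut the upper part.
            hi[t] = min(hi[t], beta / alpha[t])
        elif alpha[t] < 0 and beta < alpha[t] * lo[t]:
            # Symmetric cut of the lower part.
            lo[t] = max(lo[t], beta / alpha[t])
    return lo, hi

lo, hi = prune_above_ub(alpha=[2.0, -1.0], b=0.0,
                        lo=[0.0, 0.0], hi=[4.0, 4.0], UB=3.0)
print(lo, hi)
```

In this toy instance the test cuts the region $x_1 > 3.5$: there, $2x_1 - x_2 > 7 - 4 = 3 = \mathrm{UB}$, so no point of that region can be globally optimal.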

Algorithm and Its Convergence
Based on the previous results, this section presents the branch-and-bound algorithm and establishes its convergence.

Branching Rule.
In a branch-and-bound algorithm, the branching rule is a critical element in guaranteeing convergence. This paper adopts a simple and standard bisection rule, which suffices to ensure convergence since it drives the intervals of all variables to singletons along any infinite branch of the branch-and-bound tree.
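The bisection rule can be sketched as follows (a standard longest-edge split; the example coordinates are illustrative):

```python
# Standard bisection branching: split the current rectangle through the
# midpoint of its longest edge, producing two subrectangles whose widths
# shrink to zero along any infinite branch of the search tree.

def bisect(lo, hi):
    """Split box [lo, hi] into two sub-boxes along its longest edge."""
    t = max(range(len(lo)), key=lambda i: hi[i] - lo[i])
    mid = 0.5 * (lo[t] + hi[t])
    left_hi = hi[:]; left_hi[t] = mid
    right_lo = lo[:]; right_lo[t] = mid
    return (lo, left_hi), (right_lo, hi)

left, right = bisect([0.0, 0.0], [4.0, 1.0])
print(left, right)
```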

Branch and Bound Algorithm.
From the above discussion, the branch-and-bound algorithm for globally solving the LMP problem is summarized as follows.
Let $\mathrm{LB}(X^{k})$ be the optimal objective value of LRP over the subrectangle $X=X^{k}$, and let $x^{k}=x(X^{k})$ be an element of the corresponding argmin.
Step 2. Set $\mathrm{UB}_{k}=\mathrm{UB}_{k-1}$. Subdivide $X^{k-1}$ into two subrectangles via the branching rule, and denote the set of new partition rectangles by $\overline{X}^{k}$.
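The overall bound-prune-branch loop can be sketched on a toy box-constrained instance, $\phi(x)=(x_1-x_2)(x_1+x_2)$ over $[0,1]^2$, whose true minimum is $-1$ at $(0,1)$. Simple interval bounds stand in for the paper's LRP (an assumption made only so the sketch is self-contained; any valid lower-bounding scheme fits the same loop):

```python
# Toy branch-and-bound for min (x1 - x2)(x1 + x2) over [0,1]^2.
# Interval arithmetic replaces the paper's LRP subproblem: it still
# yields valid lower bounds, so the bound/prune/branch structure of the
# algorithm is unchanged.

def lin_range(c, d, lo, hi):
    """Range of the linear function c.x + d over the box [lo, hi]."""
    l = d + sum(min(ci * a, ci * b) for ci, a, b in zip(c, lo, hi))
    u = d + sum(max(ci * a, ci * b) for ci, a, b in zip(c, lo, hi))
    return l, u

def prod_lower(p, q):
    """Lower bound of a product given factor intervals p and q."""
    return min(p[0] * q[0], p[0] * q[1], p[1] * q[0], p[1] * q[1])

c1, d1 = [1.0, -1.0], 0.0   # first linear factor:  x1 - x2
c2, d2 = [1.0,  1.0], 0.0   # second linear factor: x1 + x2

def phi(x):
    return (x[0] - x[1]) * (x[0] + x[1])

def branch_and_bound(lo, hi, eps=1e-4):
    mid = [0.5 * (a + b) for a, b in zip(lo, hi)]
    UB, best = phi(mid), mid          # incumbent from the midpoint
    stack = [(lo, hi)]
    while stack:
        lo, hi = stack.pop()
        LB = prod_lower(lin_range(c1, d1, lo, hi),
                        lin_range(c2, d2, lo, hi))
        if LB > UB - eps:             # prune: box cannot improve incumbent
            continue
        mid = [0.5 * (a + b) for a, b in zip(lo, hi)]
        if phi(mid) < UB:             # cheap feasible candidate point
            UB, best = phi(mid), mid
        t = max(range(len(lo)), key=lambda i: hi[i] - lo[i])  # bisection
        hi1, lo2 = hi[:], lo[:]
        hi1[t] = lo2[t] = mid[t]
        stack += [(lo, hi1), (lo2, hi)]
    return UB, best

UB, x = branch_and_bound([0.0, 0.0], [1.0, 1.0])
print(round(UB, 3))
```

At termination every discarded box satisfied $\mathrm{LB} > \mathrm{UB} - \varepsilon$, so the returned incumbent value is within $\varepsilon$ of the global minimum $-1$.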

Convergence Analysis.
In this subsection, the convergence properties of the algorithm are given.

Theorem 4. The algorithm either terminates finitely with a globally $\varepsilon$-optimal solution or generates an infinite sequence $\{x^{k}\}$, any accumulation point of which is a globally optimal solution of LMP.
Proof. If the algorithm terminates finitely, assume without loss of generality that it terminates at the $k$th step. By the algorithm, we have $\mathrm{UB}_{k}-\mathrm{LB}_{k}\le\varepsilon$, so $x^{k}$ is a globally $\varepsilon$-optimal solution of the problem LMP.
If the algorithm does not terminate finitely, then an infinite sequence $\{x^{k}\}$ is generated. Since the feasible region of LMP is bounded, the sequence $\{x^{k}\}$ has a convergent subsequence. Without loss of generality, let $\lim_{k\to\infty}x^{k}=x^{*}$. By the algorithm, we have $\lim_{k\to\infty}\mathrm{LB}_{k}\le v$. Since $x^{*}$ is a feasible solution of problem LMP, $v\le\Phi(x^{*})$.
Taken together, the following relation holds: $\lim_{k\to\infty}\mathrm{LB}_{k}\le v\le\Phi(x^{*})$. On the other hand, by the algorithm and the continuity of $\Phi^{l}(\cdot)$, we have $\lim_{k\to\infty}\mathrm{LB}_{k}=\lim_{k\to\infty}\Phi^{l}(x^{k})=\Phi^{l}(x^{*})$. By Theorem 1, we can obtain $\Phi^{l}(x^{*})=\Phi(x^{*})$. Therefore, $v=\Phi(x^{*})$; that is, $x^{*}$ is a globally optimal solution of problem LMP.
The simplex method is applied to solve the linear relaxation programming problems. In our experiments, the convergence tolerance $\varepsilon$ is $10^{-6}$ for Examples 1-10 and $10^{-2}$ for Example 11.
The results for problems 1-10 are summarized in Table 1, where the following notation is used in the row headers: Iter is the number of algorithm iterations; Time (s) is the execution time in seconds. Except for the results of our algorithm, the results of the other eleven algorithms are taken directly from the corresponding references. In Table 1, "-" denotes that the corresponding value is not available.
For problems 1-10, we compare the efficiency of the algorithm proposed in this paper (Algorithm 1) with that of the same algorithm without the pruning techniques (Algorithm 2). The comparison results are given in Table 2.
Example 3 (see [13, 17]). min (0.813396 …

To further verify the effectiveness of Algorithm 1, a random problem with variable scale is constructed, which is defined as follows.