Image Space Accelerating Algorithm for Solving a Class of Multiplicative Programming Problems

This paper presents an image space accelerating branch and bound algorithm for globally solving a class of multiplicative programming problems (MP). In this algorithm, in order to obtain the global optimal solution, the problem (MP) is transformed into an equivalent problem (P2) by introducing new variables. By utilizing a new linearizing relaxation technique, the problem (P2) can be converted into a series of linear relaxation programming problems, which provide reliable lower bounds in the branch and bound search. Meanwhile, an image space accelerating method is constructed to improve the computational performance of the algorithm by deleting subintervals that contain no global optimal solution. Furthermore, the global convergence of the algorithm is proved, the computational complexity of the algorithm is analyzed, and the maximum number of iterations of the algorithm is estimated. Finally, numerical experimental results show that the algorithm is robust and efficient.


Introduction
Consider a class of multiplicative programming problems as follows: where p ∈ Z_+; c_j ∈ R^n, d_j ∈ R, j = 1, 2, ..., p; A ∈ R^{m×n} and b ∈ R^m; c_j^T x + d_j > 0 for any x ∈ X; and the feasible region X is a nonempty bounded polyhedral set. The problem is well known as a multiplicative programming problem, which corresponds to a nonlinear optimization problem with a nonconvex objective function. The problem (MP) arises in various application fields, for example, decision tree optimization [1], multiple-objective decision making [2,3], robust optimization [4], control and engineering optimization [5-9], computer vision [10,11], network flow optimization [12-14], food science engineering, and big data analysis. That is to say, the problem is useful in many settings, so it is of great importance to develop an efficient algorithm for the problem (MP).
In the past twenty years, many deterministic algorithms have been developed to solve the problem (MP). Generally, these algorithms can be divided into several categories, such as parameterization-based methods [15-17], outer-approximation methods [18,19], finite branch and bound algorithms [20-28], decomposition methods [29], level set algorithms [30,31], primal and dual simplex methods [32], cutting plane methods [16,33], and heuristic methods [34,35]. Recently, Gao et al. [19] proposed an outcome-space finite algorithm that solves a class of linear multiplicative programming problems by solving only a convex quadratic programming problem in each iteration. Chen and Jiao [36] presented a finite branch and bound algorithm based on the theory of monotonic optimization. Wang et al. [37] used simplicial subdivision in the branching operation to solve generalized multiplicative programs. By directly mapping the original variable space to a two-dimensional space, Wang et al. [38] gave a method that can effectively solve special second-order multiplicative problems. Shen and Huang [39] developed a rectangular branch and bound algorithm with a bisection rule to solve linear multiplicative programming problems. Liu and Zhao [31] presented a new algorithm for solving generalized multiplicative programming based on the level set method. Zhang et al. [40] proposed an output space branch and bound algorithm for solving linear multiplicative programs. Moreover, Shen and Wang [41] proposed a branch and bound algorithm that can solve multiplicative programs in polynomial time. Using a piecewise linear approximation method and some outer space accelerating techniques, Hou and Liu [42] provided an outer space algorithm that solves a class of multiplicative programs effectively.
Up to now, although there has been much progress in the development of deterministic algorithms for solving special forms of the problem (MP), global optimization of the general form of the problem (MP) considered in this paper has received little attention.
In this paper, based on the branch and bound framework, a new image space accelerating algorithm is proposed that solves a series of linear relaxation problems over partitioned subsets to find a global optimal solution of the problem (MP). For this purpose, we first convert the problem (MP) into the equivalent optimization problem (P2) by mapping the problem (MP) from the original space to the image space. Second, a linear relaxation problem of the problem (P2) is constructed. Based on the linear relaxation technique, the problem (P2) is systematically converted into a sequence of linear relaxation programming problems.
Third, by successive refinement iterations, the solutions of these constructed linear relaxation problems can be made as close as desired to the global optimal solution of the problem (MP). Meanwhile, in the above iterations, the approximation process can be accelerated by deleting regions in which there is no global optimal solution of the problem (P2).
Compared with the known methods, the proposed algorithm has the following advantages: (1) New variables are introduced to reduce the n-dimensional space to a p-dimensional image space. Then, the branching operations in the branch and bound framework act on the p-dimensional image space, which can significantly reduce the computational cost, as p is usually much smaller than n. (2) In the bounding operations, according to the properties of the objective function of the problem (P2) and the structure of the algorithm, an accelerating method is proposed to reduce the computational cost of the algorithm. (3) We prove the global convergence of the proposed algorithm and estimate the maximum number of iterations by analyzing its computational complexity. On the other hand, the efficiency of the algorithm is sensitive to the interval range of the variables of the image space. The remainder of this paper is organized as follows. In Section 2, we give the equivalent transformation of the original problem and the linear relaxation method for the equivalent problem. In Section 3, an image space accelerating algorithm based on the branch and bound framework is developed and the convergence of the algorithm is proved. In addition, the computational complexity of the algorithm is analyzed and the maximum number of iterations of the algorithm is estimated. In Section 4, numerical results on some tests are provided. Finally, a brief summary is given in Section 5.

Equivalent Problem and Its Linear Relaxation
In this section, we convert the problem (MP) into an equivalent problem (P1) which has the same optimal solution as the problem (MP). Using the properties of the logarithmic function, we obtain this equivalent programming problem. Next, introduce p variables t_j = c_j^T x + d_j, j = 1, 2, ..., p, and let l_j^0 and u_j^0 be the minimum and maximum of t_j, j = 1, 2, ..., p, which can be obtained by solving the linear programs l_j^0 = min_{x∈X} (c_j^T x + d_j) and u_j^0 = max_{x∈X} (c_j^T x + d_j). Hence, the problem (P1) can be transformed into the following equivalent problem (P2).
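The initial image-space box [l_j^0, u_j^0] is obtained by solving 2p linear programs, as described above. A minimal sketch follows; the instance data (A, b, c_j, d_j) are illustrative assumptions, not from the paper.

```python
# Sketch: compute l_j^0 = min c_j^T x + d_j and u_j^0 = max c_j^T x + d_j over
# X = {x : A x <= b}. The small instance below (a unit box) is hypothetical.
import numpy as np
from scipy.optimize import linprog

A = np.array([[1.0, 0.0], [0.0, 1.0], [-1.0, 0.0], [0.0, -1.0]])
b = np.array([1.0, 1.0, 0.0, 0.0])       # encodes 0 <= x1, x2 <= 1
C = np.array([[1.0, 1.0], [1.0, 0.0]])   # rows are the vectors c_j
d = np.array([1.0, 2.0])                 # chosen so c_j^T x + d_j > 0 on X

def initial_box(A, b, C, d):
    """Solve the 2p LPs giving the initial interval of each t_j."""
    p = len(d)
    l0, u0 = np.empty(p), np.empty(p)
    for j in range(p):
        lo = linprog(C[j], A_ub=A, b_ub=b, bounds=[(None, None)] * A.shape[1])
        hi = linprog(-C[j], A_ub=A, b_ub=b, bounds=[(None, None)] * A.shape[1])
        l0[j] = lo.fun + d[j]
        u0[j] = -hi.fun + d[j]
    return l0, u0

l0, u0 = initial_box(A, b, C, d)
```

For this toy instance, t_1 = x_1 + x_2 + 1 ranges over [1, 3] and t_2 = x_1 + 2 over [2, 3].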
Obviously, the problem (P1) has the same optimal solution as the problem (P2). In the following, the main task is to solve the problem (P2). To this end, the main work is to construct lower bounds for this problem and for the subproblems generated by the branch and bound algorithm. These lower bounds can be obtained by solving the corresponding linear relaxation programming problems.
For any t ∈ [l, u] ⊆ T^0, we define the following functions: f_j(t_j) = ln(t_j), h_j(t_j) = ln(l_j) + k_j(t_j − l_j), and g_j(t_j) = k_j t_j − ln(k_j) − 1, where k_j = (ln(u_j) − ln(l_j))/(u_j − l_j). The linear relaxation programming problems of the problem (P2) can be generated by underestimating every term c_j ln(t_j), j = 1, 2, ..., p, with a linear function. The following theorem gives the details of this linear relaxation technique for the problem (P2).

Theorem 1.
For any functions f_j(t_j), g_j(t_j), and h_j(t_j) over [l_j, u_j], j = 1, 2, ..., p, we have the following two statements: (1) The function h_j(t_j) is the linear convex envelope of the function f_j(t_j), and the function g_j(t_j) is a supporting hyperplane of the function f_j(t_j) that is parallel to h_j(t_j); hence h_j(t_j) ≤ f_j(t_j) ≤ g_j(t_j). (2) Define δ^1_j(t_j) = g_j(t_j) − f_j(t_j) and δ^2_j(t_j) = f_j(t_j) − h_j(t_j). Then both maximal gaps tend to zero as ω_j = u_j − l_j → 0. Proof. (i) Obviously, the function f_j(t_j) = ln(t_j) is a concave and monotonically increasing function of t_j over [l_j, u_j]. Therefore, the line passing through the points (l_j, ln(l_j)) and (u_j, ln(u_j)) is the affine convex envelope of the function f_j(t_j) = ln(t_j), and this line can be written in the form h_j(t_j) = ln(l_j) + k_j(t_j − l_j). Hence, for t_j ∈ [l_j, u_j], we have h_j(t_j) ≤ f_j(t_j). On the other hand, supposing that the line g_j(t_j) is the tangent line of the function f_j(t_j) parallel to the line h_j(t_j), the tangential support point must occur at (1/k_j, ln(1/k_j)). Then, the corresponding tangent function is g_j(t_j) = k_j t_j − ln(k_j) − 1. According to the geometric properties of the function f_j(t_j) = ln(t_j), for all t_j ∈ [l_j, u_j], we obtain f_j(t_j) ≤ g_j(t_j). This completes the proof of (i) in Theorem 1. (ii) With z_j = u_j/l_j, the slope k_j = (ln(u_j) − ln(l_j))/(u_j − l_j) can be rewritten as k_j = ln(z_j)/((z_j − 1) l_j). It is known that δ^2_j(t_j) is a concave function of t_j over [l_j, u_j], so its maximum δ^2_{j,max} is attained at the point of tangential support, t_j = 1/k_j. Since z_j → 1 and ln(z_j)/(z_j − 1) → 1 as ω_j → 0, we obtain that δ^2_{j,max} → 0 as ω_j → 0. On the other hand, δ^1_j(t_j) is a convex function of t_j for any t_j ∈ [l_j, u_j]. Clearly, the maximum of δ^1_j(t_j), denoted δ^1_{j,max}, must occur at an endpoint, l_j or u_j. In addition, the definition of k_j can be transformed into the identity k_j u_j − ln(u_j) = k_j l_j − ln(l_j).
Then, we have δ^1_j(l_j) = δ^1_j(u_j), and from the description shown above it holds that δ^1_{j,max} → 0 as ω_j → 0. From Theorem 1, we can obtain approximate solutions by shortening the gap between the tangent line g_j(t) and the convex envelope h_j(t). As a result, the functions g_j(t) and h_j(t) approximate the function f_j(t) arbitrarily closely as ω_j → 0. Now, we give the linear relaxation programming of the problem (P2). Suppose that Ψ^L_j(t_j) denotes the linear underestimator of the j-th term of the objective of the problem (P2). Then, summing all the terms Ψ^L_j(t_j), j = 1, 2, ..., p, and denoting the result by Ψ^L(t), the corresponding linear relaxation programming problem (P3) of the problem (P2) over T can be constructed. From the construction process of the linear relaxation programming problem, all feasible solutions of the problem (P2) over a subdomain T^k are feasible to the problem (P3). Furthermore, over the subdomain T^k, the optimal value of the problem (P3) is less than or equal to that of the problem (P2). Therefore, the problem (P3) provides a valid lower bound for the optimal value of the problem (P2) over the partition set T^k. It should be pointed out that the problem (P3) contains only the constraints necessary to ensure the global convergence of the algorithm.
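The sandwich property of Theorem 1 can be checked numerically: over [l, u] the chord h underestimates ln, the parallel tangent g overestimates it, and both gaps vanish as the interval shrinks. This is only a verification sketch; the interval endpoints are illustrative.

```python
# Numerical check of Theorem 1: h(t) <= ln(t) <= g(t) on [l, u], and the
# maximal gaps shrink with the interval width.
import numpy as np

def envelopes(l, u):
    """Return (h, g) for f(t) = ln t over [l, u]: chord and parallel tangent."""
    k = (np.log(u) - np.log(l)) / (u - l)
    h = lambda t: np.log(l) + k * (t - l)   # convex envelope (chord below ln)
    g = lambda t: k * t - np.log(k) - 1.0   # supporting line with the same slope
    return h, g

def max_gap(l, u):
    """Largest of the two gaps max(ln - h) and max(g - ln) over [l, u]."""
    h, g = envelopes(l, u)
    t = np.linspace(l, u, 1001)
    return max(np.max(np.log(t) - h(t)), np.max(g(t) - np.log(t)))

l, u = 1.0, 4.0
h, g = envelopes(l, u)
t = np.linspace(l, u, 1001)
assert np.all(h(t) <= np.log(t) + 1e-12)   # chord is an underestimator
assert np.all(np.log(t) <= g(t) + 1e-12)   # tangent is an overestimator
```

Shrinking [l, u] drives both gaps to zero, which is exactly why the bisection refinement below yields convergent bounds.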

Algorithm, Convergence, and Its Complexity
By combining the branching operation, the bounding operation, and the accelerating method, we design an image space accelerating algorithm to solve the equivalent problem (P2). By solving a series of linear relaxation programming problems over subsets of T^0, this method finds a global optimal solution. The branching operation of the branch and bound method divides the set T^0 into subhyperrectangles. Each subhyperrectangle is related to a node of the branch and bound tree, and each node is associated with the linear relaxation programming subproblem over the corresponding subhyperrectangle.
Using Q_k to represent the set of active nodes at step k of the algorithm, each active node is associated with a subhyperrectangle T ⊆ T^0, T ∈ Q_k. By computing the optimal value LB(T) of the problem (P3), we get a lower bound of the optimal value of the problem (P2) on each node of Q_k at step k.
Thus, we denote by LB_k = min{LB(T) : T ∈ Q_k} the lower bound of the optimal value of the problem (P2) over the entire initial box region T^0 at step k.
If the solution of the problem (P3) is feasible for the problem (P2), the upper bound UB of the incumbent solution is updated. Then, at any step k, every active node satisfies LB(T) < UB, T ∈ Q_k.
Next, an active node with the smallest lower bound is selected, and its associated subhyperrectangle is divided into two subhyperrectangles. As mentioned above, the lower bound of each new node is calculated, and a new set of active nodes for the next step is generated. The whole process is iterated until global convergence is obtained.
The key factor in ensuring convergence to the global minimum is selecting an appropriate partition strategy. In this paper, we choose a standard and simple bisection rule. This rule drives all interval widths of every variable toward zero, which is sufficient to guarantee the global convergence of the proposed algorithm. The branching rule is as follows: assuming that the current active node is the subhyperrectangle T^k = [l^k, u^k], select the branching index s with the longest edge u^k_s − l^k_s and bisect T^k at the midpoint (l^k_s + u^k_s)/2 into two subhyperrectangles.
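The standard longest-edge bisection rule can be sketched in a few lines; plain Python, with no paper-specific details assumed.

```python
# Sketch of the bisection branching rule: split the box [l, u] along its
# longest edge at the midpoint, producing two sub-boxes.
def bisect(l, u):
    """Split the box [l, u] (lists of equal length) into two sub-boxes."""
    s = max(range(len(l)), key=lambda j: u[j] - l[j])   # longest edge index
    mid = 0.5 * (l[s] + u[s])
    left = (l[:], u[:s] + [mid] + u[s + 1:])            # t_s in [l_s, mid]
    right = (l[:s] + [mid] + l[s + 1:], u[:])           # t_s in [mid, u_s]
    return left, right

left, right = bisect([1.0, 2.0], [3.0, 9.0])            # splits dimension 2
```

Because every edge is eventually selected and halved, the diameters of the generated boxes tend to zero, which is the exhaustiveness property used in the convergence proof.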

Accelerating Method.
According to the structure of the branch and bound framework, we propose an accelerating technique that enhances the convergence speed of the algorithm by deleting the whole region, or a part of the region, in which there is no global optimal solution. Without loss of generality, assume that the current subhyperrectangle is T^k = [l^k, u^k] ⊆ T^0, j = 1, 2, ..., p, and that UB is the current upper bound of the problem (P1) obtained by the proposed algorithm. The accelerating method is given by Theorem 2.

Theorem 2.
For any t^k ∈ T^k = [l^k, u^k] ⊆ T^0, j = 1, 2, ..., p, the following conclusions hold. In the first case, it is obvious that the problem (P2) has no global optimal solution over the current subhyperrectangle T^k; therefore, the current subhyperrectangle T^k can be deleted.
where c_j > 0. Therefore, the current subhyperrectangle T^k_r does not contain the global optimal solution of the problem (P2).
For any t^k ∈ T^k_l, consider the s-th dimensional variable t^k_s of t^k; we obtain the corresponding inequality, where c_j < 0. Consequently, there is no global optimal solution over T^k_l. This completes the proof of Theorem 2. From Theorem 2, we can construct the accelerating method to delete an invalid subhyperrectangle, or to reduce the ineffective intervals of all dimensions of a hyperrectangle that does not contain the global optimal solution of the problem (P2). Consequently, the efficiency of the proposed algorithm is improved.

Step 0. Initialization: (1) Initialize the iteration number k := 0, the set of active nodes Q := ∅, the upper bound UB := +∞, the lower bound LB := −∞, the sufficiently small accuracy tolerance ε > 0, and the set of feasible solutions F := ∅.
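The precise inequalities of Theorem 2 are not reproduced above, so the following is only a generic range-reduction sketch in the same spirit: given a separable linear lower bound L(t) = c0 + Σ_j a_j t_j of the objective over the box [l, u] (all a_j > 0 here for simplicity), any t with L(t) > UB can be discarded, which allows shrinking the box dimension by dimension. All names and data are illustrative assumptions, not the paper's exact test.

```python
# Generic interval reduction for a separable linear lower bound (a sketch,
# not the paper's exact Theorem 2): delete the box if even its best point
# exceeds UB, otherwise tighten each upper endpoint u_s.
def reduce_box(c0, a, l, u, UB):
    """Shrink [l, u] to the part where c0 + sum_j a_j t_j <= UB (all a_j > 0)."""
    base = c0 + sum(aj * lj for aj, lj in zip(a, l))  # minimum of L over the box
    if base > UB:
        return None                                   # whole box can be deleted
    u_new = list(u)
    for s in range(len(a)):
        # With every other coordinate at its lower bound, t_s may not exceed:
        cap = l[s] + (UB - base) / a[s]
        u_new[s] = min(u[s], cap)
    return l, u_new

box = reduce_box(0.0, [1.0, 1.0], [0.0, 0.0], [10.0, 10.0], 3.0)
```

Terms with negative coefficients are handled symmetrically by tightening the lower endpoints instead.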
The upper bound u_j^0 and lower bound l_j^0 of t_j are generated, and the initial interval T^0 of t is obtained. Then, add the rectangle T^0 = [l^0, u^0] to the active node set Q.
Step 1. Solve the problem (P3) over the hyperrectangle T = T^0 to obtain the feasible solution F := (t^0, x^0) and the lower bound LB := LB(T^0). Update the initial upper bound UB by substituting t^0 into the problem (P2). If UB − LB ≤ ε, go to Step 6. Otherwise, proceed to Step 2.
Step 2. Select a branching variable t_s to divide T^k into two new subhyperrectangles according to the branching rule shown above, and denote the set of new partition rectangles by T^k.
(1) For each T ∈ T^k, use the above accelerating technique to compress its range, and still denote the remaining subrectangle by T^k.
(2) For each T ∈ T^k ≠ ∅, compute the lower bound LB(T) and the feasible solution (t^k, x^k) by solving the linear relaxation programming problem (P3). If UB < LB(T), discard the corresponding subhyperrectangle T from T^k, that is, T^k := T^k ∖ T, and continue to process the next element of T^k.
Otherwise, calculate UB_k and add the subhyperrectangle to the active node set Q. If UB_k − UB < ε, update the upper bound UB := UB_k and the feasible solution F := (t^k, x^k) as in Step 1; then go to Step 6.
Step 3. Update the active node set Q by removing the subdivided node and adding the new subrectangles retained in Step 2. If Q = ∅, go to Step 6.

Mathematical Problems in Engineering 5
Step 4. Update the lower bound LB := inf_{T∈Q} LB(T). If UB − LB < ε, go to Step 6. Otherwise, set k := k + 1 and move on to the next step.
Step 5. Choose an active node T^k with T^k = argmin_{T∈Q} LB(T), set t^k := t(T^k), and return to Step 2.
Step 6. Stop the algorithm, with exp(UB) as the optimal value of the problem (MP) and the vector x* extracted from F as the corresponding optimal solution.
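The steps above can be sketched end to end for the plain product case (all exponents equal to one, which is an assumption of this sketch, not the paper's general model): minimize ∏_j (c_j^T x + d_j) over {x : Ax ≤ b}, working in the p-dimensional image space of t_j = c_j^T x + d_j and bounding each ln(t_j) from below by its chord. The instance data and tolerance are illustrative.

```python
# A minimal branch and bound sketch following Steps 0-6, assuming a plain
# product objective. Not the paper's exact implementation.
import heapq
import numpy as np
from scipy.optimize import linprog

A = np.array([[1.0, 0.0], [0.0, 1.0], [-1.0, 0.0], [0.0, -1.0]])
b = np.array([1.0, 1.0, 0.0, 0.0])       # X: 0 <= x1, x2 <= 1
C = np.array([[1.0, 1.0], [1.0, 0.0]])   # rows c_j
d = np.array([1.0, 2.0])
eps = 1e-6

def lp(obj, extra_A, extra_b):
    return linprog(obj, A_ub=np.vstack([A, extra_A]),
                   b_ub=np.concatenate([b, extra_b]),
                   bounds=[(None, None)] * A.shape[1])

def relax_bound(l, u):
    """(P3) over the box: minimize sum_j chord_j(c_j^T x + d_j), l <= t <= u."""
    k = (np.log(u) - np.log(l)) / np.maximum(u - l, 1e-12)
    const = np.sum(np.log(l) + k * (d - l))         # chord constant terms
    res = lp(k @ C, np.vstack([C, -C]), np.concatenate([u - d, d - l]))
    return (res.fun + const, res.x) if res.success else (None, None)

def branch_and_bound():
    n, p = A.shape[1], len(d)
    l, u = np.empty(p), np.empty(p)
    for j in range(p):                              # Step 0: initial box
        l[j] = lp(C[j], np.zeros((0, n)), np.zeros(0)).fun + d[j]
        u[j] = -lp(-C[j], np.zeros((0, n)), np.zeros(0)).fun + d[j]
    UB, best, tie = np.inf, None, 0
    heap = [(-np.inf, 0, l, u)]                     # best-first node selection
    while heap:
        LB, _, l, u = heapq.heappop(heap)
        if LB >= UB - eps:                          # prune by stored bound
            continue
        LB, x = relax_bound(l, u)                   # bounding (Steps 1/2)
        if LB is None or LB >= UB - eps:
            continue
        val = np.sum(np.log(C @ x + d))             # feasible value of (P2)
        if val < UB:
            UB, best = val, x
        s = int(np.argmax(u - l))                   # bisect the longest edge
        mid = 0.5 * (l[s] + u[s])
        for lo, hi in ((l[s], mid), (mid, u[s])):
            ln, un = l.copy(), u.copy()
            ln[s], un[s] = lo, hi
            tie += 1
            heapq.heappush(heap, (LB, tie, ln, un))
    return np.exp(UB), best                         # Step 6: value of (MP)

value, x_star = branch_and_bound()
```

For this toy instance the minimizer is x = (0, 0) with product value (1)(2) = 2, and the chord relaxation is already tight at the first node, so the loop terminates immediately.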

Convergence of the Algorithm.
In this subsection, the global convergence properties of the proposed algorithm are given. It is obvious that the partitioning sets used by the proposed algorithm are all rectangular and compact. Tuy [43] pointed out that rectangular subdivisions are exhaustive.
Therefore, the proposed algorithm is exhaustive according to the branching rule mentioned above. Moreover, if the proposed algorithm does not terminate finitely, the subdivision of T generates an infinite nested sequence of partition sets {T^k}, and the associated sequence of diameters d(T^k) satisfies d(T^k) → 0 as k → ∞.

Lemma 1. Given the function Ψ(t) in (4), the relaxation Ψ^R(t) given by (19) is strongly consistent over T^0. That is, there exist a subsequence {T^q} of the subhyperrectangles {T^k} and a corresponding sequence {t^q} with t^q ∈ T^q satisfying Ψ^R(t^q) → Ψ(t̂) as q → ∞.
Proof. By the exhaustive partition according to the branching rule, a sequence of subhyperrectangles {T^k} that is exhaustive on T^0 is generated. Meanwhile, a corresponding sequence {t^k | t^k ∈ T^k} is obtained. Through the exhaustive subdivision, we get that T^k → t̂ and t^k → t̂. Since the upper bounds and the lower bounds over T^k are both sequences in a compact space, there exists a convergent subsequence.
Furthermore, according to (5), (18), (19), and (20), the gap Ψ(t^q) − Ψ^R(t^q) can be estimated term by term. From Theorem 1, each term is bounded by the corresponding gap functions. Since g_j(t^q_j) and f_j(t^q_j) can be regarded as continuous functions of t^q_j, and ω_j is a function of (l^q_j, u^q_j), by the problem (P3) we have that these gaps vanish as q → ∞. Using a proof similar to that of (32), we obtain the same for the remaining terms. Therefore, it can be seen from (30), (31), (32), and (33) that Ψ^R(t^q) → Ψ(t̂) as q → ∞, which satisfies the strong consistency requirement for the underestimator.

Lemma 2. Given the function Ψ(t) in (5), Ψ(t) has tight bounds. That is, there exist an upper bound and a lower bound of Ψ(t) such that, for any infinite sequence {T^k} produced by the exhaustive subpartition with T^k → t̂, both bounds converge to Ψ(t̂).
Proof. As in Lemma 1, for any infinite subhyperrectangle sequence {T^k}, T^k = [l^k, u^k], produced by an exhaustive subpartition of T^0 with T^k → t̂ as k → ∞, there exists a sequence {t^k} with t^k ∈ T^k such that t^k → t̂ as k → ∞. Correspondingly, (l^k, t^k, u^k) → (t̂, t̂, t̂) as k → ∞. Corresponding to (30) and (31), take the lower bounding function Ψ^R(t) and define the upper bounding function analogously. By Theorem 1, the resulting inequalities are obvious. Denote by l^k and u^k the lower and upper bounds of T^k, respectively. According to (ii) of Theorem 1, for any t ∈ T^k, the gap between the bounding functions and Ψ(t) is controlled by max_{t∈T} δ^1_j(t) and max_{t∈T} δ^2_j(t), which are given by Theorem 1. Since z_j → 1 as k → ∞, max_{t∈T} δ^1_j(t) → 0 and max_{t∈T} δ^2_j(t) → 0 as k → ∞. Hence, by the continuity of the functions f_j(t) over T^k, it follows from inequality (31) that both bounds converge to Ψ(t̂). Consequently, from (30), (31), (32), (37), and (40), the conclusion can be drawn: the function Ψ(t) has tight upper and lower bounds.
Let T_a be the set of accumulation points of {t^k}, and denote by T* the set argmin_{t∈D} Ψ(t), where D ≠ ∅ is the feasible region of the problem (P2).

Theorem 3. The proposed algorithm either terminates within finitely many iterations, with the incumbent solution being an optimal solution of the problem (P2), or generates an infinite sequence of branch and bound nodes such that T_a ⊆ T*; that is, every accumulation point of {t^k} is a global optimal solution of the problem (P2).
Proof. For each iteration step of the proposed algorithm, Horst [44] proved that {LB_k} is a nondecreasing sequence bounded above by min_{t∈D} Ψ(t), which ensures that the limit LB = lim_{k→∞} LB_k ≤ min_{t∈D} Ψ(t) exists. Since {t^k} is a sequence on a compact set, it has a convergent subsequence. In addition, for any t̂ ∈ T_a, there is a subsequence {t^r} of {t^k} with lim_{r→∞} t^r = t̂. By Tuy [43] and Lemma 1, the partition of the subhyperrectangle sets in Step 2 is exhaustive on T^0, and the choice of the subhyperrectangle to be subdivided in Step 2 is bound improving. Therefore, there is a decreasing subsequence {T^q} ⊂ {T^r}, where T^r is the T-space of the partition Q_r, with t^q ∈ T^q, LB_q = LB(T^q) = Ψ^R(t^q), and lim_{q→∞} t^q = t̂. From Lemma 2, we have lim_{q→∞} LB_q = LB = Ψ(t̂). Next, we prove that t̂ ∈ D. Since T^0 is a closed set, t̂ ∈ T^0. We argue by contradiction: suppose t̂ ∉ D, so that Ψ(t̂) = δ > 0. Since Ψ is a continuous function, the sequence {Ψ(t^q)} converges to Ψ(t̂); by this convergence property, there exists q_δ such that |Ψ(t^q) − Ψ(t̂)| < δ for q > q_δ. Hence, for q > q_δ, Ψ(t^q) > 0, implying that the problem (P2) is infeasible at t^q and contradicting the assumption that t^q = t(T^q). This leads to a contradiction, and therefore t̂ ∈ D. As a result, we obtain LB = Ψ(t̂) = min_{t∈D} Ψ(t).

Computational Complexity of the Algorithm.
In this subsection, we analyze the computational complexity of the proposed algorithm by estimating the maximum number of iterations. In the process of branching operations, we denote the diameter of T^k = [l^k, u^k] ⊆ T^0 by Δ(T^k). Meanwhile, we introduce some notations as follows.

Lemma 3. For a given convergence tolerance ε > 0, if Δ(T^k) ≤ ε/(μρ), then UB − LB(T^k) < ε,
where UB is the current global upper bound of the equivalent problem and LB(T^k) is the optimal value of the linear relaxation problem over the current subhyperrectangle.
Proof. Assuming that t^k is the optimal solution of the linear relaxation problem, we have LB(T^k) = Ψ^R(t^k). According to (4) and (19), we can bound the gap accordingly. Then, by using (16) and (17), we obtain the required estimate. From the previous formula and the condition Δ(T^k) ≤ ε/(μρ), we obtain that UB − LB(T^k) < ε. This completes the proof of Lemma 3. □

Theorem 4. Given the convergence tolerance ε > 0, the maximum number of iterations of the proposed algorithm to obtain an ε-globally optimal solution is at most Σ_{j=1}^p ⌈log_2(μρ(u_j^0 − l_j^0)/ε)⌉, where μ and ρ are given in (44) and (45), respectively.
Proof. Suppose that, at the k_j-th iteration, the subhyperrectangle T^k ⊆ T^0 satisfies the stated condition. Meanwhile, according to the branching operation in Step 2, we have the corresponding halving relation. From (51) and (52), we obtain the bound on k_j. By the proof of Lemma 3, we get that UB − LB(T^k) < ε. After at most Σ_{j=1}^p ⌈log_2(μρ(u_j^0 − l_j^0)/ε)⌉ iterations, we have UB − Ψ(t*) < ε, where t* is the optimal solution of the equivalent problem (P2). Therefore, we obtain an ε-globally optimal solution. The proof of Theorem 4 is completed.
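The worst-case iteration count of Theorem 4 is straightforward arithmetic once the box widths and the constants are known. In the sketch below, the constants μ and ρ from (44) and (45) are set to illustrative values, since their definitions are not reproduced here.

```python
# Sketch of the Theorem 4 bound: sum_j ceil(log2(mu * rho * (u0_j - l0_j)/eps)).
import math

def max_iterations(l0, u0, mu, rho, eps):
    """Worst-case number of branch and bound iterations (Theorem 4 form)."""
    return sum(math.ceil(math.log2(mu * rho * (uj - lj) / eps))
               for lj, uj in zip(l0, u0))

bound = max_iterations([1.0, 2.0], [3.0, 3.0], mu=1.0, rho=1.0, eps=1e-3)
```

Because each dimension is repeatedly halved, the count grows only logarithmically in 1/ε and linearly in p.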

Numerical Experiment
In this section, we give some numerical experiments to verify the performance of the proposed global optimization algorithm. These experiments are carried out on a workstation running Microsoft Windows 10 with an Intel(R) Xeon(R) E5-2620 2.10 GHz processor and 16 GB of memory. The proposed algorithm is written in the Python programming language (v3.7), and the elementary simplex method is used to solve the linear relaxation programs. The corresponding numerical results are demonstrated in Tables 1, 2, and 3.
Some exact tests from the recent literature [19,31,36-41] are implemented to show that the proposed algorithm can globally solve the problem (MP) effectively. For these tests, the results obtained by the proposed algorithm are illustrated in Table 1. To describe the numerical results, we use the following notations for column headers in Table 1: Iter: the number of iterations; time: the execution time in seconds; optimum: the approximate optimal value; optimal solution: the approximate optimal solution of the test; ε: the accuracy tolerance value. The experimental results show that our algorithm can globally solve the problem (MP) effectively.
In order to illustrate the performance of the proposed algorithm in a complex environment, we solve a series of random problems as follows. Problem 1 (see [40]): min ∏_{j=1}^{p} c_j^T x, subject to Ax ≤ b. Problem 2 (see [21,45,46]). In Problems 1 and 2, each element of the vectors c_j is a pseudorandom number generated in [0, 1], all elements a_ij of the constraint matrix A are randomly generated in the interval [−1, 1], and b_i, i = 1, 2, ..., m, is generated by b_i = Σ_{j=1}^n a_ij + 2η, where η is a pseudorandom number generated in [0, 1].
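The random instance generation described above can be sketched directly; the sizes below are illustrative. Note that the rule b_i = Σ_j a_ij + 2η guarantees a nonempty feasible set, since x = (1, ..., 1) then satisfies Ax ≤ b.

```python
# Sketch of the random instance generator: c_j uniform in [0, 1], A uniform
# in [-1, 1], b_i = sum_j a_ij + 2*eta with eta uniform in [0, 1].
import numpy as np

def random_instance(m, n, p, seed=0):
    rng = np.random.default_rng(seed)
    C = rng.uniform(0.0, 1.0, size=(p, n))    # objective vectors c_j (rows)
    A = rng.uniform(-1.0, 1.0, size=(m, n))   # constraint matrix
    eta = rng.uniform(0.0, 1.0, size=m)
    b = A.sum(axis=1) + 2.0 * eta             # ensures x = ones is feasible
    return C, A, b

C, A, b = random_instance(m=5, n=4, p=2)
```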
For the numerical experiments on Problems 1 and 2, we solve 10 different random instances at every scale and record the statistics of these results in Tables 2 and 3, respectively. To compare the performance of the algorithm, we set the tolerance ε = 10^{-6}.
In Tables 2 and 3, the computational results are reported. According to the results shown in Table 2, when p is fixed at 2, 3, 4, 5, 6, and 7 in turn, it is obvious that the elapsed CPU time of the proposed algorithm is better than that reported by Zhang and Gao [40]. On the other hand, when p is smaller than 4, the number of iterations is approximately the same as that of Zhang and Gao [40]; when p is greater than 4, the number of iterations is greatly reduced.
That is, the proposed algorithm can effectively solve larger-scale instances of Problem 1. Table 3 illustrates that the proposed algorithm can solve Problem 2 with less computational time and fewer iterations. Meanwhile, the stability of the proposed algorithm is as good as that of Shen and Huang [46]. In addition, it is clear that the average CPU time and the average number of iterations of the algorithm increase slowly and roughly linearly with the scale of the problem when p is constant.
From the above numerical results, it can be concluded that the proposed algorithm solves all examples effectively and robustly.

Conclusion
In this paper, we study a class of multiplicative programming problems (MP) and propose a new image space branch and bound algorithm. In this algorithm, using the logarithmic transformation, we transform the problem (MP) into the problem (P2). Then, based on the linear relaxation technique, we construct the linear relaxation programming problem for the problem (P2). Differently from existing algorithms, the branching operations of the proposed algorithm take place in the p-dimensional image space, which brings great benefits for improving the computational efficiency of the proposed algorithm. In addition, ε-global convergence of the proposed algorithm is achieved in at most Σ_{j=1}^p ⌈log_2(μρ(u_j^0 − l_j^0)/ε)⌉ iterations. The numerical results demonstrate the feasibility and the efficiency of our algorithm. Our future work is to extend the algorithm to more general classes of linear multiplicative problems.

Conflicts of Interest
The authors declare that they have no conflicts of interest.