A New Global Optimization Algorithm for Solving a Class of Nonconvex Programming Problems

A new two-part parametric linearization technique is proposed for globally solving a class of nonconvex programming problems (NPP). First, a two-part parametric linearization method is adopted to construct underestimators of the objective and constraint functions, by utilizing a transformation together with a parametric linear upper bounding function (LUBF) and a linear lower bounding function (LLBF) of the natural logarithm function and the exponential function with base e, respectively. Then, a sequence of relaxed lower-bounding linear programming problems, which are embedded in a branch-and-bound algorithm, is derived from the initial nonconvex programming problem. The proposed algorithm converges to a global optimal solution by successively solving a series of linear programming problems. Finally, some examples are given to illustrate the feasibility of the presented algorithm.


Introduction
In this paper, we consider a class of nonconvex programming problems of the following form:

$$(\mathrm{NPP}):\quad \min\ f_0(x)\quad \text{s.t.}\ f_m(x)\le \beta_m,\ m=1,\dots,M,\quad x\in X^0=[l^0,u^0]\subset\mathbb{R}^n,$$

with $f_m(x)=\sum_{j=1}^{T_m} c_{mj}\prod_{i=1}^{q}\left(a_i^{\top}x+b_i\right)^{\gamma_{mji}}$ for $m=0,1,\dots,M$, where $a_i\in\mathbb{R}^n$, $b_i$, $\beta_m$, $c_{mj}$, $\gamma_{mji}$ are real numbers, $A=(a_1,\dots,a_q)^{\top}$ is a $q\times n$ matrix, $T_m>0$, $l^0$ and $u^0$ are finite, and $a_i^{\top}x+b_i>0$ for all $x\in X^0$. (NPP) contains various variants such as the sum or product of a finite number of ratios of linear functions, generalized linear multiplicative programs, general polynomial programming, quadratic programming, and generalized geometric programming. Hence, (NPP) and its special forms have attracted considerable attention in the literature because of their large number of practical applications in various fields, including transaction costs [1], financial optimization [2], robust optimization [3], VLSI chip design [4], data mining/pattern recognition [5], queueing-location problems [6, 7], bond portfolio optimization [8, 9], and elastic-plastic finite element analysis of metal forming processes [10]. From a research point of view, (NPP) poses significant theoretical and computational challenges, since it generally possesses multiple local optima that are not globally optimal. Recently, Jiao [11] and Shen et al. [12] proposed branch-and-bound algorithms for globally solving a class of nonconvex programming problems (NPP). By utilizing tangential hypersurfaces, convex envelope approximations of the exponential function, and concave envelope approximations of the logarithmic function, a two-stage linear relaxation technique was given. A linear programming relaxation of the original problem could then be constructed, and a branch-and-bound algorithm was proposed for globally solving (NPP).
The remainder of this paper is organized as follows. In Section 2, the two-part parametric linearization method is presented for generating the relaxed lower-bounding linear programming of (NPP). In Section 3, the proposed branch-and-bound algorithm, in which the relaxed subproblems are embedded, is described, and the convergence of the algorithm is established. Some numerical results are reported in Section 4. Finally, concluding remarks are given in Section 5.

Parametric Linear Relaxation of (NPP)
Now, we derive an equivalent form of the function $f_m(x)$ by a transformation. First, for any $x\in X^0$, since $a_i^{\top}x+b_i>0$, we may define $Y_{mj}(x)=\sum_{i=1}^{q}\gamma_{mji}\ln\left(a_i^{\top}x+b_i\right)$. Then, for all $m=0,1,\dots,M$, the function $f_m(x)$ can be rewritten as $f_m(x)=\sum_{j=1}^{T_m}c_{mj}\exp\left(Y_{mj}(x)\right)$. In order to construct an underestimator of the function $f_m(x)$ for each $m$, we adopt the two-part parametric linearization method. We first derive a linear upper bounding function (LUBF) and a linear lower bounding function (LLBF) of $c_{mj}\exp(Y_{mj})$ with respect to the variable $Y_{mj}$, respectively. Then, in the second part, an LLBF with respect to the primal variable $x$ is ultimately constructed.
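As a concrete illustration of the transformation above, the following Python sketch (with hypothetical data; the names `product_form` and `exp_log_form` are ours, not the paper's) checks numerically that a product of powers of positive affine terms equals the exponential of a weighted sum of their logarithms:

```python
import math

def product_form(x, a, b, g):
    """Evaluate prod_i (a_i . x + b_i)^g_i directly."""
    val = 1.0
    for ai, bi, gi in zip(a, b, g):
        t = sum(aij * xj for aij, xj in zip(ai, x)) + bi
        assert t > 0, "each affine term must stay positive on the box"
        val *= t ** gi
    return val

def exp_log_form(x, a, b, g):
    """Evaluate the same quantity as exp(sum_i g_i * ln(a_i . x + b_i))."""
    s = 0.0
    for ai, bi, gi in zip(a, b, g):
        t = sum(aij * xj for aij, xj in zip(ai, x)) + bi
        s += gi * math.log(t)
    return math.exp(s)

# Hypothetical data: two affine terms in two variables, real exponents.
a = [[1.0, 2.0], [0.5, -0.25]]
b = [3.0, 1.0]
g = [1.5, -2.0]
x = [0.7, 0.4]
```

The identity holds whenever every affine term is positive on the box, which is exactly the standing assumption $a_i^{\top}x+b_i>0$ on $X^0$.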

Parametric Linear Estimation of Logarithm and Exponential Functions.
We first construct parametric linear overestimators and underestimators of the natural logarithm function and the exponential function with base $e$ on an interval vector $X\subseteq X^0$, respectively. Let $X=[l,u]=\{x\in\mathbb{R}^n\mid l\le x\le u\}$, where $x=(x_i)_{n\times 1}$ and $l$ and $u$ are called the lower bound and upper bound, respectively. For any $\omega\in\{0,1\}^n$, we denote by $z(\omega)$ the vertex of $X$ with components $z_i(\omega)=l_i$ if $\omega_i=0$ and $z_i(\omega)=u_i$ if $\omega_i=1$, where $\omega\in\{0,1\}^n$ is an $n$-dimensional vector with components equal to 0 or 1. For convenience, we denote by $0\in\{0,1\}^n$ the vector with all components equal to 0 and by $1\in\{0,1\}^n$ the vector with all components equal to 1. Then, we have $z(0)=l$ and $z(1)=u$. The following theorem illustrates how to construct lower and upper bounding linear functions of the natural logarithm function and of the exponential function with base $e$, respectively.
Proof. For the function $\Phi(x)=\exp\left(\sum_{i=1}^{n}a_ix_i+b\right)$, this result is shown in [38], and for $\Phi(x)=\ln\left(\sum_{i=1}^{n}a_ix_i+b\right)$, the proof is similar. However, to provide a self-contained presentation, and because this result is central to this paper, we give a direct proof for the natural logarithm function.
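The linear bounds constructed here rest on two standard convexity facts, which the following Python sketch spells out numerically (function names are ours): since exp is convex, any tangent line underestimates it and the chord over $[l,u]$ overestimates it; since ln is concave, the roles are reversed.

```python
import math

def exp_secant_ub(l, u, y):
    """LUBF of exp on [l, u]: the chord through (l, e^l) and (u, e^u)."""
    return math.exp(l) + (math.exp(u) - math.exp(l)) / (u - l) * (y - l)

def exp_tangent_lb(t, y):
    """LLBF of exp: the tangent at a chosen point t (e.g., an interval vertex)."""
    return math.exp(t) * (1.0 + (y - t))

def log_secant_lb(l, u, y):
    """LLBF of ln on [l, u] with 0 < l < u: the chord, since ln is concave."""
    return math.log(l) + (math.log(u) - math.log(l)) / (u - l) * (y - l)

def log_tangent_ub(t, y):
    """LUBF of ln: the tangent at a point t > 0."""
    return math.log(t) + (y - t) / t
```

Choosing the tangent point at an interval vertex $z(\omega)$ is what makes the bounds parametric in $\omega$.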

First-Part Parametric Linear Relaxation.
In this subsection, we discuss how to obtain the first-part relaxation LLBF of $g_{mj}(Y_{mj})=c_{mj}\exp(Y_{mj})$ with respect to the variable $Y_{mj}$ by using Theorem 1.
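One way such an LLBF can be realized, depending on the sign of the coefficient, is sketched below (our illustrative reading, not the paper's exact formulas): for $c>0$ the function $c\exp(Y)$ is convex, so a tangent line underestimates it, while for $c<0$ it is concave, so the secant through the interval endpoints underestimates it.

```python
import math

def cexp_llbf(c, l, u, t=None):
    """Return coefficients (alpha, beta) with alpha + beta*Y <= c*exp(Y) on [l, u].

    For c >= 0 the bound is a tangent (convex case, taken at t, default the
    midpoint); for c < 0 it is the secant through the endpoints (concave case).
    """
    if c >= 0:
        t = 0.5 * (l + u) if t is None else t  # tangent point
        beta = c * math.exp(t)                 # slope of the tangent at t
        alpha = c * math.exp(t) - beta * t     # intercept: value minus slope*t
    else:
        beta = c * (math.exp(u) - math.exp(l)) / (u - l)  # secant slope
        alpha = c * math.exp(l) - beta * l                # secant intercept
    return alpha, beta
```

For $c\ge 0$, any tangent point $t\in[l,u]$, in particular an interval vertex as in the parametric construction of Theorem 1, yields a valid LLBF.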

Approximation Relaxation Linear Programming.
Consequently, the approximation relaxed lower-bounding linear programming (LLP) of problem (NPP), with the parametric vector $\omega$ on an interval vector $X=[l,u]\subseteq X^0\subset\mathbb{R}^n$, is easily obtained as follows. Based on the linear underestimators, every feasible point of (NPP) is feasible in (LLP), and the objective value of (LLP) is smaller than or equal to that of (NPP) at every point of $X$. Thus, (LLP) provides a valid lower bound for the optimal value of (NPP) over the partition set $X$. It should be noted that problem (LLP) contains only the constraints necessary to guarantee convergence of the algorithm. The following result is key to the convergence of the proposed algorithm.

Theorem 4. Let $\Delta_1^{mj}$ and $\Delta_2^{mj}$ denote the maximal gaps between $c_{mj}\exp(Y_{mj})$ and its LLBF and LUBF over $X=[l,u]$, respectively. Then one has $\lim_{\|u-l\|\to 0}\Delta_1^{mj}=0$ and $\lim_{\|u-l\|\to 0}\Delta_2^{mj}=0$.

Proof. From Theorem 1 and the definition of the function $f_m(\cdot)$, for any $x\in X$, it follows that $\Delta_1^{mj}$ can be bounded via the mean value theorem, where $\nabla f_m$ is the gradient of $f_m$, $\xi=z(\omega)+\theta\left(x-z(\omega)\right)$ for some $\theta\in[0,1]$, and $z(\omega)$, $z_{mj}(\omega)$ are vertices of the interval vectors $X$ and $Y_{mj}:=[Y_{mj}^l,Y_{mj}^u]$, respectively. By (6) and the proof of Theorem 1, the right-hand side of inequality (29) tends to zero as $\|u-l\|\to 0$. Similarly, we can prove that $\lim_{\|u-l\|\to 0}\Delta_2^{mj}=0$.
Theorem 4 shows that, as the subhyperrectangle $X\subseteq X^0$ becomes sufficiently small, the solution of (LLP)($X$) approaches the solution of (NPP)($X$) arbitrarily closely; this guarantees the global convergence of the method.
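The shrinking-gap behaviour behind Theorem 4 can be observed numerically. The following sketch (the helper `max_secant_gap` is ours) estimates the largest gap between exp and its secant overestimator on an interval by grid sampling, and shows that the gap decreases as the interval shrinks:

```python
import math

def max_secant_gap(l, u, samples=1000):
    """Approximate the largest difference between the secant overestimator
    of exp on [l, u] and exp itself, on a uniform sample grid."""
    slope = (math.exp(u) - math.exp(l)) / (u - l)
    gap = 0.0
    for k in range(samples + 1):
        y = l + k * (u - l) / samples
        gap = max(gap, math.exp(l) + slope * (y - l) - math.exp(y))
    return gap
```

The gap shrinks roughly quadratically with the interval width, which is why exhaustive bisection drives the relaxation error to zero along any infinite branch of the tree.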

Algorithm and Its Convergence
In this section, a branch-and-bound algorithm is developed to solve (NPP) based on the relaxed lower-bounding linear programming derived in Section 2. This algorithm needs to solve a sequence of linear programming problems over partitioned subsets of $X^0$ in order to find a global optimum. Consequently, the method partitions the set $X^0$ into subhyperrectangles, each associated with a node of the branch-and-bound tree, and each node is associated with a relaxed linear subproblem on its subhyperrectangle.
First, at any stage $k$ of the algorithm, suppose that we have a collection of active nodes denoted by $\Lambda_k$, each associated with a subhyperrectangle $X\subset X^0$, for all $X\in\Lambda_k$. For each node $X$, we will have computed a lower bound $LB(X)$ of the optimal value of (NPP)($X$) via the solution of problem (LLP), so that the lower bound of the optimal value of (NPP) on the whole initial box region $X^0$ at stage $k$ is given by $LB_k=\min\{LB(X)\mid X\in\Lambda_k\}$. Whenever the lower-bounding solution of any node subproblem, that is, the solution of the relaxed linear programming (LLP), turns out to be feasible for (NPP), we update the upper bound $UB$ of the incumbent solution if necessary. Then, the active node collection $\Lambda_k$ satisfies $UB\ge LB(X)$ for all $X\in\Lambda_k$ at each stage $k$. We then select an active node $X\in\Lambda_k$ such that $LB(X)=LB_k$ for further consideration. The active node $X$ is partitioned into two subhyperrectangles according to the following branching rule. For these two subhyperrectangles, the fathoming step is applied in order to identify whether they should be eliminated. Finally, we obtain the collection of active nodes for the next stage, and this process is repeated until convergence is obtained.

Branching Rule.
The critical element in guaranteeing convergence to a global minimum is the choice of a suitable partitioning strategy. In this paper, we choose a simple and standard bisection rule. This rule is sufficient to ensure convergence, since it drives all the intervals to zero for the variables associated with the term yielding the greatest discrepancy in the employed approximation along any infinite branch of the branch-and-bound tree.
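A minimal sketch of one common bisection variant is given below: it splits a hyperrectangle at the midpoint of its longest edge. (The rule described above selects the variable tied to the greatest approximation discrepancy; we use edge length here purely for illustration, and the function name is ours.)

```python
def bisect_longest_edge(lower, upper):
    """Split the box [lower, upper] into two sub-boxes by halving the
    coordinate with the largest width (an exhaustive bisection rule)."""
    widths = [u - l for l, u in zip(lower, upper)]
    j = widths.index(max(widths))          # branching variable
    mid = 0.5 * (lower[j] + upper[j])      # split point
    left = (list(lower), list(upper[:j]) + [mid] + list(upper[j + 1:]))
    right = (list(lower[:j]) + [mid] + list(lower[j + 1:]), list(upper))
    return left, right
```

Any such rule is exhaustive: repeated application drives the widths of all coordinates of the nested boxes to zero.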

Algorithmic Statement.
The deterministic global optimization algorithm is summarized as follows.
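Since the algorithmic statement is a standard branch-and-bound scheme, the following self-contained toy sketch mirrors its main steps (bounding, node selection, bisection, fathoming) on a hypothetical one-dimensional instance, minimizing $f(y)=\exp(y)-3y$ over $[0,2]$, with a tangent-based linear underestimator of exp as the node relaxation. None of this code is from the paper; it only illustrates the control flow.

```python
import math
import heapq

def f(y):
    """Toy nonlinear objective: exp(y) - 3y, minimized at y = ln 3."""
    return math.exp(y) - 3.0 * y

def node_lower_bound(l, u):
    """Min over [l, u] of the linear relaxation exp(l) + exp(l)*(y-l) - 3y.

    The tangent of exp at l underestimates exp (convexity), so this linear
    function underestimates f; being linear, it attains its minimum at an
    endpoint of the subinterval.
    """
    lin = lambda y: math.exp(l) + math.exp(l) * (y - l) - 3.0 * y
    return min(lin(l), lin(u))

def branch_and_bound(l, u, eps=1e-6):
    best = min(f(l), f(u))                     # incumbent upper bound
    heap = [(node_lower_bound(l, u), l, u)]    # active nodes, best bound first
    while heap:
        lb, a, b = heapq.heappop(heap)
        if lb >= best - eps:                   # fathoming: node cannot improve
            continue
        m = 0.5 * (a + b)                      # bisection branching
        best = min(best, f(m))                 # update incumbent
        for al, bu in ((a, m), (m, b)):
            nlb = node_lower_bound(al, bu)
            if nlb < best - eps:               # keep only promising children
                heapq.heappush(heap, (nlb, al, bu))
    return best
```

As the subintervals shrink, the tangent relaxation tightens (cf. Theorem 4), all nodes are eventually fathomed, and the returned value approaches the analytic optimum $3-3\ln 3$ within the tolerance `eps`.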

Convergence of the Algorithm.
By Theorem 4, the global convergence of the algorithm is established in Theorem 5.
Theorem 5. The above algorithm either terminates finitely with the incumbent solution being optimal for (NPP) or generates an infinite sequence of iterations such that, along any infinite branch of the branch-and-bound tree, any accumulation point of the sequence $\{x^k\}$ is a global solution of (NPP).
Proof. If the proposed algorithm terminates finitely, then obviously $UB$ is the global optimal value and $x^{\mathrm{best}}$ is a global optimal solution of (NPP). If the algorithm is infinite, it generates at least one infinite sequence $\{X^k\}$ such that $X^{k+1}\subset X^k$ for all $k$. Then, from [39, 40], $\bigcap_k X^k=\{\bar{x}\}$ for some point $\bar{x}\in X^0$. For every iteration of the algorithm, the following results are true. Since $\{x^k\}$ is contained in the compact set $X^0$, there must be a convergent subsequence $\{x^{k_s}\}\subseteq\{x^k\}$; assume $\lim_{s\to\infty}x^{k_s}=\bar{x}$. Then, from the proposed algorithm, there exists a decreasing subsequence $\{LB_{k_s}\}\subset\{LB_k\}$, where $x^{k_s}\in X^{k_s}$, with $LB_{k_s}=LB(X^{k_s})=L_0(x^{k_s})$ and $\lim_{s\to\infty}x^{k_s}=\bar{x}$. According to Theorem 4, we have $\lim_{s\to\infty}LB_{k_s}=f_0(\bar{x})$. All that remains is to prove that $\bar{x}$ is feasible for (NPP)($X^0$). First, it is obvious that $\bar{x}\in X^0$, since $X^0$ is closed. Secondly, by the algorithm we obtain that, for every $k$, $x^k$ is a feasible solution of (LLP); that is, $L_m(x^k)\le\beta_m$. Taking limits over $k$ in this inequality yields $L_m(\bar{x})\le\beta_m$. The remainder of the proof is by contradiction. Assume that $f_m(\bar{x})>\beta_m$ for some $m\in\{1,2,\dots,M\}$. Because the function $f_m(\cdot)$ is continuous, and again from Theorem 4, the sequence $\{f_m(x^k)\}$ converges to $f_m(\bar{x})$; then, by the definition of convergence, there must be $K$ such that $|f_m(x^k)-f_m(\bar{x})|<f_m(\bar{x})-\beta_m$ for any $k>K$. Therefore, for any $k>K$, we have $f_m(x^k)>\beta_m$, which implies that (LLP)($X^k$) is infeasible, violating the assumption that $LB_k=LB(X^k)$. This is a contradiction, and thus the proof of the theorem is complete.

Numerical Experiments
To verify the performance of the proposed global optimization algorithm, some test problems were implemented. The test problems are coded in C++ and the experiments are conducted on a Pentium IV (3.06 GHz) microcomputer. The convergence tolerance is set to $\epsilon = 10^{-6}$. The results of Examples 1-5 are summarized in Table 1, where the following notation is used in the column headers: Iter.: number of algorithm iterations; Max: the maximal length of the enumeration tree.

Conclusion
In this paper, a global optimization algorithm is presented for solving a class of nonconvex programming problems (NPP). A transformation and a two-part parametric linearization technique are employed, and the initial (NPP) is reduced to a parametric relaxed lower-bounding linear programming problem based on linear lower bounds of the objective and nonlinear constraint functions. Thus, the initial (NPP) is solved through a sequence of linear programming problems obtained by successively refining a linear relaxation of the feasible region and of the objective function. The algorithm converges to the global minimum through this successive refinement and the solution of a series of linear programming problems. The proposed algorithm is applied to several test problems, and in all cases convergence to the global minimum is achieved.