A New Global Optimization Algorithm for Solving Generalized Geometric Programming

A global optimization algorithm for solving the generalized geometric programming (GGP) problem is developed based on a new linearization technique. Furthermore, to improve the convergence speed of this algorithm, a new pruning technique is proposed, which can cut away a large portion of the currently investigated region in which the global optimal solution does not exist. Convergence of the algorithm is proved, and numerical experiments are reported to show the feasibility of the proposed algorithm.


Introduction
This paper considers the generalized geometric programming (GGP) problem in the following form:

$$\text{(GGP):}\quad \min\ \varphi_0(x) \quad \text{s.t.}\quad \varphi_j(x)\le \beta_j,\ j=1,\ldots,m,\quad x\in X^0,$$

where $\varphi_j(x)=\sum_{t=1}^{T_j} c_{jt}\prod_{i=1}^{n} x_i^{\gamma_{jti}}$, with $c_{jt},\beta_j,\gamma_{jti}\in\mathbb{R}$, $t=1,\ldots,T_j$, $i=1,2,\ldots,n$, $j=0,1,\ldots,m$. Generally speaking, GGP problem is a nonconvex programming problem with a wide variety of applications, for example, in engineering design, economics and statistics, manufacturing, and distribution contexts in risk management problems [1–4].
During the past years, many local optimization approaches for solving GGP problem have been presented [5, 6], but global optimization algorithms based on the characteristics of GGP problem are scarce. Maranas and Floudas [7] proposed a global optimization algorithm for solving GGP problem based on convex relaxation. Shen and Zhang [8] presented a method to globally solve GGP problem by using linear relaxation. Recently, several branch-and-bound algorithms have been developed [9, 10].
The purpose of this paper is to introduce a new global optimization algorithm for solving GGP problem. In this algorithm, by utilizing the special structure of GGP problem, a new linear relaxation technique is presented. Based on this technique, the initial GGP problem is systematically converted into a series of linear programming problems. The solutions of these converted problems approximate the global optimal solution of GGP problem through a successive refinement process.
The main features of this algorithm are as follows: (1) a new linearization technique for solving GGP problem is proposed, which exploits more of the information carried by the functions of GGP problem; (2) the generated relaxation linear programming problems are embedded within a branch-and-bound algorithm without increasing the number of variables and constraints; (3) a new pruning technique is presented, which improves the convergence speed of the proposed algorithm; and (4) numerical experiments are reported, which show that the proposed algorithm solves all of the test problems, finding a global optimal solution within a prespecified tolerance.
The structure of this paper is as follows. In Section 2, first, we construct lower approximating linear functions for the objective and constraint functions of GGP problem; then, we derive the relaxation linear programming (RLP) problem of GGP problem; finally, to improve the convergence speed of our algorithm, we present a new pruning technique. In Section 3, the proposed branch-and-bound algorithm is described, and its convergence is established. Numerical results are reported in Section 4.

Linear Relaxation and Pruning Technique
The principal construct in developing a solution procedure for GGP problem is a lower bound for this problem, as well as for its partitioned subproblems. A lower bound of GGP problem and of its partitioned subproblems can be obtained by solving a linear relaxation problem. The proposed strategy for generating this linear relaxation problem is to underestimate every nonlinear function $\varphi_j(x)$ ($j=0,\ldots,m$) with a linear function. In what follows, all the details of this procedure will be given.
Let $X=[\underline{x},\overline{x}]$ represent either the initial box $X^0$ or the modified box defined for some partitioned subproblem in the branch-and-bound scheme.
Consider the term $\prod_{i=1}^{n} x_i^{\gamma_{jti}}$ in $\varphi_j(x)$ ($j=0,\ldots,m$), and let

$$\prod_{i=1}^{n} x_i^{\gamma_{jti}} = \exp\bigl(y_{jt}\bigr),\qquad \text{that is,}\quad y_{jt}=\sum_{i=1}^{n}\gamma_{jti}\ln x_i. \quad (2.1)$$

From (2.1), we can obtain the lower bound $\underline{y}_{jt}$ and upper bound $\overline{y}_{jt}$ of $y_{jt}$ as follows:

$$\underline{y}_{jt}=\sum_{\gamma_{jti}>0}\gamma_{jti}\ln \underline{x}_i+\sum_{\gamma_{jti}<0}\gamma_{jti}\ln \overline{x}_i,\qquad \overline{y}_{jt}=\sum_{\gamma_{jti}>0}\gamma_{jti}\ln \overline{x}_i+\sum_{\gamma_{jti}<0}\gamma_{jti}\ln \underline{x}_i. \quad (2.2)$$

To derive the linear relaxation problem, we use a convex separation technique and a two-part relaxation technique.
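Since $y_{jt}=\sum_i \gamma_{jti}\ln x_i$ is monotone in each $x_i$ (increasing where the exponent is positive, decreasing where it is negative), its bounds over the box can be computed coordinatewise. A small sketch, with illustrative names and assuming all $x_i>0$:

```python
import math

def y_bounds(gamma, xl, xu):
    # Bounds of y = sum_i gamma_i * ln(x_i) over the box [xl, xu] (all x_i > 0):
    # a positive exponent attains its minimum at xl_i, a negative one at xu_i.
    lo = sum(g * math.log(xl[i] if g > 0 else xu[i]) for i, g in enumerate(gamma))
    hi = sum(g * math.log(xu[i] if g > 0 else xl[i]) for i, g in enumerate(gamma))
    return lo, hi

# term x_1^2 * x_2^(-1) on the box [1, 4] x [1, 2]
lo, hi = y_bounds([2.0, -1.0], [1.0, 1.0], [4.0, 2.0])
```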

First-Part Relaxation
Let $y_j=(y_{j1},\ldots,y_{jT_j})$; then $\varphi_j(x)$ can be expressed in the following form:

$$\varphi_j(x)=\sum_{t=1}^{T_j} c_{jt}\exp\bigl(y_{jt}\bigr)\ \triangleq\ f_j(y_j). \quad (2.3)$$

For $f_j(y_j)$, we can derive its gradient and Hessian matrix:

$$\nabla f_j(y_j)=\bigl(c_{j1}\exp(y_{j1}),\ldots,c_{jT_j}\exp(y_{jT_j})\bigr)^{T},\qquad \nabla^2 f_j(y_j)=\operatorname{diag}\bigl(c_{j1}\exp(y_{j1}),\ldots,c_{jT_j}\exp(y_{jT_j})\bigr). \quad (2.4)$$

Therefore, by (2.4), the following relation holds:

$$\bigl\|\nabla^2 f_j(y_j)\bigr\|\ \le\ \max_{1\le t\le T_j}|c_{jt}|\max_{1\le t\le T_j}\exp\bigl(\overline{y}_{jt}\bigr). \quad (2.5)$$
Let $\lambda_j=\max_{1\le t\le T_j}|c_{jt}|\max_{1\le t\le T_j}\exp(\overline{y}_{jt})+0.1$; then, for all $y_j\in Y_j=[\underline{y}_j,\overline{y}_j]$, we have

$$\nabla^2 f_j(y_j)\ \prec\ \lambda_j I. \quad (2.6)$$

Thus, the function $\frac{1}{2}\lambda_j\|y_j\|^2+f_j(y_j)$ is a convex function on $Y_j$. Consequently, the function $f_j(y_j)$ can be decomposed into the difference of two convex functions; that is, $f_j(y_j)$ is a d.c. function, which admits the following d.c. decomposition:

$$f_j(y_j)=g_j(y_j)-h_j(y_j), \quad (2.7)$$

where

$$g_j(y_j)=\frac{1}{2}\lambda_j\|y_j\|^2+f_j(y_j),\qquad h_j(y_j)=\frac{1}{2}\lambda_j\|y_j\|^2. \quad (2.8)$$
Let $y_{j\mathrm{mid}}=\frac{1}{2}(\underline{y}_j+\overline{y}_j)$. Since $g_j(y_j)$ is a convex function, we have

$$g_j(y_j)\ \ge\ g_j\bigl(y_{j\mathrm{mid}}\bigr)+\nabla g_j\bigl(y_{j\mathrm{mid}}\bigr)^{T}\bigl(y_j-y_{j\mathrm{mid}}\bigr)\ \triangleq\ g_j^{l}(y_j). \quad (2.9)$$
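The inequality above is the supporting-hyperplane property of the convex function $g_j$ at the midpoint. A one-dimensional numerical check, with $T_j=1$ and illustrative values $c=-1$, $\lambda=3$ chosen so that $g''(y)=\lambda+c\,e^{y}>0$ on the interval:

```python
import math

lam, c = 3.0, -1.0          # illustrative; lam > |c| * e^{y_upper} keeps g convex here
yl, yu = -1.0, 1.0
ymid = 0.5 * (yl + yu)

g  = lambda y: 0.5 * lam * y * y + c * math.exp(y)   # g(y) = (lam/2) y^2 + c e^y
dg = lambda y: lam * y + c * math.exp(y)             # g'(y)

g_low = lambda y: g(ymid) + dg(ymid) * (y - ymid)    # tangent at the midpoint

# convexity of g on [yl, yu] makes the midpoint tangent an underestimator there
ok = all(g(y) >= g_low(y) - 1e-12 for y in [yl, -0.5, 0.0, 0.5, yu])
```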

Mathematical Problems in Engineering
In addition, for $y_{jt}\in[\underline{y}_{jt},\overline{y}_{jt}]$, it is not difficult to show that

$$\bigl(\underline{y}_{jt}+\overline{y}_{jt}\bigr)y_{jt}-\underline{y}_{jt}\,\overline{y}_{jt}\ \ge\ y_{jt}^{2}. \quad (2.10)$$
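The inequality above says that the chord of $y^2$ through the interval endpoints dominates $y^2$ on the interval, since the difference of the two sides is $-(y-\underline{y})(y-\overline{y})\ge 0$ there. A quick numerical check on an illustrative interval:

```python
yl, yu = -2.0, 3.0
chord = lambda y: (yl + yu) * y - yl * yu   # chord of y^2 through the endpoints

# (yl + yu) y - yl*yu - y^2 = -(y - yl)(y - yu) >= 0 on [yl, yu]
ok = all(chord(y) >= y * y - 1e-12 for y in [yl, -1.0, 0.0, 1.5, yu])
```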
Furthermore, multiplying (2.10) by $\frac{1}{2}\lambda_j$ and summing over $t$, we can obtain

$$h_j^{l}(y_j)\ \triangleq\ \frac{1}{2}\lambda_j\sum_{t=1}^{T_j}\Bigl[\bigl(\underline{y}_{jt}+\overline{y}_{jt}\bigr)y_{jt}-\underline{y}_{jt}\,\overline{y}_{jt}\Bigr]\ \ge\ \frac{1}{2}\lambda_j\|y_j\|^2\ =\ h_j(y_j). \quad (2.12)$$

Thus, from (2.7), (2.9), and (2.12), we have

$$f_j^{l}(y_j)\ \triangleq\ g_j^{l}(y_j)-h_j^{l}(y_j)\ \le\ f_j(y_j). \quad (2.13)$$

Hence, by (2.1), (2.3), and (2.13), the first-part relaxation $L_j(x)$ of $\varphi_j(x)$ can be obtained as follows:

$$\varphi_j(x)=f_j(y_j)=g_j(y_j)-h_j(y_j)\ \ge\ g_j^{l}(y_j)-h_j^{l}(y_j)\ =\ g_j\bigl(y_{j\mathrm{mid}}\bigr)+\nabla g_j\bigl(y_{j\mathrm{mid}}\bigr)^{T}\bigl(y_j-y_{j\mathrm{mid}}\bigr)-h_j^{l}(y_j)\ \triangleq\ L_j(x). \quad (2.14)$$

Second-Part Relaxation
Consider the function $\ln x_i$ on the interval $[\underline{x}_i,\overline{x}_i]$. As is well known, its linear lower bound and upper bound functions can be derived as follows: letting $K_i=\bigl(\ln\overline{x}_i-\ln\underline{x}_i\bigr)/\bigl(\overline{x}_i-\underline{x}_i\bigr)$,

$$\ln\underline{x}_i+K_i\bigl(x_i-\underline{x}_i\bigr)\ \le\ \ln x_i\ \le\ K_i x_i-1-\ln K_i. \quad (2.15)$$

Then, from (2.14) and (2.15), we can obtain the linear lower bound function of $\varphi_j(x)$, denoted by $\varphi_j^{l}(x)$, by substituting, for each $\ln x_i$ appearing in $L_j(x)$, the lower or upper bounding function in (2.15) according to the sign of the corresponding coefficient (see (2.16)–(2.18)).
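The two bounds use the chord slope $K=(\ln\overline{x}-\ln\underline{x})/(\overline{x}-\underline{x})$: concavity of $\ln$ puts the chord below the curve and the parallel tangent (at $x=1/K$) above it. A numerical check on an illustrative interval, assuming these standard bounds are the ones intended in (2.15):

```python
import math

xl, xu = 0.5, 4.0
K = (math.log(xu) - math.log(xl)) / (xu - xl)   # slope of the chord

low = lambda x: math.log(xl) + K * (x - xl)     # chord: lower bound (ln is concave)
up  = lambda x: K * x - 1.0 - math.log(K)       # tangent at x = 1/K: upper bound

ok = all(low(x) - 1e-12 <= math.log(x) <= up(x) + 1e-12
         for x in [xl, 1.0, 2.0, 3.0, xu])
```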
Obviously, $\varphi_j^{l}(x)\le L_j(x)\le \varphi_j(x)$. Consequently, the corresponding approximating relaxation linear programming (RLP) problem of GGP problem on $X$ can be obtained:

$$\text{(RLP):}\quad \min\ \varphi_0^{l}(x)\quad \text{s.t.}\quad \varphi_j^{l}(x)\le \beta_j,\ j=1,\ldots,m,\quad x\in X.$$
Theorem 2.1. For all $x\in X$, let $\delta_i=\overline{x}_i-\underline{x}_i$ ($i=1,\ldots,n$); then $\varphi_j(x)-\varphi_j^{l}(x)\to 0$ as $\delta_i\to 0$ ($i=1,\ldots,n$).

Proof. For all $x\in X$, let $\Delta_1=\varphi_j(x)-L_j(x)$ and $\Delta_2=L_j(x)-\varphi_j^{l}(x)$. Then, it is obvious that we only need to prove $\Delta_1\to 0$ and $\Delta_2\to 0$ as $\delta_i\to 0$ ($i=1,\ldots,n$). To this end, first, we consider the difference $\Delta_1=\varphi_j(x)-L_j(x)$. By (2.7), (2.13), and (2.14), it follows that

$$\Delta_1=\bigl(g_j(y_j)-g_j^{l}(y_j)\bigr)+\bigl(h_j^{l}(y_j)-h_j(y_j)\bigr)=\bigl(\nabla g_j(\xi)-\nabla g_j(y_{j\mathrm{mid}})\bigr)^{T}\bigl(y_j-y_{j\mathrm{mid}}\bigr)+h_j^{l}(y_j)-h_j(y_j),$$

where $\xi$ is a vector satisfying $g_j(y_j)-g_j(y_{j\mathrm{mid}})=\nabla g_j(\xi)^{T}(y_j-y_{j\mathrm{mid}})$ by the mean value theorem. By the definition of $y_{jt}$, we have $\overline{y}_j-\underline{y}_j\to 0$ as $\delta_i\to 0$ ($i=1,\ldots,n$); thus, $\Delta_1\to 0$. Second, we consider the difference $\Delta_2=L_j(x)-\varphi_j^{l}(x)$. From the definitions of $L_j(x)$ and $\varphi_j^{l}(x)$, it follows that $\Delta_2\to 0$ as $\delta_i\to 0$ ($i=1,\ldots,n$), which completes the proof.
From Theorem 2.1, it follows that $\varphi_j^{l}(x)$ approximates the function $\varphi_j(x)$ as $\delta_i\to 0$ ($i=1,\ldots,n$).
Based on the above discussion, the optimal value of RLP problem is smaller than or equal to the value of GGP problem at every feasible point; that is, the optimal value of RLP problem provides a valid lower bound for the optimal value of GGP problem. Thus, denoting the optimal value of any problem (P) by $V(\text{P})$, we have $V(\text{RLP})\le V(\text{GGP})$.

Pruning Technique
In order to improve the convergence speed of this algorithm, we present a new pruning technique, which can be used to eliminate regions in which the global optimal solution of GGP problem does not exist.
Assume that UB is the current known upper bound of the optimal value $\varphi_0^{*}$ of GGP problem. Let

2.25
Theorem 2.2. For any subrectangle $X=(X_i)_{n\times 1}\subseteq X^0$ with $X_i=[\underline{x}_i,\overline{x}_i]$, let

2.26
If there exists some index $k\in\{1,\ldots,n\}$ such that $\tau_k>0$ and $\rho_k<\tau_k\overline{x}_k$, then there is no globally optimal solution of GGP problem on $X^1$; if $\tau_k<0$ and $\rho_k<\tau_k\underline{x}_k$, then there is no globally optimal solution of GGP problem on $X^2$, where

Proof. First, we show that for all $x\in X^1$, $\varphi_0^{l}(x)>\text{UB}$. When $x\in X^1$, consider the $k$th term of $\varphi_0^{l}(x)$:

2.28
Noting that $\tau_k>0$ and $\rho_k<\tau_k\overline{x}_k$, from the definition of $\rho_k$ and the above inequality, we obtain

2.29
This implies that, for all $x\in X^1$, $\varphi_0(x)\ge \varphi_0^{l}(x)>\text{UB}\ge \varphi_0^{*}$. In other words, for all $x\in X^1$, $\varphi_0(x)$ is always greater than the optimal value of GGP problem. Therefore, there is no globally optimal solution of GGP problem on $X^1$.
Similarly, if $\tau_k<0$ and $\rho_k<\tau_k\underline{x}_k$ for some $k$, then by arguments similar to the above we can derive that there is no globally optimal solution of GGP problem on $X^2$.
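A sketch of how this kind of pruning can be applied in practice: given an affine lower bound $l(x)=c_0+\sum_i \tau_i x_i$ of $\varphi_0$ on the box and the incumbent UB, each coordinate interval is shrunk to exclude the part of the box on which $l(x)>\text{UB}$. The function name and bookkeeping here are illustrative, not the paper's code; only the idea of Theorem 2.2 is reproduced.

```python
def prune_box(tau, c0, xl, xu, UB):
    # l(x) = c0 + sum_i tau_i * x_i underestimates the objective on [xl, xu].
    # Shrink coordinate k wherever l(x) > UB is forced.
    xl, xu = list(xl), list(xu)
    # minimum of l over the box
    base = c0 + sum(t * (xl[i] if t > 0 else xu[i]) for i, t in enumerate(tau))
    for k, t in enumerate(tau):
        if t == 0.0:
            continue
        # minimum of l over the box with the k-th term removed
        m_rest = base - t * (xl[k] if t > 0 else xu[k])
        cut = (UB - m_rest) / t
        if t > 0 and cut < xu[k]:
            xu[k] = max(cut, xl[k])   # l(x) > UB whenever x_k > cut
        elif t < 0 and cut > xl[k]:
            xl[k] = min(cut, xu[k])   # l(x) > UB whenever x_k < cut
    return xl, xu

new_l, new_u = prune_box([1.0, 2.0], 0.0, [0.0, 0.0], [10.0, 10.0], UB=5.0)
```

Here the bound $l(x)=x_1+2x_2$ with $\text{UB}=5$ shrinks the box $[0,10]^2$ to $[0,5]\times[0,2.5]$, since any larger $x_1$ or $x_2$ forces $l(x)>\text{UB}$.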

Algorithm and Its Convergence
In this section, based on the relaxation linear programming (RLP) problem derived above, a branch-and-bound algorithm is presented to solve GGP problem globally. To ensure convergence to the global optimal solution, this algorithm solves a sequence of RLP problems over partitioned subsets of $X^0$.
In this algorithm, the set $X^0$ is partitioned into subrectangles. Each subrectangle corresponds to a node of the branch-and-bound tree and is associated with a relaxation linear subproblem.
At stage $k$ of the algorithm, suppose that we have a collection of active nodes denoted by $Q_k$. For each node $X\in Q_k$, we will have computed a lower bound $\text{LB}(X)$ for the optimal value of GGP problem via the solution of RLP problem, so that the lower bound of the optimal value of GGP problem over the whole initial box $X^0$ at stage $k$ is given by $\text{LB}_k=\min\{\text{LB}(X): X\in Q_k\}$. Whenever the solution of RLP problem for a node subproblem turns out to be feasible for GGP problem, we update the upper bound UB if necessary. Then, at each stage $k$, the active node collection $Q_k$ satisfies $\text{LB}(X)<\text{UB}$ for all $X\in Q_k$. We then select an active node and partition its associated rectangle into two subrectangles according to the following branching rule. For these two subrectangles, the fathoming step is applied to identify whether they should be eliminated. In the end, we obtain a collection of active nodes for the next stage. This process is repeated until convergence is obtained.

Branching Rule
As is well known, the critical element in guaranteeing convergence to the global optimal solution is the choice of a suitable partitioning strategy. In this paper, we choose a simple and standard bisection rule. This rule is sufficient to ensure convergence since it drives all the intervals to singletons for all variables. Consider any node subproblem identified by the rectangle $X=\{x:\underline{x}_i\le x_i\le \overline{x}_i,\ i=1,\ldots,n\}\subseteq X^0$. The branching rule is as follows:

(a) let $k\in\arg\max\{\overline{x}_i-\underline{x}_i : i=1,\ldots,n\}$ and $x_k^{\mathrm{mid}}=\frac{1}{2}(\underline{x}_k+\overline{x}_k)$; (3.2)

(b) partition $X$ into $X^1=\{x\in X: x_k\le x_k^{\mathrm{mid}}\}$ and $X^2=\{x\in X: x_k\ge x_k^{\mathrm{mid}}\}$.

Through this branching rule, the rectangle $X$ is partitioned into two subrectangles $X^1$ and $X^2$.
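A minimal sketch of such a bisection, assuming the longest edge is the one selected (the usual choice for an exhaustive bisection rule):

```python
def bisect(xl, xu):
    # Split the box [xl, xu] along its longest edge at the midpoint,
    # yielding the two subrectangles X1 and X2.
    k = max(range(len(xl)), key=lambda i: xu[i] - xl[i])
    mid = 0.5 * (xl[k] + xu[k])
    xu1 = list(xu); xu1[k] = mid    # X1: x_k <= mid
    xl2 = list(xl); xl2[k] = mid    # X2: x_k >= mid
    return (list(xl), xu1), (xl2, list(xu))

X1, X2 = bisect([0.0, 0.0], [4.0, 2.0])
```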

Algorithm Statement
Based on the preceding results, the basic steps of the proposed global optimization algorithm are summarized as follows. Let $\text{LB}(X)$ denote the optimal value of RLP problem on the rectangle $X$.
Step 1. Choose $\epsilon>0$. Find an optimal solution $x^0$ and the optimal value $\text{LB}(X^0)$ of RLP problem with $X=X^0$. Set $\text{LB}_0=\text{LB}(X^0)$. If $x^0$ is feasible for GGP problem, then update the upper bound $\text{UB}_0=\varphi_0(x^0)$. If $\text{UB}_0-\text{LB}_0\le\epsilon$, then stop: $x^0$ is a global $\epsilon$-optimal solution of GGP problem. Otherwise, set $k=1$, $Q_0=\{X^0\}$, and $F=\emptyset$.

Step 2. Set $\text{UB}_k=\text{UB}_{k-1}$. Select an active node $X^k\in Q_{k-1}$ satisfying $\text{LB}(X^k)=\text{LB}_{k-1}$, and partition $X^k$ into two subrectangles $X^{k,1}$, $X^{k,2}$ according to the branching rule.

Step 3. For each new subrectangle $X^{k,t}$, $t=1,2$, utilize the pruning technique of Theorem 2.2 to prune the box $X^{k,t}$. Update the corresponding parameters $K_i$, $\underline{y}_j$, $\overline{y}_j$ ($i=1,\ldots,n$, $j=0,\ldots,m$). Compute $\text{LB}(X^{k,t})$ and find an optimal solution $x^{k,t}$ of RLP problem with $X=X^{k,t}$, $t=1,2$. If possible, update the upper bound $\text{UB}_k=\min\{\text{UB}_k,\varphi_0(x^{k,t})\}$, and let $x^k$ denote the point satisfying $\text{UB}_k=\varphi_0(x^{k,t})$.

Step 4. If $\text{UB}_k\le \text{LB}(X^{k,t})$ for some $t\in\{1,2\}$, then set $F=F\cup\{X^{k,t}\}$. Set $Q_k=\bigl(Q_{k-1}\cup\{X^{k,1},X^{k,2}\}\bigr)\setminus\bigl(F\cup\{X^k\}\bigr)$ and $\text{LB}_k=\min\{\text{LB}(X): X\in Q_k\}$.

Step 5. If $\text{UB}_k-\text{LB}_k\le\epsilon$, then stop: $x^k$ is a global $\epsilon$-optimal solution of GGP problem. Otherwise, set $k=k+1$ and go to Step 2.
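The branch-and-bound loop described above can be sketched as follows. Here `solve_rlp` stands in for solving the RLP relaxation on a box: it must return a lower bound, the relaxation's minimizer, and the true objective value there when that point is feasible (else `None`). Everything in this sketch is a simplified illustration of the scheme, not the paper's implementation; in particular, the pruning of Step 3 is omitted.

```python
import heapq

def branch_and_bound(box, solve_rlp, eps=1e-3):
    # box = (xl, xu); solve_rlp(box) -> (lb, x_hat, f_val_or_None)
    def split(xl, xu):                        # bisection along the longest edge
        k = max(range(len(xl)), key=lambda i: xu[i] - xl[i])
        m = 0.5 * (xl[k] + xu[k])
        u1 = list(xu); u1[k] = m
        l2 = list(xl); l2[k] = m
        return (list(xl), u1), (l2, list(xu))

    UB, best = float("inf"), None
    lb, xh, fv = solve_rlp(box)
    if fv is not None and fv < UB:
        UB, best = fv, xh
    heap = [(lb, box)]
    while heap:
        LB, node = heapq.heappop(heap)        # node with the smallest lower bound
        if UB - LB <= eps:
            break                             # global eps-optimality certificate
        for child in split(*node):
            lb, xh, fv = solve_rlp(child)
            if fv is not None and fv < UB:
                UB, best = fv, xh             # update the incumbent
            if lb < UB - eps:                 # fathom: keep only promising boxes
                heapq.heappush(heap, (lb, child))
    return UB, best

# toy check: minimize (x - 0.3)^2 on [-1, 3] via its exact interval bound
f = lambda x: (x[0] - 0.3) ** 2
def toy_rlp(box):
    xl, xu = box
    p = min(max(0.3, xl[0]), xu[0])           # minimizer projected onto the box
    mid = [0.5 * (xl[0] + xu[0])]
    return (p - 0.3) ** 2, mid, f(mid)

UB, best = branch_and_bound(([-1.0], [3.0]), toy_rlp, eps=1e-6)
```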

Convergence of the Algorithm
The convergence properties of the algorithm are given in the following theorem. (b) If the algorithm is infinite, then, along any infinite branch of the branch-and-bound tree, an infinite sequence of iterations is generated, and any accumulation point of the sequence $\{x^k\}$ is a global solution of GGP problem.
Proof. (a) If the algorithm is finite, then it terminates at some stage $k\ge 0$. Upon termination, by the algorithm, it follows that $\text{UB}_k-\text{LB}_k\le\epsilon$. From Steps 1 and 3, this implies $\varphi_0(x^k)-\text{LB}_k\le\epsilon$. Let $\varphi_0^{*}$ denote the optimal value of GGP problem; then, by Section 2, we know that $\text{LB}_k\le\varphi_0^{*}$. Since $x^k$ is a feasible solution of GGP problem, $\varphi_0(x^k)\ge\varphi_0^{*}$. Taken together, this implies that $\varphi_0^{*}\le\varphi_0(x^k)\le\text{LB}_k+\epsilon\le\varphi_0^{*}+\epsilon$; that is, $x^k$ is a global $\epsilon$-optimal solution, and the proof of part (a) is complete.
(b) Let $D$ denote the feasible region of GGP problem. When the algorithm is infinite, from its construction we know that $\{\text{LB}_k\}$ is a nondecreasing sequence bounded above by $\min_{x\in D}\varphi_0(x)$, which guarantees the existence of the limit $\text{LB}\triangleq\lim_{k\to\infty}\text{LB}_k\le\min_{x\in D}\varphi_0(x)$. Since $\{x^k\}$ is contained in the compact set $X^0$, there exists a convergent subsequence $\{x^s\}\subseteq\{x^k\}$; suppose $\lim_{s\to\infty}x^s=\overline{x}$. Then, from the proposed algorithm, there exists a decreasing subsequence of rectangles $\{X^r\}\subseteq\{X^s\}$ with $x^r\in X^r$, $\text{LB}_r=\text{LB}(X^r)=\varphi_0^{l}(x^r)$, and $\lim_{r\to\infty}X^r=\{\overline{x}\}$. According to Theorem 2.1, we have $\lim_{r\to\infty}\text{LB}_r=\lim_{r\to\infty}\varphi_0^{l}(x^r)=\varphi_0(\overline{x})$. The only remaining task is to prove that $\overline{x}\in D$. Since $X^0$ is a closed set, it follows that $\overline{x}\in X^0$. Assume $\overline{x}\notin D$; then there exists some $\varphi_j$ ($j\in\{1,\ldots,m\}$) such that $\varphi_j(\overline{x})>\beta_j$. Since $\varphi_j^{l}(x)$ is continuous, by Theorem 2.1, the sequence $\{\varphi_j^{l}(x^r)\}$ converges to $\varphi_j(\overline{x})$. By the definition of convergence, there exists $\overline{r}$ such that, for any $r>\overline{r}$, $|\varphi_j^{l}(x^r)-\varphi_j(\overline{x})|<\varphi_j(\overline{x})-\beta_j$. Therefore, for any $r>\overline{r}$, we have $\varphi_j^{l}(x^r)>\beta_j$, which implies that $x^r$ is infeasible for RLP on $X^r$. This contradicts the feasibility of $x^r$ for RLP on $X^r$. Therefore, $\overline{x}\in D$; that is, $\overline{x}$ is a global solution of GGP problem, and the proof of part (b) is complete.

Numerical Experiment
In this section, we report numerical results to verify the performance of the proposed algorithm. The test problems were run on a Pentium IV 1.66 GHz microcomputer. The algorithm is coded in Matlab 7.1 and uses the simplex method to solve the relaxation linear programming problems. In our experiments, the convergence tolerance is set to $\epsilon=1.0\times 10^{-3}$.
Example 4.1 (see [8]). We have the following:

(4.1)

By using the method in this paper, the optimal solution is $(150, 30, 1.3189)$ with optimal value $-147.6667$, and the number of algorithm iterations is 557. Using the method in [8], the optimal solution is $(88.72470, 7.67265, 1.31786)$ with optimal value $-83.249728406$, and the number of iterations is 1829.

Example 4.2 (see [10]). We have the following:

(4.2)

where $x\in X^0=\{x \mid 0.5\le x_1\le 15,\ 0.5\le x_2\le 15\}$. By using the method in this paper, the optimal solution $(0.5, 0.5)$ with optimal value $0.5$ is found after 16 iterations; using the method in [10], the same optimal solution and value are found after 96 iterations.

4.4
Utilizing the method in this paper, we find the optimal value $1.1935$ after 15 iterations at the optimal solution $x^{*}=(1.0, 30.2734, 1.0, 17.4640, 1.0)$. In future work, we will carry out more numerical experiments to test the performance of the proposed algorithm.

Theorem 3.1. (a) If the algorithm is finite, then, upon termination, $x^k$ is a global $\epsilon$-optimal solution of GGP problem.

Example 4.3. We have the following:

(4.3)