Generating Efficient Outcome Points for Convex Multiobjective Programming Problems and Its Application to Convex Multiplicative Programming

Abstract. Convex multiobjective programming problems and multiplicative programming problems have important applications in areas such as finance, economics, bond portfolio optimization, and engineering. This paper presents a quite simple algorithm for generating a number of efficient outcome solutions for convex multiobjective programming problems. As an application, we propose an outer approximation algorithm in the outcome space for solving convex multiplicative programs. Computational results are provided on several test problems.


Introduction
The convex multiobjective programming problem involves simultaneously minimizing p ≥ 2 noncomparable convex objective functions f_j : R^n → R, j = 1, ..., p, over a nonempty convex feasible region X in R^n, and may be written as

min f(x)  s.t. x ∈ X,  (VP_X)

where f(x) = (f_1(x), f_2(x), ..., f_p(x))^T. When X ⊂ R^n is a polyhedral convex set and f_j, j = 1, ..., p, are linear functions, problem (VP_X) is said to be a linear multiobjective programming problem (LP_X).
For a nonempty set Q ⊂ R^p, we denote by Q_E and Q_WE the sets of all efficient points and weakly efficient points of Q, respectively; that is,

Q_E = {q ∈ Q | there is no q' ∈ Q with q' ≤ q and q' ≠ q},
Q_WE = {q ∈ Q | there is no q' ∈ Q with q' ≪ q}.  (1.1)

The set Y = f(X) = {f(x) | x ∈ X} is called the outcome set, or image, of X under f. A point x^0 ∈ X is said to be an efficient solution for problem (VP_X) when f(x^0) ∈ Y_E. For simplicity of notation, let X_E denote the set of all efficient solutions for problem (VP_X). When f(x^0) ∈ Y_WE, the point x^0 is called a weakly efficient solution for problem (VP_X), and the set of all weakly efficient solutions is denoted by X_WE; clearly, X_E and X_WE are the preimages of Y_E and Y_WE, respectively. The goal of problem (VP_X) is to generate the sets X_E and X_WE, or at least subsets of them. However, it has been shown that, in practice, the decision maker prefers basing his or her choice of the most preferred solution primarily on Y_E and Y_WE rather than on X_E and X_WE. Arguments to this effect are given in [1].
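The efficiency definitions above can be checked directly when Q is a finite set of outcome vectors. The following small sketch (illustrative only: the paper's Y is the continuous image f(X), not a finite list, and the sample set Q below is invented for demonstration) implements the two definitions by brute force.

```python
# Brute-force check of the efficiency definitions on a finite set Q ⊂ R^p
# (minimization sense: smaller is better in every component).

def dominates(a, b):
    """a <= b componentwise and a != b."""
    return all(x <= y for x, y in zip(a, b)) and a != b

def strictly_dominates(a, b):
    """a < b in every component."""
    return all(x < y for x, y in zip(a, b))

def efficient_points(Q):
    """Q_E: points of Q not dominated by any other point of Q."""
    return [q for q in Q if not any(dominates(r, q) for r in Q)]

def weakly_efficient_points(Q):
    """Q_WE: points of Q not strictly dominated by any other point of Q."""
    return [q for q in Q if not any(strictly_dominates(r, q) for r in Q)]

Q = [(1, 4), (2, 2), (4, 1), (3, 3), (2, 4), (1, 5)]
QE = efficient_points(Q)      # [(1, 4), (2, 2), (4, 1)]
QWE = weakly_efficient_points(Q)
```

Note how (1, 5) is weakly efficient but not efficient: it is dominated by (1, 4) but not strictly dominated by any point, which illustrates the inclusion Q_E ⊆ Q_WE from (1.1).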
It is well known that the task of generating X_E, X_WE, Y_E, Y_WE, or significant portions of these sets, for problem (VP_X) is difficult. This is because they are, in general, nonconvex sets, even in the case of the linear multiobjective programming problem (LP_X).
Problem (VP_X) arises in a wide variety of applications in engineering, economics, network planning, production planning, and operations research, especially in multicriteria design and multicriteria decision making (see, for instance, [2, 3]). Many of the approaches for analyzing convex multiobjective programming problems involve generating either the sets X_E, X_WE, Y_E, and Y_WE or a subset thereof, without any input from the decision maker (see, e.g., [1, 2, 4-13] and references therein). For a survey of recent developments, see [6]. This paper has two purposes: (i) the first is to propose an algorithm for generating a number of efficient outcome points for the convex multiobjective programming problem (VP_X), depending on the requirements of the decision makers (Algorithm 1 in Section 2); computational experiments show that this algorithm is quite efficient; (ii) as an application, we present an outer approximation algorithm for solving the convex multiplicative programming problem (CP_X) associated with problem (VP_X) in the outcome space R^p (Algorithm 2 in Section 3), where problem (CP_X) can be formulated as

min ∏_{j=1}^p f_j(x)  s.t. x ∈ X.  (CP_X)

It is well known that problem (CP_X) is a global optimization problem and is NP-hard, even in special cases such as when p = 2, X is a polyhedron, and f_j is linear for each j = 1, 2 (see [14]). Because of the wide range of its applications, this problem attracts a lot of attention from both researchers and practitioners. Many algorithms have been proposed for globally solving problem (CP_X); see, e.g., [10, 14-20] and references therein.
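The global nature of (CP_X) stems from the fact that a product of convex, positive functions need not be convex. A minimal numerical illustration (the functions f1, f2 below are an invented example, not from the paper): two positive linear functions whose product violates the midpoint convexity inequality.

```python
# Product of convex (even linear, positive) functions need not be convex:
# f1(x) = x + 1 and f2(x) = 3 - x are linear and positive on [0, 2], but
# h(x) = f1(x) * f2(x) fails the midpoint inequality h(mid) <= (h(a)+h(b))/2.

def h(x):
    return (x + 1.0) * (3.0 - x)

a, b = 0.0, 2.0
lhs = h(0.5 * (a + b))          # h(1) = 2 * 2 = 4
rhs = 0.5 * (h(a) + h(b))       # (3 + 3) / 2 = 3
print(lhs > rhs)  # True: the midpoint value lies above the chord, so h is not convex
```

This is why (CP_X) cannot be handled by a single convex program and motivates the global, outcome-space approach of Section 3.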
The paper is organized as follows. In Section 2, we present Algorithm 1 for generating efficient outcome points for the convex multiobjective programming problem (VP_X), together with its theoretical basis. To illustrate the performance of Algorithm 1, we use it to generate efficient points for a sample problem. Algorithm 2, for solving the convex multiplicative programming problem (CP_X) associated with problem (VP_X), and numerical examples are described in Section 3.

Theoretical Basis
Assume henceforth that X ⊂ R^n is a nonempty, compact convex set given by

X = {x ∈ R^n | g_i(x) ≤ 0, i = 1, ..., m},  (2.1)

where g_1, g_2, ..., g_m are convex functions on R^n. For each k = 1, ..., p, consider the convex programming problem

min f_k(x)  s.t. x ∈ X.  (P^0_k)

We denote the optimal value of problem (P^0_k) by y^opt_k; let x^k denote an optimal solution of (P^0_k), and set y^k = f(x^k) ∈ R^p. Then the point

y^m = (y^opt_1, y^opt_2, ..., y^opt_p)^T  (2.3)

satisfies y^m_k = y^k_k for each k. As usual, the point y^m is said to be the ideal point of problem (VP_X); if y^m belongs to the set G^0 = Y + R^p_+, then it is itself an efficient outcome point and the problem is trivial. Therefore, we suppose that y^m ∉ G^0. Obviously, by definition, if (x*, y*) ∈ R^{n+p} is an optimal solution for the problem (P_k) given by

min y_k  s.t. f_j(x) − y_j ≤ 0, j = 1, ..., p,  g_i(x) ≤ 0, i = 1, ..., m,  (P_k)

then x* is an optimal solution for the problem (P^0_k), and the optimal values of these two problems are equal, with y^opt_k = y*_k.

Choose a scalar α such that α > max{y^k_j | j, k = 1, ..., p}, and set

y^M = (α, α, ..., α)^T.  (2.4)

We consider the set G given by G = G^0 ∩ [y^m, y^M], where [y^m, y^M] = {y ∈ R^p | y^m ≤ y ≤ y^M}. It is obvious that G is a nonempty, full-dimensional compact convex set in R^p. The set G is instrumental in Algorithm 1, to be presented in Section 2.2, for generating efficient outcome points for problem (VP_X).
Remark 2.1. In 1998, Benson [1] presented an outer approximation algorithm for generating all efficient extreme points in the outcome set of a multiobjective linear programming problem. Here, the set G seems analogous to the set considered by Benson [1]. However, note that Benson's set contains Y, whereas Y is not necessarily a subset of G. Figure 1 illustrates the set G in the case p = 2.
Proof. This result is easy to show by using [21, page 22, Theorem 3.2] and the definition of the point y^M; the proof is therefore omitted.
It is clear that G ⊂ B^0. The following fact plays an important role in establishing the validity of our algorithm.

Proposition 2.3. Let y ∈ B^0 \ G, and let y^w be the unique point on the boundary of G that belongs to the line segment connecting y and y^M. Then y^w ∈ G_E.

Proof. Let D = G − y^w. Since G is a compact convex set and y^w belongs to the boundary of G, the set D is also a compact convex set containing the origin 0 of the outcome space R^p, and 0 belongs to the boundary of D. According to the separation theorem [22], there is a nonzero vector q ∈ R^p such that

⟨q, u⟩ ≥ 0  for all u ∈ D.  (2.7)

Let S ⊂ R^p be the p-simplex with vertices 0, e^1, e^2, ..., e^p, where e^1, e^2, ..., e^p are the unit vectors of R^p. By the definition of y^M, we can choose y^M such that y^w + S ⊂ G. This implies that S ⊂ D. From (2.7), taking u to be e^1, e^2, ..., e^p in turn, we see that

q ≥ 0.  (2.8)

Furthermore, (2.7) can be written as ⟨q, y − y^w⟩ ≥ 0 for all y ∈ G, (2.9) that is,

⟨q, y⟩ ≥ ⟨q, y^w⟩  for all y ∈ G.  (2.10)
According to [23, Chapter 4, Theorem 2.10], a point y* ∈ G is a weakly efficient point of G if and only if there is a nonzero vector v ∈ R^p with v ≥ 0 such that y* is an optimal solution to the convex programming problem

min { ⟨v, y⟩ | y ∈ G }.  (2.11)
Combining this fact with (2.8) and (2.10) gives y^w ∈ G_WE. To complete the proof, it remains to show that y^w ∈ G_E. Assume the contrary, that y^w ∈ G_WE \ G_E. By definition, we have G_WE \ G_E ⊂ F_1 ∪ ··· ∪ F_p, where for each k = 1, ..., p, F_k is the optimal solution set for the following problem:

min y_k  s.t. y ∈ G.  (P^G_k)

It is easy to see that the optimal values of the two problems (P^G_k) and (P^0_k) are the same. From this fact and the definition of the point y^m, it follows that min{y_k | y ∈ G} = y^m_k for each k. Therefore, if y^w ∈ G_WE \ G_E, then there is i_0 ∈ {1, 2, ..., p} such that y^w_{i_0} = y^m_{i_0}. Since y ∈ B^0 \ G, we always have y ≥ y^m, and y^w = λ y^M + (1 − λ) y with 0 < λ < 1. By the choice of the point y^M, we have y^M ≫ y^m. Hence,

y^w_{i_0} = λ y^M_{i_0} + (1 − λ) y_{i_0} > y^m_{i_0}.

This contradiction proves that y^w must belong to the efficient outcome set G_E.
Remark 2.5. Let y ∈ B^0 \ G. To determine the efficient outcome point y^w generated by y, which lies on the boundary of G, we have to find the unique value λ* of λ, 0 < λ < 1, such that y + λ(y^M − y) belongs to the boundary of G (see Figure 2). This means that λ* is the optimal value for the problem

min { λ | y + λ(y^M − y) ∈ G, 0 ≤ λ ≤ 1 }.  (2.17)

By definition, it is easy to see that λ* is also the optimal value for the following convex programming problem with linear objective function:

min λ
s.t.  f_j(x) − y_j − λ(y^M_j − y_j) ≤ 0,  j = 1, ..., p,
      g_i(x) ≤ 0,  i = 1, ..., m,
      0 ≤ λ ≤ 1.        (T_y)
Note that λ* exists because the feasible region of problem (T_y) is a nonempty, compact convex set. Furthermore, by definition, it is easy to show that if (x*, λ*) ∈ R^{n+1} is an optimal solution for problem (T_y), then x* is an efficient solution for the convex multiobjective programming problem (VP_X), that is, x* ∈ X_E. For convenience, x* is said to be an efficient solution associated with y^w, and y^w is said to be an efficient outcome point generated by y. It is easily seen that, by varying the choice of points y ∈ B^0 \ G, the decision maker can generate multiple points in Y_E. In theory, if y could vary over all of B^0 \ G, we could generate all of the properly efficient points of Y (see [24]) in this way.
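The construction of y^w from y can be sketched numerically on the sample instance of Section 2.3 (f_1(x) = (x_1 − 2)^2 + 1, f_2(x) = (x_2 − 4)^2 + 1, with 25x_1^2 + 4x_2^2 ≤ 100 and x_1 + 2x_2 ≤ 4). The sketch below is a crude stand-in for (T_y), not the paper's method: instead of solving the convex program with an NLP solver, it approximates the membership test "y + λ(y^M − y) ∈ G" by a finite grid over X and finds λ* by bisection; the grid step and bisection tolerance are arbitrary choices.

```python
# Crude numerical sketch of lam* from (2.17) for the Section 2.3 instance.

def f(x1, x2):
    return ((x1 - 2.0) ** 2 + 1.0, (x2 - 4.0) ** 2 + 1.0)

def feasible(x1, x2):
    return 25.0 * x1 ** 2 + 4.0 * x2 ** 2 <= 100.0 and x1 + 2.0 * x2 <= 4.0

# Outcome vectors f(x) for grid points of X (the ellipse forces |x1|<=2, |x2|<=5).
step = 0.02
outcomes = []
for i in range(201):
    x1 = -2.0 + step * i
    for j in range(501):
        x2 = -5.0 + step * j
        if feasible(x1, x2):
            outcomes.append(f(x1, x2))

def in_G(y):
    # y in G (approximately) iff f(x) <= y for some grid point x of X
    return any(v1 <= y[0] and v2 <= y[1] for v1, v2 in outcomes)

def solve_T(y, yM, tol=1e-4):
    # Bisection for lam* = min{lam in [0,1] : y + lam*(yM - y) in G};
    # membership along the segment is monotone because G^0 is upward closed.
    lo, hi = 0.0, 1.0
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        z = (y[0] + mid * (yM[0] - y[0]), y[1] + mid * (yM[1] - y[1]))
        if in_G(z):
            hi = mid
        else:
            lo = mid
    return hi

ym, yM = (1.0, 2.380437), (110.0, 110.0)   # ideal point and y^M from Section 2.3
lam = solve_T(ym, yM)
yw = (ym[0] + lam * (yM[0] - ym[0]), ym[1] + lam * (yM[1] - ym[1]))
```

A real implementation would of course solve (T_y) itself, which also yields the associated efficient solution x*; the grid oracle here only recovers λ* and y^w approximately.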
The following Proposition 2.6 shows that, for each efficient outcome point y^w generated by a given point y, we can determine p new points which belong to B^0 \ G and differ from y. This is accomplished by a technique called cutting a reverse polyblock.
A set of the form B = ∪_{y∈V} [y, y^M] ⊂ R^p, where [y, y^M] := {y' | y ≤ y' ≤ y^M}, V ⊂ B^0 = [y^m, y^M], and |V| < ∞, is called a reverse polyblock in the hyperrectangle B^0 with vertex set V. A vertex y ∈ V is said to be proper if there is no y' ∈ V \ {y} such that [y, y^M] ⊂ [y', y^M]. It is clear that a reverse polyblock is completely determined by its proper vertices.

Proposition 2.6 (see, e.g., [20]). Let G be a compact convex set contained in a reverse polyblock B = ∪_{y∈V} [y, y^M] with vertex set V ⊂ B^0. Let v ∈ V \ G, and let y^w be the unique point on the boundary of G that belongs to the line segment connecting v and y^M. Then B' = B \ [v, y^w) is a reverse polyblock containing G whose vertex set is

V' = (V \ {v}) ∪ {v^1, ..., v^p},  (2.18)

where, as usual, e^i denotes the ith unit vector of R^p, and

v^i = v + (y^w_i − v_i) e^i,  i = 1, ..., p.  (2.19)
Remark 2.7. By Proposition 2.3, the point y^w described in Proposition 2.6 belongs to G_E. From (2.18), it is easy to see that v^i ≠ v for all i = 1, ..., p, and, for each i = 1, ..., p, the vertex v^i ∈ B^0 \ G, because y^w ≥ v^i and y^w ≠ v^i. The points v^1, ..., v^p are called the new vertices corresponding to y^w.
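The vertex update is a one-line formula per coordinate. A minimal sketch (the numeric vectors v and y^w below are invented for illustration):

```python
# Vertex update for cutting the box [v, y^w) out of a reverse polyblock:
# each new vertex v^i raises the i-th coordinate of v up to y^w_i.

def new_vertices(v, yw):
    """Return v^1, ..., v^p with v^i = v + (yw_i - v_i) * e^i."""
    p = len(v)
    return [tuple(yw[i] if j == i else v[j] for j in range(p)) for i in range(p)]

v = (1.0, 2.0, 3.0)
yw = (2.5, 4.0, 3.5)
vs = new_vertices(v, yw)
# v^1 = (2.5, 2.0, 3.0), v^2 = (1.0, 4.0, 3.0), v^3 = (1.0, 2.0, 3.5)
```

As in Remark 2.7, each v^i satisfies v^i ≤ y^w and v^i ≠ y^w, so the new vertices stay below the efficient point that generated them.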

The Algorithm
After the initialization step, the algorithm executes the iteration step repeatedly to generate a number of efficient outcome points for problem (VP_X), depending on the user's requirements. Note that the construction of the box B^0 in Substep 1.1 of Algorithm 1 involves solving p convex programming problems, each of which has a simple linear objective function and the same feasible region. Let NumExpect be a positive integer. The algorithm for generating NumExpect efficient outcome points of problem (VP_X), together with the NumExpect efficient solutions associated with them, is described as follows.
2.1. Set S' = ∅. Find an optimal solution (x*, λ*) ∈ R^{n+1} and the optimal value λ* of the problem (T_y), and set w^k = y + λ*(y^M − y).

Example
To illustrate Algorithm 1, we consider the convex multiobjective programming problem (VP_X) (see Benson [14]) with p = n = 2, where f_1(x) = (x_1 − 2)^2 + 1 and f_2(x) = (x_2 − 4)^2 + 1, and X ⊂ R^2 is the set satisfying the constraints 25x_1^2 + 4x_2^2 − 100 ≤ 0 and x_1 + 2x_2 − 4 ≤ 0. What follows is a brief summary of the results of executing Algorithm 1 to determine NumExpect = 7 different efficient outcome points for this sample problem, together with the 7 efficient solutions associated with them.
Step 1. By solving the problem (P_1), we obtain the optimal solution (2.0, 0.0, 1.0, 100.0) and the optimal value y^opt_1 = 1.0. Then y^1 = (1.0, 100.0) is the optimal outcome for the problem (P^0_1).
By solving problem (P_2), we obtain the optimal value y^opt_2 = 2.380437 and the optimal solution (−1.6501599, 2.8250799, 100.0, 2.380437). Hence, y^2 = (100.0, 2.380437) is the optimal outcome for the problem (P^0_2). From (2.4), we choose α = 110.0 > max{y^k_j | j, k = 1, 2}, so that y^M = (110.0, 110.0)^T. Set B = B \ [v, y^w) with v = y and y^w = w^1.

2.26
Set B = B \ [v, y^w) with v = y and y^w = w^2.

2.27
The two new vertices corresponding to y^w are v^1 and v^2:

2.28
By a calculation analogous to the above, we obtain the four next efficient outcome points w^4, w^5, w^6, and w^7, generated by y^{s_1}, y^{s_2}, y^{s_3}, y^{s_4}, respectively, and the four next efficient solutions x^4, x^5, x^6, x^7 associated with w^4, w^5, w^6, w^7, respectively.

Since f(x*) ∈ Y and y* ∈ Y_E, from (3.2) we have f(x*) = y*. Thus, h(y*) = ∏_{j=1}^p y*_j = ∏_{j=1}^p f_j(x*). Now, we show that x* is a global optimal solution of problem (CP_X). Indeed, assume the contrary, that there is a point x ∈ X such that ∏_{j=1}^p f_j(x) < ∏_{j=1}^p f_j(x*). Combining this fact and (3.2) gives ∏_{j=1}^p f_j(x) < ∏_{j=1}^p y*_j. Since x ∈ X, we have y = f(x) ∈ Y; therefore, ∏_{j=1}^p y_j < ∏_{j=1}^p y*_j. This contradicts the hypothesis that y* is a global optimal solution to problem (OP_Y) and proves that x* is a global optimal solution to problem (CP_X). This completes the proof.

By Theorem 3.3 and Proposition 3.1, solving the problem (CP_X) can be carried out in two stages: (1) find a global optimal solution y* to the problem (OP_{G_E}); then y* is also a global optimal solution to the problem (OP_Y); (2) find a global optimal solution x* ∈ X to the problem (CP_X), namely, any x* ∈ X satisfying f(x*) ≤ y*.
In the next section, an outer approximation algorithm is developed for solving the problem (OP_{G_E}).

Outer Approximation Algorithm for Solving Problem OP GE
Starting with the reverse polyblock B^0 = [y^m, y^M] (see Section 2.1), the algorithm iteratively generates a sequence of reverse polyblocks B_k, k = 1, 2, ..., such that

B^0 ⊃ B_1 ⊃ B_2 ⊃ ··· ⊃ B_k ⊃ B_{k+1} ⊃ ··· ⊃ G.  (3.4)
For each k = 0, 1, 2, ..., the new reverse polyblock B_{k+1} is constructed from B_k via formula (3.5) below. By (3.4), the optimal value β_k = min{h(y) | y ∈ B_k} is a lower bound for the problem (OP_{G_E}), and {β_k} is an increasing sequence, that is, β_k ≤ β_{k+1} for all k. Let ε be a given, sufficiently small positive real number, and let y* ∈ G_E. Then h(y*) is an upper bound for the problem (OP_{G_E}). A point y* ∈ G_E is said to be an ε-optimal solution to problem (OP_{G_E}) if there is a lower bound β* for this problem such that h(y*) − β* < ε.
Below, we present an algorithm for finding an ε-optimal solution to problem (OP_{G_E}). At the beginning of a typical iteration k ≥ 0 of the algorithm, we have from the previous iteration an available nonempty reverse polyblock B_k ⊂ R^p that contains G and an upper bound θ_k for the problem (OP_{G_E}). In iteration k, the problem min{h(y) | y ∈ V_k} is first solved to obtain the optimal solution set T^opt_k. By the construction, the optimal value β_k = h(y^k), y^k ∈ T^opt_k, is the current best lower bound. Then, we solve the convex programming problem (T_y) with y := y^k to obtain the optimal value λ*. By Proposition 2.3, the feasible point ω^k = y + λ*(y^M − y) ∈ G_E is an efficient outcome point generated by y = y^k (see Remark 2.5). Now, the current best upper bound is θ_k = min{θ_k, h(ω^k)}, and the feasible solution y^best satisfying h(y^best) = θ_k is said to be the current best feasible solution. If θ_k − β_k < ε, then the algorithm terminates, and y^best is an ε-optimal solution for the problem (OP_{G_E}). Otherwise, set B_{k+1} := B_k \ [v, y^w), where v := y^k and y^w = ω^k. According to Proposition 2.6, the vertex set V_{k+1} of the reverse polyblock B_{k+1} is

V_{k+1} = (V_k \ {v}) ∪ {v^1, ..., v^p},

where v^1, ..., v^p are determined by formula (2.18). Figure 3 illustrates the first two steps of the algorithm in the case p = 2.
By the construction, it is easy to see that {θ_k}, the sequence of upper bounds for the problem (OP_{G_E}), is nonincreasing, that is, θ_{k+1} ≤ θ_k for all k = 0, 1, 2, .... The outer approximation algorithm for solving (OP_{G_E}) is now stated as follows.

Initialization Step
Construct B^0 = [y^m, y^M], where y^m and y^M are described in Section 2.1. Choose ε > 0 (a sufficiently small number). Set V_0 = {y^m} and θ_0 := Numlarge (Numlarge is a sufficiently large number; it can be viewed as an initial upper bound).

Here λ_k ∈ (0, 1), and all of the points y^k are contained in the set B^0 \ G. Furthermore, by the choice of y^M (see (2.4)), the closure of B^0 \ G is a compact subset of the interior of the cone y^M − R^p_+. This observation implies that Vol[y^k, y^M] has a lower bound away from zero.
Combining this fact, (3.9) and (3.11) imply that lim_{k→∞} λ_k = 0. Also, the same observation implies that ‖y^M − y^k‖ is bounded. Finally, by (3.10), we have lim_{k→∞} ‖ω^k − y^k‖ = 0, as required.
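The whole bounding scheme can be exercised end to end on a tiny instance of our own construction (not one of the paper's test problems): f_1(x) = x^2 + 1 and f_2(x) = (1 − x)^2 + 1 on X = [0, 1], for which membership in G^0 has the closed form √(y_1 − 1) + √(y_2 − 1) ≥ 1 (for y ≥ (1, 1)), so no convex subproblem (T_y) is needed. The global optimal value of the associated multiplicative problem is h = 1.5625 at x = 0.5; ε, the iteration cap, and the bisection tolerance below are arbitrary sketch parameters.

```python
# End-to-end sketch of the outer approximation scheme on a toy instance.
from math import sqrt

ym, yM = (1.0, 1.0), (3.0, 3.0)       # B^0 = [y^m, y^M] for this instance

def h(y):
    return y[0] * y[1]                # h(y) = y1 * y2

def in_G(y):                          # exact membership test for this instance
    return (y[0] >= 1.0 and y[1] >= 1.0
            and sqrt(y[0] - 1.0) + sqrt(y[1] - 1.0) >= 1.0)

def solve_T(y, tol=1e-6):
    lo, hi = 0.0, 1.0                 # bisection: first point of [y, y^M] in G
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        z = (y[0] + mid * (yM[0] - y[0]), y[1] + mid * (yM[1] - y[1]))
        if in_G(z):
            hi = mid
        else:
            lo = mid
    return hi

eps = 0.05
V = [ym]                              # vertex set V_0 of the reverse polyblock
theta, best, beta = float("inf"), None, 0.0
for k in range(5000):
    yk = min(V, key=h)                # Proposition 3.4: minimize h over vertices
    beta = h(yk)                      # lower bound beta_k
    if theta - beta <= eps:
        break                         # best is an eps-optimal solution
    lam = solve_T(yk)
    w = (yk[0] + lam * (yM[0] - yk[0]), yk[1] + lam * (yM[1] - yk[1]))
    if h(w) < theta:
        theta, best = h(w), w         # w in G_E gives the upper bound theta_k
    V.remove(yk)                      # cut the box [y^k, w) out of B_k ...
    V.append((w[0], yk[1]))           # ... and add the new vertices v^1, v^2
    V.append((yk[0], w[1]))           # from formula (2.18)
```

On termination, theta and beta bracket the global optimal value 1.5625 within ε, mirroring the θ_k/β_k bounds of the algorithm above.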

Computational Results
First, Algorithm 2 was applied to the test example given by Benson [14] (see the example in Section 2.3), where f_1(x) = (x_1 − 2)^2 + 1, f_2(x) = (x_2 − 4)^2 + 1, and g_1(x) = 25x_1^2 + 4x_2^2 − 100, g_2(x) = x_1 + 2x_2 − 4. The calculation process for solving this example is described as follows.

Initialization.
As in the example in Section 2.3, we have B^0 = [y^m, y^M] with y^m = (1.00000, 2.380437) and y^M = (110.000000, 110.000000). We choose ε = 0.025 and set V_0 = {y^m} and θ_0 = 10000 (the initial upper bound), set k = 0, and go to iteration step k = 0.
Step 3. Since h(w^0) = 22.619464 < θ_0, we set θ_0 = h(w^0) = 22.619464 (current best upper bound) and y^best = w^0 (current best feasible point).
Step 3. Since h(w^1) = 16.286498 < θ_1, we set θ_1 = h(w^1) (current best upper bound) and y^best = w^1 (current best feasible point).
After 42 iterations, the algorithm terminates with θ_42 = 9.770252094 and β_42 = 9.745596873, where θ_42 = h(w^20). Then the ε-optimal solutions for the problem (OP_Y) and for the problem (CP_X) are given by

y* = y^best = w^20 = (1.023846379, 9.542693410),  (3.14)

and x* = (1.845577271, 1.077211364). The approximate optimal value of problem (3.12) is 9.770252094.
Below, we present the results of computational experiments for two types of problems. We take ε = 0.025 in the numerical experiments. For each problem type, we ran our algorithm on several randomly generated instances and report average results. The numerical results are summarized in Tables 1 and 2.
where f_j(x) = ⟨α_j, x⟩ + ν_j, with α_j, β_j ∈ R^n and ν_j ∈ R, j = 1, ..., p, A is an (m × n)-matrix, and b ∈ R^m. Our program was coded in Matlab R2007a and executed on a PC with an Intel Core 2 T5300 CPU (1.73 GHz) and 1 GB RAM. For the two problem types above, the parameters are defined as follows: (iv) the coefficients μ_j, ν_j, j = 1, ..., p, are uniformly distributed on [0, 1].

Conclusion
In this paper, we have presented Algorithm 1 for generating a finite set Y^out_E of efficient outcome points for the convex multiobjective programming problem (VP_X). For each efficient outcome point y^w to be generated by a given y ∈ B^0 \ G, the algorithm calls for solving one convex programming problem with linear objective function, (T_y). Note that, by solving problem (T_y), we also obtain the efficient solution x* associated with y^w, where (x*, λ*) is the optimal solution for the problem (T_y).
In [7], Ehrgott et al. proposed an outer approximation algorithm for representing an inner approximation of G^0. Their algorithm combines an extension of Benson's outer approximation algorithm [1] for multiobjective linear programming problems with a linearization technique. In each iteration step of these algorithms, a polyhedron is obtained from the previous one by adding a new hyperplane to it. The vertices of the new polyhedron can be calculated from those of the previous polyhedron by available global optimization methods. Unlike those algorithms, our algorithm constructs a reverse polyblock at each iteration step from the previous one by cutting out a box, and its vertices can easily be determined by formula (2.18).
As an application, we have proposed the outer approximation algorithm (Algorithm 2) for solving the convex multiplicative programming problem (CP_X) associated with problem (VP_X) in the outcome space. Since the number p of the functions f_j is, in practice, often much smaller than the number of variables n, we expect the algorithms to considerably reduce the size of the problems to be solved.

Here, for any two vectors a, b ∈ R^p, the notations a ≥ b and a ≫ b mean a − b ∈ R^p_+ and a − b ∈ int R^p_+, respectively, where R^p_+ is the nonnegative orthant of R^p and int R^p_+ is its interior. By definition, Q_E ⊆ Q_WE. We call Y_E and Y_WE the efficient outcome set and the weakly efficient outcome set for problem (VP_X), respectively.

1.1. Construct B^0 = [y^m, y^M], where y^m and y^M are described in Section 2.1. Set B = B^0.
1.2. Set Y^out_E = ∅ (the set of efficient outcome points), X^out_E = ∅ (the set of efficient solutions), Nef := NumExpect, k := 0 (the number of elements of the set Y^out_E), and S = {y^m}.

2.2. For each y ∈ S do begin k := k + 1.

B_{k+1} = B_k \ [v, y^w),  (3.5)

where v = y^k, y^k is a global optimal solution to the problem min{h(y) | y ∈ B_k}, and y^w is the efficient outcome point generated by y = y^k. For each k, let V_k denote the vertex set of the reverse polyblock B_k. The following Proposition 3.4 shows that the function h(y) achieves its minimum over the reverse polyblock B_k at a proper vertex.

Proposition 3.4. Let h(y) = ∏_{j=1}^p y_j, and let B_k be a reverse polyblock. Consider the problem of minimizing h(y) subject to y ∈ B_k. An optimal solution y^k to this problem exists, where y^k is a proper vertex of B_k.

Proof. Note that the objective function h(y) is continuous on R^p and B_k is compact, so the problem min{h(y) | y ∈ B_k} has an optimal solution y^k ∈ B_k. For each y ∈ B_k, there is a proper vertex v of B_k such that y ∈ [v, y^M]; that means v ≤ y. By the definition of the function h(y) (recall that B_k ⊂ B^0 and y^m ≫ 0, since each f_j is positive on X), we have h(v) ≤ h(y). This shows that h(y^k) = min{h(y) | y ∈ V_k}, where V_k is the vertex set of B_k, and the proof is complete.

Remark 3.5. By Proposition 3.4, instead of solving the problem min{h(y) | y ∈ B_k}, we solve the simpler problem min{h(y) | y ∈ V_k}. From (3.4), it is clear that, for each k = 0, 1, 2, ..., the optimal value β_k = min{h(y) | y ∈ B_k} is a lower bound for problem (OP_{G_E}).
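Proposition 3.4 can be checked numerically: over a union of boxes [v, y^M] with positive lower corners, the product never dips below its value at the best vertex. The vertex set and y^M below are invented sample data.

```python
# Numerical check of Proposition 3.4: over a reverse polyblock
# B = union of boxes [v, yM] with v > 0, the product h(y) = y1*...*yp
# attains its minimum at a vertex, because 0 < v <= y implies h(v) <= h(y).

import random

def h(y):
    prod = 1.0
    for t in y:
        prod *= t
    return prod

yM = (4.0, 4.0)
V = [(1.0, 3.0), (1.5, 1.5), (3.0, 1.0)]       # vertex set, V subset of B^0
hmin = min(h(v) for v in V)                    # = h((1.5, 1.5)) = 2.25

random.seed(0)
for _ in range(1000):
    v = random.choice(V)                       # sample a point y in [v, yM]
    y = tuple(vi + random.random() * (yM[i] - vi) for i, vi in enumerate(v))
    assert h(y) >= hmin - 1e-12                # no sampled point beats the vertices
```

This is exactly the monotonicity argument of the proof: every y in the polyblock sits above some vertex v, and h is nondecreasing on the positive orthant.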

Set k = 0 and go to Iteration Step k.

Iteration Step k (k = 0, 1, 2, ...). See Steps k.1 through k.5 below.

k.1. Determine the optimal solution set T^opt_k = Argmin{h(y) | y ∈ V_k}. Choose an arbitrary y^k ∈ T^opt_k and set β_k := h(y^k) (current best lower bound).

k.2. Let y = y^k. Find the optimal value λ* of the problem (T_y), and set ω^k = y + λ*(y^M − y) ∈ G_E.

k.3. Update the upper bound: if h(ω^k) < θ_k, then set θ_k = h(ω^k) (current best upper bound) and y^best = ω^k (current best feasible point).

Journal of Applied Mathematics

k.4. If θ_k − β_k ≤ ε, then terminate the algorithm: y^best is an ε-optimal solution. Else, set B_{k+1} := B_k \ [v, y^w), where v := y^k and y^w = ω^k, and determine the set V_{k+1} by formula (2.18).

k.5. Set θ_{k+1} := θ_k and k := k + 1, and go to Iteration Step k.

Theorem 3.6. The algorithm terminates after finitely many steps and yields an ε-optimal solution to problem (OP_{G_E}).

Proof. Let ε be a given positive number. Since the function h(y) is uniformly continuous on the compact set B^0, we can choose a small enough number δ > 0 such that if y, y' ∈ B^0 and ‖y − y'‖ < δ, then |h(y) − h(y')| < ε. Then, to prove the termination of the algorithm, we only need to show that

lim_{k→∞} ‖ω^k − y^k‖ = 0.  (3.8)

Observe first that the positive series Σ_{k=1}^∞ Vol[y^k, ω^k] is convergent, since the open boxes int[y^k, ω^k] are disjoint and all of them are contained in the closure of B^0 \ G. It follows that

lim_{k→∞} Vol[y^k, ω^k] = 0.  (3.9)

By the construction of ω^k and y^k, we have

ω^k − y^k = λ_k (y^M − y^k),  (3.10)

Vol[y^k, ω^k] = λ_k^p Vol[y^k, y^M],  (3.11)

(i) A = (a_ij) ∈ R^{m×n} is a randomly generated matrix with elements in [−1, 1];
(ii) b = (b_1, ..., b_m)^T is a random vector satisfying the formula

b_i = Σ_{j=1}^n a_ij + b_0i,  (3.17)

with b_0i a random number in (0, 2) for i = 1, ..., m;
(iii) α_j, β_j, j = 1, ..., p, are vectors with elements randomly distributed on [0, 1];
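A useful consequence of rule (ii), worth noting, is that every generated instance is feasible: with b_i = Σ_j a_ij + b_0i and b_0i > 0, the point x = (1, ..., 1)^T satisfies Ax ≤ b by construction. A minimal sketch (the dimensions m, n and the seed are arbitrary):

```python
# Sketch of the test-problem generation rules (i)-(ii): the all-ones point
# is always feasible, so the feasible region {x | A x <= b} is nonempty.

import random

random.seed(42)
m, n = 20, 50
A = [[random.uniform(-1.0, 1.0) for _ in range(n)] for _ in range(m)]   # rule (i)
b = [sum(row) + random.uniform(0.0, 2.0) for row in A]                  # rule (ii)

ones = [1.0] * n
Ax = [sum(a * x for a, x in zip(row, ones)) for row in A]
feasible = all(axi <= bi for axi, bi in zip(Ax, b))
print(feasible)  # True by construction: A*1 = row sums, and b exceeds them
```

This guarantees that the randomly generated constraint sets used in the experiments are never empty.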

The number of such points depends on the requirements of the decision makers. When the selected number is large enough, the convex set conv(Y^out_E + R^p_+), the convex hull of the set Y^out_E + R^p_+, may be viewed as an inner approximation of G^0 = Y + R^p_+, and its efficient set may be viewed as an inner approximation of the efficient outcome set Y_E for problem (VP_X). (Recall that y^opt_k denotes the optimal value of problem (P^0_k), k = 1, 2, ..., p.)
In this way, the decision maker can generate various efficient outcome points in Y_E. Set B = B \ [v, y^w) with v = y and y^w = w^k. Determine the p vertices v^1, ..., v^p corresponding to y^w via formula (2.18). Set S' = S' ∪ {v^1, ..., v^p}. If k ≥ Nef, then terminate the algorithm; else set S = S' and return to Step 2.
3. Application to Problem (CP_X)

Consider the convex multiplicative programming problem (CP_X) associated with the convex multiobjective programming problem (VP_X):

min ∏_{j=1}^p f_j(x)  s.t. x ∈ X,  (CP_X)

where X ⊂ R^n is a nonempty compact convex set defined by (2.1) and f_j : R^n → R is convex on R^n and positive on X, j = 1, 2, ..., p. As before, Y = {f(x) | x ∈ X} is the outcome set of X under f. By assumption, we have Y ⊂ int R^p_+. Associated with (CP_X), consider the outcome-space problem

min h(y) := ∏_{j=1}^p y_j  s.t. y ∈ Y,  (OP_Y)

and let (OP_{G_E}) denote the corresponding problem min{h(y) | y ∈ G_E}.

The following proposition gives a link between the global optimal solutions to the problem (OP_Y) and the efficient outcome set Y_E.

Proposition 3.1. If y* is a global optimal solution to problem (OP_Y), then y* ∈ Y_E.

The relationship between the two problems (CP_X) and (OP_Y) is described by the following theorem, which was given in [14, Theorem 2.2]. However, we give here a full proof for the reader's convenience.

Theorem 3.3. If y* is a global optimal solution to problem (OP_Y), then any x* ∈ X such that f(x*) ≤ y* is a global optimal solution to problem (CP_X). Furthermore, the global optimal values of the two problems (CP_X) and (OP_Y) are equal.

Proof. Suppose that y* is a global optimal solution to problem (OP_Y) and x* ∈ X satisfies

f(x*) ≤ y*.  (3.2)

By Proposition 3.1, y* ∈ Y_E.

Step 2. Let y = y^2. The problem (T_y) has the optimal solution (x*, λ*) = (1.117142, 1.441429, 0.007151) and the optimal value λ* = 0.007151. Then w^2 = (1.779439, 7.546285) ∈ G_E.

Table 1 :
Computational results on the problem of Type 1.

Table 2 :
Computational results on the problem of Type 2.