Journal of Optimization, Hindawi Publishing Corporation. DOI: 10.1155/2014/406092. Research Article: Multiobjective Optimization Involving Quadratic Functions. Oscar Brito Augusto (Escola Politécnica da Universidade de São Paulo, Av. Prof. Mello Moraes 2231, 05508-030 São Paulo, SP, Brazil), Fouad Bennis (École Centrale de Nantes, Institut de Recherche en Communications et Cybernétique de Nantes, 1 rue de la Noë, 44300 Nantes, France), Stéphane Caro (Institut de Recherche en Communications et Cybernétique de Nantes, 1 rue de la Noë, 44321 Nantes, France). Academic Editor: Manuel Lozano. Received 4 May 2014; Accepted 26 July 2014; Published 15 September 2014. Copyright © 2014 Oscar Brito Augusto et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Multiobjective optimization is nowadays a standard requirement in engineering projects. Although the underlying idea is simple, implementing a procedure to solve a general problem is not an easy task. Evolutionary algorithms are widespread as a satisfactory technique for finding a candidate set for the solution; usually they supply a discrete picture of the Pareto front even when this front is continuous. In this paper we propose three methods for solving unconstrained multiobjective optimization problems involving quadratic functions. In the first, for biobjective optimization defined in the bidimensional space, a continuous Pareto set is found analytically. In the second, applicable to multiobjective optimization, a condition test is proposed to check whether a point in the decision space is a Pareto optimum or not and, in the third, with functions defined in n-dimensional space, a direct noniterative algorithm is proposed to find the Pareto set. Simple problems highlight the suitability of the proposed methods.

1. Introduction

Life is about making decisions, and the choice of optimal solutions is not an exclusive subject of scientists, engineers, and economists: decision making is present in day-to-day life. Looking for an enjoyable vacation, anyone may pose an optimization problem to a travel agent: with a minimum amount of money, visit a maximum number of places in a minimum amount of time and with a maximum level of comfort. Usually all real design problems have more than one objective; namely, they are multiobjective. Moreover, the design objectives are often antagonistic.

Edgeworth, at King's College, London, was the pioneer in defining an optimum for the multicriteria economic decision making problem. It concerned the multiutility problem within the context of two consumers, P and π: "It is required to find a point (x, y) such that in whatever direction we take an infinitely small step, P and π do not increase together but that, while one increases, the other decreases."

A few years later, in 1896, Pareto, at the University of Lausanne, Switzerland, formulated his two main theories, the Circulation of the Elites and the Pareto optimum: "The optimum allocation of the resources of a society is not attained so long as it is possible to make at least one individual better off in his own estimation while keeping others as well off as before in their own estimation."

Since then, many researchers have dedicated themselves to developing methods to solve this kind of problem. Interestingly, the solutions of problems with multiple objectives, also called multicriteria optimization or vector optimization, are referred to as Pareto optimal solutions or the Pareto front although, as Stadler observed, they should be called Edgeworth-Pareto solutions.

Extensive reviews are given by Miettinen for multiobjective optimization concepts and methods, by Goldberg for evolutionary algorithms, and by Deb for evolutionary multiobjective optimization. The theoretical basis for multiobjective optimization adopted in this work is based on these references.

Thanks to advances in computing, the optimization of large-scale problems has become a common task in engineering design. The development of high-speed computers and their increasing use in several industrial branches have led to significant changes in design processes. Ever faster computers allow the engineer to consider a wider range of design possibilities, and optimization processes allow a systematic choice between alternatives, since they are based on rational criteria. Used adequately, these procedures can, in most cases, improve or even generate the final results of a design.

Alongside this computer development, much of the research done in optimization focuses on numerical methods able to solve any kind of problem, but sometimes simplified problems can give important clues to the designer during the trade-off phases of a decision.

The present work aims to bring new approaches to solve multiobjective optimization problems, providing a rapid solution for the Pareto set if the objective functions involved are quadratic.

The rest of the paper is organized as follows. In the next section a general multiobjective optimization problem is formulated, and the nature of optimal solutions from the Pareto perspective and the necessary conditions to be met are defined. In the following section, three propositions are made to solve unconstrained multiobjective optimization problems involving quadratic functions. In the first, the general problem comprises two bidimensional functions; in this case, the proposition permits finding the Pareto front analytically. The second considers the minimization problem with three or more functions, keeping the decision space in two dimensions; in this case the proposition helps to find the Pareto points and their boundary in the decision space. In the third, the decision space is expanded to any dimension. Finally, a section with the conclusions and the proposed future work is presented.

2. Multiobjective Optimization Problem

Multiobjective optimization problems (MOOP) can be defined by the following equations:

(1a) minimize: f(X),
(1b) subject to: gi(X) ≤ 0, i = 1, 2, ..., m,
(1c) hj(X) = 0, j = 1, 2, ..., l,
(1d) Xinf ≤ X ≤ Xsup,

where f(X) = [f1, f2, f3, ..., fk]^T : R^n → R^k is a vector with the values of the scalar objective functions fi(X) : R^n → R to be minimized. X ∈ R^n is the vector containing the design variables, also called decision variables, defined in the design space R^n. Xinf and Xsup are, respectively, the lower and upper bounds of the design variables. gi(X) : R^n → R represents the ith inequality constraint function and hj(X) : R^n → R the jth equality constraint function. Equations (1b) to (1d) define the region of feasible solutions, S, in the design space R^n. The constraints gi(X) are written as "gi(X) ≤ 0" since "gi(X) ≥ 0" constraints can be converted to that form by multiplying them by −1. Similarly, the problem considers the "minimization" of fi(X), since the "maximization" of a function can be transformed into minimization by multiplying it by −1.

2.1. Pareto Optimal Solution

The notion of "optimum" in solving multiobjective optimization problems is known as "Pareto optimality." A solution is Pareto optimal if there is no way to improve one objective without worsening at least one other; that is, the feasible point X* ∈ S is Pareto optimal if there is no other feasible point X ∈ S such that fi(X) ≤ fi(X*) for all i and fj(X) < fj(X*) for at least one index j. Due to the conflicting nature of the objective functions, the Pareto optimal solutions are usually scattered in the region S, a consequence of it being impossible to minimize all the objective functions simultaneously. Solving the optimization problem yields the Pareto set, the Pareto optimal solutions defined in the design space, and the Pareto front, the image of the objective functions in the criterion space evaluated over the set of optimal solutions.
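As an aside, the dominance relation underlying this definition is easy to state in code. The following is a minimal sketch (the function names are ours, not part of the paper): `dominates` encodes "no worse in every objective, strictly better in at least one," and `pareto_front` filters a finite set of objective vectors down to the nondominated ones.

```python
import numpy as np

def dominates(fa, fb):
    """True if objective vector fa Pareto-dominates fb:
    fa is no worse in every objective and strictly better in at least one."""
    fa, fb = np.asarray(fa), np.asarray(fb)
    return bool(np.all(fa <= fb) and np.any(fa < fb))

def pareto_front(F):
    """Indices of the nondominated rows of F (one objective vector per row)."""
    return [i for i, fi in enumerate(F)
            if not any(dominates(fj, fi)
                       for j, fj in enumerate(F) if j != i)]
```

For example, with the objective vectors [1, 4], [2, 2], [3, 3], and [4, 1], only [3, 3] is dominated (by [2, 2]); the other three form the discrete front.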

2.2. Necessary Condition for Pareto Optimality

The multiobjective problem expressed by (1a)–(1d) is general in character: the equations reduce to a single-objective optimization problem when k = 1. According to Miettinen, as in single-objective optimization, a solution X* ∈ S must satisfy the Karush-Kuhn-Tucker (KKT) conditions for Pareto optimality, expressed as follows:

(2a) Σ_{i=1}^{k} ωi ∇fi(X*) + Σ_{j=1}^{m} λj ∇gj(X*) + Σ_{i=1}^{l} μi ∇hi(X*) = 0,
(2b) λj gj(X*) = 0,
(2c) λj ≥ 0,
(2d) μi ≥ 0,
(2e) ωi ≥ 0; Σ_{i=1}^{k} ωi = 1,

where ωi is the weighting factor for the gradient of the ith objective function evaluated at the point X*, ∇fi(X*). λj represents the weighting factor for the gradient of the jth inequality constraint function, ∇gj(X*), and is zero when the associated constraint is not active, that is, when gj(X*) < 0. μi represents the weighting factor for the gradient of the ith equality constraint function, ∇hi(X*).

Equations (2a) to (2e) form the necessary conditions for X* to be a Pareto optimum, as described by Miettinen. They are sufficient for a complete mapping of the Pareto front if the problem is convex and the objective functions are continuously differentiable in the space S. Otherwise, the solution depends on additional conditions, as shown by Marler and Arora.

The methods we propose in the next sections can be classified as a posteriori preference articulation methods; an extensive literature review of the most important methods for solving multiobjective optimization problems can be found in Augusto et al.

3. Two-Dimensional Functions of Class C¹

In this section we propose a simple strategy to determine the Pareto set in the decision space and the corresponding Pareto front in the function space, for MOOP involving two bidimensional differentiable functions. Consider an unconstrained multiobjective optimization problem. From (2a), the optimality condition can be interpreted by the following proposition.

Proposition 1.

If there exists a Pareto front for the minimization problem with two continuous and differentiable functions defined in R², say f1(x1, x2) and f2(x1, x2), then the points in the decision space where the gradients of the two functions are parallel and opposite define a continuous Pareto set that connects the two functions' minima.

As the gradient of each function is orthogonal to its contours and points outwards from the minimum, the curve mentioned in Proposition 1 is the locus where the gradients of the two functions are parallel and opposite, as shown in Figure 1.

Figure 1: Graphical representation of Proposition 1. The continuous Pareto set as the locus where the objective function gradients are parallel and opposite.

3.1. Two Quadratic Functions Defined in R² Space

Proposition 1 is quite general, but since our focus is on quadratic functions, let us solve an unconstrained biobjective optimization problem involving quadratic functions defined in the two-dimensional decision space; that is, f(x1, x2) = [f1, f2] : R² → R². The problem is defined as follows:

minimize:

(3a) f1(x1, x2) = a1 x1² + (b1 x1 + e1) x2 + c1 x2² + d1 x1 + cst1,
(3b) f2(x1, x2) = a2 x1² + (b2 x1 + e2) x2 + c2 x2² + d2 x1 + cst2.

Applying the optimality condition, Σ_{i=1}^{k} ωi ∇fi(X*) = 0, to (3a) and (3b) results in the following:

(4) [ 2a1 x1 + b1 x2 + d1    2a2 x1 + b2 x2 + d2 ] {ω1}   {0}
    [ b1 x1 + 2c1 x2 + e1    b2 x1 + 2c2 x2 + e2 ] {ω2} = {0}.

As the system (4) is homogeneous, a nontrivial solution, with ω ≠ 0, requires singularity; that is, the determinant of the coefficient matrix must be null:

(5) | 2a1 x1 + b1 x2 + d1    2a2 x1 + b2 x2 + d2 |
    | b1 x1 + 2c1 x2 + e1    b2 x1 + 2c2 x2 + e2 | = 0,

which results in the following quadratic curve in (x1, x2):

(6) α x1² + (β x1 + ε) x2 + γ x2² + δ x1 + τ = 0,

where

(7) α = 2(a1 b2 − a2 b1),
    β = 4(a1 c2 − a2 c1),
    γ = 2(b1 c2 − b2 c1),
    δ = 2(a1 e2 − a2 e1) + (d1 b2 − d2 b1),
    ε = 2(d1 c2 − d2 c1) + (b1 e2 − b2 e1),
    τ = d1 e2 − d2 e1.

The gradients ∇f1(X) and ∇f2(X) are parallel on the curve defined by (6), but for Pareto optimality they must also be opposite, which corresponds to positive weights in (4). Since the system is singular, only one of its equations is needed to relate the weights ω1 and ω2, the other being a linear combination of it. Using the first equation, the relation is

(8) ω2/ω1 = −(2a1 x1 + b1 x2 + d1)/(2a2 x1 + b2 x2 + d2),

which is positive if and only if

(9) (2a1 x1 + b1 x2 + d1)(2a2 x1 + b2 x2 + d2) < 0.

Therefore, (6) provides the locus where the function gradients are parallel and (9) selects the Pareto set for the minimization problem with two quadratic functions. The bound of (9),

(10) (2a1 x1 + b1 x2 + d1)(2a2 x1 + b2 x2 + d2) = 0,

is reached when the first factor, 2a1 x1 + b1 x2 + d1, or the second, 2a2 x1 + b2 x2 + d2, vanishes. As these factors are the first components of ∇f1 and ∇f2, respectively, these conditions imply that the solution (x1*, x2*) is at the minimum of f1(x1, x2) or at the minimum of f2(x1, x2). In conclusion, the Pareto set for two quadratic functions is a quadratic curve connecting the functions' minima, along which the gradients are parallel and opposite.

As an example, let us consider the following biobjective problem:

minimize:

(11a) f1(x1, x2) = 3 x1² + (x1 + 1) x2 + x2² + 28 x1 + 69,
(11b) f2(x1, x2) = x1² − (x1 + 1) x2 + x2² − 7 x1 + 19.

From (6), the Pareto set takes the form

(12) −8 x1² + (8 x1 + 70) x2 + 4 x2² − 29 x1 − 21 = 0

and is constrained by the following inequality:

(13) (6 x1 + x2 + 28)(2 x1 − x2 − 7) < 0.

Figure 2(a) depicts the contours of the functions f1 and f2 in the two-dimensional decision space. The thicker grey continuous curve represents (12) and the thick blue portion of this curve satisfies (13); as expected, it is the continuous Pareto set, namely, the curve along which the gradient vectors are parallel and opposite. In Figure 2(b), the continuous curve is the image of the Pareto set in the function space, that is, the Pareto front. In addition, the blue dots are the images of the objective functions evaluated on a regular grid in the design space.
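The closed-form construction above is straightforward to check numerically. The following sketch (the variable and function names are ours) computes the coefficients (7) for the example (11a)-(11b) and locates the unconstrained minima of f1 and f2 by solving ∇fi = 0; by the conclusion of Section 3.1, both minima must lie on the curve (12).

```python
import numpy as np

# Coefficients of f1 and f2, read off from (11a) and (11b)
a1, b1, c1, d1, e1 = 3.0, 1.0, 1.0, 28.0, 1.0
a2, b2, c2, d2, e2 = 1.0, -1.0, 1.0, -7.0, -1.0

# Coefficients of the singularity curve, from (7)
alpha = 2*(a1*b2 - a2*b1)                     # -8
beta  = 4*(a1*c2 - a2*c1)                     #  8
gamma = 2*(b1*c2 - b2*c1)                     #  4
delta = 2*(a1*e2 - a2*e1) + (d1*b2 - d2*b1)   # -29
eps   = 2*(d1*c2 - d2*c1) + (b1*e2 - b2*e1)   #  70
tau   = d1*e2 - d2*e1                         # -21

def curve(x1, x2):
    """Left-hand side of (6): zero where the gradients are parallel."""
    return alpha*x1**2 + (beta*x1 + eps)*x2 + gamma*x2**2 + delta*x1 + tau

def opposite(x1, x2):
    """Inequality (9): True where the parallel gradients are also opposite."""
    return (2*a1*x1 + b1*x2 + d1)*(2*a2*x1 + b2*x2 + d2) < 0

# Unconstrained minima of f1 and f2: solve (Hessian) x = -(linear terms)
x_min1 = np.linalg.solve([[2*a1, b1], [b1, 2*c1]], [-d1, -e1])  # (-5, 2)
x_min2 = np.linalg.solve([[2*a2, b2], [b2, 2*c2]], [-d2, -e2])  # (5, 3)
```

Evaluating `curve` at `x_min1` and `x_min2` gives zero in both cases, confirming that the quadratic curve (12) passes through the two minima.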

Figure 2: (a) Continuous Pareto set obtained by the proposed method. (b) Continuous Pareto front, the image of the Pareto set in the function space. (c) Pareto set for the performance functions f1 and f2 obtained by the NSGA II algorithm. (d) Pareto front for the performance functions f1 and f2 obtained by the NSGA II algorithm.

For comparison, Figures 2(c) and 2(d), adapted from Augusto et al., show the solution obtained with the NSGA II genetic algorithm of Deb et al. It can be seen that the points are evenly distributed in the function space but not in the decision space. That happens because the search procedure in most GAs is focused on the function space, trying to obtain a well-distributed Pareto front.

3.2. Three or More Functions Defined in R² Space

In the previous section we found a closed-form solution for the optimization of two quadratic functions in the bidimensional decision space. Unfortunately, we did not find a similar solution when more functions are added to the problem. Nevertheless, the idea behind Proposition 1 remains useful.

Consider an optimization problem involving three continuous differentiable functions f1, f2, and f3. If a point p belongs to the Pareto set, it must satisfy (2a), (2b), (2c), (2d), and (2e). Therefore, one gradient vector, ∇f1(p), will be a linear combination of the other two, ∇f2(p) and ∇f3(p); that is, there will exist positive weights such that

(14) ω1 ∇f1(p) = −ω2 ∇f2(p) − ω3 ∇f3(p).

Figure 3 illustrates this condition, with the gradient vectors ∇f1(p), ∇f2(p), and ∇f3(p) associated with their weighting factors ω1, ω2, and ω3, respectively.

Figure 3: Pareto optimality condition for three or more functions in the R² decision space.

An equilibrium condition exists when ∇f1(p) is oriented into the angular sector opposite to that defined by the two other gradient vectors, ∇f2(p) and ∇f3(p).

Based on this idea we suggest the following.

Proposition 2.

Let ei be the unit vector defined by ei = ∇fi(p)/‖∇fi(p)‖, with ∇fi(p) ≠ 0, and eb the unit vector orthogonal to ei; that is, eb · ei = 0. If p belongs to the Pareto set resulting from a multiobjective optimization problem involving continuous and differentiable functions defined in R², then there exist at least three unit vectors, say ei(p), ej(p), and el(p), that satisfy the following conditions:

(15a) ej · ei < 0,
(15b) el · ei < 0,
(15c) (ej · eb)(el · eb) < 0.

The direction of eb divides the decision space into two half-planes. If the vector ∇fi(p) points to one side, then (15a) and (15b) state that the vectors ∇fj(p) and ∇fl(p) point to the other side, and (15c) states that −∇fi(p) lies between them.

Equations (15a), (15b), and (15c) form a test of whether a point is a Pareto optimum or not. The test is practical only when the problem has few objective functions since, to explore all distinct sets of three gradient vectors in a problem with k objective functions, up to k!/(k − 3)! permutations of (i, j, l) must be checked. Let us apply Proposition 2 to find the solution of an unconstrained MOOP with three quadratic objective functions, two of them being those defined by (11a) and (11b) and the third defined by

(16) f3(x1, x2) = x1² + 12 x2 + x2² + 4 x1 + 40.

Figure 4 shows the Pareto set found by applying the Pareto test at the points of a regular grid over the design space, with fifty points along each coordinate axis: (x1, x2)i,j = (−10 + (20/50) i, −10 + (20/50) j), i, j = 1, ..., 50, covering (−10, 10]. The continuous border of the Pareto set was obtained by applying Proposition 1 to each pair of objective functions.
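The condition test of Proposition 2 is short enough to sketch in full. The following implementation (ours, not the authors') uses the analytic gradients of (11a), (11b), and (16) and scans all ordered triples of gradient unit vectors for one satisfying (15a)-(15c); it assumes the test point is not an exact minimum of any function, so that every ∇fi(p) ≠ 0.

```python
import numpy as np
from itertools import permutations

def grads(x1, x2):
    """Analytic gradients of f1, f2, f3 from (11a), (11b), and (16)."""
    return [np.array([6*x1 + x2 + 28, x1 + 2*x2 + 1]),
            np.array([2*x1 - x2 - 7, -x1 + 2*x2 - 1]),
            np.array([2*x1 + 4, 2*x2 + 12])]

def is_pareto(x1, x2):
    """Proposition 2 test: search for a triple (i, j, l) meeting (15a)-(15c).
    Assumes all gradients are nonzero at (x1, x2)."""
    e = [g / np.linalg.norm(g) for g in grads(x1, x2)]
    for i, j, l in permutations(range(len(e)), 3):
        eb = np.array([-e[i][1], e[i][0]])   # unit vector orthogonal to e_i
        if (e[j] @ e[i] < 0 and e[l] @ e[i] < 0
                and (e[j] @ eb) * (e[l] @ eb) < 0):
            return True
    return False
```

For instance, the origin lies between the three minima and passes the test, whereas a point such as (8, 8), where all three gradients point into the same quadrant, fails it.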

Figure 4: Pareto optimality condition applied to the three-objective optimization problem involving functions defined in the two-dimensional decision space. (a) Pareto set. (b) Pareto front. (c) Pareto front, f1-f2 view. (d) Pareto front, f1-f3 view. (e) Pareto front, f2-f3 view.

3.3. Quadratic Functions Defined in Rⁿ Space

In the former two sections we considered unconstrained MOOP with quadratic functions defined in the two-dimensional space. To proceed to higher dimensions, let us define a quadratic function in R^n space, f(X) : R^n → R, written as follows:

(17) f(X) = (1/2) XL^T A XL + cst

with

(18) XL = T(X − X0),

where XL ∈ R^n is a local coordinate system in which f(X) takes a convenient form, X0 ∈ R^n is the position of the local coordinate system relative to the global one, and T is the coordinate transformation matrix from the global to the local coordinate system.

Using (18), (17) can be rewritten as follows:

(19) f(X) = (1/2)(X − X0)^T (T^T A T)(X − X0) + cst.

Calling Ar = T^T A T, (19) becomes

(20) f(X) = (1/2)(X − X0)^T Ar (X − X0) + cst.

As f(X) is smooth, its gradient vector is

(21) ∇f(X) = Ar (X − X0).

Matrix A, as well as its transformed form Ar, is the symmetric Hessian of f(X), H(X), containing its second partial derivatives.

With these definitions, let X* be a solution of an unconstrained MOOP involving k quadratic functions defined in R^n space. Accordingly, there exist ωi ≥ 0, i = 1, ..., k, that satisfy (2a); that is,

(22) Σ_{i=1}^{k} ωi ∇fi(X*) = 0.

As fi(X) is quadratic, (21) can be used and (22) takes the form

(23) Σ_{i=1}^{k} ωi Ari (X* − X0i) = 0.

In (23), the weights ωi, as well as the sought solution X*, are unknown. Let us assume that all ωi are known; that is, ωi = ωi*. Accordingly, (23) can be rewritten as follows:

(24) Σ_{i=1}^{k} ωi* Ari X* = Σ_{i=1}^{k} ωi* Ari X0i.

Calling

(25) Â = Σ_{i=1}^{k} ωi* Ari,
(26) b̂ = Σ_{i=1}^{k} ωi* Ari X0i,

(24) can be rewritten as follows:

(27) Â X* = b̂.

Let us assume that all Ari are positive definite; that is, X^T Ari X > 0 for all X ∈ R^n, X ≠ 0. If the ωi* are real and nonnegative and satisfy the normalization equality Σ_{i=1}^{k} ωi* = 1, then Â will also be positive definite and therefore its inverse Â⁻¹ will always exist.

Consequently, the Pareto optimum solution X* can easily be found by solving (27); that is,

(28) X* = Â⁻¹ b̂.

In this approach, we have considered the ωi* to be known, so that Â, (25), and b̂, (26), are promptly found. Although this is not the case in a general solution of (22), the approach is very useful for finding the Pareto set and the Pareto front of unconstrained multiobjective optimization problems involving quadratic functions, considering the following.

Proposition 3.

Consider a MOOP involving k quadratic functions, with the Hessian of each function being positive definite. To obtain np Pareto optimum solutions the following steps are proposed.

Draw, at random over the interval [0, 1], the components of ω*, a vector containing the k weights ωi*.

Perform a normalization such that Σ_{i=1}^{k} ωi* = 1.

Calculate Â = Σ_{i=1}^{k} ωi* Ari and b̂ = Σ_{i=1}^{k} ωi* Ari X0i.

Solve the linear system Â X* = b̂, obtaining the Pareto point X* associated with ω*.

Repeat steps (1) to (4) for the number np of Pareto points wanted.

Even though it requires the solution of np linear systems, the method is very fast, the cost depending only on the order of the matrix Â.
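The five steps above translate almost line for line into code. The following is a minimal sketch of Proposition 3 (the function and parameter names are ours); it takes the Hessians Ari and minima X0i as given and returns np Pareto points, one per random weighting vector.

```python
import numpy as np

def pareto_points(Ars, X0s, n_points=5000, seed=None):
    """Proposition 3: one Pareto point per random weighting vector.

    Ars : list of k positive definite Hessian matrices Ar_i (n x n)
    X0s : list of k minima X0_i (n-vectors)
    """
    rng = np.random.default_rng(seed)
    k = len(Ars)
    points = []
    for _ in range(n_points):
        w = rng.random(k)                                   # step 1: weights in [0, 1]
        w /= w.sum()                                        # step 2: sum of weights = 1
        A_hat = sum(wi * A for wi, A in zip(w, Ars))        # step 3: (25)
        b_hat = sum(wi * A @ x0
                    for wi, A, x0 in zip(w, Ars, X0s))      # step 3: (26)
        points.append(np.linalg.solve(A_hat, b_hat))        # step 4: solve (27)
    return np.asarray(points)                               # step 5: repeat n_points times
```

For two quadratics the correctness of each returned point is easy to verify: the two gradients Ari(X* − X0i) must be parallel and opposite there, exactly the condition of Proposition 1.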

Before advancing to the applications, consider an ellipsoid enclosed in a parallelepiped with sides 2a, 2b, and 2c, as shown in Figure 5. Also consider a local coordinate system, XL = [xL1, xL2, xL3]^T, with origin centered inside the ellipsoid, fixed to it and oriented along its semiaxes.

Figure 5: Representation of an ellipsoid, a quadratic function f(X) = 0 defined in R³ space.

The family of quadratic functions that represents this ellipsoid can be written as follows:

(29) f(X) = (1/2) XL^T A XL + cst = 0,

with the matrix A defined in Figure 5.

The ellipsoid can be rotated around the ith coordinate axis; that is, Xri = ri XL. Let α, β, and θ be the rotation angles around the xL1, xL2, and xL3 axes, respectively. Each individual rotation matrix is depicted in Figures 9(a), 9(b), and 9(c) in the appendix. The general rotation matrix is then defined by

(30) R = r1(α) r2(β) r3(θ).

The local coordinate system can be positioned at a point X0 relative to a global coordinate system, X = [x1, x2, x3]^T. In that case, the points on the surface of the ellipsoid can be referenced in the global system as

(31) X = R XL + X0.

To get the transformation matrix T of (18), we isolate XL in (31); that is,

(32) XL = R⁻¹(X − X0).

Since R is an orthogonal matrix, its inverse equals its transpose; that is, T = R⁻¹ = R^T.
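Assembling Ar = T^T A T from the semiaxes and rotation angles can be sketched as follows. This is our illustration, not the authors' Matlab code, and it assumes the local matrix A takes the usual ellipsoid form diag(1/a², 1/b², 1/c²) (the matrix defined in Figure 5) and the standard elementary rotation matrices of Figure 9; the gradient formula (21) can then be checked against finite differences.

```python
import numpy as np

def rotation(alpha, beta, theta):
    """General rotation matrix R = r1(alpha) r2(beta) r3(theta), as in (30)."""
    ca, sa = np.cos(alpha), np.sin(alpha)
    cb, sb = np.cos(beta), np.sin(beta)
    ct, st = np.cos(theta), np.sin(theta)
    r1 = np.array([[1, 0, 0], [0, ca, -sa], [0, sa, ca]])  # about xL1
    r2 = np.array([[cb, 0, sb], [0, 1, 0], [-sb, 0, cb]])  # about xL2
    r3 = np.array([[ct, -st, 0], [st, ct, 0], [0, 0, 1]])  # about xL3
    return r1 @ r2 @ r3

def quadratic(a, b, c, alpha, beta, theta, X0, cst=0.0):
    """Return f, grad f, and Ar for one ellipsoid-type quadratic, per (19)-(21).
    Assumes A = diag(1/a^2, 1/b^2, 1/c^2) in local coordinates."""
    A = np.diag([1/a**2, 1/b**2, 1/c**2])
    T = rotation(alpha, beta, theta).T   # T = R^-1 = R^T, from (32)
    Ar = T.T @ A @ T                     # transformed Hessian, (19)
    X0 = np.asarray(X0, dtype=float)
    f = lambda X: 0.5 * (X - X0) @ Ar @ (X - X0) + cst      # (20)
    grad = lambda X: Ar @ (X - X0)                          # (21)
    return f, grad, Ar
```

Since Ar is a congruence transform of a diagonal matrix with positive entries, it is symmetric positive definite, as required by Proposition 3.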

With the previous definitions, consider the following unconstrained MOOP:

(33) minimize: f1(X), f2(X), f3(X),

with X ∈ R³ and f1(X), f2(X), f3(X) defined in Table 1 and illustrated in Figure 6(a).

Table 1: Coefficients for the objective function fi(X) definitions.

Function   Semiaxes (a, b, c)   Rotation (α, β, θ)   Origin (x01, x02, x03)
f1(X)      1, 2, 3              0, 0, π/6            (10, 10, 0)
f2(X)      1, 2, 3              0, 0, 0              (0, −10, 0)
f3(X)      1, 2, 3              0, 0, π/4            (−10, 10, 0)

Figure 6: Solution of the unconstrained MOOP with the quadratic functions defined in Table 1. (a) Objective functions. (b) Pareto set. (c) Pareto set, x1-x2 view. (d) Pareto front.

The Pareto set for this problem, illustrated in Figure 6(b), was obtained by applying the algorithm of Proposition 3 with np = 5000. To obtain all the points, an ordinary 2 GHz dual-processor computer with 3 GB RAM, running Matlab, spent 0.99 seconds of processing time.

As all ellipsoids were placed on the (x1, x2) plane and rotated around the x3 axis only, the Pareto set lies on the (x1, x2) plane. The bold points at the Pareto set boundary were found with the same method applied to the functions f1(X), f2(X), f3(X) taken in pairs. According to Proposition 1, in such cases the Pareto set is necessarily a curve.

The Pareto front is shown in Figure 6(d). It should be noticed that this front was obtained by means of a straightforward solution of the Pareto optimality conditions without using any iterative algorithm.

In the next example, three ellipsoids with different orientations, as defined in Table 2 and the appendix, were distributed in the (x1, x2, x3) space.

Table 2: Optimization problem with 3 objective functions.

Function   Semiaxes (a, b, c)   Rotation (α, β, θ)   Origin (x1, x2, x3)
f1(X)      1, 2, 3              0, 0, 0              (0, 0, 0)
f2(X)      1, 2, 3              0, π/4, 0            (10, 0, 0)
f3(X)      1, 2, 3              0, 0, π/6            (0, 10, 10)

The Pareto set of this optimization problem found by the proposed methodology delineates the curved surface shown in Figure 7(a). The Pareto front, in the function space, is shown in Figure 7(b).

Figure 7: Solution of the unconstrained MOOP with the quadratic functions defined in Table 2 and the appendix. (a) Pareto set. (b) Pareto front.

Adding to the unconstrained MOOP the function f4(X), defined in Table 3 and the appendix, the proposed method generated the three-dimensional Pareto set illustrated in Figure 8 in 1.17 seconds.

Table 3: Optimization problem with 4 objective functions.

Function   Semiaxes (a, b, c)   Rotation (α, β, θ)   Origin (x1, x2, x3)
f1(X)      1, 2, 3              0, 0, π/6            (0, 0, 0)
f2(X)      1, 2, 3              0, −π/30, 0          (15, 0, 0)
f3(X)      1, 2, 3              0, 0, π/6            (0, 15, 0)
f4(X)      1, 2, 3              0, 0, 0              (10, 10, 15)

Figure 8: Pareto set of the unconstrained MOOP with the quadratic functions defined in Table 3 and the appendix.

Figure 9: (a) Rotation α around the xL1 axis. (b) Rotation β around the xL2 axis. (c) Rotation θ around the xL3 axis.

In these problems all functions were defined in R³ space for convenience; nevertheless, Proposition 3 can be applied to quadratic functions defined in R^n space.

4. Conclusions

Most real problems are multiobjective, with antagonistic objective functions. Many researchers are therefore developing methods to solve multiobjective optimization problems without reducing them to single-objective ones. Up to now, evolutionary algorithms have been widespread as a general technique for finding a candidate set of optimal solutions. These algorithms provide a discrete picture of the Pareto front in the function space, without bringing much information about the decision space.

In the framework of this paper, we have proposed different methods to determine the Pareto set of unconstrained multiobjective optimization problems involving quadratic objective functions. Three different procedures were proposed: one for biobjective optimization, with functions defined in R² space, which results in an analytical solution for the Pareto set; for three or more functions also defined in R² space, a condition test able to check whether a point in the decision space is a Pareto optimum or not; and, in the third method, suitable for multiobjective optimization with functions defined in R^n space and having positive definite Hessians, a direct algorithm that finds a Pareto optimum based on an arbitrary valid weighting vector. Some illustrative examples highlight the potential of the methods.

It is apparent that the Pareto set of two distinct two-dimensional functions is a curve and, for three or more, a surface. In three-dimensional space, for two distinct three-dimensional functions, the Pareto set is a space curve; for three functions, a surface; and for four functions or more, a solid. Although the proposed methods are restricted to unconstrained optimization, the authors believe they can be extended to constrained problems and are working on that extension.

Appendix

See Figures 9(a), 9(b), and 9(c) and Tables 2 and 3.

Nomenclature

DM: Decision maker
f(X): Objective functions vector
GA: Genetic algorithm
gj(X): jth inequality constraint function
hi(X): ith equality constraint function
k: Number of objective functions
KKT: Karush-Kuhn-Tucker
l: Number of equality constraint functions
m: Number of inequality constraint functions
MOOP: Multiobjective optimization problem
NSGA II: Nondominated sorting genetic algorithm, version two
n: Dimension of the design space
R^k: Function or criterion space
R^n: Decision variables or design space
S: Feasible region in the design space
xi: ith decision variable
X: Decision variable vector
X*: Nondominated solution of a multiobjective optimization problem
Xinf, Xsup: Lower and upper bounds of the design space
ωi: Weighting factor for the ith objective function gradient in the KKT condition
ω: Vector of the ωi's
λj: Weighting factor for the jth inequality constraint gradient in the KKT condition
λ: Vector of the λj's
μi: Weighting factor for the ith equality constraint gradient in the KKT condition
μ: Vector of the μi's
∇: Gradient operator