Pareto optimality conditions and duality for vector quadratic fractional optimization problems

One of the most important optimality conditions used to solve a vector optimization problem is the first-order necessary optimality condition, which generalizes the Karush-Kuhn-Tucker condition. However, to obtain sufficient optimality conditions, it is necessary to impose additional assumptions on the objective functions and on the constraint set. The present work is concerned with the constrained vector quadratic fractional optimization problem. It shows that sufficient Pareto optimality conditions and the main duality theorems can be established without assuming generalized convexity of the objective functions, by considering assumptions on a linear combination of Hessian matrices instead. The main contribution is the development of Pareto optimality conditions based on a condition similar to a second-order sufficient condition for problems with convex constraints, without convexity assumptions on the objective functions. These conditions may be useful for determining termination criteria in the development of algorithms.


Introduction
There are many contributions, concepts, and definitions that characterize Pareto optimality conditions for solutions of a vector optimization problem (see, for instance, [9,28]). One of the most important is the first-order necessary optimality condition, which generalizes the Karush-Kuhn-Tucker (KKT) condition. However, to obtain sufficient optimality conditions, it is necessary to impose additional assumptions (such as convexity and its generalizations) on the objective functions and on the constraint set.
In this paper, we deal with a particular case of vector optimization problem (VOP) in which each objective function is a ratio of two quadratic functions. Without generalized convexity assumptions on the objective functions, but imposing some additional assumptions on a linear combination of Hessian matrices, Pareto optimality conditions are obtained and duality theorems are established. Let us consider the following vector quadratic fractional optimization problem:

(VQFP)  Minimize f(x)/g(x) = (f_1(x)/g_1(x), . . . , f_m(x)/g_m(x))^T  subject to  h_j(x) ≦ 0, j ∈ J,

where Ω ⊆ R^n is an open set and f_i, g_i, i ∈ I ≡ {1, . . . , m}, and h_j, j ∈ J ≡ {1, . . . , ℓ}, are continuously differentiable real-valued functions defined on Ω. In addition, we assume that f_i, g_i, i ∈ I, are quadratic functions and that g_i(x) > 0 for x ∈ Ω and i ∈ I. We denote by S the feasible set of elements x ∈ Ω satisfying h_j(x) ≦ 0, j ∈ J, and we say that x is a feasible point if x ∈ S. The value f_i(x)/g_i(x) is the outcome of the i-th objective function when the decision maker chooses the action x ∈ S.
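To fix ideas, the structure of (VQFP) can be illustrated numerically. The sketch below builds a hypothetical two-objective instance in R^2 (all matrices, vectors and constants are illustrative, not taken from the paper) and evaluates the vector of ratio objectives at a feasible point.

```python
import numpy as np

# Hypothetical 2-objective instance of (VQFP) in R^2, for illustration only:
# each objective is a ratio of quadratics f_i(x)/g_i(x) with g_i(x) > 0.
A = [np.array([[2.0, 0.0], [0.0, 1.0]]), np.array([[1.0, 0.0], [0.0, 3.0]])]
a = [np.array([1.0, -1.0]), np.array([0.0, 2.0])]
alpha = [1.0, 2.0]
B = [np.eye(2), 2.0 * np.eye(2)]
b = [np.zeros(2), np.zeros(2)]
beta = [1.0, 1.0]  # chosen so that g_i(x) > 0 for all x

def quad(M, v, c, x):
    """Evaluate the quadratic x^T M x + v^T x + c."""
    return float(x @ M @ x + v @ x + c)

def ratio_objectives(x):
    """Vector objective of (VQFP): (f_1/g_1, ..., f_m/g_m)."""
    return np.array([quad(A[i], a[i], alpha[i], x) / quad(B[i], b[i], beta[i], x)
                     for i in range(2)])

x = np.array([0.5, 0.5])
vals = ratio_objectives(x)
```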
Fractional optimization problems arise frequently in decision-making applications, including management science, portfolio selection, cutting stock, and game theory, and in the optimization of ratios such as performance/cost, profit/investment, or cost/time.
There are many contributions dealing with the scalar (single-objective) fractional optimization problem (FP) and the vector fractional optimization problem (VFP). In most of them, using convexity or its generalizations, optimality conditions in the KKT sense and the main duality theorems for optimal points are obtained. With a parametric approach, which transforms the original problem into a simpler associated problem, Dinkelbach [12], Jagannathan [16] and Antczak [1] established optimality conditions, presented algorithms and applied their approaches to an example of (FP) consisting of quadratic functions. Using some known generalized convexity notions, Antczak [1], Khan and Hanson [19], Reddy and Mukherjee [32], Jeyakumar [17] and Liang et al. [24] established optimality conditions and theorems relating the primal-dual pair of problem (FP). Other results for the scalar problem (FP) can be found in Craven [10] and Weir [37].
Few studies involve quadratic functions in both the numerator and the denominator of the ratio objective function; most of them mix linear and quadratic functions. The approaches closest to the scalar quadratic fractional optimization problem (QFP) were considered in [8,11,14,26,36]. On the other hand, Benson [7] considered a pure (QFP) consisting of convex functions, developed some theoretical properties and optimality conditions, and presented an algorithm together with its convergence properties.
The approaches closest to the vector case (VQFP) were considered in [2,3,4,20,21,22,23,33]. Using an iterative computational test, Beato et al. [3,4] characterized the Pareto optimal points of a problem (VQFP) consisting of linear and quadratic functions, and some theoretical results were obtained via the function linearization technique of Bector et al. [6]. Arévalo and Zapata [2], Konno and Inori [20], and Rhode and Weber [33] analyzed the portfolio selection problem. Kornbluth and Steuer [23] used an adapted simplex method for a problem (VFP) consisting of linear functions. Korhonen and Yu [21,22] proposed an iterative computational method, based on search directions and weighted sums, for solving a problem (VQFP) consisting of linear and quadratic functions.
The approach taken in this work differs from the previous ones. The main contribution is the development of Pareto optimality conditions for a particular vector optimization problem, based on a condition similar to a second-order sufficient condition for Pareto optimality for problems with convex constraints, without convexity hypotheses on the objective functions. These conditions may be useful for determining termination criteria in the development of algorithms, and extensions can be established from them to more general vector optimization problems in which algorithms based on quadratic approximations are used locally.
This paper is organized as follows. We start by defining some notations and basic properties in Section 2. In Section 3, the sufficient Pareto optimality conditions are established. In Section 4, the relationship among the associated problems is presented and duality theorems are established. Finally, comments and concluding remarks are presented in Section 5.

Preliminaries
Let R_+ denote the set of nonnegative real numbers and x^T denote the transpose of the vector x ∈ R^n. Furthermore, we adopt the following conventions for inequalities between vectors. If x = (x_1, . . . , x_m)^T ∈ R^m and y = (y_1, . . . , y_m)^T ∈ R^m, then x = y if and only if x_i = y_i for all i ∈ I; x < y if and only if x_i < y_i for all i ∈ I; x ≦ y if and only if x_i ≦ y_i for all i ∈ I; and x ≤ y if and only if x ≦ y and x ≠ y.
The equivalent conventions are adopted for the inequalities >, ≧ and ≥.
Different optimality notions for the problem (VQFP) are referred to as Pareto optimal solutions [31]; two of them are defined as follows.

Definition 1. A feasible point x* is said to be a Pareto optimal solution of (VQFP) if there does not exist another x ∈ S such that f(x)/g(x) ≤ f(x*)/g(x*).

Definition 2. A feasible point x* is said to be a weakly Pareto optimal solution of (VQFP) if there does not exist another x ∈ S such that f(x)/g(x) < f(x*)/g(x*).

Hypotheses of convexity or generalized convexity on the objective functions are avoided in this work, but we use such hypotheses on the constraint set. We recall the definition of convexity, where ∇f(x) denotes the gradient of the function f : R^n → R at the point x.
Definition 3. Let f : Ω ⊆ R^n → R be a function defined on an open convex set Ω and differentiable at x* ∈ Ω. The function f is said to be convex at x* if f(x) − f(x*) ≧ ∇f(x*)^T (x − x*) for all x ∈ Ω. When f is convex at every point of Ω, we simply say that f is convex.
Maeda [27] used the generalized Guignard constraint qualification (GGCQ) [15] to derive the following necessary Pareto optimality conditions, in the KKT sense, for the problem (VOP). Assuming differentiability of the objective and constraint functions, Maeda guarantees the existence of Lagrange multipliers, all strictly positive, associated with the objective functions.
Lemma 2.1 (Maeda [27]). Let x* be a Pareto optimal solution of (VQFP). Suppose that (GGCQ) holds at x*. Then there exist vectors τ ∈ R^m, λ ∈ R^ℓ such that

Σ_{i∈I} τ_i ∇(f_i(x*)/g_i(x*)) + Σ_{j∈J} λ_j ∇h_j(x*) = 0,  (1)
λ_j h_j(x*) = 0, j ∈ J,  (2)
τ > 0, λ ≧ 0.  (3)

For each i ∈ I and x ∈ R^n, we consider the objective functions defined as

f_i(x) = x^T A_i x + a_i^T x + a_0i,  g_i(x) = x^T B_i x + b_i^T x + b_0i,

where w_i is the solution of the system 2B_i x + b_i = 0; that is, w_i is the point at which the function x^T B_i x + b_i^T x attains its minimum, which ensures that g_i(x) > 0 for all x ∈ R^n. We do not consider the cases where 2B_i x + b_i = 0 has no solution.
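The role of w_i can be checked numerically. The sketch below uses illustrative data (not from the paper): it solves the stationarity system 2B_i x + b_i = 0 for a symmetric positive definite B_i and confirms that the solution minimizes the quadratic part of g_i, so that g_i stays positive whenever its constant term exceeds the negative of that minimum.

```python
import numpy as np

# Illustrative data: B_i symmetric positive definite, so the stationarity
# system 2 B_i x + b_i = 0 has a unique solution w_i, the unconstrained
# minimizer of x^T B_i x + b_i^T x.
B_i = np.array([[4.0, 1.0],
                [1.0, 3.0]])
b_i = np.array([-2.0, 6.0])

# w_i solves 2 B_i w_i = -b_i
w_i = np.linalg.solve(2.0 * B_i, -b_i)

def q(x):
    """The quadratic part of g_i: x^T B_i x + b_i^T x."""
    return float(x @ B_i @ x + b_i @ x)

# g_i(x) = q(x) + const stays positive whenever const > -q(w_i),
# since q attains its global minimum at w_i.
beta_min = -q(w_i)
```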

Sufficient optimality conditions
Without assumptions of generalized convexity, but imposing some additional assumptions on a linear combination of the Hessian matrices of the objective functions f_i and g_i, i ∈ I, we provide in the next theorem a sufficient condition guaranteeing that a feasible point of (VQFP) is a Pareto optimal point. Similar to a second-order sufficient condition for Pareto optimality, this condition exploits the intrinsic characteristics of the problem (VQFP).
We assume, unlike for the objective functions, that each h_j is convex. Also, given x* ∈ S, for each i ∈ I we define the scalar functions u_i : S × S → R_+ \ {0} and s_i : S × S → R.

Theorem 3.1. Let x* be a feasible point of (VQFP). Suppose that the constraint function h_j is convex for each j ∈ J and that there exist vectors τ ∈ R^m, λ ∈ R^ℓ satisfying (1), (2) and (3). If, for all x ∈ S, the inequality (4) holds, then x* is a Pareto optimal solution of (VQFP).
Proof. Given x ∈ S, we obtain for each i ∈ I the identity (5), and thus each function f_i/g_i satisfies (6). Suppose that x* is not a Pareto optimal solution of (VQFP). Then there exists another point x ∈ S such that f(x)/g(x) ≤ f(x*)/g(x*). From (6), we obtain m inequalities with at least one strict inequality. Multiplying these m inequalities by their respective τ_i > 0, i ∈ I, and summing the products, we obtain (7). Substituting (1) into (7), we get (8). Using (4) and (8), we obtain (9). On the other hand, by the convexity of h_j, j ∈ J, the gradient inequality of Definition 3 holds. However, since x is a feasible point, condition (2) and λ_j ≧ 0, j ∈ J, yield an inequality that contradicts (9). Therefore x* is a Pareto optimal solution of (VQFP).

Similar results were obtained in [18,19,24,25,32]; however, some generalized convexity of the functions f_i and g_i is imposed there. In most of them, for each i ∈ I and x ∈ S, it is assumed that f_i(x) ≧ 0, g_i(x) > 0 and that f_i, −g_i satisfy some generalized convexity. That is not the purpose of this work; note, though, that the constraint functions can be taken in a class more general than that of convex functions: for example, the generalized convexity of Liang et al. [24] can be used.
In the following, the set of Pareto optimal solutions is denoted by Eff(VQFP).
Corollary 3.2. Let x* be a feasible point of (VQFP). Suppose that the constraint function h_j is convex for each j ∈ J and that there exist vectors τ ∈ R^m, λ ∈ R^ℓ such that (1), (2) and (3) are valid. If the corresponding matrices are positive semidefinite for each i ∈ I, then x* ∈ Eff(VQFP).
Proof. By hypothesis, given x ∈ S and i ∈ I, we obtain the required estimate. Therefore, inequality (4) is valid and the result follows from Theorem 3.1.
To ensure that inequality (4) is valid, we start by exploring the features of the Hessian matrices of the objective functions of (VQFP).
Negative values can occur in each term τ_i of the sum. Let us seek new conditions under which (4) is satisfied; that is, we want to ensure the result of Theorem 3.1 by analysing the function Z(·, x*). Note that Z(·, x*) is a quadratic function without a linear part.

Theorem 3.3. Let x* be a feasible point of (VQFP). Suppose that the constraint function h_j is convex for each j ∈ J and that there exist vectors τ ∈ R^m, λ ∈ R^ℓ such that (1), (2) and (3) are valid. If Z(x, x*) ≧ 0 for all x ∈ S, then x* ∈ Eff(VQFP).

Using the previous results to check whether a feasible point is a Pareto optimal solution of (VQFP), we propose the following computational test method.
The Pareto optimality test starts with a feasible point x*. In Step 1, it seeks to solve a system in m + ℓ unknowns, τ and λ, comprising the inequalities τ > 0, λ ≧ 0 and the two equalities (1) and (2). If this system has no solution, then x* does not satisfy the first-order necessary condition for Pareto optimality, and the method terminates with the conclusion that x* ∉ Eff(VQFP). Otherwise, in Step 2, a quadratic optimization problem over S is solved. If the minimum of this quadratic problem is nonnegative, the procedure ends with the conclusion that x* ∈ Eff(VQFP); otherwise, we say that x* has not passed the Pareto optimality test. The complexity of the test lies in solving a system of linear equations and inequalities plus a quadratic optimization problem.
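The two steps above can be sketched in code. All data below (multiplier system, feasible set, combined Hessian) is purely illustrative: Step 1 is in general a linear feasibility problem, but the toy system here happens to be square, and in Step 2 the feasible set is taken as the unit ball with the quadratic minimized by projected gradient, standing in for a generic quadratic programming solver.

```python
import numpy as np

# Step 1: solve the linear KKT-type system (1)-(2) for (tau_1, tau_2, lambda_1)
# and check the sign conditions tau > 0, lambda >= 0 (toy 3x3 system).
G = np.array([[1.0, 1.0, 1.0],
              [2.0, -1.0, 0.0],
              [0.0, 1.0, -1.0]])
c = np.array([3.0, 1.0, 0.0])
z = np.linalg.solve(G, c)
tau, lam = z[:2], z[2:]

if not (np.all(tau > 0) and np.all(lam >= 0)):
    verdict = "not Pareto optimal (first-order condition fails)"
else:
    # Step 2: minimize Z(x, x*) = (x - xs)^T F (x - xs) over the feasible
    # set, here the unit ball S = {x : ||x|| <= 1}, by projected gradient.
    xs = np.array([0.0, 0.0])
    F = np.array([[1.0, 0.5],
                  [0.5, 2.0]])          # illustrative combined Hessian
    x = np.array([1.0, 1.0]) / np.sqrt(2.0)
    for _ in range(500):
        x = x - 0.2 * 2.0 * F @ (x - xs)     # gradient step on Z
        if np.linalg.norm(x) > 1.0:          # project back onto S
            x = x / np.linalg.norm(x)
    z_min = float((x - xs) @ F @ (x - xs))
    verdict = ("Pareto optimal" if z_min >= -1e-8
               else "test failed (inconclusive)")
```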
The next results, which address a linear combination of the Hessian matrices, can be used to develop a computational search method.
Looking at the previous Pareto optimality test, if the fixed point x* is treated as a variable y, then the linear system in Step 1 becomes a nonlinear system in the variables τ > 0, λ ≧ 0, y ∈ S, and the quadratic optimization problem in Step 2 becomes a problem of the type min_{x,y∈S} Z(x, y). This raises considerable difficulties. In order to reduce them, we further explore the characteristics of the matrix F̃(y). One possibility is to search for points y* such that F̃(y*) becomes positive semidefinite; in this case Z(x, y*) = (x − y*)^T F̃(y*)(x − y*) ≧ 0 depends only on y* ∈ S. Considering a fixed point x*, the next theorem takes advantage of the symmetry and diagonalization of the matrices A_i and B_i, i ∈ I, to give sufficient Pareto optimality conditions for a feasible point of (VQFP). We consider the usual inner product ⟨·, ·⟩ in R^n.
Theorem 3.4. Let x* be a feasible point of (VQFP). Suppose that the constraint function h_j is convex for each j ∈ J and that there exist vectors τ ∈ R^m, λ ∈ R^ℓ such that (1), (2) and (3) are valid. Consider also, for each i ∈ I and k ∈ K ≡ {1, . . . , n}, the functions γ_i^k(x, x*, τ) and η_i^k(x, x*, τ), where p_i^k and q_i^k are the columns of the orthogonal matrices P_i and Q_i constructed from the normalized eigenvectors of the matrices A_i and B_i, respectively. If for all x ∈ S the inequalities (11) are valid, where μ_k^{A_i} and μ_k^{B_i} are the eigenvalues of A_i and B_i associated with the eigenvectors p_i^k and q_i^k, respectively, then x* ∈ Eff(VQFP).
Proof. The matrices A_i and B_i, i ∈ I, are diagonalizable and can be rewritten as A_i = P_i D_{A_i} P_i^T and B_i = Q_i D_{B_i} Q_i^T, where D_{A_i} and D_{B_i} are diagonal matrices whose diagonals are formed by the eigenvalues μ_k^{A_i} and μ_k^{B_i}, k ∈ K, of A_i and B_i, respectively. Thus we can expand Z(x, x*) accordingly. Since μ_k^{A_i} γ_i^k(x, x*, τ) ≧ μ_k^{B_i} η_i^k(x, x*, τ) for all x ∈ S, i ∈ I and k ∈ K, we conclude that Z(x, x*) ≧ 0. Therefore inequality (4) is valid and the result follows from Theorem 3.1.
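The spectral decomposition used in the proof is easy to verify numerically. The sketch below, with an arbitrary symmetric matrix standing in for A_i, checks that A_i = P_i D_{A_i} P_i^T with P_i orthogonal, and that the quadratic form expands into the eigenvalue-weighted sum of squared inner products exploited in Theorem 3.4.

```python
import numpy as np

# Sample symmetric matrix standing in for a Hessian A_i (illustrative data).
A_i = np.array([[2.0, 1.0, 0.0],
                [1.0, 3.0, 1.0],
                [0.0, 1.0, 2.0]])

mu, P = np.linalg.eigh(A_i)      # eigenvalues mu_k, orthonormal eigenvectors
D = np.diag(mu)

# P is orthogonal and A_i = P D P^T, so the quadratic form expands as
# (x - xs)^T A_i (x - xs) = sum_k mu_k <p_k, x - xs>^2.
x = np.array([1.0, -1.0, 2.0])
xs = np.array([0.5, 0.0, 0.0])
lhs = float((x - xs) @ A_i @ (x - xs))
rhs = sum(mu[k] * float(P[:, k] @ (x - xs)) ** 2 for k in range(3))
```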
Theorem 3.4 is not simple to use, since (11) depends on all points of the feasible set; that is, it depends on the functions γ_i^k(x, x*, τ), η_i^k(x, x*, τ) for all i ∈ I, k ∈ K and x ∈ S. Moreover, even if μ_k^{A_i} γ_i^k(x, x*, τ) < μ_k^{B_i} η_i^k(x, x*, τ) occurs for some i ∈ I and k ∈ K, the inequality (4) can still be satisfied. In order to obtain (11), we present the next corollary, which follows immediately from the previous theorem.
Corollary 3.5. Let x* be a feasible point of (VQFP). Suppose that the constraint function h_j is convex for each j ∈ J and that there exist vectors τ ∈ R^m, λ ∈ R^ℓ such that (1), (2) and (3) are valid. Consider also, for each i ∈ I and k ∈ K, the quadratic functions H_{i,k}, where p_i^k and q_i^k are the columns of the orthogonal matrices P_i and Q_i constructed from the normalized eigenvectors of the matrices A_i and B_i, and μ_k^{A_i}, μ_k^{B_i} are the eigenvalues of A_i and B_i associated with the eigenvectors p_i^k and q_i^k, respectively. If H_{i,k}(x) ≧ 0 for all x ∈ S, each i ∈ I and each k ∈ K, then x* ∈ Eff(VQFP).
Proof. According to Theorem 3.4, it is enough to show that μ_k^{A_i} γ_i^k(x, x*, τ) ≧ μ_k^{B_i} η_i^k(x, x*, τ) for every feasible point x and all i ∈ I and k ∈ K. Given x ∈ S and a pair {i, k} ∈ I × K, this follows from H_{i,k}(x) ≧ 0. Therefore, the result follows from Theorem 3.4.
From Corollary 3.5, if each quadratic function H_{i,k}(x), {i, k} ∈ I × K, is nonnegative on the feasible set, then a feasible point satisfying (1), (2) and (3) is a Pareto optimal solution of (VQFP). Writing each H_{i,k} in terms of a matrix H̃_{i,k}, a vector α_{i,k} and a scalar term (see (12), (13) and (14)), the non-negativity of the quadratic H_{i,k} can be examined for each {i, k} ∈ I × K and x* ∈ S. For example, the unconstrained (VQFP) requires that each matrix H̃_{i,k} be positive semidefinite and that α_{i,k} = 0.

Corollary 3.6. Let x* be a feasible point of (VQFP). Suppose that the constraint function h_j is convex for each j ∈ J and that there exist vectors τ ∈ R^m, λ ∈ R^ℓ such that (1), (2) and (3) are valid. If, for each pair {i, k} ∈ I × K, the matrix H̃_{i,k} is positive semidefinite and α_{i,k} = 0 (see (12), (13) and (14)), then x* ∈ Eff(VQFP).
Proof. By hypothesis, for all x ∈ S we have H_{i,k}(x) ≧ 0 for each pair {i, k} ∈ I × K. Therefore, the result follows from Corollary 3.5.
Given a pair {i, k} ∈ I × K, writing each entry of the matrix H̃_{i,k} as in (15), where r, s ∈ K, we obtain for each pair {r, s} ∈ K × K the expressions (16) and (17). We can draw some conclusions from (15), (16) and (17). For example, for a fixed pair {i, k} ∈ I × K, the vector α_{i,k} is a linear combination of the eigenvectors p_i^k and q_i^k. If μ_k^{A_i} μ_k^{B_i} f_i(x*) ≧ 0, then H̃_{i,k} is a symmetric matrix. Moreover, if μ_k^{A_i} μ_k^{B_i} f_i(x*) < 0 and there exists a pair {r, s} ∈ K × K such that p_i^k(s) q_i^k(r) ≠ p_i^k(r) q_i^k(s), then the matrix H̃_{i,k} is not symmetric. In this case, if there exists x ∈ S such that H_{i,k}(x) ∈ C \ R, the inequality H_{i,k}(x) ≧ 0 does not make sense. However, when (11) is required, it is possible to show that H_{i,k}(x) ∈ C \ R cannot occur.
The results of Theorems 3.1 and 3.4 and their corollaries can be used to develop a method of searching for Pareto optimal solutions of (VQFP), and they may be useful for determining termination criteria in the development of algorithms.

Duality
The matrix (10) defines a specific function, and by adding some assumptions on it we obtain new results, such as a relationship between the problem (VQFP) and a scalar problem associated with it, as well as the main duality theorems.
In the scalar case, Dinkelbach [12] and Jagannathan [16] used a parametric approach that transforms the fractional optimization problem into a new scalar optimization problem. Similarly, we consider the following problem associated with (VQFP):
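Dinkelbach's parametric idea is easy to illustrate in the scalar case. The sketch below uses toy quadratic data (not from the paper): it iterates on the parametric subproblem F(α) = min_x [f(x) − α g(x)], which for these quadratics is solvable in closed form, driving F(α) to zero at the optimal ratio α*.

```python
# Hedged illustration of the parametric (Dinkelbach-type) idea for a scalar
# fractional program with toy quadratic data:
#   minimize f(x)/g(x),  f(x) = x^2 + 1,  g(x) = x^2 - 2x + 2 > 0.
f = lambda x: x**2 + 1.0
g = lambda x: x**2 - 2.0 * x + 2.0

def dinkelbach(x0=0.0, tol=1e-12, max_iter=100):
    alpha = f(x0) / g(x0)
    x = x0
    for _ in range(max_iter):
        # Subproblem min_x f(x) - alpha*g(x) = (1-alpha)x^2 + 2*alpha*x + (1-2*alpha):
        # strictly convex while alpha < 1, minimized in closed form.
        x = -alpha / (1.0 - alpha)
        if abs(f(x) - alpha * g(x)) < tol:   # F(alpha) ~ 0: alpha is optimal
            return x, alpha
        alpha = f(x) / g(x)                  # update the ratio parameter
    return x, alpha

x_star, ratio = dinkelbach()
```

For this instance the minimal ratio is (3 − √5)/2, attained at x = (1 − √5)/2, and the iteration reaches it in a handful of steps.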
(VQFP)_{x*}  Minimize (f_1(x) − (f_1(x*)/g_1(x*)) g_1(x), . . . , f_m(x) − (f_m(x*)/g_m(x*)) g_m(x))^T  subject to  h_j(x) ≦ 0, j ∈ J,

where Ω ⊆ R^n, f_i, g_i, i ∈ I, and h_j, j ∈ J, are as defined in (VQFP), and x* ∈ S.
Using assumptions of generalized convexity, Osuna-Gómez et al. [30] presented the problem (VFP)_{x*} and obtained necessary and sufficient conditions for weak Pareto optimality together with the main duality theorems. The results presented in [12,16,30] considered each objective function in the form f_i(x) − α_i g_i(x), i ∈ I, and studied the properties of the parameter α_i ∈ R. Following the ideas presented by Osuna-Gómez et al. [30], we obtain new results by considering directly α_i = f_i(x*)/g_i(x*). However, by imposing hypotheses on the linear combination of the matrices A_i − (f_i(x*)/g_i(x*)) B_i, i ∈ I, x* ∈ S, we consider Pareto optimal solutions rather than weakly Pareto optimal solutions.
To characterize the solutions of problems (VOP), Geoffrion [13] used the solutions of associated scalar problems. Similarly, we consider the following weighted scalar problem associated with the problem (VQFP)_{x*}:
(VQFP)^w_{x*}  Minimize Σ_{i∈I} w_i [f_i(x) − (f_i(x*)/g_i(x*)) g_i(x)]  subject to  h_j(x) ≦ 0, j ∈ J,

where Ω ⊆ R^n, f_i, g_i, i ∈ I, and h_j, j ∈ J, are as defined in (VQFP), x* ∈ S and w = (w_1, . . . , w_m)^T ∈ R^m, w > 0.

The relationship between the associated problems
The next theorem and its proof are similar to Lemma 1.1 of [30], now considering Pareto optimal solutions (not necessarily weak).
In Section 3 we defined the matrix F̃(x*), where x* ∈ S and τ ∈ R^m, τ > 0. Let us now define the set W = {w ∈ R^m | w > 0}, the function F : W × S → R^{n×n} given by F(w, x) = Σ_{i∈I} w_i (A_i − (f_i(x)/g_i(x)) B_i), and, for each i ∈ I, the functions F_i : S → R^{n×n} given by F_i(x) = A_i − (f_i(x)/g_i(x)) B_i. With these, we can establish some relations among the associated problems (VQFP), (VQFP)_{x*} and (VQFP)^w_{x*}.
Theorem 4.2. If x* is an optimal solution of the weighted scalar problem (VQFP)^w_{x*}, then x* ∈ Eff(VQFP).
Proof. Suppose that x* ∉ Eff(VQFP). Then there exists another point x ∈ S such that f(x)/g(x) ≤ f(x*)/g(x*). Multiplying the corresponding inequalities by the weights w_i > 0 and summing, we contradict the minimality of x* in (VQFP)^w_{x*}.
Lemma 4.2. Let x* ∈ S. If there exists w ∈ W such that the matrix F(w, x*) is positive semidefinite, then the objective function of (VQFP)^w_{x*} is convex.
Proof. Given x_1, x_2 ∈ S, the corresponding identity holds for each i ∈ I. Hence, for each objective function of (VQFP)_{x*}, the convexity gap is controlled by the quadratic form of F_i(x*). If there exists w ∈ W such that the matrix F(w, x*) is positive semidefinite, then the weighted sum of these quadratic forms is nonnegative. Therefore, the objective function of (VQFP)^w_{x*} is convex.
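The mechanism of Lemma 4.2 can be illustrated numerically. In the sketch below the matrices standing in for F_i(x*) and the weights are illustrative; positive semidefiniteness of the weighted matrix is checked through its spectrum, and a midpoint-convexity check of the induced quadratic form follows.

```python
import numpy as np

# Illustrative weights and matrices standing in for F_i(x*).
w = np.array([0.4, 0.6])
F_i = [np.array([[2.0, 0.0], [0.0, 0.5]]),
       np.array([[1.0, 0.3], [0.3, 1.0]])]
F_w = sum(wi * Fi for wi, Fi in zip(w, F_i))   # weighted matrix F(w, x*)

# Semidefiniteness via the spectrum of the symmetric matrix.
is_psd = bool(np.all(np.linalg.eigvalsh(F_w) >= -1e-12))

# A PSD matrix induces a convex quadratic form q(x) = x^T F_w x;
# check midpoint convexity on two sample points.
q = lambda x: float(x @ F_w @ x)
x1, x2 = np.array([1.0, -2.0]), np.array([0.5, 3.0])
mid_ok = q(0.5 * (x1 + x2)) <= 0.5 * q(x1) + 0.5 * q(x2) + 1e-12
```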
Note that the hypothesis of semidefiniteness on the matrix F(w, x*), or on the matrices F_i(x*), i ∈ I, x* ∈ S, is pointwise. In the next example, however, we describe a situation in which, for all x ∈ S and i ∈ I, we have y^T F_i(x) y ≧ 0 for all y ∈ S, and hence y^T F(w, x) y ≧ 0 for all y ∈ S.
Theorem 4.3. Let x* ∈ Eff(VQFP)_{x*}. Suppose that the constraint qualification (GGCQ) is satisfied at x* and that the constraint function h_j is convex for each j ∈ J. Then there exists w ∈ W such that, if the matrix F(w, x*) is positive semidefinite, x* is an optimal solution of the weighted scalar problem (VQFP)^w_{x*}.
Proof. If x* ∈ Eff(VQFP)_{x*} and (GGCQ) is satisfied, then by Lemma 4.1 there exist w > 0 and λ ≧ 0 such that the corresponding first-order conditions hold at x*. Therefore x* is a critical point of the weighted scalar problem (VQFP)^w_{x*}, and since F(w, x*) is positive semidefinite, by Lemma 4.2 the objective function of (VQFP)^w_{x*} is convex. Since each constraint function h_j, j ∈ J, is convex, it follows that x* is an optimal solution of (VQFP)^w_{x*}.

Duality theorems
For a given mathematical optimization problem there are many types of duality; two well-known duals are the Wolfe dual [38] and the Mond-Weir dual [29]. In this work we consider the primal problem (VQFP) and discuss the Mond-Weir dual problem, using the associated problem (VQFP)_{x*} to generate the constraint set of the dual. Let us consider the following vector quadratic fractional dual optimization problem (VQFD).
(VQFD)  Maximize f(u)/g(u) = (f_1(u)/g_1(u), . . . , f_m(u)/g_m(u))^T,

where f_i and g_i, i ∈ I, are the same quadratic functions defined in (VQFP); we denote the feasible set of (VQFD) by Y.
Theorem 4.4 (Weak duality). Let x ∈ S and (u, τ, λ) ∈ Y. If F(τ, u) is positive semidefinite and the constraint function h_j is convex for each j ∈ J, then f(x)/g(x) ≰ f(u)/g(u).

Conclusions
The main contribution of this work is the development of Pareto optimality conditions for a particular vector optimization problem in which each objective function is a ratio of two quadratic functions, with convexity assumed only on the constraint set. We took advantage of the diagonalization of the Hessian matrices. We have shown the relationship between this particular problem and two problems associated with it, and we used assumptions on a linear combination of Hessian matrices to prove the main duality theorems. For this particular problem, the results presented here may be useful for determining termination criteria in the development of algorithms, and extensions can be established to more general vector optimization problems in which algorithms based on quadratic approximations are used locally. In future work we plan to develop algorithms using the concepts presented here.