An Effective Generalization of the Direct Support Method in Quadratic Convex Programming

The main objective of our paper is to solve a problem encountered in an industrial firm. It concerns the design of a weekly production plan with the aim of optimizing the quantities to be launched. Indeed, one of the problems raised in that company could be modeled as a linear multiobjective program whose decision variables are of two kinds: the first ones are upper and lower bounded, and the second ones are nonnegative. During the resolution of the multiobjective case, we were faced with the necessity of developing an effective method to solve the mono-objective case without any increase in the size of the linear program, since the industrial case to be solved is already very large. We therefore propose an extension of the direct support method, presented in this paper. Its particularity is that it avoids any preliminary transformation of the decision variables: it handles the bounds as they are initially formulated. The method is effective, simple to use, and speeds up the resolution process.


Introduction
Many algorithms have been developed to solve convex quadratic programming problems. The most traditional is the quadratic simplex method of Wolfe [9], which is a slightly modified simplex algorithm. Various quadratic programming algorithms differ only in the manner in which they solve the linear equations expressing the Kuhn-Tucker system of the associated equality-constrained subproblems.
In this paper, we propose to solve a generalized convex quadratic program by an adapted direct support method. Our approach is based on the principle of the methods developed by R. Gabasov and F.M. Kirillova [4,5], which solve a single-objective convex quadratic program with nonnegative decision variables or with bounded decision variables. Our work proposes a generalization to the single-objective convex quadratic program with the two types of decision variables: upper and lower bounded variables and nonnegative variables.
This paper is devoted to presenting this method. It is intermediate between active-set methods and interior-point methods. Its particularity is that it avoids any preliminary transformation of the decision variables: it handles the constraints of the problem as they are initially formulated.
The method is effective, simple to use, and direct. It treats problems in a natural way, speeds up the resolution process, and yields a significant gain in memory space and CPU time. Furthermore, the method integrates a suboptimality criterion which permits stopping the algorithm at any desired accuracy, which can be useful in practical applications.
The principle of the method is simple: starting from an initial feasible solution and an initial support, each iteration consists in finding a descent direction and a step along this direction that improves the value of the objective function; the support is then updated.
Gabasov et al. have presented, in several works, comparative studies between the adaptive method and other methods for solving convex quadratic programs. The experimental results obtained on several different series of problems have shown the effectiveness of the adaptive method.

Statement of the Problem and Definitions
In this paper, we consider the convex quadratic problem in the following form:
$$F(x, y) = \frac{1}{2}\, z^T D z + c^T z \;\longrightarrow\; \min, \qquad (1)$$
$$A x + H y = b, \qquad (2)$$
$$d^- \le x \le d^+, \qquad (3)$$
$$y \ge 0, \qquad (4)$$
where $z^T = (x^T, y^T)$, $x \in \mathbb{R}^{n_x}$ is the vector of bounded variables, $y \in \mathbb{R}^{n_y}$ is the vector of nonnegative variables, $D$ is a symmetric positive semi-definite $(n_x + n_y) \times (n_x + n_y)$-matrix, $c \in \mathbb{R}^{n_x + n_y}$, $A$ is an $m \times n_x$-matrix, $H$ is an $m \times n_y$-matrix, $b \in \mathbb{R}^m$, and $d^-, d^+ \in \mathbb{R}^{n_x}$. We note by $A_H$ the $m \times (n_x + n_y)$-matrix $(A|H)$. Let the vectors and the matrices be partitioned in the following way: $J_x = J_x^B \cup J_x^N$, $J_y = J_y^B \cup J_y^N$, $J_B = J_x^B \cup J_y^B$ with $|J_B| = m$, $J_N = J_x^N \cup J_y^N$, $x_B = x(J_x^B)$, $x_N = x(J_x^N)$, $y_B = y(J_y^B)$, $y_N = y(J_y^N)$, $A_B = A_H(I, J_B)$, $A_N = A_H(I, J_N)$, where $I = \{1, \dots, m\}$.
• A feasible solution $(x^0, y^0)$ is said to be optimal if $F(x^0, y^0) = \min F(x, y)$, where the minimum is taken among all the feasible solutions $(x, y)$ of the problem (1)-(4).
• A feasible solution $(x_\varepsilon, y_\varepsilon)$ is called suboptimal or $\varepsilon$-optimal if $F(x_\varepsilon, y_\varepsilon) - F(x^0, y^0) \le \varepsilon$, where $(x^0, y^0)$ is an optimal solution of the problem (1)-(4) and $\varepsilon$ is a nonnegative number, fixed in advance.
• The set $(J_x^B, J_y^B)$, with $J_B = J_x^B \cup J_y^B$ and $|J_B| = m$, is called a support of the constraints (2)-(4) if $\det A_H(I, J_B) \ne 0$.
• A pair $\{(x, y), (J_x^B, J_y^B)\}$, formed by a feasible solution $(x, y)$ and a support $(J_x^B, J_y^B)$, is called a support feasible solution.
• The support feasible solution is said to be nondegenerate if $d_j^- < x_j < d_j^+$, $j \in J_x^B$, and $y_j > 0$, $j \in J_y^B$.
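To fix ideas, the following is a minimal numerical sketch of the data of problem (1)-(4) and of a support feasibility check, assuming the quadratic objective $F(x, y) = \frac{1}{2} z^T D z + c^T z$ written above; the data values and the helper names (`F`, `is_support_feasible`) are purely illustrative and not part of the paper.

```python
import numpy as np

# Illustrative data for problem (1)-(4): two bounded variables x,
# two nonnegative variables y, one equality constraint Ax + Hy = b.
A = np.array([[1.0, 2.0]])          # m x n_x
H = np.array([[1.0, -1.0]])         # m x n_y
b = np.array([4.0])
d_minus = np.array([0.0, 0.0])      # lower bounds on x
d_plus  = np.array([3.0, 3.0])      # upper bounds on x
D = np.diag([2.0, 1.0, 1.0, 0.5])   # symmetric positive semi-definite
c = np.array([-1.0, 0.0, 1.0, -2.0])

def F(x, y):
    """Objective (1): F(x, y) = 1/2 z^T D z + c^T z with z = (x, y)."""
    z = np.concatenate([x, y])
    return 0.5 * z @ D @ z + c @ z

def is_support_feasible(x, y, Jx_B, Jy_B, tol=1e-10):
    """Check that (x, y) is feasible for (2)-(4) and that the columns of
    (A|H) indexed by J_B = Jx_B U Jy_B form a nonsingular m x m matrix."""
    feasible = (np.allclose(A @ x + H @ y, b, atol=tol)
                and np.all(x >= d_minus - tol) and np.all(x <= d_plus + tol)
                and np.all(y >= -tol))
    A_H = np.hstack([A, H])
    J_B = list(Jx_B) + [A.shape[1] + j for j in Jy_B]
    support_ok = (len(J_B) == A.shape[0]
                  and abs(np.linalg.det(A_H[:, J_B])) > tol)
    return feasible and support_ok

x0, y0 = np.array([2.0, 1.0]), np.array([0.0, 0.0])
print(F(x0, y0), is_support_feasible(x0, y0, Jx_B=[0], Jy_B=[]))
```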

Increment Formula of the Objective Function
Let $\{(x, y), (J_x^B, J_y^B)\}$ be a support feasible solution for the problem (1)-(4) and let us consider any other feasible solution $(\bar{x}, \bar{y}) = (x + \Delta x, y + \Delta y)$. We define $z^T = (x^T, y^T)$ and $\Delta z^T = (\Delta x^T, \Delta y^T)$. Then, we can write
$$F(\bar{x}, \bar{y}) - F(x, y) = g^T(z)\, \Delta z + \frac{1}{2}\, \Delta z^T D\, \Delta z,$$
where $g(z) = Dz + c$. Since $A_H \Delta z = 0$, we have $\Delta z_B = -A_B^{-1} A_N \Delta z_N$, so that $\Delta z = Z \Delta z_N$ with
$$Z = \begin{pmatrix} -A_B^{-1} A_N \\ I_N \end{pmatrix},$$
where $I_N = I(J_N, J_N)$ is an $(n_x + n_y - m)$-identity matrix. We set $M = Z^T D Z$ and define the potential vector $u$ and the estimations vector $E$ by:
$$u^T = g_B^T(z)\, A_B^{-1}, \qquad E^T = g^T(z) - u^T A_H,$$
so that $E(J_B) = 0$. Then, the increment formula has the following form:
$$F(\bar{x}, \bar{y}) - F(x, y) = E_N^T \Delta z_N + \frac{1}{2}\, \Delta z_N^T M\, \Delta z_N .$$
As we can express the potential vector $u$ and the estimations vector $E$ by the formulas:
$$u^T = \big(g_x^T(J_x^B),\, g_y^T(J_y^B)\big) A_B^{-1}, \qquad E_x^T = g_x^T - u^T A, \qquad E_y^T = g_y^T - u^T H, \qquad (10)$$
the increment formula has the following final form:
$$F(\bar{x}, \bar{y}) - F(x, y) = E_{x_N}^T \Delta x_N + E_{y_N}^T \Delta y_N + \frac{1}{2}\, \Delta z_N^T M\, \Delta z_N . \qquad (11)$$
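The final increment formula can be checked numerically. The sketch below assumes the sign convention $E^T = g^T(z) - u^T A_H$ used in the reconstruction above and builds a random instance; all names and data are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
m, n = 3, 7                          # n = n_x + n_y
A_H = rng.normal(size=(m, n))        # the matrix (A|H), assumed of full rank m
Q = rng.normal(size=(n, n))
D = Q.T @ Q                          # symmetric positive semi-definite
c = rng.normal(size=n)
z = rng.normal(size=n)

F = lambda v: 0.5 * v @ D @ v + c @ v
g = D @ z + c                        # gradient g(z) = Dz + c

J_B = [0, 1, 2]                      # constraints support (det A_B != 0)
J_N = [3, 4, 5, 6]
A_B, A_N = A_H[:, J_B], A_H[:, J_N]

u = np.linalg.solve(A_B.T, g[J_B])   # potentials: u^T = g_B^T A_B^{-1}
E = g - A_H.T @ u                    # estimations: E^T = g^T - u^T A_H  (E_B = 0)

Z = np.vstack([-np.linalg.solve(A_B, A_N), np.eye(len(J_N))])
order = J_B + J_N                    # (basic, nonbasic) ordering of the variables
M = Z.T @ D[np.ix_(order, order)] @ Z    # reduced Hessian M = Z^T D Z

# Any displacement dz with dz_B = -A_B^{-1} A_N dz_N satisfies A_H dz = 0.
dz_N = rng.normal(size=len(J_N))
dz = np.zeros(n)
dz[J_B] = -np.linalg.solve(A_B, A_N @ dz_N)
dz[J_N] = dz_N

lhs = F(z + dz) - F(z)                               # true increment
rhs = E[J_N] @ dz_N + 0.5 * dz_N @ M @ dz_N          # increment formula (11)
print(np.isclose(lhs, rhs), np.max(np.abs(E[J_B])))  # True, E_B ~ 0
```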

Optimality Criterion
Theorem 4.1 Let $\{(x, y), (J_x^B, J_y^B)\}$ be a support feasible solution for the constraints (2)-(4). Then the following relations:
$$E_j^x \ge 0 \ \text{if } x_j = d_j^-, \qquad E_j^x \le 0 \ \text{if } x_j = d_j^+, \qquad E_j^x = 0 \ \text{if } d_j^- < x_j < d_j^+, \quad j \in J_x^N,$$
$$E_j^y \ge 0, \qquad E_j^y\, y_j = 0, \quad j \in J_y^N, \qquad (12)$$
are sufficient for the optimality of the feasible solution $(x, y)$. They are also necessary if the support feasible solution is nondegenerate.
Proof. Sufficiency. Let $\{(x, y), (J_x^B, J_y^B)\}$ be a support feasible solution of the problem (1)-(4) satisfying the relations (12). For any feasible solution $(\bar{x}, \bar{y})$ of the problem (1)-(4), the increment formula (11) gives:
$$F(\bar{x}, \bar{y}) - F(x, y) \ge E_{x_N}^T \Delta x_N + E_{y_N}^T \Delta y_N = \sum_{j \in J_x^N} E_j^x \Delta x_j + \sum_{j \in J_y^N} E_j^y \Delta y_j,$$
because the matrix $M$ is positive semi-definite. So, from the relations (12), we have $F(\bar{x}, \bar{y}) - F(x, y) \ge 0$, where $(\bar{x}, \bar{y})$ is an arbitrary feasible solution of the problem (1)-(4). Consequently, the vector $(x, y)$ is an optimal solution of the problem (1)-(4).
Necessity. Let $\{(x, y), (J_x^B, J_y^B)\}$ be a nondegenerate optimal support feasible solution of the problem (1)-(4) and assume that the relations (12) are not satisfied, that is, there exists at least one index $j_0 \in J_N = J_x^N \cup J_y^N$ for which (12) fails. We construct another feasible solution $(\bar{x}, \bar{y}) = (x + \theta l_x, y + \theta l_y)$, where $\theta$ is a positive real number and $l = (l_x, l_y)$ is a direction vector, constructed as follows.
For this, two cases can arise:
(i) If $j_0 \in J_x^N$, we set $l_{j_0}^x = -\operatorname{sign} E_{j_0}^x$, $l_j^x = 0$ for $j \in J_x^N \setminus \{j_0\}$, $l_j^y = 0$ for $j \in J_y^N$, and the components of $l$ on the support are given by $l_B = -\,l_{j_0}^x\, A_B^{-1} a_{j_0}$, where $a_{j_0}$ is the $j_0$-th column of the matrix $A$.
(ii) If $j_0 \in J_y^N$, we set $l_{j_0}^y = -\operatorname{sign} E_{j_0}^y$, $l_j^y = 0$ for $j \in J_y^N \setminus \{j_0\}$, $l_j^x = 0$ for $j \in J_x^N$, and $l_B = -\,l_{j_0}^y\, A_B^{-1} h_{j_0}$, where $h_{j_0}$ is the $j_0$-th column of the matrix $H$.
From the construction of the direction $l$, the vector $(\bar{x}, \bar{y})$ satisfies the principal constraint $A\bar{x} + H\bar{y} = b$. In order to be a feasible solution of the problem (1)-(4), the vector $(\bar{x}, \bar{y})$ must in addition satisfy the inequalities $d^- \le \bar{x} \le d^+$ and $\bar{y} \ge 0$, or, in developed form,
$$d_j^- \le x_j + \theta l_j^x \le d_j^+, \quad j \in J_x, \qquad (14)$$
$$y_j + \theta l_j^y \ge 0, \quad j \in J_y. \qquad (15)$$
Two cases can arise.
(i) If $j_0 \in J_x^N$, the relations (14) are equivalent to $d_j^- - x_j \le \theta l_j^x \le d_j^+ - x_j$, $j \in J_x^B \cup \{j_0\}$. So, for $j \in J_x^B$, we find
$$\theta_j^x = \begin{cases} \dfrac{d_j^+ - x_j}{l_j^x}, & \text{if } l_j^x > 0,\\[2mm] \dfrac{d_j^- - x_j}{l_j^x}, & \text{if } l_j^x < 0,\\[2mm] \infty, & \text{if } l_j^x = 0. \end{cases}$$
We set $\theta_{j_1}^x = \min\big(\theta_j^x,\ j \in J_x^B\big)$ and $\theta^x = \min\big(\theta_{j_1}^x, \theta_{j_0}^x\big)$, where $\theta_{j_0}^x$ is obtained by the same formula applied to $j_0$. On the other hand, for $j \in J_y^B$, we find
$$\theta_j^y = \begin{cases} -\dfrac{y_j}{l_j^y}, & \text{if } l_j^y < 0,\\[2mm] \infty, & \text{if } l_j^y \ge 0. \end{cases}$$
Then we set $\theta^y = \min\big(\theta_j^y,\ j \in J_y^B\big)$ and $\theta^0 = \min(\theta^x, \theta^y)$.
(ii) If $j_0 \in J_y^N$, the relations (14) and (15) are equivalent to $d_j^- - x_j \le \theta l_j^x \le d_j^+ - x_j$, $j \in J_x^B$, and $\theta l_j^y \ge -y_j$, $j \in J_y^B$, $\theta l_{j_0}^y \ge -y_{j_0}$.
So, for $j \in J_x^B$, we take $\theta_j^x$ as in case (i). Then we set $\theta^x = \min\big(\theta_j^x,\ j \in J_x^B\big)$. On the other hand, for $j \in J_y^B$, we take $\theta_j^y$ as in case (i), and $\theta_{j_0}^y = y_{j_0}$ if $l_{j_0}^y = -1$, $\theta_{j_0}^y = \infty$ if $l_{j_0}^y = 1$. We define $\theta^y = \min\big(\theta_{j_1}^y, \theta_{j_0}^y\big)$, where $\theta_{j_1}^y = \min\big(\theta_j^y,\ j \in J_y^B\big)$, and $\theta^0 = \min(\theta^x, \theta^y)$.
By nondegeneracy, $\theta^0 > 0$. For a sufficiently small $\theta$, $0 < \theta \le \theta^0$, the vector $(\bar{x}, \bar{y})$ is feasible and the increment formula (11) gives $F(\bar{x}, \bar{y}) - F(x, y) = -\theta\, |E_{j_0}| + \frac{\theta^2}{2}\, l_N^T M\, l_N < 0$, which contradicts the optimality of $(x, y)$. Consequently, the relations (12) are sufficient, and also necessary for the optimality of the feasible solution $(x, y)$ when the support feasible solution is nondegenerate.
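As an illustration, the relations (12) stated in Theorem 4.1 can be tested componentwise as in the following sketch; the tolerance handling and the function name are our own additions, not part of the method.

```python
import numpy as np

def satisfies_optimality(E_x, E_y, x, y, d_minus, d_plus, Jx_N, Jy_N, tol=1e-9):
    """Check the relations (12) at a support feasible solution.

    For j in Jx_N: E_x[j] >= 0 if x[j] is at its lower bound,
                   E_x[j] <= 0 if x[j] is at its upper bound,
                   E_x[j] == 0 if x[j] is strictly between its bounds.
    For j in Jy_N: E_y[j] >= 0 and E_y[j] * y[j] == 0.
    """
    for j in Jx_N:
        at_lower = abs(x[j] - d_minus[j]) <= tol
        at_upper = abs(x[j] - d_plus[j]) <= tol
        if at_lower and E_x[j] < -tol:
            return False
        if at_upper and E_x[j] > tol:
            return False
        if not at_lower and not at_upper and abs(E_x[j]) > tol:
            return False
    for j in Jy_N:
        if E_y[j] < -tol or abs(E_y[j] * y[j]) > tol:
            return False
    return True

print(satisfies_optimality(
    E_x=np.array([0.3, 0.0]), E_y=np.array([0.0, 0.1]),
    x=np.array([0.0, 1.5]), y=np.array([2.0, 0.0]),
    d_minus=np.zeros(2), d_plus=np.full(2, 3.0),
    Jx_N=[0, 1], Jy_N=[0, 1]))   # True
```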

The Suboptimality Condition
In order to evaluate the difference between the optimal value $F(x^0, y^0)$ and the value $F(x, y)$ of any support feasible solution $\{(x, y), (J_x^B, J_y^B)\}$, when $E_y \ge 0$, we use the following formula:
$$\beta(x, y, J_B) = \sum_{j \in J_x^N,\ E_j^x > 0} E_j^x\, (x_j - d_j^-) \;+\; \sum_{j \in J_x^N,\ E_j^x < 0} E_j^x\, (x_j - d_j^+) \;+\; \sum_{j \in J_y^N} E_j^y\, y_j,$$
which is called the suboptimality value.
Theorem 5.1 (The suboptimality condition) Let $\{(x, y), (J_x^B, J_y^B)\}$ be a support feasible solution of the problem (1)-(4) and $\varepsilon$ an arbitrary nonnegative number. If $E_{y_N} \ge 0$ and $\beta(x, y, J_B) \le \varepsilon$, then the feasible solution $(x, y)$ is $\varepsilon$-optimal.
In the particular case where $\varepsilon = 0$, the feasible solution $(x, y)$ is consequently optimal.
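The following is a small sketch of the suboptimality test of Theorem 5.1, based on the suboptimality value as reconstructed above; the function name and interface are illustrative.

```python
def suboptimality_value(E_x, E_y, x, y, d_minus, d_plus, Jx_N, Jy_N):
    """Suboptimality value beta(x, y, J_B): an upper bound on
    F(x, y) - F(x0, y0) whenever E_y(J_y^N) >= 0."""
    beta = 0.0
    for j in Jx_N:
        if E_x[j] > 0:
            beta += E_x[j] * (x[j] - d_minus[j])
        elif E_x[j] < 0:
            beta += E_x[j] * (x[j] - d_plus[j])
    for j in Jy_N:
        beta += E_y[j] * y[j]
    return beta

# The solution is epsilon-optimal as soon as beta <= epsilon (Theorem 5.1).
```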

Construction of the algorithm
Before presenting the method of resolution, we give some basic definitions.
Definition 6.1 (i) The indices set $J_S \subset J_N$, where $J_S = J_x^S \cup J_y^S$ and $J_N = J_x^N \cup J_y^N$, such that $\det M(J_S, J_S) \ne 0$, is called an objective function support of the problem (1)-(4). We set $J_{NN} = J_N \setminus J_S$.
(ii) The indices set $J_P = \{J_B, J_S\}$ is called a support of the problem (1)-(4), where $J_B = J_x^B \cup J_y^B$ is the constraints support and $J_S = J_x^S \cup J_y^S$ is the objective function support.
(iii) The pair $\{(x, y), J_P\}$ is called a support feasible solution of the problem (1)-(4). It is said to be consistent if $E(J_S) = 0$.
(iv) The direction $l^T = (l_x^T, l_y^T)$, where $l_x = l(J_x)$ and $l_y = l(J_y)$, is said to be a descent direction if $A l_x + H l_y = 0$ and $E_x^T l_x + E_y^T l_y < 0$.
Given any nonnegative real number $\varepsilon$ and an initial consistent support feasible solution $\{(x, y), J_P\}$, the aim of the algorithm is to construct an $\varepsilon$-optimal solution $(x_\varepsilon, y_\varepsilon)$ or an optimal solution $(x^0, y^0)$. An iteration of the algorithm consists in moving from $\{(x, y), J_P\}$ to another support feasible solution $\{(\bar{x}, \bar{y}), \bar{J}_P\}$ such that $F(\bar{x}, \bar{y}) \le F(x, y)$.
For this purpose, we first construct the new feasible solution $(\bar{x}, \bar{y})$ as follows: $(\bar{x}, \bar{y}) = (x, y) + \theta (l_x, l_y)$, where $l = (l_x, l_y)$ is the descent direction and $\theta$ is the step along this direction. Then we change the support $J_P$ into $\bar{J}_P$.
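For instance, the descent-direction property of Definition 6.1 (iv) can be checked directly, as in the following sketch (the function name and the tolerance are illustrative).

```python
import numpy as np

def is_descent_direction(A, H, E_x, E_y, l_x, l_y, tol=1e-9):
    """Definition 6.1 (iv): l = (l_x, l_y) is a descent direction if
    A l_x + H l_y = 0 and E_x^T l_x + E_y^T l_y < 0."""
    in_null_space = np.all(np.abs(A @ l_x + H @ l_y) <= tol)
    decreases = (E_x @ l_x + E_y @ l_y) < -tol
    return in_null_space and decreases
```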

Computation of the direction l
In this algorithm, the simplex metric is chosen. We will thus vary only one component among those which do not satisfy the relations (12). For the choice of the direction, one must consider the following:
• The relation $E_j = 0$, $j \in J_S$, must remain satisfied.
• The value of the objective function must decrease from $(x, y)$ to $(\bar{x}, \bar{y})$.
In order to obtain a maximal decrease, we must choose the subscript $j_0$ such that
$$|E_{j_0}| = \max\big(|E_j|,\ j \in J_x^{NNO} \cup J_y^{NNO}\big),$$
where $J_x^{NNO}$ and $J_y^{NNO}$ are the subsets of $J_x^N$ and $J_y^N$, respectively, whose subscripts do not satisfy the relations of optimality (12). We have two cases: $j_0 \in J_x^{NNO}$ or $j_0 \in J_y^{NNO}$. The nonsupport nonbasic components of the direction are set to $l_{j_0} = -\operatorname{sign} E_{j_0}$ and $l_j = 0$, $j \in J_{NN} \setminus \{j_0\}$. The component $l_S = l(J_S)$ will be calculated such that the relation $E_j = 0$, $j \in J_S$, remains satisfied along the direction. We have:
$$M(J_S, J_S)\, l_S + M(J_S, j_0)\, l_{j_0} = 0, \qquad \text{i.e.} \qquad l_S = -\,l_{j_0}\, M(J_S, J_S)^{-1} M(J_S, j_0).$$
As $l = Z l_N$, we will have $l_B = -A_B^{-1} A_N l_N$. Finally, the direction is $l = (l_x, l_y)$, with components $l_B$ on $J_B$ and $l_N$ on $J_N$.

Calculation of $\theta^y$: $\theta^y = \min\big(\theta_{j_1}^y, \theta_{j_S}^y\big)$, $\theta_{j_1}^y = \min\big(\theta_j^y,\ j \in J_y^B\big)$, $\theta_{j_S}^y = \min\big(\theta_j^y,\ j \in J_y^S\big)$, where
$$\theta_j^y = \begin{cases} -\dfrac{y_j}{l_j^y}, & \text{if } l_j^y < 0,\\[2mm] \infty, & \text{if } l_j^y \ge 0. \end{cases}$$

Calculation of $\theta_F$: the step $\theta_F$ will be calculated in such a way that the passage from $(x, y)$ to $(\bar{x}, \bar{y})$ ensures a maximum diminution of the objective function. Let
$$\varphi(\theta) = F\big((x, y) + \theta (l_x, l_y)\big) = F(x, y) + \theta\, E_N^T l_N + \frac{\theta^2}{2}\,\alpha,$$
where $\alpha = l_N^T M l_N$. So we must have $\varphi'(\theta_F) = E_N^T l_N + \theta_F\, \alpha = 0$ whenever $\alpha > 0$. We deduce:
$$\theta_F = \begin{cases} -\dfrac{E_N^T l_N}{\alpha}, & \text{if } \alpha > 0,\\[2mm] \infty, & \text{if } \alpha = 0. \end{cases}$$

Calculation of $\theta^x$: $\theta^x = \min\big(\theta_{j_1}^x, \theta_{j_S}^x\big)$, $\theta_{j_1}^x = \min\big(\theta_j^x,\ j \in J_x^B\big)$, $\theta_{j_S}^x = \min\big(\theta_j^x,\ j \in J_x^S\big)$, where $\theta_j^x$ is calculated using the formula (21).

Calculation of $\theta^y$ (case $j_0 \in J_y^{NNO}$): $\theta^y = \min\big(\theta_{j_0}^y, \theta_{j_1}^y, \theta_S^y\big)$, where
$$\theta_{j_0}^y = \begin{cases} y_{j_0}, & \text{if } E_{j_0}^y > 0,\\[1mm] \infty, & \text{if } E_{j_0}^y < 0. \end{cases}$$
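To summarize the step computation, the following is a simplified sketch that takes the minimum of the feasibility steps over all components (rather than over $J_B$ and $J_S$ separately, as in the text) and computes $\theta_F$ from $\alpha = l_N^T M l_N$; the closed form $\theta_F = -(E_N^T l_N)/\alpha$ is our reading of the maximal-decrease condition above, not a formula taken verbatim from the paper.

```python
import numpy as np

def max_feasible_step(x, y, l_x, l_y, d_minus, d_plus):
    """Largest theta >= 0 keeping d- <= x + theta*l_x <= d+ and
    y + theta*l_y >= 0 (the quantities theta_j^x and theta_j^y of the
    text, minimized here over all components)."""
    theta = np.inf
    for j in range(len(x)):
        if l_x[j] > 0:
            theta = min(theta, (d_plus[j] - x[j]) / l_x[j])
        elif l_x[j] < 0:
            theta = min(theta, (d_minus[j] - x[j]) / l_x[j])
    for j in range(len(y)):
        if l_y[j] < 0:
            theta = min(theta, -y[j] / l_y[j])
    return theta

def objective_step(E_N, l_N, M):
    """Step theta_F minimizing F along l: -(E_N^T l_N) / (l_N^T M l_N)
    when alpha = l_N^T M l_N > 0, and +inf when alpha <= 0."""
    alpha = l_N @ M @ l_N
    return np.inf if alpha <= 0 else -(E_N @ l_N) / alpha

# The step actually used is theta_0 = min(objective step, max feasible step).
```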