A Two-Phase Support Method for Solving Linear Programs: Numerical Experiments

Abstract. We develop a single artificial variable technique to initialize the primal support method for solving linear programs with bounded variables. We first recall the full artificial basis technique; then we present the proposed algorithm. In order to study the performance of the suggested algorithm, an implementation under the MATLAB programming language has been developed. Finally, we carry out an experimental study of CPU time and iterations number on a large set of the NETLIB test problems. These test problems are practical linear programs modelling various real-life problems arising from several fields, such as oil refinery, audit staff scheduling, airline scheduling, industrial production and allocation, image restoration, multisector economic planning, and data fitting. It is shown that our approach is competitive with our implementation of the primal simplex method and with the primal simplex algorithm implemented in the well-known open-source LP solver LP SOLVE.


Introduction
Linear programming is a mathematical discipline which deals with optimizing a linear function over a domain delimited by a set of linear equations or inequalities. The first formulation of an economic problem as a linear programming problem was given by Kantorovich in 1939 [1], and the general formulation was given later by Dantzig in his work [2]. LP is considered the most important technique in operations research. Indeed, it is widely used in practice, and most optimization techniques are based on LP ones. That is why many researchers have taken a great interest in finding efficient methods to solve LP problems. Although some methods existed before 1947 [1], they were restricted to particular forms of the LP problem. Inspired by the work of Fourier on linear inequalities, Dantzig developed the simplex method in 1947 [3], which is known to be very efficient for solving practical linear programs. However, in 1972, Klee and Minty [4] found an example for which the simplex method takes an exponential time.
In 1977, Gabasov and Kirillova [5] generalized the simplex method and developed the primal support method, which can start from any basis and any feasible solution and can move to the optimal solution by interior points or boundary points. The latter was adapted by Radjef and Bibi to solve LPs containing two types of variables, bounded and nonnegative [6]. Later, Gabasov et al. developed the adaptive method to solve, in particular, linear optimal control problems [7]. This method was extended to solve general linear and convex quadratic problems [8-18]. In 1979, Khachian developed the first polynomial algorithm, an interior point one, for LP problems [19], but it is not efficient in practice. In 1984, Karmarkar presented for the first time an interior point algorithm competitive with the simplex method on large-scale problems [20].
The efficiency of the simplex method and its generalizations depends enormously on the initial point used for their initialization. That is why many researchers have taken a new interest in developing initialization techniques. These techniques aim to find a good initial basis and a good initial point, and to use a minimum number of artificial variables in order to reduce memory space and CPU time. The first technique used to find an initial basic feasible solution for the simplex method is the full artificial basis technique [3]. In [21, 22], the authors developed a technique using only one artificial variable to initialize the simplex method. In his experimental study, Millham [23] shows that when the initial basis is available in advance, the single artificial variable technique can be competitive with the full artificial basis one. Wolfe [24] suggested a technique which consists of solving a new linear programming problem with a piecewise linear objective function (minimization of the sum of infeasibilities). In [25-31], crash procedures are developed to find a good initial basis.
In [32], a two-phase support method with one artificial variable for solving linear programming problems was developed. This method consists of two phases, and its general principle is the following: in the first phase, we start by searching for an initial support with the Gauss-Jordan elimination method; then we proceed to the search for an initial feasible solution by solving an auxiliary problem having one artificial variable and an obvious feasible solution. This obvious feasible solution can be an interior point of the feasible region. After that, in the second phase, we solve the original problem with the primal support method [5].
In [33, 34], we suggested two approaches to initialize the primal support method with nonnegative variables and bounded variables: the first consists of applying the Gauss elimination method with partial pivoting to the system of linear equations corresponding to the main constraints, and the second consists of transforming the equality constraints into inequality constraints. After finding the initial support, we search for a feasible solution by adding only one artificial variable to the original problem; thus we get an auxiliary problem with an evident support feasible solution. An experimental study has been carried out on some NETLIB test problems. The results of the numerical comparison revealed that finding the initial support by the Gauss elimination method consumes much time, and that transforming the equality constraints into inequality ones increases the dimension of the problem. Hence, the proposed approaches are competitive with the full artificial basis simplex method for solving small problems, but they are not efficient for large ones.
In this work, we first extend the full artificial basis technique presented in [7] to solve problems in general form; then we combine a crash procedure with a single artificial variable technique in order to find an initial support feasible solution for the initialization of the support method. This technique is efficient for solving practical problems. Indeed, it takes advantage of sparsity and adds a reduced number of artificial variables to the original problem. Finally, we show the efficiency of our approach by carrying out an experimental study on some NETLIB test problems.
The paper is organized as follows: in Section 2, the primal support method for solving linear programming problems with bounded variables is reviewed. In Section 3, the different techniques to initialize the support method are presented: the full artificial basis technique and the single artificial variable one. Although the support method with full artificial basis is described in [7], it has never been tested on the NETLIB test problems. In Section 4, experimental results are presented. Finally, Section 5 is devoted to the conclusion.

State of the Problem and Definitions
The linear programming problem with bounded variables is presented in the following standard form, where c and x are n-vectors, b is an m-vector, A is an (m × n)-matrix with rank(A) = m < n, and l and u are n-vectors. In the following sections, we assume that the bounds l and u are finite. We define the following index sets, with which we partition the vectors and the matrix A as follows:

(2.5)
(i) A vector x verifying the constraints (2.2) and (2.3) is called a feasible solution for the problem (2.1)-(2.3).
(ii) A feasible solution x^0 is called optimal if z(x^0) = c^T x^0 = max c^T x, where x is taken from the set of all feasible solutions of the problem (2.1)-(2.3).
(iii) A feasible solution x is said to be ε-optimal or suboptimal if z(x^0) − z(x) ≤ ε, where x^0 is an optimal solution for the problem (2.1)-(2.3) and ε is a positive number chosen beforehand.
(iv) We consider a set of indices J_B ⊂ J = {1, 2, ..., n} with |J_B| = m; the set J_B is called a support if the matrix A_B formed by the corresponding columns of A is nonsingular. (v) The pair {x, J_B} comprising a feasible solution x and a support J_B will be called a support feasible solution (SFS).
(vi) An SFS is called nondegenerate if l_j < x_j < u_j, j ∈ J_B.
Remark 2.1. An SFS is a more general concept than the basic feasible solution (BFS). Indeed, the nonsupport components of an SFS are not restricted to their bounds. Therefore, an SFS may be an interior point, a boundary point, or an extreme point, whereas a BFS is always an extreme point. That is why we can classify the primal support method in the class of interior search methods within the simplex framework [35].
(i) We define the Lagrange multipliers vector π and the reduced costs vector Δ as follows: π is the solution of A_B^T π = c_B, and Δ_j = π^T a_j − c_j for j ∈ J_N, where a_j denotes the jth column of A. Theorem 2.2 (the optimality criterion [5]). Let {x, J_B} be an SFS for the problem (2.1)-(2.3). Then the relations: Δ_j ≥ 0 for x_j = l_j, Δ_j ≤ 0 for x_j = u_j, Δ_j = 0 for l_j < x_j < u_j, j ∈ J_N

(2.8)
are sufficient and, in the case of nondegeneracy of the SFS {x, J B }, also necessary for the optimality of the feasible solution x.
The quantity β(x, J_B) defined by β(x, J_B) = Σ_{j ∈ J_N, Δ_j > 0} Δ_j (x_j − l_j) + Σ_{j ∈ J_N, Δ_j < 0} Δ_j (x_j − u_j) (2.9) is called the suboptimality estimate. Thus, we have the following theorem [5].
Theorem 2.3 (sufficient condition for suboptimality). Let {x, J_B} be an SFS for the problem (2.1)-(2.3) and ε an arbitrary positive number. If β(x, J_B) ≤ ε, then the feasible solution x is ε-optimal.
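As a concrete illustration, the optimality criterion (2.8) and the suboptimality estimate (2.9) can be checked mechanically. The sketch below assumes a maximization problem, dense Python lists, and the sign convention of (2.8); the function names are ours, not the paper's.

```python
def is_optimal(x, l, u, delta, J_N, tol=1e-9):
    """Check the optimality relations (2.8) at the nonsupport indices J_N:
    Delta_j >= 0 if x_j = l_j, Delta_j <= 0 if x_j = u_j,
    Delta_j  = 0 if l_j < x_j < u_j."""
    for j in J_N:
        if abs(x[j] - l[j]) <= tol:        # nonsupport variable at its lower bound
            if delta[j] < -tol:
                return False
        elif abs(x[j] - u[j]) <= tol:      # at its upper bound
            if delta[j] > tol:
                return False
        elif abs(delta[j]) > tol:          # strictly between its bounds
            return False
    return True

def suboptimality(x, l, u, delta, J_N):
    """Suboptimality estimate (2.9): each term is nonnegative, and the sum
    bounds the optimality gap z(x^0) - z(x) (Theorem 2.3)."""
    beta = 0.0
    for j in J_N:
        if delta[j] > 0:
            beta += delta[j] * (x[j] - l[j])
        elif delta[j] < 0:
            beta += delta[j] * (x[j] - u[j])
    return beta
```

A point with all nonsupport components at the "right" bound yields β = 0 and passes the criterion, matching the equivalence between (2.8) and β(x, J_B) = 0 for nondegenerate SFSs.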

The Primal Support Method
Let {x, J_B} be an initial SFS and ε an arbitrary positive number. The scheme of the primal support method is described in the following steps: (1) Compute the reduced costs vector Δ. (2) Compute the suboptimality estimate β(x, J_B) with the formula (2.9).
(3) If β(x, J_B) = 0, then the algorithm stops with {x, J_B}, an optimal SFS.
(4) If β(x, J_B) ≤ ε, then the algorithm stops with {x, J_B}, an ε-optimal SFS.
(5) Determine the nonoptimal index set: J_NNO = {j ∈ J_N : Δ_j < 0, x_j < u_j, or Δ_j > 0, x_j > l_j}.
(6) Choose an entering index j_0 ∈ J_NNO. (7) Compute the search direction d using the relations:
(13) If β(x̄, J_B) = 0, then the algorithm stops with {x̄, J_B}, an optimal SFS.
(14) If β(x̄, J_B) ≤ ε, then the algorithm stops with {x̄, J_B}, an ε-optimal SFS.
(15) If θ^0 = θ_{j_0}, then we put J̄_B = J_B.
(16) If θ^0 = θ_{j_1}, then we put J̄_B = (J_B \ {j_1}) ∪ {j_0}. (17) We put x = x̄ and J_B = J̄_B. Go to step (1).

Finding an Initial SFS for the Primal Support Method
Consider the linear programming problem written in the following general form. We assume that the bounds l and u are finite.
Let m = m_1 + m_2 be the number of constraints of the problem (3.1), and let I = {1, 2, ..., m} and J_0 = {1, 2, ..., p} be, respectively, the constraint indices set and the original variable indices set of the problem (3.1). We partition the set I as follows: I = I_1 ∪ I_2, where I_1 and I_2 represent, respectively, the indices of the inequality and the equality constraints. We denote by e_j the j-vector of ones, that is, e_j = (1, 1, ..., 1) ∈ R^j, and by ē_i the ith column of the identity matrix I_m of order m. After adding slack variables to the inequality constraints, we obtain the standard-form problem (3.2)-(3.5), where h_i is a p-vector computed with the formula

(3.6)
Indeed, from the system (3.3), we have x_{p+i} = b_i − Σ_{j=1}^{p} a_{ij} x_j, i ∈ I_1. By using the bound constraints (3.4), we get bounds for the slack variables. However, the experimental study shows that it is more efficient to set u_{p+i}, i ∈ I_1, to a given finite big value, because for large-scale problems the deduction formula (3.6) given above takes much CPU time to compute bounds for the slack variables.
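The deduction behind formula (3.6) amounts to interval arithmetic over the bound constraints (3.4): the slack of a row inherits its bounds from the extreme values of b_i − Σ_j a_{ij} x_j. A minimal sketch for a single ≤-row, under our own (hypothetical) naming:

```python
def slack_bounds(a_row, b_i, l, u):
    """Bounds for the slack x_{p+i} = b_i - sum_j a_ij x_j of a <= row,
    obtained by interval arithmetic over l <= x <= u. The slack of a
    <= constraint is also nonnegative, hence the max with 0."""
    # smallest slack: x_j pushed to maximize sum_j a_ij x_j
    lo = b_i - sum(a * (u[j] if a > 0 else l[j]) for j, a in enumerate(a_row))
    # largest slack: x_j pushed to minimize sum_j a_ij x_j
    hi = b_i - sum(a * (l[j] if a > 0 else u[j]) for j, a in enumerate(a_row))
    return max(lo, 0), hi
```

As the paper observes, sweeping every row this way is costly on large instances, which is why the implementation simply sets u_{p+i} to a large finite value instead.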
The initialization of the primal support method consists of finding an initial support feasible solution for the problem (3.2)-(3.5). In this section, inspired by the technique used to initialize interior point methods [36] and taking into account the fact that the support method can start with a feasible point and a support which are independent, we suggest a single artificial variable technique to find an initial SFS. Before presenting the suggested technique, we first extend the full artificial basis technique, originally presented in [7] for the standard form, to solve linear programs presented in the general form (3.1).

The Full Artificial Basis Technique
Let x̄ be a p-vector chosen between l and u, and let w be an m-vector such that w = b − A x̄. We consider the following subsets of I_1: I_1^+ = {i ∈ I_1 : w_i ≥ 0} and I_1^- = {i ∈ I_1 : w_i < 0}, and we assume without loss of generality that I_1^+ = {1, 2, ..., k}, with k ≤ m_1. We make the following partition of x_e, u_e, and H_e: x_e = (x_e^+, x_e^-), where

(3.13)
After adding s = m − k artificial variables to the equations k+1, k+2, ..., m of the system (3.11), where s is the number of elements of the artificial index set I_a = {k+1, k+2, ..., m}, we get the following auxiliary problem: (3.14), where x_a represents the artificial variables vector and δ is a real nonnegative number chosen in advance.
If we set the objective coefficients of the artificial variables to −e_s, then we get the following auxiliary problem:

(3.19)
The variable indices set of the auxiliary problem (3.17)-(3.19) is

(3.20)
Let us partition this set as follows: J = J_N ∪ J_B, where

(3.21)
Remark that the pair {y, J_B}, where y is defined by the relationship (3.22), with w_{I_1^+} = (w_i, i ∈ I_1^+) and w_{I_a} = (w_i, i ∈ I_a), is a support feasible solution (SFS) for the auxiliary problem (3.17)-(3.19). Indeed, y lies between its lower and upper bounds for δ ≥ 0. Furthermore, the (m × m)-matrix of the support columns is nonsingular, and y verifies the main constraints: by replacing x, x_e^-, x_e^+, and x_a with their values in the system (3.18), we get

(3.26)
Therefore, the primal support method can be initialized with the SFS {y, J_B} to solve the auxiliary problem. Let {y*, J_B*} be the optimal SFS obtained after applying the primal support method to the auxiliary problem (3.17)-(3.19), where

(3.27)
If ψ* < 0, then the original problem (3.2)-(3.5) is infeasible. Else, when J_B* does not contain any artificial index, {(x*, x_e*), J_B*} will be an SFS for the original problem (3.2)-(3.5). Otherwise, we delete the artificial indices from the support J_B* and replace them with appropriate original or slack indices, following the algebraic rule used in the simplex method.
In order to initialize the primal support method for solving linear programming problems with bounded variables written in the standard form, Gabasov et al. [7] add m artificial variables, where m represents the number of constraints. In this work, we are interested in solving the problem written in the general form (3.1), so we have added artificial variables only for the equality constraints and for the inequality constraints with negative components of the vector w.

Remark 3.3. If we choose x̄ such that x̄ = l or x̄ = u, then the vector y, given by the relationship (3.22), will be a BFS. Hence, the simplex algorithm can be initialized with this point.
In particular, when w_i ≥ 0 for all i ∈ I_1 (that is, I_1^- = ∅), no artificial variables are needed for the inequality constraints, so we add only m_2 artificial variables for the equality constraints.
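The row classification described above can be sketched as follows. The helper is hypothetical (not the paper's code): it computes w = b − A x̄ and flags the rows that receive an artificial variable, namely every equality row and every inequality row with w_i < 0.

```python
def classify_rows(A, b, x_bar, ineq_rows):
    """Given a trial point x_bar with l <= x_bar <= u, compute w = b - A x_bar
    and return w together with the indices of rows needing an artificial
    variable: equality rows always, inequality rows only when w_i < 0."""
    w = [b_i - sum(a_ij * x_j for a_ij, x_j in zip(row, x_bar))
         for row, b_i in zip(A, b)]
    artificial = [i for i in range(len(b))
                  if i not in ineq_rows or w[i] < 0]
    return w, artificial
```

Choosing x̄ to make many inequality rows satisfy w_i ≥ 0 directly reduces the number of artificial variables, which is the point of the general-form extension.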

The Single Artificial Variable Technique
In order to initialize the primal support method using this technique, we first search for an initial support; then we proceed to the search for an initial feasible solution for the original problem.
The application of the Gauss elimination method with partial pivoting to the system of equations (3.3) can give us a support J_B = {j_1, j_2, ..., j_r}, where r ≤ m. However, the experimental study realized in [33, 34] reveals that this approach spends much time searching for the initial support. That is why it is important to take into account the sparsity of practical problems and to apply some procedure to find a triangular basis among the columns corresponding to the original variables, that is, the columns of the matrix A. In this work, on the basis of the crash procedures presented in [26, 28-30], we present a procedure to find an initial support for the problem (3.2)-(3.5).
Procedure 1 (searching an initial support). (1) We sort the columns of the (m × p)-matrix A in increasing order of their number of nonzero elements. Let L be the list of the sorted columns of A.
(2) Let a_{j_0} = L(r), the rth column of L, be the first column of the list L verifying: ∃ i_0 ∈ I such that |a_{i_0 j_0}| > pivtol, where pivtol is a given tolerance. Hence, we put J_B = {j_0}, I_piv = {i_0}, and k = 1.
(3) Let j_k be the index corresponding to the column r_k of the list L, that is, a_{j_k} = L(r_k). If a_{j_k} has zero elements in all the rows whose indices are in I_piv and if ∃ i_k ∈ I \ I_piv such that |a_{i_k j_k}| > pivtol, then we put J_B = J_B ∪ {j_k}, I_piv = I_piv ∪ {i_k}, and k = k + 1. (4) If columns of L remain to be examined, then go to step (3); else go to step (5). (5) We put s = 0, I_a = ∅, J_a = ∅. (6) For all i ∈ I \ I_piv: if the ith constraint is originally an inequality constraint, then we add to J_B the index of the corresponding slack variable, p + i, that is, J_B = J_B ∪ {p + i}. If the ith constraint is originally an equality constraint, then we put s = s + 1 and add an artificial variable x_{p+m_1+s}, for which we set the lower bound to zero and the upper bound to a big, well-chosen value M. Thus, we put J_B = J_B ∪ {p + m_1 + s}, J_a = J_a ∪ {p + m_1 + s}, I_a = I_a ∪ {i}. After applying the procedure explained above (Procedure 1) to the problem (3.2)-(3.5), we get the following linear programming problem:

where x_a = (x_{p+m_1+1}, x_{p+m_1+2}, ..., x_n)^T is the s-vector of the artificial variables added during the application of Procedure 1, with n = p + m_1 + s, and H_a = (ē_i, i ∈ I_a) is an (m × s)-matrix. Since we have obtained an initial support, we can start the procedure of finding a feasible solution for the original problem.
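Procedure 1 can be sketched in a few lines. The following is an illustrative implementation under our own naming, using 0-based indices (the paper's indices p + i and p + m_1 + s are 1-based), dense lists instead of a sparse matrix, and a simplified loop termination in place of steps (3)-(4):

```python
def crash_support(A, ineq_rows, pivtol=1e-6):
    """Sketch of Procedure 1: build a triangular initial support by scanning
    the columns of A in increasing order of their nonzero counts, then
    complete the support with slack indices (inequality rows) or artificial
    indices (equality rows)."""
    m, p = len(A), len(A[0])
    m1 = len(ineq_rows)
    # step (1): columns sorted by number of nonzeros
    order = sorted(range(p), key=lambda j: sum(A[i][j] != 0 for i in range(m)))
    J_B, I_piv = [], set()
    # steps (2)-(4): greedily accept columns that keep the support triangular
    for j in order:
        if any(A[i][j] != 0 for i in I_piv):   # column touches a pivot row
            continue
        for i in range(m):
            if i not in I_piv and abs(A[i][j]) > pivtol:
                J_B.append(j)
                I_piv.add(i)
                break
    # steps (5)-(6): cover the remaining rows with slacks or artificials
    s, J_a, I_a = 0, [], []
    for i in range(m):
        if i in I_piv:
            continue
        if i in ineq_rows:
            J_B.append(p + i)                  # slack index (0-based analogue)
        else:
            j_art = p + m1 + s                 # artificial index (0-based analogue)
            J_B.append(j_art); J_a.append(j_art); I_a.append(i)
            s += 1
    return J_B, I_piv, J_a, I_a
```

Because columns with few nonzeros are examined first, the accepted columns form a (permuted) triangular matrix, so the support systems can later be solved cheaply.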
Procedure 2 (searching an initial feasible solution). Consider the following auxiliary problem:

(3.31)
where x_{n+1} is an artificial variable, ρ = b − A x̄, and x̄ is a p-vector chosen between l and u. We remark that the point with x = x̄, x_e = 0, x_a = 0, and x_{n+1} = 1 is an obvious feasible solution for the auxiliary problem. Indeed,

(3.33)
Hence, we can apply the primal support method to solve the auxiliary problem (3.31) by starting with the initial SFS {y, J_B}, where J_B = {j_0, j_1, ..., j_{m−1}} is the support obtained with Procedure 1. Let y*, J_B*, and ψ* be, respectively, the optimal solution, the optimal support, and the optimal objective value of the auxiliary problem (3.31). If ψ* < 0, then the original problem (3.2)-(3.5) is infeasible. Else, when J_B* does not contain any artificial index, {(x*, x_e*), J_B*} will be an SFS for the original problem (3.2)-(3.5). Otherwise, we delete the artificial indices from the support J_B* and replace them with appropriate original or slack indices, following the algebraic rule used in the simplex method.
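Procedure 2's auxiliary problem hinges on the identity A x̄ + ρ · 1 = b when ρ = b − A x̄. A minimal sketch (hypothetical names; only the original variables are shown, with the slack and Procedure-1 artificial columns omitted for brevity):

```python
def auxiliary_column(A, b, x_bar):
    """Build the single artificial column rho = b - A x_bar of the auxiliary
    problem (3.31) and check that the evident point (x_bar, x_{n+1} = 1)
    satisfies A x + rho * x_{n+1} = b exactly."""
    rho = [b_i - sum(a_ij * x_j for a_ij, x_j in zip(row, x_bar))
           for row, b_i in zip(A, b)]
    # residual of the evident feasible point: should be identically zero
    residual = [sum(a_ij * x_j for a_ij, x_j in zip(row, x_bar)) + r_i - b_i
                for row, r_i, b_i in zip(A, rho, b)]
    return rho, residual
```

Driving x_{n+1} from 1 down to 0 (the objective max ψ = −x_{n+1}) then steers the point into the feasible region of the original problem, with only one artificial variable when J_a = ∅.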
Remark 3.6. The number of artificial variables n_a = s + 1 of the auxiliary problem (3.31) verifies the inequality 1 ≤ n_a ≤ m_2 + 1.
The auxiliary problem will have only one artificial variable, that is, n_a = 1, when the initial support J_B found by Procedure 1 is constituted only by original and slack variable indices (J_a = ∅), and this holds in two cases.
Case 1. When all the constraints are inequalities, that is, I_2 = ∅.
Case 2. When I_2 ≠ ∅ and step (4) of Procedure 1 ends with I_2 ⊆ I_piv.
The case n_a = m_2 + 1 holds when step (4) of Procedure 1 stops with I_piv ⊆ I_1.
Remark 3.7. Let us choose a p-vector x̄ between l and u and two nonnegative vectors v_e ∈ R^{m_1} and v_a ∈ R^s. If we put, in the auxiliary problem (3.31), ρ = b − v − A x̄, with v = H_e v_e + H_a v_a, then the point with x = x̄, x_e = v_e, x_a = v_a, and x_{n+1} = 1 is a feasible solution for the auxiliary problem. Indeed, A x̄ + H_e v_e + H_a v_a + ρ x_{n+1} = A x̄ + H_e v_e + H_a v_a + b − H_e v_e − H_a v_a − A x̄ = b.

(3.36)
We remark that if we put v_e = 0 ∈ R^{m_1} and v_a = 0 ∈ R^s, we get v = 0 ∈ R^m; then ρ = b − A x̄, and we obtain the evident feasible point that we have used in our numerical experiments. It is important here to cite two other special cases.
Case 1. If we choose the nonbasic components of y equal to their bounds, then we obtain a BFS for the auxiliary problem; therefore, we can initialize the simplex algorithm with it.
Case 2. If we put v_e = e_{m_1} and v_a = e_s, then v = H_e e_{m_1} + H_a e_s = Σ_{i ∈ I_1} ē_i + Σ_{i ∈ I_a} ē_i. Hence, the corresponding point is a feasible solution for the auxiliary problem (3.31), with ρ = b − v − A x̄.

Numerical Example
Consider the following LP problem: where

(3.40)
We put M = 10^10, pivtol = 10^−6, x̄ = 0 ∈ R^4, and we apply the two-phase primal support method using the single artificial variable technique to solve the problem (3.39).
Phase 1. After adding the slack variables x_5 and x_6 to the problem (3.39), we get the following problem in standard form, where u_e = (M, M)^T:

(3.42)
The application of Procedure 1 to the problem (3.41) gives us the initial support J_B = {2, 1, 3, 5}. In order to find an initial feasible solution for the original problem, we add an artificial variable x_7 to the problem (3.41) and compute the vector ρ: ρ = b − A x̄ = b. Thus, we obtain the following auxiliary problem: max ψ = −x_7, s.t. Ax + H_e x_e + ρ x_7 = b, l ≤ x ≤ u, 0 ≤ x_e ≤ u_e, 0 ≤ x_7 ≤ 1.
Therefore, the optimal solution and the optimal objective value of the original problem (3.39) are x* = (…)^T, z* = 5/3. (3.44)

Experimental Results
In order to perform a numerical comparison between the simplex method and the different variants of the support method, we have programmed them under the MATLAB programming language, version 7.4.0 (R2007a). Since the test problems available in the NETLIB library present bounds on the variables which can be infinite, we have built a sample of 68 test problems having finite bounds and the same optimal objective value as those of NETLIB. The principle of the building process is as follows: let xl and xu be the bounds obtained after the conversion of a given test problem from the "mps" format to the "mat" format, and let x* be the optimal solution. We put x_min = min{x_j*, j = 1, ..., p} and x_max = max{x_j*, j = 1, ..., p}. Thus, the constraint matrix and the objective coefficients vector of the new problem remain the same as those of NETLIB, but the new lower bounds l and upper bounds u are changed as follows:

(4.1)
Table 1 lists the main characteristics of the 68 considered test problems, where NC, NV, NEC, and D represent, respectively, the number of constraints, the number of variables, the number of equality constraints, and the density of the constraint matrix (the ratio between the number of nonzero elements and the total number of elements of the constraint matrix, multiplied by 100). There exist many LP solvers which include a primal simplex algorithm package for solving LP problems; we cite commercial solvers such as CPLEX [37] and MINOS [28], and open-source solvers such as LP SOLVE [38], GLPK [39], and LPAKO [30]. The LP SOLVE solver is well documented and widely used in applications and numerical experiments [40, 41]. Moreover, the latter has a mex interface called mxlpsolve which can easily be installed and used with the MATLAB language. That is why we compare our algorithm with the primal simplex algorithm implemented in LP SOLVE. However, MATLAB is an interpreted language, so it takes much more time to solve problems than the native language C used for the implementation of LP SOLVE. For this reason, the numerical comparison carried out between our method and LP SOLVE concerns only the iterations number. In order to compare our algorithm with the primal simplex algorithm in terms of CPU time, we have developed our own simplex implementation. The latter is based on the one given in [42]. In order to compare the different solvers, we have given them the following names.

Solver1. SupportSav
The two-phase support method, where we use the suggested single artificial variable technique in the first phase, that is, the initial support is found by applying Procedure 1, and the feasible solution is found by applying Procedure 2.

Solver2. SupportFab
The two-phase support method, where we use the full artificial basis technique presented above in the first phase.

Solver3. SimplexFab
The two-phase primal simplex method, where we use the full artificial basis technique presented above in the first phase (we use the BFS presented in Remark 3.3, with x̄ = l).

Solver4. LP Solve PSA
The primal simplex method implemented in the open-source LP solver LP SOLVE. The different options are: Primal-Primal, PRICER_DANTZIG, No Scaling, No Presolving. For setting these options, the following MATLAB commands are used: (i) mxlpsolve('set_simplextype', lp, 5); % Phase 1: Primal, Phase 2: Primal.
The Lagrange multipliers vector and the support components of the search direction in the first three solvers are computed by solving the two systems of linear equations A_B^T π = c_B and A_B d_B = −a_{j_0} d_{j_0}, using the LU factorization of A_B [43]. The updating of the L and U factors is done, in each iteration, using the Sherman-Morrison-Woodbury formula [42].
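The two linear systems above can be sketched as follows. For brevity the sketch calls `numpy.linalg.solve` on each system instead of factoring A_B once and updating the factors as the implementation described here does; the function name is ours.

```python
import numpy as np

def directions(A_B, c_B, a_j0, d_j0):
    """Solve the two systems of the implementation:
    A_B^T pi = c_B   (Lagrange multipliers)
    A_B d_B  = -a_j0 * d_j0   (support components of the search direction).
    A production code would LU-factor A_B once per iteration and update
    the factors (Sherman-Morrison-Woodbury) instead of solving from scratch."""
    pi = np.linalg.solve(A_B.T, c_B)
    d_B = np.linalg.solve(A_B, -a_j0 * d_j0)
    return pi, d_B
```

The reduced costs Δ_j = π^T a_j − c_j for the nonsupport columns then follow from π by a single matrix-vector product.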
In the different implementations, we have used Dantzig's rule to choose the entering variable. To prevent cycling, the EXPAND procedure [44] is used. We have solved the 68 NETLIB test problems listed in Table 1 with the solvers mentioned above on a personal computer with an Intel(R) Core(TM)2 Quad CPU Q6600, 2.4 GHz, and 4 GB of RAM, working under the Windows XP SP2 operating system, by setting the different tolerances appropriately. We recall that the initial point x̄, needed in the three methods (Solvers 1, 2, and 3), must be located between its lower and upper bounds; after giving it various values and observing the variation of the CPU time, we concluded that it is most efficient to set x̄ = l. The upper bounds for the slack and artificial variables added in the methods SupportSav, SupportFab, and SimplexFab are set to M = 10^10.
Numerical results are reported in Tables 2, 3, 4, and 5, where nit_1, nit, and cput represent, respectively, the phase-one iterations number, the total iterations number, and the CPU time in seconds needed by each method to find the optimal objective values presented in [25] or [45]; cput_1, shown in columns 2 and 7 of Table 2, represents the CPU time necessary to find the initial support with Procedure 1. The number of artificial variables added to the original problem in our implementations is listed in Table 6.
In order to compare the different solvers, we have ordered the problems in increasing order of the sum of the constraints and variables numbers (NC + NV), as they are presented in the different tables, and we have computed the CPU time ratio (Table 7) and the iteration ratio (Table 8):

(4.2)
The above ratios (see [46]) indicate how many times SupportSav is better than the other solvers. Ratios greater than one mean that our method is more efficient for the considered problem: let S be one of the Solvers 2, 3, and 4. If cputr_{1S} ≥ 1 (resp., nitr_{1S} ≥ 1), then our solver (Solver1) is competitive with SolverS in terms of CPU time (resp., in terms of iterations number). Ratios greater than or equal to one are shown in bold in Tables 7 and 8.
We plot the CPU time ratios of the solvers SupportFab and SimplexFab over SupportSav (Figures 1(a) and 1(c)) and the iteration ratios of each solver over SupportSav (Figures 1(b), 1(d), and 1(e)). Ratios greater than one correspond to the points above the line y = 1 in the graphs of Figure 1. (iii) In terms of iterations number, SupportSav is competitive with LP SOLVE PSA in solving 72% of the test problems, with an average iteration ratio of 1.49 (the majority of iteration ratios are above the line y = 1 in Figure 1(e)). In particular, for the LP "scsd8" (Problem number 55), our method is 9.58 times faster than the primal simplex implementation of the open-source solver LP SOLVE. (iv) We remark from the last row of Table 6 that the total number of artificial variables added in order to find the initial support for SupportSav (13234) is considerably less than the total number of artificial variables added for SimplexFab (27389) and SupportFab (27389). (v) If we compute the total number of iterations necessary to solve the 68 LPs for each solver, we find 143737 for SupportSav, 159306 for SupportFab, 163428 for SimplexFab, and 158682 for LP SOLVE PSA. Therefore, the total number of iterations is the smallest for our solver.

Conclusion
In this work, we have proposed a single artificial variable technique to initialize the primal support method with bounded variables. An implementation under the MATLAB environment has been developed. In the implementation, we have used the LU factorization of the support matrix to solve the linear systems and the Sherman-Morrison-Woodbury formula to update the LU factors. After that, we have compared our approach (SupportSav) to the full artificial basis support method (SupportFab), the full artificial basis simplex method (SimplexFab), and the primal simplex implementation of the open-source solver LP SOLVE (LP SOLVE PSA). The obtained numerical results are encouraging.
Indeed, the suggested method (SupportSav) is competitive with SupportFab, SimplexFab, and LP SOLVE PSA. Note that during our experiments, we remarked that the variation of the initial support and of the initial point x̄ affects the performance (CPU time and iterations number) of the single artificial variant of the support method. This raises the question of how to choose the initial point and the initial support judiciously in order to improve the efficiency of the support method. In future works, we will try to apply some crash procedures, like those proposed in [25, 27], in order to initialize the support method with a good initial support. Furthermore, we will try to implement some modern sparse algebra techniques to update the LU factors [47].

Remark 3.2.

We have I = I_1 ∪ I_2 = I_1^+ ∪ I_1^- ∪ I_2 = I_1^+ ∪ I_a. Since, in the relationship (3.22), we have y_B = (w_{I_1^+}, |w_{I_a}|) and w_{I_1^+} ≥ 0, we get y_B = |w_{I_1^+ ∪ I_a}| = |w_I| = |w|.

Remark 3.5.

The inputs of Procedure 1 are: (i) the (m × p)-matrix of constraints, A; (ii) a pivoting tolerance fixed beforehand, pivtol; (iii) a big, well-chosen value M. The outputs of Procedure 1 are: (i) J_B = {j_0, j_1, ..., j_{m−1}}: the initial support for the problem (3.2)-(3.5); (ii) I_a: the indices of the constraints for which artificial variables have been added; (iii) J_a: the indices of the artificial variables added to the problem (3.2)-(3.5); (iv) s = |I_a| = |J_a|: the number of artificial variables added to the problem (3.2)-(3.5).

(a) Ratio of the CPU time of SupportFab over the CPU time of SupportSav (cputr_12); (b) ratio of the iterations number of SupportFab over the iterations number of SupportSav (nitr_12); (c) ratio of the CPU time of SimplexFab over the CPU time of SupportSav (cputr_13); (d) ratio of the iterations number of SimplexFab over the iterations number of SupportSav (nitr_13); (e) ratio of the iterations number of LP SOLVE PSA over the iterations number of SupportSav (nitr_14).

Figure 1 :
Figure 1: Ratios of the different solvers over SupportSav.

Table 1 :
Characteristics of the test problems.

Table 2 :
CPU time and iterations number for Solver1 SupportSav .

Table 3 :
CPU time and iterations number for Solver2 SupportFab .

Table 4 :
CPU time and iterations number for Solver3 SimplexFab .

Table 5 :
Iterations number for Solver4 LP SOLVE PSA .

Table 6 :
The number of artificial variables added in Phase one of the three Solvers.

Table 7 :
CPU time ratio of the different solvers over SupportSav.

SupportSav is competitive with SimplexFab in solving 68% of the test problems, with an average CPU time ratio of 1.35. The peaks of the ratio graphs correspond to the problems where our approach is 2 to 5 times faster than SimplexFab. In particular, for the LP "capri" (Problem number 21), SupportSav is 5.29 times faster than SimplexFab in terms of iterations number and solves the problem 3.99 times faster in terms of CPU time.

Table 8 :
Iteration ratio of the different solvers over SupportSav.