A Fast Algorithm for Solving a Class of the Linear Complementarity Problem in a Finite Number of Steps

The linear complementarity problem has received a great deal of attention and has been studied extensively. Recently, El foutayeni et al. have contributed several works aimed at solving this problem. Many existing results give good approximations of solutions to the linear complementarity problem. The major drawback of many existing methods is that, for large systems, they require a large number of operations during each iteration and consume large amounts of memory and computation time. This motivates us to construct an algorithm that solves this kind of problem in a finite number of steps, with fewer iterations than existing methods. In addition, we consider a new class of matrices called E-matrices.


Introduction
In recent decades, the complementarity problem has played a very important role in several domains and has been the focus of many researchers and scientists. As an example, we can cite the works of Cottle [1,2] published between 1964 and 1966. Note that the problem appears in older works without bearing the name of complementarity problem; in particular, we mention the work of Du Val [3] and Ingleton [4]. Let f be a function from ℝ^n to ℝ^n. The complementarity problem (CP) associated with f consists in finding a vector z ∈ ℝ^n such that z ≥ 0, f(z) ≥ 0, and z^T f(z) = 0. If the function f is affine, it can be written in the form f(z) = Mz + q, where q is a vector of ℝ^n and M is a square matrix of order n; we then have a linear complementarity problem, denoted LCP(M, q). The name complementarity comes from the fact that if z ∈ ℝ^n_+ is a solution of a linear complementarity problem, then z_i = 0 or f_i(z) = 0 for all i = 1, 2, ⋯, n. The linear complementarity problem has been widely studied by researchers from a variety of backgrounds, which has produced a rich and varied literature (see [5][6][7][8][9][10][11][12][13][14][15][16][17] and references therein). In [18], Kadiri and Yassine described a new purification method for solving monotone linear complementarity problems; their method associates with each iterate of the sequence generated by an interior point method one basis that is not necessarily feasible, and the authors proved that, under the strict complementarity and nondegeneracy hypotheses, the sequence of bases converges in a finite number of iterations to an optimal basis which gives the exact solution of the problem. In [19], Alves and Judice proposed a pivoting heuristic, based on tabu search and its integration into an enumerative framework, for solving the linear complementarity problem. Recently, El foutayeni et al.
[20][21][22][23][24][25][26][27][28] contributed to the resolution of the linear complementarity problem. In particular, in [27], they proved the equivalence between solving a linear complementarity problem and solving a nonlinear equation, and they gave a globally convergent hybrid algorithm, based on vector divisions, for solving the linear complementarity problem. In [27], the same authors determined the conditions under which a linear complementarity problem has a solution and calculated the solution when it exists. In [24], they proposed to solve the linear complementarity problem in the case where it has several solutions. The aim of [29] is to propose an interior point method that converges in polynomial time to the best solution of the linear complementarity problem; this convergence requires at most O(n^{0.5} L) iterations, where n is the number of variables and L is the length of a binary coding of the input; furthermore, the algorithm does not exceed O(n^{3.5} L) arithmetic operations until its convergence. In [24], El foutayeni and Khaladi showed that the linear complementarity problem LCP(M, q) is completely equivalent to finding a fixed point of the map x ↦ max(0, (I − M)x − q); to approximate a solution of the latter problem, they proposed an algorithm that starts from an arbitrary interval vector X^(0) and generates a sequence of interval vectors (X^(k))_{k=1,⋯} converging to the best solution of the linear complementarity problem. More recently, in [30], Wang et al. proposed an interior point method to find the solution of the linear complementarity problem in the case where the matrix is a real square hidden Z-matrix. In this context, see also the works [31][32][33][34][35][36][37][38][39].
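The fixed-point characterization recalled above can be checked numerically. The sketch below (a Python/NumPy illustration rather than the authors' MATLAB code; the helper name and the small test data are ours) verifies whether a candidate vector z solves LCP(M, q) by testing z = max(0, (I − M)z − q):

```python
import numpy as np

def is_lcp_solution(M, q, z, tol=1e-10):
    """Fixed-point test: z solves LCP(M, q) iff z = max(0, (I - M) z - q)."""
    z = np.asarray(z, dtype=float)
    lhs = np.maximum(0.0, (np.eye(len(z)) - M) @ z - q)
    return np.allclose(z, lhs, atol=tol)

# With M = I the map reduces to max(0, -q), so the solution is immediate:
M = np.eye(2)
q = np.array([1.0, -2.0])
print(is_lcp_solution(M, q, np.array([0.0, 2.0])))  # True
print(is_lcp_solution(M, q, np.array([1.0, 1.0])))  # False
```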
It is well known that the existence of a solution of a linear complementarity problem cannot be guaranteed for an arbitrary matrix and vector. This leads us to the following questions: Under which conditions on the matrix and the vector does this type of problem admit a solution, and, when a solution exists, under which conditions is it unique? Once existence and uniqueness are assured, how can we express this solution in terms of the data of the problem? Despite the great importance of linear complementarity problems in several areas, they are not yet completely resolved. Many results exist and give good approximations of the solutions, but the main disadvantage of many existing methods is that, for large systems, they require a large number of operations during each iteration and consume large amounts of memory and computation time. This is what drives us to look for new methods for this kind of problem that lower the number of operations per iteration compared to existing methods.
In the present work, we formulate an algorithm that solves the linear complementarity problem LCP(M, q) in a finite number of steps and converges to its solution. We also consider a new class of matrices called E-matrices. The algorithm has proved remarkably effective; a numerical implementation is given in this work.
We organized this document as follows. In Section 1, we give preliminary definitions and list some notations that we need throughout this document. In Section 2, we present the proposed linear complementarity problem under some conditions. In Section 3, we formulate an algorithm for solving this linear complementarity problem with matrices of the class E. In Section 4, we give numerical examples to confirm the theoretical part of our algorithm.

Preliminary and Notations
In this section, we recall preliminary definitions and general notations used in this paper.
For any positive integer n, let ℝ^{n×n} be the set of all real n × n matrices. We denote by I the identity matrix, by e_k the k-th column of I, and by e = (1, ⋯, 1)^T the vector whose entries all equal 1. We use the notations C_{k·} and C_{·k} for the k-th row and the k-th column of a matrix C, respectively. Y_n = {y ∈ ℝ^n : |y| = e} is the set of all ±1 vectors of ℝ^n; its cardinality equals 2^n. For each x ∈ ℝ^n, we define its sign vector sgn(x) by (sgn(x))_i = 1 if x_i ≥ 0 and (sgn(x))_i = −1 if x_i < 0, for i ∈ {1, 2, ⋯, n}; thus sgn(x) ∈ Y_n. For each z ∈ ℝ^n, we write T_z = diag(z_1, ⋯, z_n).
Given a square matrix M, the set of matrices A = {S ∈ ℝ^{n×n} : |S − (I + M)| ≤ I + |M|}, with center I + M and radius I + |M| (the inequality and the absolute value being understood entrywise), is called an interval matrix.

Proposition 3. The interval matrix A is singular (i.e., contains a singular matrix) if and only if the inequality

|(I + M)x| ≤ (I + |M|)|x| (2)

has a nontrivial solution.
Proof. We suppose that A contains a singular matrix S. Then there exists x ≠ 0 such that Sx = 0, which implies |(I + M)x| = |(I + M)x − Sx| ≤ (I + |M|)|x|. Conversely, let (2) hold for some x ≠ 0. Define z ∈ Y_n by z = sgn(x), so that T_z x = |x|, and define y ∈ ℝ^n by y_i = [(I + M)x]_i / [(I + |M|)|x|]_i whenever [(I + |M|)|x|]_i ≠ 0 and y_i = 1 otherwise. Then (I + M)x = T_y(I + |M|)|x| = T_y(I + |M|)T_z x, so the matrix (I + M) − T_y(I + |M|)T_z is singular. Moreover, |y_i| ≤ 1 for each i due to (2), hence |T_y(I + |M|)T_z| ≤ I + |M|, so (I + M) − T_y(I + |M|)T_z ∈ A and A is singular. We use this proposition to establish the regularity or the singularity of the matrix A in some cases.

Proposition 4. Let A be regular and let

[(I + M) + (I − M)T_{z′}]x′ = [(I + M) + (I − M)T_{z″}]x″

hold for some z′, z″ ∈ Y_n and x′ ≠ x″. Then there exists an index i such that z′_i z″_i = −1 and x′_i x″_i > 0.

Abstract and Applied Analysis
Proof. We assume, to the contrary, that for each i with z′_i z″_i = −1 we have x′_i x″_i ≤ 0. We shall prove that in this case |(I + M)(x′ − x″)| ≤ (I + |M|)|x′ − x″|, i.e., the singularity inequality of Proposition 3 holds with x′ − x″ ≠ 0; then A is singular by Proposition 3, which is a contradiction. Indeed, the hypothesis gives (I + M)(x′ − x″) = −(I − M)(T_{z′}x′ − T_{z″}x″), and for each i we have |(T_{z′}x′ − T_{z″}x″)_i| = |z′_i x′_i − z″_i x″_i| ≤ |x′_i − x″_i|: this is clear when z′_i = z″_i, and when z′_i z″_i = −1 it follows from x′_i x″_i ≤ 0, since then |x′_i + x″_i| ≤ |x′_i − x″_i|. Hence |(I + M)(x′ − x″)| ≤ |I − M| |x′ − x″| ≤ (I + |M|)|x′ − x″|.
We use the Sherman–Morrison formula, which gives the inverse of a rank-one update A + uv^T directly from A^{−1}, to establish the efficiency of the proposed algorithm.
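For reference, the formula reads (A + uv^T)^{−1} = A^{−1} − (A^{−1}u v^T A^{−1}) / (1 + v^T A^{−1}u), valid whenever 1 + v^T A^{−1}u ≠ 0. A minimal NumPy sketch (the function name and the small test data are ours):

```python
import numpy as np

def sherman_morrison(A_inv, u, v):
    """Inverse of the rank-one update A + u v^T, obtained from A^{-1}
    in O(n^2) operations instead of a fresh O(n^3) factorization."""
    Au = A_inv @ u                 # A^{-1} u
    vA = v @ A_inv                 # v^T A^{-1}
    return A_inv - np.outer(Au, vA) / (1.0 + v @ Au)

# Agreement with a direct inverse on a small example:
A = np.array([[3.0, 1.0], [2.0, 4.0]])
u = np.array([1.0, 0.0])
v = np.array([0.0, 2.0])
updated = sherman_morrison(np.linalg.inv(A), u, v)
print(np.allclose(updated, np.linalg.inv(A + np.outer(u, v))))  # True
```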

Main Results
It is a known fact (see El foutayeni and Khaladi [27]) that the linear complementarity problem LCP(M, q) is completely equivalent to solving the equation

(I + M)x + (I − M)|x| = q:

if x solves this equation, then z = |x| − x and w = |x| + x solve LCP(M, q). To present the algorithm, we define a new class of matrices that we call the class of E-matrices.

Notation 6. We denote by E the set of E-matrices, where MP denotes the set of principal minors of M and λ_i the eigenvalues of M.

Lemma 7. The following four properties of a matrix A are equivalent:

(1) All the principal minors of A are positive.
(2) For every x ≠ 0 there exists an index i such that x_i (Ax)_i > 0 (i.e., A reverses the sign of no nonzero vector).
(3) Every real eigenvalue of A, and of each of its principal submatrices, is positive.
(4) The linear complementarity problem LCP(A, q) has a unique solution for every q ∈ ℝ^n.

Proof of (1) ⇒ (2). We select an arbitrary vector x ≠ 0 and assume, to the contrary, that y = Ax satisfies x_i y_i ≤ 0 for every i. Let Γ = {i : x_i ≠ 0}, let A(Γ) be the principal submatrix of A with rows and columns indexed by Γ, and let x(Γ) be the vector whose coordinates have indices in Γ and coincide with those of x. Then, for i ∈ Γ, the coordinates z_i of the vector z = A(Γ)x(Γ) coincide with y_i. Since x_i z_i ≤ 0 with x_i ≠ 0 on Γ, there exists a diagonal matrix U ≥ 0 (over Γ × Γ) such that z = −Ux(Γ), i.e., (A(Γ) + U)x(Γ) = 0. Therefore, the matrix A(Γ) + U is singular. But the principal minors of A(Γ) are positive, and the same holds for A(Γ) + U since U is a nonnegative diagonal matrix; in particular det(A(Γ) + U) > 0. This contradiction proves the implication. It is easy to check that the identity matrix is an E-matrix, every symmetric positive definite matrix is an E-matrix, and every Hilbert matrix is an E-matrix (we recall that a Hilbert matrix is a square matrix with general term h_ij = 1/(i + j − 1)).
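Property (1) of Lemma 7 can be checked numerically by brute force for small matrices. The sketch below (illustrative only, with names of our choosing; the enumeration is exponential in n, so it is usable only for small orders) tests every principal submatrix:

```python
import numpy as np
from itertools import combinations

def all_principal_minors_positive(A, tol=1e-12):
    """Return True iff every principal minor of A is positive
    (property (1) of Lemma 7)."""
    n = A.shape[0]
    for k in range(1, n + 1):
        for idx in combinations(range(n), k):
            if np.linalg.det(A[np.ix_(idx, idx)]) <= tol:
                return False
    return True

# The identity and the Hilbert matrix pass; a matrix with a zero
# diagonal entry fails (its 1x1 principal minors are not positive):
H = np.array([[1.0 / (i + j + 1) for j in range(4)] for i in range(4)])
print(all_principal_minors_positive(np.eye(3)))               # True
print(all_principal_minors_positive(H))                       # True
print(all_principal_minors_positive(np.array([[0.0, 1.0],
                                              [1.0, 0.0]])))  # False
```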

Theorem 9.
For every matrix M in the class of E-matrices and every vector q ∈ ℝ^n, the following algorithm "SolveLCP" has a finite number of steps and converges to the solution of the linear complementarity problem LCP(M, q) if it exists.
The linear complementarity problem LCP(M, q) requires w = Mz + q, z ≥ 0, w ≥ 0, and z^T w = 0. Using the change of variables z = |x| − x and w = |x| + x, this reduces to the equation (I + M)x + (I − M)|x| = q. Hence, if we set T_z = diag(z) with z = sgn(x), so that |x| = T_z x, the equation (4) becomes [(I + M) + (I − M)T_z]x = q. The difficulty is that we know neither x nor z in advance; we only know that they must satisfy T_z x = |x| ≥ 0, i.e., z_i x_i ≥ 0 for each i.

Step 1. The algorithm starts with the vector p = 0, and during each pass through the "while" loop one component p_k is increased by 1; hence, after a finite number of steps, p_k would exceed 2^{n−k} and the algorithm ends.
Case 2. If x_i z_i < 0 for some i, let j = min{i : x_i z_i < 0}; we then update z and T_z by changing z_j to −z_j. The modified matrix is the rank-one update T̃_z = T_z − 2z_j e_j e_j^T.
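Since T̃_z differs from T_z by a rank-one term, the system matrix A(z) = (I + M) + (I − M)T_z changes by the rank-one term −2z_j(I − M)e_j e_j^T, so its inverse can be refreshed with the Sherman–Morrison formula instead of being recomputed. A sketch under these assumptions (the function name is ours):

```python
import numpy as np

def flip_inverse(A_inv, M, z, j):
    """Given A_inv = A(z)^{-1} with A(z) = (I+M) + (I-M) T_z, return
    A(z')^{-1} where z' flips the sign of z_j, via Sherman-Morrison
    with u = -2 z_j (I - M) e_j and v = e_j."""
    n = len(z)
    u = -2.0 * z[j] * (np.eye(n) - M)[:, j]
    Au = A_inv @ u
    # v = e_j, so v^T A^{-1} is row j of A_inv and v^T A^{-1} u = Au[j]
    return A_inv - np.outer(Au, A_inv[j, :]) / (1.0 + Au[j])

# Agreement with a direct inverse after flipping z_0:
M = np.array([[4.0, -1.0], [-1.0, 4.0]])
z = np.array([1.0, 1.0])
A = (np.eye(2) + M) + (np.eye(2) - M) @ np.diag(z)
z2 = z.copy()
z2[0] = -1.0
A2 = (np.eye(2) + M) + (np.eye(2) - M) @ np.diag(z2)
print(np.allclose(flip_inverse(np.linalg.inv(A), M, z, 0),
                  np.linalg.inv(A2)))  # True
```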
Step 5. Let us show that if log_2 p_j > n − j, then the matrix A is singular and the solution x does not exist. We prove this by showing that if A is regular, then p_j ≤ 2^{n−j} for each j; that is, each index j can appear at most 2^{n−j} times (j = n, ⋯, 1). We distinguish two cases.

Case 1. j = n. Suppose that n appears at least twice in the sequence, and let x′, z′ and x″, z″ correspond to the two closest occurrences, that is to say, that there is no other occurrence of n between them. Then x′_i z′_i ≥ 0 and x″_i z″_i ≥ 0 for i = 1, ⋯, n − 1, while x′_n z′_n < 0, x″_n z″_n < 0, and z′_n z″_n = −1, which implies x′_n z′_n x″_n z″_n > 0 and x′_n x″_n < 0. Hence x′_i z′_i x″_i z″_i ≥ 0 for all i = 1, ⋯, n − 1. Since relation (11) holds because x = [(I + M) + (I − M)T_z]^{−1} q, and x′ ≠ x″ since x′_n x″_n < 0, it follows from Proposition 4 that there is an index i with z′_i z″_i = −1 and x′_i x″_i > 0, which implies x′_i z′_i x″_i z″_i < 0, a contradiction. So n occurs at most once in the sequence.
Case 2. j < n. Let z′, x′ and z″, x″ correspond to two occurrences of j, so that only indices greater than j are flipped between them. Then, since condition (11) holds because x = [(I + M) + (I − M)T_z]^{−1} q and x′ ≠ x″, Proposition 4 implies the existence of an index i with z′_i z″_i = −1 and x′_i x″_i > 0. Since z′_i z″_i = −1, the index i must have been flipped between the two occurrences of j, so there is an occurrence of some i > j in the sequence between them; this means, by the induction hypothesis, that j cannot appear more than (2^{n−j−1} + ⋯ + 2 + 1) + 1 = 2^{n−j} times.
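The sign-switching scheme analyzed above can be sketched as follows (a minimal Python/NumPy sketch, not the authors' MATLAB implementation; for clarity it re-solves the linear system at each flip rather than applying the Sherman–Morrison update, and the test data are ours):

```python
import numpy as np

def solve_lcp(M, q):
    """Solve [(I+M) + (I-M) T_z] x = q, flipping z_j at the smallest
    index with z_j x_j < 0, then recover z = |x| - x, w = |x| + x."""
    n = len(q)
    I = np.eye(n)
    z = np.ones(n)                       # initial sign vector
    for _ in range(2 ** n):              # finite-step bound of Theorem 9
        x = np.linalg.solve((I + M) + (I - M) @ np.diag(z), q)
        bad = np.flatnonzero(z * x < 0)
        if bad.size == 0:
            return np.abs(x) - x, np.abs(x) + x   # (z, w)
        z[bad[0]] *= -1.0                # flip the smallest violating index
    raise RuntimeError("bound exceeded: no solution found")

M = np.array([[4.0, -1.0], [-1.0, 4.0]])
q = np.array([-1.0, -1.0])
z, w = solve_lcp(M, q)
print(z, w)   # z ≈ [1/3, 1/3], w ≈ [0, 0]
```

On this small E-matrix example the algorithm flips two signs and returns the complementary pair satisfying w = Mz + q, z ≥ 0, w ≥ 0, z^T w = 0.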

Numerical Examples
In this section, we demonstrate the effectiveness of our proposed algorithm in terms of execution time and number of iterations. To do this, we compare our algorithm with other existing methods. In the first example, we consider a simple E-matrix of order 4, for which we find the solution in a short time. In the second example, we compare the results obtained by our method with those obtained by the method of El foutayeni et al., the method of Yu, and the method of Chen-Harker-Kanzow-Smale (CHKS). In the third example, we compare the execution time of our method with Lemke's method and the interior point method.

Example 10. We consider the linear complementarity problem LCP(M, q) of order 4 whose data are given in (12). It is easy to prove that the associated matrix M is an E-matrix. Applying the proposed algorithm, we obtain z = (0, 1, 2, 0)^T, and the elapsed time is 0.000547 seconds.
Example 11. In this example, we compare the results obtained with our method to those obtained with the method of El foutayeni, the method of Yu, and the method of Chen-Harker-Kanzow-Smale (CHKS). To this end, we use our MATLAB program to compute the optimal solution z, the final values w = Mz + q, the number of iterations, and the time in seconds. We consider the linear complementarity problem LCP(M, q), where M = (m_ij)_{1≤i,j≤n} with m_ii = 4, m_{i,i+1} = m_{i+1,i} = −1 for all i = 1, ⋯, n − 1, and m_ij = 0 otherwise, and q = (q_i)_{1≤i≤n} with q_i = −1. Tables 1-4 summarize the results obtained, where Iter is the number of iterations when the algorithm ends and Time is the total cost in seconds to solve the problem.
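The data of this family of test problems can be generated directly. The sketch below (names ours) builds M and q and verifies a property of this particular instance: since M is an irreducible M-matrix, the LCP solution is strictly positive with w = 0, so it reduces to the linear system Mz = −q:

```python
import numpy as np

def build_example(n):
    """Test problem of Example 11: m_ii = 4, m_{i,i+1} = m_{i+1,i} = -1,
    zero elsewhere, and q_i = -1."""
    M = 4.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
    q = -np.ones(n)
    return M, q

# z = M^{-1} e > 0 and w = Mz + q = 0 form the complementary pair:
M, q = build_example(5)
z = np.linalg.solve(M, -q)
print(np.all(z > 0), np.allclose(M @ z + q, 0.0))  # True True
```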

From Figures 1 and 2, we can see that our method is competitive with the method of El foutayeni, the method of Yu, and the CHKS method, in terms of both the number of iterations and the CPU time in seconds.
Example 12. In this example, we compare three different methods for solving a linear complementarity problem LCP(M, q): our method, Lemke's method, and the interior point method [30]. We take the same example, where M = (m_ij)_{1≤i,j≤n} with m_ii = 4, m_{i,i+1} = m_{i+1,i} = −1 for all i = 1, ⋯, n − 1, and 0 otherwise, and q = (q_i)_{1≤i≤n} with q_i = −1. The matrix M is positive definite, which ensures the convergence of Lemke's method. In Table 5, the first column gives the dimension of the linear complementarity problem; the second, third, and fourth columns give the computation time in seconds for Lemke's method, the interior point algorithm, and our algorithm, respectively.
Based on this table, in the case n = 1000, Lemke's method is prohibitively slow (it needs 334 seconds to display the results), whereas our method needs only 0.928443 seconds to find the solution of LCP(M, q); the same holds for the interior point method. We observe that our algorithm is faster than the other algorithms in execution time, and we deduce that our method is effective.

Conclusion
Solving the linear complementarity problem (LCP) has been the goal of much research. In this article, we have proposed an algorithm for solving the linear complementarity problem. This algorithm has a finite number of steps and converges to the solution. In addition, we have considered a new class of matrices, called E-matrices, for which the algorithm is efficient. As a perspective, we seek a simple method for solving linear complementarity problems with an arbitrary matrix M and vector q, without case distinctions on the matrix M, that remains fast in execution time and number of iterations. A numerical implementation of the algorithm is given in this work.

Data Availability
No data were used to support this study.

Conflicts of Interest
The authors declare that they have no conflicts of interest.