Modified SOR-Like Method for Absolute Value Equations

In this paper, based on the work of Ke and Ma, a modified SOR-like method is presented to solve the absolute value equations (AVE), which is obtained by equivalently expressing the implicit fixed-point equation form of the AVE as a two-by-two block nonlinear equation. Convergence conditions for the modified SOR-like method are established under certain assumptions. Numerical experiments show that the computational efficiency of the modified SOR-like method is better than that of the SOR-like method.


Introduction
Consider the absolute value equations (AVE)

Ax − |x| = b,  (1)

where A ∈ R^{n×n}, b ∈ R^n, and |x| denotes the vector whose components are the absolute values of the components of x ∈ R^n. Replacing "|x|" in (1) by "B|x|" with B ∈ R^{n×n} naturally generates the general AVE [1,2]. At present, the AVE attracts considerable attention because some optimization problems, such as linear programming, convex quadratic programming, and the linear complementarity problem [3][4][5][6][7], can be formulated as the AVE (1). In recent years, a great deal of effort has been devoted to developing iteration methods that efficiently find a numerical solution of the AVE (1). For example, a generalized Newton method for solving the AVE (1) was presented in [8] and is simply described as follows:

x^{(k+1)} = (A − D(x^{(k)}))^{−1} b,  (2)

where D(x^{(k)}) = diag(sign(x^{(k)})), and D(x) = diag(sign(x)) denotes the diagonal matrix corresponding to sign(x). There are other forms of the generalized Newton method; see [9][10][11][12][13] for more details. Clearly, at every iteration step of the generalized Newton method (2), a linear system with the matrix A − D(x^{(k)}) must be solved. Since the matrix A − D(x^{(k)}) changes with the iteration index k, the generalized Newton method may be very costly. To avoid this changing iteration matrix, the Picard iteration method in [14] is easily considered:

x^{(k+1)} = A^{−1}(|x^{(k)}| + b).  (3)

Clearly, the Picard iteration method (3) needs the inverse of the fixed matrix A. Similarly, by reformulating the AVE (1) as a nonlinear equation with two-by-two block form and combining it with the classical SOR iteration, an SOR-like iteration method was proposed in [15] to solve it:

x^{(k+1)} = (1 − ω)x^{(k)} + ωA^{−1}(y^{(k)} + b),
y^{(k+1)} = (1 − ω)y^{(k)} + ω|x^{(k+1)}|,  (4)

where ω > 0. Some convergence conditions of the SOR-like iteration method were given when the involved parameter satisfies certain conditions. Further, from the aspect of the iteration matrix of the SOR-like iteration method, some new convergence conditions were presented in [16].
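The Picard iteration (3) and the SOR-like iteration (4) can be sketched as follows; this is a minimal illustration, and the tolerance, iteration cap, and stopping rule based on the relative residual are our own choices rather than the paper's:

```python
import numpy as np

def picard(A, b, x0, tol=1e-10, max_it=500):
    """Picard iteration (3): x^(k+1) = A^(-1)(|x^k| + b)."""
    x = x0.copy()
    for k in range(1, max_it + 1):
        # One linear solve with the same fixed matrix A per step.
        x = np.linalg.solve(A, np.abs(x) + b)
        if np.linalg.norm(A @ x - np.abs(x) - b) <= tol * np.linalg.norm(b):
            return x, k
    return x, max_it

def sor_like(A, b, x0, y0, omega, tol=1e-10, max_it=500):
    """SOR-like iteration (4) of Ke and Ma [15] with parameter omega."""
    x, y = x0.copy(), y0.copy()
    for k in range(1, max_it + 1):
        x = (1 - omega) * x + omega * np.linalg.solve(A, y + b)
        y = (1 - omega) * y + omega * np.abs(x)
        if np.linalg.norm(A @ x - np.abs(x) - b) <= tol * np.linalg.norm(b):
            return x, k
    return x, max_it
```

Both variants solve a linear system with the same matrix A at every step; avoiding that solve is precisely the motivation for the modified method of this paper.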
It is noted that if the matrix A in (3) or (4) is ill-conditioned, then at every iteration step of the Picard and SOR-like methods an ill-conditioned linear system needs to be solved. In this case, the cost of computing the action of the inverse of A may be high. To avoid inverting the matrix A, Li [17] extended the classical AOR iteration method to the AVE and discussed the convergence properties of the AOR method. By using the Gauss-Seidel splitting, the generalized Gauss-Seidel (GGS) iteration method was presented in [18] to solve the AVE (1).
In this paper, we focus on the SOR-like iteration method for solving the AVE (1). By equivalently expressing the implicit fixed-point equation of the AVE as a nonlinear equation with two-by-two block form, a modified SOR-like iteration method is derived from a concrete matrix splitting of the involved coefficient matrix. A considerable advantage of the modified SOR-like iteration method is that the inverse of the matrix A is avoided. From this point of view, the computing efficiency of the modified SOR-like iteration method may be better than that of the SOR-like iteration method when both are used to solve the AVE (1).
For our later analysis, some terminology is briefly explained here. Let R^n be the finite dimensional Euclidean space, whose norm is denoted by ‖·‖. For x ∈ R^n, sign(x) denotes the vector with elements equal to 1, 0, or −1, depending on whether the corresponding element of x is larger than zero, equal to zero, or less than zero. The rest of this paper is organized as follows. In the second section, the modified SOR-like iteration method is designed and its convergence conditions are presented. In the third section, some numerical experiments are reported. In the fourth section, some concluding remarks end this paper.

Modified SOR-Like Iteration Method
In this section, the modified SOR-like iteration method is presented. For this purpose, by setting y = |x| in the AVE (1), we have

Ax − y = b,  −|x| + y = 0,

i.e., the two-by-two block nonlinear equation

[ A      −I ] [x]   [b]
[ −D(x)   I ] [y] = [0],  (6)

where D(x) = diag(sign(x)). Let A = D − L − U, where D = diag(A), and L and U are the strictly lower and strictly upper triangular matrices obtained from A, respectively. If we take this splitting of the coefficient block A, then equation (6) can be rewritten in the fixed-point form

x = (D − L)^{−1}(Ux + y + b),  y = |x|,

and, introducing a relaxation parameter ω > 0, we have

x = (1 − ω)x + ω(D − L)^{−1}(Ux + y + b),
y = (1 − ω)y + ω|x|.  (10)

Based on equation (10), the modified SOR-like iteration method is naturally obtained and described below.

The modified SOR-like iteration method: let the initial vectors x^{(0)} ∈ R^n and y^{(0)} ∈ R^n be given; for k = 0, 1, 2, …, until convergence, compute

x^{(k+1)} = (1 − ω)x^{(k)} + ω(D − L)^{−1}(Ux^{(k)} + y^{(k)} + b),
y^{(k+1)} = (1 − ω)y^{(k)} + ω|x^{(k+1)}|.  (11)

Since D − L is lower triangular, each step requires only a forward substitution instead of a solve with A. Lemma 1 is quoted for later use.
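The iteration described above can be sketched in Python. Because the display equations of this section were lost in extraction, the concrete splitting used below — a forward substitution with the lower triangular factor D − L in place of a solve with A, with D = diag(A) and −L, −U the strictly lower and upper parts of A — is our reading of the construction described in the text and should be treated as an assumption:

```python
import numpy as np

def modified_sor_like(A, b, x0, y0, omega, tol=1e-10, max_it=500):
    """Sketch of a modified SOR-like step built on the splitting
    A = D - L - U (D = diag(A); -L and -U the strictly lower and
    upper parts of A), so only a triangular solve is needed."""
    D_minus_L = np.tril(A)   # D - L: lower triangular part of A
    U = -np.triu(A, 1)       # strictly upper part of A, sign flipped
    x, y = x0.copy(), y0.copy()
    for k in range(1, max_it + 1):
        # Forward substitution with D - L replaces the solve with A.
        x = (1 - omega) * x + omega * np.linalg.solve(D_minus_L, U @ x + y + b)
        y = (1 - omega) * y + omega * np.abs(x)
        if np.linalg.norm(A @ x - np.abs(x) - b) <= tol * np.linalg.norm(b):
            return x, k
    return x, max_it
```

With a triangular solve per step, each iteration costs O(n^2) dense (or O(nnz) sparse) instead of a full linear solve with A, which is the computational advantage claimed for the method.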

Let the iteration errors be

e_x^{(k)} = x* − x^{(k)},  e_y^{(k)} = y* − y^{(k)},  (12)

where (x*, y*) is the solution pair of equation (6) and (x^{(k)}, y^{(k)}) is generated by the iteration method (11). Then, the following convergence conditions for the modified SOR-like iteration method (11) can be given (see Theorem 1).
If sα < 1, where s and α are the quantities defined above, then the modified SOR-like iteration method (11) is convergent.

Proof. Let us subtract equation (11) from its fixed-point counterpart, with (x*, y*) being the solution pair of equation (6). Then, from (18), we can obtain a recursion for the errors. By left-multiplying (20) by the corresponding nonnegative matrix, we arrive at a bound of the form (‖e_x^{(k+1)}‖, ‖e_y^{(k+1)}‖)^T ≤ T(‖e_x^{(k)}‖, ‖e_y^{(k)}‖)^T. Clearly, if ρ(T) < 1, then lim_{k→∞} T^k = 0. This implies that the error norms tend to zero. In this way, the iteration sequence (x^{(k)}, y^{(k)}) produced by the modified SOR-like method (11) converges to the solution of equation (6).
Next, we just need to obtain sufficient conditions such that ρ(T) < 1. Assume that λ is an eigenvalue of the matrix T. Then λ satisfies the characteristic equation of T, which is equivalent to (26). Applying Lemma 1 to equation (26), |λ| < 1 is equivalent to (27). Therefore, if condition (14) holds, then ρ(T) < 1.

If the idea of this proof for the modified SOR-like method (11) is extended to the SOR-like method (4), then the corresponding matrix T is

T = [ |1 − ω|        ων
      ω|1 − ω|   |1 − ω| + ω²ν ],

where ν = ‖A^{−1}‖ (see [15]). By simple computations, we can get that if (29) holds, then the SOR-like method (4) is convergent. Therefore, we obtain a new convergence condition for the SOR-like method (4); see the following result.
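The condition ρ(T) < 1 can be checked numerically for sample parameter values. The sketch below assumes the 2×2 bound matrix of the SOR-like method [15] has entries |1 − ω|, ων, ω|1 − ω|, and |1 − ω| + ω²ν with ν = ‖A^{−1}‖ (a reconstruction, since the display equations were lost in extraction), and the sample values of ω and ν are our own:

```python
import numpy as np

def rho_T(omega, nu):
    # Spectral radius of the 2x2 nonnegative bound matrix for the
    # SOR-like method [15] (reconstructed form; treat as an assumption).
    t = abs(1 - omega)
    T = np.array([[t,           omega * nu],
                  [omega * t,   t + omega**2 * nu]])
    return max(abs(np.linalg.eigvals(T)))
```

For instance, with ω = 1 the matrix reduces to [[0, ν], [0, ν]], whose spectral radius is ν, so convergence then hinges on ν = ‖A^{−1}‖ < 1.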

Theorem 2. Let the conditions of Theorem 1 be satisfied. Denote
then ‖e_x^{(k+1)}‖ and ‖e_y^{(k+1)}‖ are bounded in terms of ‖e_x^{(k)}‖ and ‖e_y^{(k)}‖ by the quantities denoted above.

Mathematical Problems in Engineering

Comparing Theorem 2 with Theorem 3.1 in [15], it is easy to see that the region of the parameter ω in Theorem 2 is the same as that in Theorem 3.1 in [15]: both demand 0 < ω < 2. The difference between Theorem 2 and Theorem 3.1 in [15] lies in α and β. The former requires α + √β < 1, whereas the latter requires the condition stated in Theorem 3.1 in [15], which is more complicated in form.

Corollary 1. Let the conditions of Theorem 1 be satisfied. Denote
then ‖e_x^{(k+1)}‖ and ‖e_y^{(k+1)}‖ are bounded in terms of ‖e_x^{(k)}‖ and ‖e_y^{(k)}‖ by the quantities denoted above.

Numerical Examples
In this section, two numerical examples are provided to show the effectiveness of the modified SOR-like method from two aspects: the iteration step (denoted by "IT") and the computing time (denoted by "CPU"). We compare the modified SOR-like method with the SOR-like method [15].
Here, all initial vectors for the two tested methods are set to the zero vector, and both methods are terminated if the relative residual error RES = ‖Ax^{(k)} − |x^{(k)}| − b‖/‖b‖ falls below the prescribed tolerance, or if the iteration step exceeds 500. All tests are performed in MATLAB 7.0. In the following tables, "MSOR" and "SOR" denote the modified SOR-like method and the SOR-like method [15], respectively, and "−" denotes that the iteration steps exceed 500 or the CPU time exceeds 500 seconds.
To obtain a fast convergence rate for the modified SOR-like method and the SOR-like method [15], the experimentally optimal parameter ω_exp is adopted, namely the value that yields the smallest iteration step.
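The selection of ω_exp can be reproduced by a brute-force grid search over (0, 2). The sketch below uses the SOR-like scheme of [15]; the grid spacing and the tolerance of 1e-8 are hypothetical choices, since the paper's exact values are not reproduced here:

```python
import numpy as np

def res(A, x, b):
    # Relative residual: RES = ||A x - |x| - b|| / ||b||.
    return np.linalg.norm(A @ x - np.abs(x) - b) / np.linalg.norm(b)

def sor_like_steps(A, b, omega, tol=1e-8, max_it=500):
    """Iteration count of the SOR-like method [15] from the zero start
    (returns max_it + 1 on failure, mimicking the "-" table entries)."""
    x = np.zeros_like(b)
    y = np.zeros_like(b)
    for k in range(1, max_it + 1):
        x = (1 - omega) * x + omega * np.linalg.solve(A, y + b)
        y = (1 - omega) * y + omega * np.abs(x)
        if res(A, x, b) <= tol:
            return k
    return max_it + 1

def omega_exp(A, b, grid=None):
    """Experimentally optimal parameter: the grid point with the
    smallest iteration count (the grid itself is an assumption)."""
    if grid is None:
        grid = np.arange(0.1, 2.0, 0.1)
    return min(grid, key=lambda w: sor_like_steps(A, b, w))
```

The same search applies verbatim to the modified method by swapping in its iteration, which is how matching ω_exp values for both methods can be detected.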
Example 1 (see [6,7,17]). Let the AVE in (1) be composed with the matrix A and the right-hand side b specified in [6,7,17]. In Table 1, we list some numerical results of the modified SOR-like method and the SOR-like method for Example 1. From Table 1, it is easy to see that both methods quickly converge to the unique solution x* for the different dimensions n when the experimentally optimal parameters ω_exp are used. An interesting fact is that the experimentally optimal parameters ω_exp of both methods are the same. Furthermore, the value of the experimentally optimal parameter is stable and does not change as the dimension n increases. We also find that the iteration steps of both methods are the same and, likewise, remain stable as n increases. These numerical results show that both methods are suitable for solving the AVE (1).
It is noted that, in terms of elapsed CPU time, the modified SOR-like method consumes less time than the SOR-like method. That is to say, the modified SOR-like method has better computational efficiency, because each of its iteration steps is cheaper than that of the SOR-like method.
In brief, the numerical results in Table 1 show that, under certain conditions, the computational efficiency of the modified SOR-like method surpasses that of the SOR-like method.

Example 2. For the AVE in (1), we chose a randomly generated matrix A whose singular values all exceed 1. The right-hand side b is set to b = Ax* − |x*|, where x* is the prescribed exact solution. For Example 2, we again compare the modified SOR-like method with the SOR-like method in [15]; see Table 2 for the concrete numerical results. Table 2 shows that both methods quickly converge to the unique solution x* when the experimentally optimal parameters ω_exp are applied. These numerical results further confirm the observations from Table 1, i.e., the modified SOR-like method surpasses the SOR-like method in terms of computational efficiency under certain conditions.
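A random test problem of this kind — a matrix whose singular values all exceed 1, which guarantees a unique solution of the AVE — can be generated as below. The construction via an SVD with prescribed singular values, the interval (2, 10), and the random exact solution are our own choices, since the paper's exact recipe for A and x* is not reproduced here:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100

# Build a random A with all singular values in (2, 10) via an SVD;
# any interval bounded away from 1 keeps ||A^{-1}||_2 < 1.
M = rng.standard_normal((n, n))
Q1, _, Q2t = np.linalg.svd(M)
sigma = 2.0 + 8.0 * rng.random(n)
A = Q1 @ np.diag(sigma) @ Q2t

# Prescribe the exact solution and form b = A x* - |x*|.
x_star = rng.standard_normal(n)
b = A @ x_star - np.abs(x_star)

# Since ||A^{-1}||_2 = 1/min(sigma) < 1/2, the Picard iteration (3)
# is a contraction here and recovers x*.
x = np.zeros(n)
for _ in range(200):
    x = np.linalg.solve(A, np.abs(x) + b)
```

The contraction argument is standard: ‖A^{−1}(|u| − |v|)‖ ≤ ‖A^{−1}‖·‖u − v‖, so any bound ‖A^{−1}‖ < 1 makes the fixed point unique.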

Conclusion
In this paper, by equivalently expressing the absolute value equations (AVE) as a nonlinear equation with two-by-two block form, we have presented a modified SOR-like method to solve the AVE and discussed its convergence properties under certain conditions. Numerical experiments show that, under certain conditions, the computational efficiency of the modified SOR-like method surpasses that of the SOR-like method in [15]. In addition, it is worth investigating in the future how to determine the theoretically optimal parameter ω that attains the smallest iteration step of the modified SOR-like method, although this is a very difficult task.

Data Availability
The data used to support the findings of this study are included within the article.

Conflicts of Interest
The authors declare that they have no conflicts of interest.