MN-PGSOR Method for Solving Nonlinear Systems with Block Two-by-Two Complex Symmetric Jacobian Matrices

For solving large sparse linear systems with 2 × 2 block structure, the generalized successive overrelaxation (GSOR) iteration method is an efficient iteration method. Based on the GSOR method, the PGSOR method introduces a preconditioning matrix with a new parameter for the coefficient matrix, which can enhance efficiency. To solve nonlinear systems whose Jacobian matrices are complex and symmetric with block two-by-two form, we use the PGSOR method as an inner iteration, together with the modified Newton method as an efficient outer iteration. This new method is called the modified Newton-PGSOR (MN-PGSOR) method. Local convergence properties of MN-PGSOR are analyzed under the Hölder condition. Finally, we compare the new method with some previous methods in the numerical results. The MN-PGSOR method is superior in both iteration steps and computing time.


Introduction
We consider solving the nonlinear system

F(x) = 0, (1)

where F: D ⊂ C^{2n} ⟶ C^{2n} is a nonlinear and continuously differentiable function. Here, the Jacobian matrix of F(x) is large and sparse and has the block form in which W(x), T(x) ∈ R^{n×n} are both symmetric and real positive semidefinite. It is easy to see that F′(x) is complex symmetric. Denote by i = √(−1) the imaginary unit. Following the earlier research in [1, 2] on solving nonlinear systems, the solution of (1) can be obtained by solving the Newton equations:

F′(x_k) s_k = −F(x_k), s_k = x_{k+1} − x_k, k = 0, 1, 2, . . . . (3)

However, when the size of problem (1) becomes large, it is difficult and expensive to find an exact solution of these equations. The presentation and application of the inexact Newton method [3-5] overcame this difficulty and accelerated the computation of the solution of (3). Furthermore, the inexact Newton method can be organized as an iteration with inner and outer steps, whose advantage is that explicitly computing and storing the inverses of the Jacobian matrices is unnecessary. Due to the excellent performance of this inner-outer iteration framework, much research has focused on using different iteration methods as the inner iteration of the Newton method, such as the Krylov subspace methods [6, 7] and the HSS-based methods [8-12].
In order to solve the Newton equations (3) more efficiently, the modified Newton method [13] was presented in 2007, and it reads

y_k = x_k − F′(x_k)^{−1} F(x_k),
x_{k+1} = y_k − F′(x_k)^{−1} F(y_k), k = 0, 1, 2, . . . . (4)
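A minimal sketch of the two-step scheme (4), assuming a generic F and Jacobian J and using a dense direct solve where the full MN-PGSOR method will later substitute an inner PGSOR iteration; the function name and the toy system are illustrative, not from the paper:

```python
import numpy as np

def modified_newton(F, J, x0, tol=1e-10, max_iter=50):
    """Two-step modified Newton iteration (4): each outer step reuses
    one Jacobian evaluation F'(x_k) for two linear solves."""
    x = x0.astype(complex)
    nF0 = np.linalg.norm(F(x))
    for _ in range(max_iter):
        A = J(x)                           # Jacobian F'(x_k), used twice
        y = x - np.linalg.solve(A, F(x))   # half step: y_k
        x = y - np.linalg.solve(A, F(y))   # full step with the same Jacobian
        if np.linalg.norm(F(x)) <= tol * nF0:
            break
    return x

# Usage: a componentwise toy system F(x) = x^2 - 2 with diagonal Jacobian.
F = lambda x: x**2 - 2
J = lambda x: np.diag(2 * x)
root = modified_newton(F, J, np.array([1.0, 1.5]))
```

Each outer step evaluates F′(x_k) once and reuses it for both solves, which is the source of the efficiency gain highlighted below.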
The modified Newton method has become a mainstream approach for solving nonlinear systems. It is obvious that the modified Newton method (4) achieves an improvement: it needs only one more step than the inexact Newton method, but the convergence speed and convergence order are well improved. Drawing support from the efficient performance of the modified Newton method, which is a sophisticated choice for the outer method, we need to find more suitable inner methods for various forms of nonlinear systems so that we can improve efficiency. To solve the nonlinear systems (1), whose Jacobian matrix is complex symmetric with block two-by-two form, faster, we focus on finding an efficient inner iteration method for the block complex linear system (5), where W, T ∈ R^{n×n} are both symmetric and real positive semidefinite. System (5) is a special case of the generalized saddle-point problem [14], and it has wide applications in finite element discretizations of constrained optimization problems for elliptic partial differential equations [15-17].
To solve linear systems with block form faster, several efficient methods have been proposed, such as the preconditioned Krylov subspace method [18], the GMRES method [19], and the generalized successive overrelaxation method [20]. The HSS method [22] was proposed by Bai et al. and has been studied in many fields because of its effectiveness. In 2013, Bai et al. proposed an HSS-based method, called the preconditioned MHSS (PMHSS) iteration method, to solve linear systems with block form, and it can be applied to distributed control problems [21]. Over the years, based on the PMHSS method, new iteration methods for linear systems with block complex symmetric matrices have been proposed. In 2018, Wang et al. [23] proposed a double preconditioned modified HSS (DPMHSS) method to solve (5); it proved to be an attractive approach because each step of the iteration can be carried out by direct methods. Meanwhile, they showed how to use the DPMHSS method as an inner iteration, coordinated with the modified Newton method, to solve nonlinear systems.
This new approach is called the modified Newton-DPMHSS method. In the same year, in order to improve the DPMHSS method, Chen et al. added a preconditioner for the coefficient matrix in (5), and the MDPMHSS method was proposed in [24]. The MDPMHSS method can also be used as the inner iteration for the modified Newton equations, and it accelerates the convergence rate of the whole iteration compared with the modified Newton-DPMHSS method. In 2020, Qi et al. [25] proposed the modified Newton-AGSOR method, which can efficiently solve nonlinear systems with complex symmetric Jacobian matrices of block form. The inner AGSOR iteration in [25] is a modification of the generalized successive overrelaxation (GSOR) method [20] that introduces two parameters to accelerate the GSOR method. The modified Newton-AGSOR method performs better than several other recently proposed methods, and fewer conditions are needed on the corresponding Jacobian matrix.
In this paper, we focus on improving the inner iteration of the modified Newton method so as to solve nonlinear systems with complex symmetric Jacobian matrices of block form faster. Following the idea of the GSOR method, which solves block linear systems efficiently, we introduce a preconditioning matrix with a new parameter for the coefficient matrix in (5), and we propose a preconditioned GSOR (PGSOR) method for block linear systems by integrating the new preconditioning matrix with the coefficient matrix in (5). We use the PGSOR method as the inner iteration in the process, while the modified Newton method serves as the outer iteration. We thus obtain a new method for nonlinear systems with block two-by-two complex symmetric Jacobian matrices, which we denote the MN-PGSOR method.
We organize the rest of this paper as follows. In Section 2, we give the PGSOR method, which solves complex symmetric linear systems with block two-by-two form, and analyze its convergence properties. In Section 3, we introduce the modified Newton-PGSOR (MN-PGSOR) method for nonlinear systems with complex symmetric Jacobian matrices of block form; the modified Newton method is used as the outer iteration, while the PGSOR method is applied as the inner iteration. In Section 4, we give the local convergence analysis of the MN-PGSOR method under the Hölder continuity condition. In Section 5, we give numerical examples whose results show the efficiency of the MN-PGSOR method compared with some other methods. In the final section, we summarize the results we have obtained.

The PGSOR Method for Solving Linear Systems with Block Form
In this section, we consider solving the linear system (5). Assume that W and T ∈ R n×n are both symmetric and real positive semidefinite, and at least one of them is positive definite. Based on the GSOR method, we design the PGSOR method by preconditioning the linear system (5).
where c is a positive constant and I is the n × n identity matrix. Then, we can split the coefficient matrix of (7) as where Hence, we obtain the PGSOR method for solving the linear system (5), and we can rewrite the iteration equation for solving (7) as follows: where which is the iteration matrix, and Furthermore, motivated by the ideas of accelerated methods for solving block complex linear systems, such as the double preconditioned modified HSS (DPMHSS) method in [23], we multiply the block diagonal matrix Ω = diag(αI, βI) on both sides of (7); then, we obtain Actually, we can solve (13) by the same splitting as in (8); this yields the preconditioned accelerated GSOR (PAGSOR) method in [26]. When α = β, the PAGSOR method reduces to the PGSOR method. Although the PAGSOR method has a smaller spectral radius than the AGSOR method under suitable conditions, when it is used as an inner iteration for solving nonlinear systems, the increased number of parameters affects the efficiency. Due to this lack of practicability of the PAGSOR method, we only use the PGSOR method as the inner iteration in our work.
Obviously, iteration equation (10) can be transformed to the following algorithm.
Compute x_k = (y_k^T, z_k^T)^T for k = 0, 1, 2, . . . via the following procedure until x_k meets the stopping criterion: where α is a positive constant. Here, In each of the two subsystems of the above method, the coefficient matrix W is symmetric positive definite, so they can be solved efficiently by direct methods. In the following, the convergence properties of the PGSOR method are discussed by theoretical analysis.
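The exact PGSOR updates use the preconditioned coefficient matrix from (7). As a hedged illustration of the underlying GSOR-type sweep, the sketch below applies a relaxed block Gauss-Seidel iteration to the real block system [[W, −T], [T, W]] [y; z] = [p; q]; the function name, the dense solves, and the test matrices are illustrative assumptions, not the paper's implementation:

```python
import numpy as np

def gsor_sweep(W, T, p, q, alpha, tol=1e-10, max_iter=500):
    """GSOR-type iteration for [[W, -T], [T, W]] [y; z] = [p; q]:
    each half-step solves a subsystem whose coefficient matrix is W."""
    n = W.shape[0]
    y, z = np.zeros(n), np.zeros(n)
    for _ in range(max_iter):
        # relaxed solve of W y = T z + p, then of W z = q - T y
        y = (1 - alpha) * y + alpha * np.linalg.solve(W, T @ z + p)
        z = (1 - alpha) * z + alpha * np.linalg.solve(W, q - T @ y)
        r = np.concatenate([W @ y - T @ z - p, T @ y + W @ z - q])
        if np.linalg.norm(r) < tol:
            break
    return y, z

# Usage on a small symmetric positive definite example.
W = np.array([[4., 1.], [1., 3.]])
T = np.eye(2)
y, z = gsor_sweep(W, T, np.array([1., 0.]), np.array([0., 1.]), alpha=1.0)
```

Both half-steps only ever solve systems with the symmetric positive definite matrix W, mirroring the direct-solve structure noted above.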
Proof. Let 0 ≠ x ∈ C^n be an eigenvector of S and μ the corresponding eigenvalue; then, or From the assumptions, it is easily seen that μ ≥ 0 and that cW + T is symmetric positive definite. Then, Therefore,

Journal of Mathematics
Because the function f(μ) = (cμ − 1)/(c + μ) is monotonically increasing with respect to μ, we conclude that where μ_min and μ_max are the smallest and largest eigenvalues of S, respectively. Then, the proof is completed.
□

Remark 1. It is easy to see that Lemma 1 is consistent with Lemma 2.2 in [27], and the conditions on W, T are the same as those in [27]. Therefore, the PGSOR method for solving the block complex linear system (5) has the same convergence properties as the PGSOR method in [27]. The proof of the convergence properties of the PGSOR method for solving block two-by-two real linear systems can be found in [27]; in essence, the PGSOR method employs the GSOR method on a new equivalent formulation of the linear system. The PGSOR iteration method is convergent if Besides, Theorem 2.4 in [27] gives the analysis of the optimal parameters (α*, c*). Then, we can directly obtain the optimal parameters of the PGSOR method for solving (5) as follows: and the corresponding optimal convergence radius of the PGSOR method is where and μ_min, μ_max are the smallest and largest eigenvalues of S = W^{−1}T, respectively.

Remark 2. From Theorem 2.3 in [27], we see that the PGSOR method has a smaller spectral radius than the GSOR method if c satisfies where μ_min and μ_max are the smallest and largest eigenvalues of S = W^{−1}T, respectively. Assuming that the conditions in Remark 1 are satisfied, we have The proof of these results refers to Corollary 2.2 in [27]. In our experiments, since it is difficult to find the experimentally optimal parameters, the region of theoretically optimal parameters above serves as a reference.
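Since the theoretical optimal parameters serve only as a reference, in practice one can locate good parameters by scanning the spectral radius of the iteration matrix over a grid. The sketch below does this for the plain GSOR splitting of A = [[W, −T], [T, W]] (a preconditioned variant would additionally scan c); the splitting matrices, function names, and grid are illustrative assumptions:

```python
import numpy as np

def gsor_iteration_matrix(W, T, alpha):
    """Iteration matrix L(alpha) = (D - alpha*L)^(-1) ((1-alpha)*D + alpha*U)
    for the GSOR splitting A = D - L - U of A = [[W, -T], [T, W]],
    with D = diag(W, W), L = [[0, 0], [-T, 0]], U = [[0, T], [0, 0]]."""
    n = W.shape[0]
    Z = np.zeros((n, n))
    D = np.block([[W, Z], [Z, W]])
    L = np.block([[Z, Z], [-T, Z]])
    U = np.block([[Z, T], [Z, Z]])
    return np.linalg.solve(D - alpha * L, (1 - alpha) * D + alpha * U)

def scan_alpha(W, T, alphas):
    """Return the grid point with the smallest spectral radius."""
    radii = [np.max(np.abs(np.linalg.eigvals(gsor_iteration_matrix(W, T, a))))
             for a in alphas]
    i = int(np.argmin(radii))
    return alphas[i], radii[i]

# Usage: scan alpha on a small example with rho(W^{-1}T) < 1.
W = np.diag([4.0, 3.0]); T = np.eye(2)
alpha_star, rho_star = scan_alpha(W, T, np.linspace(0.1, 1.9, 37))
```

A spectral radius below one at the selected grid point certifies convergence of the corresponding sweep; the same scan applied to the preconditioned operator would recover the stability of the optimal parameters observed in Section 5.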

The MN-PGSOR Method
In the following, we introduce the modified Newton-PGSOR (MN-PGSOR) method to solve nonlinear systems whose Jacobian matrices have block form. Firstly, we utilize the PGSOR method as the inner iteration within the modified Newton iteration mentioned in the previous section. We focus on solving the large sparse nonlinear system where F: D ⊂ C^{2n} ⟶ C^{2n} is nonlinear and continuously differentiable. Here, the Jacobian matrix of F(x) has the block two-by-two form: where W(x), T(x) ∈ R^{n×n} are both positive semidefinite and symmetric, and at least one of them is positive definite. Note that this condition on F′(x) differs from that in the AGSOR method. In order to enhance computational efficiency, we utilize the PGSOR method as the inner iteration. Moreover, the PGSOR method only requires solving matrix equations whose coefficient matrices are symmetric positive definite, so the calculation in each iteration can be carried out by direct algorithms. Secondly, we introduce the MN-PGSOR method to solve the Newton equations derived from the modified Newton method (4) as follows: For convenience of writing, we decompose F(x) as where U(x), V(x) ∈ C^n. Combining this with the PGSOR method in (14), we have the following MN-PGSOR method.

The MN-PGSOR Method
(1) Give an initial guess x_0 ∈ C^{2n}. Set two positive integer sequences {l_k}_{k=0}^∞ and {m_k}_{k=0}^∞, and set real positive constants α, c, and tol, the tolerance that determines the stopping criterion. (2) Compute x_{k+1} for k = 0, 1, 2, . . . via the following scheme until x_k satisfies ‖F(x_k)‖ ≤ tol · ‖F(x_0)‖: (ii) For l = 0, 1, . . . , l_k − 1, apply the PGSOR method to the first system in (28): (iii) For m = 0, 1, . . . , m_k − 1, apply the PGSOR method to the second system in (28): We give some remarks on the MN-PGSOR method as follows.
Remark 3. According to Remark 1, when the PGSOR method is applied in the MN-PGSOR method to solve the two Newton equations (28), the parameters need to be selected in the convergence region given there. For the theoretically optimal parameters of the PGSOR method, refer to the results in Remarks 1 and 2. The MN-PGSOR method performs k outer iterations to complete the procedure. Although the coefficient matrix of the Newton equations (28) varies with x_k from one outer iteration to the next, the theoretical analysis of the optimal parameters can serve as a reference in practical computation, and the range of the optimal parameter α* in Remark 2 is especially important.
From the above algorithm, after direct manipulations, the MN-PGSOR method can be transformed into the following expressions. The expressions for d_{k,l_k} and h_{k,m_k} can be obtained straightforwardly, where Furthermore, the MN-PGSOR method is equivalent to Obviously, the Jacobian matrix F′(x) can be split as We find that G_{α,c}(x) and R_{α,c}(x) satisfy and then From the results of (36) and (37), we can transform form (34) of the MN-PGSOR method into another expression:
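As a hedged sketch of the inner-outer structure just described (not the exact expressions (34)-(38)), the following code assumes the Jacobian splits as F′(x) = W(x) + iT(x), rewrites each Newton equation as the real block system [[W, −T], [T, W]], and approximates its solution with a fixed number of plain GSOR-type sweeps in place of the PGSOR inner iteration; all names and the linear test problem are illustrative:

```python
import numpy as np

def inner_sweeps(W, T, p, q, alpha, steps):
    """Approximately solve [[W, -T], [T, W]] [y; z] = [p; q]
    with a few relaxed block Gauss-Seidel sweeps; return y + i z."""
    n = W.shape[0]
    y, z = np.zeros(n), np.zeros(n)
    for _ in range(steps):
        y = (1 - alpha) * y + alpha * np.linalg.solve(W, T @ z + p)
        z = (1 - alpha) * z + alpha * np.linalg.solve(W, q - T @ y)
    return y + 1j * z

def mn_inner_outer(F, WT, x0, alpha=1.0, inner=10, tol=1e-10, outer=50):
    """Modified Newton outer iteration; each of the two Newton
    equations (28) is solved approximately by the inner sweeps above."""
    x = x0.astype(complex)
    nF0 = np.linalg.norm(F(x))
    for _ in range(outer):
        W, T = WT(x)                 # real and imaginary parts of F'(x_k)
        b = -F(x)
        y_half = x + inner_sweeps(W, T, b.real, b.imag, alpha, inner)
        b2 = -F(y_half)
        x = y_half + inner_sweeps(W, T, b2.real, b2.imag, alpha, inner)
        if np.linalg.norm(F(x)) <= tol * nF0:
            break
    return x

# Usage: a linear test problem F(x) = (W0 + i T0) x - 1 with known solution.
W0 = np.diag([4.0, 3.0]); T0 = np.eye(2)
A = W0 + 1j * T0
F = lambda x: A @ x - np.ones(2)
WT = lambda x: (W0, T0)
x = mn_inner_outer(F, WT, np.zeros(2))
```

Note that the inner solver never forms a complex factorization: it only solves real systems with W, which is the design point of the GSOR/PGSOR family.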

Local Convergence of the MN-PGSOR Method
In this section, we prove the local convergence of the MN-PGSOR method under the Hölder condition. Before the proof, we give some necessary lemmas and definitions.
for any h ∈ C^n. Moreover, if the nonlinear map Lemma 2 (see [1]). Assume that M, N ∈ C^{n×n}, M is nonsingular, and ‖M^{−1}‖ ≤ ε. If ‖M − N‖ ≤ δ and δε < 1, then N is nonsingular and satisfies To analyze the local convergence of the MN-PGSOR method under the Hölder condition, we first need some assumptions and notation. Assume that the nonlinear map F: D ⊂ C^{2n} ⟶ C^{2n} is G-differentiable on an open set D_0 ⊂ D, and that the Jacobian matrix F′(x) has the block two-by-two form and is continuous and symmetric. Moreover, there is an interior point x_* ∈ D_0 such that F(x_*) = 0. We denote by N(x_*, r) the open ball centered at x_* with radius r > 0.

Assumption 1.
Assume that x_* ∈ D_0 and F(x_*) = 0. There exists a positive constant r such that, for all x ∈ N(x_*, r) ⊂ D_0, the following conditions hold: (A1) (The bounded condition) There exist two positive constants η and υ such that (A2) (The Hölder condition) For some p ∈ (0, 1], there exist nonnegative constants K_ω and K_t such that In Assumption 1, we see that when p = 1 the Hölder condition becomes a stronger condition, namely, the Lipschitz condition. Now, under the Hölder condition, we give the local convergence theorem for the MN-PGSOR method. We characterize the behavior of F in a neighborhood of the point x_*, and we give suitable conditions on the radius r of the set N(x_*, r) to ensure the local convergence of the MN-PGSOR method.
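For reference, the two conditions of Assumption 1 are plausibly of the following standard form used by this family of modified Newton-HSS/GSOR papers; the displayed inequalities are a reconstruction under that assumption, with the constants η, υ, K_ω, K_t and the exponent p named as in the text:

```latex
% (A1) the bounded condition: positive constants \eta, \upsilon with
\|F'(x_*)^{-1}\| \le \eta,
\qquad
\max\{\|W(x_*)\|, \|T(x_*)\|\} \le \upsilon .
% (A2) the Hölder condition: for some p \in (0,1] and
% nonnegative constants K_\omega, K_t,
\|W(x) - W(x_*)\| \le K_\omega \|x - x_*\|^{p},
\qquad
\|T(x) - T(x_*)\| \le K_t \|x - x_*\|^{p}.
```

With p = 1 the second pair of inequalities is exactly the Lipschitz condition.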
. Then, From Assumption (A2), we have Then, By using the results of (36) and (37), we obtain Obviously, From Lemma 2, we prove that if

holds, given that It is equivalent to Similarly to calculation (56), we compute (65). The idea of this proof is to find the region of r such that for any x ∈ N(x_*, r) ⊂ D_0. This is because, if the above inequality holds, then Since g(r^p, μ_*)^2 < 1, for any x_0 ∈ N(x_*, r), we have Therefore, the solution sequence {x_k}_{k=0}^∞ generated by the MN-PGSOR method is well defined. In addition, letting k ⟶ ∞, we obtain lim sup_{k⟶∞} ‖x_k − x_*‖^{1/k} ≤ g(r_0^p, μ_*)^2. The proof of Theorem 1 is completed.

Numerical Examples
To confirm the efficiency of the MN-PGSOR method in numerical experiments, we compare it with some previous methods in terms of CPU time, outer iteration steps, and total inner iteration steps. In chronological order, the previous methods are the modified Newton-DPMHSS method, the modified Newton-MDPMHSS method, and the modified Newton-AGSOR method. From [24], we know that the MN-DPMHSS and MN-MDPMHSS methods outperform the Newton-HSS and Newton-MHSS methods. Furthermore, from [25], the MN-AGSOR method performs better than the MN-DPMHSS and MN-MDPMHSS methods. We list the inner iteration number (inner IT), outer iteration number (outer IT), computing time (CPU), and error estimates (RES) for all these methods; the experimental results show the efficiency and validity of the MN-PGSOR method. All numerical calculations are done in Matlab 2019b on an Intel dual-core CPU (2.6 GHz) with 16 GB RAM.
Example 1. Let us consider the following nonlinear system: where (x, y) ∈ Ω = (0, 1) × (0, 1) and ∂Ω is the boundary of Ω. Here, ϱ is a positive constant indicating the magnitude of the reaction term. We discretize the problem with the centered finite difference scheme on equidistant grids with Δt = h = 1/(N + 1); the above nonlinear system is then transformed into the following form: where A_N = tridiag(−1, 2, −1) is a tridiagonal matrix, n = N × N, and ⊗ denotes the Kronecker product. All calculations start from the initial guess u_0 = (1, 1, . . . , 1)^T and are stopped when the current outer iteration satisfies In the inner iterations, the tolerance η_k is set to 0.1, 0.2, and 0.4, which controls the convergence of the corresponding inner iteration methods.
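The discretized coefficient matrix involves the Kronecker-sum Laplacian built from A_N = tridiag(−1, 2, −1). A minimal sketch of this construction (dense NumPy for readability; the paper's experiments use Matlab, where a sparse `kron` would be used instead):

```python
import numpy as np

def laplacian_2d(N):
    """Discrete 2-D Laplacian on the equidistant grid:
    B = I (x) A_N + A_N (x) I, with A_N = tridiag(-1, 2, -1) of order N."""
    A = 2 * np.eye(N) - np.eye(N, k=1) - np.eye(N, k=-1)
    I = np.eye(N)
    return np.kron(I, A) + np.kron(A, I)

# Usage: N = 3 gives n = N*N = 9 unknowns.
B = laplacian_2d(3)
```

The eigenvalues of B are sums of pairs of eigenvalues of A_N, namely 4 − 2 cos(iπh) − 2 cos(jπh) with h = 1/(N + 1), so B is symmetric positive definite, consistent with the assumptions on W and T in the preceding sections.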
In Tables 1-4, we list the optimal parameters for the four methods, respectively. In our tests, to compare the optimal performance of each method, we use the experimentally optimal parameters from Tables 1-4 and compare inner IT, outer IT, and CPU time among the MN-DPMHSS, MN-MDPMHSS, MN-AGSOR, and MN-PGSOR methods. Note that the MN-PGSOR method has two parameters α, c; this is coherent with the idea that introducing a preconditioning matrix with the parameter c can improve efficiency over the previous methods.
By calculations, we see Then, the solution sequence u_k of the MN-PGSOR iteration converges to u_*, which is compatible with the theoretical results in Section 4.
We choose the case α_1 = α_2 = 1 and β_1 = β_2 = 1. Firstly, we present the optimal parameters of the four methods in Tables 1-4, respectively. All of these optimal parameters were found in our experiments, and we list them for each method in pair form. In Tables 5-8, we compare the MN-PGSOR method with the other methods under four different settings, N = 30, 40, 50, and 80. The inner tolerance of each method is set to η = 0.4 for the comparison of these four settings. In Table 8, we give the experimentally optimal parameters of each method for all the cases ϱ = 1, 10, 200 with N = 80, in pair form.
Secondly, to consider the situations where the tolerance is set to η = 0.1 or η = 0.2, we list two representative tables for each situation, for N = 40 in Table 9 and N = 50 in Table 10. From Tables 5-10, we see that the computing time of the MN-PGSOR method is less than that of the other methods. Besides, the MN-PGSOR method needs fewer inner and outer iterations to complete the process, which means that it converges more rapidly, with smaller error, than the other methods.
In Figures 1 and 2, we plot the optimal values of the parameters α and c versus the problem size N, respectively (Figure 1: the optimal value of α for the corresponding problem size N). Here, we set ϱ = 10 and η = 0.4. The curves of the optimal values of both α and c are almost horizontal straight lines as N varies. The experimental results show that the optimal parameters α, c of the MN-PGSOR method are much more stable than those of the other methods under different conditions, especially c, which indicates that finding the optimal parameters in practical tests takes less time than with the other methods. Besides, the MN-PGSOR method obtains a significant improvement in CPU time, inner iteration steps, and outer iteration steps. Therefore, for this type of nonlinear system, the MN-PGSOR method is more suitable and reduces the difficulty of searching for the optimal parameters.

Conclusions
In this paper, we introduce the modified Newton-PGSOR (MN-PGSOR) method, which solves nonlinear systems whose Jacobian matrices have block form. In Section 2, we construct the PGSOR method as the inner iteration of the MN-PGSOR method, which solves complex block linear systems efficiently, and we show the convergence properties of the PGSOR method. In addition, we give the theoretical analysis of the optimal parameters and the corresponding convergence radius of the PGSOR method. Owing to the performance of the modified Newton method as an outer iteration, we utilize the modified Newton method instead of other Newton-type methods to solve the nonlinear systems. Furthermore, under the Hölder continuity condition, we discuss the local convergence properties of the MN-PGSOR method. In Section 5, the experimental results demonstrate the efficiency of the MN-PGSOR method over other methods. We see that the MN-PGSOR method performs better than the compared methods, and the optimal parameters in our experiments are more stable than those of the other methods. In summary, the MN-PGSOR method outperforms some previous methods in CPU time, the total inner iteration number, and the outer iteration number.

Data Availability
Some or all data or code generated or used during the study are available from the corresponding author upon request.

Conflicts of Interest
The authors declare that they have no conflicts of interest.