Newton-PGSS and Its Improved Method for Solving Nonlinear Systems with Saddle Point Jacobian Matrices

The preconditioned generalized shift-splitting (PGSS) iteration method is unconditionally convergent for solving saddle point problems with nonsymmetric coefficient matrices. By using the PGSS iteration as the inner solver for the Newton method, we establish a class of Newton-PGSS methods for solving large sparse nonlinear systems whose nonsymmetric Jacobian matrices have saddle point structure. For the new method, we give local and semilocal convergence analyses under the Hölder condition, which is weaker than the Lipschitz condition. To further raise the efficiency of the algorithm, we improve the method to obtain the modified Newton-PGSS and prove its local convergence. Furthermore, we compare our new methods with the Newton-RHSS method, a well-regarded method for solving large sparse nonlinear systems with saddle point nonsymmetric Jacobian matrices, and the numerical results show the efficiency of our new method.


Introduction
In this paper, we explore effective and convenient methods for solving the nonlinear nonsymmetric saddle-point problem

F(x) = 0, (1)

where F: D ⊂ R^{n+m} ⟶ R^{n+m} is a continuously differentiable nonlinear function, F = (F_1, ..., F_{n+m})^T with F_i = F_i(x), i = 1, 2, ..., n + m, and x = (x_1, ..., x_{n+m})^T is defined on an open convex subset D of the (n + m)-dimensional real linear space R^{n+m}. Moreover, the Jacobian matrix F′(x) is large, sparse, and of nonsymmetric saddle point form

F′(x) = [A(x) B(x); −B(x)^T 0], (2)

where A(x) ∈ R^{n×n} is a real positive definite matrix and B(x) ∈ R^{n×m} is a full-column rank matrix (m < n). This kind of large sparse nonsymmetric saddle-point nonlinear system (1) arises in many scientific and engineering computing areas, such as elastomechanics equations and the Stokes equations. Some of these problems cannot be solved analytically, so we can only seek numerical approximations. In the past, researchers have developed various methods to solve nonlinear systems [1][2][3][4][5][6][7][8][9][10]. Among these, the most typical and popular method for solving the nonlinear system (1) is the Newton method. The principle of the Newton method is very simple: at each step, we expand the nonlinear equation at x_k by Taylor expansion and take its linear part to construct an approximate equation. Then, we take the zero point of the approximate equation as the next iterate:

x_{k+1} = x_k − F′(x_k)^{−1} F(x_k), k = 0, 1, 2, .... (3)

The sequence {x_k} generated by this iteration converges to the numerical solution as k ⟶ +∞ under certain conditions. An excellent algorithm must be not only accurate but also efficient, and when the dimension n is large, each step of the traditional Newton method is very expensive.
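As an illustration, the Newton iteration above can be sketched in a few lines. The toy system and all names below are our own, not from the paper; the exact linear solve at each step is precisely the cost the inexact methods discussed next try to avoid:

```python
import numpy as np

def newton(F, J, x0, tol=1e-10, max_iter=50):
    """Classical Newton method: linearize F at x_k and step to the zero
    of the linear model, i.e. solve F'(x_k) s_k = -F(x_k) exactly."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        Fx = F(x)
        if np.linalg.norm(Fx) < tol:
            break
        s = np.linalg.solve(J(x), -Fx)  # exact linear solve: expensive when n is large
        x = x + s
    return x

# Toy 2D system with root (1/sqrt(2), 1/sqrt(2)): x^2 + y^2 = 1, x = y
F = lambda v: np.array([v[0]**2 + v[1]**2 - 1.0, v[0] - v[1]])
J = lambda v: np.array([[2.0 * v[0], 2.0 * v[1]], [1.0, -1.0]])
root = newton(F, J, [1.0, 0.5])
```

Starting from (1.0, 0.5), the iterates converge quadratically to the root on the unit circle.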
The reason for this is that, at each iterative step, the linear system

F′(x_k) s_k = −F(x_k), k ≥ 0, s_k = x_{k+1} − x_k, (4)

must be solved exactly and accurately. We hope to give up a little bit of "precision" in exchange for greater "efficiency." This idea led to the development of inexact Newton methods, which were first proposed by Dembo et al. [11]. In recent decades, the inexact Newton method has been extensively studied and applied in many fields. The linear equation (4) can be solved efficiently by methods that discard some precision, greatly reducing the computational cost and time. In addition, the traditional Newton method is second-order convergent, and raising the order of convergence makes the algorithm approach the exact solution faster. Therefore, we consider improving the Newton method to raise its order of convergence. Next, we introduce the traditional inexact Newton method and its modification. In the inexact Newton methods, the termination condition for the Newton equation (4) is

‖F(x_k) + F′(x_k) s_k‖ ≤ η_k ‖F(x_k)‖, (5)

where s_k = x_{k+1} − x_k is obtained by some linear iterative method. The inexact Newton methods have the unified form shown in Algorithm 1.
Here, F′(x_k) is the Jacobian matrix and η_k ∈ [0, 1) is commonly called the forcing term, which controls the accuracy of the inner solves. The algorithm above has R-order of convergence at least two. Researchers have presented the modified Newton iteration to improve the convergence order, as shown in Algorithm 2. As mentioned above, the modified inexact Newton methods only need to form F′(x_k)^{−1} once per m steps and therefore require less computation than the plain inexact Newton methods.
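A minimal sketch of the inexact Newton idea follows, under our own illustrative choices: the inner solver here is a plain gradient iteration on the Newton equation (not the PGSS method of this paper), truncated as soon as the forcing-term test ‖F(x_k) + F′(x_k)s_k‖ ≤ η_k‖F(x_k)‖ is met, and the toy system is ours:

```python
import numpy as np

def inexact_newton(F, J, x0, eta=0.1, tol=1e-8, max_outer=100):
    """Inexact Newton: solve F'(x_k) s_k = -F(x_k) only approximately,
    stopping the inner solver once the forcing-term test holds."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_outer):
        Fx = F(x)
        if np.linalg.norm(Fx) < tol:
            break
        A = J(x)
        step = 1.0 / np.linalg.norm(A, 2) ** 2   # safe step size for the inner iteration
        s = np.zeros_like(x)
        r = Fx.copy()                            # inner residual F(x_k) + F'(x_k) s
        while np.linalg.norm(r) > eta * np.linalg.norm(Fx):
            s = s - step * (A.T @ r)             # gradient step on ||A s + F(x_k)||^2
            r = Fx + A @ s
        x = x + s
    return x

# Toy 2D system: x^2 + y^2 = 1, x = y
F = lambda v: np.array([v[0]**2 + v[1]**2 - 1.0, v[0] - v[1]])
J = lambda v: np.array([[2.0 * v[0], 2.0 * v[1]], [1.0, -1.0]])
root = inexact_newton(F, J, [1.0, 0.5])
```

Each inner solve is cheap and deliberately imprecise; with η = 0.1 the outer iteration still converges linearly, trading per-step accuracy for per-step cost.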
This kind of method has R-order of convergence at least m + 1. We use it as the outer iteration and the PGSS iteration method as the inner iteration; in this paper, we establish the modified Newton-PGSS with m = 2. The inexact Newton methods consist of two parts: an inner iteration and an outer iteration. The outer iteration is the Newton method, which handles the nonlinearity; each outer step requires solving a linear equation in order to generate the sequence {x_k}. Linear iterative methods, such as the classical splitting methods or the modern Krylov subspace methods [12, 13], are applied inside the Newton method to solve the Newton equations approximately. A significant advantage of such inner-outer iterations is that one can reduce the storage and computation associated with inverting the Jacobian matrix at each step, thereby improving efficiency.
Therefore, this kind of inner-outer iterative method has been widely studied. Newton-Krylov subspace methods [3], which utilize Krylov subspace iterations as the inner solvers, have been used effectively and successfully in many fields; see [14][15][16].
From the discussion of the inexact Newton method [1-4, 7, 8], we know that the efficiency of the inner iteration affects the efficiency of the whole algorithm.
Thus, we want to find an excellent inner iteration in order to obtain efficient inner-outer iterative methods. In other words, an efficient linear iteration should be employed to solve the Newton equation (4) with a real nonsymmetric saddle-point Jacobian matrix.
There are many ways to solve saddle point linear problems [3, 17-25]. Recently, Cao et al. [26-29] proposed a method based on the shift-splitting iteration presented by Bai et al. [30] for solving the saddle-point problem. This method is more efficient than other algorithms such as the Uzawa-type iteration methods, the successive over-relaxation (SOR-like) iteration methods [31, 32], and the Hermitian and skew-Hermitian splitting (HSS) iteration methods [33-35]. In addition, the PGSS iteration method is unconditionally convergent, and the preconditioner it generates is also very effective [26]. When applying the PGSS method to a linear system, each iterative step requires solving a single linear subsystem whose coefficient matrix is the PGSS matrix M_PGSS = (1/2)(Ω + A). Furthermore, to increase the efficiency of the algorithm, we optimize the outer iteration and propose the modified Newton-PGSS method for the saddle problems. Because no Newton-type method has previously been applied to this saddle point system, we compare the Newton-PGSS method with traditional methods such as the Newton-RHSS method [31, 36, 37]. The organization of the paper is as follows. In Section 3, we introduce the Newton-PGSS method. In Sections 4 and 5, we establish local and semilocal convergence theorems, respectively, under proper hypotheses for the Newton-PGSS method. We present the modified Newton-PGSS method in Section 6. Numerical examples confirming the efficiency of our new method are presented in Section 7. Finally, in Section 8, some brief conclusions are given.

Preliminaries
First of all, we review the PGSS method [26] for solving the large sparse nonsymmetric saddle-point linear system

A x = b, (6)

where A ∈ R^{(n+m)×(n+m)} is a real nonsymmetric saddle-point matrix.

ALGORITHM 1: Inexact Newton methods.
(1) Let the initial guess x_0 be given.
(2) For k = 0 until "convergence" do: find some η_k ∈ [0, 1) and s_k that satisfy ‖F(x_k) + F′(x_k)s_k‖ ≤ η_k‖F(x_k)‖, and set x_{k+1} = x_k + s_k.

The PGSS Iteration Method [27]. Given an initial guess x_0 ∈ R^{n+m}, compute x_{k+1} for k = 0, 1, 2, ..., using the following iteration scheme until x_k satisfies the stopping criterion:

(1/2)(Ω + A) x_{k+1} = (1/2)(Ω − A) x_k + b, (7)

where Ω is the matrix [αI_1 0; 0 βI_2], I_1 is an n × n identity matrix, I_2 is an m × m identity matrix, and α and β are real numbers greater than 0. Solving (7) for x_{k+1} leads to the PGSS iterative scheme

x_{k+1} = M_{α,β} x_k + 2(Ω + A)^{−1} b, where M_{α,β} = (Ω + A)^{−1}(Ω − A).

Here, M_{α,β} is the iteration matrix of the PGSS iteration method.
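To make the splitting concrete, here is a small numerical sketch of the PGSS scheme on a toy nonsymmetric saddle-point matrix of our own construction; the blocks P and B, the shifts α = β = 1, and the iteration count are illustrative choices, not values from the paper:

```python
import numpy as np

# Toy nonsymmetric saddle-point matrix A = [[P, B], [-B^T, 0]]
n, m = 3, 2
P = np.array([[3.0, 1.0, 0.0],
              [0.0, 3.0, 1.0],
              [1.0, 0.0, 3.0]])      # nonsymmetric, positive definite real part
B = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [1.0, 1.0]])           # full column rank, m < n
A = np.block([[P, B], [-B.T, np.zeros((m, m))]])

alpha, beta = 1.0, 1.0               # illustrative shift parameters
Omega = np.diag([alpha] * n + [beta] * m)
M = 0.5 * (Omega + A)                # PGSS splitting A = M - N
Nmat = 0.5 * (Omega - A)

x_true = np.ones(n + m)
b = A @ x_true

# PGSS iteration: M x_{k+1} = N x_k + b
x = np.zeros(n + m)
for _ in range(1000):
    x = np.linalg.solve(M, Nmat @ x + b)

rho = max(abs(np.linalg.eigvals(np.linalg.solve(M, Nmat))))  # spectral radius of M_{alpha,beta}
err = np.linalg.norm(x - x_true)
```

On this example the spectral radius of the iteration matrix is below one, consistent with the unconditional convergence result quoted next, and the iterates approach the exact solution.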
Theorem 1 (see [27]). Let A ∈ R^{(n+m)×(n+m)} be a nonsymmetric saddle-point matrix, let α be a nonnegative constant, and let β be a positive constant. Then, the iteration matrix M_{α,β} = (Ω + A)^{−1}(Ω − A) of PGSS satisfies ρ(M_{α,β}) < 1; that is, the PGSS iteration converges unconditionally to the exact solution of the linear system.

The Newton-PGSS Method
In this section, we describe an inner-outer iteration method for solving systems of nonlinear equations with nonsymmetric saddle-point Jacobian matrices. We use the Newton method as the outer iteration and apply the PGSS method as the inner solver; in other words, the PGSS iteration is employed to solve the Newton equation (4) approximately at each outer step. Then, we obtain the Newton-PGSS method for solving nonlinear system (1).
The Newton-PGSS Method. Let F: D ⊂ R^{n+m} ⟶ R^{n+m} be a continuously differentiable function whose nonsymmetric saddle-point Jacobian matrix F′(x) at any x ∈ D has the form (2), where A(x) ∈ R^{n×n} is a real positive definite matrix and B(x) ∈ R^{n×m} is a full column rank matrix (m < n). Given an initial guess x_0 ∈ D, two positive constants α and β, and a sequence {l_k}_{k=0}^∞ of positive integers, compute x_{k+1} for k = 0, 1, 2, ..., until {x_k} converges. The method is summarized as Algorithm 3.

Local Convergence of the Newton-PGSS Method
In this section, we prove the local convergence of the Newton-PGSS method under the Hölder condition.
Assume that the splitting matrices V(x) and W(x) are defined as above. Suppose F′(x) is continuous and positive definite at a point x* ∈ D, at which F(x*) = 0.
Denote by N(x*, r) the open ball centered at x* with radius r > 0.

Assumption 1.
For all x ∈ N(x * , r) ⊂ N 0 , assume the following conditions hold.
(A1) The bounded condition: there exist positive constants δ and c such that the stated bounds hold. (A2) The Hölder condition: there exist nonnegative constants K_w and K_t such that the stated Hölder estimates hold.

Remark 1.
We note that the Lipschitz condition is the special case of the Hölder condition with p = 1; hence, the Lipschitz condition is stronger than the Hölder condition. Now, under Assumption 1, we establish the local convergence theorem for the Newton-PGSS method. It describes the behavior of the function F around the numerical solution x* and gives information about the radius of the neighborhood; these properties govern the local convergence of the given method.
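For concreteness, the Hölder-type estimate referred to here has the generic form below (our notation; the paper's displayed inequality was not reproduced in the extraction):

```latex
\|F'(x) - F'(y)\| \le K\,\|x - y\|^{p}, \qquad x, y \in N(x^{*}, r),\quad p \in (0, 1],
```

with p = 1 recovering the Lipschitz condition ‖F′(x) − F′(y)‖ ≤ K‖x − y‖; smaller exponents p allow rougher Jacobians, which is why the Hölder assumption is the weaker one.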
(1) Given an initial guess x_0, a nonnegative constant α, a positive constant β, and a positive integer sequence {l_k}.
(2) For l = 0, 1, ..., l_k − 1, apply the PGSS algorithm to the linear system (12), where I_1 is an n × n identity matrix and I_2 is an m × m identity matrix.
ALGORITHM 3: The Newton-PGSS method.
Then, the Newton-PGSS method can be equivalently expressed in fixed point form, and the Jacobian matrix F′(x) can be rewritten accordingly. Since the stated bounds hold, by the Banach lemma, F′(x)^{−1} exists and the corresponding inequality holds. Moreover, the remaining estimates follow. This completes the proof of Lemma 1.

Journal of Mathematics
From the bounded condition, we have the stated estimate, and we can obtain the corresponding inequality. Hence, by making use of the Banach lemma, we obtain the bound on the inverse. Similarly, the analogous estimates follow, and we can use (33); hence, the claimed inequality holds. Now, we turn to estimating the error of the Newton-PGSS iterates {x_k}_{k=0}^∞ defined above. Clearly, the error admits the stated decomposition, and hence we obtain the bound, whose majorizing function is increasing in t and decreasing in c; hence, the bound applies. In fact, for k = 0, we have ‖x_0 − x*‖ < r < r_0, as x_0 ∈ N(x*, r). It follows that x_1 ∈ N(x*, r). By induction, suppose x_m ∈ N(x*, r) holds for some positive integer k = m. Then, using the majorizing function again, we can straightforwardly deduce the estimate, which shows that the claim also holds for k = m + 1. In addition, we have the corresponding bound, and hence x_{m+1} ∈ N(x*, r_0). The conclusion proved above is stated as follows.
Assumption 2.
(A1) The bounded condition: there exist positive constants δ and c such that the stated bounds hold. (A2) The Hölder condition: there exist nonnegative constants L_a and L_b such that the stated estimates hold for all x, y ∈ N(x_0, r) ⊂ N_0 with exponent p ∈ (0, 1], and we define L := L_a + L_b.

Lemma 2.
Under Assumption 2, for all x, y ∈ N(x*, r), F′(x)^{−1} exists, and the following inequalities hold. Proof. The proof is omitted since it is the same as that of Lemma 1.

Theorem 3. Under Assumption 2, for all x, y ∈ N(x*, r), F′(x)^{−1} exists, and the inequalities in (45) hold.
Now, we construct the following sequence of functions, with the constants satisfying the stated relations, where η = max_k η_k < 1 and r = min(r_1, r_2); let t_0 = 0, and let the sequence {t_k} be generated by the given formula. Some properties of the functions g(t) and h(t) and of the sequence {t_k} are given by the following lemmas.
The proof is omitted since it is straightforward.

Theorem 4.
Under the assumptions of the lemmas in this section, let r := min(r_1, r_2) satisfy the stated condition. Define l_0 = liminf_{k⟶∞} l_k, and let the constant l_0 satisfy the stated bound, where the symbol ⌈·⌉ denotes the smallest integer no less than the corresponding real number and τ ∈ (0, (1 − θ)/θ) is a prescribed positive constant. Then, the iteration sequence {x_k}_{k=0}^∞ generated by the Newton-PGSS method is well defined and converges to x*, which satisfies F(x*) = 0.
Proof. Firstly, we construct the sequence {t_k} by the stated recursion. Furthermore, g(0) = c > 0; hence, there exists r* satisfying g(r*) = 0, where t_1 = 2cδ by (49) and (50). Therefore, the stated inequalities hold. Now, we assume that t_{k−1} < t_k < r*, and by induction we obtain t_{k+1} > t_k.
Next, we prove the following estimate by mathematical induction, where the quantities are as defined above. Because of the stated relations, we can derive the required inequality. Hence, and because of the further relations, we obtain the next bound. Then, the final inequality holds. Since the sequence {t_k}_{k=0}^∞ converges to t* and r* < (b/a)^{1/p}, the sequence {x_k} also converges to x*. This completes the proof.

The Modified Newton-PGSS Method and Its Local Convergence
In this section, we improve the Newton-PGSS method, introduce the modified Newton-PGSS method, and briefly prove its local convergence.
The modified Newton method is a variant of the Newton method. Its principle is to reduce the number of times the inverse of the Jacobian matrix must be computed, making the algorithm more efficient: the inverse is formed only once every two steps. The iteration reads

y_k = x_k − F′(x_k)^{−1} F(x_k),
x_{k+1} = y_k − F′(x_k)^{−1} F(y_k). (73)

Then, we get the modified Newton-PGSS method for solving nonlinear system (1) (Algorithm 4).
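The two half-steps sharing one Jacobian can be sketched as follows; the toy problem and names are ours, and the real modified Newton-PGSS replaces the exact solves below with truncated PGSS inner iterations:

```python
import numpy as np

def modified_newton(F, J, x0, tol=1e-10, max_iter=50):
    """Modified Newton: evaluate the Jacobian once per outer step and reuse
    it for two linear solves, halving the number of Jacobian inversions."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        if np.linalg.norm(F(x)) < tol:
            break
        Jx = J(x)                              # Jacobian formed once per outer step
        y = x + np.linalg.solve(Jx, -F(x))     # y_k = x_k - F'(x_k)^{-1} F(x_k)
        x = y + np.linalg.solve(Jx, -F(y))     # x_{k+1} = y_k - F'(x_k)^{-1} F(y_k)
    return x

# Toy 2D system: x^2 + y^2 = 1, x = y
F = lambda v: np.array([v[0]**2 + v[1]**2 - 1.0, v[0] - v[1]])
J = lambda v: np.array([[2.0 * v[0], 2.0 * v[1]], [1.0, -1.0]])
root = modified_newton(F, J, [1.0, 0.5])
```

Reusing J(x_k) in the second half-step is slightly less accurate per solve than recomputing it, but the higher R-order per Jacobian evaluation is what makes the modified variant cheaper overall.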

Assumption 3.
For all x ∈ N(x * , r) ⊂ N 0 , assume the following conditions hold.
(A1) The bounded condition: there exist positive constants δ and c such that the stated bounds hold. (A2) The Hölder condition: there exist nonnegative constants K_w and K_t such that the stated estimates hold with exponent p ∈ (0, 1].

Lemma 4.
Under Assumption 3, for all x, y ∈ N(x*, r), if r ∈ (0, (1/(cK))^{1/p}), then F′(x)^{−1} exists, and the stated inequalities hold with K := K_a + K_b for all x, y ∈ N(x*, r). Let l_0 = liminf_{k⟶∞} l_k, and let the constant u satisfy the stated bound, where the symbol ⌈·⌉ denotes the smallest integer no less than the corresponding real number, τ ∈ (0, (1 − θ)/θ) is a prescribed positive constant, and α and β are greater than 0. Then, for any x ∈ N(x*, r) ⊂ N_0, t ∈ (0, r), and c > u, the stated estimate holds. Proof. It is the same as that of Theorem 2. From Theorem 5, we obtain that ‖x_1 − x*‖ ≤ g(r_0^p)‖x_0 − x*‖; that is, the modified Newton-PGSS method has a similar result, as follows.

Theorem 6.
Under the conditions of Theorem 5, for any x_0 ∈ N(x*, r) with corresponding sequences {l_k}_{k=0}^∞ and {m_k}_{k=0}^∞ of positive integers, the iteration sequence {x_k}_{k=0}^∞ generated by the modified Newton-PGSS method is well defined and converges to x*. Furthermore, it has the following properties. Proof.
Numerical Examples

In the first step, we compare the Newton-PGSS and Newton-RHSS methods, whose inner iterations are splitting methods. In the second step, we discuss which is more effective as a preconditioner in the Newton-GMRES algorithm. The numerical results in Example 1 were computed using MATLAB Version R2011b, on an iMac with a 3.20 GHz Intel Core i5-6500 CPU and 8.00 GB RAM, with machine accuracy eps = 2.22 × 10^{−16}.
Here, Ω = (0, 1) × (0, 1), ∂Ω is the boundary of Ω, u is a vector-valued function representing the velocity, ν > 0 is the viscosity constant, △ is the componentwise Laplace operator, and w is a scalar function representing the pressure. By discretizing the equations with the upwind scheme, we obtain the saddle point problem, in which f collects componentwise exponential terms of the discrete velocities, with ⊗ being the Kronecker product symbol. By applying the centered finite difference scheme on the equidistant discretization grid with step size Δt = h = 1/(N + 1), the system of nonlinear equations (1) is obtained. Firstly, we compare the algorithms whose inner iterations are splitting methods: Newton-RHSS, Newton-PGSS, and modified Newton-PGSS. The parameters needed in the problem are chosen by the traversal method for the purpose of comparison: the initial guess is u_0 = 0, the stopping criterion for the outer iteration is a prescribed residual tolerance, and the tolerances η_k controlling the accuracy of the inner iteration are all set to a common value η. For inner tolerances η = 0.4, 0.2, and 0.1 and problem parameters ν = 1 and 0.1, the outer iteration counts, inner iteration counts, and CPU times are listed in the numerical tables for the corresponding inexact Newton methods. Because the linear system to be solved changes at each iteration, there is no way to find the theoretically optimal parameters.
Thus, we find the most efficient algorithm by traversing the parameters of the different algorithms, and then we tabulate the results. For the selection of a single parameter, we initially traverse from 0 with an interval of 1. When the number of steps, time, and error first decrease and then increase, the traversal is stopped to determine the range of the parameter. We use this method to narrow the parameter range and obtain the current best parameter until the result (such as the step count) no longer changes. For the selection of two parameters (denoted α and β), we first fix α and traverse β using the single-parameter method; then we fix β and traverse α, repeating the process until the result no longer changes. From Tables 1-6, we see that Newton-PGSS performs better than Newton-RHSS in CPU time. Moreover, the Newton-MPGSS algorithm is much better than Newton-RHSS in the number of iteration steps. As is well known, Krylov subspace methods are more efficient than stationary iterative methods for saddle point problems. Secondly, we compare the effects of PGSS and RHSS as preconditioners for Newton-GMRES. From Tables 7-12, we find that GMRES with the PGSS or RHSS preconditioner is more efficient than GMRES without preconditioning, and that PGSS is more efficient than RHSS as a preconditioner. In the inner iteration, using RHSS or PGSS as the preconditioner for the Krylov subspace method outperforms the unpreconditioned Krylov subspace method in both CPU time and step count. Although PGSS is not much better than RHSS as a preconditioner when n is small, PGSS shows great advantages in both steps and CPU time as n increases.
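The alternating traversal described above can be sketched generically. Here `cost` is an illustrative smooth stand-in for the real objective (iteration count or CPU time as a function of α and β), and the names, refinement schedule, and window width are all our own assumptions:

```python
import numpy as np

def traverse(cost, init=(1.0, 1.0), step0=1.0, rounds=8):
    """Alternate 1-D scans over alpha and beta, halving the grid spacing and
    re-centering on the current best pair each round, until the grid is fine."""
    a, b = init
    step = step0
    for _ in range(rounds):
        grid = np.arange(max(a - 5 * step, 0.0), a + 5 * step + step / 2, step)
        a = grid[int(np.argmin([cost(g, b) for g in grid]))]   # scan alpha, beta fixed
        grid = np.arange(max(b - 5 * step, 0.0), b + 5 * step + step / 2, step)
        b = grid[int(np.argmin([cost(a, g) for g in grid]))]   # scan beta, alpha fixed
        step /= 2.0                                            # refine the grid
    return a, b

# Stand-in objective with a unique minimizer near (0.637, 1.268)
cost = lambda a, b: (a - 0.7)**2 + (b - 1.3)**2 + 0.1 * a * b
alpha, beta = traverse(cost)
```

For a smooth objective with mild coupling between the two parameters, this coordinate-wise refinement homes in on the minimizer up to the final grid resolution, which mirrors the "fix one, traverse the other, repeat" procedure in the text.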

Conclusions
The Newton-PGSS method is an effective method for solving large sparse nonlinear systems with nonsymmetric saddle point Jacobian matrices. This is the first time this kind of problem has been treated in this way: we utilize the PGSS iteration as the inner solver for the Newton equation, and we also establish a modified Newton-PGSS method for the same class of problems. We give the local and semilocal convergence analyses of the new method under proper conditions. Finally, the numerical results show that the modified Newton-PGSS outperforms the other splitting methods in CPU time and iteration steps. Furthermore, when the Newton-GMRES method is applied to these problems, PGSS accelerates the algorithm as a preconditioner and makes it more efficient than RHSS.

Data Availability
No data were used to support this study.

Conflicts of Interest
The authors declare that they have no conflicts of interest.