Global Convergence of Schubert's Method for Solving Sparse Nonlinear Equations

For a scalar $\alpha \in (0, 1)$, the $\theta_i$ can be chosen to satisfy

$$|\det C_i| \ge \sqrt[n]{\alpha}\, |\det C_{i-1}|, \qquad \theta_i \in \left[ \frac{1 - \sqrt[n]{\alpha}}{1 + \sqrt[n]{\alpha}},\ 1 \right]. \tag{15}$$

Therefore $|\det \bar{B}| \ge \alpha\, |\det B|$, and the $\theta_i$ can be chosen so that $\bar{B}$ is nonsingular with

$$|\theta_i - 1| \le \theta < 1. \tag{16}$$

The dependence of $\theta_i$ on the iteration $k$ is suppressed, but $\theta$ is independent of $k$ [3].

To determine a step length $\alpha_k$, we use the following derivative-free line search proposed by Li and Fukushima [13].

Algorithm 2. Given constants $\sigma_1 > 0$ and $\beta \in (0, 1)$, let $\alpha_k = \beta^{i_k}$, where $i_k$ is the smallest nonnegative integer $i$ such that

$$\|F(x_k + \beta^{i} d_k)\| \le \|F(x_k)\| - \sigma_1 \|\beta^{i} d_k\|^2 + \varepsilon_k \|F(x_k)\|, \tag{17}$$

and $\{\varepsilon_k\}$ is a given positive sequence satisfying $\sum_{k=0}^{\infty} \varepsilon_k < \infty$.
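As a concrete illustration, the acceptance rule (17) can be sketched in Python as follows. This is a minimal sketch, not the paper's implementation: the function name, the default values for $\sigma_1$ and $\beta$, and the fallback when the backtracking limit is reached are all illustrative choices. Note how the $\varepsilon_k \|F(x_k)\|$ term makes the search nonmonotone: a trial step can be accepted even without a strict decrease of $\|F\|$.

```python
import numpy as np

def li_fukushima_step(F, x, d, eps_k, sigma1=0.01, beta=0.5, max_backtracks=50):
    """Derivative-free line search in the spirit of Algorithm 2.

    Returns alpha = beta**i for the smallest nonnegative i with
        ||F(x + alpha*d)|| <= ||F(x)|| - sigma1*||alpha*d||**2 + eps_k*||F(x)||.
    """
    Fx_norm = np.linalg.norm(F(x))
    alpha = 1.0
    for _ in range(max_backtracks):
        trial = np.linalg.norm(F(x + alpha * d))
        if trial <= Fx_norm - sigma1 * np.linalg.norm(alpha * d) ** 2 + eps_k * Fx_norm:
            return alpha
        alpha *= beta
    return alpha  # fallback: smallest trial step (illustrative choice)
```

With $\varepsilon_k = 0$ this reduces to a standard sufficient-decrease backtracking rule; a positive $\varepsilon_k$ relaxes it, which is what allows the global analysis to go through without derivatives of $\|F\|^2$.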


Introduction
In this paper, we consider the quasi-Newton method [1] for solving the general nonlinear equation

$$F(x) = 0, \tag{1}$$

where the function $F : \mathbb{R}^n \to \mathbb{R}^n$ is continuously differentiable. The ordinary quasi-Newton method for solving (1) generates a sequence $\{x_k\}$ by the iterative scheme

$$x_{k+1} = x_k + d_k,$$

where the quasi-Newton direction $d_k$ is obtained by solving the system of linear equations

$$B_k d_k = -F(x_k).$$

Here the matrix $B_k$ is an approximation to the Jacobian matrix of $F$ at $x_k$ which usually satisfies the quasi-Newton condition (i.e., the secant condition)

$$B_{k+1} s_k = y_k,$$

where $s_k = x_{k+1} - x_k$ and $y_k = F(x_{k+1}) - F(x_k)$. The matrix $B_k$ can be updated by different quasi-Newton update formulae. However, the quasi-Newton method is not applicable to large-scale problems because of the density of $B_k$. Fortunately, a large-scale problem usually has the property of sparsity, and it is then natural to extend some known quasi-Newton methods on the basis of this property. As early as 1970, Schubert [2] modified Broyden's method and proposed a sparse Broyden's method, that is, Schubert's method [3], in which $B_{k+1}$ is defined row by row: each row of $B_k$ is corrected by a rank-one term built only from the components of $s_k$ lying in the sparsity pattern of the corresponding row of the Jacobian matrix $F'(x)$, where $e_i$ denotes the $i$th column of the $n \times n$ identity matrix. It has been proved by Broyden [4] that Schubert's method is locally convergent when the Jacobian satisfies a Lipschitz condition. Lam [5] further showed the local and superlinear convergence of Schubert's method in the special case when $(s_k)_i \neq 0$, $i = 1, 2, \ldots, n$, at each iteration. As an improvement, Marwil considered the following updated formula:

$$B_{k+1} = B_k + \sum_{i=1}^{n} \frac{e_i e_i^{T} (y_k - B_k s_k)(D_i s_k)^{T}}{\|D_i s_k\|^2}, \tag{7}$$

where $D_i$ denotes the diagonal projection onto the sparsity pattern of the $i$th row of $F'(x)$ and terms with $D_i s_k = 0$ are omitted.
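The row-wise sparse correction just described can be illustrated in Python. This is a sketch of the standard row-projected secant update rather than a transcription of the paper's own equations; the function name and the boolean `pattern` mask (True where $F'(x)$ may be nonzero) are illustrative assumptions.

```python
import numpy as np

def schubert_update(B, s, y, pattern):
    """One Schubert (sparse Broyden) update.

    Each row i of B receives a rank-one correction built only from the
    components of s in that row's sparsity pattern, so the sparsity of B
    is preserved, and the secant condition (B_new @ s)[i] == y[i] holds
    for every row whose projected step is nonzero.
    """
    n = B.shape[0]
    B_new = B.copy()
    r = y - B @ s                            # secant residual y - B s
    for i in range(n):
        si = np.where(pattern[i], s, 0.0)    # project s onto row i's pattern
        denom = si @ si
        if denom > 0.0:                      # skip rows with zero projected step
            B_new[i] += (r[i] / denom) * si
    return B_new
```

With a full (all-True) pattern this reduces to Broyden's rank-one update carried out row by row, which is why the method is called a sparse Broyden's method.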

Schubert's Update
In this section, we present some useful properties of Schubert's update. For the sake of convenience, we introduce some notation.
Specifically, the following inequality holds [3]. The next theorem states the local and superlinear convergence of Schubert's method, which was proved by Marwil [3].
Theorem 1. Suppose that $F : \mathbb{R}^n \to \mathbb{R}^n$ satisfies the following conditions.
(1) $F$ is continuously differentiable in an open convex set $D_0$.
(3) There exists … .

It is noticed that the matrix $B_{k+1}$ determined by (7) may be singular even if $B_k$ is nonsingular. To overcome this difficulty, Marwil [3] proposed a nonsingular Schubert's method by introducing parameters $\theta_i$ into the update. Here, we omit the subscript $k$ and write $\bar{B}$ for $B_{k+1}$ and $B$ for $B_k$. The parameters $\theta_i$ are chosen so that $\bar{B}$ is nonsingular whenever $B$ is nonsingular; the details are given below.

The Algorithm
In this section, we globalize Schubert's method. To this end, we introduce a derivative-free line search proposed by Li and Fukushima [13] to determine a step length $\alpha_k$.
Algorithm 3. Consider the following.
Step 1. Stop if $F(x_k) = 0$. Otherwise, solve the following system of linear equations to get $d_k$:

$$B_k d_k = -F(x_k).$$
Step 3. Let $\alpha_k$ be determined by Algorithm 2.
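Putting the pieces together, a minimal Python sketch of the globalized iteration (quasi-Newton direction, nonmonotone line search, sparse row-wise update) might look as follows. All names, the summable forcing sequence $\varepsilon_k = 1/(k+1)^2$, and the tolerances are illustrative assumptions, and the safeguards of the actual Algorithm 3 are omitted.

```python
import numpy as np

def sparse_quasi_newton(F, x0, pattern, B0=None, tol=1e-8, max_iter=200,
                        sigma1=0.01, beta=0.5):
    """Sketch of a globalized Schubert iteration: solve B_k d_k = -F(x_k),
    choose alpha_k by the nonmonotone derivative-free line search, then
    apply the sparse row-wise secant update."""
    x = np.asarray(x0, dtype=float)
    n = x.size
    B = np.eye(n) if B0 is None else B0.copy()
    for k in range(max_iter):
        Fx = F(x)
        Fx_norm = np.linalg.norm(Fx)
        if Fx_norm <= tol:                   # Step 1: convergence test
            return x
        d = np.linalg.solve(B, -Fx)          # Step 1: quasi-Newton direction
        eps_k = 1.0 / (k + 1) ** 2           # summable forcing sequence
        alpha = 1.0                          # Algorithm 2: backtracking
        while alpha > 1e-12:
            lhs = np.linalg.norm(F(x + alpha * d))
            rhs = Fx_norm - sigma1 * np.linalg.norm(alpha * d) ** 2 + eps_k * Fx_norm
            if lhs <= rhs:
                break
            alpha *= beta
        s = alpha * d
        x_new = x + s
        r = F(x_new) - Fx - B @ s            # secant residual y - B s
        for i in range(n):                   # Schubert row-wise update
            si = np.where(pattern[i], s, 0.0)
            denom = si @ si
            if denom > 0.0:
                B[i] += (r[i] / denom) * si
        x = x_new
    return x
```

On a linear system $F(x) = Ax - b$ with $B_0 = A$, the first full step already solves the system, which is a convenient sanity check for the driver.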
We then show some useful properties of Algorithm 3.
Similar to Lemma 2.4 in [13], we have the following result.
Lemma 5. Let the level set $\Omega$ be bounded and let $\{x_k\}$ be generated by Algorithm 3. Then the sequence $\{\|F(x_k)\|\}$ is convergent.
In order to establish the global convergence of Algorithm 3, we need the following assumptions.
Assumption 6 (iv) is the same as that in [13], and it is not as strong as the assumption adopted in [14], where the uniform nonsingularity of $F'(x)$ is assumed.
We first introduce some notation. Define … ; then we have … . The following lemma is an extension of Lemma 2.5 in [13].
Lemma 7. Let the sequence $\{x_k\}$ be generated by Algorithm 3, and suppose that conditions (i)–(iv) of Assumption 6 hold. If … , then one has … ; in particular, there is a subsequence of $\{\|d_k\|\}$ tending to zero. If … , then one has … ; in particular, the whole sequence $\{\|d_k\|\}$ converges to zero.
Proof. By the Lipschitz continuity of $F'$, we have … . Denote … . According to the update (12), we have … . Subtracting … from both sides of the above equality gives … . Taking norms yields (36). Summing both sides over $i = 1, \ldots, n$ yields … , and it then follows that … . According to Lemma 2.5 of [13], we get $\lim$ … . Moreover, for each $k$ we have … . The last inequality implies (44).
We show the global convergence of Algorithm 3 in the following section.
If there are only finitely many $k$ for which $d_k$ is determined by (21), then by Lemma 8 there exists a subsequence $\{\|d_k\|\}_{k \in K}$ that converges to zero. Similar to the proof of Lemma 9, it is not difficult to show that (44) holds for all $k \in \bar{K}$ sufficiently large, where $\bar{K}$ denotes the index set $\{k \in K : k > \bar{k}\}$. In particular, the subsequence $\{x_k\}_{\bar{K}}$ is bounded. Without loss of generality, we suppose that $\{x_k\}_{\bar{K}}$ converges to some $x^*$.
In the remainder of this section, we establish the superlinear convergence of Algorithm 3.
The following theorem establishes the superlinear convergence of Algorithm 3.

Theorem 12.
Let the conditions in Theorem 10 hold. Then the sequence $\{x_k\}$ generated by Algorithm 3 converges to the unique solution $x^*$ of (1) superlinearly.
Let … and … be as specified by Theorem 11. It follows from Lemma 8 that there is an index $\bar{k}$ such that the following inequality holds for all $k \ge \bar{k}$:

Numerical Experiments
In this section, we will present some numerical results to show the efficiency of Algorithm 3 for a class of sparse nonlinear equations.
The numerical experiments are done using MATLAB version 7.10 on a Core(TM) 2 PC with Windows XP. The details of the problems are given as follows, where $x_0$ denotes the initial point.
In Tables 1 and 2, we list the results of Algorithm 3 for solving Problems 1 to 14. Because the choice of $B_0$ is very important for the performance of Broyden's method, we also present the results with $B_0 = F'(x_0)$. We can see that Algorithm 3 can be applied to solve a class of nonlinear equations whose dimension can be up to 20000. Since Schubert's update formula (7) maintains the sparsity pattern of the Jacobian matrix exactly, Algorithm 3 is especially effective for solving large-scale nonlinear equations with a sparse Jacobian matrix, such as a tridiagonal or block-diagonal Jacobian matrix.
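As an example of the kind of sparse test problem used in such experiments, the classical Broyden tridiagonal function (which may or may not coincide with one of Problems 1 to 14 above) has an exactly tridiagonal Jacobian. The sketch below solves it with a plain Newton iteration as a baseline, since the exact Jacobian makes the sparsity structure explicit; a Schubert-type method would replace the exact Jacobian with the sparse update.

```python
import numpy as np

def broyden_tridiagonal(x):
    """Classical Broyden tridiagonal test function:
    F_i(x) = (3 - 2 x_i) x_i - x_{i-1} - 2 x_{i+1} + 1,  x_0 = x_{n+1} = 0."""
    F = (3.0 - 2.0 * x) * x + 1.0
    F[1:] -= x[:-1]           # subtract x_{i-1}
    F[:-1] -= 2.0 * x[1:]     # subtract 2 x_{i+1}
    return F

def broyden_tridiagonal_jac(x):
    """Exact Jacobian: tridiagonal, with diagonal 3 - 4 x_i,
    subdiagonal entries -1, and superdiagonal entries -2."""
    n = x.size
    J = np.diag(3.0 - 4.0 * x)
    J -= np.diag(np.ones(n - 1), k=-1)
    J -= 2.0 * np.diag(np.ones(n - 1), k=1)
    return J

def newton_solve(n=100, tol=1e-10, max_iter=50):
    x = -np.ones(n)           # standard starting point for this problem
    for _ in range(max_iter):
        Fx = broyden_tridiagonal(x)
        if np.linalg.norm(Fx) < tol:
            break
        x -= np.linalg.solve(broyden_tridiagonal_jac(x), Fx)
    return x
```

For large $n$, a dense `np.linalg.solve` would of course be replaced by a banded or sparse solver; the point here is only to exhibit the tridiagonal structure that Schubert's update preserves.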

Remarks
In this paper, based on the work of Schubert, Broyden, and Marwil, we have globalized Schubert's method and proposed a global algorithm by using a nonmonotone line search. We have established its global and superlinear convergence. Numerical results show that the algorithm is especially effective for large-scale problems.
Pro: the problem; $n$: the dimension of the problem; $\|F(x_0)\|$: the initial Euclidean norm of $F(x)$; $\|F(x_k)\|$: the final Euclidean norm of $F(x)$; Iter: the total number of iterations; Time: the CPU time in seconds.