We present a third-order method for solving systems of nonlinear equations. This method is a Newton-type scheme combined with vector extrapolation. We establish the local and semilocal convergence of this method. Numerical results show that the composite method is more robust and efficient than a number of Newton-type methods combined with other vector extrapolation methods.
1. Introduction
Solving nonlinear equations is important in scientific and engineering computing. In this paper, we focus on the following nonlinear system of equations:
(1)F(x)=0,
where F:Rn→Rn is differentiable. Here, F(x)=(f1(x),f2(x),…,fn(x))T and x∈Rn.
Some efficient methods for solving the system (1) have been proposed [1]. The Newton method for (1) is a second-order method. Its iterative formula is given by
(2)xk+1=xk-F′(xk)-1F(xk),
where xk is the current approximate solution and F′(xk) is the Jacobian matrix of F(x) at xk. Potra and Pták [2] propose the modified Newton method (PPM) given by
(3)yk=xk-F′(xk)-1F(xk),xk+1=yk-F′(xk)-1F(yk).
In each iteration, PPM needs two evaluations of the vector function and one evaluation of the Jacobian matrix, and its convergence order is three.
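As a minimal illustration (not part of the paper), PPM can be sketched in a few lines of Python; the 2×2 test system below is a hypothetical example chosen only to exercise the iteration.

```python
import numpy as np

def ppm(F, J, x0, tol=1e-12, max_iter=50):
    """Potra-Ptak scheme (3): per iteration, one Jacobian evaluation
    and two function evaluations, reusing F'(xk) in both linear solves."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        Jx = J(x)
        y = x - np.linalg.solve(Jx, F(x))  # Newton substep
        x = y - np.linalg.solve(Jx, F(y))  # corrector with the same Jacobian
        if np.linalg.norm(F(x)) < tol:
            break
    return x

# hypothetical 2x2 test system: x1^2 + x2^2 = 1, x1 = x2
F = lambda v: np.array([v[0]**2 + v[1]**2 - 1.0, v[0] - v[1]])
J = lambda v: np.array([[2.0*v[0], 2.0*v[1]], [1.0, -1.0]])
root = ppm(F, J, [1.0, 0.5])  # tends to (sqrt(2)/2, sqrt(2)/2)
```

Note that the Jacobian `Jx` is factored once and reused for both substeps, which is where the saving over two full Newton steps comes from.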
Although PPM reduces the computational cost of the Jacobian matrix, in some cases the sequences it produces converge slowly or even fail to converge because of the accumulation of computational error. This problem limits its practical application.
In order to solve this problem, we will introduce the vector extrapolation technique to improve the convergence of PPM. Many vector extrapolation methods have been developed, such as the minimal polynomial extrapolation (MPE) method [3], the reduced rank extrapolation (RRE) method [4, 5], the modified minimal polynomial extrapolation (MMPE) method [6–8], the topological ε-algorithm (TEA) [6], and vector ε-algorithms (VEA) [9, 10]; also see [11, 12] and the references therein. These methods could be applied to the solvers of linear and nonlinear systems and accelerate their convergence.
In this paper, we construct a new extrapolation method and combine it with PPM, thus obtaining a Newton-type method. We will show by numerical results that the composite method can be of practical interest. Local and semilocal convergence results are also established for the method.
2. The Method
We introduce the following Newton-type method:
(4) yk = xk - F′(xk)⁻¹F(xk), zk = yk - F′(xk)⁻¹F(yk), xk+1 = zk - [2(xk-yk-ω(yk-zk))·(zk-yk) / ∥xk-yk-ω(yk-zk)∥²](zk-yk),
where ∥·∥ is the Euclidean norm and 0<ω≤2.
This iteration scheme consists of a PPM iterate to get zk from xk, followed by a modified iterate to calculate xk+1 from xk,yk, and zk.
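A single iteration of this scheme can be sketched as follows (a Python sketch, not part of the paper; the 2×2 test system is hypothetical):

```python
import numpy as np

def ntm_step(F, J, x, omega=2.0):
    """One iteration of scheme (4): the PPM substeps x -> y -> z followed
    by the extrapolation substep built from x, y and z.  The denominator
    vector v = x - y - omega*(y - z) is assumed nonzero away from a root."""
    Jx = J(x)
    y = x - np.linalg.solve(Jx, F(x))
    z = y - np.linalg.solve(Jx, F(y))
    v = x - y - omega * (y - z)
    return z - 2.0 * np.dot(v, z - y) / np.dot(v, v) * (z - y)

# hypothetical 2x2 test system: x1^2 + x2^2 = 1, x1 = x2
F = lambda v: np.array([v[0]**2 + v[1]**2 - 1.0, v[0] - v[1]])
J = lambda v: np.array([[2.0*v[0], 2.0*v[1]], [1.0, -1.0]])
x = np.array([1.0, 0.5])
for _ in range(10):
    if np.linalg.norm(F(x)) < 1e-12:
        break
    x = ntm_step(F, J, x)
```

The extrapolation substep costs only inner products and vector updates, so the per-iteration work is still dominated by the single Jacobian factorization, as in PPM.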
We now derive the last substep. Let f(x)=0 be a scalar real equation; then King’s method [13] is described as
(5) yk = xk - f(xk)/f′(xk), zk = yk - f(yk)/f′(xk), xk+1 = zk - 2(zk-yk)²/(xk-yk-ω(yk-zk)).
In order to extend the method (5) to the case of vector functions, we define the vector inverse as
(6) v⁻¹ = vᵀ/∥v∥², v∈Rn.
The last substep is obtained by applying the above vector inverse to the scalar King method.
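The defining property of the vector inverse (6) is that v⁻¹v = 1 for any nonzero v, which is what allows the scalar division in (5) to be mimicked. A tiny sketch (the helper name `vinv` is ours):

```python
import numpy as np

def vinv(v):
    """Vector inverse (6): v^{-1} = v^T / ||v||^2, so that vinv(v) @ v == 1."""
    return v / np.dot(v, v)

v = np.array([3.0, 4.0])
inner = vinv(v) @ v   # equals 1.0 for any nonzero v
```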
The following theorem will give the order of convergence of the method with 0<ω≤2 given by (4).
Theorem 1.
Suppose that the function F:D⊂Rn→Rn is continuously differentiable and F′(x*) is nonsingular, where D is an open set and x*∈D is the solution of F(x)=0. Define λ=∥F′(x*)-1∥. Further, assume that there exists a positive number γ such that for any x∈D,
(7)∥F′(x)-F′(x*)∥≤γ∥x-x*∥;
then there exists a set S such that for any x0∈S, the sequence {xk} generated by (4) with 0<ω≤2 converges to x* and the order of convergence is three.
Proof.
We can write (4) as xk+1=G(xk) where
(8) y = x - F′(x)⁻¹F(x), z = y - F′(x)⁻¹F(y), G(x) = z - [2(x-y-ω(y-z))·(z-y) / ∥x-y-ω(y-z)∥²](z-y).
Throughout the proof, ∥·∥ denotes the Euclidean norm. Let δ = 1/(20λγ) and S = {x ∣ ∥x-x*∥ ≤ δ} ∩ D. Let x∈S and x≠x*.
It is obtained from (7) that
(9) ∥F′(x)-F′(x*)∥ ≤ γδ ≤ 1/(20λ).
By the Banach lemma, we obtain that F′(x) is nonsingular and ∥F′(x)⁻¹∥ ≤ (20/19)λ. So y and z are well defined.
By making use of Taylor expansion and (7), we have
(10) ∥F(x*)-F(x)-F′(x)(x*-x)∥ = ∥∫_0^1 F′(x+t(x*-x))(x*-x)dt - F′(x)(x*-x)∥ = ∥∫_0^1 [F′(x+t(x*-x))-F′(x)](x*-x)dt∥ ≤ ∫_0^1 ∥F′(x+t(x*-x))-F′(x)∥∥x*-x∥dt ≤ γ∥x-x*∥²∫_0^1 t dt = (1/2)γ∥x-x*∥².
So for y we obtain
(11) ∥y-x*∥ ≤ ∥F′(x)⁻¹∥∥F(x*)-F(x)-F′(x)(x*-x)∥ ≤ (10/19)λγ∥x-x*∥².
Similarly to (10), we get
(12) ∥F(y)-F(x*)-F′(x*)(y-x*)∥ ≤ (1/2)γ∥y-x*∥².
It is obtained by (7) and (12) that
(13) ∥z-x*∥ ≤ ∥F′(x)⁻¹∥[∥F′(x)-F′(x*)∥∥y-x*∥ + ∥F(x*)-F(y)-F′(x*)(x*-y)∥] ≤ (200/361)λ²γ²∥x-x*∥³ + (1000/6859)λ³γ³∥x-x*∥⁴.
Therefore it follows that
(14) ∥G(x)-x*∥ ≤ (∥x-x*∥∥z-x*∥ + (3-ω)∥z-x*∥∥y-x*∥ + (2-ω)∥z-x*∥² + 2∥y-x*∥²) × (∥x-x*∥ - (ω+1)∥y-x*∥ - ω∥z-x*∥)⁻¹ ≤ (292700/480111)λ²γ²∥x-x*∥³ ≤ (2927/1920444)∥x-x*∥ < ∥x-x*∥.
This proves that G(x)∈S and G is a contraction mapping. Thus, for any x0∈S, the sequence {xk} produced by (4) is well defined and converges to x*. Finally, (14) shows that the order of the method (4) is three.
3. The Semilocal Convergence
In this section, we will establish the semilocal convergence of method (4). This convergence may be derived by using recurrence relations, which have been used in establishing the convergence of Newton's method and some third-order methods [14–29]. In what follows, recurrence relations based on one constant depending on F are derived for the method (4). Further, based on these recurrence relations, an error estimate is obtained for the present iterative method.
In order to establish the recurrence relations for the present iterative method, we will use the following scalar functions which are defined by
(15) g1(t) = 1 + (1/2)t + t²/(2-ωt), g2(t) = 1/(1-tg1(t)), g3(t) = (1/2)t²g2(t)[(4+(2-ω)t)/(2-ωt) + (1/4)t((2+(2-ω)t)/(2-ωt))²], g4(t) = g2(t)g3(t),
where 0<ω≤2.
Let h(t) = (2-ωt)(tg1(t)-1) = (1-(1/2)ω)t³ + (1-ω)t² + (2+ω)t - 2. It is easy to check that h(0)h(2/ω) < 0, so h(t) has at least one real zero t^∈(0,2/ω). Furthermore, let f(t) = g4(t)-1. It can be concluded that f(0) < 0 and f(t)→+∞ as t→t^⁻, so f(t) has at least one real zero t*∈(0,t^). Moreover, it can be shown that f(t) is increasing, so t* is the unique zero of f(t) in (0,2/ω). For the functions defined by (15), we have the following results.
Lemma 2.
Let t* be the unique real root of g4(t)-1=0 in (0,2/ω). Then
(a) g1(t) is an increasing function in [0,t*] and satisfies 1≤g1(t)≤g1(t*);
(b) g2(t) is an increasing function in [0,t*] and satisfies 1≤g2(t)≤g2(t*);
(c) g3(t) is an increasing function in [0,t*] and satisfies 0≤g3(t)≤g3(t*)<1;
(d) g4(t) is an increasing function in [0,t*] and satisfies 0≤g4(t)≤g4(t*)=1;
(e) t*g1(t*)/(1-g3(t*)) = 1.
Proof.
The results (a)–(d) can be obtained by simple derivations. We only prove the validity of (e). Noticing that
(16) g4(t) = g2(t)g3(t), g2(t) = 1/(1-tg1(t)),
we get
(17) 1 = g4(t*) = g3(t*)/(1-t*g1(t*)),
which can be converted to (e).
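As a quick numerical check of Lemma 2 (a sketch, not part of the paper; the helper names `make_g` and `t_star` are ours), t^ and t* can be located by bisection; for ω = 2 this gives t* ≈ 0.38:

```python
def make_g(omega=2.0):
    """The scalar functions (15) used in the semilocal analysis."""
    g1 = lambda t: 1.0 + 0.5*t + t**2 / (2.0 - omega*t)
    g2 = lambda t: 1.0 / (1.0 - t * g1(t))
    r  = lambda t: (2.0 + (2.0 - omega)*t) / (2.0 - omega*t)
    g3 = lambda t: 0.5 * t**2 * g2(t) * ((4.0 + (2.0 - omega)*t) / (2.0 - omega*t)
                                         + 0.25 * t * r(t)**2)
    return g1, g2, g3, lambda t: g2(t) * g3(t)

def t_star(omega=2.0):
    """Bisection for the root of g4(t) = 1 in (0, t_hat), where t_hat is
    the zero of t*g1(t) - 1 (equivalently of h(t)) in (0, 2/omega)."""
    g1, g2, g3, g4 = make_g(omega)
    lo, hi = 0.0, 2.0 / omega
    for _ in range(200):                 # locate t_hat: t*g1(t) crosses 1
        mid = 0.5 * (lo + hi)
        if mid * g1(mid) < 1.0:
            lo = mid
        else:
            hi = mid
    lo, hi = 0.0, lo                     # g4 increases from 0 to +inf on (0, t_hat)
    for _ in range(200):                 # locate t*: g4(t) crosses 1
        mid = 0.5 * (lo + hi)
        if g4(mid) < 1.0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

ts = t_star(2.0)   # roughly 0.38 for omega = 2
```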
Theorem 3.
Assume that the function F:D⊂Rn→Rn is continuously differentiable, where D is an open set, and that there exists a positive number γ such that for any x,v∈D,
(18) ∥F′(x)-F′(v)∥ ≤ γ∥x-v∥.
Let g1(t), g2(t), g3(t), and g4(t) be defined by (15). Further, define αk, βk, and ρk as
(19)αk=∥F′(xk)-1F(xk)∥,βk=∥F′(xk)-1∥,ρk=αkβkγ.
Let x0∈D satisfy F(x0)≠0, F′(x0) be nonsingular, ρ0∈(0,t*)⊂(0,2/ω), and S={x∣∥x-x0∥≤θα0}⊂D where θ=g1(t*)/(1-g3(t*)) and t* is the root of g4(t)-1=0 in (0,2/ω); then we have that
{xk} generated by the method (4) is well defined in S and satisfies
(20)∥xk+1-xk∥≤g1(ρk)αk;
αk, βk, and ρk are well defined and satisfy
(21)αk+1≤g3(ρk)αk,(22)βk+1≤g2(ρk)βk,(23)ρk+1≤g4(ρk)ρk.
Proof.
Throughout the proof, ∥·∥ denotes the Euclidean norm. We first consider the case k=0.
Since θ > 1 and ∥y0-x0∥ = α0, it is obvious that y0∈S. By Taylor expansion we have
(24) ∥F(y0)∥ = ∥F(x0)+F′(x0)(y0-x0)+∫_{x0}^{y0}[F′(x)-F′(x0)]dx∥ = ∥∫_0^1[F′(x0+t(y0-x0))-F′(x0)](y0-x0)dt∥ ≤ ∫_0^1∥F′(x0+t(y0-x0))-F′(x0)∥∥y0-x0∥dt ≤ γ∥y0-x0∥²∫_0^1 t dt = (1/2)γα0².
Furthermore,
(25) ∥z0-y0∥ ≤ ∥F′(x0)⁻¹∥∥F(y0)∥ ≤ (1/2)β0γα0² = (1/2)ρ0α0.
It then follows that
(26) ∥z0-x0∥ ≤ ∥z0-y0∥ + ∥y0-x0∥ ≤ (1+(1/2)ρ0)α0.
Taking account of the relation
(27) 1+(1/2)ρ0 < g1(ρ0) < g1(t*) < θ,
we have z0∈S. Since
(28) ∥x0-y0-ω(y0-z0)∥ ≥ ∥y0-x0∥ - ω∥z0-y0∥ ≥ (1-(1/2)ωρ0)α0 > 0,
we obtain that x1 is well defined and
(29) ∥x1-z0∥ ≤ 2∥z0-y0∥²/∥x0-y0-ω(y0-z0)∥ ≤ ρ0²α0/(2-ωρ0),
(30) ∥x1-y0∥ ≤ ∥x1-z0∥ + ∥z0-y0∥ ≤ (ρ0α0/2)·(2+(2-ω)ρ0)/(2-ωρ0),
(31) ∥x1-x0∥ ≤ ∥x1-y0∥ + ∥y0-x0∥ ≤ α0[1+(1/2)ρ0+ρ0²/(2-ωρ0)] = g1(ρ0)α0 < θα0.
This shows x1∈S and the validity of (20).
By condition (18) we have
(32)∥F′(x1)-F′(x0)∥≤γ∥x1-x0∥≤γg1(ρ0)α0.
Because
(33)γg1(ρ0)α0β0=g1(ρ0)ρ0<1,
by Banach lemma we obtain that F′(x1) is nonsingular and
(34) β1 = ∥F′(x1)⁻¹∥ ≤ β0/(1-g1(ρ0)ρ0) = g2(ρ0)β0.
This is to say that (22) holds.
Now we consider F(x1). By making use of (24), (25), and (30), we obtain
(35) ∥F(x1)∥ = ∥F(x0)+F′(x0)(x1-x0)+∫_{x0}^{x1}[F′(x)-F′(x0)]dx∥ ≤ (2∥z0-y0∥/∥x0-y0-ω(y0-z0)∥)∥F(y0)∥ + ∥-F(y0)+∫_{x0}^{x1}[F′(x)-F′(x0)]dx∥ ≤ 2∥z0-y0∥∥F(y0)∥/(∥y0-x0∥-ω∥z0-y0∥) + ∥-∫_{x0}^{y0}[F′(x)-F′(x0)]dx + ∫_{x0}^{x1}[F′(x)-F′(x0)]dx∥ ≤ γρ0α0²/(2-ωρ0) + ∥∫_{y0}^{x1}[F′(x)-F′(x0)]dx∥ ≤ γρ0α0²/(2-ωρ0) + ∫_0^1∥F′(y0+t(x1-y0))-F′(x0)∥∥x1-y0∥dt ≤ γρ0α0²/(2-ωρ0) + γ∫_0^1[∥y0-x0∥+t∥x1-y0∥]∥x1-y0∥dt ≤ γρ0α0²/(2-ωρ0) + γ[∥y0-x0∥+(1/2)∥x1-y0∥]∥x1-y0∥ ≤ γρ0α0²/(2-ωρ0) + (γ/2)ρ0α0²·((2+(2-ω)ρ0)/(2-ωρ0))·[1+(1/4)ρ0·(2+(2-ω)ρ0)/(2-ωρ0)] = (γ/2)ρ0α0²[(4+(2-ω)ρ0)/(2-ωρ0) + (1/4)ρ0((2+(2-ω)ρ0)/(2-ωρ0))²].
Finally, we prove (21) and (23). By making use of (34) and (35), we have
(36) α1 ≤ ∥F′(x1)⁻¹∥∥F(x1)∥ ≤ (1/2)α0ρ0²g2(ρ0)×[(4+(2-ω)ρ0)/(2-ωρ0) + (1/4)ρ0((2+(2-ω)ρ0)/(2-ωρ0))²] = g3(ρ0)α0.
It then holds that
(37)ρ1=α1β1γ≤ρ0g2(ρ0)g3(ρ0)=g4(ρ0)ρ0<g4(t*)ρ0=ρ0<t*.
Now we consider the cases k≥1. By induction we can obtain the following facts.
By Lemma 2, we obtain that
(38)ρk≤g4(ρk-1)ρk-1<g4(t*)ρk-1=ρk-1<t*,
which leads to
(39)g3(ρk)<g3(ρk-1)<⋯<g3(ρ0)<g3(t*)<1,g1(ρk)<g1(t*).
It follows that
(40) αk ≤ g3(ρk-1)αk-1 ≤ g3(ρk-1)⋯g3(ρ0)α0 < g3(t*)^k α0.
This further yields
(41) ∥xk-xk-1∥ ≤ g1(ρk-1)αk-1 < g1(t*)g3(t*)^(k-1)α0.
Thus it is obtained that
(42) ∥xk-x0∥ ≤ ∑_{i=1}^{k}∥xi-xi-1∥ < ∑_{i=1}^{k} g1(t*)g3(t*)^(i-1)α0.
Next we show that yk, zk are well defined in S. By Lemma 2 and (42), we have
(43) ∥yk-x0∥ ≤ αk + ∥xk-x0∥ ≤ g3(t*)^k α0 + ∑_{i=1}^{k} g1(t*)g3(t*)^(i-1)α0 < ∑_{i=0}^{k} g1(t*)g3(t*)^i α0 < (g1(t*)/(1-g3(t*)))α0.
This means that yk∈S. Furthermore, by analogous procedures to (24), (25), and (26), we obtain that
(44) ∥zk-yk∥ ≤ (1/2)ρkαk, ∥zk-xk∥ ≤ ∥zk-yk∥ + ∥yk-xk∥ ≤ (1+(1/2)ρk)αk.
Since
(45) 1+(1/2)ρk < g1(ρk) < g1(t*),
we get
(46) ∥zk-xk∥ < g1(ρk)αk < g1(t*)g3(t*)^k α0.
Hence it follows that
(47) ∥zk-x0∥ ≤ ∥zk-xk∥ + ∥xk-x0∥ ≤ ∑_{i=0}^{k} g1(t*)g3(t*)^i α0 < (g1(t*)/(1-g3(t*)))α0.
This shows that zk∈S. Similarly to the case k=0, we obtain that xk+1 is well defined and have
(48) ∥xk+1-zk∥ ≤ ρk²αk/(2-ωρk), ∥xk+1-yk∥ ≤ ∥xk+1-zk∥ + ∥zk-yk∥ ≤ (ρkαk/2)·(2+(2-ω)ρk)/(2-ωρk), ∥xk+1-xk∥ ≤ g1(ρk)αk.
By (42) we obtain
(49) ∥xk+1-x0∥ < (g1(t*)/(1-g3(t*)))α0,
which shows xk+1∈S.
We can prove analogously to (35) that
(50) ∥F(xk+1)∥ ≤ (γ/2)ρkαk²[(4+(2-ω)ρk)/(2-ωρk) + (1/4)ρk((2+(2-ω)ρk)/(2-ωρk))²].
Because
(51)∥F′(xk+1)-F′(xk)∥≤γg1(ρk)αk,g1(ρk)ρk<g1(ρ0)ρ0<1,
we obtain that F′(xk+1) is nonsingular and
(52)βk+1=∥F′(xk+1)-1∥≤g2(ρk)βk.
From (50) and (52), we have
(53)αk+1≤g3(ρk)αk.
It then follows that
(54)ρk+1=αk+1βk+1γ≤g4(ρk)ρk<g4(t*)ρk=ρk.
Thus far, we have proved all conclusions of this theorem.
The theorem given below will establish the convergence of the sequence {xk} and give the error estimate for it.
Theorem 4.
Let the conditions of Theorem 3 be satisfied. Denote p=g1(t*) and q=g3(t*). Then the sequence {xk} generated by (4) converges to a unique solution x*∈S of F(x)=0, and it holds that
(55) ∥x*-xk∥ < (pq^k/(1-q))α0.
Proof.
Since 0<q<1, it follows from (41) that
(56) ∥xk+m-xk∥ ≤ ∑_{i=0}^{m-1}∥xk+i+1-xk+i∥ < ∑_{i=0}^{m-1} pq^(k+i)α0 < (pq^k/(1-q))α0.
This means that {xk} is a Cauchy sequence, so there exists an x* such that limk→∞xk=x*. By letting m→∞ in (56), we obtain (55). From (56) and (42), we get
(57) ∥x*-x0∥ ≤ ∥x*-xk∥ + ∥xk-x0∥ < ∑_{i=0}^{∞} pq^i α0 = (p/(1-q))α0.
This shows x*∈S.
From (50) and (40), we obtain that
(58) ∥F(xk)∥ < (γ/2)ρ0α0²q^(2(k-1))×[(4+(2-ω)ρ0)/(2-ωρ0) + (1/4)ρ0((2+(2-ω)ρ0)/(2-ωρ0))²].
By letting k→∞ in (58), we obtain F(x*)=0; namely, x* is a solution of F(x)=0.
Now, we prove the uniqueness of x* in S. Let x** be another zero of F(x) in S. By the mean value theorem, we have
(59)0=F(x**)-F(x*)=F′(ξ)(x**-x*),
where ξ is between x* and x**. Since
(60) β0∥F′(ξ)-F′(x0)∥ ≤ β0γ∥ξ-x0∥ ≤ ρ0g1(t*)/(1-g3(t*)) < t*g1(t*)/(1-g3(t*)) = 1,
it follows by Banach lemma that F′(ξ) is invertible and hence x**=x*. This ends the proof.
4. Numerical Tests
In this section, we present some numerical results for the method given by (4) (NTM) and compare its numerical behavior with that of PPM. We also test the composite methods combining PPM with some known vector extrapolation methods mentioned in Section 1, indicated as VEA-PPM, MPE-PPM, and RRE-PPM, respectively. We use ∥Fk∥2 to denote the value of ∥F(x)∥2 at the kth approximate solution xk.
We consider the nonlinear elliptic differential equation:
(61)-∇·K(Θ(ψ))∇ψ=0.
This equation often arises from the flow model in porous media and in this case, ψ is the pressure, Θ the fluid saturation, and K the conductivity. The boundary conditions can be given by
(62)-K(Θ(ψ))∇ψ=VB,on ΓN,ψ=ψB,on ΓD.
In this test, we consider the one-dimensional case. The uniform cell-centered finite difference (CCFD) approximation method is used to discretize the boundary value problem. For the detailed CCFD formulations, we refer to [30] or the references therein. The values of K(Θ(ψ)) on the faces of each cell are taken as the harmonic mean of the cell-centered values. Here, we take K(Θ(ψ))=kψ, where k is a positive real constant. The input boundary condition is given by VB=1, while the output boundary condition is ψB=1.
The discrete scheme leads to a nonlinear system of equations with m variables. We test two cases with sizes m=100 and m=1000, respectively, and take ω=2 in our method. All methods start from the given initial approximate solutions and stop when they satisfy the stopping criterion, which is ∥F∥2<1e-12 for m=100 and ∥F∥2<1e-11 for m=1000. The tables report the iteration counts for the various methods.
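To make the experiment reproducible in outline, the following sketch assembles a simplified CCFD-style residual for the one-dimensional problem and runs scheme (4) on it with a forward-difference Jacobian. The residual function and its boundary treatment are our own reconstruction, and the exact CCFD formulation of [30] may differ; it is a sketch under these assumptions, not the paper's code.

```python
import numpy as np

def residual(psi, k=1.0, VB=1.0, psiB=1.0):
    """Simplified CCFD mass balance for -(k*psi*psi')' = 0 on (0,1):
    prescribed inflow flux VB at x=0, Dirichlet value psiB at x=1
    (approximated with a half-cell difference at the outflow face)."""
    m = len(psi)
    h = 1.0 / m
    c = k * psi                                       # cell-centered conductivity
    harm = 2.0 * c[:-1] * c[1:] / (c[:-1] + c[1:])    # harmonic mean on faces
    flux = np.empty(m + 1)
    flux[0] = VB
    flux[1:m] = -harm * (psi[1:] - psi[:-1]) / h
    flux[m] = -c[-1] * (psiB - psi[-1]) / (h / 2.0)
    return flux[1:] - flux[:-1]                       # balance per cell

def num_jac(Fv, x, eps=1e-7):
    """Forward-difference Jacobian, used here instead of an analytic one."""
    f0, n = Fv(x), len(x)
    J = np.empty((n, n))
    for j in range(n):
        xp = x.copy()
        xp[j] += eps
        J[:, j] = (Fv(xp) - f0) / eps
    return J

m, omega = 50, 2.0
psi = 1.5 * np.ones(m)                                # initial guess 1.5*S
for it in range(50):
    Jx = num_jac(residual, psi)
    y = psi - np.linalg.solve(Jx, residual(psi))
    z = y - np.linalg.solve(Jx, residual(y))
    v = psi - y - omega * (y - z)
    psi = z - 2.0 * np.dot(v, z - y) / np.dot(v, v) * (z - y)
    if np.linalg.norm(residual(psi)) < 1e-10:
        break
```

For k=1 the continuous solution is ψ(x)=√(3−2x), so the computed profile should decrease from roughly 1.73 at the inflow to roughly 1.0 at the outflow.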
The computational results are displayed in Tables 1 and 2. In the tables, S=(1,1,…,1)T and "D" indicates that the method diverges or fails to converge within 50 steps. We use NTM to represent the proposed method.
Table 1: Results of the case m=100.

x0     PPM   VEA-PPM   MPE-PPM   RRE-PPM   NTM
0.5S   D     D         8         8         5
1S     D     6         7         7         5
1.5S   22    6         6         6         5
2S     D     5         6         6         4
3S     D     5         6         6         4
Table 2: Results of the case m=1000.

x0     PPM   VEA-PPM   MPE-PPM   RRE-PPM   NTM
0.1S   D     D         10        10        7
0.5S   D     D         9         9         6
1S     D     30        8         8         6
1.5S   D     8         7         7         5
2S     D     7         7         7         5
3S     D     6         7         7         5
The numerical results show that NTM is more efficient and robust than PPM and the extrapolation-accelerated variants tested.
5. Conclusions
We establish the convergence of a third-order method for systems of nonlinear equations; an existence-uniqueness theorem and an error estimate for this method are also obtained. Numerical results show that this method is more robust and efficient than a number of Newton-type methods combined with other vector extrapolation algorithms.
Conflict of Interests
The authors declare that there is no conflict of interests regarding the publication of this paper.
Acknowledgment
This work is supported by the Scientific and Technical Research Project of Hubei Provincial Department of Education (no. D20132701).
References
[1] J. M. Ortega and W. C. Rheinboldt, Iterative Solution of Nonlinear Equations in Several Variables, Academic Press, New York, NY, USA, 1970.
[2] F. A. Potra and V. Pták, Nondiscrete Induction and Iterative Processes, vol. 103 of Research Notes in Mathematics, Pitman, Boston, Mass, USA, 1984.
[3] S. Cabay and L. W. Jackson, "A polynomial extrapolation method for finding limits and antilimits of vector sequences," SIAM Journal on Numerical Analysis, vol. 13, no. 5, pp. 734–752, 1976.
[4] R. P. Eddy, "Extrapolation to the limit of a vector sequence," in P. C. C. Wang (Ed.), Academic Press, New York, NY, USA, pp. 387–396, 1979.
[5] M. Mešina, "Convergence acceleration for the iterative solution of the equations X = AX + f," Computer Methods in Applied Mechanics and Engineering, vol. 10, no. 2, pp. 165–173, 1977.
[6] C. Brezinski, "Généralisations de la transformation de Shanks, de la table de Padé et de l'ε-algorithme," Calcolo, vol. 12, no. 4, pp. 317–360, 1975.
[7] B. P. Pugachev, "Acceleration of the convergence of iterative processes and a method of solving systems of non-linear equations," USSR Computational Mathematics and Mathematical Physics, vol. 17, no. 5, pp. 199–207, 1977.
[8] A. Sidi, W. F. Ford, and D. A. Smith, "Acceleration of convergence of vector sequences," SIAM Journal on Numerical Analysis, vol. 23, no. 1, pp. 178–196, 1986.
[9] P. Wynn, "On a device for computing the em(Sn) transformation," Mathematical Tables and Other Aids to Computation, vol. 10, pp. 91–96, 1956.
[10] P. Wynn, "Acceleration techniques for iterated vector and matrix problems," Mathematics of Computation, vol. 16, pp. 301–322, 1962.
[11] C. Brezinski and M. Redivo Zaglia, Extrapolation Methods: Theory and Practice, North-Holland, Amsterdam, The Netherlands, 1991.
[12] K. Jbilou and H. Sadok, "Vector extrapolation methods. Applications and numerical comparison," Journal of Computational and Applied Mathematics, vol. 122, no. 1-2, pp. 149–165, 2000.
[13] R. F. King, "A family of fourth order methods for nonlinear equations," SIAM Journal on Numerical Analysis, vol. 10, pp. 876–879, 1973.
[14] L. B. Rall, Computational Solution of Nonlinear Operator Equations, Robert E. Krieger, New York, NY, USA, 1979.
[15] V. Candela and A. Marquina, "Recurrence relations for rational cubic methods I: the Halley method," Computing, vol. 44, no. 2, pp. 169–184, 1990.
[16] J. He, J. Wang, and J.-C. Yao, "Local convergence of Newton's method on Lie groups and uniqueness balls," vol. 2013, Article ID 367161, 9 pages, 2013.
[17] V. Candela and A. Marquina, "Recurrence relations for rational cubic methods II: the Chebyshev method," Computing, vol. 45, no. 4, pp. 355–367, 1990.
[18] J. M. Gutiérrez and M. A. Hernández, "Recurrence relations for the super-Halley method," Computers & Mathematics with Applications, vol. 36, no. 7, pp. 1–8, 1998.
[19] Y. Ling, X. Xu, and S. Yu, "Convergence behavior for Newton-Steffensen's method under γ-condition of second derivative," vol. 2013, Article ID 682167, 11 pages, 2013.
[20] M. A. Hernández, "Chebyshev's approximation algorithms and applications," Computers & Mathematics with Applications, vol. 41, no. 3-4, pp. 433–445, 2001.
[21] J. A. Ezquerro and M. A. Hernández, "Recurrence relations for Chebyshev-type methods," Applied Mathematics and Optimization, vol. 41, no. 2, pp. 227–236, 2000.
[22] S. Amat, S. Busquier, and Á. A. Magreñán, "Reducing chaos and bifurcations in Newton-type methods," vol. 2013, Article ID 726701, 10 pages, 2013.
[23] X. Wang, J. Kou, and C. Gu, "Semilocal convergence of a class of modified super-Halley methods in Banach spaces," Journal of Optimization Theory and Applications, vol. 153, no. 3, pp. 779–793, 2012.
[24] P. D. Proinov, "New general convergence theory for iterative processes and its applications to Newton-Kantorovich type theorems," Journal of Complexity, vol. 26, no. 1, pp. 3–42, 2010.
[25] X. Xu, Y. Xiao, and T. Liu, "Semilocal convergence analysis for inexact Newton method under weak condition," vol. 2012, Article ID 982925, 13 pages, 2012.
[26] S. Amat, Á. A. Magreñán, and N. Romero, "On a two-step relaxed Newton-type method," Applied Mathematics and Computation, vol. 219, no. 24, pp. 11341–11347, 2013.
[27] I. K. Argyros and S. Hilout, "Improved generalized differentiability conditions for Newton-like methods," Journal of Complexity, vol. 26, no. 3, pp. 316–333, 2010.
[28] J. M. Gutiérrez, Á. A. Magreñán, and N. Romero, "On the semilocal convergence of Newton-Kantorovich method under center-Lipschitz conditions," Applied Mathematics and Computation, vol. 221, pp. 79–88, 2013.
[29] R. Lin, Y. Zhao, Z. Šmarda, Y. Khan, and Q. Wu, "Newton-Kantorovich and Smale uniform type convergence theorem for a deformed Newton method in Banach spaces," vol. 2013, Article ID 923898, 8 pages, 2013.
[30] J. Kou, S. Sun, and B. Yu, "Multiscale time-splitting strategy for multiscale multiphysics processes of two-phase flow in fractured media," vol. 2011, Article ID 861905, 24 pages, 2011.