Journal of Mathematics, Hindawi Publishing Corporation, vol. 2014, Article ID 965097, doi:10.1155/2014/965097

Research Article

Newton Type Iteration for Tikhonov Regularization of Nonlinear Ill-Posed Problems in Hilbert Scales

Monnanda Erappa Shobha and Santhosh George, Department of Mathematical and Computational Sciences, National Institute of Technology Karnataka, Mangalore 575 025, India

Academic Editor: Beny Neta

Received 13 February 2014; Revised 11 June 2014; Accepted 1 July 2014

Copyright © 2014 Monnanda Erappa Shobha and Santhosh George. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Recently, Vasin and George (2013) considered an iterative scheme for approximately solving an ill-posed operator equation $F(x)=y$. In order to improve the error estimate available in that work, in the present paper we extend their iterative method to the setting of Hilbert scales. The error estimates, obtained under a general source condition on $x_0-\hat x$ ($x_0$ is the initial guess and $\hat x$ is the actual solution) using the adaptive scheme proposed by Pereverzev and Schock (2005), are of optimal order. The algorithm is applied to the numerical solution of an integral equation in Section 4.

1. Introduction

In this study, we are interested in approximately solving a nonlinear ill-posed operator equation
(1) $F(x) = y$,
where $F : D(F) \subseteq X \to Y$ is a nonlinear operator between Hilbert spaces $X$ and $Y$. Here $D(F)$ is the domain of $F$, and $\langle\cdot,\cdot\rangle$ is the inner product with corresponding norm $\|\cdot\|$ on $X$ and $Y$. Throughout this paper, $B_r(x_0)$ denotes the ball of radius $r$ centered at $x_0$, $F'(x)$ denotes the Fréchet derivative of $F$ at $x \in D(F)$, and $F'(x)^*$ denotes the adjoint of $F'(x)$. We assume that the available noisy data $y^\delta \in Y$ satisfy
(2) $\|y - y^\delta\| \le \delta$,
where $\delta$ is the noise level. Equation (1) is, in general, ill-posed, in the sense that a unique solution that depends continuously on the data need not exist. Since the available data are $y^\delta$, one has to solve (approximately) the perturbed equation
(3) $F(x) = y^\delta$
instead of (1).

Various regularization methods are used to solve ill-posed operator equations, for example, Tikhonov regularization, Landweber iterative regularization, the Levenberg-Marquardt method, Lavrentiev regularization, and Newton type iterative methods (see, e.g., ).

In , Vasin and George considered the iteration (which is a modified form of the method considered in )
(4) $x_{n+1,\alpha}^{\delta} := x_{n,\alpha}^{\delta} - R_\beta^{-1}\big[A_0^*(F(x_{n,\alpha}^{\delta}) - y^\delta) + \alpha(x_{n,\alpha}^{\delta} - x_0)\big]$,
where $R_\beta := A_0^*A_0 + \beta I$, $A_0 = F'(x_0)$, $x_{0,\alpha}^{\delta} = x_0$ is the initial guess, $\alpha > 0$ is the regularization parameter, and $\beta > \alpha$. Iteration (4) was used to obtain an approximation for the zero $x_\alpha^\delta$ of $F'(x_0)^*(F(x) - y^\delta) + \alpha(x - x_0) = 0$, and it was proved that $x_\alpha^\delta$ is an approximate solution of (1). The regularization parameter $\alpha$ in  was chosen appropriately from the finite set $D_N := \{\alpha_i : 0 < \alpha_0 < \alpha_1 < \cdots < \alpha_N\}$, depending on the inexact data $y^\delta$ and the noise level $\delta$ satisfying (2), using the adaptive parameter selection procedure suggested by Pereverzev and Schock .

In order to improve the rate of convergence, many authors have considered Hilbert scale variants of regularization methods for solving ill-posed operator equations; see, for example, . In this study, we present the Hilbert scale variant of (4).

We consider the Hilbert scale $\{X_t\}_{t\in\mathbb{R}}$ (see [14, 18, 23, 26–29]) generated by a strictly positive self-adjoint operator $B : D(B) \subseteq X \to X$ with domain $D(B)$ dense in $X$, satisfying $\|Bx\| \ge \|x\|$ for all $x \in D(B)$. Recall [19, 28] that the space $X_t$ is the completion of $D := \bigcap_{k=0}^{\infty} D(B^k)$ with respect to the norm $\|x\|_t$ induced by the inner product
(5) $\langle u, v\rangle_t = \langle B^{t/2}u, B^{t/2}v\rangle, \quad u, v \in D$.

In this paper, we consider the sequence $\{x_{n,\alpha,s}^{\delta}\}$ defined iteratively by
(6) $x_{n+1,\alpha,s}^{\delta} = x_{n,\alpha,s}^{\delta} - R_\beta^{-1}\big[A_0^*(F(x_{n,\alpha,s}^{\delta}) - y^\delta) + \alpha B^s(x_{n,\alpha,s}^{\delta} - x_0)\big]$,
where $R_\beta := A_0^*A_0 + \beta B^s$, $x_{0,\alpha,s}^{\delta} := x_0$ is the initial guess, and $\beta > \alpha$, for obtaining an approximation for the zero $x_{\alpha,s}^{\delta}$ of (cf. [21, 30])
(7) $A_0^*(F(x) - y^\delta) + \alpha B^s(x - x_0) = 0$.
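In a finite-dimensional discretization, one step of (6) amounts to a linear solve with the frozen operator $R_\beta = A_0^*A_0 + \beta B^s$. The following is a minimal sketch under hypothetical names: `F` is a discretized forward map, `A0` its Jacobian matrix at the initial guess, and `Bs` a symmetric positive definite matrix representing $B^s$.

```python
import numpy as np

def newton_tikhonov_step(x, x0, F, A0, Bs, y_delta, alpha, beta):
    """One step of iteration (6):
    x_{n+1} = x_n - (A0^T A0 + beta*Bs)^{-1} [A0^T (F(x_n) - y_delta)
                                              + alpha*Bs (x_n - x0)].
    The Jacobian A0 is frozen at the initial guess x0."""
    R = A0.T @ A0 + beta * Bs                      # frozen regularized normal operator
    rhs = A0.T @ (F(x) - y_delta) + alpha * Bs @ (x - x0)
    return x - np.linalg.solve(R, rhs)

def iterate(x0, F, A0, Bs, y_delta, alpha, beta, n_steps):
    """Run n_steps of the iteration starting from x0."""
    x = x0.copy()
    for _ in range(n_steps):
        x = newton_tikhonov_step(x, x0, F, A0, Bs, y_delta, alpha, beta)
    return x
```

Since $A_0$, and hence $R_\beta$, never changes along the iteration, a factorization of `R` could be computed once and reused in every step.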

As in , we use the following center-type Lipschitz condition for the convergence analysis of the iterative scheme.

Assumption 1.

Let $x_0 \in X$ be fixed. There exists a constant $k_0 > 0$ such that, for every $u \in B_r(x_0) \cup B_r(\hat x) \subseteq D(F)$ and $v \in X$, there exists an element $\Phi(x_0, u, v) \in X$ satisfying
(8) $[F'(x_0) - F'(u)]v = F'(x_0)\Phi(x_0, u, v), \qquad \|\Phi(x_0, u, v)\| \le k_0\|v\|\,\|x_0 - u\|$.

The error estimates in this work are obtained using a source condition on $x_0 - \hat x$. In addition to the advantages listed in [16, page 3], the method considered in this paper gives optimal order for a range of smoothness assumptions on $x_0 - \hat x$. The regularization parameter $\alpha$ is chosen from a finite set $\{\alpha_0 < \alpha_1 < \alpha_2 < \cdots < \alpha_N\}$ using the balancing principle considered by Pereverzev and Schock in .

The paper is organized as follows. In Section 2, we give the analysis of the method (6) in the setting of Hilbert scales; error bounds under source conditions, an a priori parameter choice, and the adaptive scheme for choosing the parameter $\alpha$ are given in Sections 2.1–2.3. Implementation of the method is described in Section 3, a numerical example validating the efficiency of the proposed method is presented in Section 4, and we conclude the paper in Section 5.

2. The Method

First we will prove that the sequence $(x_{n,\alpha,s}^{\delta})$ defined by (6) converges to the zero $x_{\alpha,s}^{\delta}$ of (7), and then we will show that $x_{\alpha,s}^{\delta}$ is an approximation to the solution $\hat x$ of (1).

Let $A_s = A_0 B^{-s/2}$. We make use of the relation
(9) $\|(A_s^*A_s + \alpha I)^{-1}(A_s^*A_s)^p\| \le \alpha^{p-1}, \quad 0 < p \le 1$,
which follows from the spectral properties of the positive self-adjoint operator $A_s^*A_s$, $s > 0$. Usually, for the analysis of regularization methods in Hilbert scales, an assumption of the form (cf. [18, 24])
(10) $\|F'(\hat x)x\| \sim \|x\|_{-b}, \quad x \in X$,
on the degree of ill-posedness is used. In this paper, instead of (10), we require only the weaker assumption
(11) $d_1\|x\|_{-b} \le \|A_0 x\| \le d_2\|x\|_{-b}, \quad x \in X$,
for some positive reals $b$, $d_1$, and $d_2$.

Note that (11) is simpler than (10). Now we define $f$ and $g$ by
(12) $f(t) = \min\{d_1^t, d_2^t\}, \qquad g(t) = \max\{d_1^t, d_2^t\}, \quad t \in \mathbb{R},\ |t| \le 1$.
The following proposition is used for further analysis.

Proposition 2 (cf. [29, Proposition 2.1]).

For $s > 0$ and $|\nu| \le 1$,
(13) $f(\nu)\|x\|_{-\nu(s+b)} \le \|(A_s^*A_s)^{\nu/2}x\| \le g(\nu)\|x\|_{-\nu(s+b)}, \quad x \in X$.

Let us define a few parameters essential for the convergence analysis. Let
(14) $\psi_2(s) := \dfrac{g(s/(s+b))}{f(s/(s+b))}$, $\quad \bar\psi_2(s) := \dfrac{1}{f(s/(s+b))}$, $\quad e_{n,\alpha,s}^{\delta} := \|x_{n,\alpha,s}^{\delta} - x_{n-1,\alpha,s}^{\delta}\|$, $n \ge 1$,
$\delta_0 < \dfrac{\beta^{b/2(s+b)}}{2k_0\bar\psi_2(s)}\Big(\psi_2(s) + \dfrac{\alpha_0^2}{2\beta^2}\Big)$, and $\|\hat x - x_0\| \le \rho$, with
(15) $\rho < -\dfrac{1}{k_0} + \dfrac{1}{k_0\psi_2(s)}\sqrt{\psi_2(s)\Big[\Big(\dfrac{\alpha^2}{2\beta^2} + \psi_2(s)\Big) - 2k_0\bar\psi_2(s)\beta^{-b/2(s+b)}\delta\Big]}$,
$\gamma_\rho := \bar\psi_2(s)\beta^{-b/2(s+b)}\delta + \psi_2(s)\Big(\dfrac{k_0\rho^2}{2} + \rho\Big)$.
Further, let $\gamma_\rho < \alpha^2/(4k_0\beta^2)$ and
(16) $r_1 = \dfrac{\alpha + \sqrt{\alpha^2 - 4k_0\gamma_\rho\beta^2}}{2k_0\beta}, \qquad r_2 = \min\Big\{\dfrac{1}{\psi_2(s)k_0},\ \dfrac{\alpha - \sqrt{\alpha^2 - 4k_0\gamma_\rho\beta^2}}{2k_0\beta}\Big\}$.
For $r \in (r_1, r_2)$, let
(17) $q = \psi_2(s)\Big(k_0 r + \dfrac{\beta - \alpha}{\beta}\Big)$.
Then $q < 1$.
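Before running the iteration, one can check numerically that the contraction factor in (17) is indeed below one. A small helper under the definitions (12), (14), and (17) (function and argument names are illustrative):

```python
def contraction_factor(d1, d2, s, b, k0, r, alpha, beta):
    """Compute q = psi2(s) * (k0*r + (beta - alpha)/beta) from (17),
    with psi2(s) = g(s/(s+b)) / f(s/(s+b)) and f, g as in (12)."""
    t = s / (s + b)
    f = min(d1 ** t, d2 ** t)      # f(t) = min{d1^t, d2^t}
    g = max(d1 ** t, d2 ** t)      # g(t) = max{d1^t, d2^t}
    psi2 = g / f
    return psi2 * (k0 * r + (beta - alpha) / beta)
```

For instance, with $d_1 = d_2$ (so $\psi_2(s) = 1$), $q$ reduces to $k_0 r + (\beta - \alpha)/\beta$, which is below one whenever $k_0 r < \alpha/\beta$.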

Lemma 3.

Let Proposition 2 hold. Then, for all $h \in X$, the following hold:

(a) $\|(A_0^*A_0 + \beta B^s)^{-1}A_0^*A_0 h\| \le \psi_2(s)\|h\|$;

(b) $\|(A_0^*A_0 + \beta B^s)^{-1}B^s h\| \le \psi_2(s)\dfrac{1}{\beta}\|h\|$.

Proof.

Observe that, by Proposition 2 and (9),
(18)
$\|(A_0^*A_0 + \beta B^s)^{-1}A_0^*A_0 h\| = \|B^{-s/2}(A_s^*A_s + \beta I)^{-1}A_s^*A_s B^{s/2}h\|$
$\le \dfrac{1}{f(s/(s+b))}\|(A_s^*A_s)^{s/2(s+b)}(A_s^*A_s + \beta I)^{-1}A_s^*A_s B^{s/2}h\|$
$\le \dfrac{1}{f(s/(s+b))}\|(A_s^*A_s + \beta I)^{-1}A_s^*A_s\|\,\|(A_s^*A_s)^{s/2(s+b)}B^{s/2}h\|$
$\le \dfrac{g(s/(s+b))}{f(s/(s+b))}\|B^{s/2}h\|_{-s} = \dfrac{g(s/(s+b))}{f(s/(s+b))}\|h\|$,

$\|(A_0^*A_0 + \beta B^s)^{-1}B^s h\| = \|B^{-s/2}(A_s^*A_s + \beta I)^{-1}B^{s/2}h\|$
$\le \dfrac{1}{f(s/(s+b))}\|(A_s^*A_s)^{s/2(s+b)}(A_s^*A_s + \beta I)^{-1}B^{s/2}h\|$
$\le \dfrac{1}{f(s/(s+b))}\dfrac{1}{\beta}\|(A_s^*A_s)^{s/2(s+b)}B^{s/2}h\|$
$\le \dfrac{g(s/(s+b))}{f(s/(s+b))}\dfrac{1}{\beta}\|B^{s/2}h\|_{-s} = \dfrac{g(s/(s+b))}{f(s/(s+b))}\dfrac{1}{\beta}\|h\|$.

This completes the proof of the lemma.

Theorem 4.

Let $e_{n,\alpha,s}^{\delta}$ and $q$ be as in (14) and (17), respectively, and let $x_{n,\alpha,s}^{\delta}$ be as defined in (6) with $\delta \in (0, \delta_0]$. Then, under Assumption 1 and Lemma 3, the following estimates hold for all $n \ge 0$:

(a) $\|x_{n+1,\alpha,s}^{\delta} - x_{n,\alpha,s}^{\delta}\| \le q^n\gamma_\rho$;

(b) $x_{n,\alpha,s}^{\delta} \in B_r(x_0)$.

Proof.

If $x_{n,\alpha,s}^{\delta} \in B_r(x_0)$, then by Assumption 1,
(19)
$x_{n+1,\alpha,s}^{\delta} - x_{n,\alpha,s}^{\delta} = x_{n,\alpha,s}^{\delta} - x_{n-1,\alpha,s}^{\delta} - (A_0^*A_0 + \beta B^s)^{-1}\big[A_0^*(F(x_{n,\alpha,s}^{\delta}) - F(x_{n-1,\alpha,s}^{\delta})) + \alpha B^s(x_{n,\alpha,s}^{\delta} - x_{n-1,\alpha,s}^{\delta})\big]$
$= (A_0^*A_0 + \beta B^s)^{-1}\big[A_0^*A_0(x_{n,\alpha,s}^{\delta} - x_{n-1,\alpha,s}^{\delta}) - A_0^*(F(x_{n,\alpha,s}^{\delta}) - F(x_{n-1,\alpha,s}^{\delta})) + (\beta - \alpha)B^s(x_{n,\alpha,s}^{\delta} - x_{n-1,\alpha,s}^{\delta})\big]$
$= \Gamma_1 + \Gamma_2$,
where
(20) $\Gamma_1 := (A_0^*A_0 + \beta B^s)^{-1}A_0^*\displaystyle\int_0^1\big[A_0 - F'(x_{n-1,\alpha,s}^{\delta} + t(x_{n,\alpha,s}^{\delta} - x_{n-1,\alpha,s}^{\delta}))\big](x_{n,\alpha,s}^{\delta} - x_{n-1,\alpha,s}^{\delta})\,dt$
and $\Gamma_2 := (\beta - \alpha)(A_0^*A_0 + \beta B^s)^{-1}B^s(x_{n,\alpha,s}^{\delta} - x_{n-1,\alpha,s}^{\delta})$. Hence, by Assumption 1 and Lemma 3(a), we have
(21) $\|\Gamma_1\| = \Big\|(A_0^*A_0 + \beta B^s)^{-1}A_0^*A_0\displaystyle\int_0^1\Phi\big(x_0,\ x_{n-1,\alpha,s}^{\delta} + t(x_{n,\alpha,s}^{\delta} - x_{n-1,\alpha,s}^{\delta}),\ x_{n,\alpha,s}^{\delta} - x_{n-1,\alpha,s}^{\delta}\big)\,dt\Big\| \le \psi_2(s)k_0 r\|x_{n,\alpha,s}^{\delta} - x_{n-1,\alpha,s}^{\delta}\|$,
and by Lemma 3(b),
(22) $\|\Gamma_2\| \le \dfrac{\beta - \alpha}{\beta}\psi_2(s)\|x_{n,\alpha,s}^{\delta} - x_{n-1,\alpha,s}^{\delta}\|$.
Hence, by (19), (21), and (22), we have
(23) $\|x_{n+1,\alpha,s}^{\delta} - x_{n,\alpha,s}^{\delta}\| \le \psi_2(s)\Big(k_0 r + \dfrac{\beta - \alpha}{\beta}\Big)\|x_{n,\alpha,s}^{\delta} - x_{n-1,\alpha,s}^{\delta}\| = q\|x_{n,\alpha,s}^{\delta} - x_{n-1,\alpha,s}^{\delta}\| \le \cdots \le q^n\|x_{1,\alpha,s}^{\delta} - x_{0,\alpha,s}^{\delta}\| = q^n e_{1,\alpha,s}^{\delta}$.

Next we show that $e_{1,\alpha,s}^{\delta} \le \gamma_\rho$, using Assumption 1 and Lemma 3. Observe that, since $F(\hat x) = y$,
(24)
$e_{1,\alpha,s}^{\delta} = \|x_{1,\alpha,s}^{\delta} - x_{0,\alpha,s}^{\delta}\| = \|(A_0^*A_0 + \beta B^s)^{-1}A_0^*(F(x_0) - y^\delta)\|$
$= \big\|B^{-s/2}(A_s^*A_s + \beta I)^{-1}A_s^*\big[y^\delta - y + F(\hat x) - F(x_0) - A_0(\hat x - x_0) + A_0(\hat x - x_0)\big]\big\|$
$\le \|B^{-s/2}(A_s^*A_s + \beta I)^{-1}A_s^*(y^\delta - y)\| + \Big\|B^{-s/2}(A_s^*A_s + \beta I)^{-1}A_s^*\displaystyle\int_0^1\big(F'(x_0 + t(\hat x - x_0)) - A_0\big)(\hat x - x_0)\,dt\Big\| + \|B^{-s/2}(A_s^*A_s + \beta I)^{-1}A_s^*A_0(\hat x - x_0)\|$
$\le \dfrac{1}{f(s/(s+b))}\beta^{-b/2(s+b)}\delta + \dfrac{g(s/(s+b))}{f(s/(s+b))}\dfrac{k_0\rho^2}{2} + \dfrac{g(s/(s+b))}{f(s/(s+b))}\rho$
$= \bar\psi_2(s)\beta^{-b/2(s+b)}\delta + \psi_2(s)\Big(\dfrac{k_0\rho^2}{2} + \rho\Big) =: \gamma_\rho < r$.
Hence, (a) follows from (23) and (24).

To prove (b), note that $\|x_{1,\alpha,s}^{\delta} - x_0\| \le \gamma_\rho < r$. Suppose $x_{m,\alpha,s}^{\delta} \in B_r(x_0)$ for some $m$; then
(25) $\|x_{m+1,\alpha,s}^{\delta} - x_0\| \le \|x_{m+1,\alpha,s}^{\delta} - x_{m,\alpha,s}^{\delta}\| + \cdots + \|x_{1,\alpha,s}^{\delta} - x_0\| \le (q^m + q^{m-1} + \cdots + 1)e_{1,\alpha,s}^{\delta} \le \dfrac{1}{1-q}e_{1,\alpha,s}^{\delta} \le \dfrac{\gamma_\rho}{1-q} < r$.
Thus, by induction, $x_{n,\alpha,s}^{\delta} \in B_r(x_0)$ for all $n \ge 0$. This proves (b).

We now state the main result of this section.

Theorem 5.

Let $x_{n,\alpha,s}^{\delta}$ be as in (6) with $\delta \in (0, \delta_0]$, and let the assumptions of Theorem 4 hold. Then $(x_{n,\alpha,s}^{\delta})$ is a Cauchy sequence in $B_r(x_0)$ and converges, say, to $x_{\alpha,s}^{\delta} \in \overline{B_r(x_0)}$. Further, $A_0^*(F(x_{\alpha,s}^{\delta}) - y^\delta) + \alpha B^s(x_{\alpha,s}^{\delta} - x_0) = 0$ and
(26) $\|x_{n,\alpha,s}^{\delta} - x_{\alpha,s}^{\delta}\| \le Cq^n$,
where $C = \gamma_\rho/(1-q)$.

Proof.

Using relation (a) of Theorem 4, we obtain
(27) $\|x_{n+m,\alpha,s}^{\delta} - x_{n,\alpha,s}^{\delta}\| \le \displaystyle\sum_{i=0}^{m-1}\|x_{n+i+1,\alpha,s}^{\delta} - x_{n+i,\alpha,s}^{\delta}\| \le \sum_{i=0}^{m-1}q^{n+i}e_{1,\alpha,s}^{\delta} \le q^n\dfrac{1}{1-q}e_{1,\alpha,s}^{\delta} \le q^n\dfrac{\gamma_\rho}{1-q} = Cq^n$.
Thus, $(x_{n,\alpha,s}^{\delta})$ is a Cauchy sequence in $B_r(x_0)$, and hence it converges, say, to $x_{\alpha,s}^{\delta} \in \overline{B_r(x_0)}$.

Now, letting $n \to \infty$ in (6), we obtain
(28) $A_0^*(F(x_{\alpha,s}^{\delta}) - y^\delta) + \alpha B^s(x_{\alpha,s}^{\delta} - x_0) = 0$.
This completes the proof.

The following assumption, on the source function and the source condition, is required to obtain the error estimates.

Assumption 6.

There exists a continuous, strictly monotonically increasing function $\varphi : (0, \|A_s\|^2] \to (0, \infty)$ such that the following conditions hold:

(a) $\lim_{\lambda \to 0}\varphi(\lambda) = 0$;

(b) $\sup_{0 < \lambda \le \|A_s\|^2}\dfrac{\alpha\varphi(\lambda)}{\lambda + \alpha} \le \varphi(\alpha)$;

(c) there exists $w \in X$ with $\|w\| \le E_2$ such that
(29) $(A_s^*A_s)^{s/2(s+b)}B^{s/2}(x_0 - \hat x) = \varphi(A_s^*A_s)w$.

Remark 7.

If $x_0 - \hat x \in X_t$, say $\|x_0 - \hat x\|_t \le E_1$, for some positive constant $E_1$ and $0 \le t \le 2s + b$, then we have $(A_s^*A_s)^{s/2(s+b)}B^{s/2}(x_0 - \hat x) = \varphi(A_s^*A_s)w$, where $\varphi(\lambda) = \lambda^{t/2(s+b)}$, $w = (A_s^*A_s)^{(s-t)/2(s+b)}B^{s/2}(x_0 - \hat x)$, and, by Proposition 2, $\|w\| \le g((s-t)/(s+b))E_1 =: E_2$.

Theorem 8.

Let $x_{\alpha,s}^{\delta}$ be the solution of (7), and suppose that Assumptions 1 and 6 hold. Then
(30) $\|x_{\alpha,s}^{\delta} - \hat x\| \le \dfrac{\bar\psi_2(s)\big(\varphi(\alpha) + \alpha^{-b/2(s+b)}\delta\big)}{1 - \psi_2(s)k_0 r}$.

Proof.

Let $M = \displaystyle\int_0^1 F'(\hat x + t(x_{\alpha,s}^{\delta} - \hat x))\,dt$. Then
(31) $F(x_{\alpha,s}^{\delta}) - F(\hat x) = M(x_{\alpha,s}^{\delta} - \hat x)$.
Since $A_0^*(F(x_{\alpha,s}^{\delta}) - y^\delta) + \alpha B^s(x_{\alpha,s}^{\delta} - x_0) = 0$, one can see that
(32)
$(A_0^*A_0 + \alpha B^s)(x_{\alpha,s}^{\delta} - \hat x) = (A_0^*A_0 + \alpha B^s)(x_{\alpha,s}^{\delta} - \hat x) - A_0^*(F(x_{\alpha,s}^{\delta}) - y^\delta) - \alpha B^s(x_{\alpha,s}^{\delta} - x_0)$
$= A_0^*[A_0 - M](x_{\alpha,s}^{\delta} - \hat x) + A_0^*(y^\delta - y) + \alpha B^s(x_0 - \hat x)$,
so that
$x_{\alpha,s}^{\delta} - \hat x = (A_0^*A_0 + \alpha B^s)^{-1}\big[A_0^*(A_0 - M)(x_{\alpha,s}^{\delta} - \hat x) + A_0^*(y^\delta - y) + \alpha B^s(x_0 - \hat x)\big] = s_1 + s_2 + s_3$,
where
(33) $s_1 := (A_0^*A_0 + \alpha B^s)^{-1}A_0^*(A_0 - M)(x_{\alpha,s}^{\delta} - \hat x)$, $\quad s_2 := (A_0^*A_0 + \alpha B^s)^{-1}A_0^*(y^\delta - y)$, $\quad s_3 := \alpha(A_0^*A_0 + \alpha B^s)^{-1}B^s(x_0 - \hat x)$.
Note that, by Assumption 1 and Lemma 3,
(34) $\|s_1\| \le \psi_2(s)k_0 r\|x_{\alpha,s}^{\delta} - \hat x\|$;
by Proposition 2,
(35) $\|s_2\| \le \bar\psi_2(s)\alpha^{-b/2(s+b)}\delta$;
and by Assumption 6,
(36) $\|s_3\| \le \bar\psi_2(s)\varphi(\alpha)$.

Hence, by (32) and (34)–(36), we have
(37) $\|x_{\alpha,s}^{\delta} - \hat x\| \le \dfrac{\bar\psi_2(s)\big(\varphi(\alpha) + \alpha^{-b/2(s+b)}\delta\big)}{1 - \psi_2(s)k_0 r}$.
This completes the proof of the theorem.

2.1. Error Bounds under Source Conditions

Theorem 9.

Let $x_{n,\alpha,s}^{\delta}$ be as in (6). If the assumptions of Theorems 5 and 8 hold, then
(38) $\|\hat x - x_{n,\alpha,s}^{\delta}\| \le Cq^n + \dfrac{\bar\psi_2(s)\big(\varphi(\alpha) + \alpha^{-b/2(s+b)}\delta\big)}{1 - \psi_2(s)k_0 r}$,
where $C$ is as in Theorem 5. Further, if $n_\delta := \min\{n : q^n \le \alpha^{-b/2(s+b)}\delta\}$, then
(39) $\|\hat x - x_{n_\delta,\alpha,s}^{\delta}\| \le C_s\big(\varphi(\alpha) + \alpha^{-b/2(s+b)}\delta\big)$,
where $C_s = C + \bar\psi_2(s)/(1 - \psi_2(s)k_0 r)$.

2.2. A Priori Choice of the Parameter

The error estimate $\varphi(\alpha) + \alpha^{-b/2(s+b)}\delta$ in Theorem 9 attains its minimum for the choice $\alpha := \alpha(\delta, s, b)$ which satisfies $\varphi(\alpha) = \alpha^{-b/2(s+b)}\delta$. Clearly $\alpha(\delta, s, b) = \varphi^{-1}(\psi_{s,b}^{-1}(\delta))$, where
(40) $\psi_{s,b}(\lambda) = \lambda\big[\varphi^{-1}(\lambda)\big]^{b/2(s+b)}, \quad 0 < \lambda \le \|A_s\|^2$.

Thus, we have the following theorem.

Theorem 10.

Suppose that all assumptions of Theorems 5 and 8 are fulfilled. For $\delta > 0$, let $\alpha(\delta, s, b) = \varphi^{-1}(\psi_{s,b}^{-1}(\delta))$, and let $n_\delta$ be as in Theorem 9. Then
(41) $\|\hat x - x_{n_\delta,\alpha,s}^{\delta}\| = O\big(\psi_{s,b}^{-1}(\delta)\big)$.
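For the power-type source condition of Remark 7, the order in (41) can be made explicit. The following short computation is a worked instance of (40), assuming $\varphi(\lambda) = \lambda^{t/2(s+b)}$ as in Remark 7:

```latex
\varphi(\lambda)=\lambda^{t/2(s+b)}
\;\Longrightarrow\;
\varphi^{-1}(\lambda)=\lambda^{2(s+b)/t},
\qquad
\psi_{s,b}(\lambda)
=\lambda\,\bigl[\varphi^{-1}(\lambda)\bigr]^{b/2(s+b)}
=\lambda\cdot\lambda^{b/t}
=\lambda^{(t+b)/t},
\]
so that
\[
\psi_{s,b}^{-1}(\delta)=\delta^{t/(t+b)},
\qquad
\bigl\|\hat x - x_{n_\delta,\alpha,s}^{\delta}\bigr\|
= O\bigl(\delta^{t/(t+b)}\bigr).
```

Note that the resulting rate $O(\delta^{t/(t+b)})$ does not depend on $s$. In particular, for $t$ arbitrarily close to $1/2$, and if the operator satisfies (11) with $b = 2$ (which is what one would expect for the Green's-function kernel in the numerical example of Section 4), the rate is arbitrarily close to $O(\delta^{1/5})$.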

2.3. Adaptive Scheme and Stopping Rule

In this subsection, we consider the adaptive scheme suggested by Pereverzev and Schock in , suitably modified for choosing the parameter $\alpha$; the choice does not involve the smoothness function $\varphi$ in an explicit manner.

For $i \in \{0, 1, 2, \ldots, N\}$, let $\alpha_i = \mu^i\alpha_0$, where $\mu = \eta^{2(1+s/b)}$, $\eta > 1$, and $\alpha_0 = \delta^{2(1+s/b)}$. Let $n_i := \min\{n : q^n \le \alpha_i^{-b/2(s+b)}\delta\}$, and let $x_{n_i,\alpha_i,s}^{\delta}$ be as defined in (6) with $\alpha = \alpha_i$ and $n = n_i$. Then, from Theorem 9, we have
(42) $\|\hat x - x_{n_i,\alpha_i,s}^{\delta}\| \le C_s\big(\varphi(\alpha_i) + \alpha_i^{-b/2(s+b)}\delta\big)$.

Further, let
(43) $l := \max\{i : \varphi(\alpha_i) \le \alpha_i^{-b/2(s+b)}\delta\} < N$,
(44) $k := \max\{i : \|x_{n_i,\alpha_i,s}^{\delta} - x_{n_j,\alpha_j,s}^{\delta}\| \le 4C_s\alpha_j^{-b/2(s+b)}\delta,\ j = 0, 1, 2, \ldots, i-1\}$,
where $C_s$ is as in Theorem 9. The proof of the following theorem is analogous to that of Theorem 4.4 in , so we omit the details.

Theorem 11.

Let $x_{n,\alpha,s}^{\delta}$ be as in (6) with $\alpha = \alpha_i$ and $\delta \in (0, \delta_0]$, and let the assumptions of Theorem 9 hold. Let $l$ and $k$ be as defined in (43) and (44), respectively. Then $l \le k$ and
(45) $\|\hat x - x_{n_k,\alpha_k,s}^{\delta}\| \le 6C_s\eta\,\psi_{s,b}^{-1}(\delta)$.

3. Implementation of the Method

Finally, the balancing algorithm associated with the choice of the parameter specified in Theorem 11 involves the following steps:

choose $\alpha_0 > 0$ such that $\delta_0 < \dfrac{\beta^{b/2(s+b)}}{2k_0\bar\psi_2(s)}\Big(\psi_2(s) + \dfrac{\alpha_0^2}{2\beta^2}\Big)$ and $\eta > 1$;

choose $N$ big enough but not too large, and set $\alpha_i := \mu^i\alpha_0$, $i = 0, 1, 2, \ldots, N$, where $\mu = \eta^{2(1+s/b)}$;

choose $\rho < -\dfrac{1}{k_0} + \dfrac{1}{k_0\psi_2(s)}\sqrt{\psi_2(s)\Big[\Big(\dfrac{\alpha^2}{2\beta^2} + \psi_2(s)\Big) - 2k_0\bar\psi_2(s)\beta^{-b/2(s+b)}\delta\Big]}$, as in (15).

Algorithm 1.

set $i = 0$;

choose $n_i = \min\{n : q^n \le \alpha_i^{-b/2(s+b)}\delta\}$;

compute $x_{n_i,\alpha_i,s}^{\delta}$ by using the iteration (6);

if $\|x_{n_i,\alpha_i,s}^{\delta} - x_{n_j,\alpha_j,s}^{\delta}\| > 4C_s\alpha_j^{-b/2(s+b)}\delta$ for some $j < i$, then take $k = i - 1$ and stop; otherwise, set $i = i + 1$ and repeat from the second step.
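The steps above can be sketched compactly as follows. The interface is hypothetical: `solve(alpha, n)` stands for a routine returning the iterate $x_{n,\alpha,s}^{\delta}$ computed by (6), and `alphas` is the grid $\alpha_i = \mu^i\alpha_0$.

```python
import numpy as np

def balancing_choice(solve, alphas, delta, q, Cs, b, s):
    """Adaptive (balancing-principle) choice of alpha in the spirit of
    Pereverzev-Schock: stop at the first i that violates
    ||x_i - x_j|| <= 4*Cs*alpha_j^{-b/2(s+b)}*delta for all j < i."""
    iterates = []
    for i, alpha in enumerate(alphas):
        tol = alpha ** (-b / (2 * (s + b))) * delta
        # n_i = min{ n : q^n <= alpha_i^{-b/2(s+b)} * delta }
        n_i = int(np.ceil(np.log(tol) / np.log(q))) if tol < 1 else 0
        x_i = solve(alpha, n_i)
        for j, x_j in enumerate(iterates):
            thresh = 4 * Cs * alphas[j] ** (-b / (2 * (s + b))) * delta
            if np.linalg.norm(x_i - x_j) > thresh:
                return alphas[i - 1], iterates[-1]   # k = i - 1
        iterates.append(x_i)
    return alphas[-1], iterates[-1]                  # no violation up to N
```

If no index violates the balancing inequality, the routine returns the last grid point, mirroring the fact that (44) then gives $k = N$.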

4. Numerical Example

Example 1.

In this example, we consider the nonlinear integral operator $F : D(F) \subseteq L^2(0,1) \to L^2(0,1)$ defined by
(46) $F(x)(t) := \displaystyle\int_0^1 k(t, s)x(s)^3\,ds$,
with
(47) $k(t, s) = \begin{cases}(1-t)s, & 0 \le s \le t \le 1,\\ (1-s)t, & 0 \le t \le s \le 1.\end{cases}$
The Fréchet derivative of $F$ is given by
(48) $F'(u)w = 3\displaystyle\int_0^1 k(t, s)(u(s))^2 w(s)\,ds$.
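One way (46)–(48) might be discretized is by the midpoint rule on $n$ nodes; the sketch below uses hypothetical names and is not the authors' own implementation.

```python
import numpy as np

def make_operator(n):
    """Midpoint-rule discretization of F(x)(t) = ∫_0^1 k(t,s) x(s)^3 ds
    with the kernel (47), on nodes t_i = (i + 1/2)/n."""
    t = (np.arange(n) + 0.5) / n
    T, S = np.meshgrid(t, t, indexing="ij")               # T[i,j]=t_i, S[i,j]=s_j
    K = np.where(S <= T, (1 - T) * S, (1 - S) * T) / n    # kernel times weight 1/n
    F = lambda x: K @ x**3                                # forward map (46)
    Fprime = lambda u: 3 * K * u**2                       # Jacobian (48): column j scaled by 3 u_j^2
    return t, F, Fprime
```

Together with a matrix for $B^s$, the resulting `F` and `Fprime(x0)` plug directly into the iteration (6).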

In our computation, we take $y(t) = (t - t^{11})/110$ and $y^\delta = y + \sigma(\|y\|/\|e\|)e$, where $e = (e_i)$ is a random vector with $e_i \sim N(0, 1)$ and $\sigma > 0$ is a constant. Then the exact solution is
(49) $\hat x(t) = t^3$.

We take $L : D \subseteq L^2(0,1) \to L^2(0,1)$ as
(50) $Lx = \displaystyle\sum_{k=1}^{\infty} k\langle x, e_k\rangle e_k$, with $e_k(t) = \sqrt{2}\sin(k\pi t)$,
and
(51) $x_0(t) = t^3 + \dfrac{t}{15}$
as our initial guess, so that the function $x_0 - \hat x$ satisfies the source condition $\|x_0 - \hat x\|_t \le E$ for $t \in [0, 1/2)$ (see [20, Proposition 5.3]). Thus, we expect an accuracy of order at least $O(\delta^{1/5})$.

As in , we use the $(n, n)$ matrix
(52) $B := B_2^{1/2}$, with $B_2 = \dfrac{(n+1)^2}{\pi^2}\begin{pmatrix} 2 & -1 & & \\ -1 & 2 & \ddots & \\ & \ddots & \ddots & -1 \\ & & -1 & 2 \end{pmatrix}$,
as a discrete approximation of the first-order differential operator $L$ in (50).
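Under the reading $B_2 = ((n+1)^2/\pi^2)\,\mathrm{tridiag}(-1, 2, -1)$ (an assumption consistent with the eigenvalues $\approx k$ required to mimic $L$ in (50)), the matrix and its square root can be formed as:

```python
import numpy as np

def make_B(n):
    """B2 = ((n+1)^2/pi^2) * tridiag(-1, 2, -1): scaled discrete Laplacian
    whose eigenvalues approximate k^2, so B = B2^{1/2} has eigenvalues ≈ k,
    mimicking the first-order operator L in (50)."""
    B2 = ((n + 1) ** 2 / np.pi ** 2) * (
        2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1))
    w, V = np.linalg.eigh(B2)          # B2 is symmetric positive definite
    B = (V * np.sqrt(w)) @ V.T         # spectral square root B2^{1/2}
    return B2, B
```

With `Bs = B @ B` (i.e., $s = 2$, as chosen below), this matches the setting of the computations reported in Tables 1 and 2.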

We choose $\alpha_0 = 0.0171$, $\beta = 1.19$, $k_0 = 1$, $s = 2$, and $q = 0.9$. The results of the computation are presented in Tables 1 and 2. The plots of the exact and approximate solutions obtained with $\delta = 1.8 \times 10^{-5}$ are given in Figures 1 and 2.

The last column of Tables 1 and 2 shows that the error $\|x_{n_k,\alpha_k,s}^{\delta} - \hat x\|$ is of order $O(\delta^{1/5})$.

Table 1: Iterations and corresponding error estimates with $\delta = 1.8 \times 10^{-5}$, $\sigma = 1$.

n       k    n_k    α_k       ‖x_{n_k,α_k,s}^δ − x̂‖    ‖x_{n_k,α_k,s}^δ − x̂‖/δ^{1/5}
8       7    21     1.0606    0.9249                   2.9244
16      2    9      0.2857    1.0033                   3.1724
32      2    9      0.1431    1.0417                   3.2940
64      2    9      0.1429    1.0608                   3.3543
128     2    9      0.1428    1.0704                   3.3847
256     2    9      0.1428    1.0754                   3.4005
512     2    9      0.1428    1.0784                   3.4098
1024    2    9      0.1428    1.0807                   3.4172

Table 2: Iterations and corresponding error estimates with $\delta = 1.8 \times 10^{-5}$, $\sigma = 0.1$.

n       k    n_k    α_k       ‖x_{n_k,α_k,s}^δ − x̂‖    ‖x_{n_k,α_k,s}^δ − x̂‖/δ^{1/5}
8       7    21     1.0605    0.9249                   2.9246
16      2    9      0.2856    1.0033                   3.1726
32      2    9      0.1431    1.0417                   3.2942
64      2    9      0.1429    1.0608                   3.3546
128     2    9      0.1428    1.0704                   3.3850
256     2    9      0.1428    1.0754                   3.4008
512     2    9      0.1428    1.0784                   3.4101
1024    2    9      0.1428    1.0807                   3.4175

Figure 1: Curves of the exact and approximate solutions for n = 8, 16, 32, 64.

Figure 2: Curves of the exact and approximate solutions for n = 128, 256, 512, 1024.

5. Conclusion

In this paper, we presented an iterative regularization method for obtaining an approximate solution of a nonlinear ill-posed operator equation $F(x) = y$ in the Hilbert scale setting. Here $F : D(F) \subseteq X \to Y$ is a nonlinear operator, and we assume that the available data are $y^\delta$ in place of the exact data $y$. The convergence analysis was based on a center-type Lipschitz condition. We considered a Hilbert scale $(X_t)_{t\in\mathbb{R}}$ generated by $B$, where $B : D(B) \subseteq X \to X$ is a linear, unbounded, self-adjoint, densely defined, and strictly positive operator on $X$. For choosing the regularization parameter $\alpha$, the adaptive scheme considered by Pereverzev and Schock in  was used. Finally, a numerical example was presented in support of the method, which is found to be efficient.

Conflict of Interests

The authors declare that there is no conflict of interests regarding the publication of this paper.

Acknowledgment

Ms. Monnanda Erappa Shobha thanks NBHM, DAE, Government of India, for the financial support.

References

[1] I. K. Argyros and S. Hilout, "A convergence analysis for directional two-step Newton methods," Numerical Algorithms, vol. 55, no. 4, pp. 503–528, 2010.
[2] I. K. Argyros and S. Hilout, "Weaker conditions for the convergence of Newton's method," Journal of Complexity, vol. 28, no. 3, pp. 364–387, 2012.
[3] I. K. Argyros, Y. J. Cho, and S. Hilout, Numerical Methods for Equations and Its Applications, CRC Press, Taylor and Francis, New York, NY, USA, 2012.
[4] A. B. Bakushinsky and M. Y. Kokurin, Iterative Methods for Approximate Solution of Inverse Problems, Springer, Dordrecht, The Netherlands, 2004.
[5] H. W. Engl, K. Kunisch, and A. Neubauer, Regularization of Inverse Problems, Kluwer Academic Publishers, Dordrecht, The Netherlands, 1996.
[6] H. W. Engl, "Regularization methods for the stable solution of inverse problems," Surveys on Mathematics for Industry, vol. 3, no. 2, pp. 71–143, 1993.
[7] H. W. Engl, K. Kunisch, and A. Neubauer, "Convergence rates for Tikhonov regularisation of nonlinear ill-posed problems," Inverse Problems, vol. 5, no. 4, pp. 523–540, 1989.
[8] S. George, "Newton-type iteration for Tikhonov regularization of nonlinear ill-posed problems," Journal of Mathematics, vol. 2013, Article ID 439316, 9 pages, 2013.
[9] M. Hanke, "A regularizing Levenberg-Marquardt scheme, with applications to inverse groundwater filtration problems," Inverse Problems, vol. 13, no. 1, pp. 79–95, 1997.
[10] B. Kaltenbacher, "A note on logarithmic convergence rates for nonlinear Tikhonov regularization," Journal of Inverse and Ill-Posed Problems, vol. 16, no. 1, pp. 79–88, 2008.
[11] B. Kaltenbacher, A. Neubauer, and O. Scherzer, Iterative Regularization Methods for Nonlinear Ill-Posed Problems, de Gruyter, Berlin, Germany, 2008.
[12] C. T. Kelley, Iterative Methods for Linear and Nonlinear Equations, SIAM, Philadelphia, Pa, USA, 1995.
[13] Q. Jin, "On a regularized Levenberg-Marquardt method for solving nonlinear inverse problems," Numerische Mathematik, vol. 115, no. 2, pp. 229–259, 2010.
[14] U. Tautenhahn, "On the method of Lavrentiev regularization for nonlinear ill-posed problems," Inverse Problems, vol. 18, no. 1, pp. 191–207, 2002.
[15] V. Vasin, "Irregular nonlinear operator equations: Tikhonov's regularization and iterative approximation," Journal of Inverse and Ill-Posed Problems, vol. 21, no. 1, pp. 109–123, 2013.
[16] V. Vasin and S. George, "Expanding the applicability of Tikhonov's regularization and iterative approximation for ill-posed problems," Journal of Inverse and Ill-Posed Problems, 2013.
[17] S. Pereverzev and E. Schock, "On the adaptive selection of the parameter in regularization of ill-posed problems," SIAM Journal on Numerical Analysis, vol. 43, no. 5, pp. 2060–2076, 2005.
[18] H. Egger and A. Neubauer, "Preconditioning Landweber iteration in Hilbert scales," Numerische Mathematik, vol. 101, no. 4, pp. 643–662, 2005.
[19] Q. Jin, "Error estimates of some Newton-type methods for solving nonlinear inverse problems in Hilbert scales," Inverse Problems, vol. 16, no. 1, pp. 187–197, 2000.
[20] S. Lu, S. V. Pereverzev, Y. Shao, and U. Tautenhahn, "On the generalized discrepancy principle for Tikhonov regularization in Hilbert scales," Journal of Integral Equations and Applications, vol. 22, no. 3, pp. 483–517, 2010.
[21] P. Mahale and M. T. Nair, "A simplified generalized Gauss-Newton method for nonlinear ill-posed problems," Mathematics of Computation, vol. 78, no. 265, pp. 171–184, 2009.
[22] P. Mathe and U. Tautenhahn, "Error bounds for regularization methods in Hilbert scales by using operator monotonicity," Far East Journal of Mathematical Sciences, vol. 24, no. 1, pp. 1–21, 2007.
[23] F. Natterer, "Error bounds for Tikhonov regularization in Hilbert scales," Applicable Analysis, vol. 18, no. 1-2, pp. 29–37, 1984.
[24] A. Neubauer, "On Landweber iteration for nonlinear ill-posed problems in Hilbert scales," Numerische Mathematik, vol. 85, no. 2, pp. 309–328, 2000.
[25] Q. Jin and U. Tautenhahn, "Inexact Newton regularization methods in Hilbert scales," Numerische Mathematik, vol. 117, no. 3, pp. 555–579, 2011.
[26] Q. Jin and U. Tautenhahn, "Implicit iteration methods in Hilbert scales under general smoothness conditions," Inverse Problems, vol. 27, no. 4, Article ID 045012, 2011.
[27] S. George and M. T. Nair, "Error bounds and parameter choice strategies for simplified regularization in Hilbert scales," Integral Equations and Operator Theory, vol. 29, no. 2, pp. 231–242, 1997.
[28] U. Tautenhahn, "On a general regularization scheme for nonlinear ill-posed problems: II. Regularization in Hilbert scales," Inverse Problems, vol. 14, no. 6, pp. 1607–1616, 1998.
[29] U. Tautenhahn, "Error estimates for regularization methods in Hilbert scales," SIAM Journal on Numerical Analysis, vol. 33, no. 6, pp. 2120–2130, 1996.
[30] Q. Jin, "On a class of frozen regularized Gauss-Newton methods for nonlinear inverse problems," Mathematics of Computation, vol. 79, no. 272, pp. 2191–2211, 2010.
[31] S. George, "On convergence of regularized modified Newton's method for nonlinear ill-posed problems," Journal of Inverse and Ill-Posed Problems, vol. 18, no. 2, pp. 133–146, 2010.