Recently, Vasin and George (2013) considered an iterative scheme for approximately solving an ill-posed operator equation F(x)=y. In order to improve the error estimate available in that work, in the present paper we extend the iterative method of Vasin and George (2013) to the setting of Hilbert scales. The error estimates, obtained under a general source condition on x0-x^ (x0 is the initial guess and x^ is the actual solution) using the adaptive scheme proposed by Pereverzev and Schock (2005), are of optimal order. The algorithm is applied to the numerical solution of an integral equation in the Numerical Example section.
1. Introduction
In this study, we are interested in approximately solving a nonlinear ill-posed operator equation:
(1)F(x)=y,
where F:D(F)⊆X→Y is a nonlinear operator. Here D(F) is the domain of F, and 〈·,·〉 is the inner product with corresponding norm ∥·∥ on the Hilbert spaces X and Y. Throughout this paper, Br(x0) denotes the ball of radius r centered at x0, F′(x) denotes the Fréchet derivative of F at x∈D(F), and F′*(·) denotes the adjoint of F′(·). We assume that the available noisy data yδ∈Y satisfy
(2)∥y-yδ∥≤δ,
where δ is the noise level. Equation (1) is, in general, ill-posed, in the sense that a unique solution that depends continuously on the data does not exist. Since the available data is yδ, one has to solve (approximately) the perturbed equation
(3)F(x)=yδ
instead of (1).
To solve the ill-posed operator equations, various regularization methods are used, for example, Tikhonov regularization, Landweber iterative regularization, Levenberg-Marquardt method, Lavrentiev regularization, Newton type iterative method, and so forth (see, e.g., [1–16]).
In [16], Vasin and George considered the iteration (which is a modified form of the method considered in [8])
(4) xn+1,αδ := xn,αδ - Rβ-1[A0*(F(xn,αδ)-yδ)+α(xn,αδ-x0)],
where Rβ := A0*A0+βI, A0 = F′(x0), x0,αδ = x0 is the initial guess, α>0 is the regularization parameter, and β>α. Iteration (4) was used to obtain an approximation for the zero xαδ of F′(x0)*(F(x)-yδ)+α(x-x0)=0, and it was proved that xαδ is an approximate solution of (1). The regularization parameter α in [16] was chosen appropriately from the finite set DN:={αi: 0<α0<α1<⋯<αN}, depending on the inexact data yδ and the noise level δ satisfying (2), using the adaptive parameter selection procedure suggested by Pereverzev and Schock [17].
In order to improve the rate of convergence many authors have considered the Hilbert scale variant of the regularization methods for solving ill-posed operator equations, for example, [18–26]. In this study, we present the Hilbert scale variant of (4).
We consider the Hilbert scale {Xt}t∈R (see [14, 18, 23, 26–29]) generated by a strictly positive self-adjoint operator B:D(B)⊂X→X, with the domain D(B) dense in X satisfying ∥Bx∥≥∥x∥, for all x∈D(B). Recall [19, 28] that the space Xt is the completion of D:=⋂k=0∞D(Bk) with respect to the norm ∥x∥t, induced by the inner product
(5)〈u,v〉t=〈Bt/2u,Bt/2v〉,u,v∈D(B).
In this paper, we consider the sequence {xn,α,sδ} defined iteratively by
(6)xn+1,α,sδ=xn,α,sδ-Rβ-1[A0*(F(xn,α,sδ)-yδ)+αBs(xn,α,sδ-x0)],
where Rβ-1 := (A0*A0+βBs)-1, x0,α,sδ := x0 is the initial guess, and β>α, for obtaining an approximation of the zero xα,sδ of (cf. [21, 30])
(7)A0*(F(x)-yδ)+αBs(x-x0)=0.
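In a finite-dimensional discretization, each step of (6) is a single linear solve with the fixed matrix A0*A0+βBs. The following is a minimal NumPy sketch of one such step; the matrices, the linear test operator, and all parameter values here are stand-ins chosen for illustration, not the discretization of the Numerical Example section. For a linear F the iterates converge to the exact zero of (7):

```python
import numpy as np

def step(x, x0, A0, Bs, F, ydelta, alpha, beta):
    """One step of iteration (6):
    x_{n+1} = x_n - (A0^T A0 + beta Bs)^{-1} [A0^T (F(x_n) - ydelta) + alpha Bs (x_n - x0)]."""
    R = A0.T @ A0 + beta * Bs          # fixed for every n; factor once in practice
    rhs = A0.T @ (F(x) - ydelta) + alpha * Bs @ (x - x0)
    return x - np.linalg.solve(R, rhs)

# Toy illustration with a LINEAR F(x) = A0 x, so the limit solves
# A0^T (A0 x - ydelta) + alpha Bs (x - x0) = 0 exactly.
rng = np.random.default_rng(0)
m = 20
A0 = rng.standard_normal((m, m)) / m
Bs = np.diag(1.0 + np.arange(m, dtype=float))   # s.p.d. stand-in for B^s
ydelta = A0 @ rng.standard_normal(m)
x0 = np.zeros(m)
alpha, beta = 0.1, 1.0                           # beta > alpha, as required

x = x0.copy()
for _ in range(500):
    x = step(x, x0, A0, Bs, lambda v: A0 @ v, ydelta, alpha, beta)

residual = A0.T @ (A0 @ x - ydelta) + alpha * Bs @ (x - x0)
print(np.linalg.norm(residual))
```

Since the system matrix does not change with n (and does not depend on α either), it can be factored once and reused across all iterations and all parameters on the grid.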
As in [16], we use the following center-type Lipschitz condition for the convergence of the iterative scheme.
Assumption 1.
Let x0∈X be fixed. There exists a constant k0 such that, for every u∈Br(x0)∪Br(x^)⊆D(F) and v∈X, there exists an element Φ(x0,u,v)∈X satisfying
(8)[F′(x0)-F′(u)]v=F′(x0)Φ(x0,u,v),∥Φ(x0,u,v)∥≤k0∥v∥∥x0-u∥.
The error estimates in this work are obtained using the source condition on x0-x^. In addition to the advantages listed in [16, see page 3], the method considered in this paper gives optimal order for a range of values of smoothness assumptions on x0-x^. The regularization parameter α is chosen from some finite set {α0<α1<α2⋯<αN} using the balancing principle considered by Pereverzev and Schock in [17].
The paper is organized as follows. In Section 2, we give the analysis of the method for regularization of (6) in the setting of Hilbert scales. The error analysis and adaptive scheme of parameter α are given in Section 3. In Section 4, implementation of the method along with a numerical example is presented to validate the efficiency of the proposed method and we conclude the paper in Section 5.
2. The Method
First we will prove that the sequence (xn,α,sδ) defined by (6) converges to the zero xα,sδ of (7) and then we show that xα,sδ is an approximation to the solution x^ of (1).
Let As=A0B-s/2. We make use of the relation
(9) ∥(As*As+αI)-1(As*As)p∥ ≤ αp-1, 0<p≤1,
which follows from the spectral properties of the positive self-adjoint operator As*As, s>0. Usually, for the analysis of regularization methods in Hilbert scales, an assumption of the form (cf. [18, 24])
(10)∥F′(x^)x∥~∥x∥-b,x∈X
on the degree of ill-posedness is used. In this paper instead of (10) we require only a weaker assumption:
(11)d1∥x∥-b≤∥A0x∥≤d2∥x∥-b,x∈D(F),
for some positive reals b, d1, and d2.
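The spectral bound (9) can be checked numerically: for a self-adjoint positive M = As*As, the operator norm of (M+αI)-1Mp equals the supremum of λp/(λ+α) over the spectrum, which is at most αp-1 for 0<p≤1. A quick sanity check on a random symmetric positive definite stand-in (not the operator of the paper):

```python
import numpy as np

rng = np.random.default_rng(1)
G = rng.standard_normal((30, 30))
M = G @ G.T                          # random s.p.d. stand-in for As* As
lam = np.linalg.eigvalsh(M)          # spectrum of the stand-in

# sup over the spectrum of lambda^p / (lambda + alpha) is the operator norm
# of (M + alpha I)^{-1} M^p; bound (9) says it is <= alpha^(p-1)
for alpha in (1e-3, 1e-1, 1.0, 10.0):
    for p in (0.25, 0.5, 0.75, 1.0):
        lhs = np.max(lam**p / (lam + alpha))
        assert lhs <= alpha**(p - 1) + 1e-12
print("bound (9) holds on the sampled spectrum")
```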
Note that (11) is simpler than (10). Now we define f and g by
(12) f(t) = min{d1t,d2t}, g(t) = max{d1t,d2t}, |t|≤1.
The following proposition is used for further analysis.
Proposition 2 (cf. [29, Proposition 2.1]).
For s>0 and |ν|≤1,
(13) f(ν)∥x∥-ν(s+b) ≤ ∥(As*As)ν/2x∥ ≤ g(ν)∥x∥-ν(s+b), x∈X.
Let us define a few parameters essential for the convergence analysis. Let
(14) ψ2(s) := g(s/(s+b))/f(s/(s+b)); ψ2(s)¯ := 1/f(s/(s+b)); en,α,sδ := ∥xn,α,sδ-xn-1,α,sδ∥, for all n≥1; δ0 < (βb/2(s+b)/2k0ψ2(s)¯)(ψ2(s)+α02/2β2); ∥x^-x0∥ ≤ ρ,
with
(15) ρ < -1/k0 + (1/k0ψ2(s))√(ψ2(s)[(α2/2β2+ψ2(s)) - 2k0ψ2(s)¯β-b/2(s+b)δ]), γρ := ψ2(s)¯β-b/2(s+b)δ + ψ2(s)(k0ρ2/2+ρ).
Further let γρ<α02/4k0β2, and
(16) r1 = (α-√(α2-4k0γρβ2))/2k0β, r2 = min{1/ψ2(s)k0, (α+√(α2-4k0γρβ2))/2k0β}.
For r∈(r1,r2), let
(17) q = ψ2(s)(k0r+(β-α)/β).
Then q<1.
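The interplay between ρ, γρ, r, and q can be made concrete with a small script. The values below are purely illustrative (we take ψ2(s)=ψ2(s)¯=1, which corresponds to d1=d2=1 in (11), and noise-free data, so the δ-term vanishes); r1 denotes the smaller and r2 the larger admissible radius, so any r strictly between them yields a contraction factor q<1 together with γρ/(1-q)<r, as used in Theorem 4:

```python
import math

# Illustrative constants only; not the constants of the Numerical Example section.
k0, alpha, beta, delta_term, rho = 1.0, 1.0, 1.19, 0.0, 0.05
psi2 = psi2bar = 1.0                           # corresponds to d1 = d2 = 1 in (11)

gamma_rho = psi2bar * delta_term + psi2 * (k0 * rho**2 / 2 + rho)
assert gamma_rho < alpha**2 / (4 * k0 * beta**2)   # ensures the roots in (16) are real

disc = math.sqrt(alpha**2 - 4 * k0 * gamma_rho * beta**2)
r1 = (alpha - disc) / (2 * k0 * beta)
r2 = min(1 / (psi2 * k0), (alpha + disc) / (2 * k0 * beta))
r = 0.1                                        # any r strictly between r1 and r2
assert r1 < r < r2

q = psi2 * (k0 * r + (beta - alpha) / beta)    # contraction factor (17)
assert q < 1 and gamma_rho / (1 - q) < r       # the regime used in Theorem 4
print(r1, r2, q)
```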
Lemma 3.
Let Proposition 2 hold. Then, for all h∈X, the following hold:
(a) ∥(A0*A0+βBs)-1A0*A0h∥ ≤ ψ2(s)∥h∥;
(b) ∥(A0*A0+βBs)-1Bsh∥ ≤ (ψ2(s)/β)∥h∥.
Proof.
Observe that, by Proposition 2,
(18) ∥(A0*A0+βBs)-1A0*A0h∥ = ∥B-s/2(As*As+βI)-1As*AsBs/2h∥ ≤ (1/f(s/(s+b)))∥(As*As)s/2(s+b)(As*As+βI)-1As*AsBs/2h∥ ≤ (1/f(s/(s+b)))∥(As*As+βI)-1As*As∥∥(As*As)s/2(s+b)Bs/2h∥ ≤ (g(s/(s+b))/f(s/(s+b)))∥Bs/2h∥-s ≤ (g(s/(s+b))/f(s/(s+b)))∥h∥;
∥(A0*A0+βBs)-1Bsh∥ = ∥B-s/2(As*As+βI)-1Bs/2h∥ ≤ (1/f(s/(s+b)))∥(As*As)s/2(s+b)(As*As+βI)-1Bs/2h∥ ≤ (1/f(s/(s+b)))∥(As*As+βI)-1∥∥(As*As)s/2(s+b)Bs/2h∥ ≤ (g(s/(s+b))/f(s/(s+b)))(1/β)∥Bs/2h∥-s ≤ (g(s/(s+b))/f(s/(s+b)))(1/β)∥h∥.
This completes the proof of the lemma.
Theorem 4.
Let en,α,sδ and q be as in (14) and (17), respectively; let xn,α,sδ be as defined in (6) with δ∈(0,δ0]. Then under Assumption 1 and Lemma 3, the following estimates hold for all n≥0:
(a) ∥xn+1,α,sδ-xn,α,sδ∥ ≤ qnγρ;
(b) xn,α,sδ∈Br(x0).
Proof.
If xn,α,sδ∈Br(x0), then by Assumption 1,
(19) xn+1,α,sδ-xn,α,sδ = xn,α,sδ-xn-1,α,sδ - (A0*A0+βBs)-1[A0*(F(xn,α,sδ)-F(xn-1,α,sδ)) + αBs(xn,α,sδ-xn-1,α,sδ)] = (A0*A0+βBs)-1[A0*A0(xn,α,sδ-xn-1,α,sδ) - A0*(F(xn,α,sδ)-F(xn-1,α,sδ)) + (β-α)Bs(xn,α,sδ-xn-1,α,sδ)] = (A0*A0+βBs)-1A0*∫01[A0-F′(xn-1,α,sδ+t(xn,α,sδ-xn-1,α,sδ))](xn,α,sδ-xn-1,α,sδ)dt + (A0*A0+βBs)-1(β-α)Bs(xn,α,sδ-xn-1,α,sδ) = Γ1+Γ2,
where
(20) Γ1 := (A0*A0+βBs)-1A0*∫01[A0-F′(xn-1,α,sδ+t(xn,α,sδ-xn-1,α,sδ))](xn,α,sδ-xn-1,α,sδ)dt
and Γ2:=(A0*A0+βBs)-1(β-α)Bs(xn,α,sδ-xn-1,α,sδ), and hence by Assumption 1 and Lemma 3(a), we have
(21) ∥Γ1∥ = ∥(A0*A0+βBs)-1A0*A0∫01Φ(x0, xn-1,α,sδ+t(xn,α,sδ-xn-1,α,sδ), xn,α,sδ-xn-1,α,sδ)dt∥ ≤ ψ2(s)k0r∥xn,α,sδ-xn-1,α,sδ∥
and by Lemma 3(b),
(22)∥Γ2∥≤β-αβψ2(s)∥xn,α,sδ-xn-1,α,sδ∥.
Hence, by (19), (21), and (22), we have
(23)∥xn+1,α,sδ-xn,α,sδ∥≤ψ2(s)(k0r+β-αβ)∥xn,α,sδ-xn-1,α,sδ∥=q∥xn,α,sδ-xn-1,α,sδ∥≤qn∥x1,α,sδ-x0,α,sδ∥=qne1,α,sδ.
Next we show that e1,α,sδ ≤ γρ, using Assumption 1 and Lemma 3. Observe that
(24) e1,α,sδ = ∥x1,α,sδ-x0,α,sδ∥ = ∥(A0*A0+βBs)-1A0*(F(x0)-yδ)∥ = ∥B-s/2(As*As+βI)-1As*(F(x0)-yδ)∥ = ∥B-s/2(As*As+βI)-1As*[(yδ-y) + (F(x^)-F(x0)-A0(x^-x0)) + A0(x^-x0)]∥ ≤ ∥B-s/2(As*As+βI)-1As*(yδ-y)∥ + ∥B-s/2(As*As+βI)-1As*∫01(F′(x0+t(x^-x0))-A0)(x^-x0)dt∥ + ∥B-s/2(As*As+βI)-1As*A0(x^-x0)∥ ≤ (1/f(s/(s+b)))β-b/2(s+b)δ + (g(s/(s+b))/f(s/(s+b)))(k0ρ2/2) + (g(s/(s+b))/f(s/(s+b)))ρ ≤ ψ2(s)¯β-b/2(s+b)δ + ψ2(s)(k0ρ2/2+ρ) := γρ < r.
Hence, (a) follows from (23) and (24).
To prove (b), note that ∥x1,α,sδ-x0,α,sδ∥≤γρ<r. Suppose xm,α,sδ∈Br(x0) for some m; then
(25) ∥xm+1,α,sδ-x0∥ ≤ ∥xm+1,α,sδ-xm,α,sδ∥+⋯+∥x1,α,sδ-x0∥ ≤ (qm+qm-1+⋯+1)e1,α,sδ ≤ (1/(1-q))e1,α,sδ ≤ γρ/(1-q) < r.
Thus, by induction xn,α,sδ∈Br(x0) for all n≥0. This proves (b).
Next we go to the main result of this section.
Theorem 5.
Let xn,α,sδ be as in (6), δ∈(0,δ0], and assumptions of Theorem 4 hold. Then (xn,α,sδ) is a Cauchy sequence in Br(x0) and converges, say, to xα,sδ∈Br(x0)¯. Further A0*(F(xα,sδ)-yδ)+αBs(xα,sδ-x0)=0 and
(26)∥xn,α,sδ-xα,sδ∥≤Cqn,
where C=γρ/(1-q).
Proof.
Using relation (a) of Theorem 4, we obtain
(27) ∥xn+m,α,sδ-xn,α,sδ∥ ≤ ∑i=0m-1∥xn+i+1,α,sδ-xn+i,α,sδ∥ ≤ ∑i=0m-1 qn+ie1,α,sδ = qn(1+q+⋯+qm-1)e1,α,sδ ≤ qn(1/(1-q))γρ ≤ Cqn.
Thus, xn,α,sδ is a Cauchy sequence in Br(x0) and hence it converges, say, to xα,sδ∈Br(x0)¯.
Now letting n→∞ in (6), we obtain
(28)A0*(F(xα,sδ)-yδ)+αBs(xα,sδ-x0)=0.
This completes the proof.
The following assumption on the source function and the source condition is required to obtain the error estimates.
Assumption 6.
There exists a continuous, strictly monotonically increasing function φ:(0,∥As∥2]→(0,∞) such that the following conditions hold:
limλ→0φ(λ)=0,
supλ∈(0,∥As∥2] (αφ(λ)/(λ+α)) ≤ φ(α), for all α>0, and
there exists w∈X with ∥w∥≤E2, such that
(29)(As*As)s/2(s+b)Bs/2(x0-x^)=φ(As*As)w.
Remark 7.
If x0-x^∈Xt with ∥x0-x^∥t≤E1, for some positive constant E1 and 0≤t≤2s+b, then we have (As*As)s/2(s+b)Bs/2(x0-x^)=φ(As*As)w, where φ(λ)=λt/2(s+b), w=(As*As)(s-t)/2(s+b)Bs/2(x0-x^), and ∥w∥≤g((s-t)/(s+b))E1:=E2.
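For power-type source functions φ(λ)=λc with c∈(0,1] (Remark 7 is of this form), condition (2) of Assumption 6 follows from the weighted AM-GM inequality λcα1-c ≤ λ+α, and it can also be sampled numerically; the grid and the values of c and α below are arbitrary illustration choices:

```python
import numpy as np

lam = np.logspace(-8, 2, 2001)       # spectral samples standing in for (0, ||As||^2]
for c in (0.2, 0.5, 0.9, 1.0):       # c plays the role of the exponent in phi
    for alpha in (1e-4, 1e-2, 1.0):
        # sup_lambda alpha*phi(lambda)/(lambda+alpha) never exceeds phi(alpha)
        assert np.max(alpha * lam**c / (lam + alpha)) <= alpha**c
print("Assumption 6, condition (2), holds for phi(lambda) = lambda^c on the grid")
```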
Theorem 8.
Let xα,sδ be the solution of (7) and suppose Assumptions 1 and 6 hold; then
(30) ∥xα,sδ-x^∥ ≤ ψ2(s)¯(φ(α)+α-b/2(s+b)δ)/(1-ψ2(s)k0r).
Proof.
Let M=∫01F′(x^+t(xα,sδ-x^))dt. Then
(31)F(xα,sδ)-F(x^)=M(xα,sδ-x^).
Since A0*(F(xα,sδ)-yδ)+αBs(xα,sδ-x0)=0, one can see that
(32) (A0*A0+αBs)(xα,sδ-x^) = (A0*A0+αBs)(xα,sδ-x^) - A0*(F(xα,sδ)-yδ) - αBs(xα,sδ-x0) = A0*[A0-M](xα,sδ-x^) + A0*(yδ-y) + αBs(x0-x^), so that xα,sδ-x^ = (A0*A0+αBs)-1[A0*(A0-M)(xα,sδ-x^) + A0*(yδ-y) + αBs(x0-x^)] = s1+s2+s3,
where
(33)s1:=(A0*A0+αBs)-1A0*(A0-M)(xα,sδ-x^),s2:=(A0*A0+αBs)-1A0*(yδ-y),s3:=(A0*A0+αBs)-1αBs(x0-x^).
Note that by Assumption 1 and Lemma 3,
(34)∥s1∥≤ψ2(s)k0r∥xα,sδ-x^∥,
by Proposition 2,
(35)∥s2∥≤ψ2(s)¯α-b/2(s+b)δ,
and by Assumption 6,
(36)∥s3∥≤ψ2(s)¯φ(α).
Hence, by (34)–(36) and (32), we have
(37) ∥xα,sδ-x^∥ ≤ ψ2(s)¯(φ(α)+α-b/2(s+b)δ)/(1-ψ2(s)k0r).
This completes the proof of the theorem.
2.1. Error Bounds under Source Conditions
Theorem 9.
Let xn,α,sδ be as in (6). If assumptions in Theorems 5 and 8 hold, then
(38) ∥x^-xn,α,sδ∥ ≤ Cqn + ψ2(s)¯(φ(α)+α-b/2(s+b)δ)/(1-ψ2(s)k0r),
where C is as in Theorem 5. Further if nδ:=min{n:qn≤α-b/2(s+b)δ}, then
(39)∥x^-xn,α,sδ∥≤Cs(φ(α)+α-b/2(s+b)δ),
where Cs=C+(ψ2(s)¯/(1-ψ2(s)k0r)).
2.2. A Priori Choice of the Parameter
The error estimate φ(α)+α-b/2(s+b)δ in Theorem 9 attains minimum for the choice α:=α(δ,s,b) which satisfies φ(α)=α-b/2(s+b)δ. Clearly α(δ,s,b)=φ-1(ψs,b-1(δ)), where
(40)ψs,b(λ)=λ[φ-1(λ)]b/2(s+b),0<λ≤∥As∥2.
Thus, we have the following theorem.
Theorem 10.
Suppose that all assumptions of Theorems 5 and 8 are fulfilled. For δ>0, let α(δ,s,b)=φ-1(ψs,b-1(δ)), and let nδ be as in Theorem 9. Then
(41)∥x^-xn,α,sδ∥≤O(ψs,b-1(δ)).
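For a power-type source function φ(λ)=λc, the a priori choice of this subsection is available in closed form, which makes Theorem 10 easy to illustrate. The sketch below (the values of c, s, b, and δ are hypothetical) solves the balance equation φ(α)=α-b/2(s+b)δ directly:

```python
def a_priori_alpha(delta, c, s, b):
    """Solve phi(alpha) = alpha^{-b/2(s+b)} delta in closed form for phi(t) = t^c:
    alpha^{c + b/2(s+b)} = delta, i.e. alpha = delta^{2(s+b)/(2c(s+b)+b)}."""
    return delta ** (2 * (s + b) / (2 * c * (s + b) + b))

delta, c, s, b = 1e-4, 0.25, 2.0, 1.0        # illustrative values only
alpha = a_priori_alpha(delta, c, s, b)

# verify the balance phi(alpha) = alpha^{-b/2(s+b)} delta
phi = alpha**c
assert abs(phi - alpha ** (-b / (2 * (s + b))) * delta) < 1e-12 * phi
# phi(alpha) = psi_{s,b}^{-1}(delta) is then the error bound of Theorem 10
print(alpha, phi)
```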
2.3. Adaptive Scheme and Stopping Rule
In this subsection, we consider the adaptive scheme suggested by Pereverzev and Schock in [17], suitably modified for choosing the parameter α; the scheme does not involve the regularization method in an explicit manner.
Let i∈{0,1,2,…,N} and αi=μiα0, where μ=η2(1+s/b), η>1, and α0=δ2(1+s/b). Let ni:=min{n:qn≤αi-b/2(s+b)δ}, and let xni,αi,sδ be as defined in (6) with α=αi and n=ni. Then, from Theorem 9, we have
(42)∥x^-xni,αi,sδ∥≤Cs(φ(αi)+αi-b/2(s+b)δ).
Further, let
(43) l := max{i: φ(αi) ≤ αi-b/2(s+b)δ} < N,
(44) k := max{i: ∥xni,αi,sδ-xnj,αj,sδ∥ ≤ 4Csαj-b/2(s+b)δ, j=0,1,2,…,i-1},
where Cs is as in Theorem 9. The proof of the following theorem is analogous to the proof of Theorem 4.4 in [31], so we omit the details.
Theorem 11.
Let xn,α,sδ be as in (6) with α=αi and δ∈(0,δ0], and assumptions in Theorem 9 hold. Let l and k be as defined in (43) and (44), respectively. Then l≤k and
(45)∥x^-xnk,αk,sδ∥≤6Csη(ψs,b-1(δ)).
3. Implementation of the Method
Finally, the balancing algorithm associated with the choice of the parameter specified in Theorem 11 involves the following steps:
(1) choose α0>0 such that δ0<(βb/2(s+b)/2k0ψ2(s)¯)(ψ2(s)+(α02/2β2)) and η>1;
(2) choose N big enough but not too large and αi:=μiα0, i=0,1,2,…,N, where μ=η2(1+s/b);
(3) if ∥xni,αi,sδ-xnj,αj,sδ∥>4Csαj-b/2(s+b)δ for some j<i, then take k=i-1;
(4) else set i=i+1 and return to Step (3).
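The selection loop in the steps above can be sketched as follows. The routine only compares precomputed approximations, so the vectors fed to it below are synthetic stand-ins for the iterates xni,αi,sδ of (6), and all constants are illustrative:

```python
import numpy as np

def balancing_index(approx, alphas, delta, s, b, Cs):
    """Scan i = 1, 2, ... and stop with k = i - 1 at the first i such that
    ||x_i - x_j|| > 4 Cs alpha_j^{-b/2(s+b)} delta for some j < i."""
    for i in range(1, len(alphas)):
        for j in range(i):
            tol = 4 * Cs * alphas[j] ** (-b / (2 * (s + b))) * delta
            if np.linalg.norm(approx[i] - approx[j]) > tol:
                return i - 1
    return len(alphas) - 1

# Synthetic illustration: rows of `approx` are made-up stand-ins for the
# regularized approximations, drifting slowly as alpha_i grows.
delta, s, b, Cs, eta = 1e-2, 2.0, 1.0, 1.0, 1.5
mu = eta ** (2 * (1 + s / b))                 # grid ratio of the adaptive scheme
alpha0 = delta ** (2 * (1 + s / b))
alphas = alpha0 * mu ** np.arange(10)
approx = np.array([[0.5 * i, 0.0] for i in range(10)])
k = balancing_index(approx, alphas, delta, s, b, Cs)
print(k)
```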
4. Numerical Example
Example 1.
In this example, we consider a nonlinear integral operator, F:D(F)⊂L2(0,1)→L2(0,1), defined by
(46) F(x)(t) := ∫01 k(t,s)x(s)3 ds
with
(47)k(t,s)={(1-t)s,0≤s≤t≤1(1-s)t,0≤t≤s≤1.
The Fréchet derivative of F is given by
(48)F′(u)w=3∫01k(t,s)(u(s))2w(s)ds.
In our computation, we take y(t)=(t-t11)/110 and yδ=y+σ(∥y∥/∥e∥)e, where e=(ei) is a random vector with ei∼N(0,1) and σ>0 is a constant [26]. Then the exact solution is
(49) x^(t)=t3.
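The forward operator (46), its Fréchet derivative (48), and this exact solution can be reproduced with a simple quadrature discretization; the midpoint rule and grid size below are our choices, not taken from the paper. Note that F(x^) matches y(t)=(t-t11)/110 up to quadrature error, which confirms the pair (x^,y):

```python
import numpy as np

n = 200
t = (np.arange(n) + 0.5) / n            # midpoint grid on (0, 1)
h = 1.0 / n
# Green's-function kernel (47): k(t,s) = (1-t)s for s <= t, (1-s)t for t <= s
S, T = np.meshgrid(t, t)                # S varies along columns (s), T along rows (t)
K = np.where(S <= T, (1 - T) * S, (1 - S) * T)

def F(x):
    """Midpoint-rule discretization of (46): F(x)(t) = int_0^1 k(t,s) x(s)^3 ds."""
    return h * K @ x**3

def Fprime(u):
    """Matrix of the Frechet derivative (48): w -> 3 int_0^1 k(t,s) u(s)^2 w(s) ds."""
    return 3 * h * K * u**2             # broadcasting scales column s by u(s)^2

x_hat = t**3
y = (t - t**11) / 110
print(np.max(np.abs(F(x_hat) - y)))     # quadrature error, O(h^2)
```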
We take L:D⊂L2(0,1)→L2(0,1) as
(50) Lx = ∑k=1∞ k〈x,ek〉ek, with ek(t) = √2 sin(kπt),
(51) x0(t) = t3+t15
as our initial guess, so that the function x0-x^ satisfies the source condition ∥x0-x^∥t≤E,t∈[0,1/2) (see [20, Proposition 5.3]). Thus, we expect to have an accuracy of order at least O(δ1/5).
As in [26], we use the (n,n) matrix
(52) B := B21/2 with B2 = (n+1)2π2 T, where T is the (n,n) tridiagonal matrix with entries 2 on the main diagonal and -1 on the first sub- and superdiagonals,
as a discrete approximation of the first-order differential operator (50).
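The matrix B2 and its square root B can be assembled directly; the sketch below uses a symmetric eigendecomposition for the square root (scipy.linalg.sqrtm would serve as well) and confirms that the smallest eigenvalue of B exceeds 1, so the discrete generator satisfies ∥Bx∥≥∥x∥ as required in Section 1:

```python
import numpy as np

n = 50
T = np.diag(2.0 * np.ones(n)) + np.diag(-np.ones(n - 1), 1) + np.diag(-np.ones(n - 1), -1)
B2 = (n + 1) ** 2 * np.pi ** 2 * T

# B = B2^{1/2} via the symmetric eigendecomposition
lam, Q = np.linalg.eigh(B2)
B = Q @ np.diag(np.sqrt(lam)) @ Q.T

# smallest eigenvalue of B is (n+1) pi sqrt(2 - 2 cos(pi/(n+1))) ~ pi^2 > 1,
# so the discrete operator is strictly positive with ||Bx|| >= ||x||
print(np.sqrt(lam.min()))
```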
We choose α0=0.0171, β=1.19, k0=1,s=2, and q=0.9. The results of the computation are presented in Tables 1 and 2. The plots of the exact and the approximate solution obtained with δ=1.8E-5 are given in Figures 1 and 2.
The last column of Tables 1 and 2 shows that the error ∥xαk,sδ-x^∥ is of O(δ1/5).
Table 1: Iterations and corresponding error estimates with δ=1.8E-5, σ=1.

n      k    nk    α        ∥xnk,α,sδ-x^∥    ∥xnk,α,sδ-x^∥/δ1/5
8      7    21    1.0606   0.9249           2.9244
16     2    9     0.2857   1.0033           3.1724
32     2    9     0.1431   1.0417           3.2940
64     2    9     0.1429   1.0608           3.3543
128    2    9     0.1428   1.0704           3.3847
256    2    9     0.1428   1.0754           3.4005
512    2    9     0.1428   1.0784           3.4098
1024   2    9     0.1428   1.0807           3.4172
Table 2: Iterations and corresponding error estimates with δ=1.8E-5, σ=0.1.

n      k    nk    α        ∥xnk,α,sδ-x^∥    ∥xnk,α,sδ-x^∥/δ1/5
8      7    21    1.0605   0.9249           2.9246
16     2    9     0.2856   1.0033           3.1726
32     2    9     0.1431   1.0417           3.2942
64     2    9     0.1429   1.0608           3.3546
128    2    9     0.1428   1.0704           3.3850
256    2    9     0.1428   1.0754           3.4008
512    2    9     0.1428   1.0784           3.4101
1024   2    9     0.1428   1.0807           3.4175
Figure 1: Curves of the exact and approximate solutions for n={8,16,32,64}.
Figure 2: Curves of the exact and approximate solutions for n={128,256,512,1024}.
5. Conclusion
In this paper, we present an iterative regularization method for obtaining an approximate solution of a nonlinear ill-posed operator equation F(x)=y in the Hilbert scale setting. Here F:D(F)⊂X→Y is a nonlinear operator and we assume that the available data is yδ in place of exact data y. The convergence analysis was based on the center-type Lipschitz condition. We considered a Hilbert scale, (Xt)t∈R, generated by B, where B:D(B)⊂X→X is a linear, unbounded, self-adjoint, densely defined, and strictly positive operator on X. For choosing the regularization parameter α, the adaptive scheme considered by Pereverzev and Schock in [17] was used. Finally, a numerical example is presented in support of our method which is found to be efficient.
Conflict of Interests
The authors declare that there is no conflict of interests regarding the publication of this paper.
Acknowledgment
Ms. Monnanda Erappa Shobha thanks NBHM, DAE, Government of India, for the financial support.
References
[1] Argyros, I. K., Hilout, S.: A convergence analysis for directional two-step Newton methods.
[2] Argyros, I. K., Hilout, S.: Weaker conditions for the convergence of Newton's method.
[3] Argyros, I. K., Cho, Y. J., Hilout, S.
[4] Bakushinsky, A. B., Kokurin, M. Y.
[5] Engl, H. W., Kunisch, K., Neubauer, A.
[6] Engl, H. W.: Regularization methods for the stable solution of inverse problems.
[7] Engl, H. W., Kunisch, K., Neubauer, A.: Convergence rates for Tikhonov regularisation of nonlinear ill-posed problems.
[8] George, S.: Newton-type iteration for Tikhonov regularization of nonlinear ill-posed problems.
[9] Hanke, M.: A regularizing Levenberg-Marquardt scheme, with applications to inverse groundwater filtration problems.
[10] Kaltenbacher, B.: A note on logarithmic convergence rates for nonlinear Tikhonov regularization.
[11] Kaltenbacher, B., Neubauer, A., Scherzer, O.
[12] Kelley, C. T.
[13] Jin, Q.: On a regularized Levenberg-Marquardt method for solving nonlinear inverse problems.
[14] Tautenhahn, U.: On the method of Lavrentiev regularization for nonlinear ill-posed problems.
[15] Vasin, V.: Irregular nonlinear operator equations: Tikhonov's regularization and iterative approximation.
[16] Vasin, V., George, S.: Expanding the applicability of Tikhonov's regularization and iterative approximation for ill-posed problems.
[17] Pereverzev, S., Schock, E.: On the adaptive selection of the parameter in regularization of ill-posed problems.
[18] Egger, H., Neubauer, A.: Preconditioning Landweber iteration in Hilbert scales.
[19] Jin, Q.: Error estimates of some Newton-type methods for solving nonlinear inverse problems in Hilbert scales.
[20] Lu, S., Pereverzev, S. V., Shao, Y., Tautenhahn, U.: On the generalized discrepancy principle for Tikhonov regularization in Hilbert scales.
[21] Mahale, P., Nair, M. T.: A simplified generalized Gauss-Newton method for nonlinear ill-posed problems.
[22] Mathe, P., Tautenhahn, U.: Error bounds for regularization methods in Hilbert scales by using operator monotonicity.
[23] Natterer, F.: Error bounds for Tikhonov regularization in Hilbert scales.
[24] Neubauer, A.: On Landweber iteration for nonlinear ill-posed problems in Hilbert scales.
[25] Jin, Q., Tautenhahn, U.: Inexact Newton regularization methods in Hilbert scales.
[26] Jin, Q., Tautenhahn, U.: Implicit iteration methods in Hilbert scales under general smoothness conditions.
[27] George, S., Nair, M. T.: Error bounds and parameter choice strategies for simplified regularization in Hilbert scales.
[28] Tautenhahn, U.: On a general regularization scheme for nonlinear ill-posed problems: II. Regularization in Hilbert scales.
[29] Tautenhahn, U.: Error estimates for regularization methods in Hilbert scales.
[30] Jin, Q.: On a class of frozen regularized Gauss-Newton methods for nonlinear inverse problems.
[31] George, S.: On convergence of regularized modified Newton's method for nonlinear ill-posed problems.