Newton Type Iteration for Tikhonov Regularization of Nonlinear Ill-Posed Problems in Hilbert Scales

Recently, Vasin and George (2013) considered an iterative scheme for approximately solving an ill-posed operator equation F(x) = y. In order to improve the error estimate obtained by Vasin and George (2013), in the present paper we extend their iterative method to the setting of Hilbert scales. The error estimates, obtained under a general source condition on x_0 − x̂ (x_0 is the initial guess and x̂ is the actual solution) using the adaptive scheme proposed by Pereverzev and Schock (2005), are of optimal order. The algorithm is applied to the numerical solution of an integral equation in the Numerical Example section.


Introduction
Inverse problems have been one of the fastest growing research areas in applied mathematics in the last decades. It is well known that these problems typically lead to mathematical models that are ill-posed (according to Hadamard's definition [1]) in the sense that existence, uniqueness, or stable dependence of the solution on the data may fail.
In this paper, we consider the task of approximately solving the nonlinear ill-posed equation

F(x) = y. (1)

This equation and the task of solving it make sense only when placed in an appropriate framework. Throughout this paper, we will assume that F : D(F) ⊆ X → Y is a nonlinear operator between Hilbert spaces X and Y with inner products and corresponding norms denoted by ⟨⋅, ⋅⟩ and ‖⋅‖, respectively, and y ∈ Y. We assume that (1) has a unique solution x̂. For δ > 0, let y^δ ∈ Y be the available data with ‖y^δ − y‖ ≤ δ.
In [4], Bakushinskii proposed an iterative method, namely, the iteratively regularized Gauss-Newton method, in which the iterations are defined by

x_{n+1}^δ = x_n^δ − (A_n^* A_n + α_n I)^{−1} [A_n^* (F(x_n^δ) − y^δ) + α_n (x_n^δ − x_0)],

where A_n := F'(x_n^δ) (here and in the following F'(⋅) denotes the Fréchet derivative of F) and (α_n) is a sequence of positive real numbers satisfying

lim_{n→∞} α_n = 0,  α_n ≤ μ_1 α_{n+1}

for some constant μ_1 > 1. For the convergence analysis, Bakushinskii used the following Hölder-type source condition on the exact solution x̂ of (1):

x_0 − x̂ = (F'(x̂)^* F'(x̂))^ν v

for some v ∈ X. Later, Hohage [9, 10] and Langer and Hohage [11] considered the iteratively regularized Gauss-Newton method under different source conditions and stopping rules.
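For illustration, this iteratively regularized Gauss-Newton step can be sketched in Python on a toy finite-dimensional problem; the operator F, its Jacobian, the noise level, and all constants below are hypothetical stand-ins, not the example treated in this paper.

```python
import numpy as np

def F(x):
    # hypothetical smooth nonlinear operator on R^2 (illustration only)
    return np.array([x[0] + 0.1 * x[1] ** 2, x[1] + 0.1 * x[0] ** 2])

def F_prime(x):
    # Frechet derivative (Jacobian) of F at x
    return np.array([[1.0, 0.2 * x[1]],
                     [0.2 * x[0], 1.0]])

def irgnm(y_delta, x0, alpha0=1.0, mu=1.5, n_iter=15):
    """x_{n+1} = x_n - (A_n^T A_n + a_n I)^{-1} [A_n^T (F(x_n) - y_delta) + a_n (x_n - x0)],
    with geometrically decaying a_n = alpha0 / mu^n (so a_n / a_{n+1} = mu > 1)."""
    x = x0.copy()
    for n in range(n_iter):
        alpha = alpha0 / mu ** n
        A = F_prime(x)
        lhs = A.T @ A + alpha * np.eye(len(x))
        rhs = A.T @ (F(x) - y_delta) + alpha * (x - x0)
        x = x - np.linalg.solve(lhs, rhs)
    return x

x_true = np.array([0.5, -0.3])
y_delta = F(x_true) + 1e-4          # data perturbed at noise level delta ~ 1e-4
x_rec = irgnm(y_delta, x0=np.zeros(2))
print(np.linalg.norm(x_rec - x_true))
```

On this well-behaved toy problem the iterates settle near the exact solution; the point of the sketch is only the structure of the linearized, regularized step.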
In [5], Bakushinskii generalized the procedure in [4] by considering a generalized form of the regularized Gauss-Newton method in which the iterations are defined by

x_{n+1}^δ = x_0 − g_{α_n}(A_n^* A_n) A_n^* [F(x_n^δ) − y^δ − A_n (x_n^δ − x_0)],

where A_n and (α_n) are as in (3), and each g_α for α > 0 is a piecewise continuous function. It should be noted that the convergence of (3) was also shown by Bakushinsky and Smirnova in [12]. In [6], the error estimate is obtained under the following Hölder-type source condition:

x_0 − x̂ = (F'(x̂)^* F'(x̂))^ν v

for some v ∈ X. Recently, Mahale and Nair [8] considered the iteration procedure

x_{n+1}^δ = x_0 − g_{α_n}(A_0^* A_0) A_0^* [F(x_n^δ) − y^δ − A_0 (x_n^δ − x_0)],

where A_0 := F'(x_0), (α_n) is as in (3), and g_α for α > 0 is a positive real-valued piecewise continuous function defined on [0, a] with a ≥ ‖A_0‖². In [8], the stopping index n_δ for the iteration is chosen such that ‖F(x_n^δ) − y^δ‖ > cδ for 1 ≤ n < n_δ and ‖F(x_{n_δ}^δ) − y^δ‖ ≤ cδ, where c > 0 is a sufficiently large constant not depending on δ. To prove the results in [8], Mahale and Nair considered the following general source condition:

x_0 − x̂ = φ(A_0^* A_0) v

for some v ∈ X with ‖v‖ ≤ ρ, ρ > 0. Here, φ : (0, a] → (0, ∞) is a continuous, strictly monotonically increasing function satisfying lim_{λ→0} φ(λ) = 0, with a ≥ ‖A_0‖².
Note that the source conditions (5) and (8) involve the Fréchet derivative of F at the exact solution x̂, which is unknown in practice. But the source condition (12) depends on the Fréchet derivative of F at x_0.
In [7], Kaltenbacher considered a further iteration procedure of this type, with A_n := F'(x_n^δ), under two sets of conditions, (a) and (b), each with a corresponding choice of the parameters.
In [13], the author considered a particular case of the method (9), that is,

x_{n+1}^δ = x_n^δ − (A_0^* A_0 + αI)^{−1} [A_0^* (F(x_n^δ) − y^δ) + α (x_n^δ − x_0)],

or equivalently,

x_{n+1}^δ = x_0 − (A_0^* A_0 + αI)^{−1} A_0^* [F(x_n^δ) − y^δ − A_0 (x_n^δ − x_0)], (17)

for approximately solving (1). The analysis in [13] was carried out using a suitably constructed majorizing sequence, and the stopping rule in [13] was based on this majorizing sequence.
Recall [14, Definition 1.3.11] that a nonnegative increasing sequence (t_n) (i.e., t_{n+1} − t_n ≥ 0) is said to be a majorizing sequence of a sequence (x_n) if ‖x_{n+1} − x_n‖ ≤ t_{n+1} − t_n for all n. The majorizing sequence in [13] depends on the initial guess x_0, and the conditions on x_0 (see, e.g., (3.2) in [13]) are restrictive, so the method is not suitable for practical consideration.
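The defining inequality of a majorizing sequence, ‖x_{n+1} − x_n‖ ≤ t_{n+1} − t_n, is easy to check numerically; the sequences below are illustrative and unrelated to the majorizing sequence of [13].

```python
import numpy as np

def is_majorizing(t, xs):
    # (t_n) majorizes (x_n) iff ||x_{n+1} - x_n|| <= t_{n+1} - t_n for every n
    return all(
        np.linalg.norm(xs[n + 1] - xs[n]) <= t[n + 1] - t[n]
        for n in range(len(xs) - 1)
    )

# illustrative pair: x_n = (1 - 2^{-n}) v with ||v|| < 1 is majorized by t_n = 1 - 2^{-n}
v = np.array([0.3, 0.4])                        # ||v|| = 0.5
xs = [(1 - 2.0 ** (-n)) * v for n in range(10)]
t = [1 - 2.0 ** (-n) for n in range(10)]
print(is_majorizing(t, xs))                     # steps of (x_n) are half those of (t_n)
```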
In this paper, we consider the sequence (17) and analyze it by considering its even and odd terms separately, and we obtain the optimal order of the error. The regularization parameter α is chosen according to the balancing principle considered by Pereverzev and Schock in [15].
The organization of this paper is as follows. The proposed method and its convergence analysis are given in Section 2. Error analysis and the parameter choice strategy are discussed in Section 3. Section 4 deals with the implementation of the method; a numerical example is given in Section 5; and finally, the paper ends with a conclusion in Section 6.

Convergence of the Method
The iterates are defined by

x_{n+1}^δ = x_n^δ − R_α^{−1} [A_0^* (F(x_n^δ) − y^δ) + α (x_n^δ − x_0)],

where R_α := A_0^* A_0 + αI and x_0 is the initial guess.
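Assuming the iteration takes the frozen-derivative form suggested by R_α := A_0^* A_0 + αI with A_0 := F'(x_0), one sweep can be sketched as follows; the operator, the data, and the value of α are hypothetical.

```python
import numpy as np

def F(x):
    # hypothetical nonlinear operator on R^2 (illustration only)
    return np.array([x[0] + 0.1 * x[1] ** 2, x[1] + 0.1 * x[0] ** 2])

def F_prime(x):
    return np.array([[1.0, 0.2 * x[1]],
                     [0.2 * x[0], 1.0]])

def frozen_newton_tikhonov(y_delta, x0, alpha, n_iter=30):
    """x_{n+1} = x_n - R_alpha^{-1} [A0^T (F(x_n) - y_delta) + alpha (x_n - x0)],
    where A0 = F'(x0) is frozen at the initial guess and R_alpha = A0^T A0 + alpha I."""
    A0 = F_prime(x0)
    R_alpha = A0.T @ A0 + alpha * np.eye(len(x0))
    x = x0.copy()
    for _ in range(n_iter):
        rhs = A0.T @ (F(x) - y_delta) + alpha * (x - x0)
        x = x - np.linalg.solve(R_alpha, rhs)
    return x

x_true = np.array([0.4, -0.2])
y_delta = F(x_true) + 1e-4
x_alpha = frozen_newton_tikhonov(y_delta, x0=np.zeros(2), alpha=1e-2)
print(np.linalg.norm(x_alpha - x_true))
```

Since R_α is fixed, it can be factorized once and reused across all iterations, which is the practical appeal of freezing the derivative at x_0.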
Assumption 2. There exists a constant k_0 > 0 such that for every x, u ∈ D(F) and v ∈ X, there exists an element Φ(x, u, v) ∈ X satisfying

[F'(x) − F'(u)] v = F'(u) Φ(x, u, v),  ‖Φ(x, u, v)‖ ≤ k_0 ‖v‖ ‖x − u‖

for all x, u ∈ D(F) and v ∈ X. Hereafter, for convenience, we suppress the superscript δ and the subscript α in the iterates; the parameter α is selected from some finite set. Throughout this paper, we assume that the operator F is Fréchet differentiable at all x ∈ D(F).
Remark 3. Note that if ‖x_0 − x̂‖ ≤ ρ, then by Assumption 2 we obtain the inequality (24). Using the inequality (24), we prove the following.
The main result of Section 2 is the following.

Error Analysis
The next assumption is a source condition based on a source function φ and a property of φ. We will use this assumption to obtain an error estimate for the distance between the regularized approximation and x̂.
Theorem 9. Suppose that all assumptions of Theorems 5 and 7 are fulfilled. For δ > 0, let n_δ be as in Theorem 8 with α = α_δ. Then,

Adaptive Choice of the Parameter
In the balancing principle considered by Pereverzev and Schock in [15], the regularization parameter α = α_i is selected from some finite set

D_N := {α_i = μ^i α_0 : i = 0, 1, ..., N},  μ > 1.

Let n_i := n(α_i) and x_i := x_{n_i, α_i}^δ, where n_i is defined as in (20) with α = α_i and n + 1 = n_i. Then, from Theorem 8, we have

‖x_i − x̂‖ ≤ C (φ(α_i) + δ/√α_i).

Precisely, we choose the regularization parameter α = α_l from the set D_N, where

l := max{ i : ‖x_i − x_j‖ ≤ 4Cδ/√α_j, j = 0, 1, ..., i }.
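A sketch of this balancing principle over a geometric grid α_i = μ^i α_0: the largest index is kept whose iterate stays within 4cδ/√α_j of all earlier iterates. The linear Tikhonov model, the grid, and the constant c below are illustrative assumptions, not the paper's setting.

```python
import numpy as np

def balancing_index(x_of_alpha, alphas, delta, c=1.0):
    """Pereverzev-Schock balancing: largest i with
    ||x_i - x_j|| <= 4 c delta / sqrt(alpha_j) for all j <= i."""
    xs = [x_of_alpha(a) for a in alphas]
    best = 0
    for i in range(len(alphas)):
        if all(np.linalg.norm(xs[i] - xs[j]) <= 4 * c * delta / np.sqrt(alphas[j])
               for j in range(i + 1)):
            best = i
    return best

# illustrative linear model: x_alpha solves (A^T A + alpha I) x = A^T y_delta
A = np.diag([1.0, 0.1, 0.01])
x_true = np.array([1.0, 1.0, 1.0])
delta = 1e-3
rng = np.random.default_rng(0)
noise = rng.standard_normal(3)
y_delta = A @ x_true + delta * noise / np.linalg.norm(noise)

def x_of_alpha(alpha):
    return np.linalg.solve(A.T @ A + alpha * np.eye(3), A.T @ y_delta)

alphas = [1e-6 * 1.5 ** i for i in range(25)]   # geometric grid with mu = 1.5
i_star = balancing_index(x_of_alpha, alphas, delta)
print(i_star, alphas[i_star])
```

The selected α balances, up to a constant, the (unknown) approximation error against the propagated data error δ/√α, without requiring knowledge of the source function φ.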
To obtain a conclusion from this parameter choice, we consider all possible source functions φ satisfying the above assumptions and φ(α_i) ≤ δ/√α_i. Any such function is called admissible for x̂, and it can be used as a measure for the convergence x_i → x̂ (see [16]).
The main result of Section 3 is the following.
Proof. The proof is analogous to the proof of Theorem 4.4 in [13] and is therefore omitted here.

Implementation of the Method
Finally, the balancing algorithm associated with the choice of the parameter specified in Theorem 10 involves the following steps.
We choose  0 = 1.7 × ( +   ),  = 1.7,  0 = 1, and  = 0.71. The results of the computation are presented in Table 1. The plots of the exact solution and the approximate solution obtained are given in Figures 1 and 2.

Conclusion
In this paper, we considered a modified Gauss-Newton method for approximately solving the nonlinear ill-posed operator equation F(x) = y, where F : D(F) ⊆ X → Y is a nonlinear operator between the Hilbert spaces X and Y. The same method was considered in [13] by the author, but the analysis in [13] was based on a suitably constructed majorizing sequence. In this paper, we analyzed the sequence by considering its even and odd terms separately; this analysis is simpler than that of [13]. We used the adaptive method considered by Pereverzev and Schock in [15] for choosing the regularization parameter. The optimality of the method is proved under a general source condition. Finally, a numerical example involving a nonlinear integral equation shows the performance of the method.

Figure 1: Curves of the exact and approximate solutions.

Figure 2: Curves of the exact and approximate solutions.