Inverse Free Iterative Methods for Nonlinear Ill-Posed Operator Equations

We present a new iterative method, which does not involve inversion of the operator, for obtaining an approximate solution of the nonlinear ill-posed operator equation F(x) = y. The proposed method is a modified form of the Tikhonov gradient (TIGRA) method considered by Ramlau (2003). The regularization parameter is chosen according to the balancing principle considered by Pereverzev and Schock (2005). The error estimate is derived under a general source condition and is of optimal order. Some numerical examples involving integral equations are also given in this paper.


Introduction
This paper is devoted to the study of the nonlinear ill-posed problem

F(x) = y, (1)

where F : D(F) ⊆ X → Y is a nonlinear operator between the Hilbert spaces X and Y. We assume that F is a Fréchet-differentiable nonlinear operator acting between infinite dimensional Hilbert spaces X and Y with corresponding inner products ⟨⋅, ⋅⟩ and norms ‖⋅‖, respectively. Further, it is assumed that (1) has a solution x̂ for exact data, that is, F(x̂) = y, but due to the nonlinearity of F this solution need not be unique. Therefore we consider an x₀-minimal norm solution of (1). Recall that [1–3] a solution x̂ of (1) is said to be an x₀-minimal norm (x₀-MNS) solution of (1) if

‖x̂ − x₀‖ = min {‖x − x₀‖ : F(x) = y, x ∈ D(F)}. (3)

In the following, we always assume the existence of an x₀-MNS for exact data y. The element x₀ ∈ X in (3) plays the role of a selection criterion [4] and is assumed to be known.
Since (1) is ill-posed, regularization techniques are required to obtain an approximation for x̂. Tikhonov regularization has been investigated by many authors (see, e.g., [2, 4, 5]) to solve nonlinear ill-posed problems in a stable manner. In Tikhonov regularization, a solution of the problem (1) is approximated by a solution of the minimization problem

min J_α(x), J_α(x) := ‖F(x) − y^δ‖² + α‖x − x₀‖², (4)

where α > 0 is a small regularization parameter and y^δ ∈ Y is the available noisy data, for which we have the additional information that ‖y − y^δ‖ ≤ δ.
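For a discretized problem, the Tikhonov functional just described can be sketched in a few lines; the concrete operator F, the data, and the dimensions below are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def tikhonov_functional(F, x, y_delta, alpha, x0):
    """J_alpha(x) = ||F(x) - y_delta||^2 + alpha * ||x - x0||^2."""
    residual = F(x) - y_delta
    penalty = x - x0
    return residual @ residual + alpha * (penalty @ penalty)

# Illustrative nonlinear operator on R^2 (an assumption, not the paper's F).
F = lambda x: np.array([x[0] + 0.1 * x[1] ** 2, x[1]])
x0 = np.zeros(2)
y_delta = np.array([1.0, 0.5])
J = tikhonov_functional(F, np.array([1.0, 0.5]), y_delta, 0.01, x0)
```

Minimizing J_α trades the data misfit against the distance to the a priori guess x₀, which is exactly the selection role x₀ plays in the x₀-MNS definition.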
It is also known [1] that, for a properly chosen regularization parameter α, the minimizer x_α^δ of the functional J_α(⋅) is a good approximation to a solution x̂ with minimal distance from x₀. Thus the main focus is to find a minimizing element x_α^δ of the Tikhonov functional (4). But the Tikhonov functional with a nonlinear operator F might have several minima, so to ensure the convergence of any optimization algorithm to a global minimizer x_α^δ of the Tikhonov functional (4), one has to impose stronger restrictions on the operator [6].
In [6], Ramlau considered the TIGRA method defined iteratively by

x_{k+1}^δ = x_k^δ − β_k (F′(x_k^δ)*(F(x_k^δ) − y^δ) + α_k (x_k^δ − x₀)), (7)

where x_0^δ = x₀, β_k is a scaling parameter, and α_k is a regularization parameter which will change during the iteration, and obtained a convergence rate estimate for the TIGRA algorithm under the following assumptions:
(1) F is twice Fréchet differentiable with a continuous second derivative;
(2) the first derivative is Lipschitz continuous;
(3) there exists w ∈ Y with x₀ − x̂ = F′(x̂)*w;
(4) ‖w‖ ≤ ω and ω ≤ 0.241.
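As a point of comparison, the TIGRA update above can be sketched as a gradient step on the Tikhonov functional with per-iteration parameters β_k and α_k; the toy operator and parameter sequences below are illustrative assumptions.

```python
import numpy as np

def tigra(F, Fprime, x0, y_delta, betas, alphas):
    """Sketch of the TIGRA iteration (7): gradient steps on the Tikhonov
    functional; both the scaling beta_k and the regularization alpha_k
    change from step to step."""
    x = x0.copy()
    for beta_k, alpha_k in zip(betas, alphas):
        grad = Fprime(x).T @ (F(x) - y_delta) + alpha_k * (x - x0)
        x = x - beta_k * grad
    return x

# Toy example: F(x) = x (identity), so the update is plain damped descent
# toward y_delta as alpha_k -> 0.  All choices here are assumptions.
F = lambda x: x
Fprime = lambda x: np.eye(2)
y_delta = np.array([1.0, -1.0])
x = tigra(F, Fprime, np.zeros(2), y_delta,
          betas=[0.5] * 40, alphas=[2.0 ** -k for k in range(40)])
```

With a geometrically decaying α_k the iterate approaches the data-fitting solution, which illustrates why the schedule for α_k matters in TIGRA.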
In [15], Scherzer considered a similar iteration procedure under a much stronger condition on F. In this paper we consider a modified form of iteration (7). Precisely, we consider the sequence (x_{n,α}^δ) defined iteratively by

x_{n+1,α}^δ = x_{n,α}^δ − (A₀*(F(x_{n,α}^δ) − y^δ) + α(x_{n,α}^δ − x₀)), (11)

where x_{0,α}^δ := x₀, A₀ = F′(x₀), and α is the regularization parameter. The regularization parameter α is chosen from a finite set D_N = {α_i : 0 < α_0 < α_1 < ⋅⋅⋅ < α_N < 1} using the adaptive method considered by Pereverzev and Schock in [16]. We prove that x_{n,α}^δ converges to the unique solution x_α^δ (see Theorem 2) of the equation

A₀*(F(x) − y^δ) + α(x − x₀) = 0, (12)

and that x_α^δ is a good approximation of x̂. Our approach, in the convergence analysis of the method as well as in the choice of the parameters, is different from that of [6].
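A minimal numerical sketch of iteration (11) follows; the derivative is computed once at x₀ and α is held fixed throughout, and the limit of the iteration satisfies equation (12). The toy operator F and all numerical values are assumptions for illustration only.

```python
import numpy as np

def frozen_iteration(F, A0, x0, y_delta, alpha, n_steps):
    """Sketch of iteration (11): x_{n+1} = x_n - (A0^T (F(x_n) - y^delta)
    + alpha * (x_n - x0)), with A0 = F'(x0) computed once and alpha fixed."""
    x = x0.copy()
    for _ in range(n_steps):
        x = x - (A0.T @ (F(x) - y_delta) + alpha * (x - x0))
    return x

# Mildly nonlinear toy operator; A0 = F'(0) = I for this choice.
F = lambda x: x + 0.05 * x ** 2
x0 = np.zeros(2)
A0 = np.eye(2)
y_delta = np.array([0.4, 0.2])
alpha = 1e-3
x_star = frozen_iteration(F, A0, x0, y_delta, alpha, 200)
# x_star approximately solves (12): A0^T (F(x) - y^delta) + alpha (x - x0) = 0.
```

Note that, unlike the TIGRA sketch, no scaling parameter appears and no new derivative evaluations are needed inside the loop.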
Note that in the TIGRA method (i.e., (7)) the scaling parameter β_k and the regularization parameter α_k change during the iteration, but in the proposed method no scaling parameter is used and the regularization parameter does not change during the iteration. Also, in the proposed method one needs to compute the Fréchet derivative only at one point, x₀. These are the main advantages of the proposed method.
The organization of this paper is as follows. In Section 2 we provide some preparatory results, and Section 3 deals with the convergence analysis of the proposed method. Error bounds under an a priori parameter choice and under the balancing principle are given in Section 4. Numerical examples involving integral equations are given in Section 5. Finally, the paper ends with a conclusion in Section 6.

Preparatory Results
Throughout this paper we assume that the operator F satisfies the following assumptions.
Notice that in the literature a condition stronger than (a), namely a Lipschitz-type condition on F′ holding for all (x, u, v) ∈ D(F) × D(F) × X, is used. However, (a) holds in general, and the ratio of the corresponding constants can be arbitrarily large [17]. It is also worth noticing that (a)′ implies (a) but not necessarily vice versa, and the element involved in (a)′ is less accurate and more difficult to find than Φ (see also the numerical example).
has a unique solution x_α^δ in B_r(x₀). Observe that A₀*A₀ + αI is self-adjoint with ⟨(A₀*A₀ + αI)x, x⟩ ≥ α‖x‖² for all x ∈ X, and hence its spectrum is contained in [α, ‖A₀‖² + α]. Therefore by (17) it follows that A₀*A₀ + αI is invertible.
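The invertibility claim can be checked numerically: for any matrix A₀ and α > 0, the operator A₀*A₀ + αI has all eigenvalues at least α, so its inverse has norm at most 1/α. The random matrix and dimensions below are assumptions for the check.

```python
import numpy as np

# A0^T A0 is positive semidefinite, so A0^T A0 + alpha*I has eigenvalues
# >= alpha and is invertible, with ||(A0^T A0 + alpha I)^{-1}|| <= 1/alpha.
rng = np.random.default_rng(0)
A0 = rng.standard_normal((5, 3))      # an arbitrary rectangular map
alpha = 0.1
M = A0.T @ A0 + alpha * np.eye(3)
eigs = np.linalg.eigvalsh(M)          # eigenvalues of the symmetric matrix M
inv_norm = np.linalg.norm(np.linalg.inv(M), ord=2)
```

The bound ‖(A₀*A₀ + αI)⁻¹‖ ≤ 1/α is what makes the regularized equation (12) well posed for every fixed α > 0.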
Assumption 3. There exists a continuous, strictly monotonically increasing function φ : (0, ‖A₀‖²] → (0, ∞) satisfying: (iii) there exists v ∈ X such that x₀ − x̂ = φ(A₀*A₀)v. One of the crucial results that we are going to use to prove our main result is the following theorem. Theorem 4. Let 0 < α₀ ≤ α < 1 and let x_α^δ be the solution of (12). Proof.

Convergence Analysis
In this section we present a semilocal convergence analysis under the following conditions.
(C1) Suppose δ < √(1 − k₀). Let ‖x₀ − x̂‖ ≤ ρ, where ρ and the auxiliary constants used below are as defined in (37) and (38). In due course we will make use of the following lemma extensively.
Lemma 5. Let the constants be as in (37) and (38), and suppose Assumption 1 holds. Then the stated estimate follows from Assumption 1. This completes the proof.
Remark 10. If Assumption 1 is fulfilled only for all x ∈ B_r(x₀) ∩ Q ≠ ∅, where Q is a convex closed a priori set for which x̂ ∈ Q, then we can modify process (11) in the following way: x_{n+1,α}^δ = P_Q(G(x_{n,α}^δ)), to obtain the same estimate in the following Theorem 11; here P_Q is the metric projection onto the set Q and G is the step operator in (11).
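The projected variant in Remark 10 can be sketched as follows; the box-shaped set Q, the toy step operator G, and all numbers are assumptions chosen so that the metric projection is simple componentwise clipping.

```python
import numpy as np

def projected_step(G, project, x):
    """One step of the modified process: apply the step operator G of (11),
    then the metric projection P_Q onto the convex closed set Q."""
    return project(G(x))

# Example Q = [-1, 1]^2: its metric projection is componentwise clipping.
project = lambda x: np.clip(x, -1.0, 1.0)
# Toy step operator pulling toward a target that lies partly outside Q.
G = lambda x: x - 0.5 * (x - np.array([2.0, 0.3]))
x = np.zeros(2)
for _ in range(50):
    x = projected_step(G, project, x)
```

The iterate converges to the point of Q closest to the unconstrained limit: the first coordinate is clipped to the boundary value 1.0, while the second converges freely to 0.3.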

Error Bounds under Source Conditions
Combining the estimates in Theorems 4 and 8 we obtain the following.
Theorem 11. Let the assumptions of Theorems 4 and 8 hold and let x_{n,α}^δ be as in (11).

4.1. A Priori Choice of the Parameter.
Observe that the estimate δ/√α + φ(α) in Theorem 11 is of optimal order for the choice α := α_δ which satisfies δ/√α_δ = φ(α_δ). Now, using the function ψ(λ) := λ√(φ⁻¹(λ)), 0 < λ ≤ a, we have δ = √α_δ φ(α_δ) = ψ(φ(α_δ)), so that α_δ = φ⁻¹(ψ⁻¹(δ)). In view of the above observation, Theorem 11 leads to the following.
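The balancing calculation behind this choice can be written out explicitly; this is a sketch of the standard argument under the definitions of φ and ψ just given.

```latex
% Balancing delta/sqrt(alpha) against varphi(alpha):
\[
  \delta \;=\; \sqrt{\alpha_\delta}\,\varphi(\alpha_\delta)
         \;=\; \varphi(\alpha_\delta)\sqrt{\varphi^{-1}\!\big(\varphi(\alpha_\delta)\big)}
         \;=\; \psi\big(\varphi(\alpha_\delta)\big),
\]
\[
  \text{so } \varphi(\alpha_\delta) = \psi^{-1}(\delta),
  \qquad
  \alpha_\delta = \varphi^{-1}\big(\psi^{-1}(\delta)\big),
  \qquad
  \frac{\delta}{\sqrt{\alpha_\delta}} + \varphi(\alpha_\delta) = 2\,\psi^{-1}(\delta).
\]
```

Hence the total error bound of Theorem 11 is of order ψ⁻¹(δ) under this a priori choice.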
Theorem 12. Let ψ(λ) := λ√(φ⁻¹(λ)), 0 < λ ≤ a, and let the assumptions of Theorem 11 hold. For δ > 0, let α_δ = φ⁻¹[ψ⁻¹(δ)] and let n_δ be as in Theorem 11. Then ‖x_{n_δ,α_δ}^δ − x̂‖ = O(ψ⁻¹(δ)).

4.2. Balancing Principle. Note that the a priori choice of the parameter could be achieved only in the ideal situation when the function φ is known. The point is that the best function φ measuring the rate of convergence in Theorem 11 is usually unknown. Therefore in practical applications different parameters α = α_i are often selected from some finite set, and the corresponding elements x_{n,α_i}^δ, i = 1, 2, …, N, are studied on line. Let x_i := x_{n_i,α_i}^δ. Then from Theorem 11 we have ‖x_i − x̂‖ ≤ C(φ(α_i) + δ/√α_i). We consider the balancing principle suggested by Pereverzev and Schock [16] for choosing the regularization parameter α from the set D_N defined by D_N := {α_i = μ^i α₀ : i = 0, 1, …, N}, where α₀ = c√δ for some constant c (see [18]) and μ > 1.
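A minimal sketch of an adaptive selection in the spirit of the Pereverzev–Schock principle follows: from an increasing grid of parameters, keep the largest α_i whose approximation stays within a multiple of δ/√α_j of every earlier approximation. The constant c = 4 and the toy model of the regularized approximations are illustrative assumptions, not the paper's exact constants.

```python
import numpy as np

def balancing_choice(x_of_alpha, alphas, delta, c=4.0):
    """Sketch of an adaptive (balancing-type) parameter choice: pick the
    largest alpha_i such that ||x_i - x_j|| <= c*delta/sqrt(alpha_j) for
    all j < i; alphas must be increasing."""
    xs = [x_of_alpha(a) for a in alphas]
    chosen = 0
    for i in range(1, len(alphas)):
        if all(np.linalg.norm(xs[i] - xs[j]) <= c * delta / np.sqrt(alphas[j])
               for j in range(i)):
            chosen = i
        else:
            break
    return alphas[chosen]

# Toy model: the approximations agree for small alpha and then jump, so the
# principle should return the last alpha before the jump.
x_of_alpha = lambda a: np.array([0.0]) if a < 0.01 else np.array([10.0])
alphas = [0.001 * 2 ** i for i in range(6)]
alpha_star = balancing_choice(x_of_alpha, alphas, delta=0.01)
```

The selection uses only computed approximations and the noise level δ, which is why no knowledge of the source function φ is required.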
To obtain a conclusion from this parameter choice, we consider all possible functions φ satisfying Assumption 3 and φ(α_i) ≤ δ/√α_i. Any such function is called admissible for x̂, and it can be used as a measure for the rate of convergence of x_{n_i,α_i}^δ → x̂ (see [19]).
The main result of this section is the following theorem, whose proof is analogous to that of Theorem 4.4 in [13].

Numerical Examples
Let the half-space be modeled by two layers of constant but different densities σ₁ and σ₂, separated by a surface S to be determined. In a Cartesian coordinate system whose xy-plane coincides with the ground surface and whose z-axis is directed downward, the inverse gravimetry problem has the form (see [20] and the references therein)

F(u) = f,

where the data f(x, y) is composed of the anomalous gravitational field Δg(x, y) and a known correction term.
The discrete variant of the derivative F′(u₀) is a symmetric matrix, where u₀(x, y) = H is constant; its component with index (i, j) is evaluated by formula (67).
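A discrete derivative of this kind can be approximated and its symmetry checked numerically; the finite-difference Jacobian and the toy integral operator below are generic sketches under assumed choices, not the paper's formula (67).

```python
import numpy as np

def numerical_jacobian(F, u0, h=1e-6):
    """Finite-difference approximation of the discrete Frechet derivative
    F'(u0): perturb each component of u0 in turn."""
    F0 = F(u0)
    J = np.empty((F0.size, u0.size))
    for j in range(u0.size):
        e = np.zeros(u0.size)
        e[j] = h
        J[:, j] = (F(u0 + e) - F0) / h
    return J

# Toy discretized integral operator with a symmetric kernel; at a constant
# starting function u0 the derivative is (approximately) a symmetric matrix,
# mirroring the structure noted for the gravimetry example.
n = 6
s = np.linspace(0.0, 1.0, n)
K = np.exp(-np.abs(s[:, None] - s[None, :]))   # symmetric kernel matrix
F = lambda u: (K @ (u ** 2)) / n
u0 = np.full(n, 2.0)                           # constant starting function
J = numerical_jacobian(F, u0)
```

For this operator the exact derivative at a constant u₀ is (2u₀/n)·K, so the finite-difference matrix should agree with that symmetric matrix up to the step size h.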

Δσ = σ₂ − σ₁ is the density jump at the interface S, described by the function u = u(x, y) to be evaluated, and Δg(x, y) is the anomalous gravitational field caused by the deviation of the interface S from the horizontal asymptotic plane z = H; that is, the actual solution û(x, y) tends to H at infinity. The results of the numerical experiments are presented in Table 1. Here ũ is the numerical solution obtained by our method; the table reports the relative error of the solution and the residual ‖F(ũ) − f‖.

Table 1: Mesh size and corresponding errors.

Conclusion
We have considered an iterative method, which does not involve inversion of an operator, for obtaining, in an iterative manner, an approximate solution of the nonlinear ill-posed operator equation F(x) = y when the available data is y^δ in place of the exact data y. It is assumed that F is Fréchet differentiable. The procedure involves finding the fixed point of the function

G(x) = x − [F′(x₀)*(F(x) − y^δ) + α(x − x₀)].

For choosing the regularization parameter α we made use of the adaptive method suggested in [16].