Two-Step Newton-Tikhonov Method for Hammerstein-Type Equations: Finite-Dimensional Realization

Finite-dimensional realization of a Two-Step Newton-Tikhonov method is considered for obtaining a stable approximate solution to nonlinear ill-posed Hammerstein-type operator equations KF(x) = f. Here F : D(F) ⊆ X → X is a nonlinear monotone operator, K : X → Y is a bounded linear operator, X is a real Hilbert space, and Y is a Hilbert space. The error analysis for this method is carried out under two general source conditions: the first involves the operator K, and the second involves the Fréchet derivative of F at an initial approximation x_0 of the solution x̂. The balancing principle of Pereverzev and Schock (2005) is employed in choosing the regularization parameter, and order-optimal error bounds are established. A numerical illustration is given to confirm the reliability of our approach.


Introduction
Tikhonov regularization (see, e.g., [1]) has been used extensively to stabilize the approximate solution of nonlinear ill-posed problems. In recent years, increased emphasis has been placed on iterative regularization procedures [2, 3] for obtaining the approximate solution of such problems. In this paper, we examine the use of iterative regularization procedures for Hammerstein-type [4, 5] equations of the form

KF(x) = f,    (1.1)

where F : D(F) ⊆ X → X is a nonlinear monotone operator, (F'(·))^{-1} does not exist, and K : X → Y is a bounded linear operator. Throughout this paper, D(F) is the domain of F, F'(·) is the Fréchet derivative of F, X is a real Hilbert space, and Y is a Hilbert space. The inner product and the corresponding norm are denoted by ⟨·, ·⟩ and ‖·‖, respectively.

ISRN Applied Mathematics
Recall (cf. [6]) that the operator F is said to be monotone if ⟨F(x) − F(y), x − y⟩ ≥ 0 for all x, y ∈ D(F). It is assumed throughout that f^δ ∈ Y is the available noisy data with ‖f − f^δ‖ ≤ δ, and that F possesses a uniformly bounded Fréchet derivative for each x ∈ D(F) (cf. [7]), that is, ‖F'(x)‖ ≤ M for some M.
Observe that the solution x̂ of (1.1) with f^δ in place of f can be obtained by first solving

Kz = f^δ    (1.4)

for z and then solving the nonlinear problem

F(x) = z.    (1.5)

This observation was exploited in [4, 5, 8]. In [4], z is approximated by z^δ_α := (K*K + αI)^{-1} K* f^δ, α > 0, δ > 0 (1.6), and (1.5) is then solved iteratively via the Newton-type procedure

x^δ_{n+1,α} = x^δ_{n,α} − F'(x_0)^{-1} (F(x^δ_{n,α}) − z^δ_α)    (1.7)

with x^δ_{0,α} := x_0; local linear convergence was obtained. Here and below, x_0 ∈ D(F) is a known initial approximation of the solution x̂ of (1.1) such that ‖x_0 − x̂‖ ≤ ρ.
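The two-stage strategy above — Tikhonov regularization (1.6) of the linear problem (1.4), followed by the frozen-derivative Newton iteration (1.7) for (1.5) — can be sketched numerically. The fragment below is a minimal finite-dimensional illustration; the matrix K and the monotone map F(x) = x + x³ are toy choices made purely for demonstration, not the paper's operators.

```python
import numpy as np

def tikhonov(K, f_delta, alpha):
    """Regularized solution z_alpha = (K*K + alpha I)^{-1} K* f_delta, cf. (1.6)."""
    n = K.shape[1]
    return np.linalg.solve(K.T @ K + alpha * np.eye(n), K.T @ f_delta)

def frozen_newton(F, dF0_inv, z, x0, n_iter=30):
    """Newton-type procedure (1.7) with the derivative frozen at x0:
    x_{n+1} = x_n - F'(x0)^{-1} (F(x_n) - z)."""
    x = x0.copy()
    for _ in range(n_iter):
        x = x - dF0_inv @ (F(x) - z)
    return x
```

With α small and noise-free data, the computed iterate recovers the chosen exact solution; the contraction factor of the frozen-derivative iteration is governed by ‖I − F'(x_0)^{-1}F'(x)‖, consistent with the local linear convergence noted above.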
In [8], George and Kunhanandan used a Newton-type iteration to solve (1.5) and obtained local quadratic convergence.
A sequence (x_n) in X with lim x_n = x* is said to be convergent of order p > 1 if there exist positive reals β and γ such that, for all n ∈ ℕ, ‖x_n − x*‖ ≤ β e^{−γ p^n}. If a sequence (x_n) satisfies ‖x_n − x*‖ ≤ β q^n for some 0 < q < 1, then (x_n) is said to be linearly convergent. As in [8], it is assumed that the solution x̂ of (1.1) satisfies

‖F(x̂) − F(x_0)‖ = min{‖F(x) − F(x_0)‖ : KF(x) = f, x ∈ D(F)}.

The regularization parameter α is chosen from a finite set using the adaptive method considered by Pereverzev and Schock in [9]. In [10], Argyros and Hilout considered a Two-Step Directional Newton Method (TSDNM) for approximating a zero x* of a differentiable function F defined on a convex subset D of a Hilbert space H with values in ℝ. Motivated by the TSDNM, in [11] we proposed a Two-Step Newton-Tikhonov Method (TSNTM) for solving (1.1).
In fact, in [11] we considered two cases of F: in the first case we assumed that F'(x_0)^{-1} exists, and in the second we assumed that F is monotone. In this paper we consider the finite-dimensional realization of the second case, that is, F monotone. The finite-dimensional realization of the method and the associated algorithm are proposed, for which local cubic convergence is established theoretically and validated numerically. The organization of this paper is as follows. Section 2 deals with discretized Tikhonov regularization, and Section 3 investigates the convergence of the discretized TSNTM. Section 4 discusses the algorithm, and the paper ends with a numerical example in Section 5.
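The order-of-convergence definition given above can be sanity-checked numerically: if the errors e_n = ‖x_n − x*‖ behave like e_{n+1} = e_n^p, then the ratio log e_{n+1} / log e_n recovers the order p. The choice p = 2 (quadratic) below is purely illustrative.

```python
import math

# For a sequence with errors e_{n+1} = e_n**p, the ratio
# log(e_{n+1}) / log(e_n) equals the convergence order p (here p = 2).
errs = [0.5]
for _ in range(4):
    errs.append(errs[-1] ** 2)
orders = [math.log(errs[i + 1]) / math.log(errs[i]) for i in range(4)]
```

The same diagnostic applied to the iterates of the TSNTM should produce ratios approaching 3, matching the cubic convergence established in Section 3.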

Discretized Tikhonov Regularization
This section deals with the discretized Tikhonov regularized solution z^{h,δ}_α of (1.4) and with a priori and a posteriori error estimates for ‖F(x̂) − z^{h,δ}_α‖, using an error estimate for ‖F(x̂) − z^δ_α‖ from [8].
The following assumption is used in [8] to obtain the error estimate.
Assumption 2.1. There exists a continuous, strictly monotonically increasing function ϕ : (0, ‖K‖²] → (0, ∞) with lim_{λ→0} ϕ(λ) = 0 satisfying sup_{λ>0} αϕ(λ)/(λ + α) ≤ ϕ(α) for all α > 0, and there exists w ∈ X with ‖w‖ ≤ 1 such that F(x̂) − F(x_0) = ϕ(K*K)w.
We assume that ε_h → 0 and τ_h → 0 as h → 0. The above assumption is satisfied if P_h → I pointwise and if K and F'(x) are compact operators. The discretized Tikhonov regularization of (1.4) consists of solving the equation

(P_h K*K P_h + αP_h) z = P_h K* f^δ

for z^{h,δ}_α ∈ R(P_h); the resulting error estimates involve the constant C := (1/2) max{Mρ, 1} + 1.
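A minimal numerical sketch of the discretized regularized solution follows. The concrete defining equation used here, (P_h K*K P_h + αI) z = P_h K* f^δ, is an assumption consistent with the usual finite-dimensional realization of Tikhonov regularization; P_h is represented by an orthogonal projection matrix.

```python
import numpy as np

def discretized_tikhonov(K, f_delta, alpha, Ph):
    """Sketch of z^{h,δ}_α: solve (P_h K* K P_h + alpha I) z = P_h K* f^δ.
    Because the right-hand side lies in range(Ph) and the system acts as
    alpha*I on its orthogonal complement, the solution lies in range(Ph)."""
    A = Ph @ K.T @ K @ Ph + alpha * np.eye(K.shape[1])
    return np.linalg.solve(A, Ph @ (K.T @ f_delta))
```

For Ph = I this reduces to ordinary Tikhonov regularization; shrinking the range of Ph confines the regularized solution to the chosen finite-dimensional subspace.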

Now the result follows from (2.9), Theorem 2.3, and the triangle inequality.

An Adaptive Choice of the Parameter
In this subsection, we consider the balancing principle established by Pereverzev and Schock [9] for choosing the parameter α. Let {α_i = μ^i α_0 : i = 0, 1, ..., N} with μ > 1 be the set of possible values of the parameter α.

Let

l := max{ i : ϕ(α_i) ≤ (δ + ε_h)/√α_i },    (2.13)

k := max{ i : ‖z^{h,δ}_{α_i} − z^{h,δ}_{α_j}‖ ≤ 4(δ + ε_h)/√α_j, j = 0, 1, ..., i }.    (2.14)

We use the following theorem, whose proof is analogous to that of Theorem 4.3 in [8], for our error analysis.

Theorem 2.5 (cf. [8], Theorem 4.3). Let l be as in (2.13), let k be as in (2.14), and let z^{h,δ}_{α_k} be as in (2.7) with α_k := μ^k α_0, μ > 1. Then l ≤ k and the corresponding order-optimal error estimate holds.
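The adaptive choice compares regularized solutions at geometrically spaced parameters α_i = μ^i α_0 and keeps the largest index for which all pairwise discrepancies stay below a noise-level threshold. The sketch below is a hedged reconstruction: the constant 4 and the use of δ alone are assumptions (the definitions (2.13)-(2.14) of the paper involve δ + ε_h).

```python
import numpy as np

def balancing_parameter(z_of_alpha, alphas, delta, C=4.0):
    """Balancing-principle sketch: with alphas[i] = mu**i * alpha0, return
    k = max{ i : ||z_{a_i} - z_{a_j}|| <= C*delta/sqrt(a_j) for all j <= i }."""
    zs = [z_of_alpha(a) for a in alphas]
    k = 0
    for i in range(len(alphas)):
        if all(np.linalg.norm(zs[i] - zs[j]) <= C * delta / np.sqrt(alphas[j])
               for j in range(i + 1)):
            k = i
        else:
            break
    return k, alphas[k]
```

When the regularized solutions barely change across the grid (stable problem), the largest parameter is selected, which gives the strongest stabilization compatible with the data accuracy.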

Discretized Two-Step Newton Method (DTSNM)
We need the following assumptions for the convergence of DTSNM and to obtain the error estimate.

First we consider a DTSNM for approximating the zero x^{h,δ}_{c,α_k} of (3.4), and then we show that x^{h,δ}_{c,α_k} is an approximation to the solution x̂ of (1.1), where c ≤ α_k. For an initial guess x_0 ∈ X and with R(x) := P_h F'(x) P_h + (α_k/c) P_h, the DTSNM is defined by the two-step iteration (3.5)-(3.6), where x^{h,δ}_{0,α_k} := P_h x_0. With the above notation, let k_0 be such that (3.7) holds.

Remark 3.3. Note that the above assumption is satisfied if we choose k_0 < min{1, 1/(1 + τ_0), 8/(4 + 3(1 + τ_0))}. Let g : (0, 1) → (0, 1) be the monotonically increasing function used in the convergence analysis below.
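The displayed formulas (3.5)-(3.6) are not fully recoverable here, so the sketch below implements a plausible two-step regularized Newton iteration consistent with R(x) = P_h F'(x) P_h + (α_k/c) P_h, taking P_h = I and a toy monotone F. It should be read as an assumption about the iteration's form, not as the paper's exact method.

```python
import numpy as np

def dtsnm(F, dF, z, x0, alpha, c, n_iter=8):
    """Hedged sketch of a two-step iteration of type (3.5)-(3.6) with P_h = I:
       y_n     = x_n - R(x_n)^{-1} [F(x_n) - z + (alpha/c)(x_n - x0)]
       x_{n+1} = y_n - R(y_n)^{-1} [F(y_n) - z + (alpha/c)(y_n - x0)]
    where R(x) = F'(x) + (alpha/c) I."""
    a = alpha / c
    I = np.eye(len(x0))

    def step(u):
        return u - np.linalg.solve(dF(u) + a * I, F(u) - z + a * (u - x0))

    x = x0.copy()
    for _ in range(n_iter):
        x = step(step(x))   # one pass performs both Newton-type steps
    return x
```

For α/c → 0 this degenerates to an undamped two-step Newton method for F(x) = z; the term (α_k/c)P_h plays the role of a Lavrentiev-type stabilizer when F'(x) is not boundedly invertible.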

By Assumption 3.1 and (3.7) we have (3.18). This proves (a). Now (b) follows from (a) and the triangle inequality (3.20). To prove (c), we observe (3.22); the penultimate step there follows from Assumption 3.1 and (3.7). Similarly one can prove (3.23). Thus, from (3.20), (3.22), (3.23), (a), and (b), we have

e^{h,δ}_{n,α_k} ≤ g(e^{h,δ}_{n−1,α_k}) e^{h,δ}_{n−1,α_k}.    (3.27)
Therefore, by (3.27) and (2.9), we obtain (3.28).
Now, since g is monotonically increasing and e^{h,δ}_{0,α_k} ≤ γ_ρ, we have g(e^{h,δ}_{0,α_k}) ≤ g(γ_ρ). This completes the proof of the theorem.

Proof (of Theorem 3.5). By (b) of Theorem 3.4 we have (3.29); that is, x_1 ∈ B_r(P_h x_0). Again, by (3.29) and (c) of Theorem 3.4, we have (3.31); the penultimate step follows from the monotonicity of g and (3.28). By (3.31) and (c) of Theorem 3.4 we have (3.32); that is, x^{h,δ}_{2,α_k}, y^{h,δ}_{2,α_k} ∈ B_r(P_h x_0). Continuing in this way, one can show that x^{h,δ}_{n,α_k}, y^{h,δ}_{n,α_k} ∈ B_r(P_h x_0) for all n ≥ 0. This completes the proof.
The main result of this section is the following theorem.

Theorem 3.6. Let y^{h,δ}_{n,α_k} and x^{h,δ}_{n,α_k} be as in (3.5) and (3.6), respectively, and let the assumptions of Theorem 3.5 hold. Then (x^{h,δ}_{n,α_k}) is a Cauchy sequence in B_r(P_h x_0) and converges to x^{h,δ}_{c,α_k} ∈ B_r(P_h x_0), with ‖x^{h,δ}_{n,α_k} − x^{h,δ}_{c,α_k}‖ ≤ C_0 e^{−γ_1 3^n} for some γ_1 > 0, where C_0 = 3(1 + τ_0)k_0 γ_ρ / (2(1 − g(γ_ρ))) + (3 + g(γ_ρ))/(1 − g(γ_ρ))².

Proof. Using relations (b) and (e) of Theorem 3.4 and (3.28), we obtain the required estimate. This completes the proof.
Hereafter we assume that r < 1/k_0 and K_1 < (1 − k_0 r)/(1 − c). The proof of the following theorem is analogous to that of Theorem 3.14 in [11], but for the sake of completeness we give it here.

Proof. By (3.39), where Γ is as in (3.40), and by (3.40), we obtain (3.41). This completes the proof of the theorem.

Theorem 3.8. Suppose x^{h,δ}_{c,α_k} is the solution of (3.4) and that Assumption 2.1 and the hypotheses of Theorem 3.7 hold. If, in addition, τ_0 < 1, then the corresponding error estimate for ‖x̂ − x^{h,δ}_{c,α_k}‖ holds.

Proof. Suppose x^δ_{c,α_k} and x^{h,δ}_{c,α_k} are the solutions of (3.36) and (3.4), respectively. By (3.36) we have (3.43); so from (3.4) and (3.43) we obtain (3.44), and by (3.44) we have (3.47). Now the result follows from (2.9), (3.47), and the triangle inequality.

The following theorem is a consequence of Theorems 3.6, 3.7, and 3.8.
Theorem 3.9. Let x^{h,δ}_{n,α_k} be as in (3.6), and let the assumptions of Theorems 3.6, 3.7, and 3.8 hold. Then ‖x̂ − x^{h,δ}_{n,α_k}‖ satisfies the combined error bound obtained from Theorems 3.6, 3.7, and 3.8, where C_0 and γ_1 are as in Theorem 3.6.
Step 6. Otherwise, repeat with i + 1 in place of i.
Step 8. Solve for x^{h,δ}_{n_k,α_k} using the iteration (3.6).
In the next section we consider an example to illustrate the above algorithm. The computational results provided endorse the reliability and effectiveness of our method.

Example
In this section we consider an example satisfying the assumptions made in this paper and give a numerical illustration. Consider the operator KF : L²(0, 1) → L²(0, 1), where K : L²(0, 1) → L²(0, 1) is the integral operator (Kx)(t) = ∫_0^1 k(t, s)x(s) ds with kernel k(t, s). Then, for all x(t), y(t) with x(t) > y(t) (see [7], Section 4.3), the defining monotonicity inequality holds; thus the operator F is monotone, and its Fréchet derivative F'(x) is readily computed. Let (V_n) be a sequence of finite-dimensional subspaces of X, and let P_h (h = 1/n) denote the orthogonal projection on X with range R(P_h) = V_n. We assume that dim V_n = n + 1 and that ‖P_h x − x‖ → 0 as h → 0 for all x ∈ X. We choose the linear splines {v_1, v_2, ..., v_{n+1}} on a uniform grid of n + 1 points in [0, 1] as a basis of V_n.
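The discretization of the integral operator K on the uniform grid can be sketched with a simple quadrature collocation. The paper's kernel k(t, s) is not recoverable from the extracted text, so the kernel below is a user-supplied placeholder; the trapezoidal rule is likewise an illustrative choice.

```python
import numpy as np

def discretize_K(kernel, n):
    """Collocation matrix for (Kx)(t) = \int_0^1 k(t,s) x(s) ds on the uniform
    grid t_i = i/n, i = 0, ..., n, using the trapezoidal rule."""
    t = np.linspace(0.0, 1.0, n + 1)
    w = np.full(n + 1, 1.0 / n)
    w[0] = w[-1] = 0.5 / n            # trapezoidal weights
    # row i of the matrix approximates the integral at collocation point t_i
    return kernel(t[:, None], t[None, :]) * w[None, :], t
```

The resulting (n+1)×(n+1) matrix replaces K in all of the finite-dimensional formulas above, with V_n represented by nodal values at the grid points.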
Since w_i ∈ V_n, w_i can be written in terms of the basis {v_j}; here F_{h2} := (F(y^{h,δ}_{n,α_k})(t_1), F(y^{h,δ}_{n,α_k})(t_2), ..., F(y^{h,δ}_{n,α_k})(t_{n+1}))^T.

Numerical Example
Example 5.1. To illustrate the method discussed above, we consider the space X = Y = L²(0, 1) and the Fredholm integral operator K : L²(0, 1) → L²(0, 1). The algorithm of Section 4 is applied with V_n the space of linear splines on a uniform grid of n + 1 points in [0, 1]. In our computation we take f(t) = (1/36π²)(27 sin πt − sin 3πt) + (1/36π)(27t² cos πt − 3t² cos 3πt + 6t cos 3πt − 3 cos 3πt − 27t cos πt) and f^δ = f + δ. Then the exact solution is x̂(t) = sin πt.

We use

x_0(t) = sin πt + (3/4π²)(1 + tπ² − t²π² − cos²πt)    (5.17)

as our initial guess, so that the function x_0 − x̂ satisfies the source condition with ϕ_1(λ) = λ. Thus we expect an accuracy of order at least O((δ + ε_h)^{1/2}).
We choose α_0 = 1.5δ², μ = 1.5, δ = 0.0667 = c, ε_h = 1/(10n²), ρ = 0.19, γ_ρ = 0.8173, and g(γ_ρ) ≈ 0.54. For every n, the number of iterations in this example was n_k = 3. The results of the computation are presented in Table 1; plots of the exact and approximate solutions are given in Figures 1 and 2.