Error Estimates of a Difference Approximation Method for a Backward Heat Conduction Problem

We introduce a central difference method for a backward heat conduction problem (BHCP). Error estimates for this method are provided together with a selection rule for the regularization parameter (the space step length). A numerical experiment is presented in order to illustrate the role of the regularization parameter.


Introduction
The backward heat conduction problem (BHCP) is also referred to as the final boundary value problem. In general, no solution exists which satisfies the heat conduction equation together with the final data and the boundary conditions. Even if a solution exists, it does not depend continuously on the final data. The BHCP is thus a typical example of an ill-posed problem: it is unstable in numerical simulations and requires special regularization methods. Many approximation approaches to this problem have been investigated. Authors such as Lattès and Lions [6], Showalter [10], Ames [1], and Miller [9] have approximated the BHCP by quasi-reversibility methods. In [11], Schröter and Tautenhahn established an optimal error estimate for a special BHCP. Mera and Jourhmane applied various numerical methods with regularization techniques to the problem in [8, 5]. A mollification method was studied by Hào in [4]. Recently, Liu [7] used a group preserving scheme to solve the backward heat equation numerically. A difference approximation method for the sideways heat equation was given by Eldén in [2], where a central difference replaces the time derivative in the heat equation.

In this paper, we consider a special BHCP [4]:

u_t(x,t) = u_{xx}(x,t),  x ∈ R, 0 < t < 1,
u(x,1) = ϕ(x),  x ∈ R.  (1.1)

We want to recover the temperature distribution u(x,t) for 0 < t < 1. Since the data ϕ(·) are based on (physical) observations and are not known with complete accuracy, we assume that ϕ(·) and ϕ^δ(·) satisfy

‖ϕ − ϕ^δ‖ ≤ δ,  (1.2)

where ϕ(·) and ϕ^δ(·) belong to L²(R), ϕ^δ(·) denotes the measured data, and δ denotes the noise level. As usual, we assume an a priori condition for problem (1.1):

‖u(·,0)‖ ≤ E,  (1.3)

where E is a positive bound. Problem (1.1) has a unique solution according to [3]. In order to use the Fourier transform technique, we define the Fourier transform of a function f(x) (x ∈ R) by

f̂(ξ) = (1/√(2π)) ∫_R e^{−iξx} f(x) dx.  (1.4)

We consider problem (1.1) in the L²-space with respect to the variable x. Taking the Fourier transform with respect to x, problem (1.1) can be reformulated in the frequency space as follows:

û_t(ξ,t) = −ξ² û(ξ,t),  ξ ∈ R, 0 < t < 1,
û(ξ,1) = ϕ̂(ξ),  ξ ∈ R.  (1.5)

The solution to (1.5) is given by

û(ξ,t) = e^{ξ²(1−t)} ϕ̂(ξ).  (1.6)

Then, by the inverse Fourier transform, the unique solution of (1.1) can be expressed as

u(x,t) = (1/√(2π)) ∫_R e^{ixξ} e^{ξ²(1−t)} ϕ̂(ξ) dξ.  (1.7)

From (1.6) and the Parseval identity, we can easily see that

‖u(·,t)‖² = ∫_R e^{2ξ²(1−t)} |ϕ̂(ξ)|² dξ.  (1.8)

Since in our problem u(·,t) is assumed to be in L²(R), the exact data function ϕ̂(ξ) must decay rapidly as |ξ| → ∞. At the same time, it is easy to see that a small perturbation in the data ϕ̂(ξ) may cause a dramatically large error in the solution û(ξ,t). Moreover, the magnifying factor is e^{ξ²}; hence the problem is severely ill-posed. By using a central difference with step length h to approximate the second derivative u_{xx}, we obtain a regularized problem (with noisy data):

v_t(x,t) = (v(x+h,t) − 2v(x,t) + v(x−h,t))/h²,  x ∈ R, 0 < t < 1,
v(x,1) = ϕ^δ(x),  x ∈ R.  (1.9)

Similarly, we easily obtain the solution of problem (1.9) in the frequency space:

v̂(ξ,t) = e^{4 sin²(hξ/2)(1−t)/h²} ϕ̂^δ(ξ).  (1.10)

Then, by the inverse Fourier transform,

v(x,t) = (1/√(2π)) ∫_R e^{ixξ} e^{4 sin²(hξ/2)(1−t)/h²} ϕ̂^δ(ξ) dξ.  (1.11)

In the forthcoming section, we will see that (1.11) is a stable approximate solution for the problem (1.1).
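The contrast between the unbounded exact factor e^{ξ²(1−t)} and the bounded factor e^{4 sin²(hξ/2)(1−t)/h²} of the regularized problem can be checked numerically. The following Python sketch (function names are our own, illustrative choices) verifies that the regularized factor never exceeds e^{4(1−t)/h²}:

```python
import numpy as np

# Amplification factors in the frequency space (illustrative sketch).
# Exact backward problem:  e^{xi^2 (1 - t)}  -- unbounded in xi.
# Central-difference regularization with step h:
#   e^{4 sin^2(h xi / 2) (1 - t) / h^2}  -- bounded by e^{4 (1 - t) / h^2},
# since sin^2 <= 1.

def exact_factor(xi, t):
    return np.exp(xi**2 * (1.0 - t))

def regularized_factor(xi, t, h):
    return np.exp(4.0 * np.sin(h * xi / 2.0)**2 * (1.0 - t) / h**2)

xi = np.linspace(0.0, 10.0, 1001)
t, h = 0.5, 0.76

bound = np.exp(4.0 * (1.0 - t) / h**2)
print(regularized_factor(xi, t, h).max() <= bound)  # True: bounded for all xi
print(exact_factor(10.0, t))                        # already enormous at xi = 10
```

The bound e^{4(1−t)/h²} is what makes (1.11) a stable formula: the noise in ϕ^δ is amplified by at most a constant depending on h, while the exact factor e^{ξ²(1−t)} magnifies high frequencies without limit.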

Error estimates
In this section, we prove error estimates between the exact solution (1.7) and the approximate solution (1.11). The following theorem is the main result of this paper.
Theorem 2.1. Suppose that u(x,t) is given by (1.7) with exact data ϕ and that v(x,t) is given by (1.11) with noisy data ϕ^δ. If there exists a bound ‖u(·,0)‖ ≤ E, the data functions satisfy ‖ϕ − ϕ^δ‖ ≤ δ, and the step length is chosen as h = 2(ln(E/δ))^{−1/2}, then the following error estimates can be obtained.
It is easy to see that the space step length h is the regularization parameter of this problem. The theorem thus gives a rule for choosing the regularization parameter, which is very important in the study of ill-posed problems.
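The parameter choice rule of Theorem 2.1 is straightforward to evaluate; a minimal Python sketch (the function name `step_length` is ours, not from the paper):

```python
import math

# Parameter choice rule of Theorem 2.1: h = 2 * (ln(E/delta))^(-1/2),
# where E bounds ||u(., 0)|| and delta is the noise level.

def step_length(E, delta):
    return 2.0 / math.sqrt(math.log(E / delta))

# Values used in the numerical example of Section 3 (E ~ 1.1, delta = 0.001):
h = step_length(1.1, 0.001)
print(round(h, 2))  # 0.76
```

Note that h → 0 as δ → 0: the regularization is switched off as the data become exact, as one expects from a consistent regularization scheme.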

A numerical example
In this section, by a numerical experiment, we will study how the regularization parameter h influences the approximation.
It is easy to verify that the function

u(x,t) = (1+4t)^{−1/2} e^{−x²/(1+4t)}  (3.1)

is the unique solution of the initial value problem

u_t = u_{xx},  x ∈ R, t > 0,
u(x,0) = e^{−x²},  x ∈ R.  (3.2)

Hence, u(x,t) given by (3.1) is also the solution of the following backward heat equation for 0 ≤ t < 1:

u_t = u_{xx},  x ∈ R, 0 < t < 1,
u(x,1) = 5^{−1/2} e^{−x²/5},  x ∈ R.  (3.3)

Let s = 1 − t and w(x,s) = u(x,t). Then w satisfies the initial condition

w(x,0) = u(x,1),  x ∈ R,  (3.5)

and the equation

w_s(x,s) = −w_{xx}(x,s),  x ∈ R, 0 < s ≤ 1.  (3.6)

First let us list some notation: x_i = ih for i = −n, ..., 0, ..., n, and w_i = w_i(s) = w(ih,s). Then (3.6) with the initial condition (3.5) can be discretized as the system of ordinary differential equations

d/ds (w_{−n}, ..., w_0, ..., w_n)^T = −(1/h²) A (w_{−n}, ..., w_0, ..., w_n)^T,  (3.7)

where A is the (2n+1) × (2n+1) tridiagonal matrix with −2 on the diagonal and 1 on the sub- and superdiagonals, together with the initial data

(w_{−n}(0), ..., w_0(0), ..., w_n(0))^T,  w_i(0) = w(ih, 0).  (3.8)
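The discretized system obtained from (3.6) and the initial data (3.8) can be assembled as follows. This is a minimal Python sketch (the function names are ours); grid values beyond the endpoints x = ±20 are taken as zero, which is justified below:

```python
import numpy as np

# Assemble the semi-discrete system dW/ds = -(1/h^2) * A * W, where A is the
# (2n+1) x (2n+1) tridiagonal central-difference matrix (-2 on the diagonal,
# 1 on the sub- and superdiagonals). Values beyond x = -20, 20 are treated as
# zero, since the final data there are negligibly small.

def difference_matrix(n):
    m = 2 * n + 1
    A = -2.0 * np.eye(m)
    A += np.diag(np.ones(m - 1), 1) + np.diag(np.ones(m - 1), -1)
    return A

def initial_data(n, h):
    # w_i(0) = u(x_i, 1) = 5^{-1/2} exp(-x_i^2 / 5) at the grid points x_i = i*h
    x = h * np.arange(-n, n + 1)
    return np.exp(-x**2 / 5.0) / np.sqrt(5.0)

n, h = 26, 0.76             # n*h is approximately 20, matching x in [-20, 20]
A = difference_matrix(n)
W0 = initial_data(n, h)
print(A.shape, W0.shape)    # (53, 53) (53,)
```

Note that h enters the right-hand side as 1/h²; choosing the grid therefore simultaneously fixes the regularization parameter.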
This system of ordinary differential equations can easily be solved. In our numerical implementation we use a Runge-Kutta method; the standard ODE solver ode45 in Matlab is such a method. Now we focus on the numerical experiment. The main aim is to investigate the role of the regularization parameter h. We perform the numerical experiment on the interval x ∈ [−20,20] for t ∈ [0,1]. This is reasonable in that the initial data at the points x = −20, 20 in (3.5) can be taken as 0 in the computation, since the final value u(x,1) in (3.3) tends to 0 as x → ±∞. To test the accuracy of an approximate solution, we compute the root mean square error (RMSE)

ε(v) = ( (1/(2n+1)) Σ_{i=−n}^{n} (v_i − u_i)² )^{1/2}  (3.9)

over the 2n + 1 test points on the x-axis, where v_i and u_i are, respectively, the approximate and the exact temperature at a test point. Obviously, (3.9) approximates the L²-norm error. For convenience, in our numerical experiment the noisy data are generated by

w_i^δ(0) = w_i(0) + δ,  i = −n, ..., n,  (3.10)

where the w_i(0) are the exact data given in (3.8). Thus, by (3.9), the data error is ε(w^δ(0)) = δ. From Figure 3.1, one can see that for fixed δ = 0.001, the L²-error attains its minimum near the "optimal" h = 2(ln(E/δ))^{−1/2} ≈ 0.76 (here we take ‖u(·,0)‖_{L²(R)} = E ≈ 1.1).
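The whole experiment can be sketched in a few lines of Python. The function names and step counts below are our illustrative choices, not from the paper, and a hand-written classical fourth-order Runge-Kutta method stands in for Matlab's ode45:

```python
import numpy as np

# Sketch of the experiment: march dW/ds = -(1/h^2) A W from s = 0 to s = 1
# with classical RK4, perturb the data by delta as in (3.10), and measure
# the RMSE (3.9) against the exact solution (3.1) at t = 0.

def rhs(w, h):
    dw = np.empty_like(w)
    dw[1:-1] = -(w[2:] - 2.0 * w[1:-1] + w[:-2]) / h**2
    dw[0] = -(w[1] - 2.0 * w[0]) / h**2     # zero ghost value beyond x = -20
    dw[-1] = -(w[-2] - 2.0 * w[-1]) / h**2  # zero ghost value beyond x = 20
    return dw

def solve_backward(w0, h, s_end=1.0, steps=2000):
    w, ds = w0.copy(), s_end / steps
    for _ in range(steps):                  # classical RK4 time stepping
        k1 = rhs(w, h)
        k2 = rhs(w + 0.5 * ds * k1, h)
        k3 = rhs(w + 0.5 * ds * k2, h)
        k4 = rhs(w + ds * k3, h)
        w += ds * (k1 + 2.0 * k2 + 2.0 * k3 + k4) / 6.0
    return w

def rmse(v, u):
    return np.sqrt(np.mean((v - u) ** 2))   # formula (3.9)

def exact(x, t):
    return np.exp(-x**2 / (1.0 + 4.0 * t)) / np.sqrt(1.0 + 4.0 * t)  # (3.1)

delta, h = 0.001, 0.76                      # noise level and "optimal" h
n = int(round(20.0 / h))
x = h * np.arange(-n, n + 1)
w0 = exact(x, 1.0) + delta                  # noisy data, cf. (3.10)
v = solve_backward(w0, h)                   # s = 1 recovers u(x, 0)
print(rmse(v, exact(x, 0.0)))
```

Repeating this for a range of h values and plotting the RMSE reproduces the qualitative behaviour of Figure 3.1: a very small h lets the amplified noise dominate, a large h over-smooths, and the error is smallest near the h given by the parameter choice rule.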
From Figure 3.2, we can see that for a fixed h, the L²-error increases as t approaches 0. This is natural, since the amplification factor e^{ξ²(1−t)} in (1.6) grows as t decreases, so the recovery becomes harder the further back in time we go.