Identifying Initial Condition in Degenerate Parabolic Equation with Singular Potential

A hybrid algorithm and a regularization method are proposed, for the first time, to solve the one-dimensional degenerate inverse heat conduction problem of estimating the initial temperature distribution from point measurements. The evolution of the heat is governed by a degenerate parabolic equation with singular potential. The problem is formulated in a least-squares framework: an iterative procedure minimizes the difference between the given measurements and the values of a reconstructed field at the sensor locations. The mathematical model leads to a nonconvex minimization problem. To solve it, we prove the existence of at least one solution of the problem and we propose two approaches: the first is based on a Tikhonov regularization, while the second is based on a hybrid genetic algorithm (a genetic algorithm coupled with a gradient-type descent method). Some numerical experiments are given.


Introduction
An inverse problem arises when the PDE solution is measured or specified and one seeks to determine some property of the system (coefficients, forcing term, boundary, or initial condition) from partial knowledge of the system in a limited time interval (see [1, 2]).
In recent years, an increasing interest has been devoted to degenerate parabolic equations. Indeed, many problems coming from physics (boundary layer models in [3], models of Kolmogorov type in [4], etc.), biology (Wright-Fisher models in [5] and Fleming-Viot models in [6]), and economics (Black-Merton-Scholes equations in [7]) are described by degenerate parabolic equations [8].
The identification of the initial state of nondegenerate parabolic problems is well studied in the literature (see [9-11]). However, as far as we know, the degenerate case has not been analysed in the literature.
The mathematical model leads to a nonconvex minimization problem: find $u_0 \in U_{ad}$ such that
$$J(u_0) = \min_{v \in U_{ad}} J(v),$$
where the functional $J$ is the least-squares misfit
$$J(v) = \frac{1}{2} \int_0^T \| C\,u(v) - u^{obs} \|^2 \, dt,$$
subject to $u$ being the weak solution of parabolic problem (1) with initial state $v$, $u^{obs}$ an observation of $u$ in $\Omega \times \,]0,T[$, and $C$ the observation operator. The space $U_{ad}$ is the set of admissible initial states. Problem (3) is ill-posed in the sense of Hadamard. To solve this problem, we propose two approaches. The first minimizes the regularized functional
$$J_\varepsilon(v) = J(v) + \frac{\varepsilon}{2} \| v - u_b \|^2_{L^2(\Omega)}. \quad (5)$$
Here, the last term in (5) stands for the so-called Tikhonov-type regularization ([12, 13]), $\varepsilon$ being a small regularizing coefficient that provides extra convexity to the functional $J_\varepsilon$, and $u_b$ an a priori (background) knowledge of the true state $u_0^{exact}$ (the state to estimate). We consider that the values of $u_b$ are given at each point of the analysis grid.
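For illustration only, a discrete analogue of this regularized least-squares functional can be sketched in Python; the function names and the identity observation operator below are hypothetical choices for the sketch, not taken from the paper:

```python
import numpy as np

def tikhonov_functional(u0, forward, u_obs, u_b, eps):
    """Discrete analogue of the regularized functional (5).

    u0      : candidate initial state (1D array)
    forward : map from an initial state to the field at the sensor locations
    u_obs   : observed data at the sensors
    u_b     : a priori (background) estimate of the true initial state
    eps     : small regularizing coefficient
    """
    misfit = forward(u0) - u_obs
    return 0.5 * np.sum(misfit**2) + 0.5 * eps * np.sum((u0 - u_b)**2)

# Toy usage with an identity observation operator and noise-free data:
u_exact = np.array([1.0, 2.0, 3.0])
J = tikhonov_functional(u_exact, lambda u: u, u_exact, np.zeros(3), 1e-2)
```

With exact data the misfit term vanishes and only the Tikhonov penalty remains, which is what makes the minimization well-posed.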
The second approach is applied when only partial knowledge of the values of $u_b$ is available (for example, 20%); in that case the regularization parameter is very difficult to determine. To overcome this difficulty, we propose a new approach based on a hybrid genetic algorithm (a genetic algorithm coupled with a gradient-type descent method). Finally, we make a comparison between the two mentioned approaches (with 20% of $u_b$ known).
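The idea of hybridizing a genetic algorithm with gradient descent can be sketched on a toy nonconvex objective; everything below (the objective, population size, mutation scale) is an illustrative assumption, not the paper's algorithm:

```python
import numpy as np

rng = np.random.default_rng(0)

def J(u):
    """Toy nonconvex objective standing in for the data-misfit functional."""
    return np.sum(u**2) + 0.5 * np.sum(np.sin(3 * u))

def gradJ(u):
    """Exact gradient of the toy objective."""
    return 2 * u + 1.5 * np.cos(3 * u)

def descent(u, steps=20, lr=0.05):
    """Gradient-descent local refinement applied to one individual."""
    for _ in range(steps):
        u = u - lr * gradJ(u)
    return u

def hybrid_ga(dim=4, pop_size=12, generations=30):
    pop = rng.uniform(-2, 2, size=(pop_size, dim))
    for _ in range(generations):
        pop = np.array([descent(ind) for ind in pop])        # local search
        fitness = np.array([J(ind) for ind in pop])
        parents = pop[np.argsort(fitness)[: pop_size // 2]]  # selection
        children = []
        for _ in range(pop_size - len(parents)):
            a, b = parents[rng.integers(len(parents), size=2)]
            child = 0.5 * (a + b) + rng.normal(0, 0.1, size=dim)  # crossover + mutation
            children.append(child)
        pop = np.vstack([parents, children])
    best = min(pop, key=J)
    return best, J(best)

best, value = hybrid_ga()
```

The genetic layer explores the nonconvex landscape globally, while the descent step exploits local gradient information; this division of labor is the rationale for the hybrid method.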
First of all, we prove that problem (3) has at least one solution. The gradient of the functional $J$ is computed with the adjoint method. Numerical experiments are presented to show the performance of our approaches.
We want to estimate $u_0$ from an observation $u^{obs}(x,t)$ of $u(x,t)$ in $\Omega \times \,]0,T[$. The minimization problem associated with this estimation is as above, with the functional $J$ defined as before, subject to $u$ being the weak solution of parabolic problem (6) with initial state $u_0$, $u_b$ the background state, and $C$ the observation operator. The space $U_{ad}$ is the set of admissible initial states (it will be defined later). We now specify some notation and introduce the weighted functional spaces adapted to the degenerate coefficient (see [14-16]). We recall (see [16]) that the weighted space $H^1_a$ is a Hilbert space, that it is the closure of $C_c^\infty(0,1)$ for the associated norm, and that the relevant embeddings are compact. Firstly, we prove that problem (6) is well-posed, that the functional $J$ is continuous, and that $J$ is Gâteaux differentiable on $U_{ad}$.
The weak formulation of problem (6) is given below. We discuss the following cases.
(1) Noncoercive Case (see [14], $\lambda = 0$). In this case, the bilinear form reduces to its diffusion part. Since the diffusion coefficient satisfies $a(0) = 0$, the bilinear form is noncoercive. Let $U_{ad}$ be the admissible set defined in (16), where $c$ is a real strictly positive constant. We recall the following theorem.
Theorem 1 (see [14, 17, 18]). For all $f \in L^2(\Omega \times \,]0,T[)$ and $u_0 \in L^2(0,1)$, there exists a unique weak solution $u$ of problem (6), and there is a constant $C_T$ such that any solution of (6) satisfies the corresponding a priori sup-norm and energy estimate.

Theorem 2. Let $u$ be the weak solution of (6) with initial state $u_0$. In the noncoercive case, the function $u_0 \mapsto u$ is continuous, and the functional $J$ has at least one minimum in $U_{ad}$.
Theorem 3. Let $u$ be the weak solution of (6) with initial state $u_0$. The function $u_0 \mapsto u$ is Gâteaux differentiable in $U_{ad}$.
(2) Singular Case. Consider the unbounded operator $(A, D(A))$ associated with problem (6). Let $U_{ad}$ be the admissible set defined in (25), where $c$ is a real strictly positive constant. We recall the following theorem.
Theorem 6. Let $u$ be the weak solution of (6) with initial state $u_0$. The function $u_0 \mapsto u$ is Gâteaux differentiable in $U_{ad}$.
Consider $w = \tilde{u} - u$, with $u$ the weak solution of (6) with initial state $u_0$ and $\tilde{u}$ the weak solution of (6) with initial state $\tilde{u}_0 = u_0 + h$. Consequently, $w$ is the solution of the corresponding variational problem; hence, $w$ is the weak solution of (6) with $f = 0$. We apply the estimate of Theorem 1 with $f = 0$, which gives the following.
Proof of Theorem 3. Let $u_0 \in U_{ad}$ and $h$ such that $u_0 + h \in U_{ad}$; we define the difference quotient, where $w$ is the solution of the associated variational problem, and we set the candidate derivative accordingly. We want to show that the quotient converges to this candidate. One easily verifies that the difference between the quotient and the candidate solves a variational problem of the same form; in the same way as in the proof of continuity, we deduce that this difference tends to zero. Hence, the function $u_0 \mapsto u$ is Gâteaux differentiable in $U_{ad}$, and we deduce the existence of the gradient of the functional $J$.
Consider $w = \tilde{u} - u$, with $u$ the weak solution of (6) with initial state $u_0$ and $\tilde{u}$ the weak solution of (6) with initial state $\tilde{u}_0 = u_0 + h$.
Consequently, $w$ is the solution of variational problem (50). Taking $v = w$, using the fact that $\Omega$ is independent of $t$, integrating between $0$ and $s$ with $s \in [0,T]$, and using that $a(x) \geq 0$ and $-\lambda/x^\beta > 0$ for all $x \in \Omega$, we obtain a bound of the form $\sup_{s \in [0,T]} \|w(s)\|_{L^2} \leq c\,\|h\|_{L^2}$, which gives the continuity of the function $u_0 \mapsto u$. Hence, the functional $J$ is continuous on $U_{ad}$.

Proof of Theorem 6. Let $u_0 \in U_{ad}$ and $h$ such that $u_0 + h \in U_{ad}$; we define the difference quotient, where $w$ is the solution of the associated variational problem, and we set the candidate derivative accordingly. We want to show that the quotient converges to this candidate. One easily verifies that the difference between the quotient and the candidate solves a variational problem of the same form; in the same way as in the proof of continuity, we deduce that this difference tends to zero. Hence, in all cases, the function $u_0 \mapsto u$ is Gâteaux differentiable in $U_{ad}$, and we deduce the existence of the gradient of the functional $J$.
Now, we compute the gradient of $J$ with the adjoint state method.
Using $\psi(T) = 0$, we may rewrite (69) accordingly. The Gâteaux derivative of the functional at $u_0$ in the direction $h \in L^2(\Omega)$ follows, and after some calculations we arrive at the adjoint model (75). Problem (75) is retrograde (backward in time); we therefore make the change of variable $t \leftrightarrow T - t$, which gives (76) with $\tilde{\psi}(t) = \psi(T - t)$. From (71), (74), and (75), the gradient of $J$ is obtained; with the change of variable $t \leftrightarrow T - t$, it is expressed in terms of $\tilde{\psi}$. Thus, to compute a gradient of $J$, we solve two problems: (6) and (76). The solution of (6) is used in the right-hand side of problem (76).
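The two-solve structure (forward problem, then retrograde adjoint problem) can be illustrated on a simple nondegenerate 1D heat equation with an explicit Euler scheme; this is a sketch under those simplifying assumptions (the data, grid, and step sizes are hypothetical), and the final finite-difference check confirms that the adjoint run reproduces the gradient:

```python
import numpy as np

# Adjoint-gradient sketch for u_t = u_xx, nondegenerate for simplicity
# (an illustration, not the paper's degenerate solver).
N, steps, dt = 20, 50, 1e-4
dx = 1.0 / (N + 1)

def laplacian(v):
    """Discrete Laplacian with homogeneous Dirichlet boundary conditions."""
    out = np.empty_like(v)
    out[1:-1] = (v[2:] - 2 * v[1:-1] + v[:-2]) / dx**2
    out[0] = (v[1] - 2 * v[0]) / dx**2
    out[-1] = (v[-2] - 2 * v[-1]) / dx**2
    return out

def forward(u0):
    """Explicit Euler time stepping of the state equation."""
    u = u0.copy()
    for _ in range(steps):
        u = u + dt * laplacian(u)
    return u

def objective(u0, data):
    """Least-squares misfit at final time."""
    r = forward(u0) - data
    return 0.5 * np.dot(r, r)

def gradient(u0, data):
    """Retrograde (adjoint) problem: start from the final-time misfit and
    run the same self-adjoint operator backward in time."""
    p = forward(u0) - data
    for _ in range(steps):
        p = p + dt * laplacian(p)
    return p

x = np.linspace(dx, 1 - dx, N)
data = np.sin(np.pi * x)      # hypothetical observation
u0 = np.cos(np.pi * x)        # hypothetical initial guess

g = gradient(u0, data)
# Finite-difference check of one gradient component:
e = np.zeros(N)
e[3] = 1.0
fd = (objective(u0 + 1e-6 * e, data) - objective(u0 - 1e-6 * e, data)) / 2e-6
```

Because the discrete Laplacian here is symmetric, the adjoint evolution uses the same operator as the forward one; in the degenerate, singular-potential case the adjoint operator must be derived from (75)-(76) instead.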
Step 1 (discretization of the state equation). Let $h$ be the step in space and $\Delta t$ the step in time. Letting $U^n = (u_i^n)_{i \in \{1,2,\ldots,N\}}$, we finally obtain the discrete scheme. Step 2 (discretization of the functional).
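As a hypothetical concrete instance of the spatial discretization, assuming a degenerate coefficient of the form $a(x) = x^\alpha$ (a common model choice, not necessarily the paper's), the conservative finite-difference matrix can be assembled as:

```python
import numpy as np

def degenerate_matrix(N, alpha):
    """Finite-difference matrix for u -> d/dx(x**alpha du/dx) on (0, 1)
    with homogeneous Dirichlet conditions, in conservative (flux) form."""
    h = 1.0 / (N + 1)
    x = np.linspace(h, 1.0 - h, N)    # interior grid points
    a_plus = (x + h / 2) ** alpha     # coefficient at x_{i+1/2}
    a_minus = (x - h / 2) ** alpha    # coefficient at x_{i-1/2}
    A = np.zeros((N, N))
    for i in range(N):
        A[i, i] = -(a_plus[i] + a_minus[i]) / h**2
        if i > 0:
            A[i, i - 1] = a_minus[i] / h**2
        if i < N - 1:
            A[i, i + 1] = a_plus[i] / h**2
    return A

A = degenerate_matrix(5, 0.5)   # alpha = 0.5: weakly degenerate example
```

Evaluating the coefficient at half-points keeps the matrix symmetric and avoids dividing by $a(0) = 0$ at the degenerate boundary.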

Numerical Experiments and Results
In this section, we discuss two cases. In the case where we have a priori knowledge $u_b$ of $u_0^{exact}$ at each point of the analysis grid, we apply the Tikhonov approach to solve the minimization problem (8). The data $u_b$ are assumed to be corrupted by measurement errors, which we will refer to as noise. In particular, we suppose that $u_b = u_0^{exact} + \sigma$. Here, we study the impact of err (err $= \|\sigma\|_{2}$) on the reconstruction of the solution.
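A minimal sketch of how such noisy background data and its error level can be generated (the exact state $\sin(\pi x)$ and the noise level 0.05 are arbitrary illustrative choices, not the paper's test case):

```python
import numpy as np

rng = np.random.default_rng(1)
x = np.linspace(0.0, 1.0, 101)
u0_exact = np.sin(np.pi * x)                # hypothetical exact initial state

sigma = 0.05 * rng.standard_normal(x.size)  # measurement noise
u_b = u0_exact + sigma                      # corrupted background data
err = np.linalg.norm(sigma)                 # err = ||sigma||_2
```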
In the case where we have only partial knowledge of the values of $u_b$ (for example, 20%), we first apply the hybrid approach to rebuild the initial state and then compare the hybrid and Tikhonov approaches.
The tests have been performed in MATLAB R2012a on a Windows 7 platform.
The gradient of the discrete functional is deduced from the differentiability and continuity of the functional $J$ and is expressed in terms of $\tilde{\psi}$, the solution of (76). The main steps of the descent method at each iteration are the following: (i) compute $u$, the solution of (6), with initial condition $u_0$; (ii) compute $\tilde{\psi}$, the solution of (76), using $u$ in its right-hand side; (iii) update $u_0$ in the direction opposite to the gradient.
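The descent iteration above can be sketched in a toy setting where the observation operator is the identity; then the gradient of the regularized functional is available in closed form and the forward and adjoint PDE solves reduce to simple differences. All data below are hypothetical:

```python
import numpy as np

u_obs = np.array([1.0, -2.0, 0.5])   # hypothetical sensor data
u_b = np.zeros(3)                    # hypothetical background state
eps, lr = 1e-2, 0.5                  # regularization coefficient, step size

u0 = np.zeros(3)                     # initial guess
for _ in range(200):
    grad = (u0 - u_obs) + eps * (u0 - u_b)   # misfit term + Tikhonov term
    u0 = u0 - lr * grad                      # descent update

# The iterates converge to (u_obs + eps * u_b) / (1 + eps).
```

In the actual problem, the first term of `grad` is replaced by the adjoint-based gradient computed from (6) and (76).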
In all figures, the observed function is drawn in red and the reconstructed function in blue.
Let $N$ be the number of points in space and $M$ the number of points in time.

The first approach is based on regularization, applied for the first time to solve a degenerate inverse problem. The problem thus consists of minimizing a functional of the form
$$J_\varepsilon(u_0) = \frac{1}{2}\, \| C\,u(u_0) - u^{obs} \|^2_{L^2} + \frac{\varepsilon}{2}\, \| u_0 - u_b \|^2_{L^2(\Omega)}.$$

Figure 1: Initial temperature. This figure shows that we can rebuild the initial state.