Time- or Space-Dependent Coefficient Recovery in Parabolic Partial Differential Equation for Sensor Array in the Biological Computing

Guanglu Zhou,1 Boying Wu,2 Wen Ji,3 Seungmin Rho,4 and Bo-Wei Chen

1 Department of Computer Science, Harbin Institute of Technology at Weihai, Weihai, Shandong 264209, China
2 Department of Mathematics, Harbin Institute of Technology, Harbin, Heilongjiang 150001, China
3 Beijing Key Laboratory of Mobile Computing & Pervasive Device, Institute of Computing Technology, Beijing, China
4 Department of Multimedia, Sungkyul University, Anyang, Gyeonggi, Republic of Korea

Mathematical Problems in Engineering, vol. 2015, Article ID 573932, DOI 10.1155/2015/573932. Received 3 November 2014; revised 18 December 2014; accepted 19 December 2014.

Copyright © 2015 Guanglu Zhou et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

This study presents numerical schemes for solving a parabolic partial differential equation with a time- or space-dependent coefficient subject to an extra measurement. Through the extra measurement, the inverse problem is transformed into an equivalent nonlinear problem, which is much simpler to handle. By the variational iteration method, we obtain the exact solution and the unknown coefficients. The results of the numerical and stability experiments show that the variational iteration method is well suited to these inverse problems.

1. Introduction

Various inverse problems for parabolic partial differential equations are widely encountered in modeling physical phenomena. There are three kinds of inverse parameter problems for parabolic partial differential equations: determining an unknown time-dependent coefficient, determining an unknown space-dependent coefficient, and determining an unknown source term.

The aim of this paper is to find the pair $(u(x,t), a(x,t))$ in the parabolic equation
(1) $u_t = (a(x,t)u_x)_x + f(u,x,t)$, $0 < x < L$, $0 < t < T$,
where $a(x,t)$ is a function of $x$ only or of $t$ only.

When $a(x,t) = a(t)$, the initial and boundary conditions and the extra measurement for (1) are as follows:
(2) $u(x,0) = f(x)$, $0 \le x \le 1$; $u(0,t) = g_0(t)$, $0 \le t \le T$; $u(1,t) = g_1(t)$, $0 \le t \le T$; $u(x^*,t) = E(t)$, $0 \le t \le T$, $x^* \in (0,1)$,
where $f(u,x,t) = \omega(t)u(x,t) + \phi(x,t)$, and $\phi(x,t)$, $\omega(t)$, $f(x)$, $g_0(t)$, $g_1(t)$, and $E(t)$ are known functions. This equation is widely used to determine the unknown properties of a region by measuring data only on its boundary or at a specified location $x^*$ in the domain. These unknown properties, such as the conductivity of a medium, are important to the physical process but usually cannot be measured directly or are very expensive to measure. The existence and uniqueness of the solution to this problem are discussed in [4, 5].

There are various numerical methods for solving (1) and (2) or similar problems; we now give a quick review of previous work related to our problem. Cannon reduced the problem to a nonlinear integral equation for the coefficient $a(t)$. This approach works well for a parabolic equation in one space variable but does not extend easily to higher-dimensional problems, because it depends on the explicit form of the fundamental solution of the heat operator. Cannon and Yin studied this problem from a different point of view: they first transformed a large class of parabolic inverse problems into a nonclassical parabolic equation whose coefficients involve trace-type functionals of the solution and its derivatives, subject to some initial and boundary conditions; for the resulting nonclassical problem they introduced a variational form by defining a new function, and then applied both continuous and discrete Galerkin procedures. A backward Euler finite difference scheme has also been presented, shown to be stable in the maximum norm, and equipped with an error estimate. Several first- and second-order finite difference schemes have been developed for the nonclassical problem obtained by applying the above transformation technique to problem (1) and (2). A method based on a semianalytical approach and a pseudospectral Legendre method have likewise been proposed. An unconditionally stable, efficient fourth-order algorithm based on a functional transformation, the Padé approximation, and Richardson extrapolation has been proposed to compute the main function and the unknown time-dependent coefficient in (1), and Chebyshev cardinal functions have been employed to recover the unknown coefficient. These schemes are efficient and easy to implement, but their convergence order is low.

When $a(x,t) = a(x)$, the initial and boundary conditions and the extra measurement for (1) are as follows:
(3) $u(x,0) = u_0(x)$, $0 \le x \le L$,
(4) $u_x|_{x=0} = 0$, $0 \le t \le T$,
(5) $u(1,t) = g(t)$, $0 \le t \le T$,
(6) $u(x,T) = u_1(x)$, $0 \le x \le L$,
where $u_0(x)$, $u_1(x)$, $f(u,x,t) = f(x,t)$, and $g(t)$ are known functions. It is widely known that this model describes heat conduction in a given inhomogeneous medium with an input source $f(x,t)$, the coefficient $a(x)$ representing a heat conduction property, namely the heat capacity. There are various numerical methods for solving (1) and (3)–(6) or similar problems. Deng et al. applied a gradient iteration algorithm to obtain approximate solutions. The Kansa method has been used to solve problem (1) and (3)–(6), together with stability experiments. An iterative fixed-point projection method has also been given for this problem, and there are other methods besides.

Although there are many methods for these inverse problems, they give only approximate solutions. It is therefore worth noting that the variational iteration method can give the exact solution.

He first proposed the variational iteration method (VIM) in 1998 and developed it rapidly in 2006 and 2007. Based on the use of Lagrange multipliers to identify optimal values of parameters in a functional, VIM gives rapidly convergent successive approximations of the exact solution if such a solution exists. There are three standard variational iteration algorithms, called VIM-I, VIM-II, and VIM-III, for solving differential-difference equations, integrodifferential equations, fractional differential equations, and fractal differential equations. These three forms of VIM have been shown by many authors to be a powerful mathematical tool for addressing various kinds of linear and nonlinear problems. The reliability of the method and the reduction in computational work give it wide applicability. In addition, some reviews can be found in He [24, 33, 34]. Since applications of VIM to inverse problems are very few, we use VIM-I here to recover the unknown coefficients; furthermore, VIM gives the exact solution of this problem, so the variational iteration method is well suited to it.

The rest of the paper is organized in four sections including this Introduction. Section 2 gives the detailed procedure and proofs for recovering the unknown coefficients by VIM. In Section 3, numerical examples and a stability experiment are presented to demonstrate the accuracy of VIM. Finally, a brief conclusion ends the paper.

2. Application of He’s Variational Iteration Method

In this section, we apply He’s variational iteration method (VIM) to recover time- or space-dependent coefficients. A detailed introduction to VIM can be found in [24, 33, 34].

2.1. Recovering Time-Dependent Coefficients

Using (1) and (2) at the measurement point $x = x^*$, we obtain
(7) $E'(t) = u_t(x^*,t) = a(t)u_{xx}(x^*,t) + \omega(t)E(t) + \phi(x^*,t)$.

Assuming that $u_{xx}(x^*,t) \neq 0$, we have
(8) $a(t) = \dfrac{E'(t) - \omega(t)E(t) - \phi(x^*,t)}{u_{xx}(x^*,t)}$.
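The recovery formula (8) can be sanity-checked symbolically by manufacturing a solution: pick $u$ and $a(t)$, derive the forcing $\phi$ so that the PDE holds, and confirm that (8) returns the chosen coefficient. A minimal sympy sketch; the particular choices $u = e^t\cos x$, $a(t) = 2+\cos t$, $\omega = 0$, $x^* = 4/9$ are illustrative assumptions, not part of the derivation:

```python
import sympy as sp

x, t = sp.symbols('x t')
xs = sp.Rational(4, 9)              # fixed measurement point x* in (0, 1)

# manufactured data: pick u and a(t), derive phi so the PDE holds
u = sp.exp(t) * sp.cos(x)           # assumed exact solution
a = 2 + sp.cos(t)                   # coefficient to be recovered
omega = sp.Integer(0)
phi = sp.diff(u, t) - a * sp.diff(u, x, 2) - omega * u   # from u_t = a u_xx + omega u + phi

E = u.subs(x, xs)                   # extra measurement E(t) = u(x*, t)
# recovery formula (8): a(t) = (E'(t) - omega E - phi(x*, t)) / u_xx(x*, t)
a_rec = (sp.diff(E, t) - omega * E - phi.subs(x, xs)) / sp.diff(u, x, 2).subs(x, xs)

print(sp.simplify(a_rec - a))       # 0: the coefficient is recovered exactly
```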

Therefore the inverse problem (1) and (2) is equivalent to the following problem:
(9) $u_t = \dfrac{E'(t) - \omega(t)E(t) - \phi(x^*,t)}{u_{xx}(x^*,t)}\,u_{xx} + \omega(t)u + \phi(x,t)$, $0 < x < 1$, $0 < t < T$,
(10) $u(x,0) = f(x)$, $0 \le x \le 1$,
(11) $u(0,t) = g_0(t)$, $0 \le t \le T$,
(12) $u(1,t) = g_1(t)$, $0 \le t \le T$.

From (9),
(13) $-u_t + \dfrac{E'(t) - \omega(t)E(t) - \phi(x^*,t)}{u_{xx}(x^*,t)}\,u_{xx} + \omega(t)u + \phi(x,t) = 0$.

Constructing a correction functional for the above equation,
(14) $u_{n+1}(x,t) = u_n(x,t) + \displaystyle\int_{x^*}^{x} \lambda(y)\left[u_{n,yy}(y,t) + \dfrac{\tilde u_{n,xx}(x^*,t)\left(-\tilde u_{n,t}(y,t) + \omega(t)\tilde u_n(y,t) + \phi(y,t)\right)}{E'(t) - \omega(t)E(t) - \phi(x^*,t)}\right]dy$,
where $\tilde u_n$ denotes a restricted variation.

In the following, we determine the Lagrange multiplier $\lambda$ via variational theory:
(15) $\delta u_{n+1}(x,t) = \delta u_n(x,t) + \delta\displaystyle\int_{x^*}^{x} \lambda(y)\left[u_{n,yy}(y,t) + \dfrac{\tilde u_{n,xx}(x^*,t)\left(-\tilde u_{n,t}(y,t) + \omega(t)\tilde u_n(y,t) + \phi(y,t)\right)}{E'(t) - \omega(t)E(t) - \phi(x^*,t)}\right]dy$.

Applying $\delta\tilde u_n = 0$ and integrating by parts, then
(16) $\delta u_{n+1}(x,t) = \delta u_n(x,t)\left(1 - \lambda'(x)\right) + \lambda(x)\,\delta u_{n,x}(x,t) + \displaystyle\int_{x^*}^{x} \lambda''(y)\,\delta u_n(y,t)\,dy = 0$,
so
(17) $\lambda''(y) = 0$, $1 - \lambda'(x) = 0$, $\lambda(x) = 0$.

Thus $\lambda(y) = y - x$; this gives the iterative formula
(18) $u_{n+1}(x,t) = u_n(x,t) + \displaystyle\int_{x^*}^{x} (y - x)\left[u_{n,yy}(y,t) + \dfrac{u_{n,xx}(x^*,t)\left(-u_{n,t}(y,t) + \omega(t)u_n(y,t) + \phi(y,t)\right)}{E'(t) - \omega(t)E(t) - \phi(x^*,t)}\right]dy$.
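A useful property of the iteration (18) is that the exact solution of (9) is a fixed point: when $u_n$ already solves the equation, the bracket in the integrand vanishes identically. The following sympy sketch checks this with manufactured data (the choices $u = e^t\cos x$, $a = 2+\cos t$, $\omega = 0$, $x^* = 4/9$ are illustrative assumptions):

```python
import sympy as sp

x, t, y = sp.symbols('x t y')
xs = sp.Rational(4, 9)                        # measurement point x*

u = sp.exp(t) * sp.cos(x)                     # manufactured exact solution (assumption)
a = 2 + sp.cos(t)
omega = sp.Integer(0)
phi = sp.diff(u, t) - a * sp.diff(u, x, 2)    # forcing consistent with the PDE
E = u.subs(x, xs)

# the quotient h_n(t) appearing in the bracket of (18), with u_n = u
h = (sp.diff(E, t) - omega * E - phi.subs(x, xs)) / sp.diff(u, x, 2).subs(x, xs)

# bracket of the iteration (18), written at the dummy point y
bracket = (sp.diff(u, x, 2) + (-sp.diff(u, t) + omega * u + phi) / h).subs(x, y)

print(sp.simplify(bracket))                   # 0: exact solutions are fixed points
```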

Now, take an initial guess $u_0(x,t)$ with $\partial_{xx}u_0(x^*,t) \approx \partial_{xx}u(x^*,t)$. By (18), we can obtain the $n$th-order approximate solution $u_n(x,t)$ of (9). Putting
(19) $h_n(t) = \dfrac{E'(t) - \omega(t)E(t) - \phi(x^*,t)}{u_{n,xx}(x^*,t)}$,
then
(20) $u_{n+1}(x,t) = u_n(x,t) + \displaystyle\int_{x^*}^{x} (y - x)\left[u_{n,yy}(y,t) + \dfrac{-u_{n,t}(y,t) + \omega(t)u_n(y,t) + \phi(y,t)}{h_n(t)}\right]dy$,
and its derivative with respect to $x$ is
(21) $\partial_x u_{n+1}(x,t) = \partial_x u_n(x,t) - \displaystyle\int_{x^*}^{x}\left[u_{n,yy}(y,t) + \dfrac{-u_{n,t}(y,t) + \omega(t)u_n(y,t) + \phi(y,t)}{h_n(t)}\right]dy$,
and differentiating once more with respect to $x$,
(22) $\partial_{xx} u_{n+1}(x,t) = \partial_{xx} u_n(x,t) - \left[u_{n,xx}(x,t) + \dfrac{-u_{n,t}(x,t) + \omega(t)u_n(x,t) + \phi(x,t)}{h_n(t)}\right]$.

Inserting $x = x^*$, we obtain
(23) $\partial_{xx} u_{n+1}(x^*,t) = \partial_{xx} u_n(x^*,t) - \left[u_{n,xx}(x^*,t) + \dfrac{-u_{n,t}(x^*,t) + \omega(t)u_n(x^*,t) + \phi(x^*,t)}{h_n(t)}\right]$.

From (18), one can infer that
(24) $u_n(x^*,t) = u_{n-1}(x^*,t) = \cdots = u_0(x^*,t) = E(t)$, $\quad u_{n,t}(x^*,t) = u_{n-1,t}(x^*,t) = \cdots = u_{0,t}(x^*,t) = E'(t)$,
so, multiplying the bracket in (23) by $h_n(t)$,
(25) $-u_{n,t}(x^*,t) + h_n(t)u_{n,xx}(x^*,t) + \omega(t)u_n(x^*,t) + \phi(x^*,t) = -E'(t) + \left(E'(t) - \omega(t)E(t) - \phi(x^*,t)\right) + \omega(t)E(t) + \phi(x^*,t) = 0$,
which leads to
(26) $\partial_{xx}u_{n+1}(x^*,t) = \partial_{xx}u_n(x^*,t)$,
so as to deduce
(27) $\partial_{xx}u_n(x^*,t) = \partial_{xx}u_{n-1}(x^*,t) = \cdots = \partial_{xx}u_0(x^*,t) \approx \partial_{xx}u(x^*,t)$.

Therefore, by (8), the approximation $a_n(t)$ to $a(t)$ can be expressed as
(28) $a_n(t) = \dfrac{E'(t) - \omega(t)E(t) - \phi(x^*,t)}{u_{n,xx}(x^*,t)}$.

2.2. Recovering Space-Dependent Coefficients

Using (1) and (3)–(6), we obtain
(29) $u_t = (a(x)u_x)_x + f(x,t)$;
then
(30) $u_t(x,t) - f(x,t) = (a(x)u_x)_x$;
putting $t = T$ and integrating with respect to $x$ from $0$ to $x$,
(31) $\displaystyle\int_0^x \left[u_t(w,T) - f(w,T)\right]dw = a(x)u_x(x,T) - a(0)u_x(0,T)$;
applying condition (4),
(32) $u_x(0,T) = 0$;
thus
(33) $\displaystyle\int_0^x \left[u_t(w,T) - f(w,T)\right]dw = a(x)u_x(x,T)$.

Assuming that $u_x(x,T) = u_1'(x) \neq 0$, we have
(34) $a(x) = \dfrac{\displaystyle\int_0^x \left[u_t(w,T) - f(w,T)\right]dw}{u_1'(x)}$.
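In practice $u_t(\cdot,T)$ is often available only as sampled data, so the integral in (34) must be evaluated by quadrature. A minimal numpy sketch with synthetic data from the manufactured choice $u = e^t x^3$, $f = 0$ (an assumption for illustration, for which (34) gives $a(x) = x^2/12$), using the cumulative trapezoid rule:

```python
import numpy as np

T = 1.0
xg = np.linspace(0.01, 1.0, 500)          # avoid x = 0, where u_x(x, T) vanishes

# synthetic data from the manufactured solution u(x, t) = e^t x^3, f = 0
ut_T = np.exp(T) * xg**3                  # u_t(x, T)
u1p = 3.0 * np.exp(T) * xg**2             # u_1'(x) = u_x(x, T)

# a(x) = (integral of u_t(w, T) - f(w, T) over [0, x]) / u_1'(x), trapezoid rule
integral = np.concatenate(
    ([0.0], np.cumsum(0.5 * (ut_T[1:] + ut_T[:-1]) * np.diff(xg))))
integral += np.exp(T) * xg[0]**4 / 4.0    # exact contribution of the segment [0, x_0]
a_num = integral / u1p

print(np.max(np.abs(a_num - xg**2 / 12.0)))   # small discretization error
```

The quotient structure of (34) means no differentiation of noisy data is needed for the numerator, which is one reason this recovery formula behaves stably under perturbations of the data.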

Therefore, the inverse problem (1) and (3)–(6) is equivalent to the following problem:
(35) $u_t = \left(\dfrac{\displaystyle\int_0^x \left[u_t(w,T) - f(w,T)\right]dw}{u_1'(x)}\,u_x\right)_x + f(x,t)$, $0 < x < L$, $0 < t < T$,
with the initial condition
(36) $u(x,0) = u_0(x)$, $0 \le x \le L$,
and boundary conditions
(37) $u_x(x,t)|_{x=0} = 0$, $\quad u(1,t) = g(t)$, $0 \le t \le T$.

Next, we obtain approximate solutions of (35)–(37) by the variational iteration method. Applying variational theory, we construct an iteration formula.

From (35),
(38) $u_t - \left(\dfrac{\displaystyle\int_0^x \left[u_t(w,T) - f(w,T)\right]dw}{u_1'(x)}\,u_x\right)_x - f(x,t) = 0$.

Constructing a correction functional for the above equation,
(39) $u_{n+1}(x,t) = u_n(x,t) + \displaystyle\int_T^t \lambda(s)\left[u_{n,s}(x,s) - \dfrac{\partial}{\partial x}\left(\dfrac{\int_0^x \left[\tilde u_{n,t}(w,T) - f(w,T)\right]dw}{u_1'(x)}\,\tilde u_{n,x}(x,s)\right) - f(x,s)\right]ds$.

In the following, we determine the Lagrange multiplier $\lambda$ via variational theory:
(40) $\delta u_{n+1}(x,t) = \delta u_n(x,t) + \delta\displaystyle\int_T^t \lambda(s)\left[u_{n,s}(x,s) - \dfrac{\partial}{\partial x}\left(\dfrac{\int_0^x \left[\tilde u_{n,t}(w,T) - f(w,T)\right]dw}{u_1'(x)}\,\tilde u_{n,x}(x,s)\right) - f(x,s)\right]ds$.

Applying $\delta\tilde u_n = 0$ and integrating by parts, then
(41) $\delta u_{n+1}(x,t) = \delta u_n(x,t)\left(1 + \lambda(t)\right) - \displaystyle\int_T^t \lambda'(s)\,\delta u_n(x,s)\,ds = 0$,
so
(42) $1 + \lambda(t) = 0$, $\lambda'(s) = 0$.

Thus $\lambda(s) = -1$; this gives the iterative formula
(43) $u_{n+1}(x,t) = u_n(x,t) - \displaystyle\int_T^t \left[u_{n,s}(x,s) - \dfrac{\partial}{\partial x}\left(\dfrac{\int_0^x \left[u_{n,t}(w,T) - f(w,T)\right]dw}{u_1'(x)}\,u_{n,x}(x,s)\right) - f(x,s)\right]ds$.

Now, take an initial guess $u_0(x,t)$ with $\partial_x u_0(x,T) \approx u_1'(x)$. By (43), we can obtain the $n$th-order approximate solution $u_n(x,t)$ of (35).

If $u_{n,t}(x,T) \approx u_t(x,T)$, then we can approximate $a(x)$ by
(44) $a_n(x) = \dfrac{\displaystyle\int_0^x \left[u_{n,t}(w,T) - f(w,T)\right]dw}{u_1'(x)}$.

Now, we prove that $u_{n,t}(x,T) \approx u_t(x,T)$.

By (43), $u_{n+1}(x,T) = u_n(x,T)$, and hence $\partial_x u_{n+1}(x,T) = \partial_x u_n(x,T)$, which leads to
(45) $\partial_x u_n(x,T) = \partial_x u_{n-1}(x,T) = \cdots = \partial_x u_0(x,T) \approx u_1'(x)$,
so as to deduce
(46) $u_{n,t}(x,T) \approx u_t(x,T)$.

3. Numerical Examples

Example 1.

Consider a special case of (1) and (2) from [9, 13]:
(47) $\omega(t) = 0$, $\phi(x,t) = 0$, $f(x) = e^{x/2}$, $g_0(t) = \dfrac{1+2t^3}{1+t^3} + \sin\dfrac{t}{2}$, $g_1(t) = e^{1/2}\left(\dfrac{1+2t^3}{1+t^3} + \sin\dfrac{t}{2}\right)$, $E(t) = 1.13315\left(\dfrac{1+2t^3}{1+t^3} + \sin\dfrac{t}{2}\right)$,
with $x^* = 0.25$ (so that $e^{x^*/2} = e^{1/8} \approx 1.13315$), for which the exact solution is
(48) $u(x,t) = e^{x/2}\left(\dfrac{1+2t^3}{1+t^3} + \sin\dfrac{t}{2}\right)$, $\quad a(t) = \dfrac{2\left(6t^2 + (1+t^3)^2\cos(t/2)\right)}{(1+t^3)\left((1+2t^3) + (1+t^3)\sin(t/2)\right)}$.

Beginning with
(49) $u_0(x,t) = f(x)\left(a\,g_0(t) + b\,g_1(t)\right)$,
where $a, b$ are unknown parameters to be determined, one can obtain the first-order approximation $u_1(x,t)$ from (18), and we find
(50) $u_i(x,t) = u_1(x,t)$, $i = 2, 3, \ldots$.

Incorporating the conditions $u(x,0) = f(x)$, $u(0,t) = g_0(t)$, $u(1,t) = g_1(t)$ of Example 1 into $u_1(x,t)$, the unknown parameters $a, b$ can be determined. Therefore, the first-order approximation
(51) $u_1(x,t) = e^{x/2}\left(\dfrac{1+2t^3}{1+t^3} + \sin\dfrac{t}{2}\right)$
is obtained, which is the exact solution of Example 1. From (28), we have
(52) $a_1(t) = \dfrac{2\left(6t^2 + (1+t^3)^2\cos(t/2)\right)}{(1+t^3)\left((1+2t^3) + (1+t^3)\sin(t/2)\right)}$,
which equals the exact $a(t)$ of Example 1.
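The stated exact pair of Example 1 can be verified symbolically: the pair must satisfy $u_t = a(t)u_{xx}$ (since $\omega = \phi = 0$), and the constant in $E(t)$ must be $e^{1/8}$. A short sympy check:

```python
import sympy as sp

x, t = sp.symbols('x t')

# stated exact solution and coefficient of Example 1
u = sp.exp(x / 2) * ((1 + 2 * t**3) / (1 + t**3) + sp.sin(t / 2))
a = 2 * (6 * t**2 + (1 + t**3)**2 * sp.cos(t / 2)) / (
    (1 + t**3) * ((1 + 2 * t**3) + (1 + t**3) * sp.sin(t / 2)))

# the pair must satisfy u_t = a(t) u_xx (omega = phi = 0 here)
residual = sp.diff(u, t) - a * sp.diff(u, x, 2)
print(sp.simplify(residual))                      # 0

# the measurement constant E(t)/g_0(t) is e^{1/8} = 1.13315...
print(sp.N(sp.exp(sp.Rational(1, 8)), 6))         # 1.13315
```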

Example 2.

Find $a(t)$ in (1) and (2) with [9, 13]:
(53) $\omega(t) = 0$, $\phi(x,t) = (3+\cos t)e^t\cos x$, $f(x) = \cos x$, $g_0(t) = e^t$, $g_1(t) = (\cos 1)\,e^t$, $E(t) = \cos\left(\dfrac{4}{9}\right)e^t$,
where $x^* = 4/9$. The true solution is $u(x,t) = e^t\cos x$, and $a(t) = 2 + \cos t$.

Beginning with
(54) $u_0(x,t) = f(x)\left(a\,g_0(t) + b\,g_1(t)\right)$,
where $a, b$ are unknown parameters to be determined, one can obtain the first-order approximation $u_1(x,t)$ from (18), and we find
(55) $u_i(x,t) = u_1(x,t)$, $i = 2, 3, \ldots$.

Incorporating the conditions $u(x,0) = f(x)$, $u(0,t) = g_0(t)$, $u(1,t) = g_1(t)$ of Example 2 into $u_1(x,t)$, the unknown parameters $a, b$ can be determined. Therefore, the first-order approximation $u_1(x,t) = e^t\cos x$ is obtained, which is the exact solution of Example 2. From (28), we have $a_1(t) = 2 + \cos t$, which equals the exact $a(t)$ of Example 2.
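Example 2 can likewise be checked symbolically. Note that consistency of the data requires $E(t) = u(x^*,t) = \cos(4/9)\,e^t$; the sketch below verifies both the PDE residual and this measurement identity:

```python
import sympy as sp

x, t = sp.symbols('x t')

u = sp.exp(t) * sp.cos(x)                 # stated exact solution of Example 2
a = 2 + sp.cos(t)                         # stated coefficient
phi = (3 + sp.cos(t)) * sp.exp(t) * sp.cos(x)

residual = sp.diff(u, t) - a * sp.diff(u, x, 2) - phi   # omega = 0
print(sp.simplify(residual))              # 0

# consistency of the measurement: E(t) = u(4/9, t) = cos(4/9) e^t
E = u.subs(x, sp.Rational(4, 9))
print(sp.simplify(E - sp.cos(sp.Rational(4, 9)) * sp.exp(t)))   # 0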

Example 3.

We solve problem (1) and (2) with [9, 13]:
(56) $\omega(t) = -t^2 - 1$, $\phi(x,t) = (2tx + t^2)\,e^{(t^2+1)x}$, $f(x) = e^x$, $g_0(t) = 1$, $g_1(t) = e^{t^2+1}$, $E(t) = e^{2(t^2+1)/7}$,
whose true solution is $u(x,t) = e^{x(t^2+1)}$, with $a(t) = 1/(1+t^2)^2$ and $x^* = 2/7$.

Beginning with
(57) $u_0(x,t) = f(x)\left(a\,g_0(t) + b\,g_1(t)\right)$,
where $a, b$ are unknown parameters to be determined, one can obtain the first-order approximation $u_1(x,t)$ from (18), and we find
(58) $u_i(x,t) = u_1(x,t)$, $i = 2, 3, \ldots$.

Incorporating the conditions $u(x,0) = f(x)$, $u(0,t) = g_0(t)$, $u(1,t) = g_1(t)$ of Example 3 into $u_1(x,t)$, the unknown parameters $a, b$ can be determined. Therefore, the first-order approximation $u_1(x,t) = e^{x(t^2+1)}$ is obtained, which is the exact solution of Example 3. From (28), we have $a_1(t) = 1/(1+t^2)^2$, which equals the exact $a(t)$ of Example 3.
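For Example 3, the full equation $u_t = a(t)u_{xx} + \omega(t)u + \phi$ is in play; a sympy check confirms that the stated data and solution are mutually consistent (the exponent form $e^{(t^2+1)x}$ of $\phi$ is reconstructed from the stated solution):

```python
import sympy as sp

x, t = sp.symbols('x t')

u = sp.exp(x * (t**2 + 1))                # stated exact solution of Example 3
a = 1 / (1 + t**2)**2
omega = -t**2 - 1
phi = (2 * t * x + t**2) * sp.exp((t**2 + 1) * x)

residual = sp.diff(u, t) - a * sp.diff(u, x, 2) - omega * u - phi
print(sp.simplify(residual))              # 0
```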

The above three examples concern time-dependent coefficients; in the following, we turn to space-dependent coefficient examples.

Applying the above VIM, we begin with $u_0(x,t) = u_0(x)\left(a\,g(t) + b\right)$, where $a, b$ are unknown parameters to be determined. Incorporating the initial and boundary conditions (36) and (37) into $u_0(x,t)$, the unknown parameters $a, b$ can be determined. According to (43), one can obtain the first-order approximation $u_1(x,t)$. Here, $T = L = 1$.

Example 4.

We take the boundary conditions, initial condition, and additional specification functions in (3)–(6) as in [14, 15]:
(59) $g(t) = e^t$, $u_0(x) = x^3$, $u_1(x) = e^T x^3$, $f(x,t) = 0$,
with the exact solution
(60) $u(x,t) = e^t x^3$
and the coefficient to be identified
(61) $a(x) = \dfrac{x^2}{12}$.

Therefore, the first-order approximation $u_1(x,t) = \left(a + b + a^2(e^t - 1) + abt\right)x^3$ is obtained. We determine $a = 1$, $b = 0$, so $u_1(x,t) = e^t x^3$, which is the exact solution of Example 4. From (44), we have $a_1(x) = x^2/12$, which equals the exact $a(x)$ of Example 4.
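The identified coefficient of Example 4 follows directly from (34); a sympy check computes $a(x)$ from the stated solution and confirms both the formula and the PDE:

```python
import sympy as sp

x, t, w = sp.symbols('x t w', positive=True)
T = sp.Integer(1)

u = sp.exp(t) * x**3                      # stated exact solution of Example 4

# a(x) from (34): integral of u_t(w, T) (f = 0) divided by u_x(x, T)
a = sp.integrate(sp.diff(u, t).subs({x: w, t: T}), (w, 0, x)) / sp.diff(u, x).subs(t, T)
print(sp.simplify(a))                     # x**2/12

# and the pair satisfies u_t = (a u_x)_x
residual = sp.diff(u, t) - sp.diff(a * sp.diff(u, x), x)
print(sp.simplify(residual))              # 0
```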

Example 5.

Find $a(x)$ in (1) and (3)–(6) with
(62) $g(t) = e^{t+1}$, $u_0(x) = x^2 e^x$, $u_1(x) = e^{x+T}x^2$, $f(x,t) = 0$.
The true solution is $u(x,t) = e^{x+t}x^2$, and $a(x) = \dfrac{e^x(x^2 - 2x + 2) - 2}{x(x+2)e^x}$.

From (43), the first-order approximate solution is
(63) $u_1(x,t) = e^x\left(a\,e^{t+1} + a\,e\,(a e - 1)(e^t - e) + b\left(1 + a e(t-1)\right)\right)x^2$.
Incorporating the initial conditions, we determine $a = 1/e$, $b = 0$. Therefore, the first-order approximation $u_1(x,t) = e^{x+t}x^2$ is obtained, which is the exact solution of Example 5. From (44), we have $a_1(x) = \dfrac{e^x(x^2-2x+2)-2}{x(x+2)e^x}$, which equals the exact $a(x)$ of Example 5.
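The closed-form coefficient of Example 5 can be reproduced from (34) symbolically; the integral $\int_0^x w^2 e^{w+T}\,dw$ is what produces the $e^x(x^2-2x+2)-2$ numerator:

```python
import sympy as sp

x, t, w = sp.symbols('x t w', positive=True)
T = sp.Integer(1)

u = sp.exp(x + t) * x**2                  # stated exact solution of Example 5
a_stated = (sp.exp(x) * (x**2 - 2 * x + 2) - 2) / (x * (x + 2) * sp.exp(x))

# a(x) from (34) with f = 0
a = sp.integrate(sp.diff(u, t).subs({x: w, t: T}), (w, 0, x)) / sp.diff(u, x).subs(t, T)
print(sp.simplify(a - a_stated))          # 0
```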

Example 6.

We solve problem (1) and (3)–(6) with
(64) $g(t) = e^t$, $u_0(x) = x^3$, $u_1(x) = e^T x^3$, $f(x,t) = x^3 e^t - 9x^2 e^t$,
whose true solution is $u(x,t) = x^3 e^t$, with $a(x) = x$.

We determine $a = 1$, $b = 0$. Therefore, the first-order approximation $u_1(x,t) = x^2\left((b + a e^t)x + \left((a-1)(e^t - e) + b(t-1)\right)\left(9 + (a-1)x\right)\right) = e^t x^3$ is obtained, which is the exact solution of Example 6. From (44), we have $a_1(x) = x$, which equals the exact $a(x)$ of Example 6.
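Here the source term $f$ is nonzero, so the check is against $u_t = (a(x)u_x)_x + f$; a short sympy verification of the stated data of Example 6:

```python
import sympy as sp

x, t = sp.symbols('x t')

u = x**3 * sp.exp(t)                      # stated exact solution of Example 6
a = x                                     # stated coefficient
f = x**3 * sp.exp(t) - 9 * x**2 * sp.exp(t)

residual = sp.diff(u, t) - sp.diff(a * sp.diff(u, x), x) - f
print(sp.simplify(residual))              # 0
```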

To demonstrate the stability of the method, we perturb the additional specification data $u_1(x)$ as
(65) $u_1^{\delta}(x) = u_1(x)\left(1 + \delta \times \mathrm{random}(x)\right)$
with $\delta = 1\%$; the reconstruction results remain stable; see Figure 1.

Figure 1: Numerical solutions $a(x)$ of Example 6 with $\mathrm{random}(x) \in [-1, 1]$.
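The perturbation experiment can be sketched numerically as follows. This is a simplified sketch using the Example 6 data: for brevity we perturb $u_1'(x)$ directly with 1% multiplicative noise rather than differentiating a noisy $u_1$ (which would require regularization); since the perturbation enters (44) only through the denominator, the recovered coefficient inherits an error of order $\delta$:

```python
import numpy as np

rng = np.random.default_rng(0)            # fixed seed for reproducibility
T = 1.0
xg = np.linspace(0.1, 1.0, 200)           # stay away from x = 0, where u_x(x, T) = 0

# Example 6 data: u = x^3 e^t, a(x) = x
ut_T = np.exp(T) * xg**3                  # u_t(x, T), assumed known exactly here
f_T = xg**3 * np.exp(T) - 9 * xg**2 * np.exp(T)
u1p = 3.0 * np.exp(T) * xg**2             # u_1'(x)

# perturb the additional data: u_1^delta = u_1 (1 + delta * random), delta = 1%
delta = 0.01
u1p_noisy = u1p * (1.0 + delta * rng.uniform(-1.0, 1.0, xg.size))

# numerator of (44) by the cumulative trapezoid rule
integrand = ut_T - f_T                    # equals 9 x^2 e^T for this example
integral = np.concatenate(
    ([0.0], np.cumsum(0.5 * (integrand[1:] + integrand[:-1]) * np.diff(xg))))
integral += 3.0 * np.exp(T) * xg[0]**3    # exact contribution of the segment [0, x_0]

a_noisy = integral / u1p_noisy
print(np.max(np.abs(a_noisy - xg)))       # stays O(delta): the quotient is stable here
```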

4. Conclusion

VIM has been applied to a wide variety of equations but rarely to inverse problems. Here we develop a new application area for VIM: we apply it to the inverse problems of recovering time- or space-dependent coefficients in a parabolic partial differential equation and obtain the exact solution. The numerical results fully demonstrate the effectiveness of VIM for these inverse problems.

Appendix

To illustrate the basic idea behind He’s VIM, we consider the following general differential equation:
(A.1) $Lu(x,t) + Ru(x,t) + Nu(x,t) = g(x,t)$,
where $L$ is the highest-order derivative, assumed to be easily invertible, $R$ is a linear differential operator of order less than $L$, $Nu$ represents the nonlinear terms, and $g$ is a source term. The basic characteristic of He’s method is to construct a correction functional for (A.1), which reads
(A.2) $u_{n+1}(x,t) = u_n(x,t) + \displaystyle\int_0^x \lambda(y)\left[Lu_n(y,t) + R\tilde u_n(y,t) + N\tilde u_n(y,t) - g(y,t)\right]dy$,
where $\lambda$ is a Lagrange multiplier that can be identified optimally via variational theory, $u_n$ is the $n$th approximate solution, and $\tilde u_n$ denotes a restricted variation, that is, $\delta\tilde u_n = 0$.

To solve (A.1) by He’s VIM, we first determine the Lagrange multiplier $\lambda$. Then the successive approximations $u_n(x,t)$, $n = 0, 1, 2, \ldots$, of the solution $u(x,t)$ can be readily obtained using the identified Lagrange multiplier and any selective function $u_0$. Consequently, the exact solution may be obtained as
(A.3) $u(x,t) = \lim_{n \to +\infty} u_n(x,t)$.

In summary, we have the variational iteration formula
(A.4) $u_{n+1}(x,t) = u_n(x,t) + \displaystyle\int_0^x \lambda(y)\left[Lu_n(y,t) + Ru_n(y,t) + Nu_n(y,t) - g(y,t)\right]dy$,
where $u_0(x,t)$ is an arbitrary function satisfying the initial and boundary conditions.
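As a toy illustration of (A.4) (this example is ours, not from the paper): for $u' - u = 0$, $u(0) = 1$, with $Lu = u'$, $Ru = -u$, $N = g = 0$, the multiplier is $\lambda = -1$ and the iterates reproduce the Taylor partial sums of $e^t$. A short sympy sketch:

```python
import sympy as sp

t, s = sp.symbols('t s')

u = sp.Integer(1)                         # u_0 satisfies the initial condition u(0) = 1
for _ in range(4):
    # u_{n+1}(t) = u_n(t) - integral_0^t (u_n'(s) - u_n(s)) ds, i.e. lambda = -1
    u = u - sp.integrate(sp.diff(u, t).subs(t, s) - u.subs(t, s), (s, 0, t))

print(sp.expand(u))   # 1 + t + t**2/2 + t**3/6 + t**4/24: partial sums of e^t
```

Each sweep adds one more Taylor term, which is the "rapidly convergent successive approximations" behavior described in the Introduction.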

It should be pointed out that the more accurate the identification of the multiplier, the faster the approximations converge to the exact solution.

Remark 7.

We have written (A.2) with an integration with respect to $x$ as an example; an integration with respect to $t$ is handled in the same way.

Conflict of Interests

The authors declare that there is no conflict of interests regarding the publication of this paper.

Acknowledgment