Fractal-Based Methods and Inverse Problems for Differential Equations: Current State of the Art

1 Department of Mathematics and Statistics, University of Guelph, Guelph, ON, Canada N1G 2W1
2 Department of Economics, Management, and Quantitative Methods, University of Milan, 20122 Milan, Italy
3 Department of Applied Mathematics and Sciences, Khalifa University, P.O. Box 127788, Abu Dhabi, UAE
4 Department of Mathematics and Statistics, Acadia University, Wolfville, NS, Canada B4P 2R6
5 Department of Applied Mathematics, University of Granada, 18071 Granada, Spain


1. Inverse Problems for Fixed Point Equations
According to Keller [1], "we call two problems inverses of one another if the formulation of each involves all or part of the solution of the other. Often, for historical reasons, one of the two problems has been studied extensively for some time, while the other one is newer and not so well understood. In such cases, the former is called the direct problem, while the latter is the inverse problem." In practice, a general inverse problem asks us to use observed data to estimate parameters in the functional form of the governing model of the phenomenon under study [2-6].
There is a fundamental difference between the direct and the inverse problem: often the direct problem is well-posed while the corresponding inverse problem is ill-posed. Hadamard [7] introduced the concept of a well-posed problem to describe a mathematical model that has the properties of existence, uniqueness, and stability of the solution. When one of these properties fails to hold, the mathematical model is said to be an ill-posed problem. There are several inverse problems in the literature that are ill-posed even though the corresponding direct problems are well-posed. The literature is rich in papers studying ad hoc methods to address ill-posed inverse problems by minimizing a suitable approximation error along with utilizing some regularization technique [8-13].
Many inverse problems may be recast as the approximation of a target element x in a complete metric space (X, d) by the fixed point x̄ of a contraction mapping T : X → X. Thanks to a simple consequence of Banach's Fixed Point Theorem known as the Collage Theorem, most practical methods of solving the inverse problem for fixed point equations seek an operator T for which the collage distance d(x, Tx) is as small as possible.
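In this notation, the Collage Theorem takes the following standard form: if T : X → X is a contraction with factor c ∈ [0, 1) on the complete metric space (X, d), Banach's theorem gives a unique fixed point x̄, and the approximation error is controlled by the collage distance:

```latex
d(x, \bar{x}) \;\le\; \frac{1}{1-c}\, d(x, Tx) \qquad \text{for all } x \in X .
```

Minimizing the right-hand side over a family of contractions therefore keeps the fixed point close to the target without ever computing x̄ explicitly.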

Mathematical Problems in Engineering
This vastly simplifies this type of inverse problem, since it is much easier to estimate d(x, Tx) than it is to find the fixed point x̄ and then compute d(x, x̄). One now seeks a contraction mapping T that minimizes the so-called collage error d(x, Tx); in other words, a mapping that sends the target x as close as possible to itself. This is the essence of the method of collage coding, which has been the basis of most, if not all, fractal image coding and compression methods. Barnsley et al. [14, 15] were the first to see the potential of using the Collage Theorem above for the purpose of fractal image approximation and fractal image coding [16]. However, this method of collage coding may be applied in other situations where contractive mappings are encountered.
We have shown this to be the case for inverse problems involving several families of differential equations and applications to different areas: ordinary differential equations [17-19], Urysohn-type integral equations [20], random differential equations [21, 22], boundary value problems [23-25], parabolic partial differential equations [26], stochastic differential equations [27-29], and others [3, 30]. In practical applications, given a family of contraction mappings T_λ, λ ∈ Λ ⊂ R^n, one wishes to find the parameter λ for which the approximation error d(x, x̄_λ) is as small as possible. In practice, the feasible set is often taken to be Λ_δ = {λ ∈ R^n : 0 ≤ c_λ ≤ δ < 1}, where c_λ is the contraction factor of T_λ, which guarantees the contractivity of T_λ for any λ ∈ Λ_δ. The main difference between this "collage" approach and the one based on Tikhonov regularization is the following (see [12, 13]): in the collage approach, the constraint λ ∈ Λ_δ guarantees that T_λ is a contraction and, therefore, it replaces the effect of the regularization term in the Tikhonov approach (see also [6, 8, 11]).
The collage-based inverse problem can be formulated as the optimization problem

min_{λ∈Λ} d(x, T_λ x). (2)

This is a nonlinear and, in general, nonsmooth optimization model. However, as the sections below show, model (2) can often be reduced to a quadratic optimization program. Several algorithms can be used to solve it, including, for instance, penalization methods and particle swarm and ant colony techniques. The paper is organized as follows. Section 2 presents the method for the case of differential equations, while Section 3 illustrates the case of different families of partial differential equations (PDEs), namely, elliptic, parabolic, and hyperbolic equations. Section 4 presents applications to a randomly forced vibrating string and to an inverse problem on perforated domains.
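As a minimal numerical sketch of the quadratic case (all names, matrices, and data below are illustrative, not taken from the paper): when the collage error is quadratic in λ, model (2) becomes a box-constrained least squares problem that projected gradient descent solves directly.

```python
import numpy as np

# Illustrative sketch: when the collage error ||x - T_lambda x||^2 is quadratic
# in the parameter vector, the inverse problem becomes
#   min ||A lam - b||^2  over the box  0 <= lam_i <= delta < 1
# that keeps T_lambda contractive. A and b are placeholders here.

def collage_qp(A, b, delta=0.9, steps=5000):
    """Projected gradient descent for min ||A x - b||^2 s.t. 0 <= x <= delta."""
    lr = 1.0 / np.linalg.norm(A, 2) ** 2        # step 1/L, L = Lipschitz const.
    x = np.zeros(A.shape[1])
    for _ in range(steps):
        grad = A.T @ (A @ x - b)                # gradient of (1/2)||Ax - b||^2
        x = np.clip(x - lr * grad, 0.0, delta)  # project back onto the box
    return x

rng = np.random.default_rng(0)
A = rng.standard_normal((20, 3))
true_lam = np.array([0.2, 0.5, 0.8])            # lies inside the feasible box
b = A @ true_lam
lam = collage_qp(A, b)                          # recovers true_lam
```

The penalization and swarm-type methods mentioned above become relevant when the objective is not quadratic or the feasible set is not a simple box.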

2. Inverse Problems for IVPs by the Collage Theorem
In [18] and subsequent works [17, 20-22, 25], the authors showed how collage coding can be used to solve inverse problems for systems of differential equations of the form

u'(t) = f(t, u(t)), u(t_0) = u_0,

by reducing the problem to the corresponding Picard integral operator

(Tu)(t) = u_0 + ∫_{t_0}^{t} f(s, u(s)) ds.

Let us recall the basic results in this setting.
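A hedged illustration of the Picard-operator approach for the simplest case u' = λu (the discretization choices below are ours, not the paper's): the collage error ‖u − T_λu‖₂ is quadratic in λ, so the minimizer is available in closed form.

```python
import numpy as np

# Collage coding sketch for the IVP u' = lambda*u, u(0) = u0, via its
# Picard operator  (T_lambda u)(t) = u0 + lambda * int_0^t u(s) ds.
# Minimizing the quadratic collage error gives lambda in closed form.

t = np.linspace(0.0, 1.0, 2001)
true_lambda, u0 = -1.5, 2.0
u = u0 * np.exp(true_lambda * t)        # "observed" target solution

# v(t) = int_0^t u(s) ds by the cumulative trapezoidal rule
dt = t[1] - t[0]
v = np.concatenate(([0.0], np.cumsum((u[1:] + u[:-1]) * dt / 2.0)))

# minimize ||(u - u0) - lambda*v||_2  =>  lambda = <u - u0, v> / <v, v>
lam_hat = np.dot(u - u0, v) / np.dot(v, v)   # close to true_lambda
```

Note that only the observed u and one quadrature pass are needed; the true solution of the IVP is never computed during the minimization.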
Theorem 3 (see [18]). Let x, f ∈ ℓ²(R). Then the collage minimization takes the form (10), where Λ = {λ ∈ R^n : ‖λ‖₂ ‖x‖₂ < 1}. This is a classical quadratic optimization problem which can be solved by means of classical numerical methods. A penalized version of (10), obtained by adding a penalization term to the objective, is also available. Let Δ_n^min be the minimum value of Δ over Λ. These values form a nonincreasing sequence of numbers (depending on the number n of parameters) and, as shown in [16], lim inf_{n→+∞} Δ_n^min = 0. This means that the distance between the target element and the unknown solution of the differential equation can be made arbitrarily small. In Kunze et al. [21], the authors considered the case of inverse problems for random differential equations, while in Capasso et al. [29] the case of stochastic differential equations is analyzed.
Example 4. Suppose that the stochastic process X_t is believed to follow a geometric Brownian motion; then it satisfies the stochastic differential equation

dX_t = μ X_t dt + σ X_t dB_t, (12)

where B_t is a Wiener process and the constants μ and σ are the percentage drift and the percentage volatility, respectively. We consider the following inverse problem: given realizations/paths X_t^(i), 1 ≤ i ≤ N, estimate the values μ and σ. Taking the expectation in (12), we see that E(X_t) satisfies the simple fixed point equation

E(X_t) = X_0 + μ ∫_0^t E(X_s) ds.

Hence, to solve the inverse problem, we construct the mean of the realizations and use collage coding to determine the value of μ that minimizes the L² collage distance. We can then estimate the value of σ by using the known formula var(X_t) = X_0² e^{2μt}(e^{σ²t} − 1), approximating var(X_t) from the realizations. As an example, we set μ = 2, σ = 4, and X_0 = 1 and then generate N paths on [0, 1], dividing the interval into M subintervals in order to simulate the Brownian motion on [0, 1]. Beginning with these paths, we seek estimates of μ and σ using collage coding. Figure 1 shows five paths for the Brownian motion and the process X_t. Table 1 presents the numerical results of the example.
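The procedure of Example 4 can be sketched as follows, with milder illustrative parameter values and our own discretization rather than the paper's settings; here the drift is fitted to the mean path by least squares in log space, a simple stand-in for the collage minimization.

```python
import numpy as np

# Hedged illustration: estimate the drift mu of a geometric Brownian motion
# from simulated paths by fitting the averaged equation E(X_t) = x0*exp(mu*t),
# then recover sigma from  var(X_t) = x0^2 e^{2 mu t} (e^{sigma^2 t} - 1).

rng = np.random.default_rng(42)
mu, sigma, x0 = 0.5, 0.3, 1.0
n_paths, n_steps, T = 20000, 100, 1.0
dt = T / n_steps
t = np.linspace(0.0, T, n_steps + 1)

# exact GBM update: X_{k+1} = X_k * exp((mu - sigma^2/2) dt + sigma sqrt(dt) Z)
Z = rng.standard_normal((n_paths, n_steps))
log_incr = (mu - 0.5 * sigma**2) * dt + sigma * np.sqrt(dt) * Z
X = x0 * np.exp(np.concatenate([np.zeros((n_paths, 1)),
                                np.cumsum(log_incr, axis=1)], axis=1))

# fit mu: log E(X_t) = log x0 + mu * t  (least squares in t, skipping t = 0)
mean_path = X.mean(axis=0)
mu_hat = np.polyfit(t[1:], np.log(mean_path[1:] / x0), 1)[0]

# recover sigma from the sample variance at t = T
var_T = X[:, -1].var()
sigma_hat = np.sqrt(np.log(1.0 + var_T / (x0**2 * np.exp(2 * mu_hat * T))) / T)
```

The drift estimate uses only the mean of the paths, mirroring the averaged fixed point equation; the volatility then follows from the sample variance at t = T.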
In the elliptic case, the variational equation takes the form a(u, v) = φ(v) for all v ∈ H, where φ(v) and a(u, v) are linear and bilinear maps, respectively, both defined on a Hilbert space H. Let us denote by ⟨·, ·⟩ the inner product in H, with ‖u‖² = ⟨u, u⟩ and d(u, v) = ‖u − v‖ for all u, v ∈ H. The inverse problem may now be viewed as follows: suppose that we have an observed solution u and a given (restricted) family of bilinear functionals a_λ(u, v), λ ∈ R^n. We then seek "optimal" values of λ. The existence and uniqueness of solutions to this kind of equation are provided by the classical Lax-Milgram representation theorem. Suppose that we have a "target" element u ∈ H and a family of bilinear functionals a_λ. Then, by the Lax-Milgram theorem, there exists a unique vector u_λ ∈ H such that φ(v) = a_λ(u_λ, v) for all v ∈ H. We would like to determine whether there exists a value of the parameter λ such that u_λ = u or, more realistically, such that ‖u_λ − u‖ is small enough. The Generalized Collage Theorem is useful for the solution of this problem. In order to ensure that the approximation u_λ is close to a target element u ∈ H, we can, by the Generalized Collage Theorem, try to make the term F(λ)/λ_α as close to zero as possible, where λ_α is the coercivity constant of a_λ and F(λ) measures the residual of the variational equation at the target u. The appearance of the λ_α factor complicates the procedure, as does the factor 1/(1 − c) in standard collage coding, that is, (1). If inf_{λ∈Λ} λ_α ≥ C > 0, then the inverse problem can be reduced to the minimization of the function F(λ) on the space Λ; that is,

min_{λ∈Λ} F(λ). (18)

The choice of λ according to (18), minimizing the residual, is, in general, not stabilizing (see [8]). However, as the next sections show, under the condition inf_{λ∈Λ} λ_α ≥ C > 0, our approach is stable. Following our earlier studies of inverse problems using fixed points of contraction mappings, we will refer to the minimization of the functional F(λ) as a "generalized collage method."
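In this setting, the Generalized Collage Theorem as used in the authors' earlier work [24] bounds the approximation error by a residual functional taken over the unit ball (stated here in the text's notation):

```latex
\| u - u_\lambda \| \;\le\; \frac{F(\lambda)}{\lambda_\alpha},
\qquad
F(\lambda) \;=\; \sup_{v \in H,\; \|v\| = 1} \bigl| \varphi(v) - a_\lambda(u, v) \bigr|,
```

where λ_α > 0 is the coercivity constant of the bilinear form a_λ. Driving F(λ) to zero thus drives the Lax-Milgram solution u_λ toward the target u, provided λ_α stays bounded away from zero.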
Such an optimization problem has a solution that can be approximated by a suitable discrete quadratic program, derived from the application of the Generalized Collage Theorem and an adequate use of an orthonormal basis of the Hilbert space H, as seen in [24].
Next, we perturb the target function, leaving the remaining data of the problem exact. Table 2 presents the L² error ‖u − u_noisy‖ between the true solution u and the noised target u_noisy, and the resulting error ‖u − u_collage‖ between the true u and the collage-coded approximation u_collage, for numerous cases of the noise and discretization parameters. Note that the two relevant L² norms are 1.38082 and 0.5. In Figure 3, we present graphs of the results obtained by minimizing (18) with m = n = 3 through m = n = 5.
These results have been extended to a wider class of elliptic problems in [31, 32] by considering not only Hilbert but also reflexive Banach spaces, and even by replacing the primal variational formulation of such a problem, (15), with a more general constrained variational one. Let us mention that this kind of formulation arises, for instance, when the boundary constraints are weakly imposed.
Theorem 7 (see [32, 33]). Let four reflexive Banach spaces be given, let φ and ψ be bounded linear functionals on two of them, let Λ be a nonempty set, and assume that, for each λ ∈ Λ, λ_α and λ_β are positive real numbers for which the hypotheses of the constrained variational formulation hold. An approximated quadratic program then follows from the preceding collage-type result and some properties of Schauder bases in the involved reflexive Banach spaces (see [32] for the details).

3.2. Parabolic Equations. Suppose that we have a given
Hilbert space H, and let us consider the abstract formulation of a parabolic equation given in (29), where φ : H → R is a linear functional, a : H × H → R is a bilinear form, and u_0 ∈ H is an initial condition. The aim of the inverse problem for system (29) is to obtain an approximation of the coefficients and parameters starting from a sample of observations of a target u ∈ H. To do this, let us consider a family of bilinear functionals a_λ and let u_λ be the solution to the corresponding perturbed problem. We would like to determine whether there exists a value of the parameter λ such that u_λ = u or, more realistically, such that ‖u_λ − u‖ is small enough. To this end, Theorem 8 states that the distance between the target solution u and the solution u_λ of (27) can be reduced by minimizing a functional which depends on the parameters. Theorem 8. Let u : [0, T] → L²(Ω) be the target solution which satisfies the initial condition in (29), and suppose that ∂u/∂t exists and belongs to H. Suppose that a_λ(u, v) : Λ × H × H → R is a family of bilinear forms for all λ ∈ Λ. Then the distance ‖u − u_λ‖ is bounded by a computable functional of λ, where u_λ is the solution of (27) such that u_λ(0) = u(0) and u_λ(T) = u(T).
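A standard weak formulation matching the description above (a reconstruction in the text's notation, not a verbatim copy of (29)) is:

```latex
\Bigl\langle \tfrac{\partial u}{\partial t}(t),\, v \Bigr\rangle
+ a\bigl(u(t), v\bigr) = \varphi(v)
\quad \forall v \in H,\ t \in (0, T],
\qquad u(0) = u_0 .
```

The inverse problem then replaces a by a parameterized family a_λ and minimizes a collage-type functional over λ, exactly as in the elliptic case.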
For the hyperbolic case, we have an analogous abstract system in which φ : H → R is a linear functional, a : H × H → R is a bilinear form, and u_0, u_1 ∈ H define the initial conditions. As in the previous sections, the aim of the inverse problem for the above system of equations is to reconstruct the coefficients starting from a sample of observations of a target u ∈ H. We consider a family of bilinear functionals a_λ and let u_λ be the solution to the corresponding perturbed problem. We would like to determine whether there exists a value of the parameter λ such that u_λ = u or, more realistically, such that ‖u_λ − u‖ is small enough. Theorem 10 states that the distance between the target solution u and the solution u_λ of (31) can be reduced by minimizing a functional which depends on the parameters.
Theorem 10. Let u : [0, T] → L²(Ω) be the target solution which satisfies the initial conditions in (30), and suppose that ∂²u/∂t² exists and belongs to H. Suppose that there exists a family of constants λ_α > 0 such that a_λ(v, v) ≥ λ_α ‖v‖² for all v ∈ H.
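The hyperbolic analogue, in the same notation (again a standard reconstruction rather than a verbatim copy of (30)), together with the coercivity hypothesis of Theorem 10, reads:

```latex
\Bigl\langle \tfrac{\partial^2 u}{\partial t^2}(t),\, v \Bigr\rangle
+ a\bigl(u(t), v\bigr) = \varphi(v)
\quad \forall v \in H,
\qquad u(0) = u_0,\quad \tfrac{\partial u}{\partial t}(0) = u_1,
```

with a_λ(v, v) ≥ λ_α ‖v‖² for all v ∈ H and all λ ∈ Λ.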
The results we get from the generalized collage method are summarized in Table 4, which again proves the stability of the method under noise.

4.1. The Vibrating String Driven by a Stochastic Process

Before stating and solving two inverse problems, we begin by giving the details and motivation for the specific model we are interested in studying. We consider the following system of coupled differential equations, the first a stochastic differential equation and the second a hyperbolic partial differential equation. On a domain D ⊂ R^d, we have equations (34)-(35), where f_{Y_t} is the law of Y_t and δ_{Y_t} is the Dirac delta "function" at the point Y_t.
For instance, imagine a flexible string directed along the x-axis, kept stretched by a constant horizontal tension and forced to vibrate perpendicularly to the x-axis under a random force F(x, t). If u denotes the displacement of a point x at time t, it is well known that u satisfies the classical forced wave equation, where x ∈ D and t > 0. In our model, we suppose that F(x, t) = σ_2(t) δ_{Y_t}(x), where Y_t is a stochastic process which is a solution of the stochastic differential equation (34). In other words, the hyperbolic equation has a forcing term that is driven by this stochastic process. The random vibration of an infinite string has received recent attention (see [34, 35]). Figure 4 presents some snapshots of the displacement for the related finite string problem.
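The equation referred to here is the classical forced wave equation, with wave speed c determined by the tension and the linear density of the string:

```latex
\frac{\partial^2 u}{\partial t^2}(x, t)
\;-\; c^2\, \frac{\partial^2 u}{\partial x^2}(x, t)
\;=\; F(x, t),
\qquad x \in D,\ t > 0 .
```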
In the next sections, we present methods for solving two different parameter identification problems for this system of coupled differential equations, one for each of the two equations. Both methods are based on the numerical schemes presented in the previous sections.
Before we begin the analysis, a few words about (35) are in order, since this equation contains the generalized function δ_{Y_t}. That is, δ_{Y_t}(x) has a meaning only when it is integrated against a test function φ(x); thus, (35) is to be understood in the weak sense, holding for each test function φ (see [36]). This replaces the quantity δ_{Y_t} with its expectation E(δ_{Y_t}(x)). Since Y_t is absolutely continuous, E(δ_{Y_t}(x)) = f_{Y_t}(x), the density of the distribution of Y_t. The previous model can therefore be rewritten in an averaged form, (39). Note that the averaged equation (39) has solutions in the usual sense. For this particular case, we have E(u(x, t)) = ũ(x, t). To see this, just notice that the operator ∂²/∂t² − Δ_x is linear and therefore commutes with the expectation; furthermore, σ_1 and σ_2 are deterministic functions. Thus, E(u) is the solution to the deterministic PDE (42). However, this is clearly the same PDE of which ũ is the solution. Thus, we must have E(u) = ũ. This inverse problem can be solved using the techniques illustrated in the previous section on hyperbolic differential equations.
We recall the following relationship between the expectation of a random variable and its density: E(Y_t) = ∫_R x f_{Y_t}(x) dx. Thus, we can obtain E(Y_t), which can now be used for solving the inverse problem for (45) via the method illustrated in Section 2; we recover the function Φ. The last step involves the analysis of (47), in which the only unknown is the density f_{Y_t}. Taking the L² expansions of Φ and f_{Y_t} with respect to the same L² orthonormal basis {e_i}, we obtain, for i ≥ 0, a linear system in the expansion coefficients, the solution of which is our final step.
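The expectation/density relationship can be checked numerically; a small self-contained sketch follows (the Gaussian density here is purely illustrative):

```python
import numpy as np

# Numerical check of E(Y) = int_R x f_Y(x) dx for an illustrative Gaussian
# density with mean 1.5 and standard deviation 0.4; the trapezoidal rule is
# coded by hand so the sketch stays self-contained.

x = np.linspace(-10.0, 10.0, 200001)
m, s = 1.5, 0.4
f = np.exp(-((x - m) ** 2) / (2.0 * s**2)) / (s * np.sqrt(2.0 * np.pi))

g = x * f
expectation = float(np.sum((g[1:] + g[:-1]) * (x[1:] - x[:-1])) / 2.0)  # ~ m
```

In the coupled-system setting, the same quadrature applied to the recovered density f_{Y_t} yields E(Y_t) at each time of interest.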

4.2. Inverse Problems on Perforated Domains
A porous medium (or perforated domain) is a material characterized by a partitioning of the total volume into a solid portion, often called the "matrix," and a pore space, usually referred to as "holes." Mathematically speaking, these holes can be either materials different from those of the matrix or real physical holes. When formulating differential equations over porous media, the term "porous" implies that the state equation is written in the matrix only, while boundary conditions should be imposed on the whole boundary of the matrix, including the boundary of the holes. Examples of this are the Stokes or Navier-Stokes equations, which are usually written only in the fluid part, while the rocks play the role of "mathematical" holes. Porous media are encountered everywhere in real life, and the concept of porous media is essential in many areas of applied science and engineering, including petroleum engineering, chemical engineering, civil engineering, aerospace engineering, soil science, geology, and materials science. Since porosity in materials can take different forms and appear in varying degrees, solving differential equations over porous media is often a complicated task. Indeed, the size of holes and their distribution within a material play an important role in its characterization, and simulations conducted over porous media that include a large number of matrix-hole interfaces present real challenges. This is due to the need for a very fine discretization mesh, which often requires significant computational time and may even be impractical. This major difficulty is usually overcome by using the mathematical theory of "homogenization," in which the heterogeneous material is replaced by a fictitious homogeneous one through a delicate approach that is not simply an averaging procedure. Several techniques are currently in use in homogenization, including the multiple scale method, the method of oscillating test functions of Tartar, the two-scale convergence method, and, most recently, the periodic unfolding method.
In the case of porous media, or heterogeneous media in general, characterizing the properties of the material is a delicate process and can be done on different levels, mainly the microscopic and macroscopic scales, where the microscopic scale describes the heterogeneities and the macroscopic one describes the global behavior of the composite. To provide a numerical example of an inverse problem on a perforated domain, we set Ω = [0, 1]² and consider, for small ε > 0, the diffusion model (51). Suppose that, for a certain ε, we sample the solution u_ε of the above model on the perforated domain Ω_ε. We aim at estimating the unknown parameter starting from these data by using, instead, the model obtained when ε = 0. In fact, it is much easier to solve an inverse problem on the initial domain Ω than on the perforated domain Ω_ε.
For a fixed value of , we solve the diffusion problem numerically and sample the solution at  ×  uniformly

Figure 2: (Left to right, top to bottom) For two-dimensional Example 4, the graphs of the actual u(x, y) and the collage-coded approximations of u with m = n = 3 through m = n = 9.

Figure 4: Snapshots of a randomly forced vibrating string, with time increasing from left to right and top to bottom.

Table 1: Minimal collage distance parameters for different N and M, to five decimal places.

3. Inverse Problems for BVPs by the Generalized Collage Theorem
3.1. Elliptic Equations. Let us consider the following variational equation:

Table 3: Collage coding results for the parabolic equation in Example 9.

Table 4: Collage coding results for the hyperbolic equation in Example 11.