Inverse Problems via the Generalized Collage Theorem for Vector-Valued Lax-Milgram-Based Variational Problems



Inverse Problems via the Collage Theorem
In recent years a great deal of attention has been paid to inverse problems in distributed systems, that is, the determination of unknown parameters in the functional form of the governing model of the phenomenon under study [1–4]. The literature is rich in papers studying ad hoc methods to address ill-posed inverse problems by minimizing a suitable approximation error along with utilizing some regularization techniques [5–7].
Many inverse problems may be recast as the approximation of a target element x in a complete metric space (X, d) by the fixed point x̄ of a contraction mapping T : X → X. Thanks to a simple consequence of Banach's fixed point theorem known as the Collage Theorem, most practical methods of solving the inverse problem for fixed point equations seek an operator T for which the collage distance d(x, Tx) is as small as possible.
This vastly simplifies this type of inverse problem, as it is much easier to estimate d(x, Tx) than to find the fixed point x̄ and then compute d(x, x̄). One now seeks a contraction mapping T that minimizes the so-called collage error d(x, Tx), in other words, a mapping that sends the target x as close as possible to itself. This is the essence of the method of collage coding, which has been the basis of most, if not all, fractal image coding and compression methods. Barnsley [8, 9] was the first to see the potential of using the Collage Theorem above for the purpose of fractal image approximation and fractal image coding. In practical applications, from a family of contraction mappings T_λ, λ ∈ Λ ⊂ R^n, one wishes to find the parameter λ̄ for which the approximation error d(x, x̄_λ) is as small as possible. In practice the feasible set is often taken to be Λ_c = {λ ∈ R^n : 0 ≤ c_λ ≤ c < 1}, which guarantees the contractivity of T_λ for any λ ∈ Λ_c. A difference between the "collage" approach and the one based on Tikhonov regularization is the following: in the collage approach, the constraint λ ∈ Λ_c guarantees that T_λ is a contraction and, therefore, replaces the effect of the regularization term in the Tikhonov approach (see [4, 7]). The collage approach works well for low-dimensional parametrizations in particular, while Tikhonov regularization is a fundamentally nonparametric methodology. The collage-based inverse problem can be formulated as an optimization problem as follows:

min_{λ ∈ Λ} d(x, T_λ x).

This is a nonlinear and nonsmooth optimization model that can often be reduced to a quadratic optimization program.
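As a toy sketch of collage coding (not from the paper), consider the hypothetical one-parameter family T_λ(x) = λx + 1 on R with λ restricted to a feasible set [0, c], c < 1. Minimizing the collage error over a grid of feasible λ locates a map whose fixed point 1/(1 − λ) approximates the target; the family, the target, and the grid are all assumptions for illustration.

```python
import numpy as np

# Hypothetical contraction family T_lam(x) = lam*x + 1 on R, lam in [0, 0.99].
def collage_error(lam, x):
    return abs(x - (lam * x + 1.0))  # collage distance d(x, T_lam x)

x_target = 2.0                                # target element to approximate
lams = np.linspace(0.0, 0.99, 1000)           # feasible set Lambda_c
best = min(lams, key=lambda l: collage_error(l, x_target))
x_fixed = 1.0 / (1.0 - best)                  # fixed point of T_best
# Collage Theorem bound: d(x, x_fixed) <= collage_error(best, x) / (1 - best)
```

Here the minimizer is λ ≈ 0.5, whose fixed point reproduces the target; in realistic settings the minimum collage error is nonzero and the Collage Theorem bounds the resulting approximation error.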
Several algorithms can be used to solve it, including penalization methods and particle swarm and ant colony techniques. This method of collage coding may be applied in other situations where contractive mappings are encountered. These ideas have been extended to inverse problems for Initial Value Problems (IVPs) in [10]. In this setting, the contractive Picard operator plays the role of T, and the space X contains continuous and appropriately bounded functions on a closed interval of observation. Given a target function, perhaps the interpolation of observational data points, the Collage Theorem can be applied to find the Picard operator within a prescribed class that minimizes the collage distance. We have applied this technique to inverse problems involving several families of differential equations and applications to different areas (see [10–15]).
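For a concrete IVP sketch (the model u′ = λu with u(0) = 1 and the synthetic target data are assumptions, not the paper's example), the squared collage distance ‖u − T_λu‖², with T_λ the Picard operator, is quadratic in λ and can be minimized in closed form:

```python
import numpy as np

t = np.linspace(0.0, 1.0, 201)
dt = t[1] - t[0]
u = np.exp(-2.0 * t)          # synthetic target for u' = lam*u, true lam = -2
# Cumulative trapezoidal integral I(t) = int_0^t u(s) ds
I = np.concatenate(([0.0], np.cumsum(dt * (u[1:] + u[:-1]) / 2.0)))
# Picard operator: (T_lam u)(t) = u(0) + lam * I(t); minimizing
# ||u - T_lam u||^2 over lam is a scalar least-squares problem.
lam = np.sum((u - u[0]) * I) / np.sum(I * I)
```

The closed-form minimizer recovers the true value λ = −2 up to quadrature error; with noisy target data the same formula yields the collage-optimal parameter without ever solving the forward problem.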
Example 2. We present the results of an IVP inverse problem solved using collage coding, Tikhonov regularization, and the Landweber-Fridman method. Consider the following steady-state heat equation on Ω = [0, 1]:

−(κ(x)u′(x))′ = f(x), x ∈ Ω,

where κ(x) is the variable thermal diffusivity of the medium at x ∈ Ω, f(x) represents a heat source or sink at each x ∈ Ω, and u(x) denotes the temperature at x ∈ Ω.
The inverse problem we look at is to estimate the variable thermal diffusivity κ(x) given uniformly distributed sample values of u(x), with low-amplitude Gaussian noise added, and the forcing function f(x). For this example, we assume that Morozov's discrepancy principle was used to find the value of the Tikhonov regularization parameter, with our discrepancy below a tolerance level of 10^{-8}. Table 1 presents the results; the subscripts indicate the method used to solve the inverse problem.
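A minimal sketch of the diffusivity estimation (assuming, for illustration only, a constant κ, the manufactured solution u = sin(πx), and the matching source f = π² sin(πx)): minimizing the quadratic residual of −κu″ = f over κ avoids repeated forward solves, in the spirit of collage coding.

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(0.0, 1.0, 101)
h = x[1] - x[0]
u = np.sin(np.pi * x) + 1e-6 * rng.standard_normal(x.size)  # noisy temperature samples
f = np.pi**2 * np.sin(np.pi * x)                            # known source term
# Interior second differences approximate u''
upp = (u[2:] - 2.0 * u[1:-1] + u[:-2]) / h**2
# Least-squares minimizer of sum_i (kappa * u''(x_i) + f(x_i))^2
kappa = -np.sum(f[1:-1] * upp) / np.sum(upp * upp)
```

With the assumed data the estimate returns κ ≈ 1; larger noise levels amplify through the second differences, which is where regularization (or the contractivity constraint of the collage approach) becomes essential.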
In a manner analogous to the Collage Theorem we have also formulated a Generalized Collage Theorem for solving Boundary Value Problems (BVPs), replacing the minimization of the true error by the minimization of something akin to the collage distance. In place of Banach's fixed point theorem for contraction maps on a complete metric space, we have appealed to the Lax-Milgram representation theorem.
These results have been extended to a wider class of elliptic problems in [16, 17], by considering not only Hilbert but also reflexive Banach spaces, and even replacing the primal variational formulation of such a problem with a more general constrained variational one. Let us mention that this kind of formulation arises, for instance, when the boundary constraints are weakly imposed.
The paper is organized as follows. Section 2 is concerned with an extension of the Collage Theorem stated in [16, Corollary 4.2] to a finite-dimensional vector-valued context, as well as a discretization scheme based on the use of suitable Schauder bases. Section 3 presents three different numerical examples which show how to solve inverse problems for systems of elliptic differential equations.

Vector-Valued Lax-Milgram and the Inverse Problem
In this section we deal with a Lax-Milgram theorem stated in terms of a system of suitable variational equations and with a collage-type result that follows from it. We refer to [18–21] for some recent vectorial versions of the Lax-Milgram theorem and some applications to the study of mixed variational equations.
The first result is the following vector-valued version of the Lax-Milgram theorem, which is a direct consequence of the characterization of the solvability of systems with infinitely many variational equations given in [22, Theorem 3.2] (specifically of its finite-dimensional case [22, Corollary 4.6]) and of the fact that if n ≥ 1, E, F_1, ..., F_n are real vector spaces, a_1 : E × F_1 → R, ..., a_n : E × F_n → R are bilinear forms, and y_1* : F_1 → R, ..., y_n* : F_n → R are linear forms such that the system

a_j(x, y_j) = y_j*(y_j), y_j ∈ F_j, j = 1, ..., n,

admits a solution, then such a solution is unique if, and only if, the corresponding homogeneous problem has one and only one solution. Given a real normed space E, we write E* for its topological dual space. Suppose that E is a real reflexive Banach space, n ≥ 1, F_1, ..., F_n are real Banach spaces, and, for each λ ∈ Λ, a_1^λ : E × F_1 → R, ..., a_n^λ : E × F_n → R are continuous bilinear forms. Let us also suppose that, for all λ ∈ Λ, u_λ ∈ E is the unique solution of the variational system

a_j^λ(u_λ, y_j) = y_j*(y_j), y_j ∈ F_j, j = 1, ..., n.

Then for each u_0 ∈ E and for all λ ∈ Λ the inequality

‖u_0 − u_λ‖ ≤ (1/c_λ) max_{1≤j≤n} sup_{y_j ∈ F_j, ‖y_j‖ = 1} |a_j^λ(u_0, y_j) − y_j*(y_j)|

is valid, where c_λ > 0 is the stability constant provided by Theorem 3.

Mathematical Problems in Engineering
Proof. The unisolvency and continuous dependence on the initial data of the solution in Theorem 3 imply the announced result. Indeed, let λ ∈ Λ and notice that u_λ − u_0 is a solution of the system of variational equations

a_j^λ(u_λ − u_0, y_j) = y_j*(y_j) − a_j^λ(u_0, y_j), y_j ∈ F_j, j = 1, ..., n.

Then, in view of Theorem 3, the announced inequality follows.

Under such assumptions c_λ > 0, and suppose that each space F_j (j = 1, ..., n) has a Schauder basis {Υ_j,k}_{k≥1}, in such a way that if {Υ_j,k*}_{k≥1} denotes its sequence of biorthogonal functionals, then a nonrestrictive normalization condition holds. In order to discretize our optimization problem, let us also assume that E admits a Schauder basis {Θ_k}_{k≥1}, and define for each m ≥ 1 and j = 1, ..., n the finite-dimensional subspaces spanned by the first m basis elements, and let P_m be the m-th projection of E onto span{Θ_1, ..., Θ_m}; that is, P_m u is the m-th partial sum of the basis expansion of u, for all u ∈ E. We also suppose the corresponding uniform bounds for all λ ∈ Λ, j = 1, ..., n, and m ≥ 1. Then, instead of the collage-type bound itself, it suffices to minimize its restriction to these finite-dimensional subspaces, or equivalently the discrete objective function, which is easier to minimize.
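As an illustration of the projection step (a sketch assuming the classical Faber-Schauder basis of C[0, 1], one concrete Schauder basis): its m-th projection is piecewise-linear interpolation at the first m dyadic nodes, so the discrete objective only involves function values at finitely many nodes.

```python
import numpy as np

def schauder_nodes(m):
    # Faber-Schauder ordering of dyadic nodes: 0, 1, 1/2, 1/4, 3/4, 1/8, 3/8, ...
    nodes = [0.0, 1.0]
    k = 1
    while len(nodes) < m:
        nodes += [(2 * i - 1) / 2.0**k for i in range(1, 2**(k - 1) + 1)]
        k += 1
    return np.array(nodes[:m])

def project(f, m, x):
    # m-th partial sum of the Faber-Schauder expansion of f = the
    # piecewise-linear interpolant of f at the first m dyadic nodes
    nodes = np.sort(schauder_nodes(m))
    return np.interp(x, nodes, f(nodes))

x = np.linspace(0.0, 1.0, 1001)
err = np.max(np.abs(project(np.square, 9, x) - x**2))  # projection error for f(t) = t^2
```

With m = 9 the nodes form a uniform dyadic grid of spacing 1/8, and the interpolation error for t² is bounded by (1/8)²/4, matching the classical rate at which the projections converge.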

Numerical Examples
In this section we present three different numerical examples.
Example 1. We consider a linear system of elliptic equations on [0, 1]. We solve the linear system, sample each solution component at n uniformly distributed data points in [0, 1], add relative noise of magnitude ε, and fit 6th-degree polynomials to the resulting data to produce a target solution ũ. Figure 1 shows the numerical solution u and the target functions ũ with n = 20 and ε = 0.05. We consider the inverse problem: given ũ and the forcing functions, approximate the diffusivity and the remaining coefficient λ such that the resulting system admits ũ as an approximate solution. With {φ_k}_{k=1}^B equal to the B-dimensional "hat basis" of [0, 1], for various values of n, noise ε, polynomial degree d, and basis dimension B, we construct the objective function in (26) and minimize it to find the parameters. The results are presented in Table 2.
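A hedged sketch of this recovery pipeline (the manufactured data u = x(1 − x), true diffusivity κ(x) = 1 + x, hence f = 1 + 4x, and the affine parametrization κ(x) = c0 + c1·x are all assumptions for illustration): testing the weak form of −(κu′)′ = f against interior hat functions yields a small linear least-squares problem for the coefficients.

```python
import numpy as np

B = 6                                   # hat-basis dimension (illustrative)
nodes = np.linspace(0.0, 1.0, B)
x = np.linspace(0.0, 1.0, 2001)
dx = x[1] - x[0]
u_prime = 1.0 - 2.0 * x                 # derivative of u = x(1 - x)
f = 1.0 + 4.0 * x                       # source consistent with kappa(x) = 1 + x

def hat(j, t):
    # j-th piecewise-linear "hat" function on the uniform node grid
    return np.interp(t, nodes, np.eye(B)[j])

def hat_prime(j, t):
    # derivative of an interior hat: +1/h left of its node, -1/h right of it
    h = nodes[1] - nodes[0]
    up = ((t >= nodes[j - 1]) & (t < nodes[j])).astype(float)
    down = ((t >= nodes[j]) & (t < nodes[j + 1])).astype(float)
    return (up - down) / h

# Weak form: int kappa u' v' dx = int f v dx for interior test hats v_j,
# with kappa(x) = c0 + c1*x; assemble and solve by least squares.
A = np.empty((B - 2, 2))
b = np.empty(B - 2)
for r, j in enumerate(range(1, B - 1)):
    vp = hat_prime(j, x)
    A[r, 0] = np.sum(u_prime * vp) * dx          # column for c0
    A[r, 1] = np.sum(x * u_prime * vp) * dx      # column for c1
    b[r] = np.sum(f * hat(j, x)) * dx
c, *_ = np.linalg.lstsq(A, b, rcond=None)        # c is close to (1, 1)
```

The quadratic structure mirrors the paper's objective function: once the unknown coefficient is expanded in a finite basis, the collage-type minimization reduces to linear least squares.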
Example 2. We consider a coupled system whose forcing functions involve singular factors such as (1 − x)^{−3/5}. Note that the forcing functions f_1(x) and f_2(x) are not Hilbertian, living instead in W_0^{1,3/2}(0, 1), for example. We work in the reflexive framework of Section 2. (In [16], we observed that the Hilbert space solution framework failed to work for a single equation with forcing function f_1(x).) Following [23, Proposition 4.8], we can construct a Schauder basis in the Sobolev space W_0^{1,3/2}(0, 1). The BVP has singularities at x = 0 and x = 1, since at each endpoint one of f_1(x) or f_2(x) is undefined. The forward problem can be solved numerically by using collocation techniques. We use COMSOL to solve it. Figure 2 presents the numerical solution. Next, we represent the solution in the subspace generated by the first 511 terms of the Schauder basis; call these representations ũ_1(x) and ũ_2(x). We consider the inverse problem: find a diffusivity such that ũ is admitted as an approximate solution to (30) with the true coefficient replaced by this estimate.
We solve the inverse problem by constructing the objective function in (26) with m = 511, the trial functions equal to the elements of the Schauder basis, and the unknown coefficient expanded as a polynomial.

Example 3. We consider the 2D linear system

−∇ · (κ(x, y)∇u) + u = f(x, y),

where f_1(x, y) and f_2(x, y) have been chosen so that the actual solution to the system is known in closed form. Analogous to the process followed in Example 1, we sample each solution component at n × n uniformly distributed data points in [0, 1] × [0, 1] and add relative noise of magnitude ε to each of these data points. A target solution ũ is constructed from these noisy samples. We consider the inverse problem: given ũ and f(x, y), approximate κ(x, y) such that the resulting system admits ũ as an approximate solution. We set κ(x, y) to a tensor-product basis expansion with coefficients κ_ij. For various values of n, noise ε, polynomial degree 1, and basis dimension, we construct the objective function in (26) and minimize it to find the coefficients κ_ij. The results are presented in Table 3.
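An illustrative 2D sketch in the same spirit (assumptions: a constant κ and the manufactured solution u = sin(πx)sin(πy), not the paper's data): recovering κ in −κΔu + u = f from gridded samples reduces to a scalar least-squares fit against a finite-difference Laplacian.

```python
import numpy as np

n = 101
x = np.linspace(0.0, 1.0, n)
h = x[1] - x[0]
X, Y = np.meshgrid(x, x, indexing="ij")
U = np.sin(np.pi * X) * np.sin(np.pi * Y)        # manufactured solution samples
kappa_true = 2.0
F = (2.0 * np.pi**2 * kappa_true + 1.0) * U      # manufactured source term
# Five-point finite-difference Laplacian on interior nodes
L = (U[2:, 1:-1] + U[:-2, 1:-1] + U[1:-1, 2:] + U[1:-1, :-2]
     - 4.0 * U[1:-1, 1:-1]) / h**2
resid = F[1:-1, 1:-1] - U[1:-1, 1:-1]            # equals -kappa * Laplacian(u)
kappa = -np.sum(resid * L) / np.sum(L * L)       # least-squares estimate of kappa
```

Replacing the single constant by a tensor-product basis expansion of κ(x, y) turns the same residual minimization into the quadratic program in the coefficients κ_ij described above.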


Table 2: Recovered parameter values for Example 1. True values are λ = (3, 5, 2, 0).

Figure 2: The numerical solution at 511 points for Example 2.