Identification of Flexural Rigidity in a Kirchhoff Plates Model Using a Convex Objective and Continuous Newton Method

This work provides a detailed theoretical and numerical study of the inverse problem of identifying the flexural rigidity in Kirchhoff plate models. From a mathematical standpoint, this inverse problem requires estimating a variable coefficient in a fourth-order boundary value problem. This inverse problem and related estimation problems associated with general plate and shell models have been investigated by numerous researchers through an optimization framework using the output least-squares (OLS) formulation. OLS yields a nonconvex framework and hence it is suitable for investigating only the local behavior of the solution. In this work, we propose a new convex framework for the inverse problem of identifying a variable parameter in a fourth-order boundary value problem. Existence results, optimality conditions, and discretization issues are discussed in detail. The discrete inverse problem is solved by using a continuous Newton method. Numerical results show the feasibility of the proposed framework.


Introduction
Let Ω be a nonempty bounded domain in R² with a sufficiently smooth boundary Γ. We consider the following fourth-order elliptic boundary value problem (BVP):

Δ(a Δu) = f in Ω, (1a)
u = 0 on Γ, (1b)
∂u/∂n = 0 on Γ, (1c)

where ∂/∂n is the usual normal derivative. The above BVP models the pure bending (without twisting) of a Kirchhoff plate occupying the region Ω. Here, the parameter a = a(x, y) is the flexural rigidity and the solution u is the lateral deflection under the force f per unit area (see [1] for details). The parameter a is connected to the thickness of the plate, Young's modulus of elasticity, and the Poisson ratio of the material. Boundary conditions (1b) and (1c) are the so-called clamped boundary conditions. However, all the results given below can easily be transferred to the pinned boundary conditions given by

u = Δu = 0 on Γ. (2)

Moreover, the proposed framework can be extended to more general plate models [2]. Given the force f and the flexural rigidity a, the direct problem is to find the lateral deflection u. In this work, our objective is to identify the flexural rigidity a(x, y) so that the corresponding deflection u is, in a certain sense, the nearest to a deflection measurement z. This inverse problem and its dynamic analogue have been explored mostly through the output least-squares formulation. In the present context, the output least-squares (OLS) approach seeks to minimize the functional

J_OLS(a) := (1/2) ‖u(a) − z‖², (4)

defined by an appropriate norm. Here, z is the data (a measurement of u) and u(a) is the unique solution of the weak form of (1a), (1b), and (1c) that corresponds to the flexural rigidity a(x, y).
One of the major difficulties associated with the OLS approach is the fact that the coefficient-to-solution map a ↦ u(a) is nonlinear and hence, in general, the OLS functional is nonconvex. Consequently, the OLS-based approach only allows the study of local properties of the inverse problem. Moreover, the numerical algorithms designed through the necessary optimality conditions can only ensure convergence to a local optimizer, and such conditions are not sufficient optimality conditions.
In the following, we give a quick overview of some related works. In an interesting paper, Lesnic et al. [3] investigate the coefficient identification problem in a fourth-order differential equation that governs the deflection of the Euler-Bernoulli beam and give useful stability estimates. An optimization-based approach for the same problem was recently given in [4]. In a recent paper, Marinov and Marinova [5] developed a numerical method, using the so-called method of variational embedding, for solving an inverse problem of bending stiffness estimation in a Kirchhoff-Love plate from overdetermined data. Ewing et al. [6] study a nonlinear analogue of this problem using the mixed finite element method.
Manservisi and Gunzburger [7] studied a control problem associated with a plate model and gave interesting applications to a simplified model for the "sag bending process" in the manufacturing of automobile windscreens. Inspired by this inverse problem and its useful application, Engl and Kügler [8] and Kügler [9] presented a regularized Landweber method for its numerical solution. Salazar and Westbrook [10] studied the identification of the flexural rigidity of a nonhomogeneous plate of uniform thickness under the assumption that Poisson's ratio is constant and only the Young modulus varies pointwise. Lopes et al. [11] used identification techniques to detect holes in plates. In a related work, Kim and Kreider [12] conducted a numerical study of the inverse problem of parameter identification in nonlinear elastic and viscoelastic plates. They presented numerical results for a variety of constitutive models. In an intriguing paper, Yuan and Yamamoto [13] studied Lipschitz stability estimates for the inverse problems of determining spatially varying Lamé coefficients and of determining the mass density from a finite number of boundary observations. This inverse problem for the case of interior measurement was studied in [14][15][16]. A systematic study of inverse problems of parameter identification in plate and shell models was conducted in a series of excellent papers by White [2,17,18], where the author investigates various aspects of the inverse problem. An overview of various approaches for parameter identification problems in inverse problems can be found in [19].
In this work, our primary objective is to employ a new convex functional for the identification of the flexural rigidity a(x, y). The convexity of the new functional circumvents one of the major drawbacks, nonconvexity, of the OLS functional. Furthermore, the proposed convex objective functional characterizes global optimality, and the associated variational inequality is a necessary as well as a sufficient optimality condition. The new objective functional also has a computational advantage: to compute its gradient, one does not need to compute the derivative of the coefficient-to-solution map.
This paper is divided into six sections. Section 2 begins with the essential background material and continues with the introduction of the related minimization problem, which is subsequently shown to have a solution. The problem is discretized using finite elements in Section 3, and it is shown that the continuous minimization problem can be approximated by the discrete one. A description of the continuous Newton method is given in Section 4, and the details of numerical experiments and their results are provided in Section 5. The paper concludes with some remarks concerning the outlined approach and other related issues.

Modified Output Least-Squares (MOLS)
The variational formulation of (1a), (1b), and (1c) plays an important role in formulating the MOLS approach. Assuming that the feasible parameters a = a(x, y) are in L∞(Ω), that f ∈ L²(Ω), and taking V := H₀²(Ω), the weak form of (1a), (1b), and (1c) reads: find u ∈ V such that

∫_Ω a Δu Δv = ∫_Ω f v for every v ∈ V. (5)

In the following, we take K := {a ∈ L∞(Ω) | 0 < ν ≤ a(x, y)}, for a fixed constant ν, to be the set of admissible coefficients. A variety of other conditions can be imposed on the feasible coefficients a(x, y) so that the following two inequalities hold for some constants β > 0 and α > 0:

∫_Ω a Δu Δv ≤ β ‖a‖_{L∞(Ω)} ‖u‖_{H²(Ω)} ‖v‖_{H²(Ω)} for all u, v ∈ V, (6)

∫_Ω a Δu Δu ≥ α ‖u‖²_{H²(Ω)} for all u ∈ V, a ∈ K. (7)

Motivated by [20, 21], we propose the following objective functional to identify the flexural rigidity:

J(a) := (1/2) ∫_Ω a Δ(u(a) − z) Δ(u(a) − z), (8)

where z is the measured data and u(a) solves the weak form (5).
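To make the contrast between the two objectives concrete, the following sketch uses a hypothetical toy discretization (not the plate model itself): a stiffness matrix of the assumed form K(a) = Σ_k a_k B_k with symmetric positive definite blocks B_k, a discrete weak form K(a)u = f, and the MOLS energy-norm misfit evaluated alongside the plain OLS misfit.

```python
import numpy as np

# Toy stand-in for the discrete weak form (5): K(a) = sum_k a_k * B_k,
# with SPD blocks B_k playing the role of element stiffness contributions.
rng = np.random.default_rng(0)
n, m = 6, 3                      # state dimension, number of coefficient dofs
Bs = [(lambda M: M @ M.T)(rng.standard_normal((n, n))) for _ in range(m)]

def K(a):
    return sum(a_k * B_k for a_k, B_k in zip(a, Bs))

f = rng.standard_normal(n)

def u(a):
    # direct problem: solve the discrete weak form for the deflection
    return np.linalg.solve(K(a), f)

a_true = np.array([1.0, 2.0, 1.5])   # "exact" coefficient
z = u(a_true)                        # noise-free synthetic measurement

def mols(a):
    # MOLS: misfit measured in the coefficient-weighted energy norm
    r = u(a) - z
    return 0.5 * r @ K(a) @ r

def ols(a):
    # OLS: plain least-squares misfit
    r = u(a) - z
    return 0.5 * r @ r

a_guess = np.array([1.3, 1.7, 1.2])
print(mols(a_guess), ols(a_guess))   # both nonnegative, both vanish at a_true
```

The only structural facts this sketch borrows from the text are that the stiffness operator depends linearly on the coefficient and is positive definite for admissible coefficients, mirroring the coercivity and boundedness inequalities.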
To give optimality conditions for the above minimization problem using (8) and to design derivative-based numerical schemes, we need to compute the derivatives of J(·). Since J depends on u(a), the derivatives of the solution map are instrumental in evaluating the derivatives of the functional J. For this, we give the following result.

Lemma 1. For each a in the interior of K, u is infinitely differentiable at a. Given u = u(a), the first derivative δu = Du(a)(δa) is the unique solution of the variational equation

∫_Ω a Δδu Δv = −∫_Ω δa Δu Δv for every v ∈ V, (9)

and the second derivative δ²u = D²u(a)(δa₁, δa₂) is the unique solution of the variational equation

∫_Ω a Δδ²u Δv = −∫_Ω δa₁ Δδu₂ Δv − ∫_Ω δa₂ Δδu₁ Δv for every v ∈ V, (10)

where δuᵢ = Du(a)(δaᵢ), i = 1, 2.

Proof. A direct proof relying on (6) and (7) can be extracted from [20]. A proof based on the implicit function theorem can be found in [2].
The following result gives derivative formulas for (8).

Theorem 2. Let a be an arbitrary element in the interior of K. Then the following hold:

(1) The first derivative of J is given by

DJ(a)(δa) = −(1/2) ∫_Ω δa Δ(u(a) + z) Δ(u(a) − z). (11)

(2) The second derivative of J is given by

D²J(a)(δa₁, δa₂) = ∫_Ω a Δδu₁ Δδu₂. (12)

Proof. The first derivative of J is a direct consequence of the chain rule:

DJ(a)(δa) = (1/2) ∫_Ω δa Δ(u − z) Δ(u − z) + ∫_Ω a Δδu Δ(u − z).

We note that, from (9), we have

∫_Ω a Δδu Δ(u − z) = −∫_Ω δa Δu Δ(u − z),

and hence

DJ(a)(δa) = (1/2) ∫_Ω δa Δ(u − z) Δ(u − z) − ∫_Ω δa Δu Δ(u − z) = −(1/2) ∫_Ω δa Δ(u + z) Δ(u − z).

It now follows, by differentiating this expression once more in the direction δa₂, that

D²J(a)(δa₁, δa₂) = −(1/2) ∫_Ω δa₁ Δδu₂ Δ(u − z) − (1/2) ∫_Ω δa₁ Δ(u + z) Δδu₂ = −∫_Ω δa₁ Δu Δδu₂ = ∫_Ω a Δδu₁ Δδu₂,

where in the last step we applied (9) again.
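The practical payoff of the first-derivative formula is that the gradient of the MOLS functional needs only the already-computed state u(a) and the data z, with no extra linearized solves. On the same kind of hypothetical toy system as above (K(a) = Σ_k a_k B_k is an assumption of the sketch, not the actual plate discretization), the chain rule combined with the sensitivity equation (9) gives gradient components −(1/2)(u+z)ᵀB_k(u−z), which the sketch checks against central finite differences.

```python
import numpy as np

rng = np.random.default_rng(1)
n, m = 6, 3
Bs = [(lambda M: M @ M.T)(rng.standard_normal((n, n))) for _ in range(m)]
K = lambda a: sum(a_k * B_k for a_k, B_k in zip(a, Bs))
f = rng.standard_normal(n)
u = lambda a: np.linalg.solve(K(a), f)
z = u(np.array([1.0, 2.0, 1.5]))          # synthetic measurement

def J(a):                                  # discrete MOLS functional
    r = u(a) - z
    return 0.5 * r @ K(a) @ r

def grad_J(a):
    # component k is -1/2 (u+z)^T B_k (u-z): only the state solve is needed,
    # no sensitivity or adjoint solves
    ua = u(a)
    return np.array([-0.5 * (ua + z) @ B_k @ (ua - z) for B_k in Bs])

a = np.array([1.3, 1.7, 1.2])
g = grad_J(a)
eps = 1e-6
g_fd = np.array([(J(a + eps * np.eye(m)[k]) - J(a - eps * np.eye(m)[k]))
                 / (2 * eps) for k in range(m)])
print(np.max(np.abs(g - g_fd)))            # should be tiny
```

The finite-difference check is only a sanity test of the algebra; the discrete formula itself follows by the same two steps as in the proof above.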
The following result gives a useful feature of MOLS.
Theorem 3. The modified output least-squares functional J defined in (8) is convex in the interior of the set K.

Proof. In view of the form of the second derivative of J, it follows from (7) that the inequality

D²J(a)(δa, δa) = ∫_Ω a Δδu Δδu ≥ α ‖δu‖²_{H²(Ω)} ≥ 0 (17)

holds for all a in the interior of K, which proves the convexity of the functional J.
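The convexity mechanism can be seen numerically on the same hypothetical toy discretization used earlier (an assumed K(a) = Σ_k a_k B_k, not the plate code): if, as Theorem 2 indicates, the second derivative is the energy product of the sensitivities, then the discrete MOLS Hessian is SᵀK(a)S, where the columns of S solve the discrete analogue of (9), and positive semidefiniteness is immediate.

```python
import numpy as np

rng = np.random.default_rng(2)
n, m = 6, 3
Bs = [(lambda M: M @ M.T)(rng.standard_normal((n, n))) for _ in range(m)]
K = lambda a: sum(a_k * B_k for a_k, B_k in zip(a, Bs))
f = rng.standard_normal(n)

a = np.array([1.3, 1.7, 1.2])
Ka = K(a)
ua = np.linalg.solve(Ka, f)
# sensitivity equation (9), discrete form: K(a) du_k = -B_k u(a)
S = np.column_stack([np.linalg.solve(Ka, -B_k @ ua) for B_k in Bs])
# discrete analogue of D^2 J(a)(da1, da2) = energy product of sensitivities
H = S.T @ Ka @ S
lmin = np.linalg.eigvalsh(H).min()
print(lmin)   # nonnegative: H is a K(a)-weighted Gram matrix
```

Since K(a) is positive definite for admissible a, SᵀK(a)S is a Gram matrix and its eigenvalues are nonnegative regardless of the data, which is the discrete counterpart of the convexity statement.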
The inverse problem of identifying parameters is ill-posed and hence we need to regularize the MOLS functional. In the following, we assume that K is the set of admissible coefficients, which we further assume to be a closed and convex subset of A := H²(Ω). We also assume that K is in the interior of the domain of the parameter-to-solution map so that we can use the convexity of MOLS. We now consider the following regularized minimization problem: find a* ∈ K by solving

min_{a ∈ K} J_κ(a) := (1/2) ∫_Ω a Δ(u(a) − z) Δ(u(a) − z) + κ R(a), (18)

where κ > 0 is the regularization parameter, z is the data, and R(·) is a regularization map. Throughout this work, we take R(a) := ‖a‖²_{H²(Ω)}. The associated inner product is denoted by ⟨·, ·⟩.
We have the following useful information concerning a minimizer of the MOLS functional.

Mathematical Problems in Engineering
Theorem 4. For each κ > 0, the regularized problem (18) has a unique solution ã ∈ K, which is characterized by a variational inequality that is both a necessary and a sufficient optimality condition.

Proof. Since J_κ(a) ≥ 0 for every a ∈ K, we can find a minimizing sequence {a_n} ⊂ K with J_κ(a_n) → inf_{a ∈ K} J_κ(a). The inequality κ R(a_n) ≤ J_κ(a_n) confirms that the sequence {a_n} is bounded in H²(Ω). By the compact embedding of H²(Ω) into L∞(Ω), there exists a subsequence that converges weakly in H²(Ω) and strongly in L∞(Ω). By using the same notation for the subsequence, we obtain that a_n ⇀ ã ∈ K in H²(Ω) and a_n → ã in L∞(Ω). Let u_n = u(a_n) be the corresponding sequence of solutions of the weak form (5). It follows from (6) and (7) that {u_n} is uniformly bounded in H²(Ω) and consequently there exists a weakly convergent subsequence. Once again by using the same notation for the subsequence, we assume that u_n ⇀ ũ. By the definition of u_n, we have

∫_Ω a_n Δu_n Δv = ∫_Ω f v for every v ∈ V,

which, in view of the convergence properties of a_n and u_n, after passing to the limit implies that

∫_Ω ã Δũ Δv = ∫_Ω f v,

where v ∈ V is arbitrary. Since the solution of the above variational problem is unique, we deduce that ũ = u(ã). In fact, u_n converges to ũ strongly in H²(Ω): from the identity

∫_Ω a_n Δ(u_n − ũ) Δ(u_n − ũ) = ∫_Ω a_n Δu_n Δ(u_n − ũ) − ∫_Ω (a_n − ã) Δũ Δ(u_n − ũ) − ∫_Ω ã Δũ Δ(u_n − ũ),

after a rearrangement of the terms, the strong convergence follows at once from (6) and (7).
We next claim that

J(a_n) → J(ã). (26)

For this, we notice that the following identity holds due to the definitions of u_n, a_n, ũ, and ã:

J(a_n) − J(ã) = (1/2) ∫_Ω (a_n − ã) Δ(u_n − z) Δ(u_n − z) + (1/2) ∫_Ω ã [Δ(u_n − z) Δ(u_n − z) − Δ(ũ − z) Δ(ũ − z)].

For (26), we note that both terms on the right-hand side tend to zero by the strong convergence a_n → ã in L∞(Ω) and u_n → ũ in H²(Ω). Finally, we have

J_κ(ã) = J(ã) + κ R(ã) ≤ lim inf_n [J(a_n) + κ R(a_n)] = inf_{a ∈ K} J_κ(a),

where we used (26) and the weak lower semicontinuity of ‖ · ‖_{H²(Ω)}. Hence ã is a minimizer of (18).
The uniqueness follows from the fact that the regularizer is strictly convex. Finally, since J_κ(a) is a convex functional, the variational inequality

⟨DJ_κ(ã), a − ã⟩ ≥ 0 for every a ∈ K

is a necessary and sufficient optimality condition. The proof is complete.
The continuous problem (18) has to be discretized to obtain a numerical solution. In this work, we use a finite element discretization on a triangulation {T_h} of Ω. We choose A_n to be a finite-dimensional subspace of the parameter space A := H²(Ω) relative to T_h. Analogously, we assume that {V_n} is a sequence of finite-dimensional subspaces of V and, for each n, P_n : V → V_n is a projection operator. We suppose that A_n and V_n have the property that, for every v ∈ V,

‖P_n v − v‖_V → 0 as n → ∞, (29)

and that the analogous property holds for A_n. We define K_n = A_n ∩ K and assume that

K_n ≠ ∅ for every n. (30)

To keep the structure of the finite-dimensional sets of admissible coefficients quite general, and also to allow other regularizers, we impose the approximation properties given by the following assumption: for any a ∈ K, there exists a sequence {a_n} such that

a_n ∈ K_n for every n, (31a)
a_n → a in L∞(Ω), (31b)
R(a_n) → R(a). (31c)
The above properties are analogues of (29) for the set of admissible coefficients.
We now define J^n : K_n → R as the following discrete analogue of (8):

J^n(a) := (1/2) ∫_Ω a Δ(u_n(a) − P_n z) Δ(u_n(a) − P_n z), (32)

where u_n(a) is the solution of the following discrete variational problem: find u_n ∈ V_n such that

∫_Ω a Δu_n Δv_n = ∫_Ω f v_n for every v_n ∈ V_n. (33)

Here, the data z is replaced by the projected data P_n z for computational convenience. Since P_n z converges to z strongly in V, all of the following results hold irrespective of whether z or P_n z is used in the definition of J^n. We then define the following regularized analogue of the above discrete MOLS functional and solve

min_{a ∈ K_n} J^n_κ(a) := J^n(a) + κ R(a). (35)

The solvability of the above regularized problems can easily be proved by the arguments used above.
We can now prove the following convergence result.
Theorem 5. Assume that, for each n, a*_n ∈ K_n is a solution of (35). Then, any weak H²(Ω) limit point of the sequence {a*_n} is a solution of (18).

Proof. By (30), there exists a constant c such that J^n_κ(a*_n) ≤ c for all n. Therefore, the sequence {a*_n} is bounded in H²(Ω) and hence it has a subsequence that converges weakly in H²(Ω) and strongly in L∞(Ω). For simplicity of notation, we assume that a*_n ⇀ a* ∈ K in H²(Ω) and a*_n → a* in L∞(Ω). We will show that a* is a solution of (18). Let u_n = u_n(a*_n) be the corresponding solution of (33). Evidently, {u_n} is bounded and hence there is a subsequence that converges weakly to some u* in V. We claim that u* = u(a*). In view of the definition of u_n, we have

∫_Ω a*_n Δu_n Δv_n = ∫_Ω f v_n for every v_n ∈ V_n.

Let v ∈ V be arbitrary. We set v_n = P_n v in the above identity to get

∫_Ω a*_n Δu_n Δ(P_n v) = ∫_Ω f P_n v,

which after a rearrangement of terms and passing to the limit n → ∞ implies that

∫_Ω a* Δu* Δv = ∫_Ω f v,

and because v was chosen arbitrarily, we obtain that u* solves (5). That is, u* = u(a*). By using the arguments given in Theorem 4, we can also show that u_n → u* strongly in H²(Ω), which, in view of the lower semicontinuity of R, yields J_κ(a*) ≤ lim inf_n J^n_κ(a*_n). Now consider an arbitrary a ∈ K. By (31a), (31b), and (31c), there exists a sequence {a_n} such that a_n ∈ K_n for all n, a_n → a in L∞(Ω), and R(a_n) → R(a). This, however, implies that lim_n J^n_κ(a_n) = J_κ(a), and hence, by the optimality of a*_n, we have

J_κ(a*) ≤ lim inf_n J^n_κ(a*_n) ≤ lim_n J^n_κ(a_n) = J_κ(a).

Since a ∈ K was arbitrary, this shows that a* is a solution of (18), which completes the proof.
We now record the algebraic form of the discrete problem. Let {φ_i} be a basis of V_n, let {ψ_k} be a basis of A_n, and identify a ∈ A_n and u_n(a) ∈ V_n with their coefficient vectors. Then, the stiffness matrix K(a) and the load vector F are given by

K(a)_{ij} = ∫_Ω a Δφ_i Δφ_j, F_i = ∫_Ω f φ_i.

We also define, for w ∈ V_n, the matrix B(w) by the following condition:

(B(w))_{ik} = ∫_Ω ψ_k Δw Δφ_i.

Using the above notation, with U solving K(a)U = F and Z the coefficient vector of P_n z, the following formulas can be verified:

J^n(a) = (1/2) (U − Z)ᵀ K(a) (U − Z), ∇J^n(a) = −(1/2) B(u_n(a) + P_n z)ᵀ (U − Z). (46)

Continuous Newton Method
After the discretization, the finite-dimensional minimization problem can be solved by any known optimization solver. In this paper, we will employ the continuous Newton method recently proposed by Zhang et al. [22]. Continuous methods solve an optimization problem by following the solution trajectory of a system of ordinary differential equations. Although numerous authors have focused on continuous optimization methods for solving unconstrained and constrained optimization problems, most of these works have been of a theoretical nature. Recently, however, many researchers have advocated the usefulness of continuous methods for solving large-scale optimization problems and have given significant numerical evidence of the competitiveness of these methods with conventional optimization methods (see [23][24][25][26]).
In this work, we will use the continuous Newton method [22] to solve the finite-dimensional optimization problems given by the regularized MOLS and OLS. For simplicity, the side constraints will be ignored. Zhang et al. [22] propose following the trajectory given by the following initial value problem:

da/dt = −[ρ(a) ∇²J(a)⁻¹ ∇J(a) + (1 − ρ(a)) ∇J(a)], a(0) = a₀, (47)

where the weight ρ(a) ∈ [0, 1] satisfies ρ(a) = 0 when λ_min(a) ≤ τ₁, ρ(a) = 1 when λ_min(a) ≥ τ₂, and ρ(a) increases from 0 to 1 as λ_min(a) moves across the interval (τ₁, τ₂). Here, J corresponds to either the regularized MOLS or the regularized OLS, the minimum eigenvalue of the Hessian of J(a) is denoted by λ_min(a), and τ₂ > τ₁ > 0. The idea of the proposed scheme is to combine the steepest-descent and Newton directions through a convex combination. It involves a simple yet efficient strategy to get around the potential singularities of the Hessian. To be specific, when λ_min(a) is between τ₁ and τ₂, if its value is closer to τ₂, then ρ(a) will be larger, in turn putting more emphasis on the Newton direction. On the other hand, if λ_min(a) is closer to τ₁, then 1 − ρ(a) will be larger, putting more weight on the gradient direction. The approach is particularly appealing, as one of the main challenges in nonlinear inverse problems is the ill-posedness of the Hessian. Moreover, since the continuous Newton method takes into account the behaviour of the Hessian by examining its minimum eigenvalue, we also gain insight into the relative convexity (or nonconvexity) of the OLS and MOLS functionals. Finally, this approach places less emphasis on computing an optimal regularization parameter, a difficult task in inverse problems.
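The trajectory-following idea described above can be sketched as follows. The details are assumptions of the sketch, not the exact scheme of [22]: a linear blending weight between the thresholds, arbitrarily chosen thresholds τ₁ and τ₂, and the Rosenbrock function standing in for the regularized objective because it exhibits both convex and nonconvex regions.

```python
import numpy as np
from scipy.integrate import solve_ivp

tau1, tau2 = 1e-3, 1.0        # assumed thresholds, 0 < tau1 < tau2

def grad_hess(x):
    # Rosenbrock function as a stand-in objective with a nonconvex region
    g = np.array([-400 * x[0] * (x[1] - x[0] ** 2) - 2 * (1 - x[0]),
                  200 * (x[1] - x[0] ** 2)])
    H = np.array([[1200 * x[0] ** 2 - 400 * x[1] + 2, -400 * x[0]],
                  [-400 * x[0], 200.0]])
    return g, H

def rho(lmin):
    # weight on the Newton direction: 0 at/below tau1, 1 at/above tau2
    if lmin <= tau1:
        return 0.0
    if lmin >= tau2:
        return 1.0
    return (lmin - tau1) / (tau2 - tau1)

def rhs(t, x):
    g, H = grad_hess(x)
    r = rho(np.linalg.eigvalsh(H).min())
    if r == 0.0:
        return -g          # steepest-descent trajectory; no Hessian solve
    # convex combination of Newton and steepest-descent directions
    return -(r * np.linalg.solve(H, g) + (1.0 - r) * g)

# LSODA switches to a stiff integrator when the blended flow becomes stiff
sol = solve_ivp(rhs, [0.0, 100.0], np.array([-1.2, 1.0]), method="LSODA")
x_final = sol.y[:, -1]
print(x_final)   # should approach the minimizer (1, 1)
```

Note that the Hessian is inverted only when its minimum eigenvalue is safely positive, which is precisely the safeguard against singular Hessians discussed above.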

Numerical Experiments
In this section, we present the results of numerical experiments designed to gauge the preliminary effectiveness of the MOLS approach when coupled with the continuous Newton method and applied to the flexural rigidity inverse problem.

Experiment Setup.
We consider first an example boundary value problem (Example 1) derived from (1a), (1b), and (1c), where the exact solution u and the exact parameter a are prescribed in closed form and the force f(x, y) is subsequently defined by (50). The domain Ω is taken as the unit square, Ω = (0, 1) × (0, 1), with the boundary Γ given by the square's outside edges. Additionally, we considered a similar example (Example 2), with the recovery of the same parameter as in Example 1, but on a simply supported circular plate (with appropriate changes to the forward problem applied).
Finally, we considered an example (Example 3), again on the unit square, but with the recovery of a different parameter and with more complicated mixed boundary conditions (i.e., clamped along the top and bottom edges and simply supported along the remaining edges).

Algorithm Details.
The optimization was performed using the continuous Newton method outlined in the previous section, applied to both the discretized MOLS and OLS functionals. Additionally, for means of comparison, the optimization was also performed using a conjugate gradient trust-region-reflective method (MATLAB's fminunc) with similar stopping criteria. In all examples, H¹ seminorm regularization was used for computational simplicity, with a fixed regularization parameter used per example (see Table 1 for values).
In the continuous Newton algorithm, the Lanczos iteration was used to compute/estimate the minimum eigenvalue of ∇²J, and in those cases where the Lanczos method failed to converge in the given (limited) number of iterations, the method defaulted to the steepest-descent trajectory. The resulting ordinary differential equation was solved using MATLAB's ode113 solver with default parameters. The ODE was solved over the time domain [0, T], with T = 100 for MOLS and T = 1000 for OLS. The additional continuous Newton algorithmic parameters were fixed for each example (again, see Table 1 for values).
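The eigenvalue estimation step, with its fallback to steepest descent on non-convergence, can be sketched in SciPy, whose eigsh wraps an implicitly restarted Lanczos iteration (ARPACK). The iteration cap of 50 is an assumption of the sketch, not the paper's setting.

```python
import numpy as np
from scipy.sparse.linalg import eigsh, ArpackNoConvergence

def min_eigenvalue(H, maxiter=50):
    """Estimate lambda_min(H) with a capped Lanczos iteration; return None
    on non-convergence, signalling a steepest-descent fallback."""
    try:
        vals = eigsh(H, k=1, which="SA", maxiter=maxiter,
                     return_eigenvectors=False)   # "SA": smallest algebraic
        return float(vals[0])
    except ArpackNoConvergence:
        return None

# small symmetric test matrix whose minimum eigenvalue is 0.5
H = np.diag([0.5, 2.0, 3.0, 4.0, 5.0, 10.0])
lmin = min_eigenvalue(H)
print(lmin)
```

A caller would treat a None result exactly as the text describes: skip the Hessian solve for that step and follow the steepest-descent trajectory instead.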

Numerical Results
Figures 1-3, respectively, show the computed parameter at several algorithm steps for the OLS and MOLS approaches using both the continuous Newton and fminunc algorithms. These figures also show the final output of each algorithm and the error between the computed and exact parameters.
In Figure 4, we compare the minimum eigenvalues of the Hessians of the MOLS and OLS functionals over a number of continuous Newton iterations applied to Example 1. Here, we see that, for the OLS method, the algorithm transitions to both the "mixed" step direction and, when the minimum eigenvalue becomes exceedingly small or negative, the steepest-descent direction. We also note that, for the same regularization parameter, the MOLS method uses the full Newton direction. The minimum and maximum of λ_min, along with the L² error and the total number of algorithm iterations for all examples, are provided in Table 1.

Discussion
The results of the numerical experiments suggest the following: (i) The MOLS functional converges faster than the OLS functional (Table 1) for the flexural rigidity inverse problem under similar conditions (i.e., similar regularization parameters and optimization algorithm). (ii) In at least one instance of the MOLS approach (see Example 2 in Table 1), the continuous Newton method shows accelerated convergence over the fminunc algorithm.
(iii) The eigenvalues of the Hessian of the MOLS functional remain strictly positive, whereas the minimum eigenvalues of the Hessian of the OLS functional can become negative after a number of algorithm iterations (see both Table 1 and Figure 4). (iv) The more complicated domains and boundary conditions in Examples 2 and 3 have a negative impact on algorithm performance and on the accuracy of the recovered flexural rigidity. As can be seen in Table 1, this is particularly true for the OLS approach. (v) Overall, for either the MOLS or the OLS approach, the error in the recovered coefficient is most prominent along the boundaries, with significantly better reconstruction in the domain's interior. Again, the OLS approach in particular seems to have the most difficulty in reconstructing the flexural rigidity along the boundaries.

Concluding Remarks
Further directions for research are as follows: (i) A thorough comparison of the performance and stability differences between the OLS and MOLS approaches. (ii) A thorough comparison of differing approaches to solving the ODE system arising from the continuous Newton method, including a comparison of the relative performance and stability of calculating/estimating the eigenvalues of the Hessian of J. (iii) Comparison/exploration of differing regularization approaches. (iv) The use of mixed methods for solving the variational problem.

Figure 2: Example 2: MOLS, continuous Newton; computed A at steps 2, 7, and 13.