CONTINUOUS DEPENDENCE ON DATA FOR QUASIAUTONOMOUS NONLINEAR BOUNDARY VALUE PROBLEMS

We devote this paper to quasiautonomous second-order differential equations in Hilbert spaces governed by maximal monotone operators, with associated bilocal (two-point) boundary conditions. We discuss the continuous dependence of the solution both on the operator and on the boundary values, using methods of nonlinear analysis. Some applications to internal approximation schemes are given.


Introduction
The main purpose of this paper is to prove the continuous dependence on A, a, b, f of the solution of the second-order evolution equation

p(t)u''(t) + r(t)u'(t) ∈ Au(t) + f(t), a.e. t ∈ (0,T), (1.1)

subject to the two-point boundary condition

u(0) = a, u(T) = b. (1.2)

Here A : D(A) ⊆ H → H is a maximal monotone operator (possibly multivalued) in a real Hilbert space H, D(A) is its domain, a, b ∈ D(A), f ∈ L^2(0,T;H), and p, r are two continuous functions from [0,T] to R. In [10, 11], Barbu proved the existence of the solution in the case p ≡ 1, r ≡ 0. The author considered the boundary value problems

u''(t) ∈ Au(t) + f(t), a.e. t ∈ (0,T), (1.4)

together with the associated semigroup S_{1/2}(t), whose generator is -A^{1/2}, where A^{1/2} is the unique extension of A_0^{1/2} to a maximal monotone operator. The operator A^{1/2} is called the square root of A. Regularity properties of S_{1/2}(t) are presented in [10, 11, 13].
In [5], it is shown that the map which associates with {A, a, b} the unique solution u of (1.3) with f ≡ 0 is continuous in the following sense. Consider the boundary value problem (1.3) (with f ≡ 0) and the sequence of problems

u_n''(t) ∈ A_n u_n(t), a.e. t ∈ (0,T), u_n(0) = a_n, u_n(T) = b_n. (1.7)

If a_n → a, b_n → b in H and

(I + λA_n)^{-1} ξ → (I + λA)^{-1} ξ as n → ∞, (1.8)

for all ξ ∈ H and for all λ > 0, then the solution u_n of (1.7) converges to the solution u of (1.3) (with f ≡ 0), uniformly on [0,T].
In [6], we have a similar result on (0,∞). The case of first-order differential equations is analyzed in [7, 15]. The continuous dependence on data for the antiperiodic solutions to a class of second-order evolution equations with constant coefficients is given in [3].

N. Apreutesei 69
If A_n and A satisfy condition (1.8), we say that A_n converges to A in the sense of the resolvent. This and other types of convergence of sequences of operators can be found in [8]. They are of physical interest because of their applications in homogenization theory, singular perturbation problems, convergence problems in optimal control, stochastic optimization, and so forth. In [22], the authors show that, in Banach spaces with some specific properties, the convergence in the sense of the resolvent of a sequence (A_n) to A implies the convergence of (A_n^{1/2}) to A^{1/2} in the same sense.

In the present paper, we prove that the unique solution u of problem (1.1)-(1.2) depends continuously, in the strong sense, on the data A, a, b, f. More exactly, we take the sequence of evolution equations

p(t)u_n''(t) + r(t)u_n'(t) ∈ A_n u_n(t) + f_n(t), a.e. t ∈ (0,T), (1.9)

subject to the boundary conditions

u_n(0) = a_n, u_n(T) = b_n. (1.10)

Using an idea from [1, 2], in the next sections one works in the weighted space ᏸ = L^2_{r/p}(0,T;H), that is, L^2(0,T;H) endowed with the scalar product

⟨u,v⟩_ᏸ = ∫_0^T ρ(t)(u(t),v(t)) dt

and the corresponding norm |u|_ᏸ = ⟨u,u⟩_ᏸ^{1/2}, where ρ is a positive continuous weight determined by the ratio r/p, and (•,•) and ‖•‖ are the scalar product and the norm of H, respectively. Actually, the spaces L^2(0,T;H) and ᏸ contain the same functions and have equivalent norms. The difference between them is that the operator B from (1.14), Bu = -p u'' - r u' on D(B) = {u ∈ W^{2,2}(0,T;H) : u(0) = a, u(T) = b}, is maximal monotone only in ᏸ (see [1]). Taking into account this remark, we may write (1.1) in the equivalent form

p(t)e^{-R(t)}(e^{R(t)}u'(t))' ∈ Au(t) + f(t), a.e. t ∈ (0,T), where R(t) = ∫_0^t (r(s)/p(s)) ds. (1.15)

In Section 2, we recall some definitions and results from the theory of maximal monotone operators. The main result is stated in Section 3 and proved in Section 4. The proof combines an idea related to the case p ≡ 1, r ≡ 0, f ≡ 0 (see [5]) with some techniques from the existence theory (see [1, 2]). In the last section, we give a numerical approximation of (1.1)-(1.2) with f ≡ 0 by an internal approximating scheme (see [9]).
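The role of the weight can be seen numerically. The sketch below is an illustration only: the functions p, r, the finite-difference discretization, and the explicit weight ρ(t) = exp(∫_0^t r/p ds)/p(t) are our assumptions, not formulas taken from the paper. It shows that a discretization of Bu = -(p u'' + r u') with homogeneous Dirichlet conditions is noticeably asymmetric in the unweighted inner product, but nearly symmetric in the ρ-weighted one.

```python
import numpy as np

# Illustration (assumed data: p, r chosen arbitrarily with p > 0; standard
# central finite differences).  Discretize  B u = -(p u'' + r u')  on (0, 1)
# with u(0) = u(1) = 0 and compare its symmetry defect in the plain inner
# product and in the weighted one  <u, v>_rho = sum_i rho_i u_i v_i,
# with the assumed weight  rho(t) = exp(int_0^t r/p) / p(t).

n = 200
h = 1.0 / (n + 1)
t = np.linspace(h, 1.0 - h, n)            # interior grid points
p = 2.0 + np.sin(t)                       # p(t) > 0
r = 1.0 + t                               # r(t)

# M u ~ -(p u'' + r u') with central differences
M = np.zeros((n, n))
for i in range(n):
    if i > 0:
        M[i, i - 1] = -p[i] / h**2 + r[i] / (2 * h)
    M[i, i] = 2 * p[i] / h**2
    if i < n - 1:
        M[i, i + 1] = -p[i] / h**2 - r[i] / (2 * h)

# weight rho(t): the integral of r/p is computed by the trapezoidal rule
g = r / p
R = np.concatenate(([0.0], np.cumsum(0.5 * h * (g[1:] + g[:-1]))))
rho = np.exp(R) / p
W = np.diag(rho)

def rel_asym(K):
    """Relative symmetry defect ||K - K^T|| / ||K|| (Frobenius norms)."""
    return np.linalg.norm(K - K.T) / np.linalg.norm(K)

print("plain   :", rel_asym(M))
print("weighted:", rel_asym(W @ M))        # much closer to symmetric
```

The weighted matrix is symmetric up to discretization error, which mirrors the claim that B becomes (maximal) monotone once the right inner product is used.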

Preliminaries
Throughout this paper, H is a real Hilbert space with norm ‖•‖ and scalar product (•,•). Denote by "→" and "⇀" the strong and the weak convergence in all the involved spaces, respectively.
The nonlinear multivalued operator A with domain D(A) and range R(A) is said to be monotone if (y_1 - y_2, x_1 - x_2) ≥ 0 for all y_i ∈ Ax_i, x_i ∈ D(A), i = 1, 2. The operator A is called maximal monotone if it is monotone and has no proper monotone extension. It is known that A is maximal monotone if and only if A is monotone and R(A + λI) = H for all λ > 0 (or, equivalently, for some λ > 0) (see [13, Theorem 1.2, page 39]). For x ∈ D(A), let A^0 x be the element of least norm in Ax, that is,

‖A^0 x‖ = inf{‖y‖ : y ∈ Ax}. (2.1)

The single-valued operator A^0 which associates with each x ∈ D(A) the element A^0 x is called the minimal section of A. For every maximal monotone operator A, one can define the resolvent J_λ and the Yosida approximation A_λ of A, namely,

J_λ = (I + λA)^{-1}, A_λ = (1/λ)(I - J_λ), λ > 0. (2.2)

The realization of A in L^2(0,T;H) is the operator Ꮽ given by

Ꮽu = {v ∈ L^2(0,T;H) : v(t) ∈ Au(t) a.e. on (0,T)}. (2.4)

If A is maximal monotone in H, then Ꮽ is maximal monotone in L^2(0,T;H). If A_λ and Ꮽ_λ are the Yosida approximations of A and Ꮽ, respectively, then (Ꮽ_λ u)(t) = A_λ u(t) for all λ > 0, a.e. t ∈ [0,T], for u ∈ L^2(0,T;H).

Definition 2.1. A sequence {A_n} of maximal monotone operators in H is said to be convergent to A in the sense of the resolvent if

(I + λA_n)^{-1} ξ → (I + λA)^{-1} ξ as n → ∞, for all λ > 0 and all ξ ∈ H. (2.5)
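As a concrete illustration of the resolvent and the Yosida approximation (not part of the paper: the operator A(x) = x^3 on H = R and the bisection solver are our own choices), the following sketch computes J_λ x by solving y + λ y^3 = x and checks two standard facts: J_λ is nonexpansive, and A_λ x → Ax as λ → 0.

```python
# Illustration (assumed example): resolvent and Yosida approximation of the
# monotone, single-valued operator A(x) = x**3 on H = R.

def A(x):
    return x ** 3

def resolvent(x, lam, tol=1e-12):
    """J_lam(x): the unique y with y + lam*A(y) = x, found by bisection
    (y + lam*y**3 is strictly increasing, so the root is bracketed)."""
    lo, hi = -abs(x) - 1.0, abs(x) + 1.0
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if mid + lam * A(mid) < x:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

def yosida(x, lam):
    """A_lam(x) = (x - J_lam(x)) / lam, a single-valued monotone
    approximation of A, defined on all of H."""
    return (x - resolvent(x, lam)) / lam

# J_lam is nonexpansive: |J(x1) - J(x2)| <= |x1 - x2|
x1, x2, lam = 2.0, -1.0, 0.5
assert abs(resolvent(x1, lam) - resolvent(x2, lam)) <= abs(x1 - x2)

# A_lam(x) -> A(x) as lam -> 0 (here x = 1.2, A(1.2) = 1.728)
for lam in (1.0, 0.1, 0.01, 0.001):
    print(lam, yosida(1.2, lam))
```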

The following characterization of the convergence in the sense of the resolvent is true even in reflexive Banach spaces (see [8]): A_n converges to A in the sense of the resolvent if and only if A_n converges to A in the sense of graphs (written A_n →^G A as n → ∞), that is, for all x ∈ D(A) and for all y ∈ Ax, there exist x_n ∈ D(A_n), y_n ∈ A_n x_n such that x_n → x, y_n → y in H.
We recall now the definition of the Mosco convergence of a sequence of functions and a result concerning the equivalence between the Mosco convergence of the functions (ϕ_n) and the convergence in the sense of the resolvent of the subdifferential operators (∂ϕ_n) (see [8, Theorem 3.66, page 373]).

Definition 2.3. Let ϕ_n, ϕ : H → (-∞,∞] be convex, lower-semicontinuous, proper functions. One says that ϕ_n converges to ϕ in the sense of Mosco if
(a) for every u ∈ H, there exists a sequence u_n → u in H such that ϕ_n(u_n) → ϕ(u);
(b) for every sequence u_n ⇀ u in H, one has liminf_{n→∞} ϕ_n(u_n) ≥ ϕ(u).

Theorem 2.4. If ϕ_n, ϕ : H → (-∞,∞] are convex, lower-semicontinuous, proper functions, then the following statements are equivalent:
(i) ϕ_n → ϕ in the sense of Mosco;
(ii) ∂ϕ_n → ∂ϕ in the sense of the resolvent, that is, (I + λ∂ϕ_n)^{-1} ξ → (I + λ∂ϕ)^{-1} ξ for all λ > 0 and for all ξ ∈ H, and there exist (u_n, v_n) ∈ ∂ϕ_n and (u, v) ∈ ∂ϕ such that u_n → u, v_n → v, and ϕ_n(u_n) → ϕ(u).

The main result
Consider the boundary value problems

p(t)u''(t) + r(t)u'(t) ∈ Au(t) + f(t), a.e. t ∈ (0,T), (3.1)
u(0) = a, u(T) = b, (3.2)

and, for every n ∈ N,

p(t)u_n''(t) + r(t)u_n'(t) ∈ A_n u_n(t) + f_n(t), a.e. t ∈ (0,T), (3.3)
u_n(0) = a_n, u_n(T) = b_n. (3.4)

We now state our basic assumptions: (H1) A, A_n are nonlinear (possibly multivalued) maximal monotone operators in the real Hilbert space H, with the domains D(A) and D(A_n), respectively, and a, b ∈ D(A), a_n, b_n ∈ D(A_n). These hypotheses assure the existence and the uniqueness in W^{2,2}(0,T;H) of the solutions to problems (3.1)-(3.2) and (3.3)-(3.4), respectively. In addition, suppose that a_n → a, b_n → b in H, f_n → f in L^2(0,T;H), and

(I + λA_n)^{-1} ξ → (I + λA)^{-1} ξ as n → ∞, for all λ > 0 and all ξ ∈ H. (3.5)

The continuous dependence on data result for problem (3.1)-(3.2) may now be stated.

Theorem 3.1. Under the hypotheses above, if u and u_n are the solutions of problems (3.1)-(3.2) and (3.3)-(3.4), respectively, then u_n(t) → u(t) uniformly on [0,T] and u_n' → u' in L^2(0,T;H).

The proof of this theorem is the purpose of the next section.
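The statement of Theorem 3.1 can be visualized in the simplest single-valued linear case A_n u = c_n u with c_n ≥ 0 (a special case chosen for illustration; the finite-difference solver below is our own, not from the paper). Perturbing the data (c, a, b, f) by ε and solving the discretized two-point problem, the sup-norm distance between the perturbed and unperturbed solutions shrinks with ε:

```python
import numpy as np

# Illustration (assumed setting): A u = c*u with c >= 0 is maximal monotone
# and single-valued, so (3.1)-(3.2) reduces to the linear two-point problem
#   p u'' + r u' = c u + f  on (0, T),  u(0) = a,  u(T) = b,
# which we solve by central finite differences.

def solve_bvp(c, a, b, fval, T=1.0, n=400):
    h = T / (n + 1)
    t = np.linspace(h, T - h, n)
    p = 2.0 + np.sin(t)                  # p(t) > 0  (assumed coefficients)
    r = np.cos(t)
    # discretized operator:  p u'' + r u' - c u = f  at interior nodes
    K = np.zeros((n, n))
    rhs = np.full(n, fval, dtype=float)
    for i in range(n):
        lo = p[i] / h**2 - r[i] / (2 * h)
        hi = p[i] / h**2 + r[i] / (2 * h)
        K[i, i] = -2 * p[i] / h**2 - c
        if i > 0:
            K[i, i - 1] = lo
        else:
            rhs[i] -= lo * a             # fold u(0) = a into the rhs
        if i < n - 1:
            K[i, i + 1] = hi
        else:
            rhs[i] -= hi * b             # fold u(T) = b into the rhs
    u = np.linalg.solve(K, rhs)
    return np.concatenate(([a], u, [b]))

u = solve_bvp(c=1.0, a=0.0, b=1.0, fval=0.5)
for eps in (0.1, 0.01, 0.001):
    un = solve_bvp(c=1.0 + eps, a=eps, b=1.0 - eps, fval=0.5 + eps)
    print(eps, np.max(np.abs(un - u)))   # sup-norm distance shrinks with eps
```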
Remark 3.2. In [7, Theorem 3.2, page 62 and Proposition 3.7, page 64], the author establishes conditions under which the sum A_n + B_n converges to A + B in the sense of the resolvent. Here A_n and B_n are supposed to be maximal monotone operators convergent to A and B, respectively, in the sense of the resolvent. Theorem 3.1 above is not a consequence of these general perturbation results. Indeed, problems (3.1)-(3.2) and (3.3)-(3.4) can be written in L^2(0,T;H) as equations for the sums Ꮽ + B and Ꮽ_n + B_n, respectively, where Ꮽ, Ꮽ_n are the realizations of A, A_n in L^2(0,T;H), B is given in (1.14), and B_n is analogous to B, but with a_n, b_n instead of a, b. It is known that B, B_n are maximal monotone in L^2(0,T;H) (see [1]). Moreover, B = ∂ϕ and B_n = ∂ϕ_n, where ϕ, ϕ_n : L^2(0,T;H) → (-∞,∞] are convex, lower-semicontinuous, proper functions. We show that ϕ_n is not Mosco convergent to ϕ. To do this, consider u ∈ W^{1,2}(0,T;H) with u(T) = b and u(0) = C, where C is a constant in H, together with a suitable sequence (u_n). It is clear that u_n → u in L^2(0,T;H) and liminf_{n→∞} ϕ_n(u_n) < ϕ(u) = +∞. Thus condition (b) in Definition 2.3 is not satisfied, so ϕ_n does not converge to ϕ in the sense of Mosco. Theorem 2.4 then implies that ∂ϕ_n is not convergent to ∂ϕ in the sense of the resolvent. So Attouch's results for the convergence of the sum Ꮽ_n + B_n are not applicable here, even if Ꮽ_n → Ꮽ in L^2(0,T;H) in the sense of the resolvent.

A particular case of Theorem 3.1 is obtained by assuming that A and A_n are subdifferential mappings and replacing (H7) by the condition (H7') ϕ_n → ϕ in the sense of Mosco.
In this case, we find (in view of Theorem 2.4) the following consequence of Theorem 3.1.

The proof of Theorem 3.1
The proof of the main result combines some ideas from [1, 2, 5]. For every given λ > 0, we put

y_λ = (I + √λ A)^{-1} a,  z_λ = (I + √λ A)^{-1} b,  y_nλ = (I + √λ A_n)^{-1} a_n,  z_nλ = (I + √λ A_n)^{-1} b_n. (4.1)

By hypothesis (3.5), it follows that, for all λ > 0 and all ξ ∈ H,

(I + λA_n)^{-1} ξ → (I + λA)^{-1} ξ,  A^n_λ ξ → A_λ ξ as n → ∞, (4.2)

where A^n_λ is the Yosida approximation of A_n. Let w_λ, v_λ, w_nλ, v_nλ be the solutions of the auxiliary boundary value problems (4.3)-(4.6), respectively: w_λ and w_nλ solve the equations of (3.1) and (3.3) with boundary values y_λ, z_λ and y_nλ, z_nλ, while v_λ and v_nλ solve the same problems with A and A_n replaced by their Yosida approximations A_λ and A^n_λ. From the general theory recalled in Section 1, we know that each of these problems has a unique solution in W^{2,2}(0,T;H). For every t ∈ [0,T], λ > 0, and n ∈ N, we can write

‖u_n(t) - u(t)‖ ≤ ‖u_n(t) - w_nλ(t)‖ + ‖w_nλ(t) - v_nλ(t)‖ + ‖v_nλ(t) - v_λ(t)‖ + ‖v_λ(t) - w_λ(t)‖ + ‖w_λ(t) - u(t)‖, (4.7)

|u_n' - u'| ≤ |u_n' - w_nλ'| + |w_nλ' - v_nλ'| + |v_nλ' - v_λ'| + |v_λ' - w_λ'| + |w_λ' - u'|. (4.8)

Recall that |•| denotes the norm in L^2(0,T;H). We intend to take the superior limit as n → ∞ and then the limit as λ → 0 in both (4.7) and (4.8). In order to do this, we estimate each term in (4.7) and (4.8). One begins with some boundedness results.
Lemma 4.1. Under the hypotheses of Theorem 3.1, if w_nλ is the solution of problem (4.5), then, for every fixed λ > 0, the bounds (4.9)-(4.12) hold.

Proof. One approximates (4.5) by a regularized problem (4.13) depending on a parameter µ > 0, with solution w_nλµ. Here and everywhere below, we omit the variable t in the functions under the integrals. Without loss of generality, suppose that 0 ∈ A_n 0; otherwise, we replace A_n u by a suitable translate of it. Since A^n_µ is monotone, by the above equality and (H4) we obtain an estimate with a constant C_1. The constant C_1 and all the constants below are positive and independent of n, λ, and µ. Using (4.18) in (4.15), we find, by virtue of the boundedness of (f_n), a bound valid for small µ > 0, and hence an estimate of w_nλµ. To estimate w_nλµ'(0) and w_nλµ'(T), we write the corresponding integral identities (and analogously for w_nλµ'(T)). The last inequality, together with (4.17), leads to a bound uniform in n and µ. Denoting by B_1 the operator associated with the regularized problem, we may write (4.13) in the form (4.45), where Ꮽ^n_µ is the realization of A^n_µ in L^2(0,T;H). Taking into account the maximal monotonicity of Ꮽ^n in L^2(0,T;H) and (4.43), in order to pass to the limit in (4.45), it is enough to prove (4.46)-(4.48). Once (4.46) is proved, we may pass to the limit as µ → 0 in (4.45) and find that h_nλ ∈ D(Ꮽ^n) and -B_1 h_nλ - f_n ∈ Ꮽ^n h_nλ. Since w_nλ verifies the same equation, by uniqueness one deduces h_nλ = w_nλ. Therefore, w_nλµ' ⇀ w_nλ' and w_nλµ'' ⇀ w_nλ'' in L^2(0,T;H), w_nλµ → w_nλ in C([0,T];H), and w_nλµ'(t) → w_nλ'(t) for all t ∈ [0,T]. Now (4.9)-(4.12) follow from (4.32)-(4.34). The proof is finished.
We now give a boundedness result for the solutions (u_n) of (3.3)-(3.4).

Lemma 4.2. If the hypotheses of Theorem 3.1 are satisfied, then {u_n'(0)} and {u_n'(T)} are bounded in H, {u_n'} and {u_n''} are bounded in L^2(0,T;H), and {u_n} is bounded in C([0,T];H).

Proof. Consider the boundary value problem
Following the computations from the proof of Lemma 4.1, we get an estimate of the form (4.20), with a_n, b_n instead of y_nλ, z_nλ. Since (a_n) and (b_n) are bounded, this can be written with constants k_1, k_2, k_3 > 0 independent of n and µ.
Similarly, one obtains an inequality of the form (4.26). Hypotheses (H5) and (H6) imply the existence of some constants k_8, k_9, k_10 > 0 (independent of n and µ) bounding the corresponding terms. Next, as in (4.31), one arrives at an estimate of u_nµ'(0), and an analogous inequality holds for u_nµ'(T), with all constants independent of n and µ. This provides upper bounds for u_nµ'(0) and u_nµ'(T) and, via (4.50) and (4.52), an upper bound for u_nµ in C([0,T];H).
Now, as in the proof of the previous lemma, one shows that u_nµ' ⇀ u_n' and u_nµ'' ⇀ u_n'' in L^2(0,T;H), u_nµ → u_n in C([0,T];H), and u_nµ'(t) → u_n'(t) for t ∈ [0,T] (as µ → 0).
Using the same method, one shows that the solution (v_nλ) of (4.6) is bounded with respect to n, for any fixed λ > 0. Since (4.6) already contains the Yosida approximation A^n_λ of A_n, we avoid the new parameter µ and work directly with (4.6). One obtains estimates similar to (4.20), (4.29), and (4.32), where y_nλ, z_nλ, A^n_{√λ} a_n, A^n_{√λ} b_n are bounded with respect to n, for every given λ > 0. Indeed, by (4.2) we have the corresponding convergences.

For the second terms in (4.7) and (4.8), we can also find upper bounds with the aid of y_λ, z_λ, A_{√λ} a, A_{√λ} b.

Lemma 4.6. Suppose that the above hypotheses hold and let w_nλ, v_nλ be the solutions of the boundary value problems (4.5) and (4.6), respectively. Then w_nλ - v_nλ and w_nλ' - v_nλ' admit upper bounds expressed through y_λ, z_λ, A_{√λ} a, A_{√λ} b.

Finally, it will be established that v_nλ' - v_λ' and v_nλ - v_λ tend to 0 as n → ∞, for all λ > 0, in L^2(0,T;H) and in C([0,T];H), respectively.

Lemma 4.9. Suppose the assumptions of Theorem 3.1 are satisfied. Then, for every λ > 0, the convergences (4.67) hold. In the proof, one first obtains an inequality which, in view of the monotonicity of A^n_λ, leads to (4.69).

According to the boundedness from Lemma 4.3, this leads to an estimate valid for all λ > 0 and n ∈ N, where k_λ is independent of n. By (4.2), we infer the convergence of the corresponding terms. Using this, together with (H6) and the other convergences from (4.2), in (4.70), we find the first part of (4.67). The second limit is immediate.
The end of the proof of Theorem 3.1. We come back to (4.7) and (4.8) and apply Lemmas 4.5-4.9. Therefore, for small λ > 0, we obtain an upper bound for limsup_{n→∞} sup_{t∈[0,T]} ‖u_n(t) - u(t)‖ in which the constant c_13 > 0 is independent of λ. A similar inequality is available for limsup_{n→∞} |u_n' - u'|.
Taking into account the boundedness of A_{√λ} a and A_{√λ} b and the convergences y_λ → a, z_λ → b, we may pass to the limit as λ → 0 in the above inequality and conclude that u_n(t) → u(t) as n → ∞, uniformly on [0,T]. Analogously, u_n' → u' in L^2(0,T;H), and the proof is complete.

Internal approximations
In this section, we give a numerical approximation of the solution u of problem (5.1) by the solution u_N of an internal scheme of approximation. Suppose that H is a separable real Hilbert space, endowed with the scalar product (•,•) and the corresponding norm ‖•‖, and that p, r : [0,T] → R satisfy condition (5.2). Consider the single-valued (univoque) operator A : H → H satisfying the following assumption:

(H8) A is monotone, hemicontinuous, and everywhere defined on H.
Let {e_i}_{i=1}^∞ be an orthonormal basis in H. For any fixed positive integer N, denote by P_N the orthogonal projector given by P_N x = Σ_{i=1}^N (x, e_i) e_i for all x ∈ H, and let H_N = P_N H. It is known that P_N^2 = P_N and that P_N is selfadjoint, that is, (P_N x, y) = (x, P_N y) for all x, y ∈ H (see, e.g., [17]).
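The two stated properties of P_N can be checked directly in a finite-dimensional stand-in for H (an illustration; the basis below is an arbitrary orthonormal one generated numerically):

```python
import numpy as np

# Quick check (assumed model: H = R^8 stands in for the separable Hilbert
# space) that P_N x = sum_{i<=N} (x, e_i) e_i is idempotent and selfadjoint.

dim, N = 8, 3
rng = np.random.default_rng(0)
# an orthonormal basis {e_i}: the columns of a random orthogonal matrix
Q, _ = np.linalg.qr(rng.normal(size=(dim, dim)))

def P(x):
    """Orthogonal projection onto span{e_1, ..., e_N}."""
    return sum(np.dot(x, Q[:, i]) * Q[:, i] for i in range(N))

x, y = rng.normal(size=dim), rng.normal(size=dim)
assert np.allclose(P(P(x)), P(x))                    # P_N^2 = P_N
assert np.isclose(np.dot(P(x), y), np.dot(x, P(y)))  # (P_N x, y) = (x, P_N y)
```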
One defines the operator

A_N : D(A_N) = H_N ⊂ H → H, A_N = P_N A. (5.3)

So, for every x_N = P_N x ∈ H_N (with x ∈ H), we have A_N x_N ∈ H_N and A_N x_N = P_N A P_N x. It is easy to check that, in view of (H8), the operator A_N is monotone, hemicontinuous, single-valued, and everywhere defined on H_N. Consequently, it is maximal monotone in H_N. Consider now the approximating problem

p(t)u_N''(t) + r(t)u_N'(t) = A_N u_N(t), 0 < t < T, u_N(0) = P_N a, u_N(T) = P_N b. (5.4)

It is clear that (5.4) has a unique solution u_N ∈ W^{2,2}(0,T;H_N). Assume in addition that A0 = 0 and that A is bounded, that is, it maps bounded sets onto bounded sets.

We now show that

(I + λA_N)^{-1} P_N x → (I + λA)^{-1} x as N → ∞, for every λ > 0 and every x ∈ H. (5.5)

To do this, we put y_N = (I + λA_N)^{-1} P_N x and y = (I + λA)^{-1} x. Therefore, y_N ∈ H_N and

y_N - P_N y + λ(P_N A y_N - P_N A y) = 0. (5.6)

Multiplying by y_N - P_N y in H, we obtain ‖y_N - P_N y‖^2 + λ(P_N A y_N - P_N A y, y_N - P_N y) = 0. Since P_N is selfadjoint, P_N^2 = P_N, and P_N y_N = y_N, one deduces that

‖y_N - P_N y‖^2 + λ(A y_N - A y, y_N - y) + λ(A y_N - A y, y - P_N y) = 0. (5.7)

The sequence {y_N} is bounded in H for every fixed λ > 0. Indeed, since A0 = 0 (hence A_N 0 = 0) and (I + λA_N)^{-1} is a contraction, it follows that

‖y_N‖ = ‖(I + λA_N)^{-1} P_N x - (I + λA_N)^{-1} 0‖ ≤ ‖P_N x‖ ≤ ‖x‖.

Passing to the superior limit as N → ∞ in (5.7) and using the monotonicity and the boundedness of A, we find that y_N → y in H as N → ∞, that is, (5.5) holds.

Using again the boundedness of A and the fact that P_N is selfadjoint with P_N^2 = P_N, we can easily show that A_N P_N a and A_N P_N b are bounded in H. Thus condition (H5) is verified.

As a consequence of Theorem 3.1, we state the following internal approximation result.

Proposition 5.1. Assume that (5.2) holds, A : H → H is a bounded operator satisfying (H8) with A0 = 0, and a, b ∈ H are given. Denote by u and u_N the unique solutions of the boundary value problems (5.1) and (5.4), respectively, where A_N = P_N A : H_N → H_N. Then u_N(t) → u(t) uniformly on [0,T] and u_N' → u' in L^2(0,T;H) as N → ∞.
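The resolvent convergence (5.5) behind this proposition can be observed numerically in a finite-dimensional model (our construction, not the paper's: H = R^M with A a symmetric positive semidefinite matrix, hence maximal monotone, and P_N the projection onto the first N coordinates):

```python
import numpy as np

# Illustration (assumed model): H = R^M, A = G G^T / M symmetric positive
# semidefinite, P_N = projection onto the first N coordinates, and
# A_N = P_N A restricted to H_N.  We check that
#   y_N = (I + lam*A_N)^{-1} P_N x   approaches   y = (I + lam*A)^{-1} x
# as N grows.

M, lam = 60, 0.7
rng = np.random.default_rng(1)
G = rng.normal(size=(M, M))
A = G @ G.T / M                          # symmetric, positive semidefinite
x = rng.normal(size=M)
y = np.linalg.solve(np.eye(M) + lam * A, x)

errs = []
for N in (5, 20, 40, 60):
    # on H_N, (I + lam*A_N) y_N = P_N x is the N x N system with the
    # leading principal block of A (since vectors in H_N vanish beyond N)
    yN = np.linalg.solve(np.eye(N) + lam * A[:N, :N], x[:N])
    yN_full = np.concatenate((yN, np.zeros(M - N)))
    errs.append(np.linalg.norm(yN_full - y))
    print(N, errs[-1])
```

For N = M the finite-dimensional problem reproduces y exactly, and the error decreases as the approximating subspace grows.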
These lead to the following result.

Here we have denoted for simplicity by α_nλ the element p w_nλ'' + r w_nλ' - f_n ∈ A_n w_nλ. Integrating by parts and writing v_nλ in the right-hand side as v_nλ = J^n_λ v_nλ + λ A^n_λ v_nλ, we obtain, via the monotonicity of A_n, the desired estimate.

Lemma 4.7. If u and w_λ are the solutions of the boundary value problems (3.1)-(3.2) and (4.3), respectively, then, for every λ > 0,

|u' - w_λ'| ≤ c_9 (‖a - y_λ‖^{1/2} + ‖b - z_λ‖^{1/2}),
‖u - w_λ‖_C ≤ ‖a - y_λ‖ + c_10 (‖a - y_λ‖^{1/2} + ‖b - z_λ‖^{1/2}).