Based on recent progress in understanding the abstract setting for Friedrichs symmetric positive systems by Ern et al. (2007), as well as Antonić and Burazin (2010), we continue our efforts to relate these results to the classical Friedrichs theory. Following the approach via the trace operator, we extend the results of Antonić and Burazin (2011) to situations where the important boundary field does not consist only of projections, allowing the treatment of hyperbolic equations, besides the elliptic ones.
1. Introduction
Over fifty years ago Friedrichs [1] showed that many partial differential equations of mathematical physics can be written as a first-order system of the form
\[
\mathcal{L}u := \sum_{k=1}^{d} \partial_k (A_k u) + C u = f,
\]
which was afterwards called the Friedrichs system or the symmetric positive system.
More precisely, it is assumed (we keep these assumptions throughout the rest of the paper) that d,r∈N and that Ω⊆Rd is an open and bounded set with Lipschitz boundary Γ (we will denote its closure by ClΩ=Ω∪Γ). Real matrix functions Ak∈W1,∞(Ω;Mr(R)), k∈{1,…,d}, and C∈L∞(Ω;Mr(R)) satisfy
\[
A_k \text{ is symmetric: } A_k = A_k^\top,
\]
\[
(\exists\, \mu_0 > 0) \qquad C + C^\top + \sum_{k=1}^{d} \partial_k A_k \geqslant 2\mu_0 \mathbf{I} \quad \text{(a.e. on } \Omega),
\]
while f∈L2(Ω;Rr).
Quite often, even though a system does not satisfy the above conditions, it can be symmetrised after multiplication by a positive definite matrix function. However, the choice of such a multiplier is neither unique nor straightforward in general. An important advantage of this framework is the fact that it can accommodate the equations which change their type, such as the equations appearing in the mathematical models of transonic gas flow.
For the boundary conditions, Friedrichs [1] first defined
\[
A_\nu := \sum_{k=1}^{d} \nu_k A_k,
\]
where ν=(ν1,ν2,…,νd)⊤ is the outward unit normal on Γ, which is, as well as Aν, of class L∞ on Γ (of course, Friedrichs considered more regular boundaries at the time). For a given matrix field on the boundary M:Γ→Mr(R), the boundary condition is prescribed by (Aν-M)u|Γ=0,
and by varying M one can enforce different boundary conditions. Friedrichs required the following two conditions (for a.e. x∈Γ) to hold:
\[
\text{(FM1)} \qquad (\forall \xi \in \mathbf{R}^r) \quad M(x)\xi \cdot \xi \geqslant 0,
\]
\[
\text{(FM2)} \qquad \mathbf{R}^r = \ker\big(A_\nu(x) - M(x)\big) + \ker\big(A_\nu(x) + M(x)\big),
\]
and such M he called an admissible boundary condition. In the sequel we will refer to both properties (FM1) and (FM2) as (FM), and similarly in other such situations.
The boundary value problem thus reads as follows: for given f∈L2(Ω;Rr) find u such that
\[
\text{(BVP)} \qquad \begin{cases} \mathcal{L}u = f, \\ (A_\nu - M)u|_\Gamma = 0. \end{cases}
\]
Of course, under such weak assumptions the existence of a classical solution (C1 or W1,∞) cannot be expected. It can be shown that, in general, the solution belongs only to the graph space of the operator ℒ:
\[
W = \{ u \in L^2(\Omega;\mathbf{R}^r) : \mathcal{L}u \in L^2(\Omega;\mathbf{R}^r) \}.
\]
W is a separable Hilbert space (see, e.g., [2]) with the inner product (the corresponding norm will be denoted by ∥·∥ℒ)
\[
\langle u \mid v \rangle_{\mathcal{L}} := \langle u \mid v \rangle_{L^2(\Omega;\mathbf{R}^r)} + \langle \mathcal{L}u \mid \mathcal{L}v \rangle_{L^2(\Omega;\mathbf{R}^r)},
\]
in which the restrictions of functions from Cc∞(Rd;Rr) to Ω are dense.
However, with such a weak notion of solution in a quite large space, the question arises of how to interpret the boundary condition. It is not a priori clear what the meaning of the restriction u|Γ would be for functions u from the graph space. Recently (cf. [2, 3]; for standard results regarding the traces of functions defined in Lipschitz domains we refer to [4]), it has been shown that Aνu|Γ can be interpreted as an element of H−1/2(Γ;Rr). Namely, on the graph space we can define an operator 𝒯:W→H−1/2(Γ;Rr) which for u,v∈H1(Ω;Rr) satisfies
\[
{}_{H^{-1/2}(\Gamma;\mathbf{R}^r)}\langle \mathcal{T}u, \mathcal{T}_{H^1} v \rangle_{H^{1/2}(\Gamma;\mathbf{R}^r)} = \langle \mathcal{L}u \mid v \rangle_{L^2(\Omega;\mathbf{R}^r)} - \langle u \mid \tilde{\mathcal{L}} v \rangle_{L^2(\Omega;\mathbf{R}^r)} = \int_\Gamma A_\nu(x)\, \mathcal{T}_{H^1}u(x) \cdot \mathcal{T}_{H^1}v(x)\, dS(x), \tag{1.6}
\]
where 𝒯H1 stands for the trace operator 𝒯H1:H1(Ω;Rr)→H1/2(Γ;Rr), and ℒ̃:L2(Ω;Rr)→𝒟′(Ω;Rr), the formal adjoint of ℒ, is defined by
\[
\tilde{\mathcal{L}} v := -\sum_{k=1}^{d} \partial_k (A_k^\top v) + \Big( C^\top + \sum_{k=1}^{d} \partial_k A_k^\top \Big) v.
\]
In general, 𝒯 is not surjective onto H−1/2(Γ;Rr), but it still has a right inverse (the lifting operator) ℰ:im𝒯→W0⊥⩽W, which satisfies
\[
\mathcal{T}\mathcal{E} g = g, \qquad g \in \operatorname{im} \mathcal{T}.
\]
Here, W0 denotes the closure of Cc∞(Ω;Rr) in W, while W0⊥ denotes its orthogonal complement in W. As im𝒯 is not necessarily closed in H−1/2(Γ;Rr), ℰ is not necessarily continuous either.
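The duality identity defining 𝒯 is just an integration-by-parts formula. The following sympy sketch (a hypothetical scalar one-dimensional analogue, r = d = 1, with a coefficient a and a lower-order term c; not part of the original argument) verifies that the integrand of ⟨ℒu|v⟩ − ⟨u|ℒ̃v⟩ is the exact derivative (auv)′, so that after integration only a boundary term of the type appearing on the right-hand side of the identity survives.

```python
import sympy as sp

x = sp.symbols('x')
a = sp.Function('a')(x)   # coefficient (plays the role of A_k)
c = sp.Function('c')(x)   # lower-order coefficient (plays the role of C)
u = sp.Function('u')(x)
v = sp.Function('v')(x)

Lu = sp.diff(a * u, x) + c * u                       # L u  = (a u)' + c u
Ltv = -sp.diff(a * v, x) + (c + sp.diff(a, x)) * v   # L~ v = -(a v)' + (c + a') v

# The integrand of <Lu|v> - <u|L~v> is the exact derivative (a u v)',
# so integration over the domain leaves only the boundary term a u v |_Γ.
assert sp.simplify(Lu * v - u * Ltv - sp.diff(a * u * v, x)) == 0
```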
Using this trace operator, the appropriate well-posedness results for the weak formulation of (BVP), under additional assumptions, have been proven [3, 5].
More recently, the Friedrichs theory has been rewritten in an abstract setting by Ern and Guermond [6] and Ern et al. [7], in terms of operators acting on Hilbert spaces, such that the traces on the boundary have not been explicitly used. Instead, the trace operator has been replaced by the boundary operator D∈ℒ(W;W′) defined, for u,v∈W, by
\[
{}_{W'}\langle Du, v \rangle_W := \langle \mathcal{L}u \mid v \rangle_{L^2(\Omega;\mathbf{R}^r)} - \langle u \mid \tilde{\mathcal{L}} v \rangle_{L^2(\Omega;\mathbf{R}^r)}.
\]
The boundary operator D can also be expressed [2, 7] via the matrix function Aν:
\[
(\forall u, v \in H^1(\Omega;\mathbf{R}^r)) \qquad {}_{W'}\langle Du, v \rangle_W = \int_\Gamma A_\nu(x)\, \mathcal{T}_{H^1}u(x) \cdot \mathcal{T}_{H^1}v(x)\, dS(x). \tag{1.10}
\]
In the light of expressions (1.10) and (1.6), it is clear that 𝒯 and D are somehow connected. However, 𝒯 maps into H-1/2(Γ;Rr), while the codomain of D is W′, and it appears that D has better properties than the trace operator. Namely, using the operator D instead of 𝒯 in [7], the following weak well-posedness result has been shown.
Theorem 1.1.
Assume that there exists an operator M∈ℒ(W;W′) satisfying
\[
\text{(M1)} \qquad (\forall u \in W) \quad {}_{W'}\langle M u, u \rangle_W \geqslant 0,
\]
\[
\text{(M2)} \qquad W = \ker(D - M) + \ker(D + M).
\]
Then, the restricted operators
\[
\mathcal{L}|_{\ker(D-M)} : \ker(D-M) \longrightarrow L^2(\Omega;\mathbf{R}^r), \qquad \tilde{\mathcal{L}}|_{\ker(D+M^*)} : \ker(D+M^*) \longrightarrow L^2(\Omega;\mathbf{R}^r)
\]
are isomorphisms.
The operator M from the theorem is also called the boundary operator, as kerM=kerD=W0.
After rewriting the abstract theory of Ern et al. [7] in terms of Kreĭn spaces [2, 8, 9] and closing the questions they left open, in papers [10, 11] we investigated the precise relationship between the classical Friedrichs theory and its abstract counterpart and applied the new results on some examples.
To be specific, as the analogy between the properties (M) for operator M and the conditions (FM) for matrix boundary condition M is apparent, a natural question to be investigated is the nature of the relationship between the matrix field M and the boundary operator M. More precisely, our goal was to find additional conditions on the matrix field M with properties (FM) which will guarantee the existence of a suitable operator M∈ℒ(W;W′), with properties (M).
For a given matrix field M, which operators M are suitable? An operator M is suitable if the result of Theorem 1.1 really presents a weak well-posedness result for (BVP) in the following sense: if, for given f∈L2(Ω;Rr), u∈ker(D−M) is such that ℒu=f, where we additionally have u∈C1(Ω;Rr)∩C(ClΩ;Rr), then u satisfies (BVP) in the classical sense.
With such a connection between M and the boundary operator M, applications of the abstract theory to some equations of particular interest will become easier, as calculations with matrices are simpler than those with operators. We also take it as a first step towards better understanding of the relation between the existence and uniqueness results for the Friedrichs systems as in [7, 8] and the earlier classical results [1, 3, 5].
In [10] we have established this connection between M and M using two different approaches: via the boundary operator D and via the trace operator 𝒯. Based on (1.10) and (1.6), in both approaches we look for M of the form (see [6])
\[
(\forall u, v \in H^1(\Omega;\mathbf{R}^r)) \qquad {}_{W'}\langle M u, v \rangle_W = \int_\Gamma M(x)\, \mathcal{T}_{H^1}u(x) \cdot \mathcal{T}_{H^1}v(x)\, dS(x), \tag{1.12}
\]
where we naturally assume that M is bounded, that is, M∈L∞(Γ;Mr(R)); both approaches make use of the following lemma.
Lemma 1.2.
Let the matrix function M satisfy (FM1). Then the following statements are equivalent:
(a) M satisfies (FM2);
(b) for almost every x∈Γ there is a projector S(x) such that M(x)=(I−2S⊤(x))Aν(x);
(c) for almost every x∈Γ there is a projector P(x) such that M(x)=Aν(x)(I−2P(x)).
As properties (FM) do not guarantee that formula (1.12) defines a continuous operator M:W→W′ satisfying (M), we have found [10] two different sets of additional conditions under which the desired properties hold. The conditions obtained via the trace operator are given in the next theorem.
Theorem 1.3.
Assume that the matrix field M∈L∞(Γ;Mr(R)) satisfies (FM) and that by (1.12) an operator M∈ℒ(W;W′) is defined. Then, (M1) holds.
Let the matrix function S from Lemma 1.2 additionally satisfy S∈C0,1/2(Γ;Mr(R)). If by 𝒮∈ℒ(H1/2(Γ;Rr)) one denotes the multiplication operator
\[
\mathcal{S}(z) := S z, \qquad z \in H^{1/2}(\Gamma;\mathbf{R}^r), \tag{1.13}
\]
by 𝒮*∈ℒ(H−1/2(Γ;Rr)) its adjoint operator, defined by
\[
{}_{H^{-1/2}(\Gamma;\mathbf{R}^r)}\langle \mathcal{S}^* T, z \rangle_{H^{1/2}(\Gamma;\mathbf{R}^r)} := {}_{H^{-1/2}(\Gamma;\mathbf{R}^r)}\langle T, \mathcal{S} z \rangle_{H^{1/2}(\Gamma;\mathbf{R}^r)}, \qquad T \in H^{-1/2}(\Gamma;\mathbf{R}^r),\; z \in H^{1/2}(\Gamma;\mathbf{R}^r),
\]
and by 𝒯:W→H-1/2(Γ;Rr) the trace operator, then the condition 𝒮*(im𝒯)⊆im𝒯 implies (M2).
The representation of M as a product of Aν by some matrix field I−2S⊤ is the essential ingredient in the proof of Theorem 1.3. However, in [11] we have noted that the requirement for S to be a projector appears overly restrictive in applications (this seems to be particularly true for hyperbolic equations), which motivated further investigation of possible improvements of Lemma 1.2. As a result we have realised that S needs to be a projector only at points where Aν is a regular matrix. Our goal here is to verify whether Theorem 1.3 (or some variant of it) holds in the case when S is not a projector.
The paper is organised as follows. In Section 2 we propose an extension of the method from [10], the main result being Theorem 2.5. On the part of the boundary where Aν is singular, the matrix S appearing in Lemma 1.2(b) need not necessarily be a projector. This allows the treatment of hyperbolic equations, which is illustrated by an example in Section 3, where we also provide two sufficient conditions ensuring the assumptions of Theorem 2.5. Finally, in Section 4 we investigate whether we can get better results by using P instead of S.
2. Approach via Trace Operator When S Is Not a Projector
Lemma 2.1.
Let the matrix function M∈L∞(Γ;Mr(R)) satisfy (FM1). Then the following statements are equivalent:
(a) M satisfies (FM2);
(b) for almost every x∈Γ there is a matrix S(x) such that M(x)=(I−2S⊤(x))Aν(x) and
\[
\ker\big(A_\nu(x) S(x)\big) + \ker\big(A_\nu(x)(I - S(x))\big) = \mathbf{R}^r;
\]
(c) for almost every x∈Γ there is a matrix P(x) such that M(x)=Aν(x)(I−2P(x)) and
\[
\ker\big(A_\nu(x) P(x)\big) + \ker\big(A_\nu(x)(I - P(x))\big) = \mathbf{R}^r.
\]
Proof.
For M as in (c) we have Aν−M=2AνP and Aν+M=2Aν(I−P), so the kernel condition in (c) is precisely (FM2); thus (c) implies (a). Note also that, by Lemma 1.2, (a) implies (c).
In order to prove that (b) is equivalent to (a) and (c), we use the well-known fact [1, 12] that M satisfies (FM) if and only if M⊤ satisfies (FM). By (c) this is equivalent to the existence of S such that M⊤=Aν(I-2S) and ker(AνS)+ker(Aν(I-S))=Rr a.e. on Γ, which is actually equivalent to (b).
Remark 2.2.
Note that P and S from the previous lemma also satisfy
\[
\ker(S^\top A_\nu) + \ker\big((I - S^\top) A_\nu\big) = \mathbf{R}^r \quad \text{a.e. on } \Gamma, \qquad
\ker(P^\top A_\nu) + \ker\big((I - P^\top) A_\nu\big) = \mathbf{R}^r \quad \text{a.e. on } \Gamma.
\]
This is a consequence of the already-mentioned statement that M satisfies (FM) if and only if M⊤ satisfies (FM). It is also obvious that S⊤Aν=AνP a.e. on Γ.
Remark 2.3.
The result of Lemma 2.1 improves that of Lemma 1.2. We distinguish two situations that can occur at a fixed point x∈Γ (which we suppress in writing below).
If Aν is a regular matrix, then (for P as in Lemma 2.1) ker(AνP)=kerP and ker(Aν(I-P))=ker(I-P), and therefore ker(AνP)+ker(Aν(I-P))=Rr is equivalent to kerP+ker(I-P)=Rr, which is equivalent to the statement that P is a projector.
If Aν is not regular, then there can be several matrices P, which are not projectors but nevertheless satisfy ker(AνP)+ker(Aν(I-P))=Rr. For example, any matrix P, such that imP⊆kerAν or im(I-P)⊆kerAν, would satisfy ker(AνP)+ker(Aν(I-P))=Rr, as for such P either ker(AνP)=Rr or ker(Aν(I-P))=Rr.
Similar statements hold for S.
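The non-projector case can be illustrated by a minimal numerical example (hypothetical 2×2 matrices, chosen only for illustration): a singular Aν and a matrix P with im P ⊆ ker Aν, which satisfies the kernel decomposition of Lemma 2.1 although P²≠P, and which also satisfies the identities of Lemma 2.4 below.

```python
import numpy as np

# Hypothetical 2x2 illustration: A_nu singular, P not a projector.
A_nu = np.array([[1.0, 0.0],
                 [0.0, 0.0]])          # singular: ker A_nu = span{e2}
P = np.array([[0.0, 0.0],
              [1.0, 0.0]])             # im P = span{e2} ⊆ ker A_nu

# P is not a projector:
assert not np.allclose(P @ P, P)

# ... yet A_nu P = 0, hence ker(A_nu P) = R^2 and the decomposition
# ker(A_nu P) + ker(A_nu (I - P)) = R^2 holds trivially.
assert np.allclose(A_nu @ P, 0)

# The identities of Lemma 2.4 also hold for this P:
I = np.eye(2)
assert np.allclose(A_nu @ P @ (I - P), 0)
assert np.allclose(A_nu @ (I - P) @ P, 0)
```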
A variant of the following lemma has been proved in [11].
Lemma 2.4.
If M satisfies (FM), then for P and S as in Lemma 2.1 one has
\[
A_\nu P(I-P) = A_\nu (I-P)P = A_\nu S(I-S) = A_\nu (I-S)S = 0 \quad \text{a.e. on } \Gamma.
\]
Proof.
Any w∈Rr can be decomposed as w=ξ+η such that ξ∈ker(AνP) and η∈ker(Aν(I-P)). Now using AνP=S⊤Aν we easily get
\[
A_\nu P(I-P) w = A_\nu P(I-P)\xi + A_\nu P(I-P)\eta = A_\nu P \xi - S^\top A_\nu P \xi + S^\top A_\nu (I-P)\eta = 0,
\]
which concludes the proof for P, while for S one can argue analogously.
Next we prove a new version of Theorem 1.3, with S not necessarily being a projector.
Theorem 2.5.
Assume that the matrix field M∈L∞(Γ;Mr(R)) satisfies (FM) and that by (1.12) an operator M∈ℒ(W;W′) is defined. Then (M1) holds.
Let the matrix function S from Lemma 2.1 additionally satisfy S∈C0,1/2(Γ;Mr(R)) such that the multiplication operator 𝒮 defined by (1.13) belongs to ℒ(H1/2(Γ;Rr)). If one denotes by 𝒮*∈ℒ(H-1/2(Γ;Rr)) the adjoint operator of 𝒮 and by 𝒯:W→H-1/2(Γ;Rr) the trace operator on the graph space, then the condition 𝒮*(im𝒯)⊆im𝒯 implies (M2).
Proof.
It only remains to show (M2). First we prove
\[
\mathcal{S}^* (\mathcal{I}_{H^{-1/2}} - \mathcal{S}^*) \mathcal{T} = (\mathcal{I}_{H^{-1/2}} - \mathcal{S}^*) \mathcal{S}^* \mathcal{T} = 0, \tag{2.6}
\]
where ℐH−1/2 is the identity on H−1/2(Γ;Rr). By using Lemma 2.4, for u∈H1(Ω;Rr) and z∈H1/2(Γ;Rr) we have
\[
\begin{aligned}
{}_{H^{-1/2}}\langle \mathcal{S}^*(\mathcal{I}_{H^{-1/2}} - \mathcal{S}^*) \mathcal{T}u, z \rangle_{H^{1/2}} &= {}_{H^{-1/2}}\langle \mathcal{T}u, (\mathcal{I}_{H^{1/2}} - \mathcal{S})\mathcal{S} z \rangle_{H^{1/2}} \\
&= \int_\Gamma A_\nu(x)\, \mathcal{T}_{H^1}u(x) \cdot (I - S(x)) S(x) z(x)\, dS(x) \\
&= \int_\Gamma S^\top(x)(I - S^\top(x)) A_\nu(x)\, \mathcal{T}_{H^1}u(x) \cdot z(x)\, dS(x) \\
&= \int_\Gamma \big( A_\nu(x)(I - S(x)) S(x) \big)^\top \mathcal{T}_{H^1}u(x) \cdot z(x)\, dS(x) = 0,
\end{aligned}
\]
where ℐH1/2:H1/2(Γ;Rr)→H1/2(Γ;Rr) is the identity. Therefore, 𝒮*(ℐH-1/2-𝒮*)𝒯u=0 for every u∈H1(Ω;Rr), and, since H1(Ω;Rr) is dense in W, we have (2.6).
Just as in the proof of [10, Theorem 2], we will use the representations of operators D and M through the trace operator 𝒯; for u∈W and v∈H1(Ω;Rr), we have
\[
{}_{W'}\langle Du, v \rangle_W = {}_{H^{-1/2}(\Gamma;\mathbf{R}^r)}\langle \mathcal{T}u, \mathcal{T}_{H^1} v \rangle_{H^{1/2}(\Gamma;\mathbf{R}^r)}, \qquad
{}_{W'}\langle Mu, v \rangle_W = {}_{H^{-1/2}(\Gamma;\mathbf{R}^r)}\langle (\mathcal{I}_{H^{-1/2}} - 2\mathcal{S}^*) \mathcal{T}u, \mathcal{T}_{H^1} v \rangle_{H^{1/2}(\Gamma;\mathbf{R}^r)}. \tag{2.8}
\]
By assumption 𝒮*(im𝒯)⊆im𝒯, for given w∈W, we can define
\[
u := w - \mathcal{E}\mathcal{S}^* \mathcal{T} w, \qquad v := \mathcal{E}\mathcal{S}^* \mathcal{T} w,
\]
where ℰ:im𝒯→W is the right inverse of the operator 𝒯, as before. Obviously, the decomposition w=u+v is valid.
Let us show that u∈ker(D-M): for s∈H1(Ω;Rr) by (2.6) and (2.8) we get
\[
\begin{aligned}
{}_{W'}\langle (D-M)u, s \rangle_W &= {}_{H^{-1/2}}\langle 2\mathcal{S}^* \mathcal{T}u, \mathcal{T}_{H^1} s \rangle_{H^{1/2}} = {}_{H^{-1/2}}\langle 2\mathcal{S}^* \mathcal{T}(w - \mathcal{E}\mathcal{S}^*\mathcal{T}w), \mathcal{T}_{H^1} s \rangle_{H^{1/2}} \\
&= {}_{H^{-1/2}}\langle 2\mathcal{S}^* (\mathcal{I}_{H^{-1/2}} - \mathcal{S}^*) \mathcal{T}w, \mathcal{T}_{H^1} s \rangle_{H^{1/2}} = 0,
\end{aligned}
\]
thus (D-M)u=0, as H1(Ω;Rr) is dense in W.
It remains to show that v∈ker(D+M). For s∈H1(Ω;Rr), similarly as above, it follows that
\[
\begin{aligned}
{}_{W'}\langle (D+M)v, s \rangle_W &= {}_{H^{-1/2}}\langle 2(\mathcal{I}_{H^{-1/2}} - \mathcal{S}^*) \mathcal{T}v, \mathcal{T}_{H^1} s \rangle_{H^{1/2}} = {}_{H^{-1/2}}\langle 2(\mathcal{I}_{H^{-1/2}} - \mathcal{S}^*) \mathcal{T}\mathcal{E}\mathcal{S}^*\mathcal{T}w, \mathcal{T}_{H^1} s \rangle_{H^{1/2}} \\
&= {}_{H^{-1/2}}\langle 2(\mathcal{I}_{H^{-1/2}} - \mathcal{S}^*) \mathcal{S}^* \mathcal{T}w, \mathcal{T}_{H^1} s \rangle_{H^{1/2}} = 0,
\end{aligned}
\]
thus (D+M)v=0 and we have the claim.
Theorem 2.5 provides sufficient conditions for the continuous operator M:W→W′, defined by (1.12), to satisfy (M). A natural question arises whether these conditions are feasible. The condition S∈C0,1/2(Γ;Mr(R)) does not appear particularly restrictive, as the continuity of M is expected to require even higher regularity of S (see [10]). However, the other condition, requiring that the image of the trace operator be invariant under 𝒮*, appears somewhat artificial and unnatural, all the more so because in all examples to which we have applied the theory of Friedrichs systems [10] this condition is satisfied. At this point we still do not know whether it is always fulfilled.
3. On Feasibility of Assumptions
The following example illustrates the applicability of Theorem 2.5 for hyperbolic equations, in a simple situation.
Example 3.1.
The wave equation $u_{tt}-\gamma^2 u_{xx}=f$ can be written as the following symmetric system for $\mathsf{u}=(u,\, u_t+\gamma u_x)$:
\[
\partial_t \left( \begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix} \mathsf{u} \right) + \partial_x \left( \begin{bmatrix} \gamma & 0 \\ 0 & -\gamma \end{bmatrix} \mathsf{u} \right) + \begin{bmatrix} 0 & -1 \\ 0 & 0 \end{bmatrix} \mathsf{u} = \begin{bmatrix} 0 \\ f \end{bmatrix}.
\]
After introducing a new unknown $\mathsf{v} := e^{-\lambda t}\mathsf{u}$, we obtain a positive symmetric system (for λ>0 large enough)
\[
\partial_t \left( \begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix} \mathsf{v} \right) + \partial_x \left( \begin{bmatrix} \gamma & 0 \\ 0 & -\gamma \end{bmatrix} \mathsf{v} \right) + \begin{bmatrix} \lambda & -1 \\ 0 & \lambda \end{bmatrix} \mathsf{v} = \begin{bmatrix} 0 \\ e^{-\lambda t} f \end{bmatrix}.
\]
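Both displayed systems can be spot-checked symbolically. The following sympy sketch (a quick consistency check, not part of the original argument) verifies that the first-order system reproduces the wave equation, and that the zero-order term of the symmetrised system is positive definite precisely for λ > 1/2 (the coefficient matrices are constant, so ∑∂kAk = 0 and the positivity condition reduces to C+C⊤ > 0).

```python
import sympy as sp

t, x, gamma, lam = sp.symbols('t x gamma lambda', positive=True)
u = sp.Function('u')(t, x)
u1, u2 = u, sp.diff(u, t) + gamma * sp.diff(u, x)

# First row:  d_t u1 + gamma d_x u1 - u2 = 0 holds identically.
assert sp.simplify(sp.diff(u1, t) + gamma * sp.diff(u1, x) - u2) == 0
# Second row: d_t u2 - gamma d_x u2 = u_tt - gamma^2 u_xx  (i.e. = f).
assert sp.simplify(sp.diff(u2, t) - gamma * sp.diff(u2, x)
                   - (sp.diff(u, t, 2) - gamma**2 * sp.diff(u, x, 2))) == 0

# Positivity of the symmetrised system: C + C^T = [[2λ, -1], [-1, 2λ]]
# has eigenvalues 2λ ∓ 1, hence is positive definite for λ > 1/2.
C = sp.Matrix([[lam, -1], [0, lam]])
eigs = set((C + C.T).eigenvals())
assert eigs == {2 * lam - 1, 2 * lam + 1}
```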
As
\[
A_\nu = \begin{bmatrix} \nu_1 + \gamma \nu_2 & 0 \\ 0 & \nu_1 - \gamma \nu_2 \end{bmatrix},
\]
in order to make the calculations simpler, we take the domain Ω⊆R2 to be a parallelogram with sides lying on the characteristic lines x±γt=±1 of the original wave equation, as presented in Figure 1. The straight parts of the boundary (open segments) are denoted by Γ1,…,Γ4.
Let us take a matrix function S∈C0,1/2(Γ;M2(R)) with the entries
\[
S = \begin{bmatrix} a & b \\ c & d \end{bmatrix},
\]
and consider M=(I−2S⊤)Aν. Depending on the particular part of the boundary, the matrix function M satisfies (FM) if and only if
\[
\text{on } \Gamma_1: \; c = 0,\; d = 0; \qquad \text{on } \Gamma_2: \; c = 0,\; d = 1; \qquad \text{on } \Gamma_3: \; a = 0,\; b = 0; \qquad \text{on } \Gamma_4: \; a = 1,\; b = 0.
\]
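These conditions can be checked numerically. The sketch below uses a hypothetical helper `fm_holds` and an assumed orientation in which one side of the parallelogram has Aν = diag(0, −β0) with β0 = 2γ/√(1+γ²) (the signs depend on the orientation of the outward normal, so this is an assumption made only for illustration); on such a side the constraints c = 0, d = 1 yield (FM), with a and b free, while dropping d = 1 breaks (FM1).

```python
import numpy as np

def fm_holds(A_nu, S, tol=1e-12):
    """Check (FM1)-(FM2) for M = (I - 2 S^T) A_nu at one boundary point."""
    r = A_nu.shape[0]
    M = (np.eye(r) - 2 * S.T) @ A_nu
    # (FM1): M xi . xi >= 0 for all xi  <=>  M + M^T positive semidefinite
    fm1 = np.linalg.eigvalsh(M + M.T).min() >= -tol
    # (FM2): ker(A_nu - M) + ker(A_nu + M) = R^r, via orthonormal kernel bases
    def kernel(B):
        _, s, Vt = np.linalg.svd(B)
        return Vt[np.sum(s > tol):].T
    K = np.hstack([kernel(A_nu - M), kernel(A_nu + M)])
    fm2 = K.size > 0 and np.linalg.matrix_rank(K, tol=tol) == r
    return fm1 and fm2

gamma = 1.0
beta0 = 2 * gamma / np.sqrt(1 + gamma**2)
A_in = np.diag([0.0, -beta0])                 # assumed: an incoming characteristic side
S_ok = np.array([[0.5, 0.3], [0.0, 1.0]])     # c = 0, d = 1; a, b arbitrary
S_bad = np.array([[0.5, 0.3], [0.0, 0.0]])    # d = 0 violates the condition here
assert fm_holds(A_in, S_ok)
assert not fm_holds(A_in, S_bad)
```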
A straightforward calculation gives us the formula for 𝒯 on H1(Ω;R2):
\[
\mathcal{T}(u,w) = \begin{cases} (0, \mathcal{T}_{H^1} w), & \text{on } \Gamma_1, \\ (0, -\mathcal{T}_{H^1} w), & \text{on } \Gamma_2, \\ (\mathcal{T}_{H^1} u, 0), & \text{on } \Gamma_3, \\ (-\mathcal{T}_{H^1} u, 0), & \text{on } \Gamma_4. \end{cases}
\]
Multiplying by the possible values of S⊤ given above, for any (u,w)∈H1(Ω;R2) we have
\[
\mathcal{S}^* \mathcal{T}(u,w) = \begin{cases} 0, & \text{on } \Gamma_1, \\ \mathcal{T}(u,w), & \text{on } \Gamma_2, \\ 0, & \text{on } \Gamma_3, \\ \mathcal{T}(u,w), & \text{on } \Gamma_4, \end{cases}
\]
and one can easily check that this equals 𝒯(ũ,w̃) with ũ=(1-x-γt)u/2 and w̃=(1+x-γt)w/2. By the density of H1(Ω;R2) in W, the continuity of 𝒮*∈ℒ(H−1/2(Γ;R2)) and of 𝒯∈ℒ(W;H−1/2(Γ;R2)), as well as the continuity of the linear mapping (u,w)↦(ũ,w̃) from W to W, we infer that the equality 𝒮*𝒯(u,w)=𝒯(ũ,w̃) is valid for all (u,w)∈W. Therefore, by this construction we obtain the inclusion 𝒮*(im𝒯)⊆im𝒯.
The corresponding boundary operator M:W→W′ is continuous, so we can apply Theorem 2.5. It is simple to interpret the boundary conditions: they are not imposed on the part Γ1∪Γ3 of the boundary (as Aν-M=0 there), while the boundary condition on Γ2 is w=0 and on Γ4 we have u=0 (as Aν-M=Aν on these parts of the boundary).
Figure 1: An example of the domain Ω for the wave equation.
The arguments used in this example are particularly simple due to the specific form of the boundary Γ. Let us consider a more complicated pentagonal domain: cut the set Ω by a horizontal line and introduce a new horizontal part Γ5 of the boundary (on the top of Ω). A similar calculation leads us to the following relations that should be satisfied on Γ5:
\[
a \leqslant \tfrac12, \qquad d \geqslant \tfrac12, \qquad a + d = 1, \qquad ad = bc, \qquad (b-c)^2 \leqslant (1-2a)(2d-1),
\]
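A particular choice satisfying the Γ5 relations can be verified numerically. The sketch below uses hypothetical sample values and assumes the orientation in which Aν = γ·diag(1, −1) on the horizontal side; note that a+d = 1 together with ad = bc forces S to be a projector there, in accordance with Lemma 2.1, since Aν is regular on Γ5.

```python
import numpy as np

# Hypothetical sample values: a+d = 1, ad = bc = 0.21, (b-c)^2 = (1-2a)(2d-1).
gamma = 1.0
a, b, c, d = 0.3, 0.3, 0.7, 0.7
S = np.array([[a, b], [c, d]])
A_nu = gamma * np.diag([1.0, -1.0])   # assumed orientation on the horizontal side
M = (np.eye(2) - 2 * S.T) @ A_nu

# (FM1): M + M^T is positive semidefinite
assert np.linalg.eigvalsh(M + M.T).min() >= -1e-12
# a + d = 1 and ad = bc make S a projector (A_nu is regular here,
# so Lemma 2.1 reduces to Lemma 1.2 on this part of the boundary)
assert np.allclose(S @ S, S)
```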
and to the following equalities on Γ5 (valid for (u,w)∈H1(Ω;R2)):
\[
\mathcal{T}(u,w) = (\mathcal{T}_{H^1} u, -\mathcal{T}_{H^1} w), \qquad \mathcal{S}^* \mathcal{T}(u,w) = \big(a \mathcal{T}_{H^1} u - c \mathcal{T}_{H^1} w, \; b \mathcal{T}_{H^1} u - d \mathcal{T}_{H^1} w\big).
\]
However, the inclusion 𝒮*(im𝒯)⊆im𝒯 is no longer obvious, as functions a, b, c, and d can be chosen quite arbitrarily.
Next we present some sufficient conditions which can be used to show that 𝒮*(im𝒯)⊆im𝒯.
Theorem 3.2.
Let S,P∈C0,1/2(Γ;Mr(R)) be matrix functions and 𝒮,𝒫∈ℒ(H1/2(Γ;Rr)) the corresponding multiplication operators defined as in (1.13), and let S⊤Aν=AνP a.e. on Γ. Then 𝒯(H1(Ω;Rr)) is invariant under 𝒮*. If one additionally assumes that im𝒯 is closed in H−1/2(Γ;Rr), then one also has 𝒮*(im𝒯)⊆im𝒯.
Proof.
For u∈H1(Ω;Rr) and z∈H1/2(Γ;Rr), we have
\[
\begin{aligned}
{}_{H^{-1/2}}\langle \mathcal{S}^* \mathcal{T}u, z \rangle_{H^{1/2}} &= {}_{H^{-1/2}}\langle \mathcal{T}u, \mathcal{S}z \rangle_{H^{1/2}} = \int_\Gamma A_\nu(x)\, \mathcal{T}_{H^1}u(x) \cdot S(x) z(x)\, dS(x) \\
&= \int_\Gamma S^\top(x) A_\nu(x)\, \mathcal{T}_{H^1}u(x) \cdot z(x)\, dS(x) = \int_\Gamma A_\nu(x) P(x)\, \mathcal{T}_{H^1}u(x) \cdot z(x)\, dS(x) \\
&= \int_\Gamma A_\nu(x)\, \mathcal{T}_{H^1} \mathcal{E}_{H^1} P(x) \mathcal{T}_{H^1}u(x) \cdot z(x)\, dS(x) = {}_{H^{-1/2}}\langle \mathcal{T}\mathcal{E}_{H^1} \mathcal{P} \mathcal{T}_{H^1} u, z \rangle_{H^{1/2}},
\end{aligned}
\]
where ℰH1:H1/2(Γ;Rr)→H1(Ω;Rr) is the right inverse of the trace operator 𝒯H1. Therefore, on H1(Ω;Rr), we have 𝒮*𝒯=𝒯ℰH1𝒫𝒯H1, and in particular 𝒮*𝒯(H1(Ω;Rr))⊆𝒯(H1(Ω;Rr)).
Let us now additionally assume that im𝒯 is closed in H-1/2(Γ;Rr). For u∈W, let un∈H1(Ω;Rr) be a sequence converging to u in W. Then by continuity we also have
\[
\mathcal{S}^* \mathcal{T} u_n \longrightarrow \mathcal{S}^* \mathcal{T} u \quad \text{in } H^{-1/2}(\Gamma;\mathbf{R}^r),
\]
while, from the equality 𝒮*𝒯un=𝒯ℰH1𝒫𝒯H1un∈im𝒯 and the closedness of im𝒯 in H-1/2(Γ;Rr), it follows that 𝒮*𝒯u∈im𝒯, which concludes the proof.
Note that in the above theorem S and P were arbitrary elements of C0,1/2(Γ;Mr(R)), not necessarily having properties from Lemma 2.1.
We close this section with a theorem showing that, if we impose conditions that ensure continuity of M defined by (1.12), then we also have 𝒮*(im𝒯)⊆im𝒯. These conditions were used in applications of the theory of Friedrichs systems in [11], and it is important to note that we do not expect them to be necessary for continuity of M, but only sufficient.
Theorem 3.3.
Let S∈C0,1/2(Γ;Mr(R)) be a matrix function and 𝒮∈ℒ(H1/2(Γ;Rr)) the corresponding multiplication operator defined by (1.13), and let P:Γ→Mr(R) be such that S⊤Aν=AνP a.e. on Γ. Additionally assume that P can be extended to a measurable matrix function Pp:ClΩ→Mr(R) satisfying the following.
The multiplication operator 𝒫p, defined by 𝒫p(v):=Ppv for v∈W, is a bounded linear operator on W.
\[
(\forall v \in H^1(\Omega;\mathbf{R}^r)) \qquad P_p v \in H^1(\Omega;\mathbf{R}^r) \quad \text{and} \quad \mathcal{T}_{H^1}(P_p v) = P\, \mathcal{T}_{H^1} v.
\]
Then 𝒮*𝒯=𝒯𝒫p, and thus 𝒮*(im𝒯)⊆im𝒯.
Proof.
Similarly as in the proof of the previous theorem, for each u∈H1(Ω;Rr) and z∈H1/2(Γ;Rr) we have
\[
\begin{aligned}
{}_{H^{-1/2}}\langle \mathcal{S}^* \mathcal{T}u, z \rangle_{H^{1/2}} &= {}_{H^{-1/2}}\langle \mathcal{T}u, \mathcal{S}z \rangle_{H^{1/2}} = \int_\Gamma A_\nu(x)\, \mathcal{T}_{H^1}u(x) \cdot S(x) z(x)\, dS(x) \\
&= \int_\Gamma S^\top(x) A_\nu(x)\, \mathcal{T}_{H^1}u(x) \cdot z(x)\, dS(x) = \int_\Gamma A_\nu(x) P(x)\, \mathcal{T}_{H^1}u(x) \cdot z(x)\, dS(x) \\
&= \int_\Gamma A_\nu(x)\, \mathcal{T}_{H^1}\big(P_p(x) u(x)\big) \cdot z(x)\, dS(x) = {}_{H^{-1/2}}\langle \mathcal{T} \mathcal{P}_p u, z \rangle_{H^{1/2}},
\end{aligned}
\]
and thus the equality 𝒮*𝒯=𝒯𝒫p is valid on H1(Ω;Rr). As all operators appearing in this equality are bounded, while H1(Ω;Rr) is dense in W, the same equality is valid on W, which proves the claim.
4. Using P instead of S
In Theorem 2.5 a matrix function S was used in order to impose sufficient conditions for (M) to hold. In Lemma 2.1 a matrix function P also appears, with a role similar to that of S, the only difference being that in the expression for M the function P multiplies Aν from the other side than S does. Therefore, it is natural to check whether we could get a better result than Theorem 2.5 by using P instead of S. The next theorem shows what this approach yields.
Theorem 4.1.
Assume that the matrix field M∈L∞(Γ;Mr(R)) satisfies (FM) and that by (1.12) a bounded operator M∈ℒ(W;W′) is defined. Then (M1) holds.
Let the matrix function P from Lemma 2.1 additionally satisfy P∈C0,1/2(Γ;Mr(R)), and let 𝒫∈ℒ(H1/2(Γ;Rr)) be the corresponding multiplication operator defined by 𝒫(z):=Pz. Then one has
\[
H^1(\Omega;\mathbf{R}^r) \subseteq \ker(D - M) + \ker(D + M).
\]
Proof.
As in the proof of Theorem 2.5, we shall use the representations of operators D and M via trace operator 𝒯 and multiplication operator 𝒫; for u∈H1(Ω;Rr) and v∈H1(Ω;Rr) we have
\[
\begin{aligned}
{}_{W'}\langle Du, v \rangle_W &= \int_\Gamma A_\nu(x)\, \mathcal{T}_{H^1}u(x) \cdot \mathcal{T}_{H^1}v(x)\, dS(x) = \int_\Gamma A_\nu(x)\, \mathcal{T}_{H^1}v(x) \cdot \mathcal{T}_{H^1}u(x)\, dS(x) \\
&= {}_{H^{-1/2}}\langle \mathcal{T}v, \mathcal{T}_{H^1} u \rangle_{H^{1/2}}, \\
{}_{W'}\langle Mu, v \rangle_W &= \int_\Gamma A_\nu(x)(I - 2P(x))\, \mathcal{T}_{H^1}u(x) \cdot \mathcal{T}_{H^1}v(x)\, dS(x) \\
&= \int_\Gamma A_\nu(x)\, \mathcal{T}_{H^1}v(x) \cdot (I - 2P(x)) \mathcal{T}_{H^1}u(x)\, dS(x) = {}_{H^{-1/2}}\langle \mathcal{T}v, (\mathcal{I}_{H^{1/2}} - 2\mathcal{P}) \mathcal{T}_{H^1} u \rangle_{H^{1/2}}.
\end{aligned} \tag{4.2}
\]
For given w∈H1(Ω;Rr) we define u:=ℰH1𝒫𝒯H1w, where ℰH1 is the right inverse of 𝒯H1, as before.
Let us show that u∈ker(D+M). For v∈H1(Ω;Rr) by using (4.2) and Lemma 2.4 we get
\[
\begin{aligned}
{}_{W'}\langle (D+M)u, v \rangle_W &= {}_{H^{-1/2}}\langle 2\mathcal{T}v, (\mathcal{I}_{H^{1/2}} - \mathcal{P}) \mathcal{T}_{H^1} u \rangle_{H^{1/2}} = {}_{H^{-1/2}}\langle 2\mathcal{T}v, (\mathcal{I}_{H^{1/2}} - \mathcal{P}) \mathcal{T}_{H^1} \mathcal{E}_{H^1} \mathcal{P} \mathcal{T}_{H^1} w \rangle_{H^{1/2}} \\
&= 2\int_\Gamma A_\nu(x)\, \mathcal{T}_{H^1}v(x) \cdot (I - P(x)) P(x) \mathcal{T}_{H^1}w(x)\, dS(x) \\
&= 2\int_\Gamma \mathcal{T}_{H^1}v(x) \cdot A_\nu(x) (I - P(x)) P(x) \mathcal{T}_{H^1}w(x)\, dS(x) = 0,
\end{aligned}
\]
thus (D+M)u=0, as H1(Ω;Rr) is dense in W.
Similarly, we get
\[
{}_{W'}\langle (D-M)(w-u), v \rangle_W = 2\int_\Gamma \mathcal{T}_{H^1}v(x) \cdot A_\nu(x) P(x) (I - P(x)) \mathcal{T}_{H^1}w(x)\, dS(x) = 0,
\]
and thus (D-M)(w-u)=0. As w=u+(w-u), we have the claim.
It appears that by using P instead of S we do not get better results.
Acknowledgment
This work is supported in part by the Croatian MZOS through projects 037-0372787-2795 and 037-1193086-3226.
References
[1] K. O. Friedrichs, "Symmetric positive linear differential equations," Communications on Pure and Applied Mathematics, vol. 11, pp. 333–418, 1958.
[2] N. Antonić and K. Burazin, "Graph spaces of first-order linear partial differential operators," Mathematical Communications, vol. 14, no. 1, pp. 135–155, 2009.
[3] M. Jensen, Ph.D. thesis, University of Oxford, 2004.
[4] L. Tartar, An Introduction to Sobolev Spaces and Interpolation Spaces, vol. 3 of Lecture Notes of the Unione Matematica Italiana, Springer, Berlin, Germany, 2007.
[5] J. Rauch, "Boundary value problems with nonuniformly characteristic boundary," Journal de Mathématiques Pures et Appliquées, vol. 73, no. 4, pp. 347–353, 1994.
[6] A. Ern and J.-L. Guermond, "Discontinuous Galerkin methods for Friedrichs' systems," SIAM Journal on Numerical Analysis, vol. 44, no. 6, pp. 2363–2388, 2006.
[7] A. Ern, J.-L. Guermond, and G. Caplain, "An intrinsic criterion for the bijectivity of Hilbert operators related to Friedrichs' systems," Communications in Partial Differential Equations, vol. 32, no. 1–3, pp. 317–341, 2007.
[8] N. Antonić and K. Burazin, "Intrinsic boundary conditions for Friedrichs systems," Communications in Partial Differential Equations, vol. 35, no. 9, pp. 1690–1715, 2010.
[9] N. Antonić and K. Burazin, "On equivalent descriptions of boundary conditions for Friedrichs systems," to appear in Mathematica Montisnigri.
[10] N. Antonić and K. Burazin, "Boundary operator from matrix field formulation of boundary conditions for Friedrichs systems," Journal of Differential Equations, vol. 250, no. 9, pp. 3630–3651, 2011.
[11] N. Antonić, K. Burazin, and M. Vrdoljak, "Second-order equations as Friedrichs systems," Nonlinear Analysis B: Real World Applications, in press.
[12] K. Burazin, Ph.D. thesis, University of Zagreb, 2008, http://www.mathos.hr/~kburazin/papers/teza.pdf.