This paper studies the construction of the exact solution of parabolic coupled systems of the type ut = Auxx, A1u(0,t) + B1ux(0,t) = 0, A2u(1,t) + B2ux(1,t) = 0, 0 < x < 1, t > 0, and u(x,0) = f(x), where A1, A2, B1, and B2 are arbitrary matrices for which the block matrix (A1 B1; A2 B2) is nonsingular, and A is a positive stable matrix.
1. Introduction
Coupled partial differential systems with coupled boundary-value conditions are frequent in quantum mechanical scattering problems [1–3], chemical physics [4–6], thermoelastoplastic modelling [7], coupled diffusion problems [8–10], and other fields. In this paper, we consider systems of the type
(1) ut(x,t) - A uxx(x,t) = 0, 0 < x < 1, t > 0,
(2) A1 u(0,t) + B1 ux(0,t) = 0, t > 0,
(3) A2 u(1,t) + B2 ux(1,t) = 0, t > 0,
(4) u(x,0) = f(x), 0 ≤ x ≤ 1,
where the unknown u=(u1,u2,…,um)T and the initial condition f=(f1,f2,…,fm)T are m-dimensional vectors, Ai, Bi, i=1,2, are m×m complex matrices, elements of ℂm×m, and A is a matrix which satisfies the condition
(5) Re(z) > 0 for all eigenvalues z of A,
and we say that A is a positive stable matrix (where Re(z) denotes the real part of z∈ℂ). We assume that the block matrix
(6) (A1 B1; A2 B2) is regular
and also that
(7) the matrix pencil A1 + ρB1 is regular.
Condition (7) is well known from the literature of singular systems of differential equations, and it involves the existence of some ρ0∈ℂ such that matrix A1+ρ0B1 is invertible [11].
Problem (1)–(4), under the less restrictive condition (7), was solved in [12], but not for the case in which all of the blocks A1, A2, B1, B2 are singular (in particular, [12] takes A1 = I). Mixed problems of the previously mentioned type, but with the Dirichlet conditions u(0,t) = 0, u(1,t) = 0 instead of (2) and (3), have been treated in [13, 14].
Throughout this paper, and as usual, matrix I denotes the identity matrix. The set of all the eigenvalues of a matrix C in ℂm×m is denoted by σ(C), and its 2-norm ∥C∥ is defined by [15, page 56]
(8) ∥C∥ = sup_{x≠0} ∥Cx∥/∥x∥,
where for vector y∈ℂm, the Euclidean norm of y is ∥y∥. By [15, page 556], it follows that
(9) ∥e^(At)∥ ≤ e^(tα(A)) ∑_{k=0}^{m-1} (m∥A∥t)^k / k!, t ≥ 0,
where α(A)=max{Re(w);w∈σ(A)}. We say that a subspace E of ℂm is invariant by the matrix A∈ℂm×m, if A(E)⊂E. If B is a matrix in ℂn×m, we denote by B† its Moore-Penrose pseudoinverse. A collection of examples, properties, and applications of this concept may be found in [11, 16], and B† can be efficiently computed with the MATLAB and Mathematica computer algebra systems.
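Since several constructions below rely on B†, the following minimal NumPy sketch (an illustration only; the matrix B is an arbitrary hypothetical example) checks the four Penrose conditions that characterize the Moore-Penrose pseudoinverse:

```python
import numpy as np

# Hypothetical rank-deficient test matrix; any B in C^{n x m} works.
B = np.array([[1.0, 2.0], [2.0, 4.0], [0.0, 1.0]])
Bp = np.linalg.pinv(B)  # Moore-Penrose pseudoinverse B^+

# The four Penrose conditions characterizing B^+.
assert np.allclose(B @ Bp @ B, B)
assert np.allclose(Bp @ B @ Bp, Bp)
assert np.allclose((B @ Bp).conj().T, B @ Bp)
assert np.allclose((Bp @ B).conj().T, Bp @ B)
print("Penrose conditions hold")
```

The same call, `np.linalg.pinv`, is used in the sketches of later sections wherever B† appears.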
2. Preliminaries and Notation
In [17], eigenfunctions of problem (1)–(3) were constructed assuming other additional conditions besides (6) and (7). We recall in this section the notation and results needed. Let A~1 and B~1 be matrices defined by
(10) A~1 = (A1 + ρ0B1)^(-1) A1,  B~1 = (A1 + ρ0B1)^(-1) B1,
fulfilling the relation: A~1+ρ0B~1=I. Under hypothesis (6), matrix B2-(A2+ρ0B2)B~1 is regular; see [17, page 431], and let A~2 and B~2 be the matrices defined by
(11) A~2 = [B2 - (A2 + ρ0B2)B~1]^(-1) A2,  B~2 = [B2 - (A2 + ρ0B2)B~1]^(-1) B2,
so that they satisfy the relationships
(12) B~2 - (A~2 + ρ0B~2)B~1 = I,  B~2A~1 - A~2B~1 = I.
Assuming that the following condition holds:
(13) there exist b1 ∈ σ(B~1) - {0}, b2 ∈ σ(B~2), and v ∈ ℂ^m - {0} such that (B~1 - b1I)v = (B~2 - b2I)v = 0,
and that the values b1, b2 of condition (13) satisfy
(14) b1b2 ∈ ℝ, where b1 ∈ ℝ, or 2b1b2(Re(b1^(-1)) - ρ0) = 1 if b1 ∉ ℝ,
we can define the function
(15) α(ρ0,b1,b2,λ) = (1-b2+ρ0b1b2)(1-ρ0b1)/b1 - b1b2λ², λ > 0.
Note that under hypothesis (14) the existence of solutions of
(16) λ cot(λ) = (1-b2+ρ0b1b2)(1-ρ0b1)/b1 - b1b2λ²
is guaranteed.
Equation (16) has a unique solution λk in each interval (kπ,(k+1)π) for k≥1, as seen in Figure 1. Also, it is straightforward to prove the following lemma.
Figure 1: Graphical representation of y = λ cot(λ) and determination of the eigenvalues λn.
Lemma 1.
Under hypothesis (14), the roots λn of (16) satisfy lim_{n→∞} λn = +∞. Also, if b1b2 ≠ 0, then
(17) lim_{n→∞} sin(λn) = 0,  lim_{n→∞} |cos(λn)| = 1.
Otherwise, if b1b2 = 0, then
(18) lim_{n→∞} |sin(λn)| = 1,  lim_{n→∞} cos(λn) = 0.
In all cases,
(19) lim_{n→∞} (λn+1 - λn) = π.
Proof.
The function f(λ) = λ cot(λ) has vertical asymptotes at the points λ = kπ, k ∈ ℕ, and zeros at the points λ = (π/2) + kπ, k ∈ ℕ. Thus, as stated above, the graph of the real-coefficient function (1-b2+ρ0b1b2)(1-ρ0b1)/b1 - b1b2λ² intersects the graph of f(λ) in each interval (kπ,(k+1)π); let λk ∈ (kπ,(k+1)π) denote the point of intersection. Hence the sequence {λk}_{k≥1} is monotonically increasing with lim_{k→∞} λk = ∞. We have to consider two possibilities.
Case b1b2 > 0. The function (1-b2+ρ0b1b2)(1-ρ0b1)/b1 - b1b2λ² is decreasing and, as seen in Figure 1, λk ∈ ((π/2)+kπ, (k+1)π) for large enough k.
Case b1b2 < 0. The function is increasing and, as seen in Figure 1, λk ∈ (kπ, (π/2)+kπ) for large enough k.
Thus, if b1b2 ≠ 0, then (π/2) < λk+1 - λk < (3π/2) for sufficiently large k. Substituting λk into (16), one gets
(20) λk cot(λk) = (1-b2+ρ0b1b2)(1-ρ0b1)/b1 - b1b2λk²,
dividing by λk² and letting k → ∞:
(21) lim_{k→∞} cot(λk)/λk = -b1b2 ≠ 0.
This shows that the sequences {λk}_{k≥1} and {cot(λk)}_{k≥1} are equivalent infinities, so
(22) lim_{k→∞} cot(λk) = ∞,
whence lim_{k→∞} tan(λk) = 0. Moreover, since {cos(λk)}_{k≥1} is bounded, one gets lim_{k→∞} sin(λk) = 0 and lim_{k→∞} |cos(λk)| = 1. Taking into account that
(23) tan(λk+1 - λk) = (tan(λk+1) - tan(λk)) / (1 + tan(λk+1)tan(λk)),
and letting k → ∞, one gets lim_{k→∞} tan(λk+1 - λk) = 0; since (π/2) < λk+1 - λk < (3π/2), it follows that lim_{k→∞} (λk+1 - λk) = π.
If b1b2 = 0, there are two possibilities.
If (1-b2+ρ0b1b2)(1-ρ0b1)/b1 > 0, then, as seen in Figure 1, λk ∈ (kπ, (π/2)+kπ) for large enough k.
If (1-b2+ρ0b1b2)(1-ρ0b1)/b1 < 0, then, as seen in Figure 1, λk ∈ ((π/2)+kπ, (k+1)π) for large enough k.
Thus, if b1b2 = 0, then again (π/2) < λk+1 - λk < (3π/2) for sufficiently large k. Substituting λk into (16), one gets
(24) λk cot(λk) = (1-b2+ρ0b1b2)(1-ρ0b1)/b1,
dividing by λk and letting k → ∞, one gets lim_{k→∞} cot(λk) = 0; since the sequence {sin(λk)}_{k≥1} is bounded, it follows that lim_{k→∞} cos(λk) = 0 and lim_{k→∞} |sin(λk)| = 1. Moreover, one gets that
(25) cot(λk+1 - λk) = (cot(λk+1)cot(λk) + 1) / (cot(λk) - cot(λk+1)),
and letting k → ∞, one gets
(26) lim_{k→∞} cot(λk+1 - λk) = ∞;
since the sequence {cos(λk+1 - λk)}_{k≥1} is bounded, lim_{k→∞} sin(λk+1 - λk) = 0, and with (π/2) < λk+1 - λk < (3π/2) it follows that lim_{k→∞} (λk+1 - λk) = π.
Finally, if b1b2 = 0 and (1-b2+ρ0b1b2)(1-ρ0b1)/b1 = 0, then (16) reduces to λ cot(λ) = 0, whose roots are λk = (π/2) + kπ, k ∈ ℕ, so trivially λk+1 - λk = π and lim_{k→∞} (λk+1 - λk) = π.
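The behaviour described in Lemma 1 can be illustrated numerically. The following Python sketch (plain bisection on each interval (kπ,(k+1)π); the values ρ0 = 1, b1 = 1, b2 = 0.2 are arbitrary illustrative choices with b1b2 > 0) computes roots of (16) and shows that consecutive roots become asymptotically π apart, in agreement with (19):

```python
import math

rho0, b1, b2 = 1.0, 1.0, 0.2                  # illustrative values only
c = (1 - b2 + rho0*b1*b2) * (1 - rho0*b1) / b1

def g(lam):
    # zeros of g in (k*pi, (k+1)*pi) are the roots of lam*cot(lam) = c - b1*b2*lam^2
    return lam*math.cos(lam) - (c - b1*b2*lam**2)*math.sin(lam)

def root(k, iters=200):
    # g changes sign across (k*pi, (k+1)*pi), so bisection applies
    lo, hi = k*math.pi + 1e-12, (k + 1)*math.pi - 1e-12
    for _ in range(iters):
        mid = 0.5*(lo + hi)
        if g(lo)*g(mid) <= 0:
            hi = mid
        else:
            lo = mid
    return 0.5*(lo + hi)

lams = [root(k) for k in range(1, 40)]
gaps = [b - a for a, b in zip(lams, lams[1:])]
print(gaps[-1])   # approaches pi, in agreement with (19)
```

With b1b2 > 0 the computed roots fall in ((π/2)+kπ, (k+1)π), as in the first case of the proof.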
Under the hypothesis α(ρ0,b1,b2,λ0) < 1 there is also a root λ0 ∈ (0,π), and we can define the set of eigenvalues of problem (1)–(3) as
(27) ℱ = {λk ∈ (kπ,(k+1)π); λk cot(λk) = α(ρ0,b1,b2,λk), k ≥ 1} ∪ ℱ0,
where
(28) ℱ0 = ∅, if α(ρ0,b1,b2,λ0) ≥ 1;  ℱ0 = {λ0}, λ0 ∈ (0,π), if α(ρ0,b1,b2,λ0) < 1.
Thus, by [17, page 433] a set of solutions of problem (1) is given by
(29) u(x,t,λk) = e^(-λk²At){sin(λkx)A~1 - λk cos(λkx)B~1}C(λk),  λk ∈ ℱ,
where C(λk) satisfies
(30)G(ρ0,b1,b2,λk)C(λk)=0.
Observe that if p is the degree of the minimal polynomial of A, the matrix G(ρ0,b1,b2,λk) is defined by
(31) G(ρ0,b1,b2,λk) =
( B~1A - AB~1
  ⋮
  B~1A^(p-1) - A^(p-1)B~1
  (A~2A~1 + λk²B~2B~1) + α(ρ0,b1,b2,λk)I
  {(A~2A~1 + λk²B~2B~1) + α(ρ0,b1,b2,λk)I}A
  ⋮
  {(A~2A~1 + λk²B~2B~1) + α(ρ0,b1,b2,λk)I}A^(p-1) ).
In order to ensure that some C(λk) ≠ 0 satisfies (30), we require
(32)rankG(ρ0,b1,b2,λk)<m,
and under condition (32), the solution of (30) is given by
(33)C(λk)=(I-G(ρ0,b1,b2,λk)†G(ρ0,b1,b2,λk))S,S∈ℂm.
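Formula (33) can be illustrated numerically: I - G†G is the orthogonal projector onto Ker(G), so for any S the vector C = (I - G†G)S solves (30). A NumPy sketch, with a hypothetical rank-deficient matrix standing in for G(ρ0,b1,b2,λk):

```python
import numpy as np

# Hypothetical rank-deficient G (rank G < m), standing in for G(rho0, b1, b2, lambda_k).
G = np.array([[1.0, 2.0, 3.0],
              [2.0, 4.0, 6.0],
              [0.0, 1.0, 1.0]])
m = G.shape[1]
P = np.eye(m) - np.linalg.pinv(G) @ G   # projector onto Ker(G)

S = np.array([1.0, -2.0, 0.5])          # arbitrary S in C^m
C = P @ S                               # candidate C(lambda_k), cf. (33)
print(np.linalg.norm(G @ C))            # ~0: C solves (30)
```

Since rank G = 2 < m = 3 here, condition (32) holds and C is a nonzero solution of (30).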
The eigenfunctions associated to the problem (1) are then given by
(34) u(x,t,λk) = e^(-λk²At){sin(λkx)A~1 - λk cos(λkx)B~1}C(λk),  λk ∈ ℱ.
Also λ=0 is an eigenvalue of problem (1), if
(35)1∈σ(-A~2A~1).
Under hypothesis (35), let G(ρ0,0) = A~2A~1 + I. If we denote by
(36)C(0)=(I-G(ρ0,0)†G(ρ0,0))S,S∈ℂm,
one gets that function
(37)u(x,0)=(xA~1-B~1)C(0)
is an eigenfunction of problem (1) associated to eigenvalue λ=0.
All these results are summarized in Theorem 2.1 of [17, page 434]. Our goal is to find the exact solution of the problem (1)–(4). We provide conditions for the function f(x) and the matrix coefficients in order to ensure the existence of a series solution of the problem. The paper is organized as follows. In Section 3 a series solution for the problem is presented. In Section 4 we proceed with an algorithm and give an illustrative example.
3. A Series Solution
By the superposition principle, a possible candidate to the series solution of problem (1)–(4) is given by
(38) u(x,t) = u(x,0) + ∑_{λn∈ℱ} u(x,t,λn), if 0 ∈ ℱ;  u(x,t) = ∑_{λn∈ℱ} u(x,t,λn), if 0 ∉ ℱ,
where u(x,t,λn) and u(x,0) are defined by (34) and (37), respectively, for suitable vectors C(λn) and C(0).
Assuming that series (38) and the corresponding derivatives ux(x,t), uxx(x,t), and ut(x,t) are convergent (we will demonstrate this later), (38) will be a solution of (1)–(3). Now, we need to determine vectors C(λ) and C(0) so that (38) satisfies (4).
Note that, taking v to satisfy (13), from (12) one gets
(39) A~2v = -((1-b2+ρ0b1b2)/b1)v,  A~1v = (1-ρ0b1)v.
Under condition (39), we will consider the scalar Sturm-Liouville problem:
(40) X″(x) + λ²X(x) = 0,
(1-ρ0b1)X(0) + b1X′(0) = 0,
-((1-b2+ρ0b1b2)/b1)X(1) + b2X′(1) = 0,
which provides a family of eigenvalues ℱ given in (27). Then, the associated eigenfunctions are
(41) Xλn(x) = (1-ρ0b1)sin(λnx) - b1λn cos(λnx), λn > 0;  X0(x) = (1-ρ0b1)x - b1, if λ0 = 0.
By the Sturm-Liouville convergence theorem for function series [18, chapter 11], with the initial condition f(x) = (f1(x),…,fm(x))^T given in (4) satisfying the following properties:
(42) f ∈ 𝒞²([0,1]),  (1-ρ0b1)f(0) + b1f′(0) = 0,  -((1-b2+ρ0b1b2)/b1)f(1) + b2f′(1) = 0,
each component fi of f, for 1≤i≤m, has a series expansion which converges absolutely and uniformly on the interval [0,1]; namely,
(43) fi(x) = α((1-ρ0b1)x - b1)e0^i + ∑_{λn∈ℱ} ((1-ρ0b1)sin(λnx) - b1λn cos(λnx))eλn^i,
where
(44) α = 1, if (1-b2+ρ0b1b2)(1-ρ0b1)/b1 = 1;  α = 0, if (1-b2+ρ0b1b2)(1-ρ0b1)/b1 ≠ 1;
e0^i = ∫₀¹ ((1-ρ0b1)x - b1) fi(x) dx / ∫₀¹ ((1-ρ0b1)x - b1)² dx, if λ0 = 0;
eλn^i = ∫₀¹ ((1-ρ0b1)sin(λnx) - b1λn cos(λnx)) fi(x) dx / ∫₀¹ ((1-ρ0b1)sin(λnx) - b1λn cos(λnx))² dx, if λn > 0.
Thus,
(45) f(x) = α((1-ρ0b1)x - b1)E(0) + ∑_{λn∈ℱ} ((1-ρ0b1)sin(λnx) - b1λn cos(λnx))E(λn),
where E(0) = (e0^1,…,e0^m)^T and E(λn) = (eλn^1,…,eλn^m)^T. On the other hand, from (38) and taking into account (34) and (37), one gets
(46) f(x) = u(x,0) = α(xA~1 - B~1)C(0) + ∑_{λn∈ℱ} (sin(λnx)A~1 - λn cos(λnx)B~1)C(λn).
We can equate these two expressions provided that C(0) and C(λn), besides conditions (33) and (36), satisfy C(0), C(λn) ∈ Ker(B~1 - b1I). Then we have
(47) C(λn) = E(λn) = ∫₀¹ ((1-ρ0b1)sin(λnx) - b1λn cos(λnx)) f(x) dx / ∫₀¹ ((1-ρ0b1)sin(λnx) - b1λn cos(λnx))² dx, if λn > 0;
C(0) = E(0) = ∫₀¹ ((1-ρ0b1)x - b1) f(x) dx / ∫₀¹ ((1-ρ0b1)x - b1)² dx, if λ0 = 0.
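The scalar quotients in (47) are ordinary Sturm-Liouville Fourier coefficients and can be evaluated by quadrature. A NumPy sketch (using the illustrative values ρ0 = 1, b1 = 1, b2 = 0 of the example in Section 4 and a scalar component f satisfying (42)) checks that the resulting expansion reproduces f:

```python
import numpy as np

rho0, b1, b2 = 1.0, 1.0, 0.0            # the values used in the example of Section 4
x = np.linspace(0.0, 1.0, 20001)
h = x[1] - x[0]

def integ(y):                           # composite trapezoidal rule on [0, 1]
    return h * (np.sum(y) - 0.5 * (y[0] + y[-1]))

f = x**2 - 1.0                          # a scalar component of f(x) satisfying (42)

lams = [np.pi/2 + n*np.pi for n in range(40)]   # the eigenvalues of the example
approx = np.zeros_like(x)
for lam in lams:
    X = (1 - rho0*b1)*np.sin(lam*x) - b1*lam*np.cos(lam*x)   # eigenfunction (41)
    approx += (integ(X*f) / integ(X*X)) * X                  # coefficient (47)

print(np.max(np.abs(approx - f)))       # small: the expansion converges to f
```

The uniform error decays with the number of retained eigenfunctions, as guaranteed by the convergence theorem cited above.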
Note that C(0), C(λn) ∈ Ker(B~1 - b1I) if
(48)f(x)∈Ker(B~1-b1I).
Then u(x,t) defined by
(49) u(x,t) = α((1-ρ0b1)x - b1)C(0) + ∑_{λn∈ℱ} e^(-λn²At)((1-ρ0b1)sin(λnx) - b1λn cos(λnx))C(λn),
where α and C(λn) are defined by (44) and (47), satisfies the initial condition (4). Note that conditions (30)–(32) hold if
(50) G(ρ0,b1,b2,λn)f(x) = 0,
and then
(51) (B~1 - b1I)A^j f(x) = 0, 0 ≤ j < p,
(A~2A~1 + λn²B~2B~1 + α(ρ0,b1,b2,λn)I)A^j f(x) = 0, 0 ≤ j < p.
It is easy to check that conditions (48), (51) are equivalent to the condition
(52) A^j f(x) ∈ Ker(B~1 - b1I) ∩ Ker(B~2 - b2I), 0 ≤ j < p.
Condition (52) holds if
(53) f(x) ∈ Ker(B~1 - b1I) ∩ Ker(B~2 - b2I), 0 ≤ x ≤ 1, and Ker(B~1 - b1I) ∩ Ker(B~2 - b2I) is an invariant subspace with respect to the matrix A.
Now we study the convergence of the solution given by (49) with α defined by (44) and C(λn) by (47). Using Parseval's identity for scalar Sturm-Liouville problems [19], there exists a positive constant M1>0 so that ∥C(λn)∥≤M1. Taking formal derivatives in (49), one gets
(54) ut(x,t) = ∑_{λn∈ℱ} (-λn²) e^(-λn²At) A ((1-ρ0b1)sin(λnx) - b1λn cos(λnx)) C(λn),
ux(x,t) = α(1-ρ0b1)C(0) + ∑_{λn∈ℱ} λn e^(-λn²At) ((1-ρ0b1)cos(λnx) + b1λn sin(λnx)) C(λn),
uxx(x,t) = ∑_{λn∈ℱ} λn² e^(-λn²At) (-(1-ρ0b1)sin(λnx) + b1λn cos(λnx)) C(λn).
These series are all bounded in their respective norms:
(55) ∥u(x,t)∥ ≤ α(|1-ρ0b1|x + |b1|)∥C(0)∥ + ∑_{λn∈ℱ} [∥e^(-λn²At)∥|1-ρ0b1| + ∥λn e^(-λn²At)∥|b1|]M1,
∥ut(x,t)∥ ≤ ∑_{λn∈ℱ} [∥λn² e^(-λn²At)A∥|1-ρ0b1| + ∥λn³ e^(-λn²At)A∥|b1|]M1,
∥ux(x,t)∥ ≤ α|1-ρ0b1|∥C(0)∥ + ∑_{λn∈ℱ} [∥λn e^(-λn²At)∥|1-ρ0b1| + ∥λn² e^(-λn²At)∥|b1|]M1,
∥uxx(x,t)∥ ≤ ∑_{λn∈ℱ} [∥λn² e^(-λn²At)∥|1-ρ0b1| + ∥λn³ e^(-λn²At)∥|b1|]M1.
To check that these series are uniformly convergent in each domain [0,1]×[c,d], with 0 < c < d, it is sufficient to verify that the series
(56)∑λn∈ℱλn3e-λn2At
is uniformly convergent in this domain. This is trivial because, using (9), one gets
(57) ∥λn³ e^(-λn²At)∥ ≤ e^(-λn²α(A)t) ∑_{k=0}^{m-1} (m∥A∥t)^k λn^(2k+3) / k!,
and applying the d'Alembert ratio test to each summand, taking into account (5) and relation (19) of Lemma 1, lim_{n→∞}(λn+1 - λn) = π, one gets for 3 ≤ r ≤ 2(m-1)+3 that
(58) lim_{n→∞} e^((λn²-λn+1²)α(A)t) (λn+1/λn)^r = 0 < 1,
since λn+1² - λn² = (λn+1 - λn)(λn+1 + λn) → ∞, α(A) > 0 by (5), and λn+1/λn → 1.
Thus, the series (56) is convergent.
Independence of the series solution (49) from the choice of ρ0 ∈ ℝ can be demonstrated using the same technique as in [20].
We can summarize the results in the following theorem.
Theorem 2.
Consider the homogeneous problem with homogeneous conditions (1)–(4) under hypotheses (5), (6), and (7), verifying conditions (13) and (14). Let f(x) be a vector function satisfying (42). Let ℱ be the set defined by (27) and G(ρ0,b1,b2,λk) the matrix defined by (31), taking as eigenvalues of the problem the λk ∈ ℱ satisfying
(59)rank(G(ρ0,b1,b2,λk))<m,
including the eigenvalue λ = 0 if 1 ∈ σ(-A~2A~1), and taking as eigenfunctions the u(x,t,λk) defined by (34). Let α be given by (44) and the vectors C(λn) be defined by (47). Then u(x,t), as defined in (49), is a series solution of problem (1)–(4).
4. Algorithm and Example
We can summarize the process to calculate the solution of the homogeneous problem with homogeneous conditions (1)–(4) in Algorithm 1.
Algorithm 1: Solution of the homogeneous problem with homogeneous conditions (1)–(4).
Input data: A,A1,A2,B1,B2∈ℂm×m, f(x)∈ℂm.
Result: u(x,t).
(1) Check that matrix A satisfies (5).
(2) Check that the matrices Ai, Bi ∈ ℂ^(m×m), i ∈ {1,2}, are singular, and check that the block matrix
(A1B1A2B2) is regular.
(3) Determine a number ρ0∈ℝ so that the matrix pencil A1+ρ0B1 is regular.
(4) Determine matrices A1~ and B1~ defined by (10).
(5) Determine matrices A2~ and B2~ defined by (11).
(6) Consider the following cases:
(i) Case 1. Condition (13) holds, that is, matrices B~1 and B~2 have a common eigenvector v≠0 associated
with eigenvalues b1∈σ(B~1)-{0} and b2∈σ(B~2). In this case continue with step (7).
(ii) Case 2. Condition (13) does not hold. In this case the algorithm stops because it is not possible to
find the solution of (1)–(4) for the given data.
(7) Determine b1∈σ(B~1), b1≠0, b2∈σ(B~2) and vector v≠0 verifying
v∈Ker(B~1-b1I)∩Ker(B~2-b2I) such that:
(i) Conditions (53) hold, that is:
1.1: Ker(B~1-b1I) ∩ Ker(B~2-b2I) is an invariant subspace with respect to the matrix A.
1.2: f(x)∈Ker(B~1-b1I)∩Ker(B~2-b2I), ∀x∈[0,1].
(ii) Conditions (14) hold, that is:
1.3: (1-b2+ρ0b1b2)(1-ρ0b1)/b1 ∈ ℝ, b1b2 ∈ ℝ.
(iii) The vectorial function f(x) satisfies (42), that is:
1.4: f∈𝒞2([0,1]).
1.5: (1-ρ0b1)f(0)+b1f′(0)=0.
1.6: -((1-b2+ρ0b1b2)/b1)f(1) + b2f′(1) = 0.
If these conditions are not satisfied, return to step (6) of Algorithm 1 discarding the values
taken for b1 and b2.
(8) Determine the positive solutions of (16) and determine ℱ defined by (27).
(9) Determine degree p of minimal polynomial of matrix A.
(10) Build the block matrix G(ρ0,b1,b2,λk) defined by (31).
(11) Determine the λk ∈ ℱ such that rank G(ρ0,b1,b2,λk) < m.
(12) Include the eigenvalue λ=0 if 1∈σ(-A~2A~1).
(13) Determine α given by (44).
(14) Determine vectors C(λn) defined by (47).
(15) Determine functions u(x,t,λn) defined by (34).
(16) Determine the series solution u(x,t) of problem (1)–(4) defined by (49).
Example 1.
We will consider the homogeneous parabolic problem with homogeneous conditions (1)–(4), where the matrix A∈ℂ4×4 is chosen as
(60) A = ( 2 0 0 -1
           1 2 1 -2
          -1 0 2  1
           0 0 0  1 ),
and the 4×4 matrices Ai, Bi, i∈{1,2}, are
(61) A1 = ( 0 0 0 0
            0 0 0 0
            0 0 1 0
            0 0 0 1 ),  A2 = ( 0 1 0 0
                               1 0 0 0
                               0 0 0 1
                               0 0 0 0 ),
B1 = ( 1 0 0 0
       0 1 0 0
       0 0 0 0
       0 0 0 0 ),  B2 = ( 1 0 0 0
                          1 0 0 0
                          0 0 1 0
                          0 0 0 1 ).
Also, the vectorial valued function f(x) will be defined as
(62) f(x) = (0, x²-1, 0, 0)^T.
Observe that the method proposed in [12] cannot be applied to solve this problem.
We will follow Algorithm 1 step by step.
Matrix A satisfies the condition (5), because σ(A)={1,2}. That is, A is positive stable.
Each of the matrices Ai,Bi, i∈{1,2}, is singular, and the block matrix
(63) (A1 B1; A2 B2) =
( 0 0 0 0 1 0 0 0
  0 0 0 0 0 1 0 0
  0 0 1 0 0 0 0 0
  0 0 0 1 0 0 0 0
  0 1 0 0 1 0 0 0
  1 0 0 0 1 0 0 0
  0 0 0 1 0 0 1 0
  0 0 0 0 0 0 0 1 )
is regular.
Note that although A1 is singular, taking ρ0=1∈ℝ, the matrix pencil
(64) A1 + ρ0B1 = I
is regular. Therefore, we take ρ0=1.
By (10) we have
(65) A~1 = (A1 + ρ0B1)^(-1) A1 = A1,  B~1 = (A1 + ρ0B1)^(-1) B1 = B1.
By (11) we have
(66) A~2 = (B2 - (A2 + ρ0B2)B~1)^(-1) A2 = ( -1 0 0 0
                                              0 -1 0 0
                                              0 0 0 1
                                              0 0 0 0 ),
B~2 = (B2 - (A2 + ρ0B2)B~1)^(-1) B2 = ( -1 0 0 0
                                        -1 0 0 0
                                         0 0 1 0
                                         0 0 0 1 ).
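Steps (4) and (5) of Algorithm 1 are routine matrix computations; the following NumPy sketch reproduces (65) and (66) for the data of this example and checks the common eigenvector of condition (13):

```python
import numpy as np

A1 = np.array([[0,0,0,0],[0,0,0,0],[0,0,1,0],[0,0,0,1]], dtype=float)
A2 = np.array([[0,1,0,0],[1,0,0,0],[0,0,0,1],[0,0,0,0]], dtype=float)
B1 = np.array([[1,0,0,0],[0,1,0,0],[0,0,0,0],[0,0,0,0]], dtype=float)
B2 = np.array([[1,0,0,0],[1,0,0,0],[0,0,1,0],[0,0,0,1]], dtype=float)
rho0 = 1.0

P = np.linalg.inv(A1 + rho0*B1)          # here A1 + rho0*B1 = I
At1, Bt1 = P @ A1, P @ B1                # tilde matrices of (10)
M = B2 - (A2 + rho0*B2) @ Bt1            # regular by hypothesis (6)
Minv = np.linalg.inv(M)
At2, Bt2 = Minv @ A2, Minv @ B2          # tilde matrices of (11)

v = np.array([0.0, 1.0, 0.0, 0.0])       # common eigenvector of condition (13)
print(Bt1 @ v - v, Bt2 @ v)              # both zero vectors: b1 = 1, b2 = 0
```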
We have σ(B~1) = {0, 1} and σ(B~2) = {0, 1, -1}. Condition (13) holds because, for b1 = 1 ∈ σ(B~1) - {0} and b2 = 0 ∈ σ(B~2), there exists a common eigenvector v = (0,1,0,0)^T ∈ ℂ⁴, so Ker(B~1 - I) ∩ Ker(B~2) ≠ {0}. We are therefore in Case 1 of Algorithm 1.
We take the values b1=1 and b2=0 and will check the conditions given in step 7 of the algorithm.
One gets that
(67) Ker(B~1 - I) ∩ Ker(B~2) = ⟨(0, 1, 0, 0)^T⟩.
Let x ∈ Ker(B~1 - I) ∩ Ker(B~2). Then x = (0, λ, 0, 0)^T, λ ∈ ℂ. In this case one gets
(68) Ax = (0, 2λ, 0, 0)^T ∈ Ker(B~1 - I) ∩ Ker(B~2),
and then the subspace Ker(B~1-I)∩Ker(B~2) is invariant by matrix A.
It is trivial to check that
(69)f(x)∈Ker(B~1-I)∩Ker(B~2),∀x∈[0,1].
With these values ρ0, b1, and b2, one gets that
(70) (1-b2+ρ0b1b2)(1-ρ0b1)/b1 = 0 ∈ ℝ.
With these values b1 and b2, one gets
(71)b1b2=0∈ℝ.
It is trivial to check that f(x)∈𝒞2([0,1]).
It is trivial to check that (1-ρ0b1)f(0)+b1f′(0)=(0,0,0,0)t.
It is trivial to check that -((1-b2+ρ0b1b2)/b1)f(1)+b2f′(1)=(0,0,0,0)t.
Equation (16) is of the form
(72) λ cot(λ) = 0.
We can solve (72) exactly: λk = (π/2) + kπ, k ≥ 1, with an additional solution λ0 ∈ (0, π), because
(73)(1-b2+ρ0b1b2)(1-ρ0b1)b1=0<1,
and then λ0 = π/2. Thus, we have a countable family of solutions of (72), denoted by ℱ and given by
(74) ℱ = {λk = (π/2) + kπ; λk ∈ (kπ,(k+1)π), k ≥ 1} ∪ ℱ0,  ℱ0 = {λ0 = π/2}.
The minimal polynomial of the matrix A is p(x) = (x-2)³(x-1); hence p = 4.
If λk is a positive solution of (72), the matrix G(ρ0,b1,b2,λk) given by (31) takes the form
(75) G(1,1,0,λk) =
(  0    0  0   -1
   0    0  1   -2
   1    0  0    0
   0    0  0    0
   0    0  0   -3
   0    0  4   -6
   4    0  0    0
   0    0  0    0
   0    0  0   -7
   0    0  12  -13
   12   0  0    0
   0    0  0    0
  -λk²  0  0    0
  -λk²  0  0    0
   0    0  0    1
   0    0  0    0
  -2λk² 0  0    λk²
  -2λk² 0  0    λk²
   0    0  0    1
   0    0  0    0
  -4λk² 0  0    3λk²
  -4λk² 0  0    3λk²
   0    0  0    1
   0    0  0    0
  -8λk² 0  0    7λk²
  -8λk² 0  0    7λk²
   0    0  0    1
   0    0  0    0 ).
Since the second column of G(1,1,0,λk) is zero, we have rank(G(1,1,0,λk)) < 4. Thus each of the positive solutions given by (74) is an eigenvalue.
It is trivial to check that 1∉σ(-A~2A~1), because
(76) -A~2A~1 = ( 0 0 0 0
                 0 0 0 0
                 0 0 0 -1
                 0 0 0 0 ),  σ(-A~2A~1) = {0}.
Then we do not include 0 as an eigenvalue.
Taking into account that (1-b2+ρ0b1b2)(1-ρ0b1)/b1 = 0 < 1, one gets α = 0.
Vectors C(λn) defined by (47) take the values
(77) C(λn) = (64(-1)^n / (π⁴(2n+1)⁴)) (0, 1, 0, 0)^T.
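The closed form (77) can be checked by evaluating the quotient (47) numerically (NumPy sketch; the trapezoidal rule is an arbitrary quadrature choice):

```python
import numpy as np

x = np.linspace(0.0, 1.0, 200001)
h = x[1] - x[0]
def integ(y):                          # composite trapezoidal rule on [0, 1]
    return h * (np.sum(y) - 0.5 * (y[0] + y[-1]))

f2 = x**2 - 1.0                        # second component of f(x)
pairs = []
for n in range(4):
    lam = np.pi/2 + n*np.pi
    X = -lam * np.cos(lam * x)         # eigenfunction (41) with rho0 = b1 = 1, b2 = 0
    num = integ(X * f2) / integ(X * X) # quotient (47)
    exact = 64 * (-1)**n / (np.pi**4 * (2*n + 1)**4)
    pairs.append((num, exact))
    print(n, num, exact)               # the two columns agree
```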
Using the minimal theorem [21, page 571], one gets that
(78) e^(Au) =
( e^(2u)               0       0        -e^u(e^u - 1)
  -(1/2)e^(2u)(u-2)u   e^(2u)  e^(2u)u  (1/2)e^u(2 + e^u(-2 + (-2+u)u))
  -e^(2u)u             0       e^(2u)   e^(2u)u
  0                    0       0        e^u ).
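Identity (78) can be verified numerically by comparing a truncated Taylor series for e^(Au) with a few of the closed-form entries (NumPy sketch; u = 0.3 is an arbitrary test value):

```python
import numpy as np
from math import exp, factorial

A = np.array([[ 2, 0, 0, -1],
              [ 1, 2, 1, -2],
              [-1, 0, 2,  1],
              [ 0, 0, 0,  1]], dtype=float)
u = 0.3                                  # arbitrary test value
# truncated Taylor series of the matrix exponential; 30 terms suffice here
E = sum(np.linalg.matrix_power(A, k) * u**k / factorial(k) for k in range(30))

# compare a few entries with the closed form (78)
print(E[0, 0] - exp(2*u))                # ~0
print(E[0, 3] + exp(u)*(exp(u) - 1))     # ~0
print(E[2, 0] + u*exp(2*u))              # ~0
print(E[2, 3] - u*exp(2*u))              # ~0
```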
Next, substituting u = -((π/2)+nπ)²t in (78) and simplifying, we obtain the value of e^(-((π/2)+nπ)²At). Taking into account that all eigenvalues λn are positive, the associated eigenfunctions are
(79)u(x,t,λn)=e-λn2At((1-ρ0b1)sin(λnx)-b1λncos(λnx))C(λn).
We replace the values of C(λn) given by (77) in (79) and take into account the value of the matrix e-((π/2)+nπ)2At. After simplification, we finally obtain the solution of (1)–(4) given by
(80) u(x,t) = ( ∑_{n≥0} (-32(-1)^n / (π³(2n+1)³)) e^(-(1/2)(π+2nπ)²t) cos((1/2)(π+2nπ)x) ) (0, 1, 0, 0)^T.
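As a consistency check, at t = 0 the second component of (80) must reproduce f2(x) = x² - 1; a NumPy sketch of the partial sum:

```python
import numpy as np

x = np.linspace(0.0, 1.0, 1001)
u2 = np.zeros_like(x)                     # second component of u(x, 0) from (80)
for n in range(2000):
    coef = -32 * (-1)**n / (np.pi**3 * (2*n + 1)**3)
    u2 += coef * np.cos(0.5 * (np.pi + 2*n*np.pi) * x)
print(np.max(np.abs(u2 - (x**2 - 1.0))))  # small: u(x, 0) = f(x), condition (4)
```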
Acknowledgments
This research has been supported by the Universitat Politècnica de València grant PAID-06-11-2020. The third author has been partially supported by Universitat Jaume I grant P1.1B2012-05.
References
[1] M. H. Alexander and D. E. Manolopoulos, "A stable linear reference potential algorithm for solution of the quantum close-coupled equations in molecular scattering theory," 1987, vol. 86, pp. 2044–2050.
[2] V. S. Melezhik, I. V. Puzynin, T. P. Puzynina, and L. N. Somov, "Numerical solution of a system of integro-differential equations arising from the quantum mechanical three-body problem with Coulomb interaction," 1984, vol. 54, no. 2, pp. 221–236. doi:10.1016/0021-9991(84)90115-3.
[3] W. T. Reid, John Wiley & Sons, New York, NY, USA, 1971, xv+553 pp.
[4] R. D. Levine, M. Shapiro, and B. Johnson, "Transition probabilities in molecular collisions: computational studies of rotational excitation," 1970, vol. 53, pp. 1755–1766.
[5] J. V. Lill, T. G. Schmalz, and J. C. Light, "Imbedded matrix Green's functions in atomic and molecular scattering theory," 1983, vol. 78, no. 7, pp. 4456–4463. doi:10.1063/1.445338.
[6] F. Mrugala and D. Secrest, "The generalized log-derivative method for inelastic and reactive collisions," 1983, vol. 78, pp. 5954–5961.
[7] T. Hueckel, M. Borsetto, and A. Peano, John Wiley & Sons, New York, NY, USA, 1987.
[8] J. Crank, 2nd edition, Oxford University Press, 1995.
[9] M. D. Mikhailov and M. N. Osizik, John Wiley & Sons, New York, NY, USA, 1984.
[10] I. Stakgold, John Wiley & Sons, New York, NY, USA, 1979.
[11] S. L. Campbell and C. D. Meyer Jr., Pitman, London, UK, 1979.
[12] L. Jódar, E. Navarro, and J. A. Martin, "Exact and analytic-numerical solutions of strongly coupled mixed diffusion problems," 2000, vol. 43, no. 2, pp. 269–293. doi:10.1017/S0013091500020927.
[13] L. Jódar and E. Ponsoda, "Continuous numerical solutions and error bounds for time dependent systems of partial differential equations: mixed problems," 1995, vol. 29, no. 8, pp. 63–71. doi:10.1016/0898-1221(95)00030-3.
[14] E. Navarro, E. Ponsoda, and L. Jódar, "A matrix approach to the analytic-numerical solution of mixed partial differential systems," 1995, vol. 30, no. 1, pp. 99–109. doi:10.1016/0898-1221(95)00071-6.
[15] G. H. Golub and C. F. Van Loan, The Johns Hopkins University Press, Baltimore, MD, USA, 1989.
[16] C. R. Rao and S. K. Mitra, John Wiley & Sons, New York, NY, USA, 1971, xiv+240 pp.
[17] E. Navarro, L. Jódar, and M. V. Ferrer, "Constructing eigenfunctions of strongly coupled parabolic boundary value systems," 2002, vol. 15, no. 4, pp. 429–434. doi:10.1016/S0893-9659(01)00154-9.
[18] E. L. Ince, Dover, New York, NY, USA, 1962.
[19] E. A. Coddington and N. Levinson, McGraw-Hill, New York, NY, USA, 1967.
[20] V. Soler, E. Navarro, and M. V. Ferrer, "Invariant properties of eigenfunctions for multicondition boundary value problems," 2006, vol. 19, no. 12, pp. 1308–1312. doi:10.1016/j.aml.2005.06.019.
[21] N. Dunford and J. Schwartz, Interscience, New York, NY, USA, 1977.