The regular system of differential equations with convolution terms solved by the Sumudu transform.

1. Introduction

A differential equation by itself is inherently underconstrained in the absence of initial values or boundary conditions. It is also well known that a differential equation together with initial values or boundary conditions can be represented by an integral equation, and by using this integral representation it becomes possible to solve the problem. One of the most important applications of integral transform methods is the solution of second-order partial differential equations (PDEs). For this purpose a new integral transform, called the Sumudu transform, was recently introduced by Watugala [1, 2]; Weerakoon [3] obtained the Sumudu transform of partial derivatives and provided the complex inversion formula in order to solve differential equations arising in applications in systems engineering, control theory, and applied physics. The convolution theorem for the Sumudu transform was proved by Asiru in [4]. This new transform was applied to the solution of ordinary differential equations and control engineering problems; see [1, 5]. In [6], some fundamental properties of the Sumudu transform were established. In [7], the transform was applied to the one-dimensional neutron transport equation. The relationship between the double Sumudu and double Laplace transforms was studied in [8, 9]. Furthermore, in [10] the Sumudu transform was extended to distributions and some of its properties were studied. Thus there have been several works on the Sumudu transform and its application to different kinds of problems.

In this paper, we prove a convolution theorem for the Sumudu transform of matrices and use it to solve regular systems of differential equations.
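For reference, the Sumudu transform used throughout is $S[f](u)=\int_{0}^{\infty}f(ut)e^{-t}\,dt$ for $u>0$. The following sympy sketch (the helper name `sumudu` is ours, purely for illustration) computes it for a few standard functions:

```python
import sympy as sp

t, u = sp.symbols("t u", positive=True)

def sumudu(f):
    # Illustrative helper: S[f](u) = ∫_0^∞ f(u t) e^{-t} dt, assuming u > 0
    return sp.simplify(sp.integrate(f.subs(t, u * t) * sp.exp(-t), (t, 0, sp.oo)))

print(sumudu(sp.Integer(1)))  # 1
print(sumudu(t))              # u
print(sumudu(sp.sin(t)))      # u/(u**2 + 1)
```

These match the standard Sumudu transform pairs $S[1]=1$, $S[t]=u$, and $S[\sin t]=u/(1+u^{2})$.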

Throughout the paper we consider a square $n\times n$ matrix $P=[P_{ij}]$ of polynomials and the associated determinant $\det(P)$. If $\det(P)$ is not the zero polynomial (which we write as $\det(P)\neq 0$), we have $\deg[\det(P)]\le|N(P)|$, where $N(P)=(N_{1}(P),\ldots,N_{n}(P))$ records the column degrees of the polynomials in $P$ and $|N(P)|=\sum_{j}N_{j}(P)$. The case of equality is so important that we make the following statement. We say that $P$ is regular if $\det(P)\neq 0$ and

$$\deg[\det(P)]=|N(P)|,$$
where $N_{j}(P)$ is the highest power of the variable that occurs in the $j$th column of the matrix $P$, that is,

$$N_{j}(P)=\max_{\substack{1\le i\le m\\ P_{ij}\neq 0}}\deg(P_{ij}).$$
Next we extend the result given in [4] as follows. For each i and j we define ΨP(i,j)(x) to be the 1×Nj matrix of polynomials given by the matrix product

$$\Psi_P(i,j)(x)=\begin{pmatrix}\dfrac{1}{x}&\dfrac{1}{x^{2}}&\dfrac{1}{x^{3}}&\cdots&\dfrac{1}{x^{N_j}}\end{pmatrix}\begin{pmatrix}a_{1}&a_{2}&\cdots&a_{N_j-1}&a_{N_j}\\ a_{2}&a_{3}&\cdots&a_{N_j}&0\\ a_{3}&a_{4}&\cdots&0&0\\ \vdots&\vdots&&\vdots&\vdots\\ a_{N_j}&0&\cdots&0&0\end{pmatrix},$$
where the $a_{k}$ are the coefficients of $P_{ij}$ and $P_{ij}=\sum_{k=0}^{N_j}a_{k}/x^{k}$. In terms of (7.1) in [4] we have

$$\Psi_P(i,j)(x)=\begin{pmatrix}\Psi_{P_{ij}}(x)&0&0&\cdots&0\end{pmatrix},$$
where the number of zeros indicated is $N_{j}-\deg(P_{ij})$. We define $\Psi_P$ to be the $m\times|N|$ matrix of polynomials defined in terms of the array of matrices:

$$\Psi_P=\begin{pmatrix}\Psi_P(1,1)&\Psi_P(1,2)&\cdots&\Psi_P(1,n)\\ \Psi_P(2,1)&\Psi_P(2,2)&\cdots&\Psi_P(2,n)\\ \vdots&\vdots&&\vdots\\ \Psi_P(m,1)&\Psi_P(m,2)&\cdots&\Psi_P(m,n)\end{pmatrix}.$$
For each complex number $x$, $\Psi_P(x)$ defines a linear mapping of $\mathbb{C}^{|N|}$ into $\mathbb{C}^{m}$. If any $N_{j}$ is zero, $\Psi_P(i,j)$ is the empty matrix for all $i$ and the corresponding column of matrices in $\Psi_P$ is absent. If $N_{j}=0$ for all $j$, $\Psi_P(x)$ is defined to be the unique linear mapping of $\{0\}=\mathbb{C}^{0}$ into $\mathbb{C}^{m}$; its matrix representation is then the empty matrix. In particular, consider

$$P(x)=\begin{pmatrix}\dfrac{1}{x^{4}}+\dfrac{2}{x^{2}}&1-\dfrac{2}{x}\\[1ex] \dfrac{1}{x^{4}}+\dfrac{2}{x}&\dfrac{4}{x^{3}}\end{pmatrix}.$$
Then we have $N_{1}=4$, $N_{2}=3$, and $N=(4,3)$. Thus $\Psi_P$ is the $2\times 7$ matrix computed as

$$\Psi_P(x)=\begin{pmatrix}
\begin{pmatrix}\frac{1}{x}&\frac{1}{x^{2}}&\frac{1}{x^{3}}&\frac{1}{x^{4}}\end{pmatrix}\begin{pmatrix}0&2&0&1\\2&0&1&0\\0&1&0&0\\1&0&0&0\end{pmatrix}&\begin{pmatrix}\frac{1}{x}&\frac{1}{x^{2}}&\frac{1}{x^{3}}\end{pmatrix}\begin{pmatrix}-2&0&0\\0&0&0\\0&0&0\end{pmatrix}\\[2ex]
\begin{pmatrix}\frac{1}{x}&\frac{1}{x^{2}}&\frac{1}{x^{3}}&\frac{1}{x^{4}}\end{pmatrix}\begin{pmatrix}2&0&0&1\\0&0&1&0\\0&1&0&0\\1&0&0&0\end{pmatrix}&\begin{pmatrix}\frac{1}{x}&\frac{1}{x^{2}}&\frac{1}{x^{3}}\end{pmatrix}\begin{pmatrix}0&0&4\\0&4&0\\4&0&0\end{pmatrix}
\end{pmatrix}
=\begin{pmatrix}\frac{2}{x^{2}}+\frac{1}{x^{4}}&\frac{2}{x}+\frac{1}{x^{3}}&\frac{1}{x^{2}}&\frac{1}{x}&-\frac{2}{x}&0&0\\[1ex] \frac{2}{x}+\frac{1}{x^{4}}&\frac{1}{x^{3}}&\frac{1}{x^{2}}&\frac{1}{x}&\frac{4}{x^{3}}&\frac{4}{x^{2}}&\frac{4}{x}\end{pmatrix}.$$
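The block products above can be reproduced mechanically. A minimal sympy sketch (the helper `psi_block` is our own, not notation from [4]) builds each $\Psi_P(i,j)$ from the coefficient list $(a_1,\ldots,a_{N_j})$ of $P_{ij}$ and assembles $\Psi_P(x)$:

```python
import sympy as sp

x = sp.symbols("x", positive=True)

def psi_block(a):
    """Row (1/x, ..., 1/x^N) times the left-shifted matrix of coefficients
    a = (a_1, ..., a_N) of P_ij = a_0 + a_1/x + ... + a_N/x^N."""
    N = len(a)
    row = sp.Matrix([[1 / x**k for k in range(1, N + 1)]])
    # Entry (r, c) of the shifted coefficient matrix is a_{r+c+1} when defined, else 0
    M = sp.Matrix(N, N, lambda r, c: a[r + c] if r + c < N else 0)
    return row * M

# Blocks for P11 = 1/x^4 + 2/x^2, P12 = 1 - 2/x, P21 = 1/x^4 + 2/x, P22 = 4/x^3
row1 = psi_block([0, 2, 0, 1]).row_join(psi_block([-2, 0, 0]))
row2 = psi_block([2, 0, 0, 1]).row_join(psi_block([0, 0, 4]))
Psi = row1.col_join(row2)  # the 2x7 matrix Ψ_P(x)
print(Psi)
```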
In general, if $f=(f_{1},f_{2},\ldots,f_{p})$ is a sequence of functions on $(a,b)$ and $h=(h_{1},h_{2},\ldots,h_{p})$ with each $h_{i}$ an integer $\ge 0$ and each $f_{i}$ being $h_{i}-1$ times differentiable on $(a,b)$, we will write, using the notation in [4],

$$\Phi(f,a;h)=\bigl(\Phi(f_{1},a;h_{1}),\Phi(f_{2},a;h_{2}),\ldots,\Phi(f_{p},a;h_{p})\bigr)\in\mathbb{C}^{|h|},$$
$$\digamma(f,b;h)=\bigl(\digamma(f_{1},b;h_{1}),\digamma(f_{2},b;h_{2}),\ldots,\digamma(f_{p},b;h_{p})\bigr)\in\mathbb{C}^{|h|},$$
whenever the limits exist. If any $h_{i}$ is zero, the corresponding string is absent. If $h_{i}=0$ for all $i$ we define $\Phi(f,a;h)=\digamma(f,b;h)=0\in\mathbb{C}^{0}$. The following proposition was proved in [4].

Proposition 1.1 (Sumudu transform of higher derivatives).

Let $f$ be $n$ times differentiable on $(0,\infty)$ and let $f(t)=0$ for $t<0$. Suppose that $f^{(n)}\in L_{\mathrm{loc}}$. Then $f^{(k)}\in L_{\mathrm{loc}}$ for $0\le k\le n-1$, $\operatorname{dom}(Sf)\subset\operatorname{dom}(Sf^{(n)})$, and for any polynomial $P$ of degree $n$,
$$S\bigl(P(\dot{D})f\bigr)(u)=P(u)(Sf)(u)-M_{P}(u)\varphi(f;n)$$
for $u\in\operatorname{dom}(Sf)$. In particular,
$$\bigl(Sf^{(n)}\bigr)(u)=\frac{1}{u^{n}}(Sf)(u)-\left(\frac{1}{u^{n}},\frac{1}{u^{n-1}},\ldots,\frac{1}{u}\right)\varphi(f;n)$$
(with $\varphi(f;n)$ here written as a column vector). For $n=2$ we have
$$\bigl(Sf''\bigr)(u)=\frac{1}{u^{2}}(Sf)(u)-\frac{1}{u^{2}}f(0^{+})-\frac{1}{u}f'(0^{+}).$$
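As a quick sanity check of the $n=2$ formula, take $f(t)=\sin t$, for which $S[f](u)=u/(1+u^{2})$; a sympy sketch (the helper `sumudu` is ours):

```python
import sympy as sp

t, u = sp.symbols("t u", positive=True)

def sumudu(f):
    # S[f](u) = ∫_0^∞ f(u t) e^{-t} dt (illustrative helper)
    return sp.simplify(sp.integrate(f.subs(t, u * t) * sp.exp(-t), (t, 0, sp.oo)))

f = sp.sin(t)
lhs = sumudu(sp.diff(f, t, 2))  # S[f''](u)
rhs = sumudu(f) / u**2 - f.subs(t, 0) / u**2 - sp.diff(f, t).subs(t, 0) / u
print(sp.simplify(lhs - rhs))   # 0
```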

Now we extend the above proposition to systems of differential equations as follows.

Proposition 1.2.

Let $f=(f_{1},f_{2},\ldots,f_{n})$ be a sequence of functions and let $h=(h_{1},h_{2},\ldots,h_{n})$ be a sequence of integers $\ge 0$ such that the $m\times n$ matrix of polynomials $P$ satisfies $N(P)=h$. Furthermore, for each $j$ let $f_{j}$ be $h_{j}$ times differentiable on $(0,\infty)$ and let $f_{j}(t)=0$ for $t<0$. Suppose that $f_{j}^{(h_{j})}\in L_{\mathrm{loc}}$ for each $j$. Then $f_{j}^{(r)}\in L_{\mathrm{loc}}$ for $0\le r\le h_{j}-1$, $\operatorname{dom}(Sf)\subset\bigcap_{j=1}^{n}\operatorname{dom}\bigl(Sf_{j}^{(h_{j})}\bigr)$, and we have
$$S\bigl(P(\dot{D})f\bigr)(u)=P(u)(Sf)(u)-\Psi_P(u)\varphi(f;h).$$

In the following theorem we discuss the Sumudu transform of convolution of matrices.

Theorem 1.3.

Let $A(t)=[f_{ij}(t)]\in M_{n}^{I}$ and $B(t)=[g_{ij}(t)]\in M_{n}^{I}$ be Sumudu transformable, where $M_{n}^{I}$ denotes the set of $n\times n$ matrices whose entries are integrable. Then
$$S[A(t)\circledast B(t)](u)=u\,S[A(t)](u)\,S[B(t)](u).$$

Proof.

Write $S[f_{ij}(t)]=F_{ij}(u)$ and $S[g_{ij}(t)]=G_{ij}(u)$. We proceed by induction on $n$.

In the case $n=2$, we have the matrices
$$A(t)=[f(t)]_{2\times 2}=\begin{pmatrix}f_{11}(t)&f_{12}(t)\\ f_{21}(t)&f_{22}(t)\end{pmatrix},\qquad B(t)=[g(t)]_{2\times 2}=\begin{pmatrix}g_{11}(t)&g_{12}(t)\\ g_{21}(t)&g_{22}(t)\end{pmatrix}\in M_{2}^{I},$$
both Sumudu transformable, with Sumudu transforms
$$S\bigl[[f(t)]_{2\times 2}\bigr](u)=\begin{pmatrix}F_{11}(u)&F_{12}(u)\\ F_{21}(u)&F_{22}(u)\end{pmatrix},\qquad S\bigl[[g(t)]_{2\times 2}\bigr](u)=\begin{pmatrix}G_{11}(u)&G_{12}(u)\\ G_{21}(u)&G_{22}(u)\end{pmatrix}.$$
We have
$$u\,S\bigl[[f(t)]_{2\times 2}\bigr](u)\,S\bigl[[g(t)]_{2\times 2}\bigr](u)=\begin{pmatrix}\alpha&\beta\\ \zeta&\eta\end{pmatrix}=S\bigl[[f(t)]_{2\times 2}\circledast[g(t)]_{2\times 2}\bigr](u),$$
where
$$\begin{aligned}\alpha&=uF_{11}(u)G_{11}(u)+uF_{12}(u)G_{21}(u),&\beta&=uF_{11}(u)G_{12}(u)+uF_{12}(u)G_{22}(u),\\ \zeta&=uF_{21}(u)G_{11}(u)+uF_{22}(u)G_{21}(u),&\eta&=uF_{21}(u)G_{12}(u)+uF_{22}(u)G_{22}(u).\end{aligned}$$
Similarly, in the case $n=3$ it is also true that
$$u\,S\bigl[[f(t)]_{3\times 3}\bigr](u)\,S\bigl[[g(t)]_{3\times 3}\bigr](u)=S\bigl[[f(t)]_{3\times 3}\circledast[g(t)]_{3\times 3}\bigr](u).$$
Assuming the result holds for $n=k$, we prove it for $n=k+1$. Consider the matrices
$$A(t)=[f(t)]_{(k+1)\times(k+1)}=\begin{pmatrix}f_{11}(t)&\cdots&f_{1,k+1}(t)\\ \vdots&&\vdots\\ f_{k+1,1}(t)&\cdots&f_{k+1,k+1}(t)\end{pmatrix}\in M_{k+1}^{I},\qquad B(t)=[g(t)]_{(k+1)\times(k+1)}=\begin{pmatrix}g_{11}(t)&\cdots&g_{1,k+1}(t)\\ \vdots&&\vdots\\ g_{k+1,1}(t)&\cdots&g_{k+1,k+1}(t)\end{pmatrix}\in M_{k+1}^{I},$$
having Sumudu transforms
$$S[A(t)](u)=\begin{pmatrix}F_{11}(u)&\cdots&F_{1,k+1}(u)\\ \vdots&&\vdots\\ F_{k+1,1}(u)&\cdots&F_{k+1,k+1}(u)\end{pmatrix},\qquad S[B(t)](u)=\begin{pmatrix}G_{11}(u)&\cdots&G_{1,k+1}(u)\\ \vdots&&\vdots\\ G_{k+1,1}(u)&\cdots&G_{k+1,k+1}(u)\end{pmatrix}.$$
Then we have
$$u\,S\bigl[[f(t)]_{(k+1)\times(k+1)}\bigr](u)\,S\bigl[[g(t)]_{(k+1)\times(k+1)}\bigr](u)=\begin{pmatrix}\Phi&\cdots&\Gamma\\ \Theta&\cdots&\Lambda\\ \vdots&&\vdots\\ \Upsilon&\cdots&\Omega\end{pmatrix},$$
where
$$\begin{aligned}\Phi&=uF_{11}(u)G_{11}(u)+\cdots+uF_{1,k+1}(u)G_{k+1,1}(u),&\Gamma&=uF_{11}(u)G_{1,k+1}(u)+\cdots+uF_{1,k+1}(u)G_{k+1,k+1}(u),\\ \Theta&=uF_{21}(u)G_{11}(u)+\cdots+uF_{2,k+1}(u)G_{k+1,1}(u),&\Lambda&=uF_{21}(u)G_{1,k+1}(u)+\cdots+uF_{2,k+1}(u)G_{k+1,k+1}(u),\\ &\;\;\vdots\\ \Upsilon&=uF_{k+1,1}(u)G_{11}(u)+\cdots+uF_{k+1,k+1}(u)G_{k+1,1}(u),&\Omega&=uF_{k+1,1}(u)G_{1,k+1}(u)+\cdots+uF_{k+1,k+1}(u)G_{k+1,k+1}(u).\end{aligned}$$
Thus,
$$u\,S\bigl[[f(t)]_{(k+1)\times(k+1)}\bigr](u)\,S\bigl[[g(t)]_{(k+1)\times(k+1)}\bigr](u)=S\bigl[[f(t)]_{(k+1)\times(k+1)}\circledast[g(t)]_{(k+1)\times(k+1)}\bigr](u).$$
This completes the proof.
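Theorem 1.3 can be checked symbolically in a concrete $2\times 2$ case. In the sketch below (all helper names are ours), the matrix convolution is formed entrywise as $(A\circledast B)_{ij}=\sum_{k}f_{ik}*g_{kj}$ with the scalar convolution $(f*g)(t)=\int_{0}^{t}f(\tau)g(t-\tau)\,d\tau$:

```python
import sympy as sp

t, u, tau = sp.symbols("t u tau", positive=True)

def sumudu(f):
    # S[f](u) = ∫_0^∞ f(u t) e^{-t} dt (illustrative helper)
    return sp.simplify(sp.integrate(f.subs(t, u * t) * sp.exp(-t), (t, 0, sp.oo)))

def conv(f, g):
    # Scalar convolution (f * g)(t) = ∫_0^t f(τ) g(t - τ) dτ
    return sp.integrate(f.subs(t, tau) * g.subs(t, t - tau), (tau, 0, t))

A = sp.Matrix([[sp.Integer(1), t], [sp.exp(-t), sp.Integer(2)]])
B = sp.Matrix([[t, sp.Integer(1)], [sp.Integer(1), sp.exp(-t)]])

# Matrix convolution: (A ⊛ B)_{ij} = Σ_k (A_ik * B_kj)
AB = sp.Matrix(2, 2, lambda i, j: sum(conv(A[i, k], B[k, j]) for k in range(2)))

lhs = AB.applyfunc(sumudu)                           # S[A ⊛ B](u)
rhs = u * A.applyfunc(sumudu) * B.applyfunc(sumudu)  # u S[A](u) S[B](u)
print(sp.simplify(lhs - rhs))                        # zero matrix
```

The difference simplifies to the zero matrix, in agreement with $S[A\circledast B](u)=u\,S[A](u)\,S[B](u)$.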

The inverse $P^{-1}(u)$ of $P(u)$ exists provided that $u$ is not a root of the equation $\det[P(u)]=0$. Let $\tilde{P}$ denote the adjugate matrix of $P$; by elementary matrix theory we have

$$[P(u)]^{-1}=\frac{1}{\det[P(u)]}\tilde{P}(u).$$
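This identity is easy to exercise in sympy, which provides `Matrix.adjugate`; here we use, as an illustration, the coefficient matrix that appears later in Example 1.6:

```python
import sympy as sp

u = sp.symbols("u", positive=True)

# The 2x2 matrix P(u) from Example 1.6
P = sp.Matrix([[1 / u**2 - 2, 2 / u], [-2 / u, 1 / u**2 - 2]])

det_P = P.det()               # equals 1/u^4 + 4
P_inv = P.adjugate() / det_P  # [P(u)]^{-1} = adj(P) / det(P)

print(sp.simplify(det_P - (1 / u**4 + 4)))  # 0
print(sp.simplify(P_inv - P.inv()))         # zero matrix
```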

Proposition 1.4 (Solution of homogeneous equation of regular system).

Let $P$ be regular, let $y$ be $N(P)$ times differentiable on $(0,\infty)$ and zero on $(-\infty,0)$, and suppose that
$$P(D)y=0.$$
Then $y$ is given (except at $0$) by the formula
$$y=S^{-1}\bigl[[P(u)]^{-1}\Psi_P(u)\varphi(y;N(P))\bigr].$$

Proof.

By using (1.12) and assuming that $y$ is Sumudu transformable, we have
$$S\bigl(P(\dot{D})y\bigr)(u)=P(u)S(y)(u)-\Psi_P(u)\varphi(y;h).$$
Equation (1.26) becomes
$$P(u)S(y)(u)=\Psi_P(u)\varphi(y;h);$$
using (1.25),
$$S(y)(u)=[P(u)]^{-1}\Psi_P(u)\varphi(y;h).$$
Finally, by taking the inverse Sumudu transform of the above equation we have
$$y=S^{-1}\bigl[[P(u)]^{-1}\Psi_P(u)\varphi(y;h)\bigr],$$
where we assume that the inverse transform exists.
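For a scalar illustration of Proposition 1.4 (our own toy example): take $P(D)=D^{2}+1$, so that $P(D)y=0$ is $y''+y=0$ with $y(0^{+})=a$, $y'(0^{+})=b$. Then $P(u)=1/u^{2}+1$ and $\Psi_P(u)\varphi(y;2)=a/u^{2}+b/u$, so the formula gives $S(y)(u)=(a+bu)/(1+u^{2})$, which is the Sumudu transform of $y=a\cos t+b\sin t$:

```python
import sympy as sp

t, u, a, b = sp.symbols("t u a b", positive=True)

def sumudu(f):
    # S[f](u) = ∫_0^∞ f(u t) e^{-t} dt (illustrative helper)
    return sp.simplify(sp.integrate(f.subs(t, u * t) * sp.exp(-t), (t, 0, sp.oo)))

# P(u) = 1/u^2 + 1 and Ψ_P(u) φ(y; 2) = a/u^2 + b/u for this toy problem
Sy = (a / u**2 + b / u) / (1 / u**2 + 1)

y = a * sp.cos(t) + b * sp.sin(t)   # solution of y'' + y = 0, y(0)=a, y'(0)=b
print(sp.simplify(sumudu(y) - Sy))  # 0
```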

The next proposition was proved in [4] for the single differential equation, and we extend it to the regular system of differential equations.

Proposition 1.5.

Let $P$ be regular and let $b$ be the greatest of the real parts of the roots of the equation
$$\det\Bigl[P\Bigl(\frac{1}{x}\Bigr)\Bigr]=0$$
if $\deg[\det(P)]>0$ (otherwise put $b=-\infty$). Let $f=[f_{i}]$ be continuous on $(0,\infty)$, zero on $(-\infty,0)$, locally integrable, and Sumudu transformable, and suppose furthermore that
$$P(D)y=f.$$
Then we have
$$P(u)S(y)(u)=S(f)(u)+\Psi_P(u)\varphi(y;N(P))$$
for $1/u>b$ in $\operatorname{dom}(Sf)$.

Note that the most general system of equations can be written in the matrix form as

$$\begin{pmatrix}P_{11}(D)&P_{12}(D)&\cdots&P_{1n}(D)\\ P_{21}(D)&P_{22}(D)&\cdots&P_{2n}(D)\\ \vdots&\vdots&&\vdots\\ P_{n1}(D)&P_{n2}(D)&\cdots&P_{nn}(D)\end{pmatrix}\begin{pmatrix}y_{1}\\ y_{2}\\ \vdots\\ y_{n}\end{pmatrix}=\begin{pmatrix}f_{11}(t)&f_{12}(t)&\cdots&f_{1n}(t)\\ f_{21}(t)&f_{22}(t)&\cdots&f_{2n}(t)\\ \vdots&\vdots&&\vdots\\ f_{n1}(t)&f_{n2}(t)&\cdots&f_{nn}(t)\end{pmatrix}*\begin{pmatrix}g_{1}\\ g_{2}\\ \vdots\\ g_{n}\end{pmatrix},$$
which denotes the system

$$\sum_{j=1}^{n}P_{ij}(D)y_{j}=\sum_{j=1}^{n}f_{ij}*g_{j}\qquad(1\le i\le n).$$
Here the $P_{ij}$ are polynomials. If we let $P=[P_{ij}]$ and $f=[f_{ij}]$, and let $y=[y_{i}]$ and $g=[g_{i}]$ be vectors with $n$ components, then the above equation can be written in the form

P(D)y=f*g
under the initial conditions

$$y(0)=y_{0},\quad y'(0)=y_{1},\quad\ldots,\quad y^{(n-1)}(0)=y_{n-1}.$$
Since $y^{(k)}$ is locally integrable, and thus Sumudu transformable, for $0\le k\le n$, the Sumudu transform of (1.37) is given by

$$P(u)S(y)(u)=u\,S(f)(u)\,S(g)(u)+\Psi_P(u)\varphi(y;N(P)),$$
where P(u) is a matrix defined by

$$P(u)=\begin{pmatrix}P_{11}\bigl(\tfrac{1}{u}\bigr)&P_{12}\bigl(\tfrac{1}{u}\bigr)&\cdots&P_{1n}\bigl(\tfrac{1}{u}\bigr)\\ P_{21}\bigl(\tfrac{1}{u}\bigr)&P_{22}\bigl(\tfrac{1}{u}\bigr)&\cdots&P_{2n}\bigl(\tfrac{1}{u}\bigr)\\ \vdots&\vdots&&\vdots\\ P_{n1}\bigl(\tfrac{1}{u}\bigr)&P_{n2}\bigl(\tfrac{1}{u}\bigr)&\cdots&P_{nn}\bigl(\tfrac{1}{u}\bigr)\end{pmatrix}$$
and ΨP(u) by (1.41).

In order to find the solution of (1.37), we first multiply (1.39) by the inverse matrix $P^{-1}(u)$, obtaining

$$S(y)(u)=[P(u)]^{-1}\bigl[u\,S(f)(u)\,S(g)(u)+\Psi_P(u)\varphi(y;N(P))\bigr].$$
Now, taking the inverse Sumudu transform of both sides of (1.41),

$$y(t)=S^{-1}\bigl[[P(u)]^{-1}u\,S(f)(u)\,S(g)(u)\bigr]+S^{-1}\bigl[[P(u)]^{-1}\Psi_P(u)\varphi(y;N(P))\bigr]$$
provided that the inverse exists for each term in the right-hand side of (1.42).

To illustrate our method, we give the following example.

Example 1.6.

Solve, for $t>0$, the system of two equations
$$\begin{aligned}x''+2y'-2x&=-\sin(t),&x(0)&=1,\ x'(0)=2,\\ y''-2x'-2y&=\cos(t)-2,&y(0)&=0,\ y'(0)=1.\end{aligned}$$

The matrix is

$$P(u)=\begin{pmatrix}\dfrac{1}{u^{2}}-2&\dfrac{2}{u}\\[1ex] -\dfrac{2}{u}&\dfrac{1}{u^{2}}-2\end{pmatrix},$$
and we have $\det[P(u)]=\dfrac{1}{u^{4}}+4$, which has degree $4=|N(P)|$. Thus $P$ is regular. Now, applying the Sumudu transform to the above system, we have

$$P(u)\,S\begin{pmatrix}x\\ y\end{pmatrix}(u)=\begin{pmatrix}-\dfrac{u}{u^{2}+1}\\[1ex] -\dfrac{1+2u^{2}}{u^{2}+1}\end{pmatrix}+\Psi_P(u)\varphi(y;N(P)),$$
where $\Psi_P(u)\varphi(y;N(P))$ is given by

$$\Psi_P(u)\varphi(y;N(P))=\begin{pmatrix}\dfrac{1}{u^{2}}&\dfrac{1}{u}&\dfrac{2}{u}&0\\[1ex] -\dfrac{2}{u}&0&\dfrac{1}{u^{2}}&\dfrac{1}{u}\end{pmatrix}\begin{pmatrix}1\\ 2\\ 0\\ 1\end{pmatrix}=\begin{pmatrix}\dfrac{1+2u}{u^{2}}\\[1ex] -\dfrac{1}{u}\end{pmatrix}.$$
On using (1.25) we obtain
$$P(u)^{-1}=\begin{pmatrix}\dfrac{u^{2}(1-2u^{2})}{1+4u^{4}}&-\dfrac{2u^{3}}{1+4u^{4}}\\[1ex] \dfrac{2u^{3}}{1+4u^{4}}&\dfrac{u^{2}(1-2u^{2})}{1+4u^{4}}\end{pmatrix},$$
then equation (1.45) becomes

$$S\begin{pmatrix}x\\ y\end{pmatrix}(u)=P^{-1}(u)\begin{pmatrix}-\dfrac{u}{u^{2}+1}\\[1ex] -\dfrac{1+2u^{2}}{u^{2}+1}\end{pmatrix}+P^{-1}(u)\bigl[\Psi_P(u)\varphi(y;N(P))\bigr].$$
Finally, by taking the inverse Sumudu transform of equation (1.48), we obtain the solution of the system as follows:

$$x(t)=\sin(t)+e^{t}\cos(t),\qquad y(t)=-\cos(t)+e^{t}\sin(t)+1.$$
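The solution can be confirmed by direct substitution into the system and the initial conditions, for example with sympy:

```python
import sympy as sp

t = sp.symbols("t")

x = sp.sin(t) + sp.exp(t) * sp.cos(t)
y = -sp.cos(t) + sp.exp(t) * sp.sin(t) + 1

# Residuals of x'' + 2y' - 2x = -sin(t) and y'' - 2x' - 2y = cos(t) - 2
r1 = sp.diff(x, t, 2) + 2 * sp.diff(y, t) - 2 * x + sp.sin(t)
r2 = sp.diff(y, t, 2) - 2 * sp.diff(x, t) - 2 * y - sp.cos(t) + 2

print(sp.simplify(r1), sp.simplify(r2))        # 0 0
print(x.subs(t, 0), sp.diff(x, t).subs(t, 0))  # 1 2
print(y.subs(t, 0), sp.diff(y, t).subs(t, 0))  # 0 1
```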
Thus, based on the above discussion, we note that the Sumudu transform can be applied to systems of differential equations and can therefore be used in many engineering problems.

Acknowledgments

The authors gratefully acknowledge that this research was partially supported by the University Putra Malaysia under the Research University Grant Scheme 05-01-09-0720RU and Fundamental Research Grant Scheme 01-11-09-723FR. The authors also thank the referee(s) for very constructive comments and suggestions.

References

[1] G. K. Watugala, "The Sumudu transform for functions of two variables."
[2] G. K. Watugala, "Sumudu transform: a new integral transform to solve differential equations and control engineering problems."
[3] S. Weerakoon, "Application of Sumudu transform to partial differential equations."
[4] M. A. Asiru, "Sumudu transform and the solution of integral equations of convolution type."
[5] A. Kılıçman and H. E. Gadain, "An application of double Laplace transform and double Sumudu transform."
[6] F. B. M. Belgacem, "Boundary value problem with indefinite weight and applications."
[7] A. Kadem, "Solving the one-dimensional neutron transport equation using Chebyshev polynomials and the Sumudu transform."
[8] H. Eltayeb and A. Kılıçman, "On double Sumudu transform and double Laplace transform."
[9] A. Kılıçman and H. Eltayeb, "A note on integral transforms and partial differential equations."
[10] H. Eltayeb and A. Kılıçman, "On some applications of a new integral transform."