doi:10.1155/2007/58373

Research Article

The Convolution on Time Scales

The main theme in this paper is an initial value problem containing a dynamic version of the transport equation. Via this problem, the delay (or shift) of a function defined on a time scale is introduced, and the delay in turn is used to introduce the convolution of two functions defined on the time scale. In this paper, we give some elementary properties of the delay and of the convolution and we also prove the convolution theorem. Our investigation contains a study of the initial value problem under consideration as well as some results about power series on time scales. As an extensive example, we consider the q-difference equations case.


Introduction
As is well known, methods based on integral transformations are very useful in mathematical analysis. They are successfully applied to solve differential and integral equations, to study special functions, and to compute integrals. The main advantage of the method of integral transformations is the possibility of preparing tables of direct and inverse transforms of various functions frequently encountered in applications (the role of those tables is similar to that of derivative and integral tables in calculus).
One of the most widely used integral transformations is the Laplace transform. If f is a given function and if the integral

F(z) = ∫_0^∞ e^{−zt} f(t) dt (1.1)

exists, then the function F of a complex variable is called the Laplace transform of the original function f. The operation which yields F from a given f is also called the Laplace transform (or Laplace transformation). The original function f is called the inverse transform or inverse of F.
A discrete analogue of the Laplace transform is the so-called Z-transform, which can be used to solve linear difference equations as well as certain summation equations that are discrete analogues of integral equations. If we have a sequence {y_k}_{k=0}^∞, then its Z-transform is the function Y of a complex variable defined by

Y(z) = Σ_{k=0}^∞ y_k / z^k. (1.2)

The Laplace transform on time scales (note that time scales analysis unifies and extends continuous and discrete analysis, see [1, 2]) was introduced by Hilger in [3], but in a form that tries to unify the (continuous) Laplace transform and the (discrete) Z-transform. For arbitrary time scales, the Laplace transform is introduced and investigated by Bohner and Peterson in [4] (see also [1, Section 3.10]).
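As a quick numerical illustration (not from the paper), the Z-transform of the geometric sequence y_k = a^k has the closed form Y(z) = z/(z − a) for |z| > |a|; a truncated sum of the defining series reproduces it. The helper name `z_transform` is ours:

```python
# Illustration: truncated Z-transform Y(z) = sum_k y_k z^{-k} of a finite
# prefix of the geometric sequence y_k = a^k, compared with z / (z - a).
def z_transform(y, z):
    """Truncated Z-transform of the finite sequence y at the point z."""
    return sum(yk * z**-k for k, yk in enumerate(y))

a, z = 0.5, 2.0
N = 200  # truncation length; the tail is negligible since |a / z| < 1
approx = z_transform([a**k for k in range(N)], z)
exact = z / (z - a)  # closed form of the geometric series
print(abs(approx - exact))  # tiny truncation error
```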
An important general property of the classical Laplace transform is the so-called convolution theorem. It often happens that we are given two transforms F(z) and G(z) whose inverses f and g we know, and we would like to calculate the inverse h of the product H(z) = F(z)G(z) from those known inverses f and g. The inverse h is written f ∗ g and is called the convolution of f and g. The classical convolution theorem states that H is the Laplace transform of the convolution h of f and g defined by

h(t) = (f ∗ g)(t) = ∫_0^t f(t − s) g(s) ds. (1.3)

The main difficulty which arises when we try to introduce the convolution for functions defined on an arbitrary time scale T is that, if t and s are in the time scale T, then it does not necessarily follow that the difference t − s is also an element of T, so that f(t − s), the shift (or delay) of the function f, is not necessarily defined if f is defined only on the time scale T. In [4], this difficulty is overcome in the case of pairs of functions in which one of the functions is an "elementary function." In the present paper, we offer a way to define the "shift" and hence the convolution of two "arbitrary" functions defined on the time scale T. The idea of doing so is the observation that the usual shift f(t − s) = u(t, s) of the function f defined on the real line R is the unique solution of the problem (for the first-order partial differential equation)

u_t(t, s) = −u_s(t, s), u(t, 0) = f(t). (1.4)

This suggests to introduce the "shift" f̂(t, s) of the given function f defined on a time scale T as the solution of the problem

f̂^{Δ_t}(t, σ(s)) = −f̂^{Δ_s}(t, s), f̂(t, t₀) = f(t),

where t₀ ∈ T is fixed and where Δ means the delta differentiation and σ stands for the forward jump operator in T. We will call this problem the shifting problem. It can be considered as an initial value problem (with respect to s) with the initial function f at s = t₀. The solution f̂ of this problem gives a shifting of the function f along the time scale T.
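A minimal numeric sketch of the classical convolution theorem, using a choice of functions that is ours: for f(t) = t, the convolution (f ∗ f)(t) = ∫_0^t (t − s) s ds equals t³/6, consistent with L{t}(z) = 1/z² and L{t³/6}(z) = 1/z⁴ = (1/z²)². The trapezoidal helper `conv` is an assumption for illustration:

```python
# Sketch: approximate the classical convolution integral by the trapezoid rule
# and compare with the closed form t^3 / 6 for f(t) = g(t) = t.
def conv(f, g, t, n=10_000):
    """Trapezoidal approximation of the integral of f(t - s) g(s), 0 <= s <= t."""
    h = t / n
    vals = [f(t - i * h) * g(i * h) for i in range(n + 1)]
    return h * (sum(vals) - 0.5 * (vals[0] + vals[-1]))

t = 2.0
approx = conv(lambda x: x, lambda x: x, t)
print(approx, t**3 / 6)  # both close to 1.3333...
```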
Then we define the convolution of two functions f, g : T → R by

(f ∗ g)(t) = ∫_{t₀}^t f̂(t, σ(s)) g(s) Δs, t ∈ T.

The reasonableness of such a definition is justified by the fact, as we prove in this paper, that the convolution theorem holds for convolutions on time scales defined in this way.
The paper is organized as follows. In Section 2, we introduce shifts and convolutions, while the convolution theorem is proved in Section 3. In Section 4, the theory of power series on time scales is developed, and the shifting problem is studied in Section 5. Finally, in Section 6, we investigate the presented concepts in the special case of quantum calculus.

Shifts and convolutions
Let T be a time scale such that sup T = ∞ and fix t₀ ∈ T.

Definition 2.1. For a given f : [t₀,∞)_T → C, the solution of the shifting problem

u^{Δ_t}(t, σ(s)) = −u^{Δ_s}(t, s), t, s ∈ T, t ≥ s ≥ t₀, u(t, t₀) = f(t), t ∈ T, t ≥ t₀, (2.1)

is denoted by f̂ and is called the shift (or delay) of f.
Example 2.2. In the case T = R, the problem (2.1) takes the form

u_t(t, s) = −u_s(t, s), u(t, t₀) = f(t),

and its unique solution is u(t, s) = f(t − s + t₀). In the case T = Z, (2.1) becomes

u(t + 1, s + 1) = u(t, s), u(t, t₀) = f(t),

and its unique solution is again u(t, s) = f(t − s + t₀). For the solution of the problem (2.1) in the case T = q^{N₀}, see Section 6.
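The T = Z case can be checked computationally: with σ(s) = s + 1, the shifting problem reduces to the recursion u(t, s) = u(t − 1, s − 1) with initial row u(t, t₀) = f(t), and marching in s recovers the classical delay. This is a sketch assuming t₀ = 0; the helper name `shift_on_Z` is ours:

```python
# Sketch for T = Z: solve the shifting problem (2.1) by marching row by row
# in the second variable, then compare with the closed form f(t - s + t0).
def shift_on_Z(f, t, s, t0=0):
    """Solve the T = Z shifting problem numerically; requires t >= s >= t0."""
    u = {(tau, t0): f(tau) for tau in range(t0, t + 1)}  # initial condition
    for sig in range(t0 + 1, s + 1):
        for tau in range(sig, t + 1):
            u[(tau, sig)] = u[(tau - 1, sig - 1)]  # the T = Z dynamic equation
    return u[(t, s)]

f = lambda n: n * n + 1
print(shift_on_Z(f, 7, 3))  # 17
print(f(7 - 3 + 0))         # 17, i.e. f(t - s + t0)
```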
In this and the next section, we will assume that the problem (2.1) has a unique solution f̂ for a given initial function f and that the functions f, g and the complex number z are such that the operations carried out in this and the next section are valid. Solvability of the problem (2.1) will be considered in Section 5.
Definition 2.5. For given functions f, g : T → R, their convolution f ∗ g is defined by

(f ∗ g)(t) = ∫_{t₀}^t f̂(t, σ(s)) g(s) Δs, t ∈ T, (2.6)

where f̂ is the shift of f introduced in Definition 2.1.
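On T = Z with t₀ = 0 the Δ-integral is a sum, σ(s) = s + 1, and the shift is f̂(t, s) = f(t − s), so Definition 2.5 specializes to (f ∗ g)(t) = Σ_{s=0}^{t−1} f(t − 1 − s) g(s): the classical Cauchy convolution of the two sequences, evaluated at index t − 1. A sketch (helper names ours):

```python
# Sketch for T = Z, t0 = 0: the time-scale convolution versus the classical
# Cauchy convolution of sequences; they agree up to a unit index shift
# coming from the sigma(s) = s + 1 inside the integrand.
def ts_convolution(f, g, t):
    """Time-scale convolution (2.6) on T = Z with t0 = 0."""
    return sum(f(t - 1 - s) * g(s) for s in range(t))

def cauchy(f, g, n):
    """Classical convolution of sequences: sum_{j=0}^{n} f(n - j) g(j)."""
    return sum(f(n - j) * g(j) for j in range(n + 1))

f = lambda n: n + 1
g = lambda n: 2 * n
print([ts_convolution(f, g, t) for t in range(1, 5)])
print([cauchy(f, g, t - 1) for t in range(1, 5)])  # same values
```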
Theorem 2.6. The shift of a convolution is given by the formula

\widehat{f ∗ g}(t, s) = ∫_s^t f̂(t, σ(u)) ĝ(u, s) Δu. (2.7)

Proof. We fix t₀ ∈ T. Let us put F(t, s) equal to the right-hand side of (2.7). First, we have F(t, t₀) = ∫_{t₀}^t f̂(t, σ(u)) g(u) Δu = (f ∗ g)(t), because ĝ(u, t₀) = g(u). Next, we calculate (2.9). The first integral after the last equal sign above can be evaluated using integration by parts:

∫_s^t f̂(t, σ(u)) ĝ^{Δ_s}(u, s) Δu. (2.10)

Putting these calculations together, we arrive at the desired identity, which completes the proof of (2.7).
Theorem 2.8. If f is delta differentiable, then

(f ∗ g)^Δ = f^Δ ∗ g + f(t₀) g,

and if g is delta differentiable, then

(f ∗ g)^Δ = f ∗ g^Δ + f g(t₀). (2.15)

Proof. By the rule for differentiating a Δ-integral with variable upper limit,

(f ∗ g)^Δ(t) = f̂(σ(t), σ(t)) g(t) + ∫_{t₀}^t f̂^{Δ_t}(t, σ(s)) g(s) Δs.

From here, since f̂(σ(t), σ(t)) = f(t₀) by Lemma 2.4, and since f̂^{Δ_t} = \widehat{f^Δ}, the first equal sign in the statement follows. For the second equal sign, we use the definition of f̂ and integration by parts; see (2.18). This completes the proof.
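The two product-rule formulas (f ∗ g)^Δ = f^Δ ∗ g + f(t₀)g and (f ∗ g)^Δ = f ∗ g^Δ + f g(t₀) can be verified exactly on T = Z, where the Δ-derivative is the forward difference. A hedged check with sample sequences of our choosing (t₀ = 0):

```python
# Check of Theorem 2.8 on T = Z (t0 = 0) for sample integer-valued f and g.
def conv(f, g, t):
    """(f * g)(t) on T = Z with t0 = 0."""
    return sum(f(t - 1 - s) * g(s) for s in range(t))

def delta(h):
    """Forward difference, the Delta-derivative on Z."""
    return lambda t: h(t + 1) - h(t)

f = lambda n: n * n + 1
g = lambda n: 3 * n + 2
h = lambda t: conv(f, g, t)
for t in range(6):
    lhs = delta(h)(t)
    rhs1 = conv(delta(f), g, t) + f(0) * g(t)   # (f*g)^D = f^D * g + f(t0) g
    rhs2 = conv(f, delta(g), t) + f(t) * g(0)   # (f*g)^D = f * g^D + f g(t0)
    assert lhs == rhs1 == rhs2
print("both identities hold on T = Z")
```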

Theorem 2.9. One has

(∫_{t₀}^t f̂(t, σ(s)) Δs)^Δ = f(t). (2.19)
Proof. This follows from Theorem 2.8 by using g ≡ 1.
Theorem 2.10. If f and g are infinitely often Δ-differentiable, then for all k ∈ N₀, (2.20)

Proof. We only prove the first equation, as the proof of the second equation is similar and the third equation follows from the first as well as from the second. The statement is obviously true for k = 0. Assuming it is true for some k ∈ N₀, we use the first equation in Theorem 2.8 to conclude that the statement is true for k + 1.
We conclude this section with the following extension of Lemma 2.4.
Theorem 2.11. If f̂ has partial Δ-derivatives of all orders, then for all k ∈ N₀,

f̂^{Δ_t^k}(σ(t), σ(t)) = f^{Δ^k}(t₀),

where f̂^{Δ_t} indicates the Δ-derivative of f̂ with respect to its first variable.
Proof. Let k ∈ N₀. Our assumptions and the initial condition in (2.1) imply the assertion, where we have used [5, Theorem 7.2], the dynamic equation in (2.1), and the equality of mixed partial derivatives (under our assumptions) from [5, Theorem 6.1].

The convolution theorem
Note that below we assume that z ∈ ℛ (the set of regressive functions), that is, 1 + μ(t)z ≠ 0 for all t ∈ T (where μ is the graininess of T). Then ⊖z ∈ ℛ, and therefore e_{⊖z}(·, t₀) is well defined on T.
Definition 3.1. Assume that x : T → R is a locally Δ-integrable function, that is, Δ-integrable over each compact interval of T. Then the Laplace transform of x is defined by

ℒ{x}(z) = ∫_{t₀}^∞ x(t) e_{⊖z}(σ(t), t₀) Δt for z ∈ Ᏸ{x},

where Ᏸ{x} consists of all complex numbers z ∈ ℛ for which the improper integral exists.
Theorem 3.2 (convolution theorem). Suppose f, g : T → R are locally Δ-integrable functions on T and their convolution f ∗ g is defined by (2.6). Then

ℒ{f ∗ g}(z) = ℒ{f}(z) · ℒ{g}(z).

Proof. We have

ℒ{f ∗ g}(z) = ∫_{t₀}^∞ g(s) e_{⊖z}(σ(s), t₀) Ψ(s) Δs,

where Ψ is defined in (3.4). According to the following lemma, Ψ(s) is independent of s. Then we can evaluate Ψ(s) ≡ Ψ(t₀) = ℒ{f}(z), and we can conclude that ℒ{f ∗ g} = ℒ{g} · ℒ{f}.
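The convolution theorem can be verified exactly on T = Z with t₀ = 0: there e_{⊖z}(σ(t), 0) = (1 + z)^{−(t+1)}, so ℒ{x}(z) = Σ_t x(t)(1 + z)^{−(t+1)} (the Z-transform up to the change of variable w = 1 + z). For finitely supported sequences all sums are finite, and rational arithmetic keeps the check exact. A sketch with sample data of our choosing:

```python
from fractions import Fraction

# Exact check of the convolution theorem on T = Z (t0 = 0) for finitely
# supported sequences: L{f * g}(z) = L{f}(z) L{g}(z).
def laplace_Z(x, z):
    """Time-scale Laplace transform on T = Z: sum_t x[t] (1+z)^{-(t+1)}."""
    return sum(x[t] * (1 + z) ** -(t + 1) for t in range(len(x)))

def ts_conv(f, g, t):
    """(f * g)(t) on T = Z with t0 = 0, for sequences given as lists."""
    return sum(f[t - 1 - s] * g[s]
               for s in range(t) if 0 <= t - 1 - s < len(f) and s < len(g))

f = [1, 4, 0, 2]  # finitely supported sample sequences
g = [3, 0, 5]
h = [ts_conv(f, g, t) for t in range(len(f) + len(g))]
z = Fraction(1, 3)  # any point with 1 + z != 0
assert laplace_Z(h, z) == laplace_Z(f, z) * laplace_Z(g, z)
print("L{f * g} = L{f} L{g} on T = Z")
```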
We may use Lemma 3.3 once again to prove the following important theorem. In it, we make use of the function u_a defined by u_a(b) = 0 if b < a and u_a(b) = 1 if b ≥ a. (3.7)

Proof. We have the corresponding representation in terms of Ψ, where Ψ is defined in (3.4) in Theorem 3.2. In Lemma 3.3, it was shown that Ψ is in fact a constant, namely, Ψ(t) ≡ Ψ(t₀) = ℒ{f}(z), and this concludes the proof.

Power series on time scales
Let T be a time scale. Following Agarwal and Bohner [6] (see also [1, Section 1.6]), let us introduce the generalized monomials h_k : T × T → R, k ∈ N₀, defined recursively by

h₀(t, s) ≡ 1, h_{k+1}(t, s) = ∫_s^t h_k(τ, s) Δτ for k ∈ N₀. (4.1)

Then

h_k^{Δ_t}(t, s) = h_{k−1}(t, s) for k ∈ N. (4.2)

The definition (4.1) obviously implies h₁(t, s) = t − s, and finding h_k for k > 1 is not easy in general. For the case T = R, we have

h_k(t, s) = (t − s)^k / k!,

while for the case T = Z we have

h_k(t, s) = (t − s)^{(k)} / k!,

where x^{(k)} = x(x − 1)···(x − k + 1) denotes the falling function. For the functions h_k in the case T = q^{N₀} for some q > 1 (the quantum calculus case), see Section 6.
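For T = Z the recursion (4.1) can be run directly (the Δ-integral is a sum), and the result agrees with the closed form h_k(t, s) = (t − s)^{(k)}/k! = C(t − s, k). A sketch (helper name ours):

```python
from functools import lru_cache
from math import comb

# Sketch for T = Z: the defining recursion h_0 = 1,
# h_{k+1}(t, s) = sum_{tau=s}^{t-1} h_k(tau, s) gives binomial coefficients.
@lru_cache(maxsize=None)
def h(k, t, s):
    """Generalized monomial h_k(t, s) on T = Z via the recursion (4.1)."""
    if k == 0:
        return 1
    return sum(h(k - 1, tau, s) for tau in range(s, t))  # Delta-integral on Z

for k in range(6):
    assert all(h(k, t, 2) == comb(t - 2, k) for t in range(2, 10))
print("h_k(t, s) = C(t - s, k) on T = Z")
```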
Returning to the arbitrary time scale, we easily find the required identity; hence the statement follows.
Now let n ∈ N and suppose f is n times Δ-differentiable on [α,∞)_T. With the functions h_k defined by (4.1), we then have the formula (Taylor's formula)

f(t) = Σ_{k=0}^{n−1} h_k(t, α) f^{Δ^k}(α) + R_n(t, α). (4.11)

In order to get an estimation for the remainder term R_n(t, α), we use (4.12); this yields the desired bound and completes the proof.
If a function f : [α,∞)_T → C is infinitely Δ-differentiable at α (i.e., it has Δ-derivatives at α of all orders), then we can formally write for it the series

Σ_{k=0}^∞ h_k(t, α) f^{Δ^k}(α), (4.18)

called Taylor's series for the function f at the point α. For given values of α and t, it can be convergent or divergent. The case when Taylor's series for the function f converges to that function is of particular importance; in this case, the sum of the series is equal to f(t). Taylor's series (4.18) converges to f(t) if and only if the remainder of Taylor's formula tends to zero as n → ∞, that is, lim_{n→∞} R_n(t, α) = 0. It may turn out that with a given function f we can formally associate its Taylor series at a point α of the form (4.18) (in other words, the Δ-derivatives f^{Δ^k}(α) make sense for this function for every k ∈ N₀) and that the series (4.18) is convergent for some values of t but its sum is not equal to f(t). Let us consider Taylor series expansions for some elementary functions. First we prove the following lemma.

Lemma 4.4. For all z ∈ C and t ∈ T with t ≥ α, the initial value problem

y^Δ = z y, y(α) = 1

has a unique solution y that is represented in the form

y(t) = Σ_{k=0}^∞ z^k h_k(t, α).

Proof. The problem is equivalent to the integral equation (4.23). We solve (4.23) by the method of successive approximations, setting

y₀(t) = 1, y_{k+1}(t) = z ∫_α^t y_k(τ) Δτ for k ∈ N₀. (4.24)

If the series Σ_{k=0}^∞ y_k(t) converges uniformly with respect to t ∈ [α, R]_T, where R ∈ T with R > α, then its sum will obviously be a continuous solution of (4.23). It follows from (4.24) that y_k(t) = z^k h_k(t, α). Therefore, using Theorem 4.2, for all k ∈ N₀ and t ∈ T with t ≥ α, we have

|y_k(t)| ≤ |z|^k (t − α)^k / k!.

It follows that (4.23) has a continuous solution y satisfying y(t) = Σ_{k=0}^∞ z^k h_k(t, α) for all t ≥ α, and for this solution (4.22) holds.
To prove uniqueness of the solution, assume that (4.23) has two continuous solutions y and x for t ≥ α. Setting u = y − x, we get that

u(t) = z ∫_α^t u(τ) Δτ. (4.27)

Next, setting K = max_{t ∈ [α,R]_T} |u(t)|, we have from (4.27) that |u(t)| ≤ |z| K (t − α) = |z| K h₁(t, α). Using this in the integral in (4.27), we get |u(t)| ≤ |z|² K h₂(t, α).

Repeating this procedure, we obtain |u(t)| ≤ |z|^k K h_k(t, α) for all k ∈ N₀. Hence by Theorem 4.2,

|u(t)| ≤ K (|z| (R − α))^k / k!.

Passing here to the limit as k → ∞, we get u(t) = 0 for all t ∈ [α,R]_T. Since R was an arbitrary point in T with R > α, we have that u(t) = 0 for all t ∈ T with t ≥ α.
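Lemma 4.4 can be illustrated exactly on T = Z, where h_k(t, α) = C(t − α, k) and the exponential function is e_z(t, α) = (1 + z)^{t−α}: the series of the lemma is then the finite binomial expansion Σ_k z^k C(t − α, k) = (1 + z)^{t−α}. A sketch with sample integer parameters of our choosing:

```python
from math import comb

# Sketch for T = Z: e_z(t, alpha) = (1 + z)^(t - alpha) solves
# y^Delta = z y, y(alpha) = 1, and equals the series of Lemma 4.4.
alpha, z = 2, 3
y = lambda t: (1 + z) ** (t - alpha)  # e_z(t, alpha) on T = Z
assert y(alpha) == 1                  # initial condition
for t in range(alpha, alpha + 8):
    assert y(t + 1) - y(t) == z * y(t)  # the dynamic equation y^Delta = z y
    series = sum(z**k * comb(t - alpha, k) for k in range(t - alpha + 1))
    assert series == y(t)               # binomial theorem
print("Lemma 4.4 series equals e_z(t, alpha) on T = Z")
```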
Note that from (4.20) we have (4.36).

Definition 4.5. Assume that sup T = ∞ and t₀ ∈ T is fixed. A series of the form

Σ_{k=0}^∞ a_k h_k(t, t₀), (4.37)

where the a_k are constants for k ∈ N₀ (which may be complex in the general case) and t ∈ T, is called a power series on the time scale T, the numbers a_k being referred to as its coefficients.
We denote by Ᏺ the set of all functions f : [t₀,∞)_T → C of the form

f(t) = Σ_{k=0}^∞ a_k h_k(t, t₀), (4.38)

where the coefficients a_k satisfy

|a_k| ≤ M / R^k for all k ∈ N₀, (4.39)

with some constants M > 0 and R > 0 depending only on the series (4.38).
Note that under the condition (4.39), the series (4.38) converges uniformly on any compact interval [t₀, L]_T of T, where L ∈ T with L > t₀. Indeed, using Theorem 4.2 and (4.39), we have

|a_k h_k(t, t₀)| ≤ (M / R^k) · (t − t₀)^k / k!

for all t ∈ T with t ≥ t₀ and k ∈ N₀. Therefore, the sum f(t) of the series (4.38) satisfies

|f(t)| ≤ M e^{(t − t₀)/R}.

It is easy to see that Ᏺ is a linear space: if f, g ∈ Ᏺ, then αf + βg ∈ Ᏺ for any constants α and β. Note also that any given function f ∈ Ᏺ can be represented in the form of a power series (4.38) uniquely. Indeed, Δ-differentiating the series (4.38) n times term by term, we get, using (4.2),

f^{Δ^n}(t) = Σ_{k=n}^∞ a_k h_{k−n}(t, t₀).

This series is convergent for t ∈ [t₀,∞)_T by (4.13) and (4.39), so that the term-by-term differentiation is valid. Setting t = t₀, we find that

a_n = f^{Δ^n}(t₀). (4.43)

Thus the coefficients of the power series (4.38) are defined uniquely by the formula (4.43).
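On T = Z with t₀ = 0, formula (4.43) says the coefficients are the iterated forward differences a_k = Δ^k f(0), and the expansion (4.38) becomes Newton's forward-difference formula f(t) = Σ_{k=0}^t Δ^k f(0) C(t, k), which holds exactly for t ∈ N₀. A hedged illustration (helper name ours):

```python
from math import comb

# Sketch for T = Z, t0 = 0: recover a sequence from its power-series
# coefficients a_k = f^{Delta^k}(t0), i.e. Newton's forward differences.
def delta_power(f, k, t=0):
    """k-th forward difference Delta^k f at t, Delta f(t) = f(t+1) - f(t)."""
    return sum((-1) ** (k - j) * comb(k, j) * f(t + j) for j in range(k + 1))

f = lambda n: n**3 - 2 * n + 5
for t in range(8):
    newton = sum(delta_power(f, k) * comb(t, k) for k in range(t + 1))
    assert newton == f(t)
print("the coefficients a_k = f^{Delta^k}(t0) reconstruct f on T = Z")
```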

Investigation of the shifting problem
For an arbitrary time scale T, we can prove the following theorem.
Theorem 5.1. Let f ∈ Ᏺ, so that f can be written in the form (4.38) with coefficients satisfying (4.39) for some constants M > 0 and R > 0. Then the problem (2.1) has a solution u of the form

u(t, s) = Σ_{k=0}^∞ a_k h_k(t, s), (5.1)

where the a_k are the same coefficients as in the expansion (4.38) of f. This solution is unique in the class of functions u for which the coefficients

A_k(s) = u^{Δ_t^k}(t, s)|_{t=s} (5.2)

are delta differentiable functions of s ∈ T and satisfy

|A_k(s)| ≤ A B^k for all s ∈ T, s ≥ t₀, (5.3)

with some constants A > 0 and B > 0.
Proof. Since (see [6])

h_k^{Δ_s}(t, s) = −h_{k−1}(t, σ(s)) for k ∈ N, (5.4)

we have, from (5.1),

u^{Δ_t}(t, σ(s)) = Σ_{k=1}^∞ a_k h_{k−1}(t, σ(s)), u^{Δ_s}(t, s) = −Σ_{k=1}^∞ a_k h_{k−1}(t, σ(s)). (5.5)

Note that the differentiation of the series term by term is valid because the series in (5.5) are convergent for t ≥ s. From (5.5) it follows that u defined by (5.1) satisfies the dynamic equation in (2.1). We also have

u(t, t₀) = Σ_{k=0}^∞ a_k h_k(t, t₀) = f(t),

so that the initial condition in (2.1) is satisfied as well.
To prove the uniqueness of the solution, assume that u is a solution of (2.1) having the properties (5.2) and (5.3). Then, for each fixed s, we can write for u the Taylor series expansion with respect to the variable t at the point t = s:

u(t, s) = Σ_{k=0}^∞ A_k(s) h_k(t, s), (5.7)

where the A_k(s) are the Taylor coefficients defined by (5.2). Substituting (5.7) into the dynamic equation in (2.1), we get (5.8), where we did not include the terms in the series with k = 0 since h₀(t, s) = 1 and A₀(s) = u(s, s) = f(t₀) both have zero derivatives as constant functions. Note also that the term-by-term differentiation of the series is valid due to the conditions (5.3). Next we can use (5.4) to get from (5.8) that the right-hand side equals

Σ_{k=1}^∞ A_k(σ(s)) h_{k−1}(t, σ(s)). (5.9)

Hence A_k^Δ(s) = 0 for all k ∈ N₀, and therefore

A_k(s) ≡ A_k(t₀) for all k ∈ N₀. (5.11)

This means that A_k(s) does not depend on s for any k ∈ N₀. On the other hand, setting s = t₀ in (5.7) and using the initial condition in (2.1) and (4.38), we find that

A_k(t₀) = a_k for all k ∈ N₀. (5.12)

Consequently, the solution u coincides with (5.1).
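On T = Z the series (5.1) of Theorem 5.1 can be evaluated directly: with a_k = Δ^k f(t₀) and h_k(t, s) = C(t − s, k), the sum u(t, s) = Σ_k a_k C(t − s, k) should reproduce the known shift f(t − s + t₀) from Example 2.2. A sketch with a polynomial f of our choosing (so the series is finite), t₀ = 0:

```python
from math import comb

# Sketch for T = Z, t0 = 0: build the Theorem 5.1 solution from the
# power-series coefficients of f and compare with the closed-form shift.
def delta_k(f, k):
    """k-th forward difference of f at 0, i.e. the coefficient a_k."""
    return sum((-1) ** (k - j) * comb(k, j) * f(j) for j in range(k + 1))

f = lambda n: 2 * n**2 + n + 3
K = 3                                   # f has degree 2 < K, so a_k = 0 beyond
a = [delta_k(f, k) for k in range(K)]   # a_k = f^{Delta^k}(t0)
for t in range(8):
    for s in range(t + 1):
        u = sum(a[k] * comb(t - s, k) for k in range(K))  # u(t, s), (5.1)
        assert u == f(t - s)            # f(t - s + t0), the T = Z shift
print("u(t, s) = f(t - s + t0) on T = Z")
```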
Remark 5.2. We can take f given in (4.38), in particular, to be a finite combination of h_k(t, t₀) for some values k ∈ N₀. Then the solution u given by (5.1) will be represented by a finite sum.
For convenience, we denote the solution u of the problem (2.1) by f̂, indicating thus that it depends on the initial function f given in the initial condition in (2.1). (5.13) where (5.15)

Proof. Using Theorem 5.1 for f̂, we have (5.16). Hence, making use of Theorem 4.1, we obtain (5.17), so that (5.14) and (5.15) are proved.
Next, we have from (5.15), by using (5.13) and a suitable choice of R, the estimate (5.18). This shows that f ∗ g ∈ Ᏺ.
Finally, we are in a position to present a formula for the shift of a function defined on the quantum calculus time scale.

Theorem 6.5. The shift of f : T → R is given by

f̂(q^k t, t) = Σ_{ν=0}^k [k over ν]_q t^ν (1 − t)_q^{k−ν} f(q^ν),

where [k over ν]_q denotes the q-binomial coefficient and (1 − t)_q^n = ∏_{j=0}^{n−1} (1 − q^j t).

M. Bohner and G. Sh. Guseinov

Proof. We use the results from this and the previous section to obtain

f̂(q^k t, t) = Σ_{m=0}^k h_m(q^k t, t) f^{Δ^m}(1) = Σ_{ν=0}^k [k over ν]_q t^ν (1 − t)_q^{k−ν} f(q^ν), (6.9)

where we have used [8, Formula (5.5)] to evaluate the intermediate expression.
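The closed-form shift on T = q^{N₀} (with t₀ = 1) can be checked against the shifting problem itself: on this time scale, σ(s) = qs and μ(t) = (q − 1)t, so the dynamic equation in (2.1) becomes the recursion U[m+1][j+1] = U[m][j+1] − q^{m−j}(U[m][j+1] − U[m][j]) for U[m][j] = f̂(q^m, q^j), with U[m][0] = f(q^m). A hedged sketch in exact rational arithmetic, with sample values of our choosing:

```python
from fractions import Fraction

# Check of the q-calculus shift formula: solve the shifting problem on
# T = q^{N0} (t0 = 1) row by row, then compare with the closed form
# sum_nu [k choose nu]_q t^nu (1 - t)_q^{k - nu} f(q^nu).
q = Fraction(2)
f = lambda m: Fraction(3 * m * m + 1)  # sample values f(q^m), m = 0, 1, ...

def qbinom(k, nu):
    """Gaussian (q-)binomial coefficient [k choose nu]_q."""
    num = den = Fraction(1)
    for i in range(nu):
        num *= 1 - q ** (k - i)
        den *= 1 - q ** (i + 1)
    return num / den

def shift_closed(k, j):
    """\\hat f(q^(k+j), q^j) by the closed formula, with t = q^j."""
    t = q ** j
    total = Fraction(0)
    for nu in range(k + 1):
        poch = Fraction(1)
        for i in range(k - nu):
            poch *= 1 - q ** i * t       # (1 - t)_q^{k - nu}
        total += qbinom(k, nu) * t ** nu * poch * f(nu)
    return total

def shift_recursion(M):
    """U[m][j] = \\hat f(q^m, q^j) from the shifting problem itself."""
    U = [[None] * (M + 1) for _ in range(M + 1)]
    for m in range(M + 1):
        U[m][0] = f(m)                   # initial condition at t0 = 1
    for j in range(M):
        U[j + 1][j + 1] = U[j][j]        # diagonal: \hat f(t, t) = f(t0)
        for m in range(j + 1, M):
            U[m + 1][j + 1] = U[m][j + 1] - q ** (m - j) * (U[m][j + 1] - U[m][j])
    return U

U = shift_recursion(8)
for j in range(5):
    for k in range(4):
        assert U[k + j][j] == shift_closed(k, j)
print("closed form matches the shifting problem on T = q^{N0}")
```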