Differential Transform Algorithm for Functional Differential Equations with Time-Dependent Delays

An algorithm using the differential transformation which is convenient for finding numerical solutions to initial value problems for functional differential equations is proposed in this paper. We focus on retarded equations with delays which, in general, are functions of the independent variable. The delayed differential equation is turned into an ordinary differential equation using the method of steps. The ordinary differential equation is transformed into a recurrence relation in one variable using the differential transformation. The approximate solution has the form of a Taylor polynomial whose coefficients are determined by solving the recurrence relation. Practical implementation of the presented algorithm is demonstrated in an example of the initial value problem for a differential equation with a nonlinear nonconstant delay. A two-dimensional neutral system of higher complexity with constant, nonconstant, and proportional delays has been chosen to show the numerical performance of the algorithm. Results are compared against the Matlab function DDENSD.


Introduction
Functional differential equations (FDEs) are used to model processes and phenomena which depend on past values of the modelled entities. Indicatively, we mention models describing machine tool vibrations [1], predator-prey type models [2], and models used in economics [3]. Further models and details can be found, for instance, in [4,5] or [6].
Differential transformation (DT), a semianalytical approach based on Taylor's theorem, has proved to be efficient in solving a variety of initial value problems (IVPs), ranging from ordinary to functional, partial, and fractional differential equations [7][8][9][10][11]. However, there is no publication about the systematic application of DT to IVPs for differential equations with nonconstant delays which are functions of the independent variable.
In this paper, we present an extension of DT to a class of IVPs for delayed differential equations with analytic right-hand side. Although the analyticity assumption may seem quite restrictive, it is reasonable to develop theory for such a class of equations [12,13]. The paper is organised as follows. In Section 2, we define the subject of our study and briefly describe the methods we combine to solve the studied problem, including recalling necessary results of previous studies. Section 3 contains the main results of the paper, including the algorithm description, new theorems, examples, and a comparison of numerical results. In Section 4, we briefly summarise what has been done in the paper.
Let t* = min_{1≤i≤r} inf_{t∈[t0,T]} α_i(t) and m = max{m_1, m_2, . . . , m_r}; hence, t* ≤ t0 and m ≤ n. If m < n, we have a retarded system (1); otherwise, if m = n, we call the system neutral. Furthermore, if t* < t0, an initial vector function must be prescribed on [t*, t0]. The DT algorithm for the case t* = t0 = 0 with all delays being proportional is described in [14]. The DT algorithm for the case t* < t0 when all delays are constant is introduced in [15]. In this paper, we develop the algorithm for the case t* < t0 when at least one delay is nonconstant.
To have a complete IVP, we consider system (1) together with initial conditions (2) and, since t* < t0, also subject to an initial vector function (3). We consider the IVP (1)-(3) under the following hypotheses: (H1) We assume that all the functions ϕ_j(t), j = 1, . . . , p, are analytic in [t*, t0], the functions α_i(t), i = 1, . . . , r, are analytic in [t0, T], and the functions f_j, j = 1, . . . , p, are analytic in an open set containing the arguments occurring in (1). (H2) If α_i(t) = q_i t and m_i = n in f_j for some i ∈ {1, . . . , r} and j ∈ {1, . . . , p}, that is, the jth equation is neutral with respect to the proportional delay α_i, we assume that u_l^(n)(α_i(t)) ≡ 0 for l ∈ {1, . . . , p}, l ≠ j. This hypothesis is included since, if it is not fulfilled, the existence of a unique solution of the IVP could be violated.
We note that these assumptions imply that the IVP (1)-(3) has a unique solution in the interval [t 0 , T].

Method of Steps.
The basic idea of our approach is to combine DT and the general method of steps. The method of steps enables us to replace the terms including delays with the initial vector function Φ(t) and its derivatives. Then, the original IVP for the delayed or neutral system of differential equations is turned into an IVP for a system of ordinary differential equations.
For the sake of clarity, we include a simple explanatory example. Suppose that we have a system with three delays, one of each type considered. We have to distinguish two cases: (a) if t0 = 0, applying the method of steps turns system (1) into an ordinary system with m_l ≤ n for l = 1, 2, 3, 4. More details on the general method of steps can be found, for instance, in the monographs [4] or [6]. Continuation of the method of steps algorithm for equations with constant delays τ_1, . . . , τ_r is described in [15]. Briefly summarised, the interval [t0, T] is divided into subintervals I_l = [t_{l−1}, t_l], l = 1, . . . , K, where t_K = T and t_l, l = 1, . . . , K − 1, are the principal discontinuity points, that is, the set of points t_{ρ,σ} such that t_{0,1} = t0 and, for ρ, σ ≥ 1, t_{ρ,σ} are the minimal roots with odd multiplicity of r equations. If nonconstant nonproportional delays α_i appear in system (1), the principal set of discontinuity points is defined analogously: the principal discontinuity points for the solutions of system (1) are given by the set of points t_{ρ,σ} such that t_{0,1} = t0 and, for ρ, σ ≥ 1, t_{ρ,σ} are the minimal roots with odd multiplicity of r equations. Similar to the case of constant delays, we break the interval [t0, T] into subintervals I_l = [t_{l−1}, t_l], l = 1, . . . , K. We start with the mesh grid t0, . . . , t_K formed by the principal discontinuity points calculated using Definition 1. To improve convergence or performance of the algorithm, it is possible to refine the mesh grid by inserting other points into it. For more details on the principal discontinuity points and the mesh grid, we refer to the monograph [16].
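To make the mesh-grid construction concrete, the sketch below computes the first principal discontinuity point for the nonconstant delay used later in Example 2, assuming the delayed argument is α(t) = t − (1/2)e^(−t), so that t1 solves α(t1) = t0 = 0. The bisection helper is our own illustration, not part of the presented algorithm.

```python
import math

def bisect(f, a, b, tol=1e-12):
    """Find a root of f in [a, b] by bisection (assumes a sign change)."""
    fa = f(a)
    for _ in range(200):
        m = 0.5 * (a + b)
        fm = f(m)
        if fm == 0.0 or (b - a) < tol:
            return m
        if fa * fm < 0:
            b = m
        else:
            a, fa = m, fm
    return 0.5 * (a + b)

# Delayed argument alpha(t) = t - tau(t) with tau(t) = (1/2)e^{-t};
# the first principal discontinuity point t_1 solves
# alpha(t) = t_0 = 0, i.e. t - (1/2)e^{-t} = 0.
t1 = bisect(lambda t: t - 0.5 * math.exp(-t), 0.0, 1.0)
print(t1)  # approximately 0.351734, as quoted in Example 2
```

Subsequent points t_{ρ,σ} are obtained the same way, solving α(t) = t_{ρ−1,σ} over each new subinterval.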

Differential Transformation
Definition 2. The differential transformation of a real function u(t) at a point t0 ∈ R is D{u(t)}[t0] = {U(k)[t0]}_{k=0}^{∞}, where the kth component is defined as U(k)[t0] = (1/k!) [d^k u(t)/dt^k]_{t=t0}, provided that the original function u(t) is analytic in a neighbourhood of t0.
Definition 3. The inverse differential transformation of {U(k)[t0]}_{k=0}^{∞} is defined as u(t) = D^{−1}{U(k)[t0]}[t] = Σ_{k=0}^{∞} U(k)[t0] (t − t0)^k. In applications, the function u(t) is expressed by a finite sum u(t) ≈ Σ_{k=0}^{N} U(k)[t0] (t − t0)^k. As we can observe in (10), DT is based on the Taylor series; hence, any theorem about convergence of Taylor series may be used. However, we would like to point out the paper [17], where the finest general explicit a priori error estimates are given.
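As a minimal illustration of Definitions 2 and 3, the following sketch lists the DT components of u(t) = e^t (for which U(k)[t0] = e^{t0}/k! by Definition 2) and recovers an approximate function value via the finite inverse transformation; the helper names are ours.

```python
import math

def dt_exp(N, t0=0.0):
    """DT components U(k)[t0], k = 0..N, of u(t) = e^t: U(k) = e^{t0}/k!."""
    return [math.exp(t0) / math.factorial(k) for k in range(N + 1)]

def inverse_dt(U, t0, t):
    """Finite inverse DT (Definition 3): u(t) ~ sum_k U(k) (t - t0)^k."""
    return sum(Uk * (t - t0) ** k for k, Uk in enumerate(U))

U = dt_exp(10)
approx = inverse_dt(U, 0.0, 1.0)
print(abs(approx - math.e))  # truncation error of the degree-10 polynomial
```

The error here is governed by the Taylor remainder, which is the subject of the a priori estimates in [17].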
The following formulas are listed, e.g., in [18] and will be used in Section 3.3.

Lemma 1. Assume that F(k)[t0] and U(k)[t0] are the differential transformations of functions f(t) and u(t), respectively.

Remark 1. Similar formulas can be obtained using a numerical approach called the Functional Analytical Technique based on Operator Theory [19,20]. The main disadvantage of many papers about DT is that there are almost no examples of equations with nonpolynomial nonlinear terms containing the unknown function. However, DT of components containing nonlinear terms can be obtained in a consistent way using the algorithm described in [21].

Theorem 1.
Let g and f be real functions analytic near t0 and g(t0), respectively, and let h be the composition h(t) = (f ∘ g)(t). Denote by {G(k)[t0]}_{k=0}^{∞}, {F(k)[g(t0)]}_{k=0}^{∞}, and {H(k)[t0]}_{k=0}^{∞} the differential transformations of the functions g, f, and h at t0, g(t0), and t0, respectively. Then, the numbers H(k)[t0] satisfy H(0)[t0] = F(0)[g(t0)] and, for k ≥ 1, H(k)[t0] = Σ_{l=1}^{k} F(l)[g(t0)] B_{k,l}(G(1)[t0], . . . , G(k − l + 1)[t0]), where B_{k,l}(x_1, . . . , x_{k−l+1}) are the partial ordinary Bell polynomials.
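A small sketch of Theorem 1, assuming the standard recurrence for the partial ordinary Bell polynomials (B_{0,0} = 1, B_{k,0} = 0 for k ≥ 1, and B_{k,l} = Σ_{i=1}^{k−l+1} x_i B_{k−i,l−1}); we check it on h(t) = exp(t^2), whose Taylor coefficients at 0 are 1, 0, 1, 0, 1/2, . . . .

```python
import math

def bell_hat(k, l, x):
    """Partial ordinary Bell polynomial B_{k,l}(x_1,...,x_{k-l+1}).
    x[i-1] holds x_i.  Recurrence: B_{0,0} = 1, B_{k,0} = 0 for k >= 1,
    B_{k,l} = sum_{i=1}^{k-l+1} x_i * B_{k-i,l-1}."""
    if k == 0 and l == 0:
        return 1.0
    if k == 0 or l == 0:
        return 0.0
    return sum(x[i - 1] * bell_hat(k - i, l - 1, x)
               for i in range(1, k - l + 2))

def compose_dt(F, G):
    """DT of h = f o g (Theorem 1): H(0) = F(0) and, for k >= 1,
    H(k) = sum_{l=1}^{k} F(l) * B_{k,l}(G(1),...,G(k-l+1))."""
    N = len(G) - 1
    H = [F[0]]
    for k in range(1, N + 1):
        H.append(sum(F[l] * bell_hat(k, l, G[1:]) for l in range(1, k + 1)))
    return H

# Check: h(t) = exp(t^2) at t0 = 0, i.e. f = exp, g(t) = t^2.
F = [1 / math.factorial(l) for l in range(5)]   # DT of exp at g(0) = 0
G = [0.0, 0.0, 1.0, 0.0, 0.0]                   # DT of t^2 at 0
print(compose_dt(F, G))                          # Taylor coefficients of exp(t^2)
```

The naive recursion is enough for illustration; for long series, the B values should be tabulated bottom-up.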
The following lemma, proved in [21], is useful when calculating the partial ordinary Bell polynomials.

Algorithm Description.
Recall system (1) with initial conditions (2) and initial vector function (3). Further recall that in Section 2.2, we broke the interval [t0, T] into subintervals I_j. Then, we are looking for a solution u(t) of the IVP (1)-(3), where the solution u_{I_j} in the jth interval I_j is obtained in the following way: we solve equation (19). Application of DT at t_{j−1} to equation (19) yields a system of recurrence algebraic equations, where the function F is the DT of the right-hand side of equation (19) and involves application of Theorem 1.
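The passage from a differential equation to a recurrence can be sketched on the simplest possible case u′ = u, u(0) = 1 (our own illustration, not one of the paper's test problems): since the DT of u′(t) is (k + 1)U(k + 1), the transformed equation is the algebraic recurrence (k + 1)U(k + 1) = U(k).

```python
import math

def dt_solve_linear(u0, N):
    """Solve the DT recurrence U(k+1) = U(k)/(k+1) with U(0) = u0,
    obtained by transforming u'(t) = u(t), u(0) = u0."""
    U = [u0]
    for k in range(N):
        U.append(U[k] / (k + 1))
    return U

U = dt_solve_linear(1.0, 12)
u_at_1 = sum(U)                  # inverse DT evaluated at t = 1, t0 = 0
print(abs(u_at_1 - math.e))      # small truncation error
```

In the actual algorithm, the same loop is run on each subinterval I_j, with the right-hand side F assembled via Theorem 1 and the delayed terms handled by the method of steps.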
Next, we transform the initial conditions (2). Following Definition 2, we derive (23). Using (22) with (23) and then the inverse transformation rule, we obtain an approximate solution to (19) in the form of a Taylor series (24) for all j ∈ {1, . . . , K}. To transform (20) correctly, we need the following theorem.
Proof. To prove (25) with p = 0, we use Theorem 1. The statement for p > 0 is a consequence of Lemma 1, and it remains to prove (26). We recall (28). As the assumption was that α_i(t_{j−1}) ∈ I_l, we may apply Definition 2 to (28). Substituting t0 = α_i(t_{j−1}) and z = x + y gives (26). □

New DT Formulas.
In the applications, we also use the following DT formulas.

Theorem 3. Assume that F(k)[t0] is the differential transformation of the function f(t) and r ∈ R.

Proof. (a) Recall Newton's generalisation of the binomial formula: if x and y are real numbers with |x| > |y| and r is any complex number, one has (x + y)^r = Σ_{k=0}^{∞} (r choose k) x^{r−k} y^k, where (r choose k) = r(r − 1) · · · (r − k + 1)/k!. (b) We prove formula (32) by induction. For k = 1, we have (ln(t))′ = 1/t; hence, (32) is valid. Suppose that (32) holds for k. Then, differentiating once more shows that (32) holds for k + 1. Thus, formula (32) is valid for all k ∈ N. Now, by Definition 2, the statement follows. □

As the first test problem, we choose an IVP for a scalar equation with one nonconstant delay whose exact solution is known to be the exponential function e^t. The purpose of including this example is to compare the results obtained by DT against values of the exact solution and also against results obtained by the Matlab function DDENSD, which has been designed to approximate solutions to IVPs for neutral delayed differential equations.
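Before moving to the test problems, the logarithm formula of Theorem 3 can be checked numerically: at t0 = 1, the DT of ln(t) consists of the coefficients of the expansion ln(t) = Σ_{k≥1} (−1)^{k+1} (t − 1)^k / k. A short sketch of this check (our own illustration):

```python
import math

def dt_log_at_1(N):
    """DT of ln(t) at t0 = 1: L(0) = 0 and L(k) = (-1)^{k+1}/k for k >= 1,
    i.e. the coefficients of ln(t) = sum_{k>=1} (-1)^{k+1} (t-1)^k / k."""
    return [0.0] + [(-1) ** (k + 1) / k for k in range(1, N + 1)]

L = dt_log_at_1(60)
t = 1.5
approx = sum(Lk * (t - 1.0) ** k for k, Lk in enumerate(L))
print(abs(approx - math.log(t)))  # small truncation error for |t - 1| < 1
```

The series converges only for |t − 1| ≤ 1 (excluding t = 0), which is why Example 1 applies it on the interval [1, e] together with the method of steps.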
Example 1. Consider the delayed equation (35) with the initial condition (36) and with the initial function (37). First, we find the differential transform of the initial condition (36), which is U(0)[1]. We use the transformation of the exponential function with the centre at 0 and D{ln(t)}[1], the transformation of the logarithmic function at 1, respectively. Then, Lemma 1 and Theorem 3 yield the required coefficients. For t ∈ [1, e], equation (35) is transformed into a recurrence relation whose coefficients are given explicitly for k ≥ 1. Using the inverse transformation, we see that, for t ∈ [1, e], the computed series corresponds to the exact solution of the IVP (35)-(37).
In the second step of the method of steps, i.e., in the interval t ∈ [e, e^e], we know that u(t) = e^t for t ∈ [1, e], and equation (35) is transformed accordingly. Taking the values calculated in the first step and substituting them into the recurrence formulas (43), we again obtain a series which coincides with the exact solution of problems (35)-(37).
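The behaviour reported later in Table 1 can be reproduced in outline: on [1, e], the DT solution of Example 1 is the Taylor polynomial of e^t centred at t0 = 1, i.e., with coefficients U(k)[1] = e/k!. The sketch below (assuming this closed form, which the computation above confirms) evaluates the absolute error at the right endpoint t = e for N = 5, 10, 25.

```python
import math

# Absolute error of the degree-N DT (Taylor) solution of Example 1
# at t = e, assuming u(t) = e^t and U(k)[1] = e / k!.
def dt_solution(N, t):
    return sum(math.e / math.factorial(k) * (t - 1.0) ** k
               for k in range(N + 1))

for N in (5, 10, 25):
    err = abs(dt_solution(N, math.e) - math.exp(math.e))
    print(N, err)  # the error drops sharply as N grows
```

For N = 5 the error is of order 10^{-1}, consistent with the observation that N = 5 does not give satisfactory precision, while N = 25 is accurate to near machine precision on [1, e].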
In the second application, we have chosen an IVP for a nonlinear system of neutral delayed differential equations taken from the fully open access paper [18]. There are several reasons to test the proposed algorithm on this particular problem. The first is that the problem involves a nonlinear system of neutral equations of high complexity whose exact solution is unknown. Secondly, the proposed algorithm is a complete differential transform version of the algorithm presented in [18], where a modified Adomian formula has been used. Furthermore, the calculations done in [18] are shown only for the first step of the method of steps, up to the first principal discontinuity point, whereas we continue the calculations beyond that point in this paper. Last but not least, we want to verify performance and reproduce the values obtained by DT and published in [18]. The paper [18] by Rebenda et al. was first submitted four years ago, and since that time, the Maple source code has been lost.
Example 2. Consider a nonlinear system of neutral delayed differential equations: with initial functions for t ∈ [− 2, 0], and initial conditions

For t ∈ [0, t1], where t1 ≈ 0.351734 is the minimal root of t − (1/2)e^{−t} = 0, using the method of steps, we obtain an ordinary system. We need to find the differential transform of the considered problem. We notice that system (2) contains a nonlinear term, and we apply Theorem 1. First, applying DT to system (2) at t0 = 0, we get the recurrent system (52). Denote g(t) = t^{2/3}; then, h(t) = (g ∘ u1)(t), and following Theorem 1, we obtain its transformation. The transformed initial conditions are given by (55). Using them, we compute the first three coefficients of the nonlinear term h(t). Solving the recurrent systems (52) and (53), we get the coefficients of the approximate solution. Then, f(t) = (u2′ ∘ e)(t) and, since e(t1) = 0, Theorem 1 in combination with Lemma 1 yields the required transformation. To get the initial data U1(k)[t1] and U2(k)[t1] for k = 0, 1, 2, we have to transform u_i at t1, i = 1, 2. For k = 0, 1, 2, we obtain the corresponding values. The initial values at t1 will be approximated by taking finite sums in computer evaluations of the infinite sums above.
The coefficients H2 for k = 0, 1, 2 are computed analogously. Let us turn our attention to the first few values of (69). Finally, the coefficients F2 for k = 0, 1, 2 are obtained. As the calculations are getting more complicated, all the calculations have been done numerically only. Table 1 shows a comparison of results for Example 1 obtained by the DT algorithm with the orders of Taylor polynomials of the approximate solution N = 5, 10, 25 to results of the Matlab function DDENSD in the interval [1, e]. Since the exact solution is known, absolute errors illustrate the precision of each algorithm setting. All numbers are rounded to four decimal places. We see that DDENSD performs satisfactorily and DT for N = 10, 25 does even better, whereas DT for N = 5 does not show satisfactory precision. Table 2 brings the same comparison in the second interval [e, e^e]. We can observe a fast growth of the function values of the exact solution, which leads to growth of absolute errors and loss of precision in all settings. It indicates that, at the end of the considered interval [e, e^e], the rate of precision would be better assessed using relative errors.

Numerical Results and Discussion
Implementation of DT in Matlab in the case of Example 2 produces numerical results which are listed in Table 3. The results of DT with the order of the Taylor polynomial N = 10 are compared to values obtained by DT combined with the modified Adomian formula in [18] and to values produced by the Matlab function DDENSD.
First, we should say that the function DDENSD had difficulty at 0, where the value of the delayed argument t/2 was equal to the argument itself. Hence, to make DDENSD work, we replaced t/2 by t/2 − 10^{−16} in the second equation of (2). Our hypothesis is that the reason for the DDENSD failure is a combination of two facts: the second equation is neutral with respect to a proportional delay, and the interval where the problem is considered contains 0.
Second, we should mention that the numerical results for DDENSD were obtained by looking for approximate solutions on the whole interval [0, t 2 ]. When trying to follow the method of steps, i.e., using DDENSD on [0, t 1 ] and then on [t 1 , t 2 ], the results on the second interval [t 1 , t 2 ] did not correspond to reality: there was a discontinuity in u 2 at t 1 .
Furthermore, we recall that the values taken from [18] have been computed using the symbolic software Maple and that the source code of the computation has been lost. Now, we can see a very good concordance of all algorithms in the numerical values of the second component u2, while we observe a growing distance between the values of the first component u1 computed by the presented DT algorithm and the values computed by the other two algorithms. As u1 has exponential characteristics, we interpret the growing distance as a growing lack of precision of the DT algorithm, which is based on approximation by Taylor polynomials. We suppose that dividing the intervals [0, t1] and [t1, t2] into smaller subintervals, i.e., refining the mesh grid, and applying the DT algorithm on those smaller intervals consecutively will improve the performance of the presented algorithm.
Although it seems that the algorithm used in [18] shows better performance than the one presented in this paper, we cannot claim this with certainty, as the source code was lost and we are not able to reproduce the data. Moreover, the approach used in [18] involves calculations of symbolic derivatives, which makes it difficult to implement in numerical software like Matlab.

Conclusion
In this paper, we presented an algorithm which applies the differential transformation to initial value problems for systems of delayed or neutral differential equations with nonconstant delays. Two examples have been chosen to validate and test the algorithm. Numerical comparison of the presented semianalytical approach to the Matlab function DDENSD brought interesting and promising results. Example 1 showed expected and reliable behaviour of the differential transform in the first step of the method of steps and an expected deviation of the numerical results from the values of the exact solution in the second step. Furthermore, we could observe a good concordance between the presented algorithm and DDENSD.
After facing difficulties with DDENSD in Example 2, we could confirm a very good concordance of both differential transform and DDENSD in values of the component u 2 which has a polynomial character on the considered intervals. On the other hand, we observed a growing discrepancy between the two methods in values of the component u 1 which has an exponential character. Our conclusion is that the disagreement is caused by large lengths of the intervals where the approximate solution is computed using the differential transform and that refining the mesh grid is necessary to obtain better performance.
Further investigation will be focused on experimenting with different densities of mesh grids and studying convergence of the algorithm to find the optimal mesh grid. Numerical experiments will be focused on tuning the performance on problems with high complexity whose exact solutions are known and subsequently on applications to nonartificial real-life problems whose exact solutions are unknown.

Data Availability
The numerical data used to support the findings of this study are included within the article.

Conflicts of Interest
The authors declare that there are no conflicts of interest regarding the publication of this article.

Table 3: Comparison of presented DT, DT with modified Adomian formula [18], and Matlab function DDENSD for u1 and u2 in the first step for t ∈ [0, t1], and presented DT and DDENSD in the second step for t ∈ [t1, t2].