Accelerated Runge-Kutta Methods

Standard Runge-Kutta methods are explicit, one-step, and generally constant step-size numerical integrators for the solution of initial value problems. Such integration schemes of orders 3, 4, and 5 require 3, 4, and 6 function evaluations per time step of integration, respectively. In this paper, we propose a set of simple, explicit, and constant step-size Accelerated Runge-Kutta methods that are two-step in nature. For orders 3, 4, and 5, they require only 2, 3, and 5 function evaluations per time step, respectively, and are therefore more computationally efficient at achieving the same order of local accuracy. We present the derivation and optimization of these accelerated integration methods, prove their convergence and stability under certain conditions, and provide stability regions for finite step sizes. Several numerical examples illustrate the accuracy, stability, and efficiency of the proposed methods in comparison with standard Runge-Kutta methods.


Introduction
Most of the ordinary differential equations (ODEs) that model systems in nature and society are nonlinear and, as a result, are often extremely difficult, or sometimes impossible, to solve analytically with presently available mathematical methods. Numerical methods are therefore often very useful for understanding the behavior of their solutions. A very important class of ODEs is the initial value problem (IVP)

dy(t)/dt = f(y(t), t),   t_0 ≤ t ≤ t_N,   y(t_0) = y_0,   (1.1)

where y, y_0, and f are (m − 1)-dimensional vectors and t denotes the independent variable, time.
A constant-step numerical integrator approximates the solution y(t_1) of the IVP defined by (1.1) at the point t_1 = t_0 + h by y_1. This is referred to as "taking a step." Similarly, at step n + 1, the numerical integrator approximates the solution y(t_{n+1}) at the point t_{n+1} = t_n + h by y_{n+1}. By taking steps successively, the approximate solutions y_2, ..., y_N at the points t_2, ..., t_N are generated. The accuracy of a numerical integrator is primarily determined by the order of the local error it generates in taking a step. The local error at the end of step n + 1 is the difference u(t_{n+1}) − y_{n+1}, where u(t) is the local solution that satisfies the local IVP

du(t)/dt = f(u(t), t),   t_n ≤ t ≤ t_{n+1},   u(t_n) = y_n.   (1.2)

Standard Runge-Kutta (RK) methods are a class of one-step numerical integrators; that is, the approximate solution y_{n+1} is calculated using y_n and the function f. An RK method that has a local error of O(h^{p+1}) is said to be of order p and is denoted by RKp. It has been established that RK3, RK4, and RK5 require 3, 4, and 6 function evaluations per time step of integration, respectively [1-3].
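The stepping procedure just described can be sketched in a few lines. As an illustration we use the classical 4th-order Runge-Kutta scheme (RK4, 4 function evaluations per step) on the linear test problem y′ = y; both choices are ours, not anything prescribed by the paper, and serve only to show how the global error scales as O(h^p):

```python
import math

def rk4_step(f, t, y, h):
    # One step of the classical 4th-order Runge-Kutta method
    # (4 function evaluations per time step).
    k1 = h * f(t, y)
    k2 = h * f(t + h / 2, y + k1 / 2)
    k3 = h * f(t + h / 2, y + k2 / 2)
    k4 = h * f(t + h, y + k3)
    return y + (k1 + 2 * k2 + 2 * k3 + k4) / 6

def integrate(f, t0, y0, h, n_steps, step):
    # Constant step-size integration: take n_steps successive steps.
    t, y = t0, y0
    for _ in range(n_steps):
        y = step(f, t, y, h)
        t += h
    return y

# Test problem y' = y, y(0) = 1, whose exact solution at t = 1 is e.
f = lambda t, y: y
err = lambda h: abs(integrate(f, 0.0, 1.0, h, round(1.0 / h), rk4_step) - math.e)
# Halving h should reduce the global error by about 2^4 for a 4th-order method.
order = math.log(err(0.1) / err(0.05), 2)
print(order)  # close to 4
```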
In this paper, we propose a new and simple set of numerical integrators named the accelerated Runge-Kutta (ARK) methods. We derive these methods for the autonomous version of (1.1) given by

dy(t)/dt = f(y(t)),   t_0 ≤ t ≤ t_N,   y(t_0) = y_0,   (1.3)

where y, y_0, and f are m-dimensional vectors and t denotes the independent variable, time. We assume that f is a sufficiently smooth Lipschitz continuous function, and hence the solution of the IVP given by (1.3) is unique. Accelerated Runge-Kutta methods are inspired by standard Runge-Kutta methods but are two-step in nature; that is, the approximate solution y_{n+1} is calculated using y_n, y_{n−1}, and the function f. ARKp denotes an ARK method whose local error is of O(h^{p+1}). We will see in the next section that ARK3, ARK4, and ARK5 require 2, 3, and 5 function evaluations per time step of integration, respectively. Since function evaluations are often the most computationally expensive part of numerically solving differential equations (see Section 4 for further details), ARK methods are expected to be more computationally efficient than standard RK methods.
Various authors have attempted to increase the efficiency of RK methods, and as a result numerous variations of two-step explicit RK methods exist today. Byrne and Lambert [4] proposed 3rd- and 4th-order two-step Pseudo RK methods. Byrne [5] later proposed a set of methods that minimize a conservative bound on the error, and Gruttke [6] proposed a 5th-order Pseudo RK method. Costabile [7] introduced the Pseudo RK methods of II species, which involve many parameters to improve local accuracy. Jackiewicz et al. [8] considered the two-step Runge-Kutta (TSRK) methods up to order 4. Jackiewicz and Tracogna [9] studied a more general class of these TSRK methods and derived order conditions up to order 5 using Albrecht's method. Recently, Wu [10] has proposed a set of two-step methods which make use of the derivative of the differential equation.

Firdaus E. Udwadia and A. Farahani 3
The accelerated Runge-Kutta (ARK) methods presented here are along the lines first proposed in [11] and are simple and straightforward. They could be considered special cases of the more general TSRK methods. The simplicity of the ARK methods' construction not only reduces computational overhead cost but also leads to a simpler and more elegant set of order equations that can be analyzed more directly and precisely. The ARK methods also benefit from a more precise and effective optimization technique, which transforms them into higher-order methods for some problems. In addition, we present here a complete analysis of all aspects of the ARK methods, including their motivation, derivation, accuracy, speedup, and stability.
In Section 2, we explain the central idea of ARK methods, followed by a description of how to derive and optimize ARK3, ARK4, ARK4-4 (a more accurate ARK4), and ARK5 in Sections 2.1, 2.2, 2.3, and 2.4, respectively. In Section 3, we use seven standard initial value problems to compare the accuracy of ARK and RK methods. In Section 4, we present the computational speedup of ARK methods over the RK methods using some of the same standard problems. The hallmark of our ARK method is its simplicity. Tracogna and Welfert [12], based on methods developed in [9], use a more sophisticated approach employing B-series to develop a two-step TSRK algorithm for numerical integration. In Section 5, we compare the numerical performance of our method with that of [12] using the above-mentioned standard initial value problems. Section 6.1 deals with the stability and convergence of ARK methods at small step sizes, and Section 6.2 compares the stability regions of ARK and RK methods at finite step sizes. We end with our conclusions and findings, and include in the appendices the description of the standard initial value problems used and the Maple code that derives the ARK3 method.

Central idea of accelerated Runge-Kutta methods
The idea behind the ARK methods proposed herein is perhaps best conveyed by looking at RK3 and ARK3. An example of the RK3 method solving the IVP given by (1.1) is the classical third-order scheme

y_{n+1} = y_n + (1/6)(k_1 + 4k_2 + k_3),   (2.1)

where

k_1 = h f(y_n, t_n),
k_2 = h f(y_n + k_1/2, t_n + h/2),   (2.2)
k_3 = h f(y_n − k_1 + 2k_2, t_n + h).
The approximate solution y_{n+1} at t_{n+1} is calculated based on the approximate solution y_n at t_n, along with 3 function evaluations (k_1, k_2, and k_3) in [t_n, t_{n+1}]. Hence, RK3 is a one-step method with a computational cost of 3 function evaluations per time step. An example of the accelerated RK3 (ARK3) method solving the IVP given by (1.3) has the form

y_{n+1} = c_0 y_n − c_{−0} y_{n−1} + c_1 k_1 + c_{−1} k_{−1} + c_2 (k_2 − k_{−2}),   (2.3)

where

k_1 = h f(y_n),   k_2 = h f(y_n + a_1 k_1),   (2.4)
k_{−1} = h f(y_{n−1}),   k_{−2} = h f(y_{n−1} + a_1 k_{−1}).
The approximate solution y_{n+1} at time t_{n+1} is calculated based on the approximate solutions y_n and y_{n−1} at times t_n and t_{n−1}, respectively, along with 2 function evaluations (k_1 and k_2) in [t_n, t_{n+1}] and 2 reused function evaluations (k_{−1} and k_{−2}) in [t_{n−1}, t_n]. In each step of integration from point t_n to point t_{n+1}, only k_1 and k_2 are calculated, while k_{−1} and k_{−2} are reused from the previous step's k_1 and k_2. Figure 1 illustrates this idea. The reuse of previously calculated data reduces the number of function evaluations at each time step of integration from 3 for RK3 to 2 for ARK3, making ARK3 more computationally efficient than RK3. This increased efficiency is shared by ARK4 and ARK5 when compared with RK4 and RK5, respectively, and is the central idea behind the ARK methods.
It is important to note that at the first step, there is no previous step; therefore, ARK methods cannot be self-starting. A one-step method must supply the approximate solution y_1 at the end of the first step, at t_1. The one-step method must be of sufficient order to ensure that the difference y_1 − y(t_1) is of order p or higher; in this case, the ARKp method will be convergent of order p (see Section 6.1, Theorem 6.4). For example, the ARK3 method can be started by the RK3 or RK4 methods. This extra computation occurs only in the first step of integration; for all subsequent steps, ARK methods bear less computational cost than their RK counterparts.
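The reuse-and-restart mechanics described above can be sketched as follows. The coefficients below are illustrative values of our own choosing (c_0 = 1, c_1 = 1, c_2 = 1/2, a_1 = 1/2, c_{−0} = c_{−1} = 0), which make this two-step scheme 2nd-order accurate on smooth problems; they are NOT the optimized ARK3 parameter sets of Table 1, and with c_{−1} = 0 only k_{−2} has to be carried between steps:

```python
import math

def ark_like_integrate(f, y0, h, n_steps):
    # Two-step integrator sketch in the spirit of the ARK reuse idea,
    # with illustrative (non-ARK3) coefficients as noted in the text above.
    a1, c2 = 0.5, 0.5
    # Startup: the method is not self-starting, so take the first step
    # with a one-step method of sufficient order (here the midpoint RK2).
    k1 = h * f(y0)
    y = y0 + h * f(y0 + k1 / 2)
    km2 = h * f(y0 + a1 * k1)      # seed the reused evaluation k_{-2}
    for _ in range(n_steps - 1):
        k1 = h * f(y)              # 1st fresh evaluation this step
        k2 = h * f(y + a1 * k1)    # 2nd fresh evaluation this step
        y = y + k1 + c2 * (k2 - km2)
        km2 = k2                   # reuse: this step's k2 is next step's k_{-2}
    return y

# y' = y, y(0) = 1, exact solution e^t; wrap f to count function evaluations.
calls = [0]
def f(y):
    calls[0] += 1
    return y

calls[0] = 0
e1 = abs(ark_like_integrate(f, 1.0, 0.1, 10) - math.e)
evals = calls[0]                   # 3 startup evaluations + 2 per later step
e2 = abs(ark_like_integrate(f, 1.0, 0.05, 20) - math.e)
order = math.log(e1 / e2, 2)
print(evals, order)                # 21 evaluations; observed order close to 2
```

After the startup step, each integration step costs exactly 2 fresh evaluations, which is the mechanism that gives ARK methods their advantage over one-step RK methods of the same order.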
The general form of the ARK methods presented in this paper is

y_{n+1} = c_0 y_n − c_{−0} y_{n−1} + c_1 k_1 + c_{−1} k_{−1} + Σ_{i=2}^{v} c_i (k_i − k_{−i}),   (2.5)

where

k_1 = h f(y_n),   k_i = h f(y_n + a_{i−1} k_{i−1}),   i = 2, ..., v,   (2.6)
k_{−1} = h f(y_{n−1}),   k_{−i} = h f(y_{n−1} + a_{i−1} k_{−(i−1)}),   i = 2, ..., v.
Here, v denotes the number of function evaluations performed at each time step and increases with the order of local accuracy of the ARK method. In each step of integration, only k_1, k_2, ... are evaluated, while k_{−1}, k_{−2}, ... are reused from the previous step.
To determine the appropriate values of the parameters (the c's and a's), a three-part process is performed. The first and second parts of this process are shown in detail for ARK3 in [11]. First, the ARK expression (2.5) is expanded using the Taylor series. Second, after some algebraic manipulation, this expansion is equated to the local solution u(t_{n+1}) at t_{n+1} given by the Taylor series expansion (see (1.2)):

u(t_{n+1}) = u(t_n) + h u′(t_n) + (h²/2!) u″(t_n) + (h³/3!) u‴(t_n) + ⋯ .   (2.7)
This results in the system of nonlinear algebraic order equations (2.8)–(2.13). Third, we attempt to solve as many order equations as possible, because the highest power of h for which all of the order equations are satisfied determines the order of the resulting ARK method. The above process requires a great deal of algebraic and numerical calculation, which was mainly performed using Maple (see Appendix B).
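The equate-the-Taylor-expansions step can be made concrete on a much smaller example than ARK3. The sketch below is our own illustration, using exact rational arithmetic in place of the paper's Maple derivation: for a generic 2-stage explicit RK scheme applied to the linear test problem y′ = y, matching the scheme's expansion against e^h term by term yields its order equations, the analogue of the system (2.8):

```python
from fractions import Fraction as F

# For a 2-stage explicit RK scheme y_{n+1} = y_n + c1*k1 + c2*k2, with
# k1 = h*f(y_n) and k2 = h*f(y_n + a1*k1), applying the scheme to y' = y
# and expanding gives the polynomial
#   R(h) = 1 + (c1 + c2)*h + (c2*a1)*h^2,
# while the local solution expands as e^h = 1 + h + h^2/2 + ... .
# Equating coefficients term by term yields the order equations:
def order_equations(c1, c2, a1):
    return [c1 + c2 - 1,         # O(h) condition
            c2 * a1 - F(1, 2)]   # O(h^2) condition

# The midpoint rule (c1=0, c2=1, a1=1/2) and Heun's method
# (c1=c2=1/2, a1=1) both satisfy the conditions, so both are 2nd order:
print(order_equations(F(0), F(1), F(1, 2)),
      order_equations(F(1, 2), F(1, 2), F(1)))
```

The highest power of h for which all residuals vanish is the order of the scheme, exactly as described above for the ARK methods.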
In the following, we show what order of accuracy an ARK method can achieve with a given number of function evaluations v. For v = 1, a second-order method (ARK2) is possible. It is, however, omitted here because such a low order of accuracy is of limited use in accurately solving differential equations; we will look at the cases v = 2, ..., 5.

ARK3
For v = 2, from (2.5), we have the 6 parameters c_0, c_{−0}, c_1, c_{−1}, c_2, and a_1, along with the derived order equations (2.8a)–(2.8f) up to O(h⁴). Clearly, (2.8f) cannot be satisfied, so it is impossible to achieve a 4th-order method with 2 function evaluations per time step. It is, however, possible to achieve a 3rd-order method (ARK3) with 2 function evaluations per time step by satisfying (2.8a)–(2.8d). We solve these equations in terms of c_0, c_{−0}, c_1, and c_{−1} and let the remaining two parameters, c_2 and a_1, become free parameters.
We use sets of free parameters to optimize the accuracy of an ARK method by minimizing its local error. The local error is represented by the higher-order equations (order 4 and above in the case of ARK3): the closer these order equations are to being satisfied, the smaller the local error. Therefore, we try to minimize the difference between the left- and right-hand sides of these order equations. This can be done in several different ways. One popular technique is to minimize a general bound on the error [5]. A second technique is to minimize a norm of the error [13]. A third technique is to satisfy as many individual order equations as possible, hinted at in [4]. In general, there is no way of knowing a priori which technique yields the most accurate ARK method, since the error ultimately depends on the problem at hand. We have chosen the last technique for two reasons. One, it is straightforward to implement exactly. Two, for some classes of problems, the differential term multiplying the unsolved higher-order equations may vanish, which effectively increases the order of the method. For instance, if the differential term multiplying (2.8f) vanishes, and provided (2.8e) is already satisfied, the resulting ARK method with 2 function evaluations becomes a 4th-order method.
Using the two free parameters c_2 and a_1, we can solve two higher-order equations: (2.8e) and any one of the two 5th-order equations, (2.9a) and (2.9b). However, we cannot choose (2.9b) because, to prove stability, we will need −1 ≤ c_{−0} < 1 (see condition (6.16)). Therefore, we choose (2.8e) and (2.9a), which leads to Set 2 of Table 1. We also present a set in which c_{−0} = 0, because doing so can increase the stability (see Section 6) and speed (see Section 4) of an ARK method. Setting c_{−0} = 0 and solving (2.8a)–(2.8e) leads to Set 3 of Table 1. Set 1 of Table 1 is the only set of parameters here that is not found by solving all possible higher-order equations. It is found by setting c_{−0} = 0, solving (2.8a)–(2.8d) for c_1, c_{−1}, and c_2, and searching for a value of a_1 that leads to a highly accurate ARK method in solving some standard IVPs (see Section 3 and Appendix A).

ARK4
For v = 3, we have the 8 parameters c_0, c_{−0}, c_1, c_{−1}, c_2, c_3, a_1, and a_2, along with the order equations (2.10a)–(2.10j) up to O(h⁵).
Since there are 10 order equations up to O(h⁵) but only 8 parameters, we cannot achieve a 5th-order method. However, we can achieve a 4th-order method (ARK4) by solving the first 6 order equations, (2.10a)–(2.10f). Therefore, ARK4 requires 3 function evaluations per time step. We have chosen to solve these equations for the c's while keeping a_1 and a_2 as free parameters, because all order equations are linear in the c's, which makes it straightforward to solve for them.
We use a_1 and a_2 to optimize ARK4's accuracy by solving two of the O(h⁵) equations. We choose (2.10g) and (2.10h) because they were used in deriving the other O(h⁵) equations. Set 2 and Set 3 of Table 2 are the two solutions. Set 1 of Table 2 is a set in which c_{−0} = 0 and is produced by solving (2.10b)–(2.10g). It is a set of approximate values and naturally contains some round-off error. The effects of this round-off error in the parameter values can be significant if the error produced by the ARK methods is small enough to be of the same order. Therefore, to measure the accuracy of the ARK methods correctly (see Section 3), we have calculated the parameter values using 25 digits of computational precision in Maple. Note, however, that such high precision is not required for the ARK methods to be effective.

ARK4-4
For v = 4, we have the 10 parameters c_0, c_{−0}, c_1, c_{−1}, c_2, c_3, c_4, a_1, a_2, and a_3, along with the order equations (2.11a)–(2.11j) up to O(h⁵). Since there are exactly 10 order equations up to O(h⁵), it would seem that we can achieve a 5th-order method. However, the only two solutions of these order equations have c_{−0} = 1 and c_{−0} = 25, both of which produce unstable methods (see condition (6.16)). Therefore, we set c_{−0} = 0, which leaves us with the 8 parameters c_1, c_{−1}, c_2, c_3, c_4, a_1, a_2, and a_3. We solve (2.11b)–(2.11f) for the c's and solve three of the four O(h⁵) order equations using the free parameters a_1, a_2, and a_3. The result is a 4th-order method, called ARK4-4, that is more accurate than ARK4 for most of the IVPs investigated herein. The "-4" indicates that this 4th-order method requires 4 function evaluations, unlike ARK4, which requires only 3.
Set 1 of Table 3 satisfies the higher-order equations (2.11g), (2.11h), and (2.11j). Set 3 of Table 3 satisfies the higher-order equations (2.11g), (2.11h), and (2.11i). Set 2 of Table 3 satisfies the higher-order equations (2.11g), (2.11j), and (2.12b). To understand where (2.12b) comes from, we first need to explain a unique property of the O(h⁵) order equations.
The O(h⁵) order equations are fundamentally different from the lower-order equations. If the order equations for ARK4-4 are derived with a two-dimensional coupled ODE system, we get the four O(h⁵) order equations presented above, that is, (2.11g)–(2.11j). However, if the derivation is done with a one-dimensional ODE or with a two-dimensional uncoupled ODE system, we get only three O(h⁵) equations. Hence, the coupling of the ODE system affects the O(h⁵) order equations but not the O(h⁰), ..., O(h⁴) order equations. The difference between the two sets of order equations is that (2.11h) and (2.11i) are replaced by (2.12b) = 2 × (2.11h) + (2.11i). Therefore, a parameter set that satisfies (2.11h) and (2.11i) also automatically satisfies (2.12b), but not vice versa. To ensure that the order equations are valid for all ODE systems, we have considered a two-dimensional coupled ODE system in our derivations.
Although we have not derived the order equations based on a 3- or higher-dimensional coupled ODE system, we hypothesize that the order equations would be the same as (2.11g)–(2.11j). We base this hypothesis on two observations. One, unlike moving from a one-dimensional system to a two-dimensional system, where the coupling property emerges, no new property emerges when we move from a two-dimensional system to a three-dimensional system, so there is no perceivable reason for the order equations to change. Two, using a two-dimensional coupled ODE system, we have derived the same standard RK order equations that others have reported [13].
With that explanation, it might seem counterintuitive to use a parameter set that satisfies (2.12b); after all, this equation applies only to uncoupled systems. However, the thinking is that for uncoupled systems, the resulting ARK4-4 becomes a 5th-order method, while for coupled systems, it yields a very accurate 4th-order method, because it satisfies two of the four O(h⁵) equations as well as a weighted average of the other two. This is confirmed by our experiments in solving standard problems (see Section 3), since the accuracy of Set 2 is comparable to that of Set 1 and Set 3.

ARK5

For v = 5, we have the 12 parameters c_0, c_{−0}, c_1, c_{−1}, c_2, c_3, c_4, c_5, a_1, a_2, a_3, and a_4, along with the order equations (2.13a)–(2.13r) up to O(h⁶).
Solving the first 10 equations, (2.13a)–(2.13j), gives us a 5th-order method (ARK5), which requires 5 function evaluations per time step. We solve the first 8 equations for the c's and solve equations (2.13i)–(2.13l) for a_1, ..., a_4. Among the multiple solutions of this system, we have found Set 3 in Table 4 to be the most accurate in solving the standard problems given in Appendix A. Setting c_{−0} = 0 instead results in 10 parameters and 9 equations of order up to O(h⁵). We use the one free parameter to solve (2.13k), which leads to numerous solutions. Among them, Set 1 and Set 2 in Table 4 are the most accurate in solving the standard initial value problems of Section 3.

Accuracy
We consider seven initial value problems (IVPs) to illustrate the accuracy of the ARK methods. These problems are taken from examples used by previous researchers [11, 14]. They are named IVP-1 through IVP-7 and are listed in Appendix A. Most of the problems model real-world phenomena suggested by [14] and have been used by other researchers to test numerical integrators. Specifically, IVP-3 models the Euler equations for a rigid body without external forces, IVP-4 and IVP-5 model a one-body gravitational problem in the plane, IVP-6 models a radioactive decay chain, and IVP-7 models the gravitational problem for five planets around the sun and four inner planets. The first two example problems, IVP-1 and IVP-2, are likewise listed in Appendix A. The approximate solution to each example ODE problem is computed for 0 ≤ t ≤ 15 at the step sizes 0.001, 0.0025, 0.005, 0.01, 0.025, 0.05, and 0.1 using ARK3, ARK4, ARK4-4, and ARK5. For comparison, we also show the corresponding results using the RK2, RK3, RK4, and RK5 methods. Each ARK method is started with an RK method of the same order, taking 10 steps in the first time step at 1/10th the value of the step size. For the ARK methods, the Set 1 parameter values of Tables 1, 2, 3, and 4 are used, and for the RK methods, the parameter values in Table 5 are used.
The accuracy plots for problems IVP-1 through IVP-7 are given in Figures 2, 3, 4, 5, 6, 7, and 8, respectively. Along the ordinate is plotted an average of the 2-norm of the global error, computed by taking the 2-norm of the difference between the solution y(t_n) and the approximate solution y_n and then averaging this norm over the interval 10 ≤ t ≤ 15. The exception is IVP-5, for which the average absolute value of the global error in each of the four components of the solution is plotted separately. The results obtained using RK methods are shown by lines with circles along them, while the results for ARK methods are shown by lines with squares along them. Lines with the same color indicate methods that have the same number of function evaluations.
Due to small step sizes (for example, 10⁻³) and the high order of local accuracy of some of the RK and ARK methods, the global error for some problems reaches 10⁻²⁴. This is beyond MATLAB's precision capabilities: MATLAB has a minimum round-off error of about eps ≈ 2.2 × 10⁻¹⁶ at magnitude 1 and a maximum of 17 significant digits of accuracy. Therefore, a proper analysis of the accuracy of the ARK and RK methods cannot be done in MATLAB. In fact, we found that for small step sizes, the higher-order ARK and RK plot lines produced by MATLAB are erroneous; they are jagged lines with slopes different from the order of the method. That is why the PARI mathematics environment [16], which provides 28 digits of accuracy by default, was used for all calculations in this section. This ensures that the computational round-off error is far smaller than the methods' global error. Note, however, that high-precision computation is not necessary for the ARK methods to be effective; the ARK methods retain their accuracy and efficiency advantages over the RK methods at any precision.
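The precision ceiling described here is easy to reproduce. The following check is our illustration in stdlib Python, standing in for MATLAB and PARI: a difference of 10⁻²⁴ vanishes in IEEE-754 double precision but survives at 28 significant digits:

```python
import sys
from decimal import Decimal, getcontext

# Double precision cannot resolve relative differences below about 2.2e-16,
# so a global error near 1e-24 is swamped by round-off in any IEEE-754
# double environment (MATLAB included):
print(sys.float_info.epsilon)                       # about 2.22e-16
print(1.0 + 1e-24 == 1.0)                           # True: difference vanishes

# A higher-precision environment resolves it; the paper uses PARI at 28
# digits, and Python's stdlib decimal module plays the same role here:
getcontext().prec = 28
print((Decimal(1) + Decimal("1e-24")) == Decimal(1))  # False: difference kept
```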
The slopes of the plot lines confirm the order of our ARK methods: the slopes of the ARK3, ARK4, ARK4-4, and ARK5 lines (except when the time steps are large) are 3, 4, 4, and 5, respectively. The slope of the ARK5 lines for problems IVP-3 to IVP-7 (Figures 4 to 8) is especially important. We hypothesized in Section 2.3 that although our O(h⁵) order equations were derived based on a two-dimensional ODE system, they would also apply to higher-dimensional problems. The fact that problems IVP-3 to IVP-7 are 3-dimensional or higher and their ARK5 accuracy plot lines have slope 5 for small step sizes confirms our hypothesis.
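The slope measurements used here can be automated. The helper below is our own sketch, fed with synthetic error data (a hypothetical 3rd-order method with error = 0.7·h³) rather than the paper's results; it fits the least-squares slope of the log-log accuracy data at the step sizes used in this section:

```python
import math

def observed_order(hs, errors):
    # Least-squares slope of log(error) against log(h): if error ~ C*h^p,
    # the slope of the log-log accuracy plot approaches the order p.
    xs, ys = [math.log(h) for h in hs], [math.log(e) for e in errors]
    n = len(xs)
    xbar, ybar = sum(xs) / n, sum(ys) / n
    num = sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys))
    den = sum((x - xbar) ** 2 for x in xs)
    return num / den

# Synthetic data for a hypothetical 3rd-order method, error = 0.7 * h^3:
hs = [0.001, 0.0025, 0.005, 0.01, 0.025, 0.05, 0.1]
print(observed_order(hs, [0.7 * h ** 3 for h in hs]))  # 3.0 up to round-off
```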
Comparing ARK and RK methods of the same order (such as ARK3 versus RK3), the plots show that, generally speaking, the global error of the ARK methods is greater than that of the corresponding RK methods by about one order of magnitude. Although great effort was made to find the parameter sets that yield the most accurate ARK methods, it is possible that better sets exist which would considerably improve the accuracy of the ARK methods. Set 1 of ARK4 exemplifies this claim: it is seen from the accuracy plots that for problems IVP-2, IVP-3, IVP-5, and IVP-7, ARK4 has virtually the same accuracy as RK4.
A more useful comparison of ARK and RK methods is between those that have the same number of function evaluations per time step. Table 6 summarizes the order and number of function evaluations of the ARK and RK methods. Comparing ARK3 versus RK2, ARK4 versus RK3, and ARK4-4 versus RK4 in the accuracy plots, the ARK methods prove to be more accurate. Because ARK3 and ARK4 are higher-order methods than RK2 and RK3, respectively, their gain in accuracy grows as the step size decreases. For example, for IVP-5 (see Figure 6) at step size 0.1, ARK3 and ARK4 are less than one order of magnitude more accurate than RK2 and RK3, respectively, but at step size 0.001, they are three and four orders of magnitude more accurate. A similar gain in accuracy can occur even for ARK4-4 versus RK4 when, for some problems (see Figures 2 and 7), ARK4-4 becomes a 5th-order method.
Speedup

To verify these speedups experimentally, we have implemented the methods in MATLAB. We have measured the time taken to solve IVP-2 and IVP-4 for 0 ≤ t ≤ 1000 (such a large time span is needed to get a reliable time measurement) at the 7 different step sizes used in Section 3, using the MATLAB utilities TIC and TOC for the time measurements. We repeated this integration 100 times and took the average of the integration times before calculating the speedup (t_RK − t_ARK)/t_RK. Figures 9 and 10 show the results. Speedup depends on the problem being solved; among IVP-1 through IVP-7, IVP-2 leads to the lowest speedup, while IVP-1 leads to the highest. Speedup also depends on the parameter set used, because it affects the computational overhead. The ARK implementations here use the values of Set 1 in Tables 1, 2, and 4, and the RK implementations use the parameter values in Table 5. The plots show a linear least-squares fit to the points. We expect this line to be fairly flat because the number of steps for the ARK and RK methods differs only slightly (caused by the startup of the ARK methods at the first time step) and because speedup does not depend on the step size.
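The measurement procedure can be sketched as follows. The helper names and repeat count are ours, mirroring the tic/toc-style timing described above; the printed figure is the ideal function-evaluation-count speedup, not a measured one:

```python
import time

def speedup(t_rk, t_ark):
    # Relative time saved by the ARK method: (t_RK - t_ARK) / t_RK.
    return (t_rk - t_ark) / t_rk

def average_time(run, repeats=100):
    # Average wall-clock time over many repeats (the analogue of MATLAB's
    # tic/toc around a long integration) to reduce measurement noise.
    start = time.perf_counter()
    for _ in range(repeats):
        run()
    return (time.perf_counter() - start) / repeats

# If cost were dominated purely by function evaluations, ARK3 (2 per step)
# against RK3 (3 per step) would give an ideal speedup of (3 - 2)/3 = 1/3;
# overhead (extra arithmetic and assignments) lowers this in practice.
print(speedup(3.0, 2.0))
```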
The speedups seen in Figures 9 and 10 are lower than expected. Moreover, we have found the time measurement, and consequently the speedup, to be machine dependent. Through measurements of the computation times at different points in our computer codes, we have found two main causes for the less-than-expected speedup of our ARK methods. First, the ARK methods require more multiplication and addition operations when evaluating the expression for y_{n+1} than the RK methods do. This accounts for about half of the loss in the expected speedup. Second, the ARK methods require a number of assignment operations to set y_{n−1}, k_{−1}, ... to y_n, k_1, ... in every step of integration, which the RK methods do not require. Somewhat surprisingly, this takes a significant amount of computation time in the MATLAB environment and is responsible for roughly the other half of the loss in the expected speedup.

Comparison with other two-step methods
We now compare the ARK4 method, with the parameters of Set 1 of Table 2, to the more sophisticated TSRK4 method presented in [12]. Both are fourth-order methods with 3 function evaluations per time step. The accuracy of the two methods is calculated in the same way as in Section 3, but implemented in MATLAB. Because of MATLAB's lower precision, we have not calculated the accuracy at the two smallest step sizes. The average of the 2-norm of the global error for problems IVP-1, IVP-2, and IVP-4 is presented in Figures 11, 12, and 13. The TSRK4 method is more than one order of magnitude more accurate in solving IVP-1; the ARK4 method is less than one order of magnitude more accurate than the TSRK4 method in solving IVP-2; and the TSRK4 method is slightly more accurate than ARK4 in solving IVP-4. The TSRK4 method is also more accurate in solving the other problems listed in Appendix A. This is because the TSRK4 method is more general and possesses more parameters, which have been effectively optimized. In Figure 14, we have plotted the relative change in computation time when solving IVP-1, IVP-2, and IVP-4 (similar to Section 4). The lower computation time required by the ARK4 method (see Figure 14) is due to its lower computational overhead, a product of its simpler form as given in (2.5).

Stability
We have already established that the error of an ARKp method over a single time step (the local error) has order O(h^{p+1}) (see Section 1). In this section, we look at how this error accumulates over many time steps (the global error) by considering two cases: small step sizes approaching zero (D-stability) and large step sizes. Similar results are provided in [12] in the form of (a) a bound on the global error for small step sizes and (b) a comparable stability region for finite step sizes.

Small step sizes
The ARK method given by expression (2.5) yields an approximate solution {y_n}, 2 ≤ n ≤ N, to the initial value problem (1.3) and can be written in the one-step form (6.1), where I is the m × m identity matrix. We can write (6.1) as (6.2), where Q is the 2m × 2m block matrix given by (6.4) and φ is the 2m × 1 vector defined in (6.5). With a slightly different initial condition z_0 = y_0 + Δ_0 at time t_0, a slightly different approximate solution z_1 = y_1 + Δ_1 at time t_1, and a slightly different recipe φ(z_n, f, h) + hδ_n (1 ≤ n ≤ N − 1), the ARK method yields a "perturbed" approximate solution {z_n}, 2 ≤ n ≤ N, which can be written as (6.6) for 1 ≤ n ≤ N − 1, or as (6.7).

To prove stability, we will show that under some conditions, ‖z_n − y_n‖_∞ is bounded. To prove convergence, we will show that ‖y(t_n) − y_n‖_∞ → 0 as h → 0, where y(t_n) is the exact solution of the IVP defined by (1.3). In order to do this, we will assume that the ARK method is stable and that, as h tends to zero, ‖y(t_1) − y_1‖_∞ tends to zero, where t_0 is the initial time and t_1 = t_0 + h. The stability analysis comprises two lemmas and two theorems. In Lemma 6.1, we consider the eigenvalue problem for the matrix Q. In Lemma 6.2, we show that the function φ is Lipschitz continuous. In Theorem 6.3, we establish the stability of the ARK methods by looking at the difference of (6.2) and (6.7). In Theorem 6.4, we prove the convergence of the ARK methods using the result of Theorem 6.3. This stability analysis is inspired by Shampine [17].

Lemma 6.1. The eigenvalue problem for the matrix Q given in (6.4) can be written as

QW = WD.   (6.9)

If −1 ≤ c_{−0} < 1, the matrix W is nonsingular and satisfies the bounds (6.10), where here and throughout the stability analysis all norms denote the infinity norm ‖·‖_∞.

Proof. Making use of the O(h⁰) order equation, c_0 = 1 + c_{−0}, we observe that (6.9) is satisfied. Noting (6.13), we require c_{−0} ≠ 1 for W⁻¹ to exist. Recalling that ‖·‖ ≡ ‖·‖_∞, we can write (6.15). With the condition

−1 ≤ c_{−0} < 1,   (6.16)

we can see from (6.15) that ‖W⁻¹QW‖ = 1.
From condition (6.16) and expression (6.13), we find a bound on ‖W‖, and from expression (6.14), we obtain a bound on ‖W⁻¹‖. Thus, Lemma 6.1 is proven.
Lemma 6.2. The function φ given in (6.5) is Lipschitz continuous; that is, inequality (6.20) holds for a constant L and any z_n, y_n, 1 ≤ n ≤ N.
Proof. First, we will show by induction that inequality (6.21) holds, where we recall that ‖·‖ ≡ ‖·‖_∞ and that v is the number of function evaluations of the ARK method; here and throughout this proof, 1 ≤ n ≤ N. For i = 1, the bound holds with γ_1 = L. Assuming inequality (6.21) is true for i = j, we have

(6.23), where γ_{j+1} = L(1 + hγ_j |a_j|). Hence, inequality (6.21) holds for i = j + 1 and, therefore, for all 1 ≤ i ≤ v. Similarly, we can show inequality (6.24). The function φ can be written as (6.25). Using inequality (6.21), we can write (6.26), and similarly, using inequality (6.24), we can write (6.28). Combining these bounds establishes the Lipschitz condition on φ, and thus Lemma 6.2 is proven.
Theorem 6.3. Suppose that an ARK method is used to solve the IVP given in (1.3) and that f is a Lipschitz continuous function throughout the integration span. If −1 ≤ c_{−0} < 1, the ARKp method is stable for all sufficiently small step sizes, and the bound (6.31) holds, with the constant given by (6.32).

Proof. Subtracting (6.7) from (6.2), we have

(6.33). We multiply this equation by W⁻¹ to get

(6.34). With a suitable substitution, we obtain a recursion for the transformed difference. Taking the infinity norm and using the Lipschitz condition on φ (Lemma 6.2), we can write

(6.37). Considering expressions (6.10) and (6.12), we obtain a bound on the initial terms. We will now show by induction that this bound can be written as (6.41). In the following, we will use the inequality 1 + x ≤ e^x. Since for any r, h + t_k − t_r = t_{k+1} − t_r, we obtain (6.47). Therefore, inequality (6.41) holds for k + 1 and, by induction, it holds for 2 ≤ n ≤ N.
We can simplify inequality (6.41) by writing (6.48). Using inequality (6.39), we have (6.49).
Since ‖z_n − y_n‖ ≤ ‖W‖ ‖z̃_n − ỹ_n‖ and ‖δ̃_m‖ ≤ ‖W⁻¹‖ ‖δ_m‖, we arrive at the stability bound (6.51), where E = 4w e^{2Lw(t_n − t_1)} max(t_n − t_1, 1).
Hence, the stability of ARK methods is established. Recall that the ARK methods require the initial condition y_0 at the beginning of the first time step, t_0, and the approximate solution y_1 at the end of the first time step, t_1, to be provided. The max_{r=0,1} ‖z_r − y_r‖ term in expression (6.31) refers to the perturbations in these values.
Theorem 6.4. Consider an ARKp method used to solve the IVP given by (1.3), where f is a sufficiently smooth Lipschitz continuous function throughout the integration span. If −1 ≤ c_{−0} < 1 and the approximate solution y_1 at time t_1 is accurate to O(h^q), the ARKp method is convergent of order min(p, q), and the bound (6.52) holds, where E = 4w e^{2Lw(t_n − t_1)} max(t_n − t_1, 1).

6.53
In particular, ‖y(t_n) − y_n‖ → 0 as h → 0.
Proof. As in Theorem 6.3, consider a perturbed solution {z_n} with z_0 = y_0 + Δ_0, z_1 = y_1 + Δ_1, and the recipe for z_n perturbed by hδ_n. The truncation error hδ*_n is defined as the difference between the exact and approximate solutions at the end of the current step when the method is provided with the exact solution at the beginning of the current and previous steps. Therefore, the exact solution y(t_n) satisfies (6.56). We will now show that {y(t_n)}, 2 ≤ n ≤ N, is a perturbed solution of the ARK method. For n = 1, we have (6.57)–(6.59).
We set z_0 = y(t_0) = y_0 (6.60), and from (6.58) and (6.59) we have z_2 = y(t_2). Similarly, we set δ_n = δ*_n for 1 <= n <= N - 1, and from (6.55) and (6.56) we get z_n = y(t_n) for 2 <= n <= N.
Since we have made the same assumptions as in Theorem 6.3, relations (6.31) and (6.32) hold. For ARKp, δ*_m = O(h^p). The approximate solution y_1 at time t_1, which is required by the ARKp method, is supplied by a one-step method such as RKp. If y_1 is accurate of order q, that is, ||y(t_1) - y_1|| = O(h^q), the above inequality shows that the ARKp method is convergent of order min(p, q). In particular, ||y(t_n) - y_n|| -> 0 as h -> 0, and Theorem 6.4 is proven.
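The min(p, q) behavior of Theorem 6.4 can be illustrated numerically. The sketch below uses the classical two-step Adams-Bashforth method as a stand-in of order p = 2 for a two-step explicit method (the ARK coefficients themselves are not reproduced here) on y' = -y, y(0) = 1, over [0, 1]. Seeding y_1 with the exact value yields observed order 2, while the crude seed y_1 = y_0, accurate only to O(h) (q = 1), degrades the observed order to min(2, 1) = 1.

```python
import math

def ab2(f, y0, y1, h, n_steps, t0=0.0):
    """Two-step Adams-Bashforth: y_{n+1} = y_n + h*(3/2 f_n - 1/2 f_{n-1})."""
    ys = [y0, y1]
    for n in range(1, n_steps):
        tn, tnm1 = t0 + n * h, t0 + (n - 1) * h
        ys.append(ys[n] + h * (1.5 * f(tn, ys[n]) - 0.5 * f(tnm1, ys[n - 1])))
    return ys[-1]  # approximation to y(t0 + n_steps*h)

f = lambda t, y: -y  # test problem: y' = -y, exact solution e^{-t}

def error_at_1(h, exact_seed):
    n = round(1.0 / h)
    # exact seed: y_1 = e^{-h}; crude seed: y_1 = y_0 (error O(h), i.e. q = 1)
    y1 = math.exp(-h) if exact_seed else 1.0
    return abs(ab2(f, 1.0, y1, h, n) - math.exp(-1.0))

# observed order = log2(error(2h) / error(h))
order_good = math.log2(error_at_1(0.02, True) / error_at_1(0.01, True))
order_poor = math.log2(error_at_1(0.02, False) / error_at_1(0.01, False))
```

Here order_good comes out near 2 and order_poor near 1, matching the min(p, q) prediction.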

Large step sizes
To produce the stability region of ARK methods for finite step sizes, we examine the approximate solution of the linear one-dimensional problem y' = λy, where λ is a complex number. After some simplification using the order equations, the ARK method's approximate solution of this problem becomes

6.67
For stability, we require that the moduli of the eigenvalues of the matrix in (6.63) be smaller than 1. Figures 15, 16, 17, and 18 show the stability regions of ARK and RK methods produced in this way. The curves represent the points at which the computed maximum eigenvalue modulus equals 1. The region of stability is the area enclosed by the curves and the real axis. It is clear from (6.64) and (6.65) that the stability regions of ARK3 and ARK4 depend only on c_{-0}, but the stability of ARK4-4 and ARK5 depends also on the products c_4 a_1 a_2 a_3 and c_5 a_1 a_2 a_3 a_4, respectively.
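This eigenvalue test can be sketched for any explicit two-step recurrence. Applied to y' = λy, such a method reduces to y_{n+1} = α(z) y_n + β(z) y_{n-1} with z = hλ, and stability requires both roots of the characteristic polynomial ξ^2 - αξ - β (the eigenvalues of the companion matrix) to lie in the closed unit disk. The sketch below uses the two-step Adams-Bashforth coefficients as a stand-in, since the ARK matrix of (6.63) is not reproduced here.

```python
import cmath

def max_root_modulus(alpha, beta):
    """Largest root modulus of xi^2 - alpha*xi - beta = 0, the characteristic
    polynomial of the recurrence y_{n+1} = alpha*y_n + beta*y_{n-1}."""
    disc = cmath.sqrt(alpha * alpha + 4 * beta)
    return max(abs((alpha + disc) / 2), abs((alpha - disc) / 2))

def ab2_stable(z):
    """Is z = h*lambda inside the AB2 stability region?
    AB2 on y' = lambda*y gives y_{n+1} = (1 + 1.5 z) y_n - 0.5 z y_{n-1}."""
    return max_root_modulus(1 + 1.5 * z, -0.5 * z) <= 1 + 1e-12

# The real-axis stability interval of AB2 is (-1, 0):
inside, outside = ab2_stable(-0.5), ab2_stable(-1.5)
```

Scanning z over a grid of the complex plane and contouring where the maximum modulus equals 1 reproduces the stability-region plots described above.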

Conclusion
In this paper, we have developed a simple set of constant step-size, explicit, accelerated Runge-Kutta (ARK) methods for the numerical integration of initial value problems. They rely on the general form posited in (2.5). The simplicity of this form facilitates their detailed treatment, an aspect missing in previous work on similar multistep methods. Specifically, we have provided a study of their motivation, derivation, optimization, accuracy, speedup, stability, and convergence.
Our main conclusions are as follows.
(1) The ARK methods developed in this paper are simple to analyze and implement, and they lead to an elegant set of order equations. Three accuracy-optimized parameter sets for ARK3, ARK4, ARK4-4, and ARK5 have been provided. The accuracy of the methods obtained from these sets is confirmed by numerically solving a standard set of seven initial value problems.
(2) The ARK methods use information from previous steps of integration. ARK3, ARK4, and ARK5 require one less function evaluation per time step of integration than RK3, RK4, and RK5, respectively.
(3) Numerical examples show that the accuracy-optimized ARK3, ARK4, ARK4-4, and ARK5 methods presented here are superior to RK methods of the same computational cost in terms of the number of function evaluations. Moreover, the ARK methods are comparable to RK methods that have the same order of local accuracy.
(4) The additional cost of obtaining the starting value y_1 by a one-step method is incurred only in the first time step of integration and is insignificant compared to the computational savings of ARK methods over the subsequent steps.
(5) In solving the standard initial value problems considered here, the ARK3, ARK4, and ARK5 methods exhibit minimum speedups of 19%, 17%, and 15% over the RK3, RK4, and RK5 methods, respectively, on our computer. However, the theoretical speedups of the ARK methods, based on the smaller number of function evaluations required, are approximately 33%, 25%, and 17%, respectively. This reduction in speedup is caused by the higher computational overhead of the ARK methods compared with the RK methods.
(6) Convergence and stability for small step sizes are proven for ARK methods under certain conditions. The stability regions of ARK methods for large step sizes are shown to be generally smaller than, but comparable to, those of RK methods.

A. Standard initial value problems
The following are seven initial value problems (IVPs) that we have chosen from the literature for numerical experiments. Most of the problems model real-world phenomena, were suggested by [14], and have been used by other researchers to test numerical integrators. They help to illustrate the accuracy, speedup, and stability of ARK methods in comparison with RK methods. These problems are solved for 0 <= t <= 15.
The first problem is a simple IVP of the form

IVP-1 [14]: y' = -y, y(t_0) = 1, with solution y(t) = e^{-t}. (A.1)

The second problem is an example of a simple nonautonomous ODE. We use it to illustrate that such problems can be readily converted to autonomous form and solved by ARK methods without difficulty. The problem is written as

IVP-2 [11]: y' = -ty/(1 + t^2), y(t_0) = 1, with solution y(t) given by (A.2).

The third problem comprises the Euler equations of motion for a rigid body without external forces. This problem has no closed-form solution, so its exact solution is approximated by RK6 with a step size of 10^{-4}. We have

IVP-3 [14]: y_1' = y_2 y_3, y_1(t_0) = 0; y_2' = -y_1 y_3, y_2(t_0) = 1; y_3' = -0.51 y_1 y_2, y_3(t_0) = 1.

A.3
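IVP-3 can be checked cheaply against its known invariants: for these Euler equations, d/dt(y_1^2 + y_2^2) = 2y_1 y_2 y_3 - 2y_1 y_2 y_3 = 0, and likewise 0.51 y_1^2 + y_3^2 is conserved. The sketch below integrates the system with classical RK4 (a stand-in reference integrator; the paper uses RK6 for its reference solutions) and monitors the first invariant.

```python
def f(y):
    """Euler rigid-body equations of IVP-3."""
    y1, y2, y3 = y
    return (y2 * y3, -y1 * y3, -0.51 * y1 * y2)

def rk4_step(y, h):
    """One classical RK4 step for an autonomous system."""
    shift = lambda a, k: tuple(yi + a * ki for yi, ki in zip(y, k))
    k1 = f(y)
    k2 = f(shift(h / 2, k1))
    k3 = f(shift(h / 2, k2))
    k4 = f(shift(h, k3))
    return tuple(yi + h / 6 * (a + 2 * b + 2 * c + d)
                 for yi, a, b, c, d in zip(y, k1, k2, k3, k4))

y = (0.0, 1.0, 1.0)          # initial condition of IVP-3
for _ in range(1500):        # integrate over 0 <= t <= 15 with h = 0.01
    y = rk4_step(y, 0.01)
invariant = y[0] ** 2 + y[1] ** 2   # conserved; equals 1 at t = 0
```

The drift of `invariant` away from 1 gives a quick indication of the integrator's accumulated error.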
The fourth problem is a one-body gravitational problem with eccentricity e = 0.8. The solution, which is the orbit of a body revolving around a center body, has regions of high and low stability, as indicated by the high and low values of the spectral norm of the Jacobian. We use this problem to show the effects of such stability behavior on the ARK methods' performance. The ARK and RK methods fail to maintain their order of accuracy for step sizes larger than about 0.01. This is due to the instability of the solution, particularly in the region where the revolving body passes close to the center body. This problem is commonly solved by employing a variable-step numerical method, which reduces the step size in this region. The problem is written as IVP-4 [14]

A.5
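A quick way to exercise a problem of the IVP-4 type is to start the revolving body at perihelion: in normalized units with eccentricity e, the radius there is 1 - e and the speed is sqrt((1+e)/(1-e)), so the total energy is exactly -1/2. The sketch below (with hypothetical normalized initial conditions, since the paper's exact statement of IVP-4 is not reproduced here) integrates through the perihelion passage with classical RK4 and checks energy conservation; the large derivatives at small radius are what force the small step sizes noted above.

```python
import math

E = 0.8  # orbital eccentricity of IVP-4

def f(s):
    """Planar two-body problem in normalized units: s = (x, y, vx, vy)."""
    x, y, vx, vy = s
    r3 = (x * x + y * y) ** 1.5
    return (vx, vy, -x / r3, -y / r3)

def rk4_step(s, h):
    shift = lambda a, k: tuple(si + a * ki for si, ki in zip(s, k))
    k1 = f(s)
    k2 = f(shift(h / 2, k1))
    k3 = f(shift(h / 2, k2))
    k4 = f(shift(h, k3))
    return tuple(si + h / 6 * (a + 2 * b + 2 * c + d)
                 for si, a, b, c, d in zip(s, k1, k2, k3, k4))

# perihelion start: r = 1 - e, purely tangential speed sqrt((1+e)/(1-e))
s = (1.0 - E, 0.0, 0.0, math.sqrt((1.0 + E) / (1.0 - E)))
for _ in range(2000):        # 0 <= t <= 2 with h = 0.001
    s = rk4_step(s, 0.001)
energy = 0.5 * (s[2] ** 2 + s[3] ** 2) - 1.0 / math.hypot(s[0], s[1])
```

With the small step size used here the energy stays very close to -1/2; repeating the run with h near 0.01 makes the loss of accuracy near perihelion visible.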
The sixth problem is a linear 10th-order system that models a radioactive decay chain. This problem and the next are used to measure the performance of the ARK methods when solving larger systems of ODEs. Although this problem has a closed-form solution, because of numerical errors in computing it, the solution was approximated by RK6 with a step size of 10^{-4}. It takes the form IVP-6 [14]. The seventh problem is a 5-body gravitational problem for the orbit of 5 outer planets around the sun and 4 inner planets. The orbital eccentricities of planets 1 through 5 are 0.31, 0.33, 0.30, 0.13, and 0.63, respectively. The subscripts p and c denote the planet and coordinate, respectively. The exact solution of this problem was also approximated by RK6 with a step size of 10^{-4}. It is a 30-dimensional system of first-order ODEs and can be written as IVP-7 [14]. The gravity constant and planet masses are given in (A.10). The first two problems are scalar ODEs, the first autonomous and the second nonautonomous. The dimension of the remaining examples, which include linear and nonlinear ODE systems, ranges from 3 to 30.
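The exact chain of IVP-6 is not reproduced in this appendix, but one common form of a linear 10th-order decay chain (each species decays into the next at unit rate, with the last species absorbing) makes a convenient stand-in and conserves total mass, which gives a cheap correctness check:

```python
def f(y):
    """Hypothetical unit-rate decay chain: species i decays into species i+1;
    the last species only accumulates. Total mass sum(y) is conserved."""
    return ([-y[0]]
            + [y[i - 1] - y[i] for i in range(1, 9)]
            + [y[8]])

def rk4_step(y, h):
    """One classical RK4 step (stand-in integrator for this sketch)."""
    add = lambda a, k: [yi + a * ki for yi, ki in zip(y, k)]
    k1 = f(y)
    k2 = f(add(h / 2, k1))
    k3 = f(add(h / 2, k2))
    k4 = f(add(h, k3))
    return [yi + h / 6 * (a + 2 * b + 2 * c + d)
            for yi, a, b, c, d in zip(y, k1, k2, k3, k4)]

y = [1.0] + [0.0] * 9        # all mass starts in the first species
for _ in range(1500):        # 0 <= t <= 15 with h = 0.01
    y = rk4_step(y, 0.01)
total = sum(y)               # conserved: should remain 1
```

Because the components of f sum to zero for any state, every Runge-Kutta stage preserves the total exactly, so `total` stays at 1 up to roundoff; by t = 15 nearly all of the mass has reached the final species.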

Figure 2: Accuracy plot for IVP-1 using log-log scale. The RK methods are shown by lines with circles; the ARK methods by lines with squares. Lines of the same color correspond to methods with the same number of function evaluations per time step.

Figure 6: Accuracy plots for IVP-5: (a) y_1, (b) y_2, (c) y_3, and (d) y_4 (see caption of Figure 2).

Figure 14: Relative change in computation time of the TSRK4 method compared with the ARK4 method in solving IVP-1 (blue), IVP-2 (red), and IVP-4 (black), showing that TSRK4 requires greater computation time.

Firdaus E. Udwadia and A. Farahani

Proof. Consider again the approximate solution {y_n}, 2 <= n <= N, and the perturbed solution {z_n}, 2 <= n <= N, of the ARKp method given by

y_{n+1} = c_0 y_n - c_{-0} y_{n-1} + φ(y_n, y_{n-1}, f, h), for 1 <= n <= N - 1, (6.54)
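The general two-step form above can be exercised concretely. The family below is a hypothetical illustration, not the ARK coefficients themselves: it takes c_0 = 1 + c_{-0} (the usual consistency condition) and fixes φ to the unique linear combination of f_n and f_{n-1} that makes the method second order for each admissible c_{-0}. With c_{-0} = 0 it reduces to the two-step Adams-Bashforth method, and for -1 <= c_{-0} < 1 the characteristic roots are 1 and c_{-0}, so the root condition of Theorem 6.3 holds.

```python
import math

def two_step(c, f, y0, y1, h, n_steps, t0=0.0):
    """y_{n+1} = (1+c) y_n - c y_{n-1}
                 + h [ (3-c)/2 f(t_n, y_n) - (1+c)/2 f(t_{n-1}, y_{n-1}) ],
    a second-order explicit two-step family with free parameter c = c_{-0}."""
    ys = [y0, y1]
    for n in range(1, n_steps):
        tn, tnm1 = t0 + n * h, t0 + (n - 1) * h
        ys.append((1 + c) * ys[n] - c * ys[n - 1]
                  + h * ((3 - c) / 2 * f(tn, ys[n])
                         - (1 + c) / 2 * f(tnm1, ys[n - 1])))
    return ys[-1]

f = lambda t, y: -y  # test problem: y' = -y, exact solution e^{-t}

def error_at_1(h, c):
    n = round(1.0 / h)
    return abs(two_step(c, f, 1.0, math.exp(-h), h, n) - math.exp(-1.0))

# observed convergence order for c = -0.5 (admissible: -1 <= c < 1)
order = math.log2(error_at_1(0.02, -0.5) / error_at_1(0.01, -0.5))
```

Halving the step size cuts the error by roughly a factor of four, so the observed order is close to 2 throughout the admissible range of c.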

Table 6: Summary of order and number of function evaluations per time step for the RK and ARK methods presented herein.
Relation (6.40) establishes a bound on d_{n+1} in terms of d_n. This is not very useful by itself, because we do not know how large d_n is; instead, we want a bound in terms of d_1.
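The passage from the one-step bound to a bound in terms of d_1 is the standard discrete Grönwall unrolling. Assuming (6.40) has the generic form d_{n+1} <= (1 + hK) d_n + hδ with constants K, δ >= 0 (hypothetical names, since (6.40) is not reproduced here), iterating n times gives

```latex
d_{n+1} \le (1+hK)^{n}\,d_1 + h\delta \sum_{r=0}^{n-1} (1+hK)^{r}
        = (1+hK)^{n}\,d_1 + \frac{(1+hK)^{n}-1}{K}\,\delta
        \le e^{K(t_{n+1}-t_1)}\,d_1 + \frac{e^{K(t_{n+1}-t_1)}-1}{K}\,\delta,
```

using (1 + hK)^n <= e^{nhK} and nh = t_{n+1} - t_1. This is the shape of the bound appearing in (6.41).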
where u(t) is the solution of the Kepler equation u - e sin u = t. The fifth problem is similar to the above problem but with e = 0. With consistent stability behavior, it is a good example to compare with the previous problem. It has the form IVP-5 [14]. The gravity constant in IVP-7 is 2.95912208286, and the planet masses are m_0 = 1.00000597682 (sun & 4 inner planets), m_1 = 0.000954786104043 (Jupiter), m_3 = 0.0000437273164546 (Uranus), m_4 = 0.0000517759138449 (Neptune), and m_5 = 0.00000277777777778 (Pluto). (A.10)