Adaptive Optimal m-Stage Runge-Kutta Methods for Solving Reaction-Diffusion-Chemotaxis Systems

Copyright © 2011 Jui-Ling Yu. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

We present a class of numerical methods for reaction-diffusion-chemotaxis systems, which are significant for biological and chemical pattern formation problems. Efficient and reliable numerical algorithms are essential for generating patterns from such systems. Along with the method of lines, implicit or semi-implicit schemes are the typical time-stepping solvers used to relax the time step constraints imposed by the stability condition. However, these two classes of schemes are usually difficult to implement. In this paper, we propose an adaptive optimal time-stepping strategy for the explicit m-stage Runge-Kutta method applied to reaction-diffusion-chemotaxis systems. Instead of relying on empirical approaches to control the time step size, the variable time step sizes are given explicitly. Moreover, theorems about the stability and convergence of the algorithm are provided to analyze its robustness and efficiency. Numerical results on a testing problem and a real application problem are shown.


Introduction and Background
The reaction-diffusion-chemotaxis system is the most common model used to describe chemical and biological processes and the most successful model used to describe the generation of pattern formations. Numerical simulation of such systems is necessary because complicated chemotaxis terms and highly nonlinear reaction terms make analytical solutions of realistic systems difficult to find. Even though some of the early chemotaxis models had analytical solutions or one-dimensional simulations with reduced models [1, 2], the development of numerical methods for solving chemotaxis systems does not seem sufficient compared with other types of PDE systems. Since interesting patterns may emerge and evolve before they reach steady state [3-8], particular heed needs to be paid to the model

u_t = D_u Δu − ∇ · (u χ(p) ∇p) + f.    (1.1)

Here, u represents the microorganism (bacteria) density; p stands for the chemoattractant concentration and is given or obtained from another equation. f is the reaction term, which models the interactions between the chemicals. The diffusion term, D_u Δu, imitates the random motion of particles with positive constant diffusivity D_u. The term ∇ · (u χ(p) ∇p) describes the directed movement of particles, modulated by the concentration gradient of the chemoattractant. The chemotaxis parameter function χ indicates the strength of the chemotaxis and is given by the Keller-Segel model as

χ(p) = C_0 / (C_1 + p)^s,    (1.2)
where C_0 and C_1 are positive constants and s ≥ 1 is an integer power. Observe that lim_{x→∞} χ(x) = 0, which indicates that particles tend to stay where they are when the local concentration of chemoattractant is large.
The basic idea for illustrating patterns using reaction-diffusion-chemotaxis systems is to demonstrate that the eigenvalues of the local linearization that are responsible for growth or decay can have positive or negative real parts. This can lead to growth at some places and decay at others, resulting in spatially inhomogeneous patterns [12, 13]. Based on this idea, we consider the method of lines (MOL) approach in this paper, because it is not only the first step in the construction of many numerical methods but also provides good explanations of the generation of patterns through analyzing the positive (growth) and negative (decay) modes of the eigenvalues. Other numerical methods used on the chemotaxis model include the fractional step method [14], the Alternating Direction Implicit (ADI) method [15, 16], and the optimal two-stage scheme [5, 8], listed here for the reader's reference.
The MOL approach reduces the partial differential equations to a system of ordinary differential equations (ODEs) by approximating the spatial derivatives with one of numerous spatial discretization methods, for example, finite difference, finite element, or spectral methods. The solution can then be obtained with a suitable time integrator. Depending on the features of the spatial discretization, the computational work can be facilitated by developing more sophisticated, robust, and efficient adaptive time integrators [5, 8]. In this paper, we use standard finite differences: central differences for the Laplacian operator and upwind differences for the chemotaxis term. The reason for adopting upwind differences is stability: when the chemotaxis dominates, the system behaves more like a hyperbolic system, and upwind differences are then the more stable choice [17]. It is worth mentioning that the following time-stepping scheme can also be applied to other discretization methods when the domain containing the eigenvalues is known or can be estimated. The general form of the resulting ODE system after spatial discretization reads

du/dt = M(t) u + b(t),    (1.3)

where M(t) is a time-dependent matrix and b(t) is a reaction vector that also depends on time.
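For concreteness, the spatial discretization just described can be sketched as follows. This is a minimal 1D periodic-grid illustration, not the paper's actual test configuration: the sensitivity function `chi` and all coefficient values are illustrative assumptions.

```python
import numpy as np

def chi(p, C0=1.0, C1=1.0, s=1):
    """Keller-Segel chemotaxis sensitivity chi(p) = C0/(C1+p)^s (constants illustrative)."""
    return C0 / (C1 + p) ** s

def mol_rhs(u, p, Du, h):
    """Right-hand side of the MOL system du/dt = M(t)u + b(t) on a periodic 1D grid:
    central differences for the diffusion term, first-order upwind differences for
    the chemotaxis flux -(u * chi(p) * p_x)_x."""
    up, um = np.roll(u, -1), np.roll(u, 1)   # u_{i+1}, u_{i-1} (periodic)
    pp = np.roll(p, -1)                      # p_{i+1}
    # diffusion: standard second-order central difference
    diffusion = Du * (up - 2 * u + um) / h**2
    # advective face velocity v_{i+1/2} = chi(p_{i+1/2}) * (p_{i+1} - p_i)/h
    v_face = chi(0.5 * (p + pp)) * (pp - p) / h
    # upwind flux: take u from the side the velocity comes from
    flux = np.where(v_face >= 0, v_face * u, v_face * up)
    chemotaxis = -(flux - np.roll(flux, 1)) / h
    return diffusion + chemotaxis
```

Because the chemotaxis term is written in conservative flux form, the discrete total mass is preserved on the periodic grid, mirroring the continuous model.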
Typically, the time integrators for (1.3) can be classified as implicit or explicit methods. Generally, the numerical scheme for an implicit method is more complicated than for an explicit one, since it often involves solving systems of algebraic equations or dealing with matrix inverses. Although implicit methods allow larger step sizes than explicit ones for spatial discretization matrices whose eigenvalues all have negative real parts, the computational cost may still be high due to the complicated implementation. Most importantly, the time step constraint persists if the spatial matrices contain eigenvalues with large positive real parts; even for an implicit method, the use of very small time steps is unavoidable in order to meet the stability condition. This situation is likely to occur for pattern formation problems, for which the spatial discretization matrices possess eigenvalues with both positive and negative real parts [5, 8]. Besides, for pattern formation problems, the positive real eigenvalues are the unpredictable ones, which evolve along with time. Bearing these factors in mind, explicit methods are actually the better choice for reaction-diffusion-chemotaxis systems.
In the present paper, we start by reviewing the explicit m-stage Runge-Kutta methods and then construct their adaptive optimal time steps to enhance the computational efficiency for the chemotaxis system. The optimal time step formulas are presentable and easy to calculate: only simple estimations of the bounds for the largest and smallest eigenvalues of the discretized spatial matrices are required at each time step. Numerical simulations on a test problem and a realistic problem are provided. A comparison is made on the testing problem to demonstrate that our method is not only stable but also more efficient than other similar methods, with respect to CPU time and the number of iterations.

Time-Independent ODE Systems
Since interesting patterns are caused by the interaction between growth and decay modes, to analyze system (1.3) and relate it to the pattern formation model (1.1), one can consider the local linearization of the right-hand side of (1.3). The positive and negative real eigenvalues of the linearization matrix then correspond, respectively, to the growth and decay modes. For this reason, we consider the corresponding ODEs induced from the chemotaxis system and introduce our idea by starting with the constant-coefficients case, then generalize it to the time-dependent-coefficients case. The general formulation of the m-stage Runge-Kutta method is

u_{n+1} = u_n + Δt Σ_{s=1}^{m} γ_s k_s,  k_1 = F(t_n, u_n),  k_q = F(t_n + α_q Δt, u_n + Δt Σ_{j=1}^{q−1} β_{qj} k_j),    (2.1)

where m is the stage number and α_q = Σ_{j=1}^{q−1} β_{qj}, q = 2, . . ., m. The subscripts n and n + 1 denote the iteration level. For simplicity, we first look at the uniform time step size Δt = t_n − t_{n−1} for all n. Then u_n ≈ u(t_n) is the numerical solution at the nth step. Equations for the unknowns γ_s and β_{qs} can be obtained by matching terms with Taylor's expansion.
We choose single-step explicit methods, which basically are Runge-Kutta methods, because they are easy to implement and especially efficient for ODE systems derived from PDEs. The method is set to be explicit since the range of eigenvalues with negative real parts is measurable for chemotaxis systems. In [5, 8], we developed an adaptive two-stage time-stepping scheme with optimal time step sizes for solving the chemotaxis system and verified that it is reliable and more economical compared with other similar methods. We conjecture that the strategy used in [5, 8] could be used to derive optimal adaptive three- and four-stage methods. In the two-stage case, the size of the optimal step is determined by the root of a first-order equation; hence it is absolutely optimal within one time step. If our conjecture is correct, the sizes of the optimal steps for higher-stage methods should be the roots of some higher-order equations. But are these adaptive step sizes absolutely optimal? And are they as easy to find as in the two-stage case?
To answer these questions, we begin our study with the three-stage Runge-Kutta method and set b = 0 in (2.1). As in [5, 8], we first fix all the intermediate coefficients at t = t_n for the three-stage Runge-Kutta method. The general time-dependent case is proposed in the next section. Denoting M = M(t_n) and applying (2.2) to the three-stage Runge-Kutta method with all the intermediate values fixed at t_n, we arrive at

u_{n+1} = (I + δ_1 ΔtM + δ_2 (ΔtM)² + δ_3 (ΔtM)³) u_n,    (2.4)

where δ_i, i = 1, 2, 3, are combinations of the coefficients β_{qs} and γ_s. Since u(t_{n+1}) = e^{MΔt} u(t_n), and assuming u_n is exact up to order Δt³, we have δ_1 = 1, δ_2 = 1/2!, δ_3 = 1/3!. Similar arguments also hold for RK methods with other stages. Thus, if we treat it as a standard iterative method, (2.4) is a one-step method [18]:

u_{n+1} = p_m(ΔtM) u_n,  p_m(ΔtM) = Σ_{i=0}^{m} (ΔtM)^i / i!.    (2.5)
In the case of b ≠ 0, (2.5) becomes (2.6). Denote by v the solution of the system with a perturbed initial condition; we then have (2.7). From (2.6) and (2.7), the accumulation error can be written as (2.8). Note that

p_m(λΔt) = Σ_{i=0}^{m} (λΔt)^i / i!

is called the amplification factor, where λ is an eigenvalue of M. It is obtained when a scalar test problem, u' = λu, is applied to the single-step method. Suppose that λ = a + bi = Re(λ) + Im(λ)i. For pattern formation problems, what we are interested in is the long-term behavior of the solution vectors. It is important to ensure that the observed patterns are accurate in the sense that they do not come from the accumulation error of the numerical scheme. We therefore make the following definition of stability.
Here O(Δt) means terms of higher order in Δt. Our definition is designed to guarantee appropriate numerical solutions for both the decay and growth cases. Based on this definition, we get restrictions on the step size. Typically, time steps are restricted either when the scheme is explicit and a < 0 or when the scheme is implicit and a > 0 [5, 8]. Since the amplification factors are functions of complex numbers, they are sometimes difficult to analyze. The following theorem shows that the amplification factor can be approximated by a simpler function of a plus some error terms, instead of a function of the complex eigenvalues λ. Observe that the order of the error terms indicated in Theorem 2.2 is the same as the truncation error of the m-stage Runge-Kutta method. Therefore, we can work with this simpler function without losing the order of accuracy.
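To make the amplification factor concrete, here is a small sketch (assuming nothing beyond the definition above) that evaluates p_m at a possibly complex argument z = λΔt:

```python
from math import factorial

def amplification(z, m):
    """Amplification factor of the explicit m-stage RK method on u' = lam*u:
    the degree-m Taylor truncation of exp(z), evaluated at z = lam*dt.
    z may be complex."""
    return sum(z**i / factorial(i) for i in range(m + 1))
```

A decay mode (Re λ < 0) is damped only while |amplification(λΔt, m)| < 1, while a growth mode must satisfy |amplification(λΔt, m)| > 1 so the numerical pattern mirrors the true instability.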

Stability Analysis and the Optimization Method
The following theorem states that the m-stage Runge-Kutta method satisfies the stability condition automatically for a > 0 but needs constraints on Δt when a < 0.
Proof. The proof for m = 1 is straightforward, and the proof for m = 2 is in [5, 8]. For m = 3,

2.11
The geometric interpretation is that, for the spatial matrices M(t_n), the eigenvalues with negative real parts are assumed to be contained in a left-half-plane ellipse. Let us denote

p_m(a, Δt) = 1 + Δt a + (Δt a)²/2! + · · · + (Δt a)^m/m!.

To examine whether numerically generated patterns truly obey the original models, one needs to verify that the negative modes (a < 0) are diminished and the positive modes (a > 0) grow accordingly. That is, when negative modes occur, |p_m(a, Δt)| is less than one, and it is greater than one when the spatial matrix M(t_n) happens to have positive modes. From p_m(a, Δt), we find that when a > 0, p_m(a, Δt) is definitely greater than unity; hence the stability condition is automatically satisfied. When a < 0, p_m(a, Δt) can be less than unity, and it is necessary to limit the time steps to ensure that the stability condition |p_m(a, Δt)| < 1 is preserved. Besides, the negative modes should be diminished in an effective way so that the computational cost can be reduced. Under these considerations, we derive the optimal time step, which not only diminishes the decay modes efficiently but also guarantees the stability conditions. Before moving on to the theorems showing how we obtain the optimal time steps for the m-stage Runge-Kutta method, let us recall the Gerschgorin theorem [19]. It says that for matrices with real coefficients, the distribution of the real parts of the eigenvalues can be estimated simply by the discs centered at the diagonal elements. So we can assume that the range of negative real parts of the eigenvalues of the discretized spatial matrices is known. Here, only the negative real eigenvalues are needed, as they are the only ones responsible for the stability restriction on the step sizes of explicit schemes. We are now ready to give the relevant theorems for constructing optimal step sizes by effectively damping the negative modes.
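As a sketch of how the Gerschgorin discs yield the interval [−a_max, −a_min] used below (the function name and the row-disc convention are our choices for illustration):

```python
import numpy as np

def negative_real_eig_range(M):
    """Estimate [-a_max, -a_min] for the negative real parts of the eigenvalues
    of a real matrix M via Gerschgorin discs: every eigenvalue lies in some disc
    centred at M[i,i] with radius sum_{k != i} |M[i,k]|."""
    d = np.diag(M)
    radius = np.abs(M).sum(axis=1) - np.abs(d)
    lo, hi = (d - radius).min(), (d + radius).max()  # interval containing all real parts
    a_max = max(-lo, 0.0)   # magnitude of the most negative possible real part
    a_min = max(-hi, 0.0)   # magnitude of the least negative (0 if discs reach the axis)
    return a_max, a_min
```

For a discretized Laplacian the discs are cheap to form, so this estimate costs far less per step than a power/inverse-power iteration.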
Theorem 2.3. For a < 0 and m even, p_m(a, Δt) > 0 and p_m(a, Δt) is concave up for all Δt > 0.
Proof. For m = 2, p_2(a, Δt) is a second-degree polynomial in Δt with a positive leading coefficient, so the proof is straightforward. Note that p_2(a, Δt) > 0.

Assume p_{m−2}(a, Δt) > 0 for even m − 2. For m even and m > 2, we have

∂²p_m(a, Δt)/∂Δt² = a² p_{m−2}(a, Δt) > 0.

This leads to the conclusion by induction.
Let us denote g_m(Δt) = p_m(−a_max, Δt) − p_m(−a_min, Δt). The following theorem states that the optimal time step is unique for m even; the case of m odd is treated in Theorem 2.5.

Proof. Write

g_m(Δt) = (a_min − a_max) Δt q_m(Δt).    (2.17)

It follows from (2.17) and the fact that a_max ≠ a_min that, to find the zeros of g_m(Δt), it is enough to solve q_m(Δt) = 0. Since q_m(0) = 1 and q_m(Δt) < 0 as Δt tends to infinity, positive real solutions of g_m(Δt) = 0 exist by the Intermediate Value Theorem. Moreover, q_m(Δt) is strictly decreasing (Figures 1 and 2). Therefore, the solution is unique.
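The root-finding described by these theorems can be sketched with a simple bisection. Following the criteria as stated (even m: p_m(−a_max, Δt) = p_m(−a_min, Δt); odd m: p_m(−a_max, Δt) = 0), with an illustrative bracketing heuristic and tolerance:

```python
from math import factorial

def p_m(a, dt, m):
    """Stability polynomial p_m(a, dt) = sum_{i=0}^m (a*dt)^i / i!."""
    return sum((a * dt) ** i / factorial(i) for i in range(m + 1))

def optimal_dt(a_max, a_min, m, tol=1e-12):
    """Optimal step size for eigenvalue real parts in [-a_max, -a_min]
    (a_max > a_min > 0 assumed), found by bisection:
      even m: solve p_m(-a_max, dt) = p_m(-a_min, dt)   (Theorem 2.6)
      odd  m: solve p_m(-a_max, dt) = 0                 (Theorem 2.5)"""
    if m % 2 == 0:
        f = lambda dt: p_m(-a_max, dt, m) - p_m(-a_min, dt, m)
    else:
        f = lambda dt: p_m(-a_max, dt, m)
    lo, hi = 1e-12, 1.0 / a_max
    while (f(lo) > 0) == (f(hi) > 0):   # expand until a sign change brackets the root
        hi *= 2.0
    while hi - lo > tol:                # plain bisection on the bracket
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if (f(mid) > 0) == (f(lo) > 0) else (lo, mid)
    return 0.5 * (lo + hi)
```

For m = 2 the even-m criterion reduces to a linear equation with root Δt = 2/(a_max + a_min), matching the first-order equation of the two-stage case mentioned above.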

Time-Dependent ODEs Systems
In the previous section, we derived simpler forms of the amplification factors as well as theorems for setting optimal time steps for constant-coefficient ODE systems. The optimal time step was obtained under the assumption that the intermediate values of the Runge-Kutta methods are fixed at the current time. With stability in mind, we continue our study by searching for optimal time steps and relevant schemes when the ODE system and the Runge-Kutta methods are time dependent. The general form of the time-dependent system is

du/dt = M(t) u + b(t).    (2.18)

Derivation of Adaptive Optimal Time Step for the Three-Stage Runge-Kutta Method
We start with the three-stage Runge-Kutta method. As before, we first set the reaction vector to zero.

2.19
Here, M_n = M(t_n). Expand M_n^{α_i} = M(t_n + α_i Δt) at t_n by Taylor expansion, where i = 2, 3. Therefore,

2.21
By (2.20) and the consistency condition, (2.19) can be simplified to

2.22
Now, we assume

2.23
Then, we can rewrite 2.22 as

2.24
This way, we arrive at the following desirable form through rearrangement:

2.25
Let where

2.27
Then the previous equation implies

2.28
The above result illustrates that the time-dependent three-stage Runge-Kutta method can be treated as a single-step method without losing the order of accuracy. In addition, it shares the same form as (2.5), with M replaced by J_n. Next, we derive a similar argument for the four-stage Runge-Kutta method.

Derivation of Adaptive Optimal Time Step for the Four-Stage Runge-Kutta Method
Applying the four-stage Runge-Kutta method to (2.18), we get

2.30
By consistency of the method,

2.32
We further assume

2.33
The previous equation hence can be written as

2.34
That is, where

2.36
We now turn to examine the more general situation in which the reaction term is not zero, that is, b ≠ 0. In this case, we have

2.37
Similar to our discussion in the previous section, the accumulation error for (2.37) is

2.38
We remark again that the amplification factor p_m(Δt_n J_n) = Σ_{i=0}^{m} (Δt_n J_n)^i / i! in (2.38) at the nth local step has the same form as (2.5), except that M is replaced by J_n. Therefore, we can apply the strategy demonstrated in Theorems 2.5 and 2.6 to look for the optimal time steps for the Runge-Kutta schemes. The quantities a_max and a_min appearing in the optimal step size formulas then represent, respectively, the greatest and smallest magnitudes of the negative real parts of the eigenvalues of the matrix J_n. Notice that, by the argument outlined in the previous proofs, stability is preserved with this step size.

Journal of Applied Mathematics
An Explicit Adaptive Optimal Time Stepping Scheme for Solving (2.18)

(1) Set up the initial condition u_0 and the initial step size Δt_0.
(2) Set J_n in accordance with the stage number m, and apply the m-stage Runge-Kutta scheme.
(3) Use the Gerschgorin theorem to estimate the interval [−a_max^n, −a_min^n] containing the negative real parts of the eigenvalues of J_n. Note that the time step for m = 4 can be obtained with mathematical software such as MAPLE from the equation p_4(−a_max^n, Δt) = p_4(−a_min^n, Δt).

The "Best" Optimal Step Size and the Time-Dependent Homogeneous Systems
We point out that, for those eigenvalues with negative real parts, one may also assume that they are enclosed in an ellipse with one of its vertices lying at the origin, that is, a_min^n = 0. In this way, only a_max^n is needed to form the optimal time step formulas, and the time steps Δt_{n+1} in step 4 become

(i) m = 2: Δt_{n+1} = 2/a_max^n,
(ii) m = 3: Δt_{n+1} = 2.512745327/a_max^n,
(iii) m = 4: Δt_{n+1} = 2.785293563/a_max^n,

where a_max^n is defined as before. We remark that when Δt_{n+1} is specified as above, |p_m(a, Δt)| is equal to one, so the stability condition does not seem to be satisfied. However, this hidden problem is avoidable by requiring the Gerschgorin theorem or other
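These constants can be checked directly: at Δt = c_m/a_max the amplification factor of the most negative mode sits exactly on the stability boundary |p_m| = 1. A quick verification sketch:

```python
from math import factorial

def p_m(x, m):
    """Stability polynomial: degree-m Taylor truncation of exp(x), x = a*dt."""
    return sum(x**i / factorial(i) for i in range(m + 1))

# "best" optimal step constants c_m when a_min^n = 0: dt = c_m / a_max
BEST_STEP_CONSTANT = {2: 2.0, 3: 2.512745327, 4: 2.785293563}

def best_step(a_max, m):
    """Largest stable step when the eigenvalue ellipse touches the origin;
    the most negative mode then satisfies |p_m(-a_max * dt)| = 1."""
    return BEST_STEP_CONSTANT[m] / a_max
```

For m = 2 this recovers the familiar explicit bound Δt = 2/a_max; for m = 3 and m = 4 the amplification factor lands at −1 and +1, respectively, at the listed constants.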

Testing Problems
Consider equation (3.1), where u = u(x, t) and f = (tx − 1)e^{−t} cos(x − t) + t e^{−t} sin(x − t). The three terms on the right-hand side represent diffusion, chemotaxis, and reaction, respectively. Assume the periodic boundary condition u(0, t) = u(2π, t), t ≥ 0, and the initial condition u(x, 0) = sin x. The analytic solution of (3.1) is u(x, t) = e^{−t} sin(x − t).

3.2
To apply the MOL, we use finite differences for the spatial discretization: the standard second-order difference for the diffusion term and the upwind scheme for the chemotaxis term, for stability reasons [17]. After the discretization, we apply the m-stage Runge-Kutta method with the adaptive optimal time-stepping scheme as the time integration strategy. Denoting h = 2π/N, x_i = ih, i = 0, 1, . . ., N, and the numerical solution by u_i, the resulting ODE system takes the form du/dt = M(t)u + b(t). Generally, the actual eigenvalues of the spatial matrix M(t) are not available. One may use the power method and the inverse power method [20] to estimate the eigenvalues of largest and smallest magnitude, but these methods are usually time consuming, and only the negative real parts of the eigenvalues are needed for our scheme. Therefore, we suggest using the Gerschgorin theorem to estimate the distribution of the eigenvalues. As we shall show immediately, it is a simple and inexpensive way to find the range of the real parts of the eigenvalues, and it is therefore adequate for implementing our scheme. In fact, if a formula for the distribution of the eigenvalues, or the actual eigenvalues themselves, can be found ahead of time, then it is not necessary to use the theorem to estimate the range at each time step. A simple version of the Gerschgorin theorem is stated as follows.
Theorem 3.1 (Gerschgorin). The union of all discs D_i = {z ∈ C : |z − a_ii| ≤ Σ_{k≠i} |a_ik|} contains all eigenvalues of the n × n matrix A = (a_ik).
The theorem states that, if A is a real matrix, the range of the real parts of the eigenvalues can be easily detected by checking the intersections of these discs with the real axis.
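Putting the pieces together, the adaptive scheme for a problem of this type can be sketched as below. This is an illustrative driver using the "best" step sizes (a_min^n = 0) and the one-step Taylor/Horner form of the m-stage update for a locally frozen linear system; `M_of_t` and `b_of_t` are hypothetical callables standing in for the discretized spatial matrix and reaction vector.

```python
import numpy as np

def adaptive_rk_solve(M_of_t, b_of_t, u0, t0, t_end, m=4):
    """Sketch of the explicit adaptive optimal time-stepping scheme:
    at each step, estimate a_max from the Gerschgorin discs of M(t_n),
    take the "best" optimal step dt = c_m / a_max, and advance with the
    m-stage (Horner-form) Runge-Kutta update for u' = M(t_n) u + b(t_n)."""
    c = {2: 2.0, 3: 2.512745327, 4: 2.785293563}[m]
    t, u = t0, u0.copy()
    while t < t_end:
        M, b = M_of_t(t), b_of_t(t)
        d = np.diag(M)
        radius = np.abs(M).sum(axis=1) - np.abs(d)
        a_max = max(-(d - radius).min(), 1e-8)   # guard against a_max near zero
        dt = min(c / a_max, t_end - t)
        v = u.copy()
        for i in range(m, 0, -1):                # Horner form of the m-stage update
            v = u + (dt / i) * (M @ v + b)
        u, t = v, t + dt
    return u
```

For the frozen system the inner loop reproduces u_{n+1} = p_m(ΔtM)u_n plus the matching b-terms, so one step costs m matrix-vector products, the count the iteration tables above are based on.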

3.8
Figure 3 shows the relative error between the numerical and the analytic solutions. Observe that the relative error of the standard two-stage Runge-Kutta method increases dramatically around t = 5.2, but the optimal time-stepping m-stage Runge-Kutta methods still converge to the analytic solution.
Results are reported in Tables 1, 2, and 3 and in Tables 4, 5, and 6. The computations were done on a Dell Latitude D620 2600. Tables 4-6 illustrate the computational efficiency of our method in terms of CPU time.

Further Discussion on the Efficiency of the Optimal RK Method
We continue our discussion of the computational efficiency of the optimal RK methods with the time steps mentioned in Section 2.2. We are interested in measuring the efficiency difference between RKF and our method. To make the point concrete, we take the optimal four-stage RK method as an example and start with the "best" optimal step size as in Section 2. Table 7 shows that even with the "medium" optimal time step (3.11), our numerical scheme still performs better than RKF 45. In addition, the computational efficiency of RKF 45 is approximately equal to that of the optimal four-stage RK when the time step is chosen as in (3.11).

Computer Simulations of Chemotaxis in Bacteria
Our numerical experiment is based on the biological experiments reported in 2.28 and the corresponding mathematical model given in 2.28. Interesting patterns develop when Escherichia coli or Salmonella typhimurium is inoculated on intermediates of the tricarboxylic acid (TCA) cycle. Bacteria form symmetrical rings of spots or stripes if they are inoculated in semisolid agar on the intermediates of the TCA cycle. When placed in a liquid medium and fed on intermediates of the TCA cycle, they secrete aspartate as a chemoattractant, and the cells then arrange themselves into high-density aggregates. In this case, aggregates form randomly as time evolves and fade within a time interval less than the bacterial generation time. Here, the attractant chemicals are excreted by the bacteria themselves instead of being added to the Petri dish artificially.
A nondimensionalized mathematical model was proposed in 2.28 for the liquid-medium experiments in 2.28. The model is a classical chemotaxis system, whose chemoattractant equation reads as

v_t = Δv + ω u²/(μ + u²),    (3.12)

where u, v, and ω represent the cell density, the concentration of the chemotactic substance, and the succinate, respectively. Simulations were done with zero-flux boundary conditions for both u and v. The initial condition is zero for v and a random perturbation about u_0 = 1 for u; the noise amplitude is of order 10^{−1} and was created by the C random number generator. The initial and boundary conditions for u and v and the noise amplitude are chosen to be the same as described in 2.28, except for the random numbers, which may not be identical due to the use of different software.
Numerical simulations are shown in Figure 4. The simulations capture the characteristics described in 2.28 qualitatively: the aggregates are first divided into small regions and then conjoin with others.

Conclusions and Discussions
We investigate an optimal adaptive time-stepping strategy for fully explicit m-stage Runge-Kutta methods to solve general reaction-diffusion-chemotaxis systems. It is designed especially for handling spatially nonhomogeneous pattern formation. The method is based on the m-stage Runge-Kutta methods and a simple estimation of the optimal step size at each iteration. With some optimization techniques, we efficiently discard the decay modes and only allow the growth modes to grow. In this way, we guarantee that the numerically generated patterns follow the primary model equations. The scheme is shown to be robust and accurate by theoretical analysis, numerical experiments on the testing problem, and real pattern simulations of bacterial chemotaxis. The numerical experiments also show that our method is superior to RKF in solving reaction-diffusion-chemotaxis systems.

Theorem 2.7. If m is even, g_m(Δt) = 0 has only one positive real solution.
Tables 1, 2, and 3 compare the number of iterations of our scheme (optimal m-stage RK) with those of two other methods: the standard m-stage Runge-Kutta method with fixed step size and the Runge-Kutta-Fehlberg (RKF) methods.

Figure 3: Relative error between numerical simulations and the analytic solution.

Figure 4: Numerical simulations of bacterial aggregates exposed to TCA-cycle intermediates (from left to right, top to bottom). The aggregates separate into small regions and then conjoin with other aggregates, becoming fewer but larger.
Figure 1: Graphs of p_3(a, Δt) for a = −6, a = −10, and a = −12.

Figure 2: Graphs of p_4(a, Δt) for a = −6, a = −10, and a = −12.

It follows by mathematical induction that ∂²p_m(a, Δt)/∂Δt² > 0 for all even m; hence p_m(a, Δt) is concave up with respect to Δt if m is even. Moreover, p_m(a, Δt) > 0 for all even m, because p_m(a, Δt) > exp(aΔt) > 0 for even m.

Corollary 2.4. For a < 0 and m odd, p_m(a, Δt) < 1 and p_m(a, Δt) is strictly decreasing for all Δt > 0.

Proof. For m = 1, the proof is trivial. For m > 1, p_m(a, 0) = 1 and

∂p_m(a, Δt)/∂Δt = a p_{m−1}(a, Δt) < 0,    (2.13)

since a < 0 and m − 1 is an even number. Therefore, p_m(a, Δt) < 1 for all odd m and Δt > 0, and p_m(a, Δt) is strictly decreasing with respect to Δt.

Theorem 2.5 (case of m odd). Assume that −a_max ≤ a ≤ −a_min, where a_max > a_min > 0 are two positive numbers. Then the solution of the min-max problem min_{Δt} max_{a} |p_m(a, Δt)| occurs at Δt = Δt_opt, where Δt_opt satisfies p_m(−a_max, Δt) = 0 and m is an odd number.

Proof. Note that p_m(a_1, Δt) > p_m(a_2, Δt) if a_1 > a_2 and a_1, a_2 < 0. For any fixed Δt, the maximum of |p_m(a, Δt)| is attained at p_m(−a_min, Δt) for Δt < Δt_opt and at |p_m(−a_max, Δt)| for Δt > Δt_opt. So the min-max occurs when p_m(−a_max, Δt) = 0. Since p_m(a, Δt) is continuous, p_m(a, 0) = 1 > 0, and p_m(a, Δt) < 0 as Δt → ∞, the solution exists by the Intermediate Value Theorem. Furthermore, by Corollary 2.4, p_m(a, Δt) is monotone decreasing with respect to Δt, so the solution is unique.

Theorem 2.6 (case of m even). Assume that −a_max ≤ a ≤ −a_min, where a_max > a_min > 0 are two positive numbers. Then the solution of the min-max problem min_{Δt} max_{a} |p_m(a, Δt)| occurs at Δt = Δt_opt, where Δt_opt satisfies p_m(−a_max, Δt) = p_m(−a_min, Δt) and m is an even number (m = 2, 4).

Proof. For each fixed a, p_m(a, Δt) is a quadratic-like polynomial function of Δt. For any fixed Δt, the maximum of p_m(a, Δt) is attained at p_m(−a_min, Δt) for Δt < Δt_opt and at p_m(−a_max, Δt) for Δt > Δt_opt. So the minimax occurs when p_m(−a_max, Δt) = p_m(−a_min, Δt).

When a_max^n is equal to or close to zero, simply choose a reasonable step size for Δt_{n+1}.

Table 1: Comparison of the number of iterations among the optimal two-stage RK, RKF 23, and the standard two-stage RK. The optimal two-stage RK method is counted as two function evaluations per iteration, while RKF 23 is counted as four per iteration. "Div" is the abbreviation for divergent.

Table 2: Comparison of the number of iterations among the optimal three-stage RK, RKF 34, and the standard three-stage RK. The optimal three-stage RK method is counted as three function evaluations per iteration, while RKF 34 is counted as five per iteration.

These tables basically indicate that, while the optimal time-stepping scheme always ensures convergence, the standard m-stage Runge-Kutta methods or other adaptive methods like RKF either diverge or converge more slowly, since they require a larger number of iterations. Similar results can also be illustrated by calculating the CPU time (Tables 4-6).

Table 3: Comparison of the number of iterations among the optimal four-stage RK, RKF 45, and the standard four-stage RK. The optimal four-stage RK method is counted as four function evaluations per iteration, while RKF 45 is counted as six per iteration.

Table 4: Comparison of CPU time among the optimal two-stage RK, RKF 23, and the standard two-stage RK with Δt = 0.001.

Table 5: Comparison of CPU time among the optimal three-stage RK, RKF 34, and the standard three-stage RK with Δt = 0.0005 and Δt = 0.001, respectively.

Table 6: Comparison of CPU time among the optimal four-stage RK, RKF 45, and the standard four-stage RK with Δt = 0.001.

Table 7: Comparison of CPU time between the optimal four-stage RK, with the average time step of the "best" and the "worst" cases, and RKF 45.