3.1. Linear SPPs
Consider the following linear singular perturbation initial value problem
$$
\begin{aligned}
x'(t) + A x(t) + B y(t) &= r(t), \qquad t \in [0,T],\\
\epsilon \frac{dy(t)}{dt} + C x(t) + D y(t) &= s(t), \qquad 0 < \epsilon \ll 1,\\
x(0) = x_0, \qquad y(0) &= y_0,
\end{aligned}
\tag{3.1}
$$
where $x_0$ and $y_0$ are the given initial values, $\epsilon$ is the singular perturbation parameter, and $r(t)\in\mathbb{R}^{n_1}$ and $s(t)\in\mathbb{R}^{n_2}$ are given functions. The constant matrices $A\in\mathbb{R}^{n_1\times n_1}$, $B\in\mathbb{R}^{n_1\times n_2}$, $C\in\mathbb{R}^{n_2\times n_1}$, $D\in\mathbb{R}^{n_2\times n_2}$ are split as $A=A_1-A_2$, $B=B_1-B_2$, $C=C_1-C_2$, $D=D_1-D_2$, respectively, and $x(t)\in\mathbb{R}^{n_1}$, $y(t)\in\mathbb{R}^{n_2}$ are the unknowns. The system (3.1) can then be written as
$$
\begin{aligned}
x'(t) + A_1 x(t) + B_1 y(t) &= A_2 x(t) + B_2 y(t) + r(t), \qquad t \in [0,T],\\
\epsilon \frac{dy(t)}{dt} + C_1 x(t) + D_1 y(t) &= C_2 x(t) + D_2 y(t) + s(t), \qquad 0 < \epsilon \ll 1,\\
x(0) = x_0, \qquad y(0) &= y_0.
\end{aligned}
\tag{3.2}
$$
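The choice of splitting determines the concrete iteration; one common choice is a Jacobi-type splitting that keeps the diagonal part on the implicit side. A minimal sketch, assuming NumPy and hypothetical $2\times 2$ blocks $A$ and $D$ (any matrices of the stated sizes would do):

```python
import numpy as np

# Hypothetical 2x2 blocks for illustration only.
A = np.array([[4.0, 1.0], [0.5, 3.0]])
D = np.array([[5.0, 0.2], [0.1, 6.0]])

# Jacobi-type splitting: keep the diagonal implicit (A1, D1) and move the
# off-diagonal remainder to the right-hand side (A2, D2), so that A = A1 - A2.
A1 = np.diag(np.diag(A))
A2 = A1 - A
D1 = np.diag(np.diag(D))
D2 = D1 - D

assert np.allclose(A, A1 - A2)
assert np.allclose(D, D1 - D2)
```

Gauss–Seidel-type and other splittings fit the same template; only the assignment of entries to the implicit and explicit parts changes.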
The continuous-time Waveform Relaxation algorithm for (3.1) is as follows:
$$
\begin{aligned}
\frac{dx^{(k+1)}(t)}{dt} + A_1 x^{(k+1)}(t) + B_1 y^{(k+1)}(t) &= A_2 x^{(k)}(t) + B_2 y^{(k)}(t) + r(t), \qquad t \in [0,T],\\
\frac{dy^{(k+1)}(t)}{dt} + \frac{C_1}{\epsilon} x^{(k+1)}(t) + \frac{D_1}{\epsilon} y^{(k+1)}(t) &= \frac{C_2}{\epsilon} x^{(k)}(t) + \frac{D_2}{\epsilon} y^{(k)}(t) + \frac{1}{\epsilon} s(t), \qquad 0 < \epsilon \ll 1,\\
x^{(k+1)}(0) = x_0, \qquad y^{(k+1)}(0) &= y_0, \qquad k = 0, 1, 2, \ldots.
\end{aligned}
\tag{3.3}
$$
The matrix form of (3.3) reads
$$
\frac{d}{dt}\begin{pmatrix} x^{(k+1)}(t)\\ y^{(k+1)}(t)\end{pmatrix}
+\begin{pmatrix} A_1 & B_1\\ \frac{C_1}{\epsilon} & \frac{D_1}{\epsilon}\end{pmatrix}
\begin{pmatrix} x^{(k+1)}(t)\\ y^{(k+1)}(t)\end{pmatrix}
=\begin{pmatrix} A_2 & B_2\\ \frac{C_2}{\epsilon} & \frac{D_2}{\epsilon}\end{pmatrix}
\begin{pmatrix} x^{(k)}(t)\\ y^{(k)}(t)\end{pmatrix}
+\begin{pmatrix} r(t)\\ \frac{s(t)}{\epsilon}\end{pmatrix}.
\tag{3.4}
$$
Solving (3.4) by variation of constants, we derive
$$
\begin{aligned}
\begin{pmatrix} x^{(k+1)}(t)\\ y^{(k+1)}(t)\end{pmatrix}
={}&\exp\!\left(-\begin{pmatrix} A_1 & B_1\\ \frac{C_1}{\epsilon} & \frac{D_1}{\epsilon}\end{pmatrix}t\right)
\begin{pmatrix} x^{(k+1)}(0)\\ y^{(k+1)}(0)\end{pmatrix}\\
&+\int_0^t \exp\!\left(\begin{pmatrix} A_1 & B_1\\ \frac{C_1}{\epsilon} & \frac{D_1}{\epsilon}\end{pmatrix}(s-t)\right)
\left(\begin{pmatrix} A_2 & B_2\\ \frac{C_2}{\epsilon} & \frac{D_2}{\epsilon}\end{pmatrix}
\begin{pmatrix} x^{(k)}(s)\\ y^{(k)}(s)\end{pmatrix}
+\begin{pmatrix} r(s)\\ \frac{s(s)}{\epsilon}\end{pmatrix}\right)ds.
\end{aligned}
\tag{3.5}
$$
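As a numerical sanity check of (3.5), consider hypothetical scalar blocks ($n_1 = n_2 = 1$, so the coefficient matrices are $2\times 2$), a previous iterate frozen at a constant, and constant forcing — all toy choices. The integral in (3.5) can then be evaluated by quadrature and compared with the closed-form solution of the same frozen-iterate linear ODE:

```python
import numpy as np

# Hypothetical scalar blocks (n1 = n2 = 1), so M1, M2 below are 2x2 matrices.
eps = 0.5
A1, B1, C1, D1 = 3.0, 0.5, 0.2, 2.0    # implicit parts of the splitting
A2, B2, C2, D2 = 0.3, 0.1, 0.1, 0.4    # explicit parts of the splitting
M1 = np.array([[A1, B1], [C1 / eps, D1 / eps]])
M2 = np.array([[A2, B2], [C2 / eps, D2 / eps]])

def expm(M, t):
    """exp(M t) via eigendecomposition; assumes M is diagonalizable."""
    w, V = np.linalg.eig(M)
    return (V * np.exp(w * t)) @ np.linalg.inv(V)

z0 = np.array([1.0, -1.0])             # initial values (x0, y0)
wv = np.array([0.5, 2.0])              # previous iterate, frozen at a constant
q = np.array([1.0, 3.0 / eps])         # forcing (r, s/eps), also constant

t_end, n = 0.5, 2000
s_grid = np.linspace(0.0, t_end, n + 1)
h = t_end / n

# Composite trapezoidal quadrature of the integral in (3.5).
vals = np.array([expm(M1, s - t_end) @ (M2 @ wv + q) for s in s_grid])
z_wr = expm(M1, -t_end) @ z0 + h * (0.5 * vals[0] + vals[1:-1].sum(axis=0) + 0.5 * vals[-1])

# Closed form of the frozen-iterate ODE z' + M1 z = M2 w + q (constant forcing):
# z(t) = e^{-M1 t} z0 + M1^{-1} (I - e^{-M1 t}) (M2 w + q).
rhs = M2 @ wv + q
z_exact = expm(M1, -t_end) @ z0 + np.linalg.solve(M1, (np.eye(2) - expm(M1, -t_end)) @ rhs)

assert np.allclose(z_wr, z_exact, atol=1e-5)
```

The agreement confirms that (3.5) is exactly the variation-of-constants solution of (3.4) with the previous waveform held fixed.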

Denote $\varepsilon_x^{k}(t)=x^{(k)}(t)-x(t)$ and $\varepsilon_y^{k}(t)=y^{(k)}(t)-y(t)$, where $(x(t),y(t))$ is the exact solution of (3.1). From (3.2) and (3.5), we obtain
$$
\begin{pmatrix}\varepsilon_x^{k+1}(t)\\ \varepsilon_y^{k+1}(t)\end{pmatrix}
=\int_0^t\exp\!\left(\begin{pmatrix}A_1 & B_1\\ \frac{C_1}{\epsilon} & \frac{D_1}{\epsilon}\end{pmatrix}(s-t)\right)
\begin{pmatrix}A_2 & B_2\\ \frac{C_2}{\epsilon} & \frac{D_2}{\epsilon}\end{pmatrix}
\begin{pmatrix}\varepsilon_x^{k}(s)\\ \varepsilon_y^{k}(s)\end{pmatrix}ds;
\tag{3.6}
$$
(3.6) can then be written as
$$
\begin{pmatrix}\varepsilon_x^{k+1}(t)\\ \varepsilon_y^{k+1}(t)\end{pmatrix}
=\left(\mathcal{R}\begin{pmatrix}\varepsilon_x^{k}\\ \varepsilon_y^{k}\end{pmatrix}\right)(t),
\tag{3.7}
$$
where $\mathcal{R}$ is a Volterra convolution operator with kernel function
$$
\kappa(t)=\exp\!\left(-\begin{pmatrix}A_1 & B_1\\ \frac{C_1}{\epsilon} & \frac{D_1}{\epsilon}\end{pmatrix}t\right)
\begin{pmatrix}A_2 & B_2\\ \frac{C_2}{\epsilon} & \frac{D_2}{\epsilon}\end{pmatrix},
\qquad
(\mathcal{R}y)(t)=(\kappa*y)(t)=\int_0^t\kappa(t-s)\,y(s)\,ds;
\tag{3.8}
$$
$\mathcal{R}$ is the waveform relaxation operator.
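In a scalar instance (the constants $a$ and $b$ below are illustrative stand-ins for the matrix data, so that $\kappa(t)=e^{-at}b$), the action of $\mathcal{R}$ can be discretized directly and checked against a closed form for a constant input:

```python
import numpy as np

# Scalar instance of (3.8): kappa(t) = exp(-a t) * b; a, b are toy values.
a, b = 2.0, 0.5

def apply_R(y, t, n=2000):
    """Discretize (R y)(t) = int_0^t kappa(t - s) y(s) ds with the
    composite trapezoidal rule."""
    s = np.linspace(0.0, t, n + 1)
    integrand = np.exp(-a * (t - s)) * b * y(s)
    h = t / n
    return h * (0.5 * integrand[0] + integrand[1:-1].sum() + 0.5 * integrand[-1])

t = 1.0
# For y = 1 the convolution has the closed form (b/a) * (1 - exp(-a t)).
exact = (b / a) * (1.0 - np.exp(-a * t))
assert abs(apply_R(lambda s: np.ones_like(s), t) - exact) < 1e-6
```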

Theorem 3.1.
Let the waveform relaxation operator $\mathcal{R}$ be defined on $C([0,T],\mathbb{R}^{n_1+n_2})$. If the kernel function $\kappa(t)$ is continuous on $[0,T]$ and satisfies $\|\kappa\|_T\le M$ for some constant $M$, then the error functions $(\varepsilon_x^{k}(t),\varepsilon_y^{k}(t))^{T}$ defined by (3.6) satisfy
$$
\begin{pmatrix}\|\varepsilon_x^{k}(t)\|\\ \|\varepsilon_y^{k}(t)\|\end{pmatrix}
\le \frac{T^{k}M^{k}}{k!}
\begin{pmatrix}\max_{0\le s\le T}\|\varepsilon_x^{0}(s)\|\\ \max_{0\le s\le T}\|\varepsilon_y^{0}(s)\|\end{pmatrix}.
\tag{3.9}
$$
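The bound (3.9) is superlinear in $k$: the factor $T^{k}M^{k}/k!$ may grow while $k < TM$ but then collapses to zero for any fixed $T$ and $M$. A quick check with arbitrary sample constants:

```python
import math

# Arbitrary sample constants; the conclusion holds for any T, M > 0.
T, M = 2.0, 3.0
bounds = [(T * M) ** k / math.factorial(k) for k in range(51)]

assert bounds[6] > bounds[0]    # transient growth while k < T*M
assert bounds[20] < bounds[10]  # decay once k exceeds T*M
assert bounds[50] < 1e-20       # eventual superlinear collapse
```

This is why waveform relaxation converges on any bounded window $[0,T]$, with no smallness condition on $M$.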

Proof.
Taking norms on both sides of (3.6) gives
$$
\begin{pmatrix}\|\varepsilon_x^{k+1}(t)\|\\ \|\varepsilon_y^{k+1}(t)\|\end{pmatrix}
\le \int_0^t \left\|\exp\!\left(\begin{pmatrix}A_1 & B_1\\ \frac{C_1}{\epsilon} & \frac{D_1}{\epsilon}\end{pmatrix}(s-t)\right)
\begin{pmatrix}A_2 & B_2\\ \frac{C_2}{\epsilon} & \frac{D_2}{\epsilon}\end{pmatrix}\right\|
\begin{pmatrix}\|\varepsilon_x^{k}(s)\|\\ \|\varepsilon_y^{k}(s)\|\end{pmatrix}ds
=\int_0^t \|\kappa(t-s)\|
\begin{pmatrix}\|\varepsilon_x^{k}(s)\|\\ \|\varepsilon_y^{k}(s)\|\end{pmatrix}ds.
\tag{3.10}
$$
Applying (3.10) once, together with the condition $\|\kappa\|_T\le M$, we derive
$$
\begin{pmatrix}\|\varepsilon_x^{1}(t)\|\\ \|\varepsilon_y^{1}(t)\|\end{pmatrix}
\le M t\begin{pmatrix}\max_{0\le s\le T}\|\varepsilon_x^{0}(s)\|\\ \max_{0\le s\le T}\|\varepsilon_y^{0}(s)\|\end{pmatrix}.
\tag{3.11}
$$
By induction, each further application of (3.10) integrates the factor $s^{k-1}/(k-1)!$ to $t^{k}/k!$, so
$$
\begin{pmatrix}\|\varepsilon_x^{k}(t)\|\\ \|\varepsilon_y^{k}(t)\|\end{pmatrix}
\le \frac{M^{k}t^{k}}{k!}\begin{pmatrix}\max_{0\le s\le T}\|\varepsilon_x^{0}(s)\|\\ \max_{0\le s\le T}\|\varepsilon_y^{0}(s)\|\end{pmatrix}
\le \frac{T^{k}M^{k}}{k!}\begin{pmatrix}\max_{0\le s\le T}\|\varepsilon_x^{0}(s)\|\\ \max_{0\le s\le T}\|\varepsilon_y^{0}(s)\|\end{pmatrix};
\tag{3.12}
$$
thus $(\|\varepsilon_x^{k}(t)\|,\|\varepsilon_y^{k}(t)\|)^{T}\to 0$ as $k\to\infty$, which completes the proof.

3.2. Nonlinear SPPs
Consider the following nonlinear singular perturbation initial value problem:
$$
\begin{aligned}
x'(t) &= f(x(t),y(t),t), \qquad t\in[0,T],\\
\epsilon\frac{dy(t)}{dt} &= g(x(t),y(t),t), \qquad 0<\epsilon\ll 1,\\
x(0) &= x_0, \qquad y(0)=y_0,
\end{aligned}
\tag{3.13}
$$
where $x_0$ and $y_0$ are given initial values, $\epsilon$ is the singular perturbation parameter, and $f:\mathbb{R}^{n_1}\times\mathbb{R}^{n_2}\times[0,T]\to\mathbb{R}^{n_1}$ and $g:\mathbb{R}^{n_1}\times\mathbb{R}^{n_2}\times[0,T]\to\mathbb{R}^{n_2}$ are given continuous mappings.

The continuous-time waveform relaxation algorithm for (3.13) is
$$
\begin{aligned}
\frac{dx^{(k+1)}(t)}{dt} &= F\bigl(x^{(k+1)}(t),\,x^{(k)}(t),\,y^{(k+1)}(t),\,y^{(k)}(t),\,t\bigr), \qquad t\in[0,T],\\
\frac{dy^{(k+1)}(t)}{dt} &= \frac{1}{\epsilon}\,G\bigl(x^{(k+1)}(t),\,x^{(k)}(t),\,y^{(k+1)}(t),\,y^{(k)}(t),\,t\bigr), \qquad 0<\epsilon\ll 1,\\
x^{(k+1)}(0) &= x_0,\qquad y^{(k+1)}(0)=y_0,\qquad k=0,1,2,\ldots,
\end{aligned}
\tag{3.14}
$$
where the splitting functions $F(u_1,u_2,u_3,u_4,t)$ and $G(u_1,u_2,u_3,u_4,t)$ determine the type of the waveform relaxation algorithm. Writing (3.14) at the previous iteration index gives
$$
\begin{aligned}
\frac{dx^{(k)}(t)}{dt} &= F\bigl(x^{(k)}(t),\,x^{(k-1)}(t),\,y^{(k)}(t),\,y^{(k-1)}(t),\,t\bigr), \qquad t\in[0,T],\\
\frac{dy^{(k)}(t)}{dt} &= \frac{1}{\epsilon}\,G\bigl(x^{(k)}(t),\,x^{(k-1)}(t),\,y^{(k)}(t),\,y^{(k-1)}(t),\,t\bigr), \qquad 0<\epsilon\ll 1,\\
x^{(k)}(0) &= x_0,\qquad y^{(k)}(0)=y_0,\qquad k=1,2,\ldots.
\end{aligned}
\tag{3.15}
$$
Denote
$$
\varepsilon_x^{k+1}(t)=x^{(k+1)}(t)-x^{(k)}(t),\qquad
\varepsilon_y^{k+1}(t)=y^{(k+1)}(t)-y^{(k)}(t).
\tag{3.16}
$$
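A minimal numerical sketch of the iteration (3.14), for a hypothetical scalar problem $x'=-x+\sin(y)$, $\epsilon y'=x-y$ with a Jacobi-type splitting (each sweep freezes the other component's waveform at the previous iterate); forward Euler on a fixed grid is used only to keep the sketch short. The successive differences (3.16) shrink across sweeps, as Theorem 3.2 predicts:

```python
import numpy as np

# Hypothetical scalar SPP: x' = -x + sin(y), eps * y' = x - y on [0, T].
eps, T, n = 0.5, 1.0, 2000
h = T / n
x0, y0 = 1.0, 0.0

def sweep(x_prev, y_prev):
    """One WR sweep: integrate each equation with the other waveform frozen."""
    x = np.empty(n + 1); y = np.empty(n + 1)
    x[0], y[0] = x0, y0
    for i in range(n):
        x[i + 1] = x[i] + h * (-x[i] + np.sin(y_prev[i]))
        y[i + 1] = y[i] + (h / eps) * (x_prev[i] - y[i])
    return x, y

# Start from constant waveforms equal to the initial values.
x_k = np.full(n + 1, x0); y_k = np.full(n + 1, y0)
diffs = []
for _ in range(20):
    x_next, y_next = sweep(x_k, y_k)
    diffs.append(max(np.abs(x_next - x_k).max(), np.abs(y_next - y_k).max()))
    x_k, y_k = x_next, y_next

# Successive waveform differences (3.16) decay superlinearly.
assert diffs[-1] < 1e-8 and diffs[-1] < diffs[0]
```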

Theorem 3.2.
Assume that the Jacobian matrices $\partial F/\partial u_i$ and $\partial G/\partial u_i$ ($i=1,2,3,4$) of the splitting functions $F(u_1,u_2,u_3,u_4,t)$ and $G(u_1,u_2,u_3,u_4,t)$ are continuous. Then the continuous-time waveform relaxation algorithm (3.14) is convergent.

Proof.
Subtracting (3.15) from (3.14) and applying the mean value theorem (so that each partial derivative below is evaluated at suitable intermediate arguments), we have
$$
\begin{aligned}
\frac{d\varepsilon_x^{k+1}(t)}{dt} &= \frac{\partial F}{\partial u_1}\varepsilon_x^{k+1}(t)+\frac{\partial F}{\partial u_2}\varepsilon_x^{k}(t)+\frac{\partial F}{\partial u_3}\varepsilon_y^{k+1}(t)+\frac{\partial F}{\partial u_4}\varepsilon_y^{k}(t),\\
\frac{d\varepsilon_y^{k+1}(t)}{dt} &= \frac{1}{\epsilon}\left(\frac{\partial G}{\partial u_1}\varepsilon_x^{k+1}(t)+\frac{\partial G}{\partial u_2}\varepsilon_x^{k}(t)+\frac{\partial G}{\partial u_3}\varepsilon_y^{k+1}(t)+\frac{\partial G}{\partial u_4}\varepsilon_y^{k}(t)\right);
\end{aligned}
\tag{3.17}
$$
the matrix form of (3.17) reads
$$
\frac{d}{dt}\begin{pmatrix}\varepsilon_x^{k+1}(t)\\ \varepsilon_y^{k+1}(t)\end{pmatrix}
=\begin{pmatrix}\frac{\partial F}{\partial u_1} & \frac{\partial F}{\partial u_3}\\ \frac{1}{\epsilon}\frac{\partial G}{\partial u_1} & \frac{1}{\epsilon}\frac{\partial G}{\partial u_3}\end{pmatrix}
\begin{pmatrix}\varepsilon_x^{k+1}(t)\\ \varepsilon_y^{k+1}(t)\end{pmatrix}
+\begin{pmatrix}\frac{\partial F}{\partial u_2} & \frac{\partial F}{\partial u_4}\\ \frac{1}{\epsilon}\frac{\partial G}{\partial u_2} & \frac{1}{\epsilon}\frac{\partial G}{\partial u_4}\end{pmatrix}
\begin{pmatrix}\varepsilon_x^{k}(t)\\ \varepsilon_y^{k}(t)\end{pmatrix}.
\tag{3.18}
$$
Denote
$$
\varepsilon^{k}(t)=\begin{pmatrix}\varepsilon_x^{k}(t)\\ \varepsilon_y^{k}(t)\end{pmatrix},\qquad
A(t)=\begin{pmatrix}\frac{\partial F}{\partial u_1} & \frac{\partial F}{\partial u_3}\\ \frac{1}{\epsilon}\frac{\partial G}{\partial u_1} & \frac{1}{\epsilon}\frac{\partial G}{\partial u_3}\end{pmatrix},\qquad
B(t)=\begin{pmatrix}\frac{\partial F}{\partial u_2} & \frac{\partial F}{\partial u_4}\\ \frac{1}{\epsilon}\frac{\partial G}{\partial u_2} & \frac{1}{\epsilon}\frac{\partial G}{\partial u_4}\end{pmatrix}.
\tag{3.19}
$$
Then, we can derive
$$
\frac{d\varepsilon^{k+1}(t)}{dt}=A(t)\,\varepsilon^{k+1}(t)+B(t)\,\varepsilon^{k}(t).
\tag{3.20}
$$
Let $\phi(t)$ be a fundamental matrix of the homogeneous system, i.e.
$$
\frac{d\phi(t)}{dt}=A(t)\,\phi(t);
\tag{3.21}
$$
since all iterates share the same initial values, $\varepsilon^{k+1}(0)=0$, and the solution of (3.20) can be written by variation of constants as
$$
\varepsilon^{k+1}(t)=\phi(t)\int_0^t\phi^{-1}(s)\,B(s)\,\varepsilon^{k}(s)\,ds.
\tag{3.22}
$$
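In the scalar case $\phi(t)=e^{at}$ and $\phi(t)\phi^{-1}(s)=e^{a(t-s)}$, so (3.22) can be checked against a direct numerical integration of $\varepsilon'=a\varepsilon+b\,w(t)$, $\varepsilon(0)=0$, for toy choices of $a$, $b$, and the previous-iterate waveform $w$:

```python
import numpy as np

# Scalar check of (3.22): eps'(t) = a*eps(t) + b*w(t), eps(0) = 0, with
# phi(t) = exp(a t).  The constants and w are toy choices.
a, b = -1.5, 2.0
w = lambda s: np.cos(s)

t_end, n = 1.0, 20000
s = np.linspace(0.0, t_end, n + 1)
h = t_end / n

# Quadrature of the variation-of-constants formula (3.22).
integrand = np.exp(-a * s) * b * w(s)
integral = h * (0.5 * integrand[0] + integrand[1:-1].sum() + 0.5 * integrand[-1])
eps_formula = np.exp(a * t_end) * integral

# Forward Euler integration of the same ODE for comparison.
e = 0.0
for i in range(n):
    e += h * (a * e + b * w(s[i]))

assert abs(eps_formula - e) < 1e-3
```

The two values agree up to the Euler discretization error, confirming the formula.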
Taking norms on both sides of (3.22) and multiplying by $e^{-\lambda t}$ gives
$$
e^{-\lambda t}\|\varepsilon^{k+1}(t)\|
=e^{-\lambda t}\left\|\int_0^t\phi(t)\phi^{-1}(s)B(s)\,\varepsilon^{k}(s)\,ds\right\|,
\tag{3.23}
$$
and hence
$$
\max_{0\le t\le T}\left\{e^{-\lambda t}\|\varepsilon^{k+1}(t)\|\right\}
=\max_{0\le t\le T}\left\{e^{-\lambda t}\left\|\int_0^t\phi(t)\phi^{-1}(s)B(s)\,\varepsilon^{k}(s)\,ds\right\|\right\}.
\tag{3.24}
$$
With the weighted norm $\|\varepsilon^{k}\|_{\lambda,T}=\max_{0\le t\le T}\{e^{-\lambda t}\|\varepsilon^{k}(t)\|\}$, it follows that
$$
\begin{aligned}
\|\varepsilon^{k+1}\|_{\lambda,T}
&\le\max_{0\le t\le T}\left\{e^{-\lambda t}\left\|\int_0^t\phi(t)\phi^{-1}(s)B(s)\,e^{\lambda s}e^{-\lambda s}\varepsilon^{k}(s)\,ds\right\|\right\}\\
&\le\max_{0\le t\le T}\left\{e^{-\lambda t}\int_0^t\left\|\phi(t)\phi^{-1}(s)B(s)\right\|e^{\lambda s}e^{-\lambda s}\|\varepsilon^{k}(s)\|\,ds\right\}\\
&\le M\,\|\varepsilon^{k}\|_{\lambda,T}\,\max_{0\le t\le T}\left\{e^{-\lambda t}\int_0^t e^{\lambda s}\,ds\right\}
\le\frac{M}{\lambda}\|\varepsilon^{k}\|_{\lambda,T},
\end{aligned}
\tag{3.25}
$$
where $M=\max_{0\le s\le t\le T}\|\phi(t)\phi^{-1}(s)B(s)\|$ and the last step uses $e^{-\lambda t}\int_0^t e^{\lambda s}\,ds=(1-e^{-\lambda t})/\lambda\le 1/\lambda$. Choosing $\lambda$ large enough that $M/\lambda<1$ makes the map a contraction in $\|\cdot\|_{\lambda,T}$, so the iterative error sequence $\{\varepsilon^{k}(t)\}$ converges to zero. This completes the proof.
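The contraction can be observed numerically in a scalar analogue of (3.22) with $\phi(t)=e^{at}$ and constant $b$ (all constants illustrative): taking $\lambda=2M$, one application of the iteration map shrinks the weighted norm by at least the factor $M/\lambda=1/2$:

```python
import numpy as np

# Scalar analogue of (3.22): eps_{k+1}(t) = int_0^t e^{a(t-s)} * b * eps_k(s) ds.
a, b, T, n = 0.5, 3.0, 1.0, 2000
t = np.linspace(0.0, T, n + 1)
h = T / n

M = np.exp(a * T) * abs(b)   # max of |phi(t) phi(s)^{-1} b| over 0 <= s <= t <= T
lam = 2.0 * M                # choose lambda so that M / lambda = 1/2 < 1

def weighted_norm(e):
    """The lambda-weighted sup norm used in (3.25)."""
    return np.max(np.exp(-lam * t) * np.abs(e))

def iterate(e):
    """Apply the scalar iteration map by composite trapezoidal quadrature."""
    out = np.zeros(n + 1)
    for i in range(1, n + 1):
        integrand = np.exp(a * (t[i] - t[: i + 1])) * b * e[: i + 1]
        out[i] = h * (0.5 * integrand[0] + integrand[1:-1].sum() + 0.5 * integrand[-1])
    return out

e0 = np.ones(n + 1)
e1 = iterate(e0)
# One sweep contracts the weighted norm by at least M / lambda = 1/2.
assert weighted_norm(e1) <= (M / lam) * weighted_norm(e0) + 1e-8
```

Note the trade-off: a large $\lambda$ guarantees contraction of $\|\cdot\|_{\lambda,T}$, but the weight $e^{-\lambda t}$ means the unweighted error near $t=T$ may still need several sweeps to become small.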