The random walk is used as a model expressing fairness and efficiency in various financial phenomena. The random walk belongs to the class of unit root processes, which are nonstationary. Due to this nonstationarity, the least squares estimator (LSE) of the random walk does not satisfy asymptotic normality. However, it is well known that the sequence of partial sum processes of a random walk converges weakly to standard Brownian motion; this result is the so-called functional central limit theorem (FCLT). From the FCLT we can derive the limiting distribution of the LSE of a unit root process. The FCLT has been extended to unit root processes with locally stationary process (LSP) innovations; this model combines two different types of nonstationarity. Since LSP innovations have a time-varying spectral structure, they are suitable for describing empirical financial time series data. Here we derive the limiting distributions of the LSE for unit root, near unit root, and general integrated processes with LSP innovations. The testing problem between unit root and near unit root is also discussed. Furthermore, we suggest two kinds of extensions of the LSE, which include various well-known estimators as special cases.
1. Introduction
Since the random walk is a martingale sequence, the best predictor of the next value is the current value. In this sense, the random walk is used in economics as a model expressing fairness and efficiency of various financial phenomena. Furthermore, because the random walk is a unit root process, we can recover an independent sequence by taking the first difference. However, if the original sequence does not contain a unit root, differencing loses information. Therefore, testing for the existence of a unit root in the original sequence is important.
In this section, we review fundamental asymptotic results for unit root processes. Let $\{\varepsilon_j\}$ be i.i.d. $(0,\sigma^2)$ random variables, where $\sigma^2>0$, and define the partial sum
$$ r_j = r_{j-1} + \varepsilon_j = \sum_{i=1}^{j}\varepsilon_i \quad (r_0 = 0;\ j=1,\dots,T), $$
which is the so-called random walk process. The random walk corresponds to the first-order autoregressive (AR(1)) model with unit coefficient. Therefore, the random walk belongs to the unit root (I(1)) processes, a class of nonstationary processes. Let $\mathcal{C}=\mathcal{C}[0,1]$ be the space of all real-valued continuous functions defined on $[0,1]$. For the random walk process, we construct the sequence of partial sum processes $\{R_T\}$ in $\mathcal{C}$ as
$$ R_T(t) = \frac{1}{\sigma\sqrt{T}}\,r_j + T\Big(t-\frac{j}{T}\Big)\frac{1}{\sigma\sqrt{T}}\,\varepsilon_j, \quad \frac{j-1}{T}\le t\le\frac{j}{T}. $$
It is well known that the partial sum process $\{R_T\}$ converges weakly to a standard Brownian motion on $[0,1]$, namely,
$$ \mathcal{L}(R_T) \longrightarrow \mathcal{L}(W) \quad \text{as } T\to\infty, $$
where $\mathcal{L}(\cdot)$ denotes the distribution law of the corresponding random element. This result is the so-called functional central limit theorem (FCLT) (see Billingsley [1]).
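As a quick numerical illustration of this convergence, the following Python sketch (sample sizes, seed, and Gaussian innovations are illustrative choices, not taken from the paper) checks that $R_T(1)=r_T/(\sigma\sqrt{T})$ behaves like $W(1)\sim N(0,1)$:

```python
import numpy as np

rng = np.random.default_rng(0)
T, reps, sigma = 500, 4000, 1.0

# R_T(1) = r_T / (sigma * sqrt(T)) for many independent random walks
eps = rng.normal(0.0, sigma, size=(reps, T))
RT1 = eps.sum(axis=1) / (sigma * np.sqrt(T))

# By the FCLT, R_T(1) is approximately distributed as W(1) ~ N(0, 1)
print(round(float(RT1.mean()), 2), round(float(RT1.var()), 2))
```

The empirical mean and variance should be close to 0 and 1, respectively.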
The FCLT can be extended to the unit root process whose innovation is a general linear process. We consider a sequence $\{\tilde R_T\}$ of stochastic processes in $\mathcal{C}$ defined by
$$ \tilde R_T(t) = \frac{1}{\sigma\sqrt{T}}\,\tilde r_j + T\Big(t-\frac{j}{T}\Big)\frac{1}{\sigma\sqrt{T}}\,u_j, \quad \frac{j-1}{T}\le t\le\frac{j}{T}, $$
where $\tilde r_j=\sum_{i=1}^j u_i$ and $\{u_j\}$ is assumed to be generated by
$$ u_j = \sum_{l=0}^{\infty}\alpha_l\,\varepsilon_{j-l}, \quad \alpha_0=1. $$
Here, $\{\varepsilon_j\}$ is a sequence of i.i.d. $(0,\sigma^2)$ random variables, and $\{\alpha_l\}$ is a sequence of constants satisfying $\sum_{l=0}^{\infty} l\,|\alpha_l|<\infty$; therefore, $\{u_j\}$ is a stationary process. Using the Beveridge and Nelson [2] decomposition, it holds that (see, e.g., Tanaka [3])
$$ \mathcal{L}(\tilde R_T) \longrightarrow \mathcal{L}(\alpha W), \quad \alpha=\sum_{l=0}^{\infty}\alpha_l. $$
The asymptotic properties of the LSE for stationary autoregressive models are well established (see, e.g., Hannan [4]). On the other hand, due to its nonstationarity, the LSE of the random walk does not satisfy asymptotic normality. However, we can derive the limiting distribution of the LSE of a unit root process from the FCLT. For a more detailed treatment of unit root processes with i.i.d. or stationary innovations, refer to, for example, Billingsley [1] and Tanaka [3].
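The non-normal limit can be visualized by Monte Carlo. The sketch below (sample sizes and seed are arbitrary choices) simulates the statistic $T(\hat\rho-1)$ for the LSE fitted to random walk paths; its empirical distribution is skewed to the left, in contrast to the normal limit of the stationary case:

```python
import numpy as np

rng = np.random.default_rng(1)
T, reps = 300, 2000

def lse_stat(eps):
    # T * (rho_hat - 1) for the AR(1) LSE fitted to a random walk path
    r = np.cumsum(eps)
    return T * np.sum(r[:-1] * (r[1:] - r[:-1])) / np.sum(r[:-1] ** 2)

stats = np.array([lse_stat(rng.normal(size=T)) for _ in range(reps)])

# The limiting law (W(1)^2 - 1) / (2 int_0^1 W(nu)^2 dnu) is skewed to
# the left, so the empirical mean and median are negative
print(round(float(np.mean(stats)), 2), round(float(np.median(stats)), 2))
```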
In the above case, the $\{u_j\}$ are stationary and hence have constant variance, while their covariances depend only on time differences. This is referred to as the homogeneous case, which is too restrictive for interpreting empirical data, for example, empirical financial data. Recently, an important class of nonstationary processes, called locally stationary processes, has been proposed by Dahlhaus (see, e.g., Dahlhaus [5, 6]). In this paper, we alternatively adopt a locally stationary innovation process, which has smoothly changing variance. Since the LSP innovation has a time-varying spectral structure, it is suitable for describing empirical financial time series data.
This paper is organized as follows. In the appendix, we review the extension of the FCLT results to the case where the innovations are locally stationary processes; namely, we explain the FCLT for unit root, near unit root, and general integrated processes with LSP innovations. In Section 2, we obtain the asymptotic distribution of the least squares estimator for each case of the appendix. In Section 3, we consider the testing problem for a unit root with LSP innovation. Finally, in Section 4, we discuss extensions of the LSE, which include various well-known estimators as special cases.
2. The Property of Least Squares Estimator
In this section, we investigate the asymptotic properties of least squares estimators for unit root, near unit root, and I(d) processes with locally stationary process innovations. The testing problem for a unit root is also discussed. For notations not defined in this section, refer to the appendix.
2.1. Least Squares Estimator for Unit Root Process
Here, we consider the following statistic:
$$ \hat\rho = \frac{\sum_{j=2}^{T} x_{j-1,T}\,x_{j,T}}{\sum_{j=2}^{T} x_{j-1,T}^2}, $$
obtained from model (A.3), which can be regarded as the least squares estimator (LSE) of the autoregressive coefficient in the first-order autoregressive (AR(1)) model $x_{j,T}=\rho x_{j-1,T}+u_{j,T}$. Define
$$ U_{1,T} = \frac{1}{T\sigma^2}\sum_{j=2}^{T} x_{j-1,T}(x_{j,T}-x_{j-1,T}) = \frac{1}{2}X_T(1)^2 - \frac{1}{2}X(0)^2 - \frac{1}{2T\sigma^2}\sum_{j=1}^{T} u_{j,T}^2 - \frac{X(0)\,u_{1,T}}{\sqrt{T}\,\sigma}, $$
$$ V_{1,T} = \frac{1}{T^2\sigma^2}\sum_{j=2}^{T} x_{j-1,T}^2 = \frac{1}{T}\sum_{j=1}^{T} X_T\Big(\frac{j}{T}\Big)^2 - \frac{1}{T}X_T(1)^2, $$
then we have
$$ S_{1,T} \equiv T(\hat\rho - 1) = \frac{U_{1,T}}{V_{1,T}}. $$
Let us define a continuous function $H_1(x)=(H_{11}(x),H_{12}(x))$ for $x\in\mathcal{C}$, where
$$ H_{11}(x) = \frac{1}{2}\Big\{x(1)^2 - x(0)^2 - \int_0^1\sum_{l=0}^{\infty}\alpha_l(\nu)^2\,d\nu\Big\}, \qquad H_{12}(x) = \int_0^1 x(\nu)^2\,d\nu. $$
It is easy to check that
$$ U_{1,T} = H_{11}(X_T) + o_P(1), \qquad V_{1,T} = H_{12}(X_T) + o_P(1). $$
Therefore, the continuous mapping theorem (CMT) leads to $\mathcal{L}(U_{1,T},V_{1,T})\to\mathcal{L}(H_1(X))$ and
$$ \mathcal{L}(S_{1,T}) = \mathcal{L}(T(\hat\rho-1)) \longrightarrow \mathcal{L}\left(\frac{H_{11}(X)}{H_{12}(X)}\right) = \mathcal{L}\left(\frac{(1/2)\big\{X(1)^2 - X(0)^2 - \int_0^1\sum_{l=0}^{\infty}\alpha_l(\nu)^2\,d\nu\big\}}{\int_0^1 X(\nu)^2\,d\nu}\right) = \mathcal{L}\left(\frac{\int_0^1 X(\nu)\,dX(\nu) + (1/2)\int_0^1\big[\{\sum_{l=0}^{\infty}\alpha_l(\nu)\}^2 - \sum_{l=0}^{\infty}\alpha_l(\nu)^2\big]\,d\nu}{\int_0^1 X(\nu)^2\,d\nu}\right). $$
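The statistic $S_{1,T}$ under an LSP innovation can be simulated directly. The sketch below uses an illustrative time-varying MA(1) innovation (the coefficient function $\alpha_1(u)=0.8u$ is an assumption made for the demo, not taken from the paper):

```python
import numpy as np

rng = np.random.default_rng(2)
T, reps = 400, 1000
alpha1 = lambda u: 0.8 * u   # assumed time-varying MA(1) coefficient (demo only)

def s1(eps):
    u = np.arange(1, T + 1) / T
    innov = eps[1:] + alpha1(u) * eps[:-1]  # u_{j,T} = e_j + alpha_1(j/T) e_{j-1}
    x = np.cumsum(innov)                    # unit root process with x_0 = 0
    return T * np.sum(x[:-1] * np.diff(x)) / np.sum(x[:-1] ** 2)

stats = np.array([s1(rng.normal(size=T + 1)) for _ in range(reps)])
print(round(float(np.mean(stats)), 2), round(float(np.std(stats)), 2))
```

The positive drift term $(1/2)\int_0^1[\{\sum_l\alpha_l(\nu)\}^2-\sum_l\alpha_l(\nu)^2]d\nu$ shifts this distribution relative to the i.i.d. innovation case.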
2.2. Least Squares Estimator for Near Unit Root Process
We next consider the least squares estimator $\hat\rho_T$ for model (A.11) in the case where $\beta(t)\equiv\beta$ is constant on $[0,1]$, namely,
$$ y_{j,T} = \rho_T\,y_{j-1,T} + u_{j,T}, \quad j=1,\dots,T, $$
with $\rho_T = 1-\beta/T$. Then, we have
$$ \hat\rho_T = 1 - \frac{\hat\beta}{T} = \frac{\sum_{j=2}^{T} y_{j-1,T}\,y_{j,T}}{\sum_{j=2}^{T} y_{j-1,T}^2}, \qquad S_{2,T} \equiv T(\hat\rho_T - 1) = -\hat\beta = \frac{U_{2,T}}{V_{2,T}}, $$
where
$$ U_{2,T} = \frac{1}{T\sigma^2}\sum_{j=2}^{T} y_{j-1,T}(y_{j,T}-y_{j-1,T}) = \frac{1}{2}Y_T(1)^2 - \frac{1}{2}Y(0)^2 - \frac{1}{2T\sigma^2}\sum_{j=1}^{T}\Big(u_{j,T}-\frac{\beta}{T}y_{j-1,T}\Big)^2 - \frac{1}{\sqrt{T}\,\sigma}Y(0)\Big(u_{1,T}-\frac{\beta}{T}y_{0,T}\Big), $$
$$ V_{2,T} = \frac{1}{T^2\sigma^2}\sum_{j=2}^{T} y_{j-1,T}^2 = \frac{1}{T}\sum_{j=1}^{T} Y_T\Big(\frac{j}{T}\Big)^2 - \frac{1}{T}Y_T(1)^2. $$
Let us define a continuous function $H_2(x)=(H_{21}(x),H_{22}(x))$ for $x\in\mathcal{C}$, where
$$ H_{21}(x) = \frac{1}{2}\Big\{x(1)^2 - x(0)^2 - \int_0^1\sum_{l=0}^{\infty}\alpha_l(\nu)^2\,d\nu\Big\}, \qquad H_{22}(x) = \int_0^1 x(\nu)^2\,d\nu. $$
It is easy to check that
$$ U_{2,T} = H_{21}(Y_T) + o_P(1), \qquad V_{2,T} = H_{22}(Y_T) + o_P(1). $$
Therefore, the CMT leads to $\mathcal{L}(U_{2,T},V_{2,T})\to\mathcal{L}(H_2(Y))$ and
$$ \mathcal{L}(S_{2,T}) = \mathcal{L}(T(\hat\rho_T-1)) = \mathcal{L}(-\hat\beta) \longrightarrow \mathcal{L}\left(\frac{H_{21}(Y)}{H_{22}(Y)}\right) = \mathcal{L}\left(\frac{(1/2)\big\{Y(1)^2 - Y(0)^2 - \int_0^1\sum_{l=0}^{\infty}\alpha_l(\nu)^2\,d\nu\big\}}{\int_0^1 Y(\nu)^2\,d\nu}\right) = \mathcal{L}\left(\frac{\int_0^1 Y(\nu)\,dY(\nu) + (1/2)\int_0^1\big[\{\sum_{l=0}^{\infty}\alpha_l(\nu)\}^2 - \sum_{l=0}^{\infty}\alpha_l(\nu)^2\big]\,d\nu}{\int_0^1 Y(\nu)^2\,d\nu}\right). $$
2.3. Least Squares Estimator for I(d) Process
Furthermore, we consider the least squares estimator
$$ \hat\rho^{\{d\}} = \frac{\sum_{j=2}^{T} x_{j-1,T}^{\{d\}}\,x_{j,T}^{\{d\}}}{\sum_{j=2}^{T}\big(x_{j-1,T}^{\{d\}}\big)^2}, \qquad S_{3,T} \equiv T(\hat\rho^{\{d\}}-1) = \frac{U_{3,T}}{V_{3,T}}, $$
obtained from the model $x_{j,T}^{\{d\}} = \rho\,x_{j-1,T}^{\{d\}} + x_{j,T}^{\{d-1\}}$, where
$$ U_{3,T} = \frac{1}{T^{2d-1}\sigma^2}\sum_{j=2}^{T} x_{j-1,T}^{\{d\}}\big(x_{j,T}^{\{d\}} - x_{j-1,T}^{\{d\}}\big) = \frac{1}{2}X_T^{\{d\}}(1)^2 - \frac{1}{2T^2}\sum_{j=1}^{T}\Big\{X_T^{\{d-1\}}\Big(\frac{j}{T}\Big)\Big\}^2 - \frac{1}{T}X_T^{\{d\}}(0)\,X_T^{\{d-1\}}\Big(\frac{1}{T}\Big), $$
$$ V_{3,T} = \frac{1}{T^{2d}\sigma^2}\sum_{j=2}^{T}\big(x_{j-1,T}^{\{d\}}\big)^2 = \frac{1}{T}\sum_{j=1}^{T}\Big\{X_T^{\{d\}}\Big(\frac{j}{T}\Big)\Big\}^2 - \frac{1}{T}\big\{X_T^{\{d\}}(1)\big\}^2. $$
Let us define a continuous function $H_3(x)=(H_{31}(x),H_{32}(x))$ for $x\in\mathcal{C}$, where
$$ H_{31}(x) = \frac{1}{2}x(1)^2, \qquad H_{32}(x) = \int_0^1 x(\nu)^2\,d\nu. $$
It is easy to check that
$$ U_{3,T} = H_{31}\big(X_T^{\{d\}}\big) + o_P(1), \qquad V_{3,T} = H_{32}\big(X_T^{\{d\}}\big) + o_P(1). $$
Therefore, the CMT leads to $\mathcal{L}(U_{3,T},V_{3,T})\to\mathcal{L}\big(H_3(X^{\{d-1\}})\big)$ and
$$ \mathcal{L}(S_{3,T}) = \mathcal{L}\big(T(\hat\rho^{\{d\}}-1)\big) \longrightarrow \mathcal{L}\left(\frac{H_{31}(X^{\{d-1\}})}{H_{32}(X^{\{d-1\}})}\right) = \mathcal{L}\left(\frac{(1/2)\big\{X^{\{d-1\}}(1)\big\}^2}{\int_0^1\big\{X^{\{d-1\}}(\nu)\big\}^2\,d\nu}\right) = \mathcal{L}\left(\frac{\int_0^1 X^{\{d-1\}}(\nu)\,dX^{\{d-1\}}(\nu)}{\int_0^1\big\{X^{\{d-1\}}(\nu)\big\}^2\,d\nu}\right). $$
The last equality is due to the $(d-1)$-times differentiability of $X^{\{d-1\}}$, so that $\int_0^1 X^{\{d-1\}}(\nu)\,dX^{\{d-1\}}(\nu) = \frac{1}{2}\{X^{\{d-1\}}(1)\}^2$ since $X^{\{d-1\}}(0)=0$.
3. Testing for Unit Root
In the analysis of empirical financial data, the existence of a unit root is an important problem. However, as we saw in the previous section, the asymptotic results for unit root and near unit root processes are quite different (a drift term appears in the limiting process of the near unit root case). Therefore, we consider the following testing problem against the local alternative hypothesis:
$$ H_0:\ \rho = 1 \quad \text{versus} \quad H_1:\ \rho = 1 - \frac{\beta}{T}. $$
We assume $\sigma^2=1$ to identify the model. Let the statistic $S_{1,T}$ be as constructed in (2.3). Recall that, as $T\to\infty$, under $H_0$,
$$ \mathcal{L}(S_{1,T}) \longrightarrow \mathcal{L}\left(\frac{\int_0^1 X(\nu)\,dX(\nu) + (1/2)\int_0^1\big[\{\sum_{l=0}^{\infty}\alpha_l(\nu)\}^2 - \sum_{l=0}^{\infty}\alpha_l(\nu)^2\big]\,d\nu}{\int_0^1 X(\nu)^2\,d\nu}\right) = \mathcal{L}\left(\frac{U}{V} + \frac{\int_0^1\big[\{\sum_{l=0}^{\infty}\alpha_l(\nu)\}^2 - \sum_{l=0}^{\infty}\alpha_l(\nu)^2\big]\,d\nu}{2\int_0^1 X(\nu)^2\,d\nu}\right), $$
where
$$ U = \int_0^1 X(\nu)\,dX(\nu), \qquad V = \int_0^1 X(\nu)^2\,d\nu. $$
Since $\{\sum_{l=0}^{\infty}\alpha_l(\nu)\}^2$ and $\sum_{l=0}^{\infty}\alpha_l(\nu)^2$ are unknown, we construct a test statistic
$$ Z_\rho = T(\hat\rho-1) + \frac{(1/T)\sum_{j=1}^{T}\hat u_{j,T}^2 - (2\pi/T)\sum_{t=1}^{T}\hat f(t/T,0)}{2\,(1/T^2)\sum_{j=2}^{T} x_{j-1,T}^2}, $$
where $\hat u_{j,T} = x_{j,T} - x_{j-1,T}$. A nonparametric time-varying spectral density estimator $\hat f(u,\lambda)$ is given by
$$ \hat f(u,\lambda_l) = M\int K\big(M(\lambda_l-\mu)\big)\,I_N(u,\mu)\,d\mu \approx \frac{2\pi M}{T}\sum_{k=l-T/(4\pi M)}^{l+T/(4\pi M)} K\big(M(\lambda_l-\mu_k)\big)\,I_N(u,\mu_k), $$
where $\lambda_l = (2\pi/T)l - \pi$, $l=1,\dots,T-1$, and $\mu_k = (2\pi/T)k - \pi$, $k=1,\dots,T-1$. Here, $I_N(u,\lambda)$ is the local periodogram around time $u$ given by
$$ I_N(u,\lambda) = \frac{1}{2\pi N}\left|\sum_{s=1}^{N} h\Big(\frac{s}{N}\Big)\,\hat u_{[uT]-N/2+s,T}\,e^{-i\lambda s}\right|^2, $$
where $[\cdot]$ denotes the Gauss symbol; that is, for a real number $a$, $[a]$ is the greatest integer less than or equal to $a$. Furthermore, we employ the following kernel functions and bandwidth orders for smoothing in the time and frequency domains, respectively:
$$ K(x) = 6\Big(\frac{1}{4}-x^2\Big),\ x\in\Big[-\frac{1}{2},\frac{1}{2}\Big], \qquad h(x) = \{6x(1-x)\}^{1/2},\ x\in[0,1], \qquad M = T^{1/6}, \quad N = T^{5/6}. $$
These choices are optimal in the sense that they minimize the mean squared error of the nonparametric estimator (see Dahlhaus [6]); here we simply set the proportionality constants of the bandwidths equal to one. Then, it can be established that, under $H_0$,
$$ \mathcal{L}(Z_\rho) \longrightarrow \mathcal{L}\Big(\frac{U}{V}\Big). $$
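The kernel $K$ and data taper $h$ above can be checked numerically. The sketch below verifies by the midpoint rule that $K$ integrates to one on $[-1/2,1/2]$ and that $h^2$ integrates to one on $[0,1]$ (the sample size $T$ used for the bandwidths is an arbitrary illustrative value):

```python
import numpy as np

# Kernel and taper from the text
K = lambda x: 6.0 * (0.25 - x ** 2)           # supported on [-1/2, 1/2]
h = lambda x: np.sqrt(6.0 * x * (1.0 - x))    # data taper on [0, 1]

# Midpoint-rule checks: K integrates to 1, and h^2 integrates to 1
n = 200000
xK = (np.arange(n) + 0.5) / n - 0.5   # midpoints of [-1/2, 1/2]
xh = (np.arange(n) + 0.5) / n         # midpoints of [0, 1]
int_K = float(K(xK).mean())           # both intervals have length 1
int_h2 = float((h(xh) ** 2).mean())

T = 1000
M, N = T ** (1 / 6), T ** (5 / 6)     # bandwidth orders M = T^{1/6}, N = T^{5/6}
print(round(int_K, 6), round(int_h2, 6), round(M, 2), round(N, 1))
```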
We now have to deal with statistics whose distribution functions require numerical integration. Let $R$ be such a statistic, taking the form $R=U/V$. Imhof's [7] formula gives the distribution function of $R$:
$$ F_R(x) = P(R\le x) = P(xV-U \ge 0) = \frac{1}{2} + \frac{1}{\pi}\int_0^{\infty}\frac{1}{s}\,\mathrm{Im}\{\phi(s;x)\}\,ds, $$
where $\phi(s;x)$ is the characteristic function of $xV-U$, namely,
$$ \phi(-is;x) = E\big[\exp\{s(xV-U)\}\big] = E\left[\exp\left\{s\Big(x\int_0^1 X(\nu)^2\,d\nu - \int_0^1 X(\nu)\,dX(\nu)\Big)\right\}\right]. $$
However, we do not yet have an explicit form of the distribution function of the estimator. Therefore, we cannot perform numerical experiments except in simple special cases; the problem involves a complicated differential equation and requires a further paper for its solution.
4. Extensions of LSE
In this section, we consider extensions of the LSE $\hat\rho_T$ for the near random walk model $y_{j,T} = \rho_T\,y_{j-1,T} + u_{j,T}$, $\rho_T = 1-\beta/T$.
4.1. Ochi Estimator
Ochi [8] proposed the following class of estimators, which extend the LSE of the autoregressive coefficient:
$$ \hat\rho_T(\theta_1,\theta_2) = 1 - \frac{\hat\beta(\theta_1,\theta_2)}{T} = \frac{\sum_{j=2}^{T} y_{j-1,T}\,y_{j,T}}{\sum_{j=2}^{T-1} y_{j,T}^2 + \theta_1 y_{1,T}^2 + \theta_2 y_{T,T}^2}, \quad \theta_1,\theta_2\ge 0, $$
$$ S_{4,T} = T\big(\hat\rho_T(\theta_1,\theta_2)-1\big) = -\hat\beta(\theta_1,\theta_2) = \frac{U_{4,T}}{V_{4,T}}, $$
where
$$ U_{4,T} = \frac{1}{T\sigma^2}\Big\{\sum_{j=2}^{T} y_{j-1,T}\,y_{j,T} - \sum_{j=2}^{T-1} y_{j,T}^2 - \theta_1 y_{1,T}^2 - \theta_2 y_{T,T}^2\Big\} = \Big\{\frac{1}{2}(1-2\theta_1) + \frac{\beta}{T}(2\theta_1-1) + \frac{\beta^2}{T^2}(1-\theta_1)\Big\}Y(0)^2 + \frac{1}{2}(1-2\theta_2)\,Y_T(1)^2 - \frac{1}{2T\sigma^2}\sum_{j=1}^{T}\Big(u_{j,T}-\frac{\beta}{T}y_{j-1,T}\Big)^2 + \frac{1}{\sqrt{T}\,\sigma}\Big\{1-2\theta_1+\frac{2\beta}{T}(\theta_1-1)\Big\}u_{1,T}\,Y(0) + \frac{1}{T\sigma^2}(1-\theta_1)\,u_{1,T}^2, $$
$$ V_{4,T} = \frac{1}{T^2\sigma^2}\Big\{\sum_{j=2}^{T-1} y_{j,T}^2 + \theta_1 y_{1,T}^2 + \theta_2 y_{T,T}^2\Big\} = \frac{1}{T}\sum_{j=1}^{T} Y_T\Big(\frac{j}{T}\Big)^2 + (\theta_1-1)\frac{1}{T}Y_T\Big(\frac{1}{T}\Big)^2 + (\theta_2-1)\frac{1}{T}Y_T(1)^2. $$
This class of estimators includes the LSE $\hat\rho_T(1,0)$, Daniels's estimator $\hat\rho_T(1/2,1/2)$, and the Yule-Walker estimator $\hat\rho_T(1,1)$ as special cases.
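These special cases can be verified directly. The following sketch (the simulated path is an arbitrary example) implements $\hat\rho_T(\theta_1,\theta_2)$ and checks that $(\theta_1,\theta_2)=(1,0)$ reproduces the LSE and $(1,1)$ the Yule-Walker estimator:

```python
import numpy as np

rng = np.random.default_rng(3)
T = 200
y = np.cumsum(rng.normal(size=T))  # an arbitrary sample path y_1, ..., y_T

def ochi(y, th1, th2):
    # rho_hat(th1, th2): the two end terms of the denominator are weighted
    num = np.sum(y[:-1] * y[1:])   # sum_{j=2}^T y_{j-1} y_j
    den = np.sum(y[1:-1] ** 2) + th1 * y[0] ** 2 + th2 * y[-1] ** 2
    return num / den

# theta = (1, 0) gives the LSE; theta = (1, 1) gives the Yule-Walker estimator
lse = np.sum(y[:-1] * y[1:]) / np.sum(y[:-1] ** 2)
yw = np.sum(y[:-1] * y[1:]) / np.sum(y ** 2)
print(np.isclose(ochi(y, 1.0, 0.0), lse), np.isclose(ochi(y, 1.0, 1.0), yw))
# prints: True True
```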
Define, for $x\in\mathcal{C}$, $H_4(x)=(H_{41}(x),H_{42}(x))$,
$$ H_{41}(x) = \frac{1}{2}\Big\{(1-2\theta_1)\,x(0)^2 + (1-2\theta_2)\,x(1)^2 - \int_0^1\sum_{l=0}^{\infty}\alpha_l(\nu)^2\,d\nu\Big\}, \qquad H_{42}(x) = \int_0^1 x(\nu)^2\,d\nu, $$
then we see that $H_4(x)$ is continuous and
$$ U_{4,T} = H_{41}(Y_T) + o_P(1), \qquad V_{4,T} = H_{42}(Y_T) + o_P(1). $$
From the CMT, we obtain $\mathcal{L}(U_{4,T},V_{4,T})\to\mathcal{L}(H_4(Y))$, and therefore,
$$ \mathcal{L}(S_{4,T}) = \mathcal{L}\big(T(\hat\rho_T(\theta_1,\theta_2)-1)\big) = \mathcal{L}\big(-\hat\beta(\theta_1,\theta_2)\big) \longrightarrow \mathcal{L}\left(\frac{H_{41}(Y)}{H_{42}(Y)}\right), $$
where
$$ H_{41}(Y) = \frac{1}{2}\Big\{(1-2\theta_1)Y(0)^2 + (1-2\theta_2)Y(1)^2 - \int_0^1\sum_{l=0}^{\infty}\alpha_l(\nu)^2\,d\nu\Big\} = (1-2\theta_2)\int_0^1 Y(\nu)\,dY(\nu) + (1-\theta_1-\theta_2)Y(0)^2 + \frac{1}{2}\int_0^1\Big[(1-2\theta_2)\Big\{\sum_{l=0}^{\infty}\alpha_l(\nu)\Big\}^2 - \sum_{l=0}^{\infty}\alpha_l(\nu)^2\Big]\,d\nu, \qquad H_{42}(Y) = \int_0^1 Y(\nu)^2\,d\nu. $$
4.2. Another Extension of LSE
Next, we suggest another class of estimators, which also extend the LSE. For $\theta(u)\in\mathcal{C}$ with continuous derivative $\theta'(u)=(\partial/\partial u)\theta(u)$, define
$$ \hat\rho_T^{\theta} = 1 - \frac{\hat\beta^{\theta}}{T} = \frac{\sum_{j=2}^{T}\theta\big((j-1)/T\big)\,y_{j-1,T}\,y_{j,T}}{\sum_{j=2}^{T}\theta\big((j-1)/T\big)\,y_{j-1,T}^2}, \qquad S_{5,T} = T(\hat\rho_T^{\theta}-1) = -\hat\beta^{\theta} = \frac{U_{5,T}}{V_{5,T}}, $$
where
$$ U_{5,T} = \frac{1}{T\sigma^2}\sum_{j=2}^{T}\theta\Big(\frac{j-1}{T}\Big)y_{j-1,T}(y_{j,T}-y_{j-1,T}) = -\frac{1}{2}\sum_{j=1}^{T}\Big\{\theta\Big(\frac{j}{T}\Big)-\theta\Big(\frac{j-1}{T}\Big)\Big\}Y_T\Big(\frac{j}{T}\Big)^2 + \frac{1}{2}\theta(1)Y_T(1)^2 - \frac{1}{2}\theta(0)Y(0)^2 - \frac{1}{2T\sigma^2}\sum_{j=1}^{T}\theta\Big(\frac{j}{T}\Big)\Big(u_{j,T}-\frac{\beta}{T}y_{j-1,T}\Big)^2 + \frac{1}{2T\sigma^2}\theta\Big(\frac{1}{T}\Big)\Big(u_{1,T}-\frac{\beta}{T}y_{0,T}\Big)^2 + \frac{1}{2T\sigma^2}\theta(0)\Big\{u_{1,T}(u_{1,T}+2y_{0,T}) - \frac{2\beta}{T}y_{0,T}(y_{0,T}+u_{1,T}) + \frac{\beta^2}{T^2}y_{0,T}^2\Big\}, $$
$$ V_{5,T} = \frac{1}{T^2\sigma^2}\sum_{j=2}^{T}\theta\Big(\frac{j-1}{T}\Big)y_{j-1,T}^2 = \frac{1}{T}\sum_{j=1}^{T}\theta\Big(\frac{j}{T}\Big)Y_T\Big(\frac{j}{T}\Big)^2 - \frac{1}{T}\theta(1)Y_T(1)^2. $$
If $\theta(u)$ is taken to be a taper function, this estimator corresponds to a local LSE.
Define, for $x\in\mathcal{C}$, $H_5(x)=(H_{51}(x),H_{52}(x))$,
$$ H_{51}(x) = -\frac{1}{2}\Big\{\int_0^1\theta'(\nu)\,x(\nu)^2\,d\nu - \theta(1)\,x(1)^2 + \theta(0)\,x(0)^2\Big\} - \frac{1}{2}\int_0^1\theta(\nu)\sum_{l=0}^{\infty}\alpha_l(\nu)^2\,d\nu, \qquad H_{52}(x) = \int_0^1\theta(\nu)\,x(\nu)^2\,d\nu, $$
then we see that $H_5(x)$ is continuous and
$$ U_{5,T} = H_{51}(Y_T) + o_P(1), \qquad V_{5,T} = H_{52}(Y_T) + o_P(1). $$
From the CMT, we obtain $\mathcal{L}(U_{5,T},V_{5,T})\to\mathcal{L}(H_5(Y))$, and therefore,
$$ \mathcal{L}(S_{5,T}) = \mathcal{L}(T(\hat\rho_T^{\theta}-1)) = \mathcal{L}(-\hat\beta^{\theta}) \longrightarrow \mathcal{L}\left(\frac{H_{51}(Y)}{H_{52}(Y)}\right) \equiv \mathcal{L}(Y^{\theta}), $$
where
$$ H_{51}(Y) = -\frac{1}{2}\Big\{\int_0^1\theta'(\nu)\,Y(\nu)^2\,d\nu - \theta(1)\,Y(1)^2 + \theta(0)\,Y(0)^2\Big\} - \frac{1}{2}\int_0^1\theta(\nu)\sum_{l=0}^{\infty}\alpha_l(\nu)^2\,d\nu, \qquad H_{52}(Y) = \int_0^1\theta(\nu)\,Y(\nu)^2\,d\nu. $$
Integration by parts leads to
$$ Y^{\theta} = \frac{(1/2)\big\{\int_0^1\theta(\nu)\,dY^{(1)}(\nu) - \int_0^1\theta(\nu)\sum_{l=0}^{\infty}\alpha_l(\nu)^2\,d\nu\big\}}{\int_0^1\theta(\nu)\,Y(\nu)^2\,d\nu}, $$
with $Y^{(1)}(t)=Y(t)^2$. Hence, using Itô's formula,
$$ dY^{(1)}(t) = d\{Y(t)^2\} = 2Y(t)\,dY(t) + \Big\{\sum_{l=0}^{\infty}\alpha_l(t)\Big\}^2\,dt, $$
we have
$$ Y^{\theta} = \frac{\int_0^1\theta(\nu)\,Y(\nu)\,dY(\nu) + (1/2)\int_0^1\theta(\nu)\big[\{\sum_{l=0}^{\infty}\alpha_l(\nu)\}^2 - \sum_{l=0}^{\infty}\alpha_l(\nu)^2\big]\,d\nu}{\int_0^1\theta(\nu)\,Y(\nu)^2\,d\nu}. $$
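As a sanity check on the definition of $\hat\rho_T^{\theta}$, the sketch below (path and weight functions are illustrative choices) verifies that the constant weight $\theta\equiv 1$ recovers the ordinary LSE, and evaluates a tapered weight for comparison:

```python
import numpy as np

rng = np.random.default_rng(4)
y = np.cumsum(rng.normal(size=301))  # y_0, y_1, ..., y_T with T = 300

def weighted_lse(y, theta):
    # rho_hat^theta: each term j = 2..T is weighted by theta((j-1)/T)
    T = len(y) - 1
    j = np.arange(2, T + 1)
    w = theta((j - 1) / T)
    return np.sum(w * y[j - 1] * y[j]) / np.sum(w * y[j - 1] ** 2)

flat = weighted_lse(y, lambda u: np.ones_like(u))       # theta == 1
lse = np.sum(y[1:-1] * y[2:]) / np.sum(y[1:-1] ** 2)    # ordinary LSE
taper = weighted_lse(y, lambda u: 6.0 * u * (1.0 - u))  # a tapered weight
print(np.isclose(flat, lse))
# prints: True
```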
Appendices
In this appendix, we review the extensions of the functional central limit theorem to the case where the innovations are locally stationary processes; these extensions are used for the main results of this paper.
A. FCLT for Locally Stationary Processes
Hirukawa and Sadakata [9] extended the FCLT to unit root processes with locally stationary process innovations; namely, they derived the FCLT for unit root, near unit root, and general integrated processes with LSP innovations. In this section, we briefly review these results, which were applied in the previous sections.
A.1. Unit Root Process with Locally Stationary Disturbance
First, we introduce the locally stationary innovation process. Let $\{u_{j,T}\}$ be generated by the following time-varying MA($\infty$) model:
$$ u_{j,T} = \sum_{l=0}^{\infty}\alpha_l\Big(\frac{j}{T}\Big)\varepsilon_{j-l} := \sum_{l=0}^{\infty}\alpha_l\Big(\frac{j}{T}\Big)L^l\varepsilon_j = \alpha\Big(\frac{j}{T},L\Big)\varepsilon_j, $$
where $L$ is the lag operator, which acts as $L\varepsilon_j = \varepsilon_{j-1}$, $\alpha(u,L) = \sum_{l=0}^{\infty}\alpha_l(u)L^l$, and the time-varying MA coefficients satisfy
$$ \sum_{l=0}^{\infty} l\sup_{0\le u\le 1}|\alpha_l(u)| < \infty, \qquad \sum_{l=0}^{\infty} l\sup_{0\le u\le 1}\Big|\frac{\partial}{\partial u}\alpha_l(u)\Big| < \infty. $$
Then, the process $\{u_{j,T}\}$ is locally stationary (see Dahlhaus [5], Hirukawa and Taniguchi [10]). Using this innovation process, define the partial sum $\{x_{j,T}\}$ as
$$ x_{j,T} = x_{j-1,T} + u_{j,T} = x_{0,T} + \sum_{i=1}^{j} u_{i,T}, $$
where $x_{0,T} = \sigma\sqrt{T}\,X(0)$ and $X(0)\sim N(\gamma_X,\delta_X^2)$ is independent of $\{\varepsilon_j\}$.
We consider a sequence $\{X_T\}$ of partial sum stochastic processes in $\mathcal{C}$ defined by
$$ X_T(t) = \frac{1}{\sigma\sqrt{T}}\,x_{j,T} + T\Big(t-\frac{j}{T}\Big)\frac{1}{\sigma\sqrt{T}}\,u_{j,T}, \quad \frac{j-1}{T}\le t\le\frac{j}{T}. $$
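The time-varying second-order structure of $\{u_{j,T}\}$ can be illustrated by simulation. The sketch below uses a time-varying MA(1) with coefficient function $a(u)=0.9u$ (an assumption made for the demo) and checks that the cross-replication variance of $u_{j,T}$ tracks $\sigma^2\sum_l\alpha_l(j/T)^2 = 1+a(j/T)^2$:

```python
import numpy as np

rng = np.random.default_rng(5)
T, reps = 500, 2000
a = lambda u: 0.9 * u   # assumed coefficient function (demo only)

eps = rng.normal(size=(reps, T + 1))
grid = np.arange(1, T + 1) / T
u = eps[:, 1:] + a(grid) * eps[:, :-1]   # u_{j,T} = e_j + a(j/T) e_{j-1}

# Cross-replication variance near the start and the end of the sample;
# theory: Var u_{j,T} = 1 + a(j/T)^2, growing smoothly along the sample
var_start = float(u[:, :25].var())
var_end = float(u[:, -25:].var())
print(round(var_start, 2), round(var_end, 2))
```

The variance near the start is close to 1 and near the end close to $1+a(1)^2$, which is exactly the "smoothly changing variance" that stationary innovations cannot capture.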
Now, we define, on $\mathbb{R}\times\mathcal{C}$,
$$ h_t^{(1)}(x,y) = x + \alpha(t,1)\,y(t) - \int_0^t\alpha'(\nu,1)\,y(\nu)\,d\nu, \qquad \alpha(t,1) = \sum_{l=0}^{\infty}\alpha_l(t), \qquad \alpha'(t,1) = \frac{\partial}{\partial t}\alpha(t,1) = \sum_{l=0}^{\infty}\frac{\partial}{\partial t}\alpha_l(t). $$
Then, we can obtain
$$ \mathcal{L}(X_T) \longrightarrow \mathcal{L}\big\{h^{(1)}(X(0),W)\big\} \equiv \mathcal{L}(X). $$
Integration by parts leads to
$$ X(t) = X(0) + \alpha(t,1)\,W(t) - \int_0^t\alpha'(\nu,1)\,W(\nu)\,d\nu = X(0) + \int_0^t\alpha(\nu,1)\,dW(\nu), \qquad dX(t) = \alpha(t,1)\,dW(t). $$
Note that the time-varying MA($\infty$) process $u_{j,T}$ in (A.1) has the spectral representation
$$ u_{j,T} = \int_{-\pi}^{\pi} A\Big(\frac{j}{T},\lambda\Big)e^{ij\lambda}\,d\xi(\lambda), $$
where $\xi(\lambda)$ is the spectral measure of the i.i.d. process $\{\varepsilon_j\}$, which satisfies $\varepsilon_j = \int_{-\pi}^{\pi} e^{ij\lambda}\,d\xi(\lambda)$, and the transfer function $A(t,\lambda)$ is given by
$$ A(t,\lambda) = \sum_{l=0}^{\infty}\alpha_l(t)\,e^{-il\lambda}, \qquad A(t,0) = \sum_{l=0}^{\infty}\alpha_l(t) = \alpha(t,1). $$
Therefore, the stochastic differential in (A.7) can be written as
$$ dX(t) = A(t,0)\,dW(t). $$
A.2. Near Unit Root Process with Locally Stationary Disturbance
In this section, we consider the following near unit root process $\{y_{j,T}\}$ with locally stationary disturbance:
$$ y_{j,T} = \rho_{j,T}\,y_{j-1,T} + u_{j,T} = \prod_{i=1}^{j}\rho_{i,T}\,y_{0,T} + \sum_{i=1}^{j}\Big(\prod_{k=i+1}^{j}\rho_{k,T}\Big)u_{i,T}, \quad j=1,\dots,T, $$
where $\{u_{j,T}\}$ is generated from the time-varying MA($\infty$) model in (A.1), $\rho_{j,T} = 1-(1/T)\beta(j/T)$ with $\beta(t)\in\mathcal{C}[0,1]$, $y_{0,T} = \sqrt{T}\sigma\,Y(0)$, and $Y(0)\sim N(\gamma_Y,\delta_Y^2)$ is independent of $\{\varepsilon_j\}$ and $X(0)$. Then, we define a sequence $\{Y_T\}$ of partial sum processes in $\mathcal{C}$ as
$$ Y_T(t) = \frac{1}{\sigma\sqrt{T}}\,y_{j,T} + T\Big(t-\frac{j}{T}\Big)\frac{y_{j,T}-y_{j-1,T}}{\sigma\sqrt{T}}, \quad \frac{j-1}{T}\le t\le\frac{j}{T}. $$
Define, on $\mathbb{R}^2\times\mathcal{C}$,
$$ h_t^{(2)}(x,y,z) = e^{-\int_0^t\beta(\nu)\,d\nu}(y-x) - \int_0^t\beta(\nu)\,e^{-\int_\nu^t\beta(s)\,ds}\,z(\nu)\,d\nu + z(t). $$
Then, we can obtain
$$ \mathcal{L}(Y_T) \longrightarrow \mathcal{L}\big\{h^{(2)}(X(0),Y(0),X)\big\} \equiv \mathcal{L}(Y). $$
Integration by parts and Itô's formula lead to
$$ Y(t) = e^{-\int_0^t\beta(s)\,ds}\Big(Y(0) - X(0) - \int_0^t\beta(\nu)\,e^{\int_0^\nu\beta(s)\,ds}\,X(\nu)\,d\nu\Big) + X(t) = e^{-\int_0^t\beta(s)\,ds}\Big(Y(0) + \int_0^t e^{\int_0^\nu\beta(\mu)\,d\mu}\,dX(\nu)\Big) = e^{-\int_0^t\beta(s)\,ds}\Big(Y(0) + \int_0^t e^{\int_0^\nu\beta(\mu)\,d\mu}\,\alpha(\nu,1)\,dW(\nu)\Big), $$
$$ dY(t) = -\beta(t)\,Y(t)\,dt + \alpha(t,1)\,dW(t) = -\beta(t)\,Y(t)\,dt + A(t,0)\,dW(t) = -\beta(t)\,Y(t)\,dt + dX(t). $$
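The stochastic differential equation for $Y$ can be simulated by an Euler scheme. In the sketch below, constant $\beta$ and $A(t,0)$ are illustrative choices; in that special case $Y$ is an Ornstein-Uhlenbeck process, whose variance at $t=1$ is known in closed form and serves as a check:

```python
import numpy as np

rng = np.random.default_rng(6)
n_steps, reps = 1000, 4000
dt = 1.0 / n_steps

beta = lambda t: 2.0   # assumed constant beta (demo only)
A0 = lambda t: 1.0     # assumed constant A(t, 0) (demo only)

# Euler scheme for dY(t) = -beta(t) Y(t) dt + A(t,0) dW(t), with Y(0) = 0
Y = np.zeros(reps)
for i in range(n_steps):
    t = i * dt
    Y += -beta(t) * Y * dt + A0(t) * rng.normal(0.0, np.sqrt(dt), size=reps)

# With constant beta and A, Y is Ornstein-Uhlenbeck:
# Var Y(1) = (1 - exp(-2*beta)) / (2*beta)
target = (1.0 - np.exp(-4.0)) / 4.0
print(round(float(Y.var()), 3), round(float(target), 3))
```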
A.3. I(d) Process with Locally Stationary Disturbance
Let the I(d) process $\{x_{j,T}^{\{d\}}\}$ be generated by
$$ (1-L)^d\,x_{j,T}^{\{d\}} = u_{j,T}, \quad j=1,\dots,T, $$
with $x_{-d+1,T}^{\{d\}} = \cdots = x_{0,T}^{\{d\}} = 0$ and $\{u_{j,T}\}$ the time-varying MA($\infty$) process in (A.1). Note that the relation (A.16) can be rewritten as
$$ (1-L)\,x_{j,T}^{\{d\}} = x_{j,T}^{\{d-1\}}. $$
Then, we construct the partial sum process $\{X_T^{\{d\}}\}$ as
$$ X_T^{\{d\}}(t) = \frac{1}{T^{d-1}}\Big\{\frac{1}{\sigma\sqrt{T}}\,x_{j,T}^{\{d\}} + T\Big(t-\frac{j}{T}\Big)\frac{1}{\sigma\sqrt{T}}\,x_{j,T}^{\{d-1\}}\Big\}, $$
for $(j-1)/T\le t\le j/T$, $d\ge 2$, and $X_T^{\{1\}}(t)\equiv X_T(t)$, where the partial sum process $\{X_T\}$ is defined in (A.4). Let us first discuss weak convergence to the onefold integrated process $\{X^{\{1\}}\}$ defined by
$$ X^{\{1\}}(t) = \int_0^t X(\nu)\,d\nu = \int_0^t\Big\{X(0) + \int_0^\nu\alpha(\mu,1)\,dW(\mu)\Big\}\,d\nu. $$
For $d=2$, the partial sum process in (A.18) becomes
$$ X_T^{\{2\}}(t) = \frac{1}{T}\Big\{\sum_{i=1}^{j} X_T\Big(\frac{i}{T}\Big) + T\Big(t-\frac{j}{T}\Big)X_T\Big(\frac{j}{T}\Big)\Big\}, \quad \frac{j-1}{T}\le t\le\frac{j}{T}. $$
Define, on $\mathcal{C}$,
$$ h_t^{(3)}(x) = \int_0^t x(\nu)\,d\nu. $$
Then, we can see that
$$ \mathcal{L}(X_T^{\{2\}}) \longrightarrow \mathcal{L}\big\{h^{(3)}(X)\big\} = \mathcal{L}\big\{X^{\{1\}}\big\}. $$
For general integer $d$, define the $d$-fold integrated process $\{X^{\{d\}}\}$ by
$$ X^{\{d\}}(t) = \int_0^t X^{\{d-1\}}(\nu)\,d\nu, \qquad X^{\{0\}}(t) = X(t). $$
By an argument similar to the case $d=2$, we can see that the partial sum process $\{X_T^{\{d\}}\}$ satisfies
$$ \mathcal{L}(X_T^{\{d\}}) \longrightarrow \mathcal{L}\big\{h^{(3)}(X^{\{d-2\}})\big\} = \mathcal{L}\big\{X^{\{d-1\}}\big\}. $$
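A small deterministic check of the map $h^{(3)}$: taking $X_T(i/T)=i/T$ (an arbitrary ramp, not a simulated path), the Riemann-sum analogue $X_T^{\{2\}}(j/T)=(1/T)\sum_{i\le j}X_T(i/T)$ should approach $X^{\{1\}}(t)=\int_0^t\nu\,d\nu=t^2/2$:

```python
import numpy as np

T = 1000
grid = np.arange(1, T + 1) / T

# Deterministic ramp X_T(i/T) = i/T; its Riemann-sum integral is
# X_T^{2}(j/T) = (1/T) * sum_{i<=j} X_T(i/T)
XT = grid
XT2 = np.cumsum(XT) / T

# The limit is X^{1}(t) = t^2 / 2; the Riemann-sum error is O(1/T)
err = float(np.max(np.abs(XT2 - grid ** 2 / 2.0)))
print(err < 1e-3)
# prints: True
```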
Acknowledgments
The authors would like to thank the referees for their many insightful comments, which improved the original version of this paper. The authors would also like to thank Professor Masanobu Taniguchi, the lead guest editor of this special issue, for his efforts, and to congratulate him on his sixtieth birthday.
References
[1] P. Billingsley, Convergence of Probability Measures, John Wiley & Sons, New York, NY, USA, 1968.
[2] S. Beveridge and C. R. Nelson, "A new approach to decomposition of economic time series into permanent and transitory components with particular attention to measurement of the 'business cycle'," Journal of Monetary Economics, vol. 7, no. 2, pp. 151–174, 1981.
[3] K. Tanaka, Time Series Analysis: Nonstationary and Noninvertible Distribution Theory, Wiley Series in Probability and Statistics, John Wiley & Sons, New York, NY, USA, 1996.
[4] E. J. Hannan, Multiple Time Series, John Wiley & Sons, London, UK, 1970.
[5] R. Dahlhaus, "Maximum likelihood estimation and model selection for locally stationary processes," Journal of Nonparametric Statistics, vol. 6, no. 2-3, pp. 171–191, 1996.
[6] R. Dahlhaus, "Asymptotic statistical inference for nonstationary processes with evolutionary spectra," vol. 115 of Lecture Notes in Statistics, pp. 145–159, Springer, New York, NY, USA, 1996.
[7] J. P. Imhof, "Computing the distribution of quadratic forms in normal variables," Biometrika, vol. 48, pp. 419–426, 1961.
[8] Y. Ochi, "Asymptotic expansions for the distribution of an estimator in the first-order autoregressive process," Journal of Time Series Analysis, vol. 4, no. 1, pp. 57–67, 1983.
[9] J. Hirukawa and M. Sadakata, "Asymptotic properties of unit root processes with locally stationary disturbance," preprint.
[10] J. Hirukawa and M. Taniguchi, "LAN theorem for non-Gaussian locally stationary processes and its applications," Journal of Statistical Planning and Inference, vol. 136, no. 3, pp. 640–688, 2006.