Journal of Probability and Statistics, Hindawi Publishing Corporation, Volume 2011, Article ID 741701. Research Article.

Large Deviations in Testing Squared Radial Ornstein-Uhlenbeck Model

Shoujiang Zhao and Qiaojing Liu, College of Science, China Three Gorges University, Yichang 443002, China. Academic Editor: Mohammad Fraiwan Al-Saleh.

Copyright © 2011 Shoujiang Zhao and Qiaojing Liu. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

We study the large deviations and moderate deviations of hypothesis testing for the squared radial Ornstein-Uhlenbeck model. Large deviation principles for the log-likelihood ratio are obtained, from which we derive decision regions for testing the squared radial Ornstein-Uhlenbeck model and obtain the decay rates of the error probabilities.

1. Introduction

Let us consider hypothesis testing for the following squared radial Ornstein-Uhlenbeck model: $dX_t=(\delta+2\alpha X_t)\,dt+2\sqrt{X_t}\,dW_t$, $X_0=0$, where $\alpha<0$ is the unknown parameter to be tested on the basis of continuous observation of the process $\{X_t,\,t\ge 0\}$ on the time interval $[0,T]$, $W$ is a standard Brownian motion, and $\delta>0$ is known. We denote the distribution of the solution of (1.1) by $P_\alpha^{\delta}$.
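As a quick illustration (not part of the original analysis), the model can be simulated with an Euler-Maruyama scheme; the truncation at zero and all parameter values below are our own choices for the sketch.

```python
import math
import random

def simulate_sq_radial_ou(alpha, delta, T, n_steps, seed=0):
    """Euler-Maruyama for dX_t = (delta + 2*alpha*X_t) dt + 2*sqrt(X_t) dW_t, X_0 = 0.

    The square root is evaluated at max(x, 0) so the scheme stays well defined
    on the discrete grid (the true process itself is nonnegative).
    """
    rng = random.Random(seed)
    dt = T / n_steps
    sqrt_dt = math.sqrt(dt)
    x = 0.0
    path = [x]
    for _ in range(n_steps):
        dw = rng.gauss(0.0, 1.0) * sqrt_dt
        x = x + (delta + 2.0 * alpha * x) * dt + 2.0 * math.sqrt(max(x, 0.0)) * dw
        x = max(x, 0.0)
        path.append(x)
    return path

# With alpha < 0 the process is ergodic; its stationary mean is -delta/(2*alpha).
path = simulate_sq_radial_ou(alpha=-1.0, delta=2.0, T=50.0, n_steps=50_000)
```

For the parameters above the stationary mean is $-\delta/(2\alpha)=1$, so a long-run time average of the simulated path should be close to 1.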

We decide between the two hypotheses $H_0:\alpha=\alpha_0$ and $H_1:\alpha=\alpha_1$, where $\alpha_0,\alpha_1<0$. The hypothesis test is based on a partition of the space $\Omega_T$ of outcomes of the process on $[0,T]$ into a (decision) region $B_T$ and its complement $B_T^c$, and we accept $H_1$ or $H_0$ according to whether the outcome satisfies $X_\cdot\in B_T$ or $X_\cdot\in B_T^c$.

The probability $e_1(T)$ of accepting $H_1$ when $H_0$ is actually true is called the error probability of the first kind, and the probability $e_2(T)$ of accepting $H_0$ when $H_1$ is actually true is called the error probability of the second kind; that is, $e_1(T)=P_{\alpha_0}^{\delta}(B_T)$ and $e_2(T)=P_{\alpha_1}^{\delta}(B_T^c)$. By the Neyman-Pearson lemma (cf. [1]), the optimal decision region $B_T$ has the following form:
$$B_T=\Big\{\frac{1}{T}\log\frac{dP_{\alpha_1}^{\delta}}{dP_{\alpha_0}^{\delta}}\Big|_{\mathcal F_{[0,T]}}\ge c\Big\},$$
where $\mathcal F_{[0,T]}$ is the $\sigma$-algebra generated by the outcome process on $[0,T]$.
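For intuition, the two error probabilities of a threshold region of this form can be estimated by Monte Carlo counting once samples of the normalized log-likelihood ratio are available under each hypothesis; the helper below and its toy inputs are hypothetical, not from the paper.

```python
def error_probabilities(stats_h0, stats_h1, c):
    """Monte Carlo estimates of the error probabilities for B_T = {(1/T) log dP1/dP0 >= c}.

    stats_h0, stats_h1: samples of the normalized log-likelihood ratio
    drawn under H0 and H1, respectively (hypothetical inputs).
    """
    e1 = sum(s >= c for s in stats_h0) / len(stats_h0)  # accept H1 although H0 is true
    e2 = sum(s < c for s in stats_h1) / len(stats_h1)   # accept H0 although H1 is true
    return e1, e2

# toy usage with made-up samples
e1, e2 = error_probabilities([0.0, 1.0, 2.0, 3.0], [2.0, 3.0, 4.0, 5.0], 2.5)
```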

Research on hypothesis testing problems began in the 1930s (cf. [1]). Since the optimal decision region $B_T$ has the above form, we are interested in the calculation or approximation of the constant $c$, and the hypothesis testing problem can be studied by large deviations (cf. [2-6]). In those papers, large deviation estimates of the error probabilities are obtained for some i.i.d. sequences, Markov chains, stationary Gaussian processes, stationary diffusion processes, and Ornstein-Uhlenbeck processes. In this paper, we study the large deviations and moderate deviations for the hypothesis testing problem of the squared radial Ornstein-Uhlenbeck model. By a large deviation principle, we show that the error probability of the second kind approaches $0$ or $1$ exponentially fast, depending on the fixed exponent of the decay of the error probability of the first kind; we also give decision regions and obtain the decay rates of the error probabilities by a moderate deviation principle. The large and moderate deviations for parameter estimators of the squared radial Ornstein-Uhlenbeck model were studied in [7, 8].

2. Main Results

In this section, we state our main results.

Theorem 2.1.

Let $a(T)$ be a positive function satisfying $a(T)/T\to 0$ and $a(T)/\sqrt{T}\to\infty$ as $T\to\infty$. For any $a>0$, set
$$B_T=\Big\{\frac{1}{a(T)}\log\frac{dP_{\alpha_1}^{\delta}}{dP_{\alpha_0}^{\delta}}\Big|_{\mathcal F_{[0,T]}}\ \ge\ \frac{T}{4a(T)}\frac{(\alpha_1-\alpha_0)^2\delta}{\alpha_0}-\frac{|\alpha_1^2-\alpha_0^2|}{2\alpha_0}\sqrt{\frac{a\delta}{-\alpha_0}}\Big\}.$$
Then
$$\lim_{T\to\infty}\frac{T}{a^2(T)}\log P_{\alpha_0}^{\delta}(B_T)=-a,\qquad \lim_{T\to\infty}\frac{T}{a^2(T)}\log P_{\alpha_1}^{\delta}(B_T^c)=-\infty.$$
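A small numerical sketch of Theorem 2.1 (our own illustration, with arbitrary parameter values): the second term of the threshold is exactly the point where the moderate-deviation rate function of Lemma 3.2 equals $a$.

```python
import math

def mdp_rate(alpha0, alpha1, delta, z):
    """Moderate-deviation rate -4*alpha0^3*z^2 / ((alpha1^2-alpha0^2)^2 * delta) of Lemma 3.2."""
    return -4.0 * alpha0 ** 3 * z ** 2 / ((alpha1 ** 2 - alpha0 ** 2) ** 2 * delta)

def mdp_threshold(alpha0, alpha1, delta, a, T, a_T):
    """Threshold of the region B_T in Theorem 2.1, for a given value a_T of a(T)."""
    center = T / (4.0 * a_T) * (alpha1 - alpha0) ** 2 * delta / alpha0
    shift = -abs(alpha1 ** 2 - alpha0 ** 2) / (2.0 * alpha0) * math.sqrt(a * delta / (-alpha0))
    return center + shift

# the shift is calibrated so that mdp_rate(shift) == a (here alpha0=-1, alpha1=-2, delta=2, a=0.5)
shift = -abs((-2.0) ** 2 - (-1.0) ** 2) / (2.0 * -1.0) * math.sqrt(0.5 * 2.0 / 1.0)
```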

Theorem 2.2.

If $\alpha_1<\alpha_0<0$, then for each $a>0$ there exists a $\xi(a)$ such that
$$\lim_{T\to\infty}\frac{1}{T}\log P_{\alpha_0}^{\delta}\Big(\frac{1}{T}\log\frac{dP_{\alpha_1}^{\delta}}{dP_{\alpha_0}^{\delta}}\Big|_{\mathcal F_{[0,T]}}\ge\xi(a)\Big)=-a,$$
and when $a\in(0,z_{\alpha_1})$,
$$\liminf_{T\to\infty}\frac{1}{T}\log P_{\alpha_1}^{\delta}\Big(\frac{1}{T}\log\frac{dP_{\alpha_1}^{\delta}}{dP_{\alpha_0}^{\delta}}\Big|_{\mathcal F_{[0,T]}}<\xi(a)\Big)\ge-I_{\alpha_1}(\xi(a)),$$
when $a\in(z_{\alpha_1},+\infty)$,
$$\liminf_{T\to\infty}\frac{1}{T}\log\Big(1-P_{\alpha_1}^{\delta}\Big(\frac{1}{T}\log\frac{dP_{\alpha_1}^{\delta}}{dP_{\alpha_0}^{\delta}}\Big|_{\mathcal F_{[0,T]}}<\xi(a)\Big)\Big)\ge-I_{\alpha_1}(\xi(a)),$$
where
$$z_{\alpha_1}=-\frac{(\alpha_0-\alpha_1)^2\delta}{4\alpha_1},\qquad I_{\alpha_1}(z)=\begin{cases}\dfrac{\alpha_0^2-\alpha_1^2}{z+(\alpha_1-\alpha_0)\delta/2}\Big(\dfrac{\delta}{4}-\dfrac{\alpha_1(z+(\alpha_1-\alpha_0)\delta/2)}{\alpha_1^2-\alpha_0^2}\Big)^{2}, & z<\dfrac{(\alpha_0-\alpha_1)\delta}{2},\\[1ex]+\infty, & \text{otherwise}.\end{cases}$$
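The rate function $I_{\alpha_1}$ and its zero $z_{\alpha_1}$ can be evaluated directly; the sketch below (with arbitrarily chosen parameters, helper names ours) checks numerically that $I_{\alpha_1}(z_{\alpha_1})=0$.

```python
import math

def z_alpha1(alpha0, alpha1, delta):
    """z_{alpha1} = -(alpha0 - alpha1)^2 * delta / (4 * alpha1), the zero of I_{alpha1}."""
    return -(alpha0 - alpha1) ** 2 * delta / (4.0 * alpha1)

def rate_I_alpha1(alpha0, alpha1, delta, z):
    """I_{alpha1}(z) of Theorem 2.2 (case alpha1 < alpha0 < 0); +infinity outside the domain."""
    if z >= (alpha0 - alpha1) * delta / 2.0:
        return math.inf
    w = z + (alpha1 - alpha0) * delta / 2.0
    return (alpha0 ** 2 - alpha1 ** 2) / w * (delta / 4.0 - alpha1 * w / (alpha1 ** 2 - alpha0 ** 2)) ** 2

za1 = z_alpha1(-1.0, -2.0, 2.0)
```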

Theorem 2.3.

If $\alpha_0<\alpha_1<0$, then for each $a>0$ there exists a $\hat\xi(a)$ such that
$$\lim_{T\to\infty}\frac{1}{T}\log P_{\alpha_0}^{\delta}\Big(\frac{1}{T}\log\frac{dP_{\alpha_1}^{\delta}}{dP_{\alpha_0}^{\delta}}\Big|_{\mathcal F_{[0,T]}}\ge\hat\xi(a)\Big)=-a,$$
and when $a\in(0,\hat z_{\alpha_1})$,
$$\liminf_{T\to\infty}\frac{1}{T}\log P_{\alpha_1}^{\delta}\Big(\frac{1}{T}\log\frac{dP_{\alpha_1}^{\delta}}{dP_{\alpha_0}^{\delta}}\Big|_{\mathcal F_{[0,T]}}<\hat\xi(a)\Big)\ge-\hat I_{\alpha_1}(\hat\xi(a)),$$
when $a\in(\hat z_{\alpha_1},+\infty)$,
$$\liminf_{T\to\infty}\frac{1}{T}\log\Big(1-P_{\alpha_1}^{\delta}\Big(\frac{1}{T}\log\frac{dP_{\alpha_1}^{\delta}}{dP_{\alpha_0}^{\delta}}\Big|_{\mathcal F_{[0,T]}}<\hat\xi(a)\Big)\Big)\ge-\hat I_{\alpha_1}(\hat\xi(a)),$$
where
$$\hat z_{\alpha_1}=-\frac{(\alpha_0-\alpha_1)^2\delta}{4\alpha_1},\qquad \hat I_{\alpha_1}(z)=\begin{cases}\dfrac{\alpha_0^2-\alpha_1^2}{z+(\alpha_1-\alpha_0)\delta/2}\Big(\dfrac{\delta}{4}-\dfrac{\alpha_1(z+(\alpha_1-\alpha_0)\delta/2)}{\alpha_1^2-\alpha_0^2}\Big)^{2}, & z>\dfrac{(\alpha_0-\alpha_1)\delta}{2},\\[1ex]+\infty, & \text{otherwise}.\end{cases}$$

3. Moderate Deviations in Testing Squared Radial Ornstein-Uhlenbeck Model

In this section, we will prove Theorem 2.1. Let us introduce the log-likelihood ratio process of the squared radial Ornstein-Uhlenbeck model and study its moderate deviations.

By [7], the log-likelihood ratio process has the representation
$$\log\frac{dP_{\alpha_1}^{\delta}}{dP_{\alpha_0}^{\delta}}\Big|_{\mathcal F_{[0,T]}}=\frac{1}{2}(\alpha_1-\alpha_0)(X_T-\delta T)-\frac{\alpha_1^2-\alpha_0^2}{2}\int_0^T X_s\,ds. \tag{3.1}$$
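On a discretized observation grid, this representation can be evaluated by replacing the integral with a Riemann sum; the following is a hypothetical discretization sketch, not the paper's procedure.

```python
def log_likelihood_ratio(path, dt, alpha0, alpha1, delta):
    """Discretized log dP_{alpha1}/dP_{alpha0}|F_T:
    (alpha1-alpha0)/2 * (X_T - delta*T) - (alpha1^2-alpha0^2)/2 * int_0^T X_s ds,
    with the integral approximated by a left-endpoint Riemann sum.
    """
    T = dt * (len(path) - 1)
    integral = sum(path[:-1]) * dt
    return (0.5 * (alpha1 - alpha0) * (path[-1] - delta * T)
            - 0.5 * (alpha1 ** 2 - alpha0 ** 2) * integral)
```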

The following lemma (cf. [9]) plays an important role in this paper.

Lemma 3.1.

The law of $X_t$ under $P_\alpha^{\delta}$ is $\gamma(\delta/2,\,\alpha/(e^{2t\alpha}-1))$, where $\gamma(a,b)$ denotes the Gamma distribution
$$\gamma(a,b)(dx)=\frac{b^{a}x^{a-1}}{\Gamma(a)}e^{-bx}\,dx,\qquad x>0.$$
Moreover, for any $\theta<\alpha/(e^{2t\alpha}-1)$,
$$E_\alpha^{\delta}\big(e^{\theta X_t}\big)=\Big(1-\frac{\theta}{\alpha}\big(e^{2t\alpha}-1\big)\Big)^{-\delta/2}.$$
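Lemma 3.1 can be sanity-checked numerically: sampling from the stated Gamma law (shape $\delta/2$, scale $(e^{2t\alpha}-1)/\alpha$) and averaging $e^{\theta X_t}$ should reproduce the closed-form expression. A minimal sketch, with parameter values of our own:

```python
import math
import random

def mgf_formula(theta, alpha, delta, t):
    """Closed form of Lemma 3.1: E exp(theta*X_t) = (1 - (theta/alpha)*(e^{2*t*alpha}-1))^(-delta/2)."""
    return (1.0 - theta / alpha * (math.exp(2.0 * t * alpha) - 1.0)) ** (-delta / 2.0)

def mgf_monte_carlo(theta, alpha, delta, t, n=100_000, seed=1):
    """Sample X_t ~ Gamma(shape=delta/2, scale=(e^{2*t*alpha}-1)/alpha) and average exp(theta*X)."""
    rng = random.Random(seed)
    scale = (math.exp(2.0 * t * alpha) - 1.0) / alpha  # positive for alpha < 0
    return sum(math.exp(theta * rng.gammavariate(delta / 2.0, scale)) for _ in range(n)) / n
```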

Lemma 3.2.

For any closed subset $F$,
$$\limsup_{T\to\infty}\frac{T}{a^2(T)}\log P_{\alpha_0}^{\delta}\Big(\frac{1}{a(T)}\Big(\log\frac{dP_{\alpha_1}^{\delta}}{dP_{\alpha_0}^{\delta}}\Big|_{\mathcal F_{[0,T]}}-\frac{(\alpha_1-\alpha_0)^2\delta T}{4\alpha_0}\Big)\in F\Big)\le-\inf_{z\in F}\frac{-4\alpha_0^3z^2}{(\alpha_1^2-\alpha_0^2)^2\delta},$$
and for any open subset $G$,
$$\liminf_{T\to\infty}\frac{T}{a^2(T)}\log P_{\alpha_0}^{\delta}\Big(\frac{1}{a(T)}\Big(\log\frac{dP_{\alpha_1}^{\delta}}{dP_{\alpha_0}^{\delta}}\Big|_{\mathcal F_{[0,T]}}-\frac{(\alpha_1-\alpha_0)^2\delta T}{4\alpha_0}\Big)\in G\Big)\ge-\inf_{z\in G}\frac{-4\alpha_0^3z^2}{(\alpha_1^2-\alpha_0^2)^2\delta}.$$

Proof.

Let
$$\Lambda_T(y)=\log E_{\alpha_0}^{\delta}\exp\Big\{\frac{a(T)y}{T}\Big(\log\frac{dP_{\alpha_1}^{\delta}}{dP_{\alpha_0}^{\delta}}\Big|_{\mathcal F_{[0,T]}}-\frac{(\alpha_1-\alpha_0)^2\delta T}{4\alpha_0}\Big)\Big\}.$$
By (3.1), for any $\varphi<0$, we have
$$\Lambda_T(y)=-\frac{a(T)y\delta(\alpha_1-\alpha_0)^2}{4\alpha_0}+\log E_{\alpha_0}^{\delta}\exp\Big\{\lambda(X_T-\delta T)+u\int_0^T X_s\,ds\Big\}$$
$$=-\frac{a(T)y\delta(\alpha_1-\alpha_0)^2}{4\alpha_0}+\log E_{\varphi}^{\delta}\Big[\frac{dP_{\alpha_0}^{\delta}}{dP_{\varphi}^{\delta}}\exp\Big\{\lambda(X_T-\delta T)+u\int_0^T X_s\,ds\Big\}\Big]$$
$$=-\frac{a(T)y\delta(\alpha_1-\alpha_0)^2}{4\alpha_0}+\log E_{\varphi}^{\delta}\Big[\exp\Big\{\Big(\lambda+\frac{\alpha_0-\varphi}{2}\Big)(X_T-\delta T)+\Big(u-\frac{\alpha_0^2}{2}+\frac{\varphi^2}{2}\Big)\int_0^T X_s\,ds\Big\}\Big],$$
where
$$\lambda=\frac{a(T)y(\alpha_1-\alpha_0)}{2T},\qquad u=-\frac{a(T)y(\alpha_1^2-\alpha_0^2)}{2T}.$$
For $T$ large enough, $\alpha_0^2-2u>0$, so we can choose $\varphi=-\sqrt{\alpha_0^2-2u}$; then $\varphi<0$, the coefficient of $\int_0^T X_s\,ds$ vanishes, and, absorbing the term $-\lambda\delta T$ into the first summand,
$$\Lambda_T(y)=-\frac{a(T)y\delta(\alpha_1^2-\alpha_0^2)}{4\alpha_0}-\frac{\alpha_0-\varphi}{2}\delta T+\log E_{\varphi}^{\delta}\Big[\exp\Big\{\Big(\lambda+\frac{\alpha_0-\varphi}{2}\Big)X_T\Big\}\Big].$$

By Lemma 3.1, we have
$$\lim_{T\to\infty}\frac{T}{a^2(T)}\log E_{\varphi}^{\delta}\Big[\exp\Big\{\Big(\lambda+\frac{\alpha_0-\varphi}{2}\Big)X_T\Big\}\Big]=\lim_{T\to\infty}-\frac{T\delta}{2a^2(T)}\log\Big(1-\frac{1}{\varphi}\Big(\lambda+\frac{\alpha_0-\varphi}{2}\Big)\big(e^{2T\varphi}-1\big)\Big)=0.$$
Therefore,
$$\Lambda(y):=\lim_{T\to\infty}\frac{T}{a^2(T)}\Lambda_T(y)=\lim_{T\to\infty}\frac{T}{a^2(T)}\Big(-\frac{a(T)y\delta(\alpha_1^2-\alpha_0^2)}{4\alpha_0}-\frac{\alpha_0-\varphi}{2}\delta T\Big)$$
$$=\lim_{T\to\infty}\frac{T}{a^2(T)}\Big(-\frac{a(T)y\delta(\alpha_1^2-\alpha_0^2)}{4\alpha_0}-\frac{\alpha_0\delta T}{2}\Big(1-\sqrt{1+\frac{a(T)y(\alpha_1^2-\alpha_0^2)}{T\alpha_0^2}}\Big)\Big)=\frac{y^2}{16}\frac{(\alpha_1^2-\alpha_0^2)^2\delta}{-\alpha_0^3}.$$
Finally, the Gärtner-Ellis theorem (cf. [10]) implies the conclusion of Lemma 3.2.
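The square-root expansion behind the last limit can be checked numerically: for a large $T$ and a choice such as $a(T)=T^{3/4}$ (which satisfies both growth conditions), the pre-limit expression is already close to $\Lambda(y)$. A sketch with our own helper names and parameter choices:

```python
import math

def prelimit(y, alpha0, alpha1, delta, T, a_T):
    """(T/a(T)^2) * [ -a(T)*y*delta*(a1^2-a0^2)/(4*a0) - (a0*delta*T/2)*(1 - sqrt(1+x)) ],
    with x = a(T)*y*(a1^2-a0^2)/(T*a0^2), as in the proof of Lemma 3.2."""
    x = a_T * y * (alpha1 ** 2 - alpha0 ** 2) / (T * alpha0 ** 2)
    val = (-a_T * y * delta * (alpha1 ** 2 - alpha0 ** 2) / (4.0 * alpha0)
           - alpha0 * delta * T / 2.0 * (1.0 - math.sqrt(1.0 + x)))
    return T / a_T ** 2 * val

def limit_Lambda(y, alpha0, alpha1, delta):
    """Limiting value Lambda(y) = y^2 * (a1^2-a0^2)^2 * delta / (-16 * a0^3)."""
    return y ** 2 * (alpha1 ** 2 - alpha0 ** 2) ** 2 * delta / (-16.0 * alpha0 ** 3)
```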

Noting that $\log\frac{dP_{\alpha_1}^{\delta}}{dP_{\alpha_0}^{\delta}}\big|_{\mathcal F_{[0,T]}}=-\log\frac{dP_{\alpha_0}^{\delta}}{dP_{\alpha_1}^{\delta}}\big|_{\mathcal F_{[0,T]}}$, we also have the following result.

Lemma 3.3.

For any closed subset $F$,
$$\limsup_{T\to\infty}\frac{T}{a^2(T)}\log P_{\alpha_1}^{\delta}\Big(\frac{1}{a(T)}\Big(\log\frac{dP_{\alpha_1}^{\delta}}{dP_{\alpha_0}^{\delta}}\Big|_{\mathcal F_{[0,T]}}+\frac{(\alpha_1-\alpha_0)^2\delta T}{4\alpha_1}\Big)\in F\Big)\le-\inf_{z\in F}\frac{-4\alpha_1^3z^2}{(\alpha_1^2-\alpha_0^2)^2\delta},$$
and for any open subset $G$,
$$\liminf_{T\to\infty}\frac{T}{a^2(T)}\log P_{\alpha_1}^{\delta}\Big(\frac{1}{a(T)}\Big(\log\frac{dP_{\alpha_1}^{\delta}}{dP_{\alpha_0}^{\delta}}\Big|_{\mathcal F_{[0,T]}}+\frac{(\alpha_1-\alpha_0)^2\delta T}{4\alpha_1}\Big)\in G\Big)\ge-\inf_{z\in G}\frac{-4\alpha_1^3z^2}{(\alpha_1^2-\alpha_0^2)^2\delta}.$$

Proof of Theorem 2.1.

The first claim is a direct consequence of Lemma 3.2. Since
$$\frac{T}{4a(T)}\frac{(\alpha_1-\alpha_0)^2\delta}{\alpha_0}+\frac{T}{4a(T)}\frac{(\alpha_1-\alpha_0)^2\delta}{\alpha_1}\to-\infty\quad\text{as }T\to\infty,$$
by Lemma 3.3, the second claim also holds.

4. Large Deviations in Testing Squared Radial Ornstein-Uhlenbeck Model

In this section, we will prove Theorems 2.2 and 2.3. We first study the large deviations of the log-likelihood ratio process.

Lemma 4.1.

Assume $\alpha_1<\alpha_0<0$. Then for any closed subset $F$,
$$\limsup_{T\to\infty}\frac{1}{T}\log P_{\alpha_0}^{\delta}\Big(\frac{1}{T}\log\frac{dP_{\alpha_1}^{\delta}}{dP_{\alpha_0}^{\delta}}\Big|_{\mathcal F_{[0,T]}}\in F\Big)\le-\inf_{z\in F}I_{\alpha_0}(z),$$
and for any open subset $G$,
$$\liminf_{T\to\infty}\frac{1}{T}\log P_{\alpha_0}^{\delta}\Big(\frac{1}{T}\log\frac{dP_{\alpha_1}^{\delta}}{dP_{\alpha_0}^{\delta}}\Big|_{\mathcal F_{[0,T]}}\in G\Big)\ge-\inf_{z\in G}I_{\alpha_0}(z),$$
where
$$I_{\alpha_0}(z)=\begin{cases}\dfrac{\alpha_0^2-\alpha_1^2}{z+(\alpha_1-\alpha_0)\delta/2}\Big(\dfrac{\delta}{4}-\dfrac{\alpha_0(z+(\alpha_1-\alpha_0)\delta/2)}{\alpha_1^2-\alpha_0^2}\Big)^{2}, & z<\dfrac{(\alpha_0-\alpha_1)\delta}{2},\\[1ex]+\infty, & \text{otherwise}.\end{cases}$$

Proof.

Let $\Lambda_T(y)=\log E_{\alpha_0}^{\delta}\exp\big\{y\log\frac{dP_{\alpha_1}^{\delta}}{dP_{\alpha_0}^{\delta}}\big|_{\mathcal F_{[0,T]}}\big\}$. Then for $\varphi<0$, we have
$$\Lambda_T(y)=\log E_{\alpha_0}^{\delta}\exp\Big\{\lambda(X_T-\delta T)+u\int_0^T X_s\,ds\Big\}=\log E_{\varphi}^{\delta}\Big[\frac{dP_{\alpha_0}^{\delta}}{dP_{\varphi}^{\delta}}\exp\Big\{\lambda(X_T-\delta T)+u\int_0^T X_s\,ds\Big\}\Big]$$
$$=\log E_{\varphi}^{\delta}\Big[\exp\Big\{\Big(\lambda+\frac{\alpha_0-\varphi}{2}\Big)(X_T-\delta T)+\Big(u-\frac{\alpha_0^2}{2}+\frac{\varphi^2}{2}\Big)\int_0^T X_s\,ds\Big\}\Big],$$
where $\lambda=(\alpha_1-\alpha_0)y/2$ and $u=y(\alpha_0^2-\alpha_1^2)/2$.

Since $\alpha_0^2-2u>0$ for $y>\alpha_0^2/(\alpha_0^2-\alpha_1^2)$, we can choose $\varphi=-\sqrt{\alpha_0^2-2u}$ for each such $y$; then $\varphi<0$ and
$$\Lambda_T(y)=-\delta T\Big(\lambda+\frac{\alpha_0-\varphi}{2}\Big)+\log E_{\varphi}^{\delta}\big[e^{(\lambda+(\alpha_0-\varphi)/2)X_T}\big].$$
By Lemma 3.1, we get
$$\lim_{T\to\infty}\frac{1}{T}\log E_{\varphi}^{\delta}\big[e^{(\lambda+(\alpha_0-\varphi)/2)X_T}\big]=0.$$
Therefore,
$$\Lambda(y):=\lim_{T\to\infty}\frac{1}{T}\Lambda_T(y)=-\frac{\delta}{2}\Big((\alpha_1-\alpha_0)y+\alpha_0+\sqrt{\alpha_0^2+(\alpha_1^2-\alpha_0^2)y}\Big).$$

Since $\Lambda(y)$ is a strictly convex differentiable function on $\mathcal D_\Lambda=(\alpha_-,+\infty)$ with $\alpha_-=\alpha_0^2/(\alpha_0^2-\alpha_1^2)<0$ and $\lim_{y\downarrow\alpha_-}|\Lambda'(y)|=+\infty$, where $\mathcal D_\Lambda$ is the effective domain of $\Lambda$, we see that $\Lambda$ is steep. Finally, by
$$\sup_{y\in\mathbb R}\{zy-\Lambda(y)\}=\begin{cases}\dfrac{\alpha_0^2-\alpha_1^2}{z+(\alpha_1-\alpha_0)\delta/2}\Big(\dfrac{\delta}{4}-\dfrac{\alpha_0(z+(\alpha_1-\alpha_0)\delta/2)}{\alpha_1^2-\alpha_0^2}\Big)^{2}, & z<\dfrac{(\alpha_0-\alpha_1)\delta}{2},\\[1ex]+\infty, & \text{otherwise},\end{cases}$$
and the Gärtner-Ellis theorem, we complete the proof of this lemma.
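The closed form of the Legendre transform can be verified numerically by a brute-force grid search over the effective domain of $\Lambda$; the helper names, grid choices, and parameter values below are ours.

```python
import math

def Lambda(y, alpha0, alpha1, delta):
    """Limiting cumulant generating function from the proof of Lemma 4.1
    (finite where alpha0^2 + (alpha1^2 - alpha0^2)*y > 0)."""
    q = alpha0 ** 2 + (alpha1 ** 2 - alpha0 ** 2) * y
    return -delta / 2.0 * ((alpha1 - alpha0) * y + alpha0 + math.sqrt(q))

def I_alpha0(z, alpha0, alpha1, delta):
    """Closed-form rate function of Lemma 4.1 (case alpha1 < alpha0 < 0)."""
    if z >= (alpha0 - alpha1) * delta / 2.0:
        return math.inf
    w = z + (alpha1 - alpha0) * delta / 2.0
    return (alpha0 ** 2 - alpha1 ** 2) / w * (delta / 4.0 - alpha0 * w / (alpha1 ** 2 - alpha0 ** 2)) ** 2

def legendre_numeric(z, alpha0, alpha1, delta, n=200_000):
    """Grid-search approximation of sup_y { z*y - Lambda(y) } over (y_min, y_min + 10)."""
    y_min = alpha0 ** 2 / (alpha0 ** 2 - alpha1 ** 2)  # left edge of the domain (alpha1 < alpha0 < 0)
    best = -math.inf
    for i in range(1, n):
        y = y_min + 1e-9 + i * (10.0 / n)
        best = max(best, z * y - Lambda(y, alpha0, alpha1, delta))
    return best
```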

Similarly, when $\alpha_0<\alpha_1<0$, we have
$$\Lambda(y):=\lim_{T\to\infty}\frac{1}{T}\Lambda_T(y)=-\frac{\delta}{2}\Big((\alpha_1-\alpha_0)y+\alpha_0+\sqrt{\alpha_0^2+(\alpha_1^2-\alpha_0^2)y}\Big).$$
Since $\Lambda(y)$ is a strictly convex differentiable function on $\mathcal D_\Lambda=(-\infty,\alpha_+)$ with $\alpha_+=\alpha_0^2/(\alpha_0^2-\alpha_1^2)>0$ and $\lim_{y\uparrow\alpha_+}|\Lambda'(y)|=+\infty$, we see that $\Lambda$ is steep. By the Gärtner-Ellis theorem, we also have the following result.

Lemma 4.2.

Assume $\alpha_0<\alpha_1<0$. Then for any closed subset $F$,
$$\limsup_{T\to\infty}\frac{1}{T}\log P_{\alpha_0}^{\delta}\Big(\frac{1}{T}\log\frac{dP_{\alpha_1}^{\delta}}{dP_{\alpha_0}^{\delta}}\Big|_{\mathcal F_{[0,T]}}\in F\Big)\le-\inf_{z\in F}\hat I_{\alpha_0}(z),$$
and for any open subset $G$,
$$\liminf_{T\to\infty}\frac{1}{T}\log P_{\alpha_0}^{\delta}\Big(\frac{1}{T}\log\frac{dP_{\alpha_1}^{\delta}}{dP_{\alpha_0}^{\delta}}\Big|_{\mathcal F_{[0,T]}}\in G\Big)\ge-\inf_{z\in G}\hat I_{\alpha_0}(z),$$
where
$$\hat I_{\alpha_0}(z)=\begin{cases}\dfrac{\alpha_0^2-\alpha_1^2}{z+(\alpha_1-\alpha_0)\delta/2}\Big(\dfrac{\delta}{4}-\dfrac{\alpha_0(z+(\alpha_1-\alpha_0)\delta/2)}{\alpha_1^2-\alpha_0^2}\Big)^{2}, & z>\dfrac{(\alpha_0-\alpha_1)\delta}{2},\\[1ex]+\infty, & \text{otherwise}.\end{cases}$$
Note that $\log\frac{dP_{\alpha_1}^{\delta}}{dP_{\alpha_0}^{\delta}}\big|_{\mathcal F_{[0,T]}}=-\log\frac{dP_{\alpha_0}^{\delta}}{dP_{\alpha_1}^{\delta}}\big|_{\mathcal F_{[0,T]}}$. Then we have the following lemmas.

Lemma 4.3.

Assume $\alpha_1<\alpha_0<0$. Then for any closed subset $F$,
$$\limsup_{T\to\infty}\frac{1}{T}\log P_{\alpha_1}^{\delta}\Big(\frac{1}{T}\log\frac{dP_{\alpha_1}^{\delta}}{dP_{\alpha_0}^{\delta}}\Big|_{\mathcal F_{[0,T]}}\in F\Big)\le-\inf_{z\in F}I_{\alpha_1}(z),$$
and for any open subset $G$,
$$\liminf_{T\to\infty}\frac{1}{T}\log P_{\alpha_1}^{\delta}\Big(\frac{1}{T}\log\frac{dP_{\alpha_1}^{\delta}}{dP_{\alpha_0}^{\delta}}\Big|_{\mathcal F_{[0,T]}}\in G\Big)\ge-\inf_{z\in G}I_{\alpha_1}(z),$$
where
$$I_{\alpha_1}(z)=\begin{cases}\dfrac{\alpha_0^2-\alpha_1^2}{z+(\alpha_1-\alpha_0)\delta/2}\Big(\dfrac{\delta}{4}-\dfrac{\alpha_1(z+(\alpha_1-\alpha_0)\delta/2)}{\alpha_1^2-\alpha_0^2}\Big)^{2}, & z<\dfrac{(\alpha_0-\alpha_1)\delta}{2},\\[1ex]+\infty, & \text{otherwise}.\end{cases}$$

Lemma 4.4.

Assume $\alpha_0<\alpha_1<0$. Then for any closed subset $F$,
$$\limsup_{T\to\infty}\frac{1}{T}\log P_{\alpha_1}^{\delta}\Big(\frac{1}{T}\log\frac{dP_{\alpha_1}^{\delta}}{dP_{\alpha_0}^{\delta}}\Big|_{\mathcal F_{[0,T]}}\in F\Big)\le-\inf_{z\in F}\hat I_{\alpha_1}(z),$$
and for any open subset $G$,
$$\liminf_{T\to\infty}\frac{1}{T}\log P_{\alpha_1}^{\delta}\Big(\frac{1}{T}\log\frac{dP_{\alpha_1}^{\delta}}{dP_{\alpha_0}^{\delta}}\Big|_{\mathcal F_{[0,T]}}\in G\Big)\ge-\inf_{z\in G}\hat I_{\alpha_1}(z),$$
where
$$\hat I_{\alpha_1}(z)=\begin{cases}\dfrac{\alpha_0^2-\alpha_1^2}{z+(\alpha_1-\alpha_0)\delta/2}\Big(\dfrac{\delta}{4}-\dfrac{\alpha_1(z+(\alpha_1-\alpha_0)\delta/2)}{\alpha_1^2-\alpha_0^2}\Big)^{2}, & z>\dfrac{(\alpha_0-\alpha_1)\delta}{2},\\[1ex]+\infty, & \text{otherwise}.\end{cases}$$

From the expressions of $I_{\alpha_0}(z)$, $I_{\alpha_1}(z)$, $\hat I_{\alpha_0}(z)$, and $\hat I_{\alpha_1}(z)$, the following lemma is immediate.

Lemma 4.5.

(i) $I_{\alpha_0}(z)=0$ iff $z=z_{\alpha_0}=\frac{(\alpha_0-\alpha_1)^2\delta}{4\alpha_0}$, and $I_{\alpha_1}(z)=0$ iff $z=z_{\alpha_1}=-\frac{(\alpha_1-\alpha_0)^2\delta}{4\alpha_1}$; for all $z<(\alpha_0-\alpha_1)\delta/2$, $I_{\alpha_0}(z)=I_{\alpha_1}(z)+z$;

(ii) $\hat I_{\alpha_0}(z)=0$ iff $z=\hat z_{\alpha_0}=\frac{(\alpha_0-\alpha_1)^2\delta}{4\alpha_0}$, and $\hat I_{\alpha_1}(z)=0$ iff $z=\hat z_{\alpha_1}=-\frac{(\alpha_1-\alpha_0)^2\delta}{4\alpha_1}$; for all $z>(\alpha_0-\alpha_1)\delta/2$, $\hat I_{\alpha_0}(z)=\hat I_{\alpha_1}(z)+z$.
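Both identities of Lemma 4.5(i) are easy to confirm numerically for sample parameters (a sketch; the parameter values are arbitrary):

```python
def rate(z, a, alpha0, alpha1, delta):
    """I_a(z) for a in {alpha0, alpha1}, case alpha1 < alpha0 < 0 (domain z < (alpha0-alpha1)*delta/2)."""
    w = z + (alpha1 - alpha0) * delta / 2.0
    return (alpha0 ** 2 - alpha1 ** 2) / w * (delta / 4.0 - a * w / (alpha1 ** 2 - alpha0 ** 2)) ** 2

alpha0, alpha1, delta = -1.0, -2.0, 2.0
z_a0 = (alpha0 - alpha1) ** 2 * delta / (4.0 * alpha0)    # zero of I_{alpha0}
z_a1 = -(alpha1 - alpha0) ** 2 * delta / (4.0 * alpha1)   # zero of I_{alpha1}
# largest violation of I_{alpha0}(z) = I_{alpha1}(z) + z over a few points in the domain
gap = max(abs(rate(z, alpha0, alpha0, alpha1, delta)
              - rate(z, alpha1, alpha0, alpha1, delta) - z)
          for z in (-0.8, -0.3, 0.1, 0.6))
```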

Proof of Theorems 2.2 and 2.3.

Since the proofs of the two theorems are similar, we only prove Theorem 2.2. The function $I_{\alpha_0}(z)$ is increasing on $(z_{\alpha_0},(\alpha_0-\alpha_1)\delta/2)$ and $I_{\alpha_0}(z_{\alpha_0})=0$. Therefore, for each $a>0$, by Lemma 4.1, we can choose $\xi(a)\in(z_{\alpha_0},(\alpha_0-\alpha_1)\delta/2)$ such that
$$\lim_{T\to\infty}\frac{1}{T}\log P_{\alpha_0}^{\delta}\Big(\frac{1}{T}\log\frac{dP_{\alpha_1}^{\delta}}{dP_{\alpha_0}^{\delta}}\Big|_{\mathcal F_{[0,T]}}\ge\xi(a)\Big)=-a.$$

It is clear that $\xi(a)$ is increasing in $a>0$, and by Lemma 4.5 we get $I_{\alpha_0}(z_{\alpha_1})=z_{\alpha_1}$, which implies $\xi(z_{\alpha_1})=z_{\alpha_1}$. Hence for $a\in(0,z_{\alpha_1})$ we have $\xi(a)\in(z_{\alpha_0},z_{\alpha_1})$, and since $I_{\alpha_1}(z)$ is nonincreasing for $z\le\xi(a)$, we get $I_{\alpha_1}(\xi(a))=\inf\{I_{\alpha_1}(z):z\le\xi(a)\}$. Similarly, for $a\in(z_{\alpha_1},+\infty)$ we have $\xi(a)\in(z_{\alpha_1},(\alpha_0-\alpha_1)\delta/2)$, and since $I_{\alpha_1}(z)$ is nondecreasing for $z\ge\xi(a)$, we get $I_{\alpha_1}(\xi(a))=\inf\{I_{\alpha_1}(z):z\ge\xi(a)\}$, which completes the proof of Theorem 2.2.
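Since $I_{\alpha_0}$ is continuous and increasing on $(z_{\alpha_0},(\alpha_0-\alpha_1)\delta/2)$, the threshold $\xi(a)$ can be computed by bisection; a sketch for the case $\alpha_1<\alpha_0<0$ of Theorem 2.2, with helper names of our own:

```python
def xi(a, alpha0, alpha1, delta, tol=1e-12):
    """Solve I_{alpha0}(xi(a)) = a by bisection on (z_{alpha0}, (alpha0-alpha1)*delta/2),
    where I_{alpha0} is increasing (case alpha1 < alpha0 < 0)."""
    def rate(z):
        w = z + (alpha1 - alpha0) * delta / 2.0
        return (alpha0 ** 2 - alpha1 ** 2) / w * (delta / 4.0 - alpha0 * w / (alpha1 ** 2 - alpha0 ** 2)) ** 2
    lo = (alpha0 - alpha1) ** 2 * delta / (4.0 * alpha0)  # z_{alpha0}, where the rate is 0
    hi = (alpha0 - alpha1) * delta / 2.0 - 1e-9           # right edge, where the rate blows up
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if rate(mid) < a:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

For example, with $\alpha_0=-1$, $\alpha_1=-2$, $\delta=2$ one has $z_{\alpha_1}=0.25$, and by Lemma 4.5 the fixed point $\xi(z_{\alpha_1})=z_{\alpha_1}$ should be recovered.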

Acknowledgments

The authors would like to express their gratitude to Professor F. Q. Gao and the reviewer for their valuable comments.

References

[1] J. Neyman and E. S. Pearson, "On the problem of the most efficient tests of statistical hypotheses," Philosophical Transactions of the Royal Society of London, vol. 231, pp. 289-337, 1933.
[2] R. E. Blahut, "Hypothesis testing and information theory," IEEE Transactions on Information Theory, vol. 20, no. 4, pp. 405-417, 1974.
[3] T. Chiyonobu, "Hypothesis testing for signal detection problem and large deviations," Nagoya Mathematical Journal, vol. 162, pp. 187-203, 2001.
[4] P. V. Gapeev and U. Küchler, "On large deviations in testing Ornstein-Uhlenbeck-type models," Statistical Inference for Stochastic Processes, vol. 11, no. 2, pp. 143-155, 2008.
[5] T. S. Han and K. Kobayashi, "The strong converse theorem for hypothesis testing," IEEE Transactions on Information Theory, vol. 35, no. 1, pp. 178-180, 1989.
[6] K. Nakagawa and F. Kanaya, "On the converse theorem in statistical hypothesis testing for Markov chains," IEEE Transactions on Information Theory, vol. 39, no. 2, pp. 629-633, 1993.
[7] M. Zani, "Large deviations for squared radial Ornstein-Uhlenbeck processes," Stochastic Processes and Their Applications, vol. 102, no. 1, pp. 25-42, 2002.
[8] F. Q. Gao and H. Jiang, "Moderate deviations for squared Ornstein-Uhlenbeck process," Statistics & Probability Letters, vol. 79, no. 11, pp. 1378-1386, 2009.
[9] J. Pitman and M. Yor, "A decomposition of Bessel bridges," Zeitschrift für Wahrscheinlichkeitstheorie und Verwandte Gebiete, vol. 59, no. 4, pp. 425-457, 1982.
[10] A. Dembo and O. Zeitouni, Large Deviations Techniques and Applications, Springer, New York, NY, USA, 1998.