Let $X_{n1},\ldots,X_{nn}$ be observations from a chirp-type statistical model
$$X_{nt} = A\cos\big(\omega t+(\Delta/n)t^{2}\big)+B\sin\big(\omega t+(\Delta/n)t^{2}\big)+\epsilon_t,$$
where $\epsilon_t$ is stationary noise. We consider a method of estimating the parameters $A$, $B$, $\omega$, $\Delta$, and $\nu$ (where $\nu$ is the variance of the $\epsilon_t$'s) that is essentially an approximate least-squares method. The main advantage of the proposed approach is that no distributional assumptions are required on the noise beyond independence, zero mean, and a finite fourth moment. We make use of three theorems, established in earlier work, associated with the kernel $\sum_{t=1}^{n}e^{i(ut+vt^{2})}$, and use them to prove, under certain conditions, the consistency of the estimators.
1. Introduction
In 1973 Walker [1] considered the problem of estimating the parameters of a sine wave,

(1) $X_t = A\cos\omega t+B\sin\omega t+\epsilon_t,$

where the $X_t$'s are the observations and the $\epsilon_t$'s are independent, identically distributed random variables with mean zero and finite but unknown variance $\nu$. The parameters $A$, $B$, $\omega$, and $\nu$ are assumed unknown and are to be estimated. He showed that, as $n\to\infty$, the estimators $\hat A_n$, $\hat B_n$, $\hat\omega_n$, and $\hat\nu_n$ converge in probability to the true values $A_0$, $B_0$, $\omega_0$, and $\nu_0$, respectively; that is, the estimators are consistent. He then showed that the differences between the estimators and the values they estimate, suitably normalized, have a joint normal limiting distribution.
Suppose now that the frequency of the above sine wave changes with time. If $\omega$ is the initial frequency, the frequency at time $t$ may be written as

(2) $\omega_t = \omega+\alpha_t.$

The simplest case is one in which $\alpha_t$ is linear, that is, $\alpha_t=\alpha t$. Then $\omega_t t = \omega t+\alpha t^{2}$. This leads to the model

(3) $X_t = A\cos(\omega t+\alpha t^{2})+B\sin(\omega t+\alpha t^{2})+\epsilon_t.$

The parameters for this model are $A$, $B$, $\omega$, $\nu$, and $\alpha$. It is sometimes called the "chirp" model (see [2, 3]). Several authors have considered parameter estimation for the chirp model [4–7], and different approaches to the estimation of chirp parameters in similar kinds of models are found in [8–12].
Our approach, however, is entirely different from these. We make no distributional assumptions on the noise beyond independence, zero mean, and the moment conditions stated below. The method used to estimate the parameters and to prove the consistency of the estimators is similar to that of Walker in [13]. We consider not only estimates of the parameters $A$, $B$, $\omega$, and $\nu$ but also an estimate $\hat\alpha_n$ of $\alpha$. Although this leads to an interesting problem in estimation, model (3) is somewhat unrealistic: over the course of $n$ observations, the frequency changes from $\omega$ to $\omega+\alpha n$, so as $n$ goes to $\infty$ the sine wave oscillates faster and faster (unless $\alpha=0$); its frequency becomes infinite, and its period approaches $0$. We therefore change the model by assuming that the change in frequency over the course of the $n$ observations is a number $\Delta$ independent of $n$. We also assume, as in (3), that the change in frequency is linear. This leads to the model or, more precisely, sequence of models

(4) $X_{nt} = A\cos\Big(\omega t+\frac{\Delta}{n}t^{2}\Big)+B\sin\Big(\omega t+\frac{\Delta}{n}t^{2}\Big)+\epsilon_t,\qquad 1\le t\le n.$

We assume, as Walker does in his 1971 paper [13], that the $\epsilon_t$'s are independent and identically distributed with mean zero and variance $\nu$. Thus the parameters are $A$, $B$, $\omega$, $\nu$, and $\Delta$, and the estimators of the parameters are $\hat A_n$, $\hat B_n$, $\hat\omega_n$, $\hat\nu_n$, and $\hat\Delta_n$. Our objective is to show that these estimators are consistent.
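To make the sequence of models (4) concrete, here is a short simulation sketch in Python (the parameter values and the Gaussian choice for $\epsilon_t$ are illustrative; the theory only requires independent, zero-mean noise with finite fourth moment):

```python
import numpy as np

def simulate_chirp(n, A=2.0, B=1.0, omega=0.9, Delta=0.4, nu=0.25, seed=0):
    """Draw X_{nt} = A cos(wt + (D/n)t^2) + B sin(wt + (D/n)t^2) + eps_t,
    t = 1, ..., n, with i.i.d. N(0, nu) noise as one convenient choice."""
    rng = np.random.default_rng(seed)
    t = np.arange(1, n + 1)
    phase = omega * t + (Delta / n) * t ** 2
    return A * np.cos(phase) + B * np.sin(phase) + rng.normal(0.0, np.sqrt(nu), n)

x = simulate_chirp(1000)
```

Note that, in line with the discussion above, increasing $n$ does not increase the total frequency sweep: the factor $\Delta/n$ keeps the change in frequency over the observation window fixed.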
We make use of the following three theorems which we have established.
Theorem 1 (see [14]).
Let $S_n(\omega,\alpha) = \sum_{t=1}^{n}t^{\beta}e^{i(\omega t+\alpha t^{2})}$, where $\beta$ is a nonnegative real number. Then $S_n(\omega,\alpha) = o(n^{\beta+1})$ for $\alpha$ not a rational multiple of $\pi$, uniformly in $\omega$.
Theorem 2 (see [15]).
Let $(\epsilon_t)$ be a sequence of independent random variables such that $E(\epsilon_t)=0$, $E(\epsilon_t^{2})=\nu<\infty$, and $E(\epsilon_t^{4})=\sigma<\infty$. Then

(5) $\max_{0\le\omega<\pi,\ 0\le\alpha<\pi}\Big|\sum_{t=1}^{n}t^{\beta}\epsilon_te^{i(\omega t+\alpha t^{2})}\Big| = O_p\big(n^{\beta+7/8}\big).$
Theorem 3 (see [16]).
For any sufficiently small $\delta_1>0$ and $\delta_2>0$,

(6) $\overline{\lim_{n\to\infty}}\ \max\Big\{\frac{1}{n}\Big|\sum_{t=1}^{n}e^{i(ut+vt^{2})}\Big| : n^{-1}\delta_1\le u\le\pi,\ 0\le v\le\pi,\ \text{or}\ 0\le u\le\pi,\ n^{-2}\delta_2\le v\le\pi\Big\} < 1.$
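All three theorems concern the quadratic-phase kernel $\sum_{t=1}^{n}e^{i(ut+vt^{2})}$. A quick numerical check in Python (the point $(u,v)=(0.7,0.3)$ is an arbitrary illustrative choice) shows the normalized kernel shrinking as $n$ grows, in line with Theorem 1 for $\beta=0$:

```python
import numpy as np

def kernel_M(n, u, v):
    """M_n(u, v) = sum_{t=1}^n exp(i(u t + v t^2))."""
    t = np.arange(1, n + 1)
    return np.exp(1j * (u * t + v * t ** 2)).sum()

# normalized kernel at an arbitrary nonzero (u, v); decays as n grows
for n in (100, 1000, 10000):
    print(n, abs(kernel_M(n, 0.7, 0.3)) / n)
```

At $(u,v)=(0,0)$ the kernel is exactly $n$, which is why the maximum in Theorem 3 excludes a shrinking neighborhood of the origin.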
2. Estimation of the Parameters
If the $\epsilon_t$'s are normally distributed, the likelihood function of the observations $X_{n1},\ldots,X_{nn}$ is

(7) $f(x_{n1},\ldots,x_{nn}) = (2\pi\nu)^{-n/2}\exp\Big\{-\frac{1}{2\nu}\sum_{t=1}^{n}\big(x_{nt}-A\cos(\omega t+(\Delta/n)t^{2})-B\sin(\omega t+(\Delta/n)t^{2})\big)^{2}\Big\}.$

The log likelihood function of the observations $X_{n1},\ldots,X_{nn}$ is then

(8) $L_n(A,B,\omega,\Delta,\nu) = -\frac{n}{2}\log(2\pi\nu)-\frac{1}{2}\,\frac{S_n(A,B,\omega,\Delta)}{\nu},$

where

(9) $S_n(A,B,\omega,\Delta) = \sum_{t=1}^{n}\big(x_{nt}-A\cos(\omega t+(\Delta/n)t^{2})-B\sin(\omega t+(\Delta/n)t^{2})\big)^{2}.$
Now consider $S_n(A,B,\omega,\Delta)$:

(10) $S_n(A,B,\omega,\Delta) = \sum_{t=1}^{n}\big(X_{nt}-A\cos(\omega t+(\Delta/n)t^{2})-B\sin(\omega t+(\Delta/n)t^{2})\big)^{2} = \sum_{t=1}^{n}X_{nt}^{2} - 2\sum_{t=1}^{n}X_{nt}\big(A\cos(\omega t+(\Delta/n)t^{2})+B\sin(\omega t+(\Delta/n)t^{2})\big) + \sum_{t=1}^{n}\big(A^{2}\cos^{2}(\omega t+(\Delta/n)t^{2})+B^{2}\sin^{2}(\omega t+(\Delta/n)t^{2})+2AB\cos(\omega t+(\Delta/n)t^{2})\sin(\omega t+(\Delta/n)t^{2})\big).$
Using the identities $\cos2\theta = 2\cos^{2}\theta-1$, $\cos2\theta = 1-2\sin^{2}\theta$, and $\sin2\theta = 2\sin\theta\cos\theta$ we obtain

(11) $S_n(A,B,\omega,\Delta) = \sum_{t=1}^{n}X_{nt}^{2} - 2\sum_{t=1}^{n}X_{nt}\big(A\cos(\omega t+(\Delta/n)t^{2})+B\sin(\omega t+(\Delta/n)t^{2})\big) + \frac{n}{2}(A^{2}+B^{2}) + \frac{1}{2}(A^{2}-B^{2})\sum_{t=1}^{n}\cos\Big(2\omega t+\frac{2\Delta}{n}t^{2}\Big) + AB\sum_{t=1}^{n}\sin\Big(2\omega t+\frac{2\Delta}{n}t^{2}\Big).$

By Theorem 1, the last two sums in (11) are of order $o(n)$. Therefore if we let

(12) $U_n(A,B,\omega,\Delta) = \sum_{t=1}^{n}X_{nt}^{2} - 2\sum_{t=1}^{n}X_{nt}\big(A\cos(\omega t+(\Delta/n)t^{2})+B\sin(\omega t+(\Delta/n)t^{2})\big) + \frac{n}{2}(A^{2}+B^{2}),$
then $U_n$ is an approximation of $S_n$; more precisely, $U_n-S_n = o(n)$. Thus

(13) $L_n^{*}(A,B,\omega,\Delta,\nu) = -\frac{n}{2}\log(2\pi\nu)-\frac{1}{2}\,\frac{U_n(A,B,\omega,\Delta)}{\nu}$

is an approximation of the log likelihood function $L_n(A,B,\omega,\Delta,\nu)$. Now fix $\nu$ in (13). We maximize $L_n^{*}(A,B,\omega,\Delta,\nu)$ to obtain the estimates for $A$, $B$, $\omega$, and $\Delta$; that is, we minimize $U_n(A,B,\omega,\Delta)$ over the region $\mathbb{R}\times\mathbb{R}\times[0,\pi/2]\times[-\pi/4,\pi/4]$. Now fix $\omega$ and $\Delta$. If $|A|\to\infty$ or $|B|\to\infty$ then $U_n(A,B,\omega,\Delta)\to\infty$. Since $U_n(A,B,\omega,\Delta)$ is continuous in $(A,B)$, the minimum is achieved at a point $(\hat A_n(\omega,\Delta),\hat B_n(\omega,\Delta))$, at which the partial derivatives $\partial U_n/\partial A$ and $\partial U_n/\partial B$ must vanish. Thus the estimators of $A$ and $B$ are solutions of the equations

(14) $\frac{\partial U_n}{\partial A} = -2\sum_{t=1}^{n}X_{nt}\cos\Big(\omega t+\frac{\Delta}{n}t^{2}\Big)+n\hat A = 0,\qquad \frac{\partial U_n}{\partial B} = -2\sum_{t=1}^{n}X_{nt}\sin\Big(\omega t+\frac{\Delta}{n}t^{2}\Big)+n\hat B = 0.$

The solution to these equations is given by

(15) $\hat A_n(\omega,\Delta) = \frac{2}{n}\sum_{t=1}^{n}X_{nt}\cos\Big(\omega t+\frac{\Delta}{n}t^{2}\Big),$

(16) $\hat B_n(\omega,\Delta) = \frac{2}{n}\sum_{t=1}^{n}X_{nt}\sin\Big(\omega t+\frac{\Delta}{n}t^{2}\Big).$

To obtain estimates for $\omega$ and $\Delta$ we substitute $\hat A_n(\omega,\Delta)$ and $\hat B_n(\omega,\Delta)$ for $A$ and $B$, respectively, in (12) and minimize $U_n(\hat A_n(\omega,\Delta),\hat B_n(\omega,\Delta),\omega,\Delta)$ as a function of $\omega$ and $\Delta$. This last expression is equal to

(17) $\sum_{t=1}^{n}X_{nt}^{2}-2\cdot\frac{2}{n}\Big|\sum_{t=1}^{n}X_{nt}e^{i(\omega t+(\Delta/n)t^{2})}\Big|^{2}+\frac{2}{n}\Big|\sum_{t=1}^{n}X_{nt}e^{i(\omega t+(\Delta/n)t^{2})}\Big|^{2} = \sum_{t=1}^{n}X_{nt}^{2}-\frac{2}{n}\Big|\sum_{t=1}^{n}X_{nt}e^{i(\omega t+(\Delta/n)t^{2})}\Big|^{2},$

so minimizing it with respect to $\omega$ and $\Delta$ is the same as maximizing

(18) $I_n\Big(\omega,\frac{\Delta}{n}\Big) = \frac{2}{n}\Big|\sum_{t=1}^{n}X_{nt}e^{i(\omega t+(\Delta/n)t^{2})}\Big|^{2}.$

Since $(\omega,\Delta)$ varies over a compact set, there is a point $(\hat\omega_n,\hat\Delta_n)$ which maximizes this expression. We then estimate $A$ and $B$ by

(19) $\hat A_n = \hat A_n(\hat\omega_n,\hat\Delta_n),\qquad \hat B_n = \hat B_n(\hat\omega_n,\hat\Delta_n).$

The minimum value of $U_n(A,B,\omega,\Delta)$ is attained at $(\hat A_n,\hat B_n,\hat\omega_n,\hat\Delta_n)$.
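The estimation scheme above can be sketched end-to-end in Python: simulate (4), maximize (18) by a coarse grid search followed by local refinement, and then evaluate (15), (16), and (19) at the maximizer. The true parameter values, grid sizes, and refinement schedule below are illustrative choices, not prescriptions from the paper:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 400
A0, B0, w0, D0, nu0 = 2.0, 1.0, 0.9, 0.4, 0.1   # illustrative true values
t = np.arange(1, n + 1)
x = (A0 * np.cos(w0 * t + (D0 / n) * t ** 2)
     + B0 * np.sin(w0 * t + (D0 / n) * t ** 2)
     + rng.normal(0.0, np.sqrt(nu0), n))

def I_n(w, d):
    """I_n(w, d/n) = (2/n)|sum_t X_t exp(i(w t + (d/n) t^2))|^2, as in (18)."""
    return 2 / n * abs(np.sum(x * np.exp(1j * (w * t + (d / n) * t ** 2)))) ** 2

# coarse grid over [0, pi/2] x [-pi/4, pi/4]; S[j, k] is the inner sum in (18)
omegas = np.linspace(0.0, np.pi / 2, n)
deltas = np.linspace(-np.pi / 4, np.pi / 4, n)
S = (np.exp(1j * np.outer(omegas, t)) * x) @ np.exp(1j * np.outer(deltas / n, t ** 2)).T
jw, kd = np.unravel_index(np.argmax(np.abs(S)), S.shape)
w_hat, d_hat = omegas[jw], deltas[kd]

# two rounds of local refinement around the coarse maximizer
dw, dd = omegas[1] - omegas[0], deltas[1] - deltas[0]
for _ in range(2):
    ws = np.linspace(w_hat - 2 * dw, w_hat + 2 * dw, 61)
    ds = np.linspace(d_hat - 2 * dd, d_hat + 2 * dd, 61)
    w_hat, d_hat = max(((w, d) for w in ws for d in ds), key=lambda p: I_n(*p))
    dw, dd = dw / 15, dd / 15

# (15), (16), (19): amplitude estimates at the maximizing (w_hat, d_hat)
ph = w_hat * t + (d_hat / n) * t ** 2
A_hat = 2 / n * np.sum(x * np.cos(ph))
B_hat = 2 / n * np.sum(x * np.sin(ph))
print(w_hat, d_hat, A_hat, B_hat)   # near the true 0.9, 0.4, 2.0, 1.0
```

The refinement step matters: $\hat\omega_n$ is consistent at the $o_p(n^{-1})$ scale, so a single grid much coarser than $1/n$ in $\omega$ would wash out the amplitude estimates.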
Now we obtain an estimator $\hat\nu_n$ of $\nu$. If $\nu$ were zero then, since $E(\epsilon_t)=0$ and $E(\epsilon_t^{2})=0$, we would have $\epsilon_t=0$ almost surely, and there would be no randomness in the model. We assume, then, that $0<\nu<\infty$. To obtain $\hat\nu_n$ we substitute $\hat A_n$, $\hat B_n$, $\hat\omega_n$, and $\hat\Delta_n$ in (13) and maximize $L_n^{*}(\hat A_n,\hat B_n,\hat\omega_n,\hat\Delta_n,\nu)$ over $\nu$. From (13),

(20) $L_n^{*}(\hat A_n,\hat B_n,\hat\omega_n,\hat\Delta_n,\nu) = -\frac{n}{2}\big(\log2\pi+\log\nu\big)-\frac{1}{2}\,\frac{U_n(\hat A_n,\hat B_n,\hat\omega_n,\hat\Delta_n)}{\nu}.$

We will show that $L_n^{*}(\hat A_n,\hat B_n,\hat\omega_n,\hat\Delta_n,\nu)$, as a function of $\nu$, achieves its maximum provided $U_n(\hat A_n,\hat B_n,\hat\omega_n,\hat\Delta_n)>0$, and that this quantity is indeed positive with high probability for large values of $n$. From (4) and (9) we have

(21) $S_n(\hat A_n,\hat B_n,\hat\omega_n,\hat\Delta_n) = \sum_{t=1}^{n}\epsilon_t^{2}.$

Using (11) and (12) we have

(22) $S_n(\hat A_n,\hat B_n,\hat\omega_n,\hat\Delta_n) = U_n(\hat A_n,\hat B_n,\hat\omega_n,\hat\Delta_n) + \frac{1}{2}\big(\hat A_n^{2}-\hat B_n^{2}\big)\sum_{t=1}^{n}\cos\Big(2\hat\omega_nt+\frac{2\hat\Delta_n}{n}t^{2}\Big) + \hat A_n\hat B_n\sum_{t=1}^{n}\sin\Big(2\hat\omega_nt+\frac{2\hat\Delta_n}{n}t^{2}\Big).$

Using the mean value theorem,

(23) $\frac{1}{n}\sum_{t=1}^{n}\cos\Big(2\hat\omega_nt+\frac{2\hat\Delta_n}{n}t^{2}\Big)-\frac{1}{n}\sum_{t=1}^{n}\cos\Big(2\omega_0t+\frac{2\Delta_0}{n}t^{2}\Big) = -\frac{2}{n}\sum_{t=1}^{n}t\sin\Big(2\tilde\omega_nt+\frac{2\tilde\Delta_n}{n}t^{2}\Big)(\hat\omega_n-\omega_0)-\frac{2}{n^{2}}\sum_{t=1}^{n}t^{2}\sin\Big(2\tilde\omega_nt+\frac{2\tilde\Delta_n}{n}t^{2}\Big)(\hat\Delta_n-\Delta_0),$

where $(\tilde\omega_n,\tilde\Delta_n/n)$ is a point on the line segment connecting $(\omega_0,\Delta_0/n)$ and $(\hat\omega_n,\hat\Delta_n/n)$.
Thus

(24) $\Big|\frac{1}{n}\sum_{t=1}^{n}\cos\Big(2\hat\omega_nt+\frac{2\hat\Delta_n}{n}t^{2}\Big)-\frac{1}{n}\sum_{t=1}^{n}\cos\Big(2\omega_0t+\frac{2\Delta_0}{n}t^{2}\Big)\Big| \le \frac{2}{n}\,|\hat\omega_n-\omega_0|\sum_{t=1}^{n}t+\frac{2}{n^{2}}\,|\hat\Delta_n-\Delta_0|\sum_{t=1}^{n}t^{2} \le 2n\,|\hat\omega_n-\omega_0|+2n\,|\hat\Delta_n-\Delta_0|.$

In Section 3 we prove that $\hat\omega_n-\omega_0 = o_p(n^{-1})$ and $\hat\Delta_n-\Delta_0 = o_p(n^{-1})$. Therefore we have

(25) $\Big|\frac{1}{n}\sum_{t=1}^{n}\cos\Big(2\hat\omega_nt+\frac{2\hat\Delta_n}{n}t^{2}\Big)-\frac{1}{n}\sum_{t=1}^{n}\cos\Big(2\omega_0t+\frac{2\Delta_0}{n}t^{2}\Big)\Big| = 2n\,o_p(n^{-1})+2n\,o_p(n^{-1}) = o_p(1).$

Also in Section 3 we show that

(26) $\sum_{t=1}^{n}\cos\Big(2\omega_0t+\frac{2\Delta_0}{n}t^{2}\Big) = o(n).$

Thus (25) implies that

(27) $\frac{1}{n}\sum_{t=1}^{n}\cos\Big(2\hat\omega_nt+\frac{2\hat\Delta_n}{n}t^{2}\Big) = o_p(1).$

Similarly

(28) $\frac{1}{n}\sum_{t=1}^{n}\sin\Big(2\hat\omega_nt+\frac{2\hat\Delta_n}{n}t^{2}\Big) = o_p(1).$

In Section 3 we also show that $\hat A_n\to_P A_0$ and $\hat B_n\to_P B_0$. Thus (22) implies that

(29) $\frac{1}{n}\big(S_n(\hat A_n,\hat B_n,\hat\omega_n,\hat\Delta_n)-U_n(\hat A_n,\hat B_n,\hat\omega_n,\hat\Delta_n)\big) = o_p(1).$

Using (21) in the above equation, we obtain

(30) $\frac{1}{n}\,U_n(\hat A_n,\hat B_n,\hat\omega_n,\hat\Delta_n) = \frac{1}{n}\sum_{t=1}^{n}\epsilon_t^{2}+o_p(1).$

By the weak law of large numbers,

(31) $\frac{1}{n}\sum_{t=1}^{n}\epsilon_t^{2} \to_P \nu > 0.$

Using this in (30), we obtain that $U_n(\hat A_n,\hat B_n,\hat\omega_n,\hat\Delta_n)>0$ with probability arbitrarily close to 1 for sufficiently large values of $n$. Suppose, then, that $U_n(\hat A_n,\hat B_n,\hat\omega_n,\hat\Delta_n)>0$. Since

(32) $L_n^{*}(\hat A_n,\hat B_n,\hat\omega_n,\hat\Delta_n,\nu) = -\frac{n}{2}\big(\log2\pi+\log\nu\big)-\frac{1}{2}\,\frac{U_n(\hat A_n,\hat B_n,\hat\omega_n,\hat\Delta_n)}{\nu}$

and $0<\nu<\infty$, $L_n^{*}(\hat A_n,\hat B_n,\hat\omega_n,\hat\Delta_n,\nu)\to-\infty$ as $\nu\to0$ or $\nu\to\infty$. Thus, since $L_n^{*}$ is a continuous function of $\nu$, the maximum is achieved at some $\nu=\hat\nu_n$, where the partial derivative $\partial L_n^{*}/\partial\nu$ vanishes. Therefore $\hat\nu_n$ is the solution of the equation

(33) $\frac{\partial L_n^{*}}{\partial\nu} = -\frac{n}{2\nu}+\frac{U_n(\hat A_n,\hat B_n,\hat\omega_n,\hat\Delta_n)}{2\nu^{2}} = 0.$

So

(34) $\hat\nu_n = \frac{1}{n}\,U_n(\hat A_n,\hat B_n,\hat\omega_n,\hat\Delta_n).$
Note that $\hat\nu_n$ exists only with high probability for large $n$. Using (12) and substituting for $\hat A_n$ and $\hat B_n$, we obtain

(35) $U_n(\hat A_n,\hat B_n,\hat\omega_n,\hat\Delta_n) = \sum_{t=1}^{n}X_{nt}^{2}-2\cdot\frac{2}{n}\Big|\sum_{t=1}^{n}X_{nt}e^{i(\hat\omega_nt+(\hat\Delta_n/n)t^{2})}\Big|^{2}+\frac{2}{n}\Big|\sum_{t=1}^{n}X_{nt}e^{i(\hat\omega_nt+(\hat\Delta_n/n)t^{2})}\Big|^{2} = \sum_{t=1}^{n}X_{nt}^{2}-\frac{2}{n}\Big|\sum_{t=1}^{n}X_{nt}e^{i(\hat\omega_nt+(\hat\Delta_n/n)t^{2})}\Big|^{2}.$

Now, using (18) in the above equation, we obtain

(36) $U_n(\hat A_n,\hat B_n,\hat\omega_n,\hat\Delta_n) = \sum_{t=1}^{n}X_{nt}^{2}-I_n\Big(\hat\omega_n,\frac{\hat\Delta_n}{n}\Big).$

Therefore, substituting in (34), we have

(37) $\hat\nu_n = \frac{1}{n}\Big(\sum_{t=1}^{n}X_{nt}^{2}-I_n\Big(\hat\omega_n,\frac{\hat\Delta_n}{n}\Big)\Big).$
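The estimator (37) and the equivalent form $\hat\nu_n = \frac{1}{n}\sum_tX_{nt}^{2}-\frac{1}{2}(\hat A_n^{2}+\hat B_n^{2})$, which appears later as (141), can be compared numerically. In this sketch the true $(\omega_0,\Delta_0)$ stand in for the maximizer $(\hat\omega_n,\hat\Delta_n)$ purely to keep the code short, and all parameter values are illustrative:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 2000
A0, B0, w0, D0, nu0 = 2.0, 1.0, 0.9, 0.4, 0.25
t = np.arange(1, n + 1)
ph = w0 * t + (D0 / n) * t ** 2   # true phase, standing in for the estimated one
x = A0 * np.cos(ph) + B0 * np.sin(ph) + rng.normal(0.0, np.sqrt(nu0), n)

A_hat = 2 / n * np.sum(x * np.cos(ph))                    # (15)
B_hat = 2 / n * np.sum(x * np.sin(ph))                    # (16)
I_hat = 2 / n * abs(np.sum(x * np.exp(1j * ph))) ** 2     # (18)

nu_37  = np.mean(x ** 2) - I_hat / n                      # estimator (37)
nu_141 = np.mean(x ** 2) - (A_hat ** 2 + B_hat ** 2) / 2  # equivalent form
print(nu_37, nu_141)   # identical up to rounding, both near nu0 = 0.25
```

The two expressions agree exactly because $I_n(\hat\omega_n,\hat\Delta_n/n) = \frac{n}{2}(\hat A_n^{2}+\hat B_n^{2})$, the algebraic identity recorded later in (138)–(140).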
3. The Consistency of the Estimators
Now we will establish under certain conditions the consistency of the estimators A^n, B^n, ω^n, Δ^n, and ν^n.
Theorem 4.
Let $\{\epsilon_t\}_{t=1}^{\infty}$ be a sequence of independent random variables with $E(\epsilon_t)=0$, $E(\epsilon_t^{2})=\nu<\infty$, and $E(\epsilon_t^{4})=\sigma<\infty$. Let $A_0$ and $B_0$ be any real numbers, let $0<\omega_0<\pi/2$, let $|\Delta_0|\le\pi/4$, and let $\alpha_n=\Delta_0/n$. For each $n=1,2,\ldots$, let

(38) $X_{nt} = A_0\cos\Big(\omega_0t+\frac{\Delta_0}{n}t^{2}\Big)+B_0\sin\Big(\omega_0t+\frac{\Delta_0}{n}t^{2}\Big)+\epsilon_t,\qquad 1\le t\le n.$

Then the estimators $\hat A_n$, $\hat B_n$, $\hat\omega_n$, $\hat\Delta_n$, and $\hat\nu_n$ given in (15), (16), (18), and (37) are consistent estimators of $A_0$, $B_0$, $\omega_0$, $\Delta_0$, and $\nu_0$, respectively. Furthermore $\hat\omega_n-\omega_0 = o_p(n^{-1})$ and $\hat\Delta_n-\Delta_0 = o_p(n^{-1})$.
Proof.
From (18),

(39) $\frac{n}{2}\,I_n\Big(\omega,\frac{\Delta}{n}\Big) = \Big|\sum_{t=1}^{n}X_{nt}e^{i(\omega t+(\Delta/n)t^{2})}\Big|^{2}.$

But

(40) $X_{nt} = A_0\cos\Big(\omega_0t+\frac{\Delta_0}{n}t^{2}\Big)+B_0\sin\Big(\omega_0t+\frac{\Delta_0}{n}t^{2}\Big)+\epsilon_t.$

Thus

(41) $\frac{n}{2}\,I_n\Big(\omega,\frac{\Delta}{n}\Big) = \Big|\sum_{t=1}^{n}\Big(A_0\cos\Big(\omega_0t+\frac{\Delta_0}{n}t^{2}\Big)+B_0\sin\Big(\omega_0t+\frac{\Delta_0}{n}t^{2}\Big)+\epsilon_t\Big)e^{i(\omega t+(\Delta/n)t^{2})}\Big|^{2}.$

If we use complex exponentials instead of sines and cosines, we have

(42) $A_0\cos\Big(\omega_0t+\frac{\Delta_0}{n}t^{2}\Big)+B_0\sin\Big(\omega_0t+\frac{\Delta_0}{n}t^{2}\Big) = D_0e^{i(\omega_0t+(\Delta_0/n)t^{2})}+D_0^{*}e^{-i(\omega_0t+(\Delta_0/n)t^{2})},$

where $D_0=(A_0-iB_0)/2$ and $D_0^{*}=(A_0+iB_0)/2$, and then

(43) $\frac{n}{2}\,I_n\Big(\omega,\frac{\Delta}{n}\Big) = \Big|\sum_{t=1}^{n}\Big(D_0e^{i((\omega+\omega_0)t+((\Delta+\Delta_0)/n)t^{2})}+D_0^{*}e^{i((\omega-\omega_0)t+((\Delta-\Delta_0)/n)t^{2})}+\epsilon_te^{i(\omega t+(\Delta/n)t^{2})}\Big)\Big|^{2}.$

Let

(44) $M_n(u,v) = \sum_{t=1}^{n}e^{i(ut+vt^{2})}.$

Then

(45) $\frac{n}{2}\,I_n\Big(\omega,\frac{\Delta}{n}\Big) = \Big|D_0M_n\Big(\omega+\omega_0,\frac{\Delta+\Delta_0}{n}\Big)+D_0^{*}M_n\Big(\omega-\omega_0,\frac{\Delta-\Delta_0}{n}\Big)+\sum_{t=1}^{n}\epsilon_te^{i(\omega t+(\Delta/n)t^{2})}\Big|^{2} = \Big|\sum_{t=1}^{n}\epsilon_te^{i(\omega t+(\Delta/n)t^{2})}\Big|^{2}+2\Re\Big[\sum_{t=1}^{n}\epsilon_te^{-i(\omega t+(\Delta/n)t^{2})}\Big(D_0M_n\Big(\omega+\omega_0,\frac{\Delta+\Delta_0}{n}\Big)+D_0^{*}M_n\Big(\omega-\omega_0,\frac{\Delta-\Delta_0}{n}\Big)\Big)\Big]+\Big|D_0M_n\Big(\omega+\omega_0,\frac{\Delta+\Delta_0}{n}\Big)+D_0^{*}M_n\Big(\omega-\omega_0,\frac{\Delta-\Delta_0}{n}\Big)\Big|^{2}.$

Expanding the third term on the right, and using $\overline{M_n(u,v)}=M_n(-u,-v)$ and $|D_0^{*}|=|D_0|$, we get

(46) $\frac{n}{2}\,I_n\Big(\omega,\frac{\Delta}{n}\Big) = \Big|\sum_{t=1}^{n}\epsilon_te^{i(\omega t+(\Delta/n)t^{2})}\Big|^{2}+2\Re\Big[\sum_{t=1}^{n}\epsilon_te^{-i(\omega t+(\Delta/n)t^{2})}\Big(D_0M_n\Big(\omega+\omega_0,\frac{\Delta+\Delta_0}{n}\Big)+D_0^{*}M_n\Big(\omega-\omega_0,\frac{\Delta-\Delta_0}{n}\Big)\Big)\Big]+|D_0|^{2}\Big|M_n\Big(\omega+\omega_0,\frac{\Delta+\Delta_0}{n}\Big)\Big|^{2}+2\Re\Big[D_0^{2}\,M_n\Big(\omega+\omega_0,\frac{\Delta+\Delta_0}{n}\Big)M_n\Big(\omega_0-\omega,\frac{\Delta_0-\Delta}{n}\Big)\Big]+|D_0|^{2}\Big|M_n\Big(\omega-\omega_0,\frac{\Delta-\Delta_0}{n}\Big)\Big|^{2}.$
Lemma 5.
(47) $\frac{1}{n}\,I_n\Big(\omega_0,\frac{\Delta_0}{n}\Big) = \frac{A_0^{2}+B_0^{2}}{2}+o_p(1).$
Proof.
Setting $\omega=\omega_0$ and $\Delta=\Delta_0$ in (46), and noting that $M_n(0,0)=n$, we have

(48) $\frac{n}{2}\,I_n\Big(\omega_0,\frac{\Delta_0}{n}\Big) = \Big|\sum_{t=1}^{n}\epsilon_te^{i(\omega_0t+(\Delta_0/n)t^{2})}\Big|^{2}+2\Re\Big[\sum_{t=1}^{n}\epsilon_te^{-i(\omega_0t+(\Delta_0/n)t^{2})}\Big(D_0M_n\Big(2\omega_0,\frac{2\Delta_0}{n}\Big)+D_0^{*}n\Big)\Big]+|D_0|^{2}\Big|M_n\Big(2\omega_0,\frac{2\Delta_0}{n}\Big)\Big|^{2}+2\Re\Big[D_0^{2}\,n\,M_n\Big(2\omega_0,\frac{2\Delta_0}{n}\Big)\Big]+|D_0|^{2}n^{2}.$

By virtue of Theorem 2,

(49) $\max_{0\le\omega\le\pi,\ 0\le\alpha\le\pi}\Big|\sum_{t=1}^{n}\epsilon_te^{i(\omega t+\alpha t^{2})}\Big| = O_p(n^{7/8}).$

Hence

(50) $\max_{0\le\omega\le\pi,\ 0\le\Delta/n\le\pi}\Big|\sum_{t=1}^{n}\epsilon_te^{i(\omega t+(\Delta/n)t^{2})}\Big| = O_p(n^{7/8}).$

Thus we obtain the following estimates for the first and second terms in (48):

(51) $\Big|\sum_{t=1}^{n}\epsilon_te^{i(\omega_0t+(\Delta_0/n)t^{2})}\Big|^{2} = O_p(n^{7/4}),\qquad \Big|2\Re\Big[\sum_{t=1}^{n}\epsilon_te^{-i(\omega_0t+(\Delta_0/n)t^{2})}\Big(D_0M_n\Big(2\omega_0,\frac{2\Delta_0}{n}\Big)+D_0^{*}n\Big)\Big]\Big| = O_p(n\cdot n^{7/8}) = O_p(n^{15/8}).$

Consider the term $|M_n(2\omega_0,2\Delta_0/n)|^{2}$ in (48). Using (44) we have

(52) $M_n\Big(2\omega_0,\frac{2\Delta_0}{n}\Big) = \sum_{t=1}^{n}e^{i(2\omega_0t+(2\Delta_0/n)t^{2})}.$

Now we can write

(53) $2\omega_0t+\frac{2\Delta_0}{n}t^{2} = 2\pi\,\frac{2\omega_0t+(2\Delta_0/n)t^{2}}{2\pi} = 2\pi f(t),$

where $f(t) = (2\omega_0t+(2\Delta_0/n)t^{2})/2\pi$.
Also

(54) $|f'(t)| = \Big|\frac{\omega_0}{\pi}+\frac{2\Delta_0t}{n\pi}\Big| \le \frac{\omega_0}{\pi}+\frac{2|\Delta_0|}{\pi} < \frac{1}{2}+\frac{1}{2} = 1,$

since $\omega_0<\pi/2$, $|\Delta_0|\le\pi/4$, and $t\le n$. It follows, by virtue of the lemma in [16], that there is $A_1>0$ for which

(55) $\Big|\sum_{t=1}^{n}e^{i(2\omega_0t+(2\Delta_0/n)t^{2})}-\int_{0}^{n}e^{i(2\omega_0t+(2\Delta_0/n)t^{2})}\,dt\Big| \le A_1.$

Thus we have

(56) $\frac{1}{n}\,M_n\Big(2\omega_0,\frac{2\Delta_0}{n}\Big) = \frac{1}{n}\int_{0}^{n}e^{i(2\omega_0t+(2\Delta_0/n)t^{2})}\,dt+O\Big(\frac{1}{n}\Big).$

Let $x=t/n$. Then we obtain

(57) $\frac{1}{n}\,M_n\Big(2\omega_0,\frac{2\Delta_0}{n}\Big) = \int_{0}^{1}e^{i(2\omega_0nx+2\Delta_0nx^{2})}\,dx+O\Big(\frac{1}{n}\Big).$

In the proof of Theorem 3 [16, p. 65, eq. 1.9] we showed that if at least one of $s_n$ or $t_n$ tends to $\infty$ then

(58) $\int_{0}^{1}e^{i(2s_nx+2t_nx^{2})}\,dx = o(1).$

Therefore, since $\omega_0>0$, taking $s_n=\omega_0n$ and $t_n=\Delta_0n$ we have

(59) $\int_{0}^{1}e^{i(2\omega_0nx+2\Delta_0nx^{2})}\,dx = o(1).$

This together with (57) implies that

(60) $M_n\Big(2\omega_0,\frac{2\Delta_0}{n}\Big) = o(n).$

Using (51) and (60) in (48), we obtain

(61) $\frac{n}{2}\,I_n\Big(\omega_0,\frac{\Delta_0}{n}\Big) = O_p(n^{7/4})+O_p(n^{15/8})+o(n^{2})+o(n)\,n+\frac{A_0^{2}+B_0^{2}}{4}\,n^{2}.$

Thus

(62) $\frac{1}{n}\,I_n\Big(\omega_0,\frac{\Delta_0}{n}\Big) = \frac{A_0^{2}+B_0^{2}}{2}+o_p(1).$

This completes the proof of Lemma 5.
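Lemma 5 lends itself to a quick Monte Carlo check (all parameter values below are illustrative): with $A_0=2$ and $B_0=1$, the limit is $(A_0^{2}+B_0^{2})/2 = 2.5$, and $n^{-1}I_n(\omega_0,\Delta_0/n)$ settles near it as $n$ grows:

```python
import numpy as np

rng = np.random.default_rng(3)
A0, B0, w0, D0, nu0 = 2.0, 1.0, 0.9, 0.4, 0.25
for n in (200, 2000, 20000):
    t = np.arange(1, n + 1)
    ph = w0 * t + (D0 / n) * t ** 2
    x = A0 * np.cos(ph) + B0 * np.sin(ph) + rng.normal(0.0, np.sqrt(nu0), n)
    In = 2 / n * abs(np.sum(x * np.exp(1j * ph))) ** 2   # I_n(w0, D0/n), as in (18)
    print(n, In / n)   # approaches (A0**2 + B0**2) / 2 = 2.5
```

The noise enters only through the $O_p(n^{7/4})$ and $O_p(n^{15/8})$ terms of (61), both of which vanish after the division by $n^{2}$.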
Lemma 6.
Let $K(n,\delta_1,\delta_2)$ be defined by

(63) $K(n,\delta_1,\delta_2) = \max_{|\omega-\omega_0|\ge n^{-1}\delta_1\ \text{or}\ |\Delta-\Delta_0|\ge n^{-1}\delta_2} I_n\Big(\omega,\frac{\Delta}{n}\Big),$

where $0<\omega<\pi/2$ and $|\Delta|\le\pi/4$. Then

(64) $\frac{1}{n}\,K(n,\delta_1,\delta_2) \le \frac{A_0^{2}+B_0^{2}}{2n^{2}}\,\max_{|\omega-\omega_0|\ge n^{-1}\delta_1\ \text{or}\ |\Delta-\Delta_0|\ge n^{-1}\delta_2}\Big|\int_{0}^{n}e^{i((\omega-\omega_0)t+((\Delta-\Delta_0)/n)t^{2})}\,dt\Big|^{2}+o_p(1).$
Proof.
Throughout the proof it is assumed that $0<\omega<\pi/2$ and $|\Delta|\le\pi/4$; all unlabeled maxima below are taken over $\{|\omega-\omega_0|\ge n^{-1}\delta_1\ \text{or}\ |\Delta-\Delta_0|\ge n^{-1}\delta_2\}$. From (46) we have

(65) $\frac{n}{2}\,K(n,\delta_1,\delta_2) \le \max\Big|\sum_{t=1}^{n}\epsilon_te^{i(\omega t+(\Delta/n)t^{2})}\Big|^{2}+\max\Big|2\Re\Big[\sum_{t=1}^{n}\epsilon_te^{-i(\omega t+(\Delta/n)t^{2})}\Big(D_0M_n\Big(\omega+\omega_0,\frac{\Delta+\Delta_0}{n}\Big)+D_0^{*}M_n\Big(\omega-\omega_0,\frac{\Delta-\Delta_0}{n}\Big)\Big)\Big]\Big|+|D_0|^{2}\max\Big|M_n\Big(\omega+\omega_0,\frac{\Delta+\Delta_0}{n}\Big)\Big|^{2}+\max\Big|2\Re\Big[D_0^{2}\,M_n\Big(\omega+\omega_0,\frac{\Delta+\Delta_0}{n}\Big)M_n\Big(\omega_0-\omega,\frac{\Delta_0-\Delta}{n}\Big)\Big]\Big|+|D_0|^{2}\max\Big|M_n\Big(\omega-\omega_0,\frac{\Delta-\Delta_0}{n}\Big)\Big|^{2}.$

From Theorem 2,

(66) $\max_{0\le\omega\le\pi,\ 0\le\alpha\le\pi}\Big|\sum_{t=1}^{n}\epsilon_te^{i(\omega t+\alpha t^{2})}\Big| = O_p(n^{7/8}).$

This implies that

(67) $\max_{0\le\omega\le\pi,\ 0\le\Delta/n\le\pi}\Big|\sum_{t=1}^{n}\epsilon_te^{i(\omega t+(\Delta/n)t^{2})}\Big| = O_p(n^{7/8}).$

Thus we obtain the following estimates for the first and second terms in (65):

(68) $O_p(n^{7/4})\qquad\text{and}\qquad O_p(n\cdot n^{7/8}) = O_p(n^{15/8}),$

respectively. Consider the term $|M_n(\omega+\omega_0,(\Delta+\Delta_0)/n)|^{2}$. Using (44) we have

(69) $M_n\Big(\omega+\omega_0,\frac{\Delta+\Delta_0}{n}\Big) = \sum_{t=1}^{n}e^{i((\omega+\omega_0)t+((\Delta+\Delta_0)/n)t^{2})}.$

We can write

(70) $(\omega+\omega_0)t+\frac{\Delta+\Delta_0}{n}t^{2} = 2\pi\,\frac{(\omega+\omega_0)t+((\Delta+\Delta_0)/n)t^{2}}{2\pi} = 2\pi f(t),$

where $f(t) = ((\omega+\omega_0)t+((\Delta+\Delta_0)/n)t^{2})/2\pi$.
Then

(71) $|f'(t)| = \Big|\frac{\omega+\omega_0}{2\pi}+\frac{2t}{2\pi}\cdot\frac{\Delta+\Delta_0}{n}\Big| \le \frac{\omega+\omega_0}{2\pi}+\frac{|\Delta|+|\Delta_0|}{\pi} < \frac{1}{2}+\frac{1}{2} = 1,$

since $0<\omega,\omega_0<\pi/2$ (so $\omega+\omega_0<\pi$), $|\Delta|,|\Delta_0|\le\pi/4$ (so $|\Delta|+|\Delta_0|\le\pi/2$), and $t\le n$. It follows, by virtue of the lemma already used in (55), that there is $A_2>0$ for which

(72) $\Big|\sum_{t=1}^{n}e^{i((\omega+\omega_0)t+((\Delta+\Delta_0)/n)t^{2})}-\int_{0}^{n}e^{i((\omega+\omega_0)t+((\Delta+\Delta_0)/n)t^{2})}\,dt\Big| \le A_2.$

Similarly we can show that

(73) $\Big|\sum_{t=1}^{n}e^{i((\omega-\omega_0)t+((\Delta-\Delta_0)/n)t^{2})}-\int_{0}^{n}e^{i((\omega-\omega_0)t+((\Delta-\Delta_0)/n)t^{2})}\,dt\Big| \le A_3.$

Thus we have

(74) $\frac{1}{n}\,M_n\Big(\omega+\omega_0,\frac{\Delta+\Delta_0}{n}\Big) = \frac{1}{n}\int_{0}^{n}e^{i((\omega+\omega_0)t+((\Delta+\Delta_0)/n)t^{2})}\,dt+O\Big(\frac{1}{n}\Big),$

(75) $\frac{1}{n}\,M_n\Big(\omega-\omega_0,\frac{\Delta-\Delta_0}{n}\Big) = \frac{1}{n}\int_{0}^{n}e^{i((\omega-\omega_0)t+((\Delta-\Delta_0)/n)t^{2})}\,dt+O\Big(\frac{1}{n}\Big).$

Let $x=t/n$ in (74). Then we have

(76) $\frac{1}{n}\,M_n\Big(\omega+\omega_0,\frac{\Delta+\Delta_0}{n}\Big) = \int_{0}^{1}e^{i((\omega+\omega_0)nx+(\Delta+\Delta_0)nx^{2})}\,dx+O\Big(\frac{1}{n}\Big).$

Let $R = \{(\omega,\Delta) : |\Delta|\le\pi/4,\ 0\le\omega\le\pi/2\}$. We will show that

(77) $\max_{R}\Big|\int_{0}^{1}e^{i((\omega+\omega_0)nx+(\Delta+\Delta_0)nx^{2})}\,dx\Big| = o(1).$

Suppose

(78) $\max_{R}\Big|\int_{0}^{1}e^{i((\omega+\omega_0)nx+(\Delta+\Delta_0)nx^{2})}\,dx\Big| \ne o(1).$

Then there is a sequence $n_k\to\infty$ and $\delta>0$ for which

(79) $\max_{R}\Big|\int_{0}^{1}e^{i((\omega+\omega_0)n_kx+(\Delta+\Delta_0)n_kx^{2})}\,dx\Big| > \delta$

for each $k$. Then we can find $(\omega_k,\Delta_k)\in R$ for which

(80) $\Big|\int_{0}^{1}e^{i((\omega_k+\omega_0)n_kx+(\Delta_k+\Delta_0)n_kx^{2})}\,dx\Big| > \frac{\delta}{2}$

for each $k$. Let $s_k=(\omega_k+\omega_0)n_k$ and $t_k=(\Delta_k+\Delta_0)n_k$. Since $(\omega_k,\Delta_k)\in R$ we have $\omega_k\ge0$, so $\omega_k+\omega_0\ge\omega_0>0$; hence $s_k=(\omega_k+\omega_0)n_k\ge\omega_0n_k\to\infty$. It follows from equation 1.9 of [16, p. 65] (in the proof of Theorem 3) that

(81) $\int_{0}^{1}e^{i((\omega_k+\omega_0)n_kx+(\Delta_k+\Delta_0)n_kx^{2})}\,dx \longrightarrow 0.$

Thus we have a contradiction. Therefore

(82) $\max_{R}\Big|\int_{0}^{1}e^{i((\omega+\omega_0)nx+(\Delta+\Delta_0)nx^{2})}\,dx\Big| = o(1).$

Hence by (76)

(83) $\max_{R}\Big|M_n\Big(\omega+\omega_0,\frac{\Delta+\Delta_0}{n}\Big)\Big| = o(n).$

This implies that

(84) $\max_{|\omega-\omega_0|\ge n^{-1}\delta_1\ \text{or}\ |\Delta-\Delta_0|\ge n^{-1}\delta_2}\Big|M_n\Big(\omega+\omega_0,\frac{\Delta+\Delta_0}{n}\Big)\Big| = o(n).$

Substituting (68), (84), and (73) in (65), we obtain

(85) $\frac{n}{2}\,K(n,\delta_1,\delta_2) \le O_p(n^{7/4})+O_p(n^{15/8})+o(n^{2})+o(n)\,n+\frac{A_0^{2}+B_0^{2}}{4}\,\max_{|\omega-\omega_0|\ge n^{-1}\delta_1\ \text{or}\ |\Delta-\Delta_0|\ge n^{-1}\delta_2}\Big(\Big|\int_{0}^{n}e^{i((\omega-\omega_0)t+((\Delta-\Delta_0)/n)t^{2})}\,dt\Big|+A_3\Big)^{2}.$

Thus

(86) $\frac{1}{n}\,K(n,\delta_1,\delta_2) \le \frac{A_0^{2}+B_0^{2}}{2n^{2}}\,\max_{|\omega-\omega_0|\ge n^{-1}\delta_1\ \text{or}\ |\Delta-\Delta_0|\ge n^{-1}\delta_2}\Big|\int_{0}^{n}e^{i((\omega-\omega_0)t+((\Delta-\Delta_0)/n)t^{2})}\,dt\Big|^{2}+o_p(1).$

This completes the proof of Lemma 6.
Now, combining Lemmas 5 and 6, we obtain

(87) $\frac{1}{n}\,K(n,\delta_1,\delta_2)-\frac{1}{n}\,I_n\Big(\omega_0,\frac{\Delta_0}{n}\Big) \le \frac{A_0^{2}+B_0^{2}}{2}\Big(\frac{1}{n^{2}}\max_{|\omega-\omega_0|\ge n^{-1}\delta_1\ \text{or}\ |\Delta-\Delta_0|\ge n^{-1}\delta_2}\Big|\int_{0}^{n}e^{i((\omega-\omega_0)t+((\Delta-\Delta_0)/n)t^{2})}\,dt\Big|^{2}-1\Big)+o_p(1).$

Thus

(88) $\frac{1}{n}\,K(n,\delta_1,\delta_2)-\frac{1}{n}\,I_n\Big(\omega_0,\frac{\Delta_0}{n}\Big) \le c_n+z_n,$

where

(89) $c_n = \frac{A_0^{2}+B_0^{2}}{2}\Big(\frac{1}{n^{2}}\max_{|\omega-\omega_0|\ge n^{-1}\delta_1\ \text{or}\ |\Delta-\Delta_0|\ge n^{-1}\delta_2}\Big|\int_{0}^{n}e^{i((\omega-\omega_0)t+((\Delta-\Delta_0)/n)t^{2})}\,dt\Big|^{2}-1\Big)$

and $z_n = o_p(1)$.
From Theorem 3, in its integral form, we have

(90) $\overline{\lim_{n\to\infty}}\ \max\Big\{\frac{1}{n}\Big|\int_{0}^{n}e^{i(ut+vt^{2})}\,dt\Big| : n^{-1}\delta_1\le u\le\pi,\ 0\le v\le\pi,\ \text{or}\ 0\le u\le\pi,\ n^{-2}\delta_2\le v\le\pi\Big\} < 1.$

Substituting $u=|\omega-\omega_0|$ and $v=|\Delta-\Delta_0|/n$ in the above inequality, we obtain

(91) $\overline{\lim_{n\to\infty}}\ \max\Big\{\frac{1}{n}\Big|\int_{0}^{n}e^{i((\omega-\omega_0)t+((\Delta-\Delta_0)/n)t^{2})}\,dt\Big| : n^{-1}\delta_1\le|\omega-\omega_0|\le\pi,\ 0\le|\Delta-\Delta_0|\le n\pi,\ \text{or}\ 0\le|\omega-\omega_0|\le\pi,\ n^{-1}\delta_2\le|\Delta-\Delta_0|\le n\pi\Big\} < 1.$

Therefore, since $0<\omega<\pi/2$ and $|\Delta|\le\pi/4$, we have

(92) $\overline{\lim_{n\to\infty}}\ \max_{|\omega-\omega_0|\ge n^{-1}\delta_1\ \text{or}\ |\Delta-\Delta_0|\ge n^{-1}\delta_2}\ \frac{1}{n}\Big|\int_{0}^{n}e^{i((\omega-\omega_0)t+((\Delta-\Delta_0)/n)t^{2})}\,dt\Big| < 1.$

Therefore $\overline{\lim}\,c_n = c < 0$.
Let $\epsilon>0$. Since $z_n\to_P0$, there exists $N_1$ such that

(93) $P\Big(z_n<-\frac{c}{4}\Big) \ge 1-\epsilon\qquad \forall n>N_1.$

Since $\overline{\lim}\,c_n = c\ (<0)$, there exists $N_2(\delta_1,\delta_2)$ such that

(94) $c_n < \frac{c}{2}\qquad \forall n>N_2(\delta_1,\delta_2).$

Now let $N'(\delta_1,\delta_2) = \max\{N_1,N_2(\delta_1,\delta_2)\}$. If $n>N'$, then

(95) $\Big\{z_n<-\frac{c}{4}\Big\} \subseteq \Big\{c_n+z_n<\frac{c}{2}-\frac{c}{4}\Big\} = \Big\{c_n+z_n<\frac{c}{4}\Big\} \subseteq \{c_n+z_n<0\}.$

Hence, by (93),

(96) $P(c_n+z_n<0) \ge P\Big(z_n<-\frac{c}{4}\Big) \ge 1-\epsilon.$

Now, from (88),

(97) $\{c_n+z_n<0\} \subseteq \Big\{\frac{1}{n}\,K(n,\delta_1,\delta_2)-\frac{1}{n}\,I_n\Big(\omega_0,\frac{\Delta_0}{n}\Big)<0\Big\}.$

Therefore

(98) $P\Big(\frac{1}{n}\,K(n,\delta_1,\delta_2)-\frac{1}{n}\,I_n\Big(\omega_0,\frac{\Delta_0}{n}\Big)<0\Big) \ge 1-\epsilon\qquad \forall n>N'.$

Since $\epsilon>0$ is arbitrary,

(99) $\lim_{n\to\infty}P\Big(K(n,\delta_1,\delta_2)-I_n\Big(\omega_0,\frac{\Delta_0}{n}\Big)<0\Big) = 1.$

In other words,

(100) $\lim_{n\to\infty}P\Big(K(n,\delta_1,\delta_2)<I_n\Big(\omega_0,\frac{\Delta_0}{n}\Big)\Big) = 1,$

where

(101) $K(n,\delta_1,\delta_2) = \max_{|\omega-\omega_0|\ge n^{-1}\delta_1\ \text{or}\ |\Delta-\Delta_0|\ge n^{-1}\delta_2} I_n\Big(\omega,\frac{\Delta}{n}\Big).$

In the last statement the maximum is taken over the region shown in Figure 1.
Suppose $(\omega',\Delta')$ belongs to the shaded region of the figure. Then $I_n(\omega',\Delta'/n)$ does not exceed $K(n,\delta_1,\delta_2)$. If $I_n(\omega_0,\Delta_0/n)>K(n,\delta_1,\delta_2)$, this implies that $(\omega',\Delta')$ is not the point $(\hat\omega_n,\hat\Delta_n)$ which maximizes $I_n(\omega,\Delta/n)$, which in turn implies that $(\hat\omega_n,\hat\Delta_n)$ lies in the unshaded rectangle, namely, the region $\{(\omega,\Delta) : |\Delta-\Delta_0|<n^{-1}\delta_2,\ |\omega-\omega_0|<n^{-1}\delta_1\}$. Thus (100) implies that, for large $n$, the probability is high that the maximizer $(\hat\omega_n,\hat\Delta_n)$ lies in that region, that is,

(102) $|\hat\omega_n-\omega_0|<n^{-1}\delta_1,\qquad |\hat\Delta_n-\Delta_0|<n^{-1}\delta_2.$

In other words,

(103) $\lim_{n\to\infty}P\big(|\hat\omega_n-\omega_0|<n^{-1}\delta_1,\ |\hat\Delta_n-\Delta_0|<n^{-1}\delta_2\big) = 1.$

Since $\delta_1$ and $\delta_2$ are arbitrary positive numbers, we have

(104) $\hat\omega_n-\omega_0 = o_p(n^{-1}),\qquad \hat\Delta_n-\Delta_0 = o_p(n^{-1}).$

Next we prove the consistency of $\hat A_n$ and $\hat B_n$; that is, we show that

(105) $\hat A_n \to_P A_0,\qquad \hat B_n \to_P B_0.$

Recall from (15) and (16) that

(106) $\hat A_n = \frac{2}{n}\sum_{t=1}^{n}X_{nt}\cos\Big(\hat\omega_nt+\frac{\hat\Delta_n}{n}t^{2}\Big),\qquad \hat B_n = \frac{2}{n}\sum_{t=1}^{n}X_{nt}\sin\Big(\hat\omega_nt+\frac{\hat\Delta_n}{n}t^{2}\Big).$

Thus

(107) $\hat A_n+i\hat B_n = \frac{2}{n}\sum_{t=1}^{n}X_{nt}e^{i(\hat\omega_nt+(\hat\Delta_n/n)t^{2})}.$

But

(108) $X_{nt} = A_0\cos\Big(\omega_0t+\frac{\Delta_0}{n}t^{2}\Big)+B_0\sin\Big(\omega_0t+\frac{\Delta_0}{n}t^{2}\Big)+\epsilon_t;$

so

(109) $\hat A_n+i\hat B_n = \frac{2}{n}\sum_{t=1}^{n}\Big(A_0\cos\Big(\omega_0t+\frac{\Delta_0}{n}t^{2}\Big)+B_0\sin\Big(\omega_0t+\frac{\Delta_0}{n}t^{2}\Big)+\epsilon_t\Big)e^{i(\hat\omega_nt+(\hat\Delta_n/n)t^{2})}.$

Letting

(110) $A_0\cos\Big(\omega_0t+\frac{\Delta_0}{n}t^{2}\Big)+B_0\sin\Big(\omega_0t+\frac{\Delta_0}{n}t^{2}\Big) = D_0e^{i(\omega_0t+(\Delta_0/n)t^{2})}+D_0^{*}e^{-i(\omega_0t+(\Delta_0/n)t^{2})},$

where $D_0=(A_0-iB_0)/2$ and $D_0^{*}=(A_0+iB_0)/2$, we have

(111) $\hat A_n+i\hat B_n = \frac{2}{n}\sum_{t=1}^{n}\Big(D_0e^{i((\omega_0+\hat\omega_n)t+((\Delta_0+\hat\Delta_n)/n)t^{2})}+D_0^{*}e^{i((\hat\omega_n-\omega_0)t+((\hat\Delta_n-\Delta_0)/n)t^{2})}+\epsilon_te^{i(\hat\omega_nt+(\hat\Delta_n/n)t^{2})}\Big).$

Since $A_0+iB_0=2D_0^{*}$ we obtain, using (44),

(112) $(\hat A_n-A_0)+i(\hat B_n-B_0) = \frac{2}{n}\Big(D_0M_n\Big(\omega_0+\hat\omega_n,\frac{\Delta_0+\hat\Delta_n}{n}\Big)+D_0^{*}\Big(M_n\Big(\hat\omega_n-\omega_0,\frac{\hat\Delta_n-\Delta_0}{n}\Big)-n\Big)\Big)+\frac{2}{n}\sum_{t=1}^{n}\epsilon_te^{i(\hat\omega_nt+(\hat\Delta_n/n)t^{2})}.$

Using the triangle inequality, we get

(113) $\big|(\hat A_n-A_0)+i(\hat B_n-B_0)\big| \le \frac{2|D_0|}{n}\Big|M_n\Big(\hat\omega_n+\omega_0,\frac{\hat\Delta_n+\Delta_0}{n}\Big)\Big|+\frac{2|D_0^{*}|}{n}\Big|M_n\Big(\hat\omega_n-\omega_0,\frac{\hat\Delta_n-\Delta_0}{n}\Big)-n\Big|+\frac{2}{n}\Big|\sum_{t=1}^{n}\epsilon_te^{i(\hat\omega_nt+(\hat\Delta_n/n)t^{2})}\Big|.$

Now we estimate each of the terms on the right-hand side of the above inequality.
Let $F(\omega,\Delta/n) = \Re\,M_n(\omega,\Delta/n)$.
By the mean value theorem,

(114) $\Re\,M_n\Big(\omega,\frac{\Delta}{n}\Big)-\Re\,M_n(0,0) = \frac{\partial F}{\partial\omega}\Big(\tilde\omega,\frac{\tilde\Delta}{n}\Big)\,\omega+\frac{\partial F}{\partial\Delta}\Big(\tilde\omega,\frac{\tilde\Delta}{n}\Big)\,\Delta,$

where $(\tilde\omega,\tilde\Delta/n)$ is a point on the open line segment connecting $(0,0)$ and $(\omega,\Delta/n)$.
Now

(115) $\frac{\partial F}{\partial\omega}\Big(\omega,\frac{\Delta}{n}\Big) = \frac{\partial}{\partial\omega}\,\Re\,M_n\Big(\omega,\frac{\Delta}{n}\Big) = \frac{\partial}{\partial\omega}\sum_{t=1}^{n}\cos\Big(\omega t+\frac{\Delta}{n}t^{2}\Big) = -\sum_{t=1}^{n}t\sin\Big(\omega t+\frac{\Delta}{n}t^{2}\Big),$

and similarly

(116) $\frac{\partial F}{\partial\Delta}\Big(\omega,\frac{\Delta}{n}\Big) = \frac{\partial}{\partial\Delta}\,\Re\,M_n\Big(\omega,\frac{\Delta}{n}\Big) = -\frac{1}{n}\sum_{t=1}^{n}t^{2}\sin\Big(\omega t+\frac{\Delta}{n}t^{2}\Big).$

Thus

(117) $\Re\,M_n\Big(\omega,\frac{\Delta}{n}\Big)-\Re\,M_n(0,0) = -\omega\sum_{t=1}^{n}t\sin\Big(\tilde\omega t+\frac{\tilde\Delta}{n}t^{2}\Big)-\frac{\Delta}{n}\sum_{t=1}^{n}t^{2}\sin\Big(\tilde\omega t+\frac{\tilde\Delta}{n}t^{2}\Big).$

Using the triangle inequality and the fact that $M_n(0,0)=n$, we obtain

(118) $\Big|\Re\,M_n\Big(\omega,\frac{\Delta}{n}\Big)-n\Big| \le |\omega|\sum_{t=1}^{n}t+\frac{|\Delta|}{n}\sum_{t=1}^{n}t^{2}.$

Substituting $\hat\omega_n-\omega_0$ and $\hat\Delta_n-\Delta_0$ for $\omega$ and $\Delta$, respectively, we get

(119) $\Big|\Re\,M_n\Big(\hat\omega_n-\omega_0,\frac{\hat\Delta_n-\Delta_0}{n}\Big)-n\Big| \le |\hat\omega_n-\omega_0|\sum_{t=1}^{n}t+\frac{|\hat\Delta_n-\Delta_0|}{n}\sum_{t=1}^{n}t^{2} < |\hat\omega_n-\omega_0|\,n^{2}+|\hat\Delta_n-\Delta_0|\,n^{2},$

so that

(120) $\frac{1}{n}\Big|\Re\,M_n\Big(\hat\omega_n-\omega_0,\frac{\hat\Delta_n-\Delta_0}{n}\Big)-n\Big| < n\,|\hat\omega_n-\omega_0|+n\,|\hat\Delta_n-\Delta_0|.$

It follows from (104) that

(121) $\frac{1}{n}\Big|\Re\,M_n\Big(\hat\omega_n-\omega_0,\frac{\hat\Delta_n-\Delta_0}{n}\Big)-n\Big| \to_P 0.$

Similarly, since $\Im\,M_n(0,0)=0$,

(122) $\frac{1}{n}\Big|\Im\,M_n\Big(\hat\omega_n-\omega_0,\frac{\hat\Delta_n-\Delta_0}{n}\Big)\Big| \to_P 0.$

Thus

(123) $\frac{1}{n}\Big|M_n\Big(\hat\omega_n-\omega_0,\frac{\hat\Delta_n-\Delta_0}{n}\Big)-n\Big| \to_P 0.$

Now consider the first term on the right in inequality (113). Again by the mean value theorem,

(124) $\Re\,M_n\Big(\omega,\frac{\Delta}{n}\Big)-\Re\,M_n\Big(2\omega_0,\frac{2\Delta_0}{n}\Big) = \frac{\partial F}{\partial\omega}\Big(\tilde\omega,\frac{\tilde\Delta}{n}\Big)(\omega-2\omega_0)+\frac{\partial F}{\partial\Delta}\Big(\tilde\omega,\frac{\tilde\Delta}{n}\Big)(\Delta-2\Delta_0),$

where $(\tilde\omega,\tilde\Delta/n)$ is a point on the open line segment connecting $(2\omega_0,2\Delta_0/n)$ and $(\omega,\Delta/n)$. Using the above expressions for $\partial F/\partial\omega$ and $\partial F/\partial\Delta$, we have as before

(125) $\Re\,M_n\Big(\omega,\frac{\Delta}{n}\Big)-\Re\,M_n\Big(2\omega_0,\frac{2\Delta_0}{n}\Big) = -(\omega-2\omega_0)\sum_{t=1}^{n}t\sin\Big(\tilde\omega t+\frac{\tilde\Delta}{n}t^{2}\Big)-\frac{\Delta-2\Delta_0}{n}\sum_{t=1}^{n}t^{2}\sin\Big(\tilde\omega t+\frac{\tilde\Delta}{n}t^{2}\Big).$

Thus

(126) $\Re\,M_n\Big(\omega,\frac{\Delta}{n}\Big) = -(\omega-2\omega_0)\sum_{t=1}^{n}t\sin\Big(\tilde\omega t+\frac{\tilde\Delta}{n}t^{2}\Big)-\frac{\Delta-2\Delta_0}{n}\sum_{t=1}^{n}t^{2}\sin\Big(\tilde\omega t+\frac{\tilde\Delta}{n}t^{2}\Big)+\sum_{t=1}^{n}\cos\Big(2\omega_0t+\frac{2\Delta_0}{n}t^{2}\Big).$

Therefore

(127) $\Big|\Re\,M_n\Big(\omega,\frac{\Delta}{n}\Big)\Big| \le |\omega-2\omega_0|\sum_{t=1}^{n}t+\frac{|\Delta-2\Delta_0|}{n}\sum_{t=1}^{n}t^{2}+\Big|\sum_{t=1}^{n}\cos\Big(2\omega_0t+\frac{2\Delta_0}{n}t^{2}\Big)\Big|.$

Substituting $\hat\omega_n+\omega_0$ and $\hat\Delta_n+\Delta_0$ for $\omega$ and $\Delta$, respectively (so that $\omega-2\omega_0=\hat\omega_n-\omega_0$ and $\Delta-2\Delta_0=\hat\Delta_n-\Delta_0$), we get

(128) $\Big|\Re\,M_n\Big(\hat\omega_n+\omega_0,\frac{\hat\Delta_n+\Delta_0}{n}\Big)\Big| \le |\hat\omega_n-\omega_0|\sum_{t=1}^{n}t+\frac{|\hat\Delta_n-\Delta_0|}{n}\sum_{t=1}^{n}t^{2}+\Big|\sum_{t=1}^{n}\cos\Big(2\omega_0t+\frac{2\Delta_0}{n}t^{2}\Big)\Big| < |\hat\omega_n-\omega_0|\,n^{2}+|\hat\Delta_n-\Delta_0|\,n^{2}+\Big|\sum_{t=1}^{n}\cos\Big(2\omega_0t+\frac{2\Delta_0}{n}t^{2}\Big)\Big|,$

so that

(129) $\frac{1}{n}\Big|\Re\,M_n\Big(\hat\omega_n+\omega_0,\frac{\hat\Delta_n+\Delta_0}{n}\Big)\Big| < n\,|\hat\omega_n-\omega_0|+n\,|\hat\Delta_n-\Delta_0|+\frac{1}{n}\Big|\sum_{t=1}^{n}\cos\Big(2\omega_0t+\frac{2\Delta_0}{n}t^{2}\Big)\Big|.$

Using (60) and (104), we obtain

(130) $\frac{1}{n}\Big|\Re\,M_n\Big(\hat\omega_n+\omega_0,\frac{\hat\Delta_n+\Delta_0}{n}\Big)\Big| \to_P 0.$

Similarly

(131) $\frac{1}{n}\Big|\Im\,M_n\Big(\hat\omega_n+\omega_0,\frac{\hat\Delta_n+\Delta_0}{n}\Big)\Big| \to_P 0.$

Thus

(132) $\frac{1}{n}\Big|M_n\Big(\hat\omega_n+\omega_0,\frac{\hat\Delta_n+\Delta_0}{n}\Big)\Big| \to_P 0.$

Now consider the last term in inequality (113). As a consequence of Theorem 2, we have

(133) $\frac{1}{n}\Big|\sum_{t=1}^{n}\epsilon_te^{i(\hat\omega_nt+(\hat\Delta_n/n)t^{2})}\Big| \to_P 0.$

By using (123), (132), and (133) in inequality (113), we obtain

(134) $\big|(\hat A_n-A_0)+i(\hat B_n-B_0)\big| \to_P 0.$

Therefore

(135) $\hat A_n-A_0 \to_P 0,\qquad \hat B_n-B_0 \to_P 0;$

that is,

(136) $\hat A_n \to_P A_0,\qquad \hat B_n \to_P B_0.$

Now we prove the consistency of $\hat\nu_n$; that is, we show that $\hat\nu_n \to_P \nu$.
Recall from (37) that

(137) $\hat\nu_n = \frac{1}{n}\Big(\sum_{t=1}^{n}X_{nt}^{2}-I_n\Big(\hat\omega_n,\frac{\hat\Delta_n}{n}\Big)\Big).$

From the definition of $I_n(\omega,\Delta/n)$, we have

(138) $I_n\Big(\hat\omega_n,\frac{\hat\Delta_n}{n}\Big) = \frac{2}{n}\Big|\sum_{t=1}^{n}X_{nt}e^{i(\hat\omega_nt+(\hat\Delta_n/n)t^{2})}\Big|^{2} = \frac{2}{n}\Big(\sum_{t=1}^{n}X_{nt}\cos\Big(\hat\omega_nt+\frac{\hat\Delta_n}{n}t^{2}\Big)\Big)^{2}+\frac{2}{n}\Big(\sum_{t=1}^{n}X_{nt}\sin\Big(\hat\omega_nt+\frac{\hat\Delta_n}{n}t^{2}\Big)\Big)^{2}.$

But from (15) and (16) we have

(139) $\hat A_n = \frac{2}{n}\sum_{t=1}^{n}X_{nt}\cos\Big(\hat\omega_nt+\frac{\hat\Delta_n}{n}t^{2}\Big),\qquad \hat B_n = \frac{2}{n}\sum_{t=1}^{n}X_{nt}\sin\Big(\hat\omega_nt+\frac{\hat\Delta_n}{n}t^{2}\Big).$

Hence

(140) $I_n\Big(\hat\omega_n,\frac{\hat\Delta_n}{n}\Big) = \frac{n}{2}\big(\hat A_n^{2}+\hat B_n^{2}\big).$

Therefore

(141) $\hat\nu_n = \frac{1}{n}\sum_{t=1}^{n}X_{nt}^{2}-\frac{1}{2}\big(\hat A_n^{2}+\hat B_n^{2}\big).$

Since

(142) $X_{nt} = A_0\cos\Big(\omega_0t+\frac{\Delta_0}{n}t^{2}\Big)+B_0\sin\Big(\omega_0t+\frac{\Delta_0}{n}t^{2}\Big)+\epsilon_t,$

we have

(143) $\hat\nu_n = \frac{1}{n}\sum_{t=1}^{n}\Big(A_0^{2}\cos^{2}\Big(\omega_0t+\frac{\Delta_0}{n}t^{2}\Big)+B_0^{2}\sin^{2}\Big(\omega_0t+\frac{\Delta_0}{n}t^{2}\Big)+2A_0B_0\cos\Big(\omega_0t+\frac{\Delta_0}{n}t^{2}\Big)\sin\Big(\omega_0t+\frac{\Delta_0}{n}t^{2}\Big)\Big)+\frac{2}{n}\sum_{t=1}^{n}\epsilon_t\Big(A_0\cos\Big(\omega_0t+\frac{\Delta_0}{n}t^{2}\Big)+B_0\sin\Big(\omega_0t+\frac{\Delta_0}{n}t^{2}\Big)\Big)+\frac{1}{n}\sum_{t=1}^{n}\epsilon_t^{2}-\frac{1}{2}\big(\hat A_n^{2}+\hat B_n^{2}\big).$

Using the identities $\cos2\theta=2\cos^{2}\theta-1$, $\cos2\theta=1-2\sin^{2}\theta$, and $\sin2\theta=2\sin\theta\cos\theta$, we obtain

(144) $\hat\nu_n = \frac{A_0^{2}}{2n}\sum_{t=1}^{n}\Big(\cos\Big(2\omega_0t+\frac{2\Delta_0}{n}t^{2}\Big)+1\Big)+\frac{B_0^{2}}{2n}\sum_{t=1}^{n}\Big(1-\cos\Big(2\omega_0t+\frac{2\Delta_0}{n}t^{2}\Big)\Big)+\frac{A_0B_0}{n}\sum_{t=1}^{n}\sin\Big(2\omega_0t+\frac{2\Delta_0}{n}t^{2}\Big)+\frac{2}{n}\sum_{t=1}^{n}\epsilon_t\Big(A_0\cos\Big(\omega_0t+\frac{\Delta_0}{n}t^{2}\Big)+B_0\sin\Big(\omega_0t+\frac{\Delta_0}{n}t^{2}\Big)\Big)+\frac{1}{n}\sum_{t=1}^{n}\epsilon_t^{2}-\frac{1}{2}\big(\hat A_n^{2}+\hat B_n^{2}\big).$

From (60) we have

(145) $M_n\Big(2\omega_0,\frac{2\Delta_0}{n}\Big) = \sum_{t=1}^{n}e^{i(2\omega_0t+(2\Delta_0/n)t^{2})} = o(n).$

Thus

(146) $\sum_{t=1}^{n}\cos\Big(2\omega_0t+\frac{2\Delta_0}{n}t^{2}\Big) = o(n),\qquad \sum_{t=1}^{n}\sin\Big(2\omega_0t+\frac{2\Delta_0}{n}t^{2}\Big) = o(n).$

Therefore

(147) $\hat\nu_n = \frac{A_0^{2}+B_0^{2}}{2}+\frac{2A_0}{n}\sum_{t=1}^{n}\epsilon_t\cos\Big(\omega_0t+\frac{\Delta_0}{n}t^{2}\Big)+\frac{2B_0}{n}\sum_{t=1}^{n}\epsilon_t\sin\Big(\omega_0t+\frac{\Delta_0}{n}t^{2}\Big)+\frac{1}{n}\sum_{t=1}^{n}\epsilon_t^{2}-\frac{1}{2}\big(\hat A_n^{2}+\hat B_n^{2}\big)+o(1) = \frac{1}{n}\sum_{t=1}^{n}\epsilon_t^{2}+\frac{2A_0}{n}\sum_{t=1}^{n}\epsilon_t\cos\Big(\omega_0t+\frac{\Delta_0}{n}t^{2}\Big)+\frac{2B_0}{n}\sum_{t=1}^{n}\epsilon_t\sin\Big(\omega_0t+\frac{\Delta_0}{n}t^{2}\Big)+\frac{1}{2}\big(A_0^{2}-\hat A_n^{2}+B_0^{2}-\hat B_n^{2}\big)+o(1).$

Now we estimate each term on the right in (147). First consider the term $\frac{1}{n}\sum_{t=1}^{n}\epsilon_t^{2}$. The $\epsilon_t$'s are independent and identically distributed with $E(\epsilon_t^{2})=\nu$. Therefore, by the weak law of large numbers,

(148) $\frac{1}{n}\sum_{t=1}^{n}\epsilon_t^{2} \to_P \nu.$

Next consider the second and third terms on the right-hand side of (147).
Using Theorem 2, we have

(149) $\frac{1}{n}\sum_{t=1}^{n}\epsilon_t\cos\Big(\omega_0t+\frac{\Delta_0}{n}t^{2}\Big) = o_p(1),\qquad \frac{1}{n}\sum_{t=1}^{n}\epsilon_t\sin\Big(\omega_0t+\frac{\Delta_0}{n}t^{2}\Big) = o_p(1).$

Now consider the last term $\frac{1}{2}\big(A_0^{2}-\hat A_n^{2}+B_0^{2}-\hat B_n^{2}\big)$ in (147). From (136) we have

(150) $\hat A_n \to_P A_0,\qquad \hat B_n \to_P B_0.$

Thus

(151) $\hat A_n^{2} \to_P A_0^{2},\qquad \hat B_n^{2} \to_P B_0^{2}.$

Therefore

(152) $A_0^{2}-\hat A_n^{2}+B_0^{2}-\hat B_n^{2} \to_P 0.$

By using (148), (149), and (152) in (147), we obtain

(153) $\hat\nu_n \to_P \nu.$

This completes the proof of the theorem.
Competing Interests
The author declares that there is no conflict of interests regarding the publication of this paper.
References

[1] A. M. Walker, "On the estimation of a harmonic component in a time series with stationary dependent residuals," 1973, vol. 5, pp. 217–241. doi:10.1017/s000186780003915x.
[2] G. Larry Bretthorst, Springer, 1965.
[3] C. Ray Smith and G. J. Erickson, D. Reidel Publishing Company, 1987.
[4] R. Kumaresan and S. Verma, "On estimating the parameters of chirp signals using rank reduction techniques," in Proceedings of the 21st Asilomar Conference on Signals, Systems and Computers, Pacific Grove, Calif, USA, March 1987, pp. 555–558.
[5] P. M. Djuric and S. M. Kay, "Parameter estimation of chirp signals," 1990, vol. 38, no. 12, pp. 2118–2126. doi:10.1109/29.61538.
[6] F. Gini, M. Montanari, and L. Verrazzani, "Estimation of chirp radar signals in compound-Gaussian clutter: a cyclostationary approach," 2000, vol. 48, no. 4, pp. 1029–1039. doi:10.1109/78.827537.
[7] S. Nandi and D. Kundu, "Asymptotic properties of the least squares estimators of the parameters of the chirp signals," 2004, vol. 56, no. 3, pp. 529–544. doi:10.1007/bf02530540.
[8] G. B. Giannakis and G. Zhou, "Harmonics in multiplicative and additive noise: parameter estimation using cyclic statistics," 1995, vol. 43, no. 9, pp. 2217–2221. doi:10.1109/78.414790.
[9] G. Zhou, G. B. Giannakis, and A. Swami, "On polynomial phase signals with time-varying amplitudes," 1996, vol. 44, no. 4, pp. 848–861. doi:10.1109/78.492538.
[10] S. Shamsunder, G. B. Giannakis, and B. Friedlander, "Estimating random amplitude polynomial phase signals: a cyclostationary approach," 1995, vol. 43, no. 2, pp. 492–505. doi:10.1109/78.348131.
[11] A. Swami, "Cramer-Rao bounds for deterministic signals in additive and multiplicative noise," 1996, vol. 53, no. 2-3, pp. 231–244. doi:10.1016/0165-1684(96)00088-6.
[12] G. Zhou and G. B. Giannakis, "Harmonics in multiplicative and additive noise: performance analysis of cyclic estimators," 1995, vol. 43, no. 6, pp. 1445–1460. doi:10.1109/78.388857.
[13] A. M. Walker, "On the estimation of a harmonic component in a time series with stationary independent residuals," 1971, vol. 58, pp. 21–36. doi:10.1093/biomet/58.1.21.
[14] K. Perera, "An application of Vander Corput's inequality," 2001, vol. 21, article 8.
[15] K. Perera, "Probability inequality concerning a chirp-type signal," 2013, vol. 2013, article 131. doi:10.1186/1029-242x-2013-131.
[16] K. Perera, "An inequality concerning the kernel $\sum_{t=1}^{n}e^{i(ut+vt^{2})}$," 2012, vol. 32, pp. 59–70.