Asymptotic Optimality of Estimating Function Estimator for CHARN Model

The CHARN model is a well-known and important model in finance: it includes many financial time series models and can serve as the return process of assets. One of the most fundamental estimators for financial time series models is the conditional least squares (CL) estimator. However, it was recently shown that the optimal estimating function estimator (G estimator) is more efficient than the CL estimator for some time series models. In this paper, we examine the efficiencies of the CL and G estimators for the CHARN model and derive the condition under which the G estimator is asymptotically optimal.


Introduction
The conditional least squares (CL) estimator is one of the most fundamental estimators for financial time series models. It has two advantages: it can be calculated with ease, and it does not require knowledge of the innovation process (i.e., the error term). Hence this convenient estimator has been widely used for many financial time series models. However, Amano and Taniguchi [1] proved that it is not efficient for the ARCH model, the most famous financial time series model.
The estimating function estimator was introduced by Godambe [2, 3] and Hansen [4]. Recently, Chandra and Taniguchi [5] constructed the optimal estimating function estimator (G estimator) for the parameters of the random coefficient autoregressive (RCA) model, which was introduced to describe occasional sharp spikes exhibited in many fields, and of the ARCH model, based on Godambe's asymptotically optimal estimating function. In Chandra and Taniguchi [5], it was shown by simulation that the G estimator is better than the CL estimator. Furthermore, Amano [6] applied the CL and G estimators to some important time series models (RCA, GARCH, and nonlinear AR models) and proved theoretically that the G estimator is more efficient than the CL estimator. Amano [6] also derived conditions, which are natural and not restrictive, under which the G estimator becomes asymptotically optimal. However, in Amano [6], the G estimator was not applied to the conditional heteroscedastic autoregressive nonlinear (CHARN) model. The CHARN model was proposed by Härdle and Tsybakov [7] and Härdle et al. [8]; it includes many financial time series models and is widely used in finance. Kanai et al. [9] applied the G estimator to the CHARN model and proved its asymptotic normality. However, Kanai et al. [9] did not compare the efficiencies of the CL and G estimators or discuss the asymptotic optimality of the G estimator theoretically. Since the CHARN model is an important and rich model, which includes many financial time series models and can serve as the return process of assets, further investigation of the CL and G estimators for this model is needed. Hence, in this paper, we compare the efficiencies of the CL and G estimators and investigate the asymptotic optimality of the G estimator for this model.

This paper is organized as follows. Section 2 describes the definitions of the CL and G estimators. In Section 3, the CL and G estimators are applied to the CHARN model, and the efficiencies of these estimators are compared. Furthermore, we derive the condition for the asymptotic optimality of the G estimator. We also compare the mean squared errors of θ_CL and θ_G by simulation in Section 4. Proofs of the theorems are relegated to Section 5. Throughout this paper, we use the following notation: |A|: the sum of the absolute values of all entries of A.

Definitions of CL and G Estimators
One of the most fundamental estimators for the parameters of financial time series models is the conditional least squares (CL) estimator θ_CL introduced by Tjøstheim [10], and it has been widely used in finance. For a time series model {X_t}, θ_CL is obtained by minimizing the penalty function Q_n(θ) = Σ_t {X_t − E_θ(X_t | F_t^m)}², where F_t^m is the σ-algebra generated by {X_s : t − m ≤ s ≤ t − 1}, and m is an appropriate positive integer (e.g., if {X_t} follows a kth-order nonlinear autoregressive model, we can take m = k). The CL estimator generally has a simple expression. However, it is not asymptotically optimal in general (see Amano and Taniguchi [1]).
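As a concrete illustration (a toy example, not from the paper), consider the hypothetical first-order model X_t = θX_{t−1} + u_t, for which E(X_t | F_t^1) = θX_{t−1} and the CL penalty reduces to an ordinary least squares criterion:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical model: X_t = theta * X_{t-1} + u_t, u_t ~ i.i.d. N(0, 1).
# The CL penalty sum_t (X_t - E(X_t | F_t^1))^2 = sum_t (X_t - theta * X_{t-1})^2
# is quadratic in theta, so its minimizer is available in closed form.
theta0, n = 0.5, 5000
x = np.zeros(n)
for t in range(1, n):
    x[t] = theta0 * x[t - 1] + rng.standard_normal()

# Closed-form CL (least squares) estimate of theta
theta_cl = np.sum(x[1:] * x[:-1]) / np.sum(x[:-1] ** 2)
print(round(theta_cl, 3))  # close to the true value 0.5
```

Because the conditional mean is linear in θ here, the minimizer is available in closed form; for general nonlinear models the penalty must be minimized numerically.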

Advances in Decision Sciences
Hence, Chandra and Taniguchi [5] constructed the G estimator θ_G, based on Godambe's asymptotically optimal estimating function, for the RCA and ARCH models. For the definition of θ_G, we prepare the following estimating function G(θ). Let {X_t} be a stochastic process depending on the k-dimensional parameter θ_0; then G(θ) is given by G(θ) = Σ_t a_{t−1} h_t, where a_{t−1} is F_t^m-measurable and h_t = X_t − E(X_t | F_t^m). The estimating function estimator θ_E for the parameter θ_0 is defined as the solution of G(θ) = 0. Chandra and Taniguchi [5] derived the asymptotic variance of θ_E and gave the following lemma by extending the result of Godambe [3].

Lemma 2.1. The asymptotic variance of θ_E is minimized when G(θ) is chosen as the optimal estimating function G*(θ) = Σ_t E(∂h_t/∂θ | F_t^m) {E(h_t² | F_t^m)}⁻¹ h_t. (2.5)
Based on the estimating function G*(θ) in Lemma 2.1, Chandra and Taniguchi [5] constructed the G estimator θ_G for the parameters of the RCA and ARCH models and showed by simulation that θ_G is better than θ_CL. Furthermore, Amano [6] applied θ_G to some important financial time series models (RCA, GARCH, and nonlinear AR models) and showed theoretically that θ_G is more efficient than θ_CL. Amano [6] also derived conditions under which θ_G becomes asymptotically optimal. However, in Amano [6], θ_CL and θ_G were not applied to the CHARN model, which includes many important financial time series models. Hence, in the next section, we apply θ_CL and θ_G to this model and prove that θ_G is more efficient than θ_CL for this model. Furthermore, conditions for the asymptotic optimality of θ_G are also derived.
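To make the contrast concrete, the following sketch compares θ_CL with the G estimator built from Godambe's optimal weights a*_{t−1} = E(∂h_t/∂θ | F_t^m) {E(h_t² | F_t^m)}⁻¹. The heteroscedastic model and its parameter values are illustrative choices of ours, not examples from the paper:

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustrative heteroscedastic model:
#   X_t = theta * X_{t-1} + sqrt(1 + 0.3 * X_{t-1}^2) * u_t,  u_t ~ i.i.d. N(0, 1).
theta0, n = 0.3, 20000
x = np.zeros(n)
for t in range(1, n):
    x[t] = theta0 * x[t - 1] + np.sqrt(1.0 + 0.3 * x[t - 1] ** 2) * rng.standard_normal()

xp, xc = x[:-1], x[1:]          # X_{t-1} and X_t
w = 1.0 + 0.3 * xp ** 2         # conditional variance E(h_t^2 | F_t^m)

# CL estimator: unweighted least squares on the conditional mean.
theta_cl = np.sum(xp * xc) / np.sum(xp ** 2)

# G estimator: solve sum_t a*_{t-1} h_t(theta) = 0 with h_t = X_t - theta * X_{t-1}
# and optimal weight a*_{t-1} = E(dh_t/dtheta | F) / E(h_t^2 | F) = -X_{t-1} / w_t.
# The equation is linear in theta, giving a weighted least squares solution.
theta_g = np.sum(xp * xc / w) / np.sum(xp ** 2 / w)
print(round(theta_cl, 3), round(theta_g, 3))  # both close to the true value 0.3
```

The G estimator here is simply weighted least squares with the inverse conditional variance as weight, which is the source of its efficiency gain over the unweighted CL estimator.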

CL and G Estimators for CHARN Model
In this section, we discuss the asymptotics of θ_CL and θ_G for the CHARN model. The CHARN model of order m is defined as

X_t = F_θ(X_{t−1}, . . . , X_{t−m}) + H_θ(X_{t−1}, . . . , X_{t−m}) u_t. (3.1)

First, we estimate the true parameter θ_0 of (3.1) by use of θ_CL, which is obtained by minimizing the penalty function

Q_n(θ) = Σ_{t=m+1}^n {X_t − F_θ(X_{t−1}, . . . , X_{t−m})}². (3.2)
For the asymptotics of θ_CL, we impose the following assumptions.
Assumption 3.1. (1) u_t has a probability density function f(u) > 0 a.e. u ∈ R. (2) There exist constants such that

3.3
(3) H_θ(x) is continuous and symmetric on R^m, and there exists a positive constant λ such that H_θ(x) ≥ λ. Assumption 3.1 makes {X_t} strictly stationary and ergodic (see [11]). We further impose the following.

Assumption 3.2. Consider the following
for all θ ∈ Θ.
Assumption 3.3. (1) F_θ and H_θ are almost surely twice continuously differentiable in Θ, and their derivatives ∂F_θ/∂θ_j and ∂H_θ/∂θ_j, j = 1, . . . , k, satisfy the condition that there exist square-integrable functions A_j and B_j such that (3.8) holds. (3) The continuous derivative f′(u) ≡ ∂f(u)/∂u exists on R and satisfies

3.11
Next, we apply θ_G to the CHARN model. From Lemma 2.1, θ_G is obtained by solving the equation

For the asymptotics of θ_G, we impose the following assumptions.

and equality holds if and only if
This theorem is proved by use of the Kholevo inequality (see Kholevo [12]). From this theorem, we can see that the asymptotic variance of θ_G is smaller in magnitude than that of θ_CL, and that the condition under which these asymptotic variances coincide is strict. Therefore, θ_G is more efficient than θ_CL. Hence, we evaluate the condition under which θ_G is asymptotically optimal based on local asymptotic normality (LAN). LAN is the concept of local asymptotic normality for the likelihood ratio of general statistical models, which was established by Le Cam [13]. Once LAN is established, the asymptotic optimality of estimators and tests can be described in terms of the LAN property. Hence, the Fisher information matrix Γ is described in terms of LAN, and the asymptotic variance of an estimator has the lower bound Γ⁻¹. Now, we prepare the following lemma, which is due to Kato et al., where

3.20
From this lemma, the asymptotic variance V⁻¹ of θ_G has the lower bound Γ⁻¹; that is,

V⁻¹ ≥ Γ⁻¹. (3.22)
Finally, we give the following example which satisfies the assumptions in Theorems 3.7 and 3.9.
Example 3.10. The CHARN model includes the following nonlinear AR model, where a_0 > 0, a_j ≥ 0, j = 1, . . . , m, and Σ_{j=1}^m a_j < 1. In Amano [6], it was shown that the asymptotic variance of θ_CL attains that of θ_G. Amano [6] also showed that, under the condition that u_t is Gaussian, θ_G is asymptotically optimal.
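To illustrate how the CHARN model nests familiar special cases, a CHARN(1) path can be simulated as follows; the functional forms and parameter values are hypothetical choices for this sketch:

```python
import numpy as np

rng = np.random.default_rng(42)

# CHARN(1): X_t = F(X_{t-1}) + H(X_{t-1}) * u_t, u_t ~ i.i.d. N(0, 1), with the
# hypothetical choices F(x) = a*x (AR part) and H(x) = sqrt(b0 + b1*x^2) (ARCH part).
# Setting a = 0 recovers an ARCH(1) model; setting b1 = 0 gives a Gaussian AR(1).
a, b0, b1 = 0.4, 1.0, 0.3
n = 1000
x = np.zeros(n)
for t in range(1, n):
    x[t] = a * x[t - 1] + np.sqrt(b0 + b1 * x[t - 1] ** 2) * rng.standard_normal()

print(x[:3])
```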

Numerical Studies
In this section, we evaluate the accuracies of θ_CL and θ_G for the parameter of the CHARN model by simulation. Throughout this section, we assume the following model:

Proofs
This section provides the proofs of the theorems. First, we prepare the following lemma, which is used to compare the asymptotic variances of the CL and G estimators (see Kholevo [12]). Lemma 5.1. Let ψ(ω) and φ(ω) be r × s and t × s random matrices, respectively, and let h(ω) be a random variable that is positive everywhere. If the matrix {E(φφ′/h)}⁻¹ exists, then the following inequality holds. The equality holds if and only if there exists a constant r × t matrix C such that hψ = Cφ a.e.

E(hψψ′) ≥ E(ψφ′){E(φφ′/h)}⁻¹E(φψ′). (5.2)
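As a quick numerical sanity check (not part of the paper), the Cauchy–Schwarz-type inequality of Lemma 5.1, E(hψψ′) ≥ E(ψφ′){E(φφ′/h)}⁻¹E(φψ′) in the positive semidefinite order, can be verified on a finite sample space where expectations are averages over N equally likely outcomes:

```python
import numpy as np

rng = np.random.default_rng(3)

# Finite sample space with N equally likely outcomes; psi is r-dim, phi is t-dim,
# h is a positive random variable (the s = 1 column case of the lemma).
N, r, t = 500, 2, 3
psi = rng.standard_normal((N, r))
phi = rng.standard_normal((N, t))
h = rng.uniform(0.5, 2.0, size=N)

# Empirical versions of E(h psi psi'), E(psi phi'), and E(phi phi' / h).
E_hpsipsi = (h[:, None, None] * psi[:, :, None] * psi[:, None, :]).mean(axis=0)
E_psiphi = (psi[:, :, None] * phi[:, None, :]).mean(axis=0)
E_phiphi_h = ((phi[:, :, None] * phi[:, None, :]) / h[:, None, None]).mean(axis=0)

# The gap matrix should be positive semidefinite (a Schur complement argument).
gap = E_hpsipsi - E_psiphi @ np.linalg.inv(E_phiphi_h) @ E_psiphi.T
eigvals = np.linalg.eigvalsh(gap)
print(eigvals)  # all eigenvalues nonnegative (up to floating-point error)
```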
Now we proceed to prove Theorem 3.7.

5.3
Hence, from Lemma 5.1, we can see that W ≥ UV⁻¹U.

5.4
From this inequality, we can see that

Table 1 :
MSE of θ_CL and θ_G for the parameter a in the model of Section 4.
X_t = F_θ(X_{t−1}, . . . , X_{t−m}) + H_θ(X_{t−1}, . . . , X_{t−m}) u_t, (3.1) where F_θ, H_θ : R^m → R are measurable functions, and {u_t} is a sequence of i.i.d. random variables with E(u_t) = 0 and E(u_t²) = 1, independent of {X_s ; s < t}. Here, the parameter vector θ = (θ_1, . . . , θ_k) is assumed to lie in an open set Θ ⊂ R^k. Its true value is denoted by θ_0.
where F_θ : R^m → R is a measurable function, {u_t} is a sequence of i.i.d. random variables with E(u_t) = 0 and E(u_t²) = 1, independent of {X_s ; s < t}, and Assumptions 3.1, 3.2, 3.3, and 3.5 are satisfied; for example, {u_t} ∼ i.i.d. N(0, 1). Mean squared errors (MSEs) of θ_CL and θ_G for the parameter a are reported in Table 1. The simulations are based on 1000 realizations, and we set the parameter value a and the length of observations n as a = 0.1, 0.2, 0.3 and n = 100, 200, 300. From Table 1, we can see that the MSE of θ_G is smaller than that of θ_CL. Furthermore, the MSEs of both θ_CL and θ_G decrease as the length of observations n increases.
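A Monte Carlo comparison in the spirit of this section can be sketched as follows; the model, parameter value, and replication count below are illustrative choices of ours, not the exact setup behind Table 1:

```python
import numpy as np

rng = np.random.default_rng(7)

def simulate(a, n):
    """One path of the toy CHARN model X_t = a*X_{t-1} + sqrt(1 + 0.5*X_{t-1}^2)*u_t."""
    x = np.zeros(n)
    for t in range(1, n):
        x[t] = a * x[t - 1] + np.sqrt(1.0 + 0.5 * x[t - 1] ** 2) * rng.standard_normal()
    return x

def estimates(x):
    """Return (CL estimate, G estimate) of a for one simulated path."""
    xp, xc = x[:-1], x[1:]
    w = 1.0 + 0.5 * xp ** 2                          # conditional error variance
    a_cl = np.sum(xp * xc) / np.sum(xp ** 2)         # conditional least squares
    a_g = np.sum(xp * xc / w) / np.sum(xp ** 2 / w)  # optimal estimating function
    return a_cl, a_g

a0, n, reps = 0.2, 200, 500
res = np.array([estimates(simulate(a0, n)) for _ in range(reps)])
mse_cl, mse_g = np.mean((res - a0) ** 2, axis=0)
print(mse_cl, mse_g)  # the G estimator's MSE is typically the smaller of the two
```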
The Fisher information matrix Γ of the CHARN model based on LAN can be represented as