On the Exponential Inequality for Weighted Sums of a Class of Linearly Negative Quadrant Dependent Random Variables

The exponential inequality for weighted sums of a class of linearly negative quadrant dependent random variables is established, which extends and improves the corresponding ones obtained by Ko et al. (2007) and Jabbari et al. (2009). In addition, we also give the relevant precise asymptotics.


Introduction
Lehmann [1] introduced a natural definition of negative dependence: two random variables $X$ and $Y$ are said to be negative quadrant dependent (NQD, say) if $P(X > x, Y > y) \le P(X > x)P(Y > y)$ for all $x, y \in \mathbb{R}$. Based on the concept of NQD, another notion of negative dependence was formulated by Newman [2] as follows: a sequence $\{X_i, 1 \le i \le n\}$ of random variables is said to be linearly negative quadrant dependent (LNQD, say) if, for any disjoint subsets $A_1$ and $A_2$ of $\{1, 2, \ldots, n\}$ and positive $r_j$'s, $\sum_{i \in A_1} r_i X_i$ and $\sum_{j \in A_2} r_j X_j$ are NQD. Recall that a finite family of random variables $\{X_i, 1 \le i \le n\}$ is said to be negatively associated (NA, say) if, for every pair of disjoint subsets $A_1$ and $A_2$ of $\{1, 2, \ldots, n\}$, $\mathrm{Cov}\{f_1(X_i, i \in A_1), f_2(X_j, j \in A_2)\} \le 0$ whenever $f_1$ and $f_2$ are coordinatewise increasing and the covariance exists. An infinite family is NA if every finite subfamily is NA. The concept of negative association was introduced by Joag-Dev and Proschan [3]. NA sequences are LNQD, whereas LNQD sequences are not necessarily NA, as can be seen from the examples in Newman [2] or Joag-Dev and Proschan [3]. Hence, it is of interest to investigate the exponential inequality and its related results for LNQD sequences. It is well known that exponential inequalities for the partial sums $\sum_{i=1}^{n}(X_i - EX_i)$ play a very important role in various proofs of limit theorems. One can refer to Yang and Wang [4], T.-S. Kim and H.-C. Kim [5], Sung [6], Jabbari et al. [7], Xing et al. [8], Sung [9], and so on for further details. As for limit results for LNQD sequences, one can refer to Newman [2], Zhang [10], H. Kim and T. Kim [11], Wang and Zhang [12], and the references therein.
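As a quick empirical illustration of the NQD definition, the following sketch checks the inequality $P(X > x, Y > y) \le P(X > x)P(Y > y)$ for the classical NQD pair $(X, Y) = (X, -X)$; the choice $X \sim \mathrm{Uniform}(0, 1)$ and the evaluation grid are our assumptions for illustration, not taken from the paper.

```python
import random

# Empirical check of negative quadrant dependence for (X, Y) = (X, -X),
# a classical NQD pair: P(X > x, Y > y) <= P(X > x) * P(Y > y) for all
# x, y. X ~ Uniform(0, 1) is an illustrative assumption.

random.seed(0)
n = 100_000
xs = [random.random() for _ in range(n)]

def joint(x, y):
    """Empirical P(X > x, -X > y)."""
    return sum(1 for v in xs if v > x and -v > y) / n

def marg_x(x):
    """Empirical P(X > x)."""
    return sum(1 for v in xs if v > x) / n

def marg_y(y):
    """Empirical P(-X > y)."""
    return sum(1 for v in xs if -v > y) / n

# The NQD inequality should hold (up to sampling noise) on a grid of
# quadrant corners (x, y).
for x in (0.2, 0.5, 0.8):
    for y in (-0.8, -0.5, -0.2):
        assert joint(x, y) <= marg_x(x) * marg_y(y) + 1e-9
print("NQD inequality holds on the grid")
```

Since $-X$ is a decreasing function of $X$, the joint tail probability can never exceed the product of the marginal tails, which is exactly what the grid check confirms.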
Recently, Ko et al. [13] gave a Bernstein-Hoeffding type inequality for uniformly bounded LNQD random variables, by which they obtained the almost sure convergence rate $O(b_n n^{-1/2}(\log n)^{1/2})$, where $b_n \to \infty$ as $n \to \infty$. Motivated by that paper, we establish an exponential inequality for weighted sums of uniformly bounded LNQD random variables. The result obtained extends and improves the corresponding ones given by Ko et al. [13] and Jabbari et al. [7]. Furthermore, we give the precise asymptotics with respect to the rate $n^{-1/2}(\log n)^{1/2}$.
Throughout this paper, $C$ and $C_1$ denote positive constants independent of $n$ whose values may vary from place to place, $[x]$ denotes the integer part of $x$, $S_n = \sum_{i=1}^{n} X_i$, $\sigma_n^2 = ES_n^2$, $u(n) = \sup_{i \ge 1} \sum_{j:|i-j| \ge n} |\mathrm{Cov}(X_i, X_j)|$, and $\log x = \ln(x \vee e)$. This paper is organized as follows. Section 2 contains our main results. Section 3 contains the corresponding proofs.
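To make the coefficient $u(n)$ concrete, the following sketch evaluates it for a hypothetical stationary covariance $\mathrm{Cov}(X_i, X_j) = \rho^{|i-j|}$, the geometrically decreasing case; the value $\rho = 0.5$ and the truncation horizon are our illustrative choices, not the paper's.

```python
# Illustration of the dependence coefficient
#     u(n) = sup_{i>=1} sum_{j: |i-j| >= n} |Cov(X_i, X_j)|
# for an assumed stationary covariance Cov(X_i, X_j) = rho**|i-j|,
# 0 < rho < 1. For an index i deep inside the sequence, both sides
# of i contribute, giving u(n) = 2 * rho**n / (1 - rho), which decays
# geometrically, so a condition like u(n) <= C * n**(-p) holds for
# every p > 1.

rho = 0.5

def u(n, horizon=60):
    """Truncated value of sum over j with |i-j| >= n, for interior i."""
    return 2 * sum(rho ** k for k in range(n, n + horizon))

for n in range(1, 6):
    closed_form = 2 * rho ** n / (1 - rho)
    assert abs(u(n) - closed_form) < 1e-12
```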

Main Results
In this section, our main results are given. To formulate the theorems, we need some assumptions, which are listed below.

Theorem 1.
Suppose that assumption (A1) holds. Then, for any $0 < \varepsilon < 1$, one has

where $C_1$ and $C$ are positive constants.
(1) With a suitable choice of $\varepsilon$ in Theorem 1, the Borel-Cantelli lemma yields the corresponding almost sure convergence rate.
(2) Since the LNQD property is strictly weaker than negative association, as mentioned in Section 1, Theorem 1 extends Theorem 2.1 in Jabbari et al. [7] from the strictly stationary negatively associated setting to the weighted LNQD case. In addition, by the analysis above, the strong convergence rate obtained here is much faster than the corresponding rate $O(n^{-1/3}(\log n)^{2/3})$, which Jabbari et al. [7] obtained only for the special case of geometrically decreasing covariances.
(3) For sequences of extended negatively dependent (END, say) or widely orthant dependent (WOD, say) random variables, similar results can also be obtained.
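The flavor of such an exponential bound can be seen already in the independent, uniformly bounded special case (independence is a trivial instance of LNQD), where Hoeffding's classical inequality $P(|S_n|/n \ge \varepsilon) \le 2\exp(-n\varepsilon^2/2)$ applies for variables in $[-1, 1]$ with mean zero. The simulation below, with parameters chosen by us for illustration, checks the empirical tail against that bound.

```python
import math
import random

# Sanity check of a Hoeffding-type exponential tail bound in the
# simplest LNQD case: independent X_i ~ Uniform(-1, 1) (mean zero,
# bounded by 1). For such variables,
#     P(|S_n| / n >= eps) <= 2 * exp(-n * eps**2 / 2).
# All parameter values here are illustrative assumptions.

random.seed(1)
n, eps, reps = 200, 0.2, 2000

exceed = 0
for _ in range(reps):
    s = sum(random.uniform(-1, 1) for _ in range(n))
    if abs(s) / n >= eps:
        exceed += 1

empirical = exceed / reps
bound = 2 * math.exp(-n * eps ** 2 / 2)
assert empirical <= bound  # the empirical tail sits below the bound
```

The bound decays exponentially in $n$ for fixed $\varepsilon$, which is precisely the feature that drives Borel-Cantelli arguments and almost sure convergence rates.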
Theorem 4. Suppose that $u(n) \le C n^{-\rho}$ for some $\rho > 1$. Then, for $\delta > -1$, one has

where $N$ denotes the standard normal random variable.
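For orientation, precise asymptotics with respect to the rate $n^{-1/2}(\log n)^{1/2}$ are classically stated in the following Gut-Spataru-type form; this is a generic template under the present normalization ($\sigma_n^2 = ES_n^2$, $N$ standard normal), not a reproduction of the paper's display.

```latex
% Generic Gut--Spataru-type precise asymptotics at the rate
% n^{-1/2}(\log n)^{1/2}; N denotes a standard normal random variable.
\lim_{\varepsilon \downarrow 0}\;
\varepsilon^{2(\delta+1)}
\sum_{n \ge 2} \frac{(\log n)^{\delta}}{n}\,
P\!\left( |S_n| \ge \varepsilon\, \sigma_n (\log n)^{1/2} \right)
= \frac{E|N|^{2(\delta+1)}}{\delta + 1},
\qquad \delta > -1.
```

The restriction $\delta > -1$ is what makes the limiting constant $E|N|^{2(\delta+1)}/(\delta+1)$ finite, matching the range of $\delta$ in the theorem above.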

Proofs
First, we state several lemmas that will be used in what follows.
Therefore, in view of the result above and the definition of LNQD, the desired inequality follows. Also, since $a_i \ge 0$ and $t > 0$, the sequence $\{t a_i X_i, 1 \le i \le n\}$ is LNQD. Therefore, from Lemma 3.1 in Ko et al. [13], we obtain the corresponding bound on the moment generating function, and combining it with (13) completes the proof.

Proof. Applying the Markov inequality and Lemma 6, we obtain an exponential upper bound in $t > 0$. Optimizing the exponent of this upper bound, we find that the minimizing choice $t = \varepsilon/(4a)$, with $a$ the quadratic coefficient in the exponent, makes the exponent equal to $-\varepsilon^2/(8a)$, which gives the bound asserted in Lemma 7. The proof is completed.
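The optimization in the last step is the standard Chernoff argument; the following sketch spells it out, with $a > 0$ standing in (our notation) for the variance-type coefficient furnished by Lemma 6.

```latex
% Chernoff optimization sketch; a > 0 is a stand-in for the
% variance-type coefficient supplied by Lemma 6.
\begin{align*}
P(S_n \ge \varepsilon)
  &\le e^{-t\varepsilon}\, E e^{t S_n}
   \le \exp\!\left( -t\varepsilon + 2 a t^2 \right),
   \qquad t > 0, \\
\frac{d}{dt}\left( -t\varepsilon + 2 a t^2 \right)
  &= -\varepsilon + 4 a t = 0
   \quad\Longrightarrow\quad t = \frac{\varepsilon}{4a}, \\
\left. -t\varepsilon + 2 a t^2 \,\right|_{t = \varepsilon/(4a)}
  &= -\frac{\varepsilon^2}{4a} + \frac{\varepsilon^2}{8a}
   = -\frac{\varepsilon^2}{8a}.
\end{align*}
% When a = 2B, with B a bound on the variance term, the exponent
% reads -\varepsilon^2/(16B), the form appearing in the proof above.
```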
Lemma 8 (see [10]). Under the conditions of Theorem 4, one has

where $N$ is a standard normal random variable.
Lemma 9 (see [12]). Under the conditions of Theorem 4, we have, for $x \in \mathbb{R}$,

where $\Phi(x)$ denotes the standard normal distribution function.
Based on the above lemmas, the proofs of Theorems 1 and 4 can be given as follows.
Proof of Theorem 1. Let $t > 0$ be such that $C_1 t \le 1/2$; then it follows from Lemma 7 that the asserted bound holds, which completes the proof.
it is sufficient to prove that the desired relation holds for any $\delta > -1$.