A NOTE ON HAMMERSLEY'S INEQUALITY FOR ESTIMATING THE NORMAL INTEGER MEAN
© Hindawi Publishing Corp.

Let X_1, X_2, ..., X_n be a random sample from a normal N(θ, σ²) distribution with an unknown integer mean θ = 0, ±1, ±2, .... Hammersley (1950) proposed the maximum likelihood estimator (MLE) d = [X̄_n], the nearest integer to the sample mean, as an unbiased estimator of θ and extended the Cramér-Rao inequality. The Hammersley lower bound for the variance of any unbiased estimator of θ is significantly improved, and the asymptotic (as n → ∞) limit of the Fraser-Guttman-Bhattacharyya bounds is also determined. A limiting property of a suitable distance is used to give some plausible explanations of why such bounds cannot be attained. An almost uniformly minimum variance unbiased (UMVU) like property of d is exhibited.


Introduction.
Let X_1, X_2, ..., X_n be a random sample from a normal N(θ, σ²) distribution with an unknown integer mean θ = 0, ±1, ±2, .... Hammersley [3] proposed the maximum likelihood estimator (MLE) d = [X̄_n], the nearest integer to the sample mean, as an unbiased estimator of θ. He extended the Cramér-Rao variance inequality and showed that, for any unbiased estimator t = t(X_1, ..., X_n),

var_θ(t) ≥ H = (e^{n/σ²} − 1)^{−1}.   (1.1)

He noticed that d does not achieve this bound (even asymptotically) and raised the question whether H is a greatest lower bound, that is, whether there is any estimator attaining it. Here the bound H is improved to the extent that the new bound is almost twice the size of H. However, even the improved bound cannot be attained. The asymptotic limit (as n → ∞) of the Fraser-Guttman-Bhattacharyya bounds is also determined, although d still fails to achieve this limit. We consider a suitable distance and use its limiting property to shed some light on the reasons why such bounds are unattainable.

An intriguing behavior of d is observed as follows. Define the loss functions

L_0(d, θ) = I(d ≠ θ),   L_k(d, θ) = |d − θ|^k, k = 1, 2,   (1.2)

where I is the usual indicator function. Let R_k(n) = E_θ L_k(d, θ) be the risk functions of d for k = 0, 1, 2. Then, as n → ∞,

R_k(n) ∼ 2[1 − Φ(√n/(2σ))],   k = 0, 1, 2,   (1.3)

Φ being the standard normal distribution function. Hammersley [3] proved (1.3) when k = 2. The interesting observation is that R_k(n) has the same asymptotic behavior when k = 0, 1. In Section 2, we prove (1.3), improve the bound (1.1), and determine the asymptotic limit of the Bhattacharyya bounds. We also use a suitable distance and its limiting property to show why such bounds cannot be attained even asymptotically. Such apparent anomalies seem to stem from the restricted parameter space. Some other properties of d (such as admissibility, minimaxity, etc.) have been explored by Khan [4, 5, 6], Ghosh and Meeden [2], and Kojima et al.
[7]. However, the main focus of this paper is to settle some of the questions raised by Hammersley [3] himself regarding his bound, its attainment, and its relevance to his estimator. His problem is revisited for theoretical interest. A by-product of this pursuit is the conclusive observation that d = [X̄_n] is the best estimator even though d fails to achieve any bound. In fact, at the end of Section 2, we show that d is almost like a uniformly minimum variance unbiased (UMVU) estimator in a restricted sense.
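The two claims above can be checked numerically. The following is a minimal sketch (Python), assuming σ = 1 and the closed forms H = (e^{n/σ²} − 1)^{−1} and B = 2/(e^{n/σ²} − e^{−n/σ²}) derived in Section 2; it computes the risks R_k(n) of d directly from the normal distribution function and compares them with the common asymptote in (1.3), and it compares the two bounds.

```python
import math

def Phi(x):
    """Standard normal distribution function."""
    return 0.5 * math.erfc(-x / math.sqrt(2))

def f(j, n, sigma=1.0):
    """f(j) = P_theta(d = theta + j) for d = [X_bar_n] and integer theta."""
    r = math.sqrt(n) / sigma
    return Phi((j + 0.5) * r) - Phi((j - 0.5) * r)

def risk(k, n, sigma=1.0, J=50):
    """R_k(n): 0-1 risk (k = 0), absolute-error (k = 1), squared-error (k = 2)."""
    w = (lambda j: 1.0) if k == 0 else (lambda j: float(abs(j)) ** k)
    # by symmetry f(-j) = f(j), sum over j >= 1 and double
    return 2.0 * sum(w(j) * f(j, n, sigma) for j in range(1, J))

n, sigma = 30, 1.0
R = [risk(k, n, sigma) for k in (0, 1, 2)]
approx = 2.0 * (1.0 - Phi(math.sqrt(n) / (2.0 * sigma)))  # common asymptote in (1.3)

# Hammersley's bound H and the improved bound B (both maximized at h = 1):
x = math.exp(n / sigma**2)
H = 1.0 / (x - 1.0)
B = 2.0 / (x - 1.0 / x)

print(R, approx, B / H)
```

For n = 30 the three risks already agree with each other, and with 2[1 − Φ(√n/(2σ))], to many significant digits, while B exceeds H by a factor close to 2.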

The main results.
In what follows, P_θ denotes the probability under θ and E_θ denotes the corresponding expectation (P_i and E_i have similar meanings). Let Φ(x) be the standard normal distribution function, and let φ(x) = (2π)^{−1/2} exp(−x²/2). The nearest integer to y is denoted by [y] throughout the paper without further mention. It follows from the definition of d = [X̄_n] that

f(j) = P_θ(d = θ + j) = Φ((j + 1/2)√n/σ) − Φ((j − 1/2)√n/σ), j = 0, ±1, ±2, ....   (2.1)

Clearly, f(−j) = f(j) with maximum at j = 0, and

R_0(n) = Σ_{j≠0} f(j) = 2[1 − Φ(√n/(2σ))],   (2.2)

proving (1.3) when k = 0. Moreover, for k = 1, 2, we have

2f(1) ≤ R_1(n) ≤ R_2(n),   (2.3)

and, since 2f(1) = 2[Φ(3√n/(2σ)) − Φ(√n/(2σ))] ∼ 2[1 − Φ(√n/(2σ))] as n → ∞,   (2.4)

the case k = 2 of (1.3), which was shown by Hammersley [3], together with (2.4) implies (1.3) when k = 1.

We now consider the problem of improving the bound (1.1). Let X_1, X_2, ..., X_n be i.i.d. N(θ, σ²), where θ is not necessarily an integer, and let f_n(θ) denote the joint density of (X_1, ..., X_n). Letting T = Σ_{i=1}^n X_i, we have

f_n(θ) = g(x_1, ..., x_n) exp{(θT − nθ²/2)/σ²},   (2.5)

where g(x_1, ..., x_n) = (2πσ²)^{−n/2} exp(−Σ_{i=1}^n x_i²/(2σ²)).   (2.6)

It is easy to verify that

E_θ[f_n(θ + a) f_n(θ + b)/f_n²(θ)] = exp(nab/σ²).   (2.7)

Let t(X_1, ..., X_n) be an unbiased estimator of θ, and let h ≠ 0 (h is an integer if θ is an integer). Then E_{θ±h} t(X_1, ..., X_n) = θ ± h implies that

E_θ[tS] = 2h,   (2.8)

where S = [f_n(θ + h) − f_n(θ − h)]/f_n(θ). Since E_θ S = 0, (2.8) implies that cov_θ(t, S) = 2h. Consequently, the Cauchy-Schwarz inequality gives

var_θ(t) ≥ (2h)²/E_θ S².   (2.9)

Using (2.7), we obtain

E_θ S² = 2[exp(nh²/σ²) − exp(−nh²/σ²)].   (2.10)

Thus

var_θ(t) ≥ B_1(h) = 2h²/[exp(nh²/σ²) − exp(−nh²/σ²)].   (2.11)

It is easily seen that lim_{h→0} B_1(h) = σ²/n (the usual bound). However, in the case under consideration, θ is an integer, so h ≠ 0 must be an integer to make (2.8) and the subsequent equations valid. Hence, maximizing the bound B_1(h) over integers h ≠ 0, we obtain

var_θ(t) ≥ B = B_1(1) = 2/[exp(n/σ²) − exp(−n/σ²)].   (2.12)

It is easy to see that B > H and B/H → 2 as n → ∞. Thus, the bound in (1.1) is not the best possible, there can be no estimator achieving (1.1) even asymptotically, and the question raised by Hammersley [3] is resolved.

Motivated by the preceding improvement, it is tempting to determine the limit of the kth Fraser-Guttman-Bhattacharyya bound B_k as k → ∞. For integers h ≠ 0, define the generalized difference operator (cf. Feller [1, page 220]) by

Δ_h^m f_n(θ) = Σ_{i=0}^m (−1)^{m−i} C(m, i) f_n(θ + ih),   (2.13)

and set S_m = Δ_h^m f_n(θ)/[h^m f_n(θ)], m = 1, 2, ..., k. Since Σ_{i=0}^m (−1)^{m−i} C(m, i) = 0, we note that E_θ S_m = 0, and

E_θ[f_n(θ + ih) f_n(θ + jh)/f_n²(θ)] = x^{ij},   x = exp(nh²/σ²).   (2.14)

Using (2.7), one verifies that

cov_θ(S_α, S_β) = h^{−(α+β)} Σ_{i=0}^α Σ_{j=0}^β (−1)^{α+β−i−j} C(α, i) C(β, j) x^{ij}.   (2.16)

Moreover, it is easy to check that, for any unbiased estimator,

cov_θ(t, S_1) = 1,   cov_θ(t, S_m) = 0, m = 2, ..., k.   (2.17)

Thus, γ = (1, 0, ..., 0)′ is the vector of covariances between t and S_1, ..., S_k. Let Σ = (cov_θ(S_α, S_β))_{k×k} be the covariance matrix of S_1, S_2, ..., S_k. Then the kth Bhattacharyya bound is B_k(h) = γ′Σ^{−1}γ. A general expression for B_k(h) is intractable, although its asymptotic limit is discussed below. But first we evaluate B_2(h). Letting y = exp(nh²/σ²), it is easily seen from (2.16) that

B_2(h) = h²(y³ + y² − 3y + 1)/[y(y + 1)(y − 1)²].   (2.18)

Maximizing B_2(h) over integers, we obtain

max_h B_2(h) = B_2(1) ∼ H as n → ∞,   (2.19)

where H is the bound in (1.1). However, B_2(1) ≤ B, so the second-order bound does not improve on (2.12). Thus, in view of the asymptotic considerations, we replace Σ by A = (x^{αβ}/h^{α+β})_{k×k}, α, β = 1, 2, ..., k, and the bound B_k is replaced by B*_k = γ′A^{−1}γ = |A_11|/|A|, where A_11 is the (1, 1) cofactor of A. Note that B_k(h) and B*_k(h) both depend on n, and B_k(h) ∼ B*_k(h) as n → ∞. Fortunately, the determinants |A| and |A_11| are related to the Vandermonde determinant. Consider the k × k Vandermonde determinant whose ith row is

(1, x_i, x_i², ..., x_i^{k−1}).

Then it is well known that the Vandermonde determinant V(x_1, ..., x_k) is given by

V(x_1, ..., x_k) = Π_{1≤i<j≤k} (x_j − x_i).   (2.20)

Factoring x^α h^{−α} out of the αth row and h^{−β} out of the βth column of A, we obtain

|A| = h^{−k(k+1)} x^{k(k+1)/2} V(x, x², ..., x^k).   (2.21)

Next, we note that A_11 is a (k − 1) × (k − 1) determinant given, by the same factorization, by

|A_11| = h^{−(k(k+1)−2)} x^{k(k+1)−2} V(x², x³, ..., x^k).   (2.23)

Also, since V(x², ..., x^k)/V(x, x², ..., x^k) = 1/Π_{j=2}^k (x^j − x), recalling the definition x = exp(λy), λ = n/σ², y = h², from (2.21) and (2.23) we have

B*_k(h) = |A_11|/|A| = h²/[x Π_{j=2}^k (1 − x^{1−j})].   (2.30)

Clearly, B*_k(1) and lim_{k→∞} B*_k(1) are both asymptotic to exp(−n/σ²) ∼ H (as n → ∞), and hence the asymptotic limit of the Bhattacharyya bounds does not improve the Hammersley bound.

Now the question arises: why are such bounds unattainable? To see the reason, we define a suitable distance and examine its limit. Let f_n(θ) be the joint density of (X_1, ..., X_n) under P_θ relative to a σ-finite measure µ_n, where θ ∈ Ω (not necessarily normal). Let θ_1, θ_2 ∈ Ω, and define

D_n = D_n(θ_1, θ_2) = ∫ |f_n(θ_1) − f_n(θ_2)| dµ_n,   ρ_n(θ_1, θ_2) = ∫ [f_n(θ_1) f_n(θ_2)]^{1/2} dµ_n.   (2.31)

We need the following elementary lemma of independent and general interest.
Lemma 2.1. The upper and lower bounds for D_n are given by

2[1 − ρ_n(θ_1, θ_2)] ≤ D_n ≤ 2[1 − ρ_n²(θ_1, θ_2)]^{1/2}.   (2.32)

Proof. Clearly,

D_n = ∫ |√f_n(θ_1) − √f_n(θ_2)| [√f_n(θ_1) + √f_n(θ_2)] dµ_n ≥ ∫ [√f_n(θ_1) − √f_n(θ_2)]² dµ_n = 2[1 − ρ_n(θ_1, θ_2)],

proving the first half of the inequality. Moreover, by the Cauchy-Schwarz inequality, we have

D_n ≤ {∫ [√f_n(θ_1) − √f_n(θ_2)]² dµ_n}^{1/2} {∫ [√f_n(θ_1) + √f_n(θ_2)]² dµ_n}^{1/2} = 2[1 − ρ_n²(θ_1, θ_2)]^{1/2},   (2.33)

and the lemma is proved.
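Lemma 2.1 is easy to verify numerically. The following sketch (Python, with illustrative choices n = 9, σ = 1, h = 1) computes D_n by numerical integration over the sufficient statistic X̄_n ∼ N(θ, σ²/n), the affinity ρ_n in its normal closed form exp(−nh²/(8σ²)), and the standard closed form for the L_1 distance between two normal densities with common variance.

```python
import math

def Phi(x):
    """Standard normal distribution function."""
    return 0.5 * math.erfc(-x / math.sqrt(2))

def density(x, mu, s):
    """N(mu, s^2) density."""
    return math.exp(-0.5 * ((x - mu) / s) ** 2) / (s * math.sqrt(2.0 * math.pi))

def l1_distance(mu1, mu2, s, grid=200001, width=12.0):
    """Trapezoid-rule approximation of the integral of |f1 - f2|."""
    lo = min(mu1, mu2) - width * s
    hi = max(mu1, mu2) + width * s
    step = (hi - lo) / (grid - 1)
    total = 0.0
    for i in range(grid):
        x = lo + i * step
        w = 0.5 if i in (0, grid - 1) else 1.0
        total += w * abs(density(x, mu1, s) - density(x, mu2, s))
    return total * step

n, sigma, h = 9, 1.0, 1                 # illustrative choices
s = sigma / math.sqrt(n)                # sd of X_bar_n
Dn = l1_distance(0.0, float(h), s)      # D_n computed via the sufficient statistic
rho = math.exp(-n * h * h / (8.0 * sigma**2))      # affinity rho_n
lower = 2.0 * (1.0 - rho)
upper = 2.0 * math.sqrt(1.0 - rho**2)
exact = 2.0 * (2.0 * Phi(math.sqrt(n) * abs(h) / (2.0 * sigma)) - 1.0)

print(lower, Dn, upper, exact)
```

The numerically integrated D_n lands strictly between the two bounds of the lemma and matches the closed form, and all three quantities approach 2 as n grows.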
In the normal case N(θ, σ²), recall the density (2.5) and note that

ρ_n(θ_1, θ_2) = exp(−n(θ_1 − θ_2)²/(8σ²)).   (2.34)

Let θ_1 ≠ θ_2, and set h = θ_1 − θ_2. Then the inequality in Lemma 2.1 becomes

2[1 − exp(−nh²/(8σ²))] ≤ D_n ≤ 2[1 − exp(−nh²/(4σ²))]^{1/2},   (2.35)

so that D_n → 2 as n → ∞. However, in the process of using the Cauchy-Schwarz inequality to obtain Hammersley's bound (1.1) (although there is no escape from it), the following inequality occurs in disguise:

D_n = E_θ |f_n(θ + h)/f_n(θ) − 1| ≤ [E_θ (f_n(θ + h)/f_n(θ) − 1)²]^{1/2} = [exp(nh²/σ²) − 1]^{1/2}.   (2.36)

This very weak inequality (the left side is bounded by 2 while the right side grows exponentially in n) is the cause of the poor lower bound when θ is restricted to integers (h ≠ 0 being restricted to integers as well).

Now we compute D_n exactly in the normal case. By the sufficiency of X̄_n, D_n reduces to the L_1-distance between the densities of N(θ_1, σ²/n) and N(θ_2, σ²/n). In general, for any θ_1 ≠ θ_2 and h = θ_2 − θ_1, we have

D_n = 2[2Φ(√n|h|/(2σ)) − 1] → 2 as n → ∞.   (2.39)

In particular, if θ is restricted to integers and |h| = 1, then

2 − D_n = 4[1 − Φ(√n/(2σ))] = 2R_0(n),   (2.40)

which is the asymptotic (1.3) noted earlier.

An almost UMVU property of d.
It has been shown by Khan [5] that, under the squared-error loss function, d = [X̄_n] is admissible in the class of integer-valued estimators Ᏽ, while Ghosh and Meeden [2] proved its admissibility in Ᏽ under a more general loss function. It has been further observed by Khan [6] that d is the best invariant estimator in the class Ᏽ and that it is admissible under a generalized version of the 0-1 loss function. Here, we show that d is almost like a UMVU estimator in Ᏽ. Thus, we conclude the discussion of Hammersley's estimator with the following interesting observation. It should be noted that the statistic d_1 = X̄_n continues to be sufficient for θ ∈ ᐆ = {0, ±1, ±2, ...
} although it is not complete, for the obvious reason that d − d_1 is an unbiased estimator of 0 but d − d_1 ≠ 0 with positive probability. Moreover, σ²_d, σ²_{d_1}, and cov(d, d_1) do not depend on θ. This fact was exploited in Khan [4] to determine the best unbiased estimator d_α = αd + (1 − α)d_1 (0 ≤ α ≤ 1), which strictly dominates d and d_1. However, in view of sufficiency, one considers estimators of the form t = t(X̄_n), and we further restrict this class to integer-valued unbiased estimators of the form T = f(X̄_n), where f is an integer-valued function on the real line. Also, it is logical to assume that f(i) = i for i ∈ ᐆ and that f is nondecreasing. Under these conditions, we will show that σ²_f(θ) ≥ σ²_d(θ) for all θ and for any unbiased f(X̄_n). Thus, d is the best unbiased estimator in this class, and this makes it almost like a UMVU estimator. The proof of this elementary fact now follows. Let K = (n/(2πσ²))^{1/2}, and let f = f(X̄_n), where f is integer-valued with the above properties. It then follows from the assumed unbiasedness of f that

σ²_f(θ) = K ∫ [f(θ + u) − θ]² exp(−nu²/(2σ²)) du.   (2.41)

We note that |f(i + u) − i| is minimized for all i ∈ ᐆ simultaneously if and only if f(i + u) = i for |u| < 1/2, that is, if and only if f(x) = [x] = d. Hence, it follows from the above that

σ²_f(θ) ≥ σ²_d(θ) for all θ ∈ ᐆ.   (2.45)

Thus, d is the best unbiased estimator in the class of integer-valued estimators which are functions of the sufficient statistic.
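The pointwise-minimization condition above can be illustrated numerically. The sketch below (Python; the shift δ and sample size are hypothetical choices, and the shifted rules f_δ are generally not unbiased, so this compares risks rather than variances within the unbiased class) evaluates E_θ[f_δ(X̄_n) − θ]² for integer θ, where f_δ(x) = [x − δ] rounds after a shift δ; the nearest-integer rule δ = 0, which is d, gives the smallest risk.

```python
import math

def Phi(x):
    """Standard normal distribution function."""
    return 0.5 * math.erfc(-x / math.sqrt(2))

def shifted_risk(delta, n, sigma=1.0, J=50):
    """E_theta (f_delta(X_bar_n) - theta)^2 for integer theta, where
    f_delta(x) rounds x - delta to the nearest integer."""
    r = math.sqrt(n) / sigma
    total = 0.0
    for j in range(-J, J + 1):
        # P(f_delta = theta + j) from the N(theta, sigma^2/n) law of X_bar_n
        p = Phi((j + 0.5 + delta) * r) - Phi((j - 0.5 + delta) * r)
        total += j * j * p
    return total

n = 9
risks = {d: shifted_risk(d, n) for d in (0.0, 0.1, 0.2, 0.3)}
print(risks)
```

The risk increases monotonically as the rounding threshold is shifted away from 1/2, in line with the condition f(i + u) = i for |u| < 1/2 characterizing d.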
