© Hindawi Publishing Corp.

A REGRESSION CHARACTERIZATION OF INVERSE GAUSSIAN DISTRIBUTIONS AND APPLICATION TO EDF GOODNESS-OF-FIT TESTS

We give a new characterization of inverse Gaussian distributions using the regression of a suitable statistic based on a given random sample. A corollary of this result is a characterization of inverse Gaussian distributions based on a conditional joint density function of the sample. The application of this corollary as a transformation in the procedure to construct EDF (empirical distribution function) goodness-of-fit tests for inverse Gaussian distributions is also studied.


1. Introduction.
A distribution is an inverse Gaussian distribution with parameters m > 0 and λ > 0, denoted IG(m, λ), if it has the density function

f(x; m, λ) = (λ/(2πx^3))^{1/2} exp{−λ(x − m)^2/(2m^2 x)},  x > 0, (1.1)

and f(x; m, λ) = 0 otherwise. Its characteristic function is

ϕ(t) = exp{(λ/m)[1 − (1 − 2im^2 t/λ)^{1/2}]}. (1.2)
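As a numerical sanity check (an illustration added here, not part of the original note), the IG(m, λ) density above can be evaluated directly. The sketch below, with hypothetical parameters m = 1 and λ = 2, verifies by a trapezoidal rule that the density integrates to approximately 1:

```python
import math

def ig_pdf(x, m, lam):
    """IG(m, lam) density: sqrt(lam/(2*pi*x^3)) * exp(-lam*(x-m)^2 / (2*m^2*x))."""
    if x <= 0:
        return 0.0
    return math.sqrt(lam / (2.0 * math.pi * x ** 3)) * \
        math.exp(-lam * (x - m) ** 2 / (2.0 * m * m * x))

# Trapezoidal check on (0, 50] that the density integrates to (approximately) 1.
m, lam, h = 1.0, 2.0, 0.001
vals = [ig_pdf(i * h, m, lam) for i in range(1, 50001)]
area = sum(h * (a + b) / 2.0 for a, b in zip(vals, vals[1:]))
print(round(area, 3))  # close to 1.0
```

The upper integration limit 50 is an assumption that suffices for these parameters, since the IG(1, 2) tail beyond it is negligible.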
Then the statistics Y = ∑_{j=1}^{n} X_j and Z = ∑_{j=1}^{n} X_j^{-1} − n^2 Y^{-1} are jointly complete and sufficient for m and λ. Y and Z are independently distributed, Y has an IG(nm, n^2 λ) distribution, and λZ has a chi-square distribution with n − 1 degrees of freedom. Khatri [4] gave a characterization of the inverse Gaussian distributions based on the independence of Y and Z; Seshadri [9] later gave several characterizations of the inverse Gaussian distributions based on the constant regression of several different statistics given Y. In this note, we give a characterization of the inverse Gaussian distributions based on the regression of a statistic given Y and Z. A corollary of this result is a characterization of the inverse Gaussian distributions based on the conditional joint density function of X_1, ..., X_{n−2} given Y and Z. The result of this corollary can be used as a transformation in the procedure to construct EDF (empirical distribution function) goodness-of-fit tests for inverse Gaussian distributions.
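A small simulation (added here for illustration, not from the original note) can check the stated facts about Y and Z. It uses the Michael–Schucany–Haas sampler for IG variates and hypothetical parameters m = 1, λ = 2, n = 10, and verifies that the empirical means of Y and λZ are close to nm = 10 and n − 1 = 9, respectively:

```python
import math
import random

def rand_ig(m, lam):
    """Michael-Schucany-Haas sampler for one IG(m, lam) variate."""
    y = random.gauss(0.0, 1.0) ** 2
    x = m + m * m * y / (2 * lam) - (m / (2 * lam)) * math.sqrt(4 * m * lam * y + (m * y) ** 2)
    return x if random.random() <= m / (m + x) else m * m / x

random.seed(0)
m, lam, n, reps = 1.0, 2.0, 10, 20000
y_sum, lz_sum = 0.0, 0.0
for _ in range(reps):
    xs = [rand_ig(m, lam) for _ in range(n)]
    y = sum(xs)
    z = sum(1.0 / x for x in xs) - n * n / y  # Z as defined in the text
    y_sum += y
    lz_sum += lam * z

mean_y = y_sum / reps    # should be near E[Y] = n*m = 10 (Y ~ IG(nm, n^2*lam))
mean_lz = lz_sum / reps  # should be near n - 1 = 9 (lam*Z ~ chi-square, n-1 d.f.)
print(round(mean_y, 2), round(mean_lz, 2))
```

The function name `rand_ig` and the Monte Carlo sizes are illustrative choices; the distributional facts being checked are those stated in the paragraph above.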

2. Characterization results. The conditional joint density function of X_1, ..., X_{n−2}, given Y = y > 0 and Z = z > 0, is given by (2.1), and the corresponding conditional density function of X_1 is given by (2.2).
From (2.1), the UMVUE of the density function at a point x_1 > 0 is given, for y, z > 0, as in Chhikara and Folks [1]. On the other hand, this expectation can be computed using the conditional density function of X_1 given by (2.2), with the integral taken over the support of this conditional density function, where (2.5) and (2.6) hold. Using integration by parts, we obtain (2.7) and (2.8). In the remainder of this section, we construct a characterization of inverse Gaussian distributions based on the regression (2.8).
If X has an inverse Gaussian distribution with characteristic function ϕ(t) given by (1.2), then taking the logarithm of this characteristic function, followed by three successive differentiations and several simplifications, shows that ϕ(t) satisfies the differential equation (2.9). Conversely, if ϕ(t) is the characteristic function of a random variable X with finite E[X^{-1}] and E[X^3] that is a solution of the differential equation (2.9), then, by the continuity of a characteristic function and by reversing the procedure used to obtain (2.9), this characteristic function is (1.2). Hence, the following result is obtained.
Lemma 2.1. Let X be a nonnegative random variable with a nondegenerate distribution F and with finite E[X^{-1}] and E[X^3]. Assume that E[X] = m and Var(X) = m^3/λ for some positive numbers m and λ. Then F is an IG(m, λ) if and only if its characteristic function is a solution of the differential equation (2.9).
The following theorem is the main result of this note.
Theorem 2.2. Let X_j, j = 1, ..., n, n ≥ 2, be a random sample of n nonnegative random variables from a nondegenerate distribution F with finite E[X] and Var(X). Then F is an inverse Gaussian distribution if and only if the regression (2.8) holds.
Proof. We only need to show that if (2.8) holds, then F is an inverse Gaussian distribution.
From (2.8), we obtain (2.10). From the fact that X is a random variable with the relevant quantities finite for any constant T such that −T < t, where ϕ is the characteristic function of X (Khatri [4]), we obtain (2.12), where F^{*(k)} denotes the k-fold convolution of F. Substituting (2.12) into (2.10), simplifying, and differentiating three times yields the differential equation (2.9). Then, by Lemma 2.1, F is an inverse Gaussian distribution.
The following characterization of inverse Gaussian distributions, based on (2.1) or (2.2), can be obtained directly from Theorem 2.2. This result will be used as a transformation in the procedure to construct EDF goodness-of-fit tests for inverse Gaussian distributions; its application is discussed in Section 3.
Corollary 2.3. Let X_j, j = 1, ..., n, n ≥ 2, be a random sample of nonnegative random variables from a nondegenerate distribution F with finite E[X^{-1}] and E[X^3]. Then F is an inverse Gaussian distribution if and only if the conditional joint density function of X_1, ..., X_{n−2}, given Y = y > 0 and Z = z > 0, is given by (2.1).
3. Application to goodness-of-fit tests. Let X_j, j = 1, ..., n, n ≥ 2, be a sample of nonnegative random variables from a nondegenerate distribution F with finite E[X^{-1}] and E[X^3]. To test whether F is an inverse Gaussian distribution, by Corollary 2.3 it suffices to test the equivalent simple hypothesis that the conditional joint density of X_1, ..., X_{n−2}, given Y = y > 0 and Z = z > 0, is (1.1). The results of Rosenblatt [8] and then of Chhikara and Folks [1] are used to transform the X sample into a U sample from a distribution over the interval (0, 1); the equivalent hypothesis is then that the U sample comes from the uniform distribution over (0, 1). Any EDF test statistic can then be used (D'Agostino and Stephens [2]). Nguyen and Dinh [5] used this transformation and studied the first exact EDF goodness-of-fit tests for inverse Gaussian distributions. In their study, for some alternative distributions and with moderate sample sizes, the exact EDF goodness-of-fit tests based on this transformation perform well compared with the other, approximate EDF goodness-of-fit tests. Other goodness-of-fit tests for inverse Gaussian distributions using EDF statistics were given by Edgeman et al. [3], O'Reilly and Rueda [6], and Pavur et al. [7]. For detailed references, see Seshadri [10].
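The exact procedure relies on the conditional distributions in (2.1)–(2.2), which Nguyen and Dinh [5] spell out. As a simplified stand-in (an assumption-laden sketch, not the paper's exact conditional transformation), the code below illustrates the general EDF recipe for a fully specified null hypothesis IG(m0, λ0) with known, hypothetical parameters m0 = 1 and λ0 = 2: transform the sample through the IG distribution function (in the Chhikara–Folks form) and compute the Kolmogorov–Smirnov statistic of the transformed values against Uniform(0, 1):

```python
import math
import random

def rand_ig(m, lam):
    """Michael-Schucany-Haas sampler for one IG(m, lam) variate (illustrative)."""
    y = random.gauss(0.0, 1.0) ** 2
    x = m + m * m * y / (2 * lam) - (m / (2 * lam)) * math.sqrt(4 * m * lam * y + (m * y) ** 2)
    return x if random.random() <= m / (m + x) else m * m / x

def norm_cdf(t):
    """Standard normal distribution function via math.erf."""
    return 0.5 * (1.0 + math.erf(t / math.sqrt(2.0)))

def ig_cdf(x, m, lam):
    """IG(m, lam) distribution function in the Chhikara-Folks form."""
    if x <= 0:
        return 0.0
    a = math.sqrt(lam / x)
    return norm_cdf(a * (x / m - 1.0)) + math.exp(2.0 * lam / m) * norm_cdf(-a * (x / m + 1.0))

def ks_uniform(u):
    """Kolmogorov-Smirnov distance between the EDF of u and Uniform(0,1)."""
    u = sorted(u)
    n = len(u)
    return max(max((i + 1) / n - ui, ui - i / n) for i, ui in enumerate(u))

random.seed(1)
m0, lam0 = 1.0, 2.0                        # hypothetical null parameters
sample = [rand_ig(m0, lam0) for _ in range(200)]
u = [ig_cdf(x, m0, lam0) for x in sample]  # probability integral transform
D = ks_uniform(u)                          # small D is consistent with the null
print(round(D, 3))
```

Since the sample is drawn from the null distribution, D should fall well below the usual critical values; other EDF statistics (Cramér–von Mises, Anderson–Darling) could be substituted in place of `ks_uniform`.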
