Journal of Probability and Statistics, Hindawi, Volume 2017, Article ID 4793702, doi:10.1155/2017/4793702

Research Article

Upper Bound of the Generalized p Value for the Population Variances of Lognormal Distributions with Known Coefficients of Variation

Rada Somkhuean, Sa-aat Niwitpong, and Suparat Niwitpong
Department of Applied Statistics, Faculty of Applied Science, King Mongkut’s University of Technology North Bangkok, Bangkok 10800, Thailand

Academic Editor: Shein-Chung Chow

Received 27 September 2016; Accepted 15 December 2016; Published 16 January 2017

Copyright © 2017 Rada Somkhuean et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

This paper presents an upper bound for each of the generalized p values for testing (1) one population variance, (2) the difference between two population variances, and (3) the ratio of two population variances of lognormal distributions when the coefficients of variation are known. For each of the proposed generalized p values, we derive a closed-form expression for the upper bound. Numerical computations illustrate the theoretical results.

Funding: King Mongkut’s University of Technology North Bangkok (Grant no. KMUTNB-GOV-59-36).
1. Introduction

The problem of statistical inference for population variances has been widely discussed; see, for example, Singh et al., Agrawal and Sthapit, Arcos Cebrián and Rueda García, and Arcos et al. Kadilar and Cingi proposed ratio estimators for the population variance in simple and stratified random sampling. Cojbasic and Tomovic proposed bootstrap methods for constructing confidence intervals for the population variance of one sample and the difference of the variances of two samples. Cojbasic and Loncar proposed a one-sided bootstrap method for constructing confidence intervals for the population variance of skewed distributions. Rajic et al. proposed a new method, based on t-statistics and the bootstrap, for testing one population variance and the difference of the variances of two samples. Singh and Malik proposed a family of estimators for the population variance using auxiliary attributes. In this paper, we use the generalized p values, proposed by Tsui and Weerahandi and Weerahandi, to construct new generalized p values for testing (1) one population variance, (2) the difference between two population variances, and (3) the ratio of two population variances of lognormal distributions when the coefficients of variation are known. This problem is analogous to the Behrens-Fisher problem; see, for example, Tang and Tsui and Somkhuean et al. In Section 2, we outline the basic steps for constructing the generalized p value for the hypothesis-testing problem considered here. The derivations of the upper bounds are presented in Section 3, the numerical results in Section 4, and the conclusion in Section 5.

2. Generalized $p$ Values

The concept of the generalized $p$ value was introduced by Tsui and Weerahandi and Weerahandi. We briefly review it as follows.

Let $X$ be a random variable with density function $f_X(x \mid \xi)$, where $\xi = (\theta, \delta)$, $\theta$ is the parameter of interest, and $\delta$ is a nuisance parameter.

Suppose we want to test (1) $H_0 : \theta \le \theta_0$ vs. $H_1 : \theta > \theta_0$, where $\theta_0$ is a specified quantity. Let $x$ be a particular observed sample. The generalized test variable $T(X, x, \xi)$ is required to satisfy the following conditions:

(A1) For fixed $x$ and $\xi = (\theta, \delta)$, the distribution of $T(X, x, \xi)$ is free of the nuisance parameter $\delta$.

(A2) $t_{\mathrm{obs}} = T(x, x, \xi)$ is free of any unknown parameter.

(A3) $T(X, x, \xi)$ is either stochastically increasing or decreasing in $\theta$ for any given $t$ and fixed values of $x$ and $\delta$.

Under the above conditions, if $T(X, x, \xi)$ is a stochastically increasing test variable, then for the one-sided hypothesis given above the data-based extreme region $C_x$ has the form
(2) $C_x = \{X : T(X, x, \xi) \ge T(x, x, \xi)\}$.
Given the observed sample $x$, the generalized $p$ value is defined as
(3) $p(x) = \sup_{\theta \in H_0} P(X \in C_x \mid \theta) = \sup_{\theta \le \theta_0} P(T(X, x, \xi) \ge T(x, x, \xi))$.
For further details and for several applications based on the generalized $p$ value, we refer to the book by Weerahandi.

Moreover, Tsui and Weerahandi used the generalized $p$ value $p(x)$ for the Behrens-Fisher problem of testing the difference of the means of two independent normal distributions with possibly unequal variances. Later, Tang and Tsui extended the work of Weerahandi and Gamage and Weerahandi to derive a formula for an upper bound $r^{\ast}$ of the generalized $p$ value $p(x)$, in the sense that (see also, e.g., Kabaila and Lloyd)
(4) $P(p(X) \le r) \le r^{\ast}$.

In this paper, we extend the work of Tang and Tsui to find an upper bound for each of the generalized $p$ values $p(x)$ for testing one population variance, the difference between two population variances, and the ratio of two population variances of lognormal distributions with known coefficients of variation.

3. Main Results for the Population Variance of Lognormal Distributions with Known Coefficients of Variation

Let $X_{ij}$, $i = 1, 2$, $j = 1, 2, \ldots, n_i$, be random samples from lognormal distributions and let $Y_{ij} = \ln X_{ij} \sim N(\mu_i, \sigma_i^2)$, where $\mu_i$ and $\sigma_i^2$ denote the mean and variance of $Y_{ij}$, respectively. In particular, the mean, variance, and coefficient of variation of the lognormal distribution are, respectively,
(5) $E(X_i) = \exp(\mu_i + \sigma_i^2/2)$, $\operatorname{var}(X_i) = \exp(2\mu_i + \sigma_i^2)(\exp(\sigma_i^2) - 1)$, $CV_i = \sqrt{\exp(\sigma_i^2) - 1}$,
where $CV_i$ denotes the coefficient of variation of $X_i$, computed as $\sqrt{\operatorname{var}(X_i)}/E(X_i)$.
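As a quick numerical sanity check of (5), the moment formulas can be verified by simulation. A minimal sketch in Python (the paper's own computations use R); the values $\mu = 0$ and $\sigma^2 = 0.25$ are illustrative, not taken from the paper:

```python
# Monte Carlo check of the lognormal moment formulas in (5).
# mu and sigma2 are illustrative values, not from the paper.
import numpy as np

mu, sigma2 = 0.0, 0.25
rng = np.random.default_rng(0)
x = np.exp(rng.normal(mu, np.sqrt(sigma2), size=1_000_000))

mean_theory = np.exp(mu + sigma2 / 2)                        # E[X]
var_theory = np.exp(2 * mu + sigma2) * (np.exp(sigma2) - 1)  # var(X)
cv_theory = np.sqrt(np.exp(sigma2) - 1)                      # CV = sd/mean

print(x.mean(), mean_theory)          # both close to 1.133
print(x.var(), var_theory)
print(x.std() / x.mean(), cv_theory)
```

Note that the coefficient of variation depends on $\sigma^2$ only, which is what makes the "known coefficient of variation" assumption equivalent to a known $\sigma_i^2$.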

It is easy to see that
(6) $\tau_i = CV_i = \sqrt{\exp(\sigma_i^2) - 1}$, $\tau_i^2 = \exp(\sigma_i^2) - 1$, $\tau_i^2 + 1 = \exp(\sigma_i^2)$, $\sigma_i^2 = \ln(\tau_i^2 + 1)$,
and the parameters of interest are
(7)
$\theta_1 = \operatorname{var}(X_1) = \exp(2\mu_1 + \ln(\tau_1^2 + 1))\,\tau_1^2 = k_1 \exp(2\mu_1 + l_1)$, where $k_1 = \tau_1^2$ and $l_1 = \ln(\tau_1^2 + 1)$;
$\theta_2 = \operatorname{var}(X_1) - \operatorname{var}(X_2) = k_1 \exp(2\mu_1 + l_1) - k_2 \exp(2\mu_2 + l_2)$, where $k_i = \tau_i^2$ and $l_i = \ln(\tau_i^2 + 1)$, $i = 1, 2$;
$\theta_3 = \operatorname{var}(X_1)/\operatorname{var}(X_2) = k_1 \exp(2\mu_1 + l_1)/(k_2 \exp(2\mu_2 + l_2))$.
For testing the null hypotheses $H_{01} : \theta_1 \le \theta_{01}$ vs. $H_{a1} : \theta_1 > \theta_{01}$, $H_{02} : \theta_2 \le \theta_{02}$ vs. $H_{a2} : \theta_2 > \theta_{02}$, and $H_{03} : \theta_3 \le \theta_{03}$ vs. $H_{a3} : \theta_3 > \theta_{03}$, the sufficient statistics are $(\bar{Y}_i, S_i^2)$, where
(8) $\bar{Y}_i = \frac{1}{n_i}\sum_{j=1}^{n_i} Y_{ij}$, $S_i^2 = \frac{1}{n_i - 1}\sum_{j=1}^{n_i} (Y_{ij} - \bar{Y}_i)^2$.
The following statistics are independent:
(9) $\bar{Y}_i \sim N(\mu_i, \sigma_i^2/n_i)$, $U_i = \frac{(n_i - 1)S_i^2}{\sigma_i^2} \sim \chi^2_{n_i - 1}$.
We denote $D = (X_{11}, X_{12}, \ldots, X_{1n_1})$, $d = (x_{11}, x_{12}, \ldots, x_{1n_1})$, $Q = (X_{11}, \ldots, X_{1n_1}, X_{21}, \ldots, X_{2n_2})$, and $q = (x_{11}, \ldots, x_{1n_1}, x_{21}, \ldots, x_{2n_2})$. Here $d$ and $q$ are the vectors of observed samples. Let $(\bar{y}_i, s_i^2)$ be the observed value of the sufficient statistic $(\bar{Y}_i, S_i^2)$. Following Tang and Tsui and Somkhuean et al., under repeated sampling $(\bar{y}_i, s_i^2)$ follows the same probability distributions as in (9).

Case 1. The hypothesis to be tested is
(10) $H_{01} : \theta_1 \le \theta_{01}$ vs. $H_{a1} : \theta_1 > \theta_{01}$.
The parameter is the population variance of a lognormal distribution with known coefficient of variation,
(11) $\theta_1 = k_1 \exp(2\mu_1 + l_1)$.
The generalized test variable for $\theta_1$ is
(12) $T_1(X_1, x_1, \theta_1) = k_1 \exp(2\mu_1 + l_1) = k_1 \exp\!\left(2\left[\bar{y}_1 - \frac{\bar{Y}_1 - \mu_1}{S_1/\sqrt{n_1}} \cdot \frac{s_1}{\sqrt{n_1}}\right] + l_1\right) = k_1 \exp\!\left(2\left[\bar{y}_1 - \frac{Z}{\sqrt{U_1/(n_1 - 1)}} \cdot \frac{s_1}{\sqrt{n_1}}\right] + l_1\right)$,
where $Z = \sqrt{n_1}(\bar{Y}_1 - \mu_1)/\sigma_1 \sim N(0, 1)$ and $U_1 = (n_1 - 1)S_1^2/\sigma_1^2 \sim \chi^2_{n_1 - 1}$.

It is easy to see that $T_1(X_1, x_1, \theta_1)$ in (12) satisfies conditions (A1)-(A3) in Section 2.

The generalized $p$ value $p(d)$ is defined, under the null hypothesis $H_{01}$, as
(13) $p(d) = \sup_{H_{01}} P(T_1(X_1, x_1, \theta_1) \ge T_1(x_1, x_1, \theta_1)) = P(T_1(X_1, x_1, \theta_1) \ge \theta_{01})$.
Following (13), the generalized $p$ value for (10) is
(14)
$p(d) = P\!\left(k_1 \exp\!\left(2\left[\bar{y}_1 - \frac{Z}{\sqrt{U_1/(n_1-1)}} \cdot \frac{s_1}{\sqrt{n_1}}\right] + l_1\right) \ge \theta_{01}\right)$
$= P\!\left(\ln k_1 + 2\left[\bar{y}_1 - \frac{Z}{\sqrt{U_1/(n_1-1)}} \cdot \frac{s_1}{\sqrt{n_1}}\right] + l_1 \ge \ln \theta_{01}\right)$
$= P\!\left(\bar{y}_1 - \frac{Z}{\sqrt{U_1/(n_1-1)}} \cdot \frac{s_1}{\sqrt{n_1}} \ge \frac{1}{2}\left[\ln\frac{\theta_{01}}{k_1} - l_1\right]\right)$
$= P\!\left(Z \le \frac{\bar{y}_1 - \frac{1}{2}[\ln(\theta_{01}/k_1) - l_1]}{s_1/\sqrt{n_1}}\sqrt{\frac{U_1}{n_1-1}}\right)$
$= E_{U_1}\!\left[\Phi\!\left(\frac{\bar{y}_1 - \frac{1}{2}[\ln(\theta_{01}/k_1) - l_1]}{s_1/\sqrt{n_1}}\sqrt{\frac{U_1}{n_1-1}}\right)\right]$,
where $E_{U_1}(\cdot)$ is the expectation operator with respect to $U_1 = (n_1-1)S_1^2/\sigma_1^2 \sim \chi^2_{n_1-1}$ and $\Phi(\cdot)$ is the cdf of the standard normal distribution.
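The expectation in (14) has no closed form but is easy to approximate by Monte Carlo. A sketch in Python (the paper's computations use R); the observed values $\bar{y}_1$, $s_1$ and the null value $\theta_{01}$ below are made up for illustration:

```python
# Monte Carlo evaluation of the generalized p value p(d) in (14):
# p(d) = E_{U1}[ Phi( (ybar1 - (1/2)[ln(theta01/k1) - l1]) / (s1/sqrt(n1))
#                     * sqrt(U1/(n1-1)) ) ],  U1 ~ chi-square(n1 - 1).
# All observed values passed below are hypothetical.
import numpy as np
from scipy.stats import norm, chi2

def gen_p_value(ybar1, s1, n1, theta01, tau1, n_draws=200_000, seed=1):
    k1 = tau1**2                   # k1 = tau^2
    l1 = np.log(tau1**2 + 1.0)     # l1 = ln(tau^2 + 1)
    a = 0.5 * (np.log(theta01 / k1) - l1)
    u1 = chi2.rvs(df=n1 - 1, size=n_draws, random_state=seed)
    return norm.cdf((ybar1 - a) / (s1 / np.sqrt(n1))
                    * np.sqrt(u1 / (n1 - 1))).mean()

p = gen_p_value(ybar1=0.1, s1=0.9, n1=10, theta01=2.0, tau1=1.0)
print(p)
```

By construction, increasing $\theta_{01}$ (with everything else fixed) shrinks the argument of $\Phi$ and hence the estimate.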

Theorem 1.

If $f(u_1) = \Phi\!\left(T_{n_1-1}\sqrt{u_1/(n_1-1)} - m\right)$ with $T_{n_1-1} \le 0$, where $T_{n_1-1}$ denotes a random variable having the $t$-distribution with $n_1 - 1$ degrees of freedom, then $f(u_1)$ is a convex function of $u_1$.

Proof.

See Appendix.

Theorem 2.

For $0 < r < 0.5$, the upper bound of $P(p(d) \le r)$ in (14) takes the form $\Psi_{n_1-1}(\Phi^{-1}(r) + m)$, where $m = \frac{1}{2}[\ln(\theta_{01}/k_1) - l_1]/(\sigma_1/\sqrt{n_1})$, $\Psi_{n_1-1}(\cdot)$ is the cdf of the $t$-distribution with $n_1 - 1$ degrees of freedom, and $\Phi(\cdot)$ is the cdf of the standard normal distribution.

Proof.

See Appendix.
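Theorem 2's bound is directly computable. The sketch below (in Python; the paper's computations use R) evaluates $\Psi_{n_1-1}(\Phi^{-1}(r) + m)$ for the first entry of Table 1, with $\sigma_1^2 = \ln(\tau^2 + 1)$ fixed by the known coefficient of variation:

```python
# Upper bound of P(p(d) <= r) from Theorem 2: Psi_{n1-1}(Phi^{-1}(r) + m),
# evaluated for the Table 1 setting n1 = 10, theta01 = 2.001, tau = 1.
import numpy as np
from scipy.stats import norm, t

def theorem2_bound(n1, theta01, tau, r):
    k1 = tau**2
    l1 = np.log(tau**2 + 1.0)
    sigma1 = np.sqrt(np.log(tau**2 + 1.0))   # sigma^2 = ln(tau^2 + 1)
    m = 0.5 * (np.log(theta01 / k1) - l1) / (sigma1 / np.sqrt(n1))
    return m, t.cdf(norm.ppf(r) + m, df=n1 - 1)

m, ub = theorem2_bound(n1=10, theta01=2.001, tau=1.0, r=0.01)
print(m)    # ~0.000949, the m column of Table 1
print(ub)   # ~0.02254, the tabled upper bound
```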

Case 2. The hypothesis to be tested is
(15) $H_{02} : \theta_2 \le \theta_{02}$ vs. $H_{a2} : \theta_2 > \theta_{02}$.
The parameter is the difference between the two population variances,
(16) $\theta_2 = k_1 \exp(2\mu_1 + l_1) - k_2 \exp(2\mu_2 + l_2)$.
Without loss of generality, suppose $\theta_2 = \theta_{02} = 0$; then $k_1 \exp(2\mu_1 + l_1) = k_2 \exp(2\mu_2 + l_2)$, which is equivalent to
(17) $2(\mu_1 - \mu_2) + \ln\frac{k_1}{k_2} + l_1 - l_2 = 2(\mu_1 - \mu_2) + D_1 = 0$, where $D_1 = \ln(k_1/k_2) + l_1 - l_2$.
The generalized test variable for $\theta_2$ is
(18)
$T_2(X_1, X_2, x_1, x_2, \theta_2) = 2\left[\bar{y}_1 - \bar{y}_2 + \frac{(\bar{Y}_2 - \bar{Y}_1) - (\mu_2 - \mu_1)}{\sqrt{\sigma_1^2/n_1 + \sigma_2^2/n_2}}\sqrt{\frac{\sigma_1^2 s_1^2}{n_1 S_1^2} + \frac{\sigma_2^2 s_2^2}{n_2 S_2^2}}\right] + D_1$
$= 2\left[\bar{y}_1 - \bar{y}_2 + \frac{Z}{\sqrt{U_1 + U_2}}\sqrt{\frac{((n_1-1)/n_1)s_1^2}{U_1/(U_1+U_2)} + \frac{((n_2-1)/n_2)s_2^2}{U_2/(U_1+U_2)}}\right] + D_1$
$= 2\left[\bar{y}_1 - \bar{y}_2 + \frac{1}{\sqrt{n_1+n_2-2}} \cdot \frac{Z}{\sqrt{(U_1+U_2)/(n_1+n_2-2)}}\sqrt{\frac{((n_1-1)/n_1)s_1^2}{U_1/(U_1+U_2)} + \frac{((n_2-1)/n_2)s_2^2}{U_2/(U_1+U_2)}}\right] + D_1$
$= 2\left[\bar{y}_1 - \bar{y}_2 + \frac{T_{n_1+n_2-2}}{\sqrt{n_1+n_2-2}}\sqrt{\frac{((n_1-1)/n_1)s_1^2}{B} + \frac{((n_2-1)/n_2)s_2^2}{1-B}}\right] + D_1$,
where $T_{n_1+n_2-2} = Z/\sqrt{(U_1+U_2)/(n_1+n_2-2)}$ has the $t$-distribution with $n_1+n_2-2$ degrees of freedom and $B = U_1/(U_1+U_2) \sim \mathrm{Beta}((n_1-1)/2, (n_2-1)/2)$. It is easy to see that $T_2(X_1, X_2, x_1, x_2, \theta_2)$ in (18) satisfies conditions (A1)-(A3) in Section 2.

The generalized $p$ value $p(q)$ is defined, under the null hypothesis $H_{02}$, as
(19) $p(q) = \sup_{H_{02}} P(T_2(X_1, X_2, x_1, x_2, \theta_2) \ge T_2(x_1, x_2, x_1, x_2, \theta_2)) = P(T_2(X_1, X_2, x_1, x_2, \theta_2) \ge 0)$.
Following (19), the generalized $p$ value for (15) is
(20)
$p(q) = P\!\left(2\left[\bar{y}_1 - \bar{y}_2 + \frac{T_{n_1+n_2-2}}{\sqrt{n_1+n_2-2}}\sqrt{\frac{((n_1-1)/n_1)s_1^2}{B} + \frac{((n_2-1)/n_2)s_2^2}{1-B}}\right] + D_1 \ge 0\right)$
$= P\!\left(\frac{T_{n_1+n_2-2}}{\sqrt{n_1+n_2-2}}\sqrt{\frac{((n_1-1)/n_1)s_1^2}{B} + \frac{((n_2-1)/n_2)s_2^2}{1-B}} \ge \bar{y}_2 - \bar{y}_1 - \frac{D_1}{2}\right)$
$= P\!\left(T_{n_1+n_2-2} \le \frac{(\bar{y}_1 - \bar{y}_2 + D_1/2)\sqrt{n_1+n_2-2}}{\sqrt{((n_1-1)/n_1)s_1^2/B + ((n_2-1)/n_2)s_2^2/(1-B)}}\right)$ (by the symmetry of the $t$-distribution)
$= E_B\!\left[\Psi_{n_1+n_2-2}\!\left(\frac{(\bar{y}_1 - \bar{y}_2 + D_1/2)\sqrt{n_1+n_2-2}}{\sqrt{((n_1-1)/n_1)s_1^2/B + ((n_2-1)/n_2)s_2^2/(1-B)}}\right)\right]$,
where $\Psi_{n_1+n_2-2}(\cdot)$ is the cdf of the $t$-distribution with $n_1+n_2-2$ degrees of freedom and $E_B(\cdot)$ is the expectation operator with respect to $B \sim \mathrm{Beta}((n_1-1)/2, (n_2-1)/2)$.
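The Beta-mixture representation in (20) can likewise be approximated by Monte Carlo. A sketch with made-up observed values (the paper's computations use R):

```python
# Monte Carlo evaluation of p(q) in (20):
# p(q) = E_B[ Psi_{n1+n2-2}( (ybar1 - ybar2 + D1/2) * sqrt(n1+n2-2)
#           / sqrt(((n1-1)/n1) s1^2 / B + ((n2-1)/n2) s2^2 / (1-B)) ) ],
# with B ~ Beta((n1-1)/2, (n2-1)/2).  Observed values are hypothetical.
import numpy as np
from scipy.stats import beta, t

def gen_p_value_diff(ybar1, ybar2, s1, s2, n1, n2, tau1, tau2,
                     n_draws=200_000, seed=2):
    k1, k2 = tau1**2, tau2**2
    l1, l2 = np.log(tau1**2 + 1.0), np.log(tau2**2 + 1.0)
    d1 = np.log(k1 / k2) + l1 - l2
    b = beta.rvs((n1 - 1) / 2, (n2 - 1) / 2, size=n_draws, random_state=seed)
    num = (ybar1 - ybar2 + d1 / 2) * np.sqrt(n1 + n2 - 2)
    den = np.sqrt((n1 - 1) / n1 * s1**2 / b + (n2 - 1) / n2 * s2**2 / (1 - b))
    return t.cdf(num / den, df=n1 + n2 - 2).mean()

p = gen_p_value_diff(ybar1=0.2, ybar2=0.0, s1=0.9, s2=0.8,
                     n1=10, n2=12, tau1=1.0, tau2=1.0)
print(p)
```

In the fully symmetric case ($\bar{y}_1 = \bar{y}_2$, $\tau_1 = \tau_2$, so $D_1 = 0$), the numerator vanishes and every draw contributes $\Psi(0) = 0.5$, a convenient sanity check.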

Theorem 3.

If $f(b) = \Psi_\nu\!\left((z + w)\sqrt{1/(k_1/b + k_2/(1-b))}\right)$, then for fixed $z \le 0$ and $w \le 0$, $f(b)$ is a convex function of $b$.

Proof.

See Appendix.

Theorem 4.

If $g(t) = P\!\left(\Psi_{n_1+n_2-2}\!\left((z + w)\sqrt{1/(tC_{n_1-1}/(n_1-1) + (1-t)C_{n_2-1}/(n_2-1))}\right) \le r\right)$, where $z$, $C_{n_1-1}$, and $C_{n_2-1}$ are independent random variables such that $z \sim N(0,1)$, $C_{n_1-1} \sim \chi^2_{n_1-1}$, and $C_{n_2-1} \sim \chi^2_{n_2-1}$, then $g(t)$ is a convex function of $t$.

Proof.

See Appendix.

Theorem 5.

For $0 < r < 0.5$, the upper bound of $P(p(q) \le r)$ is $\Psi_{\min(n_1-1, n_2-1)}(\Psi_{n_1+n_2-2}^{-1}(r) - w)$, where $w = (D_1/2)/\sqrt{\sigma_1^2/n_1 + \sigma_2^2/n_2}$, $\Psi_{n_1+n_2-2}(\cdot)$ is the cdf of the $t$-distribution with $n_1+n_2-2$ degrees of freedom, and $\Psi_{n_1+n_2-2}^{-1}(\cdot)$ is its inverse.

Proof.

See Appendix.
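The Theorem 5 bound involves only two $t$-distribution evaluations. A Python sketch reproducing the first entry of Table 2 ($n_1 = n_2 = 5$, $W = 0$, $r = 0.01$):

```python
# Upper bound of P(p(q) <= r) from Theorem 5:
# Psi_{min(n1-1, n2-1)}( Psi^{-1}_{n1+n2-2}(r) - w ),
# evaluated for the Table 2 setting n1 = n2 = 5, w = 0, r = 0.01.
from scipy.stats import t

def theorem5_bound(n1, n2, w, r):
    q = t.ppf(r, df=n1 + n2 - 2)          # Psi^{-1}_{n1+n2-2}(r)
    return t.cdf(q - w, df=min(n1 - 1, n2 - 1))

ub = theorem5_bound(n1=5, n2=5, w=0.0, r=0.01)
print(ub)  # ~0.02214, the tabled value
```

Because the argument is shifted by $-w$, the bound decreases as $w$ increases, consistent with the monotonicity visible in the tables.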

Case 3. The hypothesis to be tested is
(21) $H_{03} : \theta_3 \le \theta_{03}$ vs. $H_{a3} : \theta_3 > \theta_{03}$.
The parameter is the ratio of the two population variances,
(22) $\theta_3 = \frac{k_1 \exp(2\mu_1 + l_1)}{k_2 \exp(2\mu_2 + l_2)}$.
At the boundary of the null hypothesis, $\theta_3 = \theta_{03}$, so that
(23) $\ln\frac{k_1 \exp(2\mu_1 + l_1)}{k_2 \exp(2\mu_2 + l_2)} = \ln\theta_{03}$, which is equivalent to $\mu_1 - \mu_2 - \frac{D_2}{2} = 0$, where $D_2 = \ln(k_2/k_1) + l_2 - l_1 + \ln\theta_{03}$.
The generalized test variable for $\theta_3$ is
(24)
$T_3(X_1, X_2, x_1, x_2, \theta_3) = \bar{y}_1 - \bar{y}_2 + \frac{(\bar{Y}_2 - \bar{Y}_1) - (\mu_2 - \mu_1)}{\sqrt{\sigma_1^2/n_1 + \sigma_2^2/n_2}}\sqrt{\frac{\sigma_1^2 s_1^2}{n_1 S_1^2} + \frac{\sigma_2^2 s_2^2}{n_2 S_2^2}} - \frac{D_2}{2}$
$= \bar{y}_1 - \bar{y}_2 + Z\sqrt{\frac{(n_1-1)s_1^2}{n_1 U_1} + \frac{(n_2-1)s_2^2}{n_2 U_2}} - \frac{D_2}{2}$
$= \bar{y}_1 - \bar{y}_2 + \frac{T_{n_1+n_2-2}}{\sqrt{n_1+n_2-2}}\sqrt{\frac{((n_1-1)/n_1)s_1^2}{B} + \frac{((n_2-1)/n_2)s_2^2}{1-B}} - \frac{D_2}{2}$,
where $T_{n_1+n_2-2}$ and $B$ are as in (18). It is easy to see that $T_3(X_1, X_2, x_1, x_2, \theta_3)$ in (24) satisfies conditions (A1)-(A3) in Section 2.

The generalized $p$ value $p(q_2)$ is defined, under the null hypothesis $H_{03}$, as
(25) $p(q_2) = \sup_{H_{03}} P(T_3(X_1, X_2, x_1, x_2, \theta_3) \ge T_3(x_1, x_2, x_1, x_2, \theta_3)) = P(T_3(X_1, X_2, x_1, x_2, \theta_3) \ge 0)$.
Following (25), the generalized $p$ value for (21) is
(26)
$p(q_2) = P\!\left(\bar{y}_1 - \bar{y}_2 + \frac{T_{n_1+n_2-2}}{\sqrt{n_1+n_2-2}}\sqrt{\frac{((n_1-1)/n_1)s_1^2}{B} + \frac{((n_2-1)/n_2)s_2^2}{1-B}} - \frac{D_2}{2} \ge 0\right)$
$= P\!\left(T_{n_1+n_2-2} \le \frac{(\bar{y}_1 - \bar{y}_2 - D_2/2)\sqrt{n_1+n_2-2}}{\sqrt{((n_1-1)/n_1)s_1^2/B + ((n_2-1)/n_2)s_2^2/(1-B)}}\right)$ (by the symmetry of the $t$-distribution)
$= E_B\!\left[\Psi_{n_1+n_2-2}\!\left(\frac{(\bar{y}_1 - \bar{y}_2 - D_2/2)\sqrt{n_1+n_2-2}}{\sqrt{((n_1-1)/n_1)s_1^2/B + ((n_2-1)/n_2)s_2^2/(1-B)}}\right)\right]$,
where $\Psi_{n_1+n_2-2}(\cdot)$ is the cdf of the $t$-distribution with $n_1+n_2-2$ degrees of freedom and $E_B(\cdot)$ is the expectation operator with respect to $B \sim \mathrm{Beta}((n_1-1)/2, (n_2-1)/2)$.

Theorem 6.

For $0 < r < 0.5$, the upper bound of $P(p(q_2) \le r)$ is $\Psi_{\min(n_1-1, n_2-1)}(\Psi_{n_1+n_2-2}^{-1}(r) + w)$, where $w = (D_2/2)/\sqrt{\sigma_1^2/n_1 + \sigma_2^2/n_2}$, $\Psi_{n_1+n_2-2}(\cdot)$ is the cdf of the $t$-distribution with $n_1+n_2-2$ degrees of freedom, and $\Psi_{n_1+n_2-2}^{-1}(\cdot)$ is its inverse.

Proof.

The proof is similar to that of Theorem 5.
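Analogously, the Theorem 6 bound can be evaluated with $w$ computed from $\theta_{03}$, $\tau_1$, and $\tau_2$. The Python sketch below reproduces the first entry of Table 3 ($n_1 = n_2 = 5$, $\theta_{03} = 1.01$, $\tau_1 = \tau_2 = 1$, $r = 0.01$):

```python
# Upper bound of P(p(q2) <= r) from Theorem 6:
# Psi_{min(n1-1, n2-1)}( Psi^{-1}_{n1+n2-2}(r) + w ),
# with w = (D2/2)/sqrt(sigma1^2/n1 + sigma2^2/n2) and sigma_i^2 = ln(tau_i^2 + 1).
import numpy as np
from scipy.stats import t

def theorem6_bound(n1, n2, theta03, tau1, tau2, r):
    k1, k2 = tau1**2, tau2**2
    l1, l2 = np.log(tau1**2 + 1.0), np.log(tau2**2 + 1.0)
    s1sq, s2sq = np.log(tau1**2 + 1.0), np.log(tau2**2 + 1.0)
    d2 = np.log(k2 / k1) + l2 - l1 + np.log(theta03)
    w = (d2 / 2) / np.sqrt(s1sq / n1 + s2sq / n2)
    q = t.ppf(r, df=n1 + n2 - 2)
    return w, t.cdf(q + w, df=min(n1 - 1, n2 - 1))

w, ub = theorem6_bound(n1=5, n2=5, theta03=1.01, tau1=1.0, tau2=1.0, r=0.01)
print(w)   # ~0.009449, the W column of Table 3
print(ub)  # ~0.02235, the tabled value
```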

4. Numerical Results

In this section, we use functions written in R to compute the upper bounds of the generalized $p$ values proposed in Theorems 2, 5, and 6. For given values of $n_1$, $n_2$, $m$, $W$, $\theta_{01}$, $\theta_{03}$, and $r$, the upper bounds of $p(d)$, $p(q)$, and $p(q_2)$ computed from Theorems 2, 5, and 6 are shown in Tables 1-3. As these tables show, the upper bounds depend on the values of $n_1$, $n_2$, $m$, $W$, $\theta_{01}$, $\theta_{03}$, and $r$, and the numerical results confirm the bounds proved in Theorems 2, 5, and 6.
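The Theorem 2 bound also approaches $r$ as $n_1$ grows, as noted in Section 5. A small sweep illustrating this (Python sketch; $\theta_{01} = 2.001$ and $\tau = 1$ as in Table 1, while the $n_1$ values 10, 50, 500 are illustrative):

```python
# Theorem 2 bound Psi_{n1-1}(Phi^{-1}(r) + m) as n1 grows, for theta01 = 2.001,
# tau = 1, r = 0.01.  The first value reproduces Table 1; the bound then
# shrinks toward the nominal level r.
import numpy as np
from scipy.stats import norm, t

r, theta01, tau = 0.01, 2.001, 1.0
k1, l1 = tau**2, np.log(tau**2 + 1.0)
sigma1 = np.sqrt(np.log(tau**2 + 1.0))   # sigma^2 = ln(tau^2 + 1)

bounds = []
for n1 in (10, 50, 500):
    m = 0.5 * (np.log(theta01 / k1) - l1) / (sigma1 / np.sqrt(n1))
    bounds.append(t.cdf(norm.ppf(r) + m, df=n1 - 1))
print(bounds)  # decreasing toward 0.01
```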

Table 1: The upper bound of the $p$ value for hypothesis (10), for $\tau = 1$.

n1  θ01  m  r = 0.01  r = 0.02  r = 0.05  r = 0.10  r = 0.20
10 2.001 0.000949333 0.02254277 0.03514731 0.06730417 0.1161747 0.2111518
2.002 0.001898192 0.02257782 0.03520131 0.06740329 0.1163342 0.2114042
2.003 0.002846578 0.02261291 0.03525537 0.06750250 0.1164939 0.2116567

15 2.001 0.000949333 0.01779592 0.02963687 0.06122800 0.1105716 0.2073298
2.002 0.001898192 0.01782805 0.02968858 0.06132703 0.1107338 0.2075869
2.003 0.002846578 0.01786022 0.02974036 0.06142616 0.110896 0.2078441

20 2.001 0.000949333 0.01563906 0.02706209 0.05832057 0.1078731 0.2054935
2.002 0.001898192 0.01566962 0.02711254 0.05841947 0.1080364 0.2057529
2.003 0.002846578 0.01570022 0.02716304 0.05851845 0.1081999 0.2060124

Table 2: The upper bound of the $p$ value for hypothesis (15), for $\tau_1 = 1$, $\tau_2 = 1$, and $W = 0$.

(n1, n2)  r = 0.01  r = 0.02  r = 0.05  r = 0.10  r = 0.20
5, 5 0.02213745 0.03526116 0.06823592 0.1174918 0.2121440
10, 10 0.01553660 0.02705864 0.05846833 0.1080566 0.2055204
15, 15 0.01356674 0.02457767 0.05550640 0.1052267 0.2035670
20, 20 0.01262788 0.02338536 0.05407854 0.1038673 0.2026341
30, 30 0.01172026 0.02222509 0.05268530 0.1025436 0.2017292

Table 3: The upper bound of the $p$ value for hypothesis (21), for $\tau_1 = 1$ and $\tau_2 = 1$.

(n1, n2)  θ03  W  r = 0.01  r = 0.02  r = 0.05  r = 0.10  r = 0.20
5, 5 1.01 0.00944854 0.02234846 0.03562201 0.06898679 0.1188115 0.2144116
1.02 0.01880399 0.02255971 0.03598345 0.06973902 0.1201326 0.2166765
1.03 0.02806817 0.02277119 0.03634547 0.07049260 0.1214551 0.2189384

10, 10 1.01 0.00944854 0.01577932 0.02747895 0.05934179 0.1095594 0.2079965
1.02 0.01880399 0.0160234 0.02790139 0.06021843 0.1110648 0.2104685
1.03 0.02806817 0.01626883 0.02832593 0.06109821 0.1125725 0.2129362

15, 15 1.01 0.00944854 0.01381588 0.02501398 0.05641761 0.1067866 0.2061060
1.02 0.01880399 0.01406687 0.02545308 0.05733271 0.1083493 0.2086403
1.03 0.02806817 0.01431970 0.02589493 0.05825161 0.1099147 0.2111700

20, 20 1.01 0.009448542 0.01287934 0.02382890 0.05500805 0.1054549 0.2052033
1.02 0.01880399 0.01313292 0.02427558 0.05594180 0.1070456 0.2077678
1.03 0.02806817 0.01338861 0.02472537 0.05687972 0.1086391 0.2103272

30, 30 1.01 0.009448542 0.01197343 0.02267532 0.05363272 0.1041586 0.2043280
1.02 0.01880399 0.01222900 0.02312906 0.05458475 0.1057767 0.2069217
1.03 0.02806817 0.01248697 0.02358628 0.05554131 0.1073979 0.2095102
5. Conclusion

We proposed three new generalized $p$ values for testing (1) one population variance, (2) the difference between two population variances, and (3) the ratio of two population variances of lognormal distributions when the coefficients of variation are known. We also proved new upper bounds for the proposed generalized $p$ values. We note that the results for cases (1), (2), and (3) are analogous to the upper bound of the generalized $p$ value for the Behrens-Fisher problem proposed by Tang and Tsui. The numerical results in Tables 1-3 confirm the upper bounds proved in Theorems 2, 5, and 6; we also found that the proposed upper bounds increase with the parameter values $m$ and $W$. For example, for $n_1 = 10$, $m = 0.002846578$, and $r = 0.01$, the upper bound of the $p$ value from Theorem 2 is 0.02261291, and this upper bound approaches $r = 0.01$ as $n_1$ increases. Similar results hold for the other cases. For the two-tailed test, that is, $H_{01} : \theta_1 = \theta_{01}$ vs. $H_{a1} : \theta_1 \ne \theta_{01}$, it is easy to apply the results of Theorem 2 of Tang and Tsui to all the hypotheses in this paper, so we omit the details.

Appendix

Proof of Theorem 1.

Define $h(u_1) = T_{n_1-1}\sqrt{u_1/(n_1-1)} - m$, so that $f(u_1) = \Phi(h(u_1))$. Let $\phi$ denote the density function of the standard normal distribution.

Hence
(A.1) $f'(u_1) = \phi(h(u_1))h'(u_1)$, $f''(u_1) = \phi'(h(u_1))[h'(u_1)]^2 + \phi(h(u_1))h''(u_1)$.
For $T_{n_1-1} \le 0$, $h(u_1) < 0$. Hence $\phi'(h(u_1)) > 0$ and $\phi(h(u_1)) > 0$.

Moreover,
(A.2) $h'(u_1) = \frac{1}{2}\,\frac{T_{n_1-1}\,u_1^{-1/2}}{\sqrt{n_1-1}}$, $h''(u_1) = -\frac{1}{4}\,\frac{T_{n_1-1}\,u_1^{-3/2}}{\sqrt{n_1-1}} > 0$.
Hence $f''(u_1) > 0$, and $f(u_1)$ is convex in $u_1$.

Proof of Theorem 2.

Denote $z = \bar{y}_1/(\sigma_1/\sqrt{n_1})$ and $U_1 = (n_1-1)S_1^2/\sigma_1^2 \sim \chi^2_{n_1-1}$.

From (14), we have
(A.3)
$p(d) = E_{U_1}\!\left[\Phi\!\left(\frac{\bar{y}_1 - \frac{1}{2}[\ln(\theta_{01}/k_1) - l_1]}{s_1/\sqrt{n_1}}\sqrt{\frac{U_1}{n_1-1}}\right)\right]$
$= E_{U_1}\!\left[\Phi\!\left(\frac{\bar{y}_1}{\sigma_1/\sqrt{n_1}} \cdot \frac{\sigma_1}{s_1}\sqrt{\frac{U_1}{n_1-1}} - \frac{\frac{1}{2}[\ln(\theta_{01}/k_1) - l_1]}{\sigma_1/\sqrt{n_1}}\sqrt{\frac{U_1}{n_1-1}}\sqrt{\frac{n_1-1}{U_1}}\right)\right]$
$= E_{U_1}\!\left[\Phi\!\left(T_{n_1-1}\sqrt{\frac{U_1}{n_1-1}} - m\right)\right]$, $\quad m = \frac{\frac{1}{2}[\ln(\theta_{01}/k_1) - l_1]}{\sigma_1/\sqrt{n_1}}$,
where $T_{n_1-1} = z\,\sigma_1/s_1$ has the $t$-distribution with $n_1-1$ degrees of freedom. For any $r < 0.5$ with $p(d) < r$, Theorem 1 and Jensen's inequality, $E[f(U_1)] \ge f(E[U_1])$ with $E[U_1] = n_1 - 1$, give
(A.4) $p(d) = E_{U_1}\!\left[\Phi\!\left(T_{n_1-1}\sqrt{\frac{U_1}{n_1-1}} - m\right)\right] \ge \Phi\!\left(T_{n_1-1}\sqrt{\frac{E[U_1]}{n_1-1}} - m\right) = \Phi(T_{n_1-1} - m) = p_1(d)$.
For $0 < r < 0.5$, we have
(A.5) $P(d : p(d) \le r) \le P(d : p_1(d) \le r) = P(\Phi(T_{n_1-1} - m) \le r) = P(T_{n_1-1} \le \Phi^{-1}(r) + m) = \Psi_{n_1-1}(\Phi^{-1}(r) + m)$, where $m = \frac{1}{2}[\ln(\theta_{01}/k_1) - l_1]/(\sigma_1/\sqrt{n_1})$.

Proof of Theorem 3.

Define $f(b) = \Psi_\nu(h(b))$ with $h(b) = (z + w)(k_1/b + k_2/(1-b))^{-1/2}$, and let $\psi_\nu$ denote the density function of the $t$-distribution with $\nu$ degrees of freedom. Hence
(A.6) $f'(b) = \psi_\nu(h(b))h'(b)$, $f''(b) = \psi_\nu'(h(b))[h'(b)]^2 + \psi_\nu(h(b))h''(b)$.
For $z \le 0$ and $w \le 0$, we have $h(b) < 0$, so $\psi_\nu'(h(b)) \ge 0$ and $\psi_\nu(h(b)) \ge 0$.

We have
(A.7) $h'(b) = -\frac{z+w}{2}\left(\frac{k_1}{b} + \frac{k_2}{1-b}\right)^{-3/2}\left(-\frac{k_1}{b^2} + \frac{k_2}{(1-b)^2}\right)$,
$h''(b) = -(z+w)\,\frac{\dfrac{k_1^2}{4b^4} + \dfrac{k_2^2}{4(1-b)^4} + \dfrac{3k_1k_2}{2b^2(1-b)^2} + \dfrac{k_1k_2}{b(1-b)^3} + \dfrac{k_1k_2}{b^3(1-b)}}{\left(k_1/b + k_2/(1-b)\right)^{5/2}} \ge 0$.
Hence $f''(b) \ge 0$, and $f(b)$ is a convex function of $b$.

Proof of Theorem 4.

(A.8)
$g(t) = P\!\left(\Psi_{n_1+n_2-2}\!\left((z+w)\sqrt{\frac{1}{tC_{n_1-1}/(n_1-1) + (1-t)C_{n_2-1}/(n_2-1)}}\right) \le r\right)$
$= P\!\left((z+w)\sqrt{\frac{1}{tC_{n_1-1}/(n_1-1) + (1-t)C_{n_2-1}/(n_2-1)}} \le \Psi_{n_1+n_2-2}^{-1}(r)\right)$
$= P\!\left(z \le \sqrt{\frac{tC_{n_1-1}}{n_1-1} + \frac{(1-t)C_{n_2-1}}{n_2-1}}\,\Psi_{n_1+n_2-2}^{-1}(r) - w\right)$
$= E\!\left[\Phi\!\left(\sqrt{\frac{tC_{n_1-1}}{n_1-1} + \frac{(1-t)C_{n_2-1}}{n_2-1}}\,\Psi_{n_1+n_2-2}^{-1}(r) - w\right)\right]$,
where $\Phi$ is the cdf of the standard normal distribution.

Let $g_1(t) = \Phi(h_1(t))$ with $h_1(t) = \sqrt{tC_{n_1-1}/(n_1-1) + (1-t)C_{n_2-1}/(n_2-1)}\,\Psi_{n_1+n_2-2}^{-1}(r) - w$, and let $\phi$ be the density function of the standard normal distribution. Hence
(A.9) $g_1''(t) = \phi'(h_1(t))[h_1'(t)]^2 + \phi(h_1(t))h_1''(t)$.
We have
(A.10) $h_1'(t) = \frac{1}{2}\Psi_{n_1+n_2-2}^{-1}(r)\left(\frac{tC_{n_1-1}}{n_1-1} + \frac{(1-t)C_{n_2-1}}{n_2-1}\right)^{-1/2}\left(\frac{C_{n_1-1}}{n_1-1} - \frac{C_{n_2-1}}{n_2-1}\right)$,
$h_1''(t) = -\frac{1}{4}\Psi_{n_1+n_2-2}^{-1}(r)\left(\frac{tC_{n_1-1}}{n_1-1} + \frac{(1-t)C_{n_2-1}}{n_2-1}\right)^{-3/2}\left(\frac{C_{n_1-1}}{n_1-1} - \frac{C_{n_2-1}}{n_2-1}\right)^2 \ge 0$,
since $\Psi_{n_1+n_2-2}^{-1}(r) < 0$ for $r < 0.5$. We have $g_1''(t) \ge 0$; hence $g_1(t)$ is convex in $t$. As a result, $g(t) = E[g_1(t)]$ is convex in $t$.

Proof of Theorem 5.

Denote
(A.11) $T = \frac{\sigma_1^2/n_1}{\sigma_1^2/n_1 + \sigma_2^2/n_2}$, $z = \frac{\bar{y}_1 - \bar{y}_2}{\sqrt{\sigma_1^2/n_1 + \sigma_2^2/n_2}}$, $C_{n_1-1} = \frac{(n_1-1)s_1^2}{\sigma_1^2}$, $C_{n_2-1} = \frac{(n_2-1)s_2^2}{\sigma_2^2}$.
From (20),
(A.12)
$p(q) = E_B\!\left[\Psi_{n_1+n_2-2}\!\left(\frac{(\bar{y}_1 - \bar{y}_2 + D_1/2)\sqrt{n_1+n_2-2}}{\sqrt{((n_1-1)/n_1)s_1^2/B + ((n_2-1)/n_2)s_2^2/(1-B)}}\right)\right]$
$= E_B\!\left[\Psi_{n_1+n_2-2}\!\left((z+w)\sqrt{\frac{n_1+n_2-2}{TC_{n_1-1}/B + (1-T)C_{n_2-1}/(1-B)}}\right)\right]$, $\quad w = \frac{D_1/2}{\sqrt{\sigma_1^2/n_1 + \sigma_2^2/n_2}}$.
For any $r < 0.5$ with $p(q) < r$, Theorem 3 and Jensen's inequality, $E_B[f(B)] \ge f(E[B])$ with $E[B] = (n_1-1)/(n_1+n_2-2)$, give
(A.13) $p(q) \ge \Psi_{n_1+n_2-2}\!\left((z+w)\sqrt{\frac{1}{TC_{n_1-1}/(n_1-1) + (1-T)C_{n_2-1}/(n_2-1)}}\right) = p_1(q)$.
For $0 < r < 0.5$, we have
(A.14)
$P(q : p(q) \le r) \le P(q : p_1(q) \le r) = g(T)$.
By Theorem 4, $g$ is convex on $[0, 1]$, so $g(T) \le \max\{g(1), g(0)\}$
$= \max\!\left\{P\!\left(z \le \sqrt{\frac{C_{n_1-1}}{n_1-1}}\,\Psi_{n_1+n_2-2}^{-1}(r) - w\right),\; P\!\left(z \le \sqrt{\frac{C_{n_2-1}}{n_2-1}}\,\Psi_{n_1+n_2-2}^{-1}(r) - w\right)\right\}$
$= \max\!\left\{E\!\left[\Psi_{n_1-1}\!\left(\Psi_{n_1+n_2-2}^{-1}(r) - w\sqrt{\frac{n_1-1}{C_{n_1-1}}}\right)\right],\; E\!\left[\Psi_{n_2-1}\!\left(\Psi_{n_1+n_2-2}^{-1}(r) - w\sqrt{\frac{n_2-1}{C_{n_2-1}}}\right)\right]\right\}$
$\le \max\!\left\{\Psi_{n_1-1}\!\left(\Psi_{n_1+n_2-2}^{-1}(r) - w\,E\sqrt{\frac{n_1-1}{C_{n_1-1}}}\right),\; \Psi_{n_2-1}\!\left(\Psi_{n_1+n_2-2}^{-1}(r) - w\,E\sqrt{\frac{n_2-1}{C_{n_2-1}}}\right)\right\}$
$= \Psi_{\min(n_1-1, n_2-1)}\!\left(\Psi_{n_1+n_2-2}^{-1}(r) - w\right)$, $\quad w = \frac{D_1/2}{\sqrt{\sigma_1^2/n_1 + \sigma_2^2/n_2}}$.

Competing Interests

The authors declare that there are no competing interests regarding the publication of this paper.

Acknowledgments

The second author is grateful to Grant no. KMUTNB-GOV-59-36 from King Mongkut’s University of Technology North Bangkok.

References

1. H. P. Singh, L. N. Upadhyaya, and U. D. Namjoshi, "Estimation of finite population variance," Current Science, vol. 57, no. 24, pp. 1331-1334, 1988.
2. M. C. Agrawal and A. B. Sthapit, "Unbiased ratio-type variance estimation," Statistics and Probability Letters, vol. 25, no. 4, pp. 361-364, 1995.
3. A. Arcos Cebrián and M. Rueda García, "Variance estimation using auxiliary information: an almost unbiased multivariate ratio estimator," Metrika, vol. 45, no. 2, pp. 171-178, 1997.
4. A. Arcos, M. Rueda, M. D. Martínez, S. González, and Y. Román, "Incorporating the auxiliary information available in variance estimation," Applied Mathematics and Computation, vol. 160, no. 2, pp. 387-399, 2005.
5. C. Kadilar and H. Cingi, "Ratio estimators for the population variance in simple and stratified random sampling," Applied Mathematics and Computation, vol. 173, no. 2, pp. 1047-1059, 2006.
6. V. Cojbasic and A. Tomovic, "Nonparametric confidence intervals for population variance of one sample and the difference of variances of two samples," Computational Statistics & Data Analysis, vol. 51, no. 12, pp. 5562-5578, 2007.
7. V. Cojbasic and D. Loncar, "One-sided confidence intervals for population variances of skewed distributions," Journal of Statistical Planning and Inference, vol. 141, no. 5, pp. 1667-1672, 2011.
8. V. C. Rajic, J. Kocovic, D. Loncar, and T. R. Antic, "Testing population variance in case of one sample and the difference of variances in case of two samples: example of wage and pension data sets in Serbia," Economic Modelling, vol. 29, no. 3, pp. 610-613, 2012.
9. R. Singh and S. Malik, "Improved estimation of population variance using information on auxiliary attribute in simple random sampling," Applied Mathematics and Computation, vol. 235, pp. 43-49, 2014.
10. K.-W. Tsui and S. Weerahandi, "Generalized p-values in significance testing of hypotheses in the presence of nuisance parameters," Journal of the American Statistical Association, vol. 84, no. 406, pp. 602-607, 1989.
11. S. Weerahandi, Exact Statistical Methods for Data Analysis, Springer, New York, NY, USA, 1995.
12. S. Tang and K.-W. Tsui, "Distributional properties for the generalized p-value for the Behrens-Fisher problem," Statistics & Probability Letters, vol. 77, no. 1, pp. 1-8, 2007.
13. R. Somkhuean, S. Niwitpong, and S.-A. Niwitpong, "Upper bounds of generalized p-values for testing the coefficients of variation of lognormal distributions," Chiang Mai Journal of Science, vol. 43, pp. 671-681, 2016.
14. J. Gamage and S. Weerahandi, "Size performance of some tests in one-way ANOVA," Communications in Statistics - Simulation and Computation, vol. 27, no. 3, pp. 625-640, 1998.
15. P. Kabaila and C. J. Lloyd, "Tight upper confidence limits from discrete data," The Australian Journal of Statistics, vol. 39, no. 2, pp. 193-204, 1997.
16. The R Development Core Team, An Introduction to R, R Foundation for Statistical Computing, Vienna, Austria, 2010, http://www.R-project.org.