Sample Size Calculation for Controlling False Discovery Proportion

The false discovery proportion (FDP), the proportion of incorrect rejections among all rejections, is a direct measure of the abundance of false positive findings in multiple testing. Many methods have been proposed to control the FDP, but they are too conservative to be useful for power analysis. Study designs controlling the mean of the FDP, which is the false discovery rate (FDR), have been commonly used. However, there has been little attempt to design a study with direct FDP control to achieve a certain level of efficiency. We provide a sample size calculation method that uses the variance formula of the FDP under weak-dependence assumptions to achieve the desired overall power. The relationship between design parameters and sample size is explored. The adequacy of the procedure is assessed by simulation. We illustrate the method using estimated correlations from a prostate cancer dataset.


Introduction
Modern biomedical research frequently involves parallel measurements of a large number of quantities of interest, such as gene expression levels, single nucleotide polymorphisms (SNPs) and DNA copy number variations. The scientific question can often be formulated as a multiple testing problem. In order to address the multiplicity issue, many methods have been proposed to control the family-wise error rate (FWER), the false discovery rate (FDR) or the false discovery proportion (FDP). Controlling FDR has been widely used in high-dimensional data analysis [1-3]. FDR is the expected value of the FDP, the proportion of incorrect rejections among all rejections. Controlling FDR ensures that the average of the FDP over many independently repeated experiments is under control. However, the variability of the FDP is ignored, and the actual FDP could be much greater than the FDR with high probability. Procedures have therefore been proposed to control the FDP directly, that is, to guarantee P(FDP > r_1) ≤ c_1 for given r_1 and c_1. This is a more stringent criterion than FDR because the proportion of false rejections is bounded above by r_1 with high probability. The FDP controlling procedures are generally too conservative to be useful for the purpose of study design or power analysis.
When we design studies involving multiple testing, it is important to determine the sample size needed to ensure adequate statistical power. Methods for calculating sample size have been proposed to control various criteria, for example, FWER [12-14], FDR [15-20], the number of false discoveries [19, 21] and FDP [22]. For controlling FDP, Oura et al. [22] provided a method to calculate sample size using the beta-binomial model for the sum of the rejection statuses of true alternative hypotheses. It is assumed that only test statistics of true alternative hypotheses are dependent, with a parametric correlation structure. This assumption is restrictive because null test statistics can also be correlated, and the dependence structure can be more complicated than the assumed parametric correlation structure. Furthermore, the computation is intensive because evaluation of the beta-binomial distribution is required. However, to our knowledge this is the only paper that directly deals with this important design problem.
In this paper, we provide a more general method of sample size calculation for controlling FDP under weak-dependence assumptions. Under some assumptions on the dependence among test statistics, explicit formulas for the mean and variance of the FDP have been derived for each fixed effect size [23]. The formulas elucidate the effects of various design parameters on the variance of the FDP. Moreover, they provide a convenient tool for calculating the sample size needed to control the FDP. As in [13, 18, 19, 24], we consider the probability of detecting at least a specified proportion of true alternative hypotheses as the power criterion. An iterative computation algorithm for calculating sample size is provided. Simulation experiments indicate that studies with the resultant sample sizes satisfy the power criterion at the given rejection threshold. We illustrate the sample size calculation procedure using a prostate cancer dataset.

Notation
Suppose that m hypotheses are tested simultaneously. Let M_0 denote the index set of the m_0 tests for which the null hypotheses are true and M_1 the index set of the m_1 = m − m_0 tests for which the alternative hypotheses are true. Denote the proportion of true null hypotheses by π_0 = m_0/m. We reject a hypothesis if its P value is less than some threshold α, and denote the rejection status of the ith test by R_i(α) = I(p_i < α), where p_i denotes the P value of the ith test and I(·) is an indicator function. The number of rejections is R = Σ_{i=1}^m R_i(α). Let the comparison-wise type II error of the ith test be β_i and the average type II error be β = (1/m_1) Σ_{i∈M_1} β_i. Table 1 summarizes the outcomes of the m tests and their expected values.
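As a concrete illustration of this notation, the rejection indicators R_i(α) and the FDP can be computed directly from a vector of P values. The following Python sketch uses helper names of our own choosing; it is not part of the paper's procedure.

```python
def reject(p_values, alpha):
    """Rejection indicators R_i(alpha) = I(p_i < alpha)."""
    return [1 if p < alpha else 0 for p in p_values]

def fdp(p_values, null_index, alpha):
    """False discovery proportion: false rejections / total rejections.
    Defined as 0 when nothing is rejected."""
    R = reject(p_values, alpha)
    total = sum(R)
    if total == 0:
        return 0.0
    false = sum(R[i] for i in null_index)
    return false / total
```

For example, with P values [0.001, 0.2, 0.03, 0.5] and tests 1 and 2 (0-indexed) truly null, the threshold α = 0.05 rejects tests 0 and 2, one of which is a false rejection, giving FDP = 1/2.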
Denote the Pearson correlation coefficient of two rejection indicators by θ_ij = corr(R_i(α), R_j(α)). Furthermore, for i, j ∈ M_0, write this correlation as θ_{V,ij}, and let the average over null pairs be denoted θ_V. Similarly, for i, j ∈ M_1, write θ_{U,ij}, with average correlation θ_U. In addition, for i ∈ M_1 and j ∈ M_0, write θ_{UV,ij}, and denote the average correlation by θ_UV.
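When replicated rejection indicators are available (for example, from simulation), these average correlations can be estimated empirically by averaging pairwise Pearson correlations. The sketch below is our own helper, shown only for θ_V (null pairs); θ_U and θ_UV follow the same pattern with different index sets.

```python
import math

def pearson(x, y):
    """Pearson correlation of two sequences (0 if either is constant)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y)) / n
    vx = sum((a - mx) ** 2 for a in x) / n
    vy = sum((b - my) ** 2 for b in y) / n
    return cov / math.sqrt(vx * vy) if vx > 0 and vy > 0 else 0.0

def theta_V(R, M0):
    """Average correlation of rejection indicators over null pairs i < j in M0.
    R is a list of replications; R[k][i] is the rejection status of test i."""
    cols = {i: [r[i] for r in R] for i in M0}
    pairs = [(i, j) for i in M0 for j in M0 if i < j]
    return sum(pearson(cols[i], cols[j]) for i, j in pairs) / len(pairs)
```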

The Effect of Design Parameters on the Variance of FDP
It has been shown via numerical studies that the variability of the FDP increases when test statistics are dependent [25, 26]. But the relationship between design parameters and the variance of the FDP has not been examined through analytical formulas. Under the assumptions of a common effect size and weak dependence among test statistics, explicit formulas for the mean μ_Q and variance σ²_Q of the FDP have been derived [23], where ω = α/(1 − α). The variance formula (2.10) elucidates the effects of various design parameters on the variance of the FDP. To explore these effects, in Figure 1 we calculated σ_Q using (2.10) and plotted it against m for different correlations θ_V. We set π_0 = 0.7 and m in the range of 1000 to 10000. The average correlations θ_U and θ_UV are fixed at 0.001 and 0, respectively. The levels of α and β are chosen such that the FDR is 3% or 5%. At each value of θ_V, σ_Q decreases as the number of tests m increases. The solid line shows the standard deviation of the FDP when θ_V is 0. When θ_V is not 0, σ_Q increases markedly. If test statistics are highly correlated, the FDP can be much greater than its mean (the FDR) at a given rejection threshold because of its large variability.
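The qualitative effect of θ_V can also be checked by direct Monte Carlo simulation, without the closed-form variance (2.10). The sketch below is our own construction, not the paper's formula: it generates one-sided z-tests in which the null statistics are equicorrelated within blocks (shared-factor construction) and estimates the standard deviation of the FDP.

```python
import math
import random

def norm_sf(z):
    """One-sided P value for a z statistic: P(Z > z)."""
    return 0.5 * (1.0 - math.erf(z / math.sqrt(2.0)))

def fdp_sd(rho, m0=140, m1=60, block=20, shift=3.0, alpha=0.05,
           reps=500, seed=1):
    """Monte Carlo SD of the FDP with equicorrelated blocks of null statistics."""
    rng = random.Random(seed)
    fdps = []
    for _ in range(reps):
        z = []
        for _ in range(m0 // block):          # null statistics, within-block corr rho
            w = rng.gauss(0.0, 1.0)           # shared block factor
            for _ in range(block):
                z.append(math.sqrt(rho) * w +
                         math.sqrt(1.0 - rho) * rng.gauss(0.0, 1.0))
        for _ in range(m1):                   # independent alternatives, mean shift
            z.append(shift + rng.gauss(0.0, 1.0))
        rej = [i for i, zi in enumerate(z) if norm_sf(zi) < alpha]
        false = sum(1 for i in rej if i < m0)
        fdps.append(false / len(rej) if rej else 0.0)
    mean = sum(fdps) / reps
    return math.sqrt(sum((f - mean) ** 2 for f in fdps) / reps)
```

With these (arbitrary but representative) settings, fdp_sd(0.6) is noticeably larger than fdp_sd(0.0), matching the pattern shown in Figure 1.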
In Figure 2, the relationship between σ_Q and π_0 is investigated. When the other parameters are fixed, σ_Q increases as π_0 increases.
Figure 3 shows that σ_Q increases as β increases. When other factors are fixed, the variability of the FDP is smaller when the comparison-wise type II error is smaller.

Power and Sample Size Analysis
Under some general regularity conditions, including weak dependence among test statistics, the FDP follows an asymptotic normal distribution N(μ_Q, σ²_Q). A log transformation can be applied by the delta method, and under weak dependence log FDP is closer to normal than the FDP itself. The approximate mean and variance of Y = log FDP are given in (2.12) and (2.13) [23], where Σ is defined in (2.11).
To control the FDP with the desired power, criterion (1.1) has to be satisfied. Asymptotic normality of log FDP implies P(FDP ≤ r_1) ≈ Φ((log r_1 − μ_Y)/σ_Y), where Φ(·) is the cumulative distribution function (CDF) of the standard normal distribution, μ_Y is given in (2.12), and σ²_Y in (2.13). There are two commonly used power criteria in multiple testing: the average power, defined as E(U/m_1), and the overall power, defined as P(U/m_1 ≥ r_2) ≥ c_2 for given r_2 and c_2. When a study is designed using the average power criterion, the proportion of true alternative hypotheses rejected is greater than a prespecified number only on average. Under dependence among test statistics the variability of U/m_1 increases [18], and the study can be underpowered with high probability. Consequently, the overall power has been used in [13, 18, 19, 24], and we also use this power criterion.
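The contrast between the two criteria can be sketched numerically. Assuming independence, U/m_1 has approximate mean 1 − β and variance β(1 − β)/m_1; the var_inflation factor below is a hypothetical stand-in for the variance increase under dependence, not the paper's expression (2.16).

```python
import math

def norm_cdf(x):
    """Standard normal CDF."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def overall_power(beta, m1, r2, var_inflation=1.0):
    """Normal approximation to the overall power P(U/m1 >= r2).
    Under independence Var(U/m1) = beta*(1-beta)/m1; var_inflation > 1
    mimics the variance increase under dependence."""
    mean = 1.0 - beta
    sd = math.sqrt(var_inflation * beta * (1.0 - beta) / m1)
    return 1.0 - norm_cdf((r2 - mean) / sd)
```

For example, with β = 0.05 and m_1 = 100 the average power 1 − β = 0.95 exceeds r_2 = 0.9, and the overall power is high under independence, but inflating the variance (mimicking dependence) lowers it, which is why the overall power criterion is the safer design target.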
Under the weak-dependence assumptions in [18, 23], U/m_1 has an asymptotic normal distribution, given in (2.16).
Setting the inequality in (2.15) to equality, the following equation for β can be obtained as in [18]. For illustration, consider a two-sample one-sided t-test. Let δ denote the effect size (the mean difference divided by the common standard deviation), and let a_1 and 1 − a_1 denote the allocation proportions for the two groups. We first find α and β that fulfill criteria (1.1) and (2.15). The required sample size n is the smallest integer satisfying inequality (2.18). Following the notation defined in Section 2.1, the correlations between rejection indicators can be calculated from (2.19)-(2.21), where Ψ_{n−2} is the CDF of a bivariate t-distribution with n − 2 degrees of freedom and ρ_ij denotes the Pearson correlation between the ith and jth test statistics. As can be seen from these formulas, the correlations depend on α and β. No analytical solutions can be found for these two parameters, so we use the following iterative algorithm to calculate the sample size.
(4) Using the current values of θ_U, θ_V, θ_UV and β, solve for α.
(5) Using the current estimates of β and α, calculate θ_{V,ij}, θ_{U,ij} and θ_{UV,ij} from (2.19), (2.20) and (2.21), respectively, and obtain the average correlations θ_V, θ_U and θ_UV.
(6) With the updated estimates of θ_V, θ_U and θ_UV, repeat steps 3 to 5 until the estimates of β and α converge.
(7) Plug the estimated β and α into (2.18) to solve for the sample size.
The estimates of the rejection threshold α and the comparison-wise type II error β are also obtained in the process.
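To give a feel for step (7), the sketch below computes the smallest n for a one-sided two-sample comparison using a normal approximation in place of the bivariate-t-based inequality (2.18). It ignores the correlation adjustment entirely and is only a simplified stand-in for the paper's procedure.

```python
import math

def z_quantile(p):
    """Inverse standard normal CDF by bisection (adequate for a sketch)."""
    lo, hi = -10.0, 10.0
    for _ in range(200):
        mid = (lo + hi) / 2.0
        if 0.5 * (1.0 + math.erf(mid / math.sqrt(2.0))) < p:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2.0

def sample_size(alpha, beta, delta, a1=0.5):
    """Smallest total n giving power 1-beta for a one-sided two-sample z-test
    at per-test level alpha, effect size delta, allocation proportion a1.
    Requires z_{1-alpha} + z_{1-beta} <= delta * sqrt(n * a1 * (1 - a1))."""
    z = z_quantile(1.0 - alpha) + z_quantile(1.0 - beta)
    return math.ceil((z / delta) ** 2 / (a1 * (1.0 - a1)))
```

As expected, the required n grows as the rejection threshold α shrinks, which is why multiplicity-adjusted designs need larger samples than single-test designs.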

Simulation
The proposed sample size calculation procedure was illustrated for a one-sided t-test comparing the means of two groups. The effect size was δ = 1 and the allocation proportion a_1 = 0.5. Two types of correlation structure were used: a blockwise correlation structure and an autoregressive correlation structure. In the blockwise structure, a proportion of the test statistics were correlated in units of blocks. The correlation coefficient within a block was constant, and test statistics were independent across blocks. True null test statistics and true alternative test statistics were independent of each other. In the autoregressive structure, the correlation matrix for dependent test statistics was parameterized by σ_ij(ρ) = ρ^{|i−j|}, where σ_ij(ρ) is the Pearson correlation coefficient of the ith and jth test statistics and ρ is a correlation parameter.
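The autoregressive structure above can be simulated with a standard AR(1) recursion; the following generic sketch (our own construction, not the paper's simulation code) yields exactly Corr(Z_i, Z_j) = ρ^|i−j| when the chain is initialized from its stationary distribution.

```python
import math
import random

def ar1_statistics(m, rho, seed=None):
    """Generate m standard-normal test statistics with Corr(Z_i, Z_j) = rho^|i-j|
    via the stationary AR(1) recursion Z_i = rho*Z_{i-1} + sqrt(1-rho^2)*e_i."""
    rng = random.Random(seed)
    z = [rng.gauss(0.0, 1.0)]                 # stationary N(0, 1) start
    s = math.sqrt(1.0 - rho * rho)
    for _ in range(m - 1):
        z.append(rho * z[-1] + s * rng.gauss(0.0, 1.0))
    return z
```

The empirical lag-1 correlation of a long simulated series is close to ρ, and each marginal variance stays at 1, so the construction matches the parameterization σ_ij(ρ) = ρ^|i−j| used in the simulations.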
Oura et al. [22] provided a sample size calculation method for controlling FDP using the beta-binomial model, in which only test statistics of true alternative hypotheses are allowed to be dependent, with a blockwise correlation structure. For comparison, this method and the sample size calculation procedure for controlling FDR with dependence adjustment in [18] were also assessed. Specifically, the criteria for controlling FDP are P(FDP ≤ 0.05) ≥ 0.95 and P(U/m_1 ≥ 0.9) ≥ 0.8.

Table 2 presents the sample size estimates for the blockwise correlation structure under several parameter configurations. The block size is 20 or 100 for m = 2000 or 10000, respectively. We observe that the sample size increases as the correlation between test statistics gets stronger, that is, with a greater correlation parameter or a larger proportion of correlated test statistics. When the correlation is fixed, the required sample size decreases as the number of tests m increases. With the other parameters fixed, the required sample size decreases when the number of true alternative hypotheses increases (π_0 decreases).
The sample sizes for controlling FDP are greater than those for controlling FDR because controlling FDP is in general more stringent. In the case that π_0 = 0.9, p_v = 0.3, ρ_v = 0.6 and m = 2000 (see Table 2), the sample size for controlling FDP is 81, which is 23% greater than the sample size for controlling FDR. The sample sizes using the method in [22] are shown in parentheses and are slightly smaller than ours. In terms of computational efficiency, our algorithm converges quickly, generally within 10 steps. The computational burden is light and comparable to that of the procedure in [18] for controlling FDR with dependence adjustment. The method of Oura et al. [22] is more computationally intensive and becomes infeasible when the number of tests or the number of blocks of dependent test statistics is large. Simulation studies show that the FDP is controlled and the power is achieved with the sample size given by our procedure at the calculated rejection threshold α (results not shown).
Table 3 presents the sample sizes for the autoregressive correlation structure. Similar trends in sample size are observed as the design parameters vary. The method in [22] is not applicable to this dependence structure.

Sample Size Calculation Based on a Prostate Cancer Dataset
We use a prostate cancer dataset as the source of the correlation structure to illustrate the proposed sample size calculation method while ensuring overall power. The study by Wang et al. [28] investigated the association between mRNA gene expression levels and the aggressive phenotype of prostate cancer. The dataset contains 13935 mRNA measurements from 62 patients with aggressive prostate cancer and 63 patients with nonaggressive disease. The method in

Table 1 :
Outcomes and expected outcomes of testing m hypotheses.