So I am trying to see how close the sample size calculations (for two-sample independent proportions with unequal sample sizes) are between PROC POWER in SAS and some sample size functions in R. I am using the data found at a UCLA website.
The UCLA site gives parameters as follows:
p1 = .3, p2 = .15, power = .8, null difference = 0; the two-sided tests assume equal sample sizes.
For the unequal-sample-size tests the parameters are the same, with group weights of 1 for group 1 and 2 for group 2, and the tests performed are one-sided.
I am using the R function
pwr.t.test(n=NULL, d=0, sig.level=0.05, type="two.sample", alternative="two.sided")
from the pwr package.
So if I input the parameter selections as the UCLA site has them for their first example, I get the following error:
Error in uniroot(function(n) eval(p.body) - power, c(2, 1e+07)) :
f() values at end points not of opposite sign.
This appears to be because the difference is undetectable by R. When I set d=.5 it ran. Would SAS give an error as well for too small a difference? It doesn't in the example, since their null difference is also zero.
I also get the error above when using
pwr.2p.test(h = 0, n = NULL, sig.level = .05, power = .8)
and
pwr.chisq.test(w = 0, N = NULL, df = 1, sig.level = .05, power = .8).
I may be doing something horribly wrong, but I can't seem to find a way to do this if the hypothesized difference is 0.
I understand that SAS and R are using different methods for calculating power, so I shouldn't expect identical results. I am really just trying to see whether I can replicate the PROC POWER results in R.
I have been able to get near-identical results for the first example (equal sample sizes, two-sided alternative) using
bsamsize(p1=.30, p2=.15, fraction=.5, alpha=.05, power=.8)
from the Hmisc package. But when they do one-sided tests with unequal sample sizes, I can't replicate those results.
Is there a way to replicate the process in R for the one-sided sample size calculations with unequal group sizes?
Cheers.
In pwr.t.test and its derivatives, d is not the null difference (that is assumed to be zero) but the effect size, i.e. the hypothesized difference between the two populations. If the difference between population means is zero, no sample size will let you detect a nonexistent difference.
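For example, re-running the question's pwr.t.test call with a nonzero effect size (here Cohen's conventional "medium" d = 0.5, chosen purely for illustration, matching the d=.5 the asker tried) solves without error:

```r
library(pwr)

# d is the standardized effect size (Cohen's d), not the null difference
res <- pwr.t.test(d = 0.5, power = 0.80, sig.level = 0.05,
                  type = "two.sample", alternative = "two.sided")
res
# n comes out to roughly 64 per group
```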
If population A has a proportion of 15% and population B has a proportion of 30%, use pwr::ES.h to calculate the effect size, then run a test of proportions:
> pwr.2p.test(h=ES.h(0.30,0.15), power=0.80, sig.level=0.05)

     Difference of proportion power calculation for binomial distribution (arcsine transformation)

              h = 0.3638807
              n = 118.5547
      sig.level = 0.05
          power = 0.8
    alternative = two.sided

NOTE: same sample sizes
> pwr.chisq.test(w=ES.w1(0.3,0.15), df=1, sig.level=0.05, power=0.80)

     Chi squared power calculation

              w = 0.2738613
              N = 104.6515
             df = 1
      sig.level = 0.05
          power = 0.8

NOTE: N is the number of observations
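For the one-sided case with unequal group sizes, the pwr package also has pwr.2p2n.test, which takes separate n1 and n2. It solves for only one sample size at a time, so to impose a 1:2 allocation you can wrap it in uniroot yourself. A sketch, assuming the same p1 = .30, p2 = .15, a one-sided alternative, and 80% power; note that pwr uses the arcsine transformation, so the result will be close to, but not exactly, PROC POWER's normal-approximation numbers:

```r
library(pwr)

h <- ES.h(0.30, 0.15)  # effect size for the two proportions

# power of a one-sided test with n2 = 2 * n1, minus the target power
pow_gap <- function(n1) {
  pwr.2p2n.test(h = h, n1 = n1, n2 = 2 * n1,
                sig.level = 0.05,
                alternative = "greater")$power - 0.80
}

# find n1 where achieved power crosses 80%
n1 <- uniroot(pow_gap, c(2, 1e6))$root
c(n1 = ceiling(n1), n2 = ceiling(2 * n1))
```

Rounding up with ceiling() is the usual convention, since sample sizes must be whole numbers and rounding down would drop power below the target.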
Source: https://stackoverflow.com/questions/15395767/sample-size-and-power-calculation-in-r-as-viable-alternative-to-proc-power-in-sa