optimization problems
I'm using pwr as the basis for the jamovi jpower module (https://github.com/richarddmorey/jpower), but I'm having some difficulty with the effect size calculations, particularly at low sample sizes. Consider the following code:
pwr::pwr.t.test(n = 3, sig.level=0.001, power = .99)
This call fails because the effect size needed to reach this power exceeds the arbitrary upper limit of 10 set in the pwr code here:
https://github.com/heliosdrm/pwr/blob/44f91279372906d1a722bd1ff62eeb839d30e990/R/pwr.t.test.R#L52
The solution is to transform d to a value between 0 and 1, such as e = d / (1 + d). When you optimise over e, you don't need to set arbitrary bounds: e = 0 corresponds to d = 0 and e = 1 corresponds to d = infinity, so you can simply use optimization limits of c(0, 1).
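For concreteness, here is a minimal sketch of that idea for the two-sided, two-sample case. It is not the pwr internals or the jpower code verbatim, and the helper names (power_t2, find_d) are made up for illustration, but it shows how running uniroot over e in (0, 1) removes the need for any upper limit on d:

```r
# Minimal sketch (assumed helper names, not the pwr or jpower source):
# power of a two-sided, two-sample t test via the noncentral t distribution,
# as a function of Cohen's d, per-group n, and sig.level.
power_t2 <- function(d, n, sig.level) {
  nu  <- 2 * (n - 1)                               # degrees of freedom
  q   <- qt(sig.level / 2, nu, lower.tail = FALSE) # two-sided critical value
  ncp <- sqrt(n / 2) * d                           # noncentrality parameter
  pt(q, nu, ncp = ncp, lower.tail = FALSE) +
    pt(-q, nu, ncp = ncp, lower.tail = TRUE)
}

# Solve for d on the bounded scale e = d / (1 + d), i.e. d = e / (1 - e),
# so the search interval is always inside (0, 1) no matter how large d is.
find_d <- function(n, sig.level, power) {
  f <- function(e) {
    d <- e / (1 - e)
    power_t2(d, n, sig.level) - power
  }
  e <- uniroot(f, interval = c(1e-10, 1 - 1e-10), tol = 1e-9)$root
  e / (1 - e)
}

find_d(n = 3, sig.level = 0.001, power = 0.99)  # well above pwr's limit of 10
```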
I've created a more stable workaround for now, which I'm using in jpower; you can see it here:
https://github.com/richarddmorey/jpower/blob/d2b021699bb6e24494d2cd5fc1e973bb59c1c5b6/jpower/R/utils.R#L30
There are probably several places where this optimization strategy could improve the robustness of pwr.
Sorry for taking sooo long to answer this. :flushed: I appreciate this suggestion; however, implementing that workaround (at least in the way you made it in jpower) would mean changing the definition of the target function for each unknown.
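For context, this is roughly the pattern involved (a simplified sketch of the approach in the linked pwr.t.test.R source, not the actual code, and with assumed example values): a single quoted power expression is reused, and uniroot is run over whichever argument is unknown, each with its own fixed interval. Reparameterising the d search in terms of e = d / (1 + d) changes only that one objective, which is why it does not generalise for free:

```r
# Simplified sketch of the shared root-finding pattern (not the pwr source).
p.body <- quote({
  nu <- 2 * (n - 1)
  q  <- qt(sig.level / 2, nu, lower.tail = FALSE)
  pt(q, nu, ncp = sqrt(n / 2) * d, lower.tail = FALSE) +
    pt(-q, nu, ncp = sqrt(n / 2) * d, lower.tail = TRUE)
})

n <- 20; sig.level <- 0.05; power <- 0.80
# Solving for d reuses p.body with a fixed interval such as c(1e-07, 10):
d <- uniroot(function(d) eval(p.body) - power, c(1e-07, 10))$root
# Solving for n reuses the same p.body with a different fixed interval;
# a d-specific transform would mean a separate objective just for this case.
d <- 0.5
n <- uniroot(function(n) eval(p.body) - power, c(2 + 1e-10, 1e+07))$root
```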
The current limits, albeit arbitrary, are sufficiently wide for the majority of circumstances in which power tests are actually necessary. In this particular example, a Cohen's d greater than 10 is an extremely large difference.
I'm updating the package now. If you have another clever suggestion to make the root finding more robust with a general function definition, I'd encourage you to submit a pull request. (I promise to respond quickly this time.)