pyhf
Additional documentation needed on the difference in test statistic return type between calculator types
### Summary

There is a large discrepancy in the value of the discovery test statistic depending on whether it is computed via an asymptotics-based or a toy-based calculator.
Related Issues:
- https://github.com/scikit-hep/pyhf/issues/1587
- https://github.com/scikit-hep/pyhf/issues/1712
### OS / Environment

```console
$ cat /etc/os-release
NAME="Ubuntu"
VERSION="20.04.4 LTS (Focal Fossa)"
ID=ubuntu
ID_LIKE=debian
PRETTY_NAME="Ubuntu 20.04.4 LTS"
VERSION_ID="20.04"
HOME_URL="https://www.ubuntu.com/"
SUPPORT_URL="https://help.ubuntu.com/"
BUG_REPORT_URL="https://bugs.launchpad.net/ubuntu/"
PRIVACY_POLICY_URL="https://www.ubuntu.com/legal/terms-and-policies/privacy-policy"
VERSION_CODENAME=focal
UBUNTU_CODENAME=focal
```
### Steps to Reproduce

```python
# issue_1792.py
import pyhf


def discovery_from_test_stat(model, data):
    init_pars = model.config.suggested_init()
    par_bounds = model.config.suggested_bounds()
    fixed_params = model.config.suggested_fixed()
    return pyhf.infer.test_statistics.q0(
        0.0,
        data,
        model,
        init_pars,
        par_bounds,
        fixed_params,
    )


def discovery_from_calculator(model, data, calc_type="asymptotics", **kwargs):
    calculator = pyhf.infer.utils.create_calculator(
        calc_type, data, model, test_stat="q0", **kwargs
    )
    return calculator.teststatistic(0.0)


def discovery_from_asymptotic_calculator(model, data):
    calculator = pyhf.infer.calculators.AsymptoticCalculator(
        data, model, test_stat="q0"
    )
    return calculator.teststatistic(0.0)


if __name__ == "__main__":
    model = pyhf.simplemodels.uncorrelated_background([25], [2500], [2.5])
    data = [2600] + model.config.auxdata

    discovery_test_stat = discovery_from_test_stat(model, data)
    print(f"Discovery test stat from infer.test_statistics.q0: {discovery_test_stat}")

    discovery_test_stat = discovery_from_calculator(model, data, calc_type="toybased")
    print(
        f"Discovery test stat from toy based infer.utils.create_calculator: {discovery_test_stat}"
    )

    discovery_test_stat = discovery_from_calculator(model, data)
    print(
        f"Discovery test stat from asymptotics infer.utils.create_calculator: {discovery_test_stat}"
    )

    discovery_test_stat = discovery_from_asymptotic_calculator(model, data)
    print(
        f"Discovery test stat from infer.calculators.AsymptoticCalculator: {discovery_test_stat}"
    )
```
### Expected Results

The discovery test statistic would be the same regardless of calculator type.
### Actual Results

```console
$ python issue_1792.py
Discovery test stat from infer.test_statistics.q0: 3.9377335959507036
Discovery test stat from toy based infer.utils.create_calculator: 3.9377335959507036
Discovery test stat from asymptotics infer.utils.create_calculator: 1.485845489706193
Discovery test stat from infer.calculators.AsymptoticCalculator: 1.485845489706193
```
### pyhf Version
9fd99be886349a90e927672e950cc233fad0916c on master
### Code of Conduct
- [X] I agree to follow the Code of Conduct
I don't understand how the first two results are the exact same, but if this is a toy vs non-toy issue with a 4 sigma effect (which, looking at the setup, would make sense roughly), are you sure you have enough toys to evaluate this (several 10k)?
> are you sure you have enough toys to evaluate this (several 10k)?
Here I'm using the calculator API in a strange way, as only 1 experiment is being evaluated, so there really isn't any pseudoexperiment generation happening. I should give a better example later.
https://github.com/scikit-hep/pyhf/blob/9fd99be886349a90e927672e950cc233fad0916c/src/pyhf/infer/calculators.py#L951-L960
I missed the sqrt required to go from q0 to the significance in the previous comment; this should of course be a 2 sigma effect: sqrt(2500) = 50, so a 100 event excess over background-only with negligible background uncertainty will be a 2 sigma effect. That agrees perfectly with the first two numbers.
The last two numbers scale with the number of signal events, so these are something else.
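The back-of-envelope check above can be verified numerically with plain arithmetic (no pyhf needed): a 100-event excess over a 2500-event background with negligible background uncertainty is roughly a 2 sigma effect, matching `sqrt(q0)` for the reported value.

```python
import math

# Counting-experiment approximation: significance ≈ excess / sqrt(background)
background = 2500
observed = 2600
excess = observed - background  # 100 events
approx_significance = excess / math.sqrt(background)  # 100 / 50 = 2.0

# Significance from the reported discovery test statistic: sqrt(q0)
q0 = 3.9377335959507036
significance_from_q0 = math.sqrt(q0)

print(approx_significance)   # 2.0
print(significance_from_q0)  # ≈ 1.98
```
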
> The last two numbers scale with the number of signal events, so these are something else.
Yeah. The fact that the calculators are going in opposite directions as the signal increases indicates there's a problem.
At the moment this doesn't run any toys as far as I can tell, so I'm assuming this is a question about the calculators, and not about asymptotic vs toy agreement in general?
> At the moment this doesn't run any toys as far as I can tell, so I'm assuming this is a question about the calculators, and not about asymptotic vs toy agreement in general?
Yes. I didn't phrase the original text clearly, as I was dumping things in for myself to clarify later.
Okay, @kratsg has pointed out that the behavior of the calculator APIs is (known to be) not consistent across the asymptotic and toy-based calculators, as the asymptotic calculator is returning p-values (so cumulative distributions of test statistics)
https://github.com/scikit-hep/pyhf/blob/9fd99be886349a90e927672e950cc233fad0916c/src/pyhf/infer/__init__.py#L169-L178
while the toy-based calculator is returning q test statistics. At the very least this needs to be made much clearer in the docs.
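As an illustration of the "p-values are cumulative distributions of test statistics" point, the standard asymptotic formula (Cowan, Cranmer, Gross, Vitells) gives the discovery p-value as the upper tail of a standard normal at `sqrt(q0)`. This is a sketch of the relation itself, not of pyhf's internal code, and the exact values pyhf's asymptotic calculator returns may differ since it works in a shifted space.

```python
import math


def standard_normal_sf(x):
    """Survival function (1 - CDF) of the standard normal distribution."""
    return 0.5 * math.erfc(x / math.sqrt(2.0))


# Asymptotic relation for the discovery test statistic:
# p0 = 1 - Phi(sqrt(q0)), i.e. a cumulative statement about q0
q0 = 3.9377335959507036
p0 = standard_normal_sf(math.sqrt(q0))
print(p0)  # ≈ 0.024, the p-value for a ~2 sigma excess
```
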
> APIs is (known to be) not consistent across the asymptotic and toy based as asymptotic is returning p-values
Well, it's consistent here; it's that `calc.teststatistic(test_poi)` has a different meaning, which is translated by the respective calculator's `calc.pvalues(teststat, ...)` call, if that makes sense. The API is the same, but there's a few hoops/hurdles/threading to make it work.
> well, it's consistent here, it's that `calc.teststatistic(test_poi)` has a different meaning which is translated by the respective calculator's `calc.pvalues(teststat, ...)` call if that makes sense. The API is the same, but there's a few hoops/hurdles/threading to make it work.
While all very true, a user should rightly complain that the public API is too confusing as is (given the current documentation).
My main complaint at the moment is that while we make it clear that the asymptotic test stats are in `-\hat{\mu}/\sigma` space
https://github.com/scikit-hep/pyhf/blob/9fd99be886349a90e927672e950cc233fad0916c/src/pyhf/infer/calculators.py#L85-L89
we don't make this clear again in the calculator `teststatistic` API for the asymptotics calculator
https://github.com/scikit-hep/pyhf/blob/9fd99be886349a90e927672e950cc233fad0916c/src/pyhf/infer/calculators.py#L330-L332
or in the toy calculator (that it is different)
https://github.com/scikit-hep/pyhf/blob/9fd99be886349a90e927672e950cc233fad0916c/src/pyhf/infer/calculators.py#L922-L924
Also, we could mention in `EmpiricalDistribution` that the test stats are distributed in different spaces as well
https://github.com/scikit-hep/pyhf/blob/9fd99be886349a90e927672e950cc233fad0916c/src/pyhf/infer/calculators.py#L526-L532
So this really is a documentation issue, but a pretty big one in my mind. I forgot this, and if I've written some of this code and can make this mistake when tired, then I think a user definitely will.
> My main complaint at the moment is that while we make it clear that the asymptotic test stats are in `-\hat{\mu}/\sigma` space
This part I think is what confuses me, so if you have a better grasp, it would be great to clarify this for the upcoming release.
This all came up as I was trying to take a stab at Issue #1712 and was trying to figure out how to have things work for either calculator type.