alibi-detect
Submodule to compute and visualise drift detection metrics

The alibi_detect.cd.metrics submodule implements utility functions to measure and visualise detector calibration and test power. These can be used for standalone experiments, and will also be used for our internal benchmarking.
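As a rough illustration of what "calibration" means here (a minimal sketch, not the actual alibi-detect API; the function name and the use of synthetic uniform p-values are assumptions): a well-calibrated detector produces p-values that are uniform on [0, 1] when there is no drift, so the fraction of runs rejected at significance level alpha should be close to alpha.

```python
import numpy as np

def eval_calibration_sketch(p_vals: np.ndarray, alpha: float) -> float:
    """Empirical false-positive rate: fraction of no-drift p-values below alpha.

    Illustrative helper only; not the actual alibi-detect implementation.
    """
    return float((p_vals < alpha).mean())

rng = np.random.default_rng(42)
# Stand-in for p-values collected from repeated detector runs on no-drift data.
p_vals = rng.uniform(size=5000)

# For a calibrated detector this should be close to the nominal level 0.05.
fpr = eval_calibration_sketch(p_vals, alpha=0.05)
```

In an experiment, a plot of empirical FPR against the nominal significance level (ideally the diagonal) is the usual way to visualise this.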
How we check sufficient data has been provided in _check_sufficient_data_size is still an open question...
Example notebook: https://gist.github.com/ascillitoe/4a23e156adbff72a3314d78846fbff83
Thanks for your comments @ojcobb, I've had a first go at addressing them.

One interesting thing to note: to save some compute, in eval_roc I'm only calling eval_calibration for one significance level, and then computing the FPRs for each significance level from the same set of p_vals. Can you see any reason this is a bad idea?

p.s. docstrings are still TODO. Won't bother with those until we have finalized functionality...
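The reuse described in the comment above can be sketched as follows (purely illustrative: p_vals stands in for the p-values returned by a single eval_calibration run, and no alibi-detect internals are used). Since the test at level alpha rejects exactly when p < alpha, the FPR at every level can be read off the same p-value sample.

```python
import numpy as np

rng = np.random.default_rng(0)
# Stand-in for the p-values from one eval_calibration call on no-drift data.
p_vals = rng.uniform(size=2000)

sig_levels = np.linspace(0.01, 0.5, 50)
# Empirical FPR at level alpha = fraction of p-values below alpha.
# One pass over p_vals per level, instead of re-running the detector per level.
fprs = np.array([(p_vals < alpha).mean() for alpha in sig_levels])
# fprs is nondecreasing in alpha, giving the ROC-style curve in one detector run.
```

The statistical caveat is that all the FPR estimates share the same sample, so errors across significance levels are correlated; but each individual estimate is unbiased, which is usually what matters for a ROC curve.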
Nope, seems fine to me.