responsible-ai-toolbox
Threshold scrollbar for Fairness Dashboard
Rather than passing in predictions, it would be useful if I could pass in the confidence scores of a binary classification model and use a scrollbar to vary the threshold within the dashboard itself (to see how the metrics change as a result).
Perhaps a threshold parameter could set the default value (where the scrollbar starts).
Added component: Ability to choose threshold as the value on the X-axis
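What the request describes - re-binarizing the confidence scores at each slider position and recomputing the metrics per group - can be sketched in plain NumPy. This is only an illustration of the desired behaviour, not dashboard code; all data and the helper name below are made up:

```python
import numpy as np

def metrics_by_threshold(y_true, scores, groups, thresholds):
    """For each candidate threshold, binarize the confidence scores and
    report accuracy and selection rate per group - the recomputation a
    threshold scrollbar in the dashboard would perform on the fly."""
    results = {}
    for t in thresholds:
        y_pred = (scores >= t).astype(int)
        per_group = {}
        for g in np.unique(groups):
            mask = groups == g
            per_group[g] = {
                "accuracy": float((y_pred[mask] == y_true[mask]).mean()),
                "selection_rate": float(y_pred[mask].mean()),
            }
        results[t] = per_group
    return results

# Hypothetical confidence scores for two sensitive groups "A" and "B".
y_true = np.array([0, 1, 1, 0, 1, 0, 1, 1])
scores = np.array([0.2, 0.7, 0.9, 0.4, 0.6, 0.3, 0.8, 0.55])
groups = np.array(["A", "A", "A", "A", "B", "B", "B", "B"])

for t, per_group in metrics_by_threshold(y_true, scores, groups, [0.3, 0.5, 0.7]).items():
    print(t, per_group)
```

Sliding the scrollbar would correspond to moving `t` and redrawing the metric comparison across groups.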
Not quite what you're asking, but you can start up a dashboard with y_pred as a probability rather than a class. You'll get a different set of metrics, which might help until what you're describing can be implemented.
The data contains binary labels and the model makes binary predictions - changing the predictions to probabilities is not possible in my case, so that would remove the notion of a threshold entirely.
Ahhh, I thought you might have access to predict_proba() or something like that. That would let you have probabilities. It wouldn't get you everything, but might help with alternative plots.
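For context, if the underlying estimator is a scikit-learn classifier, the probabilities mentioned here come from predict_proba. A minimal sketch on synthetic data (everything below is made up for illustration):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Tiny synthetic binary classification problem.
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)

clf = LogisticRegression().fit(X, y)

# Column 1 holds P(y=1); these are the "confidence scores"
# a dashboard threshold slider could binarize on the fly.
scores = clf.predict_proba(X)[:, 1]

# The default predict() is just a 0.5 threshold on these scores.
print((scores > 0.5).astype(int)[:10])
```

If the model only exposes hard class predictions, as in the comment above, there is no such score to threshold on.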
This reminds me somewhat of https://research.google.com/bigpicture/attacking-discrimination-in-ml/
I read this like @riedgar-ms and assumed that you want to threshold on the probabilities (or, as you called them, "confidence scores"). You can very much threshold on probabilities - that's in fact what many unfairness mitigation techniques do; see the link above or ThresholdOptimizer in fairlearn. I do agree, though, that we'd need to think about what gets passed in.