cobra
Extract feature contribution per prediction
Visualize the contribution of each feature to a single prediction (or the average contribution across multiple predictions).
This is often requested by our banking clients, so it has somewhat higher priority. One of the SHAP explainability plots also helps in this regard.
@sandervh14 Do you have a working implementation? I am wondering whether the Shapley values can be expressed in terms of the original feature values rather than the encoded ones — that would give a much clearer picture. How did you do this in the past?
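One way to attribute contributions to the original (pre-encoding) features is to compute per-column contributions on the encoded data and then sum the contributions of all encoded columns that derive from the same source column. A minimal sketch, assuming a linear model (for which SHAP values reduce exactly to `coef * (x - mean(x))`); the column mapping and feature names here are hypothetical, not cobra's actual API:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical encoded design matrix: "age" (numeric) plus "region"
# one-hot encoded into 3 columns.
X = rng.random((100, 4))
coef = np.array([1.5, -0.5, 0.8, 0.2])

# Map each encoded column back to its original feature name.
encoded_to_original = ["age", "region", "region", "region"]

def contributions_per_original_feature(x_row, X, coef, mapping):
    """Linear-model SHAP values, aggregated back to original features."""
    # One contribution per encoded column: coef * (value - column mean).
    shap_encoded = coef * (x_row - X.mean(axis=0))
    totals = {}
    for value, name in zip(shap_encoded, mapping):
        totals[name] = totals.get(name, 0.0) + value
    return totals

contrib = contributions_per_original_feature(X[0], X, coef, encoded_to_original)

# Sanity check: contributions sum to this prediction minus the mean prediction.
assert np.isclose(sum(contrib.values()), X[0] @ coef - (X @ coef).mean())
```

For non-linear models the same aggregation step applies, but the per-column values would come from a SHAP explainer instead of the closed-form linear expression.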