
Adding support for SHAP explainability to tree model types

Open kikisq7 opened this issue 1 month ago • 1 comment

SHAP explainability provides an explanation for the output of a machine learning model: it shows how each feature contributes to the model's predictions, helping to understand what happens inside the "black box."

I’m working on functionality to add support for including SHAP explainability when converting a tree model to ONNX, and I’d like to contribute this work back to the community. TreeSHAP, an algorithm for analyzing tree models, produces an "explainability" vector indicating how much each feature contributed to the model's output. TreeSHAP is currently available to Python as part of a tree model execution environment implemented in C++. I will be referencing the existing TreeSHAP algorithm and adding ONNX operations during conversion to support SHAP explainability for tree models. During model execution, the algorithm adjusts each feature's value in the "explainability" vector based on the tree's decision at each node.
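For reference, here is a minimal sketch of what TreeSHAP computes today in Python via the `shap` package; the model and data are illustrative only:

```python
import numpy as np
import shap
from sklearn.ensemble import RandomForestRegressor

# Illustrative synthetic data: 100 samples, 4 features
X = np.random.rand(100, 4)
y = X[:, 0] * 2.0 + X[:, 1]
model = RandomForestRegressor(n_estimators=10).fit(X, y)

# TreeExplainer runs the TreeSHAP algorithm (core implemented in C++)
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X)  # shape (100, 4): one value per feature

# Local accuracy: expected value plus the SHAP vector reconstructs each prediction
pred = model.predict(X)
assert np.allclose(pred, explainer.expected_value + shap_values.sum(axis=1))
```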

When this option is enabled, the output tensor of the converted ONNX model will be extended at execution time to include the vector of SHAP values (with length equal to the number of features). The option will be configurable at conversion time and disabled by default, as sketched below.
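As a hypothetical sketch of how the conversion-time switch might look: the `with_shap` keyword below does not exist in onnxmltools today and is named purely to illustrate the proposed opt-in behavior.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from onnxmltools import convert_sklearn
from onnxmltools.convert.common.data_types import FloatTensorType

X = np.random.rand(100, 4).astype(np.float32)
model = RandomForestRegressor(n_estimators=10).fit(X, X[:, 0])

onnx_model = convert_sklearn(
    model,
    initial_types=[("input", FloatTensorType([None, 4]))],
    with_shap=True,  # hypothetical flag; per the proposal, disabled by default
)

# At execution time the converted model would then expose an extra output
# holding the SHAP vector (length = number of features) for each sample:
#
#   import onnxruntime as rt
#   sess = rt.InferenceSession(onnx_model.SerializeToString())
#   pred, shap_values = sess.run(None, {"input": X})
#   # shap_values.shape == (100, 4)
```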

I really appreciate these tools and I'm excited to contribute to this community!

kikisq7 · Jun 10 '24 17:06