
SHAP Interpretability method implementation

Open · naveenkcb opened this issue · 0 comments

Contributor: Naveen Baskaran

Contribution Type: Interpretability method, Tests, Example

Description

This PR implements the SHAP (SHapley Additive exPlanations) interpretability method for PyHealth models, enabling users to understand which features contribute most to model predictions. SHAP is based on coalitional game theory and provides theoretically grounded feature importance scores with desirable properties like local accuracy, missingness, and consistency.
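To make the theory concrete, here is a minimal, self-contained sketch (not the PR's actual implementation) of exact Shapley value computation for a tiny model by enumerating all feature coalitions. The toy `model` and zero baseline are hypothetical choices for illustration; the closing assertion demonstrates the local accuracy property mentioned above, i.e. the attributions plus the baseline prediction sum to the model's output.

```python
import itertools
import math

# Toy linear scorer over 3 features (hypothetical, for illustration only).
def model(x):
    w = [0.5, -1.2, 2.0]
    return sum(wi * xi for wi, xi in zip(w, x))

def shapley_values(model, x, baseline):
    """Exact Shapley values via enumeration of all feature coalitions.

    Features absent from a coalition are replaced by their baseline value.
    Cost is exponential in the number of features; SHAP's sampling and
    kernel approximations exist precisely to avoid this enumeration.
    """
    n = len(x)
    phi = [0.0] * n
    for i in range(n):
        others = [j for j in range(n) if j != i]
        for size in range(n):
            for coalition in itertools.combinations(others, size):
                # Shapley kernel weight for a coalition of this size.
                weight = (math.factorial(size) * math.factorial(n - size - 1)
                          / math.factorial(n))
                with_i = [x[j] if (j in coalition or j == i) else baseline[j]
                          for j in range(n)]
                without_i = [x[j] if j in coalition else baseline[j]
                             for j in range(n)]
                phi[i] += weight * (model(with_i) - model(without_i))
    return phi

x = [1.0, 2.0, 3.0]
baseline = [0.0, 0.0, 0.0]
phi = shapley_values(model, x, baseline)
# Local accuracy: attributions plus the baseline prediction recover f(x).
assert abs(sum(phi) + model(baseline) - model(x)) < 1e-9
```

For a linear model with a zero baseline, each attribution reduces to `w[i] * x[i]`, which makes the result easy to verify by hand.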

Files to Review

- `pyhealth/interpret/methods/__init__.py`
- `pyhealth/interpret/methods/shap.py` - core SHAP method implementation; supports embedding-based attribution and continuous features
- `pyhealth/processors/tensor_processor.py` - minor fix to resolve a warning message
- `examples/shap_stagenet_mimic4.py` - example script showing usage of the SHAP method
- `tests/core/test_shap.py` - comprehensive test cases covering the main class, utility methods, and attribution methods

Results on mimic4-demo dataset

*(results screenshot)*

naveenkcb · Nov 15 '25 04:11