SHAP interpretability method implementation
Contributor: Naveen Baskaran
Contribution Type: Interpretability method, Tests, Example
Description
This PR implements the SHAP (SHapley Additive exPlanations) interpretability method for PyHealth models, enabling users to understand which features contribute most to model predictions. SHAP is based on coalitional game theory and provides theoretically grounded feature importance scores with desirable properties like local accuracy, missingness, and consistency.
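For reference, the Shapley value assigned to feature $i$ is its marginal contribution to the model output $f$, averaged over all coalitions $S$ drawn from the remaining features of the full feature set $F$:

$$
\phi_i(f) = \sum_{S \subseteq F \setminus \{i\}} \frac{|S|!\,\bigl(|F| - |S| - 1\bigr)!}{|F|!} \left[ f\bigl(S \cup \{i\}\bigr) - f(S) \right]
$$

Exact computation is exponential in $|F|$, so practical implementations approximate it, for example by sampling feature permutations.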
Files to Review
- `pyhealth/interpret/methods/__init__.py`, `pyhealth/interpret/methods/shap.py` - core SHAP method implementation; supports embedding-based attribution and continuous features.
- `pyhealth/processors/tensor_processor.py` - minor fix to resolve a warning message.
- `examples/shap_stagenet_mimic4.py` - example script showing how to use the SHAP method (a standalone sketch of the underlying estimator follows this list).
- `tests/core/test_shap.py` - comprehensive test cases covering the main class, utility methods, and attribution methods.
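The example script exercises the PR's API end to end. As a quick illustration of what the method computes, below is a minimal, self-contained Monte-Carlo Shapley estimator for a toy PyTorch model. It is independent of the classes added in this PR, and the name `sampled_shapley` is purely illustrative:

```python
import torch

def sampled_shapley(model, x, baseline, n_perm=200):
    """Monte-Carlo Shapley attribution for a single input row.

    model    : callable mapping a (1, d) tensor to a scalar tensor
    x        : (d,) input to explain
    baseline : (d,) reference input; "absent" features take these values
    """
    d = x.numel()
    phi = torch.zeros(d)
    for _ in range(n_perm):
        perm = torch.randperm(d)            # random feature ordering
        z = baseline.clone()
        prev = model(z.unsqueeze(0)).item()
        for i in perm:                      # add features one at a time
            z[i] = x[i]
            cur = model(z.unsqueeze(0)).item()
            phi[i] += cur - prev            # marginal contribution of feature i
            prev = cur
    return phi / n_perm                     # averages to the Shapley value

# Toy usage: for a linear model, the estimate converges to w * (x - baseline),
# which is the exact Shapley value.
w = torch.tensor([1.0, -2.0, 0.5])
model = lambda z: (z * w).sum(dim=1)
x = torch.tensor([1.0, 1.0, 1.0])
baseline = torch.zeros(3)
print(sampled_shapley(model, x, baseline))  # approximately [1.0, -2.0, 0.5]
```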