Feature Importance
Feature
Support feature importance in tabular data scenarios.
- Understand which features are beneficial for prediction, which helps in developing new features
- Feature selection: remove features that do not help prediction
Ideas
- GBDTs naturally expose APIs for computing feature importance, so this is easy to add (see the GBDT sketch after this list).
- NNs
  - Permutation importance: after shuffling a single feature's values, observe the change in the metric; the larger the drop, the more important the feature. Simple to implement (a model-agnostic sketch also follows this list).
  - SHAP: Shapley-value-based attribution. More complex.
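As a concrete illustration of the GBDT point above, here is a minimal sketch using XGBoost's scikit-learn wrapper (LightGBM and CatBoost expose analogous attributes); the synthetic data and hyperparameters are placeholders:

```python
import numpy as np
import xgboost as xgb

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 4))
# The label depends strongly on column 0, weakly on column 1, not on 2 or 3.
y = (2.0 * X[:, 0] + 0.3 * X[:, 1] + rng.normal(scale=0.1, size=1000) > 0).astype(int)

model = xgb.XGBClassifier(n_estimators=50, max_depth=3)
model.fit(X, y)

# Normalized importance scores, one per input column.
print(model.feature_importances_)
# Gain-based scores straight from the underlying booster.
print(model.get_booster().get_score(importance_type="gain"))
```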
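And a minimal, model-agnostic sketch of permutation importance that works for NNs or any estimator with a `predict` method; the helper name `permutation_importance_scores`, the metric choice, and the repeat count are assumptions for illustration:

```python
import numpy as np
from sklearn.metrics import accuracy_score

def permutation_importance_scores(model, X, y, metric=accuracy_score,
                                  n_repeats=5, seed=0):
    """Hypothetical helper: average metric drop after shuffling each column."""
    rng = np.random.default_rng(seed)
    baseline = metric(y, model.predict(X))
    drops = np.zeros(X.shape[1])
    for j in range(X.shape[1]):
        for _ in range(n_repeats):
            X_perm = X.copy()
            # Shuffling column j breaks its link to the target while
            # keeping its marginal distribution intact.
            X_perm[:, j] = rng.permutation(X_perm[:, j])
            drops[j] += baseline - metric(y, model.predict(X_perm))
    return drops / n_repeats  # larger drop => more important feature
```

scikit-learn ships a ready-made equivalent as `sklearn.inspection.permutation_importance`.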
Mutual Information Sort has already been added here. For feature sorting in NNs, I recommend taking a look at the ExcelFormer example. If you are interested in adding any feature-related functionality, you can add it to the transform module.
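For reference, the quantity a mutual-information sort ranks features by can be computed directly with scikit-learn; this is a generic sketch on synthetic data, not pytorch-frame's MutualInformationSort API:

```python
import numpy as np
from sklearn.feature_selection import mutual_info_regression

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 3))
y = X[:, 0] ** 2 + 0.1 * rng.normal(size=1000)  # only column 0 is informative

mi = mutual_info_regression(X, y, random_state=0)
order = np.argsort(mi)[::-1]  # column indices, most to least informative
print(mi, order)
```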
Thanks. As you mentioned, mutual information sorting and ExcelFormer improve performance through transforms. What I want to discuss, however, is how much individual features contribute to the final prediction. For example, user behavioral features matter a lot in recommender systems, so their feature importance should be high. pytorch-frame is good to use: it lets me quickly obtain benchmark results on real-world datasets to decide whether NNs or GBDTs are better. I'm just unsure whether evaluating feature importance is worth integrating as a module into pytorch-frame.
I think you can give Captum (https://captum.ai/) a try. cc @weihua916: we could also integrate this in PyT?
Yes, Captum implements many interpretability methods; Feature Permutation and SHAP are among them.
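A minimal sketch of Captum's FeaturePermutation on a toy tabular model; the architecture and data are placeholders, and a torch_frame model that consumes a TensorFrame would need a thin wrapper exposing a plain-tensor forward:

```python
import torch
import torch.nn as nn
from captum.attr import FeaturePermutation

# Placeholder model: 4 numeric features -> 2 classes.
model = nn.Sequential(nn.Linear(4, 16), nn.ReLU(), nn.Linear(16, 2))
model.eval()

inputs = torch.randn(256, 4)  # FeaturePermutation permutes within the batch
fp = FeaturePermutation(model)
# Attribution of each column toward the class-0 logit; larger magnitude
# means permuting that column perturbs the output more.
attr = fp.attribute(inputs, target=0)
print(attr.mean(dim=0))
```

Captum's SHAP-style methods live in the same module, e.g. `captum.attr.ShapleyValueSampling` and `captum.attr.KernelShap`.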
Is there any update or roadmap related to it? 👀