
model explainability

Open alitirmizi23 opened this issue 1 year ago • 3 comments

I was checking out one of the utilities for model explanations. I see two functions (grad_cam and feat_attribution). Is this attribution related to SHAP in any way? It doesn't look like it. Would a SHAP-like implementation be helpful here for local explainability of multivariate time series inputs with respect to predictions? I can try to look into it and add the feature.
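For context, a minimal sketch of what such SHAP-style local attribution could look like for a tsai-trained model, using shap's GradientExplainer on the underlying PyTorch module. The names `model`, `X_train`, and `X_test` are placeholders, not existing tsai utilities:

```python
# Hedged sketch: SHAP local attributions for a multivariate time-series
# classifier. Assumes `model` is a trained PyTorch model (e.g. taken
# from a tsai Learner) whose input shape is (batch, n_vars, seq_len);
# `X_train`/`X_test` are float tensors of that shape (placeholders).
import torch
import shap

model.eval()
background = X_train[:100]                  # reference distribution for SHAP
explainer = shap.GradientExplainer(model, background)

# Returns one array per output class, each shaped like the input: an
# attribution per variable per time step, i.e. a local explanation.
shap_values = explainer.shap_values(X_test[:10])
```

shap.DeepExplainer is a possible alternative for PyTorch models, and shap.KernelExplainer would work model-agnostically (at a much higher cost) by flattening each window into a feature vector.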

alitirmizi23 avatar Apr 24 '23 18:04 alitirmizi23

Back in the day there was a SHAP wrapper for fastai models (https://github.com/nestordemeure/fastshap), but as far as I know, there's nothing like that today.

vrodriguezf avatar Apr 28 '23 23:04 vrodriguezf

Hi @alitirmizi23, I'm sorry, but I misinterpreted your description. Yes, it would be good to investigate whether a SHAP-like functionality could be added to tsai. Please let me know if you are still interested and if you need any help.

oguiza avatar Sep 03 '23 17:09 oguiza

> Hi @alitirmizi23, I'm sorry, but I misinterpreted your description. Yes, it would be good to investigate whether a SHAP-like functionality could be added to tsai. Please let me know if you are still interested and if you need any help.

Hi! I have some questions:

1. Can we use permutation methods to calculate the feature importance of each variable at every time step? (See the sketch below.)
2. Or can we use a SHAP method in tsai to derive the local variation of feature importance for each feature?
3. Also, I'm trying to confirm whether `window_len` in `applying_sliding_window` is the length of one window. For example, if I have 100 samples and `window_len` is set to 30, does that mean there are 30 samples in one window, or that the 100 samples are split into 30 windows? (See the demo below.)
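On question 1, a hedged sketch of how per-variable, per-step permutation importance could be computed outside of tsai. The `model`, `X_val`, `y_val`, and `score` names and the array shapes are assumptions, not tsai API:

```python
# Permutation importance per (variable, time step) for a multivariate
# time-series model. Assumes X_val has shape (n_samples, n_vars, seq_len)
# and score(model, X, y) returns a metric such as accuracy (placeholders).
import numpy as np

def permutation_importance(model, X_val, y_val, score, n_repeats=5, seed=0):
    rng = np.random.default_rng(seed)
    base = score(model, X_val, y_val)
    n_vars, seq_len = X_val.shape[1], X_val.shape[2]
    imp = np.zeros((n_vars, seq_len))
    for v in range(n_vars):
        for t in range(seq_len):
            drops = []
            for _ in range(n_repeats):
                Xp = X_val.copy()
                # break the association at (variable v, time step t) only
                Xp[:, v, t] = rng.permutation(Xp[:, v, t])
                drops.append(base - score(model, Xp, y_val))
            imp[v, t] = np.mean(drops)   # larger drop => more important
    return imp
```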
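On question 3, `window_len` is the number of time steps inside each window, not the number of windows. A small demo, assuming tsai's SlidingWindow utility from tsai.data.preparation:

```python
# window_len is the size of each window: 100 samples with window_len=30
# yield many overlapping windows of 30 steps each (with stride=1),
# NOT 30 windows. Exact window count depends on stride/horizon defaults.
import numpy as np
from tsai.data.preparation import SlidingWindow

data = np.arange(100)                        # 100 samples, 1 variable
X, y = SlidingWindow(window_len=30, stride=1)(data)
print(X.shape)   # (n_windows, 1, 30) -- last dim is the 30-step window
```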

lisu579 avatar Oct 09 '23 09:10 lisu579