tsai
model explainability
I was checking out one of the utilities for model explanations. I see two functions (grad_cam and feat_attribution). Is this attribution related to SHAP in any way? I don't see that it is. Would a SHAP-like implementation be helpful here for local explainability of multivariate time series inputs with respect to predictions? I can try to look into it and add the feature.
Back in the day there was a SHAP wrapper for fastai models (https://github.com/nestordemeure/fastshap), but as far as I know, there's nothing like that today.
Hi @alitirmizi23, I'm sorry, but I misinterpreted your description. Yes, it'd be good to investigate whether SHAP-like functionality could be added to tsai. Please let me know if you are still interested and whether you need any help.
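For reference, here is a rough sketch of how a model-agnostic SHAP explainer could be wrapped around a trained tsai model. The names `learn` and `X`, the background-set size, and the flattening helper are illustrative assumptions, not existing tsai API:

```python
# A rough sketch (not tsai API): wrapping a trained tsai model with SHAP's
# model-agnostic KernelExplainer to get local attributions for a multivariate
# time series window. Assumes `learn` is a trained tsai Learner and `X` is a
# numpy array of shape (n_samples, n_vars, seq_len).
import numpy as np
import shap
import torch

n_vars, seq_len = X.shape[1], X.shape[2]
model = learn.model.eval().cpu()

def predict_flat(x_flat):
    # KernelExplainer works on 2D arrays, so reshape each row back to (n_vars, seq_len)
    x = torch.tensor(x_flat.reshape(-1, n_vars, seq_len), dtype=torch.float32)
    with torch.no_grad():
        return torch.softmax(model(x), dim=-1).numpy()

background = X[:50].reshape(50, -1)   # small background set keeps KernelExplainer tractable
explainer = shap.KernelExplainer(predict_flat, background)

# Local explanation for one window; the flattened values can be reshaped back to
# (n_vars, seq_len) to see which variable at which time step drove the prediction.
shap_values = explainer.shap_values(X[0].reshape(1, -1), nsamples=200)
```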
Hi! I have some questions:
① Can we use permutation methods to calculate the feature importance of each variable at every time step?
② Or can we use a SHAP method in tsai to derive the local variation of feature importance for each feature?
③ Also, I'm trying to figure out whether window_len in 'applying_sliding_window' is the length of one window. For example, if I have 100 samples and window_len is set to 30, does that mean there are 30 samples in one window, or do I split the 100 samples into 30 windows?
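Regarding ③, sliding-window utilities typically treat window_len as the number of time steps per window, so 100 samples with window_len=30 would give windows of 30 steps each rather than 30 windows. A minimal numpy sketch of that semantics (the stride value and array shapes are illustrative assumptions; as far as I know tsai's SlidingWindow follows the same convention):

```python
# A minimal sketch of sliding-window semantics, assuming window_len is the
# number of time steps per window.
import numpy as np

data = np.arange(100)          # 100 time steps of a single variable
window_len, stride = 30, 1

windows = np.stack([data[i:i + window_len]
                    for i in range(0, len(data) - window_len + 1, stride)])
print(windows.shape)           # (71, 30): 71 windows of 30 steps each, not 30 windows
```

Regarding ①, permutation importance can be estimated per variable by shuffling that variable across samples and measuring the drop in a validation metric. A hedged sketch, assuming `predict` is any function mapping arrays of shape (n, n_vars, seq_len) to predictions and `score` is a metric such as accuracy (neither is tsai API):

```python
# A minimal permutation-importance sketch for multivariate time series.
import numpy as np

def permutation_importance_per_variable(predict, X, y, score):
    baseline = score(y, predict(X))
    importances = []
    rng = np.random.default_rng(0)
    for var in range(X.shape[1]):
        X_perm = X.copy()
        # Shuffle this variable's full series across samples; for per-time-step
        # importance, shuffle X[:, var, t] inside an inner loop over t instead.
        X_perm[:, var, :] = X[rng.permutation(len(X)), var, :]
        importances.append(baseline - score(y, predict(X_perm)))  # drop in score = importance
    return np.array(importances)
```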