lime
Incoherent explanation from the LIME text explainer?
LIME is a great package for explaining a model's decisions, and it has worked well for me until now.
However, I just used it to explain my model's predictions on the classic disaster-tweets classification task, and the result on one instance is quite surprising. Even though LIME reports that certain words strongly drive the model toward a decision, the model's final output probability is quite different from what those word weights suggest.
Is this behavior normal or expected, and how should I interpret it?
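To show what I mean, here is a minimal, self-contained sketch of the LIME idea (numpy only, not lime's actual internals): fit a weighted linear surrogate to a toy nonlinear "model" over word on/off perturbations, then compare the surrogate's local prediction with the model's actual probability at the instance. The toy model, kernel, and ridge penalty are all invented for illustration; the real classifier and lime's exact kernel would differ.

```python
import numpy as np

def sigmoid(t):
    return 1.0 / (1.0 + np.exp(-t))

# Hypothetical stand-in for the real tweet classifier: note the -3*z0*z1
# interaction term, which no linear surrogate can capture exactly.
def model_prob(z):
    z = np.asarray(z, dtype=float)
    return sigmoid(2*z[..., 0] + 2*z[..., 1] - 3*z[..., 0]*z[..., 1] + 0.5*z[..., 2])

# Instance to explain: all three words present.
x = np.array([1.0, 1.0, 1.0])

# LIME-style neighborhood: every binary on/off mask over the 3 words.
Z = np.array([[i, j, k] for i in (0, 1) for j in (0, 1) for k in (0, 1)], dtype=float)
y = model_prob(Z)                           # model probabilities on the perturbations

# Proximity weights: exponential kernel on Hamming distance to x.
d = np.abs(Z - x).sum(axis=1)
w = np.exp(-(d ** 2) / 2.0)

# Weighted ridge regression (closed form) = the local linear surrogate.
A = np.hstack([np.ones((len(Z), 1)), Z])    # intercept + word indicators
W = np.diag(w)
lam = 1e-3
coef = np.linalg.solve(A.T @ W @ A + lam * np.eye(4), A.T @ W @ y)

local_pred = float(coef[0] + coef[1:] @ x)  # surrogate's prediction at x
true_prob = float(model_prob(x))

print("surrogate local prediction:", local_pred)
print("model probability:         ", true_prob)
```

The two numbers disagree because the surrogate is only a locally weighted linear fit: the word weights explain the model's behavior in the neighborhood of the instance, but their sum (plus the intercept) is not guaranteed to reproduce the model's exact output probability. Is that the whole story, or is something else going on?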