
The resulting attribution values are very low

Open zbjbiubiubiu opened this issue 1 year ago • 1 comment

Hello, thank you very much for developing Captum, such a useful library. I would like to ask: when using 'IntegratedGradients' to explain a GCN model, the inputs are the node features, and ig.attribute returns the contribution of these features to the prediction. However, the contributions I get are very small, on the order of 1e-30, so essentially zero. Why are the feature contributions so small? Is this reasonable?

zbjbiubiubiu avatar Oct 17 '23 02:10 zbjbiubiubiu

Hi, I am just another Captum user, but as far as I understand Integrated Gradients, the magnitude of the attributions depends on multiple factors: for example, the attribution per feature shrinks with the number of contributing features, since the change in prediction is distributed among them. Also, if the prediction is not very different from the prediction at the given baseline, there is not much attribution to distribute in the first place.
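For reference, that intuition is exactly the completeness property of Integrated Gradients: the attributions sum to the change in the explained output between the input $x$ and the baseline $x'$,

$$\sum_{i} \mathrm{IG}_i(x) = F(x) - F(x'),$$

so if $F(x)$ and $F(x')$ are close, the total attribution is already small, and spread over many node features each individual value can easily end up tiny. (The numerical approximation adds a small extra error, which is what the convergence delta mentioned below measures.)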

Since you mentioned working with a GCN, I assume you have a lot of features. You can check the convergence delta to see whether the approximation of the Riemann integral is close to the actual change in prediction. Hope this helps!
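A minimal sketch of how such a check could look, assuming a node-classification GCN called as model(x, edge_index) that returns scores of shape [num_nodes, num_classes]; model, x, edge_index, node_idx, and class_idx are placeholders for your own objects, not names from this issue:

```python
import torch
from captum.attr import IntegratedGradients

# `model`, `x` ([num_nodes, num_features]) and `edge_index` are placeholders
# for your own GCN, node-feature matrix, and graph connectivity.
def forward_fn(x_batch):
    # Integrated Gradients passes a batch of interpolated copies of the
    # node-feature matrix, shape [steps, num_nodes, num_features]; run the
    # GCN once per copy and stack the per-node class scores.
    return torch.stack([model(x_step, edge_index) for x_step in x_batch])

ig = IntegratedGradients(forward_fn)

inp = x.unsqueeze(0)                 # [1, num_nodes, num_features]
baseline = torch.zeros_like(inp)     # all-zero baseline (a common default)

attributions, delta = ig.attribute(
    inp,
    baselines=baseline,
    target=(node_idx, class_idx),    # which node's class score to explain
    n_steps=50,
    return_convergence_delta=True,
)

# Completeness check: the attributions should sum (approximately) to
# F(x) - F(baseline); |delta| is the remaining approximation error.
print("sum of attributions:", attributions.sum().item())
print("convergence delta:  ", delta.item())
```

If |delta| is small, the tiny attributions are a faithful result (the prediction really does not move much relative to the baseline) rather than an artifact of too few integration steps; if it is large, increasing n_steps should tighten the approximation.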

Tianmaru avatar Oct 17 '23 08:10 Tianmaru