Integrated Gradients - Higher Convergence Delta with more Steps?
Working with the example from this tutorial: https://github.com/munnm/XAI-for-practitioners/blob/462b1fc79d9cf7998992e1878c60d9d4c6282982/05-text/layer_integrated_gradients.ipynb, it seems that using fewer steps actually decreases the magnitude of the convergence delta (see the screenshots below, and the sketch after this paragraph for how I'm comparing deltas across step counts).
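For context, here is a minimal sketch of the comparison I have in mind. It uses a hypothetical toy model in place of the tutorial's text model and `LayerIntegratedGradients` setup, but the measurement is the same: call `attribute` with `return_convergence_delta=True` and vary `n_steps`:

```python
import torch
import torch.nn as nn
from captum.attr import IntegratedGradients

torch.manual_seed(0)

# Hypothetical toy model standing in for the tutorial's text model.
model = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 1))
model.eval()

inputs = torch.randn(4, 8)
baselines = torch.zeros_like(inputs)  # zero baseline, as in most IG examples

ig = IntegratedGradients(model)
for n_steps in (10, 50, 200, 500):
    # delta measures how far the summed attributions are from
    # f(input) - f(baseline); ideally it shrinks as n_steps grows.
    _, delta = ig.attribute(
        inputs,
        baselines=baselines,
        n_steps=n_steps,
        return_convergence_delta=True,
    )
    print(f"n_steps={n_steps}: mean |delta| = {delta.abs().mean().item():.6f}")
```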
Is this normal behaviour? From #311 it seems we would expect the opposite, i.e. that the delta should shrink as the number of steps increases.
TIA!