pytorch-grad-cam
Grad-CAM ++ implementation doubts
Hi,
Thanks for all your work implementing and summarizing the Grad-CAM-related methods. It is fun and helpful for understanding the different methods.
I have a doubt about the Grad-CAM++ method: the equation used in your implementation is equation 19 of the paper. However, equation 19 is only valid for networks whose last activation function is the exponential function; it is not the general equation that should be used for all networks, e.g. networks with softmax as the last activation function.
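For reference, here is a minimal sketch of an equation-19-style weight computation, which only needs the first-order gradients of the target-layer activations (the function and variable names are mine for illustration, not the repo's exact code):

```python
import numpy as np

def gradcam_plusplus_weights_eq19(activations, grads, eps=1e-7):
    """Sketch of the eq. (19)-style alpha/weight computation that only
    uses first-order gradients of the target layer.

    activations, grads: arrays of shape (batch, channels, H, W).
    """
    grads_2 = grads ** 2
    grads_3 = grads_2 * grads
    # sum of activations over the spatial dimensions, per channel
    sum_acts = activations.sum(axis=(2, 3))[:, :, None, None]
    # eq. (19): alpha_ij = g^2 / (2 g^2 + sum_ab(A_ab) * g^3)
    alphas = grads_2 / (2 * grads_2 + sum_acts * grads_3 + eps)
    # zero out locations with zero gradient so eps does not create spurious terms
    alphas = np.where(grads != 0, alphas, 0)
    # channel weights: spatial sum of alpha * ReLU(gradient)
    weights = (alphas * np.maximum(grads, 0)).sum(axis=(2, 3))
    return weights
```

Because everything above is built from powers of the first derivative, it implicitly relies on the exponential-output assumption that makes those powers stand in for the higher-order derivatives.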
I feel the correct implementation should be based on equation 10 of the paper, where the first-, second-, and third-order derivatives are needed for a general network. Equation 19 is only a special case of equation 10 that applies when the last activation is exp.
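For concreteness, the general form of the alpha coefficients in equation 10, as I read the paper (with $A^k$ the feature maps and $Y^c$ the class score), is:

$$
\alpha_{ij}^{kc} \;=\; \frac{\dfrac{\partial^2 Y^c}{(\partial A_{ij}^k)^2}}{2\,\dfrac{\partial^2 Y^c}{(\partial A_{ij}^k)^2} \;+\; \sum_{a}\sum_{b} A_{ab}^k\,\dfrac{\partial^3 Y^c}{(\partial A_{ij}^k)^3}}
$$

Only when $Y^c = \exp(S^c)$ (and $S^c$ is effectively piecewise linear in $A^k$, as in a ReLU network) do the higher-order derivatives reduce to powers of the first derivative, $\partial^n Y^c / (\partial A_{ij}^k)^n = \exp(S^c)\,(\partial S^c / \partial A_{ij}^k)^n$; the $\exp(S^c)$ factors then cancel and equation 10 collapses to equation 19. For a softmax output this cancellation does not happen.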
Thanks.