nicogross
> [#3725 (comment)](https://github.com/shap/shap/issues/3725#issuecomment-2202052886)
>
> > Found the solution: for every layer, make sure it is not forwarded twice. Especially for activation functions, which are typically stored as a class...
> Yes, #3725 also mentioned this error. If you figure it out, I would be interested as well. The problem here is the adapted backpropagation through maxpool (the only maxpool...
Seems like the unpooling can't handle overlapping regions (when `stride < kernel_size`); zero-padding seems to be fine. Some code I experimented with:

```
import torch
import torch.nn as nn
import...
```
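Since the snippet above is truncated, here is a minimal sketch (my own reconstruction, not the original experiment) of the problem: with `stride < kernel_size` the pooling windows overlap, the same input position can be the argmax of several windows, and `MaxUnpool2d` then writes that position only once, so the redistributed values no longer sum to the pooled ones — exactly the conservation an LRP-style backward pass through maxpool relies on. The tensor size and the conservation check are assumptions for illustration.

```
import torch
import torch.nn as nn

torch.manual_seed(0)
x = torch.rand(1, 1, 4, 4)

configs = [
    ("stride == kernel_size (non-overlapping)", 2, 2),
    ("stride <  kernel_size (overlapping)", 2, 1),
]

for name, k, s in configs:
    pool = nn.MaxPool2d(kernel_size=k, stride=s, return_indices=True)
    unpool = nn.MaxUnpool2d(kernel_size=k, stride=s)

    y, idx = pool(x)
    # Redistribute the pooled values back onto the input grid,
    # as a winner-take-all LRP backward pass through maxpool would.
    r = unpool(y, idx, output_size=x.shape)

    print(name)
    print("  sum of pooled values:", round(y.sum().item(), 4))
    print("  sum after unpooling :", round(r.sum().item(), 4))
```

In the non-overlapping case the two sums match; in the overlapping case duplicate indices collapse (each duplicated position is only counted once), the unpooled sum is smaller, and relevance is lost.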
LRP was designed for ReLU networks and generalized to leaky ReLU. My idea is that, because the sigmoid function does not satisfy f(0) = 0 and sign(f(-x)) = -1, which leads to unintuitive results, ...
Just a simple example:

f(x) = sigmoid(x1 * w1 + x2 * w2) = sigmoid(z1 + z2)

x1 = 2 and x2 = 1
w1 = -1 and w2 = 1
...
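To make the point concrete, here is the example worked through a plain z-rule redistribution (my assumption about where the truncated example is going, not necessarily the author's full derivation):

```
import math

# Values from the example above
x1, x2 = 2.0, 1.0
w1, w2 = -1.0, 1.0

z1, z2 = x1 * w1, x2 * w2          # z1 = -2, z2 = 1
z = z1 + z2                        # z  = -1
f = 1.0 / (1.0 + math.exp(-z))     # sigmoid(-1) ≈ 0.269

# z-rule: redistribute the output proportionally to each contribution
R1 = z1 / z * f                    # ≈ +0.538
R2 = z2 / z * f                    # ≈ -0.269

print(f"z = {z}, sigmoid(z) = {f:.3f}")
print(f"R1 = {R1:.3f}, R2 = {R2:.3f}, R1 + R2 = {R1 + R2:.3f}")
```

Even though the pre-activation z is negative, sigmoid(z) ≈ 0.27 is still positive (and sigmoid(0) = 0.5, not 0), so the signs of the redistributed relevances no longer follow the signs of the individual contributions z1 and z2 — presumably the kind of unintuitive result meant above.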