- support computing the gradient of the modified gradient via second-order gradients
- for this, the second gradient pass must be done without hooks

## TODO

- finalize
- ...
I am currently working on supporting second-order gradients, i.e. gradients of the modified gradients, which are used, for example, to compute adversarial explanations. The current issue which prevents second-order...
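For context, a generic sketch of such a second-order pass in plain PyTorch (zennit's hooks are elided here, which is an assumption): the first gradient call keeps its graph via `create_graph=True`, so a scalar objective on the attribution can itself be differentiated, as needed e.g. for adversarial explanations.

```python
import torch

# a smooth activation is used so the second-order gradient is non-zero
model = torch.nn.Sequential(torch.nn.Linear(8, 8), torch.nn.Tanh(), torch.nn.Linear(8, 1))
data = torch.randn(1, 8, requires_grad=True)

output = model(data)
# first gradient pass; create_graph=True keeps the graph for a second pass
attribution, = torch.autograd.grad(output, data, torch.ones_like(output), create_graph=True)

# differentiate a scalar objective on the attribution wrt. the input
objective = attribution.abs().sum()
grad_of_attribution, = torch.autograd.grad(objective, data)
```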
`Composite.context` can be implemented in a slimmer, simpler way using `contextlib.contextmanager`. Furthermore, instead of calling `Composite.context`, the same functionality could be implemented as `Composite.__call__`, since the context is the main functionality, and this would...
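A minimal sketch of the idea, assuming `Composite` already provides `register` and `remove` as in zennit's current API:

```python
from contextlib import contextmanager


class Composite:
    # registration logic elided; register/remove assumed as in zennit
    def register(self, module):
        ...

    def remove(self):
        ...

    @contextmanager
    def context(self, module):
        # register hooks on entry, guarantee removal on exit
        self.register(module)
        try:
            yield module
        finally:
            self.remove()

    # make the context the main entry point
    __call__ = context
```

Usage would then read `with composite(model) as modified:` rather than `with composite.context(model) as modified:`.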
Some modules are implicitly mapped to the gradient. We can explicitly map Module types to `None` in their respective `module_map` in composites and warn the user when no rule is...
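A hypothetical sketch of such an explicit mapping; the names, rule choice, and warning condition here are illustrative assumptions, not zennit's actual composites:

```python
import warnings

import torch
from zennit.rules import Epsilon

# module types explicitly mapped to the gradient
EXPLICIT_GRADIENT_TYPES = (torch.nn.ReLU, torch.nn.Flatten)


def module_map(ctx, name, module):
    # explicit None means "use the gradient"; unknown leaf modules
    # trigger a warning instead of silently falling through
    if isinstance(module, torch.nn.Linear):
        return Epsilon()
    if isinstance(module, EXPLICIT_GRADIENT_TYPES):
        return None
    if not list(module.children()):
        warnings.warn(f'No rule assigned to module {name!r} of type {type(module).__name__}')
    return None
```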
I am currently working on `GradOutHook`, which differs from the current `zennit.core.Hook` in that, instead of overwriting the full gradient of the module, it only changes the gradient output....
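A minimal sketch of the concept, not the actual implementation: a tensor hook registered on the module's output within a forward hook modifies only the outgoing gradient, while the module's own gradient computation stays untouched.

```python
import torch


class GradOutHook:
    '''Sketch only: modify a module's gradient output via a tensor hook.'''
    def __init__(self, fn):
        self.fn = fn

    def forward(self, module, inputs, output):
        # the tensor hook fires with the gradient wrt. the module output
        output.register_hook(self.fn)

    def register(self, module):
        return module.register_forward_hook(self.forward)


# usage sketch: only keep the positive part of the gradient after a ReLU
model = torch.nn.Sequential(torch.nn.Linear(4, 4), torch.nn.ReLU())
handle = GradOutHook(lambda grad: grad.clamp(min=0.)).register(model[1])

data = torch.randn(1, 4, requires_grad=True)
model(data).sum().backward()
handle.remove()
```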
- change the core `Hook` to support the modification of multiple inputs and params
- for this, each input and parameter that requires a gradient will now be hooked, and... (a sketch of the hooking step follows below)
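A sketch under stated assumptions; the helper name `hook_tensors` and its call site are hypothetical:

```python
import torch


def hook_tensors(module, inputs, fn):
    # hypothetical helper: hook every input and direct parameter of the
    # module that requires a gradient, returning the handles for cleanup
    tensors = list(inputs) + list(module.parameters(recurse=False))
    return [
        tensor.register_hook(fn)
        for tensor in tensors
        if tensor.requires_grad
    ]
```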
- add a flag to enable/disable downloading and use of trained weights in tutorials
- add this flag to the notebooks
- this will result in better visualizations in the...
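Such a flag could look as follows in a notebook cell; the variable name and the torchvision model are illustrative assumptions:

```python
from torchvision.models import vgg16

# set to False to skip downloading and using trained weights
USE_TRAINED_WEIGHTS = True

model = vgg16(weights='IMAGENET1K_V1' if USE_TRAINED_WEIGHTS else None)
model.eval()
```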
- use the additions to forward hooks in torch 2.0.0 to pass keyword arguments (see the sketch below)
- handle multiple inputs and outputs in `core.Hook` and `core.BasicHook`, by passing all required...
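For reference, torch 2.0.0 added `with_kwargs=True` to `register_forward_hook`, which additionally passes the keyword arguments to the hook; a minimal sketch:

```python
import torch


def forward_hook(module, args, kwargs, output):
    # with with_kwargs=True, the hook also receives the keyword arguments
    print(f'{type(module).__name__} called with kwargs: {sorted(kwargs)}')


module = torch.nn.Linear(4, 2)
handle = module.register_forward_hook(forward_hook, with_kwargs=True)
module(torch.randn(1, 4))
handle.remove()
```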
With the introduction of #185, ResNet18 attributions result in negative attribution sums in the input layer, leading to bad attributions. Although #185 increased the stability of the attribution sums...
With the implementation of #185, which allows for gradient computation wrt. the parameters given an attribution with canonizers, the BatchNorms seem to be leaking attribution even with `zero_params='bias'` on ResNet18.
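A hypothetical reproduction sketch of the described setting; the composite choice and input shape are assumptions, not taken from the report:

```python
import torch
from torchvision.models import resnet18
from zennit.composites import EpsilonPlusFlat
from zennit.torchvision import ResNetCanonizer

# attribute a ResNet18 with a canonizer while zeroing bias contributions
model = resnet18(weights=None).eval()
composite = EpsilonPlusFlat(canonizers=[ResNetCanonizer()], zero_params='bias')

data = torch.randn(1, 3, 224, 224, requires_grad=True)
with composite.context(model) as modified:
    output = modified(data)
    # attribution wrt. the input for the first class
    attribution, = torch.autograd.grad(output, data, torch.eye(1000)[[0]])

# with leaking BatchNorm bias, this sum deviates from the output score
print(attribution.sum().item())
```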