captum
Support additional rules for LRP
🚀 Feature
In its current implementation, LRP supports the following rules for assigning relevance: the \epsilon-rule, the \gamma-rule and the z^+-rule. The idea is to extend this implementation to also include the w^2-rule, flat-rule, \alpha\beta-rule and z^B-rule, cf. https://link.springer.com/chapter/10.1007/978-3-030-28954-6_10. For the time being, all of the above is restricted to linear layers (dense, convolutional).
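For reference, the \epsilon-rule already in place redistributes relevance through a layer in proportion to the contributions x_i w_ij, stabilized by a small \epsilon. A minimal NumPy sketch for a single dense layer (the function name and the sign-matched stabilizer convention are illustrative, not Captum's internal API):

```python
import numpy as np

def epsilon_rule(x, W, R_out, eps=1e-6):
    """LRP epsilon-rule for one dense layer (illustrative sketch).

    x: layer inputs, shape (in_features,)
    W: weights, shape (in_features, out_features)
    R_out: relevance of the layer outputs, shape (out_features,)
    """
    z = x @ W                                   # pre-activations z_j
    z = z + eps * np.where(z >= 0, 1.0, -1.0)   # stabilizer with the sign of z
    s = R_out / z                               # per-output scaling factors
    return x * (s @ W.T)                        # R_i = x_i * sum_j W_ij * s_j
```

Up to the \epsilon stabilizer, the input relevance sums to the output relevance (relevance conservation).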
Motivation
The currently implemented rules seem insufficient for some use cases, cf. the (heuristic?) usage recommendations in the table in the above link. In particular, rules like the w^2-rule or the z^B-rule are convenient for input layers, because the sign of the input features does not flip the sign of the relevance under these rules.
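To make the sign argument concrete, here is a small NumPy sketch of the w^2-rule for a dense layer (the function name is illustrative): relevance is redistributed in proportion to squared weights only, so the inputs, and in particular their signs, never enter the computation.

```python
import numpy as np

def w2_rule(W, R_out):
    """LRP w^2-rule for one dense layer: R_i = sum_j (W_ij^2 / sum_k W_kj^2) * R_j.

    W: weights, shape (in_features, out_features)
    R_out: relevance of the layer outputs, shape (out_features,)
    """
    w2 = W ** 2
    # normalize each column so the squared weights per output neuron sum to 1
    return (w2 / w2.sum(axis=0, keepdims=True)) @ R_out
```

Because each column of W^2 is normalized to 1, the rule conserves relevance exactly, and non-negative output relevance stays non-negative regardless of the input.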
Pitch
Add classes for these rules to lrp_rules.py, extending PropagationRule.
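A flat-rule class, for example, could follow the same pattern as the existing rules. Rather than guess at PropagationRule's exact hook signature here, the following is a self-contained sketch with a hypothetical stand-in base class; the real implementation would override the corresponding weight-manipulation hook in lrp_rules.py instead:

```python
import numpy as np

class PropagationRule:
    # Hypothetical stand-in for the PropagationRule base class in
    # lrp_rules.py; the real one also wires up forward/backward hooks.
    def _manipulate_weights(self, weight, bias):
        raise NotImplementedError

class FlatRule(PropagationRule):
    """flat-rule: spread relevance uniformly over inputs by treating all
    weights as equal and dropping the bias."""
    def _manipulate_weights(self, weight, bias):
        flat_weight = np.ones_like(weight)
        flat_bias = None if bias is None else np.zeros_like(bias)
        return flat_weight, flat_bias
```

The other rules (w^2, \alpha\beta, z^B) would differ only in how the weights (and, for z^B, the input bounds) are transformed.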
Alternatives
None ...
Additional context
As stated, we already have these rules implemented to some extent and would love to add them to Captum once @nanohanno's PR is finished. There are also some related issues with hook ordering and gradient accumulation that need fixing for this; we have workarounds, but it would be good to get input from others on these issues. Cf. also the remark by @marcoancona: https://github.com/pytorch/captum/issues/143#issuecomment-638845007
Hi,
Thank you again for releasing this wonderful library. I wanted to follow up on this: is there any active work being done to support additional rules and/or additional activation functions for LRP? If so, is it possible to get an approximate timeline?