Transformer-Explainability
The relprop method of the Linear layer
Hi Chefer,
Is there a typo in lines 218 to 219, https://github.com/hila-chefer/Transformer-Explainability/blob/c3e578f76b954e8528afeaaee26de3f07e3fe559/modules/layers_ours.py#L218-L219?
Should they instead be the following?

```python
S1 = safe_divide(R, Z1)
S2 = safe_divide(R, Z2)
```

That would match https://github.com/wjNam/Relative_Attributing_Propagation/blob/7fa96822740591b605712f251e556ec8487d1eea/modules/layers.py#L268-L269C36.
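For context, here is a minimal sketch of the two variants side by side. The safe_divide below is a hypothetical stand-in for the repo's helper (I assume it does an elementwise division guarded against zero denominators), and the function name stabilizer_variants is made up for illustration:

```python
import torch
import torch.nn.functional as F

def safe_divide(a, b, eps=1e-9):
    # Hypothetical stand-in for the repo's safe_divide:
    # elementwise a / b with zero denominators nudged by eps.
    return a / (b + b.eq(0).type_as(b) * eps)

def stabilizer_variants(R, x1, x2, w1, w2):
    Z1 = F.linear(x1, w1)  # pre-activation of the first branch
    Z2 = F.linear(x2, w2)  # pre-activation of the second branch

    # As in layers_ours.py lines 218-219: both branches are
    # normalized by the combined output Z1 + Z2.
    S1 = safe_divide(R, Z1 + Z2)
    S2 = safe_divide(R, Z1 + Z2)

    # As in the Relative_Attributing_Propagation lines linked above:
    # each branch is normalized by its own output only.
    S1_sep = safe_divide(R, Z1)
    S2_sep = safe_divide(R, Z2)
    return (S1, S2), (S1_sep, S2_sep)
```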
Thanks,
Hongbo
I want to follow this issue as well; it's something I noticed too. I'm curious why you divide by the sum of both the Z1 and Z2 terms when projecting the relevance.
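To make the question concrete, here is a toy calculation (made-up numbers, with safe_divide and the autograd step omitted) showing how the choice of denominator affects whether the redistributed relevance sums back to R. This is just my reading of the difference, not a confirmed answer:

```python
import torch

# Suppose one output neuron receives Z1 = 3.0 from one branch and
# Z2 = -1.0 from the other, with relevance R = 1.0 to redistribute.
R, Z1, Z2 = torch.tensor(1.0), torch.tensor(3.0), torch.tensor(-1.0)

# Shared denominator (current code): each branch's share is scaled by
# the total pre-activation, so the shares sum back to R exactly.
C1 = Z1 * (R / (Z1 + Z2))   # 3.0 / 2.0 = 1.5
C2 = Z2 * (R / (Z1 + Z2))   # -1.0 / 2.0 = -0.5
print((C1 + C2).item())     # 1.0 -> relevance is conserved

# Per-branch denominators (suggested change): each branch reproduces
# R on its own, so the two shares no longer sum to R.
C1_sep = Z1 * (R / Z1)      # 1.0
C2_sep = Z2 * (R / Z2)      # 1.0
print((C1_sep + C2_sep).item())  # 2.0 -> total relevance doubles
```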