captum
Model interpretability and understanding for PyTorch
Summary: Change remaining imports of `Literal` to come from the `typing` library. Differential Revision: D64807610
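The import change can be sketched as follows; the `Mode` alias and `set_mode` function are hypothetical illustrations, not from the Captum codebase:

```python
# Before (assumption: Literal was previously imported from a
# backport module such as typing_extensions):
#   from typing_extensions import Literal
# After: import directly from the standard-library typing module,
# where Literal has lived since Python 3.8.
from typing import Literal

# Hypothetical usage: restrict an argument to a fixed set of strings.
Mode = Literal["train", "eval"]

def set_mode(mode: Mode) -> str:
    return f"mode={mode}"
```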
Summary: This diff reduces the number of pyre-fixme suppressions in the layer_feature_ablation.py file. Differential Revision: D64796530
There seems to be no reason why LayerLRP should be restricted to lists; any ordered iterable should do the trick here. - Add checks for tuples...
Reviewed By: MaggieMoss Differential Revision: D64503973
Summary: Step 1 to update Captum OSS default branch name from 'master' to 'main'. - Context see T200490048 - Instructions see [wiki](https://www.internalfb.com/intern/wiki/Open_Source/Maintain_a_FB_OSS_Project/Default_Branch_Name_Change/#1-before-you-use-the-too) TODO: update the OSS wiki accordingly Differential Revision:...
In the implementation of DeepLIFT, a normalization step is applied to contributions from the Softmax module. I am fairly sure this step comes from a misreading of...
I am running PyTorch on a Mac and can use captum.insights, but not the visual renderer. ## 🐛 Bug `visualizer.render(debug=True)` [Open Browser Console for more detailed log - Double click to...
## 🐛 Bug ## To Reproduce Steps to reproduce the behavior: I followed https://captum.ai/tutorials/Llama2_LLM_Attribution. My code is below; the only difference is that I changed the model_name. `import torch from transformers...
Summary: Fix unbound variables that flake8 is complaining about Reviewed By: cyrjano Differential Revision: D64261231
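The kind of issue flake8 flags here (a name that may be unbound on some code paths) typically looks like this illustrative, non-Captum example:

```python
def first_positive(values):
    # Bug pattern flake8 complains about: if `result` were only
    # assigned inside the loop, it would be unbound whenever the
    # loop never takes the branch. Initializing it up front
    # guarantees the name is bound before `return`.
    result = None
    for v in values:
        if v > 0:
            result = v
            break
    return result
```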
I'm encountering an AttributeError while using LayerLRP with the SwitchTransformersAttention module in my custom transformer model. The error occurs when I attempt to compute the attributions using LayerLRP and seems...