

PyTorch Visual Attribution Methods

A PyTorch collection of recent visual attribution methods for model interpretability

Including (a minimal sketch of the simplest method follows the list):

  • [x] Vanilla Gradient Saliency
  • [x] Grad X Input
  • [x] Integrated Gradient
  • [x] SmoothGrad
  • [x] Deconv
  • [x] Guided Backpropagation
  • [x] Excitation Backpropagation, Contrastive Excitation Backpropagation
  • [x] GradCAM
  • [x] PatternNet, PatternLRP
  • [x] Real Time Saliency
  • [x] Occlusion
  • [x] Feedback
  • [x] DeepLIFT
  • [ ] Meaningful Perturbation
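To make the list concrete, here is a minimal sketch of the simplest method, vanilla gradient saliency: the attribution map is just the gradient of the target class score with respect to the input pixels. The sketch uses modern PyTorch idiom for readability (the repo itself pins PyTorch 0.2.0, which used the Variable API); the function name and the channel reduction are illustrative choices, not this repo's API.

import torch

def vanilla_gradient(model, image, target_class):
    # image: (1, 3, H, W) tensor, already normalized for the model
    image = image.detach().clone().requires_grad_(True)  # track gradients w.r.t. pixels
    score = model(image)[0, target_class]                # scalar logit for the target class
    score.backward()                                     # fills image.grad with d(score)/d(pixels)
    # reduce over color channels to get a 2-D saliency map
    return image.grad.abs().max(dim=1)[0].squeeze(0)

Most of the other methods above refine this basic recipe, e.g. by changing how gradients flow through ReLUs (Deconv, Guided Backpropagation) or by averaging gradients along a path or over noise (Integrated Gradient, SmoothGrad).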

Setup

Prerequisites

  • Linux
  • NVIDIA GPU + CUDA (currently only supports running on GPU)
  • Python 3.x
  • PyTorch version == 0.2.0 (sorry, I haven't tested newer versions)
  • torchvision, skimage, matplotlib

Getting Started

  • Clone this repo:
git clone git@github.com:yulongwang12/visual-attribution.git
cd visual-attribution
  • Download pretrained weights:
cd weights
bash ./download_patterns.sh  # for using PatternNet, PatternLRP
bash ./download_realtime_saliency.sh # for using Real Time Saliency

Note: I converted the Caffe bvlc_googlenet pretrained model to PyTorch format (see googlenet.py and weights/googlenet.pth).
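For reference, loading the converted model might look like the sketch below. The class name GoogleNet is an assumption (check googlenet.py for the actual constructor); only the two file paths mentioned above are confirmed by the repo.

import torch
from googlenet import GoogleNet  # class name assumed; see googlenet.py

model = GoogleNet()                                         # build the architecture
model.load_state_dict(torch.load('weights/googlenet.pth'))  # converted Caffe weights
model = model.cuda().eval()                                 # the repo currently requires a GPU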

Visual Saliency Comparison

See the notebook saliency_comparison.ipynb. If everything works, you will reproduce the saliency comparison figure shown there.
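As a taste of what the notebook compares, here is a hedged sketch of one more method from the list, SmoothGrad, which simply averages vanilla gradients over noisy copies of the input. As before, this is modern PyTorch idiom, and n_samples and sigma are illustrative defaults, not values taken from this repo.

import torch

def smoothgrad(model, image, target_class, n_samples=25, sigma=0.15):
    # image: (1, 3, H, W) tensor; sigma is the noise std in input units (assumed)
    accumulated = torch.zeros_like(image)
    for _ in range(n_samples):
        noisy = (image + sigma * torch.randn_like(image)).detach().requires_grad_(True)
        score = model(noisy)[0, target_class]  # scalar logit for the target class
        score.backward()                       # gradient w.r.t. this noisy copy
        accumulated += noisy.grad
    # average the gradients, then reduce over color channels as before
    return (accumulated / n_samples).abs().max(dim=1)[0].squeeze(0)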

Weakly Supervised Object Localization

TBD

Citation

If you use our codebase or models in your research, please cite this project.

@misc{visualattr2018,
  author =       {Yulong Wang},
  title =        {Pytorch-Visual-Attribution},
  howpublished = {\url{https://github.com/yulongwang12/visual-attribution}},
  year =         {2018}
}