Gcam (Grad-Cam)
A newer version of this repository is available at https://github.com/MECLabTUDA/M3d-Cam
Gcam is an easy-to-use PyTorch library that makes model predictions more interpretable for humans.
It can generate attention maps with multiple methods such as Guided Backpropagation,
Grad-Cam, Guided Grad-Cam and Grad-Cam++.
All you need to add to your project is a single line of code:
```python
model = gcam.inject(model, output_dir="attention_maps", save_maps=True)
```
Features
- Works with classification and segmentation data/models
- Works with 2D and 3D data
- Supports Guided Backpropagation, Grad-Cam, Guided Grad-Cam and Grad-Cam++
- Evaluation of attention maps against given ground truth masks
- Option for automatic layer selection (see the sketch after this list)
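The sketch below shows how the attention method and automatic layer selection might be chosen at injection time. The `backend` and `layer` keyword arguments and their values are assumptions made for illustration, not a verified signature; consult the documentation linked in the Documentation section for the exact API of `gcam.inject`.

```python
from gcam import gcam

model = MyCNN()  # your own network, as in the Usage section below

# A minimal sketch, assuming gcam.inject exposes a `backend` argument for picking the
# attention method and a `layer` argument for (automatic) target layer selection.
model = gcam.inject(
    model,
    output_dir="attention_maps",
    backend="gcampp",   # assumed values, e.g. "gbp", "gcam", "ggcam", "gcampp"
    layer="auto",       # assumed: let Gcam select the target layer automatically
    save_maps=True,
)
```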
Installation
- Install PyTorch from https://pytorch.org/get-started/locally/
- Install Gcam via pip with:

```bash
pip install gcam
```
Documentation
Gcam is fully documented; you can view the documentation at:
https://karol-g.github.io/Gcam
Examples
*(Example images: for #1 Classification (2D), #2 Segmentation (2D) and #3 Segmentation (3D), the input image together with the attention maps produced by Guided Backpropagation, Grad-Cam, Guided Grad-Cam and Grad-Cam++.)*
Usage
```python
# Import gcam
from gcam import gcam

# Import the PyTorch DataLoader
from torch.utils.data import DataLoader

# Init your model and dataloader
model = MyCNN()
data_loader = DataLoader(dataset, batch_size=1, shuffle=False)

# Inject model with gcam
model = gcam.inject(model, output_dir="attention_maps", save_maps=True)

# Continue to do what you're doing...
# In this case inference on some new data
model.eval()
for batch in data_loader:
    # Every time forward is called, attention maps will be generated and saved in the directory "attention_maps"
    output = model(batch)
    # more of your code...
```
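Gcam can also evaluate the generated attention maps against ground truth masks (see Features). The sketch below is one way this workflow could look; the `evaluate` keyword argument and the extra mask argument to the forward call are assumptions made for illustration, not a verified part of the Gcam API, so check the documentation above for the actual evaluation workflow.

```python
from gcam import gcam
from torch.utils.data import DataLoader

model = MyCNN()  # your own network, as in the usage example above
data_loader = DataLoader(dataset, batch_size=1, shuffle=False)  # dataset yields (input, ground truth mask) pairs

# Assumption: an `evaluate` flag switches on evaluation of the attention maps.
model = gcam.inject(model, output_dir="attention_maps", save_maps=True, evaluate=True)

model.eval()
for batch, mask in data_loader:
    # Assumption: the ground truth mask is passed along with the input so each
    # generated attention map can be scored against it.
    output = model(batch, mask)
```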
Demos
Classification
You can find a Jupyter Notebook on how to use Gcam for classification with a resnet152 at demos/Gcam_classification_demo.ipynb,
or open it directly in Google Colab.
2D Segmentation
TODO
3D Segmentation
You can find a Jupyter Notebook on how to use Gcam with the nnUNet for handling 3D data at demos/Gcam_nnUNet_demo.ipynb,
or open it directly in Google Colab.