Add GradCAM integration - Make YOLOv5 Interpretable
Why this PR?
This PR adapts the GradCAM library (pytorch-grad-cam) to YOLOv5. This is needed because black-box models are not always acceptable: we need to know why a certain prediction was made. It is different from the feature visualization that is already implemented, which shows what the network responds to in general; this explains the model's results on a per-image basis. For example, we want to know why the model detected this particular person: which pixels are most responsible for that prediction? The result is a heatmap like the ones below.
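For context, EigenCAM (the method implemented here) needs no gradients: it takes the activations of a chosen layer and projects them onto their first principal component, and that projection becomes the heatmap. A minimal NumPy sketch of that core step, with illustrative array shapes (not code from this PR):

```python
import numpy as np

def eigen_cam_projection(activations: np.ndarray) -> np.ndarray:
    """Project (C, H, W) activations onto their first principal
    component, yielding an (H, W) saliency map (EigenCAM's core step)."""
    c, h, w = activations.shape
    # Flatten spatial dims: each row is one location's C-dim feature vector.
    flat = activations.reshape(c, h * w).T          # (H*W, C)
    flat = flat - flat.mean(axis=0)                 # center the features
    # First right singular vector = first principal component.
    _, _, vt = np.linalg.svd(flat, full_matrices=False)
    cam = (flat @ vt[0]).reshape(h, w)              # project onto it
    # Normalize to [0, 1] so it can be rendered as a heatmap overlay.
    cam = cam - cam.min()
    return cam / (cam.max() + 1e-8)

# Toy activations standing in for a YOLOv5 feature map.
rng = np.random.default_rng(0)
heatmap = eigen_cam_projection(rng.standard_normal((16, 8, 8)).astype(np.float32))
print(heatmap.shape)  # (8, 8)
```

Because no gradients are involved, this works even on detection heads whose post-processing (NMS, box decoding) is not differentiable, which is one reason EigenCAM is a natural first method to integrate.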
EigenCAM layer -2:
EigenCAM layer -3:
Current State
Currently, I've implemented EigenCAM and it works well. I still need to write documentation explaining how it works.
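To make the "layer -2 / layer -3" choice in the images above concrete, this is roughly how a target layer's activations can be captured with a forward hook and reduced to a CAM. The tiny `Sequential` model here is only a hypothetical stand-in for YOLOv5's backbone, and the integration itself goes through the pytorch-grad-cam API rather than this hand-rolled hook:

```python
import numpy as np
import torch
import torch.nn as nn

# Stand-in model: in YOLOv5, "layer -2" would mean the second-to-last
# module of the detection model rather than this toy network.
model = nn.Sequential(
    nn.Conv2d(3, 8, 3, padding=1), nn.ReLU(),
    nn.Conv2d(8, 16, 3, padding=1), nn.ReLU(),
)
target_layer = list(model.children())[-2]  # the second Conv2d

captured = {}
def hook(_module, _inputs, output):
    # Detach so the CAM math runs outside the autograd graph.
    captured["acts"] = output.detach()[0].numpy()

handle = target_layer.register_forward_hook(hook)
with torch.no_grad():
    model(torch.randn(1, 3, 32, 32))
handle.remove()

# EigenCAM reduction: first principal component of the activations.
acts = captured["acts"]                    # (16, 32, 32)
flat = acts.reshape(acts.shape[0], -1).T   # (H*W, C)
flat = flat - flat.mean(axis=0)
_, _, vt = np.linalg.svd(flat, full_matrices=False)
cam = (flat @ vt[0]).reshape(acts.shape[1:])
print(cam.shape)  # (32, 32)
```

Picking layer -2 versus -3 simply changes which feature map is projected, which is why the two heatmaps above differ in coarseness.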
Related Issues and Links
This is a long-requested feature. Related YOLOv5 issues:
- #8717
- #5863
- #4575
- #2065
Related Issues in other repositories:
- https://github.com/jacobgil/pytorch-grad-cam/issues/364
- https://github.com/jacobgil/pytorch-grad-cam/issues/359
- https://github.com/jacobgil/pytorch-grad-cam/issues/242
Useful Links:
- https://github.com/pooya-mohammadi/yolov5-gradcam: This one is actually fine, but it is quite old, and it does not add the functionality to YOLOv5 in a way that works with later versions: it re-implements YOLO from scratch instead.
- Tutorial: Class Activation Maps for Object Detection with Faster RCNN — Advanced AI explainability with pytorch-gradcam
- EigenCAM for YOLO5 — Advanced AI explainability with pytorch-gradcam