lightning-flash
New heads and backbones for object detection 🤔
🚀 Feature
Please consider porting more heads and backbones for the object detection task.
Motivation
The number of heads and backbones available for object detection remains rather low compared to MMDetection.
Pitch
Introduce compatibility with new heads and backbones for object detection.
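For reference, heads and backbones are currently picked by name when constructing the task. A minimal sketch, assuming the Flash `ObjectDetector` signature with `head`, `backbone`, and `num_classes` arguments (the registered names used here are illustrative and vary between versions):

```python
from flash.image import ObjectDetector

# Sketch only: "retinanet" / "resnet18_fpn" are example registry names and may
# differ depending on the installed lightning-flash version.
model = ObjectDetector(
    head="retinanet",
    backbone="resnet18_fpn",
    num_classes=91,
)
```

The request here is essentially to grow those registries so that more of the MMDetection model zoo can be selected the same way.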
Alternatives
Additional context
Also, are there any plans to introduce transformer-based architectures, such as Swin Transformer, into this project, or is lightning-transformers intended specifically for that?
Discussed in https://github.com/PyTorchLightning/lightning-flash/discussions/1355.
Hey @Nexer8, thanks for the feature request! More heads / backbones (including transformers) are definitely something we'd like to have. We tried to source backbones from MMDetection once before but found it to be too heavy a dependency for our testing, but this is something we should revisit :smiley:
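For context on the dependency concern, the usual pattern is to keep such an integration optional by guarding the import. A minimal generic sketch (not Flash's actual import helpers; the names below are illustrative only):

```python
import importlib.util

# Illustrative optional-dependency guard; lightning-flash has its own import
# utilities, so treat `_MMDET_AVAILABLE` and `require_mmdet` as examples only.
_MMDET_AVAILABLE = importlib.util.find_spec("mmdet") is not None


def require_mmdet() -> None:
    """Fail with an actionable message when an MMDetection-backed head is
    requested but `mmdet` is not installed, keeping it out of core requirements."""
    if not _MMDET_AVAILABLE:
        raise ModuleNotFoundError(
            "This head requires MMDetection. Install it with `pip install mmdet`."
        )
```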
This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
Any plans to add new heads and backbones for object detection? It seems that IceVision is no longer maintained. MMDetection 3.0 has been released with the DINO model, which has great performance on the COCO 2017 dataset.
https://github.com/open-mmlab/mmdetection/tree/main/configs/dino
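In the meantime, a minimal sketch of the kind of wrapper such a head would provide, using a torchvision detection model inside a plain LightningModule (the data loading is assumed to already produce the torchvision detection format; this is not Flash's or MMDetection's API):

```python
import torch
import pytorch_lightning as pl
from torchvision.models.detection import retinanet_resnet50_fpn


class TorchvisionDetector(pl.LightningModule):
    """Minimal wrapper around a torchvision detection model.

    Sketch only: the dataloader is assumed to yield (images, targets) in the
    torchvision detection format, i.e. a list of image tensors and a list of
    dicts with "boxes" (FloatTensor[N, 4]) and "labels" (Int64Tensor[N]).
    """

    def __init__(self, num_classes: int = 91, lr: float = 1e-4):
        super().__init__()
        # `weights=None` trains from scratch (torchvision >= 0.13;
        # older releases use `pretrained=False` instead).
        self.model = retinanet_resnet50_fpn(weights=None, num_classes=num_classes)
        self.lr = lr

    def training_step(self, batch, batch_idx):
        images, targets = batch
        # In train mode, torchvision detection models return a dict of losses.
        loss_dict = self.model(images, targets)
        loss = sum(loss_dict.values())
        self.log_dict(loss_dict)
        return loss

    def configure_optimizers(self):
        return torch.optim.SGD(self.parameters(), lr=self.lr, momentum=0.9)
```

A `pl.Trainer().fit(...)` call with a detection-style DataLoader (list-of-images, list-of-target-dicts collate) would drive it; a Flash head would essentially adapt the same model behind the registry-based API above.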
@davidblom603 are you interested in sending a PR? :rabbit: