Co-DETR
Which model in the HuggingFace repo includes Objects365 pretraining?
Hi, I'm trying to reproduce the Co-DETR ViT-L results on COCO (65.4/65.9 mAP). Based on the discussion in this repo and the paper, I understand that:
- The full training pipeline involves first pretraining on Objects365 using `eva02_L_pt_m38m_medft_in21k_ft_in1k_p14` as the backbone
- Then fine-tuning this Objects365-pretrained model on COCO
Currently, I'm training on COCO directly with the EVA-02 ViT-L backbone, which gives only around 62.7 mAP.
Question: Among the models provided in your HuggingFace repository, which one includes the Objects365 pretraining? Or do I need to perform this pretraining step myself?
Thank you for sharing this excellent work!
The O365 model is here: https://huggingface.co/zongzhuofan/co-detr-vit-large-objects365
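For anyone else reproducing this, here is a minimal sketch of pulling that checkpoint with the `huggingface_hub` client. The repo id is taken from the URL above; the helper name and local directory are my own assumptions, and the weights are large, so this downloads the entire repository:

```python
# Repo id from the URL in the answer above.
REPO_ID = "zongzhuofan/co-detr-vit-large-objects365"

def fetch_o365_checkpoint(local_dir: str = "co-detr-o365") -> str:
    """Download the full checkpoint repo and return its local path.

    huggingface_hub is imported lazily so defining this helper does not
    require the package to be installed; only calling it does.
    """
    from huggingface_hub import snapshot_download
    return snapshot_download(repo_id=REPO_ID, local_dir=local_dir)
```

You would then point the Co-DETR fine-tuning config's pretrained-weights path at the returned directory before launching COCO training.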