mAP is almost 0
Describe the bug: mAP is almost 0 when training a 5-class dataset fine-tuned from a COCO or obj2coco pretrained model, for example:
CUDA_VISIBLE_DEVICES=0,1,2,3 torchrun --master_port=7777 --nproc_per_node=4 train.py -c configs/dfine/custom/objects365/dfine_hgnetv2_m_obj2custom-building-defeat.yml --use-amp --seed=0 -t dfine_m_obj2coco.pth
The loss is very high. The same dataset trains fine with RF-DETR.
Is there a bug somewhere?
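One common cause of near-zero mAP when fine-tuning on a small custom dataset is a mismatch between the number of classes actually used in the annotations and the `num_classes` set in the YAML config. The helper below is a hypothetical sanity check (not part of D-FINE) that counts the distinct category ids in a COCO-format annotation dict so you can compare against the config:

```python
def count_dataset_classes(ann: dict) -> int:
    """Count distinct category ids actually used in a COCO-format annotation dict."""
    used = {a["category_id"] for a in ann["annotations"]}
    return len(used)

# Minimal in-memory stand-in for json.load(open("annotations/train.json")):
ann = {
    "categories": [{"id": i, "name": f"cls{i}"} for i in range(5)],
    "annotations": [{"category_id": i % 5} for i in range(20)],
}

num_used = count_dataset_classes(ann)
print(num_used)  # compare this against num_classes in the YAML config
```

If the printed count disagrees with `num_classes` in the config, the classification head and the labels are misaligned, which typically shows up exactly as a very high loss and near-zero mAP.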
Hi, you could check this issue: https://github.com/roboflow/rf-detr/issues/157#issue-2992519685
The Roboflow team also faced this problem, but someone created his own GitHub repo where training D-FINE on a custom dataset works well. Here is the link to the repo: https://github.com/ArgoHA/custom_d_fine
This was very easy to set up, and training is ongoing. Fingers crossed that the model comes out as expected. Thanks @SebastianJanampa
@q-prashant how did your experiment end? Is everything good now?
Yup, working perfectly now. It was a good decision to switch from the official repo to yours!