DG-YOLO
The AP of holothurian is too low
I downloaded your dataset and changed nothing in the code, but even at epoch 82 the AP of holothurian still looks wrong.
I always train for 300 epochs. Maybe it simply has not converged yet. You can check the mAP curve on TensorBoard and pick the best-performing epoch.
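In case it helps, below is a minimal sketch of logging validation mAP to TensorBoard so the best epoch can be read off the curve. This is not the DG-YOLO training loop; the log directory and the evaluation call are placeholders.

```python
# Minimal sketch (not the actual DG-YOLO code): log validation mAP per epoch
# so the best checkpoint can be picked from the TensorBoard curve.
from torch.utils.tensorboard import SummaryWriter

writer = SummaryWriter(log_dir="logs")  # hypothetical log directory

for epoch in range(300):
    # train_one_epoch(model, train_loader, optimizer)  # placeholder
    # val_map = evaluate(model, val_loader)             # placeholder, returns mAP@0.5
    val_map = 0.0  # replace with the real evaluation result
    writer.add_scalar("val/mAP", val_map, epoch)

writer.close()
# Then inspect the curve with: tensorboard --logdir logs
```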
The program was killed twice when I got to epoch 84. Do you have any idea why? QAQ
Sometimes this happens to me too, and I don't know the reason. But you can continue training by resuming from that checkpoint.
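For what it's worth, a generic PyTorch resume pattern looks roughly like the sketch below. The checkpoint path, dictionary keys, and placeholder model are assumptions, not what DG-YOLO actually saves, so adapt them to the repo's own checkpoint format.

```python
# Rough sketch of resuming training from a saved checkpoint (generic PyTorch,
# not DG-YOLO-specific; the path, keys, and placeholder model are assumptions).
import torch
from torch import nn, optim

model = nn.Linear(10, 10)  # placeholder standing in for the detection network
optimizer = optim.Adam(model.parameters(), lr=1e-3)

checkpoint_path = "checkpoints/ckpt_epoch_84.pth"  # hypothetical file name
checkpoint = torch.load(checkpoint_path, map_location="cpu")

if isinstance(checkpoint, dict) and "model_state_dict" in checkpoint:
    # Full training state was saved: restore model, optimizer, and epoch.
    model.load_state_dict(checkpoint["model_state_dict"])
    optimizer.load_state_dict(checkpoint["optimizer_state_dict"])
    start_epoch = checkpoint["epoch"] + 1
else:
    # Only raw weights were saved.
    model.load_state_dict(checkpoint)
    start_epoch = 84  # epoch where training was killed

for epoch in range(start_epoch, 300):
    pass  # continue the usual training loop here
```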
I used exactly the same settings as yours, but I only get 46 mAP on ori, which is much lower than the 56 mAP reported in your paper. Is that normal, or is something wrong?
It is not normal, but I do not know the cause. It may be an environment issue; check whether your PyTorch version is the same as mine.
@1184125805: I also had the issue of the process getting killed partway through. I later realised that the sudden stopping no longer happens if I remove the NMS part. I replaced it with torchvision's nms and it started working fine, although it takes longer.
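For anyone else trying this swap, using torchvision's NMS in post-processing could look roughly like the sketch below. The box/score tensors and the IoU threshold are illustrative values, not DG-YOLO's actual variables.

```python
# Sketch of using torchvision's NMS in place of a hand-written suppression loop.
# Tensor contents and the threshold are illustrative, not from the DG-YOLO code.
import torch
from torchvision.ops import nms

boxes = torch.tensor([[10., 10., 50., 50.],
                      [12., 12., 52., 52.],
                      [100., 100., 150., 150.]])  # (N, 4) in (x1, y1, x2, y2)
scores = torch.tensor([0.9, 0.8, 0.75])           # (N,) confidence scores

keep = nms(boxes, scores, iou_threshold=0.4)      # indices of boxes to keep
kept_boxes, kept_scores = boxes[keep], scores[keep]
print(keep)  # tensor([0, 2]) -- the overlapping lower-score box is suppressed
```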