Training results are unstable
With the same model, the same training parameters, and the seed fixed at 0, why do the results still fluctuate significantly (AP50 and AP50-95 vary by about 2 points)?
Thank you for your interest in our work! If you find it helpful or inspiring, please consider giving us a star. That's surprising — which dataset are you referring to?
Dear @ShihuaHuang95
I'm facing the same problem, particularly with small object detection (APsmall).
Thanks for the reply. We are using a custom coffee cup defect dataset of around 10,000 images covering 5 defect types. I understand that training results depend heavily on the dataset, but is it normal to see fluctuations of about two points even when the seed is fixed and the model is converging? Is there a way to stabilize the training results?
Dear @ShihuaHuang95: This problem has been troubling me for several days, and I hope you can take some time to help me with it.
@bearhero123 @xiarencunzhang The training instability you mention also shows up on the COCO dataset, even with a fixed random seed. I've trained hundreds of DEIM models under the same configuration, fixed seed, and identical GPU setup, and the results typically fluctuate within ±0.1 AP. I've also visualized the preprocessed images produced by Dense O2O during training, and they look consistent, which confirms that Dense O2O itself is stable. A 2 AP difference is indeed too large. I suggest disabling AMP and training again to see whether the issue persists.
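For reference, here is a minimal, generic PyTorch sketch of the two checks discussed above: enforcing fully deterministic execution beyond the top-level seed, and running with AMP disabled. The function and variable names are illustrative and are not the actual DEIM entry points or config flags.

```python
import os
import random

import numpy as np
import torch
from torch import nn


def set_full_determinism(seed: int = 0) -> None:
    """Fix every RNG and disable the non-deterministic CUDA paths that can
    still cause run-to-run drift even when the top-level seed is fixed."""
    random.seed(seed)
    np.random.seed(seed)
    torch.manual_seed(seed)
    torch.cuda.manual_seed_all(seed)
    torch.backends.cudnn.benchmark = False       # no autotuned, data-dependent kernels
    torch.backends.cudnn.deterministic = True
    os.environ["CUBLAS_WORKSPACE_CONFIG"] = ":4096:8"  # required by deterministic cuBLAS
    torch.use_deterministic_algorithms(True, warn_only=True)  # warn on remaining non-det ops


def train_step(model, optimizer, scaler, x, y, use_amp: bool) -> float:
    """One optimization step; with use_amp=False autocast is a no-op and the
    whole step runs in fp32, removing fp16 rounding as a noise source."""
    optimizer.zero_grad()
    with torch.cuda.amp.autocast(enabled=use_amp):
        loss = nn.functional.mse_loss(model(x), y)
    scaler.scale(loss).backward()
    scaler.step(optimizer)
    scaler.update()
    return loss.item()


if __name__ == "__main__":
    set_full_determinism(seed=0)
    device = "cuda" if torch.cuda.is_available() else "cpu"
    use_amp = False  # disable mixed precision while debugging the instability
    model = nn.Linear(16, 4).to(device)  # stand-in for the actual detector
    optimizer = torch.optim.SGD(model.parameters(), lr=1e-2)
    scaler = torch.cuda.amp.GradScaler(enabled=use_amp and device == "cuda")
    for _ in range(10):
        x = torch.randn(8, 16, device=device)
        y = torch.randn(8, 4, device=device)
        train_step(model, optimizer, scaler, x, y, use_amp and device == "cuda")
```

If the fluctuation disappears with this kind of setup, the remaining variance in your runs most likely comes from fp16 numerics or a non-deterministic CUDA kernel rather than from Dense O2O or the dataset itself.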