Training instability
Search before asking
- [x] I have searched the YOLOv5 issues and discussions and found no similar questions.
Question
I'm training a YOLOv12 model on my own dataset, and the mAP50 on the validation set keeps oscillating, e.g. 0.67 → 0.68 → 0.69 → 0.70 → 0.71 → 0.68 → 0.67 → 0.67 → 0.68 → 0.67 → 0.67. I am already using the largest batch size my hardware allows (batch size = 8) and the maximum image size, I have tried reducing the learning rate (lr0 = 0.001 → 0.0001 → 0.00001), and I have increased the data augmentation parameters (mosaic).
I cannot change the data in my training and validation sets. How can I improve this situation?
Translated with DeepL.com (free version)
Additional
No response
👋 Hello @Taoboan1999, thank you for reaching out and for your detailed description! This is an automated response to help you get started, and an Ultralytics engineer will assist you soon.
Please visit our ⭐️ Tutorials for guidance, including quickstart guides for Custom Data Training and tips for Best Training Results.
If this is a 🐛 Bug Report, please provide a minimum reproducible example (MRE) to help us debug more effectively.
If your question is about custom training, please include as much information as possible, such as dataset samples, training logs, and the exact commands you are using.
Requirements
Python>=3.8.0 with all requirements.txt installed including PyTorch>=1.8. To get started:
git clone https://github.com/ultralytics/yolov5 # clone
cd yolov5
pip install -r requirements.txt # install
Environments
YOLOv5 may be run in any of the following up-to-date verified environments (with all dependencies including CUDA/CUDNN, Python and PyTorch preinstalled):
- Notebooks with free GPU:
- Google Cloud Deep Learning VM. See GCP Quickstart Guide
- Amazon Deep Learning AMI. See AWS Quickstart Guide
- Docker Image. See Docker Quickstart Guide
Status
If this badge is green, all YOLOv5 GitHub Actions Continuous Integration (CI) tests are currently passing. CI tests verify correct operation of YOLOv5 training, validation, inference, export and benchmarks on macOS, Windows, and Ubuntu every 24 hours and on every commit.
Thank you for your patience! An Ultralytics engineer will review your issue and provide more specific guidance soon.
Hi @Taoboan1999! The training instability you're experiencing with oscillating mAP50 values (0.67-0.71) is common and can be addressed with several adjustments. Try implementing learning rate scheduling with --cos-lr for cosine annealing, reduce --momentum from default 0.937 to 0.9, and consider using --patience for early stopping to prevent overfitting. You might also benefit from using Exponential Moving Average (EMA) which smooths model weights during training and often leads to more stable convergence patterns.
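For anyone who wants to try these suggestions together, here is a minimal sketch, assuming a cloned yolov5 repository and its train.run() Python entry point; dataset.yaml is a placeholder for your own data config, and in stock YOLOv5 the momentum, lr0 and mosaic values are set in the hyperparameter YAML rather than on the command line:

import train  # yolov5/train.py, run from inside the cloned repository

train.run(
    data="dataset.yaml",                   # placeholder: your dataset config
    weights="yolov5s.pt",                  # pretrained starting weights
    imgsz=640,
    batch_size=8,
    epochs=100,
    cos_lr=True,                           # cosine LR schedule (the --cos-lr flag)
    patience=30,                           # early stopping after 30 epochs without improvement
    hyp="data/hyps/hyp.scratch-low.yaml",  # momentum, lr0 and mosaic live in this file
)

The equivalent command line would be python train.py --data dataset.yaml --weights yolov5s.pt --img 640 --batch-size 8 --epochs 100 --cos-lr --patience 30.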
Thank you very much for your patience. I will try the methods you mentioned one by one, and if any of them prove effective I will post the results here.
How do you turn on EMA in YOLOv12 training?
Hi @Taoboan1999! EMA is enabled by default in YOLOv5 training - you can control it with the --ema flag (enabled by default) and adjust the decay rate using --ema-decay (default 0.9999). If you want to disable EMA for testing, use --ema False, or to modify the decay rate for potentially more stable training, try --ema-decay 0.999 for faster adaptation to recent weights.
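For context, here is a rough illustration of what EMA does during training, written in plain PyTorch. This is a simplified sketch, not the ModelEMA class that YOLOv5 actually uses internally in utils/torch_utils.py (which also ramps the decay up over the first updates):

import copy
import torch

class SimpleEMA:
    # Keeps a shadow copy of the model whose parameters are an exponential
    # moving average of the live model's parameters; validating and
    # checkpointing the shadow model usually gives smoother metric curves.
    def __init__(self, model: torch.nn.Module, decay: float = 0.999):
        self.ema = copy.deepcopy(model).eval()
        self.decay = decay
        for p in self.ema.parameters():
            p.requires_grad_(False)

    @torch.no_grad()
    def update(self, model: torch.nn.Module):
        msd = model.state_dict()
        for k, v in self.ema.state_dict().items():
            if v.dtype.is_floating_point:
                v.mul_(self.decay).add_(msd[k].detach(), alpha=1 - self.decay)

# Typical use inside the training loop:
#   ema = SimpleEMA(model, decay=0.999)
#   loss.backward(); optimizer.step(); optimizer.zero_grad()
#   ema.update(model)      # after every optimizer step
#   validate(ema.ema)      # evaluate the smoothed weights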
Thank you again for your answer. I will try modifying this parameter when training YOLOv5, but I am still encountering overfitting issues in YOLOv12. I will ask a question in the YOLOv12 issues section. Thank you.
You're welcome @Taoboan1999! Just to clarify, this is the YOLOv5 repository, so for YOLOv12 (YOLO12) questions you'll want to head over to the main Ultralytics repository where the newer YOLO versions are maintained. Good luck with your training!
👋 Hello there! We wanted to give you a friendly reminder that this issue has not had any recent activity and may be closed soon, but don't worry - you can always reopen it if needed. If you still have any questions or concerns, please feel free to let us know how we can help.
For additional resources and information, please see the links below:
- Docs: https://docs.ultralytics.com
- HUB: https://hub.ultralytics.com
- Community: https://community.ultralytics.com
Feel free to inform us of any other issues you discover or feature requests that come to mind in the future. Pull Requests (PRs) are also always welcome!
Thank you for your contributions to YOLO 🚀 and Vision AI ⭐