Detecting Botox injection points in facial images
Search before asking
- [x] I have searched the YOLOv5 issues and discussions and found no similar questions.
Question
I am currently working on detecting Botox injection points (very small objects) in facial images using the YOLOv5 model. The images have a resolution of 6240×4160, and the injection points were labeled by an expert dermatologist using 40×40 bounding boxes. I have trained different YOLOv5 variants (nano, small, medium) from pretrained weights, but the precision and recall values remain low during training.
Since YOLOv5 automatically applies the autoanchor feature, I expected better performance. However, the detection results suggest that the model may struggle with small object detection or resolution scaling. I would appreciate any insights or recommendations on improving detection accuracy, such as potential adjustments to the model configuration, anchor tuning, or alternative training strategies.
Looking forward to your advice.
Additional
No response
👋 Hello @ebkablan, thank you for your interest in YOLOv5 🚀! Detecting small objects like Botox injection points can indeed be challenging. Please visit our Tutorials for guidance on custom data training and optimization techniques. For similar projects, you might find our Custom Data Training and Tips for Best Training Results pages particularly useful.
If this is a 🐛 Bug Report, please provide a minimum reproducible example (MRE) to help us investigate further.
If this is a ❓ Question or general request for advice, please include as much relevant detail as possible to help us assist you effectively, such as:
- Example images with corresponding labels.
- Your training settings (e.g., batch size, image size, epochs, augmentations).
- Training logs, plots, or metrics (precision/recall, loss curves), if available.
Requirements
Ensure your environment meets the following: Python>=3.8.0 with all requirements.txt dependencies installed, including PyTorch>=1.8. To set up, simply:
git clone https://github.com/ultralytics/yolov5 # clone
cd yolov5
pip install -r requirements.txt # install
Environments
YOLOv5 can be run in the following verified environments, which include pre-installed dependencies like CUDA/CUDNN:
- Notebooks with free GPU:
- Google Cloud Deep Learning VM. See GCP Quickstart Guide
- Amazon Deep Learning AMI. See AWS Quickstart Guide
- Docker Image. See Docker Quickstart Guide
Status
If this badge is green, all YOLOv5 GitHub Actions Continuous Integration (CI) tests are currently passing. CI tests verify the functionality of YOLOv5 features like training, validation, inference, export, and benchmarks across macOS, Windows, and Ubuntu.
This is an automated response, but don't worry: an Ultralytics engineer will also review your issue and provide further assistance soon!
You can access an example image and label here: MERVECELIK_5.txt. Training settings: batch size 4, image size 1280, epochs 300, augmentations hyp.scratch-low.yaml.
@ebkablan thanks for sharing the training details and sample image. Note that at imgsz=1280 your 6240×4160 images are downscaled by roughly 4.9×, so a 40×40 box shrinks to about 8×8 px, near the limit of what the stride-8 P3 head can resolve. For such small objects, you might try increasing the effective resolution (via cropping or tiling) and manually tailoring your anchor settings to better match the 40×40 box size instead of relying solely on autoanchor, as referenced in our anchor-based detectors glossary (https://ultralytics.com/glossary/anchor-based-detectors).
Thanks for the feedback! Although the detection results look reasonable during inference, the precision and recall values remain very low during training and validation. Given that the objects are very small, are there any threshold values (e.g., IoU, confidence, anchor-related settings) that I should consider adjusting to improve performance? My image resolution is 6240×4160, the object size is fixed at 40×40, and I train with imgsz=1280.
For small 40×40 objects in high-resolution images (6240×4160) trained at imgsz=1280:
- Increase imgsz to 2560+ if GPU permits to preserve small object details
- Generate custom anchors specific to your 40×40 boxes using:
from utils.autoanchor import kmean_anchors
kmean_anchors(dataset='your_dataset.yaml', n=9, img_size=1280)  # dataset yaml, anchor count, training image size
- Adjust IoU thresholds: lower iou_t in your hyp yaml (try 0.1) to better match small objects
- Modify conf thresholds: lower the val conf from 0.001 to 0.0001 in val.py (see the example command after this list)
- Use a high-aug hyp: switch to hyp.scratch-high.yaml for stronger scaling/translation
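For reference, a validation run with the lowered confidence threshold could look like the following; derma.yaml and the weights path are placeholders taken from this thread, not verified files, while --conf-thres is a standard val.py argument:

# lower the validation confidence threshold so very small, low-confidence
# detections are still counted in the precision/recall computation
python val.py --data derma.yaml --weights runs/train/exp/weights/best.pt --img 1280 --conf-thres 0.0001

Note that lowering --conf-thres only changes which predictions enter the metric computation; it does not change the trained model itself.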
Consider tiling your original images before resizing to maintain object visibility; a minimal tiling sketch follows below. For anchor tuning details, see our Anchor-Based Detectors Guide.
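As a concrete illustration of the tiling suggestion, here is a minimal sketch, assuming 1280 px tiles with 160 px overlap and YOLO-format txt labels; the tile size, overlap, and paths are assumptions for this thread, not YOLOv5 defaults:

# Minimal tiling sketch: cut each high-resolution image into overlapping tiles
# and remap the normalized YOLO labels into per-tile coordinates.
from pathlib import Path
from PIL import Image

TILE, OVERLAP = 1280, 160  # assumed tile size and overlap

def tile_image(img_path, label_path, out_dir):
    im = Image.open(img_path)
    W, H = im.size
    boxes = []  # (class, x_center_px, y_center_px, w_px, h_px)
    for line in Path(label_path).read_text().splitlines():
        c, x, y, w, h = line.split()
        boxes.append((c, float(x) * W, float(y) * H, float(w) * W, float(h) * H))
    step = TILE - OVERLAP
    out = Path(out_dir)
    out.mkdir(parents=True, exist_ok=True)
    for y0 in range(0, H, step):
        for x0 in range(0, W, step):
            x1, y1 = min(x0 + TILE, W), min(y0 + TILE, H)
            tw, th = x1 - x0, y1 - y0
            lines = []
            for c, xc, yc, bw, bh in boxes:
                if x0 <= xc < x1 and y0 <= yc < y1:  # keep boxes centered in this tile
                    lines.append(f"{c} {(xc - x0) / tw:.6f} {(yc - y0) / th:.6f} {bw / tw:.6f} {bh / th:.6f}")
            if lines:  # skip tiles with no objects (or keep a few as background)
                stem = f"{Path(img_path).stem}_{x0}_{y0}"
                im.crop((x0, y0, x1, y1)).save(out / f"{stem}.jpg")
                (out / f"{stem}.txt").write_text("\n".join(lines))

At inference time you would slice the same way and merge tile detections back into full-image coordinates; libraries such as SAHI implement this slice-and-merge pattern if you prefer not to hand-roll it.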
After this command:
optimal_anchors = kmean_anchors('D:\OneDrive\03_Research\14_YOLO_ES\yolov5\derma.yaml', img_size=2560, n=9)
I got the following output. Any problem?
AutoAnchor: Running kmeans for 9 anchors on 914 points...
AutoAnchor: WARNING switching strategies from kmeans to random init
100%|██████████| 1000/1000 [00:00<00:00, 2751.42it/s]
AutoAnchor: thr=0.25: 0.0000 best possible recall, 0.00 anchors past thr
AutoAnchor: n=9, img_size=1280, metric_all=0.038/0.157-mean/best, past_thr=nan-mean: 17,52, 61,92, 101,255, 362,446, 556,660, 673,689, 712,1019, 1108,1200, 1205,1236
[[ 16.945   52.276]
 [ 61.116   92.386]
 [ 100.62   255.41]
 [ 361.63   445.98]
 [ 556.04   659.86]
 [ 673.47   689.07]
 [ 712.31  1018.7]
 [ 1108.4  1199.9]
 [ 1205.3  1236.1]]
The anchor generation warning suggests potential label scaling issues. Since your objects are 40×40, ensure your labels are normalized (0-1) relative to the 6240×4160 image size, not absolute pixels. Your anchors [[17,52], ...] appear far too large for 40×40 objects. Try:
kmean_anchors(dataset='derma.yaml', n=9, img_size=1280, gen=1000)  # match the training image size
If anchors remain mismatched, you might benefit from YOLOv8's anchor-free approach, which eliminates anchor tuning challenges. For details on anchor-free benefits, see YOLOv11 Anchor-Free Detector Guide.
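To rule out the label-scaling issue above, a quick sanity check is possible; this sketch assumes YOLO-format txt labels in a labels/train directory, both placeholders. Every value after the class id should lie in [0, 1], and for fixed 40×40 boxes on 6240×4160 images the normalized sizes should be roughly 40/6240 ≈ 0.0064 wide and 40/4160 ≈ 0.0096 tall:

# Hypothetical sanity check for YOLO label normalization (paths are placeholders).
from pathlib import Path

for f in Path('labels/train').glob('*.txt'):
    for line in f.read_text().splitlines():
        c, *vals = line.split()
        x, y, w, h = (float(v) for v in vals)
        if not all(0.0 <= v <= 1.0 for v in (x, y, w, h)):
            print(f'{f.name}: unnormalized label -> {line}')
        # expect w ~ 0.0064 and h ~ 0.0096 for 40x40 boxes on 6240x4160 images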
After finding the anchors, I place them into three scales in the yolov5n.yaml file. Should I set noautoanchor=True when training the model? Is it possible to use the pretrained models without recalculating the anchors?
@ebkablan yes, pass --noautoanchor in your training command to disable autoanchor when using custom anchors (see the sketch below). Pretrained weights can still be used, but ensure the custom anchors in your model YAML match your dataset's object sizes for optimal performance. For anchor-free alternatives, consider exploring Ultralytics YOLOv8 as referenced in our Anchor-Free Detectors Guide.
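For concreteness, here is a hedged sketch of the two steps. The anchor values below are placeholders, not values computed for your dataset; the anchors layout in the model yaml and the --noautoanchor flag are standard YOLOv5:

# in yolov5n.yaml, replace the anchors section with your computed values,
# ordered smallest (P3/8) to largest (P5/32); these numbers are placeholders
anchors:
  - [8,8, 10,10, 12,12]      # P3/8, smallest objects
  - [16,16, 20,20, 24,24]    # P4/16
  - [32,32, 40,40, 48,48]    # P5/32

# train with autoanchor disabled so the custom anchors are kept
python train.py --data derma.yaml --cfg models/yolov5n.yaml --weights yolov5n.pt --img 1280 --noautoanchor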