
Detecting Botox injection points in facial images

Open ebkablan opened this issue 9 months ago • 9 comments

Search before asking

  • [x] I have searched the YOLOv5 issues and discussions and found no similar questions.

Question

I am currently working on detecting Botox injection points (very small objects) in facial images using the YOLOv5 model. The images have a resolution of 6240×4160, and the injection points were labeled by an expert dermatologist using 40×40 bounding boxes. I have trained different YOLOv5 versions (nano, small, medium) using pretrained models, but the precision and recall values remain low during training.

Since YOLOv5 automatically applies the autoanchor feature, I expected better performance. However, the detection results suggest that the model may struggle with small object detection or resolution scaling. I would appreciate any insights or recommendations on improving detection accuracy, such as potential adjustments to the model configuration, anchor tuning, or alternative training strategies.

Looking forward to your advice.

Additional

No response

ebkablan avatar Feb 06 '25 12:02 ebkablan

👋 Hello @ebkablan, thank you for your interest in YOLOv5 🚀! Detecting small objects like Botox injection points can indeed be challenging. Please visit our ⭐️ Tutorials for guidance on custom data training and optimization techniques. For similar projects, you might find our Custom Data Training and Tips for Best Training Results pages particularly useful.

If this is a πŸ› Bug Report, please provide a minimum reproducible example (MRE) to help us investigate further.

If this is a ❓ Question or general request for advice, please include as much relevant detail as possible to help us assist you effectively, such as:

  • Example images with corresponding labels.
  • Your training settings (e.g., batch size, image size, epochs, augmentations).
  • Training logs, plots, or metrics (precision/recall, loss curves), if available.

Requirements

Ensure your environment meets the following: Python>=3.8.0 with all requirements.txt dependencies installed, including PyTorch>=1.8. To set up, simply:

git clone https://github.com/ultralytics/yolov5  # clone
cd yolov5
pip install -r requirements.txt  # install

Environments

YOLOv5 can be run in any of our verified environments, which include pre-installed dependencies such as CUDA/cuDNN.

Status

YOLOv5 CI

If this badge is green, all YOLOv5 GitHub Actions Continuous Integration (CI) tests are currently passing. CI tests verify the functionality of YOLOv5 features like training, validation, inference, export, and benchmarks across macOS, Windows, and Ubuntu.

This is an automated response, but don't worry: an Ultralytics engineer will also review your issue and provide further assistance soon! 😊

UltralyticsAssistant avatar Feb 06 '25 12:02 UltralyticsAssistant

[image attachment]

You can access an example image and its label here: MERVECELIK_5.txt. Training settings: batch size 4, image size 1280, epochs 300, augmentations hyp.scratch-low.yaml.

ebkablan avatar Feb 06 '25 13:02 ebkablan

@ebkablan thanks for sharing the training details and sample image; for such small objects, you might try increasing the effective resolution (via cropping or tiling) and manually tailoring your anchor settings to better match the 40×40 box size instead of relying solely on autoanchor, as referenced in our anchor-based detectors glossary (https://ultralytics.com/glossary/anchor-based-detectors).
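For reference, here is a minimal tiling sketch showing one way to crop a high-resolution image into overlapping tiles and remap its YOLO-format labels. The paths, tile size, and overlap are assumptions for illustration, not values from this thread:

# Minimal tiling sketch (illustrative): split one high-resolution image into
# overlapping 1280x1280 tiles and remap its YOLO-format labels
# (class x_center y_center width height, all normalized to [0, 1]).
# Paths, tile size, and overlap are assumptions, not values from this thread.
from pathlib import Path
from PIL import Image

TILE, OVERLAP = 1280, 256  # tile edge and overlap in pixels (assumed)

def tile_image(img_path, label_path, out_dir):
    im = Image.open(img_path)
    W, H = im.size  # e.g. 6240 x 4160
    rows = [r.split() for r in Path(label_path).read_text().splitlines() if r.strip()]
    labels = [(int(c), float(x), float(y), float(w), float(h)) for c, x, y, w, h in rows]
    out = Path(out_dir)
    out.mkdir(parents=True, exist_ok=True)
    step = TILE - OVERLAP
    for y0 in range(0, H - TILE + 1, step):  # edge remainders skipped for brevity
        for x0 in range(0, W - TILE + 1, step):
            kept = []
            for c, xc, yc, w, h in labels:
                px, py = xc * W, yc * H  # box center in absolute pixels
                if x0 <= px < x0 + TILE and y0 <= py < y0 + TILE:
                    # renormalize the box relative to this tile
                    kept.append(f"{c} {(px - x0) / TILE:.6f} {(py - y0) / TILE:.6f} "
                                f"{w * W / TILE:.6f} {h * H / TILE:.6f}")
            if kept:  # save only tiles that contain at least one injection point
                stem = f"{Path(img_path).stem}_{x0}_{y0}"
                im.crop((x0, y0, x0 + TILE, y0 + TILE)).save(out / f"{stem}.jpg")
                (out / f"{stem}.txt").write_text("\n".join(kept))

At this tile size a 40×40 box keeps its full 40-pixel extent, instead of shrinking to roughly 8 pixels when the whole 6240-pixel-wide image is letterboxed to 1280.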

pderrenger avatar Feb 06 '25 20:02 pderrenger

Thanks for the feedback! Although the detection results seem good during inference, the precision and recall values remain very low during training and validation. Given that the objects are very small, are there any threshold values (e.g., IoU, confidence, anchor-related settings) that I should consider adjusting to improve performance? My image resolution is 6240×4160, and the object size is fixed at 40×40. During training, I use imgsz=1280.

ebkablan avatar Feb 07 '25 11:02 ebkablan

For small 40×40 objects in high-res images (6240×4160) trained at imgsz=1280:

  1. Increase imgsz to 2560+ if GPU memory permits, to preserve small-object detail (the arithmetic is sketched after this comment).
  2. Generate custom anchors specific to your 40×40 boxes using:
from utils.autoanchor import kmean_anchors
kmean_anchors(dataset='your_dataset.yaml', n=9, img_size=1280)
  3. Adjust IoU thresholds: lower iou_t in your hyp YAML (try 0.1) to better match small objects.
  4. Modify confidence thresholds: lower the validation conf threshold in val.py from 0.001 to 0.0001.
  5. Use high-augmentation hyperparameters: switch to hyp.scratch-high.yaml for stronger scaling/translation.

Consider tiling your original images before resizing to maintain object visibility. For anchor tuning details, see our Anchor-Based Detectors Guide.
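To make point 1 concrete, here is a quick back-of-the-envelope check (an illustrative sketch, not from the thread) of what letterboxing the 6240-pixel-wide images to a square training size does to a 40×40 box:

# Back-of-the-envelope check: how big is a 40x40 labeled box after the
# 6240x4160 image is letterboxed to a square training size? (illustrative)
IMG_W, BOX = 6240, 40  # longest image side and labeled box edge, in pixels

for imgsz in (1280, 2560, 5120):
    px = BOX * imgsz / IMG_W
    # YOLOv5's finest detection head (P3) has stride 8, i.e. one grid cell
    # per 8x8 patch of network input
    print(f"imgsz={imgsz}: box -> {px:.1f} px (~{px / 8:.1f} P3 cells)")
# imgsz=1280: box -> 8.2 px (~1.0 P3 cells)
# imgsz=2560: box -> 16.4 px (~2.1 P3 cells)
# imgsz=5120: box -> 32.8 px (~4.1 P3 cells)

At imgsz=1280 each injection point collapses to roughly 8 pixels, about one cell of the stride-8 P3 grid, which is why raising the resolution or tiling matters more than any threshold tweak. For point 4, val.py can also be driven from Python, e.g. val.run(data='derma.yaml', weights='best.pt', conf_thres=0.0001), assuming your weights path.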

pderrenger avatar Feb 07 '25 16:02 pderrenger

After this command:

optimal_anchors = kmean_anchors('D:\OneDrive\03_Research\14_YOLO_ES\yolov5\derma.yaml', img_size=2560, n=9)

I got the following output. Any problem?

AutoAnchor: Running kmeans for 9 anchors on 914 points...
AutoAnchor: WARNING switching strategies from kmeans to random init
100%|██████████| 1000/1000 [00:00<00:00, 2751.42it/s]
AutoAnchor: thr=0.25: 0.0000 best possible recall, 0.00 anchors past thr
AutoAnchor: n=9, img_size=1280, metric_all=0.038/0.157-mean/best, past_thr=nan-mean: 17,52, 61,92, 101,255, 362,446, 556,660, 673,689, 712,1019, 1108,1200, 1205,1236
[[ 16.945   52.276]
 [ 61.116   92.386]
 [ 100.62   255.41]
 [ 361.63   445.98]
 [ 556.04   659.86]
 [ 673.47   689.07]
 [ 712.31  1018.7 ]
 [ 1108.4  1199.9 ]
 [ 1205.3  1236.1 ]]

ebkablan avatar Feb 08 '25 15:02 ebkablan

The anchor generation warning suggests potential label scaling issues. Since your objects are 40x40, ensure your labels are normalized (0-1) relative to the 6240x4160 image size, not absolute pixels. Your anchors [[17,52], ...] appear too large for 40x40 objects. Try:

kmean_anchors(dataset='derma.yaml', n=9, img_size=1280, gen=1000)  # match training imgsz

If anchors remain mismatched, you might benefit from YOLOv8's anchor-free approach, which eliminates anchor tuning challenges. For details on anchor-free benefits, see YOLOv11 Anchor-Free Detector Guide.
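As a quick way to act on the normalization advice above, here is a small check script (an illustrative sketch; the label directory and image dimensions are assumptions) that flags label files whose values fall outside [0, 1] and reports the implied pixel size of each box:

# Quick label sanity check (illustrative sketch): YOLO labels must be
# normalized to [0, 1]; a 40x40 box in a 6240x4160 image should read as
# w ~= 40/6240 ~= 0.0064 and h ~= 40/4160 ~= 0.0096.
# The directory below is an assumption for illustration.
from pathlib import Path

IMG_W, IMG_H = 6240, 4160

for f in sorted(Path("labels/train").glob("*.txt")):
    for i, line in enumerate(f.read_text().splitlines(), 1):
        _, xc, yc, w, h = (float(v) for v in line.split())
        if not all(0.0 <= v <= 1.0 for v in (xc, yc, w, h)):
            print(f"{f.name}:{i} not normalized (absolute pixels?): {line}")
        else:
            print(f"{f.name}:{i} box ~ {w * IMG_W:.0f} x {h * IMG_H:.0f} px")

If the printed box sizes are nowhere near 40×40, the anchors kmeans sees are computed from the wrong scale, which would explain the oversized clusters above.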

pderrenger avatar Feb 13 '25 09:02 pderrenger

After finding the anchors, I place them into three scales in the yolov5n.yaml file. Should I set noautoanchor=True when training the model? Is it possible to use the pretrained models without recalculating the anchors?

ebkablan avatar Feb 13 '25 19:02 ebkablan

@ebkablan yes, pass the --noautoanchor flag in your training command to disable autoanchor when using custom anchors. Pretrained models can still be used without recalculating anchors, but ensure the custom anchors in your model YAML match your dataset's object sizes for optimal performance. For anchor-free alternatives, consider exploring Ultralytics YOLOv8 as referenced in our Anchor-Free Detectors Guide.
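For completeness, a minimal sketch of that workflow using YOLOv5's Python entry point (train.run in train.py); the anchor values and file names below are placeholders for illustration, not tuned results for this dataset:

# Illustrative sketch: train with a custom-anchor model YAML and autoanchor
# disabled. Anchor values and paths are placeholders, not tuned results.
import train  # run from the yolov5 repository root

# 1) Put the custom anchors into a copy of models/yolov5n.yaml, e.g.:
# anchors:
#   - [8,8, 10,10, 13,13]      # P3/8 (placeholder values)
#   - [16,16, 20,20, 26,26]    # P4/16
#   - [33,33, 41,41, 52,52]    # P5/32

# 2) Train with the edited config; pretrained weights still load, and
#    noautoanchor keeps YOLOv5 from overwriting the custom anchors.
train.run(
    data="derma.yaml",          # dataset YAML used earlier in this thread
    cfg="yolov5n-custom.yaml",  # assumed name for the edited model YAML
    weights="yolov5n.pt",       # pretrained checkpoint
    imgsz=2560,                 # higher resolution, per the earlier advice
    batch_size=4,
    epochs=300,
    noautoanchor=True,          # equivalent to the --noautoanchor CLI flag
)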

pderrenger avatar Feb 16 '25 19:02 pderrenger