YOLO_v3_tutorial_from_scratch
Anchors scaling for each feature map output
https://github.com/ayooshkathuria/YOLO_v3_tutorial_from_scratch/blob/8264dfba39a866998b8936a24133f41f12bfbdb7/util.py#L59
I have a question: since YOLOv3 has anchors for three different scales (as mentioned in the paper), why do we need to scale the anchors down again for each scale? The anchor scaling is a bit hard to understand, as I am new to anchor-based detection.
Hi, let me try to answer. The anchor box sizes we supply are given in pixels of the original input image, but the network's predictions live on the feature map grid. So we have to scale the anchors by a coefficient (the stride, i.e. the down-sampling factor of that output) so that they correspond to feature maps of different sizes.
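Here is a minimal sketch of that scaling, assuming a 416x416 input and the standard COCO anchors for the coarsest (13x13) YOLOv3 head; the variable names are illustrative, not the exact code from the linked util.py:

```python
# Sketch: convert anchors from input-image pixels to feature-map units.

inp_dim = 416                                   # network input size (assumption)
anchors = [(116, 90), (156, 198), (373, 326)]   # COCO anchors for the 13x13 scale

feature_map_size = 13                           # coarsest YOLOv3 output grid
stride = inp_dim // feature_map_size            # 416 // 13 = 32

# Dividing by the stride expresses each anchor in feature-map cells,
# matching the units of the network's raw width/height predictions.
scaled_anchors = [(w / stride, h / stride) for (w, h) in anchors]
print(scaled_anchors)  # [(3.625, 2.8125), (4.875, 6.1875), (11.65625, 10.1875)]
```

The 26x26 and 52x52 heads do the same thing with strides 16 and 8, which is why the scaling has to be repeated per output scale even though the anchors were already assigned to scales beforehand.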