
A problem about scaled_anchors

Open pandalgx opened this issue 5 years ago • 0 comments

In the compute_grid_offsets() function, self.scaled_anchors is computed as follows:

self.scaled_anchors = FloatTensor([(a_w / self.stride, a_h / self.stride) for a_w, a_h in self.anchors])
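For context, the surrounding method in this repo looks roughly like this (paraphrased from models.py; treat the details as an approximation of your checkout rather than an exact quote):

```python
def compute_grid_offsets(self, grid_size, cuda=True):
    self.grid_size = grid_size
    g = self.grid_size
    FloatTensor = torch.cuda.FloatTensor if cuda else torch.FloatTensor
    # stride = input pixels per grid cell, e.g. 416 / 13 = 32
    self.stride = self.img_dim / self.grid_size
    # per-cell x/y offsets used to decode box centers
    self.grid_x = torch.arange(g).repeat(g, 1).view([1, 1, g, g]).type(FloatTensor)
    self.grid_y = torch.arange(g).repeat(g, 1).t().view([1, 1, g, g]).type(FloatTensor)
    # anchors converted from input pixels to grid-cell units
    self.scaled_anchors = FloatTensor(
        [(a_w / self.stride, a_h / self.stride) for a_w, a_h in self.anchors]
    )
```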

My concern is that in the cfg file, the anchor dimensions are given w.r.t. an image size of 416 x 416. But if we use multi-scale training, the input size may change, e.g. to 384 x 384, and then the code above will be wrong. I think it should be changed to:

self.scaled_anchors = FloatTensor([(a_w * self.img_dim / 416 / self.stride, a_h * self.img_dim / 416 / self.stride) for a_w, a_h in self.anchors])

where self.img_dim is the current network input size, e.g. 384.
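If it helps, here is a minimal numeric sketch (my own, not from the repo) comparing the two formulas at the stride-32 detection layer. It assumes the cfg anchors were tuned for a 416 x 416 input; the point is that the original formula keeps an anchor at a fixed number of grid cells regardless of input size, while the rescaled version keeps it at a fixed fraction of the image:

```python
anchors = [(116, 90), (156, 198), (373, 326)]  # the three largest yolov3.cfg anchors

cfg_img_dim = 416             # input size the cfg anchors refer to
img_dim = 384                 # current multi-scale input size
grid_size = img_dim // 32     # 12 for the stride-32 layer
stride = img_dim / grid_size  # 32.0

for a_w, a_h in anchors:
    original = (a_w / stride, a_h / stride)
    proposed = (a_w * img_dim / cfg_img_dim / stride,
                a_h * img_dim / cfg_img_dim / stride)
    print(f"anchor ({a_w}, {a_h}): original={original}, proposed={proposed}")
```

For the (116, 90) anchor this prints 116 / 32 = 3.625 grid cells with the original formula, versus 116 * 384 / 416 / 32 ≈ 3.35 cells with the proposed one, which is the same fraction of the image (116 / 416 ≈ 28%) as at 416.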

Can anyone tell me if this is right?

pandalgx · Dec 07 '19 16:12