Pytorch_Retinaface

PriorBox forward is slow

gan3sh500 opened this issue 4 years ago · 2 comments

The PriorBox forward function is not vectorised. It is easily vectorisable, as below.

    from itertools import product
    from math import ceil

    import numpy as np
    import torch


    def priorbox_forward(min_sizes, steps, clip, image_size):
        # feature-map size per stride; image_size is (height, width)
        feature_maps = [[ceil(image_size[0] / step), ceil(image_size[1] / step)] for step in steps]
        anchors = []
        for k, f in enumerate(feature_maps):
            min_size = min_sizes[k]
            # one row per (grid_row, grid_col, anchor_size) combination
            mat = np.array(list(product(range(f[0]), range(f[1]), min_size))).astype(np.float32)
            mat[:, 0] = (mat[:, 0] + 0.5) * steps[k] / image_size[1]
            mat[:, 1] = (mat[:, 1] + 0.5) * steps[k] / image_size[0]
            # duplicate the size column so each row becomes (cx, cy, w, h)
            mat = np.concatenate([mat, mat[:, 2:3]], axis=1)
            mat[:, 2] = mat[:, 2] / image_size[1]
            mat[:, 3] = mat[:, 3] / image_size[0]
            anchors.append(mat)
        output = np.concatenate(anchors, axis=0)
        if clip:
            output = np.clip(output, 0, 1)
        return torch.from_numpy(output)
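
For context, a call with what I believe are the repo's default config values (worth verifying against data/config.py); the shapes are just to illustrate the output format:

    # config values assumed from the repo's defaults; check data/config.py
    priors = priorbox_forward(
        min_sizes=[[16, 32], [64, 128], [256, 512]],
        steps=[8, 16, 32],
        clip=False,
        image_size=(640, 640),  # (height, width)
    )
    print(priors.shape)  # torch.Size([16800, 4]); rows are (cx, cy, w, h)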

Can I submit a PR? The vectorisation makes it 2x faster for me on a Ryzen 5 3600.

gan3sh500 · Oct 24 '20

Old post, but I think this could be useful. Not sure what is stopping you from submitting a PR. I tried your function and, strangely, I am not getting the exact same result...
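
For anyone wanting to pin down the discrepancy, a minimal comparison against the repo's original PriorBox could look like this (import path and cfg values assumed from the repo layout):

    import torch
    from layers.functions.prior_box import PriorBox  # import path assumed

    cfg = {'min_sizes': [[16, 32], [64, 128], [256, 512]], 'steps': [8, 16, 32], 'clip': False}
    image_size = (640, 640)

    reference = PriorBox(cfg, image_size=image_size).forward()
    candidate = priorbox_forward(cfg['min_sizes'], cfg['steps'], cfg['clip'], image_size)
    print(torch.allclose(reference, candidate))  # False, per the discrepancy described above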

rafale77 · Jan 05 '21


When I ran the function above, it behaved abnormally, so I used it with a small modification: cx has to come from the column index and cy from the row index, i.e. the two grid coordinates were swapped. Thanks!

    def vectorized_forward(self):
        # relies on module-level imports: numpy as np, torch, itertools.product
        anchors = []
        for k, f in enumerate(self.feature_maps):
            min_size = self.min_sizes[k]
            # one row per (grid_row, grid_col, anchor_size) combination
            mat = np.array(list(product(range(f[0]), range(f[1]), min_size))).astype(np.float32)
            # cx comes from the column index, cy from the row index
            # (the swap relative to the snippet above)
            mat[:, 0], mat[:, 1] = ((mat[:, 1] + 0.5) * self.steps[k] / self.image_size[1],
                                    (mat[:, 0] + 0.5) * self.steps[k] / self.image_size[0])
            # duplicate the size column so each row becomes (cx, cy, w, h)
            mat = np.concatenate([mat, mat[:, 2:3]], axis=1)
            mat[:, 2] = mat[:, 2] / self.image_size[1]
            mat[:, 3] = mat[:, 3] / self.image_size[0]
            anchors.append(mat)
        output = np.concatenate(anchors, axis=0)
        if self.clip:
            output = np.clip(output, 0, 1)
        return torch.from_numpy(output)
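
A quick sanity check that the modified method agrees with the repo's original forward, sketched under the same assumptions as the comparison snippet above (import path, cfg values, and vectorized_forward attached to PriorBox):

    import torch
    from layers.functions.prior_box import PriorBox  # import path assumed

    cfg = {'min_sizes': [[16, 32], [64, 128], [256, 512]], 'steps': [8, 16, 32], 'clip': False}
    pb = PriorBox(cfg, image_size=(640, 640))
    # should hold up to float32 rounding
    assert torch.allclose(pb.forward(), pb.vectorized_forward())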

It runs almost twice as fast for me too, though the runtime is still heavily affected by the input image resolution.
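
A rough timing sketch to see both effects (uses the same assumed PriorBox and cfg as above; numbers will vary by machine):

    import time

    def bench(fn, repeats=20):
        # average wall-clock seconds over a few runs
        start = time.perf_counter()
        for _ in range(repeats):
            fn()
        return (time.perf_counter() - start) / repeats

    # the anchor count grows with resolution, so runtime does too
    for size in [(320, 320), (640, 640), (1280, 1280)]:
        pb = PriorBox(cfg, image_size=size)  # cfg as in the snippet above
        print(size, bench(pb.forward), bench(pb.vectorized_forward))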

aengoo · Jan 19 '21