Speed Issue
Hello, thank you for the Soft-NMS PyTorch implementation. However, the code is quite slow even though it is implemented in C++. Is that normal? My training took about a day with NMS, while with Soft-NMS it takes four days. Thank you.
As far as I know, you are not supposed to retrain with Soft-NMS. Non-maximum suppression is a post-processing step, so you can replace it with Soft-NMS without retraining.
Hello, thank you for your response. I was trying to add Soft-NMS to Faster R-CNN. That model applies NMS at both the RPN and RoI stages. As you know, the RoI stage is the last step of Faster R-CNN; in contrast, the RPN applies NMS to reduce the number of proposals to N (N is usually 1000). To the best of my knowledge, the NMS applied after the RoI stage is post-processing, but the RPN stage's NMS affects the proposals the later stages see, so the model should be retrained. Is my understanding correct, or is retraining not required? Thank you!
Hi,
Sorry for the late reply, I have been quite busy lately.
In Soft-NMS we need to re-sort the entries after each update, since we update the scores according to the overlap. This creates an overhead compared to "normal" NMS. I think TensorFlow has implementations of both Soft-NMS and NMS; it would be interesting to see whether the same performance difference is present there as well.
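To make the overhead concrete, here is a minimal NumPy sketch of Soft-NMS with a Gaussian score decay (the function names, `sigma`, and `score_thresh` are illustrative choices, not this repository's actual API). The `argmax` inside the loop is the per-iteration "re-sort" step: because scores change on every pass, the maximum cannot be precomputed once as in hard NMS.

```python
import numpy as np

def iou(box, boxes):
    # IoU of one box [x1, y1, x2, y2] against an array of boxes.
    x1 = np.maximum(box[0], boxes[:, 0])
    y1 = np.maximum(box[1], boxes[:, 1])
    x2 = np.minimum(box[2], boxes[:, 2])
    y2 = np.minimum(box[3], boxes[:, 3])
    inter = np.clip(x2 - x1, 0, None) * np.clip(y2 - y1, 0, None)
    area = lambda b: (b[..., 2] - b[..., 0]) * (b[..., 3] - b[..., 1])
    return inter / (area(box) + area(boxes) - inter + 1e-9)

def soft_nms(boxes, scores, sigma=0.5, score_thresh=0.001):
    scores = scores.astype(float).copy()  # scores are mutated below
    idxs = np.arange(len(scores))
    keep = []
    while len(idxs) > 0:
        # The "re-sort": scores were decayed last iteration, so the
        # current maximum must be found again each time.
        top = np.argmax(scores[idxs])
        i = idxs[top]
        keep.append(int(i))
        idxs = np.delete(idxs, top)
        if len(idxs) == 0:
            break
        # Gaussian decay: heavily overlapping boxes lose score instead
        # of being removed outright as in hard NMS.
        ious = iou(boxes[i], boxes[idxs])
        scores[idxs] *= np.exp(-(ious ** 2) / sigma)
        idxs = idxs[scores[idxs] > score_thresh]
    return keep
```

In hard NMS the boxes can be sorted once up front; here the selection runs inside the loop, which is where the extra cost comes from.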