pytorch-ssd
inference speed
What's the approximate inference speed of MobileNet v2 SSD on CPU? The paper says 200 ms, but I measured around 1 s per image. Is there any way to speed up inference?
The Caffe2 runtime is faster than PyTorch because it fuses batchnorm and conv layers. It's less than 200 ms using Caffe2 on my PC, and about 1 s on an 800 MHz ARM CPU.
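The batchnorm/conv fusion mentioned above folds the BN statistics into the conv's weights and bias, so inference does one conv instead of conv followed by a normalization pass (PyTorch exposes a built-in version via `torch.quantization.fuse_modules`). A minimal NumPy sketch of the fold, assuming a conv with per-output-channel BN parameters (all names here are illustrative, not from this repo):

```python
import numpy as np

def fuse_conv_bn(w, b, gamma, beta, mean, var, eps=1e-5):
    """Fold BatchNorm into the preceding conv.

    BN computes: y = gamma * (conv(x) - mean) / sqrt(var + eps) + beta
    which is equivalent to a conv with rescaled weights and shifted bias.
    w: conv weights, shape (C_out, C_in, kH, kW); b: bias, shape (C_out,)
    gamma, beta, mean, var: BN parameters, each shape (C_out,)
    """
    scale = gamma / np.sqrt(var + eps)           # per-output-channel scale
    w_fused = w * scale.reshape(-1, 1, 1, 1)     # rescale each output filter
    b_fused = (b - mean) * scale + beta          # fold shift into the bias
    return w_fused, b_fused
```

After fusion, the fused conv produces the same output as conv + BN, but with one fewer memory pass per layer, which is part of why the Caffe2 runtime is faster here.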
Is there any way to increase CPU inference speed? MobileNet v2 SSD in TensorFlow also takes about 200 ms. What are the speed bottlenecks in the PyTorch version? Thanks.
@cwlinghk have you got the reason?