Pytorch_Retinaface
Inference time is longer than the 5-6 ms reported by the author, and different input sizes give the same inference time.
When I run detect.py with the default image size of 624x1024, the inference time is 14 ms. When I resize the image to 256x256, the inference time is also 14 ms. So I have two questions: first, why is the inference time longer than the 5-6 ms reported by the author? Second, why is the inference time the same for different input sizes? I tested on a Tesla P100 with pytorch=1.2.0 and torchvision=0.4.0. I hope to get a reply, thanks!
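One common cause of both symptoms is that CUDA kernel launches are asynchronous: if the clock is stopped before `torch.cuda.synchronize()`, the measured time reflects kernel launch overhead rather than the actual forward pass, which can make all input sizes appear equally fast. Below is a minimal timing sketch (using a hypothetical stand-in model, not the actual RetinaFace network) showing a warm-up phase and explicit synchronization:

```python
import time
import torch
import torch.nn as nn

# Hypothetical stand-in model; substitute the actual RetinaFace network here.
model = nn.Sequential(nn.Conv2d(3, 8, 3, padding=1), nn.ReLU()).eval()
device = "cuda" if torch.cuda.is_available() else "cpu"
model = model.to(device)

def timed_inference(model, size, n_warmup=5, n_runs=20):
    x = torch.randn(1, 3, *size, device=device)
    with torch.no_grad():
        # Warm-up runs: cuDNN autotuning and one-time setup costs
        # otherwise inflate the first measured iterations.
        for _ in range(n_warmup):
            model(x)
        if device == "cuda":
            torch.cuda.synchronize()  # drain queued kernels before starting the clock
        t0 = time.perf_counter()
        for _ in range(n_runs):
            model(x)
        if device == "cuda":
            torch.cuda.synchronize()  # CUDA calls are async; sync before stopping the clock
    return (time.perf_counter() - t0) / n_runs * 1000.0  # ms per forward pass

for size in [(256, 256), (624, 1024)]:
    print(f"{size}: {timed_inference(model, size):.2f} ms")
```

With synchronization in place, the two input sizes should report clearly different times on a P100 if the GPU is actually doing proportionally more work.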
the same problem as you . have you worked it out??
With mobilenet-0.25 on an i7 CPU, the forward pass takes about 60 ms, far from the 17.2 ms CPU-1 time the author reports for VGA images. Have you run into this as well?