
Ran some profiling on a GTX1080Ti

joaqo opened this issue 6 years ago · 4 comments

Hey, I modified the eval script a bit to run some predictions and measure FPS.

I was getting much lower FPS than what is quoted in https://github.com/eriklindernoren/PyTorch-YOLOv3, so I decided to do some profiling. The core of the model (the part that runs on the GPU) runs at about 90 FPS, which is great, but when I add the rest of the pipeline, such as NMS and input image re-scaling, the FPS drops to around 15.
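For context, here is a minimal timing sketch (not the actual modified eval script) showing how the GPU-only number is usually measured; the tiny `nn.Sequential` is just a stand-in for the real darknet backbone:

```python
import time
import torch
import torch.nn as nn

def avg_seconds(fn, n_iters=100, warmup=10):
    """Average wall-clock seconds per call, synchronizing CUDA so queued GPU work is counted."""
    for _ in range(warmup):
        fn()
    if torch.cuda.is_available():
        torch.cuda.synchronize()
    start = time.time()
    for _ in range(n_iters):
        fn()
        if torch.cuda.is_available():
            torch.cuda.synchronize()
    return (time.time() - start) / n_iters

device = "cuda" if torch.cuda.is_available() else "cpu"
# Stand-in for the repo's backbone; swap in the actual YOLOv3 model to reproduce the numbers above.
model = nn.Sequential(nn.Conv2d(3, 32, 3, padding=1), nn.ReLU()).to(device).eval()
x = torch.randn(1, 3, 416, 416, device=device)

with torch.no_grad():
    t = avg_seconds(lambda: model(x))  # times only the forward pass on the GPU
print(f"model-only: {1.0 / t:.1f} FPS")

# The much lower end-to-end figure comes from timing the whole loop instead:
# image decode -> CPU resize/normalize -> forward pass -> NMS on the raw detections.
```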

Am I doing something wrong? Have you measured the FPS on your setup?

Cheers!

joaqo · Jun 26 '18

Hi @joaqo, the quoted FPS only measures the backbone, and the input size is 256x256. Please review the paper and the other GitHub repo. (screenshots attached)

Anyway, I have added an FPS test script in the test folder. You can use it to measure FPS at different batch sizes.
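The script itself isn't reproduced in this thread; roughly, a per-batch-size sweep looks like the sketch below (again with a dummy stand-in for the real network, and 256x256 inputs as noted above):

```python
import time
import torch
import torch.nn as nn

device = "cuda" if torch.cuda.is_available() else "cpu"
# Dummy stand-in for the repo's backbone; replace with the actual model to get meaningful numbers.
model = nn.Sequential(nn.Conv2d(3, 32, 3, padding=1), nn.ReLU()).to(device).eval()

with torch.no_grad():
    for batch_size in range(1, 10):
        x = torch.randn(batch_size, 3, 256, 256, device=device)
        for _ in range(10):          # warm-up runs so CUDA init doesn't skew the timing
            model(x)
        if device == "cuda":
            torch.cuda.synchronize()
        n_iters = 50
        start = time.time()
        for _ in range(n_iters):
            model(x)
        if device == "cuda":
            torch.cuda.synchronize()
        per_image = (time.time() - start) / (n_iters * batch_size)
        print(f"Batch_Size: {batch_size}, Inference_Time: {per_image:.5f} s/image, FPS: {1.0 / per_image:.2f}")
```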

BobLiu20 · Jun 28 '18

The FPS of the full YOLOv3 pipeline from the paper: (screenshot attached)

BobLiu20 · Jun 28 '18

Oh thanks a lot, I will give the FPS script a run on my GPU and report back in case you want to post some benchmarks with different GPUs!

joaqo · Jun 29 '18

@BobLiu20 I ran the eval script on a GTX 1080, and the output is better than the paper's?

| Batch size | Inference time (s/image) | FPS |
|---|---|---|
| 1 | 0.02235 | 44.75 |
| 2 | 0.01832 | 54.59 |
| 3 | 0.01636 | 61.13 |
| 4 | 0.01537 | 65.07 |
| 5 | 0.01536 | 65.11 |
| 6 | 0.01509 | 66.29 |
| 7 | 0.01504 | 66.50 |
| 8 | 0.01471 | 68.00 |
| 9 | 0.01508 | 66.33 |

XiaXuehai · Jul 24 '18