TensorRT-SSD
Related to performance gain
Hi,
I would like to know the performance gain for SSD when using TensorRT.
SSD + VGG16: inference time 80-100 ms
SSD + MobileNetV1: inference time 15-20 ms
Hi,
Thanks for the reply! I am working with a Tesla K80 GPU on x86. I have experimented with TensorRT using different network architectures. For Faster R-CNN (from the samples), there is a performance gain just from deploying it with TensorRT. For the SSD architecture, however, I couldn't observe any such gain over the Caffe deployment.
I have also tested TensorRT with the VGG architecture, and again I couldn't observe any performance gain: the TensorRT and Caffe deployments take almost the same time, about 25 ms.
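(For context, one common way to time engine execution is something like the sketch below; it assumes the pre-7.x TensorRT C++ API with implicit batch, and that `context` and `buffers` are already set up. The warm-up iterations and the `cudaDeviceSynchronize` calls matter for getting stable numbers.)

```cpp
#include <chrono>
#include <iostream>
#include <cuda_runtime.h>
#include "NvInfer.h"

// Rough timing loop; `context` and `buffers` (device pointers bound to the
// engine's inputs/outputs) are assumed to be created elsewhere.
double averageInferenceMs(nvinfer1::IExecutionContext* context,
                          void** buffers, int batchSize, int iters = 100)
{
    // Warm-up runs so one-time CUDA/cuDNN initialization is excluded
    for (int i = 0; i < 10; ++i)
        context->execute(batchSize, buffers);
    cudaDeviceSynchronize();

    auto start = std::chrono::high_resolution_clock::now();
    for (int i = 0; i < iters; ++i)
        context->execute(batchSize, buffers);
    cudaDeviceSynchronize();  // wait for the GPU before stopping the clock
    auto end = std::chrono::high_resolution_clock::now();

    return std::chrono::duration<double, std::milli>(end - start).count() / iters;
}
```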
If possible, could you please share your baseline numbers, i.e., for SSD without TensorRT, on your machine?
Thanks, Jagadeesh
Hi, on my Jetson TX2 I got roughly a 2x speedup by using TensorRT with the SSD-VGG16 architecture. Maybe there is a problem in the implementation of your TensorRT code.
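(For reference, a minimal sketch of the usual Caffe-to-TensorRT build path, assuming the old TensorRT 3/4 C++ API; the file names and the `detection_out` output blob are placeholders. Note that SSD's custom layers such as PriorBox, Normalize, and DetectionOutput are not handled by the stock Caffe parser and need plugin implementations, which this sketch omits. On TX2, much of the speedup typically comes from enabling half-precision mode.)

```cpp
#include <iostream>
#include "NvInfer.h"
#include "NvCaffeParser.h"

using namespace nvinfer1;
using namespace nvcaffeparser1;

// Minimal logger required by the builder
class Logger : public ILogger {
    void log(Severity severity, const char* msg) override {
        if (severity != Severity::kINFO) std::cout << msg << std::endl;
    }
} gLogger;

ICudaEngine* buildEngine()
{
    IBuilder* builder = createInferBuilder(gLogger);
    INetworkDefinition* network = builder->createNetwork();
    ICaffeParser* parser = createCaffeParser();

    // Parse the deploy prototxt and weights (placeholder file names)
    const IBlobNameToTensor* blobs = parser->parse(
        "ssd_deploy.prototxt", "ssd.caffemodel", *network, DataType::kFLOAT);
    network->markOutput(*blobs->find("detection_out"));  // placeholder blob name

    builder->setMaxBatchSize(1);
    builder->setMaxWorkspaceSize(1 << 26);
    if (builder->platformHasFastFp16())
        builder->setHalf2Mode(true);  // FP16 kernels; this is where TX2 gains most

    ICudaEngine* engine = builder->buildCudaEngine(*network);
    network->destroy();
    parser->destroy();
    builder->destroy();
    return engine;
}
```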
@jagadeesh09 Would it be possible for you to give some pointers on implementing SSD with TensorRT?