TensorRT-SSD

Related to performance gain

Open jagadeesh09 opened this issue 6 years ago • 4 comments

Hi,

I would like to know the performance gain for SSD using TensorRT.

jagadeesh09 avatar May 10 '18 09:05 jagadeesh09

ssd+vgg16: inference time 80-100 ms
ssd+Mobilenetv1: inference time 15-20 ms

chenzhi1992 avatar May 11 '18 00:05 chenzhi1992

Hi,

Thanks for the reply! I am working with a Tesla K80 GPU on x86. I have experimented with TensorRT using different network architectures. For Faster R-CNN (from the samples), there is a performance gain just from deploying it with TensorRT. For the SSD architecture, however, I couldn't observe any such gain over the Caffe deployment.

I have also tested TensorRT on the VGG architecture, and again saw no performance gain: the TensorRT and Caffe deployments take almost the same time, about 25 ms.
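One common reason a TensorRT deployment appears no faster than the baseline is the timing harness itself: GPU kernel launches are asynchronous, and the first few runs include one-time initialization costs, so unsynchronized or single-shot measurements can hide a real speedup. A minimal sketch of a fairer harness is below; the `infer` and `sync` callables are placeholders for whichever framework you are timing (e.g. a TensorRT execution call and a CUDA stream synchronize), not any specific API from this thread.

```python
import time

def benchmark_ms(infer, n_warmup=10, n_runs=100, sync=lambda: None):
    """Average per-run latency in milliseconds for an inference callable.

    - Warm-up runs absorb one-time costs (lazy allocation, autotuning).
    - sync() is called before reading the clock on both sides of the timed
      loop, so asynchronous GPU work is actually finished when we measure.
    """
    for _ in range(n_warmup):
        infer()
    sync()
    start = time.perf_counter()
    for _ in range(n_runs):
        infer()
    sync()  # wait for all queued work before stopping the clock
    return (time.perf_counter() - start) / n_runs * 1000.0
```

Comparing two deployments with the same harness, batch size, and input resolution is what makes numbers like the 25 ms figure above comparable.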

If possible, could you please share your numbers for SSD without TensorRT on your machine?

Thanks, Jagadeesh

jagadeesh09 avatar May 11 '18 06:05 jagadeesh09

Hi, on my Jetson TX2, I got a 2x speedup by using TensorRT on the SSD-VGG16 architecture. Maybe there is a problem with your TensorRT implementation.

chenzhi1992 avatar May 11 '18 10:05 chenzhi1992

@jagadeesh09 Would it be possible for you to give some pointers on implementing SSD with TensorRT?

Optimus1072 avatar May 17 '18 09:05 Optimus1072