
TensorRT for EfficientNet model

aditya1709 opened this issue · 11 comments

I tried converting the EfficientNet-b4 model to TensorRT for optimized inference. It looks like some layers don't have converters implemented, and the conversion throws: `AttributeError: 'Tensor' object has no attribute '_trt'`. Has anyone had luck with TensorRT, or is there another way of achieving this?
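
For reference, a minimal sketch of the kind of conversion that triggers this, assuming the torch2trt package (the `_trt` attribute error is what torch2trt raises when it hits an op with no registered converter); b4's default 380×380 input size is an assumption:

```python
# Minimal sketch, assuming torch2trt; b4's default input size is 380x380.
import torch
from torch2trt import torch2trt
from efficientnet_pytorch import EfficientNet

model = EfficientNet.from_pretrained('efficientnet-b4').eval().cuda()
x = torch.randn(1, 3, 380, 380).cuda()

# Raises AttributeError: 'Tensor' object has no attribute '_trt'
# for any op that has no registered torch2trt converter
model_trt = torch2trt(model, [x])
```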

aditya1709 avatar Oct 02 '19 02:10 aditya1709

Hi @aditya1709, have you solved this problem? I am also trying to convert the model with TensorRT 4.1, but issue #47 claims that only TensorRT > 5.1.5 works. Any updates?

HoracceFeng avatar Oct 10 '19 09:10 HoracceFeng

@aditya1709 Depthwise convolution runs better on CPU; Nvidia has not optimized it on GPU.

suman-19 avatar Oct 25 '19 08:10 suman-19

Hey! I'm trying to convert the model to TensorRT (ver. 5.1.6.1), and surprisingly everything works in the conversion process, but when I run inference, the model saved from the .pth gives me only zeros (no classes detected).

amlarraz avatar Oct 25 '19 18:10 amlarraz

Is it possible to do the conversion with TensorRT 5.0.2.6? I have issues converting the depthwise convolution layer. (The depthwise conv layer is supported in the latest version.)

ViswanathaReddyGajjala avatar Nov 03 '19 09:11 ViswanathaReddyGajjala

> Hey! I'm trying to convert the model to TensorRT (ver. 5.1.6.1) and surprisingly everything works in the conversion process...

Did you resolve it?

korabelnikov avatar Dec 22 '19 00:12 korabelnikov

Nope... but with the update of the repo (the memory usage optimization) I don't need TensorRT, because the model now fits in the Jetson Nano's 4 GB of RAM.

amlarraz avatar Dec 22 '19 07:12 amlarraz

Hey! Does this issue still persist with TensorRT conversion of EfficientNet? I tried converting EfficientNet-b7 to TensorRT (ver 7.0.0) and everything works perfectly during conversion. It's only when I start inference that I see a sharp drop in accuracy, and the inferences don't match the PyTorch model's. Is anyone else still facing the same issue?

pshwetank avatar Feb 19 '21 05:02 pshwetank

@pshwetank I'm facing the same problem. Did you solve it?

xuatpham avatar May 25 '21 04:05 xuatpham

@xuatpham Yeah... I feel that was mostly because of the preprocessing pipeline and less related to the TensorRT model itself. In most cases you will not see a drop in performance if the conversion completed without errors. Try to compare the outputs of the torch model and the TRT model on the same set of inputs. Chances are they will match if the preprocessing pipeline is the same.
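
A minimal sketch of that sanity check, comparing the PyTorch model against an exported ONNX model via onnxruntime (the `efficientnet-b7.onnx` path is a placeholder, and b7's default 600×600 input size is an assumption):

```python
# Hedged sketch: check PyTorch vs. ONNX Runtime outputs on the same input.
# 'efficientnet-b7.onnx' is a placeholder path; 600x600 is b7's default size.
import numpy as np
import torch
import onnxruntime as ort
from efficientnet_pytorch import EfficientNet

model = EfficientNet.from_pretrained('efficientnet-b7').eval()
x = torch.randn(1, 3, 600, 600)

with torch.no_grad():
    torch_out = model(x).numpy()

sess = ort.InferenceSession('efficientnet-b7.onnx')
onnx_out = sess.run(None, {sess.get_inputs()[0].name: x.numpy()})[0]

# If the preprocessing is identical, the raw logits should agree closely
print('max abs diff:', np.abs(torch_out - onnx_out).max())
```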

pshwetank avatar May 25 '21 06:05 pshwetank

@pshwetank Thanks for your quick response, but it seems I can't figure out what's going on. Can you share the script you used to convert from EfficientNet to TRT and reload it for inference? Thanks a lot!

xuatpham avatar May 25 '21 06:05 xuatpham

I took the usual path: PyTorch model → ONNX model → TRT model. Once that is done, you can use your PyTorch preprocessing and convert the torch tensors into numpy arrays to feed them to the TRT or ONNX model. If you use the same preprocessing, you should not have any issues. I don't have specific scripts for the purpose, but you can easily use onnxruntime to run the ONNX model and Torch-TRT to run TRT models.
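
For the first hop, a minimal export sketch (the output filename, opset version, and b7's 600×600 input size are assumptions; `set_swish(memory_efficient=False)` is this repo's switch to the plain Swish that traces cleanly for export):

```python
# Minimal export sketch for the PyTorch -> ONNX hop; filename, opset,
# and b7's 600x600 input size are assumptions.
import torch
from efficientnet_pytorch import EfficientNet

model = EfficientNet.from_pretrained('efficientnet-b7').eval()
model.set_swish(memory_efficient=False)  # plain Swish exports/traces cleanly

dummy = torch.randn(1, 3, 600, 600)
torch.onnx.export(model, dummy, 'efficientnet-b7.onnx',
                  opset_version=11,
                  input_names=['input'], output_names=['output'])
```

The resulting ONNX file can then be built into a TRT engine (for example with Nvidia's trtexec tool) and compared against the original model on the same inputs, as in the sketch above.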

pshwetank avatar May 26 '21 05:05 pshwetank