
tensorrt

hmmlencat opened this issue 2 years ago · 3 comments

Hello, I have followed your steps and converted the baseline stark_st2 model to ONNX successfully. Now I want to convert the ONNX model to a TensorRT model. Can you give me some advice? If possible, please list your relevant environment (TensorRT/CUDA/etc.). Thank you sincerely!

hmmlencat avatar Aug 12 '21 12:08 hmmlencat

Hi, after converting the original PyTorch model to an ONNX model, you don't need to further convert it to a TensorRT model. Actually, onnxruntime has a TensorRT execution provider (ONNXRUNTIME/execution-providers/TensorRT). But to use this feature, you have to compile onnxruntime from source, and this process is not easy. If you are interested in using the onnxruntime TensorRT execution provider, we can discuss further.
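A minimal sketch of what using the TensorRT execution provider looks like, assuming an onnxruntime build with TensorRT support and an exported model file (the filename `stark_st2.onnx` is illustrative):

```python
# Sketch: run a STARK ONNX model through onnxruntime, preferring the
# TensorRT execution provider when the installed build offers it.

def pick_providers(available):
    """Return the preferred providers (TensorRT > CUDA > CPU), keeping
    only those that the installed onnxruntime build actually exposes."""
    preferred = [
        "TensorrtExecutionProvider",
        "CUDAExecutionProvider",
        "CPUExecutionProvider",
    ]
    return [p for p in preferred if p in available]


def load_session(model_path):
    """Create an InferenceSession; not called here, since it requires an
    onnxruntime build compiled from source with TensorRT enabled."""
    import onnxruntime as ort

    providers = pick_providers(ort.get_available_providers())
    return ort.InferenceSession(model_path, providers=providers)
```

If the TensorRT provider is missing from `ort.get_available_providers()`, the session silently falls back to CUDA or CPU, which is why checking availability first is useful.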

MasterBin-IIAU avatar Aug 13 '21 01:08 MasterBin-IIAU

Hi @MasterBin-IIAU

Thanks for the amazing work. I am interested in converting the original stark_st2 model to ONNX and then, preferably, to TensorRT. I couldn't find instructions for those steps. Any pointers on this would be really helpful.

Thanks in advance.

trathpai avatar Nov 23 '21 15:11 trathpai

@trathpai Hi, thanks for the appreciation of our work. ONNXRuntime has a TensorRT execution provider. This link (https://onnxruntime.ai/docs/execution-providers/TensorRT-ExecutionProvider.html) may help. But in my experience, the TensorRT execution provider would not bring much speed improvement compared with the CUDA execution provider.
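For completeness, the linked docs also let you pass options to the TensorRT execution provider, e.g. to enable FP16 and cache built engines so repeated runs skip the slow engine build. A sketch under those assumptions (option names follow the onnxruntime TensorRT EP documentation; the cache path is illustrative):

```python
# Sketch: build the (name, options) entry that onnxruntime accepts for
# configuring the TensorRT execution provider.

def trt_provider_entry(fp16=True, cache_dir="./trt_cache"):
    """Return a ("TensorrtExecutionProvider", options) pair with FP16
    and engine caching enabled; keys are documented TensorRT EP options."""
    options = {
        "trt_fp16_enable": fp16,
        "trt_engine_cache_enable": True,
        "trt_engine_cache_path": cache_dir,
    }
    return ("TensorrtExecutionProvider", options)


def load_session(model_path):
    """Not called here: requires an onnxruntime build with TensorRT."""
    import onnxruntime as ort

    providers = [
        trt_provider_entry(),
        "CUDAExecutionProvider",
        "CPUExecutionProvider",
    ]
    return ort.InferenceSession(model_path, providers=providers)
```

Engine caching matters in practice because TensorRT builds an optimized engine per input shape on first run, which can take minutes for a transformer tracker like STARK.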

MasterBin-IIAU avatar Nov 25 '21 14:11 MasterBin-IIAU