
Deployment on TX2

Open ahangchen opened this issue 3 years ago • 5 comments

Hi Lingtong, the performance of FastFlowNet is surprising. I want to deploy it on TX2, but the model contains a custom layer (the correlation layer), so it cannot be converted to ONNX directly. I also tried torch2tensorRT, but it fails on this layer. How do you deploy it on TX2? Can you share the model-conversion code and the TensorRT test code for TX2?

ahangchen avatar Jul 03 '21 06:07 ahangchen

Thanks for your interest in my work. Currently, I cannot share the conversion and deployment code for the NVIDIA TX2. Some guidance: since the correlation layer is a custom layer that stock TensorRT does not support, I add it as a plugin and convert the PyTorch model to TensorRT directly, without using any conversion tools.

ltkong218 avatar Jul 05 '21 09:07 ltkong218
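For context, the operation that blocks the ONNX export is the correlation (cost-volume) layer. A minimal pure-Python sketch of what such a layer computes (assuming FlowNet-style dot-product correlation over a small search window; the function name and the tiny shapes are illustrative, not FastFlowNet's actual implementation):

```python
def correlation(f1, f2, max_disp=1):
    """Hedged sketch of a correlation/cost-volume op.

    f1, f2: feature maps as nested lists with shape [C][H][W].
    Returns a cost volume of shape [(2*max_disp+1)**2][H][W], where each
    output plane is the channel-averaged dot product between f1 and a
    shifted copy of f2 (out-of-bounds positions contribute zero).
    """
    C, H, W = len(f1), len(f1[0]), len(f1[0][0])
    out = []
    for dy in range(-max_disp, max_disp + 1):
        for dx in range(-max_disp, max_disp + 1):
            plane = [[0.0] * W for _ in range(H)]
            for y in range(H):
                for x in range(W):
                    y2, x2 = y + dy, x + dx
                    if 0 <= y2 < H and 0 <= x2 < W:
                        plane[y][x] = sum(
                            f1[c][y][x] * f2[c][y2][x2] for c in range(C)
                        ) / C
            out.append(plane)
    return out
```

Because this op indexes one feature map with per-pixel offsets into the other, it has no single builtin TensorRT layer equivalent, which is why it is typically wrapped as a plugin.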

Can you provide the names of the toolkits you used to convert the PyTorch model to TRT? I guess you convert torch to ONNX with the basic PyTorch API and then load the ONNX model in TensorRT?

ahangchen avatar Jul 08 '21 01:07 ahangchen

I do not use ONNX as an intermediate step but define a TRT model directly.

ltkong218 avatar Jul 14 '21 02:07 ltkong218

> I do not use onnx as an intermediate step but define a TRT model directly.

How do you define a TRT model directly? Via torch2tensorRT?

jucic avatar Oct 27 '21 07:10 jucic

@ahangchen @jucic did either of you manage to export FastFlowNet to TensorRT? Also, which version of TensorRT did you use (TensorRT 6 or 7)?

anenbergb avatar Dec 14 '21 01:12 anenbergb