
Set the engine input to be a Pytorch Tensor

Open YoushaaMurhij opened this issue 3 years ago • 1 comment

It is not clear how to specify a PyTorch tensor as an input to the engine. Is that possible here, or is only NumPy input accepted? I want to convert only a part of my model to TensorRT, so the input would be a tensor, and detaching the tensor to the CPU just to convert it to NumPy does not seem like the right approach. Any suggestions? Thanks!

YoushaaMurhij avatar Mar 28 '21 14:03 YoushaaMurhij

No. TensorRT is unaware of PyTorch's tensor definition.

In general, we should not assume compatibility between different projects unless it is explicitly declared or you know that the tensor formats/layouts are compatible. In your case, NumPy can bridge PyTorch and TensorRT. If you don't want the CPU involved, you need to make sure that the memory shared by PyTorch and TensorRT uses the same layout, e.g. a linear (contiguous) layout, and then you can pass the GPU memory pointer directly between the two.
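
For illustration, here is a minimal sketch of that pointer-passing approach using the TensorRT Python API. The engine, binding order, and tensor shapes below are assumptions for the example; adapt them to your own model:

```python
import tensorrt as trt
import torch

# Assumes `engine` is an already-deserialized trt.ICudaEngine with one input
# and one output binding, both with static shapes (shapes here are illustrative).
context = engine.create_execution_context()

# Keep both buffers on the GPU as contiguous tensors so their memory layout
# matches the linear layout TensorRT expects for its bindings.
input_tensor = torch.randn(1, 3, 224, 224, device="cuda").contiguous()
output_tensor = torch.empty(1, 1000, device="cuda")

# Hand TensorRT the raw device pointers -- no .cpu()/.numpy() round trip.
bindings = [int(input_tensor.data_ptr()), int(output_tensor.data_ptr())]
stream = torch.cuda.current_stream()
context.execute_async_v2(bindings=bindings, stream_handle=stream.cuda_stream)
stream.synchronize()
```

With this pattern the rest of the PyTorch model can keep producing/consuming `input_tensor` and `output_tensor` on the GPU, as long as they stay contiguous and their dtypes match the engine's bindings.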

zhenhuaw-me avatar Jun 20 '22 05:06 zhenhuaw-me