TensorRT
Question: Example of using IOutputAllocator with enqueueV3?
Description
Based on my understanding, if a layer has data-dependent output shapes, I need to use the enqueueV3 function and set the input/output tensor bindings.
But what about plugins? Say I implement a plugin whose output tensor shape is data-dependent (e.g. something like a NonZero layer). How should I specify the output shape of this plugin? I think I somehow need to use the IOutputAllocator interface, but I am not sure how to "register" my OutputAllocator with the plugin, or how to use the plugin's input tensors to compute the actual output tensor shape.
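For context, here is my current understanding of the non-plugin case, i.e. a data-dependent *network* output consumed via enqueueV3. This is only a sketch assuming TensorRT >= 8.5 (where setOutputAllocator/enqueueV3 exist); the tensor names "input"/"output" and the pointer variables are placeholders, not from any real model.

```cpp
#include <NvInfer.h>
#include <cuda_runtime_api.h>

// Sketch: an IOutputAllocator that owns one device buffer for a single
// data-dependent output tensor and records the shape TensorRT reports.
class DynamicOutputAllocator : public nvinfer1::IOutputAllocator
{
public:
    // TensorRT calls this with the size it needs for the output; grow the
    // buffer if the current one is too small and return the device pointer.
    void* reallocateOutput(char const* /*tensorName*/, void* /*currentMemory*/,
                           uint64_t size, uint64_t /*alignment*/) noexcept override
    {
        if (size > mCapacity)
        {
            cudaFree(mBuffer);
            if (cudaMalloc(&mBuffer, size) != cudaSuccess)
            {
                mBuffer = nullptr;
                mCapacity = 0;
                return nullptr; // signals allocation failure to TensorRT
            }
            mCapacity = size;
        }
        return mBuffer;
    }

    // TensorRT calls this once the actual (data-dependent) shape is known.
    void notifyShape(char const* /*tensorName*/, nvinfer1::Dims const& dims) noexcept override
    {
        mShape = dims;
    }

    void* data() const noexcept { return mBuffer; }
    nvinfer1::Dims shape() const noexcept { return mShape; }

    ~DynamicOutputAllocator() override { cudaFree(mBuffer); }

private:
    void* mBuffer{nullptr};
    uint64_t mCapacity{0};
    nvinfer1::Dims mShape{};
};
```

Usage (again a sketch; `context`, `inputDevicePtr`, and `stream` are assumed to already exist):

```cpp
DynamicOutputAllocator alloc;
context->setInputTensorAddress("input", inputDevicePtr);
// For a data-dependent output, register the allocator instead of calling
// setTensorAddress with a fixed buffer.
context->setOutputAllocator("output", &alloc);
context->enqueueV3(stream);
cudaStreamSynchronize(stream);
// After synchronization, alloc.shape() holds the actual output dims and
// alloc.data() points to the device buffer TensorRT wrote into.
```

What I don't understand is how this mechanism interacts with a plugin's output-shape computation.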
Any help is appreciated.
Environment
TensorRT Version:
NVIDIA GPU:
NVIDIA Driver Version:
CUDA Version:
CUDNN Version:
Operating System:
Python Version (if applicable):
Tensorflow Version (if applicable):
PyTorch Version (if applicable):
Baremetal or Container (if so, version):
Relevant Files
Model link:
Steps To Reproduce
Commands or scripts:
Have you tried the latest release?:
Can this model run on other frameworks? For example run ONNX model with ONNXRuntime (polygraphy run <model.onnx> --onnxrt):
We still don't support data-dependent output shapes in plugins. @samurdhikaru can share more info :-)
Please check the latest NonZero plugin sample at https://github.com/NVIDIA/TensorRT/tree/release/10.0/samples/sampleNonZeroPlugin
I will close this. Thanks, all!