Zhenhua Wang

Results 56 comments of Zhenhua Wang

No. TensorRT is unaware of PyTorch's tensor definition. In general, we should not assume compatibility between different projects unless it is explicitly declared or you know that the tensor format/layout...

As @pranavm-nvidia mentioned [above](https://github.com/NVIDIA/TensorRT/issues/2225#issuecomment-1209713235), `cuBLAS initialization failed: 3` is likely a setup issue. CUDA 11.x requires CUDA driver >= 450.80.02* according to the [CUDA compatibility doc](https://docs.nvidia.com/deploy/cuda-compatibility/index.html#minor-version-compatibility), while your setup...
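As a quick sanity check, you can compare the driver version reported by `nvidia-smi` against the documented minimum. A minimal sketch (the helper name is mine; `450.80.02` is the CUDA 11.x minor-version-compatibility floor from the doc linked above):

```python
def driver_at_least(installed: str, required: str = "450.80.02") -> bool:
    """Compare dotted driver version strings numerically, e.g. '515.65.01'."""
    as_tuple = lambda v: tuple(int(part) for part in v.split("."))
    return as_tuple(installed) >= as_tuple(required)

# A driver older than the CUDA 11.x floor fails the check.
print(driver_at_least("450.51.06"))  # → False
print(driver_at_least("515.65.01"))  # → True
```

String comparison would get versions like `450.9` vs `450.80` wrong, hence the numeric tuple comparison.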

Glad to hear that! Closing this issue now. Please let us know if you run into any further issues.

@larrygoyeau Thank you for reporting this issue! As the log shows, TensorRT requires the scale to be positive. Could you please check whether the scale is positive in your model?
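If you want to locate the offending values programmatically, a minimal sketch (the function name is mine; `scales` stands for whatever scale values you extract from your model's quantization initializers):

```python
def find_nonpositive_scales(scales):
    """Return (index, value) for every scale that is not strictly positive,
    since TensorRT rejects zero or negative quantization scales."""
    return [(i, s) for i, s in enumerate(scales) if s <= 0]

print(find_nonpositive_scales([0.05, 0.0, -0.1, 0.2]))  # → [(1, 0.0), (2, -0.1)]
```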

@EugeneLesiv Could you please confirm that the NMS model is generated correctly? It seems attributes such as topK are missing in the NMS node.

```
Node "": type "BatchedNMSDynamic_TRT",...
```
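To see which attributes the node is missing, you can diff its attribute names against the set the plugin expects. A sketch (the `REQUIRED_NMS_ATTRS` set below is an assumption based on the BatchedNMS plugin's documented fields; verify it against your plugin version):

```python
REQUIRED_NMS_ATTRS = {
    "topK", "keepTopK", "scoreThreshold", "iouThreshold",
    "numClasses", "shareLocation", "backgroundLabelId",
}

def missing_nms_attrs(node_attr_names):
    """Return the required BatchedNMS attributes absent from the node."""
    return sorted(REQUIRED_NMS_ATTRS - set(node_attr_names))

# A node carrying only the thresholds is missing everything else:
print(missing_nms_attrs(["scoreThreshold", "iouThreshold"]))
```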

@EugeneLesiv Hi, have you had a chance to update the model? Thanks!

No feedback from the user on the request for a valid model for 3 weeks; closing. @EugeneLesiv Please feel free to reopen if this is still an issue for you....

@hxyucas Thanks for reporting this issue. When you downloaded the source, did you also download the onnx submodule? For example:

```sh
git clone https://github.com/onnx/onnx-tensorrt
cd onnx-tensorrt
git submodule update --init --recursive...
```

Use [`set_flag`](https://docs.nvidia.com/deeplearning/tensorrt/api/python_api/infer/Core/BuilderConfig.html#tensorrt.IBuilderConfig.set_flag) with [`FP16`](https://docs.nvidia.com/deeplearning/tensorrt/api/python_api/infer/Core/BuilderConfig.html#tensorrt.BuilderFlag).
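Put together, enabling FP16 on the builder config looks roughly like this (a sketch assuming a working TensorRT Python install; see the linked API docs for the full build flow):

```python
import tensorrt as trt

logger = trt.Logger(trt.Logger.WARNING)
builder = trt.Builder(logger)
config = builder.create_builder_config()

# Allow TensorRT to pick FP16 kernels where they are faster and accurate enough.
config.set_flag(trt.BuilderFlag.FP16)

# Optional: confirm the flag is set before building the engine.
assert config.get_flag(trt.BuilderFlag.FP16)
```

FP16 is a permission, not a command: TensorRT still falls back to FP32 kernels for layers where half precision is slower or unsupported.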

@reymondzzzz You would need to implement the conversion from the ONNX node's attributes to your plugin's attributes yourself; nothing else knows how to convert them.