
trt conversion

SebastianJanampa opened this issue 1 year ago · 6 comments

Hello,

Thanks for the incredible work you've done. I tried to convert the model to TensorRT FP16, but I ran into a segmentation fault. If I remove --fp16, the conversion works (I know that builds an FP32 engine instead).

I just wanted to see how much faster the model becomes with TensorRT (I am new to this, so I was curious), and then I hit another issue:

[12/11/2024-22:29:23] [TRT] [E] IRuntime::deserializeCudaEngine: Error Code 1: Serialization (Serialization assertion safeVersionRead == kSAFE_SERIALIZATION_VERSION failed.Version tag does not match. Note: Current Version: 0, Serialized Engine Version: 239)

 File "/home/sebastian/D-FINE/tools/benchmark/trt_benchmark.py", line 44, in __init__
    self.context = self.engine.create_execution_context()
                   ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
AttributeError: 'NoneType' object has no attribute 'create_execution_context'
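
From the traceback, deserialize_cuda_engine seems to return None rather than raising when the serialized engine version doesn't match the installed runtime, so the next call fails on a NoneType. A minimal sketch of a guard that makes the failure explicit (assuming the standard tensorrt Python bindings and an engine file named model.engine):

```python
import tensorrt as trt

logger = trt.Logger(trt.Logger.WARNING)
with open("model.engine", "rb") as f:
    engine = trt.Runtime(logger).deserialize_cuda_engine(f.read())

# deserialize_cuda_engine returns None instead of raising when the
# engine was serialized by a different TensorRT version
if engine is None:
    raise RuntimeError(
        f"engine rejected by TensorRT {trt.__version__}; "
        "rebuild it with the matching trtexec"
    )

context = engine.create_execution_context()
```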

Could anyone help me, please?

SebastianJanampa avatar Dec 12 '24 05:12 SebastianJanampa

Try changing the TensorRT version.

GiovanniFyc avatar Dec 12 '24 07:12 GiovanniFyc

Hi, could you elaborate on your answer a bit more?

I've already tried TensorRT versions 8.6 and 10.7. I used the command

trtexec --onnx="model.onnx" --saveEngine="model.engine" --fp16

but it produced a segmentation fault.

I can produce the engine (FP32 only) with the command

trtexec --onnx="model.onnx" --saveEngine="model.engine"
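
Would building the engine through the Python builder API instead of trtexec behave any differently? A rough sketch of what I mean (untested, assuming TensorRT 10.x, where networks are always explicit-batch, and the same model.onnx / model.engine paths as above):

```python
import tensorrt as trt

logger = trt.Logger(trt.Logger.WARNING)
builder = trt.Builder(logger)
network = builder.create_network(0)  # TRT 10.x: explicit batch by default
parser = trt.OnnxParser(network, logger)

with open("model.onnx", "rb") as f:
    if not parser.parse(f.read()):
        for i in range(parser.num_errors):
            print(parser.get_error(i))
        raise SystemExit("ONNX parse failed")

config = builder.create_builder_config()
config.set_flag(trt.BuilderFlag.FP16)  # same effect as trtexec --fp16

serialized = builder.build_serialized_network(network, config)
if serialized is None:
    raise SystemExit("engine build failed")

with open("model.engine", "wb") as f:
    f.write(serialized)
```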

SebastianJanampa avatar Dec 12 '24 07:12 SebastianJanampa


Version 10.5.0 of TensorRT works fine for me. By the way, you need to make sure that the version of trtexec you use for the conversion matches the version of the Python TensorRT API you load the engine with.
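
As a quick sanity check, the Python side reports its version like this (trtexec prints its own version in its startup banner, and the two have to agree):

```python
import tensorrt as trt

# must match the version trtexec printed when the engine was built;
# otherwise deserialize_cuda_engine returns None at load time
print(trt.__version__)
```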

iangiu avatar Dec 13 '24 03:12 iangiu

Hi @SebastianJanampa, were you able to solve this issue by using a different TensorRT version?

migsdigs avatar Jan 14 '25 08:01 migsdigs

Hi @migsdigs, I did solve it. I installed TensorRT 10.5 with CUDA 11.8 on Ubuntu 22.04.

SebastianJanampa avatar Jan 14 '25 19:01 SebastianJanampa


Hi again, thanks for letting me know. Strangely enough, I tried 10.5 and it seems to improve inference accuracy a bit, but inference is very slow - at least slower than real time. I am using CUDA 12.4, though, so there may be a conflict there. I see the repo recommends TensorRT 10.4, so I will try that. I was using TensorRT 10.7 before: inference was fast, but accuracy was very low at FP16 and not much better at FP32.

migsdigs avatar Jan 15 '25 09:01 migsdigs