awesomeboy2

4 comments by awesomeboy2

I tried converting an ONNX model to a TRT model and running it on the TensorRT framework; inference is faster than ONNX Runtime on CPU. @KevinHuSh
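For anyone who wants to try the same conversion, here is a minimal sketch using TensorRT's Python API (written against the TensorRT 8.x bindings; `model.onnx` and `model.trt` are placeholder paths, not files from this thread):

```python
import tensorrt as trt

# Build a serialized TensorRT engine from an ONNX file (TensorRT 8.x API).
logger = trt.Logger(trt.Logger.WARNING)
builder = trt.Builder(logger)
# ONNX models require an explicit-batch network definition.
network = builder.create_network(
    1 << int(trt.NetworkDefinitionCreationFlag.EXPLICIT_BATCH)
)
parser = trt.OnnxParser(network, logger)

with open("model.onnx", "rb") as f:  # placeholder path
    if not parser.parse(f.read()):
        for i in range(parser.num_errors):
            print(parser.get_error(i))
        raise RuntimeError("failed to parse the ONNX model")

config = builder.create_builder_config()
# Optionally enable FP16 if the GPU supports it.
if builder.platform_has_fast_fp16:
    config.set_flag(trt.BuilderFlag.FP16)

engine_bytes = builder.build_serialized_network(network, config)
with open("model.trt", "wb") as f:  # placeholder path
    f.write(engine_bytes)
```

The same conversion can also be done from the command line with `trtexec --onnx=model.onnx --saveEngine=model.trt`, which additionally prints latency benchmarks.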

Also, it's worth mentioning that when running inference you should use a larger batch size to take advantage of GPU parallelism, rather than feeding samples one by one; see the sketch below.
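As a rough illustration of the difference (a sketch using ONNX Runtime's Python API rather than TensorRT, since it is more compact; the `model.onnx` path and the 3x224x224 input shape are assumptions, and the model must accept a dynamic batch dimension):

```python
import numpy as np
import onnxruntime as ort

session = ort.InferenceSession("model.onnx",  # placeholder path
                               providers=["CUDAExecutionProvider"])
input_name = session.get_inputs()[0].name

samples = [np.random.rand(1, 3, 224, 224).astype(np.float32) for _ in range(32)]

# One-by-one: a separate session.run call per sample (poor GPU utilization).
slow = [session.run(None, {input_name: s})[0] for s in samples]

# Batched: one call over the stacked batch keeps the GPU busy.
batch = np.concatenate(samples, axis=0)  # shape (32, 3, 224, 224)
fast = session.run(None, {input_name: batch})[0]
```

The batched call amortizes per-call overhead and lets the GPU process all samples in parallel, which is where most of the speedup over one-by-one inference comes from.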

> I tried converting an ONNX model to a TRT model and running it on the TensorRT framework... @KevinHuSh

Emmm, I can't upload my file here. If you need it, you can leave your email.

Here is the TRT model file; you may need to unzip it first.