
FP16 engine does not detect objects

Open MoussaGRICHE opened this issue 1 year ago • 10 comments

Hello,

I have a YOLOv8 model that I converted to a TensorRT engine.

With the FP32 engine, inference works well and the objects are detected correctly:

/usr/src/tensorrt/bin/trtexec --onnx=yolov8s.onnx --saveEngine=yolov8s.engine

But with the FP16 engine, inference runs but no objects are detected:

/usr/src/tensorrt/bin/trtexec --onnx=yolov8s.onnx --saveEngine=yolov8s.engine --fp16

I am using a C++ program on a Jetson TX2 NX.

I converted the same ONNX model to FP32 and FP16 engines on a PC with CUDA, and both engines work and detect very well there.

Do you have any idea why I get this problem?

Thank you.

MoussaGRICHE avatar Feb 12 '24 15:02 MoussaGRICHE
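For reference, the trtexec FP16 build above can also be reproduced through the TensorRT C++ builder API, which is sometimes easier to debug from a C++ application. The sketch below is only illustrative, not this repository's code; the file names come from the commands above and the workspace size is an arbitrary placeholder.

```cpp
// Minimal sketch: build yolov8s.engine in FP16 with the TensorRT 8.x C++ API.
#include <NvInfer.h>
#include <NvOnnxParser.h>
#include <fstream>
#include <iostream>
#include <memory>

class Logger : public nvinfer1::ILogger {
    void log(Severity severity, const char* msg) noexcept override {
        if (severity <= Severity::kWARNING) std::cout << msg << std::endl;
    }
} gLogger;

int main() {
    auto builder = std::unique_ptr<nvinfer1::IBuilder>(nvinfer1::createInferBuilder(gLogger));
    const auto flags =
        1U << static_cast<uint32_t>(nvinfer1::NetworkDefinitionCreationFlag::kEXPLICIT_BATCH);
    auto network = std::unique_ptr<nvinfer1::INetworkDefinition>(builder->createNetworkV2(flags));
    auto parser  = std::unique_ptr<nvonnxparser::IParser>(nvonnxparser::createParser(*network, gLogger));
    if (!parser->parseFromFile("yolov8s.onnx",
                               static_cast<int>(nvinfer1::ILogger::Severity::kWARNING))) {
        std::cerr << "failed to parse ONNX model" << std::endl;
        return 1;
    }

    auto config = std::unique_ptr<nvinfer1::IBuilderConfig>(builder->createBuilderConfig());
    config->setMaxWorkspaceSize(1U << 28);             // placeholder: 256 MiB workspace
    if (builder->platformHasFastFp16())
        config->setFlag(nvinfer1::BuilderFlag::kFP16); // same effect as trtexec --fp16

    auto serialized = std::unique_ptr<nvinfer1::IHostMemory>(
        builder->buildSerializedNetwork(*network, *config));
    std::ofstream out("yolov8s.engine", std::ios::binary);
    out.write(static_cast<const char*>(serialized->data()),
              static_cast<std::streamsize>(serialized->size()));
    return 0;
}
```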

What's your TensorRT version on the Jetson? Could you please upgrade it to 8.5.1 with JetPack 5.0?

triple-Mu avatar Mar 01 '24 15:03 triple-Mu
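One quick way to confirm which TensorRT a C++ program is compiled against and loading at runtime (on both the Jetson and the PC) is a check like the sketch below; `NV_TENSORRT_*` and `getInferLibVersion()` come from the TensorRT headers.

```cpp
// Sketch: print the compile-time and runtime TensorRT versions.
#include <NvInfer.h>
#include <NvInferVersion.h>
#include <cstdio>

int main() {
    // Version of the headers the program was compiled with.
    std::printf("compiled against TensorRT %d.%d.%d\n",
                NV_TENSORRT_MAJOR, NV_TENSORRT_MINOR, NV_TENSORRT_PATCH);
    // Version of the libnvinfer actually loaded at runtime (e.g. 8201 for 8.2.1).
    std::printf("loaded libnvinfer reports %d\n", getInferLibVersion());
    return 0;
}
```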

The TensorRT version is 8.2.1.

I can't upgrade to JetPack 5 because I am using a Jetson TX2 NX.

Could I upgrade TensorRT without upgrading JetPack?

MoussaGRICHE avatar Mar 01 '24 15:03 MoussaGRICHE

Do you have further questions? Sorry for replying to you so late.

triple-Mu avatar May 24 '24 15:05 triple-Mu

Well, I am hitting the same problem again; maybe it is because my JetPack version is 4.6?

OPlincn avatar Jul 19 '24 10:07 OPlincn

I suggest using the newest JetPack.

triple-Mu avatar Jul 20 '24 04:07 triple-Mu

Why do you use FP16 on the Jetson Nano? For me the accuracy is negative.

duong0411 avatar Sep 10 '24 04:09 duong0411

Why do you use FP16 on the Jetson Nano? For me the accuracy is negative.

I hope to perform model inference with FP16 precision to achieve faster inference speed.

OPlincn avatar Sep 10 '24 07:09 OPlincn

Why do you use FP16 on the Jetson Nano? For me the accuracy is negative.

This is due to a problem with the older version of TensorRT.

triple-Mu avatar Sep 10 '24 08:09 triple-Mu
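If upgrading TensorRT is not an option, one common workaround for FP16 accuracy loss on older releases is a mixed-precision build: keep the FP16 flag but pin the layers that tend to overflow (often the final concat/sigmoid layers of the detection head) back to FP32. The sketch below is only an illustration against the TensorRT 8.2 C++ API; the layer-name filter is a made-up heuristic and would have to be adapted to the actual network.

```cpp
// Sketch: build in FP16 but force selected layers to FP32 (mixed precision).
#include <NvInfer.h>
#include <cstring>

void pinSuspectLayersToFp32(nvinfer1::INetworkDefinition& network,
                            nvinfer1::IBuilderConfig& config) {
    config.setFlag(nvinfer1::BuilderFlag::kFP16);
    // Make the builder respect the per-layer precisions set below
    // (TensorRT 8.2 spelling; newer releases use kOBEY_PRECISION_CONSTRAINTS).
    config.setFlag(nvinfer1::BuilderFlag::kSTRICT_TYPES);

    for (int i = 0; i < network.getNbLayers(); ++i) {
        nvinfer1::ILayer* layer = network.getLayer(i);
        const char* name = layer->getName();
        // Example heuristic only: keep detection-head layers in FP32.
        if (std::strstr(name, "Sigmoid") != nullptr || std::strstr(name, "Concat") != nullptr) {
            layer->setPrecision(nvinfer1::DataType::kFLOAT);
            for (int j = 0; j < layer->getNbOutputs(); ++j)
                layer->setOutputType(j, nvinfer1::DataType::kFLOAT);
        }
    }
}
```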

What? I think the problem is in build.py. I tried two methods, trtexec and build.py, but the accuracy is negative with both.

duong0411 avatar Sep 10 '24 08:09 duong0411

I converted the YOLOv8 ONNX model without end2end using FP16 and the accuracy is good, but when I convert the YOLOv8 end2end model, the accuracy is negative.

duong0411 avatar Sep 10 '24 08:09 duong0411
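One way to tell whether an FP16 (especially end2end) engine is actually overflowing, rather than the post-processing being at fault, is to scan the raw output buffer after inference for non-finite values. The sketch below is generic; `output` and `count` stand for whatever host buffer and element count the existing inference code already uses.

```cpp
// Sketch: check an engine's FP32 host output for NaN/Inf, a typical symptom
// of FP16 overflow inside the network.
#include <cmath>
#include <cstddef>
#include <cstdio>

void checkOutputForOverflow(const float* output, std::size_t count) {
    std::size_t bad = 0;
    for (std::size_t i = 0; i < count; ++i)
        if (!std::isfinite(output[i]))  // catches NaN and +/-Inf
            ++bad;
    if (bad > 0)
        std::printf("%zu of %zu output values are NaN/Inf -> suspect FP16 overflow\n",
                    bad, count);
}
```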