YOLOv8-TensorRT

yolov8 onnx->tensorRT

Open YuHe0108 opened this issue 1 year ago • 2 comments

I exported an ONNX model with YOLOv8's built-in export interface, then converted it to a TensorRT model with trtexec, and got the following warning. Can the exported model still be used normally? onnx2trt_utils.cpp:374: Your ONNX model has been generated with INT64 weights, while TensorRT does not natively support INT64. Attempting to cast down to INT32.

YuHe0108 avatar Feb 14 '25 02:02 YuHe0108

> I exported an ONNX model with YOLOv8's built-in export interface, then converted it to a TensorRT model with trtexec, and got the following warning. Can the exported model still be used normally? onnx2trt_utils.cpp:374: Your ONNX model has been generated with INT64 weights, while TensorRT does not natively support INT64. Attempting to cast down to INT32.

It has no effect; the model can be used normally.
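To see why the warning is harmless: TensorRT's ONNX parser narrows INT64 initializers to INT32, which is lossless as long as every value fits in the int32 range, and the shape/axis constants in a typical YOLOv8 export are far below that limit. A minimal sketch of the check involved (illustrative only, not the actual onnx2trt_utils code):

```python
# Sketch of the narrowing TensorRT's ONNX parser performs on INT64 weights.
# The cast is safe whenever the values fit in int32, which is true for the
# small shape/index constants a YOLOv8 ONNX export typically contains.

INT32_MIN, INT32_MAX = -2**31, 2**31 - 1

def cast_int64_to_int32(values):
    """Narrow int64 values to int32, raising if any value would be corrupted."""
    for v in values:
        if not (INT32_MIN <= v <= INT32_MAX):
            raise OverflowError(f"{v} does not fit in INT32")
    return [int(v) for v in values]

# Typical ONNX shape constants are tiny, so the cast round-trips exactly:
print(cast_int64_to_int32([1, 3, 640, 640, -1]))  # [1, 3, 640, 640, -1]
```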

triple-mu avatar Feb 17 '25 02:02 triple-mu

@YuHe0108 could you please share how you set up your Jetson Nano to work with YOLOv8 and TensorRT?

I already tried using a venv on Python 3.8 and even a Docker image. Both options still lead to TensorRT not being found or detected, and CUDA is not working either.

I can only use TensorRT and export an ONNX file to a TRT/engine file when in a Python 3.6 environment.

Any help is greatly appreciated.

Device: Jetson Nano 4GB Dev Kit
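A quick way to narrow down this kind of environment issue is to confirm which interpreter actually sees the TensorRT bindings. A minimal diagnostic sketch (module names assumed from a standard JetPack install; `pycuda` may differ on your setup):

```python
# Report the running Python version and whether the TensorRT/CUDA Python
# bindings are importable from this interpreter. Running this inside the
# venv, the Docker image, and the system Python 3.6 shows which environment
# the JetPack-provided packages are actually installed into.
import importlib.util
import sys

def module_available(name):
    """True if `name` can be imported from the current interpreter."""
    return importlib.util.find_spec(name) is not None

print(f"python: {sys.version.split()[0]} ({sys.executable})")
for mod in ("tensorrt", "pycuda"):
    print(f"{mod}: {'found' if module_available(mod) else 'NOT found'}")
```

On a Jetson Nano, the JetPack TensorRT bindings are installed for the system Python, so a venv created with a different Python version will not see them unless it is created with `--system-site-packages` or the bindings are rebuilt for that version, which is consistent with only the Python 3.6 environment working here.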

ejb27 avatar Feb 24 '25 13:02 ejb27