YOLOv8 ONNX -> TensorRT
I exported an ONNX model using YOLOv8's built-in export interface, then converted it to a TensorRT model with trtexec, and got the following warning. Can the exported model still be used normally? onnx2trt_utils.cpp:374: Your ONNX model has been generated with INT64 weights, while TensorRT does not natively support INT64. Attempting to cast down to INT32.
No impact; the model can still be used normally.
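The warning is benign because the INT64 tensors in an exported ONNX graph are typically shape or index constants whose values easily fit in the INT32 range, so TensorRT's down-cast loses nothing. A minimal sketch of why the cast is lossless (the values below are illustrative, not taken from an actual YOLOv8 graph):

```python
import numpy as np

# Illustrative "weights" of the kind ONNX stores as INT64
# (e.g. shape and index constants in the exported graph).
w64 = np.array([0, 1, 640, 8400], dtype=np.int64)

# Simulate TensorRT's cast down to INT32.
w32 = w64.astype(np.int32)

# As long as every value fits in the INT32 range, the cast is exact.
assert np.array_equal(w64, w32)
print(np.iinfo(np.int32).max)  # INT32 can hold values up to 2147483647
```

A value would only be corrupted if it exceeded the INT32 range, which does not happen for the tensor shapes and indices involved here.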
@YuHe0108 can you please help me on how you setup your jetson nano to work with yolov8 and tensorrt?
I already tried using a venv on Python 3.8 and even a Docker image. Both options still lead to TensorRT not being found or detected, and CUDA is also not working.
I can still use TensorRT and even export an ONNX file to a TRT/engine file, but only in a Python 3.6 environment.
Any help is greatly appreciated.
Device: Jetson Nano 4GB Dev Kit