
Yolov8n-seg TensorRT in Jetson Nano use ROS

adysaptr opened this issue 1 year ago · 3 comments

I have run my YOLOv8-seg model on the Jetson and it runs smoothly. I have also tried my custom YOLOv8-seg model on the Jetson and it runs fine. This is all using TensorRT. Next, I want to run the inference process inside ROS. Can anyone help me?

adysaptr avatar Feb 11 '25 07:02 adysaptr

CPP OR PYTHON?

triple-mu avatar Feb 17 '25 02:02 triple-mu

CPP OR PYTHON?

I need to use C++ in my ROS node; I will integrate it with my ROS workspace.

adysaptr avatar Feb 17 '25 02:02 adysaptr
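For integrating into a ROS (catkin) workspace, the build configuration is usually the main hurdle: the package has to find both CUDA and the TensorRT libraries that JetPack installs system-wide. A sketch of the relevant `CMakeLists.txt` additions, assuming JetPack's default paths on the Nano (package and target names here are placeholders, adjust to your workspace):

```cmake
# Hypothetical catkin package linking against TensorRT and CUDA.
find_package(catkin REQUIRED COMPONENTS roscpp sensor_msgs cv_bridge)
find_package(CUDA REQUIRED)

include_directories(
  ${catkin_INCLUDE_DIRS}
  ${CUDA_INCLUDE_DIRS}
  /usr/include/aarch64-linux-gnu   # TensorRT headers on JetPack
)

add_executable(yolov8_seg_node src/yolov8_seg_node.cpp)
target_link_libraries(yolov8_seg_node
  ${catkin_LIBRARIES}
  ${CUDA_LIBRARIES}
  nvinfer          # TensorRT runtime
  nvinfer_plugin   # needed if the engine uses plugins
)
```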

@AdyMuasa can you please help me on how you setup your jetson nano to work with yolov8 and tensorrt?

I already tried using a venv on Python 3.8 and even a Docker image. Both options still lead to TensorRT not being found or detected, and CUDA is also not working.

I've been hard stuck trying to make YOLOv8 and TensorRT (and CUDA?) work. I can still use TensorRT and even export an ONNX file to a TRT engine, but only in a Python 3.6 environment.

Any help is greatly appreciated.

Device: Jetson Nano 4GB Dev Kit

ejb27 avatar Feb 24 '25 12:02 ejb27