YOLOv8n-seg TensorRT on Jetson Nano with ROS
I have run the standard YOLOv8-seg model on my Jetson and it runs smoothly. I have also tried my custom YOLOv8-seg model on the Jetson and it runs fine. This is all using TensorRT. Next, I want to run the same inference process, but inside ROS. Can anyone help me?
C++ or Python?
I need to use C++ in ROS; I will integrate it with my existing ROS workspace.
@AdyMuasa can you please explain how you set up your Jetson Nano to work with YOLOv8 and TensorRT?
I already tried using a venv on Python 3.8 and even a Docker image. Both options still lead to TensorRT not being found or detected, and CUDA is also not working.
I've been hard stuck trying to make YOLOv8 and TensorRT (and CUDA?) work together. I can only use TensorRT, and export an ONNX file to a TRT engine, from a Python 3.6 environment.
Any help is greatly appreciated.
Device: Jetson Nano 4GB Dev Kit
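One possible explanation for the Python 3.6 vs. 3.8 behavior above: on JetPack 4.x (the release line the Nano runs), the TensorRT Python bindings ship as system-wide packages built for the stock Python 3.6, so a plain venv or a separately installed Python 3.8 won't see them unless the venv is created with `--system-site-packages` (and even then only for a matching interpreter version). Here is a minimal sanity-check sketch you can run in each environment to see which interpreter actually finds the bindings (`pycuda` is only relevant if you installed it):

```python
# Check whether this interpreter can see the JetPack-provided bindings.
# A module reported "NOT found" means imports will fail in this env,
# which matches the "tensorrt not being found" symptom above.
import importlib.util
import sys

print(f"Python: {sys.version.split()[0]}")
for mod in ("tensorrt", "pycuda"):
    found = importlib.util.find_spec(mod) is not None
    print(f"{mod}: {'found' if found else 'NOT found'}")
```

If this reports `tensorrt: found` under Python 3.6 but not under 3.8, the issue is the environment, not the TensorRT install itself; recreating the venv with `python3 -m venv --system-site-packages <dir>` on the system Python is one thing worth trying.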