isaac_ros_dnn_inference
Hardware-accelerated DNN model inference ROS 2 packages using NVIDIA Triton/TensorRT, for both Jetson and x86_64 platforms with a CUDA-capable GPU
Closes #23
This PR adds the ability for the `DnnImageEncoderNode` to output tensors in `NHWC` format in addition to `NCHW` format. The `tensor_layout` parameter selects the format (either _nchw_...
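To illustrate the difference between the two layouts the `tensor_layout` parameter selects between, here is a minimal sketch (not part of the PR itself, just NumPy for illustration): NCHW orders dimensions as batch, channels, height, width, while NHWC moves the channel axis last, so converting between them is a transpose.

```python
import numpy as np

# A dummy image tensor in NCHW layout: (batch, channels, height, width).
nchw = np.zeros((1, 3, 224, 224), dtype=np.float32)

# NCHW -> NHWC: move the channel axis to the end,
# giving (batch, height, width, channels).
nhwc = np.transpose(nchw, (0, 2, 3, 1))

print(nchw.shape)  # (1, 3, 224, 224)
print(nhwc.shape)  # (1, 224, 224, 3)
```

Which layout a downstream model expects depends on how it was exported; selecting the wrong one typically surfaces as a tensor-shape mismatch at inference time.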
Hi, we are trying to run FoundationPose using the Isaac ROS Docker container. We are facing an issue similar to this one: https://github.com/NVIDIA-ISAAC-ROS/isaac_ros_dnn_inference/issues/29. @jaiveersinghNV mentioned that 8 GB of GPU memory might be too little, but we...

Issue Description: I'm experiencing unexpectedly high latency when running image segmentation with Isaac ROS DNN Inference on a Jetson Orin Nano 8GB. The TensorRT node appears to be the primary...