
Hardware-accelerated DNN model inference ROS 2 packages using NVIDIA Triton/TensorRT for both Jetson and x86_64 with CUDA-capable GPU

15 issues in isaac_ros_dnn_inference

This PR adds the ability for the `DnnImageEncoderNode` to output tensors in `NHWC` format in addition to `NCHW` format. The `tensor_layout` parameter is used to select the format (either _nchw_...
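The difference between the two layouts can be shown with a short, self-contained sketch. This is not code from the PR itself, just an illustration of the `NCHW` (batch, channels, height, width) and `NHWC` (batch, height, width, channels) memory layouts that the `tensor_layout` parameter would select between:

```python
import numpy as np

# A dummy image tensor in NCHW layout: 1 image, 3 channels, 480x640 pixels.
nchw = np.zeros((1, 3, 480, 640), dtype=np.float32)

# Converting to NHWC is a transpose that moves the channel axis to the end.
nhwc = np.transpose(nchw, (0, 2, 3, 1))

print(nchw.shape)  # (1, 3, 480, 640)
print(nhwc.shape)  # (1, 480, 640, 3)
```

Which layout a model expects depends on how it was exported; TensorFlow models typically use NHWC, while PyTorch/ONNX models typically use NCHW.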

Hi, we are trying to run FoundationPose using the Isaac ROS Docker container. We are facing an issue similar to this one: https://github.com/NVIDIA-ISAAC-ROS/isaac_ros_dnn_inference/issues/29. @jaiveersinghNV mentioned that 8 GB of GPU memory might be insufficient, but we...


Issue Description: I'm experiencing unexpectedly high latency when running image segmentation with Isaac ROS DNN Inference on a Jetson Orin Nano 8GB. The TensorRT node appears to be the primary...