
How to deploy ONNX models requiring multiple inputs?

Open breaker-mm opened this issue 3 months ago • 0 comments

Hi there. I was trying to deploy a PointPillars .onnx model using the isaac_ros_dnn_inference package. The model takes three tensor lists as input. Normally I would write an encoder that publishes the three required NITROS tensor list messages on different topics. However, I don't know how to connect such an encoder to the tensor_rt_node.

I read part of this package's source code, and it seems that tensor_rt_node does not support subscribing to multiple topics. However, in the tensor_rt_inference GXF extension, I found that the TensorRtInference component can fetch data from multiple rx receivers (inferred from its tick() function) and thus populate multiple input bindings. This confused me. Do I need to modify the source code of tensor_rt_node? What should I do?
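For context on the GXF side: the TensorRtInference codelet takes its receivers as a list parameter, so a GXF application graph can wire one receiver per input binding. A hypothetical app-graph fragment illustrating this (the component names, binding names, and file paths below are my assumptions for a PointPillars-style model, not copied from the shipped yaml files) might look like:

```yaml
# Hypothetical GXF graph fragment: one entity hosting TensorRtInference
# with three receivers, one per model input.
# All names and paths here are illustrative assumptions.
name: inference
components:
- name: rx_pillars
  type: nvidia::gxf::DoubleBufferReceiver
- name: rx_coords
  type: nvidia::gxf::DoubleBufferReceiver
- name: rx_num_points
  type: nvidia::gxf::DoubleBufferReceiver
- name: tx
  type: nvidia::gxf::DoubleBufferTransmitter
- name: inference
  type: nvidia::gxf::TensorRtInference
  parameters:
    model_file_path: /tmp/pointpillars.onnx
    engine_file_path: /tmp/pointpillars.plan
    # One entry per model input; order pairs tensor names with bindings.
    input_tensor_names: [pillars, coords, num_points]
    input_binding_names: [pillars, coords, num_points]
    output_tensor_names: [scores]
    output_binding_names: [scores]
    # rx accepts a list of receivers, enabling multi-input inference.
    rx: [rx_pillars, rx_coords, rx_num_points]
    tx: tx
```

Each receiver would also typically need its own scheduling term so the codelet only ticks once all inputs have arrived; those components are omitted here for brevity.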

After further examination of the source code, I have developed some ideas. Perhaps I need to modify the dnn_image_encoder_node.yaml file (adding DoubleBufferTransmitter components) and update the nitros::NitrosPublisherSubscriberConfigMap CONFIG_MAP variable in tensor_rt_node.cpp accordingly, based on the names of the new components. Is my thinking correct?
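If that approach is viable, the NITROS-side change would roughly amount to one subscriber config entry per GXF receiver component, each bound to its own topic. A hedged sketch of what such CONFIG_MAP entries in tensor_rt_node.cpp might look like (the component keys, data format string, and topic names are assumptions and would have to match the modified yaml graph exactly):

```cpp
// Hypothetical sketch of a multi-input CONFIG_MAP in tensor_rt_node.cpp:
// one negotiated NITROS subscriber per receiver added to the GXF graph.
// The keys ("inference/rx_pillars", ...), the data format, and the topic
// names are assumptions for illustration, not the upstream defaults.
const nitros::NitrosPublisherSubscriberConfigMap CONFIG_MAP = {
  {"inference/rx_pillars",
    {.type = nitros::NitrosPublisherSubscriberType::NEGOTIATED,
     .qos = rclcpp::QoS(1),
     .compatible_data_format = "nitros_tensor_list_nchw",
     .topic_name = "tensor_pub_pillars"}},
  {"inference/rx_coords",
    {.type = nitros::NitrosPublisherSubscriberType::NEGOTIATED,
     .qos = rclcpp::QoS(1),
     .compatible_data_format = "nitros_tensor_list_nchw",
     .topic_name = "tensor_pub_coords"}},
  {"inference/rx_num_points",
    {.type = nitros::NitrosPublisherSubscriberType::NEGOTIATED,
     .qos = rclcpp::QoS(1),
     .compatible_data_format = "nitros_tensor_list_nchw",
     .topic_name = "tensor_pub_num_points"}}
  // ...plus the existing publisher entry for the output tensor list.
};
```

The key point of the sketch is the mapping: each map key names a GXF receiver component in the graph, so the number of subscriber entries must equal the number of rx components the inference entity declares.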

breaker-mm · Apr 02 '24 13:04