Implement missing QoS when using intraprocess communication
Feature request
Feature description
Implement KEEP_ALL history policy with intraprocess communication (see https://github.com/ros2/rclcpp/issues/727). Implement non-volatile durability with intraprocess communication.
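For context, a minimal sketch of the QoS combinations this request is about (node and topic names are made up; depending on the rclcpp version, the KEEP_ALL publisher may be rejected at creation time when intra-process comms are enabled):

```cpp
#include "rclcpp/rclcpp.hpp"
#include "std_msgs/msg/string.hpp"

int main(int argc, char ** argv)
{
  rclcpp::init(argc, argv);

  // Enable intra-process communication for this node.
  auto options = rclcpp::NodeOptions().use_intra_process_comms(true);
  auto node = rclcpp::Node::make_shared("qos_demo", options);

  // KEEP_ALL history: not honored by the intra-process manager
  // (some rclcpp versions reject this combination outright).
  auto keep_all_pub = node->create_publisher<std_msgs::msg::String>(
    "chatter", rclcpp::QoS(rclcpp::KeepAll()));

  // TRANSIENT_LOCAL durability ("latching"): the other missing piece.
  auto latched_pub = node->create_publisher<std_msgs::msg::String>(
    "status", rclcpp::QoS(rclcpp::KeepLast(1)).transient_local());

  rclcpp::spin(node);
  rclcpp::shutdown();
  return 0;
}
```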
Implementation considerations
For sizing the buffers when the KEEP_ALL history policy is used, a resource limits QoS policy will be needed (https://github.com/ros2/rmw/issues/176).
For implementing TRANSIENT_LOCAL durability:
- With history KEEP_LAST, the intra_process_manager could store the last message sent by each publisher. Adding a map with `publisher_id` as keys, storing a `shared_ptr` to the message, sounds reasonable.
- With history KEEP_ALL, the intra_process_manager should never pop from the internal buffer.
- When a new subscription is created with TRANSIENT_LOCAL durability, all messages from publishers on the same topic with TRANSIENT_LOCAL durability should be delivered (whether the last message or all messages are delivered depends on the subscriber's and publisher's history policies). When the publisher's history policy is KEEP_LAST and the message is not found in the buffer (because it was popped), it should be taken from the `last_message` map mentioned above (see the sketch after this list).
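A minimal sketch of the `last_message` idea, assuming a type-erased message base and numeric publisher ids; the class name LastMessageStore and its methods are hypothetical illustrations, not part of the actual IntraProcessManager API:

```cpp
#include <cstdint>
#include <memory>
#include <unordered_map>
#include <vector>

struct MessageBase {};  // stand-in for a type-erased ROS message

class LastMessageStore
{
public:
  // Called on every intra-process publish from a TRANSIENT_LOCAL publisher.
  void store(uint64_t publisher_id, std::shared_ptr<const MessageBase> msg)
  {
    last_message_[publisher_id] = std::move(msg);
  }

  // Called when a TRANSIENT_LOCAL subscription joins a matching topic:
  // returns the stored messages so they can be pushed into its buffer.
  std::vector<std::shared_ptr<const MessageBase>>
  messages_for_late_joiner(const std::vector<uint64_t> & matched_publisher_ids) const
  {
    std::vector<std::shared_ptr<const MessageBase>> out;
    for (uint64_t id : matched_publisher_ids) {
      auto it = last_message_.find(id);
      if (it != last_message_.end()) {
        out.push_back(it->second);
      }
    }
    return out;
  }

private:
  // publisher_id -> last message published (KEEP_LAST case).
  std::unordered_map<uint64_t, std::shared_ptr<const MessageBase>> last_message_;
};
```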
Hi, resurrecting a very old ticket here: is there any blocker preventing this from being implemented?
Nothing prevents it from being worked on, but I think we'll need to introduce some new QoS elements, as @ivanpauno's post mentions. KEEP_ALL isn't enough information to emulate what the middleware is doing, at least for DDS middlewares.
Any movement on this? The lack of TRANSIENT_LOCAL effectively prevents latching on components, which is a major limitation.
It needs someone to pick it up and implement it.
While I agree that this would be a good feature to have, I'm not sure that it is a major limitation when using intra-process comms. The main reason to use latching is to keep extraneous data off of the wire and not to spend time doing unnecessary publishes when nothing has changed. But since intra-process comms is so efficient, and doesn't send anything over the wire, just repeatedly publishing should accomplish more or less the same thing.
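For illustration only, the "just repeatedly publishing" suggestion could look like the sketch below; the node and topic names are made up, and this is a workaround sketch rather than a replacement for TRANSIENT_LOCAL:

```cpp
#include <chrono>

#include "rclcpp/rclcpp.hpp"
#include "std_msgs/msg/string.hpp"

int main(int argc, char ** argv)
{
  rclcpp::init(argc, argv);
  auto options = rclcpp::NodeOptions().use_intra_process_comms(true);
  auto node = rclcpp::Node::make_shared("pseudo_latch", options);

  auto pub = node->create_publisher<std_msgs::msg::String>("status", 10);

  std_msgs::msg::String msg;
  msg.data = "current status";

  // Re-publish the unchanged message periodically so that late-joining
  // subscriptions eventually receive it, emulating latching.
  auto timer = node->create_wall_timer(
    std::chrono::seconds(1),
    [pub, msg]() {pub->publish(msg);});

  rclcpp::spin(node);
  rclcpp::shutdown();
  return 0;
}
```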
@clalancette You say the main purpose of latching is to prevent extraneous data on the wire, but the use case I have always seen is focused on supporting late-joining nodes without impacting already-running nodes. If I were to continuously send the same old message, it might not incur much over-the-wire cost, but it would mean all receiving nodes would need custom logic to reject the old message; otherwise they would incur the cost of reprocessing the data.
I agree on the importance of transient local for the reasons mentioned by @msmcconnell. iRobot may be able to take on this task in the near future (June maybe?)
KEEP_ALL is a different story, and it's not clear what the expected behavior should be there, but I'm not sure it's really useful.
Yes, I agree that we should have the feature, and it will be more optimal. I just don't see it as entirely blocking any particular use case.
This was partially completed with #2303.
I think we are still missing support for KEEP_ALL, as well as depth in the intra-process manager, so I'll leave this open.