Fast-DDS
Multiple publishers on a transient local topic give "Problem reserving CacheChange in reader"
Is there an already existing issue for this?
- [X] I have searched the existing issues
Expected behavior
No errors when publishing to the same topic from multiple publishers with TRANSIENT_LOCAL durability
Current behavior
The listener receives data, but prints an error.
Steps to reproduce
Use the following docker compose:
services:
  talker:
    image: ros:humble
    command: ros2 topic pub /data std_msgs/msg/Int32 "{}" --qos-depth=1 --qos-history=keep_last --qos-reliability=reliable --qos-durability=transient_local
    ipc: host
    network_mode: host
  talker2:
    image: ros:humble
    command: ros2 topic pub /data std_msgs/msg/Int32 "{}" --qos-depth=1 --qos-history=keep_last --qos-reliability=reliable --qos-durability=transient_local
    ipc: host
    network_mode: host
  listener:
    image: ros:humble
    command: ros2 topic echo /data std_msgs/msg/Int32 --qos-depth=1 --qos-history=keep_last --qos-reliability=reliable --qos-durability=transient_local
    ipc: host
    network_mode: host
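For context, the --qos-* flags above map onto the standard DDS QoS policies. A minimal sketch of the equivalent writer-side settings at the Fast DDS DDS layer (illustrative only; these are just the same policy values expressed in C++, not how the ros2 CLI configures them internally):

#include <fastdds/dds/publisher/qos/DataWriterQos.hpp>

int main()
{
    using namespace eprosima::fastdds::dds;

    // Same policy values as the talkers' CLI flags:
    // --qos-depth=1 --qos-history=keep_last
    // --qos-reliability=reliable --qos-durability=transient_local
    DataWriterQos qos;
    qos.history().kind = KEEP_LAST_HISTORY_QOS;
    qos.history().depth = 1;
    qos.reliability().kind = RELIABLE_RELIABILITY_QOS;
    qos.durability().kind = TRANSIENT_LOCAL_DURABILITY_QOS;
    return 0;
}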
Fast DDS version/commit
2.6.0-3jammy.20220520.002055
Platform/Architecture
Other. Please specify in Additional context section.
Transport layer
Default configuration, UDPv4 & SHM
Additional context
amd64 arch on Ubuntu 20.04 Host.
XML configuration file
No response
Relevant log output
talker2_1 | publishing #11: std_msgs.msg.Int32(data=0)
talker2_1 |
talker_1 | publishing #11: std_msgs.msg.Int32(data=0)
talker_1 |
listener_1 | 2022-06-30 20:53:18.284 [RTPS_MSG_IN Error] (ID:139967146595904) Problem reserving CacheChange in reader: 01.0f.0a.33.01.00.68.11.01.00.00.00|0.0.5.4 -> Function processDataMsg
listener_1 | data: 0
listener_1 | ---
listener_1 | data: 0
listener_1 | ---
Network traffic capture
No response
I've seen this too. I assumed it was some odd artifact because I was running some programs through valgrind, but I've also seen it in the controller server of Nav2.
Hi @Aposhian @SteveMacenski
What we are seeing here is not faulty behaviour of the library. When using KEEP_LAST history, the maximum number of samples the reader can hold depends on the specified depth. When additional samples arrive while the history is full, they are rejected, and the log trace you are seeing is part of that process. The rejection is notified accordingly and, since the reliability is set to RELIABLE, the rejected samples will be sent again.
Upon further review, there are some things that we could do:
- The severity of this log entry could be lowered to a Warning, since it is not an error but an expected consequence of the QoS settings.
- The log message itself could be modified to give more information about why this is happening.
- The Fast DDS documentation could add more information about the interaction between the History QoS and the Resource Limits QoS for this particular case.
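To make that interaction concrete, here is a minimal sketch of reader-side QoS where both the History depth and the Resource Limits bound the reader's cache; the limit values are hypothetical and only illustrate that a full, bounded cache is what produces the rejection described above:

#include <fastdds/dds/subscriber/qos/DataReaderQos.hpp>

int main()
{
    using namespace eprosima::fastdds::dds;

    DataReaderQos qos;

    // KEEP_LAST with a small depth bounds how many samples the reader keeps.
    qos.history().kind = KEEP_LAST_HISTORY_QOS;
    qos.history().depth = 1;

    // Resource limits also cap the reader cache; the depth should not exceed
    // max_samples_per_instance (hypothetical values, for illustration only).
    qos.resource_limits().max_samples = 10;
    qos.resource_limits().max_instances = 1;
    qos.resource_limits().max_samples_per_instance = 10;

    // With RELIABLE reliability, samples rejected because the cache is full
    // are re-sent by the writer, as explained above.
    qos.reliability().kind = RELIABLE_RELIABILITY_QOS;
    return 0;
}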
Thank you for the explanation! Yes, I think updating the log message and changing the severity would be very good: I assumed something was fundamentally wrong, and the log message was too arcane for me to figure out that this is just normal behavior of the history depth.
@jsantiago-eProsima do you have any update on this? Especially on https://github.com/eProsima/Fast-DDS/issues/2799#issuecomment-1178639958?
The Pull Requests that modify this log entry's severity and message have already been merged on Fast DDS's 2.6.x (https://github.com/eProsima/Fast-DDS/pull/2942) and master (https://github.com/eProsima/Fast-DDS/pull/2824) branches.
@Aposhian this should be fixed in the Rolling and Humble ROS 2 distros. It has been addressed by updating the log message and severity, as you can see in https://github.com/eProsima/Fast-DDS/pull/2942/files. Can we close this issue?