geometry2

Correctly handle multiple StaticTransformBroadcasters in a single process

Open rhaschke opened this issue 5 years ago • 2 comments

Since ROS latches only the last message published by a given node on a given topic, when a process uses multiple static broadcasters, only the message of the last one is actually kept. To handle multiple static broadcasters correctly, we need to share the actual publisher instance across all broadcaster instances within a process. This PR does so via a private, shared impl pointer.
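The sharing described above can be sketched without ROS roughly as follows. This is a minimal illustration, not the actual tf2_ros implementation: `StaticBroadcasterImpl`, `sendTransforms`, and the stripped-down message type are hypothetical stand-ins. The key idea is a function-local `std::weak_ptr` that hands every broadcaster in the process the same impl, which aggregates all transforms into the single message that would be latched:

```cpp
#include <cassert>
#include <memory>
#include <mutex>
#include <string>
#include <vector>

// Stand-in for the real message type (hypothetical, not the tf2 API).
struct TransformStamped { std::string child_frame_id; };

// Shared implementation: one latched publisher and one aggregated
// "net message" per process.
class StaticBroadcasterImpl {
public:
  // All broadcasters in the process share one Impl instance.
  static std::shared_ptr<StaticBroadcasterImpl> instance() {
    static std::mutex m;
    static std::weak_ptr<StaticBroadcasterImpl> weak;
    std::lock_guard<std::mutex> lock(m);
    auto impl = weak.lock();
    if (!impl) {
      impl = std::make_shared<StaticBroadcasterImpl>();
      weak = impl;
    }
    return impl;
  }

  // Add or replace transforms (keyed by child_frame_id) in the single
  // aggregated net message.
  void sendTransforms(const std::vector<TransformStamped>& msgs) {
    std::lock_guard<std::mutex> lock(mutex_);
    for (const auto& msg : msgs) {
      bool replaced = false;
      for (auto& existing : net_message_)
        if (existing.child_frame_id == msg.child_frame_id) {
          existing = msg;
          replaced = true;
        }
      if (!replaced) net_message_.push_back(msg);
    }
    // In tf2 this is where the shared latched publisher would publish
    // net_message_, so no earlier broadcaster's transforms are lost.
  }

  std::size_t size() const { return net_message_.size(); }

private:
  std::mutex mutex_;
  std::vector<TransformStamped> net_message_;  // the single latched message
};

// Each StaticTransformBroadcaster merely forwards to the shared impl.
class StaticTransformBroadcaster {
public:
  StaticTransformBroadcaster() : impl_(StaticBroadcasterImpl::instance()) {}
  void sendTransform(const TransformStamped& t) { impl_->sendTransforms({t}); }
private:
  std::shared_ptr<StaticTransformBroadcaster> self_;  // unused; see impl_
  std::shared_ptr<StaticBroadcasterImpl> impl_;
};
```

With this, two broadcasters publishing distinct child frames both end up in the one aggregated message, instead of the second latch discarding the first.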

Originally, I implemented this caching at the application level (i.e. in my rviz plugin). Then I thought the issue should actually be handled here in tf. Now I'm even thinking it should be handled at the basic ROS level: whenever a single process has multiple latched publishers on the same topic, all of their messages should be latched. However, I'm not deep enough into the ros_comm library to judge whether this can be handled as easily there as here, where the infrastructure to keep several messages (stamped transforms) was already in place.

I don't care very much at which level this bug gets fixed. I just wanted to point out the issue and a possible solution.

Note that this PR builds on #454.

rhaschke avatar Apr 15 '20 15:04 rhaschke

Sorry for not reacting to your feedback, @tfoote, for so long. I had already forgotten about this PR and only found it again when scanning my pending PRs.

> What's lost in this implementation is the ability to stop publishing a transformation.

Yes, that's an important point I missed.

> I think it might make more sense to have the implementation/singleton keep weak pointers to each instance of the StaticTransformBroadcaster which will then iterate all the weak pointers to assemble still active publishers.

I'm not sure what you mean by assembling active publishers. The message cannot get assembled on-demand from active StaticTransformBroadcasters (STB). It is important that there is a single unique net_message_ for latching. If an STB goes out of scope, it should remove all messages from this net_message_ originating from itself. There is also another issue: If different STBs have published different, possibly conflicting transforms, we should probably keep the previous (now overridden) transforms around, such that they can get restored when the "newer" STB gets destroyed.

This could be achieved by storing all transforms twice: in a net message held by each STB and an overall net message held by the Impl. A newly published message would be added to both transform lists. If an STB gets destroyed, the overall net message would need to be (possibly costly) reassembled from the remaining STBs' messages.
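Combining this double-bookkeeping with the weak-pointer registry suggested above, a ROS-free sketch could look like the following. All names (`Impl`, `NetMessage`, the simplified `Transform` type) are illustrative assumptions, not the real tf2_ros API; merging is keyed by child frame, and broadcasters registered later win on conflicts:

```cpp
#include <algorithm>
#include <cassert>
#include <map>
#include <memory>
#include <string>
#include <vector>

// Simplified transform, identified by its child frame (hypothetical type).
struct Transform { std::string child_frame; int value; };
using NetMessage = std::vector<Transform>;

// Shared Impl: keeps weak pointers to each broadcaster's own net
// message and reassembles the overall net message from the survivors.
class Impl {
public:
  static Impl& instance() { static Impl impl; return impl; }

  // Register a broadcaster's message store. Insertion order is kept,
  // so later broadcasters override earlier ones on conflicting frames.
  void attach(const std::shared_ptr<NetMessage>& msgs) { stores_.push_back(msgs); }

  // Reassemble the single latched net message from all live broadcasters.
  NetMessage assemble() {
    // Drop stores whose broadcaster has been destroyed.
    stores_.erase(std::remove_if(stores_.begin(), stores_.end(),
                                 [](const std::weak_ptr<NetMessage>& w) {
                                   return w.expired();
                                 }),
                  stores_.end());
    std::map<std::string, Transform> merged;  // keyed by child_frame
    for (const auto& weak : stores_)
      if (auto msgs = weak.lock())
        for (const auto& t : *msgs)
          merged[t.child_frame] = t;  // later store wins
    NetMessage net;
    for (const auto& kv : merged) net.push_back(kv.second);
    return net;  // sorted by child_frame
  }

private:
  std::vector<std::weak_ptr<NetMessage>> stores_;
};

// Each broadcaster owns its transforms; the Impl only holds weak refs.
class StaticTransformBroadcaster {
public:
  StaticTransformBroadcaster() : msgs_(std::make_shared<NetMessage>()) {
    Impl::instance().attach(msgs_);
  }
  void sendTransform(const Transform& t) {
    msgs_->push_back(t);
    // In tf2, the Impl would now latch Impl::instance().assemble().
  }
private:
  std::shared_ptr<NetMessage> msgs_;  // this broadcaster's own net message
};
```

The reassembly cost is paid on every publish and on broadcaster destruction, which is presumably acceptable since static transforms change rarely.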

To avoid storing all messages twice, the Impl could memorize the "owner" of each transform, e.g. in a map(child_frame -> STB). If an STB goes out of scope, all transforms it owned could be removed. To restore previous transforms, we would additionally need to memorize overridden transforms, but only those. However, this requires quite some book-keeping to restore the correct transform, i.e. the one that was overridden latest.
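One way to get the "restore the transform overridden latest" semantics is to keep, per child frame, a small stack of (owner, transform) entries in publication order. This is only a sketch of the bookkeeping under that assumption; `OwnerMap`, integer owner ids, and integer transform values are all hypothetical simplifications:

```cpp
#include <cassert>
#include <iterator>
#include <map>
#include <string>
#include <vector>

// Per child frame, a stack of (owner, transform) entries in
// publication order. The top entry is the active transform; removing
// an owner restores whatever it had overridden latest.
class OwnerMap {
public:
  void publish(int owner, const std::string& child_frame, int value) {
    auto& stack = entries_[child_frame];
    // A re-publish by the same owner replaces its previous entry.
    for (auto it = stack.begin(); it != stack.end(); )
      it = (it->owner == owner) ? stack.erase(it) : std::next(it);
    stack.push_back({owner, value});
  }

  // Called when a StaticTransformBroadcaster is destroyed: drop all
  // entries it owned, in every frame's stack.
  void removeOwner(int owner) {
    for (auto& kv : entries_) {
      auto& stack = kv.second;
      for (auto it = stack.begin(); it != stack.end(); )
        it = (it->owner == owner) ? stack.erase(it) : std::next(it);
    }
  }

  // Active transform for a frame, or -1 if none is left.
  int active(const std::string& child_frame) const {
    auto it = entries_.find(child_frame);
    if (it == entries_.end() || it->second.empty()) return -1;
    return it->second.back().value;
  }

private:
  struct Entry { int owner; int value; };
  std::map<std::string, std::vector<Entry>> entries_;
};
```

Compared to the double-storage scheme, this only retains the transforms that were actually overridden, at the price of the stack maintenance shown above.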

I'm not sure it is worth this effort to get rid of old transforms. Aren't clients caching those old transforms anyway?

rhaschke avatar Oct 22 '20 13:10 rhaschke

Closing and reopening to trigger CI.

rhaschke avatar Jul 02 '22 08:07 rhaschke