Have you considered motion compensation of dynamic objects?
Have you considered compensating the motion of dynamic objects in the LiDAR points using object tracking and speed estimation? For sequential point clouds, this could be a practical way to mitigate the time-synchronization problem.
Thanks for bringing up this idea. It definitely makes sense.
In our current pipeline, we correct radar–camera synchronization errors only during dataset processing, using the ego-vehicle speed. This is usually sufficient to fix projection errors on static objects caused by ego motion. In addition, our heuristic filtering can handle residual misalignment in pixel space (the XY direction), e.g., from lateral motion and occlusion effects, but it cannot correct errors caused by object motion along the depth (Z) axis. The advantage of this approach is that it is simple and already provides the accuracy required for monocular depth estimation.
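For reference, a minimal sketch of the ego-motion correction described above (function and variable names are illustrative, not our actual implementation):

```python
import numpy as np

def compensate_ego_motion(radar_pts, ego_velocity, dt):
    """Shift radar points to the camera timestamp, assuming a constant
    ego velocity over the synchronization offset dt.

    radar_pts:    (N, 3) points in the ego/vehicle frame at radar time
    ego_velocity: (3,) ego velocity in the same frame (m/s)
    dt:           camera_timestamp - radar_timestamp (seconds)
    """
    # Static world points appear to move opposite to the ego motion, so
    # translating them by -v_ego * dt aligns them with the camera time.
    return radar_pts - ego_velocity[None, :] * dt
```

Note that this corrects static structure only; points on moving objects still carry the residual error discussed next.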
Your suggestion of estimating object velocity via detection and tracking could, in principle, correct motion-induced misalignments much more precisely. However, it comes with stringent requirements: a sufficiently accurate and robust object-tracking model, precise timestamps for both camera frames and individual radar points, and a non-trivial method for associating uncorrelated 3D radar point clouds across frames with 2D tracked objects.
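To make these requirements concrete, here is a hypothetical sketch of the per-object correction, assuming the hard parts (tracking, per-point timestamps, 3D–2D association) are already solved; all names are placeholders:

```python
import numpy as np

def compensate_object_motion(radar_pts, track_velocities, point_to_track, dt):
    """Hypothetical per-object correction: displace each radar point by
    the velocity of its associated tracked object over the sync offset.

    radar_pts:        (N, 3) points, assumed already ego-motion compensated
    track_velocities: dict mapping track id -> (3,) object velocity (m/s)
    point_to_track:   (N,) track id per point, or -1 if unassociated
    dt:               camera_timestamp - radar_timestamp (seconds)
    """
    corrected = radar_pts.copy()
    for i, tid in enumerate(point_to_track):
        if tid >= 0:  # leave unassociated (likely static) points untouched
            corrected[i] += track_velocities[tid] * dt
    return corrected
```

The difficulty lies entirely in producing reliable `track_velocities` and `point_to_track`, which is why we consider this beyond the scope of the current pipeline.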
We really appreciate the proposal. While challenging, it’s indeed important for precise reconstruction and tracking, and we’ll keep this in mind when considering future improvements.