Regarding running RTAB-Map with three RealSense cameras
Hi ...
We need to run RTAB-Map with three RealSense cameras. How much overlap between two RealSense cameras is required? Or does the amount of overlap not affect the actual results in RTAB-Map?
Ideally no overlap. If there is overlap, the same 3D features could appear in two cameras at the same time, but only unique matches are processed when global matching is done (e.g., loop closure detection). With local matching (visual odometry), a feature may also be matched only to the first camera in which it was seen.
For bag-of-words (BoW), having the same feature appear more than once may affect the loop closure hypotheses.
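To illustrate the "only unique matches are processed" point above, here is a toy sketch (not RTAB-Map's actual code): a landmark seen by two overlapping cameras produces two query features matched to the same target feature, and such ambiguous matches are dropped.

```python
from collections import Counter

def unique_matches(matches):
    """Keep only matches whose target feature id appears exactly once.

    `matches` is a list of (query_id, target_id) pairs. A target id
    hit by two different query features (e.g., the same landmark seen
    in two overlapping cameras) is considered ambiguous and dropped.
    """
    counts = Counter(t for _, t in matches)
    return [(q, t) for q, t in matches if counts[t] == 1]

# Landmark 7 is seen by both camera A (query 1) and camera B (query 2),
# so both of its matches are discarded; only landmark 8 contributes.
matches = [(1, 7), (2, 7), (3, 8)]
print(unique_matches(matches))  # [(3, 8)]
```

This is why overlap does not break anything, but the duplicated observations contribute nothing extra and can weaken the match statistics.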
Thanks matlabbe...
If we have stitched RGB and depth images (using three RealSense cameras), can you suggest how we can run RTAB-Map?
How are they stitched? The problem is that a single camera calibration for the resulting stitched image may not be possible. It is easier to just send all three cameras separately to rtabmap, similar to https://github.com/introlab/rtabmap_ros/blob/ros2/rtabmap_examples/launch/rtabmap_D405x3.launch.py
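For reference, that example can be launched directly once the rtabmap_examples package is installed (you may still need to adapt the camera serial numbers inside the launch file to your own devices):

```shell
# Requires a ROS 2 workspace with rtabmap_ros built/installed;
# edit the launch file to match your three cameras' serial numbers.
ros2 launch rtabmap_examples rtabmap_D405x3.launch.py
```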
If you stitched the images, it means you have hardware-synchronized the 3 cameras together? If so, set approx_sync:=false for the rgbd_sync, rgbd_odometry and rtabmap nodes. You may also want to pre-sync the 3 cameras together (with the rgbdx_sync node) so that they are synced only once instead of redoing it in both the rgbd_odometry and rtabmap nodes, then use subscribe_rgbd:=true with rgbd_cameras:=0 to subscribe to the output of rgbdx_sync.
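The pipeline above could be wired up roughly as follows. This is only a sketch: the namespaces (/cam1, /cam2, /cam3) and topic remappings are assumptions you would adapt to your own camera drivers, and the package/node names follow the ros2 branch conventions of rtabmap_ros.

```shell
# One rgbd_sync per camera; exact sync since the cameras are
# hardware-synchronized (repeat for /cam2 and /cam3 namespaces).
ros2 run rtabmap_sync rgbd_sync --ros-args -r __ns:=/cam1 \
  -p approx_sync:=false

# Combine the three synced streams once with rgbdx_sync
# (remappings assume each camera publishes <ns>/rgbd_image).
ros2 run rtabmap_sync rgbdx_sync --ros-args \
  -p rgbd_cameras:=3 -p approx_sync:=false \
  -r rgbd_image0:=/cam1/rgbd_image \
  -r rgbd_image1:=/cam2/rgbd_image \
  -r rgbd_image2:=/cam3/rgbd_image

# Odometry and mapping both subscribe to the combined output;
# rgbd_cameras:=0 tells them to read the multi-camera topic.
ros2 run rtabmap_odom rgbd_odometry --ros-args \
  -p subscribe_rgbd:=true -p rgbd_cameras:=0
ros2 run rtabmap_slam rtabmap --ros-args \
  -p subscribe_rgbd:=true -p rgbd_cameras:=0
```

This way the synchronization cost is paid once in rgbdx_sync rather than twice in rgbd_odometry and rtabmap.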