
Two ZED2i Visual SLAM Map quality

Open ebrahimabdelghfar opened this issue 2 months ago • 15 comments

Greetings. I have made many attempts to improve the quality of the map built with two ZED2i cameras.

This is my setup:

Image

The produced map looks like this:

Image

This is my rtabmap configuration

/**:
  ros__parameters:
    rgbd_cameras: 2
    frame_id: base_link
    use_sim_time: true
    subscribe_rgbd: True
    approx_sync: True
    subscribe_rgb: False
    subscribe_depth: False
    wait_imu_to_init: False
    visual_odometry: False
    wait_for_transform: 0.1
    # Grid/Sensor: "1"                     # 0=Laser 1=RGBD 2=laser+RGBD
    # RGBD/NeighborLinkRefining: 'true'    # Do odometry correction with consecutive laser scans
    # RGBD/ProximityBySpace:     'true'    # Local loop closure detection (using estimated position) with locations in WM
    # RGBD/ProximityByTime:      'false'   # Local loop closure detection with locations in STM
    # RGBD/ProximityPathMaxNeighbors: '10' # Do also proximity detection by space by merging close scans together.
    # Reg/Strategy:              '0'       # 0=Visual 1=ICP 2=Visual+ICP
    # Vis/MinInliers:            '12'      # 3D visual words minimum inliers to accept loop closure
    RGBD/OptimizeFromGraphEnd: 'true'    # Optimize graph from initial node so /map -> /odom transform will be generated
    Reg/Force3DoF:             'true'    # 2D SLAM
    RGBD/CreateOccupancyGrid:  'true'    # Create 2D occupancy grid from RGBD
    Grid/NoiseFilteringMinNeighbors: "5" # 2D occupancy grid noise filtering
    Grid/RangeMin: "0.3"
    Grid/RangeMax: "30.0" # 2D occupancy grid max range
    Grid/NormalsSegmentation:  "true"  # Enable normal computation for segmentation
    RGBD/LoopClosureReextractFeatures: 'true' # Re-extract features for loop closure detection
    Vis/MaxFeatures:           "0"  # Maximum number of features to extract (0=unlimited)
    Vis/MaxDepth:              "0"  # Max depth for feature detection
    Vis/EstimationType:        "0"  # [Motion estimation approach: 0:3D->3D, 1:3D->2D (PnP), 2:2D->2D (Epipolar Geometry)]
    Kp/MaxFeatures:        "0"     # Max features to detect
    Kp/DetectorStrategy:   "8"     #  [0=SURF 1=SIFT 2=ORB 3=FAST/FREAK 4=FAST/BRIEF 5=GFTT/FREAK 6=GFTT/BRIEF 7=BRISK 8=GFTT/ORB 9=KAZE 10=ORB-OCTREE 11=SuperPoint 12=SURF/FREAK 13=GFTT/DAISY 14=SURF/DAISY 15=PyDetector]
    Vis/FeatureType:        "8"     #  [0=SURF 1=SIFT 2=ORB 3=FAST/FREAK 4=FAST/BRIEF 5=GFTT/FREAK 6=GFTT/BRIEF 7=BRISK 8=GFTT/ORB 9=KAZE 10=ORB-OCTREE 11=SuperPoint 12=SURF/FREAK 13=GFTT/DAISY 14=SURF/DAISY 15=PyDetector]
    SIFT/Gpu:           "true"       # Use GPU for SIFT feature extraction
    GFTT/Gpu:           "true"       # Use GPU for GFTT feature detection
    GFTT/UseHarrisDetector: "true"   # Use the Harris corner detector for GFTT
    Rtabmap/DetectionRate: "10" # 10Hz
    Rtabmap/MapUpdateDetectionRate: "10" # 10Hz
    GFTT/Gpu: "true" # Use GPU for GFTT feature detection
    RGBD/LinearUpdate: "0.01"                           #[Minimum linear displacement (m) to update the map. Rehearsal is done prior to this, so weights are still updated.]
    Optimizer/Strategy: "2"                             #[Graph optimization strategy: 0=TORO, 1=g2o, 2=GTSAM and 3=Ceres.]
    General/cloudFiltering: "true"                      #[Filter point clouds to reduce memory consumption.]
    Grid/DepthDecimation: "2"                         #[Decimation factor for depth images used to create the occupancy grid.]
    General/pointCloudDecimation: "2"                   #[Decimation factor for the point clouds stored in the map. 1 means no decimation.]
    Rtabmap/LoopRatio: "0.9"                           #[Loop closure detection ratio (0.0-1.0). Higher values mean more loop closures will be detected.]
    Optimizer/Epsilon: "0"

ZED2i Configuration

# config/common_stereo.yaml
# Common parameters to Stereolabs ZED Stereo cameras

---
/**:
    ros__parameters:
        use_sim_time: false # Set to `true` only if there is a publisher for the simulated clock to the `/clock` topic. Normally used in simulation mode.

        simulation:
            sim_enabled: false # Set to `true` to enable the simulation mode and connect to a simulation server
            sim_address: '127.0.0.1' # The connection address of the simulation server. See the documentation of the supported simulation plugins for more information.
            sim_port: 30000 # The connection port of the simulation server. See the documentation of the supported simulation plugins for more information.

        svo:
            use_svo_timestamps: false # Use the SVO timestamps to publish data. If false, data will be published at the system time.
            publish_svo_clock: false # [overwritten by launch file options] When use_svo_timestamps is true allows to publish the SVO clock to the `/clock` topic. This is useful for synchronous rosbag playback.
            svo_loop: false # Enable loop mode when using an SVO as input source. NOTE: ignored if SVO timestamping is used
            svo_realtime: false # if true the SVO will be played trying to respect the original framerate, skipping frames if necessary; otherwise every frame will be processed respecting the `pub_frame_rate` setting
            play_from_frame: 0 # Start playing the SVO from a specific frame
            replay_rate: 1.0 # Replay rate for the SVO when not used in realtime mode (between [0.10-5.0])

        general:
            camera_timeout_sec: 5
            camera_max_reconnect: 5
            camera_flip: false
            self_calib: true # Enable the self-calibration process at camera opening. See https://www.stereolabs.com/docs/api/structsl_1_1InitParameters.html#affeaa06cfc1d849e311e484ceb8edcc5
            serial_number: 0 # overwritten by launch file
            pub_resolution: 'CUSTOM' # The resolution used for image and depth map publishing. 'NATIVE' to use the same `general.grab_resolution` - `CUSTOM` to apply the `general.pub_downscale_factor` downscale factor to reduce bandwidth in transmission
            pub_downscale_factor: 2.0 # rescale factor used to rescale image before publishing when 'pub_resolution' is 'CUSTOM'
            pub_frame_rate: 15.0 # [DYNAMIC] Frequency of publishing of visual images and depth data (not the Point Cloud, see 'depth.point_cloud_freq'). This value must be equal or less than the camera framerate.
            enable_image_validity_check: 1 # [SDK5 required] Sets the image validity check. If set to 1, the SDK will check if the frames are valid before processing.
            gpu_id: -1
            optional_opencv_calibration_file: '' # Optional path where the ZED SDK can find a file containing the calibration information of the camera computed by OpenCV. Read the ZED SDK documentation for more information: https://www.stereolabs.com/docs/api/structsl_1_1InitParameters.html#a9eab2753374ef3baec1d31960859ba19
            async_image_retrieval: false # If set to true, camera images will be retrieved at a framerate different from the grab() application framerate. This is useful for recording an SVO or sending the camera stream at a different rate than the application.
            # Other parameters are defined, according to the camera model, in the 'zed.yaml', 'zedm.yaml', 'zed2.yaml', 'zed2i.yaml'
            # 'zedx.yaml', 'zedxmini.yaml', 'virtual.yaml' files

        video:
            saturation: 4 # [DYNAMIC]
            sharpness: 4 # [DYNAMIC]
            gamma: 8 # [DYNAMIC]
            auto_exposure_gain: true # [DYNAMIC]
            exposure: 80 # [DYNAMIC]
            gain: 80 # [DYNAMIC]
            auto_whitebalance: true # [DYNAMIC]
            whitebalance_temperature: 42 # [DYNAMIC] - [28,65] x100 - works only if `auto_whitebalance` is false
            # Other parameters are defined, according to the camera model, in the 'zed.yaml', 'zedm.yaml', 'zed2.yaml', 'zed2i.yaml'
            # 'zedx.yaml', 'zedxmini.yaml', 'virtual.yaml' files

        sensors:
            publish_imu_tf: true # [overwritten by launch file options] enable/disable the IMU TF broadcasting
            sensors_image_sync: false # Synchronize Sensors messages with latest published video/depth message
            sensors_pub_rate: 100. # frequency of publishing of sensors data. MAX: 400. - MIN: grab rate

        region_of_interest:
            automatic_roi: false # Enable the automatic ROI generation to automatically detect part of the robot in the FoV and remove them from the processing. Note: if enabled the value of `manual_polygon` is ignored
            depth_far_threshold_meters: 2.5 # Filter how far objects in the ROI should be considered; this is useful for a vehicle, for instance
            image_height_ratio_cutoff: 0.5 # By default consider only the lower half of the image, can be useful to filter out the sky
            #manual_polygon: '[]' # A polygon defining the ROI where the ZED SDK perform the processing ignoring the rest. Coordinates must be normalized to '1.0' to be resolution independent.
            #manual_polygon: '[[0.25,0.33],[0.75,0.33],[0.75,0.5],[0.5,0.75],[0.25,0.5]]' # A polygon defining the ROI where the ZED SDK perform the processing ignoring the rest. Coordinates must be normalized to '1.0' to be resolution independent.
            #manual_polygon: '[[0.25,0.25],[0.75,0.25],[0.75,0.75],[0.25,0.75]]' # A polygon defining the ROI where the ZED SDK perform the processing ignoring the rest. Coordinates must be normalized to '1.0' to be resolution independent.
            #manual_polygon: '[[0.5,0.25],[0.75,0.5],[0.5,0.75],[0.25,0.5]]' # A polygon defining the ROI where the ZED SDK perform the processing ignoring the rest. Coordinates must be normalized to '1.0' to be resolution independent.
            apply_to_depth: true # Apply ROI to depth processing
            apply_to_positional_tracking: true # Apply ROI to positional tracking processing
            apply_to_object_detection: true # Apply ROI to object detection processing
            apply_to_body_tracking: true # Apply ROI to body tracking processing
            apply_to_spatial_mapping: true # Apply ROI to spatial mapping processing

        depth:
            depth_mode: 'NEURAL' # Matches the ZED SDK setting: 'NONE', 'PERFORMANCE', 'QUALITY', 'ULTRA', 'NEURAL', 'NEURAL_PLUS' - Note: if 'NONE' all the modules that requires depth extraction are disabled by default (Pos. Tracking, Obj. Detection, Mapping, ...)
            depth_stabilization: 30 # Forces positional tracking to start if greater than 0 - Range: [0,100]
            openni_depth_mode: true # 'false': 32bit float [meters], 'true': 16bit unsigned int [millimeters]
            point_cloud_freq: 10.0 # [DYNAMIC] Frequency of the pointcloud publishing. This value must be equal or less than the camera framerate.
            point_cloud_res: 'COMPACT' # The resolution used for point cloud publishing - 'COMPACT'-Standard resolution. Optimizes processing and bandwidth, 'REDUCED'-Half resolution. Low processing and bandwidth requirements
            depth_confidence: 100 # [DYNAMIC]
            depth_texture_conf: 100 # [DYNAMIC]
            remove_saturated_areas: true # [DYNAMIC]
            max_depth: 30.0 # [DYNAMIC] Maximum depth range in meters (0 to disable) - Range: [0.5, 30.0]
            min_depth: 0.3 # [DYNAMIC] Minimum depth range in meters - Range: [0.1, 10.0]
            # Other parameters are defined, according to the camera model, in the 'zed.yaml', 'zedm.yaml', 'zed2.yaml', 'zed2i.yaml'
            # 'zedx.yaml', 'zedxmini.yaml', 'virtual.yaml' files

        pos_tracking:
            pos_tracking_enabled: true # True to enable positional tracking from start
            pos_tracking_mode: 'GEN_1' # Matches the ZED SDK setting: 'GEN_1', 'GEN_2', 'GEN_3'
            imu_fusion: true # enable/disable IMU fusion. When set to false, only the optical odometry will be used.
            publish_tf: false # [overwritten by launch file options] publish `odom -> camera_link` TF
            publish_map_tf: false # [overwritten by launch file options] publish `map -> odom` TF
            map_frame: 'map'
            odometry_frame: 'odom'
            area_memory: false # Enable to detect loop closure
            area_file_path: '' # Path to the area memory file for relocalization and loop closure in a previously explored environment. 
            save_area_memory_on_closing: false # Save Area memory before closing the camera if `area_file_path` is not empty. You can also use the `save_area_memory` service to save the area memory at any time.
            reset_odom_with_loop_closure: false # Re-initialize odometry to the last valid pose when loop closure happens (reset camera odometry drift)
            publish_3d_landmarks: true # Publish 3D landmarks used by the positional tracking algorithm
            publish_lm_skip_frame: 5 # Publish the landmarks every X frames to reduce bandwidth. Set to 0 to publish all landmarks
            depth_min_range: 0.0 # Set this value for removing fixed zones of the robot in the FoV of the camera from the visual odometry evaluation
            set_as_static: false # If 'true' the camera will be static and not move in the environment
            set_gravity_as_origin: true # If 'true' align the positional tracking world to imu gravity measurement. Keep the yaw from the user initial pose.
            floor_alignment: false # Enable to automatically calculate camera/floor offset
            initial_base_pose: [0.0, 0.0, 0.0, 0.0, 0.0, 0.0] # Initial position of the `camera_link` frame in the map -> [X, Y, Z, R, P, Y]
            path_pub_rate: 2.0 # [DYNAMIC] - Camera trajectory publishing frequency
            path_max_count: -1 # use '-1' for unlimited path size
            two_d_mode: false # Force navigation on a plane. If true the Z value will be fixed to 'fixed_z_value', roll and pitch to zero
            fixed_z_value: 0.0 # Value to be used for Z coordinate if `two_d_mode` is true
            transform_time_offset: 0.0 # The value added to the timestamp of `map->odom` and `odom->camera_link` transform being generated
            reset_pose_with_svo_loop: true # Reset the camera pose to the `initial_base_pose` when the SVO loop is enabled and the SVO playback reaches the end of the file.

        gnss_fusion:
            gnss_fusion_enabled: false # fuse 'sensor_msg/NavSatFix' message information into pose data
            gnss_fix_topic: '/fix' # Name of the GNSS topic of type NavSatFix to subscribe [Default: '/gps/fix']
            gnss_zero_altitude: false # Set to `true` to ignore GNSS altitude information
            h_covariance_mul: 1.0 # Multiplier factor to be applied to horizontal covariance of the received fix (plane X/Y)
            v_covariance_mul: 1.0 # Multiplier factor to be applied to vertical covariance of the received fix (Z axis)
            publish_utm_tf: true # Publish `utm` -> `map` TF
            broadcast_utm_transform_as_parent_frame: false # if 'true' publish `utm` -> `map` TF, otherwise `map` -> `utm`
            enable_reinitialization: false # determines whether reinitialization should be performed between GNSS and VIO fusion when a significant disparity is detected between GNSS data and the current fusion data. It becomes particularly crucial during prolonged GNSS signal loss scenarios.
            enable_rolling_calibration: true # If this parameter is set to true, the fusion algorithm will use a rough VIO / GNSS calibration at first and then refine it. This allows you to quickly get a fused position.
            enable_translation_uncertainty_target: false # When this parameter is enabled (set to true), the calibration process between GNSS and VIO accounts for the uncertainty in the determined translation, thereby facilitating the calibration termination. The maximum allowable uncertainty is controlled by the 'target_translation_uncertainty' parameter.
            gnss_vio_reinit_threshold: 5.0 # determines the threshold for GNSS/VIO reinitialization. If the fused position deviates out of the region defined by the product of the GNSS covariance and the gnss_vio_reinit_threshold, a reinitialization will be triggered.
            target_translation_uncertainty: 0.1 # defines the target translation uncertainty at which the calibration process between GNSS and VIO concludes. By default, the threshold is set at 10 centimeters.
            target_yaw_uncertainty: 0.1 # defines the target yaw uncertainty at which the calibration process between GNSS and VIO concludes. The unit of this parameter is in radian. By default, the threshold is set at 0.1 radians.

        mapping:
            mapping_enabled: false # True to enable mapping and fused point cloud publication
            resolution: 0.05 # maps resolution in meters [min: 0.01f - max: 0.2f]
            max_mapping_range: 15.0 # maximum depth range while mapping in meters (-1 for automatic calculation) [2.0, 20.0]
            fused_pointcloud_freq: 1.0 # frequency of the publishing of the fused colored point cloud
            clicked_point_topic: '/clicked_point' # Topic published by Rviz when a point of the cloud is clicked. Used for plane detection
            pd_max_distance_threshold: 0.15 # Plane detection: controls the spread of plane by checking the position difference.
            pd_normal_similarity_threshold: 15.0 # Plane detection: controls the spread of plane by checking the angle difference.

        object_detection:
            od_enabled: false # True to enable Object Detection
            enable_tracking: true # Whether the object detection system includes object tracking capabilities across a sequence of images.
            detection_model: 'MULTI_CLASS_BOX_FAST' # 'MULTI_CLASS_BOX_FAST', 'MULTI_CLASS_BOX_MEDIUM', 'MULTI_CLASS_BOX_ACCURATE', 'PERSON_HEAD_BOX_FAST', 'PERSON_HEAD_BOX_ACCURATE', 'CUSTOM_YOLOLIKE_BOX_OBJECTS'
            max_range: 30.0 # [m] Upper depth range for detections. The value cannot be greater than 'depth.max_depth'
            filtering_mode: 'NMS3D' # Filtering mode that should be applied to raw detections: 'NONE', 'NMS3D', 'NMS3D_PER_CLASS'
            prediction_timeout: 2.0 # During this time [sec], the object will have OK state even if it is not detected. Set this parameter to 0 to disable SDK predictions
            allow_reduced_precision_inference: false # Allow inference to run at a lower precision to improve runtime and memory usage
            # Other parameters are defined in the 'object_detection.yaml' and 'custom_object_detection.yaml' files

        body_tracking:
            bt_enabled: false # True to enable Body Tracking
            model: 'HUMAN_BODY_MEDIUM' # 'HUMAN_BODY_FAST', 'HUMAN_BODY_MEDIUM', 'HUMAN_BODY_ACCURATE'
            body_format: 'BODY_38' # 'BODY_18','BODY_34','BODY_38'
            allow_reduced_precision_inference: false # Allow inference to run at a lower precision to improve runtime and memory usage
            max_range: 15.0 # [m] Defines an upper depth range for detections
            body_kp_selection: 'FULL' # 'FULL', 'UPPER_BODY'
            enable_body_fitting: false # Defines if the body fitting will be applied
            enable_tracking: true # Defines if the object detection will track objects across images flow
            prediction_timeout_s: 0.5 # During this time [sec], the skeleton will have OK state even if it is not detected. Set this parameter to 0 to disable SDK predictions
            confidence_threshold: 50.0 # [DYNAMIC] - Minimum value of the detection confidence of skeleton key points [0,99]
            minimum_keypoints_threshold: 5 # [DYNAMIC] - Minimum number of skeleton key points to be detected for a valid skeleton

        stream_server:
            stream_enabled: false # enable the streaming server when the camera is open
            codec: 'H264' # different encoding types for image streaming: 'H264', 'H265'
            port: 30000 # Port used for streaming. Port must be an even number. Any odd number will be rejected.
            bitrate: 12500 # [1000 - 60000] Streaming bitrate (in Kbits/s) used for streaming. See https://www.stereolabs.com/docs/api/structsl_1_1StreamingParameters.html#a873ba9440e3e9786eb1476a3bfa536d0
            gop_size: -1 # [max 256] The GOP size determines the maximum distance between IDR/I-frames. Very high GOP size will result in slightly more efficient compression, especially on static scenes. But latency will increase.
            adaptative_bitrate: false # Bitrate will be adjusted depending on the number of packets dropped during streaming. If activated, the bitrate can vary between [bitrate/4, bitrate].
            chunk_size: 16084 # [1024 - 65000] Stream buffers are divided into X number of chunks where each chunk is chunk_size bytes long. You can lower the chunk_size value if the network generates a lot of packet loss: this will generate more chunks for a single image, but each chunk sent will be lighter to avoid inside-chunk corruption. Increasing this value can decrease latency.
            target_framerate: 0 # Framerate for the streaming output. This framerate must be below or equal to the camera framerate. Allowed framerates are 15, 30, 60 or 100 if possible. Any other values will be discarded and camera FPS will be taken.

        advanced: # WARNING: do not modify unless you are confident of what you are doing
            # Reference documentation: https://man7.org/linux/man-pages/man7/sched.7.html
            thread_sched_policy: 'SCHED_BATCH' # 'SCHED_OTHER', 'SCHED_BATCH', 'SCHED_FIFO', 'SCHED_RR' - NOTE: 'SCHED_FIFO' and 'SCHED_RR' require 'sudo'
            thread_grab_priority: 50 # ONLY with 'SCHED_FIFO' and 'SCHED_RR' - [1 (LOW) -> 99 (HIGH)] - NOTE: 'sudo' required
            thread_sensor_priority: 70 # ONLY with 'SCHED_FIFO' and 'SCHED_RR' - [1 (LOW) -> 99 (HIGH)] - NOTE: 'sudo' required
            thread_pointcloud_priority: 60 # ONLY with 'SCHED_FIFO' and 'SCHED_RR' - [1 (LOW) -> 99 (HIGH)] - NOTE: 'sudo' required

        debug:
            sdk_verbose: 1 # Set the verbose level of the ZED SDK
            sdk_verbose_log_file: '' # Path to the file where the ZED SDK will log its messages. If empty, no file will be created. The log level can be set using the `sdk_verbose` parameter.
            use_pub_timestamps: false # Use the current ROS time for the message timestamp instead of the camera timestamp. This is useful to test data communication latency.
            debug_common: false
            debug_sim: false
            debug_video_depth: false
            debug_camera_controls: false
            debug_point_cloud: false
            debug_positional_tracking: false
            debug_gnss: false
            debug_sensors: false
            debug_mapping: false
            debug_terrain_mapping: false
            debug_object_detection: false
            debug_body_tracking: false
            debug_roi: false
            debug_streaming: false
            debug_advanced: false
            debug_nitros: false
            disable_nitros: false # If available, disable NITROS usage for debugging and testing purposes

Summary

I have been trying hard to enhance the map quality, but with no success. I managed to move the processing to the GPU, but the map quality did not improve. In addition, the camera bundler did not save. I am sure the TF has no problem, and the camera info is also correct, so any help or advice would be appreciated.

ebrahimabdelghfar avatar Oct 27 '25 11:10 ebrahimabdelghfar

Can you share a resulting database? What do you mean by quality? Is odometry drifting? Is there misalignment of the point clouds seen from different points of view and/or between the cameras? Is it just that there is too much noise in the resulting point cloud?

The ZED camera is a stereo camera, so I would not expect a very crisp point cloud. By default, their depth algorithm produces a very wavy point cloud. What I would expect with that kind of setup, assuming that the cameras are synchronized together, is relatively good visual odometry, and a point cloud that is too noisy for 3D reconstruction but accurate enough for obstacle detection.

Because it is stereo data, I would not try to generate a point cloud with a range farther than 4 meters if producing an accurate point cloud is the goal.
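For reference, a minimal sketch of where such a 4 m limit could be applied, using parameters already shown in this thread (the values are illustrative only, and which parameter applies depends on how the point cloud is generated):

    # rtabmap side: cap the range used when assembling the occupancy grid / local clouds
    Grid/RangeMax: "4.0"

    # ZED side ('depth' section of common_stereo.yaml): cap the depth published by the camera
    depth:
        max_depth: 4.0 # instead of 30.0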

matlabbe avatar Nov 01 '25 22:11 matlabbe

@matlabbe Hello, I am targeting to build a mesh of the farm from the following database, as in the image below.

Image

I want it to be enhanced and meshed to look like the following map:

Image

The link to the database: link to database

Note

  • There is minimal drift in the odometry, but it was corrected using loop closure
  • I don't really notice any misalignment

ebrahimabdelghfar avatar Nov 02 '25 13:11 ebrahimabdelghfar

  • The static transform between the cameras is wrong; we can see that the grounds seen by the two cameras don't align well together. You could try to debug this in rviz with the DepthCloud plugin, setting the global frame to your camera base frame. As the fields of view overlap a little, you may try a camera extrinsics calibration to find the transform between the two cameras (a rough static-TF sketch is included after this comment). Image

  • For the trees/leaves, it will be hard to get a good mesh out of that. For the ground, maybe we can get a decent textured mesh. I extracted one camera (to avoid the misalignment with the other camera), recomputed visual odometry until it got lost, and exported the point cloud:

    rtabmap-reprocess -default --Rtabmap/DetectionRate 0 -odom -cam 1 rtabmap.db output.db
    rtabmap-export --cloud --voxel 0 --max_range 10 --decimation 1 output.db
    

    Image Image

then the mesh with these options:

rtabmap-export --mesh --texture --voxel 0 --max_range 10 --decimation 1 --texture_range 10 --poisson_depth 9  output.db

Image
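Regarding the first point above (the extrinsic between the two cameras), a rough way to test a corrected value is to override the static TF while inspecting the two DepthClouds in rviz and check whether the ground planes line up. This is only a sketch: the frame names and numbers below are placeholders, not taken from this setup, and it assumes the URDF-driven TF between the cameras is disabled during the test.

    # ROS 2 (Humble or newer argument syntax); older distros use positional arguments
    ros2 run tf2_ros static_transform_publisher \
        --x 0.0 --y -0.12 --z 0.0 --roll 0.0 --pitch 0.0 --yaw -0.17 \
        --frame-id front_camera_base --child-frame-id rear_camera_base

Once a transform that aligns the clouds is found, the same values can be put back into the URDF.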

matlabbe avatar Nov 03 '25 00:11 matlabbe

Thank you very much for your help, I really appreciate it. The cameras indeed have an intended misalignment of 10 degrees, as in this URDF:

Image

So is there any advice?

ebrahimabdelghfar avatar Nov 03 '25 12:11 ebrahimabdelghfar

I am not using rtabmap's odometry; instead I am using the fusion of the two cameras' odometry. How can I do the reprocessing with this odometry, @matlabbe?

ebrahimabdelghfar avatar Nov 03 '25 13:11 ebrahimabdelghfar

@matlabbe I have tried the same steps as yours, but they didn't give me the same results.

PointCloud

rtabmap-reprocess -default --Rtabmap/DetectionRate 0 -odom -cam 1 rtabmap.db output.db
rtabmap-export --cloud --voxel 0 --max_range 10 --decimation 1 output.db

The results:

Image

which is obviously not the same as your results

For meshing:

rtabmap-export --mesh --texture --voxel 0 --max_range 10 --decimation 1 --texture_range 10 --poisson_depth 9  output.db
Image

For odometry

Obviously I am not using the rtabmap odometry; instead I am using sensor fusion between the two cameras' odometry and their IMUs, and it is not getting lost. So how can I use this odometry when reprocessing the database?

Summary

  • How can I use the odometry produced by the sensor fusion of the two cameras in rtabmap-reprocess?
  • How can I produce the same results as yours?
  • Can you tell me the best tool integrated in rtabmap for mesh production, for example "AliceVision"?

ebrahimabdelghfar avatar Nov 03 '25 15:11 ebrahimabdelghfar

How can I use the odometry produced by the sensor fusion of the two cameras in rtabmap-reprocess?

You can remove -odom when reprocessing the database to re-use the odometry saved in it.
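For reference, that is just the earlier command with the -odom flag dropped (a sketch, keeping the single-camera extraction used above):

    rtabmap-reprocess -default --Rtabmap/DetectionRate 0 -cam 1 rtabmap.db output.db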

How can I produce the same results as yours?

I think the results you have are pretty close. For the point cloud rendering, I know MeshLab doesn't render points like rtabmap's viewer (which is based on the PCL visualizer). You may open the point cloud in pcl_viewer instead. However, that is just a rendering difference; the point cloud looks okay. The textured mesh looks in line with my results too: only the ground looks okay and most of the trees are gone. There could be a way to keep more textured trees by setting --texture_depth_error -1 (disabled).
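For example, the meshing command from earlier with that option appended (a sketch):

    rtabmap-export --mesh --texture --voxel 0 --max_range 10 --decimation 1 --texture_range 10 --poisson_depth 9 --texture_depth_error -1 output.db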

Can you tell me the best tool integrated in rtabmap for mesh production, for example "AliceVision"?

We integrated AliceVision in rtabmap just for the texturing stage (with the --multiband option), not the meshing stage. For the meshing stage, we use the Poisson algorithm (original paper back in 2006), with the implementation from the PCL library.
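For reference, a sketch of the same export command with multiband texturing enabled (this assumes rtabmap was built with AliceVision support):

    rtabmap-export --mesh --texture --multiband --voxel 0 --max_range 10 --decimation 1 --texture_range 10 --poisson_depth 9 output.db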

matlabbe avatar Nov 04 '25 01:11 matlabbe

Thank you very much for your help, I really appreciate it. The cameras indeed have an intended misalignment of 10 degrees, as in this URDF: Image

So is there any advice?

It is fine that the cameras have a 10 deg difference; however, it seems that in reality they don't, because the point clouds would overlap perfectly if the URDF values matched reality. EDIT: If you are fusing the odometry of both sensors, that could also add errors to the combined odometry.

matlabbe avatar Nov 04 '25 01:11 matlabbe

I really want to thank you for your help. I have succeeded in producing the same results as yours.

ebrahimabdelghfar avatar Nov 04 '25 21:11 ebrahimabdelghfar

@matlabbe I have a problem when importing the whole map (exported with the steps above) into the CloudCompare application: the scale is very large and no points appear, and the bounding-box dimensions appear to be very large. How can I solve this?

Image

ebrahimabdelghfar avatar Nov 16 '25 11:11 ebrahimabdelghfar

3.4e+38 looks pretty huge, maybe some stereo points that are very far. You can try to export with a maximum depth range to filter spurious very far points. I was using --max_range 10 in my example above to avoid that issue.

matlabbe avatar Nov 23 '25 22:11 matlabbe

@matlabbe Thank you for your reply. I have indeed set the maximum range to 10, but I have the same problem. The map is fixed when I set the max range to 4, but that does not make sense, as the maximum range of my ZED 2i is 30 meters. Is there any parameter I should configure when collecting the database?

ebrahimabdelghfar avatar Nov 24 '25 09:11 ebrahimabdelghfar

In theory, the ZED camera can give any distance, unless you've set a hard limit of 30 meters in their config.

matlabbe avatar Nov 27 '25 19:11 matlabbe

I did indeed set the hard limit to 30 meters in their config.

ebrahimabdelghfar avatar Nov 28 '25 12:11 ebrahimabdelghfar

Can you share the database with that huge 3.4e+38 distance?

matlabbe avatar Nov 28 '25 23:11 matlabbe