
Question about generating a 3D scene graph from real-world scan data

Open wmita opened this issue 8 months ago • 3 comments

Hello, I am currently developing a library that reconstructs real-world scan data into a 3D simulator. In doing so, I would like to utilize the 3D scene graph that your team has developed. Specifically, my plan is to generate a 3D scene graph from a ROS bag, retrieve the positions and orientations of detected objects, and then place those object models in the simulator.

I have been attempting to generate this scene graph (from a ROS bag recorded in a real environment) using Hydra, semantic_inference, and Kimera-VIO, but it is not working as I had hoped. Would you mind clarifying the following points?

  1. The 3D scene graph generated from a ROS bag recorded on a single floor expands too much in the vertical (height) direction.

I am using a ROS robot equipped with a RealSense D455 to record the ROS bag. However, when I create a scene graph using this recorded ROS bag, the shape of the room does not appear correctly.

Specifically, I launch Kimera-VIO by changing `<arg name="imu_topic" default="/$(arg robot_name)/imu/clean/imu"/>` in launch/kimera_vio_ros_uhumans2.launch to `<arg name="imu_topic" default="/imu"/>`, the IMU topic published by the robot.
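As an aside, roslaunch args declared at the top level of a launch file can usually be overridden without editing the file itself, either on the command line (`imu_topic:=/imu`) or from a small wrapper launch file. A sketch of the wrapper approach, using the arg name from this thread (the wrapper file itself is hypothetical):

```xml
<!-- hypothetical wrapper launch file; overrides the IMU topic arg
     instead of editing kimera_vio_ros_uhumans2.launch in place -->
<launch>
  <include file="$(find kimera_vio_ros)/launch/kimera_vio_ros_uhumans2.launch">
    <arg name="imu_topic" value="/imu"/>
  </include>
</launch>
```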

[screenshot]

Also, when I instead leave `<arg name="imu_topic" default="/$(arg robot_name)/imu/clean/imu"/>` in launch/kimera_vio_ros_uhumans2.launch in place of the IMU topic published by the RealSense D455, the trajectory from a ROS bag recorded on a flat floor ends up spreading out in three-dimensional space.

[screenshot]

Neither configuration reproduces the shape of the room correctly. Are there any parameters we should adjust, or files we need to modify, to correct this? Additionally, it appears that some frames are being processed multiple times; could there be a specific reason for this?
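As a side note (my own rough sanity check, not part of Hydra or Kimera-VIO): vertical drift of this kind is often caused by IMU data with wrong units, scaling, or noise parameters, which the VIO then integrates into spurious height. A quick check on the recorded accelerometer stream is that its mean over a stationary segment should have magnitude close to gravity (~9.81 m/s²):

```python
import math

def stationary_gravity_check(accel_samples, tol=0.5):
    """Check that the mean accelerometer reading of a stationary segment
    has magnitude close to gravity (~9.81 m/s^2). A large deviation often
    indicates wrong units or scaling in the recorded IMU stream."""
    n = len(accel_samples)
    mean = [sum(s[i] for s in accel_samples) / n for i in range(3)]
    magnitude = math.sqrt(sum(c * c for c in mean))
    return magnitude, abs(magnitude - 9.81) <= tol

# Synthetic stationary samples (z axis up) for illustration:
samples = [(0.02, -0.01, 9.80), (0.01, 0.00, 9.82), (-0.02, 0.01, 9.81)]
mag, ok = stationary_gravity_check(samples)
print(round(mag, 2), ok)  # magnitude near 9.81 -> ok is True
```

If the magnitude comes out near 1.0 (g instead of m/s²) or some other scale, that alone would explain a badly distorted trajectory.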

  2. We have been modifying the sections of the files shown below to get it to run. Are there any other changes that need to be made?

hydra_ros/launch/datasets/uhumans2.launch

    <arg name="use_gt_frame" default="false" doc="use simulator-provided tf frame"/>
    <arg name="use_gt_semantics" default="false" doc="use simulator-provider semantics"/>

    <arg name="rgb_topic" default="/camera/color/image_raw"/>
    <arg name="rgb_info_topic" default="/camera/color/camera_info"/>
    <arg name="depth_topic" default="/camera/depth/image_rect_raw"/>

The /camera/... topics are published by realsense-ros.
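One thing I would double-check here (standard realsense-ros behavior, not Hydra-specific): `/camera/depth/image_rect_raw` is in the depth camera's own frame, not the color camera's, so pairing it with `/camera/color/image_raw` can distort reconstructed geometry. realsense-ros can publish depth registered to the color frame when launched with `align_depth:=true`:

```shell
roslaunch realsense2_camera rs_camera.launch align_depth:=true
# depth aligned to color is then published on:
#   /camera/aligned_depth_to_color/image_raw
```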

hydra_ros/rviz/uhumans2.rviz

        - Class: rviz/Image
          Enabled: false
          Image Topic: /camera/depth/image_rect_raw

        - Class: rviz/Image
          Enabled: false
          Image Topic: /tesse/left_cam/rgb/image_raw

Kimera-VIO-ROS/launch/kimera_vio_ros_uhumans2.launch

  <arg name="gt_topic"    default="/odom"/>



  <!-- Subscriber Topics -->
  <arg name="left_cam_topic"        default="/camera/color/image_raw"/>
  <arg name="right_cam_topic"       default="/camera/color/image_raw"/>
  <arg name="imu_topic"             default="/imu"/> 
  <arg name="external_odom_topic"   default="/odom"/>

When launching these, I follow the instructions in Hydra-ROS/doc/quickstart.md and run the following commands.

roslaunch hydra_ros uhumans2.launch
rosbag play path/to/rosbag --clock
roslaunch kimera_vio_ros kimera_vio_ros_uhumans2.launch online:=true viz_type:=1 use_lcd:=false odom_frame_id:=odom
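One detail worth verifying when replaying a bag with `--clock` (standard ROS behavior, not specific to Hydra): nodes only follow the bag's clock if `use_sim_time` is set before they start, otherwise timestamps from the bag and wall-clock time get mixed, which can look like frames being processed oddly:

```shell
rosparam set use_sim_time true   # before launching any nodes
rosbag play path/to/rosbag --clock
```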

Also, regarding Kimera-VIO-ROS/launch/kimera_vio_d455.launch and Kimera-VIO-ROS/launch/kimera_vio_ros_realsense_IR.launch, we updated the topics to the appropriate ones but still cannot launch them. Could you provide the necessary conditions or commands to successfully start them?

  3. Are there any special considerations when recording a ROS bag?

I suspect there may be an issue with the way I am recording the ROS bag. Could you advise on any best practices for recording data correctly?
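For reference, this is roughly how I record (topic names are the ones from my setup; adjust to yours). Including /tf and /tf_static in the bag matters if any consumer expects transforms at playback time:

```shell
rosbag record -O d455_run.bag \
  /camera/color/image_raw /camera/color/camera_info \
  /camera/depth/image_rect_raw /camera/depth/camera_info \
  /imu /tf /tf_static
```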

From looking at the demo ROS bag, it seems preferable to move the robot slowly. Does speed actually matter?

Also, could reflections from the floor or similar factors affect the recording process?

wmita avatar Apr 11 '25 06:04 wmita

Did you solve it? I have the same issue: the mesh is not generated properly.

yongyun-iastate avatar May 30 '25 03:05 yongyun-iastate

We have solved it. Hydra apparently needs a map frame included in /tf. We recorded the rosbag while a SLAM launch (e.g. rtabmap or slam_toolbox) was running at the same time. We are in the middle of writing a detailed document on using Hydra with real robots; we haven't published it yet, and I'm not sure our approach is correct.
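To check whether the map frame is actually being published on /tf while the SLAM node runs, the standard tf tools can help (standard ROS tooling; frame names depend on your setup):

```shell
# dump the full transform tree to frames.pdf
rosrun tf2_tools view_frames.py
# or watch the map -> base transform directly
rosrun tf tf_echo map base_link
```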

wmita avatar Jun 05 '25 09:06 wmita

Thanks so much to the team for porting Hydra to 24.04! It motivates me to keep going. I managed to run it with a D455 on 24.04, with odometry from rtabmap. It seems computationally very heavy for my RTX 4070, and that is without the mesh reconstruction shown in the example with the bag file.

I was hoping to get odometry and loop closures as precise as with the uHumans2 dataset. But that dataset includes semantic labels and pose topics by default, so I need precise external odometry to build a good map.

Are there any real-world examples of deploying Hydra?

[screenshot]

maikelborys avatar Aug 11 '25 21:08 maikelborys