[industrial_reconstruction]: Error processing images into tsdf
I keep getting this error upon calling the service. I am using a ZED2i camera.
This is the code I used
ros2 launch industrial_reconstruction reconstruction.launch.xml depth_image_topic:=/zed/zed_node/depth/depth_registered color_image_topic:=/zed/zed_node/rgb/image_rect_color camera_info_topic:=/zed/zed_node/rgb_raw/camera_info
ros2 service call /start_reconstruction industrial_reconstruction_msgs/srv/StartReconstruction "tracking_frame: 'zed_camera_link'
relative_frame: 'map'
translation_distance: 0.0
rotational_distance: 0.0
live: true
tsdf_params:
  voxel_length: 0.02
  sdf_trunc: 0.04
  min_box_values: {x: 0.05, y: 0.25, z: 0.1}
  max_box_values: {x: 7.0, y: 3.0, z: 1.2}
rgbd_params: {depth_scale: 1000.0, depth_trunc: 0.75, convert_rgb_to_intensity: false}"
I just updated the main branch to print out a more detailed error when this occurs. Try updating to that and let me know what error you see.
I suspect the problem has to do with your depth image. Ensure that both your color image and depth image are in the same frame and have all the same camera_info properties. I was able to reproduce this error by using a depth and color image with different properties.
Color Camera Image:
header:
  stamp:
    sec: 1705594923
    nanosec: 359981312
  frame_id: camera_color_optical_frame
height: 360
width: 640
encoding: rgb8
is_bigendian: 0
step: 1920
Depth image that worked:
header:
  stamp:
    sec: 1705594721
    nanosec: 695499520
  frame_id: camera_color_optical_frame
height: 360
width: 640
encoding: 16UC1
is_bigendian: 0
step: 1280
Depth image that didn't work:
header:
  stamp:
    sec: 1705594741
    nanosec: 268154624
  frame_id: camera_depth_optical_frame
height: 480
width: 848
encoding: 16UC1
is_bigendian: 0
step: 1696
Notice that the images that worked are the same size, and all other intrinsic properties also match.
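If you want to compare headers like these on your own setup, you can echo each image topic from the command line; the --no-arr and --once flags (available on recent ROS 2 distros) keep the output readable by hiding the pixel data and exiting after one message. The topic names here are the ZED ones from the original post:
ros2 topic echo --once --no-arr /zed/zed_node/rgb/image_rect_color
ros2 topic echo --once --no-arr /zed/zed_node/depth/depth_registered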
@alanroyce2010 Did this resolve your issue?
Hey, I encountered the same issue and solved it by matching the frame_id like you were saying, but now I'm running into a new issue. The industrial reconstruction node doesn't report any errors and says it generated and saved the mesh in the provided directory, but there's nothing there when I go check afterwards. So out of curiosity, which topics did you choose for each parameter when using an Intel RealSense D400 series camera? I'm using a D415, so the topics in the example usage aren't compatible.
EDIT
When I kill industrial reconstruction I get the following error:
[Open3D WARNING] Write PLY failed: mesh has 0 vertices.
The first step I always try when an empty mesh (or no mesh at all) gets generated is setting the TSDF volume min and max box values to all zeros, as in the sketch below. This causes the resulting mesh to not be cropped at all. I also recommend setting live to true so you can watch the mesh being generated in RVIZ by subscribing to the marker topic that is being published.
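For reference, a start call with the crop box disabled would look something like this (everything except the zeroed box values is copied from the ZED example earlier in the thread; substitute your own frames and parameters):
ros2 service call /start_reconstruction industrial_reconstruction_msgs/srv/StartReconstruction "tracking_frame: 'zed_camera_link'
relative_frame: 'map'
translation_distance: 0.0
rotational_distance: 0.0
live: true
tsdf_params:
  voxel_length: 0.02
  sdf_trunc: 0.04
  min_box_values: {x: 0.0, y: 0.0, z: 0.0}
  max_box_values: {x: 0.0, y: 0.0, z: 0.0}
rgbd_params: {depth_scale: 1000.0, depth_trunc: 0.75, convert_rgb_to_intensity: false}"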
Hey @marrts, thank you so much, I managed to generate a mesh with the pointers you suggested here! One thing I might add that might be frustrating to new users: I couldn't generate the mesh unless the path to the ply file already existed. That was what was preventing the created mesh from being exported. It might be a silly mistake, but I do think it's worth mentioning to create the results_mesh.ply file before attempting to create the mesh.
i.e. touch /home/ros-industrial/industrial_reconstruction_archive/results_mesh.ply
before ros2 service call /start_reconstruction...
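If the archive directory itself doesn't exist yet either, it's worth creating both in one go (same path as above):
mkdir -p /home/ros-industrial/industrial_reconstruction_archive
touch /home/ros-industrial/industrial_reconstruction_archive/results_mesh.ply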
So to generate a mesh with this package and a RealSense camera I did the following:
- Set align_depth.enable to true when launching /realsense2_camera/launch/rs_launch.py
- Use RealSense topics created from align_depth so that color and depth images are aligned with each other (using same frame_id)
ros2 launch industrial_reconstruction reconstruction.launch.xml depth_image_topic:=/camera/aligned_depth_to_color/image_raw color_image_topic:=/camera/color/image_raw camera_info_topic:=/camera/color/camera_info
- Create the aforementioned ply file and set the tsdf min and max box values to all zeros before calling the start reconstruction service
Again thanks for the feedback, and sorry for the lengthy response. I just thought I would share more detail on the process for any other feedback and/or to lend assistance to any other new industrial reconstructors.
Hi @marrts, @liamcarlos
I'm working with a Universal Robot and an Intel Depth Camera D435. I've set up the hardware and software, but when I move the robot, I notice an unusual mesh appearing. Could you provide some guidance on troubleshooting steps to ensure I've correctly set up everything?
Step 1. I used this to get the joint positions of the Universal Robot and visualize them in RVIZ2:
ros2 launch ur_robot_driver ur_control.launch.py ur_type:=ur10 robot_ip:=192.168.1.102 launch_rviz:=true
Step 2. A static transform between camera_link and the flange frame of the UR10:
ros2 run tf2_ros static_transform_publisher --x 0.01 --y 0.06 --z 0 --yaw 0 --pitch 0 --roll 0 --frame-id flange --child-frame-id camera_link
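To verify the static transform is actually being broadcast, tf2_echo can be used between the two frames (a sanity check I'd suggest, not part of the original steps):
ros2 run tf2_ros tf2_echo flange camera_link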
Step 3. Running the D435 depth camera:
ros2 launch realsense2_camera rs_launch.py depth_module.profile:=1280x720x30 pointcloud.enable:=true align_depth.enable:=true
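Before moving on, a quick check that the aligned depth topic from align_depth.enable actually exists (my own habit, not from the original steps):
ros2 topic list | grep aligned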
Step 4. Running industrial_reconstruction:
ros2 launch industrial_reconstruction reconstruction.launch.xml depth_image_topic:=/camera/aligned_depth_to_color/image_raw color_image_topic:=/camera/color/image_raw camera_info_topic:=/camera/color/camera_info
Step 5. Starting the reconstruction:
ros2 service call /start_reconstruction industrial_reconstruction_msgs/srv/StartReconstruction "tracking_frame: 'camera_link'
relative_frame: 'base'
translation_distance: 0.0
rotational_distance: 0.0
live: true
tsdf_params:
  voxel_length: 0.02
  sdf_trunc: 0.04
  min_box_values: {x: 0.0, y: 0.0, z: 0.0}
  max_box_values: {x: 0.0, y: 0.0, z: 0.0}
rgbd_params: {depth_scale: 1000.0, depth_trunc: 0.75, convert_rgb_to_intensity: false}"
The mesh is generated in the x direction of the camera_link frame. As the robot moves, a new mesh appears to be generated according to its new orientation and position.
Hey @bahman-nouri can you post photos of the mesh you're getting?
I agree that photos would be helpful.
One thing that comes to mind from your description of your setup: I think you might be passing the wrong frame for the tracking_frame. The RealSense publishes its images in camera_color_optical_frame, not camera_link. camera_link represents the base link of the camera, and the RealSense publishes transforms relative to camera_link.
To help debug this I'd (concrete commands for the first check follow this list):
- echo /camera/color/camera_info from the command line to confirm the frame the images are being published in
- Look at all the labeled TF frames in RVIZ to make sure they look like you expect. The z-axis of the frame of interest should be pointing out of the camera lens.
- In RVIZ make a camera display for the /camera/color/image_raw topic. This will overlay your virtual world with the live camera data. If you point the camera to look at the robot you should be able to see a virtual robot overlaid with the actual robot. This is a great way to sanity check your camera frame location. To get them to precisely line up you'll need to run a camera calibration, but getting them closely aligned with some guesses should be sufficient to get a reasonable mesh out of this pipeline.
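For that first check, something like this works from the command line (flag availability may vary by ROS 2 distro), and tf2_echo will likewise report the transform the RealSense driver publishes between the camera's base link and its color optical frame:
ros2 topic echo --once --no-arr /camera/color/camera_info
ros2 run tf2_ros tf2_echo camera_link camera_color_optical_frame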
@marrts Nice catch! @bahman-nouri yeah, you would have to change your tracking_frame parameter to the frame_id of those topics, which will most likely be camera_color_optical_frame since 'camera' is your prefix.