RGBD-Integration-2020

Some reconstructed results

Open ethyd4 opened this issue 10 months ago • 5 comments

Hello ma'am, while scanning small objects I am getting some incorrect reconstruction results. Some of the reconstructed results: [screenshots: Screenshot from 2023-08-21 15-42-35, Screenshot from 2023-08-28 12-12-14, Screenshot from 2023-08-28 10-21-51, A_hat1_color_frame0 (2), A_hat1_color_frame0 (1), A_hat1_color_frame0]

ethyd4 avatar Sep 04 '23 05:09 ethyd4

If I scan medium-sized objects like humans, chairs, or tins, I get decent results. But when I scan small objects, the results are not good. Can you give some suggestions to improve them?

ethyd4 avatar Sep 04 '23 05:09 ethyd4

Hi!

  1. The RealSense depth camera has 1-6 cm depth resolution (depending on distance), so the camera's accuracy is probably not enough for the finer detail of small models.
  2. Try filtering out clutter more accurately: remove everything that is not the object from the point cloud.
  3. Try a stricter convergence criterion by tuning the RANSACConvergenceCriteria parameters.
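Point 2 above can be sketched with a simple statistical filter. This is a minimal NumPy illustration, assuming the cloud is an Nx3 array; in Open3D you would call the point cloud's remove_statistical_outlier method instead, which does the same thing internally:

```python
import numpy as np

def statistical_outlier_mask(points, nb_neighbors=8, std_ratio=2.0):
    """Return a boolean mask over points: True = keep.

    points: (N, 3) array. Brute-force distances, fine for small clouds.
    """
    # Pairwise distances between all points.
    d = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
    # Mean distance of each point to its nb_neighbors nearest neighbors
    # (column 0 after sorting is the point itself, distance 0, so skip it).
    knn = np.sort(d, axis=1)[:, 1:nb_neighbors + 1]
    mean_d = knn.mean(axis=1)
    # A point is an outlier if its mean neighbor distance exceeds the
    # global mean by more than std_ratio standard deviations.
    thresh = mean_d.mean() + std_ratio * mean_d.std()
    return mean_d <= thresh

# Dense cluster of "object" points plus one far-away clutter point:
rng = np.random.default_rng(0)
cloud = np.vstack([rng.normal(0.0, 0.01, size=(200, 3)),
                   [[1.0, 1.0, 1.0]]])
mask = statistical_outlier_mask(cloud)
# The far point should be flagged as an outlier; the cluster survives.
print(mask[-1], mask[:200].mean())
```

Raising std_ratio keeps more points (less aggressive filtering); lowering it removes more clutter at the risk of eating into the object.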

Ritchizh avatar Sep 04 '23 13:09 Ritchizh

Hello ma'am, in the point cloud data for each frame we are getting some amount of incorrect data. If I use the statistical or radius outlier removal methods to remove it, I lose the good data as well.

For the Intel RealSense D415 the capture range is 0.3 m to 10 m, so it should capture all the details within that distance. I don't think there is an issue with the camera.

Can you suggest a better way to remove the incorrect data from the point cloud?

ethyd4 avatar Sep 05 '23 06:09 ethyd4

Statistical and radius outlier removal methods are effective. You can also use a bounding box to cut your object out of the point cloud, as I have shown in the example: bounds = [[-np.inf, np.inf], [-np.inf, 0.15], [0, 0.56]] # set the bounds
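The bounds snippet above can be expanded into a small crop helper. A minimal NumPy sketch, assuming the cloud is an Nx3 array of (x, y, z) points (Open3D's AxisAlignedBoundingBox plus crop would be the library equivalent):

```python
import numpy as np

def crop_to_bounds(points, bounds):
    """Keep only points inside per-axis [min, max] bounds.

    points: (N, 3) array; bounds: [[xmin, xmax], [ymin, ymax], [zmin, zmax]].
    """
    lo = np.array([b[0] for b in bounds])
    hi = np.array([b[1] for b in bounds])
    mask = np.all((points >= lo) & (points <= hi), axis=1)
    return points[mask]

# Bounds from the comment above: x unconstrained, y below 0.15 m,
# z (depth) between 0 and 0.56 m.
bounds = [[-np.inf, np.inf], [-np.inf, 0.15], [0, 0.56]]
pts = np.array([[0.0, 0.10, 0.30],   # inside the box -> kept
                [0.0, 0.20, 0.30],   # y above 0.15  -> dropped
                [0.0, 0.10, 0.60]])  # z above 0.56  -> dropped
print(crop_to_bounds(pts, bounds))
```

Unlike statistical filtering, a bounding box cannot eat into the object as long as the box encloses it, which addresses the "losing good data" problem for clutter that lies outside the scan volume.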

Your distance range should be OK for the D415; however, what I meant is that the distance error is about +/- 2%, so you won't see all the fine detail in the shape.
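To put that error budget in perspective (assuming the +/- 2% figure above), the absolute depth error grows linearly with distance, and at typical scanning distances it is already on the order of the fine features of a small object:

```python
# Rough per-distance depth-error estimate at +/- 2% relative error.
for dist_m in (0.3, 0.5, 1.0):
    err_mm = dist_m * 0.02 * 1000  # 2% of the distance, in millimetres
    print(f"{dist_m:.1f} m -> +/- {err_mm:.0f} mm")
```

At the D415's minimum range of 0.3 m this is roughly +/- 6 mm, which is comparable to the feature size of a small object, while for a 30 cm object those same few millimetres are negligible.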

Unfortunately, I don't know why your reconstructed shapes look so distorted. Are only the final meshes distorted, or the RANSAC-merged point clouds too?

Ritchizh avatar Sep 05 '23 15:09 Ritchizh

We are getting good results for objects bigger than 1 ft (30 cm); this might be because of the sensor resolution. We are thinking of using a lidar instead of a depth camera, and plan on going with the Intel RealSense L515; maybe that will give us better results for small objects. The lidar directly gives point cloud data as well as RGB. Will the algorithm you designed work with the Intel RealSense L515 lidar camera, and what changes would we need to make for that? A request: could you try scanning small objects? Otherwise we can provide the dataset for the same. Please let us know where we are going wrong.

ethyd4 avatar Sep 12 '23 06:09 ethyd4