
Problem with reconstruction result

Open lilly5791 opened this issue 2 years ago • 10 comments

Thank you for sharing such a great deep-learning SLAM algorithm, and thank you for the recently modified code that outputs the numpy pose results to reconstruction_path.

It works very well on my own data, and the demo result is almost the same as the ground truth! However, I found a problem when I visualize the npy result with MATLAB.

As you can see in the picture, the trajectory visualizations for sfm_bench and rgbd_dataset_freiburg3_cabinet match the demo visualization. However, for my own data, the trajectory visualization and the demo visualization are very different.

On my own data the demo result is very good, but I don't understand why the MATLAB visualization is bad even though the MATLAB code is the same. Do you know why the npy result differs from the demo?

[image: droid_slam_visualization]

lilly5791 avatar Jul 29 '22 03:07 lilly5791

Yes, I have the same problem as you.

[image: traj]

Could you give me a contact method (WeChat or e-mail) so we can discuss the details further?

buenos-dan avatar Aug 02 '22 11:08 buenos-dan

Here's my email: [email protected]

The motion in your data is different from mine, but the DROID-SLAM result looks similar, especially the round, noisy part.

lilly5791 avatar Aug 04 '22 07:08 lilly5791

Try saving the data directly from visualization.py; the result is good. Why is the reconstruction bad? I guess the poses are optimized afterwards, so you should save the data as it is produced and update it using dirty_index. PS: don't save the tensor directly; use tensor.item() to save the value.
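A minimal sketch of that idea (hypothetical helper and names; DROID-SLAM's actual video.poses buffer and dirty_index flags may differ in shape and type):

```python
import numpy as np

def update_saved_poses(saved, poses, dirty_index):
    """Refresh the stored pose for every keyframe flagged as dirty.

    saved       -- dict mapping keyframe index -> 7-vector pose (numpy copy)
    poses       -- (N, 7) array of current pose estimates
    dirty_index -- indices of keyframes that were just re-optimized
    """
    for ix in dirty_index:
        # copy plain float values out, so later optimization passes
        # cannot silently mutate what was already saved
        saved[int(ix)] = np.asarray(poses[ix], dtype=np.float64).copy()
    return saved

# toy demonstration with fake data
poses = np.zeros((3, 7)); poses[:, 6] = 1.0          # identity quaternions
saved = update_saved_poses({}, poses, [0, 1, 2])
poses[2, 0] = 5.0                                    # keyframe 2 gets re-optimized
saved = update_saved_poses(saved, poses, [2])
print(saved[2][0])   # 5.0, while saved[0] keeps its original value
```

Calling this after every optimization step keeps the saved dictionary in sync with the latest estimates, instead of freezing the poses at whatever state they had when first written out.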

buenos-dan avatar Aug 09 '22 02:08 buenos-dan

Thank you for your advice. This worked well!

lilly5791 avatar Aug 12 '22 04:08 lilly5791

Hi, can you please explain what you mean by "save the data in time and update data with dirty_index"?

pranav-asthana avatar Aug 17 '22 22:08 pranav-asthana

I think it's because video.poses is a torch.Tensor and the pose information keeps changing. I'm not used to torch, so I just saved every ix and pose to a txt file and used only the last lines for visualization.
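That per-line approach can be sketched like this (hypothetical log format: each line is `ix v0 v1 ... v6`, with later lines overwriting earlier ones for the same keyframe index):

```python
def last_pose_per_index(lines):
    """Keep only the most recent pose for each keyframe index.

    lines -- iterable of strings "ix v0 v1 ... v6"; later lines win.
    """
    latest = {}
    for line in lines:
        parts = line.split()
        if not parts:
            continue  # skip blank lines
        latest[int(parts[0])] = [float(v) for v in parts[1:]]
    return latest

# toy demonstration: keyframe 0 appears twice, the second entry wins
log = [
    "0 0 0 0 0 0 0 1",
    "1 1 0 0 0 0 0 1",
    "0 9 9 9 0 0 0 1",   # re-optimized pose for keyframe 0
]
final = last_pose_per_index(log)
print(final[0][:3])   # [9.0, 9.0, 9.0]
```

Appending every update and filtering afterwards is wasteful but simple, and it sidesteps the mutation problem since the file only ever holds plain numbers.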

lilly5791 avatar Aug 24 '22 09:08 lilly5791

Hello, I have no idea how to use the results in reconstruction_path. Could you tell me your solution? Thanks!

senhuangpku avatar Oct 16 '22 09:10 senhuangpku

This is what I am using. Just leaving this here in case it helps anyone.

  1. Images, poses, and depths are output for keyframes into reconstruction_path. These can be used with a reconstruction algorithm (such as TSDF fusion; a good implementation is in Open3D) or any other MVS system.
  2. In demo.py, traj_est stores the pose for each frame of the input video after global refinement and trajectory filling. If you need this information, you can save it so you can do things like MVS on each input frame.
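To feed either output into Open3D's TSDF integration or another MVS pipeline, the saved 7-vector poses usually have to become 4x4 matrices first. A hedged sketch, assuming the lietorch `tx ty tz qx qy qz qw` layout (verify this against your own output before relying on it):

```python
import numpy as np

def pose7_to_matrix(p):
    """Convert a 7-vector pose (tx ty tz qx qy qz qw) to a 4x4 transform."""
    tx, ty, tz, qx, qy, qz, qw = [float(v) for v in p]
    # standard quaternion -> rotation matrix expansion
    R = np.array([
        [1 - 2*(qy*qy + qz*qz), 2*(qx*qy - qz*qw),     2*(qx*qz + qy*qw)],
        [2*(qx*qy + qz*qw),     1 - 2*(qx*qx + qz*qz), 2*(qy*qz - qx*qw)],
        [2*(qx*qz - qy*qw),     2*(qy*qz + qx*qw),     1 - 2*(qx*qx + qy*qy)],
    ])
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = (tx, ty, tz)
    return T

# identity pose maps to the identity matrix
T = pose7_to_matrix([0, 0, 0, 0, 0, 0, 1])
print(np.allclose(T, np.eye(4)))   # True
```

Note that if the saved poses follow a world-to-camera convention, you will need to invert the matrix before using it as a camera-to-world extrinsic; checking one known frame against the demo visualization is the quickest sanity test.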

pranav-asthana avatar Oct 16 '22 20:10 pranav-asthana