packnet-sfm
Input and GT depth visualization
Hi, thanks for your work. I generate the input and GT depth with the following code:

```python
self.dataset = SynchronizedSceneDataset(
    path,
    split=split,
    datum_names=cameras,
    backward_context=back_context,
    forward_context=forward_context,
    generate_depth_from_datum='lidar',
)
```

and I get the RGB images and depth maps of the six cameras shown below.
[RGB images and corresponding depth visualizations for the six cameras]
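For reference, this is roughly how I unpack each sample for visualization. A minimal sketch only: it assumes each dataset item is a list of per-camera datums exposing `rgb` and `depth` keys (as in the DGP examples) and that temporal context is disabled; those assumptions may not match your setup exactly.

```python
import numpy as np

# Sketch: unpack one sample for visualization.
# Assumes each item is a flat list of per-camera datums with
# 'rgb' (PIL image) and 'depth' (HxW numpy array) keys; with
# backward/forward context enabled, items are nested per timestep.
sample = self.dataset[0]
for datum in sample:
    rgb = np.array(datum['rgb'])   # (H, W, 3) uint8 image
    depth = datum['depth']         # (H, W) float array, 0 = no LiDAR return
    valid = depth > 0              # LiDAR projections are sparse
    print(datum['datum_name'],
          'valid depth pixels: %.2f%%' % (100.0 * valid.mean()))
```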
These depth images are different from the ground-truth depth shown in your paper; they look like sparse LiDAR point-cloud projections. How can I get correct dense depth like Figure 1 of the 3D Packing paper via `generate_depth_from_datum`?

Thanks.
How are you coloring the depth maps for visualization?
Thanks for your reply. The visualization code is similar to https://github.com/TRI-ML/DDAD/blob/master/notebooks/DDAD.ipynb
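Concretely, the coloring looks roughly like this: valid pixels are converted to inverse depth, normalized, and mapped through a matplotlib colormap. This is a minimal sketch rather than the repo's exact code, and the percentile normalization is my own choice:

```python
import numpy as np
import matplotlib.cm as cm

def colorize_depth(depth, cmap='plasma', pct=95):
    """Color a sparse depth map for display (sketch).

    Valid pixels (depth > 0) become inverse depth, normalized by a
    high percentile and mapped through `cmap`; invalid pixels stay black.
    """
    inv = np.zeros_like(depth)
    valid = depth > 0
    inv[valid] = 1.0 / depth[valid]
    norm = inv / (np.percentile(inv[valid], pct) + 1e-6)
    colors = cm.get_cmap(cmap)(np.clip(norm, 0.0, 1.0))[..., :3]
    colors[~valid] = 0.0                      # keep missing returns black
    return (colors * 255).astype(np.uint8)
```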
As for resizing, I also tried the `resize_depth_preserve` function provided in your transform code and got similar results. By the way, I noticed that neither the PackNet-SfM nor the FSM paper shows a dense ground-truth depth map. So I guess you only use the sparse depth maps, rather than dense ones, for visualization and metric computation?
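My understanding of `resize_depth_preserve` is that it re-indexes the valid points instead of interpolating, so the output stays sparse. A sketch of that behavior under my assumptions, not the repo's exact implementation:

```python
import numpy as np

def resize_depth_preserve_sketch(depth, shape):
    """Resize a sparse depth map without interpolation (sketch).

    Each valid pixel is moved to its rescaled integer coordinate;
    zeros stay zeros, so sparsity is preserved and no interpolated
    values appear at object boundaries.
    """
    h, w = depth.shape
    rows, cols = np.nonzero(depth > 0)   # valid LiDAR returns
    vals = depth[rows, cols]
    new_rows = np.clip((rows * shape[0] / h).astype(int), 0, shape[0] - 1)
    new_cols = np.clip((cols * shape[1] / w).astype(int), 0, shape[1] - 1)
    out = np.zeros(shape, dtype=depth.dtype)
    out[new_rows, new_cols] = vals       # colliding points keep the last value
    return out
```

If that reading is right, it would explain why the resized GT still looks like a sparse point projection rather than a dense map.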