SfmLearner-Pytorch
PyTorch version of SfmLearner from Tinghui Zhou et al.
Thanks for the great work! ` if scale is None: scale = np.cos(lat * np.pi / 180.) pose_matrix = pose_from_oxts_packet(metadata[:6], scale) if origin is None: origin = pose_matrix odo_pose = imu2cam...
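For context, the snippet in question builds a camera pose from a KITTI oxts (GPS/IMU) packet, where `scale` is the Mercator scale factor derived from the latitude of the first frame. Below is a rough sketch of how such a pose is typically assembled in the KITTI raw devkit convention; the function names and exact signatures here are illustrative, not the repo's code.

```python
import numpy as np

def rot_from_rpy(roll, pitch, yaw):
    """Rotation matrix from roll/pitch/yaw (KITTI oxts convention: Rz @ Ry @ Rx)."""
    cx, sx = np.cos(roll), np.sin(roll)
    cy, sy = np.cos(pitch), np.sin(pitch)
    cz, sz = np.cos(yaw), np.sin(yaw)
    Rx = np.array([[1, 0, 0], [0, cx, -sx], [0, sx, cx]])
    Ry = np.array([[cy, 0, sy], [0, 1, 0], [-sy, 0, cy]])
    Rz = np.array([[cz, -sz, 0], [sz, cz, 0], [0, 0, 1]])
    return Rz @ Ry @ Rx

def pose_from_oxts(lat, lon, alt, roll, pitch, yaw, scale, earth_radius=6378137.0):
    """4x4 IMU pose: Mercator projection for the translation, Euler angles for the rotation."""
    tx = scale * lon * np.pi * earth_radius / 180.0
    ty = scale * earth_radius * np.log(np.tan((90.0 + lat) * np.pi / 360.0))
    T = np.eye(4)
    T[:3, :3] = rot_from_rpy(roll, pitch, yaw)
    T[:3, 3] = [tx, ty, alt]
    return T
```

The first pose is usually stored as `origin` so that all later poses are expressed relative to it, and the `imu2cam` transform then moves the result from the IMU frame into the camera frame.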
Thanks for the PyTorch code! When I use the pretrained model (https://drive.google.com/drive/folders/1H1AFqSS8wr_YzwG2xWwAQHTfXN5Moxmx) to infer disparity and depth on KITTI images, the results look weird. The disp...
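A common source of odd-looking results is mixing up disparity and depth: the DispNet predicts disparity, and depth is its inverse (up to an unknown scale). A minimal inference sketch, assuming the checkpoint stores its weights under a `state_dict` key and that the input is normalized the way `run_inference.py` does (check that script for the exact mean/std and resizing):

```python
import numpy as np
import torch
from PIL import Image
from models import DispNetS  # DispNet architecture shipped with this repo

disp_net = DispNetS()
weights = torch.load('dispnet_model_best.pth.tar', map_location='cpu')
disp_net.load_state_dict(weights['state_dict'])
disp_net.eval()

img = np.array(Image.open('kitti_example.png'), dtype=np.float32)  # hypothetical input image
tensor = torch.from_numpy(img.transpose(2, 0, 1)).unsqueeze(0) / 255.0
tensor = (tensor - 0.5) / 0.5  # assumed normalization, verify against run_inference.py

with torch.no_grad():
    disp = disp_net(tensor)        # if your version returns multiple scales, take the first
depth = 1.0 / disp.clamp(min=1e-3) # depth is the inverse of disparity, up to scale
```

Visualizing `disp` directly (e.g. with a `magma` colormap) usually looks more natural than visualizing `depth`, because far-away pixels otherwise dominate the depth range.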
I want to compare my prediction against the ground truth by visualizing the true depth of an image. How can I display the ground-truth depth map?
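Assuming the ground truth comes from the KITTI depth benchmark (the annotated depth maps), depth is stored as a 16-bit PNG where metric depth in metres is the pixel value divided by 256, and 0 marks pixels without a LiDAR measurement. A small visualization sketch:

```python
import numpy as np
import matplotlib.pyplot as plt
from PIL import Image

gt = np.array(Image.open('groundtruth_depth.png'), dtype=np.float32) / 256.0
gt[gt == 0] = np.nan          # hide pixels without a LiDAR measurement

plt.imshow(gt, cmap='plasma')
plt.colorbar(label='depth [m]')
plt.show()
```

If you only have the raw velodyne scans, they first need to be projected into the image plane using the calibration files (as the evaluation code does) before they can be displayed this way.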
Hi, thanks a lot for your work, it's very inspiring. 1. After executing run_inference.py, the depth image obtained is a 3-channel image, so how can I get the...
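The 3-channel file is just a colormapped visualization. If the goal is the raw prediction, one option (a sketch, with `disp` assumed to be the network output tensor of shape [1, 1, H, W]) is to save the single-channel array directly:

```python
import numpy as np
from PIL import Image

disp_np = disp[0, 0].cpu().numpy()
np.save('example_disp.npy', disp_np)   # lossless, single channel, easy to reload

# or store it as a 16-bit PNG with a known scale factor
Image.fromarray((65535 * disp_np / disp_np.max()).astype(np.uint16)).save('example_disp.png')
```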
Hi! Thank you for your nice implementation. I have a question about clipping the depth value in test_disp.py. https://github.com/ClementPinard/SfmLearner-Pytorch/blob/master/test_disp.py Currently, before applying the scale factor to predicted depth, depth is...
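For comparison, the evaluation recipe most monocular-depth papers use (a sketch, not necessarily the exact order in test_disp.py) masks invalid ground truth, rescales the prediction by the median ratio, and clips the scaled prediction to the evaluation range before computing the errors; clipping before versus after scaling changes the numbers slightly:

```python
import numpy as np

def evaluate_depth(gt, pred, min_depth=1e-3, max_depth=80.0):
    mask = (gt > min_depth) & (gt < max_depth)
    gt, pred = gt[mask], pred[mask]
    pred = pred * np.median(gt) / np.median(pred)   # per-image median scaling
    pred = np.clip(pred, min_depth, max_depth)      # clip after scaling
    abs_rel = np.mean(np.abs(gt - pred) / gt)
    rmse = np.sqrt(np.mean((gt - pred) ** 2))
    return abs_rel, rmse
```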
Hello, thank you for open-sourcing this code. I have been training the network recently, but I don't know how to train the pose estimation network, which requires KITTI Odometry sequences 00-08. Looking...
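For context, the usual KITTI Odometry protocol for SfM-Learner-style pose evaluation (an assumption about the intended split, not something specific to this repo) trains on sequences 00-08 and tests on 09-10:

```python
# KITTI Odometry split commonly used for pose-network training/evaluation
train_sequences = ['{:02d}'.format(i) for i in range(9)]   # '00' ... '08'
test_sequences = ['09', '10']
```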
Hi Clement, recently I used your pretrained models `dispnet_model_best.pth.tar` and `exp_pose_model_best.pth.tar` to warp the images, but got bad results. I used 5-frame snippets as input and show the target...
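Bad warps usually come from one of three things: interpreting the pose in the wrong direction (target-to-source vs. source-to-target), using intrinsics that don't match the resized input, or forgetting that the predicted depth is only defined up to scale. For reference, here is a self-contained sketch of the inverse-warping step (not the repo's inverse_warp.py verbatim; the pose here is assumed to map target-camera points into the source camera):

```python
import torch
import torch.nn.functional as F

def inverse_warp(src_img, tgt_depth, pose_mat, K):
    """Sample src_img [B,3,H,W] at the locations where target pixels project,
    given target depth [B,1,H,W], a target->source pose [B,3,4], and intrinsics K [B,3,3]."""
    b, _, h, w = src_img.shape
    ys, xs = torch.meshgrid(torch.arange(h, dtype=torch.float32),
                            torch.arange(w, dtype=torch.float32), indexing='ij')
    pix = torch.stack([xs, ys, torch.ones_like(xs)]).view(1, 3, -1)
    pix = pix.expand(b, -1, -1).to(src_img.device)                    # [B,3,H*W]
    # back-project target pixels to 3D points in the target camera frame
    cam = torch.inverse(K) @ pix * tgt_depth.view(b, 1, -1)
    cam = torch.cat([cam, torch.ones_like(cam[:, :1])], dim=1)        # homogeneous [B,4,H*W]
    # move the points into the source camera frame and project them
    src_pix = K @ (pose_mat @ cam)
    x = src_pix[:, 0] / (src_pix[:, 2] + 1e-7)
    y = src_pix[:, 1] / (src_pix[:, 2] + 1e-7)
    # grid_sample expects coordinates normalized to [-1, 1]
    grid = torch.stack([2 * x / (w - 1) - 1, 2 * y / (h - 1) - 1], dim=-1).view(b, h, w, 2)
    return F.grid_sample(src_img, grid, padding_mode='zeros', align_corners=False)
```

With a 5-frame snippet, each warped source frame should be compared against the middle (target) frame; pairing the wrong frames also produces badly misaligned warps.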
For KITTI, it seems that the data preparation code only works for the Eigen split (for depth estimation), but not for the Odom split (for pose estimation). I wonder if you can...
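For anyone preparing the Odom split themselves: the ground-truth poses of the odometry benchmark are plain text files (`poses/XX.txt`), one line per frame holding the 12 entries of a row-major 3x4 camera-to-world matrix. A small loading sketch:

```python
import numpy as np

def load_odometry_poses(pose_file):
    poses = []
    with open(pose_file) as f:
        for line in f:
            mat = np.array(line.split(), dtype=np.float64).reshape(3, 4)
            poses.append(np.vstack([mat, [0, 0, 0, 1]]))   # promote to 4x4
    return np.stack(poses)
```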
I did not find the code for testing on the Make3D dataset. Could you please provide it?
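There does not appear to be a Make3D evaluation script in the repo. The protocol typically used (and, as far as I recall, reported in the original SfM-Learner paper) resizes the prediction to the ground-truth resolution, applies per-image median scaling, and scores only pixels below roughly 70 m. A hedged sketch of the commonly reported metrics (abs rel, sq rel, RMSE, log10):

```python
import numpy as np

def make3d_errors(gt, pred, max_depth=70.0):
    """gt and pred are assumed to be depth maps of the same resolution."""
    mask = (gt > 0) & (gt < max_depth)
    pred = pred * np.median(gt[mask]) / np.median(pred[mask])   # median scaling
    gt, pred = gt[mask], pred[mask]
    abs_rel = np.mean(np.abs(gt - pred) / gt)
    sq_rel = np.mean((gt - pred) ** 2 / gt)
    rmse = np.sqrt(np.mean((gt - pred) ** 2))
    log10 = np.mean(np.abs(np.log10(gt) - np.log10(pred)))
    return abs_rel, sq_rel, rmse, log10
```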
Hello Clement, First of all, I have to give you kudos for the amazing work you did in this repo. Coming to the reason I wrote this issue, I...