DSNeRF
How to calibrate depth value or camera pose for evaluation
Thank you for sharing your code, but I have a few questions.
- Calibration for evaluation. As I understand the released code, for training you used the camera poses obtained by running COLMAP on the training images. How did you then evaluate on the test data (views 0, 8, 16, ... for the LLFF data)? As far as I can tell, there is no calibrated camera pose information for the test data.
For example, in the 2-view training setting:
- training: camera poses and sparse depth (both obtained from the 2 training images)
- evaluation: only camera poses, which are obtained from all images.
How did you bridge this gap between the 2-view (few-view) data and the test data? If possible, could you share the code for calibrating the camera poses or depth values?
- Max depth value in 'run_nerf.py'. I think the max depth computed on line 832 should be changed: max_depth = np.max(rays_depth[:,3,0]) -> max_depth = np.max(rays_depth[:,2,0])
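For reference, here is a minimal sketch of the array layout this fix seems to assume (the shape and the ordering along axis 1 are my reading of the code, not something confirmed by the authors); it only illustrates why index 2 rather than index 3 would pick out the depth:

```python
import numpy as np

# Assumed layout (illustration only): rays_depth has shape (N, 4, 3),
# where axis 1 stacks [ray origin, ray direction, depth, weight] and the
# scalar depth/weight are broadcast across the 3 channels.
N = 1024
rays_o = np.random.rand(N, 3)
rays_d = np.random.rand(N, 3)
depth = np.random.rand(N)
weight = np.random.rand(N)

rays_depth = np.concatenate([
    rays_o[:, None, :],                          # index 0: ray origin
    rays_d[:, None, :],                          # index 1: ray direction
    np.repeat(depth[:, None, None], 3, axis=2),  # index 2: depth
    np.repeat(weight[:, None, None], 3, axis=2)  # index 3: weight
], axis=1)                                       # -> (N, 4, 3)

max_depth = np.max(rays_depth[:, 2, 0])  # max over the depth entries
# np.max(rays_depth[:, 3, 0]) would instead take the max over the weights.
```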
Thanks for your questions and comments.
- Since COLMAP always treats the first camera as the world coordinate frame, you could simply run COLMAP on the train and test views together to get the relative camera poses.
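For concreteness, a minimal sketch of that suggestion (the loader, variable names, and index lists below are placeholders, not the repo's API): after a single COLMAP run over the train + test images, re-express every camera-to-world pose relative to the first camera, so the train and test poses share one coordinate frame.

```python
import numpy as np

def to_first_camera_frame(c2w_all):
    """Re-express camera-to-world poses relative to the first camera.

    c2w_all: (N, 4, 4) camera-to-world matrices from one COLMAP run over
             train + test views (hypothetical input, see note above).
    Returns poses in a frame where camera 0 is the identity, so all views
    share the coordinate system anchored at the first camera.
    """
    w2c_first = np.linalg.inv(c2w_all[0])  # world -> first camera
    return w2c_first[None] @ c2w_all       # (N, 4, 4) relative poses

# Usage sketch (loader and index lists are hypothetical):
# c2w_all = load_joint_colmap_poses('sparse/0')
# rel = to_first_camera_frame(c2w_all)
# train_poses, test_poses = rel[train_idx], rel[test_idx]
```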
- Thanks for pointing that out! We will definitely correct it.
Hi, thanks for sharing your code for this work.
I am currently working on a NeRF-related project where I am using the point cloud and camera poses generated by COLMAP in my pipeline. I wanted to understand, from your experience, how you aligned your scene (the point cloud obtained from COLMAP) with the volume that NeRF considers (the near and far bounds of the scene).

I saw in the code that you compute the depth as follows: depth = (poses[id_im-1,:3,2].T @ (point3D - poses[id_im-1,:3,3])) * sc. Did you verify that the point3D obtained from COLMAP also lies in the same location in NeRF's volume-rendering space? I am asking because NeRF's load_llff_data function applies a number of scaling and rotation operations to the poses loaded from COLMAP, and I was wondering whether that affects the depth values used during training.
I hope I made my point clear. Let me know if not.
Thanks, Aditya
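For what it's worth, here is a minimal sketch of how that quoted line can be read (my interpretation, not confirmed by the authors): it projects the vector from the camera center to the COLMAP 3D point onto the camera's viewing (z) axis, giving the point's z-depth in that camera's frame, and then multiplies by sc, presumably the same global scale the LLFF loader applies to the pose translations and bounds, so the supervising depth stays consistent with the rescaled poses.

```python
import numpy as np

def zdepth_in_camera(c2w, point3d_world, sc=1.0):
    """z-depth of a world-space point along a camera's viewing axis.

    c2w: (3, 4) camera-to-world matrix; columns 0-2 are the camera axes
         expressed in world coordinates (column 2 = viewing / z axis),
         column 3 is the camera center.
    sc:  the global scale applied to the poses and bounds (illustrative).
    """
    z_axis = c2w[:3, 2]   # camera z axis in world coordinates
    center = c2w[:3, 3]   # camera center in world coordinates
    return float(z_axis @ (point3d_world - center)) * sc

# With poses[id_im-1] playing the role of c2w, this matches the quoted line:
# depth = (poses[id_im-1,:3,2].T @ (point3D - poses[id_im-1,:3,3])) * sc
```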
Hello, I would like to ask why the depth is calculated this way, and what 'sc' and 'bds_raw' mean?