Question for Nerfies experiment
Thanks for your great work! I have a question about the Nerfies experiment mentioned in your paper. Nerfies uses colmap for camera registration and scene-related calculations (scene scale and scene center), but banmo doesn't use colmap. I also want to run the experiment mentioned in your paper. Is the related code released? Or is there any other guidance?
Hi, to run Nerfies on the cat videos, we use colmap to get camera poses, which are more accurate than the PoseNet predictions when the object does not move much. The colmap pre-processing code is modified from this script and is available here.
To preprocess, store the data at third_party/nerfies-0.1/dataset/$seqname and run
python third_party/nerfies-0.1/notebooks/run_colmap.py $seqname
python third_party/nerfies-0.1/notebooks/save_to_cams.py $seqname
The rest should be the same as Nerfies.
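For intuition, here is a minimal sketch of what the scene-parameter step computes for Nerfies: the scene center as the centroid of the COLMAP sparse points, and a scale that fits them into a unit-radius ball. The scene.json keys follow the nerfies repo; the exact heuristic in save_to_cams.py may differ, and make_scene_json is a hypothetical helper name.

```python
# Hedged sketch: derive Nerfies-style scene parameters (center, scale)
# from a COLMAP sparse point cloud. The actual script may use a
# different heuristic (e.g. percentile-based scale).
import numpy as np

def make_scene_json(points, near=0.1, far=10.0):
    """points: (N, 3) array of sparse 3D points from COLMAP."""
    center = points.mean(axis=0)
    # Scale so the farthest point lands at radius 1 after recentering.
    scale = 1.0 / np.max(np.linalg.norm(points - center, axis=1))
    return {
        "center": center.tolist(),
        "scale": float(scale),
        "near": near,
        "far": far,
    }

pts = np.array([[0.0, 0.0, 0.0], [2.0, 0.0, 0.0]])
scene = make_scene_json(pts)
# center = [1, 0, 0]; both points are distance 1 from it, so scale = 1.0
```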
Thanks for your quick reply! But how do you evaluate Nerfies on the ama, eagle, and hands videos? I think colmap may not work on these videos since there is little background texture.
For hands, eagle, and ama, we convert the camera poses to the Nerfies format with the notebook here. I don't have cycles to clean this up, but I hope it can at least provide some guidance.
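The core of such a conversion can be sketched as follows: turn a world-to-camera extrinsic [R|t] into a Nerfies-style camera dict. The field names follow the per-frame camera JSONs in the nerfies repo; the function name and intrinsic values below are illustrative placeholders, not the notebook's actual code.

```python
# Hedged sketch: map a world-to-camera pose (x_cam = R @ x_world + t)
# to the Nerfies camera-JSON layout. Nerfies stores the world-to-camera
# rotation as "orientation" and the camera center in world coordinates
# as "position".
import numpy as np

def pose_to_nerfies_camera(R, t, focal, px, py, width, height):
    position = -R.T @ t  # camera center in world coordinates
    return {
        "orientation": R.tolist(),
        "position": position.tolist(),
        "focal_length": focal,
        "principal_point": [px, py],
        "skew": 0.0,
        "pixel_aspect_ratio": 1.0,
        "radial_distortion": [0.0, 0.0, 0.0],
        "tangential_distortion": [0.0, 0.0],
        "image_size": [width, height],
    }

# Identity rotation with the camera 2 units behind the origin along +z.
cam = pose_to_nerfies_camera(np.eye(3), np.array([0.0, 0.0, 2.0]),
                             500.0, 320.0, 240.0, 640, 480)
```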
Thanks a lot! It really helps! I have a question about the scene center and scale: did you set the scene center and scale for all videos to (0,0,0) and 0.05? These parameters are computed in Nerfies from the colmap point cloud.
For Nerfies, the goal of setting the center and scale parameters is to facilitate training, i.e., moving the scene center to the coordinate origin and properly scaling the input to the MLPs.
We set the center to (0,0,0) because the goal is to reconstruct objects in their own coordinate system. For scale=0.05, we tried a few values and found 0.05 works best for Nerfies on the eagle video.
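The effect of these two parameters can be shown in a one-liner: points are recentered and rescaled before being fed to the MLPs, so with center=(0,0,0) the object stays in its own coordinate frame and only the magnitude shrinks. A minimal sketch (the helper name is illustrative):

```python
# Hedged sketch of how center/scale normalize query points for the MLPs:
# x_norm = scale * (x - center).
import numpy as np

def normalize_points(points, center, scale):
    return scale * (np.asarray(points) - np.asarray(center))

# With center=(0,0,0) and scale=0.05, a point 10 units out maps to 0.5.
p = normalize_points([[10.0, 0.0, 0.0]], center=[0.0, 0.0, 0.0], scale=0.05)
```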
The calibration files can be downloaded here and should be included if you follow the instructions.