4DGaussians
About the translation of camera in HyperNeRF dataset and coords convention
Hi, thanks for your wonderful work.
- In your code, lines 164-165 of `scene/hyper_loader.py` use `T = - camera.position @ R` instead of `T = camera.position`. Could you please explain the reason behind this?
- Additionally, as far as I know, 3D Gaussian Splatting uses the OpenCV coordinate convention, but the HyperNeRF dataset uses the OpenGL convention. Why don't you convert it to OpenCV coordinates?

Looking forward to your reply! Regards.
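For reference, a common way to convert a camera-to-world pose from the OpenGL convention to the OpenCV convention is to flip the y and z camera axes. This is a generic sketch of that conversion, not code from this repository:

```python
import numpy as np

def opengl_to_opencv_c2w(c2w):
    """Convert a 4x4 camera-to-world pose from OpenGL convention
    (camera looks down -z, y up) to OpenCV convention
    (camera looks down +z, y down) by flipping the y and z axes.
    Hypothetical helper for illustration only."""
    flip = np.diag([1.0, -1.0, -1.0])
    out = c2w.copy()
    out[:3, :3] = c2w[:3, :3] @ flip  # flip the camera's y and z axes
    return out
```

Applying the conversion twice returns the original pose, since flipping the same two axes again undoes the change.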
Thanks for your question!
- I've tried many times to align the camera poses across these datasets. The camera pose in HyperNeRF may be c2w instead of w2c, so I had to change it.
- In fact, the full loader code is borrowed from TiNeuVox, and `colmap.sh` also generates dense point clouds from this dataset, so to keep things useful and simple, I didn't convert it. When running the HyperNeRF dataset, you will find an image saved as 'output.png'; that's my debug figure, which shows the relationship between the camera poses and the point clouds.
Thank you for your quick reply! https://github.com/google/nerfies#datasets
- After checking the repos of nerfies and HyperNeRF, I found that HyperNeRF follows the nerfies dataset format. The orientation is a w2c matrix, and the position is in world coordinates. This makes me even more confused; I still don't understand why this line of code works.
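If the orientation really is a w2c rotation and the position is the camera center in world coordinates, the line can be reconciled as follows. This is a hypothetical sketch (variable names `R_w2c`, `c`, `R_stored` are mine, not repo code), assuming the loader stores the rotation transposed, as 3DGS-style loaders commonly do:

```python
import numpy as np

# If R_w2c is a world-to-camera rotation and c is the camera
# center in world coordinates, the world-to-camera translation
# must satisfy R_w2c @ c + T = 0, i.e. T = -R_w2c @ c.
rng = np.random.default_rng(0)
Q, _ = np.linalg.qr(rng.standard_normal((3, 3)))  # random orthonormal matrix
R_w2c = Q
c = rng.standard_normal(3)

T = -R_w2c @ c
assert np.allclose(R_w2c @ c + T, 0.0)  # camera center maps to the origin

# If the loader stores the rotation transposed (R_stored = R_w2c.T),
# then -c @ R_stored equals -R_w2c @ c, i.e. the same translation T,
# which would explain why `T = - camera.position @ R` works.
R_stored = R_w2c.T
assert np.allclose(-c @ R_stored, T)
```

So under the assumption that `R` in the loader holds the transpose of the w2c rotation, `- camera.position @ R` is exactly the correct w2c translation.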
- So sorry, my bad: HyperNeRF uses the OpenCV coordinate convention.