Chen Wang
Thanks for your excellent work. I am curious why you set `ray_near` and `ray_end` to 0.88 and 1.12 (and likewise how you chose other variables such as `h_stddev`). Were these values chosen empirically?
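One plausible reading of those numbers (an assumption on my part, not confirmed by the authors): if the virtual cameras are sampled on a sphere of radius 1 looking at the origin, then `ray_near`/`ray_end` simply bound the depth slab the content can occupy, i.e. 1 ± 0.12. A minimal sketch:

```python
# Hypothetical illustration of where 0.88 / 1.12 could come from.
# Assumes cameras sampled at unit distance from the scene center and
# content contained in a sphere of assumed radius 0.12.
camera_radius = 1.0
content_radius = 0.12

ray_near = camera_radius - content_radius  # 0.88
ray_end = camera_radius + content_radius   # 1.12
print(ray_near, ray_end)
```

Values like `h_stddev` (the spread of sampled camera angles) are typically tuned per dataset, so treating all of these as empirical, dataset-dependent hyperparameters seems safe.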
Why is the undistorted camera-to-world matrix different from the one in matterport_camera_poses?
For example, in scene 8WUmhLawc2A, the undistorted pose is ``scan 01b439d39a8f412fa1837be7afb45254_d0_0.png 01b439d39a8f412fa1837be7afb45254_i0_0.jpg 0.863804 0.361417 0.351025 -5.69715 0.502959 -0.577708 -0.642872 2.5578 -0.0295543 0.731867 -0.680806 1.41593 0 0 0 1``, but in...
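A common cause of such mismatches (an assumption worth checking, not something documented for this exact case) is a differing camera-axis convention between the two files, e.g. the camera's y and z axes negated. A quick diagnostic sketch; the 16 matrix values after the two filenames on the undistorted line would be parsed into `pose_conf`, and `pose_mp` stands in for the matrix read from matterport_camera_poses:

```python
import numpy as np

def parse_pose(tokens):
    """16 whitespace-separated floats -> 4x4 row-major matrix."""
    return np.array(list(map(float, tokens)), dtype=np.float64).reshape(4, 4)

# Hypothesized convention change: negate the camera's y and z axes.
FLIP_YZ = np.diag([1.0, -1.0, -1.0, 1.0])

def same_up_to_flip(pose_conf, pose_mp, atol=1e-4):
    """True if the two poses differ only by the assumed axis flip."""
    return np.allclose(pose_conf, pose_mp @ FLIP_YZ, atol=atol)
```

If this check passes for a few cameras, the discrepancy is purely a convention change, and either matrix can be converted to the other by right-multiplying with the flip.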
Thanks for the excellent work. I noticed that you trained other baselines (e.g. PixelNeRF) on the same dataset as yours, but I found that adapting the extra dataset to PixelNeRF...
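In case it helps others adapting data: pixel-nerf's loaders return one scene per item as a dict of images, camera-to-world poses, and focal length. A minimal sketch in that shape (field names follow its SRN-style datasets; treat them, and the stub `load_scene`, as assumptions to verify against the repo):

```python
import torch
from torch.utils.data import Dataset

def load_scene(path, num_views=50, H=128, W=128):
    # Stand-in loader: replace with real image/pose reading for your data.
    images = torch.rand(num_views, 3, H, W) * 2 - 1       # (V, 3, H, W) in [-1, 1]
    poses = torch.eye(4).expand(num_views, 4, 4).clone()  # camera-to-world
    focal = 100.0                                         # pixels
    return images, poses, focal

class MySceneDataset(Dataset):
    """Hypothetical per-scene dataset in the shape pixel-nerf expects."""

    def __init__(self, scene_paths):
        self.scene_paths = scene_paths

    def __len__(self):
        return len(self.scene_paths)

    def __getitem__(self, idx):
        images, poses, focal = load_scene(self.scene_paths[idx])
        return {
            "path": self.scene_paths[idx],
            "img_id": idx,
            "focal": torch.tensor(focal, dtype=torch.float32),
            "images": images,
            "poses": poses,
        }
```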
Hi, I see your code includes a dataset loader for the LLFF dataset. Could you share any tips on how to run your code on LLFF data?
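For anyone else attempting this: LLFF scenes store their cameras in `poses_bounds.npy`, an (N, 17) array where each row is a flattened 3x5 matrix (a 3x4 camera-to-world pose plus a [height, width, focal] column) followed by the near/far depth bounds. A minimal parsing sketch (this layout is the standard LLFF convention; verify against this repo's own loader):

```python
import numpy as np

def load_llff_poses(path):
    data = np.load(path)                   # (N, 17)
    mats = data[:, :15].reshape(-1, 3, 5)  # (N, 3, 5)
    c2w = mats[:, :, :4]                   # 3x4 camera-to-world poses
    hwf = mats[:, :, 4]                    # per-image [height, width, focal]
    bounds = data[:, 15:]                  # (N, 2) near/far depths
    return c2w, hwf, bounds
```

Note that LLFF's rotation columns come in a [down, right, backwards] ordering, and most NeRF codebases permute them to [right, up, backwards] before use; check which convention this loader expects.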
After throwing an instance of `std::runtime_error`: `what(): nvrtc: error: failed to open libnvrtc-builtins.so.11.2. Make sure that libnvrtc-builtins.so.11.2 is installed correctly. nvrtc compilation failed`. After trying to set `LD_LIBRARY_PATH` as suggested,...
I think the encoder is part of the diffusion model and doesn't need to be trained. So why is it being trained here?
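A hedged guess at what is going on (an assumption about the code's intent, not a confirmed answer): the encoder's weights can be frozen while gradients still flow *through* it, which is necessary whenever a latent-space loss must backpropagate to the rendered image. A minimal sketch of the distinction:

```python
import torch
import torch.nn as nn

# Stand-in encoder; in practice this would be the diffusion model's VAE encoder.
encoder = nn.Sequential(nn.Conv2d(3, 4, 3, padding=1))

# Freeze the weights: the encoder itself is NOT trained...
for p in encoder.parameters():
    p.requires_grad_(False)

# ...but gradients still flow through it to the rendered image,
# so the upstream renderer's parameters receive a gradient.
rendered = torch.rand(1, 3, 64, 64, requires_grad=True)
latents = encoder(rendered)
latents.sum().backward()

print(rendered.grad is not None)        # True: gradient reached the image
print(next(encoder.parameters()).grad)  # None: encoder weights untouched
```

So "training mode" in the code may only mean the encoder participates in the backward pass, not that its weights are updated.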
I trained with the command `python main.py --text 'a squirrel' --workspace trial -O --eval_interval 10`. However, my results turned out to be much worse than the squirrel shown in the...
Can the four input views of the Gaussian UNet be four random viewpoints, and how should their poses be provided?
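On the pose-input half of the question: models in this family commonly inject camera pose per pixel rather than as a global vector, e.g. by concatenating 6-channel Plücker ray embeddings (ray direction d and moment o × d) with the RGB input. A sketch of that general technique (an illustration, not a verbatim copy of this repo's code; the OpenCV-style intrinsics convention is an assumption):

```python
import torch

def plucker_embedding(c2w, fx, fy, cx, cy, H, W):
    """Per-pixel 6-channel Plücker ray embedding for one camera.

    c2w: (4, 4) camera-to-world matrix; intrinsics in pixels.
    Returns (6, H, W): ray direction d and moment o x d.
    """
    j, i = torch.meshgrid(torch.arange(H, dtype=torch.float32),
                          torch.arange(W, dtype=torch.float32), indexing="ij")
    # Camera-space directions (OpenCV-style: x right, y down, z forward).
    dirs = torch.stack([(i - cx) / fx, (j - cy) / fy, torch.ones_like(i)], -1)
    d = dirs @ c2w[:3, :3].T                  # rotate into world space
    d = d / d.norm(dim=-1, keepdim=True)      # (H, W, 3) unit directions
    o = c2w[:3, 3].expand_as(d)               # ray origins (H, W, 3)
    m = torch.cross(o, d, dim=-1)             # moments o x d
    return torch.cat([d, m], dim=-1).permute(2, 0, 1)  # (6, H, W)

# UNet input per view: RGB concatenated with the embedding -> 9 channels.
emb = plucker_embedding(torch.eye(4), fx=300.0, fy=300.0,
                        cx=128.0, cy=128.0, H=256, W=256)
```

Because pose enters per pixel, arbitrary viewpoints are representable in principle, but a model trained only on a fixed four-view layout may still degrade on random viewpoints.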
Thanks for the amazing work! I tried to run the Gaussian reconstruction part on 4 images I rendered; their (elevation, azimuth) angles are (30, 30), (-20, 90),...
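Related to the above, a frequent pitfall with custom renders is the mapping from (elevation, azimuth) to a camera-to-world matrix. A small look-at sketch; the conventions here (z-up world, camera looking at the origin, an assumed radius) are illustrative and may differ from what the pretrained model expects:

```python
import numpy as np

def c2w_from_angles(elev_deg, azim_deg, radius=1.5):
    """Camera-to-world from spherical angles, looking at the origin (z-up)."""
    e, a = np.deg2rad(elev_deg), np.deg2rad(azim_deg)
    eye = radius * np.array([np.cos(e) * np.cos(a),
                             np.cos(e) * np.sin(a),
                             np.sin(e)])
    forward = -eye / np.linalg.norm(eye)            # toward the origin
    right = np.cross(forward, np.array([0.0, 0.0, 1.0]))
    right /= np.linalg.norm(right)
    up = np.cross(right, forward)
    c2w = np.eye(4)
    c2w[:3, :3] = np.stack([right, up, -forward], axis=1)  # OpenGL-style axes
    c2w[:3, 3] = eye
    return c2w

print(c2w_from_angles(30, 30))
```

If the renderer used to make the 4 images follows a different convention (y-up, OpenCV axes, azimuth measured from a different zero), the reconstruction will silently misalign, so it is worth verifying this mapping first.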
Thanks for the excellent work on MipNeRF 360. I am a little confused about the depth loss in MipNeRF 360. The current way of calculating depth is `rendering['distance_mean'] = (jnp.clip(...`
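For reference, here is what such a term typically computes, sketched under the assumption that `weights` are the volume-rendering weights and `t_mids` the sample-interval midpoints (the actual MipNeRF 360 code may normalize or clip differently; `t_near`/`t_far` are scalar bounds here):

```python
import jax.numpy as jnp

def distance_mean(weights, t_mids, t_near, t_far):
    # Expected ray-termination distance E[t] = sum_i w_i t_i / sum_i w_i.
    # The division renormalizes rays whose weights do not sum to 1
    # (partially transparent rays); nan_to_num guards fully empty rays,
    # and the clip keeps the result inside the sampled [near, far] range.
    d = jnp.sum(weights * t_mids, axis=-1) / jnp.sum(weights, axis=-1)
    return jnp.clip(jnp.nan_to_num(d, nan=t_far), t_near, t_far)
```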