CoherentGS: Sparse Novel View Synthesis with Coherent 3D Gaussians
Avinash Paliwal, Wei Ye, Jinhui Xiong, Dmytro Kotovenko, Rakesh Ranjan, Vikas Chandra, Nima Khademi Kalantari
ECCV 2024
Prerequisites
You can set up the Anaconda environment using:
conda env create --file environment.yml
conda activate coherentgs
CUDA 11.7 is strongly recommended.
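Before training, you can run the short Python check below. This is a minimal sketch, assuming the environment.yml installs PyTorch with CUDA support (typical for 3DGS codebases); it only confirms that a GPU is visible and reports the CUDA build version.

# Optional sanity check: verify a CUDA device is visible and print the CUDA build version.
import torch

print("CUDA available:", torch.cuda.is_available())
print("CUDA build version:", torch.version.cuda)  # ideally reports 11.7
if torch.cuda.is_available():
    print("GPU:", torch.cuda.get_device_name(0))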
Data Preparation
The data preprocessing scripts are included in the repository. Use them to generate the flow and depth inputs required by our optimization; an illustrative flow example is sketched below.
You can download the processed LLFF dataset here. We will add optimized point clouds soon.
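For illustration only, the sketch below computes a dense flow map between two input views with OpenCV's Farneback estimator. This is not the repository's preprocessing script, and the image paths are hypothetical placeholders; use the provided scripts to generate the actual flow and depth consumed by training.

# Illustrative only: dense optical flow between two input views.
import cv2
import numpy as np

# Placeholder paths; point these at two neighboring input views of a scene.
img0 = cv2.imread("path/nerf_llff_data/flower/images/view0.png", cv2.IMREAD_GRAYSCALE)
img1 = cv2.imread("path/nerf_llff_data/flower/images/view1.png", cv2.IMREAD_GRAYSCALE)

# flow has shape (H, W, 2): per-pixel (dx, dy) displacement from view 0 to view 1.
flow = cv2.calcOpticalFlowFarneback(img0, img1, None, 0.5, 3, 15, 3, 5, 1.2, 0)
magnitude = np.linalg.norm(flow, axis=-1)
print("flow shape:", flow.shape, "mean displacement:", magnitude.mean())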
Training
Train on the LLFF dataset with 3 input views. You can choose 2, 3, or 4 views via --num_cameras.
python train.py --source_path path/nerf_llff_data/flower --eval --model_path output/flower --num_cameras 3
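To sweep all LLFF scenes and view counts, a small driver like the sketch below can queue the runs. It is not part of the repo; it only reuses the flags shown above, a placeholder data path, and an assumed output-folder naming scheme.

# Convenience sketch: train every standard LLFF scene with 2, 3, and 4 views.
import subprocess

DATA_ROOT = "path/nerf_llff_data"  # placeholder path, same as in the command above
SCENES = ["fern", "flower", "fortress", "horns", "leaves", "orchids", "room", "trex"]

for scene in SCENES:
    for views in [2, 3, 4]:
        subprocess.run([
            "python", "train.py",
            "--source_path", f"{DATA_ROOT}/{scene}",
            "--eval",
            "--model_path", f"output/{scene}_{views}views",  # assumed naming, not a repo convention
            "--num_cameras", str(views),
        ], check=True)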
Rendering
Run the following script to render the video.
python renderpath.py --source_path path/nerf_llff_data/flower --eval --model_path output/flower
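To batch-render every trained model, a sketch like the one below can be used. It assumes the model folders follow the naming from the training sweep above, which is an assumption rather than a repo convention.

# Convenience sketch: render every model directory found under output/.
import glob
import os
import subprocess

DATA_ROOT = "path/nerf_llff_data"  # placeholder, as in the command above

for model_path in sorted(glob.glob("output/*")):
    # Recover the scene name from the assumed folder naming, e.g. "output/flower_3views" -> "flower".
    scene = os.path.basename(model_path).split("_")[0]
    subprocess.run([
        "python", "renderpath.py",
        "--source_path", os.path.join(DATA_ROOT, scene),
        "--eval",
        "--model_path", model_path,
    ], check=True)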
Acknowledgement
This repo is built on top of 3D Gaussian Splatting.
The modified rasterizer for depth rendering and the evaluation script are from FSGS.
Citation
If you find our work useful for your project, please consider citing the following paper.
@inproceedings{paliwal2024coherentgs,
title={{CoherentGS}: Sparse Novel View Synthesis with Coherent {3D} Gaussians},
author={Paliwal, Avinash and Ye, Wei and Xiong, Jinhui and Kotovenko, Dmytro and Ranjan, Rakesh and Chandra, Vikas and Kalantari, Nima Khademi},
booktitle={European Conference on Computer Vision},
pages={19--37},
year={2024},
organization={Springer}
}