Question about the resolution of training images on the Mipnerf-360 dataset.
Hi, and first of all thanks for your great work! In 3DGS, the training images are downsampled 2x for indoor scenes and 4x for outdoor scenes. The Scaffold-GS training script 'train_mip360.sh' does not seem to set any downsampling parameter. What training resolution was used in the paper?
We use the default settings as shown in the code: large-resolution images are rescaled to 1.6K.
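For reference, here is a minimal sketch of that rescaling rule, assuming the 3DGS-style data loader that Scaffold-GS builds on and its default --resolution -1 mode; the function name and the 1600-pixel cap below are illustrative, not copied from the repository:

def target_resolution(orig_w, orig_h, max_width=1600):
    # Images wider than max_width are downscaled so that the width becomes
    # max_width; narrower images are kept at their original size.
    if orig_w <= max_width:
        return orig_w, orig_h
    scale = orig_w / max_width
    return round(orig_w / scale), round(orig_h / scale)

print(target_resolution(4946, 3286))  # -> roughly (1600, 1063)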
Hello, I have another question. The outdoor scene metrics in Tables 6-8 are lower than those of 3DGS, and the visualization results for these scenes were not presented in the paper. We retrained the model, but the results were not very good. The training parameters were:
python /GIT/Scaffold-GS/train.py --eval -s /GIT/Scaffold-GS/data/mipnerf360/bicycle --lod 0 --gpu -1 --voxel_size 0.001 --update_init_factor 16 --appearance_dim 0 --ratio 1 --iterations 30_000 -m /SCGS/bicycle
Test image:
Would it be possible to share the official results?
Is there a problem with my parameter settings?
For simplicity, we use a unified parameter configuration for all mipnerf360 scenes. For outdoor scenes with tiny, thin structures such as leaves and grass, a smaller voxel size should work better. Feel free to try it, and you are welcome to share your results.
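For example, the command above could be rerun with the voxel size halved; the 0.0005 value and the output path are only an illustration for experimentation, not an officially tuned setting:
python /GIT/Scaffold-GS/train.py --eval -s /GIT/Scaffold-GS/data/mipnerf360/bicycle --lod 0 --gpu -1 --voxel_size 0.0005 --update_init_factor 16 --appearance_dim 0 --ratio 1 --iterations 30_000 -m /SCGS/bicycle_vs0.0005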
May I ask if the Mipnerf-360 experimental results in the paper were also obtained using a resolution of 1.6k?
Yes.
In fact, Scaffold-GS shows a substantial improvement over vanilla Gaussian Splatting; the latest results are reported in the arXiv version of Octree-GS.