
Request for Detailed Configuration to Reproduce Paper Results: Significant Discrepancy in the Number of Gaussians

Open zhuzhiwei99 opened this issue 8 months ago • 2 comments

I want to request further clarification on the configuration details required to reproduce the results from your paper, particularly regarding the number of Gaussians. Despite following the implementation details described in your paper, I have observed a significant difference between my results and those reported in your Appendix.

Here is the configuration I used:

ncls=32768
ncls_sh=4096
ncls_dc=4096
kmeans_iters=1
st_iter=20000
max_iters=30000
max_prune_iter=20000
lambda_reg=1e-7

For the bicycle scene in the MipNeRF360 dataset, I used the following command to train the model:

CUDA_VISIBLE_DEVICES=$cuda_device python train_kmeans.py \
  --port $port \
  -s="$path_source" \
  -m="$path_output" \
  -i images_4  \
  --kmeans_ncls "$ncls" \
  --kmeans_ncls_sh "$ncls_sh" \
  --kmeans_ncls_dc "$ncls_dc" \
  --kmeans_st_iter "$st_iter" \
  --kmeans_iters "$kmeans_iters" \
  --total_iterations "$max_iters" \
  --quant_params sh dc rot scale \
  --kmeans_freq 100 \
  --opacity_reg \
  --lambda_reg "$lambda_reg" \
  --max_prune_iter "$max_prune_iter" \
  --eval

After evaluation and metric calculation, I obtained the following results:

| Scene | Method | SSIM↑ | PSNR↑ | LPIPS↓ | # Gauss |
|---|---|---|---|---|---|
| Bicycle | 3DGS | 0.766 | 25.21 | 0.209 | 4876273 |
| Bicycle | CompGS-32K | 0.762 | 25.18 | 0.227 | 2617054 |
| Bonsai | 3DGS | 0.942 | 32.33 | 0.203 | 1075069 |
| Bonsai | CompGS-32K | 0.937 | 31.64 | 0.215 | 615497 |
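
For reference, the "# Gauss" column above is simply the number of vertices in the saved point cloud. A minimal sketch of how I count it, assuming the standard 3DGS output layout (point_cloud/iteration_30000/point_cloud.ply) and the plyfile package; the path is only an example:

```python
from plyfile import PlyData

def count_gaussians(ply_path: str) -> int:
    """Count Gaussians by reading the number of vertices in the saved point cloud."""
    ply = PlyData.read(ply_path)
    return ply["vertex"].count

# Hypothetical path; adjust to your own output directory and final iteration.
n = count_gaussians("output/bicycle/point_cloud/iteration_30000/point_cloud.ply")
print(f"# Gaussians: {n}")
```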

However, the results in your paper show:

[screenshot of the corresponding results table from the paper's Appendix]

I noticed that the original 3DGS repository has been updated, but I believe there might still be some discrepancies in the configuration or implementation that could account for such a large difference in the number of Gaussians. Could you please provide more detailed configuration settings or any additional steps that might help me reproduce the results more accurately? I would greatly appreciate your guidance on this matter.

Thank you for your attention to this issue. I look forward to your response.

Best regards.

zhuzhiwei99 · Apr 24, 2025

I have encountered similar problems; see #18.

shallitbeso · Apr 25, 2025

Hi, thanks for your interest in our work. We have shared pretrained CompGS-32K models for the MipNeRF-360, Tanks and Temples, and Deep Blending datasets, both with and without opacity regularization, in our Git repo. You can download these models and provide their paths to the render and evaluation scripts to compute the metrics. The metrics might not exactly match those in the paper, since these are re-runs with different seeds.

Each zip file contains a folder per scene, including bicycle. In each scene folder, in addition to the checkpoints, there is a file called train_args.json that lists the value of every argument. That should help in case you want to train from scratch.
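
If it helps others compare against the released settings, here is a small sketch for inspecting that file, assuming train_args.json is a flat JSON dictionary of argument names to values (the scene path below is only an example):

```python
import json

# Hypothetical path; point this at the downloaded scene folder.
with open("bicycle/train_args.json") as f:
    released_args = json.load(f)

# Print every training argument so the exact command line can be reconstructed.
for name, value in sorted(released_args.items()):
    print(f"--{name} {value}")
```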

arghavan-kpm · Apr 25, 2025