Unable to Reproduce Results from the Paper
Thank you for your excellent work! I encountered some difficulties while trying to reproduce the results. For instance, on the mipnerf360/bicycle dataset, I achieved a PSNR of 24.8, while the paper reports 25.068. Could you please share the training parameters or random seed used to achieve the reported results? Thank you very much for your help!
Hi, thanks for your interest in our work! The results in Table D.3, where the PSNR for Bicycle is 25.06, were obtained with 32K clusters for the covariance parameters. This differs from the main experimental setup explained in Sec. 4. We have shared pretrained models for the MipNeRF-360, Tanks and Temples, and Deep Blending datasets, for both the CompGS-4k and CompGS-32k variants, with and without opacity regularization, in our git repo. You can download these models and pass their paths to the render and evaluation scripts to get the metrics. The metrics might not exactly match those in the paper since these are re-runs with different seeds.
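For reference, the evaluation flow follows the standard 3DGS scripts; a sketch is below (the paths are placeholders, and the exact script names and flags may differ slightly in our repo, so please check it):

```bash
# Render the test views from a downloaded checkpoint, then compute metrics.
# Script names follow the original 3DGS repo; adjust if this repo renames them.
python render.py -m /path/to/compgs_32k/mipnerf360/bicycle
python metrics.py -m /path/to/compgs_32k/mipnerf360/bicycle
```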
Thank you very much for your response and for addressing my previous inquiry.
We have a few concerns regarding the provided pre-trained models and would greatly appreciate your clarification on the following points:
The results from the 32K pre-trained model for the mipnerf360 dataset are as follows:
| Scene | SSIM | PSNR | LPIPS | Size (MB) |
|---|---|---|---|---|
| bicycle | 0.7552 | 25.0684 | 0.2441 | 33.21 |
| bonsai | 0.9320 | 31.1945 | 0.2226 | 9.17 |
| counter | 0.8954 | 28.4665 | 0.2216 | 9.38 |
| flowers | 0.5882 | 21.2620 | 0.3668 | 25.72 |
| garden | 0.8483 | 26.8220 | 0.1399 | 34.64 |
| kitchen | 0.9175 | 30.7738 | 0.1417 | 13.99 |
| room | 0.9119 | 31.1308 | 0.2347 | 7.96 |
| stump | 0.7700 | 26.6048 | 0.2363 | 30.99 |
| treehill | 0.6342 | 22.7467 | 0.3550 | 24.86 |
| average | 0.8059 | 27.1188 | 0.2403 | 21.10 |
The results from the 16K pre-trained model for the mipnerf360 dataset are as follows:
| Scene | SSIM | PSNR | LPIPS | Size (MB) |
|---|---|---|---|---|
| bicycle | 0.7529 | 24.9890 | 0.2468 | 33.21 |
| bonsai | 0.9301 | 30.9926 | 0.2260 | 9.17 |
| counter | 0.8932 | 28.3391 | 0.2251 | 9.38 |
| flowers | 0.5857 | 21.2020 | 0.3685 | 25.72 |
| garden | 0.8464 | 26.7515 | 0.1425 | 34.64 |
| kitchen | 0.9160 | 30.6272 | 0.1438 | 13.99 |
| room | 0.9105 | 31.0393 | 0.2375 | 7.96 |
| stump | 0.7682 | 26.5665 | 0.2389 | 30.99 |
| treehill | 0.6329 | 22.7223 | 0.3566 | 24.86 |
| average | 0.8040 | 27.0255 | 0.2428 | 21.10 |
However, the paper reports a size of 18 MB for the 16K model and 19 MB for the 32K model, whereas the two sets of pre-trained models above report identical size values for every scene, which seems inconsistent.
Could you please confirm whether this discrepancy in model sizes is expected, or whether we have misunderstood the model configurations?
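For context, here is a back-of-envelope estimate of the index storage alone; the Gaussian count is our assumption rather than a number read from the checkpoints, and it ignores the .ply and codebook files, but it suggests the 32K variant should be measurably larger than the 16K one:

```bash
# Index storage per quantized parameter ≈ N * ceil(log2(K)) / 8 bytes.
# Assumed: ~6M Gaussians, with rot and scale both using the covariance codebook.
echo "16K: $(( 6000000 * 14 * 2 / 8 / 1000000 )) MB of rot+scale indices"  # ~21 MB
echo "32K: $(( 6000000 * 15 * 2 / 8 / 1000000 )) MB of rot+scale indices"  # ~22 MB
```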
Additionally, would it be possible for you to provide a checkpoint from your mipnerf360/bicycle training run? Access to this checkpoint would significantly help us reproduce the results accurately and pin down any remaining issues.
Thank you very much for your time and assistance. I truly appreciate your help and look forward to your response.
Dear authors,
I hope this message finds you well. I would like to follow up on my previous inquiry regarding the discrepancies in the model sizes for the 16K and 32K pre-trained models. In addition, we have encountered another issue during training and would greatly appreciate your assistance.
Despite training with the parameters provided alongside the pre-trained models, we have been unable to obtain similar results. Specifically, for the bicycle and bonsai scenes, we obtained the following results:
| Scene | SSIM | PSNR | LPIPS | Size (MB) |
|---|---|---|---|---|
| bicycle | 0.7591 | 25.1619 | 0.2312 | 69 |
| bonsai | 0.9339 | 31.56 | 0.2176 | 17 |
In comparison, the results from the pre-trained model you provided are as follows:
| Scene | SSIM | PSNR | LPIPS | Size (MB) |
|---|---|---|---|---|
| bicycle | 0.7414 | 24.7621 | 0.2676 | 28 |
| bonsai | 0.9319 | 31.19 | 0.2226 | 9.17 |
We strictly followed the parameter settings provided in the pre-trained model and did not modify the source code. Additionally, we used the original 3DGS repository rather than the updated version. Despite these efforts, the results still do not align with those from the pre-trained model.
Here are the parameters we used when training on the mipnerf360/bicycle scene:
ncls=32768
ncls_sh=4096
ncls_dc=4096
kmeans_iters=1
st_iter=20000
max_iters=30000
max_prune_iter=20000
lambda_reg=1e-7
Training command:
CUDA_VISIBLE_DEVICES=$cuda_device python train_kmeans.py \
--port $port \
-s="$path_source" \
-m="$path_output" \
-i images_4 \
--kmeans_ncls "$ncls" \
--kmeans_ncls_sh "$ncls_sh" \
--kmeans_ncls_dc "$ncls_dc" \
--kmeans_st_iter "$st_iter" \
--kmeans_iters "$kmeans_iters" \
--total_iterations "$max_iters" \
--quant_params sh dc rot scale \
--kmeans_freq 100 \
--opacity_reg \
--lambda_reg "$lambda_reg" \
--max_prune_iter "$max_prune_iter" \
--eval
Therefore, I would like to ask once again whether you could provide a checkpoint from your mipnerf360/bicycle training run. We believe that access to this checkpoint would be immensely helpful in reproducing the results accurately and diagnosing the issues we are currently facing.
Thank you very much for your continued support. We truly appreciate your time and assistance, and look forward to your response.
Best regards.
Same issue here; looking forward to the authors' reply.
Hi. Each zip file you download contains a folder for each scene, including bicycle. In each scene folder, in addition to the checkpoints, there is a file called train_args.json that records the values of all arguments. That will help in case you want to train the model from scratch.
Dear author, we would like to clarify that we trained the model exactly according to the parameters specified in the train_args.json file of the pre-trained model, but the results still differ significantly from those reported in the paper. We did not find the checkpoint files in the pre-trained model you provided. If you could provide the checkpoint file, it would greatly help in making the results more convincing. Thank you.
Hi. Inside each scene folder, there is a folder called point_cloud/iteration_30000 that includes the final checkpoint's point_cloud (.ply), kmeans indices, kmeans centers, and kmeans arguments. Hope this helps.
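For a complete checkpoint, that folder should therefore look roughly like this (file names as described above; exact contents may vary between variants):

```
point_cloud/iteration_30000/
├── point_cloud.ply
├── kmeans_inds.bin
├── kmeans_centers.pth
└── kmeans_args.npy
```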
Hi, thank you for the helpful information. I have checked the folder as suggested, and here is the output from my system:
user@machine:~/pre-trained/compgs_16k_pruned/mipnerf360/bicycle$ pwd
~/pre-trained/compgs_16k_pruned/mipnerf360/bicycle
user@machine:~/pre-trained/compgs_16k_pruned/mipnerf360/bicycle$ find . -type f -name 'kmeans*'
./point_cloud/iteration_30000/kmeans_inds.bin
./point_cloud/iteration_30000/kmeans_args.npy
user@machine:~/pre-trained/compgs_16k_pruned/mipnerf360/bicycle$
As you can see, the kmeans_centers file is not present. Without it, I am unable to reload the .ply file properly. Could you please confirm whether the kmeans_centers file should be available, or whether there are additional steps to obtain it? Thanks again for your assistance!
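For what it's worth, here is the quick check I ran; I am assuming the codebook is a torch-saved file, given the .pth extension used in the kmeans_only checkpoints:

```bash
# Hypothetical sanity check: try to load the codebook file directly.
# This fails here because kmeans_centers.pth is missing from the download.
python -c "import torch; print(torch.load('./point_cloud/iteration_30000/kmeans_centers.pth', map_location='cpu'))"
```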
Hi, sorry for the inconvenience. Unfortunately, we missed uploading kmeans_centers.pth for the kmeans_plus_opacity_reg checkpoints. However, both the 4k and 32k checkpoints of kmeans_only include kmeans_centers.pth for all scenes. We are working on regenerating them and adding them to our checkpoints. Until then, you can follow train_args.json to train the model from scratch and reproduce the results.
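For example, you can pretty-print the saved arguments for a scene and mirror them in your training command (the path below follows the layout shown earlier in this thread):

```bash
# Dump the saved training arguments for the bicycle scene.
python -m json.tool ~/pre-trained/compgs_16k_pruned/mipnerf360/bicycle/train_args.json
```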