Training on a big dataset: scene gets damaged during training
I used COLMAP to align 4000 2K pictures; the sparse model has 2.7 million points. Training with Postshot works fine, but when I train with gsplat 1.5.2 for 200,000 steps, the scene looks fine in the early stage and is completely damaged after about 20,000 steps (CUDA_VISIBLE_DEVICES=0 python gsplat/examples/simple_trainer.py mcmc --data_dir /gz-data/ --data_factor 1 --eval-steps -1 --strategy.cap-max 3000000 --max-steps 200000 --save-steps 200000 --ply-steps 200000 --save-ply).
https://github.com/user-attachments/assets/f767c663-8f06-4302-a473-d4c98b0d3e2d
https://github.com/user-attachments/assets/7fcd46e1-3bd6-4733-8c71-68671fb8aebf
Same issue here. It starts well, and after ~10% of training it starts to fall apart. I tried different settings (mcmc, lower res, ...). My dataset is ~300 images.
The MCMC strategy regularizes splats toward smaller scale and lower opacity at every step. If you have 4000 camera images, each step trains on only one image, which covers only a very small part of the scene; the large fraction of splats outside that camera's view are effectively "unseen" yet still get shrunk by the regularizer, so they melt away over time. This is a fundamental issue with MCMC on large scenes. One workaround is to accumulate updates over several cameras per optimizer step; another is to turn down the MCMC scale and opacity regularization weights.
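As a rough illustration of the first workaround, here is a minimal sketch of gradient accumulation over several cameras before each optimizer step. This is generic PyTorch, not gsplat's actual trainer loop; render_and_loss and the camera list are placeholders you would wire into simple_trainer.py yourself, and ACCUM_CAMERAS is an assumed knob, not an existing flag.

import torch

# Hypothetical sketch: accumulate gradients over several views so the MCMC
# opacity/scale regularizers are not applied against a single camera's coverage.
ACCUM_CAMERAS = 8  # number of views to accumulate before each optimizer step

def train_step(cameras, render_and_loss, optimizers):
    # render_and_loss(cam) is assumed to return the full training loss
    # (photometric term + opacity/scale regularization) for one camera view.
    for opt in optimizers:
        opt.zero_grad(set_to_none=True)
    for cam in cameras[:ACCUM_CAMERAS]:
        loss = render_and_loss(cam) / ACCUM_CAMERAS  # average over the batch
        loss.backward()  # gradients accumulate across the views
    for opt in optimizers:
        opt.step()

For the second workaround, recent versions of gsplat's simple_trainer.py expose the MCMC regularization weights as opacity_reg and scale_reg config fields (roughly --opacity-reg / --scale-reg on the command line), but check the Config dataclass in your version before relying on those flag names.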