
Scaffold-GS implementation

Open MrNeRF opened this issue 1 year ago • 35 comments

This is a draft implementation of Scaffold-GS. It is still very raw, but I wanted to get a base version running as quickly as possible.

simple_trainer.py is already functional, and the rendering can be previewed with Viser. However, the densification algorithm (Algorithm 1) from this CVPR 2024 supplemental document has not yet been implemented.

I spoke with Ruilong about working on this implementation. I'm sharing this draft to signal that progress is being made (and that someone is working on it). I hope to finish a first complete Scaffold-GS implementation by tomorrow (but it might be Friday, haha :)

Here’s a small preview for now.

Screenshot from 2024-09-18 14-13-02

MrNeRF avatar Sep 18 '24 14:09 MrNeRF

@liruilong940607 I think it works quite well as of now, but I haven't run the benchmarks yet.

What should we do about the file format? In theory, the MLP parameters need to be saved as well, but that would require its own file format. Any thoughts?

MrNeRF avatar Sep 24 '24 13:09 MrNeRF

I removed the pruning and use relocation instead. It seems to give better results. Here is one of the eval videos at 30k steps. I still need to run the benchmarks :)

https://github.com/user-attachments/assets/71b82a0b-9e88-4309-9fff-91c23b571408

MrNeRF avatar Sep 24 '24 21:09 MrNeRF

Great job @MrNeRF! I think saving in its own file format is fine -- it is what it is
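To illustrate, one possible combined format (the key names here are illustrative, not the trainer's actual keys) simply bundles the Gaussian tensors and the MLP state dicts into a single checkpoint file:

```python
import torch
import torch.nn as nn


def save_scaffold_checkpoint(path, splats, mlps):
    # Bundle anchor/Gaussian parameter tensors and the view-dependent MLP
    # weights into one file, so a single checkpoint restores both.
    torch.save(
        {
            "splats": {k: v.detach().cpu() for k, v in splats.items()},
            "mlps": {k: m.state_dict() for k, m in mlps.items()},
        },
        path,
    )


def load_scaffold_checkpoint(path, mlps):
    # Restore MLP weights in place and return the splat tensors.
    ckpt = torch.load(path, map_location="cpu")
    for k, m in mlps.items():
        m.load_state_dict(ckpt["mlps"][k])
    return ckpt["splats"]
```

Since everything lives in one `torch.save` dict, there is no separate sidecar file to keep in sync, at the cost of the checkpoint not being readable by standard PLY viewers.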

liruilong940607 avatar Sep 25 '24 18:09 liruilong940607

Thank you for your suggestions @liruilong940607

I'll still play a little bit with the parameters and settings before I start cleaning it up. If the measurements are not somehow flawed (and I need to confirm on more data), this could be the new SOTA.

On the garden scene I now get: PSNR: 27.801, SSIM: 0.8678, LPIPS: 0.076, Time: 0.010s/image, Number of GS: 1021525

MrNeRF avatar Sep 25 '24 20:09 MrNeRF

Update: the last commit gives me PSNR: 27.941, SSIM: 0.8705, LPIPS: 0.074, Time: 0.012s/image, Number of GS: 1213319

MrNeRF avatar Sep 25 '24 21:09 MrNeRF

Hey, this is insane. The default MCMC gives PSNR: 26.983, SSIM: 0.8479, LPIPS: 0.113, Time: 0.059s/image, Number of GS: 1000000

liruilong940607 avatar Sep 25 '24 21:09 liruilong940607

I will help with benchmarking. Training now; hopefully it runs without hassle.

*(training screenshots attached)*

ichsan2895 avatar Sep 26 '24 09:09 ichsan2895

Here are my results. I think I overfitted a bit on the garden scene. What is the baseline? According to nerfbaselines, it is SOTA (PSNR) on the following scenes: garden, counter, bonsai. The rest are pretty close; only on the bicycle scene is it average. However, I don't think the measures are directly comparable.

In terms of LPIPS it is SOTA everywhere (if that is comparable at all). I think the visual appearance is in general also better.

I use the script under examples/benchmark. Why are the images downscaled for training on certain scenes?

Eval stats for this PR:

| Scene | PSNR | SSIM | LPIPS | Memory (GB) | Num GS (anchors) |
| --- | --- | --- | --- | --- | --- |
| garden | 27.914 | 0.8718 | 0.0734 | 10.1964 | 1_072_915 |
| bicycle | 25.174 | 0.7627 | 0.1599 | 15.8627 | 1_694_584 |
| stump | 26.873 | 0.7793 | 0.1445 | 14.5899 | 1_565_929 |
| bonsai | 32.833 | 0.9462 | 0.1278 | 6.3332 | 669_712 |
| counter | 29.749 | 0.9141 | 0.1518 | 3.7661 | 388_440 |
| kitchen | 31.626 | 0.9312 | 0.0925 | 3.7940 | 387_015 |
| room | 31.863 | 0.9219 | 0.1645 | 3.7305 | 387_108 |

Here is the same benchmark with the MCMC implementation:

| Scene | PSNR | SSIM | LPIPS | Memory (GB) | Num GS |
| --- | --- | --- | --- | --- | --- |
| garden | 26.955 | 0.8454 | 0.1133 | 1.5476 | 1000000 |
| bicycle | 25.17 | 0.7586 | 0.2008 | 1.6953 | 1000000 |
| stump | 26.655 | 0.7762 | 0.1733 | 1.6055 | 1000000 |
| bonsai | 32.581 | 0.9478 | 0.1235 | 1.7263 | 1000000 |
| counter | 29.338 | 0.917 | 0.1421 | 1.9675 | 1000000 |
| kitchen | 31.396 | 0.9297 | 0.0977 | 1.7045 | 1000000 |
| room | 32.198 | 0.9294 | 0.147 | 1.9354 | 1000000 |

MrNeRF avatar Sep 26 '24 10:09 MrNeRF

@MrNeRF That's strange; stump shows no error in my experiment, using commit e00d134:

```
=== Eval Stats of stump ===
results/benchmark_scaffold/stump/stats/val_step29999.json
{"psnr": 26.83869171142578, "ssim": 0.7819640040397644, "lpips": 0.1407296359539032, "ellipse_time": 0.014932617545127869, "num_GS": 1560666}
```

ichsan2895 avatar Sep 26 '24 12:09 ichsan2895

Here is the latest state of this draft PR:

| Scene | PSNR | SSIM | LPIPS | Memory (GB) | Num GS (anchors) |
| --- | --- | --- | --- | --- | --- |
| garden | 27.9297 | 0.8715 | 0.0728 | 10.8048 | 1141663 |
| bicycle | 25.2822 | 0.7641 | 0.163 | 16.5653 | 1766822 |
| stump | 26.7358 | 0.7793 | 0.1428 | 14.6143 | 1550649 |
| bonsai | 33.0422 | 0.9461 | 0.1273 | 6.6482 | 701940 |
| counter | 29.9382 | 0.9176 | 0.1464 | 3.9759 | 412276 |
| kitchen | 31.8974 | 0.9315 | 0.0913 | 3.6907 | 374534 |
| room | 31.7386 | 0.9241 | 0.1553 | 3.3813 | 347669 |

MrNeRF avatar Sep 26 '24 16:09 MrNeRF

My results with Scaffold-GS at commit e00d134:

| SCENE | PSNR | SSIM | LPIPS | VRAM (GB) | NUM_GS |
| --- | --- | --- | --- | --- | --- |
| garden | 27.922 | 0.873 | 0.074 | 10.776 | 1136493 |
| bicycle | 25.250 | 0.769 | 0.157 | 16.090 | 1717742 |
| stump | 26.839 | 0.782 | 0.141 | 14.553 | 1560666 |
| bonsai | 33.036 | 0.948 | 0.125 | 5.932 | 623538 |
| counter | 29.947 | 0.919 | 0.144 | 3.561 | 364820 |
| kitchen | 32.004 | 0.932 | 0.092 | 3.822 | 391451 |
| room | 32.017 | 0.923 | 0.166 | 4.169 | 435316 |
| AVERAGE | 29.574 | 0.878 | 0.128 | 8.415 | 890004 |

ichsan2895 avatar Sep 27 '24 02:09 ichsan2895

@ichsan2895 Thanks for testing. I think that confirms what was to be expected. Awesome!

MrNeRF avatar Sep 27 '24 11:09 MrNeRF

> @ichsan2895 Thanks for testing. I think that confirms what was to be expected. Awesome!

Don't worry @MrNeRF.

Now I'm testing again with the latest commit 139b521.

ichsan2895 avatar Sep 27 '24 12:09 ichsan2895

My results with Scaffold-GS at commit 139b521:

| SCENE | PSNR | SSIM | LPIPS | VRAM (GB) | NUM_GS |
| --- | --- | --- | --- | --- | --- |
| garden | 27.917 | 0.873 | 0.075 | 12.583 | 1333404 |
| bicycle | 25.165 | 0.766 | 0.162 | 18.581 | 1988100 |
| stump | 26.805 | 0.780 | 0.145 | 15.343 | 1641045 |
| bonsai | 33.043 | 0.947 | 0.126 | 7.146 | 756973 |
| counter | 30.039 | 0.919 | 0.144 | 4.570 | 474202 |
| kitchen | 32.175 | 0.934 | 0.091 | 3.985 | 404767 |
| room | 31.924 | 0.926 | 0.153 | 3.916 | 407397 |
| AVERAGE | 29.581 | 0.878 | 0.128 | 9.446 | 1000841 |

BTW, can it save to PLY so it can be viewed using supersplat or the mkkellogg gaussian viewer?

ichsan2895 avatar Sep 27 '24 15:09 ichsan2895

> BTW, can it save to PLY so it can be viewed using supersplat or the mkkellogg gaussian viewer?

@ichsan2895 There is a way to do that. I thought gsplat was creating PLY files, but (unless I missed it) it is not. So I voted for saving checkpoints, to be consistent.

One can also remove the view dependency from the MLPs, and it still works well, but the metrics are a bit worse. I think this is something that should be supported by Nerfstudio? How do you usually convert the checkpoints to PLY files?

MrNeRF avatar Sep 27 '24 16:09 MrNeRF

> How do you usually convert the checkpoints to PLY files?

For the default densification strategy & the MCMC strategy, the snippet of code in this thread works well: Get .ply file after training?

AFAIK, since that code uses the plyfile library, which does not have a license compatible with gsplat's (GPLv3 vs. Apache 2.0), the code has never been merged. I don't know how to code it with Open3D (Apache 2.0).
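For what it's worth, a binary PLY in the common 3DGS layout can be written with the standard library alone, sidestepping the plyfile license question entirely. A hypothetical sketch (the property names follow the de-facto 3DGS viewer convention; this function is not gsplat code):

```python
import struct


def write_gaussian_ply(path, means, opacities, scales, rotations, colors_dc):
    """Write Gaussians to a binary little-endian PLY file.

    Each argument is a list of per-Gaussian tuples/floats; the property
    names (x/y/z, f_dc_*, opacity, scale_*, rot_*) follow the layout that
    common 3DGS viewers expect.
    """
    n = len(means)
    props = (
        ["x", "y", "z"]
        + [f"f_dc_{i}" for i in range(3)]
        + ["opacity"]
        + [f"scale_{i}" for i in range(3)]
        + [f"rot_{i}" for i in range(4)]
    )
    header = "\n".join(
        ["ply", "format binary_little_endian 1.0", f"element vertex {n}"]
        + [f"property float {p}" for p in props]
        + ["end_header", ""]
    )
    with open(path, "wb") as f:
        f.write(header.encode("ascii"))
        for i in range(n):
            # Pack one vertex: 3 + 3 + 1 + 3 + 4 = 14 float32 values.
            row = (*means[i], *colors_dc[i], opacities[i], *scales[i], *rotations[i])
            f.write(struct.pack("<14f", *row))
```

For Scaffold-GS this would only cover the decoded Gaussians; the anchors and MLPs would still need the checkpoint format.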

ichsan2895 avatar Sep 27 '24 19:09 ichsan2895

I know how to do it. Is it a big feature request? I can code it up in Open3D; I just want to get this finished here for a review first.

I will be gone for a week and not coding. I can do it afterwards, for both scaffold and the rest.

MrNeRF avatar Sep 27 '24 20:09 MrNeRF

> I know how to do it. Is it a big feature request? I can code it up in Open3D; I just want to get this finished here for a review first.

It is not a big feature request, it is a HUGE feature request :1st_place_medal:

> I will be gone for a week and not coding. I can do it afterwards, for both scaffold and the rest.

Happy holidays, can't wait for next week.

ichsan2895 avatar Sep 27 '24 20:09 ichsan2895

I think that's good for review now.

MrNeRF avatar Sep 27 '24 21:09 MrNeRF

Amazing work @MrNeRF! It doesn't seem like the current scaffold training is compatible with packed/sparse mode. It would be great to have that integrated.

earth-bass avatar Sep 28 '24 02:09 earth-bass

Does this draft support multi-GPU? I tested it with commit e8d1207.

I ran this command:

```bash
SCENE_DIR="data/360_v2"
RESULT_DIR="results/benchmark_scaffold_2GPUs"
SCENE_LIST="garden bicycle stump bonsai counter kitchen room" # treehill flowers
RENDER_TRAJ_PATH="ellipse"

for SCENE in $SCENE_LIST;
do
    if [ "$SCENE" = "bonsai" ] || [ "$SCENE" = "counter" ] || [ "$SCENE" = "kitchen" ] || [ "$SCENE" = "room" ]; then
        DATA_FACTOR=2
    else
        DATA_FACTOR=4
    fi

    echo "Running $SCENE"

    # train without eval
    CUDA_VISIBLE_DEVICES=0,1 python3 examples/simple_trainer_scaffold.py --eval_steps 30000 --disable_viewer --data_factor $DATA_FACTOR \
        --render_traj_path $RENDER_TRAJ_PATH --steps_scaler 0.5 --packed \
        --data_dir data/360_v2/$SCENE/ \
        --result_dir $RESULT_DIR/$SCENE/
done
```

The error:

```
Traceback (most recent call last):
  File "/workspace/GSPLAT_SCAFFOLD3/gsplat/examples/simple_trainer_scaffold.py", line 1147, in <module>
    cli(main, cfg, verbose=True)
  File "/workspace/GSPLAT_SCAFFOLD3/gsplat/gsplat/distributed.py", line 344, in cli
    process_context.join()
  File "/usr/local/lib/python3.10/dist-packages/torch/multiprocessing/spawn.py", line 163, in join
    raise ProcessRaisedException(msg, error_index, failed_process.pid)
torch.multiprocessing.spawn.ProcessRaisedException:

-- Process 1 terminated with the following error:
Traceback (most recent call last):
  File "/usr/local/lib/python3.10/dist-packages/torch/multiprocessing/spawn.py", line 74, in _wrap
    fn(i, *args)
  File "/workspace/GSPLAT_SCAFFOLD3/gsplat/gsplat/distributed.py", line 295, in _distributed_worker
    fn(local_rank, world_rank, world_size, args)
  File "/workspace/GSPLAT_SCAFFOLD3/gsplat/examples/simple_trainer_scaffold.py", line 1103, in main
    runner = Runner(local_rank, world_rank, world_size, cfg)
  File "/workspace/GSPLAT_SCAFFOLD3/gsplat/examples/simple_trainer_scaffold.py", line 336, in __init__
    self.splats, self.optimizers = create_splats_with_optimizers(
  File "/workspace/GSPLAT_SCAFFOLD3/gsplat/examples/simple_trainer_scaffold.py", line 190, in create_splats_with_optimizers
    features = torch.zeros((N, cfg.feat_dim))
NameError: name 'cfg' is not defined
```
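The traceback points at a plain Python scoping bug: `create_splats_with_optimizers` references a `cfg` that only exists in the main process, so spawned multi-GPU workers crash. A minimal sketch of the likely fix (names are taken from the traceback; the body is illustrative, not the actual trainer code) is to pass the config in explicitly:

```python
from dataclasses import dataclass

import torch


@dataclass
class Config:
    feat_dim: int = 32  # illustrative stand-in for the real Config class


# Before: `cfg` was looked up as a global, which is undefined inside the
# torch.multiprocessing.spawn workers. Taking it as an argument fixes the scope.
def create_splats_with_optimizers(N: int, cfg: Config):
    features = torch.zeros((N, cfg.feat_dim))  # now uses the argument
    return features
```

Every call site (here, `Runner.__init__`) would then forward its own `cfg` instead of relying on module-level state.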

ichsan2895 avatar Sep 28 '24 05:09 ichsan2895

@ichsan2895 No, I can't test it properly as I don't have a multi-GPU setup.

Either I remove it so it doesn't crash, or I implement it blindly.

@earth-bass Thanks, I'll look into it.

MrNeRF avatar Sep 28 '24 07:09 MrNeRF

My results with Scaffold-GS at commit e8d1207:

| SCENE | PSNR | SSIM | LPIPS | VRAM (GB) | NUM_GS |
| --- | --- | --- | --- | --- | --- |
| garden | 27.886 | 0.872 | 0.075 | 12.533 | 1326524 |
| bicycle | 25.137 | 0.765 | 0.164 | 18.463 | 1976070 |
| stump | 26.810 | 0.780 | 0.143 | 15.199 | 1625766 |
| bonsai | 32.948 | 0.946 | 0.128 | 7.138 | 756253 |
| counter | 29.961 | 0.919 | 0.145 | 4.692 | 487842 |
| kitchen | 32.103 | 0.933 | 0.091 | 3.960 | 401787 |
| room | 31.948 | 0.925 | 0.155 | 3.941 | 410449 |
| AVERAGE | 29.542 | 0.877 | 0.129 | 9.418 | 997813 |

This commit is slightly worse than the previous one, but the difference is small and not significant.

ichsan2895 avatar Sep 28 '24 17:09 ichsan2895

NEW SOTA! @MrNeRF :fire:

Absgrad increases the metrics significantly, at the cost of higher Num_GS and VRAM.

Garden, bicycle, and stump go OOM even when I use a cloud GPU (RTX A6000, 48 GB).

| SCENE | PSNR | SSIM | LPIPS | VRAM (GB) | NUM_GS |
| --- | --- | --- | --- | --- | --- |
| garden | 0.000 | 0.000 | 0.000 | 0.000 | 0 |
| bicycle | 0.000 | 0.000 | 0.000 | 0.000 | 0 |
| stump | 0.000 | 0.000 | 0.000 | 0.000 | 0 |
| bonsai | 33.501 | 0.953 | 0.110 | 23.696 | 2539838 |
| counter | 30.347 | 0.924 | 0.121 | 20.970 | 2229860 |
| kitchen | 32.525 | 0.937 | 0.084 | 13.022 | 1355222 |
| room | 32.318 | 0.931 | 0.136 | 19.273 | 2058410 |
| AVERAGE | 18.384 | 0.535 | 0.064 | 10.994 | 1169047 |

Using commit e8d1207

ichsan2895 avatar Sep 29 '24 01:09 ichsan2895

Awesome! But it should not run OOM. There is a VRAM-intensive part that is not optimally implemented. I can't fix it before next Sunday.

But if someone wants to jump in, feel free; I can point to the relevant code.

I enjoyed playing with different replacements for the MLPs and the noise. That's why I didn't look much into the VRAM consumption. But so far, I haven't managed to find anything that gives another quality boost.

However, absgrad helps, though I didn't evaluate it more systematically. Thanks for running the benchmark. Did you also inspect it visually? In my experience, absgrad tends to introduce additional floaters.

MrNeRF avatar Sep 29 '24 09:09 MrNeRF

It amazes me that absgrad improves the results by 0.4-0.5 dB @ichsan2895. That's absolutely freaking crazy.

It also means that this implementation easily beats the leading methods on the 3DGS leaderboard for Mip-NeRF 360. Some scenes are outperformed by 1 dB PSNR.
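For context, absgrad accumulates the absolute values of each Gaussian's screen-space gradient components rather than their signed sum, so gradients from different views cannot cancel each other out, and large-but-oscillating Gaussians still trip the densification threshold. A toy illustration of the difference (not the gsplat kernel):

```python
def accumulate_grad(grads_per_view, use_abs):
    """Combine per-view 2D screen-space gradients for one Gaussian.

    grads_per_view: list of (gx, gy) gradients, one per training view.
    With use_abs=False, opposing gradients cancel before the norm is taken;
    with use_abs=True (absgrad), they add up and signal densification.
    """
    if use_abs:
        gx = sum(abs(g[0]) for g in grads_per_view)
        gy = sum(abs(g[1]) for g in grads_per_view)
    else:
        gx = sum(g[0] for g in grads_per_view)
        gy = sum(g[1] for g in grads_per_view)
    return (gx ** 2 + gy ** 2) ** 0.5


# Two views pull the same Gaussian in opposite directions:
views = [(0.9, 0.0), (-0.9, 0.0)]
print(accumulate_grad(views, use_abs=False))  # 0.0 -- no densification signal
print(accumulate_grad(views, use_abs=True))   # ~1.8 -- strong signal
```

This also hints at why absgrad densifies more aggressively (and can produce floaters): it only ever raises the accumulated magnitude, never lowers it.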

MrNeRF avatar Sep 29 '24 13:09 MrNeRF

> It amazes me that absgrad improves the results by 0.4-0.5 dB @ichsan2895. That's absolutely freaking crazy.
>
> It also means that this implementation easily beats the leading methods on the 3DGS leaderboard for Mip-NeRF 360. Some scenes are outperformed by 1 dB PSNR.

Yeah. Just fix the VRAM leak problem & add a feature for exporting PLY -> get the new SOTA -> write a paper for the next CVPR :laughing:

ichsan2895 avatar Sep 29 '24 15:09 ichsan2895

> However, absgrad helps, though I didn't evaluate it more systematically. Thanks for running the benchmark. Did you also inspect it visually? In my experience, absgrad tends to introduce additional floaters.

So far it's good: it does not create extra unwanted floaters in this kitchen scene. It needs to be inspected on more datasets.

Scaffold-GS

https://github.com/user-attachments/assets/62a6b76c-1161-4cfd-81f2-e8112241e896

Scaffold-GS with Absgrad

https://github.com/user-attachments/assets/21140970-3dde-4494-87b9-3b6fbce0f0bb

ichsan2895 avatar Sep 29 '24 16:09 ichsan2895

Hey guys! It's super cool to see that the development here leads to a new SOTA! However, I would step back a little bit, because the comparison here might be slightly unfair: this implementation produces more GSs, and consumes more VRAM and training time, than the baselines. For almost all 3DGS approaches, you can always get better performance by letting the model converge to more GSs, at the price of spending more time and VRAM to train it. So I feel a truly fair comparison should also consider training time, VRAM, and the number of GSs.

Btw, have you guys checked whether this PR reproduces the performance of the original Scaffold-GS paper when following its original setup? I think when merging this PR, the default hyperparameters should follow the original paper, so that users by default reproduce the paper's results. We could then support something more advanced like absgrad in the training script to get better performance, and log it in a README file or wherever.

liruilong940607 avatar Sep 29 '24 18:09 liruilong940607

There is currently a VRAM issue I need to resolve; it should be much less memory-hungry. But that won't impact the overall structure of this PR.

The parameterization is different, as I spent some time fine-tuning it, starting from what is given in the paper.

I can run some tests with the original parameters for comparison, but that's already done on nerfbaselines. However, I noticed that the parameters and implementation from the paper's code do not align with key points highlighted in the paper. So what exactly should be compared?

Also, note that I made some steps in the densification algorithm optional, as I found they performed worse. Most notably, I replaced the pruning with MCMC relocation. I found that pruning triggered a much stronger densification, which did not translate into higher quality; the relocation seems to dampen this behavior in favour of fewer anchors and better quality. I also removed dropout, which is not justified in the paper and does not help. There are a few other differences as well; for instance, the schedulers used in gsplat are already different, which will also give different performance.
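To make the pruning-vs-relocation distinction concrete, here is a toy sketch of the idea (the threshold, names, and uniform target sampling are made up for illustration; a real MCMC-style strategy samples targets by opacity and also adjusts scales):

```python
import random


def relocate_low_opacity(anchors, opacities, threshold=0.005):
    """Instead of deleting anchors whose opacity fell below `threshold`
    (pruning), respawn them at the positions of surviving anchors
    (MCMC-style relocation), keeping the anchor count constant.
    """
    dead = [i for i, o in enumerate(opacities) if o < threshold]
    alive = [i for i, o in enumerate(opacities) if o >= threshold]
    for i in dead:
        target = random.choice(alive)
        anchors[i] = list(anchors[target])  # move to a living anchor
        opacities[i] = opacities[target]    # inherit its opacity
    return anchors, opacities
```

Because no anchors are actually deleted, the anchor count stays fixed and there is no burst of re-densification to compensate for pruned regions.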

The voxelization, view-dependent MLPs, and anchor growth are implemented, which I believe are the main contributions of Scaffold-GS.

If I stay closer to the original Scaffold-GS (paper vs. their code), the performance will be clearly worse. Absgrad can be off by default; I also did not activate it in my tests.

Comparing against MCMC can only be fair with the same final budget as what Scaffold-GS uses. But again, for every anchor 10 neural Gaussians are spawned; most scenes will run OOM with this MCMC budget on an RTX 4090.
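For readers unfamiliar with Scaffold-GS: each anchor decodes into k neural Gaussians (k = 10 here) via learned offsets, which is why the effective Gaussian budget is ten times the anchor count. A stripped-down sketch of that decoding step (shapes and names are illustrative, not gsplat's API; opacities and colors would come from the view-dependent MLPs, which are omitted):

```python
import torch


def decode_anchors(anchor_xyz, offsets, anchor_scale, k=10):
    """Spawn k neural Gaussians per anchor.

    anchor_xyz:   (N, 3) anchor positions.
    offsets:      (N, k, 3) learned per-anchor offsets.
    anchor_scale: (N, 3) per-anchor scaling applied to the offsets.
    Returns (N * k, 3) neural Gaussian centers.
    """
    centers = anchor_xyz[:, None, :] + offsets * anchor_scale[:, None, :]
    return centers.reshape(-1, 3)


N = 4
centers = decode_anchors(
    torch.randn(N, 3), torch.randn(N, 10, 3), torch.ones(N, 3)
)
print(centers.shape)  # torch.Size([40, 3])
```

So a benchmark line reporting ~1M anchors corresponds to roughly 10M candidate neural Gaussians before any per-view culling, which is why matching the budget against plain MCMC runs OOM.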

MrNeRF avatar Oct 01 '24 11:10 MrNeRF