Jkulhanek/zipnerf
Add zipnerf code https://arxiv.org/pdf/2304.06706.pdf based on https://github.com/SuLvXiangXin/zipnerf-pytorch. Also added:
- hash decay loss to nerfacto
- `LinearizedSceneContraction` from the zipnerf paper (`SceneContraction` is not supported for zipnerf due to the dimensions of the Gaussian `cov`)
- torch-scatter lib to compute regularizations
- and others
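For context, the hash decay loss added above is essentially a penalty on the squared magnitude of the hash-grid feature entries, averaged per level and summed across levels. A minimal pure-Python sketch of the idea (the function name and weighting are illustrative only; the actual loss in this PR operates on the model's hash-grid parameters):

```python
def hash_decay_loss(hash_tables, weight=0.1):
    """Sum over levels of the mean squared magnitude of each level's entries."""
    total = 0.0
    for table in hash_tables:  # one flat list of feature values per hash level
        total += sum(v * v for v in table) / len(table)
    return weight * total


# Two levels: mean squares are (1 + 1) / 2 = 1.0 and 4.0 / 1 = 4.0.
print(hash_decay_loss([[1.0, -1.0], [2.0]], weight=0.5))  # 2.5
```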
Results for the command:

```
ns-download-data nerfstudio --capture-name=poster
ns-train zipnerf --data data/nerfstudio/poster --log-gradients True --vis tensorboard --gradient-accumulation-steps 1 --pipeline.model.implementation tcnn
```
Photos
Metrics during training:
Also I compared nerfacto vs nerfacto+hash_decay via:

```
ns-train nerfacto --data data/nerfstudio/poster --log-gradients True --vis tensorboard --gradient-accumulation-steps 1 --pipeline.model.implementation tcnn --pipeline.model.compute_hash_regularization True/False
```
Metrics during training (yellow: with hash decay, purple: without):
Depth comparison (from left to right: ZipNeRF, Nerfacto w/o hash decay, Nerfacto with hash decay)
Can you post render results in the description?
Using torch-scatter is a little annoying; we will need to figure out how to handle installation (for the non-docker workflow) since it can't be bundled in the PyPI pip package. We could point users to install instructions if they try to run code that uses it. Anyone have other ideas?
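One common way to implement the "point users to install instructions" idea is a guarded dependency check that raises a helpful error only when the optional code path is actually used. A minimal sketch (the `require` helper and its message are hypothetical, not part of this PR):

```python
import importlib.util


def require(module_name: str, install_hint: str) -> None:
    """Raise a helpful ImportError if an optional dependency is missing."""
    if importlib.util.find_spec(module_name) is None:
        raise ImportError(
            f"'{module_name}' is required for this feature but is not installed. "
            f"{install_hint}"
        )


# Example: call this at the top of any function that needs torch-scatter,
# with a hint pointing at the torch-scatter install instructions.
```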
@tancik I've posted renders and metrics.
I'm using torch_scatter twice. I can replace the use in scale_featurization with torch.scatter_reduce, but for regularize_hash_pyramid it does not have a backward pass. Maybe @jkulhanek has some comments?
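For readers unfamiliar with these ops: both `torch_scatter` and `torch.scatter_reduce` implement the same scatter semantics, where values from a source array are reduced into output slots selected by an index array. A minimal pure-Python sketch of the sum case (illustration only, no torch dependency):

```python
def scatter_sum(src, index, size):
    """Reduce src values into `size` output slots chosen by index (reduce='sum')."""
    out = [0.0] * size
    for value, slot in zip(src, index):
        out[slot] += value
    return out


# Values routed to slot 0 are summed (1 + 3); slot 1 keeps its single value.
print(scatter_sum([1.0, 2.0, 3.0], [0, 1, 0], size=2))  # [4.0, 2.0]
```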
Also I need help with the docs build because it failed.
I think we need to think a bit on how this gets merged with the main repo. A few things to consider:
- torch scatter makes installation more difficult
- many changes break backwards compatibility
- PR contains changes that are not related to zipnerf (ie the default number of samples in nerfacto)

Additionally, we are moving in the direction of incorporating external models via the registry instead of directly into the repo. There are a number of reasons for this, but one big one is that maintaining a growing list of methods in the main repo isn't sustainable (ie github issues get polluted and there is no way to pin an external method to a nerfstudio version). My recommendation would be to split this PR up into core nerfstudio changes so that we can consider each in turn. The actual zip-nerf model can exist in an external github repo (like LERF); this external repo can have the torch-scatter dependency.
> torch scatter makes installation more difficult

what if I solve the `torch-scatter` library problem?

> many changes break backwards compatibility

yeah, because nerfstudio does not fully support Gaussians; it is more focused on point-based methods. I had to change the structure of the code a bit. I also checked to make sure nothing broke.

> PR contains changes that are not related to zipnerf (ie num default samples in nerfacto)

fully agree, I can fix this
- zipnerf is a great bridge between fast point-based and slow multiscale-based methods. I saw a lot of reactions in #1734 and branches from @jkulhanek, @kerrj. Also, zipnerf features can be applied to nerfacto and other hash-based methods. Because of this, I think that zipnerf should be included in nerfstudio.
Hi @Ilyabasharov
I'm doing some studies and I'm trying to test ZipNeRF in Nerfstudio. I pulled the branch, did the installation again, and updated the cli, but I get an error saying that the zipnerf option for training is invalid. Could you explain how to upgrade so that the zipnerf option is available?
Hello, that is strange. Did you solve the problem? It works for me.
Great work @Ilyabasharov
I tested this zipnerf implementation and the speed is about 1/10 compared to nerfacto (22K rays/sec vs 200+ rays/sec). Is that due to using the torch vs tcnn nerfacto_field? The reference implementation is only about 1/2 the speed of nerfacto...
@jingyibo123 hello, yes, I set the default implementation to torch. But it also works with tcnn, which is much faster.
Hello, the implementation seems to work well on real data!
However, I cannot make it work on Blender data: the result is an all-white scene whatever the object, which is a bit odd since the torch implementation is announced to work on it (I did not test that myself though). I am willing to work on it if needed, but I don't really know where to begin debugging.
What's strange is that the (normalized) accumulation displayed by the viewer shows the object (its outline at least; the word "normalized" being the key problem here, I guess). I also tried to display a PCA of the feature codes of each sample multiplied by the sample weights after accumulation, and the object's outline is visible there as well. This means the model seems to learn something. (I tried lots of hyperparameter combinations, including the usual modifications for the blender dataset such as the near and far planes, etc.)
@Ilyabasharov what is your plan for this PR? Where can I find the latest version of the zipnerf implementation?
I think a high-quality nerf method could still have a lot of value maintained as an external method. I can probably help turn this PR into a separate, well-maintained repo.
@bobye Here's one. I thoroughly compared these two; some of the components are fundamentally different, and numerically different as well. The best approach is to test with your own data.
FYI, now the official implementation is out at https://github.com/jonbarron/camp_zipnerf
Closed as a duplicate of https://github.com/nerfstudio-project/nerfstudio/pull/2850