
[WIP] dreamfusion implementation

terrancewang opened this issue 2 years ago

terrancewang · Nov 23 '22

@terrancewang I'd try to structure this implementation as a Pipeline since the Model (with Fields) should be largely agnostic to DreamFusion. We should be able to swap out Models that have normals implemented.

ethanweber · Nov 26 '22

@ethanweber I think the larger issue is the loss function: the implementation will require more than the standard `.backward()` call, since DreamFusion directly calculates the gradients instead of having a real "loss" function. The stable-dreamfusion code (https://github.com/ashawkey/stable-dreamfusion/blob/15c2f105a6fd659cc1bc72c13e2168d74678753c/nerf/sd.py#L116) has a custom backward function because the DreamFusion loss essentially ignores the gradient through the U-Net. We could hack around this by doing the backward call / DreamFusion gradient computation in the pipeline's `get_train_loss_dict()` and having the trainer class still step the optimizer for the radiance field. Does that sit OK with you, or do you have ideas on how this could fit more cleanly into the current system?
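For reference, the gradient trick in question looks roughly like this: a minimal sketch of Score Distillation Sampling (SDS), assuming a diffusers-style `unet` and `scheduler`, cumulative `alphas`, and rendered `latents` that carry gradient back into the radiance field. All names here are illustrative, not the actual nerfstudio or stable-dreamfusion API.

```python
import torch

def sds_backward(unet, scheduler, alphas, latents, text_embeddings):
    """Sketch of the SDS update: inject the gradient directly, no scalar loss."""
    # Sample a random timestep and add the corresponding noise to the render.
    t = torch.randint(20, 980, (latents.shape[0],), device=latents.device)
    noise = torch.randn_like(latents)
    noisy_latents = scheduler.add_noise(latents, noise, t)

    # Predict the noise with the frozen U-Net; no gradient flows through it.
    with torch.no_grad():
        noise_pred = unet(noisy_latents, t, encoder_hidden_states=text_embeddings).sample

    # w(t) * (eps_pred - eps) is used as the gradient of the latents themselves,
    # which is why a plain loss.backward() doesn't fit.
    w = (1 - alphas[t]).view(-1, 1, 1, 1)
    grad = w * (noise_pred - noise)
    latents.backward(gradient=grad)
```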

jake-austin · Nov 29 '22

> @ethanweber I think the larger issue is the loss function […] do you have ideas on how this could fit more cleanly into the current system?

Maybe we could distinguish generative vs. non-generative pipelines, and let generative pipelines do custom backward steps?
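One hypothetical shape for that split; the method names mirror nerfstudio's `VanillaPipeline`, but the `custom_backward` flag and the trainer-side handshake are assumptions, not existing API:

```python
from nerfstudio.pipelines.base_pipeline import VanillaPipeline


class GenerativePipeline(VanillaPipeline):
    """Sketch of a pipeline that owns its own backward pass (hypothetical)."""

    custom_backward: bool = True  # a trainer aware of this flag would skip loss.backward()

    def get_train_loss_dict(self, step: int):
        ray_bundle, batch = self.datamanager.next_train(step)
        model_outputs = self.model(ray_bundle)
        # Inject the guidance gradient here (e.g. sds_backward above); the
        # trainer then only has to step the radiance-field optimizer.
        loss_dict = {}  # nothing meaningful to report as a scalar loss
        metrics_dict = {}
        return model_outputs, loss_dict, metrics_dict
```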

mcallisterdavid · Nov 29 '22

Not sure if this is expected, but I got a warning about UNet telling me to run `scripts\trace_stablediff.py` -- I eventually got that to work and then saw `Loading traced UNet.` -- but I wonder if that should become part of the install process.
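For anyone curious what that script involves: tracing the Stable Diffusion U-Net with TorchScript looks roughly like the following. This is a sketch, not the actual `scripts/trace_stablediff.py`; the model ID and tensor shapes are assumptions for SD v1.x.

```python
import torch
from diffusers import UNet2DConditionModel


class UNetWrapper(torch.nn.Module):
    """Wrap the U-Net so it returns a plain tensor, which torch.jit.trace needs."""

    def __init__(self, unet):
        super().__init__()
        self.unet = unet

    def forward(self, sample, timestep, encoder_hidden_states):
        return self.unet(sample, timestep, encoder_hidden_states, return_dict=False)[0]


unet = UNet2DConditionModel.from_pretrained(
    "runwayml/stable-diffusion-v1-5", subfolder="unet", torch_dtype=torch.float16
).to("cuda").eval()

# Dummy inputs matching SD v1.x latent and CLIP text-embedding shapes.
sample = torch.randn(2, 4, 64, 64, dtype=torch.float16, device="cuda")
timestep = torch.tensor([500], device="cuda")
text_emb = torch.randn(2, 77, 768, dtype=torch.float16, device="cuda")

traced = torch.jit.trace(UNetWrapper(unet), (sample, timestep, text_emb), check_trace=False)
traced.save("unet_traced.pt")  # what later shows up as "Loading traced UNet."
```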

machenmusik · Feb 08 '23

> Not sure if this is expected, but I got a warning about UNet telling me to run `scripts\trace_stablediff.py` […] I wonder if that should become part of the install process.

That's expected. It will eventually be part of the install instructions.

tancik · Feb 08 '23

> That's expected. It will eventually be part of the install instructions.

ok np.

Suggestion: given that the iteration times are 1 to 2 orders of magnitude longer, it may be worth changing the default checkpoint interval for dreamfusion from 2000 to something much smaller.
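In the meantime the interval can presumably be overridden per run, assuming the WIP method goes through the standard TrainerConfig (whose steps_per_save field tyro exposes as a flag):

ns-train dreamfusion --steps-per-save 250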

machenmusik · Feb 08 '23

(Note: with 16GB of VRAM I'm still getting occasional CUDA OOMs, even after training is complete and I'm only using the viewer.)

machenmusik · Feb 11 '23

Hi @tancik,

How is the project going? Will I be able to use it out of the box?

liming-ai · Feb 24 '23

I get the following warning when running other models like nerfacto (not observed on main):

```
ns-train nerfacto --data data/nerfstudio/egg
/home/tancik/miniconda3/envs/nerfstudio/lib/python3.8/site-packages/tyro/_fields.py:794: UserWarning: Mutable type <class 'nerfstudio.engine.optimizers.AdamOptimizerConfig'> is used as a default value for `optimizer`. This is dangerous! Consider using `dataclasses.field(default_factory=...)` or marking <class 'nerfstudio.engine.optimizers.AdamOptimizerConfig'> as frozen.
  warnings.warn(
/home/tancik/miniconda3/envs/nerfstudio/lib/python3.8/site-packages/tyro/_fields.py:794: UserWarning: Mutable type <class 'nerfstudio.engine.schedulers.ExponentialDecaySchedulerConfig'> is used as a default value for `scheduler`. This is dangerous! Consider using `dataclasses.field(default_factory=...)` or marking <class 'nerfstudio.engine.schedulers.ExponentialDecaySchedulerConfig'> as frozen.
  warnings.warn(
```
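The fix tyro is asking for looks like this on whichever config class declares those defaults (the config class name here is a hypothetical stand-in; the field types are the ones from the warning):

```python
from dataclasses import dataclass, field

from nerfstudio.engine.optimizers import AdamOptimizerConfig
from nerfstudio.engine.schedulers import ExponentialDecaySchedulerConfig


@dataclass
class SomeModelConfig:  # hypothetical stand-in for the offending config class
    # default_factory builds a fresh config per instance instead of sharing one
    # mutable object across every instance, which is what the warning is about.
    optimizer: AdamOptimizerConfig = field(default_factory=AdamOptimizerConfig)
    scheduler: ExponentialDecaySchedulerConfig = field(
        default_factory=ExponentialDecaySchedulerConfig
    )
```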

tancik · May 27 '23

example results from generfacto:

a high quality zoomed out photo of a palm tree

https://github.com/nerfstudio-project/nerfstudio/assets/19509183/05ffebce-a3d6-43af-9f11-e04ce2ce3237

a high quality photo of a ripe pineapple

https://github.com/nerfstudio-project/nerfstudio/assets/19509183/407ff7c8-7106-4835-acf3-c2f8188bbd1d

a high quality zoomed out photo of a light grey baby shark

https://github.com/nerfstudio-project/nerfstudio/assets/19509183/b1f5b7c5-dd96-48b4-8db0-960632e7798b
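For anyone wanting to reproduce these, the prompt is presumably passed along these lines (flag path assumed from the generfacto docs, not verified against this branch):

ns-train generfacto --pipeline.model.prompt "a high quality photo of a ripe pineapple"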

terrancewang · Jun 18 '23

@terrancewang could we have the examples above in the nerfstudio documentation?

f-dy · Jun 20 '23

> @terrancewang could we have the examples above in the nerfstudio documentation?

Yup, I'll add these vids to the docs soon!

terrancewang · Jun 21 '23