Christian Laforte
Hi, it seems we're using FP32 everywhere in `nerf_volume_renderer`. Since raymarching appears to be the primary bottleneck (both for speed and VRAM usage), it sounds like we...
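A minimal sketch of how the raymarching forward pass could be run under mixed precision, assuming a PyTorch renderer; the wrapper name and its arguments are illustrative, not the actual threestudio API:

```python
import torch

def render_rays_mixed_precision(renderer, rays_o, rays_d, use_fp16=True):
    # Hypothetical wrapper: run raymarching under autocast so densities/colors
    # are computed in FP16 where safe, reducing VRAM and improving throughput.
    with torch.autocast(device_type="cuda", dtype=torch.float16, enabled=use_fp16):
        out = renderer(rays_o, rays_d)
    # Cast outputs back to FP32 for numerically sensitive compositing / losses.
    return {k: (v.float() if torch.is_tensor(v) else v) for k, v in out.items()}
```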
- also use venv instead of virtualenv, since venv is in the standard library
Add zero123 challenges, originally in stable-dreamfusion. Also improve the zero123 configuration to reconstruct some of the challenges. NOTE: all experiments and results were run with zero123XL, a yet-to-be-released model from...
- ... so `density_blob_scale` can be annealed over time - this might help in later phases
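A minimal sketch of what such annealing could look like, assuming a linear schedule driven by the global training step; the function name and schedule endpoints below are illustrative, not the project's actual config keys:

```python
def annealed_density_blob_scale(step, max_steps, start_scale=10.0, end_scale=0.0):
    # Linearly anneal density_blob_scale from start_scale down to end_scale
    # over the course of training.
    t = min(max(step / max_steps, 0.0), 1.0)
    return (1.0 - t) * start_scale + t * end_scale

# e.g. update the geometry's blob scale each iteration (hypothetical field name):
# geometry.cfg.density_blob_scale = annealed_density_blob_scale(global_step, 10000)
```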
- currently limited to `zero123_guidance` and `stable_diffusion_guidance`
- limits the number of batch items for which guidance is evaluated, for debugging purposes
- defaults to 4 - higher values (e.g. 12) resulted in...
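A rough sketch of what limiting guidance evaluation to the first few batch items could look like; `max_items_eval` and the guidance call signature are placeholders, not the actual implementation:

```python
def evaluate_guidance_subset(guidance, rgb, prompt_utils, max_items_eval=4, **kwargs):
    # For debugging, only run the (expensive) guidance model on the first
    # max_items_eval batch items; 0 or a negative value means "use all items".
    n = min(max_items_eval, rgb.shape[0]) if max_items_eval > 0 else rgb.shape[0]
    return guidance(rgb[:n], prompt_utils, **kwargs)
```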
The same seed seems to be used by every GPU, so multi-GPU training produces the same result as using just one GPU. Reproduction: `python launch.py --config configs/dreamfusion-if.yaml --train --gpu 0,1 system.prompt_processor.prompt="a...
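One common fix, sketched below under the assumption that `torch.distributed` (or an equivalent rank source) is available, is to offset the seed by the process rank so each GPU draws different camera/noise samples; the helper name is illustrative:

```python
import random
import numpy as np
import torch

def seed_everything_per_rank(base_seed: int) -> None:
    # Offset the base seed by the distributed rank so each GPU gets its own
    # random stream instead of duplicating exactly the same batches.
    rank = torch.distributed.get_rank() if torch.distributed.is_initialized() else 0
    seed = base_seed + rank
    random.seed(seed)
    np.random.seed(seed)
    torch.manual_seed(seed)
    torch.cuda.manual_seed_all(seed)
```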
This will track progress on reconstructing Anya. https://spy-x-family.fandom.com/wiki/Anya_Forger Disclaimers:
- these results were obtained on an A100 40GB GPU. You can try to reproduce them using `scripts/run_image_anya.sh`, but there are no guarantees...
The way we handle arguments and override defaults is messy and confusing. I think it might be cleaner to use hydra instead, similarly to https://github.com/threestudio-project/threestudio. I might do it myself...
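A minimal sketch of what a hydra-based entry point could look like; the config path, config name, and `cfg` fields here are made up for illustration and not the project's actual schema:

```python
import hydra
from omegaconf import DictConfig, OmegaConf

@hydra.main(version_base=None, config_path="configs", config_name="dreamfusion-if")
def main(cfg: DictConfig) -> None:
    # Defaults come from the YAML file; CLI overrides such as
    # `system.prompt_processor.prompt="a hamburger"` are merged in by hydra.
    print(OmegaConf.to_yaml(cfg))

if __name__ == "__main__":
    main()
```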
also:
- pass latent through `run_on_prompt`
- add more scaling options to `show_cross_attention`, etc.
- puppy image generated using Stable Diffusion 2.1 in DreamStudio
- need latest PyTorch (in requirements.txt) for...