Clay Mullis
SIREN; no transformers here
There's a large list of memory optimization techniques in the README (the front page of the documentation). I'll go get them so people don't have to go hunting if...
Something I left out of the README: you actually can pull off the `--deeper` flag on an 8 GiB card @NuclearSurvivor (that's how much the 2080 has, right?...
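For example, something like this (a hedged sketch, not a tested recipe; the prompt is made up, and the batch size is just a starting point to adjust against your card):

```
imagine "a lonely house in the woods" --deeper --batch_size=4
```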
> ok so you would use `imagine "whatever" --num_layers=24 --batch_size=1`. and ok i will try to render in 256px
>
> just to be clear this is how i would...
Also! While I said to use a batch_size of 1, that's _very_ low. I would increase it to 4 or more. One cool trick you can do to get higher...
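For example, just bumping the batch size in the command quoted above (same flags, prompt unchanged):

```
imagine "whatever" --num_layers=24 --batch_size=4
```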
He didn't explicitly mention this, but he's also using `--image_width=256` and `--gradient_accumulate_every=1`. The full command would technically be:

```
imagine "extraterrestrial beings" --batch_size=32 --num_layers=44 --epochs=8 --image_width=256 --gradient_accumulate_every=1
```

Does that...
So, if you run the `pip install gpustat; gpustat -i` stuff from earlier, what you'll find is that on Windows or Linux, your GPU is actually already using about 2...
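That is, run the commands from earlier like so (`-i` keeps gpustat refreshing, so you can watch the memory column while `imagine` runs in another terminal):

```
pip install gpustat
gpustat -i
```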
> wouldn't do that: `imagine "extraterrestrial beings" --batch_size=32 --num_layers=44 --epochs=8` didn't work for me, i got the runtime cuda out of memory error
>
> `imagine "extraterrestrial beings" --gradient_accumulate_every=16 --batch_size=8 --num_layers...
Gradient accumulation is kinda magical. It has zero impact on memory, but it will take a bit longer if you increase it.
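A hedged illustration of the trade-off, reusing only flags that already appear in this thread and assuming the usual accumulate-then-step pattern (effective batch = batch_size × gradient_accumulate_every):

```
imagine "extraterrestrial beings" --batch_size=32 --gradient_accumulate_every=1
imagine "extraterrestrial beings" --batch_size=8 --gradient_accumulate_every=4
```

Both do 32 samples' worth of work per optimizer step, but the second only holds 8 in memory at a time, and pays for it with four forward/backward passes per step instead of one.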
> Has anyone run into running out of GPU memory when running the `imagine` command? Below is the error I get.
>
> ```
> RuntimeError: CUDA out...
> ```