Dreambooth-Stable-Diffusion
CUDA out of memory
RuntimeError: CUDA out of memory. Tried to allocate 32.00 MiB (GPU 0; 23.65 GiB total capacity; 22.01 GiB already allocated; 26.44 MiB free; 22.05 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF
What can I do?
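For completeness, the mitigation the error message itself suggests can be tried before launching training. Note that 128 MiB is an arbitrary starting value, and this only mitigates allocator fragmentation; it cannot create more VRAM if memory is genuinely exhausted:

```shell
# Cap the block size the caching allocator is allowed to split (in MiB).
# Helps when "reserved memory >> allocated memory" due to fragmentation.
export PYTORCH_CUDA_ALLOC_CONF=max_split_size_mb:128
echo "$PYTORCH_CUDA_ALLOC_CONF"
```

Then launch training as usual from the same shell so the variable is inherited.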
Hello @liwei0826 could you solve it?
Fine-tuning seems to require far more memory than inference. My RTX 3090 with 24 GB of VRAM is not enough for training either. So far, only an A100 with 40 GB or an A6000 with 48 GB of VRAM can handle the fine-tuning job.
It is possible at 256x256 resolution, and it works, but with lower quality.
One thing I found to reduce memory: this code is based on Textual Inversion, and TI hard-codes gradient checkpointing off here (https://github.com/rinongal/textual_inversion/blob/main/ldm/modules/diffusionmodules/util.py#L112), because in TI the UNet is not optimized. Here, however, we do optimize the UNet, so we can turn the gradient checkpointing trick back on, as in the original SD repo (https://github.com/CompVis/stable-diffusion/blob/main/ldm/modules/diffusionmodules/util.py#L112). Gradient checkpointing defaults to True in the config (https://github.com/XavierXiao/Dreambooth-Stable-Diffusion/blob/main/configs/stable-diffusion/v1-finetune_unfrozen.yaml#L47).
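For orientation, the flag lives in the UNet section of the config. The surrounding keys below are an assumption based on typical LDM configs (check the linked yaml for the exact nesting), but the `use_checkpoint` line is the one referenced above:

```yaml
model:
  params:
    unet_config:
      params:
        use_checkpoint: True   # trade compute for memory by recomputing activations in backward
```

With checkpointing on, the UNet recomputes intermediate activations during the backward pass instead of storing them, which is the main reason the memory footprint drops.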
So, can anyone clarify what the hardware requirements are? Can we add an "I'm using _____" line to the README so users have a realistic idea of what hardware to expect?
I was originally using an A6000 with 48 GB of VRAM; after some optimization, I am sure it now works on a V100 with 32 GB. I would like to see if it works on a 3090 with 24 GB. I think we are close to 24 GB, but not quite there yet.
Any way to get this running on a GTX 1080 with 8 GB of VRAM? How do I reduce the resolution? --W and --H aren't working.
2090Ti: 256x256 resolution
RuntimeError: CUDA out of memory. Tried to allocate 20.00 MiB (GPU 0; 10.76 GiB total capacity; 3.41 GiB already allocated; 9.44 MiB free; 3.46 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF
Hey @AmitMY, how did you change the resolution? Did you just put your regularization images at 256x256, or did you change some parameter?
In the config file, I modified every 512 to 256, hoping that would do the trick. Perhaps that was wrong.
Now on a V100, with 32GB, 512x512 trains just fine.
I tried that too, but nothing seemed to change.
Still running at 512x512 even though there is nothing with 512 left in the config. I've tried changing everything to 256 and using the --W and --H flags. Changing "image_size: 32" to 16 also does nothing. Someone correct me if I'm playing with the wrong settings.
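One possible explanation (an assumption based on how LDM-style configs usually behave, not verified against this repo): the UNet's `image_size` key is not what sizes the training images, which would explain why changing it does nothing. The effective resolution typically comes from the data loader's `size` parameter, roughly:

```yaml
data:
  params:
    train:
      params:
        size: 256   # resolution images are resized to before being encoded
    validation:
      params:
        size: 256
```

If that's the case, changing only the model section leaves the data loader feeding 512x512 images regardless.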
have you tried this version? https://github.com/gammagec/Dreambooth-SD-optimized
I trained without any adjustments and I had an OOM in the sixth round.
Changing ch: 128 (4 channels × 128 = 512) to ch: 64 (4 × 64 = 256) does something, but I get a bunch of tensor size errors.
Will try that now!
@alleniver
VRAM usage only goes to about 5 GB, but I get this error:
Hey guys, I didn't want to start a new thread since this one describes my issue. I'm also running out of VRAM. I'm following the instructions the best I can, and they say I need 10 GB. I have a 12 GB 3080 Ti and am somehow running out of memory. Is my GPU just not meant for this, or is it obvious that I've done something wrong?
The Stable Diffusion weights page on Hugging Face proposes a solution for reducing GPU RAM, but I can't find the line of code to edit after doing a Ctrl+F for "StableDiffusionPipeline" in every .py script I could find.
I am on the latest Dreambooth version, downloaded 3 days ago. Any ideas? I'm pretty stuck with my limited knowledge.
Hi there,
I'm getting the same OOM error running the code on an AWS p3.8xlarge instance (4×16 GB). I guess I get the OOM because the model doesn't fit on one GPU?
Training isn't optimised for model parallelism. Multiple GPUs can only be used for data parallelism.
Is this correct?
+1 to @webeng's question. I am running it on a g5.12xlarge with 4 GPUs (96 GB total), and I am getting the
RuntimeError: CUDA out of memory. Tried to allocate 20.00 MiB (GPU 0; 22.20 GiB total capacity; 3.25 GiB already allocated; 23.12 MiB free; 3.30 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF
message. My guess is it's using just one GPU, as 22.20 GiB is roughly 1/4 of the 96 GB.
It would be nice to get a confirmation on the above question to be sure.
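To make the data- vs. model-parallelism point concrete, here is a toy back-of-the-envelope sketch (all sizes are made-up assumptions, not measurements): under data parallelism, every GPU holds a full replica of the weights and optimizer state, and only the batch is sharded, so per-GPU memory barely drops as GPUs are added.

```python
def per_gpu_memory_gb(model_gb, batch_size, num_gpus, act_gb_per_sample=0.5):
    """Rough per-GPU memory under data parallelism (toy model).

    Each GPU keeps a FULL replica of the weights/optimizer state
    (model_gb) plus activations for its shard of the batch.
    """
    shard = batch_size // num_gpus
    return model_gb + shard * act_gb_per_sample

# Assuming a hypothetical 22 GB replica: going from 1 GPU to 4 barely
# helps, because the 22 GB replica is never divided across GPUs.
print(per_gpu_memory_gb(22, 8, 1))  # 26.0
print(per_gpu_memory_gb(22, 8, 4))  # 23.0
```

This matches the symptom above: with 4 × 24 GB GPUs, each individual GPU still has to fit the entire model, so a model that OOMs on one 24 GB card OOMs on four of them too. Splitting the model itself across devices would require model (or pipeline) parallelism, which this training script doesn't implement.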