
CUDA out of memory

Open liwei0826 opened this issue 2 years ago • 19 comments

RuntimeError: CUDA out of memory. Tried to allocate 32.00 MiB (GPU 0; 23.65 GiB total capacity; 22.01 GiB already allocated; 26.44 MiB free; 22.05 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF

What can I do?
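For reference, the workaround the error message itself suggests can be applied like this. The environment variable must be set before PyTorch initializes CUDA, so it has to happen before `import torch` (or be exported in the shell); the 128 MiB value is only an illustrative starting point to tune, not a recommendation.

```python
# Fragmentation workaround suggested by the OOM message: cap the size of
# split blocks in PyTorch's caching allocator. Set this BEFORE importing
# torch (or export it in the shell before launching training).
# max_split_size_mb:128 is an illustrative starting value, not a tuned one.
import os

os.environ["PYTORCH_CUDA_ALLOC_CONF"] = "max_split_size_mb:128"

# import torch  # import torch only AFTER the variable is set
print(os.environ["PYTORCH_CUDA_ALLOC_CONF"])
```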

liwei0826 avatar Sep 15 '22 02:09 liwei0826

Hello @liwei0826 could you solve it?

dadobtx avatar Sep 17 '22 08:09 dadobtx

Seems like fine-tuning requires far more memory than inference. My RTX 3090 with 24 GB VRAM is not enough for training either. So far, only an A100 with 40 GB or an A6000 with 48 GB VRAM can do the fine-tuning job.

keithkctse avatar Sep 19 '22 02:09 keithkctse

It is possible at 256x256 resolution, and it works, but with lower quality. [image: "painting of a sks fighter by greg rutkowski"]

attashe avatar Sep 19 '22 13:09 attashe

One thing I found to reduce memory: this code is based on Textual Inversion, and TI does something here (https://github.com/rinongal/textual_inversion/blob/main/ldm/modules/diffusionmodules/util.py#L112) that disables gradient checkpointing in a hard-coded way. That is because in TI the UNet is not optimized. Here, however, we do optimize the UNet, so we can turn the gradient-checkpointing trick back on, as in the original SD repo (https://github.com/CompVis/stable-diffusion/blob/main/ldm/modules/diffusionmodules/util.py#L112). Gradient checkpointing defaults to True in the config (https://github.com/XavierXiao/Dreambooth-Stable-Diffusion/blob/main/configs/stable-diffusion/v1-finetune_unfrozen.yaml#L47).
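For anyone hunting for the knob: the relevant part of the config looks roughly like this (a sketch of the standard LDM config layout, from memory; the exact nesting in this repo may differ):

```yaml
model:
  params:
    unet_config:
      target: ldm.modules.diffusionmodules.openaimodel.UNetModel
      params:
        use_checkpoint: True  # gradient checkpointing: recompute activations
                              # during backward to trade compute for memory
```

The setting only takes effect once the hard-coded disable in `util.py` (inherited from the Textual Inversion code) is removed, as described above.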

XavierXiao avatar Sep 21 '22 04:09 XavierXiao

So, can anyone clarify what the hardware requirements are? Can we add an "I'm using _____" line to the README so users have a realistic idea of what hardware to expect?

codefaux avatar Sep 25 '22 21:09 codefaux

I was originally using an A6000 with 48 GB of VRAM; after some optimization I am sure it now works on a V100 with 32 GB. I would like to see if it works on a 3090 with 24 GB. I think we are close to 24 GB, but not there yet.

XavierXiao avatar Sep 25 '22 21:09 XavierXiao

Any way to get this running on a GTX 1080 with 8 GB of VRAM? How do I reduce the resolution? --W and --H are not working.

ValtteriJokisaari avatar Sep 26 '22 11:09 ValtteriJokisaari

2090Ti: 256x256 resolution

RuntimeError: CUDA out of memory. Tried to allocate 20.00 MiB (GPU 0; 10.76 GiB total capacity; 3.41 GiB already allocated; 9.44 MiB free; 3.46 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF

AmitMY avatar Sep 26 '22 11:09 AmitMY

Hey! @AmitMY How did you change the resolution? Did you just put your regularization images at 256x256, or did you change some parameter?

ValtteriJokisaari avatar Sep 26 '22 12:09 ValtteriJokisaari

In the config file, I modified every 512 to 256, hoping that would do the trick. Perhaps that was wrong.

Now on a V100, with 32GB, 512x512 trains just fine.

AmitMY avatar Sep 26 '22 12:09 AmitMY

I tried that too, but nothing seemed to change.

ValtteriJokisaari avatar Sep 26 '22 12:09 ValtteriJokisaari

[screenshot] Still running at 512x512 even though there is nothing with 512 left in the config. I've tried changing everything to 256 and using the --W and --H flags. Changing "image_size: 32" to 16 also does nothing. Someone correct me if I'm playing with the wrong settings.
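For anyone else searching: --W and --H are flags of the sampling scripts (e.g. txt2img), not the trainer, which would explain why they have no effect here. The keys that usually govern training resolution in these LDM-style configs look roughly like this (a sketch of the standard layout; key names may differ in this repo):

```yaml
data:
  params:
    train:
      params:
        size: 256        # pixel resolution the dataloader resizes/crops to
    validation:
      params:
        size: 256
model:
  params:
    image_size: 32       # LATENT-space size = pixel size / 8 (256 -> 32)
```

Note that `image_size` is in latent space (the VAE downsamples by 8x), which is why it is much smaller than the pixel resolution.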

ValtteriJokisaari avatar Sep 26 '22 12:09 ValtteriJokisaari

Have you tried this version? https://github.com/gammagec/Dreambooth-SD-optimized

I trained without any adjustments and I had an OOM in the sixth round.

alleniver avatar Sep 26 '22 13:09 alleniver

[screenshot] Changing ch: 128 (x4 channels = 512) to ch: 64 (4x64 = 256) does something, but I get a bunch of tensor-size errors. [screenshot]

ValtteriJokisaari avatar Sep 26 '22 13:09 ValtteriJokisaari

Will try that now!

ValtteriJokisaari avatar Sep 26 '22 13:09 ValtteriJokisaari

@alleniver VRAM usage only goes to about 5 GB, but I get this error: [screenshot]

ValtteriJokisaari avatar Sep 26 '22 13:09 ValtteriJokisaari

Hey guys, I didn't want to start a new thread since this one describes my issue. I'm also running out of VRAM. I'm following the instructions as best I can, and they say I need 10 GB. I have a 12 GB 3080 Ti and am somehow running out of memory. Is my GPU just not meant for this, or is it obvious that I've done something wrong?

The stable-diffusion weights page on Hugging Face proposes a solution for reducing GPU RAM, but I can't find the line of code to edit after doing a Ctrl+F for "StableDiffusionPipeline" in every .py script I could find. [screenshot]

I am on the latest dreambooth version, downloaded 3 days ago. Any ideas? I'm pretty stuck with my limited knowledge.
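The snippet Ctrl+F can't find targets the diffusers `StableDiffusionPipeline`, and this repo is built on the CompVis/Textual-Inversion LDM codebase rather than diffusers, so that line simply doesn't exist here. The reason the Hugging Face tip (loading in fp16) saves memory is plain arithmetic; the parameter count below is an illustrative approximation for the SD v1 UNet, not a measured value.

```python
# Why loading weights in fp16 saves memory: each parameter takes
# 2 bytes instead of 4, halving the weight footprint.
# UNET_PARAMS is an illustrative approximation (~860M), not measured.

UNET_PARAMS = 860_000_000

def weight_gib(n_params: int, bytes_per_param: int) -> float:
    """Memory footprint of the weights alone, in GiB."""
    return n_params * bytes_per_param / 2**30

fp32 = weight_gib(UNET_PARAMS, 4)  # full precision
fp16 = weight_gib(UNET_PARAMS, 2)  # half precision
print(f"fp32 weights: ~{fp32:.1f} GiB, fp16 weights: ~{fp16:.1f} GiB")
```

Note this only helps at inference; fine-tuning additionally needs gradients and optimizer state, which is why training memory is so much higher.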

Vaniloth avatar Sep 29 '22 03:09 Vaniloth

Hi there,

I'm getting the same OOM error, running the code on an AWS p3.8xlarge instance (4x16 GB). I guess I get the OOM because the model doesn't fit on one GPU?

Training isn't optimised for model parallelism. Multiple GPUs can only be used for data parallelism.

Is this correct?
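That is how DDP-style data parallelism behaves: every replica holds the full model, its gradients, and its optimizer state, and only the batch is split across GPUs, so per-GPU memory barely drops as you add GPUs. A rough back-of-envelope sketch, using illustrative (not measured) numbers for the SD v1 UNet:

```python
# Why data parallelism doesn't reduce per-GPU memory: each DDP replica
# holds the FULL model, gradients, and optimizer state; only the batch
# is sharded. Numbers are illustrative assumptions, not measurements.

UNET_PARAMS = 860_000_000   # ~860M params for the SD v1 UNet (approx.)
BYTES_FP32 = 4

def per_gpu_model_memory_gib(n_params: int, n_gpus: int) -> float:
    """Weights + grads (fp32) + Adam moments (2 x fp32), per GPU.

    n_gpus is deliberately unused: DDP replicates all of this on every
    GPU, which is exactly the point being illustrated.
    """
    weights = n_params * BYTES_FP32
    grads = n_params * BYTES_FP32
    adam_moments = 2 * n_params * BYTES_FP32
    return (weights + grads + adam_moments) / 2**30

for gpus in (1, 4):
    gib = per_gpu_model_memory_gib(UNET_PARAMS, gpus)
    print(f"{gpus} GPU(s): ~{gib:.1f} GiB per GPU, before activations")
```

Fitting a model that is too big for one card would need model/tensor parallelism or sharded optimizers (e.g. ZeRO/FSDP), which this training script does not implement.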

webeng avatar Nov 18 '22 16:11 webeng

+1 to @webeng's question. I am running it on a g5.12xlarge with 4 GPUs and 96 GB of VRAM in total, and I am getting the

RuntimeError: CUDA out of memory. Tried to allocate 20.00 MiB (GPU 0; 22.20 GiB total capacity; 3.25 GiB already allocated; 23.12 MiB free; 3.30 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation.  See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF

message. My guess is it's using just one GPU, as ~22 GB is roughly 1/4 of the 96 GB.

It would be nice to get a confirmation on the above question to be sure.

schematical avatar Mar 16 '23 18:03 schematical