
CUDA out of memory. Under the same conditions everything worked fine before git pull

Open Coder-Sakura opened this issue 2 years ago • 5 comments

Describe the bug Getting this error after a git pull. Before this pull, it worked fine under the same conditions. Now even 64x64 won't work.

RuntimeError: CUDA out of memory. Tried to allocate 20.00 MiB (GPU 0; 6.00 GiB total capacity; 5.25 GiB already allocated; 0 bytes free; 5.30 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF
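The error text itself points at `max_split_size_mb` in `PYTORCH_CUDA_ALLOC_CONF`. A minimal sketch of setting it before launching (the value 128 is an arbitrary starting point for illustration, not a value recommended anywhere in this thread):

```shell
# Tell the PyTorch CUDA caching allocator to cap split blocks at 128 MiB,
# which can reduce fragmentation-related OOM errors.
# (On Windows cmd, the equivalent is:
#   set PYTORCH_CUDA_ALLOC_CONF=max_split_size_mb:128)
export PYTORCH_CUDA_ALLOC_CONF=max_split_size_mb:128

# Launch the webui afterwards from the same shell so the variable is
# inherited, e.g. ./webui.sh (or webui-user.bat on Windows).
echo "$PYTORCH_CUDA_ALLOC_CONF"
```

The variable must be set in the environment the webui process starts from, or it has no effect.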

Card is an RTX 2060, running with the command-line arguments --precision full --no-half --medvram
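For reference, on Windows these flags normally go in `webui-user.bat`, which passes them through to the launcher. A sketch of the relevant line (other settings in that file omitted):

```bat
rem webui-user.bat -- flags passed through to the webui launcher
set COMMANDLINE_ARGS=--precision full --no-half --medvram
```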

Expected behavior It was working perfectly before the latest commit.

Screenshots If applicable, add screenshots to help explain your problem.

Desktop (please complete the following information):

  • OS: Windows 10, 64-bit
  • GPU: RTX 2060

Additional context I forgot which commit I'm on locally; the last git pull was about 2 days ago. The latest version on GitHub is now (c1093b8).

Coder-Sakura avatar Oct 18 '22 19:10 Coder-Sakura

What should I do, it's not working now😵‍💫

Coder-Sakura avatar Oct 18 '22 20:10 Coder-Sakura

> What should I do, it's not working now 😵‍💫

Wait for updates, or roll back to an older commit.

YudhaDev avatar Oct 18 '22 20:10 YudhaDev
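The rollback suggested above amounts to finding a commit from before the breakage and checking it out. A self-contained sketch in a throwaway repo (the commit messages are made up for the demo; in the actual webui folder you would only run the `git log` and `git checkout` steps, against a real commit hash):

```shell
# Throwaway repo so the rollback workflow can be demonstrated safely.
tmp=$(mktemp -d)
cd "$tmp"
git init -q
git -c user.email=a@b -c user.name=demo commit -q --allow-empty -m "older, working commit"
good=$(git rev-parse HEAD)
git -c user.email=a@b -c user.name=demo commit -q --allow-empty -m "newer commit that broke things"

# Step 1: list recent commits to find one from before the breakage.
git log --oneline -n 5

# Step 2: check out the older commit. This leaves a detached HEAD;
# 'git checkout -' returns to the branch you came from.
git checkout -q "$good"
git rev-parse HEAD
```

If local files were modified, `git stash` before the checkout avoids conflicts.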

This is a great opportunity to learn a bit of Python and how repos work.

Look through the recent commits and see if you can find anything related to your error; then you can revert the change, etc.

OR, ditch the --precision full --no-half settings for the time being and try again after a day or so. If the error is due to a bug, it'll likely be fixed fairly quickly. The devs of this repo are absolute guns at both breaking shit and quickly fixing it. It's what makes this repo so cutting edge.

Tollanador avatar Oct 19 '22 00:10 Tollanador

> This is a great opportunity to learn a bit of Python and how repos work.
>
> Look through the recent commits and see if you can find anything related to your error; then you can revert the change, etc.
>
> OR, ditch the --precision full --no-half settings for the time being and try again after a day or so. If the error is due to a bug, it'll likely be fixed fairly quickly.

Thanks for your reply and suggestions.

I tried removing --precision full --no-half, leaving only --medvram in the command-line arguments.

Then I get this... (now at c1093b8051606f0ac90506b7114c4b55d0447c70) [screenshot]

Coder-Sakura avatar Oct 19 '22 03:10 Coder-Sakura

I don't know if it was caused by models\Stable-diffusion\animefull-final-pruned.yaml.

I changed use_ema from True to False and it works fine now.
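For anyone hitting the same thing, the change described above amounts to one line in the model's .yaml config. A sketch of the relevant fragment (the field path is assumed from standard Stable Diffusion v1 configs, where use_ema sits under model.params; the rest of the file is omitted):

```yaml
model:
  params:
    use_ema: False   # was True; disabling the EMA weights avoided the OOM here
```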

Coder-Sakura avatar Oct 19 '22 17:10 Coder-Sakura