InvokeAI
[bug]: RuntimeError: CUDA out of memory
Is there an existing issue for this?
- [X] I have searched the existing issues
OS
Windows
GPU
cuda
VRAM
4
What happened?
This is what it shows when initialization of InvokeAI with python scripts/invoke.py finishes:
Model loaded in 14.76s
Max VRAM used to load the model: 3.38G
Current VRAM usage: 3.38G
Current embedding manager terms: *
Setting Sampler to k_heun
When I enter a prompt, it starts generating, but when it reaches 100%, instead of finishing I am presented with:
RuntimeError: CUDA out of memory. Tried to allocate 128.00 MiB (GPU 0; 4.00 GiB total capacity; 3.34 GiB already allocated; 0 bytes free; 3.36 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF
Could not generate image.
Usage stats:
0 image(s) generated in 21.55s
Max VRAM used for this generation: 3.67G
Current VRAM utilization: 3.38G
Max VRAM used since script start: 3.67G
Outputs:
InvokeAI used to work on my laptop before I updated it, and now I am not sure whether it can still produce images with my GPU. My GPU is an NVIDIA GeForce RTX 3050 with 4 GB of VRAM, CUDA version 11.8, and PyTorch 11.2. Does anyone know if this can be fixed somehow?
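The traceback itself suggests one thing worth trying first: capping how large the blocks in PyTorch's caching allocator can get. A minimal sketch of how that might look from a shell, assuming you launch InvokeAI the same way as above (the 128 MiB value is only an illustration, not a tuned recommendation):

```bash
# Ask PyTorch's CUDA allocator to split blocks larger than 128 MiB,
# which can reduce fragmentation on small-VRAM cards
export PYTORCH_CUDA_ALLOC_CONF=max_split_size_mb:128

# On Windows cmd, use "set" instead of "export":
#   set PYTORCH_CUDA_ALLOC_CONF=max_split_size_mb:128

python scripts/invoke.py
```

On a 4 GB card this may only delay the error rather than remove it, since the model alone already occupies most of the available VRAM.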
Screenshots
No response
Additional context
No response
Contact Details
No response
In the latest version we activate the NSFW checker by default, and this eats up an additional ~0.5 GB of memory. If you've got the NSFW checker on, you can try turning it off. Find the .invokeai file located in your user directory and change the line that reads --nsfw_checker to --no-nsfw_checker.
This ought to fix the problem. Please add a comment to let us know.
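For anyone who prefers the terminal, one possible way to make that edit on Linux, assuming the init file is at ~/.invokeai (on Windows, open the file under your user profile folder in a text editor instead):

```bash
# Flip the NSFW checker flag from --nsfw_checker to --no-nsfw_checker
# in the InvokeAI init file (path assumed to be ~/.invokeai)
sed -i 's/^--nsfw_checker/--no-nsfw_checker/' ~/.invokeai
```

Note that `sed -i` edits the file in place, so you may want to back it up first.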
@lstein Thank you for your response!
Could you please clarify which .invokeai file you mean? This is what my \Users directory looks like:
FilipTrichkov, you are in the wrong folder. If you have not altered any paths, go to drive C:, then Users, then the folder with your user name on it. In that folder you should see the .invokeai file (e.g., C:\Users\<your user name>\.invokeai); right-click on it and choose to edit it with Notepad or a similar editor.
@Gobiff87 Where would this file be on Ubuntu?
@Gobiff87 I found it; it was a hidden file in my home directory.
Unfortunately, the NSFW filter was already off, but I am still getting:
CUDA out of memory. Tried to allocate 20.00 MiB (GPU 0; 1.96 GiB total capacity; 1.58 GiB already allocated; 2.88 MiB free; 1.63 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation.
Does this mean my computer just can't handle InvokeAI? I have an Asus Zenbook (Comet Lake GT2 GPU).
I've run into the same issue with the NSFW checker off; same error info as mentioned above.
I am facing the same issue while switching the model to inpainting-1.5. Do we need to change some configuration to fix it? I am using Miniconda, as the default conda gave me an error during installation.
From the error logs it seems the available memory is 0 because it has already been consumed by Stable Diffusion. Can someone confirm whether the Stable Diffusion model takes up all the memory and doesn't release it when the model is switched, or whether it is PyTorch that is consuming all the memory? (A rough way to check is sketched below.)
Apologies if I asked a dumb question, I am new here.
Attaching the error logs: invokeAi error log.txt
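One rough way to check whether VRAM is actually released when InvokeAI switches models is to watch the card from outside the process while you change models; a minimal sketch, assuming an NVIDIA GPU with the driver utilities installed:

```bash
# Refresh the nvidia-smi report every second; keep an eye on the
# "Memory-Usage" column while switching models in InvokeAI
watch -n 1 nvidia-smi
```

If the usage stays pinned near the card's limit after a model switch, the old weights are most likely still resident when the new ones are loaded.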
I have run into the same problem. While I like InvokeAI's UI a lot, being unable to load the inpainting-1.5 model on my laptop with an RTX 2060 has caused me to move to the OpenOutpainting extension with Automatic1111. The same model works there without a hitch, so I assume there is some memory-management issue in InvokeAI.
How would I do this for a Docker install?
I get this problem specifically with stable-diffusion-2.1-768. The other models run fine. I have an RTX 3060, which has 12 GB. Maybe it's not a good enough card?
On my M1, the generation of images with stable-diffusion-2.1-768 takes about 5 times as long as with stable-diffusion-2.1-base.
Is the problem still persisting, or could this issue be closed?