InvokeAI
[bug]: CUDA out of memory while loading the inpainting model
Is there an existing issue for this?
- [X] I have searched the existing issues
OS
Windows
GPU
I don't know
VRAM
No response
What happened?
I chose the inpainting v5 model and the loading failed.
Screenshots
Additional context
No response
Contact Details
I got the exact same error and in my bug report I got this response: "In the latest version we activate the NSFW checker by default and this eats up an additional ~0.5GB of memory. If you've got the NSFW checker on, you can try turning it off. Find the .invokeai file located in your \User directory and change the line that reads --nsfw_checker to --no-nsfw_checker." But I still don't know how to find the file in the directory they were talking about. Do you know how to fix it?
I turned it off. You can find it in the "C:/Users/username/.invokeai" file. But I'm still getting this error.
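For anyone else hunting for it: `.invokeai` is a hidden plain-text file of command-line flags, one per line. Assuming the flag-style format described in the response quoted above, the edit would look roughly like this (a sketch, not a full config):

```
# C:\Users\<username>\.invokeai
# change the line that reads:
--nsfw_checker
# to:
--no-nsfw_checker
```

You may need to enable "Hidden items" in File Explorer's View menu to see the file.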
it would be helpful if you could post what GPU your system has. Several ways to do that:
- Open the “Run” dialog box by holding the Windows key and pressing R. Type devmgmt.msc, click OK, then click on "Display adapters".
- Open Task Manager: right-click the Windows taskbar and click "Task Manager", go to the second tab (Performance) and look for the "GPU 0" entry. Below "GPU 0" it should say which GPU you have.
With that information we can tell you whether you have a fixable problem or simply a system too weak to run InvokeAI.
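For completeness, the same lookup can be scripted instead of clicked through. A small sketch using Windows' built-in `wmic` tool (the function name and structure here are my own; it returns an empty list on non-Windows systems):

```python
import platform
import subprocess

def list_gpus():
    """Return the names of installed display adapters on Windows.

    Uses the stock `wmic` tool; on non-Windows systems this simply
    returns an empty list rather than failing.
    """
    if platform.system() != "Windows":
        return []
    out = subprocess.run(
        ["wmic", "path", "win32_VideoController", "get", "name"],
        capture_output=True, text=True,
    ).stdout
    # First line is the "Name" header; the rest are adapter names.
    return [line.strip() for line in out.splitlines()[1:] if line.strip()]

print(list_gpus())
```

On a dual-GPU laptop like the one reported below, this would list both the integrated Intel adapter and the discrete NVIDIA card.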
GPU 0: Intel(R) UHD Graphics 630
GPU 1: NVIDIA GeForce GTX 1050 Ti
Maybe I should specify to use GPU 1? How can I do that?
I've got several out-of-memory errors as well using inpainting (6 GB here). As a strange workaround, I found that switching the model to a different one (like v1.5) and back again to inpainting solves the out-of-memory error for many runs... You would think that more models means more RAM, but maybe the cache kicks in and some memory is freed/optimized in the process. (And before you ask: NSFW is on, and it should be on by default; I don't like censorship by default!)
@BartaG512 the command to do that is: `set CUDA_VISIBLE_DEVICES=1`. I was using that in Automatic1111; however, in InvokeAI you are supposed to put it in C:/Users/username/.invokeai. I tried that and only get errors; it's not working for me. Can someone help, please? I also tried a simpler command, `COMMANDLINE_ARGS=--opt-split-attention --medvram`, and could not get that to work either.
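One detail that trips people up with `CUDA_VISIBLE_DEVICES`: CUDA reads it once, when the runtime initializes, so it has to be set in the environment before any CUDA-using library loads. A minimal sketch of a wrapper script (hypothetical, not part of InvokeAI) illustrating the ordering:

```python
import os

# CUDA enumerates visible devices once, at initialization, so this must be
# set BEFORE torch (or any other CUDA-using library) is imported.
os.environ["CUDA_VISIBLE_DEVICES"] = "1"  # expose only device 1 (the GTX 1050 Ti)

# import torch  # imported *after* the variable is set
# From torch's point of view the 1050 Ti would now appear as device 0,
# and the integrated Intel GPU would not be visible at all.
```

This is also why putting `set CUDA_VISIBLE_DEVICES=1` inside `.invokeai` may fail: that file holds InvokeAI's own flags, not shell commands, so the variable is better set in the terminal (or launch script) before starting InvokeAI.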
Did you try messing with the floating-point flags? I believe the 10-series cards had some special flag you needed to add.
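If it is a precision issue, the switch in question is (I believe) InvokeAI's `--precision` option; forcing full precision in the same `.invokeai` file mentioned above would look roughly like this (note that float32 roughly doubles VRAM use, so on a 4 GB card it may make the out-of-memory error worse, not better):

```
--precision float32
```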
I got the exact same error too; with Automatic1111 everything is OK.
I have the same error: Windows 10, old GTX 960 with 4 GB of GPU RAM.
Same issue here.
Fresh install from today, using all defaults, with an RTX3060/12GB.