[bug]: RuntimeError: CUDA out of memory. Tried to allocate 20.00 MiB (GPU 0; 8.00 GiB total capacity; 802.50 KiB already allocated; 6.59 GiB free; 2.00 MiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF

riccardomorabito opened this issue 2 years ago · 1 comment

Is there an existing issue for this?

  • [X] I have searched the existing issues

OS

Windows

GPU

cuda

VRAM

8GB

What happened?

```
Loading inpainting-1.5 from D:\StableDeffusion\InvokeAI - Out\invokeai\models\ldm\stable-diffusion-v1\sd-v1-5-inpainting.ckpt
   | LatentInpaintDiffusion: Running in eps-prediction mode
   | DiffusionWrapper has 859.54 M params.
** model inpainting-1.5 could not be loaded: [enforce fail at ..\c10\core\impl\alloc_cpu.cpp:81] data. DefaultCPUAllocator: not enough memory: you tried to allocate 13107200 bytes.
Traceback (most recent call last):
  File "D:\StableDeffusion\InvokeAI - Out\invokeai.venv\lib\site-packages\ldm\invoke\model_cache.py", line 80, in get_model
    requested_model, width, height, hash = self._load_model(model_name)
  File "D:\StableDeffusion\InvokeAI - Out\invokeai.venv\lib\site-packages\ldm\invoke\model_cache.py", line 233, in _load_model
    model = instantiate_from_config(omega_config.model)
  File "D:\StableDeffusion\InvokeAI - Out\invokeai.venv\lib\site-packages\ldm\util.py", line 90, in instantiate_from_config
    return get_obj_from_str(config['target'])(
  File "D:\StableDeffusion\InvokeAI - Out\invokeai.venv\lib\site-packages\ldm\models\diffusion\ddpm.py", line 2219, in __init__
    super().__init__(*args, **kwargs)
  File "D:\StableDeffusion\InvokeAI - Out\invokeai.venv\lib\site-packages\ldm\models\diffusion\ddpm.py", line 642, in __init__
    super().__init__(conditioning_key=conditioning_key, *args, **kwargs)
  File "D:\StableDeffusion\InvokeAI - Out\invokeai.venv\lib\site-packages\ldm\models\diffusion\ddpm.py", line 123, in __init__
    self.model_ema = LitEma(self.model)
  File "D:\StableDeffusion\InvokeAI - Out\invokeai.venv\lib\site-packages\ldm\modules\ema.py", line 25, in __init__
    self.register_buffer(s_name, p.clone().detach().data)
RuntimeError: [enforce fail at ..\c10\core\impl\alloc_cpu.cpp:81] data. DefaultCPUAllocator: not enough memory: you tried to allocate 13107200 bytes.
```
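Note that this first failure comes from DefaultCPUAllocator, i.e. ordinary system RAM rather than VRAM, and the allocation that failed is tiny (a quick sanity check):

```python
# The failed CPU-side allocation from the log above, converted to MiB.
print(13107200 / 2**20)  # 12.5
```

A machine that cannot find ~12.5 MiB of free RAM is effectively out of system memory, independent of the GPU.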

** restoring stable-diffusion-1.5

Retrieving model stable-diffusion-1.5 from system RAM cache

```
Traceback (most recent call last):
  File "D:\StableDeffusion\InvokeAI - Out\invokeai.venv\lib\site-packages\ldm\invoke\model_cache.py", line 80, in get_model
    requested_model, width, height, hash = self._load_model(model_name)
  File "D:\StableDeffusion\InvokeAI - Out\invokeai.venv\lib\site-packages\ldm\invoke\model_cache.py", line 233, in _load_model
    model = instantiate_from_config(omega_config.model)
  File "D:\StableDeffusion\InvokeAI - Out\invokeai.venv\lib\site-packages\ldm\util.py", line 90, in instantiate_from_config
    return get_obj_from_str(config['target'])(
  File "D:\StableDeffusion\InvokeAI - Out\invokeai.venv\lib\site-packages\ldm\models\diffusion\ddpm.py", line 2219, in __init__
    super().__init__(*args, **kwargs)
  File "D:\StableDeffusion\InvokeAI - Out\invokeai.venv\lib\site-packages\ldm\models\diffusion\ddpm.py", line 642, in __init__
    super().__init__(conditioning_key=conditioning_key, *args, **kwargs)
  File "D:\StableDeffusion\InvokeAI - Out\invokeai.venv\lib\site-packages\ldm\models\diffusion\ddpm.py", line 123, in __init__
    self.model_ema = LitEma(self.model)
  File "D:\StableDeffusion\InvokeAI - Out\invokeai.venv\lib\site-packages\ldm\modules\ema.py", line 25, in __init__
    self.register_buffer(s_name, p.clone().detach().data)
RuntimeError: [enforce fail at ..\c10\core\impl\alloc_cpu.cpp:81] data. DefaultCPUAllocator: not enough memory: you tried to allocate 13107200 bytes.
```

During handling of the above exception, another exception occurred:

```
Traceback (most recent call last):
  File "D:\StableDeffusion\InvokeAI - Out\invokeai.venv\lib\site-packages\backend\invoke_ai_web_server.py", line 304, in handle_set_model
    model = self.generate.set_model(model_name)
  File "D:\StableDeffusion\InvokeAI - Out\invokeai.venv\lib\site-packages\ldm\generate.py", line 849, in set_model
    model_data = cache.get_model(model_name)
  File "D:\StableDeffusion\InvokeAI - Out\invokeai.venv\lib\site-packages\ldm\invoke\model_cache.py", line 93, in get_model
    self.get_model(self.current_model)
  File "D:\StableDeffusion\InvokeAI - Out\invokeai.venv\lib\site-packages\ldm\invoke\model_cache.py", line 73, in get_model
    self.models[model_name]['model'] = self._model_from_cpu(requested_model)
  File "D:\StableDeffusion\InvokeAI - Out\invokeai.venv\lib\site-packages\ldm\invoke\model_cache.py", line 371, in _model_from_cpu
    model.to(self.device)
  File "D:\StableDeffusion\InvokeAI - Out\invokeai.venv\lib\site-packages\pytorch_lightning\core\mixins\device_dtype_mixin.py", line 113, in to
    return super().to(*args, **kwargs)
  File "D:\StableDeffusion\InvokeAI - Out\invokeai.venv\lib\site-packages\torch\nn\modules\module.py", line 927, in to
    return self._apply(convert)
  File "D:\StableDeffusion\InvokeAI - Out\invokeai.venv\lib\site-packages\torch\nn\modules\module.py", line 579, in _apply
    module._apply(fn)
  File "D:\StableDeffusion\InvokeAI - Out\invokeai.venv\lib\site-packages\torch\nn\modules\module.py", line 579, in _apply
    module._apply(fn)
  File "D:\StableDeffusion\InvokeAI - Out\invokeai.venv\lib\site-packages\torch\nn\modules\module.py", line 579, in _apply
    module._apply(fn)
  [Previous line repeated 1 more time]
  File "D:\StableDeffusion\InvokeAI - Out\invokeai.venv\lib\site-packages\torch\nn\modules\module.py", line 602, in _apply
    param_applied = fn(param)
  File "D:\StableDeffusion\InvokeAI - Out\invokeai.venv\lib\site-packages\torch\nn\modules\module.py", line 925, in convert
    return t.to(device, dtype if t.is_floating_point() or t.is_complex() else None, non_blocking)
RuntimeError: CUDA out of memory. Tried to allocate 20.00 MiB (GPU 0; 8.00 GiB total capacity; 802.50 KiB already allocated; 6.59 GiB free; 2.00 MiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF
```
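The error text itself suggests setting max_split_size_mb through PYTORCH_CUDA_ALLOC_CONF. A minimal sketch of how that can be done, assuming it runs before PyTorch makes its first CUDA allocation (the value 128 is only an illustrative starting point, not a recommendation from this thread):

```python
import os

# PYTORCH_CUDA_ALLOC_CONF is read when torch's caching allocator starts up,
# so it must be set before the first CUDA allocation happens.
os.environ["PYTORCH_CUDA_ALLOC_CONF"] = "max_split_size_mb:128"

import torch

print(torch.cuda.is_available())
```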

GPU: MSI 3060 Ti

I can't change the model after installing InvokeAI.

Screenshots

No response

Additional context

No response

Contact Details

No response

riccardomorabito · Dec 30 '22 20:12

A 3060 Ti should easily be able to handle stable-diffusion-1.5.

I think you have some other software using GPU memory.
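One quick way to verify that (a minimal sketch, assuming a PyTorch recent enough to expose torch.cuda.mem_get_info):

```python
import torch

# Free vs. total memory on GPU 0, straight from the driver (cudaMemGetInfo).
# If "free" is far below 8 GiB before InvokeAI loads anything, another
# process is already holding VRAM.
free, total = torch.cuda.mem_get_info(0)
print(f"free: {free / 2**30:.2f} GiB of {total / 2**30:.2f} GiB")
```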

clears throat

Have you tried turning your computer off and on again?

Ghost---Shadow · Dec 31 '22 15:12

Would it be possible to integrate the kind of model/memory management that Automatic1111 has? It seems to be able to handle inpainting models with its medium-memory settings.

Petri3D · Jan 05 '23 20:01

Apparently it worked: I closed some software running in the background and then rebooted. Thanks a lot!

riccardomorabito · Jan 10 '23 17:01