[Bug]: OK, this happened after a while..
Is there an existing issue for this?
- [X] I have searched the existing issues and checked the recent builds/commits
What happened?
venv "C:\Stable-Diffusion\stable-diffusion-webui\venv\Scripts\Python.exe" Python 3.10.6 (tags/v3.10.6:9c7b4bd, Aug 1 2022, 21:53:49) [MSC v.1932 64 bit (AMD64)] Commit hash: e407d1af897a7896d8c81e32dc86e7eb753ce207 Installing requirements for Web UI Launching Web UI with arguments: --lowram No module 'xformers'. Proceeding without it. LatentDiffusion: Running in eps-prediction mode DiffusionWrapper has 859.52 M params. Loading weights [543bcbc212] from C:\Stable-Diffusion\stable-diffusion-webui\models\Stable-diffusion\model.ckpt Error verifying pickled file from C:\Stable-Diffusion\stable-diffusion-webui\models\Stable-diffusion\model.ckpt: Traceback (most recent call last): File "C:\Stable-Diffusion\stable-diffusion-webui\modules\safe.py", line 135, in load_with_extra check_pt(filename, extra_handler) File "C:\Stable-Diffusion\stable-diffusion-webui\modules\safe.py", line 93, in check_pt unpickler.load() File "C:\Stable-Diffusion\stable-diffusion-webui\venv\lib\site-packages\torch_utils.py", line 153, in _rebuild_tensor_v2 tensor = _rebuild_tensor(storage, storage_offset, size, stride) File "C:\Stable-Diffusion\stable-diffusion-webui\venv\lib\site-packages\torch_utils.py", line 147, in rebuild_tensor return t.set(storage.untyped(), storage_offset, size, stride) RuntimeError: [enforce fail at ..\c10\core\impl\alloc_cpu.cpp:72] data. DefaultCPUAllocator: not enough memory: you tried to allocate 6553600 bytes.
The file may be malicious, so the program is not going to read it. You can skip this check with --disable-safe-unpickle commandline argument.
loading stable diffusion model: AttributeError Traceback (most recent call last): File "C:\Stable-Diffusion\stable-diffusion-webui\webui.py", line 102, in initialize modules.sd_models.load_model() File "C:\Stable-Diffusion\stable-diffusion-webui\modules\sd_models.py", line 392, in load_model load_model_weights(sd_model, checkpoint_info) File "C:\Stable-Diffusion\stable-diffusion-webui\modules\sd_models.py", line 247, in load_model_weights sd = read_state_dict(checkpoint_info.filename) File "C:\Stable-Diffusion\stable-diffusion-webui\modules\sd_models.py", line 227, in read_state_dict sd = get_state_dict_from_checkpoint(pl_sd) File "C:\Stable-Diffusion\stable-diffusion-webui\modules\sd_models.py", line 198, in get_state_dict_from_checkpoint pl_sd = pl_sd.pop("state_dict", pl_sd) AttributeError: 'NoneType' object has no attribute 'pop'
Stable diffusion model failed to load, exiting Press any key to continue . . . ___________________________________________________________________________________________________________________________________-- What does this mean? Did i completely run dry on RAM?
Steps to reproduce the problem
- Go to webui-user.bat
- Run it
- The model fails to load with the error above. I think I'm out of RAM.
What should have happened?
The web UI should have booted up and loaded the Stable Diffusion model.
Commit where the problem happens
e407d1af897a7896d8c81e32dc86e7eb753ce207
What platforms do you use to access the UI ?
Windows
What browsers do you use to access the UI ?
Google Chrome
Command Line Arguments
--lowram
List of extensions
None.
Console logs
venv "C:\Stable-Diffusion\stable-diffusion-webui\venv\Scripts\Python.exe"
Python 3.10.6 (tags/v3.10.6:9c7b4bd, Aug 1 2022, 21:53:49) [MSC v.1932 64 bit (AMD64)]
Commit hash: e407d1af897a7896d8c81e32dc86e7eb753ce207
Installing requirements for Web UI
Launching Web UI with arguments: --lowram
No module 'xformers'. Proceeding without it.
LatentDiffusion: Running in eps-prediction mode
DiffusionWrapper has 859.52 M params.
Loading weights [543bcbc212] from C:\Stable-Diffusion\stable-diffusion-webui\models\Stable-diffusion\model.ckpt
Error verifying pickled file from C:\Stable-Diffusion\stable-diffusion-webui\models\Stable-diffusion\model.ckpt:
Traceback (most recent call last):
File "C:\Stable-Diffusion\stable-diffusion-webui\modules\safe.py", line 135, in load_with_extra
check_pt(filename, extra_handler)
File "C:\Stable-Diffusion\stable-diffusion-webui\modules\safe.py", line 93, in check_pt
unpickler.load()
File "C:\Stable-Diffusion\stable-diffusion-webui\venv\lib\site-packages\torch\_utils.py", line 153, in _rebuild_tensor_v2
tensor = _rebuild_tensor(storage, storage_offset, size, stride)
File "C:\Stable-Diffusion\stable-diffusion-webui\venv\lib\site-packages\torch\_utils.py", line 147, in _rebuild_tensor
return t.set_(storage.untyped(), storage_offset, size, stride)
RuntimeError: [enforce fail at ..\c10\core\impl\alloc_cpu.cpp:72] data. DefaultCPUAllocator: not enough memory: you tried to allocate 6553600 bytes.
The file may be malicious, so the program is not going to read it.
You can skip this check with --disable-safe-unpickle commandline argument.
loading stable diffusion model: AttributeError
Traceback (most recent call last):
File "C:\Stable-Diffusion\stable-diffusion-webui\webui.py", line 102, in initialize
modules.sd_models.load_model()
File "C:\Stable-Diffusion\stable-diffusion-webui\modules\sd_models.py", line 392, in load_model
load_model_weights(sd_model, checkpoint_info)
File "C:\Stable-Diffusion\stable-diffusion-webui\modules\sd_models.py", line 247, in load_model_weights
sd = read_state_dict(checkpoint_info.filename)
File "C:\Stable-Diffusion\stable-diffusion-webui\modules\sd_models.py", line 227, in read_state_dict
sd = get_state_dict_from_checkpoint(pl_sd)
File "C:\Stable-Diffusion\stable-diffusion-webui\modules\sd_models.py", line 198, in get_state_dict_from_checkpoint
pl_sd = pl_sd.pop("state_dict", pl_sd)
AttributeError: 'NoneType' object has no attribute 'pop'
Stable diffusion model failed to load, exiting
Press any key to continue . . .
Additional information
No response
> DefaultCPUAllocator: not enough memory: you tried to allocate 6553600 bytes.
- Close some background apps on your PC.
- Increase the size of your pagefile.sys.
You're already using `--lowram`, so I don't know what else to suggest.
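As for the "What does this mean?" part: the AttributeError is just a follow-on failure. When the safety unpickler in modules/safe.py runs out of memory, it catches the exception, prints the "may be malicious" warning, and hands back None instead of a checkpoint dict; modules/sd_models.py then calls .pop() on that None. A rough, simplified paraphrase of the chain (hypothetical stand-ins, not the actual webui code):

```python
# Simplified, hypothetical paraphrase of the failure chain in the log above
# (not the real modules/safe.py / modules/sd_models.py source).

def check_pt(filename):
    # Stand-in for the real safety check: unpickling the tensors is what fails.
    raise RuntimeError("DefaultCPUAllocator: not enough memory")

def load_with_extra(filename):
    try:
        check_pt(filename)
    except Exception:
        print(f"Error verifying pickled file from {filename}")
        return None  # the checkpoint "loads" as None

def get_state_dict_from_checkpoint(pl_sd):
    # pl_sd is None here, hence AttributeError: 'NoneType' object has no attribute 'pop'
    return pl_sd.pop("state_dict", pl_sd)

pl_sd = load_with_extra(r"C:\path\to\model.ckpt")
get_state_dict_from_checkpoint(pl_sd)  # raises the same AttributeError as in the log
```

So the only real problem is the out-of-memory failure while the checkpoint is being read; once that is solved, the AttributeError goes away.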
What happened:
venv "K:\stable-diffusion-webui-master\venv\Scripts\Python.exe"
Python 3.10.9 (tags/v3.10.9:1dd9be6, Dec 6 2022, 20:01:21) [MSC v.1934 64 bit (AMD64)]
Commit hash: <none>
Installing requirements for Web UI
#######################################################################################################
Initializing Dreambooth
If submitting an issue on github, please provide the below text for debugging purposes:
Python revision: 3.10.9 (tags/v3.10.9:1dd9be6, Dec 6 2022, 20:01:21) [MSC v.1934 64 bit (AMD64)]
Dreambooth revision: c2269b8585d994efa31c6582fc19a890253c804e
SD-WebUI revision:
Checking Dreambooth requirements...
[+] bitsandbytes version 0.35.0 installed.
[+] diffusers version 0.10.2 installed.
[+] transformers version 4.25.1 installed.
[ ] xformers version N/A installed.
[+] torch version 1.12.1+cu113 installed.
[+] torchvision version 0.13.1+cu113 installed.
#######################################################################################################
Launching Web UI with arguments: --precision full --no-half --medvram --listen --api
No module 'xformers'. Proceeding without it.
SD-Webui API layer loaded
Checkpoint not found; loading fallback Anything-V3.0-pruned.ckpt [2700c435]
LatentDiffusion: Running in eps-prediction mode
DiffusionWrapper has 859.52 M params.
Loading weights [2700c435] from K:\stable-diffusion-webui-master\models\Stable-diffusion\Anything-V3.0-pruned.ckpt
Error verifying pickled file from K:\stable-diffusion-webui-master\models\Stable-diffusion\Anything-V3.0-pruned.ckpt:
Traceback (most recent call last):
File "K:\stable-diffusion-webui-master\extensions\sd_dreambooth_extension\reallysafe.py", line 146, in load_with_extra
check_pt(filename, extra_handler)
File "K:\stable-diffusion-webui-master\extensions\sd_dreambooth_extension\reallysafe.py", line 104, in check_pt
unpickler.load()
File "K:\stable-diffusion-webui-master\venv\lib\site-packages\torch\_utils.py", line 138, in _rebuild_tensor_v2
tensor = _rebuild_tensor(storage, storage_offset, size, stride)
File "K:\stable-diffusion-webui-master\venv\lib\site-packages\torch\_utils.py", line 134, in _rebuild_tensor
return t.set_(storage._untyped(), storage_offset, size, stride)
RuntimeError: [enforce fail at ..\c10\core\impl\alloc_cpu.cpp:81] data. DefaultCPUAllocator: not enough memory: you tried to allocate 22118400 bytes.
The file may be malicious, so the program is not going to read it.
You can skip this check with --disable-safe-unpickle commandline argument.
loading stable diffusion model: AttributeError
Traceback (most recent call last):
File "K:\stable-diffusion-webui-master\webui.py", line 71, in initialize
modules.sd_models.load_model()
File "K:\stable-diffusion-webui-master\modules\sd_models.py", line 321, in load_model
load_model_weights(sd_model, checkpoint_info)
File "K:\stable-diffusion-webui-master\modules\sd_models.py", line 202, in load_model_weights
sd = read_state_dict(checkpoint_file)
File "K:\stable-diffusion-webui-master\modules\sd_models.py", line 184, in read_state_dict
sd = get_state_dict_from_checkpoint(pl_sd)
File "K:\stable-diffusion-webui-master\modules\sd_models.py", line 155, in get_state_dict_from_checkpoint
pl_sd = pl_sd.pop("state_dict", pl_sd)
AttributeError: 'NoneType' object has no attribute 'pop'
Stable diffusion model failed to load, exiting
I did not use the UI for a week; running it today (31.01.23) results in this error while loading the model. I tried removing the last-used model from the config, but the error persists with the other model too.
Doing a quick modifydate:today search from the root directory shows that the venv had some changes made to packages when the UI was started today.
Of course, no changes were made to the models or configuration. The only difference was defragmenting the drive and splitting it into two partitions, which shouldn't be an issue since the filesystem and all files are intact.
In terms of RAM usage: I currently have 4/16 GB used. Usually I saw 8-10 GB used by Python while models were loading and never got such an error until today.
++ Running with --lowram or --lowvram produces the same result. By the way, it is only trying to allocate 50-70 MB of memory, which should be completely available.
++ I not only defragmented my hard drive but also disabled Windows swapping. Even though memory usage only goes up to 11.2/16 GB, I have to have swapping enabled with some amount of reserved memory, otherwise the UI won't load any models.
++ Running again with --precision full --no-half --medvram --listen --api **--lowram** also doesn't work.
++ With --medvram, RAM fills up to 12.7 GB, and once the model is loaded, python.exe uses 4.7 GB.
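For anyone hitting the same thing: on Windows an allocation can fail once the commit limit (physical RAM plus pagefile) is exhausted, even while free physical RAM is still shown, which would fit the observation above that loading only works with swapping enabled. A small diagnostic sketch to watch both numbers while the model loads (assumes psutil is installed in the venv; `pip install psutil` if not):

```python
# Diagnostic sketch: print free physical RAM and free pagefile/swap once a second
# while the model is loading. Requires psutil (pip install psutil).
import time
import psutil

GIB = 1024 ** 3
for _ in range(60):  # sample for about a minute
    vm = psutil.virtual_memory()  # physical RAM
    sm = psutil.swap_memory()     # pagefile-backed swap
    print(f"RAM free {vm.available / GIB:5.1f}/{vm.total / GIB:.1f} GiB | "
          f"swap free {sm.free / GIB:5.1f}/{sm.total / GIB:.1f} GiB")
    time.sleep(1)
```

If the swap column drops to zero right before the DefaultCPUAllocator error, the pagefile size (or re-enabling the pagefile) is the thing to fix, not free physical RAM.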
load checkpoint from K:\stable-diffusion-webui-master\models\BLIP\model_base_caption_capfilt_large.pth
Error interrogating
Traceback (most recent call last):
File "K:\stable-diffusion-webui-master\modules\interrogate.py", line 163, in interrogate
artist = self.rank(image_features, ["by " + artist.name for artist in shared.artist_db.artists])[0]
File "K:\stable-diffusion-webui-master\modules\interrogate.py", line 114, in rank
text_features = self.clip_model.encode_text(text_tokens).type(self.dtype)
File "K:\stable-diffusion-webui-master\venv\lib\site-packages\clip\model.py", line 344, in encode_text
x = self.token_embedding(text).type(self.dtype) # [batch_size, n_ctx, d_model]
RuntimeError: CUDA out of memory. Tried to allocate 170.00 MiB (GPU 0; 12.00 GiB total capacity; 1.22 GiB already allocated; 7.81 GiB free; 1.96 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF
Traceback (most recent call last):
File "K:\stable-diffusion-webui-master\venv\lib\site-packages\gradio\routes.py", line 321, in run_predict
output = await app.blocks.process_api(
File "K:\stable-diffusion-webui-master\venv\lib\site-packages\gradio\blocks.py", line 1015, in process_api
result = await self.call_function(fn_index, inputs, iterator, request)
File "K:\stable-diffusion-webui-master\venv\lib\site-packages\gradio\blocks.py", line 856, in call_function
prediction = await anyio.to_thread.run_sync(
File "K:\stable-diffusion-webui-master\venv\lib\site-packages\anyio\to_thread.py", line 31, in run_sync
return await get_asynclib().run_sync_in_worker_thread(
File "K:\stable-diffusion-webui-master\venv\lib\site-packages\anyio\_backends\_asyncio.py", line 937, in run_sync_in_worker_thread
return await future
File "K:\stable-diffusion-webui-master\venv\lib\site-packages\anyio\_backends\_asyncio.py", line 867, in run
result = context.run(func, *args)
File "K:\stable-diffusion-webui-master\modules\ui.py", line 267, in interrogate
prompt = shared.interrogator.interrogate(image.convert("RGB"))
File "K:\stable-diffusion-webui-master\modules\interrogate.py", line 180, in interrogate
self.unload()
File "K:\stable-diffusion-webui-master\modules\interrogate.py", line 101, in unload
self.send_clip_to_ram()
File "K:\stable-diffusion-webui-master\modules\interrogate.py", line 93, in send_clip_to_ram
self.clip_model = self.clip_model.to(devices.cpu)
File "K:\stable-diffusion-webui-master\venv\lib\site-packages\torch\nn\modules\module.py", line 927, in to
return self._apply(convert)
File "K:\stable-diffusion-webui-master\venv\lib\site-packages\torch\nn\modules\module.py", line 579, in _apply
module._apply(fn)
File "K:\stable-diffusion-webui-master\venv\lib\site-packages\torch\nn\modules\module.py", line 579, in _apply
module._apply(fn)
File "K:\stable-diffusion-webui-master\venv\lib\site-packages\torch\nn\modules\module.py", line 579, in _apply
module._apply(fn)
[Previous line repeated 2 more times]
File "K:\stable-diffusion-webui-master\venv\lib\site-packages\torch\nn\modules\module.py", line 602, in _apply
param_applied = fn(param)
File "K:\stable-diffusion-webui-master\venv\lib\site-packages\torch\nn\modules\module.py", line 925, in convert
return t.to(device, dtype if t.is_floating_point() or t.is_complex() else None, non_blocking)
RuntimeError: [enforce fail at ..\c10\core\impl\alloc_cpu.cpp:81] data. DefaultCPUAllocator: not enough memory: you tried to allocate 6291456 bytes.
Running with --precision full --no-half --medvram --listen --api, trying the img2img Interrogate CLIP option.
It's stupid and I hate it. There IS a spike in memory use that is not shown in Windows Task Manager: while the model is being loaded, I run into RAM limitations, which did not happen before.
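Regarding the "try setting max_split_size_mb" hint in the CUDA error above: one way to experiment with it is to set PYTORCH_CUDA_ALLOC_CONF before the UI starts, e.g. `set PYTORCH_CUDA_ALLOC_CONF=max_split_size_mb:128` in webui-user.bat, or from Python before torch is imported. The value 128 is only an illustrative guess, and this only addresses the CUDA fragmentation part, not the CPU-side DefaultCPUAllocator failure at the end of the traceback.

```python
# Illustrative sketch: the allocator config must be set before the CUDA allocator
# initializes, so do it before torch is imported. 128 MB is an example value, not a tuned one.
import os
os.environ.setdefault("PYTORCH_CUDA_ALLOC_CONF", "max_split_size_mb:128")

import torch  # torch picks up PYTORCH_CUDA_ALLOC_CONF when its CUDA allocator starts
```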
Closing as stale.