stable-diffusion-webui
The WebUI won't launch ("The file may be malicious, so the program is not going to read it.")
Describe the bug The WebUI won't launch. It prints "The file may be malicious, so the program is not going to read it.", followed by a traceback, and then "Press any key to continue...".
To Reproduce No clue how to reproduce.
Expected behavior The WebUI launches and gives me a local link.
Screenshots
Desktop (please complete the following information):
- OS: Windows
- Browser: Chrome
- Commit revision: 6a4e84671016d38c10a55fedcdf09321dba737ae
Additional context
LatentDiffusion: Running in eps-prediction mode
DiffusionWrapper has 859.52 M params.
making attention of type 'vanilla' with 512 in_channels
Working with z of shape (1, 4, 32, 32) = 4096 dimensions.
making attention of type 'vanilla' with 512 in_channels
Error verifying pickled file from C:\Users\masho/.cache\huggingface\transformers\c506559a5367a918bab46c39c79af91ab88846b49c8abd9d09e699ae067505c6.6365d436cc844f2f2b4885629b559d8ff0938ac484c01a6796538b2665de96c7:
Traceback (most recent call last):
File "E:\Stable Diffusion\Automatic1111\stable-diffusion-webui\modules\safe.py", line 97, in load
check_pt(filename)
File "E:\Stable Diffusion\Automatic1111\stable-diffusion-webui\modules\safe.py", line 81, in check_pt
unpickler.load()
File "E:\Stable Diffusion\Automatic1111\stable-diffusion-webui\venv\lib\site-packages\torch\_utils.py", line 138, in _rebuild_tensor_v2
tensor = _rebuild_tensor(storage, storage_offset, size, stride)
File "E:\Stable Diffusion\Automatic1111\stable-diffusion-webui\venv\lib\site-packages\torch\_utils.py", line 134, in _rebuild_tensor
return t.set_(storage._untyped(), storage_offset, size, stride)
RuntimeError: [enforce fail at ..\c10\core\impl\alloc_cpu.cpp:81] data. DefaultCPUAllocator: not enough memory: you tried to allocate 12582912 bytes.
The file may be malicious, so the program is not going to read it. You can skip this check with --disable-safe-unpickle commandline argument.
Traceback (most recent call last):
File "E:\Stable Diffusion\Automatic1111\stable-diffusion-webui\launch.py", line 171, in
--disable-safe-unpickle
> --disable-safe-unpickle
Yes, I tried that and it still shows me this:
Traceback (most recent call last):
File "E:\Stable Diffusion\Automatic1111\stable-diffusion-webui\launch.py", line 171, in
I restarted my PC and it solved the issue.
So it started, but then it gave me the same error when I went ahead and generated an image:
Global Step: 470000
Applying cross attention optimization (Doggettx).
Weights loaded.
1 out of 14: 100%|██████████████████████████████████████████████████████████████████| 20/20 [00:20<00:00, 1.01s/it]
Error completing request█▍ | 20/280 [00:32<03:49, 1.13it/s]
Arguments: ('movie post of Donald Trump as a necromancer, horror, filthy, scary, disgusting, by Mariusz Lewandowski and Zdzisław Beksiński, insane level of details, intricate, cinematic, 16k, ultra HD', '', 'None', 'None', 20, 0, False, False, 1, 1, 7, -1.0, -1.0, 0, 0, 0, False, 768, 512, False, 0.7, 0, 0, 3, False, False, None, '', 5, '3,4,5,6,7,8,10', 9, 'model, Necromancer FT Model', True, False, False) {}
Traceback (most recent call last):
File "E:\Stable Diffusion\Automatic1111\stable-diffusion-webui\modules\ui.py", line 187, in f
res = list(func(*args, **kwargs))
File "E:\Stable Diffusion\Automatic1111\stable-diffusion-webui\webui.py", line 64, in f
res = func(*args, **kwargs)
File "E:\Stable Diffusion\Automatic1111\stable-diffusion-webui\modules\txt2img.py", line 41, in txt2img
processed = modules.scripts.scripts_txt2img.run(p, *args)
File "E:\Stable Diffusion\Automatic1111\stable-diffusion-webui\modules\scripts.py", line 159, in run
processed = script.run(p, *script_args)
File "E:\Stable Diffusion\Automatic1111\stable-diffusion-webui\scripts\xy_grid.py", line 370, in run
processed = draw_xy_grid(
File "E:\Stable Diffusion\Automatic1111\stable-diffusion-webui\scripts\xy_grid.py", line 201, in draw_xy_grid
processed:Processed = cell(x, y)
File "E:\Stable Diffusion\Automatic1111\stable-diffusion-webui\scripts\xy_grid.py", line 368, in cell
return process_images(pc)
File "E:\Stable Diffusion\Automatic1111\stable-diffusion-webui\modules\processing.py", line 417, in process_images
x_samples_ddim = decode_first_stage(p.sd_model, samples_ddim)
File "E:\Stable Diffusion\Automatic1111\stable-diffusion-webui\modules\processing.py", line 267, in decode_first_stage
x = model.decode_first_stage(x)
File "E:\Stable Diffusion\Automatic1111\stable-diffusion-webui\venv\lib\site-packages\torch\autograd\grad_mode.py", line 27, in decorate_context
return func(*args, **kwargs)
File "E:\Stable Diffusion\Automatic1111\stable-diffusion-webui\repositories\stable-diffusion\ldm\models\diffusion\ddpm.py", line 763, in decode_first_stage
return self.first_stage_model.decode(z)
File "E:\Stable Diffusion\Automatic1111\stable-diffusion-webui\modules\lowvram.py", line 61, in
sd_model.first_stage_model.decode = lambda z, de=sd_model.first_stage_model.decode: first_stage_model_decode_wrap(sd_model.first_stage_model, de, z)
File "E:\Stable Diffusion\Automatic1111\stable-diffusion-webui\modules\lowvram.py", line 47, in first_stage_model_decode_wrap
send_me_to_gpu(self, None)
File "E:\Stable Diffusion\Automatic1111\stable-diffusion-webui\modules\lowvram.py", line 34, in send_me_to_gpu
module_in_gpu.to(cpu)
File "E:\Stable Diffusion\Automatic1111\stable-diffusion-webui\venv\lib\site-packages\pytorch_lightning\core\mixins\device_dtype_mixin.py", line 113, in to
return super().to(*args, **kwargs)
File "E:\Stable Diffusion\Automatic1111\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 927, in to
return self._apply(convert)
File "E:\Stable Diffusion\Automatic1111\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 579, in _apply
module._apply(fn)
File "E:\Stable Diffusion\Automatic1111\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 579, in _apply
module._apply(fn)
File "E:\Stable Diffusion\Automatic1111\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 579, in _apply
module._apply(fn)
[Previous line repeated 3 more times]
File "E:\Stable Diffusion\Automatic1111\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 602, in _apply
param_applied = fn(param)
File "E:\Stable Diffusion\Automatic1111\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 925, in convert
return t.to(device, dtype if t.is_floating_point() or t.is_complex() else None, non_blocking)
RuntimeError: [enforce fail at ..\c10\core\impl\alloc_cpu.cpp:81] data. DefaultCPUAllocator: not enough memory: you tried to allocate 58982400 bytes.
I have the same issue. For me this can be easily reproduced right after triggering a CUDA OOM (though for me it still shows available VRAM #4541) by simply trying to switch to any other model.
Any ideas on what might be causing it?
I found a temporary workaround if you are on Windows: try increasing your paging file size on the drive on which the WebUI is installed. It's not exactly a suitable long-term solution, but it can alleviate the issue until either more RAM is obtained or we figure out why even my 16GB of system RAM doesn't seem to be enough to launch the WebUI.
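If you want to confirm the commit limit is the culprit before touching the pagefile, you can watch the remaining commit charge while the WebUI loads. A small standard-library sketch using the Win32 GlobalMemoryStatusEx call (despite the field name, ullAvailPageFile is the remaining commit charge, i.e. RAM plus pagefile, not the pagefile alone):

```python
import ctypes

class MEMORYSTATUSEX(ctypes.Structure):
    _fields_ = [
        ("dwLength", ctypes.c_ulong),
        ("dwMemoryLoad", ctypes.c_ulong),
        ("ullTotalPhys", ctypes.c_ulonglong),
        ("ullAvailPhys", ctypes.c_ulonglong),
        ("ullTotalPageFile", ctypes.c_ulonglong),
        ("ullAvailPageFile", ctypes.c_ulonglong),
        ("ullTotalVirtual", ctypes.c_ulonglong),
        ("ullAvailVirtual", ctypes.c_ulonglong),
        ("ullAvailExtendedVirtual", ctypes.c_ulonglong),
    ]

status = MEMORYSTATUSEX()
status.dwLength = ctypes.sizeof(MEMORYSTATUSEX)
ctypes.windll.kernel32.GlobalMemoryStatusEx(ctypes.byref(status))

print(f"physical RAM free: {status.ullAvailPhys / 2**30:.1f} GiB")
print(f"commit remaining:  {status.ullAvailPageFile / 2**30:.1f} GiB")
```

If the commit-remaining figure collapses to near zero just before the crash, raising the pagefile cap (or letting Windows manage its size) should make the DefaultCPUAllocator failures go away.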
> I have the same issue. For me this can be easily reproduced right after triggering a CUDA OOM (though for me it still shows available VRAM #4541) by simply trying to switch to any other model.
The same problem just happened to me. Once I hit the OOM, I can launch the web UI, but then every time I try to switch models it behaves as though there is no more memory available, even though I have more than enough VRAM and RAM (64GB) in the system.
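In case it helps narrow this down: if the leak is PyTorch's caching allocator holding on to blocks after the OOM (an assumption on my part, not a confirmed diagnosis of this issue), forcing a collection and dropping the cache before switching models can sometimes recover the memory:

```python
import gc
import torch

gc.collect()                # release unreachable Python objects still holding tensors
torch.cuda.empty_cache()    # hand cached-but-unused CUDA blocks back to the driver
torch.cuda.reset_peak_memory_stats()  # optional: makes later OOM reports easier to read
```

If model switching still fails after that, the allocation that dies is the CPU-side one from the tracebacks above, which points back at the commit limit rather than VRAM.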
> I found a temporary workaround if you are on Windows: try increasing your paging file size on the drive on which the WebUI is installed. It's not exactly a suitable long-term solution, but it can alleviate the issue until either more RAM is obtained or we figure out why even my 16GB of system RAM doesn't seem to be enough to launch the WebUI.
This worked, thanks! What weird behaviour... does anyone understand what's going on there?
Got this with 32GB of system RAM. Checked and my pagefile wasn't at its maximum size. A reboot fixed it.
> Got this with 32GB of system RAM. Checked and my pagefile wasn't at its maximum size. A reboot fixed it.
Damn, even with 32GB, huh? Did the reboot permanently fix it? Also, did you increase the pagefile size, or did you just reboot?
Haven't run into the issue since, didn't change the pagefile settings.
This error started for me after I updated to a recent version and started using a merged checkpoint.
https://github.com/AUTOMATIC1111/stable-diffusion-webui/issues/2569