stable-diffusion-webui
Run run_webui_mac.sh error
Describe the bug: After I installed it, it ran successfully, but when I ran run_webui_mac.sh again a few hours later, it reported an error and stopped at AttributeError: 'NoneType' object has no attribute 'keys'. I tried deleting /Users/yjy/.cache/huggingface, but the error still occurs.
To Reproduce: Steps to reproduce the behavior: I don't know why the error occurs.
Desktop (please complete the following information):
- OS: macOS
$ ./run_webui_mac.sh
WARNING: overwriting environment variables set in the machine
overwriting variable PYTORCH_ENABLE_MPS_FALLBACK
To make your changes take effect please reactivate your environment
WARNING: overwriting environment variables set in the machine
overwriting variable PYTORCH_ENABLE_MPS_FALLBACK
Already up to date.
Warning: caught exception 'Torch not compiled with CUDA enabled', memory monitor disabled
LatentDiffusion: Running in eps-prediction mode
DiffusionWrapper has 859.52 M params.
making attention of type 'vanilla' with 512 in_channels
Working with z of shape (1, 4, 32, 32) = 4096 dimensions.
making attention of type 'vanilla' with 512 in_channels
Error verifying pickled file from /Users/yjy/.cache/huggingface/hub/models--openai--clip-vit-large-patch14/snapshots/8d052a0f05efbaefbc9e8786ba291cfdf93e5bff/pytorch_model.bin:
Traceback (most recent call last):
File "/Users/yjy/Documents/Code/AI/stable-diffusion-webui/modules/safe.py", line 76, in load
check_pt(filename)
File "/Users/yjy/Documents/Code/AI/stable-diffusion-webui/modules/safe.py", line 60, in check_pt
unpickler.load()
File "/Users/yjy/Documents/Code/AI/stable-diffusion-webui/modules/safe.py", line 23, in persistent_load
return torch.storage._TypedStorage()
AttributeError: module 'torch.storage' has no attribute '_TypedStorage'
The file may be malicious, so the program is not going to read it.
You can skip this check with --disable-safe-unpickle commandline argument.
Traceback (most recent call last):
File "/Users/yjy/Documents/Code/AI/stable-diffusion-webui/webui.py", line 82, in <module>
shared.sd_model = modules.sd_models.load_model()
File "/Users/yjy/Documents/Code/AI/stable-diffusion-webui/modules/sd_models.py", line 174, in load_model
sd_model = instantiate_from_config(sd_config.model)
File "/Users/yjy/Documents/Code/AI/stable-diffusion-webui/repositories/stable-diffusion/ldm/util.py", line 85, in instantiate_from_config
return get_obj_from_str(config["target"])(**config.get("params", dict()))
File "/Users/yjy/Documents/Code/AI/stable-diffusion-webui/repositories/stable-diffusion/ldm/models/diffusion/ddpm.py", line 461, in __init__
self.instantiate_cond_stage(cond_stage_config)
File "/Users/yjy/Documents/Code/AI/stable-diffusion-webui/repositories/stable-diffusion/ldm/models/diffusion/ddpm.py", line 519, in instantiate_cond_stage
model = instantiate_from_config(config)
File "/Users/yjy/Documents/Code/AI/stable-diffusion-webui/repositories/stable-diffusion/ldm/util.py", line 85, in instantiate_from_config
return get_obj_from_str(config["target"])(**config.get("params", dict()))
File "/Users/yjy/Documents/Code/AI/stable-diffusion-webui/repositories/stable-diffusion/ldm/modules/encoders/modules.py", line 142, in __init__
self.transformer = CLIPTextModel.from_pretrained(version)
File "/Users/yjy/Documents/Code/miniforge3/envs/web-ui/lib/python3.10/site-packages/transformers/modeling_utils.py", line 2138, in from_pretrained
loaded_state_dict_keys = [k for k in state_dict.keys()]
AttributeError: 'NoneType' object has no attribute 'keys'
What is the solution to this problem?
Same.
I managed to solve this problem by uninstalling the nightly versions of Torch and Torchvision and installing the stable ones. If you do that, the webui will load, but as soon as you attempt to generate an image, you'll get back to the dear old error:
RuntimeError: view size is not compatible with input tensor's size and stride (at least one dimension spans across two contiguous subspaces). Use .reshape(...) instead.
That error can be fixed only with the nightly versions of Torch and Torchvision (installed by the macOS script). So you are back to square one.
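For context on that RuntimeError: it is generic PyTorch behavior rather than anything webui-specific. .view() needs contiguous memory, while .reshape() copies when necessary. A minimal, self-contained illustration of the pattern behind the message (not the actual webui code path):

```python
import torch

# A transposed tensor is a non-contiguous view, so .view() cannot reinterpret it...
x = torch.randn(2, 3).t()   # shape (3, 2), non-contiguous strides
try:
    x.view(6)
except RuntimeError as e:
    print(e)                # "view size is not compatible with input tensor's size and stride ..."

# ...while .reshape() silently copies when needed and succeeds.
y = x.reshape(6)
print(y.shape)              # torch.Size([6])

# Making the tensor contiguous first also makes .view() legal again.
z = x.contiguous().view(6)
```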
My solution, for now, is to keep the nightly versions and revert to a previous commit:
git reset --hard e00b4df7c6f0a13941d6f6ea425eebdaa2bc9318
This commit is safe, at least with my configuration. Once you do this, you can launch the webui as usual. Just don't issue a git pull command.
Any better solution for this, maybe a specific torchvision & torch nightly version combo?
Just ran into this also after doing a lot of manual installations to get to this point.
Same here. It was working yesterday though; I ran the setup script again but still get the same error.
Edit:
Temporarily solved by rolling back to a previous nightly build of torch, as mentioned here. Since the run_webui_mac.sh script will update torch automatically upon launch, I just commented out the git pull command in the script, and it worked just as before.
This problem has been fixed. It should be safe to uncomment git pull in the script for anyone who commented it out.
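For reference, the incompatibility appears to be that recent torch nightlies renamed torch.storage._TypedStorage to torch.storage.TypedStorage, while modules/safe.py still referenced the old name (see the traceback at the top of this issue). A minimal sketch of the kind of compatibility shim that resolves it, assuming that is roughly what the fix does (I have not checked the exact commit):

```python
import torch

# Resolve the storage class under whichever name the installed torch exposes:
# stable releases use _TypedStorage, newer nightlies renamed it to TypedStorage.
TypedStorage = getattr(torch.storage, "TypedStorage", None) or \
               getattr(torch.storage, "_TypedStorage", None)

def persistent_load(saved_id):
    # Same idea as modules/safe.py: return an empty typed-storage placeholder
    # while scanning the pickle, independent of the torch version.
    return TypedStorage()
```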
Just tried to start the webui, but it brings up this error (I ran the setup script too, same outcome):
thomas@MacBook-Pro-von-Thomas stable-diffusion-webui % ./run_webui_mac.sh
To make your changes take effect please reactivate your environment
WARNING: overwriting environment variables set in the machine
overwriting variable PYTORCH_ENABLE_MPS_FALLBACK
Already up to date.
Traceback (most recent call last):
File "/Users/thomas/SD_Test/stable-diffusion-webui/webui.py", line 8, in <module>
from fastapi.middleware.gzip import GZipMiddleware
ModuleNotFoundError: No module named 'fastapi'
Did that; now I have the same error as StableInquest:
user@MacBook-Pro-von-user stable-diffusion-webui % ./run_webui_mac.sh
To make your changes take effect please reactivate your environment
WARNING: overwriting environment variables set in the machine
overwriting variable PYTORCH_ENABLE_MPS_FALLBACK
Already up to date.
Warning: caught exception 'Torch not compiled with CUDA enabled', memory monitor disabled
LatentDiffusion: Running in eps-prediction mode
DiffusionWrapper has 859.52 M params.
making attention of type 'vanilla' with 512 in_channels
Working with z of shape (1, 4, 32, 32) = 4096 dimensions.
making attention of type 'vanilla' with 512 in_channels
Loading weights [e3b0c442] from /Users/user/SD_Test/stable-diffusion-webui/models/Stable-diffusion/sd-v1-4.ckpt
Error verifying pickled file from /Users/user/SD_Test/stable-diffusion-webui/models/Stable-diffusion/sd-v1-4.ckpt:
Traceback (most recent call last):
File "/Users/user/SD_Test/stable-diffusion-webui/modules/safe.py", line 61, in check_pt
with zipfile.ZipFile(filename) as z:
File "/Users/user/miniconda/envs/web-ui/lib/python3.10/zipfile.py", line 1267, in __init__
self._RealGetContents()
File "/Users/user/miniconda/envs/web-ui/lib/python3.10/zipfile.py", line 1334, in _RealGetContents
raise BadZipFile("File is not a zip file")
zipfile.BadZipFile: File is not a zip file
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/Users/user/SD_Test/stable-diffusion-webui/modules/safe.py", line 80, in load
check_pt(filename)
File "/Users/user/SD_Test/stable-diffusion-webui/modules/safe.py", line 72, in check_pt
unpickler.load()
EOFError: Ran out of input
The file may be malicious, so the program is not going to read it.
You can skip this check with --disable-safe-unpickle commandline argument.
Traceback (most recent call last):
File "/Users/user/SD_Test/stable-diffusion-webui/webui.py", line 82, in <module>
shared.sd_model = modules.sd_models.load_model()
File "/Users/user/SD_Test/stable-diffusion-webui/modules/sd_models.py", line 175, in load_model
load_model_weights(sd_model, checkpoint_info)
File "/Users/user/SD_Test/stable-diffusion-webui/modules/sd_models.py", line 138, in load_model_weights
if "global_step" in pl_sd:
TypeError: argument of type 'NoneType' is not iterable
user@MacBook-Pro-von-user stable-diffusion-webui %
I got past this by just adding the --disable-safe-unpickle command-line argument at the end of the line below (in the file ./run_webui_mac.sh):
python webui.py --precision full --no-half --opt-split-attention-v1 --use-cpu GFPGAN CodeFormer --disable-safe-unpickle
I am on an M1 Mac and now the only error I'm seeing is related to textual inversion training:
stable-diffusion-webui/repositories/stable-diffusion/ldm/models/diffusion/ddpm.py", line 1030, in p_losses
logvar_t = self.logvar[t].to(self.device)
RuntimeError: indices should be either on cpu or on the same device as the indexed tensor (cpu)
python webui.py --precision full --no-half --opt-split-attention-v1 --use-cpu GFPGAN CodeFormer --disable-safe-unpickle
Tried that, but I'm getting another error. So sad, it was working flawlessly the past few days.
user@MacBook-Pro-von-user stable-diffusion-webui % ./run_webui_mac.sh
To make your changes take effect please reactivate your environment
WARNING: overwriting environment variables set in the machine
overwriting variable PYTORCH_ENABLE_MPS_FALLBACK
Already up to date.
Warning: caught exception 'Torch not compiled with CUDA enabled', memory monitor disabled
LatentDiffusion: Running in eps-prediction mode
DiffusionWrapper has 859.52 M params.
making attention of type 'vanilla' with 512 in_channels
Working with z of shape (1, 4, 32, 32) = 4096 dimensions.
making attention of type 'vanilla' with 512 in_channels
Loading weights [e3b0c442] from /Users/user/SD_Test/stable-diffusion-webui/models/Stable-diffusion/sd-v1-4.ckpt
Traceback (most recent call last):
File "/Users/user/SD_Test/stable-diffusion-webui/webui.py", line 82, in <module>
shared.sd_model = modules.sd_models.load_model()
File "/Users/user/SD_Test/stable-diffusion-webui/modules/sd_models.py", line 175, in load_model
load_model_weights(sd_model, checkpoint_info)
File "/Users/user/SD_Test/stable-diffusion-webui/modules/sd_models.py", line 137, in load_model_weights
pl_sd = torch.load(checkpoint_file, map_location="cpu")
File "/Users/user/SD_Test/stable-diffusion-webui/modules/safe.py", line 89, in load
return unsafe_torch_load(filename, *args, **kwargs)
File "/Users/user/miniconda/envs/web-ui/lib/python3.10/site-packages/torch/serialization.py", line 764, in load
return _legacy_load(opened_file, map_location, pickle_module, **pickle_load_args)
File "/Users/user/miniconda/envs/web-ui/lib/python3.10/site-packages/torch/serialization.py", line 971, in _legacy_load
magic_number = pickle_module.load(f, **pickle_load_args)
EOFError: Ran out of input
user@MacBook-Pro-von-user stable-diffusion-webui %
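Both tracebacks above blame the checkpoint file itself rather than the loader (zipfile.BadZipFile, then EOFError even with the safety check bypassed), so before changing flags it may be worth confirming sd-v1-4.ckpt actually downloaded completely. A small, hypothetical sanity check:

```python
import os
import zipfile

# Hypothetical path; adjust to wherever your checkpoint lives.
ckpt = "/Users/user/SD_Test/stable-diffusion-webui/models/Stable-diffusion/sd-v1-4.ckpt"

# A complete sd-v1-4 checkpoint is several gigabytes; a tiny or zero-byte file
# means the download was truncated and needs to be re-fetched.
size_gb = os.path.getsize(ckpt) / 1e9
print(f"size: {size_gb:.2f} GB")

# Checkpoints saved with recent torch are zip archives; legacy ones are plain
# pickles, so treat a False here as suspicious only together with a small size.
print("looks like a zip archive:", zipfile.is_zipfile(ckpt))
```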
I'm having similar problems when running ./run_webui_mac.sh
My processor is the M1 Pro
WARNING: overwriting environment variables set in the machine
overwriting variable PYTORCH_ENABLE_MPS_FALLBACK
Already up to date.
Traceback (most recent call last):
File "/Users/luca/repositories/stable-diffusion-webui/webui.py", line 8, in <module>
from fastapi.middleware.gzip import GZipMiddleware
ModuleNotFoundError: No module named 'fastapi'
That can be solved by running the command below, as suggested by @brkirch, but then I had a different error.
bash -l -c "conda activate web-ui; pip install jsonmerge einops clean-fid resize_right torchdiffeq lark gradio fastapi omegaconf piexif fonts font-roboto pytorch_lightning transformers kornia realesrgan scunet timm"
fastapi: https://github.com/AUTOMATIC1111/stable-diffusion-webui/issues/1990
@NeoTrace82 I'm having the same error messages as you
Don't ask me how I fixed it; I tried a lot of things. The ones I remember: I purged Python manually and reinstalled it, installed the requirements manually again, ran the setup, and downloaded the models and GFPGAN for face restoration manually.
Right now it's working. I can't tell what fixed it, but I don't want to play around anymore since it's running.
If I remember any other steps I took, I will report them immediately.
I've fixed a few things in the setup script and submitted a PR: dylancl/stable-diffusion-webui-mps#2.
Hopefully this gets fixed because I'm experiencing the same issues as the rest of you. Was running fine two days ago.
Somehow the "Torch not compiled with CUDA enabled" error only appears when a specific sampling method is used. For example, on my M1 machine it only happens when using DDIM/PLMS.
Fresh install again, latest pull. Almost everything is working except training. See error:
stable-diffusion-webui/repositories/stable-diffusion/ldm/models/diffusion/ddpm.py", line 1030, in p_losses
logvar_t = self.logvar[t].to(self.device)
RuntimeError: indices should be either on cpu or on the same device as the indexed tensor (cpu)
Anyone have success in fixing this?
stable-diffusion-webui/repositories/stable-diffusion/ldm/models/diffusion/ddpm.py", line 1030, in p_losses
logvar_t = self.logvar[t].to(self.device)
RuntimeError: indices should be either on cpu or on the same device as the indexed tensor (cpu)
@StableInquest Please provide everything that was printed, starting with Traceback (most recent call last):. This error occurred inside of Stable Diffusion rather than webui, which makes it potentially much harder to troubleshoot without the full traceback.
Traceback (most recent call last):
File "/Users/user/stable-diffusion-webui/modules/ui.py", line 176, in f
res = list(func(*args, **kwargs))
File "/Users/user/stable-diffusion-webui/webui.py", line 68, in f
res = func(*args, **kwargs)
File "/Users/user/stable-diffusion-webui/modules/textual_inversion/ui.py", line 29, in train_embedding
embedding, filename = modules.textual_inversion.textual_inversion.train_embedding(*args)
File "/Users/user/stable-diffusion-webui/modules/textual_inversion/textual_inversion.py", line 257, in train_embedding
loss = shared.sd_model(x.unsqueeze(0), c)[0]
File "/Users/user/miniconda/envs/web-ui/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1190, in _call_impl
return forward_call(*input, **kwargs)
File "/Users/user/stable-diffusion-webui/repositories/stable-diffusion/ldm/models/diffusion/ddpm.py", line 879, in forward
return self.p_losses(x, c, t, *args, **kwargs)
File "/Users/user/stable-diffusion-webui/repositories/stable-diffusion/ldm/models/diffusion/ddpm.py", line 1030, in p_losses
logvar_t = self.logvar[t].to(self.device)
RuntimeError: indices should be either on cpu or on the same device as the indexed tensor (cpu)
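Not a confirmed fix, but the message suggests the timestep tensor t lives on the MPS device while self.logvar stays on the CPU. A hedged sketch of the pattern and one way around it (standalone, not a patch to ddpm.py; variable names mirror the traceback):

```python
import torch

# Mirror the p_losses pattern: a CPU buffer indexed with a tensor that lives
# on the compute device raises "indices should be either on cpu or on the
# same device as the indexed tensor (cpu)".
logvar = torch.zeros(1000)  # like self.logvar, a CPU tensor
device = "mps" if torch.backends.mps.is_available() else "cpu"
t = torch.randint(0, 1000, (4,), device=device)  # timesteps on the compute device

# Workaround: index with a CPU copy of t, then move the result where it is
# needed, i.e. the ddpm.py line would become something like
#   logvar_t = self.logvar[t.cpu()].to(self.device)
logvar_t = logvar[t.cpu()].to(device)
print(logvar_t.shape)  # torch.Size([4])
```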
After latest pull, getting this:
Traceback (most recent call last):
File "/Users/user/stable-diffusion-webui/modules/ui.py", line 176, in f
res = list(func(*args, **kwargs))
File "/Users/user/stable-diffusion-webui/webui.py", line 68, in f
res = func(*args, **kwargs)
File "/Users/user/stable-diffusion-webui/modules/textual_inversion/ui.py", line 29, in train_embedding
embedding, filename = modules.textual_inversion.textual_inversion.train_embedding(*args)
File "/Users/user/stable-diffusion-webui/modules/textual_inversion/textual_inversion.py", line 276, in train_embedding
embedding.save(filename)
File "/Users/user/stable-diffusion-webui/modules/textual_inversion/textual_inversion.py", line 34, in save
torch.save(embedding_data, filename)
File "/Users/user/miniconda/envs/web-ui/lib/python3.10/site-packages/torch/serialization.py", line 379, in save
_save(obj, opened_zipfile, pickle_module, pickle_protocol)
File "/Users/user/miniconda/envs/web-ui/lib/python3.10/site-packages/torch/serialization.py", line 589, in _save
pickler.dump(obj)
File "/Users/user/miniconda/envs/web-ui/lib/python3.10/site-packages/torch/_tensor.py", line 177, in __reduce_ex__
return self._reduce_ex_internal(proto)
File "/Users/user/miniconda/envs/web-ui/lib/python3.10/site-packages/torch/_tensor.py", line 223, in _reduce_ex_internal
return (torch._utils._rebuild_device_tensor_from_numpy, (self.cpu().numpy(),
RuntimeError: Can't call numpy() on Tensor that requires grad. Use tensor.detach().numpy() instead.
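That failure happens because the embedding tensor being saved still requires grad and, on MPS, serialization goes through a NumPy round-trip (see _rebuild_device_tensor_from_numpy in the traceback). A hedged sketch of the kind of change that avoids it; the dictionary layout here is illustrative, not the actual textual_inversion.py code:

```python
import torch

# A live, trainable embedding vector; on MPS, torch.save would try to convert
# it via .numpy(), which is not allowed while it still requires grad.
vec = torch.randn(768, requires_grad=True)

embedding_data = {
    "name": "my-embedding",          # hypothetical fields for illustration
    "step": 1000,
    # Detach (and move to CPU) before saving instead of storing the live parameter.
    "string_to_param": {"*": vec.detach().cpu()},
}
torch.save(embedding_data, "/tmp/my-embedding.pt")
```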
pip3 install torch==1.11.0 torchvision==0.13.1 torchaudio --extra-index-url https://download.pytorch.org/whl/cu113
This version combo resolves the issue, but it's painfully slow.
Latest pull and latest versions from requirements.txt seems to resolve this now. I'll update if that isn't the case.
Although this works, it is very slow: 6 seconds per step on an M1 running macOS. Any suggestions?
I'm having similar problems when running ./run_webui_mac.sh. My processor is the M1 Pro.
WARNING: overwriting environment variables set in the machine
overwriting variable PYTORCH_ENABLE_MPS_FALLBACK
Already up to date.
Traceback (most recent call last):
File "/Users/luca/repositories/stable-diffusion-webui/webui.py", line 8, in <module>
from fastapi.middleware.gzip import GZipMiddleware
ModuleNotFoundError: No module named 'fastapi'
That can be solved by running the command below, as suggested by @brkirch, but then I had a different error.
bash -l -c "conda activate web-ui; pip install jsonmerge einops clean-fid resize_right torchdiffeq lark gradio fastapi omegaconf piexif fonts font-roboto pytorch_lightning transformers kornia realesrgan scunet timm"
It works for me. I'm just using an M1 MacBook Air.
I'm still not having any luck. I've tried deleting miniconda and stable-diffusion-webui multiple times to fresh-install everything. This is the error I get when attempting to train an embedding:
stable-diffusion-webui/modules/textual_inversion/textual_inversion.py", line 185, in write_loss
with open(os.path.join(log_directory, filename), "a+", newline='') as fout:
FileNotFoundError: [Errno 2] No such file or directory: 'textual_inversion/2022-10-16/test/textual_inversion_loss.csv'
OK, I was able to get past this by not disabling "Save an image to log directory every N steps, 0 to disable" or "Save a copy of embedding to log directory every N steps, 0 to disable". With those set to 0, the textual_inversion log directory was never created, which caused the error in my post above.
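For what it's worth, that FileNotFoundError is just the CSV being opened inside a log directory that was never created. A minimal sketch of the kind of guard that sidesteps it regardless of the save-every-N settings (a generic pattern, not the actual write_loss code):

```python
import csv
import os

def write_loss(log_directory, filename, step, values):
    # Create the log directory (and any parents) if it does not exist yet, so
    # opening the CSV in append mode cannot raise FileNotFoundError.
    os.makedirs(log_directory, exist_ok=True)
    path = os.path.join(log_directory, filename)
    write_header = not os.path.exists(path)
    with open(path, "a+", newline="") as fout:
        writer = csv.DictWriter(fout, fieldnames=["step", *values.keys()])
        if write_header:
            writer.writeheader()
        writer.writerow({"step": step, **values})

# Hypothetical usage matching the path from the error message:
write_loss("textual_inversion/2022-10-16/test", "textual_inversion_loss.csv",
           500, {"loss": 0.123})
```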
I take that back. I've now hit this in the middle of training:
miniconda/envs/web-ui/lib/python3.10/site-packages/torch/_tensor.py", line 223, in _reduce_ex_internal
return (torch._utils._rebuild_device_tensor_from_numpy, (self.cpu().numpy(),
RuntimeError: Can't call numpy() on Tensor that requires grad. Use tensor.detach().numpy() instead.
This occurs once the steps complete and a save is attempted at the end, or during one of the periodic saves from "Save a copy of embedding to log directory every N steps, 0 to disable".
Adding this into the conda environment seems to solve it:
pip3 install --upgrade git+https://github.com/pytorch/[email protected]
The only problem is that the speed is terrible.