sd-webui-deforum
[Bug]: 3D mode on macOS Ventura (M1) crashes after the 1st generation
Have you read the latest version of the FAQ?
- [X] I have visited the FAQ page right now and my issue is not present there
Is there an existing issue for this?
- [X] I have searched the existing issues and checked the recent builds/commits of both this extension and the webui
Are you using the latest version of the Deforum extension?
- [X] I have Deforum updated to the latest version and I still have the issue.
What happened?
SD crashes when creating a video using 3D animation mode.
Steps to reproduce the problem
- Go to the Deforum tab
- Select the Keyframes sub-tab
- Select 3D animation mode
- Go to the Prompts tab
- Edit the prompt
- Go to the Init tab
- Check the "Use init" checkbox
- Paste the init image path
- Press the Generate button
What should have happened?
No response
WebUI and Deforum extension Commit IDs
webui commit id: 22bcc7be428c94e9408f589966c2040187245d81
deforum extension commit id: 32c8bc2e072eac95226818f1158139f3661c7472
On which platform are you launching the webui with the extension?
Local PC setup (Mac)
Deforum settings
https://gist.github.com/akurach/626062149a93ba6e2e9be87639b6a49c
Webui core settings
https://gist.github.com/akurach/17164d8be561f5b70458d8ab70aff9f5
Console logs
Python 3.10.11 (main, Apr 7 2023, 07:24:53) [Clang 14.0.0 (clang-1400.0.29.202)]
Commit hash: 22bcc7be428c94e9408f589966c2040187245d81
Installing requirements for Web UI
Launching Web UI with arguments: --skip-torch-cuda-test --no-half --skip-torch-cuda-test --upcast-sampling --opt-sub-quad-attention --use-cpu interrogate
Warning: caught exception 'Torch not compiled with CUDA enabled', memory monitor disabled
No module 'xformers'. Proceeding without it.
/Users/avkurach/git/stable-diffusion-webui/venv-torch-nightly/lib/python3.10/site-packages/torchvision/transforms/functional_tensor.py:5: UserWarning: The torchvision.transforms.functional_tensor module is deprecated in 0.15 and will be **removed in 0.17**. Please don't rely on it. You probably just need to use APIs in torchvision.transforms.functional or in torchvision.transforms.v2.functional.
warnings.warn(
Loading weights [a60cfaa90d] from /Users/avkurach/git/stable-diffusion-webui/models/Stable-diffusion/dreamshaper_5BakedVae.safetensors
Creating model from config: /Users/avkurach/git/stable-diffusion-webui/configs/v1-inference.yaml
LatentDiffusion: Running in eps-prediction mode
DiffusionWrapper has 859.52 M params.
Applying sub-quadratic cross attention optimization.
Textual inversion embeddings loaded(0):
Model loaded in 11.0s (load weights from disk: 0.5s, create model: 0.8s, apply weights to model: 7.9s, move model to device: 1.7s).
Running on local URL: http://127.0.0.1:7860
To create a public link, set `share=True` in `launch()`.
Startup time: 14.9s (import torch: 1.1s, import gradio: 0.7s, import ldm: 0.3s, other imports: 0.9s, load scripts: 0.5s, load SD checkpoint: 11.0s, create ui: 0.2s).
Loading weights [6ce0161689] from /Users/avkurach/git/stable-diffusion-webui/models/Stable-diffusion/v1-5-pruned-emaonly.safetensors
Applying sub-quadratic cross attention optimization.
Weights loaded in 24.2s (load weights from disk: 0.3s, apply weights to model: 9.4s, move model to device: 14.5s).
Deforum extension for auto1111 webui, v2.3b
Git commit: 32c8bc2e (Sat Apr 29 17:34:10 2023)
Saving animation frames to:
/Users/avkurach/git/stable-diffusion-webui/outputs/img2img-images/Deforum_20230430104215
Loading MiDaS model...
Loading AdaBins model...
Using cache found in /Users/avkurach/.cache/torch/hub/rwightman_gen-efficientnet-pytorch_master
Animation frame: 0/120
Seed: 2156387304
Prompt: great man, with sword, solar energy power, ultra hd, sharp, realistic, adventure
╭─────┬───┬───────┬────┬────┬────┬────┬────┬────╮
│Steps│CFG│Denoise│Tr X│Tr Y│Tr Z│Ro X│Ro Y│Ro Z│
├─────┼───┼───────┼────┼────┼────┼────┼────┼────┤
│ 25 │7.0│ 0.2 │ 0 │ 0 │1.75│ 0 │ 0 │ 0 │
╰─────┴───┴───────┴────┴────┴────┴────┴────┴────╯
100%|█████████████████████████████████████████████| 5/5 [00:20<00:00, 4.07s/it]
Animation frame: 2/120 | 5/540 [00:10<20:32, 2.30s/it]
Creating in-between cadence frame: 0; tween:0.50;
*START OF TRACEBACK*
Traceback (most recent call last):
File "/Users/avkurach/git/stable-diffusion-webui/extensions/deforum-for-automatic1111-webui/scripts/deforum.py", line 109, in run_deforum
render_animation(args, anim_args, video_args, parseq_args, loop_args, controlnet_args, root.animation_prompts, root)
File "/Users/avkurach/git/stable-diffusion-webui/extensions/deforum-for-automatic1111-webui/scripts/deforum_helpers/render.py", line 303, in render_animation
depth = depth_model.predict(turbo_next_image, anim_args.midas_weight, root.half_precision)
File "/Users/avkurach/git/stable-diffusion-webui/extensions/deforum-for-automatic1111-webui/scripts/deforum_helpers/depth.py", line 103, in predict
midas_depth = self.midas_model.forward(sample)
File "/Users/avkurach/git/stable-diffusion-webui/extensions/deforum-for-automatic1111-webui/scripts/deforum_helpers/src/midas/dpt_depth.py", line 166, in forward
return super().forward(x).squeeze(dim=1)
File "/Users/avkurach/git/stable-diffusion-webui/extensions/deforum-for-automatic1111-webui/scripts/deforum_helpers/src/midas/dpt_depth.py", line 114, in forward
layers = self.forward_transformer(self.pretrained, x)
File "/Users/avkurach/git/stable-diffusion-webui/extensions/deforum-for-automatic1111-webui/scripts/deforum_helpers/src/midas/backbones/vit.py", line 13, in forward_vit
return forward_adapted_unflatten(pretrained, x, "forward_flex")
File "/Users/avkurach/git/stable-diffusion-webui/extensions/deforum-for-automatic1111-webui/scripts/deforum_helpers/src/midas/backbones/utils.py", line 86, in forward_adapted_unflatten
exec(f"glob = pretrained.model.{function_name}(x)")
File "<string>", line 1, in <module>
File "/Users/avkurach/git/stable-diffusion-webui/extensions/deforum-for-automatic1111-webui/scripts/deforum_helpers/src/midas/backbones/vit.py", line 47, in forward_flex
x = self.patch_embed.proj(x).flatten(2).transpose(1, 2)
File "/Users/avkurach/git/stable-diffusion-webui/venv-torch-nightly/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl
return forward_call(*args, **kwargs)
File "/Users/avkurach/git/stable-diffusion-webui/extensions-builtin/Lora/lora.py", line 319, in lora_Conv2d_forward
return torch.nn.Conv2d_forward_before_lora(self, input)
File "/Users/avkurach/git/stable-diffusion-webui/venv-torch-nightly/lib/python3.10/site-packages/torch/nn/modules/conv.py", line 463, in forward
return self._conv_forward(input, self.weight, self.bias)
File "/Users/avkurach/git/stable-diffusion-webui/venv-torch-nightly/lib/python3.10/site-packages/torch/nn/modules/conv.py", line 459, in _conv_forward
return F.conv2d(input, weight, bias, self.stride,
RuntimeError: Input type (float) and bias type (c10::Half) should be the same
*END OF TRACEBACK*
User friendly error message:
Error: Input type (float) and bias type (c10::Half) should be the same. Check your schedules/ init values please. Also make sure you don't have a backwards slash in any of your PATHs - use / instead of \.
Deforum progress: 1%|▏ | 5/540 [00:16<29:27, 3.30s/it]
^CInterrupted with signal 2 in <frame at 0x1055dcd40, file '/Users/avkurach/git/stable-diffusion-webui/webui.py', line 209, code wait_on_server>
/opt/homebrew/Cellar/[email protected]/3.10.11/Frameworks/Python.framework/Versions/3.10/lib/python3.10/multiprocessing/resource_tracker.py:224: UserWarning: resource_tracker: There appear to be 1 leaked semaphore objects to clean up at shutdown
warnings.warn('resource_tracker: There appear to be %d '
Additional information
Using a nightly torch build. webui-user.sh: https://gist.github.com/akurach/138e7bfabf4ef9289bd2f5b3424420cb
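For context, the traceback appears to boil down to a dtype mismatch inside the MiDaS patch embedding: the conv parameters end up in half precision while the depth sample is passed in as float32. A minimal, self-contained sketch (not Deforum code, just an illustration of the same class of error):

```python
import torch
import torch.nn as nn

# Minimal illustration (not Deforum code) of the failure mode in the traceback:
# the patch-embedding Conv2d holds half-precision parameters while the input
# arrives in float32, and F.conv2d refuses mismatched dtypes instead of
# promoting them.
conv = nn.Conv2d(3, 8, kernel_size=3).half()   # parameters cast to float16
sample = torch.randn(1, 3, 64, 64)             # input left in float32

try:
    conv(sample)
except RuntimeError as err:
    # PyTorch reports the mismatch; the exact wording varies by backend,
    # e.g. "Input type (...) and bias type (...) should be the same"
    print(err)
```

In the log above the first frame renders fine and the failure only appears once depth_model.predict() runs for the cadence frame, which seems consistent with the mismatch living in the depth model rather than in the SD checkpoint itself.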
This issue has been closed due to incorrect formatting. Please address the following mistakes and reopen the issue (click on the 'Reopen' button below):
- Include THE FULL LOG FROM THE START OF THE WEBUI in the issue description.
Sadly, there is no 'Reopen' button.
You don't see it even as the author? IIRC people have reopened their issues just fine.
I saw your URL. I need to edit the script to account for these; at the moment it only sees the content of the issue itself, sorry about that.
Thanks for addressing your formatting mistakes. The issue has been reopened now.
I got the same issue today when trying to generate a 3D video after launching the webui with the --no-half argument.
RuntimeError: Input type (MPSFloatType) and weight type (MPSHalfType) should be the same
*END OF TRACEBACK*
User friendly error message:
Error: Input type (MPSFloatType) and weight type (MPSHalfType) should be the same. Check your schedules/ init values please. Also make sure you don't have a backwards slash in any of your PATHs - use / instead of \.
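In case it helps triage, here is a hedged sketch of one possible local workaround, assuming the mismatch happens at the midas_model.forward(sample) call in deforum_helpers/depth.py: cast the sample to whatever dtype the model's parameters actually use before the forward pass. The function name and signature below are made up for illustration and are not part of the extension.

```python
import torch

# Hedged sketch of a possible local workaround, NOT an official fix: before the
# MiDaS forward pass, cast the sample to the dtype of the model's parameters so
# the patch-embedding conv sees consistent dtypes on MPS as well.
# The function name and arguments are hypothetical.
def predict_with_matching_dtype(midas_model: torch.nn.Module, sample: torch.Tensor) -> torch.Tensor:
    model_dtype = next(midas_model.parameters()).dtype  # float16 under half precision, float32 with --no-half
    if sample.dtype != model_dtype:
        sample = sample.to(model_dtype)
    return midas_model.forward(sample)
```

Whether this is the right place to fix it, as opposed to keeping the MiDaS model in float32 on MPS or adjusting how --no-half / --upcast-sampling are handled, is of course up to the maintainers.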