stable-diffusion-webui-forge
TypeError: UnetPatcher.load_frozen_patcher() missing 1 required positional argument: 'strength'
[LayerDiffuse] LayerMethod.FG_ONLY_ATTN
*** Error running process_before_every_sampling: C:\Ai\Forge_2.0\webui\extensions\sd-forge-layerdiffuse-main\scripts\forge_layerdiffusion.py
Traceback (most recent call last):
File "C:\Ai\Forge_2.0\webui\modules\scripts.py", line 892, in process_before_every_sampling
script.process_before_every_sampling(p, *script_args, **kwargs)
File "C:\Ai\Forge_2.0\webui\extensions\sd-forge-layerdiffuse-main\scripts\forge_layerdiffusion.py", line 247, in process_before_every_sampling
unet.load_frozen_patcher(layer_lora_model, weight)
TypeError: UnetPatcher.load_frozen_patcher() missing 1 required positional argument: 'strength'
venv "C:\Ai\Forge_2.0\webui\venv\Scripts\Python.exe"
Python 3.10.6 (tags/v3.10.6:9c7b4bd, Aug 1 2022, 21:53:49) [MSC v.1932 64 bit (AMD64)]
Version: f2.0.1v1.10.1-previous-531-g210af4f8
Commit hash: 210af4f80406f78a67e1c35a64a6febdf1200a82
CUDA 12.1
Launching Web UI with arguments: --cuda-stream --cuda-malloc
Using cudaMallocAsync backend.
Total VRAM 12282 MB, total RAM 65299 MB
pytorch version: 2.3.1+cu121
Set vram state to: NORMAL_VRAM
Device: cuda:0 NVIDIA GeForce RTX 4070 Ti : cudaMallocAsync
VAE dtype preferences: [torch.bfloat16, torch.float32] -> torch.bfloat16
CUDA Using Stream: True
Using pytorch cross attention
Using pytorch attention for VAE
ControlNet preprocessor location: C:\Ai\Forge_2.0\webui\models\ControlNetPreprocessor
Tag Autocomplete: Could not locate model-keyword extension, Lora trigger word completion will be limited to those added through the extra networks menu.
[-] ADetailer initialized. version: 24.9.0, num models: 10
sd-webui-prompt-all-in-one background API service started successfully.
01:55:44 - ReActor - STATUS - Running v0.7.1-a2 on Device: CUDA
2024-09-15 01:55:45,848 - ControlNet - INFO - ControlNet UI callback registered.
Model selected: {'checkpoint_info': {'filename': 'C:\\Ai\\Forge_2.0\\webui\\models\\Stable-diffusion\\Модели\\SDXL\\juggernautXL_version6Rundiffusion.safetensors', 'hash': '7d723a7b'}, 'additional_modules': [], 'unet_storage_dtype': None}
Using online LoRAs in FP16: False
Running on local URL: http://127.0.0.1:7860
Thanks for being a Gradio user! If you have questions or feedback, please join our Discord server and chat with us: https://discord.gg/feTf9x3ZSB
To create a public link, set `share=True` in `launch()`.
Startup time: 22.8s (prepare environment: 8.3s, import torch: 5.4s, initialize shared: 0.1s, other imports: 0.7s, load scripts: 3.8s, create ui: 2.1s, gradio launch: 1.4s, app_started_callback: 0.9s).
Environment vars changed: {'stream': False, 'inference_memory': 1024.0, 'pin_shared_memory': False}
[GPU Setting] You will use 91.66% GPU memory (11257.00 MB) to load weights, and use 8.34% GPU memory (1024.00 MB) to do matrix computation.
Update the layer diffusion extension. The `UnetPatcher.load_frozen_patcher()` signature changed in recent Forge builds, so the older extension's two-argument call no longer matches it; the current version of sd-forge-layerdiffuse passes the arguments the new signature expects.
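For illustration only, here is a minimal sketch of how this kind of error arises. The class names and parameter lists below are hypothetical, not the actual Forge source: when a method gains an extra required parameter, a caller written against the old signature raises exactly this `TypeError`.

```python
# Hypothetical sketch: an API adds a required positional parameter,
# and an old two-argument call site starts failing.

class UnetPatcherOld:
    # hypothetical older signature the extension was written against
    def load_frozen_patcher(self, state_dict, strength):
        return ("patched", strength)

class UnetPatcherNew:
    # hypothetical newer signature with an extra required parameter
    def load_frozen_patcher(self, state_dict, filename, strength):
        return ("patched", filename, strength)

old = UnetPatcherOld()
print(old.load_frozen_patcher({}, 1.0))  # two-argument call still works here

new = UnetPatcherNew()
try:
    new.load_frozen_patcher({}, 1.0)  # the extension's old two-argument call
except TypeError as exc:
    # message ends with: missing 1 required positional argument: 'strength'
    print(exc)
```

Updating the extension replaces the old call site with one matching the new signature, which is why no Forge-side change is needed.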