sd-webui-controlnet
[Bug]: T2I Adapters throwing error
Is there an existing issue for this?
- [X] I have searched the existing issues and checked the recent builds/commits of both this extension and the webui
What happened?
Attempted to use the depth T2I adapter and received the size-mismatch error reproduced in full under Console logs below.
Steps to reproduce the problem
- Download t2iadapter_depth_sd14v1.pth
- Open webui
- Attempt to run img2img using only the t2i adapter. No preprocessing
What should have happened?
The image should have been generated using the T2I adapter, and no error should have been thrown.
Commit where the problem happens
webui: v
controlnet: 2ce17c0
What browsers do you use to access the UI ?
Mozilla Firefox
Command Line Arguments
--xformers --api --medvram --disable-safe-unpickle --opt-sub-quad-attention
Console logs
Loaded state_dict from [G:\Stable Diffusion\models\ControlNet\t2iadapter_depth_sd14v1.pth]
Error running process: D:\sdwebui2\stable-diffusion-webui\extensions\sd-webui-controlnet\scripts\controlnet.py
Traceback (most recent call last):
File "D:\sdwebui2\stable-diffusion-webui\modules\scripts.py", line 386, in process
script.process(p, *script_args)
File "D:\sdwebui2\stable-diffusion-webui\extensions\sd-webui-controlnet\scripts\controlnet.py", line 608, in process
else self.build_control_model(p, unet, model, lowvram)
File "D:\sdwebui2\stable-diffusion-webui\extensions\sd-webui-controlnet\scripts\controlnet.py", line 467, in build_control_model
network = network_module(
File "D:\sdwebui2\stable-diffusion-webui/extensions/sd-webui-controlnet\scripts\adapter.py", line 65, in __init__
self.control_model.load_state_dict(state_dict)
File "D:\sdwebui2\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1671, in load_state_dict
raise RuntimeError('Error(s) in loading state_dict for {}:\n\t{}'.format(
RuntimeError: Error(s) in loading state_dict for Adapter:
size mismatch for conv_in.weight: copying a param with shape torch.Size([320, 192, 3, 3]) from checkpoint, the shape in current model is torch.Size([320, 64, 3, 3]).
Additional information
No response
Ensure you didn't forget to switch to "image_adapter_v14.yaml" in settings.
Also, you can do this too: https://github.com/Mikubill/sd-webui-controlnet/issues/331#issuecomment-1442803081 (this is necessary if you want to use the sketch t2iadapter with a different t2iadapter)
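For context on why the yaml matters: as I understand the upstream T2I-Adapter code, the adapter downscales its conditioning input with an 8× pixel-unshuffle before conv_in, so conv_in's input channel count is condition_channels × 8 × 8. A 3-channel (RGB-style) condition such as a depth map gives 192 channels; a 1-channel sketch condition gives 64. The 192-vs-64 mismatch in the traceback above is exactly a 3-channel checkpoint loaded into a model built from the 1-channel sketch config. A minimal sketch, assuming only that torch is installed:

```python
import torch

# PixelUnshuffle(8) trades spatial resolution for channels: C -> C * 8 * 8.
unshuffle = torch.nn.PixelUnshuffle(8)

rgb_condition = torch.zeros(1, 3, 512, 512)     # depth/openpose-style input
sketch_condition = torch.zeros(1, 1, 512, 512)  # sketch-style input

print(unshuffle(rgb_condition).shape)     # torch.Size([1, 192, 64, 64])
print(unshuffle(sketch_condition).shape)  # torch.Size([1, 64, 64, 64])
```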
I haven't been able to get t2i adapters to work in any capacity lol. I've tried both the image and sketch adapter yaml files with all of the different t2i adapter models, and I either get:
RuntimeError: Error(s) in loading state_dict for Adapter: size mismatch for conv_in.weight: copying a param with shape torch.Size([320, 64, 3, 3]) from checkpoint, the shape in current model is torch.Size([320, 192, 3, 3]).
or
RuntimeError: The size of tensor a (8) must match the size of tensor b (7) at non-singleton dimension 2
I'm also using the extracted t2i adapter models from https://huggingface.co/webui/ControlNet-modules-safetensors/tree/main rather than the default ones from the t2i adapter repo, if that makes any difference.
EDIT: Maybe I have something else that conflicts? My friend got it working with the same model file and yaml, while mine continues to throw errors.
The size mismatch while loading the state_dict happens when the t2i adapter model itself is loaded, which makes me think that one is just the wrong yaml. The tensor size mismatch, on the other hand, happens during the actual image generation process, so I'm not sure what's causing that.
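A guess about that second error (an assumption, not confirmed anywhere in this thread): it looks like the adapter's feature maps not lining up with the UNet's latent, which can happen when the generation width or height isn't a multiple of 64, since each halving floor-divides to a size the UNet doesn't expect. A rough arithmetic sketch with made-up resolutions:

```python
# Hypothetical illustration: after the initial /8 downscale to latent size,
# the adapter halves its feature maps three more times, so dimensions that
# aren't multiples of 64 floor-divide to mismatched sizes (7 instead of 8).
for pixels in (512, 456):
    latent = pixels // 8  # the latent is 1/8 of the image size
    print(pixels, [latent // (2 ** i) for i in range(4)])
# 512 [64, 32, 16, 8]
# 456 [57, 28, 14, 7]
```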
If I change the weight below 1, the error becomes:
TypeError: can't multiply sequence by non-int of type 'float'
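That TypeError is the generic Python error for multiplying a sequence (such as a plain list) by a float, so presumably a list of feature maps is being scaled by the weight directly instead of per tensor. A minimal reproduction, independent of the extension's actual code:

```python
import torch

features = [torch.ones(2, 2)]  # adapter outputs kept in a plain Python list
weight = 0.75

try:
    features * weight  # list * float -> the TypeError above
except TypeError as err:
    print(err)  # can't multiply sequence by non-int of type 'float'

scaled = [f * weight for f in features]  # scaling each tensor works fine
```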
EDIT2: I'm on commit 84a2b22 now, and stuff seems to be working with the same settings.
Getting the same error despite having all the yaml files renamed after the t2i adapter (git de8fdeff).
This worked for me like this:
- Copy all the ControlNet .yaml files directly into the "stable-diffusion-webui\models" folder.
- In the UI go to settings > ControlNet
- Like @ClashSAN mentioned, in the second form (Config file for Adapter models) set "models\image_adapter_v14.yaml" instead of whatever was there
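If you'd rather script the copy step, a minimal sketch (the install root is hypothetical; adjust it to your setup):

```python
# Copy the adapter .yaml configs shipped with the extension into the
# stable-diffusion-webui\models folder, assuming a default install layout.
import glob
import shutil

webui = r"D:\stable-diffusion-webui"  # hypothetical install root
src = webui + r"\extensions\sd-webui-controlnet\models"
dst = webui + r"\models"

for yaml_path in glob.glob(src + r"\*.yaml"):
    shutil.copy(yaml_path, dst)
    print("copied", yaml_path)
```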
I've tried the instructions but still can't get past the size mismatch error. I tried with both txt2img and img2img, with low vram enabled and disabled, and even in the launch options. I'm already on the latest version of both the webui and the extension.
Loading model: t2iadapter_color-fp16 [743b5c62]
Loaded state_dict from [D:\stable-diffusion\empire-install2\stable-diffusion-webui\extensions\sd-webui-controlnet\models\t2iadapter_color-fp16.safetensors]
Error running process: D:\stable-diffusion\empire-install2\stable-diffusion-webui\extensions\sd-webui-controlnet\scripts\controlnet.py
Traceback (most recent call last):
File "D:\stable-diffusion\empire-install2\stable-diffusion-webui\modules\scripts.py", line 417, in process
script.process(p, *script_args)
File "D:\stable-diffusion\empire-install2\stable-diffusion-webui\extensions\sd-webui-controlnet\scripts\controlnet.py", line 679, in process
model_net = self.load_control_model(p, unet, unit.model, unit.low_vram)
File "D:\stable-diffusion\empire-install2\stable-diffusion-webui\extensions\sd-webui-controlnet\scripts\controlnet.py", line 468, in load_control_model
model_net = self.build_control_model(p, unet, model, lowvram)
File "D:\stable-diffusion\empire-install2\stable-diffusion-webui\extensions\sd-webui-controlnet\scripts\controlnet.py", line 510, in build_control_model
network = network_module(
File "D:\stable-diffusion\empire-install2\stable-diffusion-webui\extensions\sd-webui-controlnet\scripts\adapter.py", line 82, in __init__
self.control_model.load_state_dict(state_dict)
File "D:\stable-diffusion\empire-install2\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1671, in load_state_dict
raise RuntimeError('Error(s) in loading state_dict for {}:\n\t{}'.format(
RuntimeError: Error(s) in loading state_dict for Adapter:
Missing key(s) in state_dict: "body.0.block1.weight", "body.0.block1.bias", "body.0.block2.weight", "body.0.block2.bias", "body.1.block1.weight", "body.1.block1.bias", "body.1.block2.weight", "body.1.block2.bias", "body.2.block1.weight", "body.2.block1.bias", "body.2.block2.weight", "body.2.block2.bias", "body.3.block1.weight", "body.3.block1.bias", "body.3.block2.weight", "body.3.block2.bias", "body.4.in_conv.weight", "body.4.in_conv.bias", "body.4.block1.weight", "body.4.block1.bias", "body.4.block2.weight", "body.4.block2.bias", "body.5.block1.weight", "body.5.block1.bias", "body.5.block2.weight", "body.5.block2.bias", "body.6.block1.weight", "body.6.block1.bias", "body.6.block2.weight", "body.6.block2.bias", "body.7.block1.weight", "body.7.block1.bias", "body.7.block2.weight", "body.7.block2.bias", "conv_in.weight", "conv_in.bias".
Unexpected key(s) in state_dict: "body.0.body.0.block1.bias", "body.0.body.0.block1.weight", "body.0.body.0.block2.bias", "body.0.body.0.block2.weight", "body.0.body.1.block1.bias", "body.0.body.1.block1.weight", "body.0.body.1.block2.bias", "body.0.body.1.block2.weight", "body.0.body.2.block1.bias", "body.0.body.2.block1.weight", "body.0.body.2.block2.bias", "body.0.body.2.block2.weight", "body.0.body.3.block1.bias", "body.0.body.3.block1.weight", "body.0.body.3.block2.bias", "body.0.body.3.block2.weight", "body.0.in_conv.bias", "body.0.in_conv.weight", "body.0.out_conv.bias", "body.0.out_conv.weight", "body.1.body.0.block1.bias", "body.1.body.0.block1.weight", "body.1.body.0.block2.bias", "body.1.body.0.block2.weight", "body.1.body.1.block1.bias", "body.1.body.1.block1.weight", "body.1.body.1.block2.bias", "body.1.body.1.block2.weight", "body.1.body.2.block1.bias", "body.1.body.2.block1.weight", "body.1.body.2.block2.bias", "body.1.body.2.block2.weight", "body.1.body.3.block1.bias", "body.1.body.3.block1.weight", "body.1.body.3.block2.bias", "body.1.body.3.block2.weight", "body.1.in_conv.bias", "body.1.in_conv.weight", "body.1.out_conv.bias", "body.1.out_conv.weight", "body.2.body.0.block1.bias", "body.2.body.0.block1.weight", "body.2.body.0.block2.bias", "body.2.body.0.block2.weight", "body.2.body.1.block1.bias", "body.2.body.1.block1.weight", "body.2.body.1.block2.bias", "body.2.body.1.block2.weight", "body.2.body.2.block1.bias", "body.2.body.2.block1.weight", "body.2.body.2.block2.bias", "body.2.body.2.block2.weight", "body.2.body.3.block1.bias", "body.2.body.3.block1.weight", "body.2.body.3.block2.bias", "body.2.body.3.block2.weight", "body.2.out_conv.bias", "body.2.out_conv.weight", "body.3.body.0.block1.bias", "body.3.body.0.block1.weight", "body.3.body.0.block2.bias", "body.3.body.0.block2.weight", "body.3.body.1.block1.bias", "body.3.body.1.block1.weight", "body.3.body.1.block2.bias", "body.3.body.1.block2.weight", "body.3.body.2.block1.bias", "body.3.body.2.block1.weight", "body.3.body.2.block2.bias", "body.3.body.2.block2.weight", "body.3.body.3.block1.bias", "body.3.body.3.block1.weight", "body.3.body.3.block2.bias", "body.3.body.3.block2.weight", "body.3.in_conv.bias", "body.3.in_conv.weight", "body.3.out_conv.bias", "body.3.out_conv.weight".
size mismatch for body.2.in_conv.weight: copying a param with shape torch.Size([320, 640, 1, 1]) from checkpoint, the shape in current model is torch.Size([640, 320, 1, 1]).
size mismatch for body.2.in_conv.bias: copying a param with shape torch.Size([320]) from checkpoint, the shape in current model is torch.Size([640])
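The missing/unexpected keys here (nested body.N.body.M.* with per-stage in_conv/out_conv, and no top-level conv_in) suggest the color checkpoint uses a different adapter architecture than the one the selected yaml builds; in the upstream T2I-Adapter repo the color model appears to be a lighter variant. When it's unclear which architecture a checkpoint is, dumping its keys can help. A diagnostic sketch with a hypothetical path:

```python
# List a checkpoint's keys and shapes to compare its architecture
# against whatever the selected yaml config constructs.
from safetensors.torch import load_file

state_dict = load_file(r"models\t2iadapter_color-fp16.safetensors")
for key, tensor in sorted(state_dict.items()):
    print(key, tuple(tensor.shape))
```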
Getting totally different images, not what I'm seeing in the tutorials at all. What gives?
+1
Bug:
File "C:\Users\user\stable-diffusion-webui\modules\scripts.py", line 417, in process
script.process(p, *script_args)
File "C:\Users\user\stable-diffusion-webui\extensions\sd-webui-controlnet\scripts\controlnet.py", line 682, in process
model_net = self.load_control_model(p, unet, unit.model, unit.low_vram)
File "C:\Users\user\stable-diffusion-webui\extensions\sd-webui-controlnet\scripts\controlnet.py", line 471, in load_control_model
model_net = self.build_control_model(p, unet, model, lowvram)
File "C:\Users\user\stable-diffusion-webui\extensions\sd-webui-controlnet\scripts\controlnet.py", line 513, in build_control_model
network = network_module(
File "C:\Users\user\stable-diffusion-webui\extensions\sd-webui-controlnet\scripts\adapter.py", line 82, in __init__
self.control_model.load_state_dict(state_dict)
File "C:\Users\user\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1671, in load_state_dict
raise RuntimeError('Error(s) in loading state_dict for {}:\n\t{}'.format(
RuntimeError: Error(s) in loading state_dict for Adapter:
size mismatch for conv_in.weight: copying a param with shape torch.Size([320, 192, 3, 3]) from checkpoint, the shape in current model is torch.Size([320, 64, 3, 3]).
What I found:
- t2iadapter_openpose_sd14v1 and t2iadapter_depth_sd14v1 DO WORK with image_adapter_v14 set in Settings/ControlNet: [Config file for Adapter models], but image_adapter.yaml doesn't work for the canny model. For canny I need sketch_adapter_v14 (summarized in the lookup below).
So, if I try to use MultiControlNet with canny and openpose models at the same time, I get the error above.
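Condensing those observations into a lookup (only the pairings reported in this thread, assumed rather than exhaustive; the channel counts follow from the pixel-unshuffle math sketched earlier):

```python
# Config each sd14 adapter needed, per the reports above:
# 3-channel conditions -> image config (conv_in takes 3 * 64 = 192 channels),
# 1-channel conditions -> sketch config (conv_in takes 1 * 64 = 64 channels).
ADAPTER_CONFIGS = {
    "t2iadapter_depth_sd14v1": "image_adapter_v14.yaml",     # 192 channels
    "t2iadapter_openpose_sd14v1": "image_adapter_v14.yaml",  # 192 channels
    "t2iadapter_canny_sd14v1": "sketch_adapter_v14.yaml",    # 64 channels
}
```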
I believe it works fine with the standard controlnet models (which are 5.5 GB each). I can't test that right now due to CUDA's OutOfMemory error even with Low VRAM checked and an 88x88 resolution, but I remember making it work yesterday.
This issue was closed, but could anyone explain what the fix was? I'm still getting it!