
[Bug]: Can't load SDXL inpainting VAE

Open · cvar66 opened this issue 1 year ago · 1 comment

Is there an existing issue for this?

  • [X] I have searched the existing issues and checked the recent builds/commits

What happened?

I'm trying to use SDXL inpainting, but whenever I try to load the SDXL inpainting VAE I get this error:

Loading VAE weights specified in settings: E:\SD\auto111\stable-diffusion-webui\models\VAE\diffusion_pytorch_model.safetensors
changing setting sd_vae to diffusion_pytorch_model.safetensors: RuntimeError
Traceback (most recent call last):
  File "E:\SD\auto111\stable-diffusion-webui\modules\options.py", line 140, in set
    option.onchange()
  File "E:\SD\auto111\stable-diffusion-webui\modules\call_queue.py", line 13, in f
    res = func(*args, **kwargs)
  File "E:\SD\auto111\stable-diffusion-webui\modules\initialize_util.py", line 171, in <lambda>
    shared.opts.onchange("sd_vae", wrap_queued_call(lambda: sd_vae.reload_vae_weights()), call=False)
  File "E:\SD\auto111\stable-diffusion-webui\modules\sd_vae.py", line 273, in reload_vae_weights
    load_vae(sd_model, vae_file, vae_source)
  File "E:\SD\auto111\stable-diffusion-webui\modules\sd_vae.py", line 212, in load_vae
    _load_vae_dict(model, vae_dict_1)
  File "E:\SD\auto111\stable-diffusion-webui\modules\sd_vae.py", line 239, in _load_vae_dict
    model.first_stage_model.load_state_dict(vae_dict_1)
  File "E:\SD\auto111\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 2041, in load_state_dict
    raise RuntimeError('Error(s) in loading state_dict for {}:\n\t{}'.format(
RuntimeError: Error(s) in loading state_dict for AutoencoderKLInferenceWrapper:
Missing key(s) in state_dict: "encoder.down.0.block.0.norm1.weight", "encoder.down.0.block.0.norm1.bias", "encoder.down.0.block.0.conv1.weight", "encoder.down.0.block.0.conv1.bias", "encoder.down.0.block.0.norm2.weight", "encoder.down.0.block.0.norm2.bias", "encoder.down.0.block.0.conv2.weight", "encoder.down.0.block.0.conv2.bias", "encoder.down.0.block.1.norm1.weight", "encoder.down.0.block.1.norm1.bias", "encoder.down.0.block.1.conv1.weight", "encoder.down.0.block.1.conv1.bias", "encoder.down.0.block.1.norm2.weight", "encoder.down.0.block.1.norm2.bias", 
"encoder.down.0.block.1.conv2.weight", "encoder.down.0.block.1.conv2.bias", "encoder.down.0.downsample.conv.weight", "encoder.down.0.downsample.conv.bias", "encoder.down.1.block.0.norm1.weight", "encoder.down.1.block.0.norm1.bias", "encoder.down.1.block.0.conv1.weight", "encoder.down.1.block.0.conv1.bias", "encoder.down.1.block.0.norm2.weight", "encoder.down.1.block.0.norm2.bias", "encoder.down.1.block.0.conv2.weight", "encoder.down.1.block.0.conv2.bias", "encoder.down.1.block.0.nin_shortcut.weight", "encoder.down.1.block.0.nin_shortcut.bias", "encoder.down.1.block.1.norm1.weight", "encoder.down.1.block.1.norm1.bias", "encoder.down.1.block.1.conv1.weight", "encoder.down.1.block.1.conv1.bias", "encoder.down.1.block.1.norm2.weight", "encoder.down.1.block.1.norm2.bias", "encoder.down.1.block.1.conv2.weight", "encoder.down.1.block.1.conv2.bias", "encoder.down.1.downsample.conv.weight", "encoder.down.1.downsample.conv.bias", "encoder.down.2.block.0.norm1.weight", "encoder.down.2.block.0.norm1.bias", "encoder.down.2.block.0.conv1.weight", "encoder.down.2.block.0.conv1.bias", "encoder.down.2.block.0.norm2.weight", "encoder.down.2.block.0.norm2.bias", "encoder.down.2.block.0.conv2.weight", "encoder.down.2.block.0.conv2.bias", "encoder.down.2.block.0.nin_shortcut.weight", "encoder.down.2.block.0.nin_shortcut.bias", "encoder.down.2.block.1.norm1.weight", "encoder.down.2.block.1.norm1.bias", "encoder.down.2.block.1.conv1.weight", "encoder.down.2.block.1.conv1.bias", "encoder.down.2.block.1.norm2.weight", "encoder.down.2.block.1.norm2.bias", "encoder.down.2.block.1.conv2.weight", "encoder.down.2.block.1.conv2.bias", "encoder.down.2.downsample.conv.weight", "encoder.down.2.downsample.conv.bias", "encoder.down.3.block.0.norm1.weight", "encoder.down.3.block.0.norm1.bias", "encoder.down.3.block.0.conv1.weight", "encoder.down.3.block.0.conv1.bias", "encoder.down.3.block.0.norm2.weight", "encoder.down.3.block.0.norm2.bias", "encoder.down.3.block.0.conv2.weight", 
"encoder.down.3.block.0.conv2.bias", "encoder.down.3.block.1.norm1.weight", "encoder.down.3.block.1.norm1.bias", "encoder.down.3.block.1.conv1.weight", "encoder.down.3.block.1.conv1.bias", "encoder.down.3.block.1.norm2.weight", "encoder.down.3.block.1.norm2.bias", "encoder.down.3.block.1.conv2.weight", "encoder.down.3.block.1.conv2.bias", "encoder.mid.block_1.norm1.weight", "encoder.mid.block_1.norm1.bias", "encoder.mid.block_1.conv1.weight", "encoder.mid.block_1.conv1.bias", "encoder.mid.block_1.norm2.weight", "encoder.mid.block_1.norm2.bias", "encoder.mid.block_1.conv2.weight", "encoder.mid.block_1.conv2.bias", "encoder.mid.attn_1.norm.weight", "encoder.mid.attn_1.norm.bias", "encoder.mid.attn_1.q.weight", "encoder.mid.attn_1.q.bias", "encoder.mid.attn_1.k.weight", "encoder.mid.attn_1.k.bias", "encoder.mid.attn_1.v.weight", "encoder.mid.attn_1.v.bias", "encoder.mid.attn_1.proj_out.weight", "encoder.mid.attn_1.proj_out.bias", "encoder.mid.block_2.norm1.weight", "encoder.mid.block_2.norm1.bias", "encoder.mid.block_2.conv1.weight", "encoder.mid.block_2.conv1.bias", "encoder.mid.block_2.norm2.weight", "encoder.mid.block_2.norm2.bias", "encoder.mid.block_2.conv2.weight", "encoder.mid.block_2.conv2.bias", "encoder.norm_out.weight", "encoder.norm_out.bias", "decoder.mid.block_1.norm1.weight", "decoder.mid.block_1.norm1.bias", "decoder.mid.block_1.conv1.weight", "decoder.mid.block_1.conv1.bias", "decoder.mid.block_1.norm2.weight", "decoder.mid.block_1.norm2.bias", "decoder.mid.block_1.conv2.weight", "decoder.mid.block_1.conv2.bias", "decoder.mid.attn_1.norm.weight", "decoder.mid.attn_1.norm.bias", "decoder.mid.attn_1.q.weight", "decoder.mid.attn_1.q.bias", "decoder.mid.attn_1.k.weight", "decoder.mid.attn_1.k.bias", "decoder.mid.attn_1.v.weight", "decoder.mid.attn_1.v.bias", "decoder.mid.attn_1.proj_out.weight", "decoder.mid.attn_1.proj_out.bias", "decoder.mid.block_2.norm1.weight", "decoder.mid.block_2.norm1.bias", "decoder.mid.block_2.conv1.weight", 
"decoder.mid.block_2.conv1.bias", "decoder.mid.block_2.norm2.weight", "decoder.mid.block_2.norm2.bias", "decoder.mid.block_2.conv2.weight", "decoder.mid.block_2.conv2.bias", "decoder.up.0.block.0.norm1.weight", "decoder.up.0.block.0.norm1.bias", "decoder.up.0.block.0.conv1.weight", "decoder.up.0.block.0.conv1.bias", "decoder.up.0.block.0.norm2.weight", "decoder.up.0.block.0.norm2.bias", "decoder.up.0.block.0.conv2.weight", "decoder.up.0.block.0.conv2.bias", "decoder.up.0.block.0.nin_shortcut.weight", "decoder.up.0.block.0.nin_shortcut.bias", "decoder.up.0.block.1.norm1.weight", "decoder.up.0.block.1.norm1.bias", "decoder.up.0.block.1.conv1.weight", "decoder.up.0.block.1.conv1.bias", "decoder.up.0.block.1.norm2.weight", "decoder.up.0.block.1.norm2.bias", "decoder.up.0.block.1.conv2.weight", "decoder.up.0.block.1.conv2.bias", "decoder.up.0.block.2.norm1.weight", "decoder.up.0.block.2.norm1.bias", "decoder.up.0.block.2.conv1.weight", "decoder.up.0.block.2.conv1.bias", "decoder.up.0.block.2.norm2.weight", "decoder.up.0.block.2.norm2.bias", "decoder.up.0.block.2.conv2.weight", "decoder.up.0.block.2.conv2.bias", "decoder.up.1.block.0.norm1.weight", "decoder.up.1.block.0.norm1.bias", "decoder.up.1.block.0.conv1.weight", "decoder.up.1.block.0.conv1.bias", "decoder.up.1.block.0.norm2.weight", "decoder.up.1.block.0.norm2.bias", "decoder.up.1.block.0.conv2.weight", "decoder.up.1.block.0.conv2.bias", "decoder.up.1.block.0.nin_shortcut.weight", "decoder.up.1.block.0.nin_shortcut.bias", "decoder.up.1.block.1.norm1.weight", "decoder.up.1.block.1.norm1.bias", "decoder.up.1.block.1.conv1.weight", "decoder.up.1.block.1.conv1.bias", "decoder.up.1.block.1.norm2.weight", "decoder.up.1.block.1.norm2.bias", "decoder.up.1.block.1.conv2.weight", "decoder.up.1.block.1.conv2.bias", "decoder.up.1.block.2.norm1.weight", "decoder.up.1.block.2.norm1.bias", "decoder.up.1.block.2.conv1.weight", "decoder.up.1.block.2.conv1.bias", "decoder.up.1.block.2.norm2.weight", 
"decoder.up.1.block.2.norm2.bias", "decoder.up.1.block.2.conv2.weight", "decoder.up.1.block.2.conv2.bias", "decoder.up.1.upsample.conv.weight", "decoder.up.1.upsample.conv.bias", "decoder.up.2.block.0.norm1.weight", "decoder.up.2.block.0.norm1.bias", "decoder.up.2.block.0.conv1.weight", "decoder.up.2.block.0.conv1.bias", "decoder.up.2.block.0.norm2.weight", "decoder.up.2.block.0.norm2.bias", "decoder.up.2.block.0.conv2.weight", "decoder.up.2.block.0.conv2.bias", "decoder.up.2.block.1.norm1.weight", "decoder.up.2.block.1.norm1.bias", "decoder.up.2.block.1.conv1.weight", "decoder.up.2.block.1.conv1.bias", "decoder.up.2.block.1.norm2.weight", "decoder.up.2.block.1.norm2.bias", "decoder.up.2.block.1.conv2.weight", "decoder.up.2.block.1.conv2.bias", "decoder.up.2.block.2.norm1.weight", "decoder.up.2.block.2.norm1.bias", "decoder.up.2.block.2.conv1.weight", "decoder.up.2.block.2.conv1.bias", "decoder.up.2.block.2.norm2.weight", "decoder.up.2.block.2.norm2.bias", "decoder.up.2.block.2.conv2.weight", "decoder.up.2.block.2.conv2.bias", "decoder.up.2.upsample.conv.weight", "decoder.up.2.upsample.conv.bias", "decoder.up.3.block.0.norm1.weight", "decoder.up.3.block.0.norm1.bias", "decoder.up.3.block.0.conv1.weight", "decoder.up.3.block.0.conv1.bias", "decoder.up.3.block.0.norm2.weight", "decoder.up.3.block.0.norm2.bias", "decoder.up.3.block.0.conv2.weight", "decoder.up.3.block.0.conv2.bias", "decoder.up.3.block.1.norm1.weight", "decoder.up.3.block.1.norm1.bias", "decoder.up.3.block.1.conv1.weight", "decoder.up.3.block.1.conv1.bias", "decoder.up.3.block.1.norm2.weight", "decoder.up.3.block.1.norm2.bias", "decoder.up.3.block.1.conv2.weight", "decoder.up.3.block.1.conv2.bias", "decoder.up.3.block.2.norm1.weight", "decoder.up.3.block.2.norm1.bias", "decoder.up.3.block.2.conv1.weight", "decoder.up.3.block.2.conv1.bias", "decoder.up.3.block.2.norm2.weight", "decoder.up.3.block.2.norm2.bias", "decoder.up.3.block.2.conv2.weight", "decoder.up.3.block.2.conv2.bias", 
"decoder.up.3.upsample.conv.weight", "decoder.up.3.upsample.conv.bias", "decoder.norm_out.weight", "decoder.norm_out.bias". Unexpected key(s) in state_dict: "encoder.conv_norm_out.bias", "encoder.conv_norm_out.weight", "encoder.down_blocks.0.downsamplers.0.conv.bias", "encoder.down_blocks.0.downsamplers.0.conv.weight", "encoder.down_blocks.0.resnets.0.conv1.bias", "encoder.down_blocks.0.resnets.0.conv1.weight", "encoder.down_blocks.0.resnets.0.conv2.bias", "encoder.down_blocks.0.resnets.0.conv2.weight", "encoder.down_blocks.0.resnets.0.norm1.bias", "encoder.down_blocks.0.resnets.0.norm1.weight", "encoder.down_blocks.0.resnets.0.norm2.bias", "encoder.down_blocks.0.resnets.0.norm2.weight", "encoder.down_blocks.0.resnets.1.conv1.bias", "encoder.down_blocks.0.resnets.1.conv1.weight", "encoder.down_blocks.0.resnets.1.conv2.bias", "encoder.down_blocks.0.resnets.1.conv2.weight", "encoder.down_blocks.0.resnets.1.norm1.bias", "encoder.down_blocks.0.resnets.1.norm1.weight", "encoder.down_blocks.0.resnets.1.norm2.bias", "encoder.down_blocks.0.resnets.1.norm2.weight", "encoder.down_blocks.1.downsamplers.0.conv.bias", "encoder.down_blocks.1.downsamplers.0.conv.weight", "encoder.down_blocks.1.resnets.0.conv1.bias", "encoder.down_blocks.1.resnets.0.conv1.weight", "encoder.down_blocks.1.resnets.0.conv2.bias", "encoder.down_blocks.1.resnets.0.conv2.weight", "encoder.down_blocks.1.resnets.0.conv_shortcut.bias", "encoder.down_blocks.1.resnets.0.conv_shortcut.weight", "encoder.down_blocks.1.resnets.0.norm1.bias", "encoder.down_blocks.1.resnets.0.norm1.weight", "encoder.down_blocks.1.resnets.0.norm2.bias", "encoder.down_blocks.1.resnets.0.norm2.weight", "encoder.down_blocks.1.resnets.1.conv1.bias", "encoder.down_blocks.1.resnets.1.conv1.weight", "encoder.down_blocks.1.resnets.1.conv2.bias", "encoder.down_blocks.1.resnets.1.conv2.weight", "encoder.down_blocks.1.resnets.1.norm1.bias", "encoder.down_blocks.1.resnets.1.norm1.weight", "encoder.down_blocks.1.resnets.1.norm2.bias", 
"encoder.down_blocks.1.resnets.1.norm2.weight", "encoder.down_blocks.2.downsamplers.0.conv.bias", "encoder.down_blocks.2.downsamplers.0.conv.weight", "encoder.down_blocks.2.resnets.0.conv1.bias", "encoder.down_blocks.2.resnets.0.conv1.weight", "encoder.down_blocks.2.resnets.0.conv2.bias", "encoder.down_blocks.2.resnets.0.conv2.weight", "encoder.down_blocks.2.resnets.0.conv_shortcut.bias", "encoder.down_blocks.2.resnets.0.conv_shortcut.weight", "encoder.down_blocks.2.resnets.0.norm1.bias", "encoder.down_blocks.2.resnets.0.norm1.weight", "encoder.down_blocks.2.resnets.0.norm2.bias", "encoder.down_blocks.2.resnets.0.norm2.weight", "encoder.down_blocks.2.resnets.1.conv1.bias", "encoder.down_blocks.2.resnets.1.conv1.weight", "encoder.down_blocks.2.resnets.1.conv2.bias", "encoder.down_blocks.2.resnets.1.conv2.weight", "encoder.down_blocks.2.resnets.1.norm1.bias", "encoder.down_blocks.2.resnets.1.norm1.weight", "encoder.down_blocks.2.resnets.1.norm2.bias", "encoder.down_blocks.2.resnets.1.norm2.weight", "encoder.down_blocks.3.resnets.0.conv1.bias", "encoder.down_blocks.3.resnets.0.conv1.weight", "encoder.down_blocks.3.resnets.0.conv2.bias", "encoder.down_blocks.3.resnets.0.conv2.weight", "encoder.down_blocks.3.resnets.0.norm1.bias", "encoder.down_blocks.3.resnets.0.norm1.weight", "encoder.down_blocks.3.resnets.0.norm2.bias", "encoder.down_blocks.3.resnets.0.norm2.weight", "encoder.down_blocks.3.resnets.1.conv1.bias", "encoder.down_blocks.3.resnets.1.conv1.weight", "encoder.down_blocks.3.resnets.1.conv2.bias", "encoder.down_blocks.3.resnets.1.conv2.weight", "encoder.down_blocks.3.resnets.1.norm1.bias", "encoder.down_blocks.3.resnets.1.norm1.weight", "encoder.down_blocks.3.resnets.1.norm2.bias", "encoder.down_blocks.3.resnets.1.norm2.weight", "encoder.mid_block.attentions.0.group_norm.bias", "encoder.mid_block.attentions.0.group_norm.weight", "encoder.mid_block.attentions.0.to_k.bias", "encoder.mid_block.attentions.0.to_k.weight", 
"encoder.mid_block.attentions.0.to_out.0.bias", "encoder.mid_block.attentions.0.to_out.0.weight", "encoder.mid_block.attentions.0.to_q.bias", "encoder.mid_block.attentions.0.to_q.weight", "encoder.mid_block.attentions.0.to_v.bias", "encoder.mid_block.attentions.0.to_v.weight", "encoder.mid_block.resnets.0.conv1.bias", "encoder.mid_block.resnets.0.conv1.weight", "encoder.mid_block.resnets.0.conv2.bias", "encoder.mid_block.resnets.0.conv2.weight", "encoder.mid_block.resnets.0.norm1.bias", "encoder.mid_block.resnets.0.norm1.weight", "encoder.mid_block.resnets.0.norm2.bias", "encoder.mid_block.resnets.0.norm2.weight", "encoder.mid_block.resnets.1.conv1.bias", "encoder.mid_block.resnets.1.conv1.weight", "encoder.mid_block.resnets.1.conv2.bias", "encoder.mid_block.resnets.1.conv2.weight", "encoder.mid_block.resnets.1.norm1.bias", "encoder.mid_block.resnets.1.norm1.weight", "encoder.mid_block.resnets.1.norm2.bias", "encoder.mid_block.resnets.1.norm2.weight", "decoder.conv_norm_out.bias", "decoder.conv_norm_out.weight", "decoder.mid_block.attentions.0.group_norm.bias", "decoder.mid_block.attentions.0.group_norm.weight", "decoder.mid_block.attentions.0.to_k.bias", "decoder.mid_block.attentions.0.to_k.weight", "decoder.mid_block.attentions.0.to_out.0.bias", "decoder.mid_block.attentions.0.to_out.0.weight", "decoder.mid_block.attentions.0.to_q.bias", "decoder.mid_block.attentions.0.to_q.weight", "decoder.mid_block.attentions.0.to_v.bias", "decoder.mid_block.attentions.0.to_v.weight", "decoder.mid_block.resnets.0.conv1.bias", "decoder.mid_block.resnets.0.conv1.weight", "decoder.mid_block.resnets.0.conv2.bias", "decoder.mid_block.resnets.0.conv2.weight", "decoder.mid_block.resnets.0.norm1.bias", "decoder.mid_block.resnets.0.norm1.weight", "decoder.mid_block.resnets.0.norm2.bias", "decoder.mid_block.resnets.0.norm2.weight", "decoder.mid_block.resnets.1.conv1.bias", "decoder.mid_block.resnets.1.conv1.weight", "decoder.mid_block.resnets.1.conv2.bias", 
"decoder.mid_block.resnets.1.conv2.weight", "decoder.mid_block.resnets.1.norm1.bias", "decoder.mid_block.resnets.1.norm1.weight", "decoder.mid_block.resnets.1.norm2.bias", "decoder.mid_block.resnets.1.norm2.weight", "decoder.up_blocks.0.resnets.0.conv1.bias", "decoder.up_blocks.0.resnets.0.conv1.weight", "decoder.up_blocks.0.resnets.0.conv2.bias", "decoder.up_blocks.0.resnets.0.conv2.weight", "decoder.up_blocks.0.resnets.0.norm1.bias", "decoder.up_blocks.0.resnets.0.norm1.weight", "decoder.up_blocks.0.resnets.0.norm2.bias", "decoder.up_blocks.0.resnets.0.norm2.weight", "decoder.up_blocks.0.resnets.1.conv1.bias", "decoder.up_blocks.0.resnets.1.conv1.weight", "decoder.up_blocks.0.resnets.1.conv2.bias", "decoder.up_blocks.0.resnets.1.conv2.weight", "decoder.up_blocks.0.resnets.1.norm1.bias", "decoder.up_blocks.0.resnets.1.norm1.weight", "decoder.up_blocks.0.resnets.1.norm2.bias", "decoder.up_blocks.0.resnets.1.norm2.weight", "decoder.up_blocks.0.resnets.2.conv1.bias", "decoder.up_blocks.0.resnets.2.conv1.weight", "decoder.up_blocks.0.resnets.2.conv2.bias", "decoder.up_blocks.0.resnets.2.conv2.weight", "decoder.up_blocks.0.resnets.2.norm1.bias", "decoder.up_blocks.0.resnets.2.norm1.weight", "decoder.up_blocks.0.resnets.2.norm2.bias", "decoder.up_blocks.0.resnets.2.norm2.weight", "decoder.up_blocks.0.upsamplers.0.conv.bias", "decoder.up_blocks.0.upsamplers.0.conv.weight", "decoder.up_blocks.1.resnets.0.conv1.bias", "decoder.up_blocks.1.resnets.0.conv1.weight", "decoder.up_blocks.1.resnets.0.conv2.bias", "decoder.up_blocks.1.resnets.0.conv2.weight", "decoder.up_blocks.1.resnets.0.norm1.bias", "decoder.up_blocks.1.resnets.0.norm1.weight", "decoder.up_blocks.1.resnets.0.norm2.bias", "decoder.up_blocks.1.resnets.0.norm2.weight", "decoder.up_blocks.1.resnets.1.conv1.bias", "decoder.up_blocks.1.resnets.1.conv1.weight", "decoder.up_blocks.1.resnets.1.conv2.bias", "decoder.up_blocks.1.resnets.1.conv2.weight", "decoder.up_blocks.1.resnets.1.norm1.bias", 
"decoder.up_blocks.1.resnets.1.norm1.weight", "decoder.up_blocks.1.resnets.1.norm2.bias", "decoder.up_blocks.1.resnets.1.norm2.weight", "decoder.up_blocks.1.resnets.2.conv1.bias", "decoder.up_blocks.1.resnets.2.conv1.weight", "decoder.up_blocks.1.resnets.2.conv2.bias", "decoder.up_blocks.1.resnets.2.conv2.weight", "decoder.up_blocks.1.resnets.2.norm1.bias", "decoder.up_blocks.1.resnets.2.norm1.weight", "decoder.up_blocks.1.resnets.2.norm2.bias", "decoder.up_blocks.1.resnets.2.norm2.weight", "decoder.up_blocks.1.upsamplers.0.conv.bias", "decoder.up_blocks.1.upsamplers.0.conv.weight", "decoder.up_blocks.2.resnets.0.conv1.bias", "decoder.up_blocks.2.resnets.0.conv1.weight", "decoder.up_blocks.2.resnets.0.conv2.bias", "decoder.up_blocks.2.resnets.0.conv2.weight", "decoder.up_blocks.2.resnets.0.conv_shortcut.bias", "decoder.up_blocks.2.resnets.0.conv_shortcut.weight", "decoder.up_blocks.2.resnets.0.norm1.bias", "decoder.up_blocks.2.resnets.0.norm1.weight", "decoder.up_blocks.2.resnets.0.norm2.bias", "decoder.up_blocks.2.resnets.0.norm2.weight", "decoder.up_blocks.2.resnets.1.conv1.bias", "decoder.up_blocks.2.resnets.1.conv1.weight", "decoder.up_blocks.2.resnets.1.conv2.bias", "decoder.up_blocks.2.resnets.1.conv2.weight", "decoder.up_blocks.2.resnets.1.norm1.bias", "decoder.up_blocks.2.resnets.1.norm1.weight", "decoder.up_blocks.2.resnets.1.norm2.bias", "decoder.up_blocks.2.resnets.1.norm2.weight", "decoder.up_blocks.2.resnets.2.conv1.bias", "decoder.up_blocks.2.resnets.2.conv1.weight", "decoder.up_blocks.2.resnets.2.conv2.bias", "decoder.up_blocks.2.resnets.2.conv2.weight", "decoder.up_blocks.2.resnets.2.norm1.bias", "decoder.up_blocks.2.resnets.2.norm1.weight", "decoder.up_blocks.2.resnets.2.norm2.bias", "decoder.up_blocks.2.resnets.2.norm2.weight", "decoder.up_blocks.2.upsamplers.0.conv.bias", "decoder.up_blocks.2.upsamplers.0.conv.weight", "decoder.up_blocks.3.resnets.0.conv1.bias", "decoder.up_blocks.3.resnets.0.conv1.weight", 
"decoder.up_blocks.3.resnets.0.conv2.bias", "decoder.up_blocks.3.resnets.0.conv2.weight", "decoder.up_blocks.3.resnets.0.conv_shortcut.bias", "decoder.up_blocks.3.resnets.0.conv_shortcut.weight", "decoder.up_blocks.3.resnets.0.norm1.bias", "decoder.up_blocks.3.resnets.0.norm1.weight", "decoder.up_blocks.3.resnets.0.norm2.bias", "decoder.up_blocks.3.resnets.0.norm2.weight", "decoder.up_blocks.3.resnets.1.conv1.bias", "decoder.up_blocks.3.resnets.1.conv1.weight", "decoder.up_blocks.3.resnets.1.conv2.bias", "decoder.up_blocks.3.resnets.1.conv2.weight", "decoder.up_blocks.3.resnets.1.norm1.bias", "decoder.up_blocks.3.resnets.1.norm1.weight", "decoder.up_blocks.3.resnets.1.norm2.bias", "decoder.up_blocks.3.resnets.1.norm2.weight", "decoder.up_blocks.3.resnets.2.conv1.bias", "decoder.up_blocks.3.resnets.2.conv1.weight", "decoder.up_blocks.3.resnets.2.conv2.bias", "decoder.up_blocks.3.resnets.2.conv2.weight", "decoder.up_blocks.3.resnets.2.norm1.bias", "decoder.up_blocks.3.resnets.2.norm1.weight", "decoder.up_blocks.3.resnets.2.norm2.bias", "decoder.up_blocks.3.resnets.2.norm2.weight".
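
A note on reading the error: the missing keys all use the original CompVis/ldm VAE naming (`encoder.down.N.block.M...`), while the unexpected keys all use the diffusers naming (`encoder.down_blocks.N.resnets.M...`), which suggests the file is a diffusers-format VAE checkpoint being fed to a loader that expects ldm-format keys. A minimal sketch of a format check (hypothetical helper, not part of the webui) based only on the key prefixes visible above:

```python
def detect_vae_format(state_dict_keys):
    """Guess whether a VAE checkpoint uses ldm or diffusers key naming.

    ldm-style VAEs use keys like "encoder.down.0.block.0.norm1.weight";
    diffusers-style VAEs use "encoder.down_blocks.0.resnets.0.norm1.weight".
    """
    keys = list(state_dict_keys)
    if any(k.startswith(("encoder.down_blocks.", "decoder.up_blocks.")) for k in keys):
        return "diffusers"
    if any(k.startswith(("encoder.down.", "decoder.up.")) for k in keys):
        return "ldm"
    return "unknown"
```

Running this over the keys of `diffusion_pytorch_model.safetensors` from the traceback would return `"diffusers"`, matching the mismatch the loader reports.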

If I try using just the regular sdxl-vae with the SDXL inpainting model, I get another huge error, ending with: RuntimeError: "log_vml_cpu" not implemented for 'Half'
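
As an aside (not stated in this issue), errors of the form `"... not implemented for 'Half'` mean a CPU op was invoked on fp16 tensors; the webui has a `--no-half-vae` launch flag that keeps the VAE in fp32, which is a common workaround for half-precision VAE failures. For example, in `webui-user.bat` on Windows:

```shell
rem webui-user.bat: run the VAE in full precision to avoid fp16-only failures
set COMMANDLINE_ARGS=--no-half-vae
```

Whether this resolves this particular error is untested here; it is only a workaround for the precision symptom, not for the format mismatch above.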

Steps to reproduce the problem

  1. Go to ....
  2. Press ....
  3. ...

What should have happened?

The VAE should load without errors and work for SDXL inpainting.

Sysinfo

https://pastebin.com/arb9p3Xf

What browsers do you use to access the UI?

Mozilla Firefox

Console logs

https://pastebin.com/yDm296LQ

Additional information

No response

cvar66 avatar Nov 22 '23 19:11 cvar66

The SDXL inpainting model published in diffusers format needs to be converted to the format the automatic1111 webui expects, and code then has to be added to support it. You can check my PR https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/14390 and the discussion https://github.com/AUTOMATIC1111/stable-diffusion-webui/discussions/13195.
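
To illustrate what such a conversion involves, here is a partial sketch of the diffusers-to-ldm key renaming, covering only the resnet, downsampler and upsampler keys that appear in the traceback above (the full conversion also handles attention, mid-block and conv_in/conv_out keys; this is an illustration, not the code from the PR):

```python
import re

def remap_vae_key(key: str) -> str:
    """Rename a diffusers-style VAE key to its ldm-style equivalent (partial)."""
    # shortcut/norm layers use different names in the two layouts
    key = key.replace("conv_shortcut", "nin_shortcut")
    key = key.replace("conv_norm_out", "norm_out")
    m = re.match(r"encoder\.down_blocks\.(\d+)\.resnets\.(\d+)\.(.*)", key)
    if m:
        return f"encoder.down.{m.group(1)}.block.{m.group(2)}.{m.group(3)}"
    m = re.match(r"encoder\.down_blocks\.(\d+)\.downsamplers\.0\.conv\.(.*)", key)
    if m:
        return f"encoder.down.{m.group(1)}.downsample.conv.{m.group(2)}"
    m = re.match(r"decoder\.up_blocks\.(\d+)\.resnets\.(\d+)\.(.*)", key)
    if m:
        # decoder up-block order is reversed between the two layouts
        return f"decoder.up.{3 - int(m.group(1))}.block.{m.group(2)}.{m.group(3)}"
    m = re.match(r"decoder\.up_blocks\.(\d+)\.upsamplers\.0\.conv\.(.*)", key)
    if m:
        return f"decoder.up.{3 - int(m.group(1))}.upsample.conv.{m.group(2)}"
    return key
```

Note how each "Unexpected key" in the error maps onto one of the "Missing key(s)": for instance `decoder.up_blocks.3.resnets.0.conv_shortcut.weight` becomes `decoder.up.0.block.0.nin_shortcut.weight`, consistent with the reversed decoder block order.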

wangqyqq avatar Dec 26 '23 08:12 wangqyqq