
ValueError: Wrong LoRA Key: diffusion_model.input_blocks.1.1.transformer_blocks.0.attn2.to_k.weight

Open DiegoRRR opened this issue 1 year ago • 0 comments

Every time I use LayerDiffuse and then load another checkpoint, this error occurs.

If I first disable LayerDiffuse, then change the checkpoint, generate, re-enable LayerDiffuse, and generate again, it works. But if I keep LayerDiffuse enabled while changing the checkpoint, the error occurs.

Once it has happened, it keeps happening every time I try to generate. Even if I close Forge's command window and start it again, the error persists. Even disabling LayerDiffuse doesn't help. The only way to get generation working again is to restart Forge AND open a new browser tab, so the error corrupts something in the page state. (Copying all the values and settings from the "dead" tab to a new one every time this happens is very annoying.)

Loading Model: {'checkpoint_info': {'filename': 'D:\\apps\\stable-diffusion\\Forge_2024\\webui\\models\\Stable-diffusion\\1.5\\mix CA2_ artUniverse x0.62 + toon10.safetensors', 'hash': '81640685'}, 'additional_modules': [], 'unet_storage_dtype': None}
[Unload] Trying to free all memory for cuda:0 with 0 models keep loaded ... Done.
StateDict Keys: {'unet': 686, 'vae': 248, 'text_encoder': 197, 'ignore': 0}
D:\apps\stable-diffusion\Forge_2024\system\python\lib\site-packages\transformers\tokenization_utils_base.py:1601: FutureWarning: `clean_up_tokenization_spaces` was not set. It will be set to `True` by default. This behavior will be depracted in transformers v4.45, and will be then set to `False` by default. For more details check this issue: https://github.com/huggingface/transformers/issues/31884
  warnings.warn(
Working with z of shape (1, 4, 32, 32) = 4096 dimensions.
K-Model Created: {'storage_dtype': torch.float16, 'computation_dtype': torch.float16}
Model loaded in 1.3s (unload existing model: 0.2s, forge model load: 1.1s).
[Unload] Trying to free 1026.93 MB for cuda:0 with 0 models keep loaded ... Done.
[Memory Management] Target: LatentTransparencyOffsetEncoder, Free GPU: 11455.46 MB, Model Require: 2.25 MB, Previously Loaded: 0.00 MB, Inference Require: 1024.00 MB, Remaining: 10429.21 MB, All loaded to GPU.
Moving model(s) has taken 0.02 seconds
[Unload] Trying to free 3686.21 MB for cuda:0 with 0 models keep loaded ... Current free memory is 10348.53 MB ... Done.
[Memory Management] Target: IntegratedAutoencoderKL, Free GPU: 10348.53 MB, Model Require: 159.56 MB, Previously Loaded: 0.00 MB, Inference Require: 1024.00 MB, Remaining: 9164.97 MB, All loaded to GPU.
Moving model(s) has taken 0.10 seconds
[LORA] Loaded D:\apps\stable-diffusion\Forge_2024\webui\models\Lora\1.5\style\Crabapple_Trouble_15-14.safetensors for KModel-UNet with 192 keys at weight 0.8 (skipped 0 keys) with on_the_fly = False
[LORA] Loaded D:\apps\stable-diffusion\Forge_2024\webui\models\Lora\1.5\style\Crabapple_Trouble_15-14.safetensors for KModel-CLIP with 72 keys at weight 0.8 (skipped 0 keys) with on_the_fly = False
[LORA] Loaded D:\apps\stable-diffusion\Forge_2024\webui\models\Lora\1.5\style\style_paint_6-06.safetensors for KModel-UNet with 192 keys at weight 0.6 (skipped 0 keys) with on_the_fly = False
[LORA] Loaded D:\apps\stable-diffusion\Forge_2024\webui\models\Lora\1.5\style\style_paint_6-06.safetensors for KModel-CLIP with 72 keys at weight 0.6 (skipped 0 keys) with on_the_fly = False
[Unload] Trying to free 1329.14 MB for cuda:0 with 0 models keep loaded ... Current free memory is 10316.81 MB ... Done.
[Memory Management] Target: JointTextEncoder, Free GPU: 10316.81 MB, Model Require: 234.72 MB, Previously Loaded: 0.00 MB, Inference Require: 1024.00 MB, Remaining: 9058.09 MB, All loaded to GPU.
Moving model(s) has taken 1.02 seconds
[Unload] Trying to free 1024.00 MB for cuda:0 with 1 models keep loaded ... Current free memory is 9984.96 MB ... Done.
[LayerDiffuse] LayerMethod.FG_ONLY_ATTN_SD15
[Unload] Trying to free 3421.47 MB for cuda:0 with 0 models keep loaded ... Current free memory is 9984.49 MB ... Done.
[Memory Management] Target: KModel, Free GPU: 9984.49 MB, Model Require: 1639.41 MB, Previously Loaded: 0.00 MB, Inference Require: 1024.00 MB, Remaining: 7321.08 MB, All loaded to GPU.
Traceback (most recent call last):
  File "D:\apps\stable-diffusion\Forge_2024\webui\backend\patcher\lora.py", line 344, in refresh
    parent_layer, child_key, weight = utils.get_attr_with_parent(self.model, key)
  File "D:\apps\stable-diffusion\Forge_2024\webui\backend\utils.py", line 85, in get_attr_with_parent
    obj = getattr(obj, name)
  File "D:\apps\stable-diffusion\Forge_2024\system\python\lib\site-packages\torch\nn\modules\module.py", line 1695, in __getattr__
    raise AttributeError(f"'{type(self).__name__}' object has no attribute '{name}'")
AttributeError: 'AttentionSharingUnit' object has no attribute 'to_out'

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "D:\apps\stable-diffusion\Forge_2024\webui\modules_forge\main_thread.py", line 30, in work
    self.result = self.func(*self.args, **self.kwargs)
  File "D:\apps\stable-diffusion\Forge_2024\webui\modules\img2img.py", line 250, in img2img_function
    processed = process_images(p)
  File "D:\apps\stable-diffusion\Forge_2024\webui\modules\processing.py", line 817, in process_images
    res = process_images_inner(p)
  File "D:\apps\stable-diffusion\Forge_2024\webui\modules\processing.py", line 960, in process_images_inner
    samples_ddim = p.sample(conditioning=p.c, unconditional_conditioning=p.uc, seeds=p.seeds, subseeds=p.subseeds, subseed_strength=p.subseed_strength, prompts=p.prompts)
  File "D:\apps\stable-diffusion\Forge_2024\webui\modules\processing.py", line 1790, in sample
    samples = self.sampler.sample_img2img(self, self.init_latent, x, conditioning, unconditional_conditioning, image_conditioning=self.image_conditioning)
  File "D:\apps\stable-diffusion\Forge_2024\webui\modules\sd_samplers_kdiffusion.py", line 138, in sample_img2img
    sampling_prepare(self.model_wrap.inner_model.forge_objects.unet, x=x)
  File "D:\apps\stable-diffusion\Forge_2024\webui\backend\sampling\sampling_function.py", line 383, in sampling_prepare
    memory_management.load_models_gpu(
  File "D:\apps\stable-diffusion\Forge_2024\webui\backend\memory_management.py", line 679, in load_models_gpu
    loaded_model.model_load(model_gpu_memory_when_using_cpu_swap)
  File "D:\apps\stable-diffusion\Forge_2024\webui\backend\memory_management.py", line 518, in model_load
    self.model.refresh_loras()
  File "D:\apps\stable-diffusion\Forge_2024\webui\backend\patcher\base.py", line 126, in refresh_loras
    self.lora_loader.refresh(lora_patches=self.lora_patches, offload_device=self.offload_device)
  File "D:\apps\stable-diffusion\Forge_2024\system\python\lib\site-packages\torch\utils\_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
  File "D:\apps\stable-diffusion\Forge_2024\webui\backend\patcher\lora.py", line 347, in refresh
    raise ValueError(f"Wrong LoRA Key: {key}")
ValueError: Wrong LoRA Key: diffusion_model.input_blocks.1.1.transformer_blocks.0.attn2.to_out.0.weight
Wrong LoRA Key: diffusion_model.input_blocks.1.1.transformer_blocks.0.attn2.to_out.0.weight
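For anyone digging into this: the first traceback suggests that while LayerDiffuse is active, the stock cross-attention module has been replaced by an AttentionSharingUnit that no longer exposes a `to_out` submodule, so the LoRA loader's attribute walk over the dotted key fails. Below is a minimal pure-Python sketch of that failure mode (all class names are hypothetical stand-ins, not Forge's actual classes, and the key is simplified from the real `to_out.0.weight`):

```python
class ToOut:
    weight = "tensor"  # stand-in for the real weight tensor

class Attn:
    """Hypothetical stand-in for a stock cross-attention block."""
    def __init__(self):
        self.to_out = ToOut()

class AttentionSharingUnitStub:
    """Hypothetical stand-in for the patched module: it has no to_out."""
    pass

def get_attr_with_parent(obj, key):
    # Walk the dotted key segment by segment; a missing attribute raises
    # AttributeError, which the loader then surfaces as "Wrong LoRA Key".
    *parents, leaf = key.split(".")
    for name in parents:
        obj = getattr(obj, name)
    return obj, leaf, getattr(obj, leaf)

class Root:
    pass

root = Root()
root.attn2 = Attn()
print(get_attr_with_parent(root, "attn2.to_out.weight")[1])  # prints "weight"

root.attn2 = AttentionSharingUnitStub()  # simulate the LayerDiffuse swap
try:
    get_attr_with_parent(root, "attn2.to_out.weight")
except AttributeError as e:
    print("Wrong LoRA Key:", e)  # same failure mode as in the traceback
```

This would also be consistent with the workaround above: disabling LayerDiffuse before switching checkpoints presumably restores the original attention modules, so the LoRA keys resolve again.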

DiegoRRR, Oct 10 '24 16:10