Error when using blockwise scales with sd_xl_offset_example-lora_1.0.safetensors

Describe the bug

When loading sd_xl_offset_example-lora_1.0.safetensors from the official repository and setting the LoRA scales per block, the following error is raised:

RuntimeError: No LoRA weight found for module down_blocks.0.resnets.0.conv1

Loading the same LoRA with just a float scale, e.g. pipe.set_adapters(["offset"], [1.0]), doesn't raise the error.

Reproduction

import torch

from diffusers import StableDiffusionXLPipeline


pipe = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16, variant="fp16"
).to("cuda")

pipe.load_lora_weights(
    "stabilityai/stable-diffusion-xl-base-1.0",
    weight_name="sd_xl_offset_example-lora_1.0.safetensors",
    adapter_name="offset",
)
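# Per-block scales for the UNet (down/mid/up blocks) plus the two text encoders.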
offset_scales = {
    "unet": {
        "down": {"block_1": [1.0, 1.0], "block_2": [1.0, 1.0]},
        "mid": 0,
        "up": {"block_0": [1.0, 1.0, 1.0], "block_1": [1.0, 1.0, 1.0]},
    },
    "text_encoder": 1.0,
    "text_encoder_2": 1.0,
}
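# Fails here with: RuntimeError: No LoRA weight found for module down_blocks.0.resnets.0.conv1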
pipe.set_adapters(["offset"], [offset_scales])

prompt = "high quality photo of a panther in the jungle, night, dark, low light, dim"
generator = torch.Generator(device="cpu").manual_seed(899041154)

image = pipe(
    prompt=prompt,
    negative_prompt="",
    guidance_scale=6.5,
    num_inference_steps=25,
    generator=generator,
).images[0]
image.save("result.png")

Logs

No response

System Info

  • diffusers version: 0.28.0.dev0
  • Platform: Linux-6.8.7-arch1-1-x86_64-with-glibc2.39
  • Python version: 3.11.8
  • PyTorch version (GPU?): 2.2.2+cu121 (True)
  • Huggingface_hub version: 0.20.3
  • Transformers version: 4.40.1
  • Accelerate version: 0.30.0
  • xFormers version: not installed

Who can help?

@sayakpaul @UmerHA

asomoza opened this issue May 06 '24 17:05

I can open a PR to fix it, but first I would like to know whether there is a reason for raising the error here:

https://github.com/huggingface/diffusers/blob/7fa3e5b0f6a593d06e65f5b40ccd46acfafcfeb1/src/diffusers/utils/peft_utils.py#L249

instead of just returning a default value.
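
As a rough sketch of the kind of fallback I have in mind (the names and structure here are illustrative, not the actual diffusers implementation), the lookup could walk up the module path and fall back to a neutral default instead of raising:

# Hypothetical lookup helper; the real code lives in
# src/diffusers/utils/peft_utils.py and may be shaped differently.
def get_module_weight(weight_for_adapter, module_name, default=1.0):
    # A plain float applies to every module.
    if not isinstance(weight_for_adapter, dict):
        return weight_for_adapter

    # Exact match in the expanded per-module dict.
    if module_name in weight_for_adapter:
        return weight_for_adapter[module_name]

    # Walk up the module path, e.g. "down_blocks.0.resnets.0.conv1"
    # -> "down_blocks.0.resnets.0" -> ... -> "down_blocks.0".
    parts = module_name.split(".")
    for i in range(len(parts) - 1, 0, -1):
        key = ".".join(parts[:i])
        if key in weight_for_adapter:
            return weight_for_adapter[key]

    # No entry found: return a default scale instead of raising RuntimeError.
    return default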

asomoza commented May 06 '24 17:05

@asomoza At this point the LoRA weight dict is expanded and should contain an entry for every peft-tunable module, so that error should never happen.

My naive guess is that convolutions are now also peft-tunable (i.e., a subclass of BaseTunerLayer) but the dict expansion doesn't reflect that.

IMO returning a default value is fine.
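
To illustrate the guess: if the blockwise dict from the reproduction is expanded into per-module keys roughly like the following (illustrative keys only, not the actual expansion output), nothing is ever emitted for the resnet convs, so the lookup for down_blocks.0.resnets.0.conv1 comes up empty and raises:

# Illustrative shape of the expanded per-module scale dict, assuming the
# expansion only emits keys for the attention modules addressed by the
# blockwise input:
expanded = {
    "down_blocks.1.attentions.0": 1.0,
    "down_blocks.1.attentions.1": 1.0,
    "down_blocks.2.attentions.0": 1.0,
    "down_blocks.2.attentions.1": 1.0,
    "mid_block.attentions.0": 0.0,
    "up_blocks.0.attentions.0": 1.0,
    # ... remaining up-block attention entries ...
}
# The offset LoRA also carries weights for conv layers, so the lookup is
# asked about "down_blocks.0.resnets.0.conv1", finds no key, and raises.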

UmerHA commented May 06 '24 18:05

Thanks for the clarification, I really appreciate it.

asomoza commented May 07 '24 00:05