FIX: bug with assign=True when loading a LoRA
When I load a Flux-trained LoRA with:
import torch
from diffusers import AutoPipelineForText2Image, FluxPipeline
from safetensors.torch import load_file

pipe = FluxPipeline.from_pretrained("black-forest-labs/FLUX.1-dev", torch_dtype=torch.bfloat16).to('cuda')
pipe.load_lora_weights("model_qk_text.safetensors")
it raises this error:
pipe.load_lora_weights("model_qk_text.safetensors")
File "/output/diffusers/src/diffusers/loaders/lora_pipeline.py", line 1848, in load_lora_weights
self.load_lora_into_transformer(
File "/output/diffusers/src/diffusers/loaders/lora_pipeline.py", line 1951, in load_lora_into_transformer
incompatible_keys = set_peft_model_state_dict(transformer, state_dict, adapter_name, **peft_kwargs)
File "/usr/local/lib/python3.8/site-packages/peft/utils/save_and_load.py", line 458, in set_peft_model_state_dict
load_result = model.load_state_dict(peft_model_state_dict, strict=False, assign=True)
After removing assign=True, everything works.
Thanks for reporting this error. We cannot change the argument just like that, as this will lead to failure in loading other models. Instead, let's try to debug why Flux fails in this case. As a first step, could you please check if loading while passing low_cpu_mem_usage=False to load_lora_weights resolves your error?
Passing low_cpu_mem_usage=False works. So what's your opinion on this problem? Should we add a special flag to make PEFT compatible with Flux?

> Should we add a special flag to make PEFT compatible with Flux?
Before we do that, we need to first understand why this adapter causes the issue, while others work. Then we can think of the best solution. I'll take a look at it when I have a bit of time on my hands.
I have a bit of time to investigate the issue this week. Do you know of a publicly available LoRA Flux adapter that causes the issue you described (only safetensors)? That way, I can try to reproduce the error.
This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
not stale
@tomguluson92 Did you see my last question?
@BenjaminBossan Hi, is this still in progress? It looks like the work might have stopped midway, so if this is still a valid issue, I'd like to continue working on it. I'm leaving this comment to check.
@nsbg A first step would be to find a way to replicate the error. If you're interested in working on this, I'd be happy to assist.
Okay. That sounds like an interesting task. I'll start by writing and testing some example code, using the exact same model as the user who first reported this issue.
I ran the code in Colab and encountered two different scenarios.
The first scenario occurred when I ran the original reporter's code exactly as provided: I got an error saying that the file 'model_qk_text.safetensors' could not be found.
I searched for this file on Hugging Face and GitHub, but it doesn't seem to exist anywhere; the original reporter was presumably loading a local file. Therefore, I can't reproduce the exact error with the provided code.
The second scenario happened when I changed the arguments passed to load_lora_weights. I used black-forest-labs/FLUX.1-Depth-dev-lora and ran the code below.
import torch
from diffusers import AutoPipelineForText2Image, FluxPipeline
from safetensors.torch import load_file
pipe = FluxPipeline.from_pretrained("black-forest-labs/FLUX.1-dev", torch_dtype=torch.bfloat16).to('cuda')
pipe.load_lora_weights("black-forest-labs/FLUX.1-Depth-dev-lora", adapter_name="depth")
In this case, the model loaded successfully, but an error occurred during the image generation process.
Given the initial issue, it seems like an error should have occurred during the model loading process, but it's proving difficult to reproduce.
> In this case, the model loaded successfully, but an error occurred during the image generation process.

Could you please show the code and the error message?

> Given the initial issue, it seems like an error should have occurred during the model loading process, but it's proving difficult to reproduce.
Yes, exactly, that is why I asked earlier about access to the weights. @tomguluson92 do you still see this issue with the latest versions of PEFT and diffusers?