Unloading LoRA raises errors on refiner SDXL models when employing only ONE text encoder
Unloading LoRA raises errors on refiner SDXL models when employing only ONE text encoder:
https://github.com/huggingface/diffusers/blob/7f551e29ff4ad05615cb38530a8940811f9e5936/src/diffusers/loaders/lora.py#L967C9-L967C28 (the call to `self._remove_text_encoder_monkey_patch()`)
https://github.com/huggingface/diffusers/blob/7f551e29ff4ad05615cb38530a8940811f9e5936/src/diffusers/loaders/lora.py#L1539
```python
def _remove_text_encoder_monkey_patch(self):
    if USE_PEFT_BACKEND:
        recurse_remove_peft_layers(self.text_encoder)
```
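A possible direction for a fix (a sketch only, not the actual diffusers patch) is to skip encoders that are `None`, since the SDXL refiner pipeline carries only `text_encoder_2` and sets `text_encoder = None`. The helper name below is hypothetical:

```python
# Hypothetical guard illustrating the fix: only recurse into encoders that exist.
# The SDXL refiner has text_encoder=None and only text_encoder_2, so the current
# unconditional recurse_remove_peft_layers(self.text_encoder) call crashes.

def remove_peft_layers_safely(remove_fn, *encoders):
    """Apply remove_fn (e.g. recurse_remove_peft_layers) to non-None encoders only."""
    for encoder in encoders:
        if encoder is not None:  # refiner pipelines may lack the first encoder
            remove_fn(encoder)

# Example: with (None, "text_encoder_2"), only the existing encoder is processed.
processed = []
remove_peft_layers_safely(processed.append, None, "text_encoder_2")
```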
The handling of LoRA loading and unloading is limited in quite a few use cases. When refining an image with `StableDiffusionXLImg2ImgPipeline`, `unload_lora_weights` raises an error:
```
File "/opt/conda/envs/creative-inference-env/lib/python3.9/site-packages/diffusers/utils/peft_utils.py", line 37, in recurse_remove_peft_layers
    for module in model.modules():
AttributeError: 'NoneType' object has no attribute 'modules'
```
We seek a solution that allows us to manually unload LoRA weights with one text encoder.
In `loaders/lora.py` there are two `_remove_text_encoder_monkey_patch` implementations. @yiyixuxu @sayakpaul @DN6 @patrickvonplaten
Can you share the LoRA model file with us?
cc: @younesbelkada
Also, having a fuller stack trace would be helpful here.
Any refiner calling `unload_lora_weights()` raises the error:
```python
refiner_pipeline = AutoPipelineForImage2Image.from_pretrained(
    "stabilityai/stable-diffusion-xl-refiner-1.0", requires_safety_checker=False
)
refiner_pipeline.load_lora_weights(
    "path/to/refiner_lora/model", weight_name="pytorch_lora_weights.safetensors"
)
refiner_image = refiner_pipeline("A pokemon with blue eyes", image=base_image).images[0]
refiner_pipeline.unload_lora_weights()
```
```
File "/opt/conda/envs/creative-inference-env/lib/python3.9/site-packages/diffusers/loaders/lora.py", line 997, in unload_lora_weights
    self._remove_text_encoder_monkey_patch()
File "/opt/conda/envs/creative-inference-env/lib/python3.9/site-packages/diffusers/loaders/lora.py", line 1541, in _remove_text_encoder_monkey_patch
    recurse_remove_peft_layers(self.text_encoder)
File "/opt/conda/envs/creative-inference-env/lib/python3.9/site-packages/diffusers/utils/peft_utils.py", line 37, in recurse_remove_peft_layers
    for module in model.modules():
AttributeError: 'NoneType' object has no attribute 'modules'
```
Can you host path/to/refiner_lora/model on the Hub so that we can take a look?
Unfortunately not, but any refiner test of `unload_lora_weights()` raises the error, since the method doesn't account for pipelines with only one `text_encoder`.
Hi @arnonrodman, thanks for the issue! If you can share a small reproducible snippet, that would be great; you can push a dummy LoRA weight to the Hub so that we can have a look and fix the issue ASAP.
Hi @younesbelkada, I uploaded a dummy LoRA with one text encoder: https://huggingface.co/arnonrodman/error_in_refiner/tree/main
This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the contributing guidelines are likely to be ignored.
Any news regarding this one? How can we unload LoRA with only one text encoder? Thanks!
Gentle ping: @younesbelkada
Gentle ping @younesbelkada