                        Error converting LoRA to safetensors/ckpt
Generally speaking, I am not clear on what to do with the output of these LoRA Python scripts. I don't think the output can be used natively by the webuis, and other LoRAs I've seen online are usually safetensors. Here is what I did...
I used the provided python script in examples to generate a LoRA:
#!/bin/sh
accelerate launch train_dreambooth_lora.py \
  --pretrained_model_name_or_path=$1  \
  --instance_data_dir=$2 \
  --output_dir=$3 \
  --instance_prompt="a photo of laskajavids" \
  --resolution=512 \
  --train_batch_size=1 \
  --gradient_accumulation_steps=1 \
  --checkpointing_steps=200 \
  --learning_rate=1e-4 \
  --report_to="wandb" \
  --lr_scheduler="constant" \
  --lr_warmup_steps=0 \
  --max_train_steps=1000 \
  --validation_prompt="A photo of laskajavids by the pool" \
  --validation_epochs=50 \
  --seed="0" \
  --mixed_precision="fp16" \
  --use_8bit_adam
I successfully (or not?) created a LoRA, and it output the following to /output:
checkpoint-1000
checkpoint-200
checkpoint-400
checkpoint-600
checkpoint-800
pytorch_lora_weights.bin
Then, I tried to run scripts/convert_diffusers_to_original_stable_diffusion.py, like so:
 python /diffusers/scripts/convert_diffusers_to_original_stable_diffusion.py --model_path /output --checkpoint_path /test.ckpt --use_safetensors
I received the following error:
Traceback (most recent call last):
  File "/diffusers/scripts/convert_diffusers_to_original_stable_diffusion.py", line 290, in <module>
    unet_state_dict = torch.load(unet_path, map_location="cpu")
  File "/home/ubuntu/miniconda3/envs/ldm/lib/python3.8/site-packages/torch/serialization.py", line 771, in load
    with _open_file_like(f, 'rb') as opened_file:
  File "/home/ubuntu/miniconda3/envs/ldm/lib/python3.8/site-packages/torch/serialization.py", line 270, in _open_file_like
    return _open_file(name_or_buffer, mode)
  File "/home/ubuntu/miniconda3/envs/ldm/lib/python3.8/site-packages/torch/serialization.py", line 251, in __init__
    super(_open_file, self).__init__(open(name, mode))
FileNotFoundError: [Errno 2] No such file or directory: '/output/unet/diffusion_pytorch_model.bin'
Did I miss something when creating the LoRA?
I just tried running the newer script examples/text_to_image/train_text_to_image_lora.py with the following:
export MODEL_NAME=$1
export TRAIN_DIR=$2
accelerate launch --mixed_precision="fp16" train_text_to_image_lora.py \
  --pretrained_model_name_or_path=$MODEL_NAME \
  --train_data_dir=$TRAIN_DIR --caption_column="additional_feature" \
  --resolution=512 --random_flip \
  --train_batch_size=1 \
  --validation_epochs=120 \
  --max_train_steps=1000 \
  --checkpointing_steps=100 \
  --learning_rate=1e-04 --lr_scheduler="constant" --lr_warmup_steps=0 \
  --seed=42 \
  --output_dir=$3 \
  --validation_prompt="photo of laskajavids" \
  --enable_xformers_memory_efficient_attention
The output appeared to be the same as from examples/train_dreambooth_lora.py, so I got the same error:
python /diffusers/scripts/convert_diffusers_to_original_stable_diffusion.py --model_path /output --checkpoint_path /test.ckpt --use_safetensors
Traceback (most recent call last):
  File "/diffusers/scripts/convert_diffusers_to_original_stable_diffusion.py", line 290, in <module>
    unet_state_dict = torch.load(unet_path, map_location="cpu")
  File "/home/ubuntu/miniconda3/envs/ldm/lib/python3.8/site-packages/torch/serialization.py", line 771, in load
    with _open_file_like(f, 'rb') as opened_file:
  File "/home/ubuntu/miniconda3/envs/ldm/lib/python3.8/site-packages/torch/serialization.py", line 270, in _open_file_like
    return _open_file(name_or_buffer, mode)
  File "/home/ubuntu/miniconda3/envs/ldm/lib/python3.8/site-packages/torch/serialization.py", line 251, in __init__
    super(_open_file, self).__init__(open(name, mode))
FileNotFoundError: [Errno 2] No such file or directory: '/output/unet/diffusion_pytorch_model.bin'
Looking at the script itself, I can see why this isn't working: my output directory isn't a full diffusers model, so the converter has nothing to load (see the quick check after the listing below). What do I do with the output that's currently in there?
checkpoint-1000
checkpoint-200
checkpoint-400
checkpoint-600
checkpoint-800
pytorch_lora_weights.bin
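To make the mismatch concrete, here is a quick check (the expected file names are my reading of the conversion script, so treat them as illustrative rather than exhaustive):

import os

# The conversion script loads a full diffusers model layout; a LoRA training
# run only writes the adapter weights plus training checkpoints.
output_dir = "/output"
expected = [
    "unet/diffusion_pytorch_model.bin",
    "vae/diffusion_pytorch_model.bin",
    "text_encoder/pytorch_model.bin",
]
for rel in expected:
    path = os.path.join(output_dir, rel)
    print(("found   " if os.path.exists(path) else "missing ") + path)

# -> only the checkpoint-* folders and pytorch_lora_weights.bin are present
print(sorted(os.listdir(output_dir)))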
@williamberman could you take a look here?
Following
By "other WebUIs", did you mostly mean Automatic1111?
We should really think about whether we can somehow integrate diffusers into AUTO1111
@patrickvonplaten Yes, I am referring to automatic1111's webui specifically. That said, I want to be able to do this via a Python script without having to use a webui.
I'll have to go back and make sure I followed the 🤗 docs. I do feel like I followed them closely, which is why I am confused about how to take the output LoRAs and convert them to safetensors or ckpt format.
You can use LoRA-trained weights easily with diffusers, see: https://huggingface.co/docs/diffusers/v0.12.0/en/training/lora#lora-support-in-diffusers but I think it's not so easy to convert them for A1111.
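For example, something along these lines should work with the output directory from the training run above (the base model ID and paths are illustrative):

import torch
from diffusers import StableDiffusionPipeline

# Load the base model the LoRA was trained against (model ID is illustrative).
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# Point this at the directory containing pytorch_lora_weights.bin.
pipe.unet.load_attn_procs("/output")

image = pipe("A photo of laskajavids by the pool", num_inference_steps=30).images[0]
image.save("lora_test.png")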
Hi @patrickvonplaten, does diffusers support converting, or directly loading, LoRA weights in safetensors format? I have tried the conversion scripts, but they failed.
The weights are from civitai. Any help would be appreciated.
I'll try to allocate time for this. Otherwise maybe @williamberman or @sayakpaul could check it out as well. I'm not 100% sure whether we can convert civitai LoRA weights.
W.r.t. https://github.com/huggingface/diffusers/issues/2363, I think there are a couple of different conversion pathways we're talking about, for completeness:
- Diffusers LoRA weights to safetensors
- civitai weights to diffusers
- civitai LoRA weights to diffusers
@patrickvonplaten am I missing out on something?
I think the second one is already covered by ./scripts/convert_original_stable_diffusion_to_diffusers.py; it can convert civitai weights (in safetensors format, but without LoRA) into diffusers format.
The third one would be civitai LoRA weights (in safetensors format) to diffusers. I'm actually working on it by diving into stable-diffusion-webui, which supports loading the lora+safetensors format; I will provide my script for your reference once I finish.
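In the meantime, the core idea is roughly the following sketch (not the final script): the safetensors file stores a pair of low-rank matrices per layer, and each pair expands into a dense update that gets added to the matching base-model weight. The lora_down/lora_up key suffixes are the common convention; per-module alpha entries and the translation of key names to diffusers modules are left out here.

import torch
from safetensors.torch import load_file

def lora_weight_updates(lora_path, scale=0.75):
    # Expand each (lora_up, lora_down) pair into a dense update:
    #   delta_W = scale * up @ down
    # Mapping these keys onto diffusers module names (and honoring the
    # per-module "alpha" scalars) is what the actual conversion script adds.
    state_dict = load_file(lora_path)
    updates = {}
    for key, down in state_dict.items():
        if not key.endswith("lora_down.weight"):
            continue
        up = state_dict[key.replace("lora_down.weight", "lora_up.weight")]
        # Conv LoRA weights are 4-D; flatten to 2-D for the matmul, then restore.
        delta = scale * (up.flatten(1).float() @ down.flatten(1).float())
        updates[key[: -len(".lora_down.weight")]] = delta.reshape(up.shape[0], *down.shape[1:])
    return updates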
@sayakpaul That is amazing! It would be great to contribute a PR for that as well!
@haofanwang Oh, I really need that, master. I am also stuck converting civitai LoRA weights to diffusers. Would you mind adding me as a friend on WeChat? I hope I can discuss stable diffusion with you. I have sent you an email at [email protected].
Should we add LoRA-related code to research_projects? @sayakpaul
If it's about conversion, it's okay to add it to scripts/. WDYT @williamberman?
@sayakpaul @williamberman I have made a PR for this; if it looks good to you, it should be fine to merge.
@haofanwang That is awesome - now how can I convert my 🤗 diffuser to a safetensors/ckpt? Is it just a matter if mapping keys?
@jndietz Yes. Once the PR merged into diffuser, you can just run the convert script!
@haofanwang Thanks a lot for the PR. I have a question related to the discussion https://github.com/AUTOMATIC1111/stable-diffusion-webui/discussions/7387: after we trained LoRA with dreambooth using diffusers, the .bin file is only about ~3 MB, while the LoRA models on Civitai.com are around ~150 MB. How can we convert the 3 MB file into a file we can use in Automatic1111? I am still learning about this, so sorry for asking stupid questions. Thanks again.
@haofanwang Isn't that script for converting safetensors to diffusers?
@harrywang Don't worry about the file size; you are on the right track, and it will work just like other weights on civitai. This is because the default rank of the LoRA layers is quite small. You can find more info at the end of this tutorial; give us a star if it is helpful.
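If you want to sanity-check this yourself, summing the parameters in the trained file shows where the few MB come from (path as in the training run above; as far as I remember, the default LoRA rank in these scripts was 4):

import torch

# Count the parameters stored in the trained LoRA file. With a small rank,
# the low-rank matrices for the UNet attention projections only add up to a
# few MB, far less than a full checkpoint.
state_dict = torch.load("/output/pytorch_lora_weights.bin", map_location="cpu")
n_params = sum(t.numel() for t in state_dict.values())
print(f"{len(state_dict)} tensors, {n_params:,} parameters, "
      f"~{n_params * 4 / 1024**2:.1f} MB in fp32")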
@jndietz No, it is not a general conversion script. For now, we only handle LoRA weights stored in safetensors format.
@haofanwang Thanks a lot for the nice reply and the tutorial! I have shared it with our team.
@haofanwang Another question: convert_lora_safetensor_to_diffusers.py converts safetensors to diffusers format. After I trained a LoRA model, I have the following in the output folder and checkpoint subfolder:
[screenshot of the output and checkpoint folder contents]
How do I convert them into safetensors like the ones I downloaded from civitai or huggingface, so that I can use them via Automatic1111?
Thanks a lot!!
@harrywang Could you open a new issue here? As this thread is becoming a general Q&A, I will take a look soon.
No problem. I have created an issue https://github.com/haofanwang/Easy-Lora-Handbook/issues/1 Thanks!
@harrywang Check out my comment in this discussion; it might help you until an official solution gets released.
Thanks. But when I use https://github.com/huggingface/diffusers/blob/main/examples/text_to_image/train_text_to_image_lora.py to train a model, there is no custom_checkpoint_0.pkl file in the checkpoint folders. Any idea?
@harrywang I've just realized that I used a different version of the LoRA training script, which explains the missing file: https://github.com/huggingface/diffusers/blob/main/examples/dreambooth/train_dreambooth_lora.py. I'll check whether I can make the converter work with the one you've linked as well.
@harrywang So I have just replaced 'custom_checkpoint_0.pkl' with 'pytorch_model.bin' in the converter script, and the result works just fine in automatic1111.
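For anyone else landing here, the underlying idea of that workaround is just key renaming plus re-saving as safetensors. A rough, untested sketch (the lora_unet_* naming is my understanding of the A1111/kohya convention and should be double-checked against a file A1111 already loads; the to_out naming quirk and alpha entries are glossed over):

import torch
from safetensors.torch import save_file

# Rename diffusers LoRA keys such as
#   "down_blocks.0.attentions.0.transformer_blocks.0.attn1.processor.to_q_lora.down.weight"
# into an A1111/kohya-style key such as
#   "lora_unet_down_blocks_0_attentions_0_transformer_blocks_0_attn1_to_q.lora_down.weight"
# and write everything out as a .safetensors file.
src = torch.load("/output/pytorch_lora_weights.bin", map_location="cpu")
dst = {}
for key, tensor in src.items():
    module, _, rest = key.partition(".processor.")
    proj, direction, _ = rest.split(".")   # e.g. "to_q_lora", "down", "weight"
    proj = proj.replace("_lora", "")       # "to_q_lora" -> "to_q" (to_out may need a "_0" suffix)
    new_key = "lora_unet_" + module.replace(".", "_") + "_" + proj + ".lora_" + direction + ".weight"
    dst[new_key] = tensor.half().contiguous()
save_file(dst, "/output/laskajavids_lora.safetensors")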