
LoRA support

SaladDays831 opened this issue 1 year ago • 11 comments

Are there any plans to support LoRA? If so, I assume the .safetensors file will need to be converted with the model?

SaladDays831 avatar Jul 11 '23 12:07 SaladDays831

apple team, please add it!!

treksis avatar Jul 23 '23 05:07 treksis

This repo has some models converted to Core ML after a LoRA was merged into a base model. Not the real thing, but a good bit of it...

https://huggingface.co/jrrjrr/LoRA-Merged-CoreML-Models

Merging was done with the Super Merger extension of Automatic1111. The Core ML conversion included a VAEEncoder for image2image, but not the ControlledUnet for ControlNet use, and they are "bundled for Swift". You could just as easily convert with the ControlledUnet added and/or skip the bundle step for use with a different pipeline.
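
For reference, a similar merge can be done in Python with the diffusers library instead of Super Merger. A rough sketch (the LoRA file name and paths here are placeholders, not files from my repo):

    # Sketch: fuse a LoRA into a base model with diffusers, then save the
    # merged weights in the multi-folder layout that torch2coreml expects.
    from diffusers import StableDiffusionPipeline

    pipe = StableDiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5")
    pipe.load_lora_weights(".", weight_name="lora.safetensors")  # local LoRA file
    pipe.fuse_lora(lora_scale=0.8)  # bake the LoRA into the base weights
    pipe.save_pretrained("./merged-model")  # convert this folder like any model

The saved folder should then work as the --model-version argument for torch2coreml, since that argument is just handed to from_pretrained.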

jrittvo avatar Jul 23 '23 21:07 jrittvo

We need LoRA 🥹

jiangdi0924 avatar Jul 24 '23 04:07 jiangdi0924

I have a feeling that SD-XL is capturing everyone's attention right now. LoRA probably won't happen until SD-XL is all figured out, but that seems to be happening quickly. Hopefully that is out of the way before Sonoma and a full ml-stable-diffusion 1.0.0 grab the spotlight and LoRA gets bumped again.

jrittvo avatar Jul 24 '23 04:07 jrittvo

Hey @jrittvo thanks so much for the link! Didn't know we could do that with LoRAs, gonna test a couple of your models now

SaladDays831 avatar Jul 24 '23 09:07 SaladDays831

Hi again @jrittvo! The models work great, thanks again for the insight. Would you mind sharing some info on how you converted the output .safetensors model (from the SuperMerger extension) to Core ML? I assume just uploading the .safetensors file to Hugging Face and using it with the command from "Converting Models to Core ML" won't work, as it needs the unet, scheduler, and other folders.

SaladDays831 avatar Jul 31 '23 10:07 SaladDays831

The conversion from .safetensors (or .ckpt) to Core ML is pretty straightforward once you get the environment for it all set up. Getting it set up is not that straightforward, unfortunately. There is a good guide here:

https://github.com/godly-devotion/MochiDiffusion/wiki/How-to-convert-Stable-Diffusion-models-to-Core-ML
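
To your point about the missing unet and scheduler folders: a single .safetensors checkpoint first has to be expanded into the multi-folder diffusers layout, which is what the conversion script consumes. A rough sketch in Python (the file name is a placeholder; older diffusers versions call the method from_ckpt instead of from_single_file):

    # Sketch: expand a single-file checkpoint into the diffusers folder
    # layout (unet/, text_encoder/, scheduler/, vae/, and so on).
    from diffusers import StableDiffusionPipeline

    pipe = StableDiffusionPipeline.from_single_file("model.safetensors")
    pipe.save_pretrained("./model-diffusers")  # folder layout for conversion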

If you give it a shot and get stuck, someone at the Mochi Diffusion Discord can help you:

https://discord.gg/x2kartzxGv

You can also drop a specific request (or requests) at my LoRA-Merged-CoreML-Models repo and I'll run it (or them) for you, usually within a day or two.

jrittvo avatar Jul 31 '23 15:07 jrittvo

Hello everyone! I just added the option to merge LoRAs before conversion in the Guernika Model Converter; basically, it takes the LoRAs and merges them using this script by Kohya.

GuiyeC avatar Aug 12 '23 11:08 GuiyeC

@GuiyeC that's awesome, thanks!

At the moment I've stopped experimenting with LoRAs, as it's crucial for us to be able to "hot-swap" them: e.g., have one SD model (~1 GB) and multiple LoRA models (~30 MB each), and pick which one to use at runtime. Baking LoRAs into the SD model works great for testing, but shipping a separate heavy model for each LoRA in the project sucks, so I'm still waiting for some info on official LoRA support.
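
For context, this hot-swap workflow is already straightforward on the PyTorch side with diffusers, which is exactly what I'd like to see mirrored in Core ML. A rough sketch (the adapter file names are hypothetical):

    # Sketch: runtime LoRA swapping in diffusers (PyTorch, not Core ML).
    # One ~1 GB base model stays loaded; ~30 MB LoRA files attach on demand.
    from diffusers import StableDiffusionPipeline

    pipe = StableDiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5")

    pipe.load_lora_weights(".", weight_name="style_a.safetensors")  # attach LoRA A
    image_a = pipe("a castle at dawn", num_inference_steps=30).images[0]

    pipe.unload_lora_weights()  # detach without touching the base weights
    pipe.load_lora_weights(".", weight_name="style_b.safetensors")  # swap in LoRA B
    image_b = pipe("a castle at dawn", num_inference_steps=30).images[0]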

SaladDays831 avatar Aug 14 '23 10:08 SaladDays831

Is there any progress on this? The XL models are even bigger, and the demand for LoRA has become more urgent. 🥶

jiangdi0924 avatar Oct 23 '23 03:10 jiangdi0924

Hi.

I am trying to convert an LCM-LoRA-applied model, but failing. Could someone advise me?

Here is what I did:

  1. Added the code below to the get_pipeline function in torch2coreml.py (you also need to add an import for LCMScheduler):
    pipe.load_lora_weights("latent-consistency/lcm-lora-sdv1-5", adapter_name="lcm")
    pipe.fuse_lora()
    pipe.scheduler = LCMScheduler.from_config(pipe.scheduler.config)
  2. Converted the model with the script below:
mkdir -p mlmodels
pipenv run python -m python_coreml_stable_diffusion.torch2coreml \
    --model-version runwayml/stable-diffusion-v1-5 \
    --attention-implementation ORIGINAL \
    --convert-unet \
    --convert-text-encoder \
    --convert-vae-decoder \
    --convert-vae-encoder \
    --convert-safety-checker \
    --quantize-nbits 6 \
    --bundle-resources-for-swift-cli \
    -o mlmodels
  3. Added code for the LCMScheduler options in pipeline.py.

  4. Generated an image with the script below:

#!/bin/zsh

prompt="rabbit on moon, high resolution"

pipenv run python -m python_coreml_stable_diffusion.pipeline \
    --model-version runwayml/stable-diffusion-v1-5 \
    --scheduler LCM \
    --prompt "${prompt}" \
    -i mlmodels \
    -o . \
    --compute-unit ALL \
    --seed 42 \
    --num-inference-steps 8

The image I got is strange:

[generated image: randomSeed_42_computeUnit_ALL_modelVersion_runwayml_stable-diffusion-v1-5_customScheduler_LCM_numInferenceSteps8]

The image I expect, which is what Diffusers actually generates under the same conditions, is below.

[expected image, generated with Diffusers]
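
For reference, the Diffusers run looks roughly like this (a sketch; note that LCM-LoRA is usually run with a guidance scale near 1.0, which may differ from what the Core ML pipeline uses):

    # Rough sketch of the Diffusers-side check: if this output also looked
    # wrong, the problem would be in the LoRA fusion, not the conversion.
    import torch
    from diffusers import StableDiffusionPipeline, LCMScheduler

    pipe = StableDiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5")
    pipe.load_lora_weights("latent-consistency/lcm-lora-sdv1-5", adapter_name="lcm")
    pipe.fuse_lora()
    pipe.scheduler = LCMScheduler.from_config(pipe.scheduler.config)

    # LCM expects few steps and a guidance scale near 1.0.
    image = pipe(
        "rabbit on moon, high resolution",
        num_inference_steps=8,
        guidance_scale=1.0,
        generator=torch.Generator().manual_seed(42),
    ).images[0]
    image.save("lcm_check.png")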

I suspect I am missing something, but I have no idea what, since I am a newbie to generative AI. Please give me some advice! Thanks.

y-ich avatar Dec 25 '23 03:12 y-ich