Simo Ryu
Compatibility is a known issue at the moment. For now, you can merge it into the ckpt and use it, although that removes the nice feature of LoRA. I'll work on it in the...
#140 should be dealt with by this update. Please wait for the next release, or use the dev branch!
Sorry, it's not possible right now. I'll add this as a feature as well. If you want to get notified, "watch" this repo!
Yes, gradient checkpointing doesn't work right now, sorry...
Ah sorry, we've actually talked about this on the Slack channel as well. I just finished my final exam. I'll work on it today.
I'm sorry for not updating this earlier. And thank you @patrickvonplaten! I just checked this PR as well: https://github.com/huggingface/diffusers/pull/1639. This would make the LoRA and Paint-with-Words implementation...
Hmm... as you've probably seen in my code, there is an [extract_lora_ups_down](https://github.com/cloneofsimo/lora/blob/ba349e56e23e92e3b128c7c67ae58d3067540daa/lora_diffusion/lora.py#L71) function that you can use to extract the LoRA weights. You can then load a new model and merge the LoRA...
Not sure if this is the solution you were looking for, as it just extracts the LoRA and reloads a new, raw model.
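In case it helps, here's a rough sketch of that flow. The exact return type of `extract_lora_ups_down` and whether it's exported at the package top level are assumptions based on the linked commit, and the model id, dtype, and save path are placeholders:

```python
import torch
from diffusers import StableDiffusionPipeline
from lora_diffusion import extract_lora_ups_down

# `pipe` is an existing pipeline whose unet already has LoRA injected.
# Pull the low-rank up/down pairs out of every patched attention module
# (assumed here to come back as a list of (up, down) linear modules).
loras = extract_lora_ups_down(pipe.unet)
lora_weights = [(up.weight.cpu(), down.weight.cpu()) for up, down in loras]
torch.save(lora_weights, "extracted_lora.pt")  # only the LoRA weights, not the base model

# Load a fresh, raw base model; the extracted LoRA can then be merged into it.
new_pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",  # placeholder model id
    torch_dtype=torch.float16,
).to("cuda")
```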
So there is a CLI for this, `lora_add`, which you've probably seen. There is also `weight_apply_lora`, which "doesn't add anything" and simply updates the weights in the UNet.
So I'm just suggesting removing the pipeline after injection, reinitializing a new one, and using `weight_apply_lora`, as in the sketch below.
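A minimal sketch of that suggestion, assuming `weight_apply_lora(model, loras, alpha=...)` matches the signature in this repo's `lora.py` and that the saved LoRA file and model id (both placeholders here) exist:

```python
import torch
from diffusers import StableDiffusionPipeline
from lora_diffusion import weight_apply_lora

# Throw away the pipeline that had LoRA injected and start from a clean one.
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",  # placeholder model id
).to("cuda")

# Bake the trained LoRA directly into the UNet weights (no injection),
# assuming the saved file has the layout that weight_apply_lora expects.
loras = torch.load("lora.pt")
weight_apply_lora(pipe.unet, loras, alpha=1.0)

image = pipe("a photo of sks dog", num_inference_steps=30).images[0]
```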