Can lora also be implemented with stable diffusion?
candle-lora can be applied to any model by adding its derive and attribute macros to each struct that contains a linear, embedding, or conv layer, so applying it to stable diffusion should be possible.
https://github.com/huggingface/candle/tree/main/candle-transformers/src/models/stable_diffusion There are a couple of files in there, so if I plan to port SD, do I have to port all of these files or only some of them?
You need to port every file that depends on the modified types. Unlike peft, we cannot swap layer types at runtime, so any struct whose fields change type, and any code that uses those structs, must be updated as well.
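To illustrate the point above, here is a self-contained sketch (the type and struct names are hypothetical, not candle's actual code) of why every file touching a converted struct must be ported: in Rust a field's type is fixed at compile time, so replacing `Linear` with a LoRA wrapper changes the struct definition itself, and every file that names that struct must follow.

```rust
// Hypothetical stand-ins for a plain layer and its LoRA-converted wrapper.
struct Linear;
struct LoraLinear;

// Before conversion:
struct AttnBefore {
    proj: Linear,
}

// After conversion the field type differs, so this is a different struct
// definition; any code constructing or accessing it must be updated too.
struct AttnAfter {
    proj: LoraLinear,
}

fn main() {
    let _before = AttnBefore { proj: Linear };
    let _after = AttnAfter { proj: LoraLinear };
    // In Python, peft can instead do `attn.proj = LoraLinear(...)` at
    // runtime; Rust's static typing rules that out, hence the porting.
    println!("ok"); // prints "ok"
}
```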
ok let me try again then
Hello, have you successfully ported this to stable diffusion?
I looked at some examples, but I don't quite understand how AutoLoraConvert introduces the LoRA weights.
@oovm, no, but you can do it by adding:
```rust
#[replace_layer_fields]
#[derive(AutoLoraConvert)]
```
to each model struct.
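For intuition on what the converted layers compute, here is a self-contained sketch of the LoRA idea itself (conceptual only, not candle-lora's generated code, and the names `LoraLinear`, `a`, `b` are illustrative): the frozen base output is augmented with a low-rank update `B·A·x`, with `B` zero-initialized so the converted model starts out identical to the base model.

```rust
// A minimal dense layer: weight is an out x in matrix.
struct Linear {
    weight: Vec<Vec<f32>>,
}

impl Linear {
    fn forward(&self, x: &[f32]) -> Vec<f32> {
        self.weight
            .iter()
            .map(|row| row.iter().zip(x).map(|(w, xi)| w * xi).sum())
            .collect()
    }
}

// Conceptual LoRA wrapper: frozen base plus a trainable low-rank path.
struct LoraLinear {
    base: Linear,     // frozen pretrained weight
    a: Vec<Vec<f32>>, // r x in
    b: Vec<Vec<f32>>, // out x r, zero-initialized
    scale: f32,
}

impl LoraLinear {
    fn forward(&self, x: &[f32]) -> Vec<f32> {
        let base = self.base.forward(x);
        // Low-rank path: B * (A * x).
        let ax: Vec<f32> = self
            .a
            .iter()
            .map(|row| row.iter().zip(x).map(|(w, xi)| w * xi).sum())
            .collect();
        let bax: Vec<f32> = self
            .b
            .iter()
            .map(|row| row.iter().zip(&ax).map(|(w, v)| w * v).sum())
            .collect();
        base.iter()
            .zip(&bax)
            .map(|(y, d)| y + self.scale * d)
            .collect()
    }
}

fn main() {
    let base = Linear {
        weight: vec![vec![1.0, 2.0], vec![3.0, 4.0]],
    };
    let lora = LoraLinear {
        base: Linear {
            weight: vec![vec![1.0, 2.0], vec![3.0, 4.0]],
        },
        a: vec![vec![0.5, 0.5]],       // rank 1
        b: vec![vec![0.0], vec![0.0]], // zero init
        scale: 1.0,
    };
    let x = [1.0, 1.0];
    println!("{:?}", base.forward(&x)); // [3.0, 7.0]
    println!("{:?}", lora.forward(&x)); // [3.0, 7.0], since B = 0
}
```

Training then only updates `a` and `b`, which is what makes LoRA adapters small.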
> Hello, have you successfully ported this to stable diffusion?
> I looked at some examples, but I don't quite understand how AutoLoraConvert introduces the LoRA weights.
I haven't yet, but it can be done by adding the macros to every model struct.
Interesting. So just by adding the macros, you can apply LoRAs to stable diffusion with your library? Sounds awesome.
@donkey-donkey @staru09 yes exactly! Closing this as there hasn't been any activity for a bit.