Support for Stable Cascade LoRAs and embeddings
I've now implemented LoRA and embedding training support in OneTrainer (for now only on a separate branch: https://github.com/Nerogar/OneTrainer/tree/stable_cascade). To actually use them, support in the inference tools is needed. I propose the following format:
LoRA:
- Each UNet key has the prefix `lora_prior_unet_`, followed by the base model key in diffusers format, then `lora_down.weight`, `lora_up.weight` and `alpha`.
- Each text encoder key has the prefix `lora_prior_te_`, followed by the base model key in diffusers format, then `lora_down.weight`, `lora_up.weight` and `alpha`.

For the UNet, I have only included LoRA weights for the attention blocks, but the naming scheme would be the same for other blocks.
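As a minimal sketch of what a loader would do with this scheme: build the three keys per module, then apply the standard LoRA merge W' = W + (alpha / rank) · up @ down. The module path below is purely illustrative (not a real stage C key), and whether dots in the base key are kept or converted to underscores kohya-style is an assumption here; dots are kept, matching the literal description above.

```python
import numpy as np

def lora_key(part_prefix: str, base_key: str, suffix: str) -> str:
    # Proposed scheme: "<prefix><base model key in diffusers format>.<suffix>".
    # Assumption: dots in the base key are kept as-is (some kohya-style
    # tools replace them with underscores instead).
    return f"{part_prefix}{base_key}.{suffix}"

# Hypothetical stage C attention module path, for illustration only.
base = "blocks.0.attention.to_q"
keys = [lora_key("lora_prior_unet_", base, s)
        for s in ("lora_down.weight", "lora_up.weight", "alpha")]

def merge_lora(w, down, up, alpha):
    # Standard LoRA merge: W' = W + (alpha / rank) * up @ down,
    # where rank is the inner dimension of the decomposition.
    rank = down.shape[0]
    return w + (alpha / rank) * (up @ down)

rng = np.random.default_rng(0)
w = rng.standard_normal((8, 8)).astype(np.float32)
down = rng.standard_normal((4, 8)).astype(np.float32)  # rank 4
up = rng.standard_normal((8, 4)).astype(np.float32)
merged = merge_lora(w, down, up, alpha=4.0)

print(keys[0])       # lora_prior_unet_blocks.0.attention.to_q.lora_down.weight
print(merged.shape)  # (8, 8)
```

With `alpha` equal to the rank, the scale factor is 1 and the merge reduces to `w + up @ down`.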
Embedding:
The file has a single key, `clip_g`. This is the same format as SDXL embeddings, just without the `clip_l` key.
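To illustrate, here is a sketch of the expected file contents and a trivial validation a loader might perform. The hidden size of 1280 (CLIP-G) and the token count of 4 are illustrative assumptions, not part of the proposal.

```python
import numpy as np

# Sketch of the proposed embedding state dict: a single "clip_g" entry,
# as in SDXL embeddings but with no "clip_l". Shape is (num_tokens,
# hidden_size); 4 and 1280 are assumptions for illustration.
num_tokens, hidden_size = 4, 1280
state_dict = {"clip_g": np.zeros((num_tokens, hidden_size), dtype=np.float32)}

def looks_like_cascade_embedding(sd) -> bool:
    # A loader can distinguish this format from an SDXL embedding by the
    # absence of a "clip_l" key alongside "clip_g".
    return set(sd) == {"clip_g"} and sd["clip_g"].ndim == 2

print(looks_like_cascade_embedding(state_dict))  # True
```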
Example files
I have attached two example files. The example LoRA is for the 1B variant of stage C. Both are untrained; they are just examples to show the file format. examples.zip
For reference, I have opened the same issue for SD.Next https://github.com/vladmandic/automatic/issues/2882