lora-inference
Error: Rank should be the same per model
I use a safetensors file from civitai and I get an error: "Rank should be the same per model". Am I using it wrong?
I have the same error, with a lora coming straight from https://replicate.com/cloneofsimo/lora-training.
Same error here, with civitai LoRA models and with ones I trained myself. Using a network_dim (rank) of 32, 16, or 8 makes no difference. In automatic1111 they work as expected, but not in cloneofsimo/lora.
Same here, any help appreciated :)
Since this problem seems to be currently ignored by the Replicate team, I wrote this: https://medium.com/itnext/running-civitai-loras-on-your-replicate-models-32db1c9a9383 I hope it helps someone!
@deleteman I read your article, but it isn't about how to convert other models; it just uses replicate for training. That doesn't really help bring across the vast world of civitAI and other LoRA formats?
"automatic1111 compatible" really means the format built by kohya (https://github.com/bmaltais/kohya_ss), which is now fully supported by automatic1111.
So the problem is more that the main projects recommended by replicate use an unusual format?
automatic accepted: [screenshot of tensor key names not preserved]
replicate / cloneofsimo format: [screenshot of tensor key names not preserved]
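Since the two key layouts above don't survive as text, one way to compare them yourself is to dump each file's tensor names. This is a stdlib-only sketch based on the published safetensors header layout (an 8-byte little-endian length prefix followed by a JSON table); the file path in the usage comment is a placeholder:

```python
import json
import struct


def safetensors_keys(path):
    """List tensor names in a .safetensors file using only the stdlib.

    A safetensors file starts with an 8-byte little-endian header
    length, followed by that many bytes of JSON mapping tensor names
    to their dtype, shape, and data offsets.
    """
    with open(path, "rb") as f:
        (header_len,) = struct.unpack("<Q", f.read(8))
        header = json.loads(f.read(header_len))
    header.pop("__metadata__", None)  # optional metadata entry, not a tensor
    return sorted(header)


# Usage (placeholder path):
# print(safetensors_keys("my_lora.safetensors")[:10])
```

Running this on a kohya/civitai file and on a cloneofsimo file should make the naming mismatch obvious at a glance.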
Hey @dcsan, not sure exactly what you mean by your last statement. As for the article, yes, you're right. It simply shows an alternative, but I would LOVE to be able to use CivitAI loras directly on replicate's models.
Do you think there is a way to translate between the automatic-accepted format and replicate's?
I was looking around for converters but didn't find anything, and I don't really know the formats well enough.
It seems the LoRA training method is common, but there's no agreed format for the safetensors? automatic is a lot more prevalent than what's on replicate, though. Another option is to port one of the other inference notebooks over to replicate, or to run inference on huggingface or another service that supports the auto1111 formats.
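For what it's worth, the error message reads as though the loader infers a rank for each LoRA layer and then asserts that they all match. Assuming kohya-style key names (an assumption: the down-projection stored as `<module>.lora_down.weight` with shape `[rank, in_features]`), you can check the ranks in a file with nothing but the stdlib, by reading the safetensors JSON header directly:

```python
import json
import struct


def lora_ranks(path):
    """Map each kohya-style LoRA layer to its inferred rank.

    Reads the safetensors header (8-byte little-endian length prefix,
    then JSON) and takes shape[0] of every '*.lora_down.weight' tensor
    as that layer's rank. The key-name convention is an assumption
    about the kohya format; cloneofsimo files name tensors differently.
    """
    with open(path, "rb") as f:
        (header_len,) = struct.unpack("<Q", f.read(8))
        header = json.loads(f.read(header_len))
    header.pop("__metadata__", None)  # optional metadata entry, not a tensor
    return {
        name: meta["shape"][0]
        for name, meta in header.items()
        if name.endswith(".lora_down.weight")
    }


# If len(set(lora_ranks(path).values())) > 1, a loader that requires
# one rank per model would presumably reject the file.
```

This at least tells you whether a given file mixes ranks, or whether the error is purely about the key naming.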
I see another training notebook here, from the same user who has provided many interesting LoRA inference notebooks (with masks etc.):
https://replicate.com/zhouzhengjun/lora_train_base