
LoRA inference model packaged with Cog

Results: 9 lora-inference issues

Steps:
- Get a link from https://civitai.com/models/4361/olivia-diffusion
- Run inference with https://replicate.com/cloneofsimo/lora and a style trained by https://replicate.com/cloneofsimo/lora-training

Actual: Error 'NoneType' object is not iterable
Expected: civitai LoRAs can be used
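One thing worth checking first (an assumption about the failure mode, not confirmed in the report) is whether the link being passed in is a direct .safetensors download or just the HTML model page; a minimal sketch:

```python
# Minimal check: if the URL resolves to an HTML page rather than a weights file,
# the downloader may hand the loader nothing usable. Assumed failure mode only.
import requests

url = "https://civitai.com/models/4361/olivia-diffusion"
resp = requests.get(url, allow_redirects=True, timeout=30)
content_type = resp.headers.get("content-type", "")
print(content_type)  # "text/html" here means this is a page, not a .safetensors file
```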

I just want to launch Kohya-ss LoRA inference on a clean GPU server. Any way I can do this?
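This thread doesn't include a server recipe, but one way to run a Kohya-ss-trained LoRA on a bare GPU box is via the diffusers library rather than this repo's Cog image. A sketch, with the base model and LoRA filename as placeholders:

```python
# After `pip install torch diffusers transformers accelerate safetensors`:
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16  # placeholder base model
).to("cuda")
pipe.load_lora_weights(".", weight_name="my_kohya_lora.safetensors")  # Kohya-format file

image = pipe("a portrait photo", num_inference_steps=30).images[0]
image.save("out.png")
```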

- Just a small dependency update to enable use of different LoRA formats on Replicate, using the [0.17.0 release of the diffusers library](https://github.com/huggingface/diffusers/releases/tag/v0.17.0).
- Fixes #3

Thank you 🙏
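The "different LoRA formats" can usually be told apart by their key names: Kohya/A1111-style files from civitai use `lora_unet_`/`lora_te_` prefixes, while diffusers-native LoRAs use attention-processor-style keys. A small sketch for checking which kind a downloaded file is (filename is a placeholder):

```python
from safetensors.torch import load_file

keys = list(load_file("downloaded_lora.safetensors").keys())
if any(k.startswith(("lora_unet_", "lora_te_")) for k in keys):
    print("Kohya/A1111-style LoRA (needs a recent diffusers release to load)")
else:
    print("diffusers-native LoRA layout")
print(keys[:3])  # peek at a few keys
```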

I use safetensors from civitai and I get an error: "Rank should be the same per model". Am I using it wrong?
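A quick way to see whether the file is the problem is to inspect the per-layer ranks inside it; civitai LoRAs trained with block-wise or mixed dims will show more than one rank, which (an assumption) is what this check rejects:

```python
from safetensors import safe_open

ranks = set()
with safe_open("civitai_lora.safetensors", framework="pt") as f:  # placeholder filename
    for key in f.keys():
        if "lora_down" in key:
            tensor = f.get_tensor(key)
            ranks.add(tensor.shape[0])  # rank is the output dim of the down-projection
print("distinct ranks found:", sorted(ranks))
```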

This issue investigates the gradual change in generated imagery as LoRAs are loaded/unloaded through normal use. First, the assumption: using the same inputs (seed / prompt / LoRA URL...
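The kind of fixed-seed A/B check this investigation implies could look roughly like the following, using diffusers for illustration (the model, file, and the availability of `unload_lora_weights` in the installed release are assumptions):

```python
import numpy as np
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

def generate(seed: int = 1234):
    g = torch.Generator("cuda").manual_seed(seed)
    return np.asarray(pipe("a cat", generator=g, num_inference_steps=20).images[0])

before = generate()
pipe.load_lora_weights(".", weight_name="some_style.safetensors")  # placeholder file
pipe.unload_lora_weights()   # should restore the original weights exactly
after = generate()
print("max pixel difference after load/unload:",
      np.abs(before.astype(int) - after.astype(int)).max())
```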

In the meantime, can you at least provide a conversion script to transform regular civitai safetensors into your LoRA file format?
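The target layout of this repo's LoRA format isn't documented in this thread, so the following only sketches the reading/grouping half such a conversion script would need: walk the Kohya-style keys from a civitai file and collect each module's down/up matrices and alpha. Writing them back out in the target format is the missing piece.

```python
from collections import defaultdict
from safetensors.torch import load_file

state = load_file("civitai_lora.safetensors")  # placeholder filename
modules = defaultdict(dict)
for key, tensor in state.items():
    # Kohya keys look like "lora_unet_<module>.lora_down.weight" / ".lora_up.weight" / ".alpha"
    name, _, part = key.partition(".")
    modules[name][part] = tensor

for name, parts in list(modules.items())[:5]:
    print(name, {p: tuple(t.shape) for p, t in parts.items()})
```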

Output from LoRA is sometimes not easy to control. I believe a face-restore option via the API would help a lot.
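One way such an option could work server-side is a GFPGAN pass over the generated image as a post-processing step; this is only a sketch of the request, not part of the current model, and the weights path is a placeholder:

```python
import cv2
from gfpgan import GFPGANer

restorer = GFPGANer(model_path="GFPGANv1.4.pth", upscale=1)  # placeholder weights path
img = cv2.imread("lora_output.png")
_, _, restored = restorer.enhance(
    img, has_aligned=False, only_center_face=False, paste_back=True
)
cv2.imwrite("lora_output_restored.png", restored)
```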

Should this repo be named cog-lora, similar to cog-stable-diffusion?

I think 768 usually messes up with doubled subjects: ![image](https://user-images.githubusercontent.com/27/216172428-c0922c9b-eb6e-4ec0-b9f2-8fd3313d1e7e.png)