lora-inference
LoRA inference model packaged with Cog
LoRA Replicate inference model
Run inference on Replicate:
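As a hedged sketch, inference can also be called from Python with the official `replicate` client. The model identifier below is a placeholder, and the `lora_url` input field is an assumption about the model's schema, not a confirmed parameter name:

```python
from typing import Optional


def build_input(prompt: str, lora_url: Optional[str] = None) -> dict:
    """Assemble an input payload; `lora_url` is a hypothetical field name."""
    payload = {"prompt": prompt}
    if lora_url is not None:
        payload["lora_url"] = lora_url
    return payload


if __name__ == "__main__":
    # Requires `pip install replicate` and REPLICATE_API_TOKEN in the environment.
    import replicate

    output = replicate.run(
        "OWNER/MODEL:VERSION",  # replace with the actual model and version hash
        input=build_input("monkey scuba diving"),
    )
    print(output)
```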
Training models:
- Easy-to-use model pre-configured for faces, objects, and styles:
- Advanced model with all the parameters:
If you have questions or ideas, please join the #lora channel in the Replicate Discord.
Deployments
You can deploy any model from Hugging Face, or one you trained yourself, and use LoRA with these models.
1. Manual deployment
We have a default Stable Diffusion 1.5 model deployed on Replicate, so you can run inference in a scalable manner. If you would like to launch your own model, run
cog run script/download-weights.py
to download the weights and place them in the cache directory. This saves the base model, which will be mounted into the Cog container.
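A minimal sketch of what a download script like this plausibly does, assuming the `diffusers` library and an illustrative cache directory name (both are assumptions, not the repo's actual code):

```python
import os

# Assumed local cache directory that gets mounted into the Cog container.
CACHE_DIR = "diffusers-cache"


def cache_path(model_id: str, base: str = CACHE_DIR) -> str:
    """Local directory where a Hugging Face model's weights would be stored."""
    return os.path.join(base, model_id.replace("/", "--"))


def download(model_id: str = "runwayml/stable-diffusion-v1-5") -> None:
    # Heavy dependency imported lazily so the path helper stays importable.
    from diffusers import StableDiffusionPipeline  # assumed dependency

    StableDiffusionPipeline.from_pretrained(model_id, cache_dir=CACHE_DIR)


if __name__ == "__main__":
    download()
```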
Either push the model to Replicate (follow these instructions for pushing a model to Replicate) or run
cog predict -i prompt="monkey scuba diving"
to run locally.
2. Deploy & push to Replicate with a bash script
First, make a model at replicate.com. Create one here
Set the following parameters in the deploy_others.sh file:
export MODEL_ID="lambdalabs/dreambooth-avatar" # change this to a model on Hugging Face or your local repository.
export SAFETY_MODEL_ID="CompVis/stable-diffusion-safety-checker"
export IS_FP16=1
export USERNAME="cloneofsimo" # change this to your Replicate username.
export REPLICATE_MODEL_ID="avatar" # change this to your Replicate model ID.
Run it with
bash deploy_others.sh
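The script presumably wraps `cog build`/`cog push` around these variables. As a hedged sketch, here is how such a configuration maps to a Replicate push target on the `r8.im` registry; the field names come from the exports above, and `config_from_env` is an illustrative helper, not part of the repo:

```python
import os


def push_target(username: str, model_id: str) -> str:
    """Registry URL that `cog push` would use for a Replicate model."""
    return f"r8.im/{username}/{model_id}"


def config_from_env(env=os.environ) -> dict:
    """Collect the deploy_others.sh variables into one config dict."""
    return {
        "model_id": env.get("MODEL_ID", "lambdalabs/dreambooth-avatar"),
        "fp16": env.get("IS_FP16", "1") == "1",
        "target": push_target(
            env.get("USERNAME", "cloneofsimo"),
            env.get("REPLICATE_MODEL_ID", "avatar"),
        ),
    }
```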