ai-toolkit
Help! How do you run inference with a saved FLUX LoRA model?
It seems that something in GenerateProcess handles image generation during training. I was wondering if there are any resources we could look at for handling image generation with the FLUX LoRA model that was just created.
You can upload it as a model to Hugging Face and use the Inference API, if that works for you. That way you can also use it with a diffusers pipeline.
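For reference, a minimal sketch of the diffusers route, assuming the LoRA was trained against FLUX.1-dev; the model ID, file path, and prompt below are placeholders, and `load_lora_weights` accepts either a Hub repo ID or a local `.safetensors` path:

```python
# Minimal sketch: run a trained FLUX LoRA with diffusers (paths/IDs are assumptions)
import torch
from diffusers import FluxPipeline

# Base model the LoRA was trained against (assumed: FLUX.1-dev)
pipe = FluxPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-dev",
    torch_dtype=torch.bfloat16,
)
pipe.to("cuda")

# Works with a Hub repo ID or a local path to the .safetensors file
# produced by training (hypothetical path below)
pipe.load_lora_weights("output/my_flux_lora/my_flux_lora.safetensors")

image = pipe(
    "a photo of a corgi in a field",  # example prompt
    num_inference_steps=28,
    guidance_scale=3.5,
).images[0]
image.save("lora_sample.png")
```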
Is this the only way? I want to run inference on my own server, not on Hugging Face.