SimpleTuner
A general fine-tuning kit geared toward diffusion models.
When training with EMA, the validations follow upstream Diffusers: we temporarily overwrite the unet / transformer parameters in the base model with the EMA weights before running inference. However,...
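For reference, a minimal sketch of that swap using the `store` / `copy_to` / `restore` helpers on diffusers' `EMAModel`; the `transformer` and `run_inference` names are hypothetical stand-ins for the trainer's own objects:

```python
# Hedged sketch of the EMA weight swap described above.
from diffusers.training_utils import EMAModel

def run_validation_with_ema(transformer, ema_model: EMAModel, run_inference):
    # Stash the current training weights so they can be restored afterwards.
    ema_model.store(transformer.parameters())
    # Temporarily overwrite the model with the EMA-averaged weights.
    ema_model.copy_to(transformer.parameters())
    try:
        run_inference()  # validation samples come from the EMA weights
    finally:
        # Put the original training weights back before resuming training.
        ema_model.restore(transformer.parameters())
```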
A user might want to manually score their dataset and then use those values instead of the single faked value. On the other hand, it can also be useful to import a score...
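As a rough illustration of what importing scores could look like, here is a hypothetical sketch assuming a JSON sidecar mapping filenames to scores; the file name, format, and fallback value are all assumptions, not SimpleTuner's actual behaviour:

```python
# Hypothetical per-sample score import with a constant fallback.
import json
from pathlib import Path

DEFAULT_SCORE = 5.0  # stand-in for the single faked value mentioned above

def load_scores(dataset_dir: str) -> dict[str, float]:
    # scores.json is an assumed sidecar format, e.g. {"img_001.png": 7.5}
    sidecar = Path(dataset_dir) / "scores.json"
    if sidecar.exists():
        return json.loads(sidecar.read_text())
    return {}

def score_for(sample_name: str, scores: dict[str, float]) -> float:
    # Fall back to the constant when a sample wasn't manually scored.
    return scores.get(sample_name, DEFAULT_SCORE)
```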
It would be neat so we could tell how much we managed to improve over the base model when using a small training set. I managed to hack it on my...
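One hedged sketch of how such a baseline comparison could work, assuming the LoRA is applied through PEFT, whose `disable_adapter()` context manager temporarily routes forward passes through the unmodified base weights; `model` and `generate_samples` are hypothetical stand-ins for the trainer's own objects:

```python
# Sketch of a "base model vs. fine-tune" validation pass under PEFT.
def validate_with_baseline(model, generate_samples):
    # With the adapter disabled, outputs reflect the untouched base weights.
    with model.disable_adapter():
        base_samples = generate_samples(model)
    # With the adapter active again, outputs reflect the fine-tune.
    tuned_samples = generate_samples(model)
    return base_samples, tuned_samples
```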
It would be nice to include the Civitai API client as an optional dependency, as a way to export the final checkpoint straight to the site. It's a low priority...
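A sketch of the optional-dependency pattern this describes; the `civitai` module and its `upload_model` call are hypothetical placeholders, since the client's real interface isn't confirmed here:

```python
# Optional-import guard so the trainer works without the extra installed.
try:
    import civitai  # hypothetical optional extra
except ImportError:
    civitai = None

def export_checkpoint(path: str, api_key: str) -> None:
    if civitai is None:
        print("Skipping Civitai export: optional client not installed.")
        return
    # Placeholder call; the real client's upload interface may differ.
    civitai.upload_model(file_path=path, token=api_key)
```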
This will wait a bit longer to shake issues out before merging.
@komninoschatzipapas can you look into this one?
Currently the trainer crashes when saving Flux LoRA checkpoints because CUDA_HOME is missing for the newer DeepSpeed. I'm on the latest main branch, with all the updated dependencies afaik....
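DeepSpeed's op builder needs `CUDA_HOME` to locate the CUDA toolkit when it compiles its extensions, so one possible pre-flight check looks like the sketch below; deriving the path from `nvcc` is an assumption and varies by installation:

```python
# Hedged sketch: set CUDA_HOME from nvcc's location if it isn't already set.
import os
import shutil

if "CUDA_HOME" not in os.environ:
    nvcc = shutil.which("nvcc")
    if nvcc:
        # e.g. /usr/local/cuda/bin/nvcc -> /usr/local/cuda
        os.environ["CUDA_HOME"] = os.path.dirname(os.path.dirname(nvcc))
    else:
        raise RuntimeError("CUDA toolkit not found; DeepSpeed needs CUDA_HOME.")
```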
After setting up the repo with the [FLUX quickstart guide](https://github.com/bghira/SimpleTuner/blob/main/documentation/quickstart/FLUX.md), I ran a training session overnight with my RTX 4090, only to find that it had died somewhere along the way....
The workaround is to continue training without the base model being quantised, but obviously that's difficult to impossible. The bug is seemingly upstream in PEFT.
```
- Total optimization steps = 3000
- Total optimization steps remaining = 3000
Epoch 1/3, Steps:   0%|          | 0/3000 [00:00
```