
A general fine-tuning kit geared toward diffusion models.

Results: 47 SimpleTuner issues, sorted by recently updated

When training with EMA, the validations follow upstream Diffusers: we temporarily overwrite the unet / transformer parameters in the base model with the EMA weights before running inference. However, ...
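A minimal sketch of that swap pattern, using the `EMAModel` helper from `diffusers.training_utils` (its `store`/`copy_to`/`restore` methods are upstream API; the surrounding trainer objects are assumptions, not SimpleTuner's actual code):

```python
from diffusers.training_utils import EMAModel

def run_validation_with_ema(unet, ema_unet: EMAModel, run_inference):
    # Stash the current online weights so training can resume unchanged.
    ema_unet.store(unet.parameters())
    # Temporarily overwrite the online weights with the EMA shadow weights.
    ema_unet.copy_to(unet.parameters())
    try:
        run_inference()  # validation / sample generation sees EMA weights
    finally:
        # Put the online weights back, even if inference raises.
        ema_unet.restore(unet.parameters())
```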

documentation
enhancement
good first issue

A user might want to manually score their dataset and then use those values instead of the single faked value. On the other hand, it can also be useful to import a score...
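As a rough sketch of what importing user-supplied scores could look like, assuming a sidecar `scores.json` mapping filenames to scores and a `DEFAULT_SCORE` standing in for the single faked value (both are assumptions, not SimpleTuner's actual format):

```python
import json
from pathlib import Path

DEFAULT_SCORE = 5.0  # hypothetical stand-in for the single faked value

def load_scores(dataset_dir: str, sidecar: str = "scores.json") -> dict[str, float]:
    """Map each image to a user-supplied score, falling back to the default."""
    root = Path(dataset_dir)
    path = root / sidecar
    user_scores = json.loads(path.read_text()) if path.exists() else {}
    return {
        str(image): float(user_scores.get(image.name, DEFAULT_SCORE))
        for image in root.glob("*.png")
    }
```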

enhancement
help wanted
good first issue

This would be neat, so we could tell how much we managed to improve over the base model when using a small training set. I managed to hack it on my...
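One way to get such a comparison with diffusers is to render the same prompt and seed twice, once with the LoRA disabled (recent diffusers releases expose `disable_lora()`/`enable_lora()` on the pipeline) and once with it enabled; the model id and adapter path below are placeholders:

```python
import torch
from diffusers import DiffusionPipeline

pipe = DiffusionPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-dev", torch_dtype=torch.bfloat16  # placeholder base
).to("cuda")
pipe.load_lora_weights("path/to/lora")  # placeholder adapter path

prompt = "a validation prompt"

pipe.disable_lora()  # base-model reference image
base_image = pipe(prompt, generator=torch.Generator("cuda").manual_seed(42)).images[0]

pipe.enable_lora()  # same seed, so the two images differ only by the adapter
tuned_image = pipe(prompt, generator=torch.Generator("cuda").manual_seed(42)).images[0]
```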

enhancement
help wanted
good first issue

It would be nice to include the Civitai API client as an optional dependency, as a way to export the final checkpoint straight to the site. It's a low priority...
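A sketch of what the export step could look like; the endpoint, field names, and auth header below are loud placeholders rather than the real Civitai API, which an actual integration would take from their documented client:

```python
import requests

CIVITAI_UPLOAD_URL = "https://civitai.example/api/upload"  # placeholder, NOT the real endpoint

def export_checkpoint(checkpoint_path: str, api_key: str) -> None:
    """Upload a finished checkpoint file; endpoint and fields are hypothetical."""
    with open(checkpoint_path, "rb") as f:
        response = requests.post(
            CIVITAI_UPLOAD_URL,
            headers={"Authorization": f"Bearer {api_key}"},
            files={"file": f},
        )
    response.raise_for_status()
```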

enhancement
help wanted
good first issue

@komninoschatzipapas can you look into this one?

bug

Currently the trainer crashes when saving Flux LoRA checkpoints because CUDA_HOME is missing for the newer DeepSpeed. I'm on the latest main branch, with all the updated dependencies, AFAIK...
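DeepSpeed's JIT op builder consults `CUDA_HOME` (via torch's cpp_extension machinery), so a guard like the following, run before the save path triggers a build, is one hedge; deriving the toolkit root from `nvcc` is an assumption about the local install:

```python
import os
import shutil

if "CUDA_HOME" not in os.environ:
    nvcc = shutil.which("nvcc")
    if nvcc:
        # nvcc lives in <toolkit>/bin/nvcc, so step up two levels.
        os.environ["CUDA_HOME"] = os.path.dirname(os.path.dirname(nvcc))
    else:
        raise RuntimeError("CUDA_HOME is unset and nvcc was not found on PATH")
```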

bug
help wanted
good first issue
1 / 0 magic
regression

After setting up the repo with the [FLUX quickstart guide](https://github.com/bghira/SimpleTuner/blob/main/documentation/quickstart/FLUX.md), I ran a training session overnight on my RTX 4090, only to find that it had died somewhere along the way...

bug
documentation
1 / 0 magic
work-in-progress
pending

The workaround is to continue training without the base model being quantised, but obviously that's difficult to impossible. The bug is seemingly upstream in PEFT.
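A sketch of that workaround, reloading the base model without any quantisation config and reattaching the saved adapter (the model id and adapter path are placeholders); the memory cost of the unquantised base is what makes this difficult in practice:

```python
import torch
from diffusers import DiffusionPipeline

# Reload the base WITHOUT quantisation, then attach the saved LoRA and
# resume training from it. bf16 Flux-sized weights need far more VRAM
# than a quantised base, hence "difficult to impossible".
pipe = DiffusionPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-dev",  # placeholder base model
    torch_dtype=torch.bfloat16,      # no quantization_config here
)
pipe.load_lora_weights("path/to/saved/lora")  # placeholder adapter path
```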

help wanted
regression
upstream-bug

- Total optimization steps = 3000
- Total optimization steps remaining = 3000
Epoch 1/3, Steps: 0%| | 0/3000 [00:00

regression
pending