fast-stable-diffusion
fp16 being incorrectly set for checkpointing even when fp32 training
Hello, thanks for the amazing repository.
I found that your fork of the diffusers library (https://github.com/TheLastBen/diffusers/blob/main/examples/dreambooth/train_dreambooth.py#L742) passes --half even when training in fp32 (i.e. with the fp16 option unchecked).
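For context, here is a minimal sketch of what a --half flag typically does in a checkpoint-conversion script like this one: every tensor in the state dict is cast to fp16 before the checkpoint is written. The filenames are placeholders and this is illustrative, not the exact code from the fork.

```python
import torch

# Load the full-precision checkpoint (assumes the usual SD layout
# where weights live under a "state_dict" key).
ckpt = torch.load("model.ckpt", map_location="cpu")

half = True  # what passing --half amounts to
if half:
    # Cast every weight tensor to fp16, roughly halving file size.
    ckpt["state_dict"] = {k: v.half() if torch.is_tensor(v) else v
                          for k, v in ckpt["state_dict"].items()}

torch.save(ckpt, "model-fp16.ckpt")
```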
Hi, thanks. I set it to fp16 only for the intermediary checkpoints to save Google Drive space, but the final checkpoint will be fp32.
@TheLastBen If I understand correctly, this file also converts to half precision when saving the final checkpoint (https://github.com/TheLastBen/fast-stable-diffusion/blob/main/Dreambooth/convertosd.py#L226).
For my use case, I have a local GPU that only handles fp32, so I cannot use any of the intermediate checkpoints, which are in fp16. For now, I have forked your repo and commented out --half, and it seems to work (though I still need to test more).
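As a possible workaround without re-training, an fp16 checkpoint can be cast back to fp32 so it loads on an fp32-only GPU. Note this only restores the dtype; precision already discarded in the fp16 round-trip is not recovered. Filenames are placeholders, and this assumes the standard "state_dict" checkpoint layout.

```python
import torch

# Load the fp16 checkpoint on CPU and upcast every tensor to fp32.
ckpt = torch.load("model-fp16.ckpt", map_location="cpu")
ckpt["state_dict"] = {k: v.float() if torch.is_tensor(v) else v
                      for k, v in ckpt["state_dict"].items()}

# Save a checkpoint usable on hardware without fp16 support.
torch.save(ckpt, "model-fp32.ckpt")
```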
Edit: Whoops, my bad, I just noticed that line (L226) is removed in the Colab notebook.