fast-stable-diffusion
Saving every 200 steps, but starting from e.g. step 1000, not from 0
Can we do that? This would let us test the weights better, since low step counts aren't there yet; I'd prefer to turn on step saving later, once it's over 1000. What do you think? Also, a path option so I can save them to a shared drive with unlimited space would help.
On the other hand, we could merge checkpoint 1000 with checkpoint 2000 in the webui, for example, to get an approximate checkpoint 1500, I suppose.
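For reference, the webui's checkpoint merger does a weighted sum of the two models' weights. A minimal sketch of that idea (using plain floats instead of real tensors; the function name is hypothetical):

```python
def merge_state_dicts(sd_a, sd_b, alpha=0.5):
    """Weighted-sum merge: result = (1 - alpha) * A + alpha * B.

    Keys present in only one dict are copied through unchanged.
    """
    merged = {}
    for key in sd_a.keys() | sd_b.keys():
        if key in sd_a and key in sd_b:
            merged[key] = (1 - alpha) * sd_a[key] + alpha * sd_b[key]
        else:
            merged[key] = sd_a.get(key, sd_b.get(key))
    return merged

# Toy example: scalar "weights" standing in for real tensors.
ckpt_1000 = {"layer.weight": 2.0}
ckpt_2000 = {"layer.weight": 4.0}
print(merge_state_dicts(ckpt_1000, ckpt_2000, alpha=0.5))  # {'layer.weight': 3.0}
```

With `alpha=0.5` this gives the midpoint of the two checkpoints, which is the rough intuition behind "merging 1000 and 2000 to get 1500" — though the result is an interpolation of weights, not the model you would have gotten by actually training to step 1500.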
Also, Shiv added a merged VAE; his flag is --pretrained_vae_name_or_path="stabilityai/sd-vae-ft-mse"
What this does is improve the faces. It's an extra ~350 MB model from Hugging Face, and it's merged into the ckpt at the end.
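For context, the flag above is passed to the DreamBooth training script. A sketch of what the invocation might look like, assuming Shiv's train_dreambooth.py — every flag other than --pretrained_vae_name_or_path is a placeholder and will differ in your notebook:

```shell
# Sketch only: paths and model names below are placeholders.
accelerate launch train_dreambooth.py \
  --pretrained_model_name_or_path="runwayml/stable-diffusion-v1-5" \
  --pretrained_vae_name_or_path="stabilityai/sd-vae-ft-mse" \
  --instance_data_dir="/content/data" \
  --output_dir="/content/models/mysession" \
  --max_train_steps=2000
```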
I have merged sd-vae-ft-mse since the release of 1.5; it is merged in the "download model" cell.
As for the checkpoint saving, I will add a save_starting_step option.
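The condition such an option adds to the training loop could be sketched like this (a minimal sketch only; the function and parameter names here are hypothetical, not the actual implementation):

```python
def should_save_checkpoint(step, save_every=200, save_starting_step=1000):
    """Save every `save_every` steps, but only once training has
    reached `save_starting_step` (both parameter names hypothetical)."""
    return step >= save_starting_step and step % save_every == 0

# No saves before step 1000, then one every 200 steps:
assert not should_save_checkpoint(800)
assert should_save_checkpoint(1000)
assert not should_save_checkpoint(1100)
assert should_save_checkpoint(1200)
```

This matches the request above: with `save_starting_step=1000`, the low-step checkpoints that "aren't there yet" are never written out.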
Great! Can you also optionally skip saving the bin files and keep just the ckpt? I'm sure most people don't use the Hugging Face format at all, so it just wastes space on our gdrive when we only download the ckpt. Or maybe make that the default, and if someone does want the bin files, they can tick a box and it will save both ckpt and bin to their gdrive.
The diffusers model isn't saved in gdrive; it is saved in the Colab storage, which is required to make the CKPT.
Well, I do have diffusers folders on my gdrive with the same saving date and time as the ckpt file. How come?
Because you changed the OUTPUT_DIR to your gdrive instead of the local Colab storage. Keep OUTPUT_DIR at the default to avoid filling your gdrive with useless model files.
Ah, thanks for the explanation!
added https://github.com/TheLastBen/fast-stable-diffusion/commit/950b92f983920aade12831dca02c9d646d024fb1
Great! Shiv added inference during training when saving ckpt files. Do you think that would fly in this repo? It helps a ton to see if your subject is there, but his code only allows one prompt, so you can't test multiple styles.
The best way is to use another account to preview the intermediary model with "link_to_trained_model" in the AUT1111 colab during training.
Ah yes, that's a good idea.
I'm running into this storage issue. It appears to be related to what you mentioned above regarding OUTPUT_DIR, but the current version of the fast dreambooth colab doesn't have an OUTPUT_DIR field?
I'm manually moving the interval .ckpt files elsewhere in my gdrive, and then in this temp storage area I'm manually deleting the folders under models >> <SessionIntervals>. Is there a way to auto-clean this up? I don't need the .json files, etc. once the .ckpts are generated.
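The manual cleanup described above could be scripted along these lines. A sketch only: the function name is hypothetical, and the assumed layout (diffusers subfolders plus stray .json/.bin files next to the .ckpt) should be checked against your actual session folder before running anything destructive:

```python
import shutil
from pathlib import Path

def cleanup_session_dir(session_dir):
    """Delete intermediate diffusers folders and non-ckpt files,
    keeping only the .ckpt files in `session_dir`.

    The layout assumed here (subfolders like unet/vae plus stray
    .json files alongside the .ckpt) is an assumption; verify it
    matches your models/<Session> folder first.
    """
    removed = []
    for entry in Path(session_dir).iterdir():
        if entry.is_dir():                 # diffusers folders (unet, vae, ...)
            shutil.rmtree(entry)
            removed.append(entry.name)
        elif entry.suffix != ".ckpt":      # stray .json, .bin, etc.
            entry.unlink()
            removed.append(entry.name)
    return removed
```

Running it on a session folder would leave only the .ckpt files behind; since `shutil.rmtree` is irreversible, it's worth doing a dry run (e.g. printing instead of deleting) the first time.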
The diffusers models aren't stored in gdrive; only the ckpt files are sent there.