d8ahazard
Yeeehaw! This has been fixed in the latest version. If a yaml file exists alongside the source checkpoint, it will be copied to the working dir and used when generating...
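For anyone curious, the logic is roughly the following sketch (function and variable names here are illustrative, not the extension's actual code):

```python
import os
import shutil
from typing import Optional

def copy_sidecar_yaml(checkpoint_path: str, working_dir: str) -> Optional[str]:
    """If a .yaml config exists alongside the source checkpoint, copy it
    into the working dir so it gets picked up when generating."""
    yaml_path = os.path.splitext(checkpoint_path)[0] + ".yaml"
    if os.path.exists(yaml_path):
        dest = os.path.join(working_dir, os.path.basename(yaml_path))
        shutil.copy2(yaml_path, dest)
        return dest
    return None
```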
@leppie - You alive? Got a few questions regarding profiling for you, if you could be so kind as to hop on Discord when you're free. ;)
> for me replacing `e` with `str(e)` in line 409 of sd_dreambooth_extension/dreambooth/train_dreambooth.py seems to fix it. Can you pull the latest version and lmk where to apply this, if it's...
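For context, the fix boils down to stringifying the exception before handing it off, since whatever consumes it downstream expects a string rather than an Exception object. A minimal sketch (the surrounding code is illustrative, not the actual contents of train_dreambooth.py):

```python
def format_training_error(e: Exception) -> str:
    # Passing the raw Exception object can break string formatting or
    # serialization downstream; convert it explicitly with str(e).
    return f"Exception while training: {str(e)}"

try:
    raise RuntimeError("CUDA out of memory")  # stand-in for a training failure
except Exception as e:
    print(format_training_error(e))  # previously the bare `e` was passed
```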
Try with the latest version, please. It should be possible with fp16/Adam/xformers or flash attention on 12GB.
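For reference, that combination looks roughly like this in a Shivam-style training loop (a sketch only; `dataloader` and `training_step` are hypothetical stand-ins, and this is not the extension's actual code):

```python
import torch
import bitsandbytes as bnb
from diffusers import UNet2DConditionModel

# Load the UNet and enable memory-efficient attention (requires xformers).
unet = UNet2DConditionModel.from_pretrained(
    "runwayml/stable-diffusion-v1-5", subfolder="unet"
).to("cuda")
unet.enable_xformers_memory_efficient_attention()

optimizer = bnb.optim.AdamW8bit(unet.parameters(), lr=5e-6)  # 8-bit Adam
scaler = torch.cuda.amp.GradScaler()  # fp16 mixed precision

for batch in dataloader:  # hypothetical dataloader
    with torch.autocast("cuda", dtype=torch.float16):
        loss = training_step(unet, batch)  # hypothetical loss computation
    scaler.scale(loss).backward()
    scaler.step(optimizer)
    scaler.update()
    optimizer.zero_grad(set_to_none=True)
```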
So, while I'm always hesitant to say "no" to a feature request...this is a BIG one. Like, porting Shivam's repo isn't too tedious, because I'm *mostly* just exposing the args...
Yeah, so, we've had a lot of improvements to VRAM and speed, but I don't see ColossalAI in the cards. As such, I'm gonna close this.
Another one I've been thinking of as potentially useful...
Implemented with e5908ab.
This isn't really that tough to implement. Add a flag on "start training" that tells it whether to use the checkpoint for class gen or the pipeline. Only trick would...
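A minimal sketch of what that flag could look like (names here are hypothetical, not the extension's actual config):

```python
from dataclasses import dataclass

@dataclass
class TrainingConfig:
    # Hypothetical flag: when True, load the source checkpoint for class
    # image generation; when False, reuse the working diffusers pipeline.
    use_checkpoint_for_class_gen: bool = False

def get_class_gen_pipeline(config: TrainingConfig, checkpoint_path: str, working_pipeline):
    if config.use_checkpoint_for_class_gen:
        # The "only trick": the checkpoint has to be converted into a usable
        # pipeline first, which costs extra time and VRAM.
        return load_pipeline_from_checkpoint(checkpoint_path)  # hypothetical loader
    return working_pipeline
```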
Class batch size *only* controls the speed of class image generation, fwiw. ;)
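To illustrate: class images are generated in chunks of `class_batch_size` per pipeline call, so a bigger batch just means fewer calls (a sketch assuming a diffusers-style pipeline, not the extension's actual code):

```python
def generate_class_images(pipeline, prompt: str, num_images: int, class_batch_size: int):
    """Class batch size only changes how many images come out of each
    pipeline call -- i.e. generation speed -- nothing about training itself."""
    images = []
    while len(images) < num_images:
        n = min(class_batch_size, num_images - len(images))
        images.extend(pipeline(prompt, num_images_per_prompt=n).images)
    return images
```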