fast-stable-diffusion
fast-DreamBooth Colab not creating good results compared to Runpod
The Google Colab is more convenient and has been easier to use; however, the RunPod tutorial by Aitrepreneur that I followed gives drastically different results that work much better for what I'm trying to generate. I trained a character in both, and the fast-DreamBooth Colab only outputs the character and ignores the other prompt keywords, while the RunPod model blends the character's face with the prompt and style flawlessly. Is it because it's a 1.4 model?
I'm training with 20 images in 500-step increments. At 1000 steps it already stops responding to other prompt keywords unless I turn the CFG scale down to 1-3, and as soon as I hit 1500 steps it's hard to get any keywords in at all unless I'm doing img2img. Am I doing something wrong, or should I switch back to the 1.4 model?
Learning rate 0.00001, text encoder trained at 50%.
set the text encoder to 10% and do 6000 steps
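For what it's worth, here is a minimal sketch of how those numbers relate, assuming the text encoder percentage is interpreted as the fraction of the total steps spent training the text encoder (variable names are illustrative, not the actual Colab form fields):

```python
# Rough sketch of how the suggested settings relate to each other.
# These variable names are assumptions, not the real fast-DreamBooth fields.

instance_images = 20            # training images mentioned above
unet_training_steps = 6000      # suggested total number of steps
text_encoder_percentage = 10    # "set the text encoder to 10%"
learning_rate = 0.00001         # value from the earlier reply

# If the percentage is read as the fraction of the run spent training
# the text encoder, it stops early while the UNet trains for the full run.
text_encoder_steps = unet_training_steps * text_encoder_percentage // 100

print(f"UNet steps:         {unet_training_steps}")                      # 6000
print(f"Text encoder steps: {text_encoder_steps}")                       # 600
print(f"Steps per image:    {unet_training_steps // instance_images}")   # 300
print(f"Learning rate:      {learning_rate}")
```

In other words, the text encoder would only train for roughly 600 of the 6000 steps, which is presumably why lowering it from 50% helps the model keep responding to the rest of the prompt.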
I'll give it a shot and see if this helps. Thanks for the input and for keeping tabs on issues all the time; I know it's really hard to keep up with everyone and their personal issues.
See also:
- https://github.com/TheLastBen/fast-stable-diffusion/issues/80
- https://github.com/TheLastBen/fast-stable-diffusion/pull/150
- https://github.com/TheLastBen/fast-stable-diffusion/issues/448
- https://github.com/TheLastBen/fast-stable-diffusion/issues/518