raymondgp
> 1. faceswap should be the last stage and not the first, otherwise the connected LoRAs will distort the portrait resemblance. Actually this is a bug!
> 2. need a...
I can confirm this works well with Red Hat SSO (Keycloak). I hope it eventually gets merged to main; this is a must for on-prem implementations.
@nivedin Hello, do you have any high-level idea of when this PR will be approved? It's a significant improvement for on-premises deployments, and @becelot has done very good work here.
I'll chime in as well: 2.1 is training, but only without the text encoder. Selecting the text encoder gives a CUDA out-of-memory error. Windows 10, RTX 3060 12GB, FP16, xFormers.
I just tried the latest commit, 12GB 3060, Win10. EMA with text training OOMs. LoRA with text training is running; I'll leave it running overnight, but now I'm super...
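In case it helps others hitting the text-encoder OOM on 12GB cards, here is a minimal memory-saving sketch, assuming the diffusers/transformers training objects (the model ID and learning rate below are placeholders, not the repo's defaults). Gradient checkpointing on both the UNet and the text encoder, plus an 8-bit optimizer, are the usual levers:

```python
import torch
import bitsandbytes as bnb
from diffusers import UNet2DConditionModel
from transformers import CLIPTextModel

# Placeholder model ID; substitute whichever base you are fine-tuning.
model_id = "stabilityai/stable-diffusion-2-1-base"

unet = UNet2DConditionModel.from_pretrained(model_id, subfolder="unet")
text_encoder = CLIPTextModel.from_pretrained(model_id, subfolder="text_encoder")

# Gradient checkpointing trades recompute for a large activation-memory saving.
unet.enable_gradient_checkpointing()
text_encoder.gradient_checkpointing_enable()

# 8-bit Adam keeps optimizer state far smaller than full-precision AdamW.
optimizer = bnb.optim.AdamW8bit(
    list(unet.parameters()) + list(text_encoder.parameters()),
    lr=1e-6,  # placeholder learning rate
)
```

No promise this makes EMA + text training fit in 12GB, but it's the first thing I would try.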
Reporting back: latest build, 3060 12GB/Win10, LoRA, 10K steps saved at 1K intervals, 62 concept images, LR 1.1e-6, 260 class images, v2.1/512 res. After correcting scheduler_config/_lora.yaml, it finished without trouble. The concept was not well learned...
@cloneofsimo Indeed! I ran the first 10K with LoRA and text training at 4e-4; my subject is there, not fully learned, but I will work on improving this. Amazing, thank you...
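For anyone loading these LoRA checkpoints at inference time, a sketch assuming the early lora_diffusion interface from @cloneofsimo's repo (monkeypatch_lora / tune_lora_scale; the weight path and prompt token are placeholders):

```python
import torch
from diffusers import StableDiffusionPipeline
from lora_diffusion import monkeypatch_lora, tune_lora_scale

pipe = StableDiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-2-1-base", torch_dtype=torch.float16
).to("cuda")

# Patch the trained LoRA weights into the UNet (placeholder path).
monkeypatch_lora(pipe.unet, torch.load("lora_weight.pt"))

# Dial the LoRA strength up or down without retraining.
tune_lora_scale(pipe.unet, 0.8)

image = pipe("photo of sks person", num_inference_steps=30).images[0]
image.save("lora_sample.png")
```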
Hi, I'm able to train the 512-base 2.1 model on Colab, preservation on, 63 concept images, 1200 class images. The samples come out fantastic from the 1K-step checkpoint; the concept...
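If anyone wants to sample outside the notebook, a minimal sketch assuming the diffusers checkpoint layout the Colab saves (the directory and instance token are placeholders for your own run):

```python
import torch
from diffusers import StableDiffusionPipeline

# Placeholder path to a saved step checkpoint from the Colab run.
ckpt_dir = "/content/stable_diffusion_weights/zwx/1000"

pipe = StableDiffusionPipeline.from_pretrained(
    ckpt_dir, torch_dtype=torch.float16
).to("cuda")

image = pipe(
    "photo of zwx person",  # placeholder instance token
    num_inference_steps=30,
    guidance_scale=7.5,
).images[0]
image.save("sample.png")
```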
I kept experimenting. The pipeline/inference inside Shivam's Colab works perfectly. I can't tell if the ckpt conversion shreds the model or if it's the lack of a proper .yaml file...
Update here: training a 2.1 model in Dreambooth and using it in the automatic1111 webui works. It seems the Colab conversion to ckpt produces a bad Stable Diffusion model, or I'm...
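One way to sanity-check whether the conversion is the culprit: inspect the converted ckpt's key groups before blaming the webui. A sketch assuming a standard original-format checkpoint (the file path is a placeholder); a converted SD 2.x model should contain UNet, VAE, and text-encoder weight groups, and note that in automatic1111 a 2.x ckpt also needs the matching v2 inference yaml placed next to it with the same base filename:

```python
import torch

# Placeholder path to the converted checkpoint.
ckpt = torch.load("model.ckpt", map_location="cpu")
sd = ckpt.get("state_dict", ckpt)

# Count tensors per major weight group; a zero count points at a bad conversion.
for prefix in ("model.diffusion_model.",  # UNet
               "first_stage_model.",      # VAE
               "cond_stage_model."):      # text encoder
    n = sum(1 for k in sd if k.startswith(prefix))
    print(f"{prefix:<24} {n} tensors")
```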