michp
There seems to be some kind of issue when using the Gradio share link in Colab: it can't read the .toml file correctly. It works fine when creating a tunnel using Cloudflare...
> I tried ngrok on Colab with the latest version of Kohya and it still doesn't work. Are you sure Cloudflare works? Because I wasted the entire day trying to get Kohya to...
> > #1866 > > I tried both the LoRA algorithm and the GLoRA+DoRA algorithm on SDXL: no noticeable decrease in VRAM usage. Speeds are the same with and...
> wow nice > > @michP247 do you find this better than other optimizers? Will check results later; I haven't actually completed any training in my tests, just did a quick...
> Thanks for this pull request! > > But I think it may work with the `--optimizer_type` and `--optimizer_args` options, like `--optimizer_type "prodigyplus.ProdigyPlusScheduleFree" --optimizer_args "fused_back_pass=True"` without any additional implementation. Have...
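For anyone wondering why a dotted `--optimizer_type` can work without extra code, here is a minimal sketch of the dynamic-import pattern involved. It is an illustration only; the actual option parsing inside sd-scripts may differ in detail.

```python
import ast
import importlib


def resolve_optimizer_class(optimizer_type: str):
    """Resolve a dotted optimizer path such as
    'prodigyplus.ProdigyPlusScheduleFree' into a class object.

    Sketch of the general dynamic-import pattern only; the real
    option handling inside sd-scripts may differ.
    """
    module_name, class_name = optimizer_type.rsplit(".", 1)
    module = importlib.import_module(module_name)
    return getattr(module, class_name)


def parse_optimizer_args(optimizer_args: list[str]) -> dict:
    """Turn strings like 'fused_back_pass=True' into {'fused_back_pass': True}."""
    kwargs = {}
    for arg in optimizer_args:
        key, _, value = arg.partition("=")
        try:
            kwargs[key] = ast.literal_eval(value)  # True, 1, 0.5, ...
        except (ValueError, SyntaxError):
            kwargs[key] = value  # fall back to the raw string
    return kwargs


# Example usage (assumes the prodigy-plus-schedule-free package is installed):
# cls = resolve_optimizer_class("prodigyplus.ProdigyPlusScheduleFree")
# optimizer = cls(model.parameters(), lr=1.0,
#                 **parse_optimizer_args(["fused_back_pass=True"]))
```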
> Hello all, and thanks for your interest in the optimiser. I made a best-effort attempt to match how Kohya had implemented fused backward pass for Adafactor, in the hope...
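For context, the general fused-backward-pass idea being discussed can be sketched roughly as below. This is only an illustration: a toy SGD update stands in for the real per-parameter optimizer step, and it is not the actual Adafactor or Prodigy Plus code.

```python
import torch


def attach_fused_backward_pass(model: torch.nn.Module, lr: float = 1e-3) -> None:
    """Update each parameter as soon as its gradient has been accumulated,
    then free that gradient immediately, instead of holding every gradient
    in memory until a single optimizer.step() after backward().

    A toy SGD update stands in for the real per-parameter optimizer step.
    Requires PyTorch >= 2.1 for register_post_accumulate_grad_hook.
    """

    def hook(param: torch.Tensor) -> None:
        with torch.no_grad():
            # In a real fused implementation this line would call the
            # optimizer's per-parameter step for `param` instead.
            param.add_(param.grad, alpha=-lr)
        # Drop the gradient right away so peak VRAM holds roughly one
        # gradient tensor at a time rather than all of them at once.
        param.grad = None

    for param in model.parameters():
        if param.requires_grad:
            param.register_post_accumulate_grad_hook(hook)
```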