> https://arxiv.org/abs/2306.00306 https://arxiv.org/abs/2402.19215 https://arxiv.org/abs/2211.16152 https://arxiv.org/abs/2407.12538
>
> Looking into this but some related papers.

Thanks for sharing these. It would be great if this gets added for Flux training. AI...
@rockerBOO Any update on your PR?
@kohya-ss @rockerBOO SimpleTuner has already started implementing this. Can you please add this too? I am more used to sd-scripts.
@rockerBOO @kohya-ss AI Toolkit has also implemented this.
> Thank you for your suggestion. HiDream-I1 is a very interesting model. It's good that other trainers have already implemented it, so we can refer to them.
>
> However,...
> I've almost finished the work related to FramePack, so I'd like to start working on sd-scripts issues and PRs, as well as HiDream-I1.
>
> I can't promise when...
@kohya-ss Please add this. I see you're very active with musubi tuner, but please don't forget us.
> I'm sorry for the delay. I'll try to find some time to work on Lumina and HiDream.

@kohya-ss SimpleTuner caches the text encoder and VAE outputs first and then...
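For reference, here is a minimal sketch of that two-pass caching idea (this is not SimpleTuner's actual code; the SD 1.5 checkpoint, file paths, and the sample list are placeholders, and Flux/HiDream would use different encoders):

```python
# Pass 1: cache text-encoder hidden states and VAE latents to disk.
# Pass 2 (the training loop) then reads only the cached tensors, so the
# text encoder and VAE never have to stay in VRAM alongside the transformer.
import torch
from transformers import CLIPTokenizer, CLIPTextModel
from diffusers import AutoencoderKL
from torchvision.io import read_image, ImageReadMode
from torchvision.transforms import functional as TF

model_id = "runwayml/stable-diffusion-v1-5"   # placeholder checkpoint
device = "cuda" if torch.cuda.is_available() else "cpu"

tokenizer = CLIPTokenizer.from_pretrained(model_id, subfolder="tokenizer")
text_encoder = CLIPTextModel.from_pretrained(model_id, subfolder="text_encoder").to(device)
vae = AutoencoderKL.from_pretrained(model_id, subfolder="vae").to(device)

samples = [("dog.png", "a photo of a dog")]   # placeholder (image, caption) pairs

with torch.no_grad():
    for img_path, caption in samples:
        # Cache the caption's text-encoder hidden states.
        tokens = tokenizer(caption, padding="max_length", truncation=True,
                           max_length=tokenizer.model_max_length,
                           return_tensors="pt").to(device)
        text_emb = text_encoder(**tokens).last_hidden_state

        # Cache the image's VAE latents (scaled as in SD training).
        image = read_image(img_path, mode=ImageReadMode.RGB).float() / 127.5 - 1.0
        image = TF.resize(image, [512, 512]).unsqueeze(0).to(device)
        latents = vae.encode(image).latent_dist.sample() * vae.config.scaling_factor

        torch.save({"text_emb": text_emb.cpu(), "latents": latents.cpu()},
                   img_path + ".cache.pt")

# After this pass, both encoders can be freed before the transformer is loaded.
```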
What network dim and alpha are you using? It happened to me as well; it turns out Flux learns even the bad quality from the images if you are using 128...
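As a rough illustration of why the choice matters: in a LoRA layer the learned update is scaled by alpha/dim, and the adapter's capacity grows with dim, so a rank-128 LoRA has far more room to absorb dataset artifacts than a low-rank one. The sketch below is generic (the 3072 width and the dim/alpha pairs are just example values, not recommendations from this thread):

```python
# Shows how network_dim (rank) and network_alpha enter a LoRA update:
# delta_W = (alpha / dim) * B @ A, with trainable parameters growing
# linearly in dim.
import torch

def lora_delta(in_features: int, out_features: int, dim: int, alpha: float):
    A = torch.randn(dim, in_features) * 0.01   # "down" projection
    B = torch.zeros(out_features, dim)         # "up" projection, zero-initialized
    scale = alpha / dim                        # kohya-style scaling
    delta_W = scale * (B @ A)                  # added to the frozen base weight
    n_params = A.numel() + B.numel()
    return delta_W, n_params

for dim, alpha in [(16, 16), (32, 16), (128, 128)]:
    _, n = lora_delta(3072, 3072, dim, alpha)
    print(f"dim={dim:<4} alpha={alpha:<4} scale={alpha / dim:<5} params/layer={n:,}")

# Higher dim gives the adapter much more capacity, which is why it can also
# pick up compression artifacts and other defects present in the training images.
```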