ai-toolkit
Flux LoRA training error with batch > 1
On the current version on Linux, if you set the batch size to more than 1, Flux LoRA training aborts at startup with this error. The old version (commit https://github.com/ostris/ai-toolkit/commit/6d31c6db730a9cba3004eae1a3d7283c663d9295) does not have this problem.
This is not caused by the batch size; it is caused by linear_timesteps being set to true.
Comment it out, as in my example below.
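For reference, a minimal sketch of what that looks like in the YAML config, assuming linear_timesteps sits under the train: block as in the example Flux LoRA configs; the surrounding field names and values are illustrative and may differ from your file:

```yaml
# Excerpt from the train: section of an ai-toolkit Flux LoRA config.
# Only linear_timesteps and batch_size are relevant here; other fields
# are shown for context and are assumptions about a typical setup.
train:
  batch_size: 4                # batch size greater than 1
  steps: 2000
  gradient_checkpointing: true
  noise_scheduler: "flowmatch"
  optimizer: "adamw8bit"
  lr: 1e-4
  # linear_timesteps: true     # commented out to work around the startup error
```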
This should be resolved now, both with and without linear_timesteps.