Max Becker
```
6ffb08b95e23 FLUX: Gradient checkpointing enabled.
6ffb08b95e23 prepare optimizer, data loader etc.
6ffb08b95e23 enable fp8 training for U-Net.
6ffb08b95e23 enable fp8 training for Text Encoder.
6ffb08b95e23 running training / 学習開始...
```
Also happens with torch==2.1.2+cu118 torchvision==0.16.2+cu118 xformers==0.0.23.post1+cu118
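For anyone comparing environments, here is a minimal sketch (nothing kohya_ss-specific) to confirm which builds are actually installed:

```python
import torch
import torchvision
import xformers

# Quick sanity check that the installed wheels match the cu118 builds above.
print(torch.__version__)        # expected: 2.1.2+cu118
print(torch.version.cuda)       # expected: 11.8
print(torchvision.__version__)  # expected: 0.16.2+cu118
print(xformers.__version__)     # expected: 0.0.23.post1
```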
Workaround by @chenxluo here: https://github.com/bmaltais/kohya_ss/issues/2717#issuecomment-2366769178
Works for me on a 2060 Super (although training ultimately has no effect; I don't yet know what is causing that).
Can you share your modification of the related functions? I am having the same issue: https://github.com/bmaltais/kohya_ss/issues/2720
Thank you. Training is running for me with this fix. I also get `480/1600 [43:23`...
Second attempt was also unsuccessful: I increased rank and alpha to 32 and increased the learning rate, but the produced LoRA (after epoch 3) does not have any effect on...
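In case it helps to rule out a silently empty adapter, a minimal sketch (the file name is a placeholder) that checks whether the saved LoRA actually contains non-zero weights:

```python
from safetensors.torch import load_file

# Placeholder path; point it at the LoRA saved after epoch 3.
state = load_file("flux_lora_epoch3.safetensors")

# If every lora_down/lora_up tensor has a near-zero norm, the LoRA cannot
# have a visible effect no matter what strength it is applied with.
for name, tensor in state.items():
    if "lora_down" in name or "lora_up" in name:
        print(f"{name}: norm = {tensor.float().norm().item():.6f}")
```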
Duplicate of #2812. There is a workaround, but I think it is already fixed. Maybe try the search function ;)
I also had this issue and, as far as I remember, it went away after switching to a different VAE. I am now using this one for Flux1-Dev: [Flux1DevVAE_stock.safetensors](https://civitai.com/models/735031?modelVersionId=821978)...
I am still trying to get it to work on my RTX 2060 Super. Currently, I am facing an issue that is apparently specific to the RTX 20xx series, but at least...
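Since the problem seems tied to the RTX 20xx generation, a minimal sketch to confirm what the card reports to PyTorch:

```python
import torch

# Turing cards such as the RTX 2060 Super report compute capability (7, 5);
# native bf16 support starts at 8.0 and fp8 tensor cores at 8.9, so some
# training code paths behave differently on this generation.
if torch.cuda.is_available():
    print(torch.cuda.get_device_name(0))
    print(torch.cuda.get_device_capability(0))
else:
    print("No CUDA device visible to PyTorch.")
```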