FLUX LoRA trained with SimpleTuner has no effect?
Expected Behavior
The LoRA should affect the result.
Actual Behavior
I have been troubled by this for several days. I used SimpleTuner to train a Flux LoRA, but it has no effect. Training went smoothly, and there were no errors when loading the LoRA with the latest version of ComfyUI. However, after adding the LoRA's trigger words, the output is unchanged compared to base Flux.
Steps to Reproduce
[SimpleTuner_train.zip](url)
Debug Logs
no
Other
No response
There might be some overlap with this issue I opened in bghira/SimpleTuner#748.
Make sure to update ComfyUI; support for these LoRAs was added a couple of days ago.
I'm on the latest version of ComfyUI (comfyanonymous/ComfyUI@39fb74c5bd13a1dccf4d7293a2f7a755d9f43cbd). LoRAs trained on this commit of SimpleTuner work fine in ComfyUI with no issue (bghira/SimpleTuner@24991824d64d288129500c12e061833111f2a27b).
LoRAs trained on a later commit of SimpleTuner (bghira/SimpleTuner@03568468eb025e03107ad2de9fa0c2bdfe17a51a) seem to have no effect in ComfyUI. bghira, the maintainer of SimpleTuner, tentatively thinks it could be related to extra layers that aren't loaded (i.e. `all+ffs`).
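For anyone debugging this, one quick way to see which layers the file actually contains is to list the keys in the safetensors checkpoint. A small sketch (the filename is a placeholder; point it at the LoRA produced by SimpleTuner):

```python
from safetensors import safe_open

# Placeholder path: replace with the LoRA file SimpleTuner produced.
with safe_open("pytorch_lora_weights.safetensors", framework="pt") as f:
    for key in sorted(f.keys()):
        print(key)  # feed-forward entries here may be the ones a loader skips
```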
I've updated to the latest commit again and still have the same issue. Does this response here help?
On our side, a fix went in yesterday to resolve this problem for ComfyUI and Diffusers loading of the LoRAs afterwards. It affected users who quantised the base model with quanto during training.
Additionally, there is another order-of-operations issue: quantising before fusing means that precision vanishes and the weights do not get modified enough.
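As a toy illustration, assuming fp8 e4m3 weights and values picked only for demonstration: a per-weight LoRA delta is typically far smaller than the fp8 grid spacing, so adding it to an already-quantised weight and re-quantising gives back the identical value.

```python
import torch

def q8(x):
    # Round-trip through fp8 (e4m3) to simulate weight quantisation.
    return x.to(torch.float8_e4m3fn).to(torch.bfloat16)

w = torch.tensor([0.7312], dtype=torch.bfloat16)      # base weight (illustrative)
delta = torch.tensor([0.0040], dtype=torch.bfloat16)  # small LoRA delta (illustrative)

w_q = q8(w)                    # quantise first...
print(q8(w_q + delta) == w_q)  # tensor([True]): ...then fusing changes nothing
```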
Load the LoRA and the base model in bf16, fuse them, and then quantise to fp8. I'm sure that's not what anyone wants to hear, but that's how it currently works.
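A minimal sketch of that order of operations with Diffusers, assuming optimum-quanto for the fp8 step (the model ID and LoRA filename are placeholders):

```python
import torch
from diffusers import FluxPipeline
from optimum.quanto import freeze, qfloat8, quantize

# Load the base model and the LoRA in bf16 so the small deltas survive fusing.
pipe = FluxPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-dev", torch_dtype=torch.bfloat16
)
pipe.load_lora_weights("pytorch_lora_weights.safetensors")  # placeholder path
pipe.fuse_lora()
pipe.unload_lora_weights()  # deltas are now baked into the base weights

# Only after fusing, quantise the transformer down to fp8.
quantize(pipe.transformer, weights=qfloat8)
freeze(pipe.transformer)
```

Fusing first bakes the bf16-precision deltas into the base weights, so the fp8 rounding happens once, on the combined values, rather than discarding the deltas against already-quantised weights.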
This is still an issue. I think you might want to remove the quantization settings from the provided configs so that new users (like poor me) don't train for days only to end up with an unusable checkpoint :( I can't afford to quantize at inference time.