
FLUX LoRA trained with SimpleTuner has no effect?

Open LianShuaiLong opened this issue 1 year ago • 5 comments

Expected Behavior

The LoRA should affect the generated result.

Actual Behavior

I have been troubled by this for several days. I trained a Flux LoRA with SimpleTuner. The training process went smoothly, and there were no errors when loading the LoRA with the latest version of ComfyUI. However, after adding the LoRA's trigger words, the output is unchanged compared to base Flux.

Steps to Reproduce

[SimpleTuner_train.zip](url)

Debug Logs

no

Other

No response

LianShuaiLong avatar Aug 13 '24 07:08 LianShuaiLong

There might be some overlap with this issue I opened in bghira/SimpleTuner#748.

kasukanra avatar Aug 13 '24 09:08 kasukanra

Make sure to update ComfyUI; support for Flux LoRAs was added a couple of days ago.

ltdrdata avatar Aug 13 '24 09:08 ltdrdata

I'm on the latest version of ComfyUI (comfyanonymous/ComfyUI@39fb74c5bd13a1dccf4d7293a2f7a755d9f43cbd). LoRAs trained on this commit of SimpleTuner work fine in ComfyUI (bghira/SimpleTuner@24991824d64d288129500c12e061833111f2a27b).

[attached images: checkpoint-69972_0001]

LoRAs trained on a later commit of SimpleTuner (bghira/SimpleTuner@03568468eb025e03107ad2de9fa0c2bdfe17a51a) seem to have no effect in ComfyUI. bghira, the maintainer of SimpleTuner, tentatively thinks it could be related to extra layers that aren't loaded (i.e., the `all+ffs` LoRA targets).
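
One way to check which layers a given checkpoint actually targets is to list the module prefixes stored in the LoRA's safetensors file. A minimal sketch (the filename is a placeholder, and the `.lora_A`/`.lora_B` key layout is an assumption based on the usual PEFT-style export):

```python
# Sketch: list the module prefixes inside a LoRA checkpoint to see
# which layers it targets (filename is hypothetical).
from safetensors import safe_open

path = "pytorch_lora_weights.safetensors"  # hypothetical path to the trained LoRA
with safe_open(path, framework="pt", device="cpu") as f:
    # Strip the ".lora_A/.lora_B" suffixes to recover target module names.
    prefixes = sorted({k.split(".lora_")[0] for k in f.keys() if ".lora_" in k})

for p in prefixes:
    print(p)
```

Comparing that list against what ComfyUI reports as loaded would show whether extra targets are being silently dropped.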

kasukanra avatar Aug 13 '24 10:08 kasukanra

I've updated to the latest commit again and still have the same issue. Does this response help?

kasukanra avatar Aug 14 '24 05:08 kasukanra

On our side, a fix went in yesterday that resolves this problem for ComfyUI and Diffusers loading of the LoRAs. It impacted users who quantised the base model with quanto during training.

Additionally, there is another order-of-operations issue: quantising the base model before fusing the LoRA means the precision vanishes and the weights do not get modified enough.
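
A toy illustration of why the order matters, assuming an fp8 round-trip approximates quantised storage (the tensor size and LoRA-delta scale below are made up): a sub-step delta added to weights already snapped to the fp8 grid mostly rounds back to the same grid point, so the update effectively disappears.

```python
import torch

torch.manual_seed(0)
W = torch.randn(2048, 2048)            # stand-in for a base weight matrix
delta = 1e-3 * torch.randn_like(W)     # stand-in for a fused LoRA update

def fp8_roundtrip(t: torch.Tensor) -> torch.Tensor:
    """Emulate fp8 storage: round to float8_e4m3fn, then cast back."""
    return t.to(torch.float8_e4m3fn).to(torch.float32)

W_q = fp8_roundtrip(W)

# Correct order: fuse in high precision, quantise once at the end.
fuse_then_quant = fp8_roundtrip(W + delta)
# Wrong order: the base is already quantised; fusing and re-quantising
# snaps most elements straight back to the original fp8 grid point.
quant_then_fuse = fp8_roundtrip(W_q + delta)

print("weights moved (fuse first):", (fuse_then_quant != W_q).float().mean().item())
print("weights moved (quantise first):", (quant_then_fuse != W_q).float().mean().item())
```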

Load the LoRA and the base model in bf16, fuse them, and only then quantise to fp8. I'm sure that's not what anyone wants to hear, but that's how it goes currently.
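
A sketch of that recommended order with diffusers and optimum-quanto (the model ID and LoRA path are placeholders; this assumes the standard `FluxPipeline` LoRA API):

```python
import torch
from diffusers import FluxPipeline
from optimum.quanto import freeze, qfloat8, quantize

# 1. Load base and LoRA in bf16 and fuse at full precision.
pipe = FluxPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-dev",     # placeholder model ID
    torch_dtype=torch.bfloat16,
)
pipe.load_lora_weights("path/to/lora")  # placeholder LoRA path
pipe.fuse_lora()
pipe.unload_lora_weights()              # drop the now-fused adapter

# 2. Only then quantise the transformer to fp8 for inference.
quantize(pipe.transformer, weights=qfloat8)
freeze(pipe.transformer)
pipe.to("cuda")
```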

bghira avatar Aug 15 '24 15:08 bghira

This is still an issue. You might want to remove the quantisation settings from the provided configs so that new users (like poor me) don't train for days only to end up with an unusable checkpoint :( I can't afford to quantise at inference time.

odusseys avatar Aug 15 '25 16:08 odusseys