
FLUX LoRA Training - log indicates active Text Encoder training when it should be disabled with --network_train_unet_only

Enyakk opened this issue 5 months ago · 2 comments

The log for my training shows:

```
INFO create LoRA network. base dim (rank): 128, alpha: 128                  lora_flux.py:594
INFO neuron dropout: p=0.25, rank dropout: p=None, module dropout: p=None   lora_flux.py:595
INFO split qkv for LoRA                                                     lora_flux.py:603
INFO train all blocks only                                                  lora_flux.py:605
INFO create LoRA for Text Encoder 1:                                        lora_flux.py:741
INFO create LoRA for Text Encoder 1: 72 modules.                            lora_flux.py:744
INFO create LoRA for FLUX all blocks: 6 modules.                            lora_flux.py:765
INFO enable LoRA for U-Net: 6 modules                                       lora_flux.py:916
```

That is despite passing --network_train_unet_only both on the command line and in the config TOML (network_train_unet_only = true). I use the following command line:

```
accelerate launch --num_cpu_threads_per_process 1 flux_train_network.py --persistent_data_loader_workers --max_data_loader_n_workers 2 --highvram --network_train_unet_only --config_file %1
```

Config attached: config_lora.zip
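For context, here is a minimal sketch of how the relevant section of the config TOML would look. This is not the attached file: the option names are standard sd-scripts options, and the values mirror what the log above reports (dim/alpha 128, neuron dropout 0.25), but everything else in the real config may differ.

```toml
# Illustrative sketch only -- the actual config is inside config_lora.zip.
network_module = "networks.lora_flux"   # assumed; implied by flux_train_network.py
network_dim = 128                       # matches "base dim (rank): 128" in the log
network_alpha = 128
network_dropout = 0.25                  # matches "neuron dropout: p=0.25"
network_train_unet_only = true          # the option that appears to be ignored
```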

One set of options that might be out of the ordinary:

```toml
network_args = [
  "train_double_block_indices=none",
  "train_single_block_indices=7,20",
  "split_qkv=True",
]
```
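One way to check whether the Text Encoder LoRA actually ends up with trained weights, as opposed to merely being created and logged, is to inspect the key prefixes in the saved .safetensors file. Below is a rough sketch: the file path is a placeholder, and it assumes sd-scripts' usual key naming where FLUX model modules are prefixed with lora_unet and text encoder modules with lora_te1 (and similar).

```python
# Sketch: count LoRA keys per prefix in a saved checkpoint to see whether
# any Text Encoder modules were written out. Path is a placeholder.
from safetensors import safe_open

path = "my_flux_lora.safetensors"  # placeholder; point this at your output file

unet_keys, te_keys, other_keys = [], [], []
with safe_open(path, framework="pt") as f:
    for key in f.keys():
        if key.startswith("lora_unet"):   # assumed prefix for the FLUX model
            unet_keys.append(key)
        elif key.startswith("lora_te"):   # assumed prefix for text encoders
            te_keys.append(key)
        else:
            other_keys.append(key)

print(f"FLUX model keys:   {len(unet_keys)}")
print(f"Text Encoder keys: {len(te_keys)}")
print(f"Other keys:        {len(other_keys)}")
# If --network_train_unet_only is honored, there should be no lora_te* keys
# in the saved file (or their weights should be unchanged from init).
```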

Enyakk · Sep 22 '24 12:09