sd-scripts
I've tried the 12G, 16G, and 20G VRAM options here: https://github.com/kohya-ss/sd-scripts/tree/sd3?tab=readme-ov-file#flux1-lora-training and can confirm they all work. But is it possible to train with 8GB VRAM? Is there a specific...
Since it appears that Flux LoRA training can still be effective when only specific layers are trained, I am wondering whether this functionality could be expanded to fine-tuning, since this is...
The log for my training shows:
INFO create LoRA network. base dim (rank): 128, alpha: 128  lora_flux.py:594
INFO neuron dropout: p=0.25, rank dropout: p=None, module dropout: p=None  lora_flux.py:595
INFO split...
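For reference, log lines like these are printed from the LoRA network arguments passed to sd-scripts' flux_train_network.py. A minimal sketch of such an invocation, assuming the standard flags (--network_dim, --network_alpha, --network_dropout) and placeholder paths; the text encoder, autoencoder, and other required arguments are omitted here:
```
import subprocess

# Hedged sketch: launch flux_train_network.py with LoRA settings matching the
# log above (rank 128, alpha 128, neuron dropout 0.25). Paths are placeholders.
cmd = [
    "accelerate", "launch", "flux_train_network.py",
    "--pretrained_model_name_or_path", "flux1-dev.safetensors",  # placeholder
    "--network_module", "networks.lora_flux",
    "--network_dim", "128",       # -> "base dim (rank): 128"
    "--network_alpha", "128",     # -> "alpha: 128"
    "--network_dropout", "0.25",  # -> "neuron dropout: p=0.25"
    "--dataset_config", "dataset.toml",  # placeholder
    "--output_dir", "output",
]
subprocess.run(cmd, check=True)
```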
It's been mentioned that this is one of Flux's greatest downfalls, but I wonder where it comes from (and how to fix it). In my training I train one style that I demand...
I trained a custom character model using Flux Schnell; the training succeeded, but the resulting LoRA is too small (1.42 MB). And when I use that LoRA on the minimal...
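One way to check why a LoRA file comes out that small is to inspect which modules and ranks were actually saved. A hedged sketch using the safetensors library; the file path is a placeholder, and the key naming assumes the usual lora_down.weight convention used by kohya-style LoRAs, which may differ by version:
```
from safetensors import safe_open

# Inspect a LoRA .safetensors file: count trained modules and print their ranks.
# "my_character_lora.safetensors" is a placeholder path.
with safe_open("my_character_lora.safetensors", framework="pt", device="cpu") as f:
    down_keys = [k for k in f.keys() if k.endswith("lora_down.weight")]
    print(f"trained modules: {len(down_keys)}")
    for k in sorted(down_keys)[:10]:        # show the first few
        rank = f.get_tensor(k).shape[0]     # lora_down weight is (rank, in_features)
        print(f"{k}: rank={rank}")
```
A very small file usually means either a low rank or that only a handful of modules were trained, which this listing makes visible.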
Still relevant: https://github.com/kohya-ss/sd-scripts/issues/280
Multi-GPU Flux training: is multi-GPU not supported by the Flux training script right now?
Essentially the title. I've tried the following:
```
import torch
import diffusers

transformer = diffusers.FluxTransformer2DModel.from_single_file(
    path_to_kohya_safetensors, torch_dtype=torch.bfloat16
)
pipeline = diffusers.FluxPipeline.from_pretrained(
    ..., transformer=transformer, torch_dtype=torch.bfloat16
)
```
However, all generated images have weird artifacts, regardless of which checkpoint number I've...
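If the safetensors file is actually a LoRA rather than a full fine-tuned transformer, a different loading path may apply. A hedged sketch, assuming a recent diffusers version that can read kohya-style Flux LoRA key names; the model ID and file name are placeholders:
```
import torch
from diffusers import FluxPipeline

# Load the base Flux pipeline, then apply a kohya-trained LoRA on top of it.
# "black-forest-labs/FLUX.1-dev" and "my_flux_lora.safetensors" are placeholders.
pipe = FluxPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-dev", torch_dtype=torch.bfloat16
)
pipe.load_lora_weights("my_flux_lora.safetensors")
pipe.enable_model_cpu_offload()  # optional: reduces VRAM use

image = pipe("a portrait photo, studio lighting", num_inference_steps=20).images[0]
image.save("out.png")
```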