
Results: 334 sd-scripts issues, sorted by recently updated

prepare_buckets_latents.py seems not to work with Flux. Is there any way to generate this file for a full Flux finetuning? Thanks

As per the title, I'm experimenting with training on an 8 GB GPU and would like to use fp8 precision for a speed improvement, but I get this error: ValueError: Using `fp8` precision requires `transformer_engine`...

First of all, many thanks for doing this! This is the only repo I'm aware of that allows Flux LoRA training on a 16 GB GPU. I appreciate this is...

I've been looking into the sd3 training branch, and I'm trying to understand how the losses are gathered for multi-GPU training; I'd love to understand the logic behind it. I'm used...
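A toy sketch may help frame the question above. This is not sd-scripts' actual code (the function names here are made up); it only illustrates the usual DDP convention: each rank computes the mean loss over its local micro-batch, and an all-reduce averages those local means across ranks, which equals the global mean when per-rank batch sizes are equal.

```python
# Toy sketch of DDP-style loss averaging (illustrative only, not
# sd-scripts' implementation). Each "rank" computes the mean loss over
# its local micro-batch; an all-reduce then averages the rank means.

def local_mean_loss(losses: list[float]) -> float:
    """Per-sample losses on one rank -> that rank's mean loss."""
    return sum(losses) / len(losses)

def all_reduce_mean(per_rank_means: list[float]) -> float:
    """Stand-in for an averaging all-reduce across ranks."""
    return sum(per_rank_means) / len(per_rank_means)

# Two ranks with equal batch sizes: averaging the rank means
# reproduces the mean over the full global batch.
rank0 = [1.0, 3.0]
rank1 = [5.0, 7.0]
ddp_loss = all_reduce_mean([local_mean_loss(rank0), local_mean_loss(rank1)])
global_mean = sum(rank0 + rank1) / len(rank0 + rank1)
assert ddp_loss == global_mean  # both are 4.0
```

With unequal per-rank batch sizes the two quantities diverge, which is one reason gathering logic in multi-GPU training code deserves a careful read.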

## Error When I use 2 GPUs to train a Flux LoRA, everything is fine and training succeeds, but when I use one GPU, or start with 2 GPUs but use only one, it...

When the images in my dataset are larger than 1024, like 2048 or 2536, the likeness in my training is gone. Is there any way to fix this, or is...
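One common workaround for the situation above is to downscale oversized images before training. The sketch below is a generic preprocessing helper, not part of sd-scripts: it computes a target size whose long side is at most 1024 while preserving aspect ratio, snapping both sides to a multiple of 64 (a typical bucket step in SD-style aspect-ratio bucketing). The function name is hypothetical.

```python
def fit_to_max_side(width: int, height: int, max_side: int = 1024) -> tuple[int, int]:
    """Target size with the longer side <= max_side, aspect ratio kept,
    both sides snapped to a multiple of 64 (a common bucket step)."""
    long_side = max(width, height)
    if long_side <= max_side:
        return width, height  # already small enough; leave untouched
    scale = max_side / long_side
    def snap(x: int) -> int:
        # Scale, then snap down to the nearest multiple of 64 (min 64).
        return max(64, (round(x * scale) // 64) * 64)
    return snap(width), snap(height)

# Example: a 2536x1024 photo maps to a 1024x384 training resolution.
print(fit_to_max_side(2536, 1024))
```

The actual resize would then be done with any image library before caching latents; the point is only that very large sources get brought down near the model's native resolution rather than fed in at 2048+.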

I'm using the following command for training:

```bash
accelerate launch --mixed_precision bf16 --num_cpu_threads_per_process 1 flux_train_network.py \
  --pretrained_model_name_or_path ~/work/media/flux_lora/models/flux1-dev.safetensors \
  --clip_l ~/work/media/flux_lora/models/clip_l.safetensors \
  --t5xxl ~/work/media/flux_lora/models/t5xxl_fp16.safetensors \
  --ae ~/work/media/flux_lora/models/ae.safetensors \
  --cache_latents_to_disk --save_model_as safetensors --sdpa \
  --persistent_data_loader_workers --max_data_loader_n_workers...
```

I have done over 300 training runs now (renting GPUs) with different training parameters, using both kohya and ai toolkit. I get good to decent results on both now. I believe...

Shouldn't it be faster to run it compiled rather than interpreted? As an end user I have no use for it being modular the way it is during development. It would...