Pete Allen

Results: 7 comments by Pete Allen

IE may be running in Quirks Mode. Try adding this to your `<head>` section: `<meta http-equiv="X-UA-Compatible" content="IE=edge">`

I'm seeing this as well on an RTX 4090 with 24 GB of VRAM. Only about 35% of the VRAM is in use when it happens, so I don't think...

> Have you checked the model cache size in settings? Using cache size 0 will always trigger reload on generation.

This worked, thank you. The cache size was set at...

Any progress on this? I've seen discussion of people finetuning flux.1-dev with 24 GB VRAM, but have only been able to get LoRA training to work so far. Attempting to...

For multi-res training, might it be better if the behavior was that it would only include each image once, in the largest bucket it can be in without upscaling? That...
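The proposed behavior could be sketched roughly like this (a hypothetical illustration, not the trainer's actual bucketing code; function name and bucket sizes are made up, and real multi-res trainers typically use aspect-ratio buckets rather than this simplified square-bucket version):

```python
def largest_fitting_bucket(image_size, buckets):
    """Return the largest bucket (by area) whose dimensions do not
    exceed the image's, so no upscaling is needed; None if none fit."""
    w, h = image_size
    fitting = [(bw, bh) for bw, bh in buckets if bw <= w and bh <= h]
    if not fitting:
        return None
    return max(fitting, key=lambda b: b[0] * b[1])

# Illustrative bucket list: each image is assigned once, to the
# largest resolution it can fill without being upscaled.
buckets = [(512, 512), (768, 768), (1024, 1024)]
print(largest_fitting_bucket((900, 900), buckets))  # (768, 768)
print(largest_fitting_bucket((400, 400), buckets))  # None: would require upscaling
```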

@BootsofLagrangian You mention swapping flux_train.py for flux_train_network.py to do multi-GPU full finetune. With your config, I can run `flux_train_network.py` with no problems, but `flux_train.py` throws an out-of-memory error. Watching `nvidia-smi`,...