SimpleTuner
A general fine-tuning kit geared toward diffusion models.
Hello, would 2x Nvidia 4090 be enough to train the Flux model, or would more VRAM be needed during training? Can we allocate the required memory size...
When I follow FLUX.md to set the args, training always stops because of this error: (urllib3.connectionpool) Retrying (Retry(total=2, connect=None, read=None, redirect=None, status=None)) after connection broken by 'ProxyError('Cannot connect to...
Already updated to the latest diffusers. (qwen) [wangxi@v100-4 SimpleTuner]$ git pull Already up to date. (qwen) [wangxi@v100-4 SimpleTuner]$ pip show diffusers Name: diffusers Version: 0.30.0.dev0 Summary: State-of-the-art diffusion in PyTorch and...
Hi, I followed exactly the same procedure and used the example dataset for fine-tuning Flux. This is what I get: File "/root/tmp/SimpleTuner/helpers/training/validation.py", line 1241, in validate_prompt (with interleaved progress-bar output: 30/30 [00:28
- hashing instead of shortening
- csv.py renamed to csv_.py to avoid a conflict with pandas' internal csv.py
I'm using the Flux quickstart settings with fp8 quantization on 4x 3090s. The same settings work on a single 3090. With TRAINING_NUM_PROCESSES=2 and export ACCELERATE_EXTRA_ARGS="--multi_gpu", the line results = accelerator.prepare(primary_model raises: RuntimeError: Modules with uninitialized parameters can't...
We're going to add Hunyuan DiT to its own feature branch, and this issue encompasses the plan to integrate this model. If you wish to take one of the components,...
The complete end-to-end test would: 1. Generate a bunch of random noise images. 2. Create tiny networks for the diffusion model, VAE, and text encoder and load them onto the GPU. 3....
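Steps 1 and 2 of that plan could start as a smoke test along these lines. This is only a sketch: the modules below are illustrative stand-ins (a real test would build tiny diffusers configs), and all names and shapes here are assumptions, not SimpleTuner's actual components.

```python
import torch
from torch import nn


def make_tiny_models(device):
    """Build placeholder stand-ins for the three components under test."""
    diffusion = nn.Conv2d(4, 4, 3, padding=1).to(device)  # tiny "denoiser"
    vae_enc = nn.Conv2d(3, 4, 4, stride=4).to(device)     # tiny "VAE encoder"
    text_enc = nn.Embedding(16, 8).to(device)             # tiny "text encoder"
    return diffusion, vae_enc, text_enc


def run_smoke_test(n_images=4, size=16, device=None):
    device = device or ("cuda" if torch.cuda.is_available() else "cpu")
    # Step 1: generate a batch of random noise "images".
    images = torch.rand(n_images, 3, size, size, device=device)
    # Step 2: create tiny networks and move them to the target device.
    diffusion, vae_enc, _text_enc = make_tiny_models(device)
    # Sanity-check a forward pass: encode to latents, run the denoiser.
    latents = vae_enc(images)
    out = diffusion(latents)
    assert out.shape == latents.shape
    return tuple(out.shape)
```

On CPU this runs in well under a second, which is the point of using tiny networks for an end-to-end test.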
Refer to: https://github.com/lucidrains/ema-pytorch
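For reference, the core update that EMA libraries like ema-pytorch perform is a per-parameter exponential moving average. A minimal plain-Python sketch (parameter names and the decay value are illustrative, not ema-pytorch's API):

```python
def ema_update(ema_params, params, decay=0.999):
    """In-place EMA step: ema <- decay * ema + (1 - decay) * param."""
    for i, p in enumerate(params):
        ema_params[i] = decay * ema_params[i] + (1.0 - decay) * p
    return ema_params


# Starting from 0 and repeatedly averaging toward a constant parameter of 1.0,
# the EMA approaches 1 - decay**n after n steps.
ema = [0.0]
for _ in range(3):
    ema_update(ema, [1.0], decay=0.9)
```

An EMA copy of the trainable weights is updated this way after each optimizer step and is typically what gets used for validation and final checkpoints.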
Hunyuan-DiT is a new image-generation model. Benchmarks show that it exceeds SD3 overall. However, the model is relatively complex and uses a lot of VRAM during training. So I...