ai-toolkit
Qwen Nunchaku support
Thanks for your work on letting Qwen train in less than 4 GB of VRAM. Qwen Image and Qwen Image Edit both have Nunchaku support, and it is really fast and VRAM friendly. I wonder if training works on Nunchaku INT4 models.
You generally want to train at the original precision and then quantize afterward. That said, the Nunchaku quantization process doesn't seem to run well below 80 GB of VRAM without a lot of tweaking, and even then it took me hours to quantize a model with 32 GB of VRAM. Low-bit training is still pretty new; I've only really seen it on some newer LLMs, not on any diffusion models yet.
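For reference, a minimal sketch of that recommended workflow with diffusers: train a LoRA at the model's original bf16 precision, merge it into the base weights, and only then hand the merged checkpoint to Nunchaku's quantization tooling as a separate step. The LoRA filename and output paths here are hypothetical placeholders, not ai-toolkit defaults.

```python
import torch
from diffusers import DiffusionPipeline

# 1. Load the base Qwen-Image model at its original bf16 precision.
pipe = DiffusionPipeline.from_pretrained(
    "Qwen/Qwen-Image", torch_dtype=torch.bfloat16
)

# 2. Load and fuse a LoRA trained in bf16 (e.g. an ai-toolkit output file;
#    the path below is a hypothetical placeholder).
pipe.load_lora_weights("output/my_qwen_lora.safetensors")
pipe.fuse_lora()

# 3. Save the merged full-precision checkpoint. Quantizing it to INT4 with
#    Nunchaku's tooling is a separate step run on this checkpoint, and as
#    noted above it currently needs substantial VRAM.
pipe.save_pretrained("output/qwen-image-merged-bf16")
```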