
Allow TI training using 6GB VRAM when xformers is available

MarkovInequality opened this issue 3 years ago

[Image: TI training at 512x512 using <6GB VRAM]

This PR consists of two parts.

  1. Added a setting to allow using cross attention optimizations when training TI. When the xformers cross attention optimization is available, this saves around 1.5GB of VRAM during training. I know there have been past reports of bad results when using cross attention optimizations while training TI, but I have tested it myself with both the xformers optimization and the InvokeAI optimization and did not have any issues.
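The selection logic the setting gates could be sketched as follows. This is a minimal illustration, not the webui's actual code; the function and flag names here are hypothetical:

```python
# Hypothetical sketch: choose a cross attention implementation for TI
# training based on a user setting and which optimizations are available.
def select_cross_attention(training: bool,
                           allow_optimizations_in_training: bool,
                           xformers_available: bool,
                           invokeai_available: bool) -> str:
    """Return the name of the cross attention implementation to use."""
    if training and not allow_optimizations_in_training:
        # Old behavior: optimizations are disabled while training TI.
        return "default"
    if xformers_available:
        # xformers saves around 1.5GB of VRAM during TI training.
        return "xformers"
    if invokeai_available:
        return "invokeai"
    return "default"
```

With the new setting enabled and xformers present, training uses the optimized path instead of falling back to the default attention.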

  2. Changed the "Unload VAE and CLIP to RAM..." option to also unload the VAE to RAM when training TI. This is safe (also tested by me) and saves around 0.25GB of VRAM during training.
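The unload-to-RAM behavior amounts to moving the VAE off the GPU for the duration of training and restoring it afterwards. A minimal sketch of that pattern, with a stand-in class instead of a real `nn.Module` (all names here are hypothetical, not the webui's API):

```python
from contextlib import contextmanager

class FakeVAE:
    """Stand-in for a VAE; real code would call .to(device) on an nn.Module."""
    def __init__(self):
        self.device = "cuda"
    def to(self, device):
        self.device = device
        return self

@contextmanager
def vae_unloaded(vae, unload: bool):
    """Move the VAE to CPU RAM while training, then move it back."""
    if unload:
        vae.to("cpu")  # frees the VAE's share of VRAM (~0.25GB)
    try:
        yield vae
    finally:
        if unload:
            vae.to("cuda")
```

During TI training the VAE is only needed to encode the training images, so keeping it in RAM (or moving it to the GPU just for that step) costs little.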

When both options are enabled, I was able to get TI training down below 6GB VRAM.

MarkovInequality · Oct 31 '22 14:10