lora
Precompute and offload latents and text embeddings, as well as the VAE and Text Encoder
The latents and text embeddings can be precomputed and stored in RAM instead of VRAM. If the text encoder and VAE aren't being trained, they can then be offloaded, which would allow for significant time and VRAM savings.
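A minimal sketch of the idea, assuming a Stable Diffusion 1.5 style setup with diffusers/transformers; the model id, the `dataset` iterable, and the cache lists are placeholders for illustration, not part of the original report:

```python
import torch
from diffusers import AutoencoderKL
from transformers import CLIPTextModel, CLIPTokenizer

device = "cuda"
model_id = "runwayml/stable-diffusion-v1-5"  # hypothetical base model

vae = AutoencoderKL.from_pretrained(model_id, subfolder="vae").to(device)
tokenizer = CLIPTokenizer.from_pretrained(model_id, subfolder="tokenizer")
text_encoder = CLIPTextModel.from_pretrained(model_id, subfolder="text_encoder").to(device)

latents_cache, text_embeds_cache = [], []
with torch.no_grad():
    for pixel_values, caption in dataset:  # dataset yields (image tensor, caption string)
        # Encode the image once and keep the scaled latent in RAM, not VRAM.
        latents = vae.encode(pixel_values.unsqueeze(0).to(device)).latent_dist.sample()
        latents_cache.append((latents * vae.config.scaling_factor).cpu())

        # Encode the caption once and keep the embedding in RAM as well.
        ids = tokenizer(caption, padding="max_length",
                        max_length=tokenizer.model_max_length,
                        truncation=True, return_tensors="pt").input_ids.to(device)
        text_embeds_cache.append(text_encoder(ids)[0].cpu())

# Neither model is being trained, so both can be dropped from VRAM entirely.
del vae, text_encoder
torch.cuda.empty_cache()
```

The training loop would then read latents and embeddings from the RAM cache and move only the current batch back to the GPU.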
This is great. I am currently refactoring the training script and will use this trick in it.