
Add LoRA optimization to the SD training example

Open · PareesaMS opened this pull request 4 months ago · 0 comments

This PR integrates LoRA optimization into the Stable Diffusion training example, building on the distillation support that is already implemented. Applying LoRA-enhanced distillation yields further improvements: significantly faster inference, lower memory overhead, and a notable 50% decrease in memory consumption even before distillation is applied.
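For reference, here is a minimal sketch of how LoRA adapters can be attached to the SD UNet using Hugging Face `diffusers` and `peft`. The model ID, target modules, and rank below are illustrative assumptions, not necessarily what this PR uses:

```python
# Minimal sketch: attaching LoRA adapters to a Stable Diffusion UNet.
# NOTE: the model ID, target modules, and rank are illustrative
# assumptions; the actual PR may wire LoRA into the training loop
# differently.
from diffusers import UNet2DConditionModel
from peft import LoraConfig, get_peft_model

unet = UNet2DConditionModel.from_pretrained(
    "runwayml/stable-diffusion-v1-5", subfolder="unet"
)

# Low-rank adapters on the attention projections; freezing the base
# weights is what yields the memory savings before distillation starts.
lora_config = LoraConfig(
    r=8,  # adapter rank (hypothetical choice)
    lora_alpha=8,
    target_modules=["to_q", "to_k", "to_v", "to_out.0"],
)
unet = get_peft_model(unet, lora_config)
unet.print_trainable_parameters()  # only the LoRA weights are trainable
```

With this setup, only the small adapter matrices receive gradients and optimizer state, which is where the reported memory reduction comes from.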

Our analysis of the generated images confirms that LoRA-enhanced distillation preserves image quality and fidelity to the prompts. For more detailed insights, refer to the published paper: "LoRA-Enhanced Distillation on Guided Diffusion Models". Besides the code, the README file is updated accordingly.

PareesaMS · Mar 08 '24 19:03