Community Integration: Making AIGC cheaper, faster, and more efficient
Thank you for your rapid and outstanding contribution to Stable Diffusion 2.0! AIGC has recently become one of the hottest topics in AI. Unfortunately, large hardware requirements and training costs remain a severe impediment to the rapid growth of the AIGC industry: Stable Diffusion v1 alone requires 150,000 A100 GPU hours for a single training run.
We are happy to share a solution that makes training AIGC models such as Stable Diffusion up to 7 times cheaper!
Colossal-AI has released a complete open-source Stable Diffusion pretraining and fine-tuning solution that reduces the pretraining cost by 6.5 times and the hardware cost of fine-tuning by 7 times. A PC with an RTX 2070/3050 is enough to complete the fine-tuning workflow, making AIGC models such as Stable Diffusion accessible to a much wider community.
Open-source code: https://github.com/hpcaitech/ColossalAI/tree/main/examples/images/diffusion
More details can be found on our blog. We would also be very happy to bring these improvements to Stable Diffusion 2.0, and we believe that democratizing AIGC models in this way would greatly benefit Stable Diffusion 2.0 users as well. We would appreciate the opportunity to build this integration with you for the benefit of both of our user communities, and we are willing to provide any help needed in this cooperation free of charge.
Thank you very much.
Best regards, Yongbin Li, HPC-AI Tech