
Use the Gemini plugin and LowLevelZero to run llama2_7b. In the Gemini plugin, set the placement policy to static and set shard_param_frac, offload_optim_frac, and offload_param_frac to 0.0, which should make Gemini equivalent to ZeRO-2, and set stage to 2 in LowLevelZero. Training with bf16 and comparing the two plugins, we found that Gemini's GPU memory usage is higher than LowLevelZero's. Why is this? In principle, Gemini should save more GPU memory.
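For context, here is a minimal sketch of the two plugin configurations described above, assuming a standard ColossalAI Booster setup (the actual llama2_7b training script, model, and optimizer are not shown):

```python
from colossalai.booster import Booster
from colossalai.booster.plugin import GeminiPlugin, LowLevelZeroPlugin

# Gemini configured so that, in principle, it behaves like ZeRO-2:
# parameters stay replicated on each GPU and nothing is offloaded to CPU.
gemini_plugin = GeminiPlugin(
    placement_policy="static",
    shard_param_frac=0.0,    # do not shard parameters across ranks
    offload_optim_frac=0.0,  # keep optimizer states on GPU
    offload_param_frac=0.0,  # keep parameters on GPU
    precision="bf16",
)

# LowLevelZero at stage 2: shards gradients and optimizer states across ranks.
zero2_plugin = LowLevelZeroPlugin(
    stage=2,
    precision="bf16",
)

# Pick one plugin and boost the model/optimizer with it.
booster = Booster(plugin=gemini_plugin)  # or Booster(plugin=zero2_plugin)
# model, optimizer, criterion, dataloader, lr_scheduler = booster.boost(
#     model, optimizer, criterion, dataloader, lr_scheduler)
```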

Open JJGSBGQ opened this issue 1 year ago • 2 comments

JJGSBGQ avatar Jun 18 '24 09:06 JJGSBGQ


When running stable-diffusion in the same way, I find that Gemini has lower GPU memory usage than LowLevelZero.

JJGSBGQ avatar Jun 18 '24 09:06 JJGSBGQ