DeepSpeed
[REQUEST] universal checkpoint for ZeRO - 1,2,3
Is your feature request related to a problem? Please describe.
I think we now have all the components ready to do a universal checkpoint in ZeRO-1,2,3, like we had for BF16Optimizer.
The need is to be able to add or remove GPUs when resuming from a checkpoint, i.e. to resume with a different number of GPUs than the training was started with.
Thank you.
Progress update:
- [x] Stage 1 https://github.com/microsoft/DeepSpeed/pull/4516
- [x] Stage 2 https://github.com/microsoft/Megatron-DeepSpeed/tree/main/examples_deepspeed/universal_checkpointing#zero-stage-2-training
- [ ] Stage 3
@tjruwase
Any plans to work on that, Tunji? We could have really used that feature in the current 80b m4 training, as we would like to add new parameters that were previously frozen and thus aren't in the optimizer.
Which also adds a new interesting feature request:
Train a new model with some pretrained frozen params and then towards the end of the training expand the optimizer to include the frozen params and unfreeze those to finetune the whole ensemble.
Granted, one could train from the beginning with lr=0 for the frozen params, but that would require a lot more memory from the start. So this approach could save days to weeks of training for a large model.
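For illustration, a minimal sketch of that lr=0 workaround (toy model, names are just placeholders):

```python
import torch
import torch.nn as nn

# Toy stand-in for "pretrained frozen backbone + new trainable head".
model = nn.ModuleDict({
    "frozen_backbone": nn.Linear(16, 16),  # pretrained, frozen for most of the run
    "new_head": nn.Linear(16, 4),          # trained from the start
})

frozen = list(model["frozen_backbone"].parameters())
trainable = list(model["new_head"].parameters())

# Keeping the frozen params in the optimizer from step 0 with lr=0 means their Adam
# state (exp_avg, exp_avg_sq) is allocated from the beginning -> more memory, but the
# checkpoint layout never changes when they are later unfrozen.
optimizer = torch.optim.AdamW([
    {"params": trainable, "lr": 1e-4},
    {"params": frozen, "lr": 0.0},
])

# Towards the end of training, "unfreeze" by raising the lr of that group.
optimizer.param_groups[1]["lr"] = 1e-5
```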
+1
@stas00, I believe zero stage 1 should be supported: #2284. I think stage 2 might be working or pretty close. I do plan to work on this, so perhaps we can chat a bit about your timing.
Excuse me, can stage 1 directly resume with a different number of cards, or do you need to set some parameters?
> like we had for BF16Optimizer.
Does it mean DeepSpeed already supports automatically changing the world size for ZeRO-1,2,3 if I use `"bf16": { "enabled": true }`?
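For reference, the kind of configuration being asked about is roughly the following sketch; on its own it does not make a checkpoint resumable on a different number of GPUs:

```python
# Sketch of a DeepSpeed config combining bf16 with a ZeRO stage (values are placeholders).
ds_config = {
    "train_micro_batch_size_per_gpu": 1,
    "bf16": {"enabled": True},
    "zero_optimization": {"stage": 2},
}

# Typically passed as the `config` argument to deepspeed.initialize(...).
```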
> @stas00, I believe zero stage 1 should be supported: #2284. I think stage 2 might be working or pretty close. I do plan to work on this, so perhaps we can chat a bit about your timing.
I'm attempting to enhance DeepSpeed by enabling it to support a dynamic world size. This is particularly for the setup involving AdamW, stage 3, and bf16. However, I'm uncertain about the level of complexity in comparison to expanding DeepSpeed's support for a universal dynamic world size across all optimizers and precisions. Could you provide some insights on this matter?
> does it mean deepspeed already supports automatically changing world size for Zero-1,2,3 if I use bf16
No, it's not. It's currently a confusing situation: BF16Optimizer was written specifically for Megatron-DeepSpeed when we trained BLOOM-176B, so it works only in that framework.
As the heavy lifting to support universal checkpoint has been done, porting it to ZeRO should take significantly less effort than it did to create the initial work as all the components are already in place. So it's really about @tjruwase and his team finding time and prioritizing this effort to make this happen. Clearly it's very desirable by many users at this point.
> No, it's not. It's currently a confusing situation: BF16Optimizer was written specifically for Megatron-DeepSpeed when we trained BLOOM-176B, so it works only in that framework.
Would it be possible, and easier, to convert DeepSpeed's checkpoint into Megatron-DeepSpeed format, change the world size, and then convert it back to DeepSpeed's format? 😀
Hmm, that tool converts from/to a TP/DP/PP topology. In ZeRO-3 you only have DP, so perhaps it might be possible, but it won't find any info on TP/PP and will probably fail, e.g. it'd expect a different set of shard files for TP and PP, which don't exist in ZeRO-3.
But even if converting it to the universal checkpoint worked, the tricky part would be moving to the new topology, as again the code is written for the Meg-DS 3D topology.
But as I have just explained, the ZeRO case is much simpler than TP/DP/PP, so it should be relatively easy to make it work with just the ZeRO files.
> But as I have just explained, the ZeRO case is much simpler than TP/DP/PP, so it should be relatively easy to make it work with just the ZeRO files.
I think it can be achieved by a single tool similar to zero_to_fp32.py.
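Conceptually, something along these lines; this is only a sketch of the idea (a real tool also has to strip the padding added for the old world size and deal with the per-rank optimizer state files), not the eventual implementation:

```python
import torch

# In ZeRO-1/2 each DP rank holds a contiguous slice of the flat fp32 param groups
# (plus matching Adam state), so repartitioning to a new world size is roughly
# "concatenate the old slices, then re-split".

def repartition(old_shards, new_world_size):
    """old_shards: per-rank 1-D tensors of one flat param group, in rank order."""
    flat = torch.cat(old_shards)                      # rebuild the full flat group
    part = -(-flat.numel() // new_world_size)         # ceil division -> new slice size
    pad = part * new_world_size - flat.numel()
    padded = torch.nn.functional.pad(flat, (0, pad))  # pad so it splits evenly
    return [padded[r * part:(r + 1) * part].clone() for r in range(new_world_size)]

# Toy usage: 4 old ranks -> 2 new ranks.
old = [torch.randn(10) for _ in range(4)]
new = repartition(old, 2)
assert torch.equal(torch.cat(new), torch.cat(old))
```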
> But as I have just explained, the ZeRO case is much simpler than TP/DP/PP, so it should be relatively easy to make it work with just the ZeRO files.
I have implemented the conversion tool, and I now have a minor question. In the `bf16_zero_*_optim_states.pt` file, the loss scaler is stored as `<deepspeed.runtime.fp16.loss_scaler.LossScaler object at 0x7f0733de5610>`. However, the address 0x7f0733de5610 doesn't serve any purpose, correct? Additionally, is there a need to scale the stored optimizer state (the gradient and the square of the gradient for all trainable params) according to the old and new world sizes?
I'm just a contributor - so I am tagging @tjruwase who hopefully will have the resources to address your questions, @GradientGuru
@GradientGuru, saving loss_scaler as an object instead of state_dict is a bug. Please feel free to submit a PR. Thanks!
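For anyone picking this up, a minimal sketch of the kind of fix being suggested, i.e. saving a plain dict of the scaler's fields instead of the object itself; the attribute names are assumptions based on `deepspeed.runtime.fp16.loss_scaler` and should be checked against the actual classes:

```python
# Serialize the loss scaler as a plain, picklable dict instead of the LossScaler object.
def loss_scaler_to_state(scaler):
    state = {"cur_scale": scaler.cur_scale}
    if hasattr(scaler, "cur_iter"):        # the dynamic scaler keeps extra counters
        state["cur_iter"] = scaler.cur_iter
    return state

# Restore a scaler from the dict produced above.
def load_loss_scaler_state(scaler, state):
    scaler.cur_scale = state["cur_scale"]
    if "cur_iter" in state and hasattr(scaler, "cur_iter"):
        scaler.cur_iter = state["cur_iter"]
```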
I'm not sure this is the right place to ask. We're researchers in a situation where we sometimes get access to a bunch of GPUs and sometimes we don't, and we're counting on this donated GPU time to train a large model that requires DeepSpeed ZeRO-2 even just to fit into the GPUs.
We're trying to figure out how to best handle the changing world sizes that come with the above setting, as it looks like currently we would not be able to restore the optimizer state from a checkpoint. I'm wondering what advice you have on how we could proceed?
It's the perfect place to ask - @tjruwase, is it possible to raise the priority of this? This is a very critical requirement for users to choose DeepSpeed over other frameworks. Thanks a ton!
@stas00, thanks for this notification. We will raise the priority.
@orm011, are you using Megatron by any chance? There is some partial support there that you could start playing with.
Didn't know about Megatron. Do you mean this library: https://docs.nvidia.com/deeplearning/nemo/user-guide/docs/en/main/nlp/megatron.html? Or is it something else?
Or probably this one: https://github.com/microsoft/Megatron-DeepSpeed
> Or probably this one: https://github.com/microsoft/Megatron-DeepSpeed
Or this one? https://github.com/bigscience-workshop/Megatron-DeepSpeed.
Both of the above are forks of the original from NVIDIA: https://github.com/NVIDIA/Megatron-LM.
@tjruwase, thank you for implementing the universal checkpoint for stage 1 in https://github.com/microsoft/DeepSpeed/pull/4516
This issue was opened for stages 1, 2, and 3 - so perhaps it shouldn't have been closed yet?
@stas00, correct! I didn't know how to partially close an issue :)
I updated the OP that stage 1 is done.
Also interested in stage 2 support.
@zaptrem, stage 1/2 and bf16_optimizer are supported. Only stage 3 support is pending. https://github.com/microsoft/Megatron-DeepSpeed/tree/main/examples_deepspeed/universal_checkpointing#zero-stage-2-training
@samadejacobs, @lekurile FYI
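For anyone following along, the stage 1/2 flow from the linked example is roughly: convert the ZeRO checkpoint with `ds_to_universal.py`, then resume with universal-checkpoint loading enabled. A rough sketch (script path and flags follow that example and should be double-checked against your DeepSpeed version; the checkpoint paths are placeholders):

```python
import subprocess

# 1) Convert the ZeRO checkpoint of a given global step into a universal checkpoint
#    (run from a DeepSpeed source checkout).
subprocess.check_call([
    "python", "deepspeed/checkpoint/ds_to_universal.py",
    "--input_folder", "checkpoints/global_step1000",
    "--output_folder", "checkpoints/global_step1000_universal",
])

# 2) Relaunch training with the new world size, pointing the loader at the converted
#    folder and enabling universal-checkpoint loading (the Megatron-DeepSpeed example
#    uses the --universal-checkpoint flag for this).
```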
> @zaptrem, stage 1/2 and bf16_optimizer are supported. Only stage 3 support is pending. https://github.com/microsoft/Megatron-DeepSpeed/tree/main/examples_deepspeed/universal_checkpointing#zero-stage-2-training
> @samadejacobs, @lekurile FYI
We tried to use this last night and the universal checkpoint conversion script failed because our DS checkpoint was missing `universal_checkpoint_info`. We commented out all references to it and converted anyway, then got this error when we tried to restore from the newly converted universal checkpoint:
`deepspeed.runtime.zero.utils.ZeRORuntimeException: The checkpoint being loaded used a DP world size of 8 but the current world size is 128. Automatic adjustment of ZeRO's optimizer state partitioning with a new world size is not currently supported.`
> We tried to use this last night and the universal checkpoint conversion script failed because our DS checkpoint was missing `universal_checkpoint_info`. We commented out all references to it and converted anyway, then got this error when we tried to restore from the newly converted universal checkpoint:
> `deepspeed.runtime.zero.utils.ZeRORuntimeException: The checkpoint being loaded used a DP world size of 8 but the current world size is 128. Automatic adjustment of ZeRO's optimizer state partitioning with a new world size is not currently supported.`
Hello @zaptrem,
Just as a clarification, are you using Megatron-DeepSpeed for creation of the checkpoint?
In Megatron-DeepSpeed, when the checkpoint gets saved, there's a call to `_universal_checkpoint_info(model)` that updates `state_dict[UNIVERSAL_CHECKPOINT_INFO]`:
`state_dict[UNIVERSAL_CHECKPOINT_INFO] = _universal_checkpoint_info(model)`
If you're not using Megatron-DeepSpeed, you can try ensuring that the same universal checkpoint metadata that the `_universal_checkpoint_info()` call stores is present in your checkpoint as well.
Please share any questions or concerns.
Thanks, Lev
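If it helps, here is a rough sketch of what injecting that metadata into a plain DeepSpeed (non-Megatron) checkpoint could look like; the key name, the target file, and the fields are assumptions and should be checked against `deepspeed.checkpoint.constants` and the Megatron-DeepSpeed implementation:

```python
import torch

# Placeholder path to the model-states file of the checkpoint to be converted.
ckpt_file = "checkpoints/global_step1000/mp_rank_00_model_states.pt"
sd = torch.load(ckpt_file, map_location="cpu")

# Add the metadata that _universal_checkpoint_info(model) would normally write.
sd["universal_checkpoint_info"] = {
    "universal_checkpoint_version": 0.2,  # assumed version key/value
    # For a pure-DP model (no TP/PP), the TP/PP-related hints (vocab sizes,
    # replicated / row-parallel parameter patterns, ...) may not be needed.
}

torch.save(sd, ckpt_file)
```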
We're not training a language model. Is that just a fork of DeepSpeed or something specifically for transformer LMs?
Instead of the universal checkpoint, I use the code from @tjruwase to convert a DeepSpeed checkpoint without TP and PP (128 ranks) to another DeepSpeed checkpoint (32 ranks).
https://gist.github.com/rgtjf/aa90fc37efe38ad773046623780a1026
Discussions and comments are welcome.
> We're not training a language model. Is that just a fork of DeepSpeed or something specifically for transformer LMs?
@zaptrem, just wanted to check if this something we can still help with? Thanks!
> Instead of the universal checkpoint, I use the code from @tjruwase to convert a DeepSpeed checkpoint without TP and PP (128 ranks) to another DeepSpeed checkpoint (32 ranks). https://gist.github.com/rgtjf/e621f3ac27192cb34a10bea700d9a0c0
> Discussions and comments are welcome.
@rgtjf, thanks for sharing your usage of the conversion script. However, our plan is to focus on universal checkpointing, which is general, so that it replaces the conversion script. Are you able to work with us to make universal checkpointing work correctly for your scenario? I am looking at your report here. Thanks!