
[Draft][Demo] auto tp training

Open inkcherry opened this issue 1 year ago • 4 comments

This is an experimental demo of autoTP training, not intended for review. Apologies for the somewhat rough draft; I hope it clarifies the process.

So far I have tested pure TP (DP=1) directly with the HF transformers Trainer. I fine-tuned LLaMA-7B from pretrained weights on 4 GPUs and 8 GPUs with pure TP, and the loss curve went from 1.6 to 0.3 as expected. The main modifications are as follows:

  • On the train script side (see the first sketch after this list):
  1. Explicitly call an API to set up TP (currently the inference API, which invokes autoTP to do the module replacement).
  2. Manually modify the dataloader so that all TP ranks receive identical data (a temporary solution).
  3. Set ds_config.json with zero_stage=0 in the ZeRO config and autotp_size=num_gpus (DP=1).
  • On the DeepSpeed side, the changes in this demo are (see the second sketch after this list):

    1. Decouple MPU from Megatron: I took Megatron's group-management code directly and put it in a parallel_states.py file.
    2. Add backward code for the main replacement modules, LinearLayer & LinearAllreduce.
    3. Add the 'tensor_model_parallel' attribute to the parameters of LinearLayer & LinearAllreduce, so they are handled correctly in grad-norm and other calculations.
    4. Set requires_grad=True for the weights and bias of LinearLayer & LinearAllreduce, so they are captured in model_params by transformers' prepare_deepspeed logic and handed to the DeepSpeed optimizer.
    5. _broadcast_model: because of some inconsistencies in group setup, the DP group used by _broadcast_model is not correct, so I bypass this logic directly (DP=1).
    6. gradient allreduce: disabled directly, for the same reason as item 5. Items 5 & 6 can be resolved by a unified group-init function.
    7. Add the autotp_size config.
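Roughly, the train-script side could look like the sketch below. This is illustrative only: `autotp_size` is a demo-specific config key (not a released DeepSpeed option), and `sync_batch_across_tp_ranks` is a hypothetical helper standing in for the dataloader change, assuming fixed-shape batches.

```python
import torch
import torch.distributed as dist

# ds_config for pure TP: ZeRO disabled (stage 0) and the demo-specific
# autotp_size key set to the number of GPUs, so DP = 1.
ds_config = {
    "train_micro_batch_size_per_gpu": 1,
    "gradient_accumulation_steps": 8,
    "bf16": {"enabled": True},
    "zero_optimization": {"stage": 0},
    "autotp_size": 4,  # demo-specific key, not a released DeepSpeed option
}

# The demo then reuses DeepSpeed's inference-side autoTP entry point to shard
# the model before handing it to the HF Trainer (exact call omitted here,
# since the draft API may still change).

def sync_batch_across_tp_ranks(batch, tp_group):
    """Give every TP rank identical data by broadcasting the batch loaded on
    TP rank 0 to the other ranks of the same TP group (assumes fixed-shape
    tensors, e.g. inputs padded to a constant sequence length)."""
    src = dist.get_global_rank(tp_group, 0)
    for value in batch.values():
        if torch.is_tensor(value):
            dist.broadcast(value, src=src, group=tp_group)
    return batch
```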
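On the DeepSpeed side, the core of items 2-4 can be sketched as below. This is not DeepSpeed's actual LinearAllreduce implementation; the class names here are made up, and it only illustrates the idea: a forward all-reduce whose backward is the identity, plus the 'tensor_model_parallel' and requires_grad flags.

```python
import torch
import torch.distributed as dist
import torch.nn.functional as F
from torch import nn

class _AllReduceIdentityBackward(torch.autograd.Function):
    """All-reduce in forward; pass gradients through unchanged in backward.

    Downstream activations are replicated across TP ranks, so the incoming
    gradient is already identical on every rank and no extra communication
    is needed in backward.
    """

    @staticmethod
    def forward(ctx, x, group):
        # x is the local partial matmul output and has no other consumers,
        # so reducing it in place is fine here.
        dist.all_reduce(x, group=group)
        return x

    @staticmethod
    def backward(ctx, grad_output):
        return grad_output, None


class RowParallelLinear(nn.Module):
    """Row-parallel linear: each rank holds a shard of the weight along in_features."""

    def __init__(self, weight_shard, bias, tp_group):
        super().__init__()
        self.weight = nn.Parameter(weight_shard)  # [out_features, in_features // tp_size]
        self.bias = nn.Parameter(bias) if bias is not None else None
        self.tp_group = tp_group
        # Mark as trainable so HF's prepare-deepspeed logic hands the params to
        # the optimizer, and tag them so grad-norm treats them as TP-sharded.
        for p in self.parameters():
            p.requires_grad = True
            p.tensor_model_parallel = True

    def forward(self, x):
        out = F.linear(x, self.weight)  # local partial sum
        out = _AllReduceIdentityBackward.apply(out, self.tp_group)
        if self.bias is not None:
            out = out + self.bias
        return out
```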

In this basic version I ran two simple tests. With the same gbs and gas settings, it reaches about 70% of ZeRO-3's throughput, but it hits an OOM threshold at a lower gbs than ZeRO-3 (at that point ZeRO-3 performs better; the cause may be the dataloader or some missing optimizations from Megatron, I did not analyze it further).

The benefit of this approach is decoupling TP from the Megatron binding, so users can train directly with transformers + DeepSpeed using TP plus other features, and it can also be applied to other simple models (through module replacement). In addition, because the autoTP inference code already exists and the ZeRO backend is already integrated with transformers, not much extra logic is needed.

  • For the most basic usage, there are some issues to handle (see the sketches after this list):
  1. transformers seems to have no notion of real TP (only single-device simulation), which causes minor problems with dataset counting. For example, if two ranks each load the same 4 samples, they are counted as 8. This affects the displayed counters and some parameters of the optimizer schedule (effectively accelerating the LR decay). The count of trained num_samples needs to be corrected.
  2. Checkpoint load and save: if autoTP-trained checkpoints only need to be consumed by autoTP inference, this should be much easier; otherwise some reverse-shard operations may be needed.
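For item 1, the correction is essentially to divide the counted samples by the TP degree, since TP ranks replicate the same data. A tiny sketch (the helper name is hypothetical; the numbers match the 2-rank example above):

```python
def num_distinct_samples(per_rank_batch_size, world_size, tp_size):
    """TP ranks see identical data, so only world_size // tp_size ranks
    contribute unique samples per step."""
    dp_size = world_size // tp_size
    return per_rank_batch_size * dp_size

# Example: 2 ranks each loading the same 4 samples (pure TP, tp_size=2)
# should count as 4 samples, not 8.
assert num_distinct_samples(per_rank_batch_size=4, world_size=2, tp_size=2) == 4
```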
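For item 2, the "reverse shard" direction could look like the sketch below: all-gather each TP shard and concatenate along the dimension that was split. The shard dimensions mentioned in the docstring assume the usual column-/row-parallel layout and are an assumption on my part, not verified against the demo.

```python
import torch
import torch.distributed as dist

def gather_full_weight(shard, shard_dim, tp_group):
    """Reconstruct the unsharded weight from TP shards before saving
    (e.g. dim 0 for a column-parallel LinearLayer weight,
    dim 1 for a row-parallel LinearAllreduce weight)."""
    world_size = dist.get_world_size(group=tp_group)
    shards = [torch.empty_like(shard) for _ in range(world_size)]
    dist.all_gather(shards, shard.contiguous(), group=tp_group)
    return torch.cat(shards, dim=shard_dim)
```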

For broader use: the most basic step is compatibility with ZeRO DP; it could also be made compatible with more features (by reusing DeepSpeed's existing logic for Megatron's TP), plus some performance and memory optimizations.
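A rough sketch of the unified group init mentioned in DS-side items 5 & 6, which would also be the basis for TP + ZeRO DP compatibility. The function and globals here are illustrative, not DeepSpeed's actual API: ranks are arranged in a (dp_size x tp_size) grid, adjacent ranks form a TP group, and ranks at the same position across TP groups form a DP group.

```python
import torch.distributed as dist

_TP_GROUP = None
_DP_GROUP = None

def initialize_parallel_groups(tp_size):
    """Build TP and DP process groups from a single 2D (dp_size x tp_size) grid."""
    global _TP_GROUP, _DP_GROUP
    world_size = dist.get_world_size()
    rank = dist.get_rank()
    assert world_size % tp_size == 0
    dp_size = world_size // tp_size

    # TP groups: consecutive ranks [0..tp_size-1], [tp_size..2*tp_size-1], ...
    for dp in range(dp_size):
        ranks = list(range(dp * tp_size, (dp + 1) * tp_size))
        group = dist.new_group(ranks)
        if rank in ranks:
            _TP_GROUP = group

    # DP groups: ranks holding the same shard position in their TP group.
    for tp in range(tp_size):
        ranks = list(range(tp, world_size, tp_size))
        group = dist.new_group(ranks)
        if rank in ranks:
            _DP_GROUP = group
```

With groups set up this way, _broadcast_model and the gradient all-reduce could use _DP_GROUP directly instead of being bypassed.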

inkcherry avatar Apr 22 '24 09:04 inkcherry

@inkcherry Is there a link to the demo code? I'm interested in the potential use case of this feature proposal.

delock avatar Apr 23 '24 01:04 delock

This PR should address this discussion: https://github.com/microsoft/DeepSpeed/discussions/4930

delock avatar Apr 23 '24 06:04 delock

Hi @delock, FYI: https://github.com/inkcherry/stanford_alpaca/tree/tp_demo (see the latest commit message). Due to my limited bandwidth, it is hard for me to keep a continuous focus on this. If possible, I would really appreciate an experienced engineer like you helping to complete or enhance it.

inkcherry avatar Apr 23 '24 14:04 inkcherry

@inkcherry and @delock, please let us know any way we can help. Thanks!

tjruwase avatar Apr 23 '24 14:04 tjruwase