
Upgrading to DeepSpeed v0.6.5 causes higher GPU memory usage

Open · SuhitK opened this issue 3 years ago · 1 comment

Hi,

I am trying to train a 10B-parameter GPT-2 model with Hugging Face and DeepSpeed. This is the DeepSpeed configuration I am using:

{
  "train_micro_batch_size_per_gpu": 16,
  "gradient_accumulation_steps": 2,
  "prescale_gradients": false,

  "zero_optimization": {
    "stage": 2,
    "overlap_comm": true,
    "sub_group_size": 1.000000e+12, 
    "allgather_bucket_size": 5e7,
    "reduce_bucket_size": 5e7
  },

  "zero_allow_untested_optimizer": true,
  "optimizer": {
    "type": "AdamW",
    "params": {
      "lr": 5e-5,
      "weight_decay": 0.01,
      "bias_correction": true,
      "eps": 1e-6
    }
  },
  
  "gradient_clipping": 1.0,
  "wall_clock_breakdown": false,

  "scheduler": {
      "type": "WarmupDecayLR", 
      "params": {
          "last_batch_iteration": -1, 
          "total_num_steps": 40, 
          "warmup_min_lr": 0, 
          "warmup_max_lr": 5e-05, 
          "warmup_num_steps": 5
          }
    },

  "fp16": {
    "enabled": true,
    "loss_scale": 0,
    "initial_scale_power": 20,
    "loss_scale_window": 1000
  }
}
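
As a quick sanity check on what this config implies, the effective global batch size can be computed from the batch-related keys. This is just arithmetic on values taken from the config above, combined with the 16 × 8 = 128 GPUs listed in the system info below (the world size is taken from that, not from the config itself):

```python
import json

# Only the batch-related keys from the config above are needed here.
config = json.loads("""
{
  "train_micro_batch_size_per_gpu": 16,
  "gradient_accumulation_steps": 2
}
""")

world_size = 16 * 8  # 16 nodes x 8 A100s, per the system info below

effective_batch = (config["train_micro_batch_size_per_gpu"]
                   * config["gradient_accumulation_steps"]
                   * world_size)
print(effective_batch)  # 16 * 2 * 128 = 4096
```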

In addition, I have gradient_checkpointing enabled via the Hugging Face API. This configuration runs out of memory during the first forward pass through the model. When I previously ran the same configuration with DeepSpeed v0.3.16, I did not face this issue; it only appeared after upgrading to v0.6.5.
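
For context on where the memory should be going, here is a rough back-of-envelope estimate of per-GPU state memory for a 10B-parameter model under ZeRO stage 2, where fp16 weights are replicated on every rank while gradients and Adam optimizer states are partitioned across ranks. The 12 bytes per parameter for the fp32 master copy plus the two Adam moments follows the usual ZeRO accounting; activations and framework buffers are deliberately excluded, so this is a lower bound, not a prediction:

```python
n_params = 10e9          # 10B-parameter GPT-2, per the issue
world_size = 16 * 8      # 16 nodes x 8 A100s, per the system info

fp16_params = 2 * n_params                 # replicated on every rank under ZeRO-2
fp16_grads = 2 * n_params / world_size     # gradients partitioned across ranks
optim_states = 12 * n_params / world_size  # fp32 master copy + 2 Adam moments

total_gb = (fp16_params + fp16_grads + optim_states) / 2**30
print(round(total_gb, 1))  # roughly 19.6 GiB per GPU before activations
```

Since the persistent states alone account for only about 20 GiB per GPU, an OOM in the first forward pass suggests the regression is in activation memory or temporary buffer allocation rather than in the partitioned states themselves.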

ds_report output:

--------------------------------------------------
DeepSpeed C++/CUDA extension op report
--------------------------------------------------
NOTE: Ops not installed will be just-in-time (JIT) compiled at
      runtime if needed. Op compatibility means that your system
      meet the required dependencies to JIT install the op.
--------------------------------------------------
JIT compiled ops requires ninja
ninja .................. [OKAY]
--------------------------------------------------
op name ................ installed .. compatible
--------------------------------------------------
cpu_adam ............... [NO] ....... [OKAY]
cpu_adagrad ............ [NO] ....... [OKAY]
fused_adam ............. [NO] ....... [OKAY]
fused_lamb ............. [NO] ....... [OKAY]
 [WARNING]  please install triton==1.0.0 if you want to use sparse attention
sparse_attn ............ [NO] ....... [NO]
transformer ............ [NO] ....... [OKAY]
stochastic_transformer . [NO] ....... [OKAY]
 [WARNING]  async_io requires the dev libaio .so object and headers but these were not found.
 [WARNING]  async_io: please install the libaio-dev package with apt
 [WARNING]  If libaio is already installed (perhaps from source), try setting the CFLAGS and LDFLAGS environment variables to where it can be found.
async_io ............... [NO] ....... [NO]
utils .................. [NO] ....... [OKAY]
quantizer .............. [NO] ....... [OKAY]
transformer_inference .. [NO] ....... [OKAY]
--------------------------------------------------
DeepSpeed general environment info:
torch install path ............... ['/opt/conda/lib/python3.8/site-packages/torch']
torch version .................... 1.10.2+cu113
torch cuda version ............... 11.3
torch hip version ................ None
nvcc version ..................... 11.3
deepspeed install path ........... ['/opt/conda/lib/python3.8/site-packages/deepspeed']
deepspeed info ................... 0.6.5, unknown, unknown
deepspeed wheel compiled w. ...... torch 1.10, cuda 11.3

System info:

  • OS: Ubuntu 20.04
  • GPU count and types: 16 nodes, each with 8 A100 GPUs
  • Python version: 3.8

Launcher context: using the deepspeed launcher

SuhitK avatar Jun 21 '22 14:06 SuhitK

@SuhitK, apologies for the delayed response. Can you please check if this issue still exists with the latest master? Thanks!

tjruwase avatar Jul 29 '22 12:07 tjruwase

@SuhitK, please re-open if you are still having issues.

jeffra avatar Dec 02 '22 19:12 jeffra