
[BUG] Trying to finetune chatglm-6b with ZeRO stage 3, but CUDA OOM

Open zhujc000 opened this issue 2 years ago • 1 comment

Describe the bug
Trying to finetune chatglm-6b with ZeRO stage 3, but hitting CUDA OOM.
Model on Hugging Face: https://huggingface.co/THUDM/chatglm-6b

To Reproduce

os.environ["LOCAL_RANK"] = str(rank)
os.environ["PATH"] = ":/xxx/anaconda3/bin/"
deepspeed.init_distributed(dist_backend="nccl",
                           distributed_port=3456,
                           init_method="tcp://xxx:23456",
                           world_size=8,
                           rank=rank,
                           auto_mpi_discovery=False)
torch.cuda.set_device(rank)

config = ChatGLMConfig.from_pretrained(CONFIG_PATH)
tokenizer = ChatGLMTokenizer.from_pretrained(BASE_DIR)

train_dataset = GLMDataset(data_path=DATA_FILE, tokenizer=tokenizer)
print("train_dataset loaded")

model = ChatGLMForConditionalGeneration.from_pretrained(pretrained_model_name_or_path=BASE_DIR,
                                                        config=config,
                                                        torch_dtype=torch.float16)

print("model loaded")

peft_config = LoraConfig(task_type=TaskType.CAUSAL_LM,
                         inference_mode=False,
                         r=12,
                         lora_alpha=32,
                         lora_dropout=0.1)

model = get_peft_model(model, peft_config=peft_config)
print("peft model loaded")
model.print_trainable_parameters()

model, _, data_loader, _ = deepspeed.initialize(model=model,
                                                training_data=train_dataset,
                                                model_parameters=[p for p in model.parameters() if p.requires_grad],
                                                config=DEEPSPEED_CONF)

model.train()

for epoch in range(9):
    for step, input_v in enumerate(data_loader):
        for k, v in input_v.items():
            try:
                input_v[k] = v.cuda()
            except Exception as e:
                raise e

        loss = model(**input_v)
        model.backward(loss)
        model.step()
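
One thing visible in this repro: from_pretrained materializes the full fp16 model on every rank before deepspeed.initialize runs. For comparison, a minimal sketch of load-time partitioning via the Hugging Face DeepSpeed integration (assuming HfDeepSpeedConfig is picked up by this custom ChatGLM model class; DEEPSPEED_CONF is the same dict as in the config section below):

# Sketch only: keep an HfDeepSpeedConfig object alive BEFORE from_pretrained so
# that ZeRO-3 can partition parameters during loading instead of afterwards.
from transformers.deepspeed import HfDeepSpeedConfig

dschf = HfDeepSpeedConfig(DEEPSPEED_CONF)  # must stay referenced

model = ChatGLMForConditionalGeneration.from_pretrained(pretrained_model_name_or_path=BASE_DIR,
                                                        config=config,
                                                        torch_dtype=torch.float16)
# from_pretrained now goes through deepspeed.zero.Init, so each rank holds only
# its own shard of the parameters rather than a full copy.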

Expected behavior
Finetuning completes without CUDA OOM.

ds_report output

DeepSpeed C++/CUDA extension op report

NOTE: Ops not installed will be just-in-time (JIT) compiled at runtime if needed. Op compatibility means that your system meet the required dependencies to JIT install the op.

JIT compiled ops requires ninja
ninja .................. [OKAY]

op name ................ installed .. compatible

async_io ............... [NO] ....... [OKAY]
cpu_adagrad ............ [NO] ....... [OKAY]
cpu_adam ............... [YES] ...... [OKAY]
fused_adam ............. [YES] ...... [OKAY]
fused_lamb ............. [NO] ....... [OKAY]
quantizer .............. [NO] ....... [OKAY]
random_ltd ............. [NO] ....... [OKAY]
[WARNING] sparse_attn requires a torch version >= 1.5 and < 2.0 but detected 2.0
[WARNING] using untested triton version (2.0.0), only 1.0.0 is known to be compatible
sparse_attn ............ [NO] ....... [NO]
spatial_inference ...... [NO] ....... [OKAY]
transformer ............ [NO] ....... [OKAY]
stochastic_transformer . [NO] ....... [OKAY]
transformer_inference .. [NO] ....... [OKAY]
utils .................. [YES] ...... [OKAY]

DeepSpeed general environment info:
torch install path ............... ['/xxx/anaconda3/lib/python3.9/site-packages/torch']
torch version .................... 2.0.0+cu118
deepspeed install path ........... ['/xxx/anaconda3/lib/python3.9/site-packages/deepspeed']
deepspeed info ................... 0.9.2, unknown, unknown
torch cuda version ............... 11.8
torch hip version ................ None
nvcc version ..................... 11.8
deepspeed wheel compiled w. ...... torch 2.0, cuda 11.8

System info (please complete the following information):

  • OS: CentOS 7.9
  • One node with 8 NVIDIA T4 cards (16 GB GPU RAM each), CPU RAM > 500 GB
  • Python version: 3.9 (from the install path above)
  • CUDA Version: 11.8

Launcher context
The script is launched directly with the python command, e.g.: python xxx.py

DeepSpeed config

DEEPSPEED_CONF = {
    "fp16": {
        "enabled": "auto",
        "loss_scale": 0,
        "loss_scale_window": 1000,
        "initial_scale_power": 16,
        "hysteresis": 2,
        "min_loss_scale": 1
    },

"optimizer": {
    "type": "AdamW",
    "params": {
        # "torch_adam": True,
        # "adam_w_mode": True,
        "lr": 1e-5,
        "weight_decay": 0.1
    }
},
#
# "scheduler": {
#     "type": "WarmupLR",
#     "params": {
#         "warmup_min_lr": "auto",
#         "warmup_max_lr": "auto",
#         "warmup_num_steps": "auto"
#     }
# },

"zero_optimization": {
    "stage": 3,
    "offload_optimizer": {
        "device": "cpu",
        "pin_memory": True
    },
    "offload_param": {
        "device": "cpu",
        "pin_memory": True
    },
    "overlap_comm": True,
    "contiguous_gradients": True,
    "sub_group_size": 1e9,
    "reduce_bucket_size": "auto",
    "stage3_prefetch_bucket_size": "auto",
    "stage3_param_persistence_threshold": "auto",
    "stage3_max_live_parameters": 1e9,
    "stage3_max_reuse_distance": 1e9,
    "stage3_gather_16bit_weights_on_model_save": True
},

"gradient_accumulation_steps": 2,
"gradient_clipping": 1,
"steps_per_print": 1234,
"train_batch_size": 16,
"train_micro_batch_size_per_gpu": 1,
"wall_clock_breakdown": False

}
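
For a quick sanity check of whether the model states fit this topology, DeepSpeed ships ZeRO-3 memory estimators; a minimal sketch (called on the model object from the script above, before deepspeed.initialize):

# Sketch: print per-GPU/CPU memory estimates for params, grads and optimizer
# states under the different ZeRO-3 offload options. Activation memory is NOT
# included in these numbers.
from deepspeed.runtime.zero.stage3 import estimate_zero3_model_states_mem_needs_all_live

estimate_zero3_model_states_mem_needs_all_live(model, num_gpus_per_node=8, num_nodes=1)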

zhujc000 · May 11 '23 14:05

Solved after I enabled gradient checkpointing (a sketch of what that looks like is below). But it seems ZeRO stage 3 does not partition params across GPUs; is that a bug?
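
For reference, a minimal sketch of enabling gradient checkpointing in this script (applied to the base HF model before get_peft_model; use_cache and enable_input_require_grads are the usual companion settings, assuming the ChatGLM implementation supports them):

# Sketch: trade recompute time for activation memory.
model.gradient_checkpointing_enable()   # recompute activations during backward
model.config.use_cache = False          # KV cache is incompatible with checkpointing
model.enable_input_require_grads()      # needed with a frozen base model + LoRA adapters

model = get_peft_model(model, peft_config=peft_config)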

zhujc000 · May 13 '23 16:05

@zhujc000, gradient checkpointing addresses activation memory consumption, which is different from the model/optimizer state memory that ZeRO-3 partitions. I think you found the right solution for the problem.
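
For a rough sense of the numbers (back-of-envelope only, assuming ~6.2B frozen base parameters and the CPU-offload config above):

# Rough estimate; ignores CUDA context, temporary buffers and fragmentation.
params = 6.2e9                              # ChatGLM-6B base parameters, frozen under LoRA
fp16_weights_gb = params * 2 / 1e9          # ~12.4 GB of fp16 weights in total
per_gpu_weights_gb = fp16_weights_gb / 8    # ~1.55 GB/GPU when partitioned (less with CPU offload)
# Optimizer states exist only for the small set of trainable LoRA parameters and
# are offloaded to CPU here, so GPU memory is dominated by activations, which
# scale with batch size and sequence length. That is the part gradient
# checkpointing reduces and the part ZeRO-3 does not partition.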

tjruwase · May 15 '23 15:05