DeepSpeed
[BUG] loading model error
Describe the bug
I use 8 * A100-80GB GPUs to fine-tune a 72B model, but when loading reaches safetensor shard 16/37, the process is always shut down for no apparent reason and without any hints!
The error is as follows:
Loading checkpoint shards:  43% | 16/37 [19:49<25:57, 74.18s/it]
Sending process 4029695 closing signal SIGTERM
failed (exitcode: -5) local_rank: 5
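(A SIGTERM from the launcher partway through shard loading is often the host OOM killer rather than a GPU error; after a crash like this, `dmesg -T | grep -i -E 'killed process|out of memory'` usually shows the kill. As a minimal sketch, not from the original report, here is a watcher that logs host RAM while the checkpoint loads; psutil and the 1-second interval are my assumptions:)

```python
# Hypothetical monitor (not part of the original report): run in a separate
# terminal while the checkpoint shards load, to see whether host RAM fills up
# just before the process is killed.
import time

import psutil  # pip install psutil


def watch_memory(interval_s: float = 1.0) -> None:
    """Print used and available host memory until interrupted."""
    while True:
        mem = psutil.virtual_memory()
        print(
            f"used={mem.used / 2**30:.1f} GiB "
            f"available={mem.available / 2**30:.1f} GiB "
            f"({mem.percent:.0f}% used)"
        )
        time.sleep(interval_s)


if __name__ == "__main__":
    watch_memory()
```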
I use ZeRO-3 and Accelerate to launch my script. The command I use is: accelerate launch --config_file xxx --gpu_ids 0,1,2,3,4,5,6,7 xxx.py
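(For context, a hypothetical skeleton of what the model-loading part of such a script usually looks like; the model name and loading call below are my assumptions, since the actual xxx.py is not shown. One relevant detail: under ZeRO-3, `from_pretrained` can partition parameters across ranks while loading via `deepspeed.zero.Init`, which with Accelerate is controlled by `zero3_init_flag` in the Accelerate config. If that flag is off, each of the 8 processes loads the full fp16 model into host RAM at once, which for a 72B model can exhaust memory and trigger exactly this kind of kill.)

```python
# Hypothetical skeleton of xxx.py (the actual script is not in the report).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_NAME = "Qwen/Qwen1.5-72B"  # assumption: stands in for any 72B checkpoint

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)

# With zero3_init_flag enabled in the Accelerate config, this call partitions
# parameters across ranks as shards load instead of materializing the full
# ~134 GiB fp16 model on every process.
model = AutoModelForCausalLM.from_pretrained(
    MODEL_NAME,
    torch_dtype=torch.float16,  # matches the "fp16" section of the DeepSpeed config
)
```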
The DeepSpeed config file is as follows:
{
"fp16": {
"enabled": true,
"loss_scale": 0,
"loss_scale_window": 1000,
"initial_scale_power": 16,
"hysteresis": 2,
"min_loss_scale": 1
},
"optimizer": {
"type": "AdamW",
"params": {
"lr": "auto",
"weight_decay": "auto"
}
},
"scheduler": {
"type": "WarmupDecayLR",
"params": {
"warmup_min_lr": "auto",
"warmup_max_lr": "auto",
"warmup_num_steps": "auto",
"total_num_steps": "auto"
}
},
"zero_optimization": {
"stage": 3,
"offload_optimizer": {
"device": "cpu",
"pin_memory": true
},
"offload_param": {
"device": "cpu",
"pin_memory": true
},
"overlap_comm": true,
"contiguous_gradients": true,
"reduce_bucket_size": "auto",
"stage3_prefetch_bucket_size": "auto",
"stage3_param_persistence_threshold": "auto",
"sub_group_size": 1e9,
"stage3_max_live_parameters": 1e9,
"stage3_max_reuse_distance": 1e9,
"stage3_gather_16bit_weights_on_model_save": "auto"
},
"gradient_accumulation_steps": 1,
"gradient_clipping": "auto",
"steps_per_print": 2000,
"train_batch_size": "auto",
"train_micro_batch_size_per_gpu": "auto",
"wall_clock_breakdown": false
}
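(For scale, some back-of-the-envelope arithmetic on this config; these are my own illustrative numbers, not measurements from the report. Because both `offload_optimizer` and `offload_param` point at `cpu`, the fp32 master weights plus the two Adam moments, roughly 12 bytes per parameter, live in host RAM on top of whatever the loading path stages there:)

```python
# Rough memory arithmetic for a 72B-parameter model under this config
# (illustrative estimate, not a measurement from the original report).
PARAMS = 72e9
GPUS = 8

fp16_full_copy_gib = PARAMS * 2 / 2**30              # one full fp16 copy
fp16_shard_per_rank_gib = fp16_full_copy_gib / GPUS  # ZeRO-3 parameter shard

# With offload to CPU, fp32 master weights + Adam momentum + variance
# (4 + 4 + 4 bytes per parameter) reside in host RAM:
optimizer_cpu_gib = PARAMS * 12 / 2**30

print(f"full fp16 copy:       {fp16_full_copy_gib:7.1f} GiB")   # ~134.1 GiB
print(f"fp16 shard per rank:  {fp16_shard_per_rank_gib:7.1f} GiB")  # ~16.8 GiB
print(f"CPU optimizer states: {optimizer_cpu_gib:7.1f} GiB")    # ~804.7 GiB
```

So host RAM, not the 8x80GB of GPU memory, is the likely bottleneck here. For a more precise estimate, DeepSpeed ships `estimate_zero3_model_states_mem_needs_all_live` in `deepspeed.runtime.zero.stage3`.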
Expected behavior
I expect it to load all of the safetensors. What's more, during the loading process, I noticed that GPU memory usage stays at 0.
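(Side note, not from the original report: with `offload_param` set to `cpu` in the config above, parameters are staged in host RAM during loading, so near-zero GPU memory at this stage is expected rather than a symptom. A quick sketch to confirm what is actually allocated on each GPU:)

```python
# Print current CUDA allocations per device; under CPU param offload these
# should stay near zero while checkpoint shards are still loading.
import torch

for i in range(torch.cuda.device_count()):
    print(f"cuda:{i} allocated = {torch.cuda.memory_allocated(i) / 2**30:.2f} GiB")
```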
System info (please complete the following information):
- OS: Ubuntu 18.04
- GPU count and types: 8*A100-80GB
- Python version: 3.12
Hi, I actually encountered the same issue here. Did you happen to figure out what was causing the problem?