Trainer.train(resume_from_checkpoint=...) fails when using auto tensor parallel
Describe the bug
I tried to use Trainer.train(resume_from_checkpoint=...) to resume training, but it failed with the following AssertionError:
[rank1]: Traceback (most recent call last):
[rank1]: File "/data/llm_algorithms/scripts/pretrain/pretrain.py", line 129, in <module>
[rank1]: main(data_args, train_args)
[rank1]: File "/data/llm_algorithms/scripts/pretrain/pretrain.py", line 123, in main
[rank1]: trainer.train(resume_from_checkpoint=train_args.resume_from_checkpoint)
[rank1]: File "/data/miniconda3/envs/llm_algorithms/lib/python3.11/site-packages/transformers/trainer.py", line 2240, in train
[rank1]: return inner_training_loop(
[rank1]: ^^^^^^^^^^^^^^^^^^^^
[rank1]: File "/data/miniconda3/envs/llm_algorithms/lib/python3.11/site-packages/transformers/trainer.py", line 2393, in _inner_training_loop
[rank1]: deepspeed_load_checkpoint(
[rank1]: File "/data/miniconda3/envs/llm_algorithms/lib/python3.11/site-packages/transformers/integrations/deepspeed.py", line 489, in deepspeed_load_checkpoint
[rank1]: load_path, _ = deepspeed_engine.load_checkpoint(
[rank1]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
[rank1]: File "/data/miniconda3/envs/llm_algorithms/lib/python3.11/site-packages/deepspeed/runtime/engine.py", line 2980, in load_checkpoint
[rank1]: load_path, client_states = self._load_checkpoint(load_dir,
[rank1]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
[rank1]: File "/data/miniconda3/envs/llm_algorithms/lib/python3.11/site-packages/deepspeed/runtime/engine.py", line 3030, in _load_checkpoint
[rank1]: sd_loader = SDLoaderFactory.get_sd_loader(ckpt_list, checkpoint_engine=self.checkpoint_engine)
[rank1]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
[rank1]: File "/data/miniconda3/envs/llm_algorithms/lib/python3.11/site-packages/deepspeed/runtime/state_dict_factory.py", line 43, in get_sd_loader
[rank1]: return MegatronSDLoader(ckpt_list, version, checkpoint_engine)
[rank1]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
[rank1]: File "/data/miniconda3/envs/llm_algorithms/lib/python3.11/site-packages/deepspeed/runtime/state_dict_factory.py", line 193, in __init__
[rank1]: super().__init__(ckpt_list, version, checkpoint_engine)
[rank1]: File "/data/miniconda3/envs/llm_algorithms/lib/python3.11/site-packages/deepspeed/runtime/state_dict_factory.py", line 55, in __init__
[rank1]: self.check_ckpt_list()
[rank1]: File "/data/miniconda3/envs/llm_algorithms/lib/python3.11/site-packages/deepspeed/runtime/state_dict_factory.py", line 174, in check_ckpt_list
[rank1]: assert len(self.ckpt_list) == sd[
[rank1]: ^^^^^^^^^^^^^^^^^^^^^^^^^^
[rank1]: AssertionError: checkpoint count 1 is different from saved mp_world_size 2
[2025-05-29 17:19:49,532] [WARNING] [lr_schedules.py:683:get_lr] Attempting to get learning rate from scheduler before it has started
[rank0]: Traceback (most recent call last):
[rank0]: File "/data/llm_algorithms/scripts/pretrain/pretrain.py", line 129, in <module>
[rank0]: main(data_args, train_args)
[rank0]: File "/data/llm_algorithms/scripts/pretrain/pretrain.py", line 123, in main
[rank0]: trainer.train(resume_from_checkpoint=train_args.resume_from_checkpoint)
[rank0]: File "/data/miniconda3/envs/llm_algorithms/lib/python3.11/site-packages/transformers/trainer.py", line 2240, in train
[rank0]: return inner_training_loop(
[rank0]: ^^^^^^^^^^^^^^^^^^^^
[rank0]: File "/data/miniconda3/envs/llm_algorithms/lib/python3.11/site-packages/transformers/trainer.py", line 2393, in _inner_training_loop
[rank0]: deepspeed_load_checkpoint(
[rank0]: File "/data/miniconda3/envs/llm_algorithms/lib/python3.11/site-packages/transformers/integrations/deepspeed.py", line 489, in deepspeed_load_checkpoint
[rank0]: load_path, _ = deepspeed_engine.load_checkpoint(
[rank0]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
[rank0]: File "/data/miniconda3/envs/llm_algorithms/lib/python3.11/site-packages/deepspeed/runtime/engine.py", line 2980, in load_checkpoint
[rank0]: load_path, client_states = self._load_checkpoint(load_dir,
[rank0]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
[rank0]: File "/data/miniconda3/envs/llm_algorithms/lib/python3.11/site-packages/deepspeed/runtime/engine.py", line 3030, in _load_checkpoint
[rank0]: sd_loader = SDLoaderFactory.get_sd_loader(ckpt_list, checkpoint_engine=self.checkpoint_engine)
[rank0]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
[rank0]: File "/data/miniconda3/envs/llm_algorithms/lib/python3.11/site-packages/deepspeed/runtime/state_dict_factory.py", line 43, in get_sd_loader
[rank0]: return MegatronSDLoader(ckpt_list, version, checkpoint_engine)
[rank0]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
[rank0]: File "/data/miniconda3/envs/llm_algorithms/lib/python3.11/site-packages/deepspeed/runtime/state_dict_factory.py", line 193, in __init__
[rank0]: super().__init__(ckpt_list, version, checkpoint_engine)
[rank0]: File "/data/miniconda3/envs/llm_algorithms/lib/python3.11/site-packages/deepspeed/runtime/state_dict_factory.py", line 55, in __init__
[rank0]: self.check_ckpt_list()
[rank0]: File "/data/miniconda3/envs/llm_algorithms/lib/python3.11/site-packages/deepspeed/runtime/state_dict_factory.py", line 174, in check_ckpt_list
[rank0]: assert len(self.ckpt_list) == sd[
[rank0]: ^^^^^^^^^^^^^^^^^^^^^^^^^^
[rank0]: AssertionError: checkpoint count 1 is different from saved mp_world_size 2
[rank0]:[W529 17:19:50.117906332 ProcessGroupNCCL.cpp:1496] Warning: WARNING: destroy_process_group() was not called before program exit, which can leak resources. For more info, please see https://pytorch.org/docs/stable/distributed.html#shutdown (function operator())
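For context on what trips here: DeepSpeed's check_ckpt_list compares the number of mp_rank_*_model_states.pt shards it finds under global_step10 with the tensor-parallel world size recorded inside the checkpoint. Below is a minimal, hedged sketch of that check, not DeepSpeed's exact code; the "mp_world_size" key name is inferred from the error message, and the path is the resume path from the launcher further down.

# Approximate reconstruction of the check that fails in
# deepspeed/runtime/state_dict_factory.py::check_ckpt_list.
# Illustration only; the "mp_world_size" key is inferred from the error message.
import glob
import os

import torch

step_dir = "ckpts/pretrain/Qwen_Qwen3-0.6B/v1/checkpoint-10/global_step10"

# One model-state shard is written per tensor-parallel (mp) rank.
ckpt_list = sorted(glob.glob(os.path.join(step_dir, "mp_rank_*_model_states.pt")))
print("shards visible on this node:", [os.path.basename(p) for p in ckpt_list])

sd = torch.load(ckpt_list[0], map_location="cpu", weights_only=False)
saved_mp_world_size = sd.get("mp_world_size")
print("mp_world_size recorded in the checkpoint:", saved_mp_world_size)

# With autotp_size=2 but node-local storage, each node sees only 1 of the 2
# shards, so this is exactly the assertion that fires in the traceback.
assert len(ckpt_list) == saved_mp_world_size, (
    f"checkpoint count {len(ckpt_list)} is different from saved mp_world_size {saved_mp_world_size}"
)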
To Reproduce
Steps to reproduce the behavior: I used Trainer to pretrain Qwen3-0.6B.
ds_config:
{
  "train_batch_size": "auto",
  "train_micro_batch_size_per_gpu": "auto",
  "gradient_accumulation_steps": "auto",
  "gradient_clipping": "auto",
  "fp16": {
    "enabled": "auto",
    "loss_scale": 0,
    "loss_scale_window": 1000,
    "initial_scale_power": 16,
    "hysteresis": 2,
    "min_loss_scale": 1
  },
  "bf16": {
    "enabled": "auto"
  },
  "optimizer": {
    "type": "AdamW",
    "params": {
      "lr": "auto",
      "betas": "auto",
      "eps": "auto",
      "weight_decay": "auto"
    }
  },
  "scheduler": {
    "type": "WarmupDecayLR",
    "params": {
      "total_num_steps": "auto",
      "warmup_min_lr": "auto",
      "warmup_max_lr": "auto",
      "warmup_num_steps": "auto"
    }
  },
  "checkpoint": {
    "load_universal": false,
    "tag_validation": "Warn",
    "use_node_local_storage": true
  },
  "zero_optimization": {
    "stage": 2,
    "gather_16bit_weights_on_model_save": true,
    "offload_optimizer": {
      "device": "none",
      "pin_memory": true
    },
    "allgather_partitions": true,
    "allgather_bucket_size": 2e8,
    "overlap_comm": true,
    "reduce_scatter": true,
    "reduce_bucket_size": 2e8,
    "contiguous_gradients": true
  },
  "tensor_parallel": {
    "autotp_size": 2
  },
  "environment": {
    "NCCL_DEBUG": "WARN"
  },
  "logging": {
    "rank_zero_only": true,
    "level": "INFO"
  },
  "wall_clock_breakdown": false
}
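For reference, the relevant part of scripts/pretrain/pretrain.py looks roughly like the following. This is a minimal sketch: only the trainer.train(resume_from_checkpoint=train_args.resume_from_checkpoint) call comes from the traceback; the model/dataset fields on data_args and train_args are hypothetical placeholders.

# Minimal sketch of the Trainer wiring in pretrain.py (hypothetical except for
# the trainer.train(resume_from_checkpoint=...) call shown in the traceback).
# train_args is an extended TrainingArguments parsed from the CLI flags below,
# including deepspeed=<path to the ds_config above>.
from transformers import AutoModelForCausalLM, AutoTokenizer, Trainer

def main(data_args, train_args):
    tokenizer = AutoTokenizer.from_pretrained(train_args.model_name_or_path)    # hypothetical field
    model = AutoModelForCausalLM.from_pretrained(train_args.model_name_or_path)

    trainer = Trainer(
        model=model,
        args=train_args,
        train_dataset=data_args.train_dataset,   # hypothetical field
        eval_dataset=data_args.eval_dataset,     # hypothetical field
        processing_class=tokenizer,
    )
    # This is the call that reaches deepspeed_load_checkpoint() and trips the
    # mp_world_size assertion when resuming.
    trainer.train(resume_from_checkpoint=train_args.resume_from_checkpoint)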
The training script runs on two nodes (each node has two L40 GPUs). The checkpoints on each node are:
node1
checkpoint-10
├── config.json
├── generation_config.json
├── global_step10
│   ├── bf16_zero_pp_rank_0_mp_rank_00_optim_states.pt
│   ├── bf16_zero_pp_rank_0_mp_rank_01_optim_states.pt
│   └── mp_rank_00_model_states.pt
├── latest
├── model.safetensors
├── rng_state_0.pth
├── rng_state_1.pth
├── trainer_state.json
├── training_args.bin
└── zero_to_fp32.py
node2
checkpoint-10
├── config.json
├── generation_config.json
├── global_step10
│   ├── bf16_zero_pp_rank_1_mp_rank_00_optim_states.pt
│   ├── bf16_zero_pp_rank_1_mp_rank_01_optim_states.pt
│   └── mp_rank_00_model_states.pt
├── latest
├── model.safetensors
├── rng_state_0.pth
├── rng_state_1.pth
├── trainer_state.json
├── training_args.bin
└── zero_to_fp32.py
Expected behavior
It should load the checkpoint and resume training.
ds_report output
--------------------------------------------------
DeepSpeed C++/CUDA extension op report
--------------------------------------------------
NOTE: Ops not installed will be just-in-time (JIT) compiled at
runtime if needed. Op compatibility means that your system
meet the required dependencies to JIT install the op.
--------------------------------------------------
JIT compiled ops requires ninja
ninja .................. [OKAY]
--------------------------------------------------
op name ................ installed .. compatible
--------------------------------------------------
[WARNING] async_io requires the dev libaio .so object and headers but these were not found.
[WARNING] async_io: please install the libaio-dev package with apt
[WARNING] If libaio is already installed (perhaps from source), try setting the CFLAGS and LDFLAGS environment variables to where it can be found.
async_io ............... [NO] ....... [NO]
fused_adam ............. [NO] ....... [OKAY]
cpu_adam ............... [NO] ....... [OKAY]
cpu_adagrad ............ [NO] ....... [OKAY]
cpu_lion ............... [NO] ....... [OKAY]
dc ..................... [NO] ....... [OKAY]
[WARNING] Please specify the CUTLASS repo directory as environment variable $CUTLASS_PATH
evoformer_attn ......... [NO] ....... [NO]
fp_quantizer ........... [NO] ....... [OKAY]
fused_lamb ............. [NO] ....... [OKAY]
fused_lion ............. [NO] ....... [OKAY]
/data/miniconda3/envs/llm_algorithms/compiler_compat/ld: cannot find -lcufile: No such file or directory
collect2: error: ld returned 1
gds .................... [NO] ....... [NO]
transformer_inference .. [NO] ....... [OKAY]
inference_core_ops ..... [NO] ....... [OKAY]
cutlass_ops ............ [NO] ....... [OKAY]
quantizer .............. [NO] ....... [OKAY]
ragged_device_ops ...... [NO] ....... [OKAY]
ragged_ops ............. [NO] ....... [OKAY]
random_ltd ............. [NO] ....... [OKAY]
[WARNING] sparse_attn requires a torch version >= 1.5 and < 2.0 but detected 2.6
[WARNING] using untested triton version (3.2.0), only 1.0.0 is known to be compatible
sparse_attn ............ [NO] ....... [NO]
spatial_inference ...... [NO] ....... [OKAY]
transformer ............ [NO] ....... [OKAY]
stochastic_transformer . [NO] ....... [OKAY]
--------------------------------------------------
DeepSpeed general environment info:
torch install path ............... ['/data/miniconda3/envs/llm_algorithms/lib/python3.11/site-packages/torch']
torch version .................... 2.6.0+cu124
deepspeed install path ........... ['/data/miniconda3/envs/llm_algorithms/lib/python3.11/site-packages/deepspeed']
deepspeed info ................... 0.16.9, unknown, unknown
torch cuda version ............... 12.4
torch hip version ................ None
nvcc version ..................... 12.4
deepspeed wheel compiled w. ...... torch 2.6, cuda 12.4
shared memory (/dev/shm) size .... 62.56 GB
System info (please complete the following information):
- OS: Rocky Linux 9.3 (Blue Onyx)
- GPU: two machines with 2x L40 GPUs each
- Python version: 3.11
- transformers 4.52.3
- accelerate 1.7.0
- bitsandbytes 0.45.5
Launcher context
PYTHONPATH=. deepspeed --hostfile=ds_configs/pretrain/pt_hostfile \
--num_nodes=2 --node_rank $node_rank \
--no_ssh --no_ssh_check \
--master_addr=192.168.72.117 --master_port=6601 \
scripts/pretrain/pretrain.py \
--resume_from_checkpoint=ckpts/pretrain/Qwen_Qwen3-0.6B/v1/checkpoint-10 \
--model_name_or_path=$model_name_or_path \
--deepspeed "$ds_config" \
--max_seq_length=$max_seq_length \
--bf16 \
--output_dir=$output_dir \
--overwrite_output_dir \
--data_dir=$data_dir \
--max_steps=$max_steps \
--learning_rate=$learning_rate \
--adam_beta1=0.9 \
--adam_beta2=0.95 \
--weight_decay=0.1 \
--warmup_steps=$warmup_steps \
--lr_scheduler_type="linear" \
--per_device_train_batch_size=$per_device_train_batch_size \
--gradient_accumulation_steps=$gradient_accumulation_steps \
--per_device_eval_batch_size=$per_device_eval_batch_size \
--eval_strategy="steps" \
--eval_steps=$eval_steps \
--metric_for_best_model="eval_loss" \
--save_strategy="best" \
--save_total_limit 1 \
--save_on_each_node \
--load_best_model_at_end \
--logging_dir="./PT_LOGS" \
--logging_steps=10
Additional context
resume_from_checkpoint works fine when I use ZeRO-3; it seems that the AutoTP strategy does not work well with the Trainer.
Hi @Peter-Chou, I gave it a try and it works correctly. Here's my list of checkpoint files; it looks like yours is missing some content compared to mine. It seems related to your multi-node setup, where the disks are not shared between nodes. Could you merge the files first and try it out? (One possible way to merge is sketched after the listing.)
├── checkpoint-10
│ ├── added_tokens.json
│ ├── chat_template.jinja
│ ├── config.json
│ ├── generation_config.json
│ ├── global_step10
│ │ ├── bf16_zero_pp_rank_0_mp_rank_00_optim_states.pt
│ │ ├── bf16_zero_pp_rank_0_mp_rank_01_optim_states.pt
│ │ ├── bf16_zero_pp_rank_1_mp_rank_00_optim_states.pt
│ │ ├── bf16_zero_pp_rank_1_mp_rank_01_optim_states.pt
│ │ ├── mp_rank_00_model_states.pt
│ │ └── mp_rank_01_model_states.pt
│ ├── latest
│ ├── merges.txt
│ ├── model.safetensors
│ ├── rng_state_0.pth
│ ├── rng_state_1.pth
│ ├── rng_state_2.pth
│ ├── rng_state_3.pth
│ ├── special_tokens_map.json
│ ├── tokenizer_config.json
│ ├── trainer_state.json
│ ├── training_args.bin
│ ├── vocab.json
│ └── zero_to_fp32.py
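If the nodes really do write to separate local disks, one way to try the merge is to copy the missing shards from the other node into each node's global_step10 before resuming (or point output_dir at shared storage). A minimal sketch, assuming the other node's checkpoint has already been copied or mounted at a hypothetical path:

# Hedged merge sketch: make the full set of global_step10 shards visible on
# this node. The "other node" path is hypothetical; rsync/scp the directory
# over or mount shared storage first.
import shutil
from pathlib import Path

local_step_dir = Path("ckpts/pretrain/Qwen_Qwen3-0.6B/v1/checkpoint-10/global_step10")
other_step_dir = Path("/mnt/node2_ckpt/checkpoint-10/global_step10")  # hypothetical location

for src in other_step_dir.glob("*.pt"):
    dst = local_step_dir / src.name
    if not dst.exists():
        print(f"copying missing shard: {src.name}")
        shutil.copy2(src, dst)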
Hi @inkcherry, thanks for your quick reply. Did you save the checkpoint on a shared filesystem?
I guess this error only occurs when use_node_local_storage is enabled and the model is split across GPUs, as with AutoTP.
The bug disappears when I save the checkpoint to a shared filesystem.