
[BUG] Universal checkpoint conversion failed

Open hongshanli23 opened this issue 6 months ago • 10 comments

Describe the bug While converting a sharded ZeRO-3 checkpoint of a LLaVA-style multimodal model to the universal format, I got the following error:

""" Traceback (most recent call last): File "/scratch/hongshal/code/DeepSpeed/deepspeed/checkpoint/ds_to_universal.py", line 551, in main(args) File "/scratch/hongshal/code/DeepSpeed/deepspeed/checkpoint/ds_to_universal.py", line 525, in main _extract_zero_shard_files_stage3(args, optim_files, param_shapes, dp_degree, temp_dir) File "/scratch/hongshal/code/DeepSpeed/deepspeed/checkpoint/ds_to_universal.py", line 377, in _extract_zero_shard_files_stage3 _do_parallel_work(do_work, list(range(dp_degree)), args.num_extract_workers) File "/scratch/hongshal/code/DeepSpeed/deepspeed/checkpoint/ds_to_universal.py", line 356, in _do_parallel_work results.append(f.result()) File "/usr/lib/python3.10/concurrent/futures/_base.py", line 458, in result return self.__get_result() File "/usr/lib/python3.10/concurrent/futures/_base.py", line 403, in __get_result raise self._exception RuntimeError: start (241829312) + length (176) exceeds dimension size (241829312). """

To Reproduce This is hard to reproduce externally, as the checkpoint is not public.

Expected behavior The sharded ZeRO-3 checkpoint converts to a universal checkpoint without errors.

ds_report output

[2024-08-02 18:31:05,140] [INFO] [real_accelerator.py:203:get_accelerator] Setting ds_accelerator to cuda (auto detect)
[2024-08-02 18:31:06,422] [INFO] [real_accelerator.py:203:get_accelerator] Setting ds_accelerator to cuda (auto detect)
 [WARNING]  async_io requires the dev libaio .so object and headers but these were not found.
 [WARNING]  async_io: please install the libaio-dev package with apt
 [WARNING]  If libaio is already installed (perhaps from source), try setting the CFLAGS and LDFLAGS environment variables to where it can be found.
 [WARNING]  Please specify the CUTLASS repo directory as environment variable $CUTLASS_PATH
 [WARNING]  FP Quantizer is using an untested triton version (2.0.0), only 2.3.0 and 2.3.1 are known to be compatible with these kernels
 [WARNING]  sparse_attn requires a torch version >= 1.5 and < 2.0 but detected 2.1
 [WARNING]  using untested triton version (2.0.0), only 1.0.0 is known to be compatible
--------------------------------------------------
DeepSpeed C++/CUDA extension op report
--------------------------------------------------
NOTE: Ops not installed will be just-in-time (JIT) compiled at
      runtime if needed. Op compatibility means that your system
      meet the required dependencies to JIT install the op.
--------------------------------------------------
JIT compiled ops requires ninja
ninja .................. [OKAY]
--------------------------------------------------
op name ................ installed .. compatible
--------------------------------------------------
 [WARNING]  async_io requires the dev libaio .so object and headers but these were not found.
 [WARNING]  async_io: please install the libaio-dev package with apt
 [WARNING]  If libaio is already installed (perhaps from source), try setting the CFLAGS and LDFLAGS environment variables to where it can be found.
async_io ............... [NO] ....... [NO]
fused_adam ............. [NO] ....... [OKAY]
cpu_adam ............... [NO] ....... [OKAY]
cpu_adagrad ............ [NO] ....... [OKAY]
cpu_lion ............... [NO] ....... [OKAY]
 [WARNING]  Please specify the CUTLASS repo directory as environment variable $CUTLASS_PATH
evoformer_attn ......... [NO] ....... [NO]
 [WARNING]  FP Quantizer is using an untested triton version (2.0.0), only 2.3.0 and 2.3.1 are known to be compatible with these kernels
fp_quantizer ........... [NO] ....... [NO]
fused_lamb ............. [NO] ....... [OKAY]
fused_lion ............. [NO] ....... [OKAY]
inference_core_ops ..... [NO] ....... [OKAY]
cutlass_ops ............ [NO] ....... [OKAY]
transformer_inference .. [NO] ....... [OKAY]
quantizer .............. [NO] ....... [OKAY]
ragged_device_ops ...... [NO] ....... [OKAY]
ragged_ops ............. [NO] ....... [OKAY]
random_ltd ............. [NO] ....... [OKAY]
 [WARNING]  sparse_attn requires a torch version >= 1.5 and < 2.0 but detected 2.1
 [WARNING]  using untested triton version (2.0.0), only 1.0.0 is known to be compatible
sparse_attn ............ [NO] ....... [NO]
spatial_inference ...... [NO] ....... [OKAY]
transformer ............ [NO] ....... [OKAY]
stochastic_transformer . [NO] ....... [OKAY]
--------------------------------------------------
DeepSpeed general environment info:
torch install path ............... ['/usr/local/lib/python3.10/dist-packages/torch']
torch version .................... 2.1.0a0+32f93b1
deepspeed install path ........... ['/scratch/hongshal/code/DeepSpeed/deepspeed']
deepspeed info ................... 0.14.5+unknown, unknown, unknown
torch cuda version ............... 12.2
torch hip version ................ None
nvcc version ..................... 12.2
deepspeed wheel compiled w. ...... torch 2.1, cuda 12.2
shared memory (/dev/shm) size .... 1.91 TB


System info (please complete the following information):

  • OS: Ubuntu 22.04
  • GPU count and types: 8 H100 on one node
  • Python version: 3.10.12

Launcher context

python ds_to_universal.py --input_folder /path/to/checkpoint/checkpoint-97650/global_step97650/ --output_folder /path/to/checkpoint/checkpoint-97650-universal/
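For anyone debugging a similar overrun, a quick sanity check before running the conversion is to compare the summed parameter numel against the total capacity of the flat optimizer partitions. The helper below is only a sketch; the function name, shapes, and sizes are illustrative, not read from the actual checkpoint:

```python
from math import prod

def partitions_cover_params(param_shapes, partition_numel, dp_degree):
    """Return (total_param_numel, capacity) so a mismatch like the one
    in the traceback (offset + length past the end of the flat buffer)
    becomes visible early. ZeRO-3 pads the flat buffer, so capacity
    should be >= the total."""
    total = sum(prod(shape) for shape in param_shapes.values())
    capacity = partition_numel * dp_degree
    return total, capacity

# Illustrative values only.
shapes = {"vision_tower.weight": (1024, 4096), "proj.bias": (4096,)}
total, capacity = partitions_cover_params(
    shapes, partition_numel=4_200_000, dp_degree=1
)
print(total <= capacity)  # False here would explain the overrun
```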



hongshanli23 avatar Aug 02 '24 18:08 hongshanli23