
zero3 training hangs with mixed multimodal dataset

Open zhangyuygss opened this issue 8 months ago • 2 comments

Describe the bug
ZeRO-3 training of Qwen2-VL hangs with a mixed multimodal dataset. When different GPUs receive mini-batches of different modalities, the multimodal-related variables have different shapes across GPUs. For example, the video-related tensor video_grid_thw has values on GPU0 but is None on GPU1. Training hangs when this variable is processed.

The hang DOES NOT occur with ZeRO-2. Is it caused by collective communication between GPUs in ZeRO-3? What is the right way to train on mixed-modality data with ZeRO-3?
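
For anyone debugging this, here is a toy torch.distributed script, a sketch rather than actual DeepSpeed code, that reproduces the suspected failure mode. Under ZeRO-3, every parameter is partitioned and all-gathered on demand when its module's forward runs; if only some ranks execute the vision branch, the other ranks never issue the matching collective and the stuck ranks block:

import torch
import torch.distributed as dist

def main():
    # gloo + CPU keeps the repro simple; the same mismatch deadlocks NCCL too
    dist.init_process_group("gloo")
    rank = dist.get_rank()
    x = torch.ones(1)
    if rank == 0:
        # Rank 0's micro-batch contains an image, so it runs the vision
        # branch; under ZeRO-3 that forward issues a parameter all-gather.
        dist.all_reduce(x)  # blocks forever: rank 1 never makes this call
        print("rank 0 done")  # never reached
    else:
        # Rank 1's micro-batch is text-only, so it skips the branch entirely.
        print("rank 1 done")
    dist.destroy_process_group()

if __name__ == "__main__":
    main()  # launch with: torchrun --nproc_per_node=2 repro.py

This would also explain why ZeRO-2 is unaffected: there, parameters are not partitioned, so the forward pass issues no per-module collectives.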

dataset: mixture of pure-text and image-text
model: qwen2-vl
training on: 8xA100
stage3 config:

{
  "fp16": {
    "enabled": "auto",
    "loss_scale": 0,
    "loss_scale_window": 1000,
    "initial_scale_power": 16,
    "hysteresis": 2,
    "min_loss_scale": 1
  },
  "bf16": {
    "enabled": "auto"
  },
  "optimizer": {
    "type": "AdamW",
    "params": {
      "lr": "auto",
      "betas": "auto",
      "eps": "auto",
      "weight_decay": "auto"
    }
  },
  "zero_optimization": {
    "stage": 3,
    "overlap_comm": true,
    "contiguous_gradients": true,
    "sub_group_size": 1e9,
    "reduce_bucket_size": "auto",
    "stage3_prefetch_bucket_size": "auto",
    "stage3_param_persistence_threshold": "auto",
    "stage3_max_live_parameters": 1e9,
    "stage3_max_reuse_distance": 1e9,
    "stage3_gather_16bit_weights_on_model_save": true
  },
  "gradient_accumulation_steps": "auto",
  "gradient_clipping": "auto",
  "steps_per_print": 100,
  "train_batch_size": "auto",
  "train_micro_batch_size_per_gpu": "auto",
  "wall_clock_breakdown": false
}
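
For context, the "auto" entries above are not resolved by DeepSpeed itself. Assuming training goes through the HuggingFace Trainer integration (typical for qwen2-vl fine-tuning), they are filled in from TrainingArguments at launch; a minimal sketch, with a hypothetical config filename:

from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="out",
    deepspeed="ds_zero3.json",      # hands the config above to the HF integration
    bf16=True,                      # resolves "bf16.enabled": "auto"
    learning_rate=1e-5,             # resolves "optimizer.params.lr": "auto"
    per_device_train_batch_size=1,  # resolves "train_micro_batch_size_per_gpu"
    gradient_accumulation_steps=8,  # resolves "gradient_accumulation_steps"
)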

zhangyuygss avatar Mar 24 '25 08:03 zhangyuygss

Has this problem been solved? I ran into the same problem with ZeRO-3 on mixed-modality training. Training always hangs at the make_experience stage, and the progress stays at 0. Switching to pure image-text data or pure text data resolves the issue.

Nebularaid2000 avatar May 15 '25 16:05 Nebularaid2000

In LLaMA-Factory, they use fake multimodal inputs to avoid the hang: https://github.com/hiyouga/LLaMA-Factory/blob/main/src/llamafactory/data/collator.py#L131
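
For reference, a minimal sketch of that idea, not LLaMA-Factory's actual code: the dummy prompt, image size, and helper name below are illustrative. The invariant is that every micro-batch contains at least one image whose labels are masked out, so the vision tower, and therefore its ZeRO-3 parameter all-gathers, runs on every rank at every step:

import torch
from PIL import Image

IGNORE_INDEX = -100  # masked label value used by HF loss functions

def ensure_vision_inputs(features, processor):
    """If no sample in the batch carries an image, append a dummy image-only
    sample with fully masked labels: it contributes no loss, but it forces
    the vision branch (and its parameter all-gathers) to run on this rank."""
    if any(f.get("pixel_values") is not None for f in features):
        return features
    dummy_image = Image.new("RGB", (56, 56))  # small black image
    # Assumed Qwen2-VL placeholder text; the processor expands <|image_pad|>
    # into the right number of image tokens for the given image.
    enc = processor(text="<|vision_start|><|image_pad|><|vision_end|>",
                    images=[dummy_image], return_tensors="pt")
    features.append({
        "input_ids": enc["input_ids"][0],
        "attention_mask": enc["attention_mask"][0],
        "labels": torch.full_like(enc["input_ids"][0], IGNORE_INDEX),
        "pixel_values": enc["pixel_values"],
        "image_grid_thw": enc["image_grid_thw"],
    })
    return features

LLaMA-Factory's collator achieves the same effect by injecting fake image tokens and pixel values when a batch has none; either way, every rank's forward pass touches the vision parameters on every step.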

zhangyuygss avatar May 19 '25 03:05 zhangyuygss