
[Help]: MultiGPU TTA training

Open · fpicetti opened this issue on Mar 15, 2024 · 3 comments

Problem Overview

I'd like to train a TTA model (following your examples) in a multi-GPU environment (4× A100), but I have been unsuccessful so far.

Steps Taken

  1. Prepared the AudioCaps dataset.
  2. Fixed typos in the base config files for both the autoencoderkl and audioldm folders.
  3. Updated the json and sh files according to my dataset.
  4. Launched the training script with sh egs/tta/autoencoderkl/run_train.sh, no further modifications -> it works on the first GPU, as expected.
  5. Modified run_train.sh#L19 to `export CUDA_VISIBLE_DEVICES="0,1,2,3"` -> it still runs on the first GPU only.
  6. Keeping the change from step 5, also changed exp_config.json#L38 to "ddp": true -> it fails, asking for all the distribution parameters (RANK, WORLD_SIZE, MASTER_ADDR, MASTER_PORT); see the launch sketch after this list.
  7. Reverted the changes from steps 5 and 6 and tried to leverage accelerate: ran accelerate config to set up single-node multi-GPU training. accelerate test works fine on all 4 GPUs.
  8. Removed run_train.sh#L19 and modified run_train.sh#L22 to `accelerate launch "${work_dir}"/bins/tta/train_tta.py` -> I see 4 processes spawned on the first GPU, then it goes OOM (again, see the sketch below).
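
For reference, the distribution parameters that step 6 asks for (RANK, WORLD_SIZE, MASTER_ADDR, MASTER_PORT) are normally injected by a launcher rather than exported by hand, and `accelerate launch` can be told explicitly how many processes to start and which GPUs to use instead of relying on the interactive `accelerate config`. Below is a minimal sketch of both launch styles; whether the TTA trainer actually consumes these settings is an open question, and `"$@"` stands in for whatever arguments run_train.sh already passes to train_tta.py.

```bash
# Option A: let torchrun start one process per GPU and inject
# RANK / WORLD_SIZE / MASTER_ADDR / MASTER_PORT automatically
# (assumes the trainer reads the standard torch.distributed variables)
export CUDA_VISIBLE_DEVICES="0,1,2,3"
torchrun --standalone --nproc_per_node=4 \
    "${work_dir}"/bins/tta/train_tta.py "$@"

# Option B: skip the interactive `accelerate config` and pass the
# multi-GPU settings on the command line, one process pinned per GPU
accelerate launch --multi_gpu --num_processes 4 --gpu_ids "0,1,2,3" \
    "${work_dir}"/bins/tta/train_tta.py "$@"
```

If all four processes still pile onto GPU 0 with Option B, the trainer itself is probably allocating on cuda:0 rather than on the device accelerate assigns to each rank.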

Expected Outcome

A single training job running across all 4 GPUs.

Environment Information

  • Operating System: Ubuntu 22.04 LTS
  • Python Version: Python 3.9.15 (conda env created following your instructions)
  • Driver & CUDA Version: CUDA 12.2, Driver 535.86.10
  • Error Messages and Logs: See Steps Taken above

fpicetti · Mar 15, 2024

@HeCheng0625 any update on this?

fpicetti · Mar 27, 2024

Hi, TTA currently only supports single-GPU training. You can refer to the other tasks to implement multi-GPU training based on accelerate. PRs are welcome.
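
For anyone picking this up, a quick way to separate launcher problems from trainer problems is to launch a throwaway probe the same way a 4-GPU training job would be launched: if it reports four different devices, the accelerate environment is fine and the remaining work is in the TTA trainer itself. A minimal sketch (the probe file and launch flags are illustrative, not Amphion's own tooling):

```bash
# write a tiny accelerate probe (throwaway file, path is arbitrary)
cat > /tmp/accel_probe.py <<'EOF'
from accelerate import Accelerator

acc = Accelerator()
# each rank should report a different CUDA device, e.g. cuda:0 .. cuda:3
print(f"rank {acc.process_index}/{acc.num_processes} -> {acc.device}")
EOF

# launch it exactly like a 4-GPU training run would be launched
accelerate launch --multi_gpu --num_processes 4 --gpu_ids "0,1,2,3" /tmp/accel_probe.py
```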

HeCheng0625 · Apr 2, 2024

Any plans to support multi-GPU training for the TTA task yet?

hieuhthh · Jun 8, 2024