
video ./dataset/HDTF/video_audio_clip_root/clip012_0e890b168d52bb3d79e7707fbd16e13e.mp4 has invalid mel spectrogram shape: (15, 80), expected: 52

Open LEONPICKBOY opened this issue 5 months ago • 2 comments

08/04/2025 18:51:25 - INFO - __main__ - Total optimization steps = 250000
Steps: 0%| | 0/250000 [00:00<?, ?it/s]
log type of models unet torch.float32 vae torch.float32 wav2vec torch.float32
video ./dataset/HDTF/video_audio_clip_root/clip012_0e890b168d52bb3d79e7707fbd16e13e.mp4 has invalid mel spectrogram shape: (15, 80), expected: 52
Traceback (most recent call last):
  File "/data/cheng.wang/MuseTalk-main/train.py", line 580, in <module>
    main(config)
  File "/data/cheng.wang/MuseTalk-main/train.py", line 424, in main
    accelerator.backward(loss)
  File "/data/conda/envs/MuseTalk/lib/python3.10/site-packages/accelerate/accelerator.py", line 1995, in backward
    self.deepspeed_engine_wrapped.backward(loss, **kwargs)
  File "/data/conda/envs/MuseTalk/lib/python3.10/site-packages/accelerate/utils/deepspeed.py", line 166, in backward
    self.engine.backward(loss, **kwargs)
  File "/data/conda/envs/MuseTalk/lib/python3.10/site-packages/deepspeed/utils/nvtx.py", line 20, in wrapped_fn
    ret_val = func(*args, **kwargs)
  File "/data/conda/envs/MuseTalk/lib/python3.10/site-packages/deepspeed/runtime/engine.py", line 2268, in backward
    self._backward_epilogue()
  File "/data/conda/envs/MuseTalk/lib/python3.10/site-packages/deepspeed/runtime/engine.py", line 2204, in _backward_epilogue
    self.allreduce_gradients()
  File "/data/conda/envs/MuseTalk/lib/python3.10/site-packages/deepspeed/utils/nvtx.py", line 20, in wrapped_fn
    ret_val = func(*args, **kwargs)
  File "/data/conda/envs/MuseTalk/lib/python3.10/site-packages/deepspeed/runtime/engine.py", line 2160, in allreduce_gradients
    self.optimizer.overlapping_partition_gradients_reduce_epilogue()
  File "/data/conda/envs/MuseTalk/lib/python3.10/site-packages/deepspeed/runtime/zero/stage_1_and_2.py", line 935, in overlapping_partition_gradients_reduce_epilogue
    self.independent_gradient_partition_epilogue()
  File "/data/conda/envs/MuseTalk/lib/python3.10/site-packages/deepspeed/runtime/zero/stage_1_and_2.py", line 826, in independent_gradient_partition_epilogue
    self.reduce_ipg_grads()
  File "/data/conda/envs/MuseTalk/lib/python3.10/site-packages/deepspeed/runtime/zero/stage_1_and_2.py", line 1453, in reduce_ipg_grads
    self.average_tensor(bucket.buffer[bucket.index].narrow(0, 0, bucket.elements), comm_dtype)
IndexError: list index out of range
Steps: 0%| | 0/250000 [00:07<?, ?it/s]
ERROR:torch.distributed.elastic.multiprocessing.api:failed (exitcode: 1) local_rank: 0 (pid: 4058169) of binary: /data/conda/envs/MuseTalk/bin/python3.1
Traceback (most recent call last):
  File "/data/conda/envs/MuseTalk/bin/accelerate", line 8, in <module>
    sys.exit(main())
  File "/data/conda/envs/MuseTalk/lib/python3.10/site-packages/accelerate/commands/accelerate_cli.py", line 46, in main
    args.func(args)
  File "/data/conda/envs/MuseTalk/lib/python3.10/site-packages/accelerate/commands/launch.py", line 1042, in launch_command
    deepspeed_launcher(args)
  File "/data/conda/envs/MuseTalk/lib/python3.10/site-packages/accelerate/commands/launch.py", line 754, in deepspeed_launcher
    distrib_run.run(args)
  File "/data/conda/envs/MuseTalk/lib/python3.10/site-packages/torch/distributed/run.py", line 785, in run
    elastic_launch(
  File "/data/conda/envs/MuseTalk/lib/python3.10/site-packages/torch/distributed/launcher/api.py", line 134, in __call__
    return launch_agent(self._config, self._entrypoint, list(args))
  File "/data/conda/envs/MuseTalk/lib/python3.10/site-packages/torch/distributed/launcher/api.py", line 250, in launch_agent
    raise ChildFailedError(
torch.distributed.elastic.multiprocessing.errors.ChildFailedError:

train.py FAILED

Failures:
  <NO_OTHER_FAILURES>

Root Cause (first observed failure):
[0]:
  time      : 2025-08-04_18:51:34
  host      : mdsk-ops.lan
  rank      : 0 (local_rank: 0)
  exitcode  : 1 (pid: 4058169)
  error_file: <N/A>
  traceback : To enable traceback see: https://pytorch.org/docs/stable/elastic/errors.html
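
For anyone debugging this: the warning says the mel spectrogram extracted for that clip has only 15 frames where the dataloader expects 52, so the sample is presumably skipped and the rank may end up with no gradients to reduce, which would be consistent with the IndexError inside DeepSpeed's reduce_ipg_grads. Below is a minimal sketch for checking a suspect clip offline; the sample rate, FFT size, hop length, and mel bin count are illustrative assumptions, not necessarily MuseTalk's actual preprocessing settings, and decoding .mp4 audio requires ffmpeg to be available.

# Hypothetical diagnostic script (check_clip_mel.py): print how many mel
# frames a clip's audio yields so it can be compared with the expected 52.
# sr / n_fft / hop_length / n_mels are assumed values for illustration,
# not necessarily what MuseTalk's dataloader uses.
import sys
import librosa

def mel_frames(path, sr=16000, n_fft=400, hop_length=160, n_mels=80):
    # Decode the clip's audio track (needs ffmpeg for .mp4 input).
    y, _ = librosa.load(path, sr=sr)
    mel = librosa.feature.melspectrogram(
        y=y, sr=sr, n_fft=n_fft, hop_length=hop_length, n_mels=n_mels
    )
    # librosa returns (n_mels, frames); the training log appears to print (frames, n_mels).
    return mel.shape[1]

if __name__ == "__main__":
    clip = sys.argv[1]
    print(f"{clip}: {mel_frames(clip)} mel frames")

Running python check_clip_mel.py ./dataset/HDTF/video_audio_clip_root/clip012_0e890b168d52bb3d79e7707fbd16e13e.mp4 and comparing the count against 52 should show whether the clip's audio is simply too short for one training window.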

LEONPICKBOY · Aug 04 '25 10:08

Hello, I have run into this problem as well. Could I ask how you solved it?

dbofseuofhust · Aug 18 '25 07:08

OS: Ubuntu 22.04.2 LTS
python: 3.11
torch: 2.7.1+cu126
cuda.is_available: True
cuda: 12.6
cudnn: 90501
torchvision: 0.15.2
accelerate: 0.28.0
diffusers: 0.34.0
tensorflow: 2.12.0

gpu.yaml

compute_environment: LOCAL_MACHINE
debug: true
distributed_type: MULTI_GPU
downcast_bf16: 'no'
gpu_ids: "0,1,2,3" # modify this according to your GPU number
machine_rank: 0
main_training_function: main
mixed_precision: 'no'
num_machines: 1
num_processes: 4 # it should be the same as the number of GPUs
rdzv_backend: static
same_network: true
tpu_env: []
tpu_use_cluster: false
tpu_use_sudo: false
use_cpu: false
main_process_ip: '127.0.0.1'
main_process_port: 29500
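
A quick sanity check on this config, as a sketch only: it just restates the comment in the file that num_processes should match the number of GPUs listed in gpu_ids. It assumes the file is saved as gpu.yaml in the current directory and that PyYAML is installed.

# Sketch: confirm num_processes in gpu.yaml matches the number of GPU ids,
# per the comment in the config above.
import yaml

with open("gpu.yaml") as f:
    cfg = yaml.safe_load(f)

gpu_ids = [g for g in str(cfg["gpu_ids"]).split(",") if g.strip()]
num_processes = int(cfg["num_processes"])

if num_processes != len(gpu_ids):
    print(f"mismatch: num_processes={num_processes}, but {len(gpu_ids)} gpu_ids listed")
else:
    print(f"ok: {num_processes} processes for GPUs {gpu_ids}")

With the file in place, training would typically be launched via accelerate launch --config_file gpu.yaml train.py ... (the exact train.py arguments depend on the repo's training scripts).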

... ...
video ./dataset/video_audio_clip_root/clip000_output.1478_1755171170.mp4 has invalid mel spectrogram shape: (47, 80), expected: 52
Steps: 3%|█▊ | 11482/50000 [10:22:14<31:55:11, 2.98s/it, lr=8.96e-7, step_loss=3.25, td=0.06s, tm=2.97s]
video ./dataset/video_audio_clip_root/clip000_output.1473_1755170818.mp4 has invalid mel spectrogram shape: (47, 80), expected: 52
Steps: 3%|█▊ | 11590/50000 [10:28:07<33:02:06, 3.10s/it, lr=9.05e-7, step_loss=3.12, td=0.06s, tm=2.33s]
video ./dataset/video_audio_clip_root/clip000_output.1473_1755170818.mp4 has invalid mel spectrogram shape: (38, 80), expected: 52
Steps: 3%|█▊ | 11681/50000 [10:32:57<31:51:07, 2.99s/it, lr=9.13e-7, step_loss=3.05, td=0.04s, tm=2.53s]
video ./dataset/video_audio_clip_root/clip000_output.1471_1755170715.mp4 has invalid mel spectrogram shape: (49, 80), expected: 52
Steps: 4%|██▊ | 11874/50000 [10:43:15<34:29:13, 3.26s/it, lr=9.28e-7, step_loss=3.33, td=0.05s, tm=3.42s]
video ./dataset/video_audio_clip_root/clip000_output.1473_1755170818.mp4 has invalid mel spectrogram shape: (44, 80), expected: 52
Steps: 4%|██▊ | 12000/50000 [10:50:03<32:56:22, 3.12s/it, lr=9.38e-7, step_loss=3.24, td=0.04s, tm=2.74s]
video ./dataset/video_audio_clip_root/clip000_output.1471_1755170715.mp4 has invalid mel spectrogram shape: (49, 80), expected: 52
... ...
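
Extending the single-clip check sketched earlier in this thread to the whole clip directory might help find every offending file before launching. Again a sketch only: the audio parameters are the same assumed values as before, not necessarily MuseTalk's, and the directory path is the one printed in the log above.

# Sketch: list every clip under the clip root whose mel frame count falls
# below the 52 frames the dataloader expects, so those files can be set
# aside before training.
from pathlib import Path
import librosa

CLIP_ROOT = Path("./dataset/video_audio_clip_root")  # path taken from the log
EXPECTED_FRAMES = 52

for clip in sorted(CLIP_ROOT.glob("*.mp4")):
    try:
        y, sr = librosa.load(str(clip), sr=16000)
        mel = librosa.feature.melspectrogram(
            y=y, sr=sr, n_fft=400, hop_length=160, n_mels=80
        )
    except Exception as exc:  # clips whose audio will not decode are also suspect
        print(f"{clip}: failed to decode ({exc})")
        continue
    if mel.shape[1] < EXPECTED_FRAMES:
        print(f"{clip}: only {mel.shape[1]} mel frames (< {EXPECTED_FRAMES})")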

Hello, I have also encountered this problem. How did you solve it?

humphery0sh · Aug 22 '25 23:08