
torch.distributed.elastic.multiprocessing.errors.ChildFailedError: train.py FAILED

Xhlkx opened this issue on Nov 29, 2022 · 4 comments

Hello, could you tell me how to solve this problem?

Xhlkx · Nov 29, 2022

torch.distributed.elastic.multiprocessing.errors.ChildFailedError:

train.py FAILED

Failures:
[1]:
  time      : 2022-11-29_17:14:20
  host      : GPU236
  rank      : 1 (local_rank: 1)
  exitcode  : 1 (pid: 49149)
  error_file: <N/A>
  traceback : To enable traceback see: https://pytorch.org/docs/stable/elastic/errors.html
[2]:
  time      : 2022-11-29_17:14:20
  host      : GPU236
  rank      : 2 (local_rank: 2)
  exitcode  : 1 (pid: 49150)
  error_file: <N/A>
  traceback : To enable traceback see: https://pytorch.org/docs/stable/elastic/errors.html
[3]:
  time      : 2022-11-29_17:14:20
  host      : GPU236
  rank      : 3 (local_rank: 3)
  exitcode  : 1 (pid: 49151)
  error_file: <N/A>
  traceback : To enable traceback see: https://pytorch.org/docs/stable/elastic/errors.html

Root Cause (first observed failure):
[0]:
  time      : 2022-11-29_17:14:20
  host      : GPU236
  rank      : 0 (local_rank: 0)
  exitcode  : 1 (pid: 49148)
  error_file: <N/A>
  traceback : To enable traceback see: https://pytorch.org/docs/stable/elastic/errors.html

Xhlkx · Nov 29, 2022
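The error_file: <N/A> and missing traceback entries mean the workers never recorded their exceptions, so the ChildFailedError only reports that train.py failed, not why. A minimal sketch of the approach described on the linked elastic errors page, assuming train.py has a single main() entrypoint (the function name here is a placeholder):

from torch.distributed.elastic.multiprocessing.errors import record

@record
def main():
    # training code goes here; any exception raised inside is written to
    # an error file that the launcher includes in the failure summary
    ...

if __name__ == "__main__":
    main()

With the decorator in place, rerunning the job should replace the <N/A> entries with the actual per-rank traceback, which points at the real root cause.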

Same issue:

PyTorch version: 1.10, CUDA version: 10.1

/usr/local/lib/python3.6/dist-packages/torch/distributed/launch.py:186: FutureWarning: The module torch.distributed.launch is deprecated
and will be removed in future. Use torchrun.
Note that --use_env is set by default in torchrun.
If your script expects `--local_rank` argument to be set, please
change it to read from `os.environ['LOCAL_RANK']` instead. See 
https://pytorch.org/docs/stable/distributed.html#launch-utility for 
further instructions

  FutureWarning,
ERROR:torch.distributed.elastic.multiprocessing.api:failed (exitcode: -4) local_rank: 0 (pid: 20670) of binary: /usr/bin/python3
Traceback (most recent call last):
  File "/usr/lib/python3.6/runpy.py", line 193, in _run_module_as_main
    "__main__", mod_spec)
  File "/usr/lib/python3.6/runpy.py", line 85, in _run_code
    exec(code, run_globals)
  File "/usr/local/lib/python3.6/dist-packages/torch/distributed/launch.py", line 193, in <module>
    main()
  File "/usr/local/lib/python3.6/dist-packages/torch/distributed/launch.py", line 189, in main
    launch(args)
  File "/usr/local/lib/python3.6/dist-packages/torch/distributed/launch.py", line 174, in launch
    run(args)
  File "/usr/local/lib/python3.6/dist-packages/torch/distributed/run.py", line 713, in run
    )(*cmd_args)
  File "/usr/local/lib/python3.6/dist-packages/torch/distributed/launcher/api.py", line 131, in __call__
    return launch_agent(self._config, self._entrypoint, list(args))
  File "/usr/local/lib/python3.6/dist-packages/torch/distributed/launcher/api.py", line 261, in launch_agent
    failures=result.failures,
torch.distributed.elastic.multiprocessing.errors.ChildFailedError: 
=====================================================
train.py FAILED
-----------------------------------------------------
Failures:
  <NO_OTHER_FAILURES>
-----------------------------------------------------
Root Cause (first observed failure):
[0]:
  time      : 2022-11-29_15:38:40
  host      : mahdi-ThinkCentre-M72e
  rank      : 0 (local_rank: 0)
  exitcode  : -4 (pid: 20670)
  error_file: <N/A>
  traceback : Signal 4 (SIGILL) received by PID 20670

pourfard · Nov 29, 2022
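As the FutureWarning in the log above says, torch.distributed.launch is deprecated in favor of torchrun, which applies the --use_env behaviour by default. A minimal sketch of the suggested migration, assuming train.py previously parsed a --local_rank command-line argument:

import os

# torchrun exports LOCAL_RANK (along with RANK and WORLD_SIZE) as
# environment variables instead of passing --local_rank on the command line.
local_rank = int(os.environ["LOCAL_RANK"])

The job is then launched with, for example, torchrun --nproc_per_node=4 train.py rather than python -m torch.distributed.launch --nproc_per_node=4 train.py.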

I tested it on another system with PyTorch 1.9.0 and CUDA 11.1, and it worked.

pourfard · Nov 29, 2022
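For context, exit code -4 in the log above corresponds to Signal 4 (SIGILL, illegal instruction), which typically means the PyTorch binary executed a CPU instruction the host CPU does not support; that would explain why the same code runs fine with a different PyTorch build on another machine. A quick, Linux-only sketch for checking which SIMD flags the CPU advertises (reading /proc/cpuinfo is a Linux convention, not anything specific to this repo):

# Print the SSE/AVX feature flags the CPU reports; a wheel compiled for
# instructions missing from this list can crash with SIGILL on import or
# on the first vectorized kernel.
flags = set()
with open("/proc/cpuinfo") as f:
    for line in f:
        if line.startswith("flags"):
            flags.update(line.split(":", 1)[1].split())
            break
print(sorted(flag for flag in flags if flag.startswith(("sse", "avx"))))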

I had the same problem in a Linux Docker environment (CUDA 11.6, 4 GPUs), and I solved it by mounting the host's /dev/shm volume into my Docker container.

This is my docker run command:

docker run -it --gpus all -v /dev/shm:/dev/shm <image_name>

hankug1234 · Mar 28, 2023
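Mounting the host's /dev/shm helps because PyTorch DataLoader workers exchange tensors through shared memory, and a container's default /dev/shm is only 64 MB. If sharing the host's /dev/shm is not desirable, an alternative is to enlarge the container's own shared memory (the 8g below is an arbitrary example):

docker run -it --gpus all --shm-size=8g <image_name>

Docker's --ipc=host flag is another commonly used option; it shares the host IPC namespace, much like the bind mount above.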