
Error when training a model

Open kin1n opened this issue 8 months ago • 1 comment

Reminder

  • [x] I have read the above rules and searched the existing issues.

System Info

Following the video guide, I tried to train a model, but an error was reported a short while after I clicked "Start". What could be causing this?

E0512 17:04:53.622000 124936 site-packages/torch/distributed/elastic/multiprocessing/api.py:869] failed (exitcode: 1) local_rank: 6 (pid: 125052) of binary: /usr/local/python3/bin/python3.10
Traceback (most recent call last):
  File "/usr/local/python3/bin/torchrun", line 8, in <module>
    sys.exit(main())
  File "/usr/local/python3/lib/python3.10/site-packages/torch/distributed/elastic/multiprocessing/errors/__init__.py", line 355, in wrapper
    return f(*args, **kwargs)
  File "/usr/local/python3/lib/python3.10/site-packages/torch/distributed/run.py", line 919, in main
    run(args)
  File "/usr/local/python3/lib/python3.10/site-packages/torch/distributed/run.py", line 910, in run
    elastic_launch(
  File "/usr/local/python3/lib/python3.10/site-packages/torch/distributed/launcher/api.py", line 138, in __call__
    return launch_agent(self._config, self._entrypoint, list(args))
  File "/usr/local/python3/lib/python3.10/site-packages/torch/distributed/launcher/api.py", line 269, in launch_agent
    raise ChildFailedError(
torch.distributed.elastic.multiprocessing.errors.ChildFailedError:

/root/LLaMA-Factory/src/llamafactory/launcher.py FAILED

Failures: <NO_OTHER_FAILURES>

Root Cause (first observed failure):
[0]:
  time       : 2025-05-12_17:04:53
  host       : localhost.localdomain
  rank       : 6 (local_rank: 6)
  exitcode   : 1 (pid: 125052)
  error_file : <N/A>
  traceback  : To enable traceback see: https://pytorch.org/docs/stable/elastic/errors.html

Traceback (most recent call last):
  File "/usr/local/python3/bin/llamafactory-cli", line 8, in <module>
    sys.exit(main())
  File "/root/LLaMA-Factory/src/llamafactory/cli.py", line 95, in main
    process = subprocess.run(
  File "/usr/local/python3/lib/python3.10/subprocess.py", line 526, in run
    raise CalledProcessError(retcode, process.args,
subprocess.CalledProcessError: Command '['torchrun', '--nnodes', '1', '--node_rank', '0', '--nproc_per_node', '8', '--master_addr', '127.0.0.1', '--master_port', '58797', '/root/LLaMA-Factory/src/llamafactory/launcher.py', 'saves/Qwen2.5-7B-Instruct/lora/train_2025-05-12-16-42-37/training_args.yaml']' returned non-zero exit status 1.
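The traceback above only shows that the `torchrun` wrapper saw rank 6 exit with status 1; the real Python exception is printed earlier in that rank's own output, above the `ChildFailedError`. One common way to surface the underlying error is to rerun the same training config on a single GPU without the distributed launcher, so the exception is not wrapped by torch elastic. A minimal sketch, assuming the `training_args.yaml` path from the log and LLaMA-Factory's `FORCE_TORCHRUN` environment variable (set to `0` here to skip the `torchrun` wrapper):

```shell
# Rerun the same WebUI-generated config on one GPU, without torchrun,
# so the original Python exception is printed instead of ChildFailedError.
CUDA_VISIBLE_DEVICES=0 FORCE_TORCHRUN=0 \
  llamafactory-cli train saves/Qwen2.5-7B-Instruct/lora/train_2025-05-12-16-42-37/training_args.yaml
```

If the single-GPU run succeeds, the failure is likely distributed-specific (e.g. insufficient memory across 8 processes); if it fails, the full traceback it prints identifies the actual cause.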


Reproduction

Put your message here.

Others

No response

kin1n avatar May 12 '25 09:05 kin1n
