
Fine-tuning does not work

Open · akanyaani opened this issue 2 years ago · 6 comments

Traceback (most recent call last):
  File "/home/ubuntu/stanford_alpaca/train.py", line 231, in <module>
    train()
  File "/home/ubuntu/stanford_alpaca/train.py", line 225, in train
    trainer.train()
  File "/home/ubuntu/anaconda3/envs/lama/lib/python3.10/site-packages/transformers-4.27.0.dev0-py3.10.egg/transformers/trainer.py", line 1628, in train
    return inner_training_loop(
  File "/home/ubuntu/anaconda3/envs/lama/lib/python3.10/site-packages/transformers-4.27.0.dev0-py3.10.egg/transformers/trainer.py", line 1715, in _inner_training_loop
    model = self._wrap_model(self.model_wrapped)
  File "/home/ubuntu/anaconda3/envs/lama/lib/python3.10/site-packages/transformers-4.27.0.dev0-py3.10.egg/transformers/trainer.py", line 1442, in _wrap_model
    raise Exception("Could not find the transformer layer class to wrap in the model.")
Exception: Could not find the transformer layer class to wrap in the model.
(the same traceback is printed by each of the four ranks)
[I ProcessGroupNCCL.cpp:844] [Rank 2] NCCL watchdog thread terminated normally
[I ProcessGroupNCCL.cpp:844] [Rank 3] NCCL watchdog thread terminated normally
[I ProcessGroupNCCL.cpp:844] [Rank 0] NCCL watchdog thread terminated normally
[I ProcessGroupNCCL.cpp:844] [Rank 1] NCCL watchdog thread terminated normally
ERROR:torch.distributed.elastic.multiprocessing.api:failed (exitcode: 1) local_rank: 0 (pid: 13697) of binary: /home/ubuntu/anaconda3/envs/lama/bin/python
Traceback (most recent call last):
  File "/home/ubuntu/anaconda3/envs/lama/bin/torchrun", line 8, in <module>
    sys.exit(main())
  File "/home/ubuntu/anaconda3/envs/lama/lib/python3.10/site-packages/torch/distributed/elastic/multiprocessing/errors/__init__.py", line 346, in wrapper
    return f(*args, **kwargs)
  File "/home/ubuntu/anaconda3/envs/lama/lib/python3.10/site-packages/torch/distributed/run.py", line 794, in main
    run(args)
  File "/home/ubuntu/anaconda3/envs/lama/lib/python3.10/site-packages/torch/distributed/run.py", line 785, in run
    elastic_launch(
  File "/home/ubuntu/anaconda3/envs/lama/lib/python3.10/site-packages/torch/distributed/launcher/api.py", line 134, in __call__
    return launch_agent(self._config, self._entrypoint, list(args))
  File "/home/ubuntu/anaconda3/envs/lama/lib/python3.10/site-packages/torch/distributed/launcher/api.py", line 250, in launch_agent
    raise ChildFailedError(
torch.distributed.elastic.multiprocessing.errors.ChildFailedError:
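
For context: this exception comes from the HF Trainer's FSDP auto-wrap step. It searches the model's modules for a class whose name exactly matches the value passed to --fsdp_transformer_layer_cls_to_wrap and raises when nothing matches. A simplified sketch of that lookup (an illustration, not the actual transformers source):

def find_layer_class_to_wrap(model, class_name):
    # Walk every submodule and compare class names. The comparison is
    # an exact, case-sensitive string match, which is why the spelling
    # 'LLaMADecoderLayer' vs. 'LlamaDecoderLayer' matters below.
    for module in model.modules():
        if type(module).__name__ == class_name:
            return type(module)
    raise Exception("Could not find the transformer layer class to wrap in the model.")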

akanyaani · Mar 28 '23 06:03

same error!

chengh3 avatar Mar 29 '23 01:03 chengh3

@chengh3

In the command, replace LLaMADecoderLayer in --fsdp_transformer_layer_cls_to_wrap 'LLaMADecoderLayer' with LlamaDecoderLayer; the class was renamed when LLaMA support was merged into transformers, and the name lookup is case-sensitive. It will work.
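
If you are unsure which spelling your installed transformers build uses, one quick check is to build the model skeleton from its config (no weights loaded) and list the decoder-layer class names. A sketch; the model path is a placeholder:

from transformers import AutoConfig, AutoModelForCausalLM

# Instantiate from config only: random weights, no checkpoint download.
config = AutoConfig.from_pretrained("/path/to/llama-7b")  # placeholder path
model = AutoModelForCausalLM.from_config(config)

# Print the class name(s) of the transformer block; pass the exact string
# you see here to --fsdp_transformer_layer_cls_to_wrap.
print(sorted({type(m).__name__ for m in model.modules()
              if "DecoderLayer" in type(m).__name__}))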

akanyaani · Mar 29 '23 06:03

@akanyaani Does this affect its performance?

WangRongsheng · Mar 29 '23 08:03

Nope, it works fine, GPU usage was 100%

akanyaani · Mar 29 '23 15:03

> Nope, it works fine, GPU usage was 100%

Hi, do you use PyTorch 1.13?

Hiusam · Mar 30 '23 02:03

> Nope, it works fine, GPU usage was 100%

Thanks for your advice, but I've run into another error. Could you help me solve it?

WARNING:__main__:
*****************************************
Setting OMP_NUM_THREADS environment variable for each process to be 1 in default, to avoid your system being overloaded, please further tune the variable for optimal performance in your application as needed.
*****************************************

/home/la/anaconda3/envs/alpaca_torch/lib/python3.10/site-packages/transformers/training_args.py:1356: FutureWarning: using --fsdp_transformer_layer_cls_to_wrap is deprecated. Use fsdp_config instead
  warnings.warn(
(the same FutureWarning is emitted by each of the four processes)
WARNING:torch.distributed.elastic.multiprocessing.api:Sending process 77807 closing signal SIGTERM
WARNING:torch.distributed.elastic.multiprocessing.api:Sending process 77808 closing signal SIGTERM
WARNING:torch.distributed.elastic.multiprocessing.api:Sending process 77809 closing signal SIGTERM
ERROR:torch.distributed.elastic.multiprocessing.api:failed (exitcode: -9) local_rank: 0 (pid: 77806) of binary: /home/la/anaconda3/envs/alpaca_torch/bin/python
Traceback (most recent call last):
  File "/home/la/anaconda3/envs/alpaca_torch/lib/python3.10/runpy.py", line 196, in _run_module_as_main
    return _run_code(code, main_globals, None,
  File "/home/la/anaconda3/envs/alpaca_torch/lib/python3.10/runpy.py", line 86, in _run_code
    exec(code, run_globals)
  File "/home/la/anaconda3/envs/alpaca_torch/lib/python3.10/site-packages/torch/distributed/run.py", line 798, in <module>
    main()
  File "/home/la/anaconda3/envs/alpaca_torch/lib/python3.10/site-packages/torch/distributed/elastic/multiprocessing/errors/__init__.py", line 346, in wrapper
    return f(*args, **kwargs)
  File "/home/la/anaconda3/envs/alpaca_torch/lib/python3.10/site-packages/torch/distributed/run.py", line 794, in main
    run(args)
  File "/home/la/anaconda3/envs/alpaca_torch/lib/python3.10/site-packages/torch/distributed/run.py", line 785, in run
    elastic_launch(
  File "/home/la/anaconda3/envs/alpaca_torch/lib/python3.10/site-packages/torch/distributed/launcher/api.py", line 134, in __call__
    return launch_agent(self._config, self._entrypoint, list(args))
  File "/home/la/anaconda3/envs/alpaca_torch/lib/python3.10/site-packages/torch/distributed/launcher/api.py", line 250, in launch_agent
    raise ChildFailedError(
torch.distributed.elastic.multiprocessing.errors.ChildFailedError:
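
Regarding the FutureWarning in that log: newer transformers releases want the wrap class passed through fsdp_config instead of the standalone flag. A minimal sketch of that route, assuming a transformers version around 4.28 (the key name was later renamed to drop the "fsdp_" prefix, and some versions expect fsdp_config to be a path to a JSON file rather than an inline dict, so check the docs for your exact version):

from transformers import TrainingArguments

# Sketch of the fsdp_config replacement the FutureWarning points to.
# Key name assumed from transformers ~4.28; later releases use
# "transformer_layer_cls_to_wrap" instead.
args = TrainingArguments(
    output_dir="output",  # placeholder
    fsdp="full_shard auto_wrap",
    fsdp_config={"fsdp_transformer_layer_cls_to_wrap": "LlamaDecoderLayer"},
)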

xiaoweiweixiao · Mar 30 '23 12:03