
Can some expert please help me out? I've been stuck on this problem for several days without a solution.

Open listwebit opened this issue 1 year ago • 4 comments

After downloading the source and running sh training_scripts/single_node/run_LoRA.sh, I get the following error:

len(train_dataloader) = 334 len(train_dataset) = 1000 args.per_device_train_batch_size = 1
len(eval_dataloader) = 334 len(eval_dataset) = 1000 args.per_device_eval_batch_size = 1
[2023-04-23 11:34:49,179] [INFO] [logging.py:96:log_dist] [Rank 0] DeepSpeed info: version=0.9.0, git-hash=unknown, git-branch=unknown
[2023-04-23 11:34:49,182] [INFO] [comm.py:580:init_distributed] Distributed backend already initialized
[2023-04-23 11:34:49,335] [INFO] [logging.py:96:log_dist] [Rank 0] DeepSpeed Flops Profiler Enabled: False

(the worker ranks' tracebacks are interleaved in the raw output; each rank fails identically:)

Traceback (most recent call last):
  File "main.py", line 399, in <module>
    main()
  File "main.py", line 340, in main
    model, optimizer, _, lr_scheduler = deepspeed.initialize(
  File "/opt/conda/lib/python3.8/site-packages/deepspeed/__init__.py", line 156, in initialize
    engine = DeepSpeedEngine(args=args,
  File "/opt/conda/lib/python3.8/site-packages/deepspeed/runtime/engine.py", line 328, in __init__
    self._configure_optimizer(optimizer, model_parameters)
  File "/opt/conda/lib/python3.8/site-packages/deepspeed/runtime/engine.py", line 1187, in _configure_optimizer
    self.optimizer = self._configure_zero_optimizer(basic_optimizer)
  File "/opt/conda/lib/python3.8/site-packages/deepspeed/runtime/engine.py", line 1465, in _configure_zero_optimizer
    optimizer = DeepSpeedZeroOptimizer_Stage3(
  File "/opt/conda/lib/python3.8/site-packages/deepspeed/runtime/zero/stage3.py", line 133, in __init__
    self.dtype = self.optimizer.param_groups[0]['params'][0].dtype
IndexError: list index out of range

[2023-04-23 11:34:49,336] [INFO] [logging.py:96:log_dist] [Rank 0] Removing param_group that has no 'params' in the client Optimizer
[2023-04-23 11:34:49,337] [INFO] [logging.py:96:log_dist] [Rank 0] Using client Optimizer as basic optimizer
[2023-04-23 11:34:49,338] [INFO] [logging.py:96:log_dist] [Rank 0] DeepSpeed Basic Optimizer = FusedAdam
[2023-04-23 11:34:49,338] [INFO] [utils.py:51:is_zero_supported_optimizer] Checking ZeRO support for optimizer=FusedAdam type=<class 'deepspeed.ops.adam.fused_adam.FusedAdam'>
[2023-04-23 11:34:49,338] [INFO] [logging.py:96:log_dist] [Rank 0] Creating torch.float16 ZeRO stage 3 optimizer
[2023-04-23 11:34:49,549] [INFO] [utils.py:785:see_memory_usage] Stage 3 initialize beginning
[2023-04-23 11:34:49,550] [INFO] [utils.py:786:see_memory_usage] MA 6.94 GB Max_MA 10.77 GB CA 25.82 GB Max_CA 26 GB
[2023-04-23 11:34:49,550] [INFO] [utils.py:793:see_memory_usage] CPU Virtual Memory: used = 13.53 GB, percent = 5.4%
[2023-04-23 11:34:49,552] [INFO] [stage3.py:113:__init__] Reduce bucket size 500,000,000
[2023-04-23 11:34:49,552] [INFO] [stage3.py:114:__init__] Prefetch bucket size 30000000
[2023-04-23 11:34:51,217] [INFO] [launch.py:428:sigkill_handler] Killing subprocess 2315
[2023-04-23 11:34:51,220] [INFO] [launch.py:428:sigkill_handler] Killing subprocess 2316
[2023-04-23 11:34:51,220] [INFO] [launch.py:428:sigkill_handler] Killing subprocess 2317
[2023-04-23 11:34:51,221] [ERROR] [launch.py:434:sigkill_handler] ['/opt/conda/bin/python3', '-u', 'main.py', '--local_rank=2', '--sft_only_data_path', '/home/centos/belle/BELLE/data/dev1K.json', '--data_split', '10,0,0', '--model_name_or_path', '/home/centos/belle/BELLE/models/BELLE-7B-2M', '--per_device_train_batch_size', '1', '--per_device_eval_batch_size', '1', '--max_seq_len', '1024', '--learning_rate', '2e-4', '--weight_decay', '0.0001', '--num_train_epochs', '3', '--gradient_accumulation_steps', '16', '--lr_scheduler_type', 'cosine', '--num_warmup_steps', '100', '--seed', '1234', '--zero_stage', '3', '--lora_dim', '8', '--lora_module_name', 'decoder.layers.', '--only_optimize_lora', '--deepspeed', '--output_dir', 'output-lora'] exits with return code = 1
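The IndexError at the bottom of the log can be reproduced without DeepSpeed. With --only_optimize_lora, trainable parameters are selected by the --lora_module_name substring ('decoder.layers.'); if no parameter name contains that substring, the optimizer is built with an empty param group, and stage3.py then indexes into an empty list. A minimal sketch (the parameter names below are hypothetical, and a name mismatch is only one plausible cause, not confirmed in this thread):

```python
# Hypothetical module names: a BLOOM-style model (like BELLE-7B) names its
# blocks 'transformer.h.*', so the filter 'decoder.layers.' matches nothing.
param_names = [
    "transformer.h.0.self_attention.query_key_value.weight",
    "transformer.h.0.mlp.dense_h_to_4h.weight",
]
lora_module_name = "decoder.layers."

# Mimics the --only_optimize_lora filtering: keep only matching parameters.
trainable = [n for n in param_names if lora_module_name in n]
param_groups = [{"params": trainable}]  # empty group

# DeepSpeed's stage3.py line 133 effectively does this, hence the crash:
try:
    first_param = param_groups[0]["params"][0]
except IndexError as e:
    print("IndexError:", e)  # list index out of range
```

Checking which names `named_parameters()` actually yields against the configured --lora_module_name would confirm or rule this out.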

It feels like something is wrong with the Docker environment. The transformers version inside the container is 4.29.0.dev0.

— listwebit, Apr 23 '23

> [quoted: listwebit's original report above — run_LoRA.sh fails with IndexError: list index out of range during DeepSpeed ZeRO stage-3 init]

Hello, can you run the run_FT.sh script?

— xianghuisun, Apr 23 '23

> [quoted: listwebit's original report above — run_LoRA.sh fails with IndexError: list index out of range during DeepSpeed ZeRO stage-3 init]

It isn't necessarily a problem with the Docker environment. For LoRA training, you can refer to our earlier code: https://github.com/LianjiaTech/BELLE/tree/4f84c89372b435bae039b47f1f31078b1c6fc23e/train

The current code is based on deepspeed-chat, and its LoRA-training part is not yet complete.

— xianghuisun, Apr 23 '23

> [quoted: listwebit's original report above — run_LoRA.sh fails with IndexError: list index out of range during DeepSpeed ZeRO stage-3 init]

What if you downgrade transformers to 4.28.1 and try again?
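A small illustrative helper (plain Python, not part of BELLE) for double-checking that the container really resolved a pinned release rather than a dev build before relaunching the script:

```python
def parse_version(ver: str):
    # Keep only the leading numeric components:
    # '4.29.0.dev0' -> (4, 29, 0), '4.28.1' -> (4, 28, 1)
    parts = []
    for piece in ver.split("."):
        if piece.isdigit():
            parts.append(int(piece))
        else:
            break  # stop at 'dev0' and similar suffixes
    return tuple(parts)

def is_dev_build(ver: str) -> bool:
    return ".dev" in ver

# In practice you would feed in transformers.__version__.
print(parse_version("4.29.0.dev0"))  # (4, 29, 0)
print(is_dev_build("4.29.0.dev0"))   # True
print(is_dev_build("4.28.1"))        # False
```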

— xianghuisun, Apr 23 '23

> [quoted: listwebit's original report above — run_LoRA.sh fails with IndexError: list index out of range during DeepSpeed ZeRO stage-3 init]

I double-checked: training with run_FT.sh inside the image works fine, so the image itself can be ruled out. @xianghuisun @listwebit

— bestpredicts, Apr 23 '23

My suggestion is to use tloen/alpaca-lora (the Stanford Alpaca LoRA fine-tuning code) for LoRA fine-tuning. I tried this repo's code too: it can't fine-tune in int8, and LoRA fine-tuning runs out of GPU memory even on a Tesla A40, whereas Stanford Alpaca supports int8 LoRA fine-tuning — 13 GB of VRAM is enough to fine-tune a 7B model.
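The commenter's memory figures can be sanity-checked with back-of-the-envelope arithmetic (the layer count and projection set below are illustrative assumptions for a LLaMA-7B-like model, not taken from the thread):

```python
# With the frozen base weights quantized to int8, a 7B-parameter model needs
# roughly 1 byte per parameter for weights; in fp16 it needs 2 bytes.
params = 7e9
int8_weights_gb = params * 1 / 1e9   # ~7 GB of weights in int8
fp16_weights_gb = params * 2 / 1e9   # ~14 GB of weights in fp16 (already over 13 GB)

# LoRA adds two rank-r matrices (d x r and r x d) per adapted weight.
# Illustrative: 32 layers x 2 projections (q_proj, v_proj), d=4096, r=8.
d, r, n_adapted = 4096, 8, 64
lora_params = n_adapted * 2 * d * r  # only these few million params train

print(int8_weights_gb, fp16_weights_gb, lora_params)
```

So int8 base weights plus a few million trainable LoRA parameters (and their optimizer state) fit in ~13 GB, while fp16 full-model fine-tuning cannot — consistent with the A40 running out of memory without int8.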

— Minami-su, Apr 23 '23