nproc_per_node=4

CUDA_VISIBLE_DEVICES=0,1,2,3 \
NPROC_PER_NODE=$nproc_per_node \
swift sft \
    --model_id_or_path "AI-ModelScope/llava-v1.6-mistral-7b" \
    --template_type "llava-mistral-instruct" \
    --custom_train_dataset_path train_swift.json \
    --custom_val_dataset_path test_swift.json \
    --dataset_test_ratio "0.15" \
    --save_steps "20" \
    --lora_target_modules q_proj k_proj v_proj \
    --batch_size "8" \
    --learning_rate "1e-4" \
    --num_train_epochs "2" \
    --gradient_accumulation_steps "16" \
    --eval_batch_size "8" \
    --use_flash_attn "True" \
    --add_output_dir_suffix False \
    --output_dir finetune_output_epoch_100 \
    --logging_dir finetune_output_epoch_100 \
    --max_length -1 \
    --train_dataset_sample -1 \
    --sft_type lora \
    --tuner_backend peft \
    --quantization_bit 4 \
    --bnb_4bit_comp_dtype AUTO \
    --ddp_backend nccl \
    --check_dataset_strategy warning \
    --gradient_checkpointing "True" \
    --deepspeed zero3-offload
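For reference, a rough sketch of the global batch size these flags imply, assuming --batch_size is the per-device value that swift forwards to the HF trainer (an assumption, not verified here):

# effective samples per optimizer step (assumption: --batch_size is per device)
# batch_size * gradient_accumulation_steps * nproc_per_node = 8 * 16 * 4 = 512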
The workaround found online is: the original code loads the checkpoint straight onto the GPU,

import torch

# by default torch.load restores tensors to the device they were saved from (here the GPU)
pretrain_weight = torch.load(path)['model']

and should be changed to load it onto the CPU first:

# map_location keeps the checkpoint tensors on the CPU while the state dict is loaded
pretrain_weight = torch.load(path, map_location=torch.device('cpu'))['model']
model.load_state_dict(pretrain_weight)
Could you paste the stack trace? There shouldn't be anywhere in our code that explicitly loads the full model this way.
File "/home/jianc/miniconda3/envs/benchmark-llm/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1520, in _call_impl
return forward_call(*args, **kwargs)
File "/home/jianc/miniconda3/envs/benchmark-llm/lib/python3.10/site-packages/peft/tuners/tuners_utils.py", line 161, in forward
return self.model.forward(*args, **kwargs)
File "/home/jianc/.cache/modelscope/hub/_github/LLaVA.git/llava/model/language_model/llava_mistral.py", line 91, in forward
return super().forward(
File "/home/jianc/miniconda3/envs/benchmark-llm/lib/python3.10/site-packages/transformers/models/mistral/modeling_mistral.py", line 1158, in forward
outputs = self.model(
File "/home/jianc/miniconda3/envs/benchmark-llm/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1511, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "/home/jianc/miniconda3/envs/benchmark-llm/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1520, in _call_impl
return forward_call(*args, **kwargs)
File "/home/jianc/miniconda3/envs/benchmark-llm/lib/python3.10/site-packages/transformers/models/mistral/modeling_mistral.py", line 1033, in forward
layer_outputs = self._gradient_checkpointing_func(
File "/home/jianc/miniconda3/envs/benchmark-llm/lib/python3.10/site-packages/swift/llm/utils/model.py", line 3803, in
_old_checkpoint(*args, use_reentrant=use_reentrant, **kwargs))
File "/home/jianc/miniconda3/envs/benchmark-llm/lib/python3.10/site-packages/torch/_compile.py", line 24, in inner
return torch._dynamo.disable(fn, recursive)(*args, **kwargs)
File "/home/jianc/miniconda3/envs/benchmark-llm/lib/python3.10/site-packages/torch/_dynamo/eval_frame.py", line 489, in _fn
return fn(*args, **kwargs)
File "/home/jianc/miniconda3/envs/benchmark-llm/lib/python3.10/site-packages/torch/_dynamo/external_utils.py", line 17, in inner
return fn(*args, **kwargs)
File "/home/jianc/miniconda3/envs/benchmark-llm/lib/python3.10/site-packages/torch/utils/checkpoint.py", line 489, in checkpoint
ret = function(*args, **kwargs)
File "/home/jianc/miniconda3/envs/benchmark-llm/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1511, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "/home/jianc/miniconda3/envs/benchmark-llm/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1520, in _call_impl
return forward_call(*args, **kwargs)
File "/home/jianc/miniconda3/envs/benchmark-llm/lib/python3.10/site-packages/transformers/models/mistral/modeling_mistral.py", line 770, in forward
hidden_states = self.mlp(hidden_states)
File "/home/jianc/miniconda3/envs/benchmark-llm/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1511, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "/home/jianc/miniconda3/envs/benchmark-llm/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1520, in _call_impl
return forward_call(*args, **kwargs)
File "/home/jianc/miniconda3/envs/benchmark-llm/lib/python3.10/site-packages/transformers/models/mistral/modeling_mistral.py", line 179, in forward
return self.down_proj(self.act_fn(self.gate_proj(x)) * self.up_proj(x))
torch.cuda.OutOfMemoryError: CUDA out of memory. Tried to allocate 478.00 MiB. GPU 3 has a total capacity of 23.69 GiB of which 194.06 MiB is free. Including non-PyTorch memory, this process has 23.49 GiB memory in use. Of the allocated memory 19.49 GiB is allocated by PyTorch, and 3.58 GiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)
[2024-04-30 16:13:19,865] torch.distributed.elastic.multiprocessing.api: [ERROR] failed (exitcode: 1) local_rank: 0 (pid: 2376292) of binary: /home/jianc/miniconda3/envs/benchmark-llm/bin/python
Traceback (most recent call last):
File "/home/jianc/miniconda3/envs/benchmark-llm/bin/torchrun", line 8, in
sys.exit(main())
File "/home/jianc/miniconda3/envs/benchmark-llm/lib/python3.10/site-packages/torch/distributed/elastic/multiprocessing/errors/init.py", line 347, in wrapper
return f(*args, **kwargs)
File "/home/jianc/miniconda3/envs/benchmark-llm/lib/python3.10/site-packages/torch/distributed/run.py", line 812, in main
run(args)
File "/home/jianc/miniconda3/envs/benchmark-llm/lib/python3.10/site-packages/torch/distributed/run.py", line 803, in run
elastic_launch(
File "/home/jianc/miniconda3/envs/benchmark-llm/lib/python3.10/site-packages/torch/distributed/launcher/api.py", line 135, in call
return launch_agent(self._config, self._entrypoint, list(args))
File "/home/jianc/miniconda3/envs/benchmark-llm/lib/python3.10/site-packages/torch/distributed/launcher/api.py", line 268, in launch_agent
raise ChildFailedError(
torch.distributed.elastic.multiprocessing.errors.ChildFailedError:
Set batch_size to 1.
Setting it to 1 doesn't help either; GPU memory still suddenly blows up.
You can try export PYTORCH_CUDA_ALLOC_CONF=max_split_size_mb:512 to reduce memory fragmentation.
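A minimal sketch of how those two suggestions combine with the original launch command (the alloc-conf value and the lowered batch sizes come from the discussion above; the remaining flags are unchanged):

export PYTORCH_CUDA_ALLOC_CONF=max_split_size_mb:512
# the OOM message above also suggests PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True as an alternative

CUDA_VISIBLE_DEVICES=0,1,2,3 \
NPROC_PER_NODE=4 \
swift sft \
    --batch_size "1" \
    --eval_batch_size "1" \
    ...  # all other flags as in the original command above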
Still doesn't work; it runs out of GPU memory after one training step.
Traceback (most recent call last):
File "/home/jianc/miniconda3/envs/benchmark-llm/lib/python3.10/site-packages/swift/cli/sft.py", line 5, in
sft_main()
File "/home/jianc/miniconda3/envs/benchmark-llm/lib/python3.10/site-packages/swift/utils/run_utils.py", line 31, in x_main
result = llm_x(args, **kwargs)
File "/home/jianc/miniconda3/envs/benchmark-llm/lib/python3.10/site-packages/swift/llm/sft.py", line 261, in llm_sft
trainer.train(training_args.resume_from_checkpoint)
File "/home/jianc/miniconda3/envs/benchmark-llm/lib/python3.10/site-packages/swift/trainers/trainers.py", line 54, in train
res = super().train(*args, **kwargs)
File "/home/jianc/miniconda3/envs/benchmark-llm/lib/python3.10/site-packages/transformers/trainer.py", line 1859, in train
return inner_training_loop(
File "/home/jianc/miniconda3/envs/benchmark-llm/lib/python3.10/site-packages/transformers/trainer.py", line 2203, in _inner_training_loop
tr_loss_step = self.training_step(model, inputs)
File "/home/jianc/miniconda3/envs/benchmark-llm/lib/python3.10/site-packages/transformers/trainer.py", line 3138, in training_step
loss = self.compute_loss(model, inputs)
File "/home/jianc/miniconda3/envs/benchmark-llm/lib/python3.10/site-packages/swift/trainers/trainers.py", line 220, in compute_loss
outputs = model(**inputs)
File "/home/jianc/miniconda3/envs/benchmark-llm/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1511, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "/home/jianc/miniconda3/envs/benchmark-llm/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1520, in _call_impl
return forward_call(*args, **kwargs)
File "/home/jianc/miniconda3/envs/benchmark-llm/lib/python3.10/site-packages/torch/nn/parallel/distributed.py", line 1519, in forward
inputs, kwargs = self._pre_forward(*inputs, **kwargs)
File "/home/jianc/miniconda3/envs/benchmark-llm/lib/python3.10/site-packages/torch/nn/parallel/distributed.py", line 1413, in _pre_forward
if torch.is_grad_enabled() and self.reducer._rebuild_buckets():
RuntimeError: Expected to have finished reduction in the prior iteration before starting a new one. This error indicates that your module has parameters that were not used in producing loss. You can enable unused parameter detection by passing the keyword argument find_unused_parameters=True to torch.nn.parallel.DistributedDataParallel, and by
making sure all forward function outputs participate in calculating loss.
If you already have done the above, then the distributed data parallel module wasn't able to locate the output tensors in the return value of your module's forward function. Please include the loss function and the structure of the return value of forward of your module when reporting this issue (e.g. list, dict, iterable).
Parameters which did not receive grad for rank 3: base_model.model.model.vision_tower.vision_tower.vision_model.encoder.layers.23.self_attn.q_proj.lora_B.default.weight, base_model.model.model.vision_tower.vision_tower.vision_model.encoder.layers.23.
deepspeed zero3 is already supported now; try it with --deepspeed default-zero3.
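A minimal sketch of that change, assuming everything else in the original command stays the same:

CUDA_VISIBLE_DEVICES=0,1,2,3 \
NPROC_PER_NODE=4 \
swift sft \
    --deepspeed default-zero3 \
    ...  # remaining flags as in the original command, replacing the earlier --deepspeed zero3-offload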