
Fine-tuning Qwen1___5-7B-Chat with 4-bit quantization: why does it still use so much GPU memory?

Open xiaolvtongxue-zt opened this issue 3 months ago • 2 comments

These are the parameters I used when fine-tuning the model:

SftArguments(
model_type='qwen1half-7b-chat', 
model_id_or_path='../models/models/qwen/Qwen1___5-7B-Chat', 
model_revision='master', 
sft_type='lora', 
freeze_parameters=0.0, 
additional_trainable_parameters=[], 
tuner_backend='swift', 
template_type='qwen', output_dir='/home/centos/xiaolv/太安模型微调/swift_qwen/output/qwen1half-7b-chat-swift/qwen1half-7b-chat/v1-20240327-083203', add_output_dir_suffix=True, ddp_backend='nccl', ddp_find_unused_parameters=None, ddp_broadcast_buffers=None, seed=42, resume_from_checkpoint=None, dtype='bf16', dataset=['_custom_dataset'], dataset_seed=42, dataset_test_ratio=0.01, train_dataset_sample=-1, train_dataset_mix_ratio=None, train_dataset_mix_ds=['ms-bench'], val_dataset_sample=None, use_loss_scale=False, system='You are a helpful assistant.', max_length=3072, truncation_strategy='delete', check_dataset_strategy='none', custom_train_dataset_path=['/home/centos/xiaolv/太安模型微调/train_data/2024_03_24/train_data__2024_03_24_11_06_36.json'], custom_val_dataset_path=[], self_cognition_sample=0, model_name=[None, None], model_author=[None, None], quantization_bit=4, bnb_4bit_comp_dtype='bf16', bnb_4bit_quant_type='nf4', bnb_4bit_use_double_quant=True, lora_target_modules=['q_proj', 'k_proj', 'v_proj'], lora_rank=16, lora_alpha=64, lora_dropout_p=0.05, lora_bias_trainable='none', lora_modules_to_save=[], lora_dtype='fp32', lora_lr_ratio=None, use_rslora=False, lora_layers_to_transform=None, lora_layers_pattern=None, lora_rank_pattern={}, lora_alpha_pattern={}, lora_loftq_config={}, use_dora=False, use_galore=False, galore_rank=128, galore_target_modules=None, galore_update_proj_gap=50, galore_scale=1.0, galore_proj_type='std', galore_optim_per_parameter=False, galore_with_embedding=False, adalora_target_r=8, adalora_init_r=12, adalora_tinit=0, adalora_tfinal=0, adalora_deltaT=1, adalora_beta1=0.85, adalora_beta2=0.85, adalora_orth_reg_weight=0.5, ia3_target_modules=['DEFAULT'], ia3_feedforward_modules=[], ia3_modules_to_save=[], llamapro_num_new_blocks=4, llamapro_num_groups=None, neftune_noise_alpha=10, neftune_backend='transformers', gradient_checkpointing=True, deepspeed=None, batch_size=1, eval_batch_size=1, num_train_epochs=20, max_steps=-1, optim='adamw_torch', adam_beta1=0.9, adam_beta2=0.999, learning_rate=0.0001, weight_decay=0.01, gradient_accumulation_steps=16.0, max_grad_norm=0.5, predict_with_generate=False, lr_scheduler_type='linear', warmup_ratio=0.05, eval_steps=200, save_steps=20, save_only_model=False, save_total_limit=10, logging_steps=10, dataloader_num_workers=1, dataloader_pin_memory=True, push_to_hub=False, hub_model_id=None, hub_token=None, hub_private_repo=False, push_hub_strategy='push_best', test_oom_error=False, disable_tqdm=False, lazy_tokenize=True, preprocess_num_proc=4, use_flash_attn=None, ignore_args_error=False, check_model_is_latest=True, logging_dir='/home/centos/xiaolv/太安模型微调/swift_qwen/output/qwen1half-7b-chat-swift/qwen1half-7b-chat/v1-20240327-083203/runs', report_to=['tensorboard'], acc_strategy='token', save_on_each_node=True, evaluation_strategy='steps', save_strategy='steps', save_safetensors=True, gpu_memory_fraction=None, max_new_tokens=2048, do_sample=True, temperature=0.3, top_k=20, top_p=0.7, repetition_penalty=1.0, num_beams=1, per_device_train_batch_size=None, per_device_eval_batch_size=None, only_save_model=None, neftune_alpha=None, deepspeed_config_path=None, model_cache_dir=None)

The key settings are roughly:

model_max_length_=1024*3
lora_rank_ = 16
lora_alpha_ = lora_rank_*4
batch_size_ = 1
gradient_accumulation_steps_ = 16/batch_size_
eval_batch_size_ = 1
num_train_epochs_=20
max_new_tokens_ = 1024*2
quantization_bit_ = 4
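
For context, here is a rough back-of-envelope sketch of the static weight footprint these settings imply. The model dimensions (hidden size, layer count, vocabulary size, total parameter count) are assumptions for a Qwen1.5-7B-class model, not values read from the checkpoint:

```python
GiB = 1024 ** 3

# Assumed Qwen1.5-7B-class dimensions (illustrative only).
hidden_size = 4096
num_layers = 32
vocab_size = 152_000
total_params = 7.7e9

# bitsandbytes quantizes the Linear layers to NF4 (~0.5 byte/param plus small
# quantization constants); the embedding and lm_head usually stay in bf16.
embed_params = 2 * vocab_size * hidden_size          # embedding + lm_head
nf4_weights = (total_params - embed_params) * 0.5 / GiB
bf16_weights = embed_params * 2 / GiB

# LoRA rank 16 on q/k/v, kept in fp32 (4 bytes), plus gradient and two AdamW states.
lora_params = num_layers * 3 * (2 * hidden_size * 16)
lora_total = lora_params * 4 * 4 / GiB

print(f"NF4 linear weights   ~{nf4_weights:.1f} GiB")
print(f"bf16 embed + lm_head ~{bf16_weights:.1f} GiB")
print(f"LoRA + grads + AdamW ~{lora_total:.2f} GiB")
print(f"static total         ~{nf4_weights + bf16_weights + lora_total:.1f} GiB per GPU")

# With DDP, every GPU holds this full copy; on top of it come activations at
# max_length=3072 (reduced but not removed by gradient checkpointing), the CUDA
# context, and allocator fragmentation, which is how usage climbs well past the
# raw NF4 weight size.
```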

With these settings, the memory on two RTX 4090s is basically maxed out. Is some other memory-optimization technique being applied here? Why does training need this much memory when batch_size == 1, and with 4-bit quantization on top of that (the quantized base model alone takes about 8 GB of GPU memory)? [screenshot attached]
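
To pin down where the memory actually goes on each card, PyTorch's built-in counters can be checked around a single training step; a minimal sketch using standard torch.cuda APIs, to be dropped into your own debug run:

```python
import torch

# After the quantized model, LoRA adapters and optimizer have been created:
torch.cuda.reset_peak_memory_stats()
static_gib = torch.cuda.memory_allocated() / 1024**3
print(f"static footprint (weights/adapters/optimizer): {static_gib:.2f} GiB")

# ... run a single training step on a batch of your longest samples here ...

# Peak minus static is roughly the activation / temporary-buffer cost at this
# max_length and batch size; memory_summary shows the allocator's breakdown.
peak_gib = torch.cuda.max_memory_allocated() / 1024**3
print(f"peak during the step: {peak_gib:.2f} GiB (activations ~{peak_gib - static_gib:.2f} GiB)")
print(torch.cuda.memory_summary(abbreviated=True))
```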

xiaolvtongxue-zt · Mar 27 '24 00:03

There are a few possibilities:

  1. DDP training adds extra memory overhead (each GPU holds its own full copy of the model).
  2. The maximum sequence length of your training data may be quite long; long samples drive up activation memory (see the length-check sketch below).
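
For point 2, the token-length distribution of the custom dataset can be checked directly. This is only a sketch: it assumes the JSON file is a list of records with `query`/`response` fields, so adjust the path and field names to the real schema:

```python
import json
from transformers import AutoTokenizer

# Assumed path and field names, based on custom_train_dataset_path above.
tokenizer = AutoTokenizer.from_pretrained(
    "../models/models/qwen/Qwen1___5-7B-Chat", trust_remote_code=True)

with open("train_data__2024_03_24_11_06_36.json", encoding="utf-8") as f:
    records = json.load(f)

lengths = sorted(
    len(tokenizer(rec.get("query", "") + rec.get("response", ""))["input_ids"])
    for rec in records
)
print("max:", lengths[-1],
      "p95:", lengths[int(0.95 * (len(lengths) - 1))],
      "median:", lengths[len(lengths) // 2])
# Activation memory scales with the longest samples that survive truncation at
# max_length=3072; if most samples are much shorter, lowering max_length
# (or filtering the outliers) frees a noticeable amount of memory.
```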

tastelikefeet · Mar 27 '24 11:03

One more question: I trained with quantization_bit = 4. When I later load the trained model for inference, is the quantization detected and applied automatically, or do I need to pass the parameter myself?
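
In case the setting does have to be supplied by hand, or the checkpoint is loaded outside of swift, a minimal sketch with plain transformers + peft re-creates the training-time 4-bit setup explicitly and attaches the trained LoRA adapter; the adapter path below is a placeholder:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from peft import PeftModel

base_path = "../models/models/qwen/Qwen1___5-7B-Chat"
adapter_path = "output/qwen1half-7b-chat-swift/.../checkpoint-xxx"  # placeholder LoRA dir

# Same 4-bit configuration as used during training (nf4, double quant, bf16 compute).
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
)

tokenizer = AutoTokenizer.from_pretrained(base_path)
model = AutoModelForCausalLM.from_pretrained(
    base_path, quantization_config=bnb_config, device_map="auto")
model = PeftModel.from_pretrained(model, adapter_path)  # attach the trained LoRA weights
model.eval()
```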

xiaolvtongxue-zt · Mar 29 '24 01:03