
How can passage_max_len be increased to 8192 when fine-tuning BGE-M3?

Open · Rilzob opened this issue 1 year ago · 2 comments

The GPU is an A800. With passage_max_len=4096, per_device_train_batch_size can only be set to 2; using the ds_stage0.json from examples, memory usage is 62 GB.

With my own ds_stage3.json and per_device_train_batch_size=2, memory usage is 56 GB. Although this reduces GPU memory usage somewhat, it is still not enough to raise passage_max_len to 8192.
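For context, these knobs plug into a launch command along the following lines. This is a minimal sketch reconstructed from the FlagEmbedding BGE-M3 fine-tuning examples rather than copied from this issue, so the module path, data path, GPU count, and hyperparameters are assumptions:

```bash
# Hypothetical fine-tuning launch; paths, GPU count, and hyperparameters
# are placeholders, not taken from this issue.
torchrun --nproc_per_node 8 \
    -m FlagEmbedding.BGE_M3.run \
    --model_name_or_path BAAI/bge-m3 \
    --train_data ./train_data.jsonl \
    --output_dir ./bge-m3-ft \
    --deepspeed ds_stage3.json \
    --fp16 \
    --per_device_train_batch_size 2 \
    --query_max_len 512 \
    --passage_max_len 4096 \
    --learning_rate 1e-5 \
    --num_train_epochs 1
```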

The ds_stage3.json configuration is as follows:

```json
{
    "zero_optimization": {
        "stage": 3,
        "offload_optimizer": {
            "device": "cpu",
            "pin_memory": true
        },
        "offload_param": {
            "device": "cpu",
            "pin_memory": true
        },
        "overlap_comm": true,
        "contiguous_gradients": true,
        "sub_group_size": 1e9,
        "reduce_bucket_size": "auto",
        "stage3_prefetch_bucket_size": 524288,
        "stage3_param_persistence_threshold": "auto",
        "stage3_max_live_parameters": 1e9,
        "stage3_max_reuse_distance": 1e9,
        "gather_16bit_weights_on_model_save": true
    },
    "fp16": {
        "enabled": "auto",
        "loss_scale": 0,
        "initial_scale_power": 10,
        "loss_scale_window": 1000,
        "hysteresis": 2,
        "min_loss_scale": 1
    },
    "bf16": {
        "enabled": "auto",
        "loss_scale": 0,
        "initial_scale_power": 10,
        "loss_scale_window": 1000,
        "hysteresis": 2,
        "min_loss_scale": 1
    },
    "optimizer": {
        "type": "AdamW",
        "params": {
            "lr": "auto",
            "betas": "auto",
            "eps": "auto",
            "weight_decay": "auto",
            "torch_adam": true
        }
    },
    "scheduler": {
        "type": "WarmupDecayLR",
        "params": {
            "warmup_min_lr": "auto",
            "warmup_max_lr": "auto",
            "warmup_num_steps": "auto",
            "total_num_steps": "auto"
        }
    },
    "gradient_accumulation_steps": "auto",
    "gradient_clipping": "auto",
    "steps_per_print": 1000,
    "train_batch_size": "auto",
    "train_micro_batch_size_per_gpu": "auto",
    "wall_clock_breakdown": false
}
```

Is there any way to raise passage_max_len to 8192 and also increase the batch size?

Rilzob · Dec 30 '24 02:12
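Neither reply below resolves this, so a note for anyone landing here: the usual levers for longer sequences on a fixed-memory GPU are gradient (activation) checkpointing and trading per-GPU micro-batch size for gradient accumulation. A hypothetical variant of such a launch follows; --gradient_checkpointing and --gradient_accumulation_steps are standard Hugging Face TrainingArguments options, but whether this combination actually fits 8192 tokens on an A800 is untested here:

```bash
# Hypothetical memory-saving variant; untested for this setup.
# --gradient_checkpointing recomputes activations in the backward pass,
# which is the dominant memory cost at long sequence lengths.
# --gradient_accumulation_steps preserves the effective batch size
# while the per-GPU micro-batch drops to 1.
torchrun --nproc_per_node 8 \
    -m FlagEmbedding.BGE_M3.run \
    --model_name_or_path BAAI/bge-m3 \
    --train_data ./train_data.jsonl \
    --output_dir ./bge-m3-ft-8192 \
    --deepspeed ds_stage3.json \
    --fp16 \
    --gradient_checkpointing \
    --per_device_train_batch_size 1 \
    --gradient_accumulation_steps 8 \
    --query_max_len 512 \
    --passage_max_len 8192 \
    --learning_rate 1e-5 \
    --num_train_epochs 1
```

Note that the CPU offload already enabled in the stage-3 config above moves optimizer and parameter state off the GPU, but not activations; at 8192 tokens activations dominate, which is why checkpointing tends to matter more here than the choice between stage 0 and stage 3.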

Any progress on this, OP?

CNXDZS · Feb 13 '25 08:02

Same question here.

jiaohuix · Mar 06 '25 15:03