
[BUG/Help] OVERFLOW causes abnormal loss

Open Jaren1907 opened this issue 1 year ago • 11 comments

Is there an existing issue for this?

  • [ ] I have searched the existing issues

Current Behavior

  • When fine-tuning on my own data with ds_train_finetune.sh, the OVERFLOW messages shown below appear right at the start of training and keep appearing throughout training.

  • After training for a while, the loss spikes right after an OVERFLOW message.

  • If training continues, it eventually aborts with Exception: Current loss scale already at minimum - cannot decrease scale anymore. Exiting run.

  • Checkpoints saved before the loss spike predict normally; checkpoints saved after the loss spike just produce garbled output.

Could you explain what causes the OVERFLOW and how to avoid it? Thanks!

[INFO] [loss_scaler.py:188:update_scale] [deepspeed] OVERFLOW! Rank 0 Skipping step. Attempted loss scale: 65536, but hysteresis is 2. Reducing hysteresis to 1
[INFO] [loss_scaler.py:181:update_scale] [deepspeed] OVERFLOW! Rank 0 Skipping step. Attempted loss scale: 65536, reducing to 32768
[INFO] [loss_scaler.py:181:update_scale] [deepspeed] OVERFLOW! Rank 0 Skipping step. Attempted loss scale: 32768, reducing to 16384
[INFO] [loss_scaler.py:181:update_scale] [deepspeed] OVERFLOW! Rank 0 Skipping step. Attempted loss scale: 16384, reducing to 8192
[INFO] [loss_scaler.py:181:update_scale] [deepspeed] OVERFLOW! Rank 0 Skipping step. Attempted loss scale: 8192, reducing to 4096
[INFO] [loss_scaler.py:181:update_scale] [deepspeed] OVERFLOW! Rank 0 Skipping step. Attempted loss scale: 4096, reducing to 2048
[INFO] [loss_scaler.py:181:update_scale] [deepspeed] OVERFLOW! Rank 0 Skipping step. Attempted loss scale: 2048, reducing to 1024


Expected Behavior

No response

Steps To Reproduce

ds_train_finetune.sh is as follows:

LR=1e-4

MASTER_PORT=$(shuf -n 1 -i 10000-65535)

deepspeed --num_gpus=8 --master_port $MASTER_PORT main.py \
    --deepspeed deepspeed.json \
    --do_train \
    --train_file /data/data_train.json \
    --prompt_column instruction \
    --response_column output \
    --preprocessing_num_workers 8 \
    --cache_dir ./cache/self-ft-$LR \
    --overwrite_cache \
    --model_name_or_path /data/chatglm-6b \
    --output_dir ./output/self-chatglm-6b-ft-$LR \
    --overwrite_output_dir \
    --max_source_length 128 \
    --max_target_length 350 \
    --per_device_train_batch_size 8 \
    --per_device_eval_batch_size 1 \
    --gradient_accumulation_steps 1 \
    --predict_with_generate \
    --max_steps 200000 \
    --logging_steps 10 \
    --save_steps 5000 \
    --learning_rate $LR \
    --fp16

Environment

- OS: Ubuntu 20.04
- Python: 3.9.12
- Transformers: 4.28.0
- PyTorch: 2.0.0
- CUDA Support: True

Anything else?

No response

Jaren1907 avatar May 12 '23 02:05 Jaren1907

I'm running into the same problem. How can I solve it?

Antiman-cmyk avatar May 25 '23 06:05 Antiman-cmyk

Has this been solved? I'm running into the same problem.

jilianwang-meta avatar Jun 01 '23 10:06 jilianwang-meta

Switch fp16 to bf16; that's what I did, and I can confirm it works.

ChanLee9 avatar Jun 06 '23 10:06 ChanLee9

Thank you very much for your reply, the problem has been solved.

jilianwang-meta avatar Jun 07 '23 02:06 jilianwang-meta

In my new training run, I increased the batch size, lowered the learning rate, and preprocessed the training data into a more consistent format; after that, training proceeded normally (a rough sketch of such a launch command is given below).

You can also follow the approach in DeepSpeed issue #1773 and switch fp16 to bf16, as ChanLee9 described above, but I haven't tried that yet.
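(For reference, a minimal sketch of what such an adjusted launch could look like; the batch size and learning rate below are illustrative values, not the exact ones used in that run, and data_train.json is assumed to have already been reformatted into a consistent instruction/output shape.)

# Illustrative only: per-device batch size raised from 8 to 16, LR lowered from 1e-4 to 5e-5.
LR=5e-5

MASTER_PORT=$(shuf -n 1 -i 10000-65535)

deepspeed --num_gpus=8 --master_port $MASTER_PORT main.py \
    --deepspeed deepspeed.json \
    --do_train \
    --train_file /data/data_train.json \
    --prompt_column instruction \
    --response_column output \
    --preprocessing_num_workers 8 \
    --cache_dir ./cache/self-ft-$LR \
    --overwrite_cache \
    --model_name_or_path /data/chatglm-6b \
    --output_dir ./output/self-chatglm-6b-ft-$LR \
    --overwrite_output_dir \
    --max_source_length 128 \
    --max_target_length 350 \
    --per_device_train_batch_size 16 \
    --per_device_eval_batch_size 1 \
    --gradient_accumulation_steps 1 \
    --predict_with_generate \
    --max_steps 200000 \
    --logging_steps 10 \
    --save_steps 5000 \
    --learning_rate $LR \
    --fp16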

Jaren1907 avatar Jun 07 '23 03:06 Jaren1907

@ChanLee9 How do you switch it? It doesn't seem to be something you can change directly on the command line.

ZR-Huang avatar Jun 09 '23 10:06 ZR-Huang

@ChanLee9 How do you switch it? It doesn't seem to be something you can change directly on the command line.

Just change --fp16 to --bf16 in the ds_train_finetune.sh file.
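(As a concrete illustration, not part of the original reply: one way to apply that edit is the one-liner below, assuming --fp16 appears only once in the script. If the deepspeed.json passed via --deepspeed explicitly enables an fp16 section, that section would presumably need to be switched to a bf16 section as well, but that file isn't shown in this issue.)

# Replace the --fp16 flag with --bf16; a backup is kept as ds_train_finetune.sh.bak
sed -i.bak 's/--fp16/--bf16/' ds_train_finetune.sh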

ChanLee9 avatar Jun 09 '23 10:06 ChanLee9

If you're on a V100, it doesn't support bf16.

lucasjinreal avatar Jul 24 '23 05:07 lucasjinreal

If you're on a V100, it doesn't support bf16.

You can turn off ZeRO-3's offload; fp16 then works.
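(The deepspeed.json used in this thread isn't shown, so the following is only a sketch of what a ZeRO-3 config with CPU offload disabled might look like, using standard DeepSpeed config keys; adapt it to your actual file.)

# Rough sketch: write a ZeRO-3 config with optimizer/parameter offload disabled.
# The "auto" values rely on the HuggingFace Trainer integration filling them in.
cat > deepspeed.json <<'EOF'
{
  "train_micro_batch_size_per_gpu": "auto",
  "gradient_accumulation_steps": "auto",
  "fp16": {
    "enabled": true,
    "loss_scale": 0,
    "initial_scale_power": 16,
    "hysteresis": 2,
    "min_loss_scale": 1
  },
  "zero_optimization": {
    "stage": 3,
    "overlap_comm": true,
    "offload_optimizer": { "device": "none" },
    "offload_param": { "device": "none" }
  }
}
EOF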

BruceJust avatar Aug 06 '23 12:08 BruceJust

bf16

Tested, it works.

zhangyx0417 avatar Sep 06 '23 01:09 zhangyx0417

If you're on a V100, it doesn't support bf16.

Hi, but I fine-tuned glm4 with zero3-offload in LoRA mode on a V100 and didn't see any errors. Why is that?

zydmtaichi avatar Aug 01 '24 03:08 zydmtaichi