
Memory suddenly blows up

Open gauss-clb opened this issue 7 months ago • 11 comments

The job had been running for several dozen hours when memory suddenly blew up. Is there any way to pinpoint this kind of problem? It is not 100% reproducible.

ray.exceptions.OutOfMemoryError: Task was killed due to the node running low on memory.
Memory on the node (IP: 22.8.197.77, ID: 012d92e8eb02243a135cc0370650c943be7d9c0fa7514a14fa2c5aef) where the task (actor ID: 4bccdadc1a067a5710ecd4bf02000000, name=TaskRunner.__init__, pid=358, memory used=1519.84GB) was running was 1520.01GB / 1600.00GB (0.950003), which exceeds the memory usage threshold of 0.95. Ray killed this worker (ID: 03952d69c36e035dbbafc0f3a071e45ac27e4c95d951a58e9c53fc34) because it was the most recently scheduled task; to see more information about memory usage on this node, use `ray logs raylet.out -ip 22.8.197.77`. To see the logs of the worker, use `ray logs worker-03952d69c36e035dbbafc0f3a071e45ac27e4c95d951a58e9c53fc34*out -ip 22.8.197.77. Top 10 memory users:
PID	MEM(GB)	COMMAND
358	1519.84	ray::TaskRunner.run
99	1.68	/usr/local/lib/python3.11/site-packages/ray/core/src/ray/raylet/raylet --raylet_socket_name=/tmp/ray...
100	0.14	/usr/local/bin/python -u /usr/local/lib/python3.11/site-packages/ray/_private/log_monitor.py --sessi...
1	0.13	/usr/local/bin/python /usr/local/bin/ray start --address=dlcdmv8y0q3yjita-head-svc:6379 --block --da...
177	0.10	/usr/local/bin/python -u /usr/local/lib/python3.11/site-packages/ray/dashboard/agent.py --node-ip-ad...
642	0.06	ray::IDLE
640	0.06	ray::IDLE
3519	0.06	/usr/local/lib/python3.11/site-packages/wandb/bin/wandb-core --port-filename /tmp/tmpxisfkf6_/port-3...
3869	0.05	ray::TaskRunner.run
3865	0.05	ray::TaskRunner.run
Refer to the documentation on how to address the out of memory issue: https://docs.ray.io/en/latest/ray-core/scheduling/ray-oom-prevention.html. Consider provisioning more memory on this node or reducing task parallelism by requesting more CPUs per task. Set max_restarts and max_task_retries to enable retry when the task crashes due to OOM. To adjust the kill threshold, set the environment variable `RAY_memory_usage_threshold` when starting Ray. To disable worker killing, set the environment variable `RAY_memory_monitor_refresh_ms` to zero.


gauss-clb avatar May 26 '25 02:05 gauss-clb

Can you provide more details, like the system config and training scripts?

ccclyu avatar May 26 '25 06:05 ccclyu

> Can you provide more details, like the system config and training scripts?

It's fairly complicated and I've changed a lot of things, but the earliest version basically only modified the reward to use math_verify. You can pull this commit: https://github.com/volcengine/verl/commit/894e174ec545b5771f561b6b18f3a3db0405ca77

I wonder whether the model generated something that triggered a bug in the math_verify library and that blew up memory. What I'm seeing is not a slow leak but a sudden explosion; with nothing changing under normal conditions, such an abrupt jump seems unlikely to happen on its own.

The data_source field of the train_files and val_files data is an empty string.

set -e

# For wandb
export project_name=verl-rllm
export experiment_name=ds7b_skywork_sys_t1.0_tr128mi64n4_lr:2e-6_cliphigh0.28_rout16

# For data/model/save path
export train_files=/path/skywork_system.parquet
export val_files=/path/aime2_system.parquet
export model_path=/path/DeepSeek-R1-Distill-Qwen-7B
export default_local_dir=/path/ds7b

# For save/test parameters
export save_freq=10000
export test_freq=10
export log_val_generations=0
export n_val=8

# For hyper-parameters
export temperature=1.0
export train_batch_size=128
export learning_rate=2e-6
export ppo_micro_batch_size=64
export ppo_mini_batch_size=64
export use_kl_loss=True
export clip_ratio_high=0.28
export rollout_n=16

# For analysis
export is_save=True
# signal
# - is_verify_save: rllm_verify.jsonl
# - is_validate_save: rllm_aime.jsonl
# - is_model_save

cd /path/verl
bash rllm/deepscaler_1.5b_8k.sh

Alternatively, you can pull the latest code from https://github.com/gauss-clb/verl and enable dynamic_sampling; the reproduction probability is then fairly high:

export dynamic_sampling_enable=True
export dynamic_sampling_mode=std

Or, if there is some way to localize the issue, I can modify the code to record the relevant information and provide it to you.

gauss-clb avatar May 26 '25 06:05 gauss-clb

Does this happen some hundreds of steps (or some number of steps) after saving the parameters? I ran into it in our experiments as well.

Qsingle avatar May 27 '25 01:05 Qsingle

@ccclyu Is there any way to monitor, before the process gets killed, which line of code is causing the memory spike?
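A minimal sketch of such a watchdog (not part of verl; assumes psutil is available, and note that tracemalloc only accounts for Python-level allocations, so growth from native extensions would only show up in the RSS number):

```python
# Minimal sketch: periodically log the process RSS and the top Python
# allocation sites, so the last log lines before an OOM kill point at the
# code paths that were growing.
import threading
import time
import tracemalloc

import psutil  # assumed to be installed


def start_memory_watchdog(interval_s: float = 60.0, top_n: int = 10) -> None:
    tracemalloc.start()
    proc = psutil.Process()

    def _loop() -> None:
        while True:
            rss_gb = proc.memory_info().rss / 1e9
            top = tracemalloc.take_snapshot().statistics("lineno")[:top_n]
            print(f"[mem-watchdog] rss={rss_gb:.1f} GB")
            for stat in top:
                print(f"[mem-watchdog]   {stat}")
            time.sleep(interval_s)

    threading.Thread(target=_loop, daemon=True).start()


# e.g. call start_memory_watchdog() once at the start of TaskRunner.run
```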

gauss-clb avatar May 27 '25 02:05 gauss-clb

> Does this happen some hundreds of steps (or some number of steps) after saving the parameters? I ran into it in our experiments as well.

No, I wasn't saving any parameters.

gauss-clb avatar May 27 '25 02:05 gauss-clb

> Does this happen some hundreds of steps (or some number of steps) after saving the parameters? I ran into it in our experiments as well.

Do you mean the checkpoint save blows up host memory and then Ray kills all the processes? I have the same problem; is there any way around it?

jinyouzhi avatar May 27 '25 07:05 jinyouzhi

> Does this happen some hundreds of steps (or some number of steps) after saving the parameters? I ran into it in our experiments as well.
>
> Do you mean the checkpoint save blows up host memory and then Ray kills all the processes? I have the same problem; is there any way around it?

I added a save signal: a checkpoint is written only when I send the signal, and nothing is saved otherwise. If a single save blows up memory, the only real fix is to add more memory; cluster machines nowadays usually have more than 1 TB. https://github.com/gauss-clb/verl/blob/main/verl/trainer/ppo/ray_trainer.py#L1180

gauss-clb avatar May 27 '25 07:05 gauss-clb

> Does this happen some hundreds of steps (or some number of steps) after saving the parameters? I ran into it in our experiments as well.
>
> Do you mean the checkpoint save blows up host memory and then Ray kills all the processes? I have the same problem; is there any way around it?
>
> I added a save signal: a checkpoint is written only when I send the signal, and nothing is saved otherwise. If a single save blows up memory, the only real fix is to add more memory; cluster machines nowadays usually have more than 1 TB. https://github.com/gauss-clb/verl/blob/main/verl/trainer/ppo/ray_trainer.py#L1180

Both 1 TB and 2 TB nodes have blown up for us. Is the purpose of this signal to skip the save?

jinyouzhi avatar May 27 '25 10:05 jinyouzhi

> Does this happen some hundreds of steps (or some number of steps) after saving the parameters? I ran into it in our experiments as well.
>
> Do you mean the checkpoint save blows up host memory and then Ray kills all the processes? I have the same problem; is there any way around it?
>
> I added a save signal: a checkpoint is written only when I send the signal, and nothing is saved otherwise. If a single save blows up memory, the only real fix is to add more memory; cluster machines nowadays usually have more than 1 TB. https://github.com/gauss-clb/verl/blob/main/verl/trainer/ppo/ray_trainer.py#L1180
>
> Both 1 TB and 2 TB nodes have blown up for us. Is the purpose of this signal to skip the save?

Exactly: a checkpoint is saved only when the signal is written; without it, nothing is saved. It used to save at a fixed step interval, but now the save point can be set manually and dynamically, e.g. saving once when the metrics are about to converge. Either way it still goes through _save_checkpoint(). I haven't run into your situation, so I don't know whether it's a bug.
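A minimal sketch of this kind of gate (hypothetical path and helper name; the real change lives at the ray_trainer.py line linked above):

```python
# Minimal sketch of a file-based save gate: the trainer only calls
# _save_checkpoint() when an operator has created a marker file.
import os

SAVE_SIGNAL_PATH = "/tmp/verl_save_signal"  # hypothetical marker file


def should_save_now() -> bool:
    """Save once per signal: consume the marker file if it exists."""
    if os.path.exists(SAVE_SIGNAL_PATH):
        os.remove(SAVE_SIGNAL_PATH)
        return True
    return False


# In the training loop, replace the fixed-interval check with something like:
#     if should_save_now():
#         self._save_checkpoint()
```

Creating the marker file (e.g. `touch /tmp/verl_save_signal`) then triggers exactly one checkpoint at the next step.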

gauss-clb avatar May 27 '25 10:05 gauss-clb

> Does this happen some hundreds of steps (or some number of steps) after saving the parameters? I ran into it in our experiments as well.
>
> Do you mean the checkpoint save blows up host memory and then Ray kills all the processes? I have the same problem; is there any way around it?

There isn't one yet on my side. I'll go through the checkpoint-saving code later and take a look; things are tight right now, so I can't get to it immediately.

Qsingle avatar May 28 '25 01:05 Qsingle

@ccclyu I found that the failure logs share one common pattern: a large number of cases time out while computing the reward. The relevant code is https://github.com/huggingface/Math-Verify/blob/main/src/math_verify/grader.py#L854

I don't know whether this is what causes the sudden memory surge. Any ideas? For example, even though the timeout raises an exception, could the original function keep executing and holding memory? (A sketch of one way to rule this out follows the log excerpt below.)

(TaskRunner pid=358, ip=22.8.197.77) ERROR:2025-05-26 06:20:42,472:Timeout during comparison
(TaskRunner pid=358, ip=22.8.197.77) ERROR:2025-05-26 06:20:47,790:Timeout during comparison
(TaskRunner pid=358, ip=22.8.197.77) ERROR:2025-05-26 06:20:54,490:Timeout during comparison
(TaskRunner pid=358, ip=22.8.197.77) ERROR:2025-05-26 06:20:59,636:Timeout during comparison
(TaskRunner pid=358, ip=22.8.197.77) ERROR:2025-05-26 06:21:04,949:Timeout during comparison
(TaskRunner pid=358, ip=22.8.197.77) ERROR:2025-05-26 06:21:10,078:Timeout during comparison
(TaskRunner pid=358, ip=22.8.197.77) ERROR:2025-05-26 06:21:15,253:Timeout during comparison
(TaskRunner pid=358, ip=22.8.197.77) ERROR:2025-05-26 06:21:20,264:Timeout during comparison
(TaskRunner pid=358, ip=22.8.197.77) ERROR:2025-05-26 06:21:25,601:Timeout during comparison
(TaskRunner pid=358, ip=22.8.197.77) ERROR:2025-05-26 06:21:31,966:Timeout during comparison
(TaskRunner pid=358, ip=22.8.197.77) ERROR:2025-05-26 06:21:37,330:Timeout during comparison
(TaskRunner pid=358, ip=22.8.197.77) ERROR:2025-05-26 06:21:43,673:Timeout during comparison
(TaskRunner pid=358, ip=22.8.197.77) ERROR:2025-05-26 06:21:48,682:Timeout during comparison
(TaskRunner pid=358, ip=22.8.197.77) ERROR:2025-05-26 06:21:54,753:Timeout during comparison
(TaskRunner pid=358, ip=22.8.197.77) ERROR:2025-05-26 06:21:59,780:Timeout during comparison
(TaskRunner pid=358, ip=22.8.197.77) ERROR:2025-05-26 06:22:05,044:Timeout during comparison
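One way to rule this hypothesis out is to run each comparison in a separate process with a hard wall-clock limit, so a stuck comparison is killed and its memory is returned to the OS, rather than relying on a timeout that raises in the caller while the underlying computation may keep allocating. A minimal sketch (not the Math-Verify implementation; the `parse`/`verify` imports follow math_verify's documented API, and the 5 s limit is an arbitrary choice):

```python
# Minimal sketch: run a single verification in a child process and hard-kill
# it on timeout, so any memory it allocated is reclaimed by the OS.
import multiprocessing as mp


def _verify_worker(queue: mp.Queue, gold: str, answer: str) -> None:
    from math_verify import parse, verify  # assumed math_verify API
    queue.put(bool(verify(parse(gold), parse(answer))))


def verify_with_hard_timeout(gold: str, answer: str, timeout_s: float = 5.0) -> bool:
    queue: mp.Queue = mp.Queue(maxsize=1)
    proc = mp.Process(target=_verify_worker, args=(queue, gold, answer), daemon=True)
    proc.start()
    proc.join(timeout_s)
    if proc.is_alive():
        proc.terminate()   # hard kill: the child's memory is released
        proc.join()
        return False       # treat a timed-out comparison as incorrect
    return queue.get() if not queue.empty() else False
```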

gauss-clb avatar May 29 '25 02:05 gauss-clb