rayChen

Results: 3 issues by rayChen

### Your current environment

None

### How would you like to use vllm

I want to load qwen2-14B-chat using vLLM, but I only have one RTX 4090 (24 GB). Can vLLM offload some...
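If CPU offloading is what the issue is after, recent vLLM releases expose a `--cpu-offload-gb` option that keeps part of the weights in CPU RAM. A minimal sketch, assuming a recent vLLM version; the model identifier is taken from the issue title and the offload/context sizes are illustrative assumptions, not values from the issue:

```shell
# Sketch: serve a 14B chat model on a single 24 GB GPU by offloading
# some weights to CPU RAM. The model name and sizes below are
# assumptions for illustration.
vllm serve qwen2-14B-chat \
    --cpu-offload-gb 8 \
    --max-model-len 4096 \
    --gpu-memory-utilization 0.90
```

Offloaded weights are streamed over PCIe each forward pass, so throughput drops noticeably; quantization (e.g. an AWQ or GPTQ checkpoint) is the usual alternative when the model does not fit in VRAM.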

usage

After running `sh train_multi_gpu.sh`, the console prints a WARNING and nothing else, then the script exits after a few seconds. What could be the cause?

```
root@pt_exp://data/chatglm/chatglm-6B/fine_tuning# sh train_multi_gpu.sh
[09:32:06] WARNING  The following values were not passed to `accelerate launch` and had defaults used instead:
root@pt_exp://data/chatglm/chatglm-6B/fine_tuning#
```
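The WARNING itself is usually harmless: `accelerate launch` is only reporting that it filled in defaults for launch values the command line did not set. The silent exit more likely means the training script itself failed before producing output, so passing the launch values explicitly and invoking the script directly can surface the real error. A sketch, assuming a single-machine two-GPU setup and a hypothetical `train.py` entry point (the actual script name inside `train_multi_gpu.sh` is not shown in the issue):

```shell
# Sketch: pass launch values explicitly so `accelerate launch` does not
# fall back to defaults. The process count, precision, and script name
# are assumptions for illustration.
accelerate launch \
    --num_processes 2 \
    --num_machines 1 \
    --mixed_precision fp16 \
    train.py
```

Running `python train.py` without `accelerate launch` first is also a quick way to check whether the crash comes from the script rather than the launcher.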

### System Info

with docker method.

### Information

- [X] Docker
- [ ] The CLI directly

### Tasks

- [X] An officially supported command
- [X] My own modifications...