[Usage]: CUDA OOM when serving multiple tasks on the same server
Your current environment
vllm 0.6.0
qwen2.5-14b
cuda 12.4
How would you like to use vllm
I would like to serve both the generate and embedding tasks on the same server, but I hit a CUDA OOM error. Can I serve generate on the GPU and embedding on the CPU? Please advise. For reference, what I am running looks roughly like the sketch below (a minimal sketch; the model names and the `task` argument are my assumptions and may differ by vLLM version):
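```python
from vllm import LLM, SamplingParams

# Generation engine on the GPU. With the default
# gpu_memory_utilization=0.9 it reserves almost the whole card.
gen_llm = LLM(model="Qwen/Qwen2.5-14B-Instruct")  # assumed model path

# Second engine for embeddings on the same GPU -> CUDA OOM here,
# because the first engine already claimed most of the memory.
# The `task` argument is an assumption; some vLLM versions instead
# infer the task from the model architecture.
emb_llm = LLM(model="BAAI/bge-m3", task="embedding")  # assumed embedding model

outputs = gen_llm.generate(["Hello"], SamplingParams(max_tokens=32))
embeddings = emb_llm.encode(["Hello"])  # encode() is the embedding entry point
```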
Before submitting a new issue...
- [X] Make sure you already searched for relevant issues, and asked the chatbot living at the bottom right corner of the documentation page, which can answer lots of frequently asked questions.
You can try the qwen2.5-14b model with INT4 quantization to reduce GPU memory usage.
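For example, something like this (a minimal sketch; the GPTQ-Int4 checkpoint name and the memory figures are assumptions based on the public Qwen releases and may need adjusting for your setup):

```python
from vllm import LLM, SamplingParams

# INT4 (GPTQ) weights cut the ~28 GB of FP16 weights for a 14B model down
# to roughly 8-9 GB, leaving room on the same GPU for a second engine.
llm = LLM(
    model="Qwen/Qwen2.5-14B-Instruct-GPTQ-Int4",  # assumed quantized checkpoint
    quantization="gptq",                          # usually auto-detected from the model config
    gpu_memory_utilization=0.5,                   # leave headroom for the embedding engine
)

print(llm.generate(["Hello"], SamplingParams(max_tokens=16))[0].outputs[0].text)
```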
Got it, I will give it a try.
This issue has been automatically marked as stale because it has not had any activity within 90 days. It will be automatically closed if no further activity occurs within 30 days. Leave a comment if you feel this issue should remain open. Thank you!
This issue has been automatically closed due to inactivity. Please feel free to reopen if you feel it is still relevant. Thank you!