[Bug]: RuntimeError (NCCL error) when running DeepSeek V3
Your current environment
When I run the following command:

python3 -m vllm.entrypoints.openai.api_server --host 0.0.0.0 --port 9111 --model /deepseek_v3 --max-num-batched-tokens 16384 --gpu-memory-utilization 0.97 --tensor-parallel-size 8 --disable-log-requests --trust-remote-code --enable-chunked-prefill

it fails with:

RuntimeError: NCCL error 1: unhandled cuda error (run with NCCL_DEBUG=INFO for details)

Model: DeepSeek V3. vLLM: v0.7.1 (installed via pip3 install vllm). How can I fix this?
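As the error message itself suggests, relaunching with verbose NCCL logging usually reveals the real underlying failure. A minimal sketch of that relaunch, reusing the exact command from above (NCCL_DEBUG_SUBSYS=ALL is optional extra detail):

```shell
# Re-run with verbose NCCL logging to surface the real cause of
# "unhandled cuda error"; the flags are copied from the failing command.
NCCL_DEBUG=INFO NCCL_DEBUG_SUBSYS=ALL \
python3 -m vllm.entrypoints.openai.api_server \
  --host 0.0.0.0 --port 9111 \
  --model /deepseek_v3 \
  --max-num-batched-tokens 16384 \
  --gpu-memory-utilization 0.97 \
  --tensor-parallel-size 8 \
  --disable-log-requests \
  --trust-remote-code \
  --enable-chunked-prefill
```

The NCCL trace is printed per rank, so the first rank that reports a CUDA error (e.g. an allocation failure) is usually the one to inspect.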
🐛 Describe the bug
(Same command and error as described in "Your current environment" above.)
Before submitting a new issue...
- [x] Make sure you already searched for relevant issues, and asked the chatbot living at the bottom right corner of the documentation page, which can answer lots of frequently asked questions.
With NCCL_DEBUG_SUBSYS=ALL NCCL_DEBUG=TRACE set in the environment, I found that the underlying failure was actually an out-of-memory error while capturing the CUDA graph.
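If the OOM happens during CUDA graph capture, two common workarounds are lowering --gpu-memory-utilization so the KV cache reserves less memory (leaving headroom for graph capture and NCCL buffers), or skipping graph capture entirely with --enforce-eager. Both are standard vLLM server flags; the specific value 0.90 below is just an illustrative starting point, not a tuned recommendation:

```shell
# Option 1: reserve less memory for the KV cache than 0.97, leaving
# headroom for CUDA graph capture and NCCL communication buffers.
python3 -m vllm.entrypoints.openai.api_server \
  --model /deepseek_v3 --tensor-parallel-size 8 \
  --gpu-memory-utilization 0.90 \
  --trust-remote-code --enable-chunked-prefill

# Option 2: disable CUDA graph capture altogether; decode is somewhat
# slower in eager mode, but the capture-time OOM cannot occur.
python3 -m vllm.entrypoints.openai.api_server \
  --model /deepseek_v3 --tensor-parallel-size 8 \
  --gpu-memory-utilization 0.97 --enforce-eager \
  --trust-remote-code --enable-chunked-prefill
```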
This issue has been automatically marked as stale because it has not had any activity within 90 days. It will be automatically closed if no further activity occurs within 30 days. Leave a comment if you feel this issue should remain open. Thank you!
This issue has been automatically closed due to inactivity. Please feel free to reopen if you feel it is still relevant. Thank you!