[Bug] CUDA Graph Capture Fails on H200
Checklist
- [x] 1. I have searched related issues but cannot get the expected help.
- [x] 2. The bug has not been fixed in the latest version.
- [x] 3. Please note that if the bug-related issue you submitted lacks corresponding environment info and a minimal reproducible demo, it will be challenging for us to reproduce and resolve the issue, reducing the likelihood of receiving feedback.
- [x] 4. If the issue you raised is not a bug but a question, please raise a discussion at https://github.com/sgl-project/sglang/discussions/new/choose Otherwise, it will be closed.
- [x] 5. Please use English, otherwise it will be closed.
Describe the bug
CUDA graph capture fails on H200. Setting mem-fraction-static or cuda-graph-max-bs to lower values doesn't help.
- Docker image: lmsysorg/sglang:v0.4.6.post2-cu124
- Model: deepseek-ai/DeepSeek-V3
- Hardware: H200
Error message:
Exception: Capture cuda graph failed: RuntimeError: Cannot call CUDAGeneratorImpl::current_seed during CUDA graph capture.
If you need this call to be captured, please file an issue.
Current cudaStreamCaptureStatus: cudaStreamCaptureStatusActive
Reproduction
docker run --rm -it --runtime nvidia --gpus all --ipc host --privileged --ulimit memlock=-1 --ulimit stack=67108864 \
-v "$PWD/.hf_cache/":/root/.cache/huggingface/hub/ -v "$PWD/.inductor_cache/":/tmp/torchinductor_root/ \
-e HF_TOKEN="$(cat hf_token.txt)" -e SGL_ENABLE_JIT_DEEPGEMM=1 \
lmsysorg/sglang:v0.4.6.post2-cu124 \
python3 -m sglang.launch_server --model-path deepseek-ai/DeepSeek-V3 --host 0.0.0.0 --port 8000 --tp 8 --trust-remote-code \
--enable-torch-compile --torch-compile-max-bs 8 --mem-fraction-static 0.7 --cuda-graph-max-bs 16
Environment
Python: 3.10.12 (main, Feb 4 2025, 14:57:36) [GCC 11.4.0]
CUDA available: True
GPU 0,1,2,3,4,5,6,7: NVIDIA H200
GPU 0,1,2,3,4,5,6,7 Compute Capability: 9.0
CUDA_HOME: /usr/local/cuda
NVCC: Cuda compilation tools, release 12.4, V12.4.131
CUDA Driver Version: 570.124.06
PyTorch: 2.6.0+cu124
sglang: 0.4.6.post2
sgl_kernel: 0.1.1
flashinfer_python: 0.2.5+cu124torch2.6
triton: 3.2.0
transformers: 4.51.1
torchao: 0.10.0
numpy: 2.2.5
aiohttp: 3.11.18
fastapi: 0.115.12
hf_transfer: 0.1.9
huggingface_hub: 0.30.2
interegular: 0.3.3
modelscope: 1.25.0
orjson: 3.10.18
outlines: 0.1.11
packaging: 25.0
psutil: 7.0.0
pydantic: 2.11.4
python-multipart: 0.0.20
pyzmq: 26.4.0
uvicorn: 0.34.2
uvloop: 0.21.0
vllm: Module Not Found
xgrammar: 0.1.17
openai: 1.76.2
tiktoken: 0.9.0
anthropic: 0.50.0
litellm: 1.67.5
decord: 0.6.0
NVIDIA Topology:
GPU0 GPU1 GPU2 GPU3 GPU4 GPU5 GPU6 GPU7 CPU Affinity NUMA Affinity GPU NUMA ID
GPU0 X NV18 NV18 NV18 NV18 NV18 NV18 NV18 0-175 0 N/A
GPU1 NV18 X NV18 NV18 NV18 NV18 NV18 NV18 0-175 0 N/A
GPU2 NV18 NV18 X NV18 NV18 NV18 NV18 NV18 0-175 0 N/A
GPU3 NV18 NV18 NV18 X NV18 NV18 NV18 NV18 0-175 0 N/A
GPU4 NV18 NV18 NV18 NV18 X NV18 NV18 NV18 0-175 0 N/A
GPU5 NV18 NV18 NV18 NV18 NV18 X NV18 NV18 0-175 0 N/A
GPU6 NV18 NV18 NV18 NV18 NV18 NV18 X NV18 0-175 0 N/A
GPU7 NV18 NV18 NV18 NV18 NV18 NV18 NV18 X 0-175 0 N/A
Legend:
X = Self
SYS = Connection traversing PCIe as well as the SMP interconnect between NUMA nodes (e.g., QPI/UPI)
NODE = Connection traversing PCIe as well as the interconnect between PCIe Host Bridges within a NUMA node
PHB = Connection traversing PCIe as well as a PCIe Host Bridge (typically the CPU)
PXB = Connection traversing multiple PCIe bridges (without traversing the PCIe Host Bridge)
PIX = Connection traversing at most a single PCIe bridge
NV# = Connection traversing a bonded set of # NVLinks
Hypervisor vendor: KVM
ulimit soft: 1048576
Root cause: torch.compile may be incompatible with torch.cuda.is_current_stream_capturing(). Ref: https://github.com/sgl-project/sglang/blob/main/python/sglang/srt/models/deepseek_v2.py#L715
Success:
python3 -m sglang.launch_server --model /DeepSeek-V3 --tp 8 --trust-remote-code --mem-fraction-static 0.7 --cuda-graph-max-bs 16
Failed:
python3 -m sglang.launch_server --model /DeepSeek-V3 --tp 8 --trust-remote-code --enable-torch-compile --torch-compile-max-bs 8 --mem-fraction-static 0.7 --cuda-graph-max-bs 16
Possible solution:
Remove the torch.cuda.is_current_stream_capturing() call at the referenced line (see the sketch below).
@ispobock WDYT?
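For illustration, here is a minimal sketch of that direction. It is not the actual sglang code (forward_branch and both branch bodies are placeholders): the idea is to evaluate the capture check once outside the torch.compile region and pass the result in as a plain Python bool, so the compiled forward never has to query CUDA capture state at run time.

```python
import torch

def forward_branch(x: torch.Tensor, is_capturing: bool) -> torch.Tensor:
    # Dynamo specializes on the Python bool, so the compiled/captured graph
    # does not need to query CUDA stream-capture state at run time.
    if is_capturing:
        return x * 2.0  # placeholder for the capture-time path
    return x + 1.0      # placeholder for the normal path

if torch.cuda.is_available():
    compiled = torch.compile(forward_branch)
    x = torch.randn(4, device="cuda")
    # The query happens here, in eager Python, outside the compiled region.
    is_capturing = torch.cuda.is_current_stream_capturing()
    y = compiled(x, is_capturing)
```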
I also hit this issue. Replacing torch.cuda.is_current_stream_capturing() with True works around it; I have no idea whether there are any side effects.
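For anyone who wants to try that without editing the source inside the image, the same idea can be approximated by monkey-patching the check before the model code runs. This is only an illustration of the workaround described above, not a recommended fix, and it carries the same unknown side effects:

```python
import torch

# Keep a handle to the original so it can be restored.
_original_is_capturing = torch.cuda.is_current_stream_capturing

def _always_capturing() -> bool:
    # Mirrors the "replace with True" edit; whether the guarded code path is
    # safe to take unconditionally is exactly the open question above.
    return True

torch.cuda.is_current_stream_capturing = _always_capturing
# To restore: torch.cuda.is_current_stream_capturing = _original_is_capturing
```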
Thank you everyone for the help.
I would like to report that this error still exists on the latest Docker image lmsysorg/sglang:v0.4.6.post4-cu124:
python3 -m sglang.launch_server --model /DeepSeek-V3 --tp 8 --trust-remote-code \
--enable-torch-compile --torch-compile-max-bs 4 --mem-fraction-static 0.8
I think we may need a different way to check whether CUDA graph capture is in progress. @Fridge003
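One possible alternative, sketched below under the assumption that the CUDA-graph runner can toggle a flag around capture (the function names here are made up, not existing sglang APIs): track capture state in a plain Python flag instead of querying the CUDA stream from inside compiled code.

```python
import torch

_CUDA_GRAPH_CAPTURING = False

def set_cuda_graph_capturing(value: bool) -> None:
    global _CUDA_GRAPH_CAPTURING
    _CUDA_GRAPH_CAPTURING = value

def is_cuda_graph_capturing() -> bool:
    # Just reads a Python global; no CUDA runtime or generator access,
    # so it is safe to call while a graph is being captured.
    return _CUDA_GRAPH_CAPTURING

# Hypothetical capture-side usage in the graph runner:
# graph = torch.cuda.CUDAGraph()
# set_cuda_graph_capturing(True)
# try:
#     with torch.cuda.graph(graph):
#         static_out = model(static_input)
# finally:
#     set_cuda_graph_capturing(False)
```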