
[Bug] Qwen2.5-VL-32B inference with LMDeploy on V100 hangs after running for a while

Open muziyongshixin opened this issue 2 months ago • 3 comments

Checklist

  • [ ] 1. I have searched related issues but cannot get the expected help.
  • [ ] 2. The bug has not been fixed in the latest version.
  • [ ] 3. Please note that if the bug-related issue you submitted lacks corresponding environment info and a minimal reproducible demo, it will be challenging for us to reproduce and resolve the issue, reducing the likelihood of receiving feedback.

Describe the bug

When serving Qwen2.5-VL-32B with LMDeploy on V100, inference hangs after running for a while. The server starts normally, but after some time it gets stuck on one GPU: the other GPUs show 100% utilization while that single GPU shows 0% utilization.

Reproduction

lmdeploy serve api_server /data/phd/hf_models/Qwen2.5-VL-32B-Instruct/ --model-name=Qwen2.5-VL-32B-Instruct --backend turbomind --tp=8 --session-len=32768 --server-port=8080 --max-concurrent-requests=128 --enable-prefix-caching --log-level=DEBUG

Environment

GPU: V100*8
Environment installation: uv pip install lmdeploy

Error traceback

The most recent log output at the time of the hang:
[TM][DEBUG] turbomind::core::Tensor turbomind::LlamaV2::postDecodeEmbedding(const turbomind::core::Tensor&, turbomind::core::Buffer)
[TM][DEBUG] void turbomind::LlamaV2::dynamicDecode(turbomind::core::Buffer, turbomind::core::Buffer, turbomind::core::Buffer, turbomind::core::Tensor, turbomind::core::Tensor, turbomind::core::Buffer, turbomind::core::Buffer, turbomind::core::Buffer, turbomind::core::Buffer, turbomind::core::Buffer, turbomind::core::Buffer, turbomind::core::Buffer, int, int)
[TM][DEBUG] void turbomind::LogitsProcessorLayer<T>::Forward(turbomind::core::TensorMap&) [with T = float] start
[TM][DEBUG] void turbomind::LogitsProcessorLayer<T>::Forward(turbomind::core::TensorMap&) [with T = float] stop
[TM][DEBUG] void turbomind::SamplingLayer<T>::Forward(turbomind::core::TensorMap&) [with T = float] start
[TM][DEBUG] void turbomind::SamplingLayer<T>::Forward(turbomind::core::TensorMap&) [with T = float] stop
[TM][DEBUG] void turbomind::StopCriteriaLayer<T>::Forward(turbomind::core::TensorMap&) [with T = float] start
[TM][DEBUG] void turbomind::invokeStopWordsCriterion(const int*, const int*, const int*, bool*, size_t, size_t, int, int, int, cudaStream_t) start
[TM][DEBUG] void turbomind::invokeLengthCriterion(bool*, const int*, int, int, int, cudaStream_t) start
[TM][DEBUG] void turbomind::StopCriteriaLayer<T>::Forward(turbomind::core::TensorMap&) [with T = float] stop
[TM][INFO] [ProcessInferRequests] Request for 99 received.
[TM][INFO] [SeqMgr][Create] ID 99
[TM][WARNING] [ProcessInferRequests] [99] total sequence length (1516 + 31252) exceeds `session_len` (32768), `max_new_tokens` is truncated to 31251
[TM][INFO] [Forward] [0, 7), dc=6, pf=1, sum_q=1522, sum_k=1516, max_q=1516, max_k=1763
[TM][DEBUG] void turbomind::LlamaV2::Forward(turbomind::core::Buffer_<int>, turbomind::core::Tensor, turbomind::core::Tensor, turbomind::core::Buffer, turbomind::core::Buffer, turbomind::core::Buffer_<int>, turbomind::core::Buffer_<int>, turbomind::core::Buffer, turbomind::MropeRope*, turbomind::core::Buffer, turbomind::core::Buffer, turbomind::core::Buffer, int, int, const turbomind::Sequence**)
[TM][DEBUG] void turbomind::LlamaV2::updateEmbedding(char*, int, const int*, const turbomind::Sequence**, int, int*, bool*)
[TM][DEBUG] void turbomind::LlamaV2::Forward(turbomind::core::Buffer_<int>, turbomind::core::Tensor, turbomind::core::Tensor, turbomind::core::Buffer, turbomind::core::Buffer, turbomind::core::Buffer_<int>, turbomind::core::Buffer_<int>, turbomind::core::Buffer, turbomind::MropeRope*, turbomind::core::Buffer, turbomind::core::Buffer, turbomind::core::Buffer, int, int, const turbomind::Sequence**)
[TM][DEBUG] void turbomind::LlamaV2::Forward(turbomind::core::Buffer_<int>, turbomind::core::Tensor, turbomind::core::Tensor, turbomind::core::Buffer, turbomind::core::Buffer, turbomind::core::Buffer_<int>, turbomind::core::Buffer_<int>, turbomind::core::Buffer, turbomind::MropeRope*, turbomind::core::Buffer, turbomind::core::Buffer, turbomind::core::Buffer, int, int, const turbomind::Sequence**)
[TM][DEBUG] void turbomind::LlamaV2::Forward(turbomind::core::Buffer_<int>, turbomind::core::Tensor, turbomind::core::Tensor, turbomind::core::Buffer, turbomind::core::Buffer, turbomind::core::Buffer_<int>, turbomind::core::Buffer_<int>, turbomind::core::Buffer, turbomind::MropeRope*, turbomind::core::Buffer, turbomind::core::Buffer, turbomind::core::Buffer, int, int, const turbomind::Sequence**)
[TM][DEBUG] void turbomind::LlamaV2::Forward(turbomind::core::Buffer_<int>, turbomind::core::Tensor, turbomind::core::Tensor, turbomind::core::Buffer, turbomind::core::Buffer, turbomind::core::Buffer_<int>, turbomind::core::Buffer_<int>, turbomind::core::Buffer, turbomind::MropeRope*, turbomind::core::Buffer, turbomind::core::Buffer, turbomind::core::Buffer, int, int, const turbomind::Sequence**)
[TM][DEBUG] void turbomind::LlamaV2::updateEmbedding(char*, int, const int*, const turbomind::Sequence**, int, int*, bool*)
[TM][DEBUG] void turbomind::LlamaV2::updateEmbedding(char*, int, const int*, const turbomind::Sequence**, int, int*, bool*)
[TM][DEBUG] void turbomind::LlamaV2::updateEmbedding(char*, int, const int*, const turbomind::Sequence**, int, int*, bool*)
[TM][DEBUG] void turbomind::LlamaV2::updateEmbedding(char*, int, const int*, const turbomind::Sequence**, int, int*, bool*)
[TM][DEBUG] void turbomind::LlamaV2::Forward(turbomind::core::Buffer_<int>, turbomind::core::Tensor, turbomind::core::Tensor, turbomind::core::Buffer, turbomind::core::Buffer, turbomind::core::Buffer_<int>, turbomind::core::Buffer_<int>, turbomind::core::Buffer, turbomind::MropeRope*, turbomind::core::Buffer, turbomind::core::Buffer, turbomind::core::Buffer, int, int, const turbomind::Sequence**)
[TM][DEBUG] void turbomind::LlamaV2::updateEmbedding(char*, int, const int*, const turbomind::Sequence**, int, int*, bool*)
[TM][DEBUG] void turbomind::LlamaV2::Forward(turbomind::core::Buffer_<int>, turbomind::core::Tensor, turbomind::core::Tensor, turbomind::core::Buffer, turbomind::core::Buffer, turbomind::core::Buffer_<int>, turbomind::core::Buffer_<int>, turbomind::core::Buffer, turbomind::MropeRope*, turbomind::core::Buffer, turbomind::core::Buffer, turbomind::core::Buffer, int, int, const turbomind::Sequence**)
[TM][DEBUG] void turbomind::LlamaV2::updateEmbedding(char*, int, const int*, const turbomind::Sequence**, int, int*, bool*)

muziyongshixin avatar Oct 20 '25 12:10 muziyongshixin

Can you kindly attach the output of the following command to help us debug?

lmdeploy check_env

windreamer avatar Oct 21 '25 02:10 windreamer

By default, TP in Turbomind uses NCCL for multi-GPU communication, and this may get stuck due to an incorrect NCCL environment setup. You can go through the following checklist to help debug:

  1. Use the native communication backend instead of NCCL to check whether the issue is related to NCCL (see the sketch after this list). doc
  2. Enable NCCL_DEBUG=INFO to get more detailed NCCL logging to help with troubleshooting. doc
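
As a rough illustration of both checks, here is a minimal sketch assuming the installed LMDeploy release exposes a --communicator option for the TurboMind backend (the flag name is an assumption here; please verify against the linked docs for your exact version):

# 1) Rule out NCCL by switching to the native communication backend (assumed --communicator flag)
lmdeploy serve api_server /data/phd/hf_models/Qwen2.5-VL-32B-Instruct/ --backend turbomind --tp=8 --communicator native

# 2) Keep NCCL, but export verbose NCCL logging before starting the server
export NCCL_DEBUG=INFO
lmdeploy serve api_server /data/phd/hf_models/Qwen2.5-VL-32B-Instruct/ --backend turbomind --tp=8

If the hang disappears with the native backend, or the NCCL log shows transport negotiation failures between specific GPU pairs, that points to the NCCL/topology setup rather than LMDeploy itself.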

windreamer avatar Oct 21 '25 02:10 windreamer

Can you kindly attach the output of the following command to help us debug?

lmdeploy check_env
source lmdeploy/bin/activate
(lmdeploy) root@ai-platform-wlf1-ge4-231:/home/zhuangnan03# lmdeploy check_env
/home/zhuangnan03/lmdeploy/lib/python3.10/site-packages/torch/cuda/__init__.py:63: FutureWarning: The pynvml package is deprecated. Please install nvidia-ml-py instead. If you did not install pynvml directly, please report this to the maintainers of the package that installed pynvml for you.
  import pynvml  # type: ignore[import]
/home/zhuangnan03/lmdeploy/lib/python3.10/site-packages/_distutils_hack/__init__.py:53: UserWarning: Reliance on distutils from stdlib is deprecated. Users must rely on setuptools to provide the distutils module. Avoid importing distutils or import setuptools first, and avoid setting SETUPTOOLS_USE_DISTUTILS=stdlib. Register concerns at https://github.com/pypa/setuptools/issues/new?template=distutils-deprecation.yml
  warnings.warn(
sys.platform: linux
Python: 3.10.12 (main, Jun 11 2023, 05:26:28) [GCC 11.4.0]
CUDA available: True
MUSA available: False
numpy_random_seed: 2147483648
GPU 0,1,2,3,4,5,6,7: Tesla V100-SXM2-32GB
CUDA_HOME: /usr/local/cuda
NVCC: Cuda compilation tools, release 12.2, V12.2.140
GCC: x86_64-linux-gnu-gcc (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
PyTorch: 2.8.0+cu128
PyTorch compiling details: PyTorch built with:
  - GCC 13.3
  - C++ Version: 201703
  - Intel(R) oneAPI Math Kernel Library Version 2024.2-Product Build 20240605 for Intel(R) 64 architecture applications
  - Intel(R) MKL-DNN v3.7.1 (Git Hash 8d263e693366ef8db40acc569cc7d8edf644556d)
  - OpenMP 201511 (a.k.a. OpenMP 4.5)
  - LAPACK is enabled (usually provided by MKL)
  - NNPACK is enabled
  - CPU capability usage: AVX512
  - CUDA Runtime 12.8
  - NVCC architecture flags: -gencode;arch=compute_70,code=sm_70;-gencode;arch=compute_75,code=sm_75;-gencode;arch=compute_80,code=sm_80;-gencode;arch=compute_86,code=sm_86;-gencode;arch=compute_90,code=sm_90;-gencode;arch=compute_100,code=sm_100;-gencode;arch=compute_120,code=sm_120
  - CuDNN 91.0.2  (built against CUDA 12.9)
    - Built with CuDNN 90.8
  - Magma 2.6.1
  - Build settings: BLAS_INFO=mkl, BUILD_TYPE=Release, COMMIT_SHA=a1cb3cc05d46d198467bebbb6e8fba50a325d4e7, CUDA_VERSION=12.8, CUDNN_VERSION=9.8.0, CXX_COMPILER=/opt/rh/gcc-toolset-13/root/usr/bin/c++, CXX_FLAGS= -fvisibility-inlines-hidden -DUSE_PTHREADPOOL -DNDEBUG -DUSE_KINETO -DLIBKINETO_NOROCTRACER -DLIBKINETO_NOXPUPTI=ON -DUSE_FBGEMM -DUSE_PYTORCH_QNNPACK -DUSE_XNNPACK -DSYMBOLICATE_MOBILE_DEBUG_HANDLE -O2 -fPIC -DC10_NODEPRECATED -Wall -Wextra -Werror=return-type -Werror=non-virtual-dtor -Werror=range-loop-construct -Werror=bool-operation -Wnarrowing -Wno-missing-field-initializers -Wno-unknown-pragmas -Wno-unused-parameter -Wno-strict-overflow -Wno-strict-aliasing -Wno-stringop-overflow -Wsuggest-override -Wno-psabi -Wno-error=old-style-cast -faligned-new -Wno-maybe-uninitialized -fno-math-errno -fno-trapping-math -Werror=format -Wno-dangling-reference -Wno-error=dangling-reference -Wno-stringop-overflow, LAPACK_INFO=mkl, PERF_WITH_AVX=1, PERF_WITH_AVX2=1, TORCH_VERSION=2.8.0, USE_CUDA=ON, USE_CUDNN=ON, USE_CUSPARSELT=1, USE_GFLAGS=OFF, USE_GLOG=OFF, USE_GLOO=ON, USE_MKL=ON, USE_MKLDNN=ON, USE_MPI=OFF, USE_NCCL=1, USE_NNPACK=ON, USE_OPENMP=ON, USE_ROCM=OFF, USE_ROCM_KERNEL_ASSERT=OFF, USE_XCCL=OFF, USE_XPU=OFF, 

TorchVision: 0.23.0+cu128
LMDeploy: 0.10.1+
transformers: 4.57.1
fastapi: 0.119.0
pydantic: 2.12.3
triton: 3.4.0
NVIDIA Topology: 
        GPU0    GPU1    GPU2    GPU3    GPU4    GPU5    GPU6    GPU7    NIC0    CPU Affinity    NUMA Affinity   GPU NUMA ID
GPU0     X      NV1     NV2     NV1     SYS     SYS     SYS     NV2     NODE    0-25,52-77      0               N/A
GPU1    NV1      X      NV1     NV2     SYS     SYS     NV2     SYS     NODE    0-25,52-77      0               N/A
GPU2    NV2     NV1      X      NV2     SYS     NV1     SYS     SYS     PIX     0-25,52-77      0               N/A
GPU3    NV1     NV2     NV2      X      NV1     SYS     SYS     SYS     PIX     0-25,52-77      0               N/A
GPU4    SYS     SYS     SYS     NV1      X      NV2     NV2     NV1     SYS     26-51,78-103    1               N/A
GPU5    SYS     SYS     NV1     SYS     NV2      X      NV1     NV2     SYS     26-51,78-103    1               N/A
GPU6    SYS     NV2     SYS     SYS     NV2     NV1      X      NV1     SYS     26-51,78-103    1               N/A
GPU7    NV2     SYS     SYS     SYS     NV1     NV2     NV1      X      SYS     26-51,78-103    1               N/A
NIC0    NODE    NODE    PIX     PIX     SYS     SYS     SYS     SYS      X 

Legend:

  X    = Self
  SYS  = Connection traversing PCIe as well as the SMP interconnect between NUMA nodes (e.g., QPI/UPI)
  NODE = Connection traversing PCIe as well as the interconnect between PCIe Host Bridges within a NUMA node
  PHB  = Connection traversing PCIe as well as a PCIe Host Bridge (typically the CPU)
  PXB  = Connection traversing multiple PCIe bridges (without traversing the PCIe Host Bridge)
  PIX  = Connection traversing at most a single PCIe bridge
  NV#  = Connection traversing a bonded set of # NVLinks

NIC Legend:

  NIC0: mlx5_0

muziyongshixin avatar Oct 21 '25 06:10 muziyongshixin