
A strange phenomenon: with multiple GPUs, the first GPU consumes noticeably more memory than the others, and it also keeps consuming CPU time

Open bltcn opened this issue 8 months ago • 2 comments

root@d53b3f6f1be8:/opt/lmdeploy# lmdeploy check_env
sys.platform: linux
Python: 3.10.12 (main, Feb 4 2025, 14:57:36) [GCC 11.4.0]
CUDA available: True
MUSA available: False
numpy_random_seed: 2147483648
GPU 0,1,2,3: NVIDIA GeForce RTX 2080 Ti
CUDA_HOME: /usr/local/cuda
NVCC: Cuda compilation tools, release 12.4, V12.4.131
GCC: x86_64-linux-gnu-gcc (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
PyTorch: 2.5.1+cu121
PyTorch compiling details: PyTorch built with:

  • GCC 9.3
  • C++ Version: 201703
  • Intel(R) oneAPI Math Kernel Library Version 2024.2-Product Build 20240605 for Intel(R) 64 architecture applications
  • Intel(R) MKL-DNN v3.5.3 (Git Hash 66f0cb9eb66affd2da3bf5f8d897376f04aae6af)
  • OpenMP 201511 (a.k.a. OpenMP 4.5)
  • LAPACK is enabled (usually provided by MKL)
  • NNPACK is enabled
  • CPU capability usage: AVX512
  • CUDA Runtime 12.1
  • NVCC architecture flags: -gencode;arch=compute_50,code=sm_50;-gencode;arch=compute_60,code=sm_60;-gencode;arch=compute_70,code=sm_70;-gencode;arch=compute_75,code=sm_75;-gencode;arch=compute_80,code=sm_80;-gencode;arch=compute_86,code=sm_86;-gencode;arch=compute_90,code=sm_90
  • CuDNN 90.1 (built against CUDA 12.4)
  • Magma 2.6.1
  • Build settings: BLAS_INFO=mkl, BUILD_TYPE=Release, CUDA_VERSION=12.1, CUDNN_VERSION=9.1.0, CXX_COMPILER=/opt/rh/devtoolset-9/root/usr/bin/c++, CXX_FLAGS= -D_GLIBCXX_USE_CXX11_ABI=0 -fabi-version=11 -fvisibility-inlines-hidden -DUSE_PTHREADPOOL -DNDEBUG -DUSE_KINETO -DLIBKINETO_NOROCTRACER -DLIBKINETO_NOXPUPTI=ON -DUSE_FBGEMM -DUSE_PYTORCH_QNNPACK -DUSE_XNNPACK -DSYMBOLICATE_MOBILE_DEBUG_HANDLE -O2 -fPIC -Wall -Wextra -Werror=return-type -Werror=non-virtual-dtor -Werror=bool-operation -Wnarrowing -Wno-missing-field-initializers -Wno-type-limits -Wno-array-bounds -Wno-unknown-pragmas -Wno-unused-parameter -Wno-strict-overflow -Wno-strict-aliasing -Wno-stringop-overflow -Wsuggest-override -Wno-psabi -Wno-error=old-style-cast -Wno-missing-braces -fdiagnostics-color=always -faligned-new -Wno-unused-but-set-variable -Wno-maybe-uninitialized -fno-math-errno -fno-trapping-math -Werror=format -Wno-stringop-overflow, LAPACK_INFO=mkl, PERF_WITH_AVX=1, PERF_WITH_AVX2=1, TORCH_VERSION=2.5.1, USE_CUDA=ON, USE_CUDNN=ON, USE_CUSPARSELT=1, USE_EXCEPTION_PTR=1, USE_GFLAGS=OFF, USE_GLOG=OFF, USE_GLOO=ON, USE_MKL=ON, USE_MKLDNN=ON, USE_MPI=OFF, USE_NCCL=1, USE_NNPACK=ON, USE_OPENMP=ON, USE_ROCM=OFF, USE_ROCM_KERNEL_ASSERT=OFF,

TorchVision: 0.20.1+cu121
LMDeploy: 0.7.2.post1+aa51a73
transformers: 4.51.0.dev0
gradio: 5.22.0
fastapi: 0.115.11
pydantic: 2.10.6
triton: 3.1.0

NVIDIA Topology:
        GPU0  GPU1  GPU2  GPU3  CPU Affinity  NUMA Affinity  GPU NUMA ID
GPU0    X     PIX   PIX   PIX   0-19,40-59    0             N/A
GPU1    PIX   X     PIX   PIX   0-19,40-59    0             N/A
GPU2    PIX   PIX   X     PIX   0-19,40-59    0             N/A
GPU3    PIX   PIX   PIX   X     0-19,40-59    0             N/A

Legend:

  X    = Self
  SYS  = Connection traversing PCIe as well as the SMP interconnect between NUMA nodes (e.g., QPI/UPI)
  NODE = Connection traversing PCIe as well as the interconnect between PCIe Host Bridges within a NUMA node
  PHB  = Connection traversing PCIe as well as a PCIe Host Bridge (typically the CPU)
  PXB  = Connection traversing multiple PCIe bridges (without traversing the PCIe Host Bridge)
  PIX  = Connection traversing at most a single PCIe bridge
  NV#  = Connection traversing a bonded set of # NVLinks

Screenshots are shown below:

Image

Image

The red-boxed regions in the screenshots show a clear difference between GPU 0 and the other GPUs. This problem was fixed in an earlier release, but it has now reappeared.

The launch command is as follows:

lmdeploy serve api_server /root/hf_model/Qwen/Qwen2.5-72B-Instruct-AWQ \
    --model-name pkumlm_txt --backend turbomind --server-port 8000 \
    --log-level INFO --max-log-len 0 \
    --enable-prefix-caching \
    --model-format awq --tp 4 --session-len 1024000 --cache-max-entry-count 0.8
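For context on why the cache settings above matter: a rough back-of-the-envelope KV-cache sizing sketch shows that a 1,024,000-token session cannot possibly fit in full, so --cache-max-entry-count caps the cache pool at a fraction of free memory per GPU. All model parameters below are assumed Qwen2.5-72B-style values (80 layers, 8 KV heads under GQA, head_dim 128, fp16 cache), not values read from the actual checkpoint.

```python
def kv_bytes_per_token(layers=80, kv_heads=8, head_dim=128, dtype_bytes=2):
    """Bytes of KV cache one token occupies across all tensor-parallel ranks.

    The leading factor of 2 covers the separate K and V tensors.
    """
    return 2 * layers * kv_heads * head_dim * dtype_bytes

per_token = kv_bytes_per_token()   # 327680 bytes, i.e. 320 KiB per token
per_gpu = per_token // 4           # KV heads are split across tp=4 ranks
full_session_gib = 1_024_000 * per_gpu / 2**30
print(f"{per_token} B/token total, ~{full_session_gib:.0f} GiB per GPU "
      f"for a full 1,024,000-token session")
```

Under these assumptions a single full-length session would need roughly 78 GiB of KV cache per GPU, far beyond the 11 GB of an RTX 2080 Ti, so the runtime allocates a capped cache pool instead; any per-rank allocation imbalance then shows up immediately as extra usage on GPU 0.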

bltcn · Apr 05 '25, 05:04

Do you have a solution for this now?

y1501028421 · Apr 28 '25, 11:04

This is a conversion issue with the turbomind engine. You need to run the lmdeploy convert command before serving, e.g.: lmdeploy convert deepseek-r1:70b deepseek-r1-70b-awq --dst-path ./deepseek-r1-70b --tp 4, and then serve again using the converted directory as the model path. That should work; online conversion, which is not saved to disk, seems to default to tp=1.
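A minimal sketch of this suggested workflow, adapted to the Qwen model from this issue (the model name, paths, and output directory below are illustrative assumptions; check lmdeploy convert --help on your version for the exact arguments):

```shell
# Offline conversion with an explicit tensor-parallel degree, so each
# rank loads its own pre-sharded weights (paths are illustrative).
lmdeploy convert qwen /root/hf_model/Qwen/Qwen2.5-72B-Instruct-AWQ \
    --model-format awq --dst-path ./qwen2.5-72b-awq-tp4 --tp 4

# Serve from the converted workspace instead of the HF checkpoint.
lmdeploy serve api_server ./qwen2.5-72b-awq-tp4 \
    --backend turbomind --tp 4 --server-port 8000
```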

20000419 · May 03 '25, 09:05