
[Bug] 2-GPU InternVL2-26B inference fails when inter-GPU communication goes over PCIe but succeeds over NVLink. Why?

Open chestnut111 opened this issue 1 year ago • 12 comments

Checklist

  • [X] 1. I have searched related issues but cannot get the expected help.
  • [X] 2. The bug has not been fixed in the latest version.
  • [ ] 3. Please note that if the bug-related issue you submitted lacks corresponding environment info and a minimal reproducible demo, it will be challenging for us to reproduce and resolve the issue, reducing the likelihood of receiving feedback.

Describe the bug

Code:

backend_config = TurbomindEngineConfig(session_len=8192, tp=2, max_batch_size=2)
pipe = pipeline(model, backend_config=backend_config)

Error:

RuntimeError: [TM][ERROR] CUDA runtime error: out of memory /lmdeploy/src/turbomind/utils/allocator.h:246

Reproduction

python run.py
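
For context, here is a minimal sketch of what a run.py reproducing this setup could look like, based on the lmdeploy VLM pipeline API; the model identifier, image URL, and prompt below are placeholders, not taken from the original report:

```python
# run.py -- minimal reproduction sketch (model path and image URL are placeholders)
from lmdeploy import pipeline, TurbomindEngineConfig
from lmdeploy.vl import load_image

model = 'OpenGVLab/InternVL2-26B'  # placeholder: local path or HF model id

# tp=2 splits the model across two GPUs; this is where the PCIe-connected pair fails.
backend_config = TurbomindEngineConfig(session_len=8192, tp=2, max_batch_size=2)
pipe = pipeline(model, backend_config=backend_config)

image = load_image('https://example.com/test.jpg')  # placeholder image
response = pipe(('describe this image', image))
print(response.text)
```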

Environment

A800 80GB
2 GPUs

Error traceback

x

chestnut111 avatar Aug 30 '24 03:08 chestnut111

What is your environment? If you launch inside Docker, you need to enlarge the shm size, because NCCL uses it.

QwertyJack avatar Aug 30 '24 04:08 QwertyJack

What is your environment? If you launch inside Docker, you need to enlarge the shm size, because NCCL uses it.

But then why is there no problem over NVLink? It should use shm as well, right?

chestnut111 avatar Aug 30 '24 05:08 chestnut111

What is your environment? If you launch inside Docker, you need to enlarge the shm size, because NCCL uses it.

But then why is there no problem over NVLink? It should use shm as well, right?

I am using Docker: 64 CPU cores, 256 GiB of RAM, 4× A800 GPUs.

chestnut111 avatar Aug 30 '24 05:08 chestnut111

Try: docker run --shm-size=64gb ...

QwertyJack avatar Aug 30 '24 06:08 QwertyJack

Try: docker run --shm-size=64gb ...

Thanks, I'll give it a try.

chestnut111 avatar Aug 30 '24 06:08 chestnut111

Try: docker run --shm-size=64gb ...

No effect... the error is the same.

chestnut111 avatar Aug 30 '24 07:08 chestnut111

@chestnut111 Please paste the output of python3 -m lmdeploy check_env.

lzhangzz avatar Aug 30 '24 07:08 lzhangzz

@chestnut111 Please paste the output of python3 -m lmdeploy check_env.

sys.platform: linux
Python: 3.11.9 (main, Apr 19 2024, 16:48:06) [GCC 11.2.0]
CUDA available: True
MUSA available: False
numpy_random_seed: 2147483648
GPU 0,1,2,3: NVIDIA A800 80GB PCIe
CUDA_HOME: /usr/local/cuda
NVCC: Cuda compilation tools, release 12.4, V12.4.131
GCC: gcc (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
PyTorch: 2.3.1+cu121
PyTorch compiling details: PyTorch built with:

  • GCC 9.3
  • C++ Version: 201703
  • Intel(R) oneAPI Math Kernel Library Version 2023.1-Product Build 20230303 for Intel(R) 64 architecture applications
  • Intel(R) MKL-DNN v3.3.6 (Git Hash 86e6af5974177e513fd3fee58425e1063e7f1361)
  • OpenMP 201511 (a.k.a. OpenMP 4.5)
  • LAPACK is enabled (usually provided by MKL)
  • NNPACK is enabled
  • CPU capability usage: AVX2
  • CUDA Runtime 12.1
  • NVCC architecture flags: -gencode;arch=compute_50,code=sm_50;-gencode;arch=compute_60,code=sm_60;-gencode;arch=compute_70,code=sm_70;-gencode;arch=compute_75,code=sm_75;-gencode;arch=compute_80,code=sm_80;-gencode;arch=compute_86,code=sm_86;-gencode;arch=compute_90,code=sm_90
  • CuDNN 8.9.2
  • Magma 2.6.1
  • Build settings: BLAS_INFO=mkl, BUILD_TYPE=Release, CUDA_VERSION=12.1, CUDNN_VERSION=8.9.2, CXX_COMPILER=/opt/rh/devtoolset-9/root/usr/bin/c++, CXX_FLAGS= -D_GLIBCXX_USE_CXX11_ABI=0 -fabi-version=11 -fvisibility-inlines-hidden -DUSE_PTHREADPOOL -DNDEBUG -DUSE_KINETO -DLIBKINETO_NOROCTRACER -DUSE_FBGEMM -DUSE_QNNPACK -DUSE_PYTORCH_QNNPACK -DUSE_XNNPACK -DSYMBOLICATE_MOBILE_DEBUG_HANDLE -O2 -fPIC -Wall -Wextra -Werror=return-type -Werror=non-virtual-dtor -Werror=bool-operation -Wnarrowing -Wno-missing-field-initializers -Wno-type-limits -Wno-array-bounds -Wno-unknown-pragmas -Wno-unused-parameter -Wno-unused-function -Wno-unused-result -Wno-strict-overflow -Wno-strict-aliasing -Wno-stringop-overflow -Wsuggest-override -Wno-psabi -Wno-error=pedantic -Wno-error=old-style-cast -Wno-missing-braces -fdiagnostics-color=always -faligned-new -Wno-unused-but-set-variable -Wno-maybe-uninitialized -fno-math-errno -fno-trapping-math -Werror=format -Wno-stringop-overflow, LAPACK_INFO=mkl, PERF_WITH_AVX=1, PERF_WITH_AVX2=1, PERF_WITH_AVX512=1, TORCH_VERSION=2.3.1, USE_CUDA=ON, USE_CUDNN=ON, USE_CUSPARSELT=1, USE_EXCEPTION_PTR=1, USE_GFLAGS=OFF, USE_GLOG=OFF, USE_GLOO=ON, USE_MKL=ON, USE_MKLDNN=ON, USE_MPI=OFF, USE_NCCL=1, USE_NNPACK=ON, USE_OPENMP=ON, USE_ROCM=OFF, USE_ROCM_KERNEL_ASSERT=OFF,

TorchVision: 0.18.1+cu121
LMDeploy: 0.5.3+1280f59
transformers: 4.37.2
gradio: Not Found
fastapi: 0.112.2
pydantic: 2.8.2
triton: 2.3.1
NVIDIA Topology:

        GPU0   GPU1   GPU2   GPU3   NIC0   CPU Affinity    NUMA Affinity   GPU NUMA ID
GPU0     X     NODE   SYS    SYS    SYS    0-31,64-95      0               N/A
GPU1    NODE    X     SYS    SYS    SYS    0-31,64-95      0               N/A
GPU2    SYS    SYS     X     NV8    PHB    32-63,96-127    1               N/A
GPU3    SYS    SYS    NV8     X     NODE   32-63,96-127    1               N/A
NIC0    SYS    SYS    PHB    NODE    X

Legend:

X    = Self
SYS  = Connection traversing PCIe as well as the SMP interconnect between NUMA nodes (e.g., QPI/UPI)
NODE = Connection traversing PCIe as well as the interconnect between PCIe Host Bridges within a NUMA node
PHB  = Connection traversing PCIe as well as a PCIe Host Bridge (typically the CPU)
PXB  = Connection traversing multiple PCIe bridges (without traversing the PCIe Host Bridge)
PIX  = Connection traversing at most a single PCIe bridge
NV#  = Connection traversing a bonded set of # NVLinks

NIC Legend:

NIC0: mlx5_0

chestnut111 avatar Aug 30 '24 09:08 chestnut111

@chestnut111 Please paste the output of python3 -m lmdeploy check_env.

With two-GPU tensor parallelism, GPUs 0+1 fail and GPUs 1+2 fail; only GPUs 2+3 work.
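
This matches the topology output above: GPU2 and GPU3 are linked by NVLink (NV8), while GPU0 and GPU1 only reach each other through the PCIe host bridges (NODE). As a hedged aid (not part of the original thread), a small PyTorch check of which device pairs report CUDA peer access could confirm this:

```python
# Sketch: print which GPU pairs report CUDA peer (P2P) access.
# Pairs without peer access make NCCL stage traffic through host/shared memory instead.
import torch

n = torch.cuda.device_count()
for i in range(n):
    for j in range(n):
        if i != j:
            ok = torch.cuda.can_device_access_peer(i, j)
            print(f'GPU{i} -> GPU{j}: peer access {"yes" if ok else "no"}')
```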

chestnut111 avatar Aug 30 '24 09:08 chestnut111

@chestnut111 Please paste the output of python3 -m lmdeploy check_env.

Could you please help take another look?

chestnut111 avatar Sep 02 '24 04:09 chestnut111

Suggest trying to set NCCL_P2P_DISABLE=1.
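
A minimal sketch of applying this from Python, assuming the environment variable is set before the pipeline (and therefore the NCCL communicators) is created; the model identifier is a placeholder:

```python
# Sketch: disable NCCL P2P before lmdeploy initializes its NCCL communicators.
# NCCL reads this environment variable at communicator creation time.
import os
os.environ['NCCL_P2P_DISABLE'] = '1'

from lmdeploy import pipeline, TurbomindEngineConfig

model = 'OpenGVLab/InternVL2-26B'  # placeholder, as in the run.py sketch above
backend_config = TurbomindEngineConfig(session_len=8192, tp=2, max_batch_size=2)
pipe = pipeline(model, backend_config=backend_config)
```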

lzhangzz avatar Sep 02 '24 07:09 lzhangzz

Suggest trying to set NCCL_P2P_DISABLE=1.

That doesn't work; it still fails with RuntimeError: [TM][ERROR] CUDA runtime error: out of memory /lmdeploy/src/turbomind/utils/allocator.h:246
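
As a further hedged suggestion not tried in this thread, NCCL's own logging can show whether the out-of-memory occurs while NCCL sets up its P2P/SHM buffers or later in TurboMind's allocator; enabling it before launch looks like this:

```python
# Sketch: enable NCCL logging for the next run to localize the failing allocation.
# NCCL_DEBUG and NCCL_DEBUG_SUBSYS are standard NCCL environment variables.
import os
os.environ['NCCL_DEBUG'] = 'INFO'
os.environ['NCCL_DEBUG_SUBSYS'] = 'INIT,P2P,SHM'
```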

chestnut111 avatar Sep 03 '24 03:09 chestnut111