[Bug] [TM][ERROR] CUDA runtime error: misaligned address
### Checklist
- [X] 1. I have searched related issues but cannot get the expected help.
- [X] 2. The bug has not been fixed in the latest version.
- [X] 3. Please note that if the bug-related issue you submitted lacks corresponding environment info and a minimal reproducible demo, it will be challenging for us to reproduce and resolve the issue, reducing the likelihood of receiving feedback.
### Describe the bug
I built lmdeploy with `-DCMAKE_BUILD_TYPE=Debug`, and it then reports this error:

```
terminate called after throwing an instance of 'std::runtime_error'
what(): [TM][ERROR] CUDA runtime error: misaligned address /opt/lmdeploy/src/turbomind/models/llama/unified_attention_layer.cc:353
```
### Reproduction

```shell
cmake .. \
    -DCMAKE_BUILD_TYPE=Debug \
    -DCMAKE_EXPORT_COMPILE_COMMANDS=1 \
    -DCMAKE_INSTALL_PREFIX=/opt/tritonserver \
    -DBUILD_PY_FFI=ON \
    -DBUILD_MULTI_GPU=ON \
    -DBUILD_CUTLASS_MOE=OFF \
    -DBUILD_CUTLASS_MIXED_GEMM=OFF \
    -DCMAKE_CUDA_FLAGS="-lineinfo" \
    -DUSE_NVTX=ON
```
### Environment

```
sys.platform: linux
Python: 3.8.10 (default, Jul 29 2024, 17:02:10) [GCC 9.4.0]
CUDA available: True
MUSA available: False
numpy_random_seed: 2147483648
GPU 0,1,2,3,4,5,6,7: NVIDIA H100 80GB HBM3
CUDA_HOME: /usr/local/cuda
NVCC: Cuda compilation tools, release 11.8, V11.8.89
GCC: x86_64-linux-gnu-gcc (Ubuntu 9.4.0-1ubuntu1~20.04.1) 9.4.0
PyTorch: 2.1.0+cu118
PyTorch compiling details: PyTorch built with:
  - GCC 9.3
  - C++ Version: 201703
  - Intel(R) oneAPI Math Kernel Library Version 2022.2-Product Build 20220804 for Intel(R) 64 architecture applications
  - Intel(R) MKL-DNN v3.1.1 (Git Hash 64f6bcbcbab628e96f33a62c3e975f8535a7bde4)
  - OpenMP 201511 (a.k.a. OpenMP 4.5)
  - LAPACK is enabled (usually provided by MKL)
  - NNPACK is enabled
  - CPU capability usage: AVX512
  - CUDA Runtime 11.8
  - NVCC architecture flags: -gencode;arch=compute_50,code=sm_50;-gencode;arch=compute_60,code=sm_60;-gencode;arch=compute_70,code=sm_70;-gencode;arch=compute_75,code=sm_75;-gencode;arch=compute_80,code=sm_80;-gencode;arch=compute_86,code=sm_86;-gencode;arch=compute_37,code=sm_37;-gencode;arch=compute_90,code=sm_90
  - CuDNN 8.7
  - Magma 2.6.1
  - Build settings: BLAS_INFO=mkl, BUILD_TYPE=Release, CUDA_VERSION=11.8, CUDNN_VERSION=8.7.0, CXX_COMPILER=/opt/rh/devtoolset-9/root/usr/bin/c++, CXX_FLAGS= -D_GLIBCXX_USE_CXX11_ABI=0 -fabi-version=11 -fvisibility-inlines-hidden -DUSE_PTHREADPOOL -DNDEBUG -DUSE_KINETO -DLIBKINETO_NOROCTRACER -DUSE_FBGEMM -DUSE_QNNPACK -DUSE_PYTORCH_QNNPACK -DUSE_XNNPACK -DSYMBOLICATE_MOBILE_DEBUG_HANDLE -O2 -fPIC -Wall -Wextra -Werror=return-type -Werror=non-virtual-dtor -Werror=bool-operation -Wnarrowing -Wno-missing-field-initializers -Wno-type-limits -Wno-array-bounds -Wno-unknown-pragmas -Wno-unused-parameter -Wno-unused-function -Wno-unused-result -Wno-strict-overflow -Wno-strict-aliasing -Wno-stringop-overflow -Wno-psabi -Wno-error=pedantic -Wno-error=old-style-cast -Wno-invalid-partial-specialization -Wno-unused-private-field -Wno-aligned-allocation-unavailable -Wno-missing-braces -fdiagnostics-color=always -faligned-new -Wno-unused-but-set-variable -Wno-maybe-uninitialized -fno-math-errno -fno-trapping-math -Werror=format -Werror=cast-function-type -Wno-stringop-overflow, LAPACK_INFO=mkl, PERF_WITH_AVX=1, PERF_WITH_AVX2=1, PERF_WITH_AVX512=1, TORCH_DISABLE_GPU_ASSERTS=ON, TORCH_VERSION=2.1.0, USE_CUDA=ON, USE_CUDNN=ON, USE_EXCEPTION_PTR=1, USE_GFLAGS=OFF, USE_GLOG=OFF, USE_MKL=ON, USE_MKLDNN=ON, USE_MPI=OFF, USE_NCCL=1, USE_NNPACK=ON, USE_OPENMP=ON, USE_ROCM=OFF,
TorchVision: 0.16.0+cu118
LMDeploy: 0.6.0a0+97b880b
transformers: 4.44.2
gradio: 4.42.0
fastapi: 0.112.2
pydantic: 2.8.2
triton: 2.1.0
NVIDIA Topology:
      GPU0 GPU1 GPU2 GPU3 GPU4 GPU5 GPU6 GPU7 NIC0 NIC1 NIC2 NIC3 NIC4 CPU Affinity NUMA Affinity GPU NUMA ID
GPU0  X    NV18 NV18 NV18 NV18 NV18 NV18 NV18 PIX  NODE SYS  SYS  NODE 0-47,96-143   0             N/A
GPU1  NV18 X    NV18 NV18 NV18 NV18 NV18 NV18 PXB  NODE SYS  SYS  NODE 0-47,96-143   0             N/A
GPU2  NV18 NV18 X    NV18 NV18 NV18 NV18 NV18 NODE PXB  SYS  SYS  NODE 0-47,96-143   0             N/A
GPU3  NV18 NV18 NV18 X    NV18 NV18 NV18 NV18 NODE PIX  SYS  SYS  NODE 0-47,96-143   0             N/A
GPU4  NV18 NV18 NV18 NV18 X    NV18 NV18 NV18 SYS  SYS  PXB  NODE SYS  48-95,144-191 1             N/A
GPU5  NV18 NV18 NV18 NV18 NV18 X    NV18 NV18 SYS  SYS  PIX  NODE SYS  48-95,144-191 1             N/A
GPU6  NV18 NV18 NV18 NV18 NV18 NV18 X    NV18 SYS  SYS  NODE PXB  SYS  48-95,144-191 1             N/A
GPU7  NV18 NV18 NV18 NV18 NV18 NV18 NV18 X    SYS  SYS  NODE PIX  SYS  48-95,144-191 1             N/A
NIC0  PIX  PXB  NODE NODE SYS  SYS  SYS  SYS  X    NODE SYS  SYS  NODE
NIC1  NODE NODE PXB  PIX  SYS  SYS  SYS  SYS  NODE X    SYS  SYS  NODE
NIC2  SYS  SYS  SYS  SYS  PXB  PIX  NODE NODE SYS  SYS  X    NODE SYS
NIC3  SYS  SYS  SYS  SYS  NODE NODE PXB  PIX  SYS  SYS  NODE X    SYS
NIC4  NODE NODE NODE NODE SYS  SYS  SYS  SYS  NODE NODE SYS  SYS  X

Legend:
  X    = Self
  SYS  = Connection traversing PCIe as well as the SMP interconnect between NUMA nodes (e.g., QPI/UPI)
  NODE = Connection traversing PCIe as well as the interconnect between PCIe Host Bridges within a NUMA node
  PHB  = Connection traversing PCIe as well as a PCIe Host Bridge (typically the CPU)
  PXB  = Connection traversing multiple PCIe bridges (without traversing the PCIe Host Bridge)
  PIX  = Connection traversing at most a single PCIe bridge
  NV#  = Connection traversing a bonded set of # NVLinks

NIC Legend:
  NIC0: mlx5_0
  NIC1: mlx5_1
  NIC2: mlx5_4
  NIC3: mlx5_5
  NIC4: mlx5_bond_0
```
### Error traceback
_No response_
Can you try to reproduce it with TM_DEBUG_LEVEL=DEBUG?
Furthermore, I need the model and the prompt you used to pinpoint the bug.
> TM_DEBUG_LEVEL=DEBUG

This is my code. I ran it with `TM_DEBUG_LEVEL=DEBUG python offline.py`, but nothing more was printed.
```python
from lmdeploy import pipeline

model = "/data/vicuna-13b-v1.5/"
prompts = ["tell me something about USA."]

pipe = pipeline(model)
response = pipe(prompts)
print(response)
```
```
========= Invalid __local__ read of size 16 bytes
=========     at 0x120 in /opt/lmdeploy/src/turbomind/kernels/core/array_ops.h:178:void turbomind::Store<__half, (int)8>(T1 *, const turbomind::Array<T1, T2> &)
=========     by thread (0,0,0) in block (0,0,8)
=========     Address 0xffe6a8 is misaligned
=========     Saved host backtrace up to driver entry point at kernel launch time
=========     Host Frame: [0x2f16d0]
=========                in /lib/x86_64-linux-gnu/libcuda.so.1
=========     Host Frame: [0x1488c]
=========                in /opt/py38/lib/python3.8/site-packages/torch/lib/libcudart-d0da41ae.so.11.0
=========     Host Frame:cudaLaunchKernel [0x6c318]
=========                in /opt/py38/lib/python3.8/site-packages/torch/lib/libcudart-d0da41ae.so.11.0
=========     Host Frame:/opt/lmdeploy/src/turbomind/kernels/attention/attention_universal.h:585:void turbomind::attention_kernel<turbomind::AttentionUniversal<turbomind::arch::Sm80, turbomind::attention::Mainloop<turbomind::attention::Sm80_CpAsync<2>, turbomind::attention::Impl<turbomind::attention::MMA_16816, __half, __half, 1, 64, 64, 1, 16, 64, 128, 2> >, turbomind::LinearIteratorFactory<__half, 64, 128>, turbomind::attention::AttentionCtaMap> >(turbomind::AttentionUniversal<turbomind::arch::Sm80, turbomind::attention::Mainloop<turbomind::attention::Sm80_CpAsync<2>, turbomind::attention::Impl<turbomind::attention::MMA_16816, __half, __half, 1, 64, 64, 1, 16, 64, 128, 2> >, turbomind::LinearIteratorFactory<__half, 64, 128>, turbomind::attention::AttentionCtaMap>::ParamType, turbomind::AttentionUniversal<turbomind::arch::Sm80, turbomind::attention::Mainloop<turbomind::attention::Sm80_CpAsync<2>, turbomind::attention::Impl<turbomind::attention::MMA_16816, __half, __half, 1, 64, 64, 1, 16, 64, 128, 2> >, turbomind::LinearIteratorFactory<__half, 64, 128>, turbomind::attention::AttentionCtaMap>::CacheIteratorFactory, turbomind::AttentionUniversal<turbomind::arch::Sm80, turbomind::attention::Mainloop<turbomind::attention::Sm80_CpAsync<2>, turbomind::attention::Impl<turbomind::attention::MMA_16816, __half, __half, 1, 64, 64, 1, 16, 64, 128, 2> >, turbomind::LinearIteratorFactory<__half, 64, 128>, turbomind::attention::AttentionCtaMap>::CtaMap, int, int, int) [0xba76cc]
=========                in /opt/lmdeploy/lmdeploy/lib/_turbomind.cpython-38-x86_64-linux-gnu.so
=========     Host Frame:/opt/lmdeploy/src/turbomind/kernels/attention/codegen/../attention_template.h:68:void turbomind::invokeAttention<turbomind::AttentionUniversal<turbomind::arch::Sm80, turbomind::attention::Mainloop<turbomind::attention::Sm80_CpAsync<2>, turbomind::attention::Impl<turbomind::attention::MMA_16816, __half, __half, 1, 64, 64, 1, 16, 64, 128, 2> >, turbomind::LinearIteratorFactory<__half, 64, 128>, turbomind::attention::AttentionCtaMap> >(turbomind::AttentionUniversal<turbomind::arch::Sm80, turbomind::attention::Mainloop<turbomind::attention::Sm80_CpAsync<2>, turbomind::attention::Impl<turbomind::attention::MMA_16816, __half, __half, 1, 64, 64, 1, 16, 64, 128, 2> >, turbomind::LinearIteratorFactory<__half, 64, 128>, turbomind::attention::AttentionCtaMap>::ParamType const&) [0xba7a4d]
=========                in /opt/lmdeploy/lmdeploy/lib/_turbomind.cpython-38-x86_64-linux-gnu.so
=========     Host Frame:/opt/lmdeploy/src/turbomind/kernels/attention/attention.cu:36:void turbomind::dispatchAttention<__half>(turbomind::AttentionParams<__half> const&) [0xba2bf5]
=========                in /opt/lmdeploy/lmdeploy/lib/_turbomind.cpython-38-x86_64-linux-gnu.so
```
@lzhangzz
My suggestion is to use the latest release directly instead of compiling it yourself, as @lzhangzz is currently busy with feature development :P
I guess I didn't align all the arrays properly in the kernel code. In a release build there is no local-memory usage, so the arrays are mapped to registers and alignment is enforced by the compiler. However, in a debug build some arrays are mapped to local memory, and in that case their addresses may be misaligned.
I want to trace and learn lmdeploy's implementation using gdb; that's why I'm compiling the debug version.
I tried the RelWithDebInfo option, but it's not accurate: a lot of the code gets optimized away.
Any advice on tracing it without recompiling? @lzhangzz @zhyncs
@sleepwalker2017 If you don't need to debug CUDA code, you can remove the `-G` option from `CMAKE_CUDA_FLAGS_DEBUG`.
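For example, assuming the build appends `-G` via `CMAKE_CUDA_FLAGS_DEBUG` (check your own tree), you can override that variable at configure time so host code keeps full debug info for gdb while device code is compiled without `-G`:

```shell
# Sketch, not verified against lmdeploy's CMakeLists: override the CUDA
# debug flags so kernels are built without -G (avoiding the local-memory
# spills behind the misaligned-address error), while host code keeps
# -O0 -g for stepping through it in gdb. -lineinfo preserves source-line
# correlation for device code in profilers and sanitizers.
cmake .. \
    -DCMAKE_BUILD_TYPE=Debug \
    -DCMAKE_CUDA_FLAGS_DEBUG="-O0 -g -lineinfo"
```

This trades device-side single-stepping for kernels that behave like the release build.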