
chatglm2-6b crashes in cublasSgemmStridedBatched on the 2080 Ti and 3090; it runs fine on an A10 under the same conditions

Open zhaoying9105 opened this issue 1 year ago • 3 comments

Environment:

Hardware:

Machine A: NVIDIA-SMI 525.105.17, Driver Version: 525.105.17, CUDA Version: 12.0, RTX 2080 Ti
Machine B: NVIDIA-SMI 515.43.04, Driver Version: 515.43.04, CUDA Version: 11.7, A10

docker image:

nvcr.io/nvidia/pytorch:23.04-py3

Software versions:

  1. transformers 4.30.2
  2. sentence-transformers 2.2.2
  3. torch 2.1.0a0+fe05266
  4. nvcc --version
nvcc: NVIDIA (R) Cuda compiler driver
Copyright (c) 2005-2023 NVIDIA Corporation
Built on Tue_Feb__7_19:32:13_PST_2023
Cuda compilation tools, release 12.1, V12.1.66
Build cuda_12.1.r12.1/compiler.32415258_0

Code:

chatglm2-6b version:

https://huggingface.co/THUDM/chatglm2-6b/

commit id: 0ecfe0b857efd00836a4851b3dd2ed04bd4b197f

fastllm version:

https://github.com/ztxz16/fastllm.git

commit id: bb36a7c22431bebb76c9c78904d97e5da86aeee0

Reproduction:

1. Build

mkdir build
cd build 
cmake .. -DUSE_CUDA=ON
cd tools
python setup.py install

2. Convert the PyTorch model to .flm

➜  fastllm git:(master) ✗ cat tools/scripts/chatglm_export.py
import sys
from transformers import AutoTokenizer, AutoModel
from fastllm_pytools import torch2flm

if __name__ == "__main__":
    model_path = "/data/mnt/models/chatglm2-6b/"
    tokenizer = AutoTokenizer.from_pretrained(model_path, trust_remote_code=True)
    model = AutoModel.from_pretrained(model_path, trust_remote_code=True).float()
    model = model.eval()

    # guard against a missing argument instead of indexing sys.argv[1] unconditionally
    exportPath = sys.argv[1] if len(sys.argv) > 1 else "chatglm-6b-fp32.flm"
    torch2flm.tofile(exportPath, model, tokenizer)

Run:

python tools/scripts/chatglm_export.py  chatglm2-6b-fp32.flm

On machine A the resulting chatglm2-6b-fp32.flm has MD5 462792d9c7f5d800753978cf5f6f91b5; on machine B it has MD5 e399be7800e044ba8df0327e1bc7de00.
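The checksum comparison above can be reproduced with a small standalone helper (a sketch using Python's standard hashlib, not part of fastllm; the file path in the comment is the export from step 2):

```python
import hashlib

def file_md5(path: str, chunk_size: int = 1 << 20) -> str:
    """Stream a file through MD5 in 1 MiB chunks so multi-GB .flm exports fit in memory."""
    h = hashlib.md5()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

# Run on both machines and compare, e.g.:
# file_md5("chatglm2-6b-fp32.flm")
```

Note that differing fp32 export checksums across two machines are not by themselves proof of a bug; the crash below is the actual symptom.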

3. Quantize

cd build
./quant  -p  ../chatglm2-6b-fp32.flm -b 8 -o ../chatglm2-6b-int8.flm

On machine A the resulting chatglm2-6b-int8.flm has MD5 e4967dfddf5333a591a1a182b8568708; on machine B it has MD5 23a08b30aaaa5ea497ac833341e9670c.

4. Run

On machine B:

./main -p ../chatglm2-6b-int8.flm 
Load (200 / 200) 
Warmup...
finish.
欢迎使用 chatglm 模型. 输入内容对话,reset清空历史记录,stop退出程序.
用户: 你好
chatglm: 你好👋!我是人工智能助手 ChatGLM2-6B,很高兴见到你,欢迎问我任何问题。
用户: 你好
chatglm: 你好!我是人工智能助手ChatGLM2-6B,很高兴见到你,欢迎问我任何问题。
用户: “?
chatglm: 你好!我是人工智能助手ChatGLM2-6B,很高兴见到你,欢迎问我任何问题。
用户: 介绍一下你自己
chatglm: 我是一个名为 ChatGLM2-6B 的人工智能助手,是基于清华大学 KEG 实验室和智谱 AI 公司于 2023 年共同训练的语言模型 GLM2-6B 开发的。我的任务是针对用户的问题和要求提供适当的答复和支持。由于我是一个计算机程序,所以我没有自我意识,也不能像人类一样感知世界。我只能通过分析我所学到的信息来回答问题。

GPU memory usage: 6 GB

On machine A:

gdb --args ./main  -p ../chatglm2-6b-int8.flm 
GNU gdb (Ubuntu 9.2-0ubuntu1~20.04.1) 9.2
Copyright (C) 2020 Free Software Foundation, Inc.
License GPLv3+: GNU GPL version 3 or later <http://gnu.org/licenses/gpl.html>
This is free software: you are free to change and redistribute it.
There is NO WARRANTY, to the extent permitted by law.
Type "show copying" and "show warranty" for details.
This GDB was configured as "x86_64-linux-gnu".
Type "show configuration" for configuration details.
For bug reporting instructions, please see:
<http://www.gnu.org/software/gdb/bugs/>.
Find the GDB manual and other documentation resources online at:
    <http://www.gnu.org/software/gdb/documentation/>.

For help, type "help".
Type "apropos word" to search for commands related to "word"...
Reading symbols from ./main...
(No debugging symbols found in ./main)
(gdb) r
Starting program: /data/projects/fastllm/build/main -p ../chatglm2-6b-int8.flm
[Thread debugging using libthread_db enabled]
Using host libthread_db library "/lib/x86_64-linux-gnu/libthread_db.so.1".
[New Thread 0x7fffcd505000 (LWP 948347)]
[New Thread 0x7fffccd04000 (LWP 948348)]
[New Thread 0x7fffcc503000 (LWP 948349)]
[New Thread 0x7fffcbd02000 (LWP 948350)]
[Thread 0x7fffcc503000 (LWP 948349) exited]
[Thread 0x7fffcbd02000 (LWP 948350) exited]
[Thread 0x7fffccd04000 (LWP 948348) exited]
[New Thread 0x7fffcbd02000 (LWP 948351)]
[Thread 0x7fffcd505000 (LWP 948347) exited]
[New Thread 0x7fffcc503000 (LWP 948352)]
[New Thread 0x7fffccd04000 (LWP 948353)]
[New Thread 0x7fffcd505000 (LWP 948354)]
Load (200 / 200) 
Warmup...
[New Thread 0x7fffb0bfc000 (LWP 948391)]
[New Thread 0x7ffe2f095000 (LWP 948392)]
[New Thread 0x7ffe2e894000 (LWP 948393)]
finish.
欢迎使用 chatglm 模型. 输入内容对话,reset清空历史记录,stop退出程序.
用户: 你好
chatglm:status = 7
128 16 4225
Error: cublas error.
terminate called after throwing an instance of 'char const*'

Thread 1 "main" received signal SIGABRT, Aborted.
__GI_raise (sig=sig@entry=6) at ../sysdeps/unix/sysv/linux/raise.c:50
50      ../sysdeps/unix/sysv/linux/raise.c: No such file or directory.
(gdb) bt
#0  __GI_raise (sig=sig@entry=6) at ../sysdeps/unix/sysv/linux/raise.c:50
#1  0x00007ffff11e3859 in __GI_abort () at abort.c:79
#2  0x00007ffff15bd911 in ?? () from /lib/x86_64-linux-gnu/libstdc++.so.6
#3  0x00007ffff15c938c in ?? () from /lib/x86_64-linux-gnu/libstdc++.so.6
#4  0x00007ffff15c93f7 in std::terminate() () from /lib/x86_64-linux-gnu/libstdc++.so.6
#5  0x00007ffff15c96a9 in __cxa_throw () from /lib/x86_64-linux-gnu/libstdc++.so.6
#6  0x00005555555cee55 in FastllmCudaBatchMatMul ()
#7  0x00005555555c41ea in fastllm::CudaMatMulOp::Run(std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&, std::map<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >, fastllm::Data*, std::less<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > >, std::allocator<std::pair<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const, fastllm::Data*> > > const&, std::map<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >, float, std::less<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > >, std::allocator<std::pair<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const, float> > > const&, std::map<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >, int, std::less<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > >, std::allocator<std::pair<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const, int> > > const&) ()
#8  0x0000555555581660 in fastllm::BaseDevice::Run(std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&, std::map<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >, fastllm::Data*, std::less<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > >, std::allocator<std::pair<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const, fastllm::Data*> > > const&, std::map<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >, float, std::less<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > >, std::allocator<std::pair<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const, float> > > const&, std::map<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >, int, std::less<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > >, std::allocator<std::pair<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const, int> > > const&) ()
#9  0x0000555555579e4a in fastllm::MatMul(fastllm::Data const&, fastllm::Data const&, fastllm::Data&, float) ()
#10 0x000055555559c33c in fastllm::ChatGLMModel::ForwardBatch(int, fastllm::Data const&, fastllm::Data const&, fastllm::Data const&, fastllm::Data const&, std::vector<std::pair<fastllm::Data, fastllm::Data>, std::allocator<std::pair<fastllm::Data, fastllm::Data> > >&) ()
#11 0x00005555555a60c9 in fastllm::ChatGLMModel::Response(std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const--Type <RET> for more, q to quit, c to continue without paging--
&, std::function<void (int, char const*)>) ()
#12 0x000055555556cd2a in main ()
(gdb) quit
A debugging session is active.

        Inferior 1 [process 948330] will be killed.

GPU memory usage climbed from 6 GB to nearly 11 GB before the error above appeared, so it looks like GPU memory was exhausted.

zhaoying9105 · Jul 07 '23

Even when quantized to int4, it still runs out of GPU memory on the 2080 Ti.

zhaoying9105 · Jul 07 '23

It doesn't look related to the model weights: running machine B's weights on a 3090 gives:

(The gdb session, crash output, and backtrace are identical to the 2080 Ti run above.)

From the source code, both the 3090 and the 2080 Ti crash in the same cublasSgemmStridedBatched call.

zhaoying9105 · Jul 07 '23

This is indeed odd. Is it a Docker environment? My own 3090 runs fine, and a group member's 2080 Ti also worked before, so there is probably some other problem.

ztxz16 · Jul 07 '23

From the log it looks more like a problem with the system itself; I'd suggest switching to a different image and testing other known-good programs.

wildkid1024 · Jul 08 '23

After some debugging, this turns out to be a precision problem: the lastRet returned by ChatGLMModel::ForwardBatch is always 0, so the while loop in ChatGLMModel::Response spins for 4000+ iterations, GPU memory blows up, cudaMalloc fails, the dev ptr of input1 passed to cublasSgemmStridedBatched ends up a null pointer, and the error above appears.
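The failure mode described above can be sketched as follows (a minimal illustration, not fastllm's actual C++ code; EOS_TOKEN_ID, MAX_NEW_TOKENS, and step are hypothetical names). If the next-token computation always produces 0 and 0 is not the end-of-sequence id, the decode loop never terminates on its own and per-step allocations grow until a CUDA allocation fails:

```python
EOS_TOKEN_ID = 2       # assumption: whatever id the model treats as end-of-sequence
MAX_NEW_TOKENS = 4096  # a hard cap like this would have bounded the blow-up

def generate(step, max_new_tokens=MAX_NEW_TOKENS):
    """Call step() for the next token id; stop on EOS or at the iteration cap."""
    tokens = []
    for _ in range(max_new_tokens):
        token = step()
        if token == EOS_TOKEN_ID:
            break
        tokens.append(token)
    return tokens

# A broken step() that always returns 0 (as observed on the 2080 Ti)
# only stops because of the cap:
print(len(generate(lambda: 0, max_new_tokens=10)))  # prints 10
```

With no cap at all (the situation described above), the loop runs until memory is exhausted, which matches the 4000+ iterations seen before the crash.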

zhaoying9105 · Jul 09 '23

Switching from chatglm2 to chatglm on the 2080 Ti shows the same behavior.

zhaoying9105 · Jul 09 '23

Someone who hit this before solved it by setting set(CMAKE_CUDA_ARCHITECTURES "native") (the latest CMakeLists.txt on git already includes this line); you could try whether that fixes it.
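For context on why that CMake setting matters: the GPUs in this thread have different CUDA compute capabilities, and kernels compiled for the wrong SM version can misbehave; "native" makes CMake target the build machine's own GPU. The published capabilities can be looked up like this (an illustrative table, not part of fastllm):

```python
# Published CUDA compute capabilities of the GPUs mentioned in this issue.
COMPUTE_CAPABILITY = {
    "RTX 2080 Ti": "75",  # Turing, sm_75
    "RTX 3090": "86",     # Ampere, sm_86
    "A10": "86",          # Ampere, sm_86
}

def cmake_arch_value(gpu_name: str) -> str:
    """Value one could pin explicitly in CMAKE_CUDA_ARCHITECTURES for a known GPU."""
    return COMPUTE_CAPABILITY[gpu_name]

print(cmake_arch_value("RTX 2080 Ti"))  # prints 75
```

Note the A10 and 3090 share sm_86 yet behaved differently here, so pinning "native" per machine, as the fix does, is the safer choice.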

ztxz16 · Jul 09 '23

That fixed it, thanks!!

zhaoying9105 · Jul 09 '23