ChatGLM-6B
[Help] How to support multiple GPUs
Is there an existing issue for this?
- [X] I have searched the existing issues
Current Behavior
We use this internally at our company on a machine with 2 GPUs, but with the default configuration only 1 GPU is running. How can we make use of both cards?
Expected Behavior
No response
Steps To Reproduce
None
Environment
OS: Ubuntu 20.04
Python: 3.8
Transformers: 4.26.1
PyTorch: 1.12
CUDA Support: True
Anything else?
No response
CUDA_VISIBLE_DEVICES=0,1

> CUDA_VISIBLE_DEVICES=0,1

Do you mean adding that in web_demo.py?

> > CUDA_VISIBLE_DEVICES=0,1
>
> Do you mean adding that in web_demo.py?

Set it as a system environment variable.

> CUDA_VISIBLE_DEVICES=0,1

After setting CUDA_VISIBLE_DEVICES=0,1, nvidia-smi still shows only GPU 0 being used?

Both 0 and 1 are visible in nvidia-smi, but when I run demo.py it only takes effect on GPU 0.
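For reference, a minimal sketch of setting the variable from inside Python rather than the shell. The key constraint is that it must be in the process environment before torch initializes CUDA; a plain Python variable with the same name has no effect:

```python
import os

# Must happen before torch is imported / CUDA is initialized.
os.environ["CUDA_VISIBLE_DEVICES"] = "0,1"

# Only import torch afterwards, e.g.:
# import torch
# torch.cuda.device_count()  # should now report both GPUs
```

Equivalently, prefix the launch command in the shell: `CUDA_VISIBLE_DEVICES=0,1 python web_demo.py`.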
```python
tokenizer = AutoTokenizer.from_pretrained("THUDM/chatglm-6b", trust_remote_code=True)
model = AutoModel.from_pretrained("THUDM/chatglm-6b", trust_remote_code=True).half().quantize(8).cuda()
model = model.eval()

MAX_TURNS = 20
MAX_BOXES = MAX_TURNS * 2
CUDA_VISIBLE_DEVICES = 0,1  # note: a plain Python assignment like this does not set the environment variable
```
Because the network layers have to be mapped onto the other GPUs explicitly; otherwise only GPU 0 is used.
```python
model = AutoModel.from_pretrained("THUDM/chatglm-6b", trust_remote_code=True, device_map='auto').half()
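A fuller sketch of that suggestion (it assumes the `accelerate` package is installed, which `device_map='auto'` requires, and that both GPUs are visible):

```python
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("THUDM/chatglm-6b", trust_remote_code=True)
model = AutoModel.from_pretrained(
    "THUDM/chatglm-6b",
    trust_remote_code=True,
    device_map="auto",  # let accelerate shard the layers across visible GPUs
).half()
model = model.eval()

# To check where each module actually landed:
# print(model.hf_device_map)
```

Note that `device_map="auto"` places whole layers on devices, so memory usage across cards is not guaranteed to be perfectly balanced.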
I added `os.environ["CUDA_VISIBLE_DEVICES"] = "1,0"` together with `device_map='auto'`, but it still doesn't seem to work. As inference concurrency and interactions increase, one card's VRAM keeps climbing, while the other card shows only 783M in use and never grows.
@jeffsjf Please use the multi-GPU deployment instructions: https://github.com/THUDM/ChatGLM-6B#%E5%A4%9A%E5%8D%A1%E9%83%A8%E7%BD%B2
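The section linked above describes a helper, `load_model_on_gpus` from the repository's `utils.py`, which splits the model's transformer layers evenly across cards. A minimal usage sketch, assuming the script is run from the ChatGLM-6B repository root so `utils` is importable:

```python
from transformers import AutoTokenizer
from utils import load_model_on_gpus  # helper shipped in the ChatGLM-6B repo

tokenizer = AutoTokenizer.from_pretrained("THUDM/chatglm-6b", trust_remote_code=True)

# Split the layers evenly across 2 GPUs instead of loading everything on GPU 0.
model = load_model_on_gpus("THUDM/chatglm-6b", num_gpus=2)
model = model.eval()

response, history = model.chat(tokenizer, "你好", history=[])
```

This balances per-card VRAM more evenly than `device_map='auto'`, which can leave most of the weights on one GPU.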