
[Help] How to support multiple GPUs

ChinaGPT opened this issue Apr 03 '23 · 7 comments

Is there an existing issue for this?

  • [X] I have searched the existing issues

Current Behavior

We use this internally at our company with 2 GPUs installed, but with the default configuration only 1 GPU is doing any work. How do we run it so that both cards are used?

Expected Behavior

No response

Steps To Reproduce

Environment

OS: Ubuntu 20.04
Python: 3.8
Transformers: 4.26.1
PyTorch: 1.12
CUDA Support: True

Anything else?

No response

ChinaGPT avatar Apr 03 '23 02:04 ChinaGPT

CUDA_VISIBLE_DEVICES=0,1

fushengwuyu avatar Apr 03 '23 05:04 fushengwuyu

CUDA_VISIBLE_DEVICES=0,1

Do I add that in web_demo.py?

ChinaGPT avatar Apr 03 '23 09:04 ChinaGPT

CUDA_VISIBLE_DEVICES=0,1

Do I add that in web_demo.py?

No, it's a system environment variable.

musicfish1973 avatar Apr 04 '23 07:04 musicfish1973
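Concretely, "system environment variable" means CUDA_VISIBLE_DEVICES has to be set in the process environment before CUDA is initialized, e.g. CUDA_VISIBLE_DEVICES=0,1 python web_demo.py on the command line. A minimal sketch of the equivalent in Python (standard os/torch calls; web_demo.py is the demo script from this repo):

import os

# Must be set before torch initializes CUDA; assigning it later has no effect.
os.environ["CUDA_VISIBLE_DEVICES"] = "0,1"

import torch

print(torch.cuda.device_count())  # expect 2 when both cards are visible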

CUDA_VISIBLE_DEVICES=0,1

After setting CUDA_VISIBLE_DEVICES=0,1, nvidia-smi still shows only GPU 0 being used?

musicfish1973 avatar Apr 04 '23 08:04 musicfish1973
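Note that CUDA_VISIBLE_DEVICES only controls which GPUs the process may see; it does not distribute the model across them. The demo code loads the whole model onto the first visible device, which is why nvidia-smi shows real usage only on GPU 0. A quick check using standard torch calls:

import torch

# Both cards are visible, but nothing has placed any weights on the second one.
for i in range(torch.cuda.device_count()):
    print(i, torch.cuda.get_device_name(i), torch.cuda.memory_allocated(i))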

Both 0 and 1 are visible, but with demo.py only GPU 0 takes effect.

tokenizer = AutoTokenizer.from_pretrained("THUDM/chatglm-6b", trust_remote_code=True)
model = AutoModel.from_pretrained("THUDM/chatglm-6b", trust_remote_code=True).half().quantize(8).cuda()
model = model.eval()

MAX_TURNS = 20
MAX_BOXES = MAX_TURNS * 2
CUDA_VISIBLE_DEVICES = 0,1

ChinaGPT avatar Apr 04 '23 08:04 ChinaGPT
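Two issues with the snippet above: CUDA_VISIBLE_DEVICES = 0,1 is an ordinary Python tuple assignment, not an environment variable, so it has no effect; and .cuda() moves the entire model onto a single device regardless of visibility. A corrected sketch of the visibility part (this alone still does not split the model across cards):

import os
os.environ["CUDA_VISIBLE_DEVICES"] = "0,1"  # must run before torch is imported

from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("THUDM/chatglm-6b", trust_remote_code=True)
# .cuda() still puts everything on one device; see device_map below for sharding.
model = AutoModel.from_pretrained("THUDM/chatglm-6b", trust_remote_code=True).half().quantize(8).cuda()
model = model.eval()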

That's because the network layers have to be mapped onto the other GPUs yourself; otherwise only GPU 0 will be used.

cywjava avatar Apr 06 '23 13:04 cywjava

model = AutoModel.from_pretrained("THUDM/chatglm-6b", trust_remote_code=True, device_map='auto').half()

dofish avatar Apr 13 '23 12:04 dofish
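Expanded into a runnable sketch: device_map='auto' asks the accelerate library (which must be installed) to shard the layers across all visible GPUs, so no explicit .cuda() call is needed. The repo's quantize(8) call is left out here, since it may not combine cleanly with automatic sharding:

# Requires: pip install accelerate
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("THUDM/chatglm-6b", trust_remote_code=True)
# accelerate places the layers across all visible GPUs automatically.
model = AutoModel.from_pretrained(
    "THUDM/chatglm-6b", trust_remote_code=True, device_map='auto'
).half()
model = model.eval()

response, history = model.chat(tokenizer, "你好", history=[])
print(response)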

I added os.environ["CUDA_VISIBLE_DEVICES"] = "1,0" and device_map='auto', but it still doesn't seem to work: as inference concurrency and back-and-forth interactions increase, one card's memory usage keeps climbing, while the other card does hold some memory, but only 783M, and it never grows.

jeffsjf avatar Apr 19 '23 12:04 jeffsjf

@jeffsjf Please use https://github.com/THUDM/ChatGLM-6B#%E5%A4%9A%E5%8D%A1%E9%83%A8%E7%BD%B2 (the multi-GPU deployment section of the README)

duzx16 avatar Apr 19 '23 14:04 duzx16
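For reference, the linked README section describes a helper shipped in the repo's utils.py that builds the layer-to-GPU mapping automatically; per that section, usage is roughly:

from utils import load_model_on_gpus

# Spreads ChatGLM-6B's layers across the given number of GPUs.
model = load_model_on_gpus("THUDM/chatglm-6b", num_gpus=2)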