ChatGLM-6B
[Help] How can I fine-tune on multiple GPUs?
Is there an existing issue for this?
- [X] I have searched the existing issues
Current Behavior
/
Expected Behavior
No response
Steps To Reproduce
/
Environment
/
Anything else?
No response
See https://github.com/THUDM/ChatGLM-6B/tree/main/ptuning#finetune (?
The official multi-GPU loading helper `load_model_on_gpus` keeps throwing errors. In the sh config file I changed `CUDA_VISIBLE_DEVICES=0` to `0,1`, and both cards are now in use. Fine-tuning the int8 model with P-Tuning v2, with all other parameters unchanged, each card uses close to 24 GB, which is considerably more than the official stated minimum of 9 GB.
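For reference, the change described above is a one-line edit to the launch script. This is only a sketch: the surrounding flags are abbreviated, and the exact contents of `ptuning/train.sh` may differ between repo versions; the point is just the `CUDA_VISIBLE_DEVICES` edit.

```shell
# Before: the stock script restricts training to a single GPU, e.g.
#   CUDA_VISIBLE_DEVICES=0 python3 main.py ...

# After: expose both GPUs to the training process
# (remaining flags elided; keep whatever your train.sh already passes)
CUDA_VISIBLE_DEVICES=0,1 python3 main.py \
    --do_train \
    --quantization_bit 8 \
    ...
```

Note that `CUDA_VISIBLE_DEVICES` only controls which devices the process can see; it does not by itself shard the model or the optimizer state across cards, which may be relevant to the per-card memory numbers reported here.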
Found the cause.
Hi, how did you solve this? I'm hitting a similar problem with multi-GPU p-tuning: with two cards, the average memory usage per card is much higher than when I p-tune on a single card.
@tolecy How did you solve this? My situation is similar to yours: multi-GPU ends up using considerably more memory per card than single-GPU.