
Can basic_demo/cli_demo_multi_gpus.py lower the memory requirement for a single GPU card?

Open fallbernana123456 opened this issue 1 year ago • 2 comments

I see that BF16 / FP16 inference needs 42GB, but the documentation says that in cli_demo_multi_gpu.py the infer_auto_device_map function is used to automatically distribute the model's layers across GPUs, and that you need to set the max_memory parameter to specify the maximum memory per GPU, for example two GPUs with 23GiB each. Does this mean multiple cards can be used to reduce the GPU memory needed on each single card? I have four 16G cards and set {0: '15GiB', 1: '15GiB', 2: '15GiB', 3: '15GiB'}, but I still get CUDA out of memory. Is my understanding wrong, or is my configuration wrong?
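For reference, a minimal sketch of the device-map setup this paragraph describes, assuming the accelerate utilities used by the demo; the model path is a placeholder and the 15 GiB budgets simply mirror the four-card setting above, not a verified working configuration. Printing the resulting map shows whether every layer fit under the per-card budget or whether some modules spilled over to "disk", which would then fail at inference time.

import torch
from transformers import AutoModelForCausalLM
from accelerate import init_empty_weights, infer_auto_device_map

MODEL_PATH = "/path/to/cogvlm2-llama3-chat-19B"  # placeholder path

# Build the model skeleton on the "meta" device so no GPU memory is used yet.
with init_empty_weights():
    model = AutoModelForCausalLM.from_pretrained(MODEL_PATH, trust_remote_code=True)

# One budget entry per GPU, matching the four 16 GB cards described above.
max_memory = {0: '15GiB', 1: '15GiB', 2: '15GiB', 3: '15GiB'}

device_map = infer_auto_device_map(
    model,
    max_memory=max_memory,
    no_split_module_classes=["CogVLMDecoderLayer"],  # never split a decoder layer across cards
)

# Anything mapped to "disk" (or "cpu") did not fit under the per-card budgets.
for name, device in device_map.items():
    print(name, "->", device)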

fallbernana123456 avatar May 24 '24 03:05 fallbernana123456

It may be that a single module alone already exceeds 16G; we have only tested on 24G cards.
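One way to check whether a single non-splittable module is already too large for a 16 GiB card is to measure per-module sizes on the empty (meta) model. A minimal sketch using accelerate's compute_module_sizes; the path is a placeholder and the largest entries will be whatever modules the CogVLM2 implementation defines.

import torch
from transformers import AutoModelForCausalLM
from accelerate import init_empty_weights
from accelerate.utils import compute_module_sizes

MODEL_PATH = "/path/to/cogvlm2-llama3-chat-19B"  # placeholder path

with init_empty_weights():
    model = AutoModelForCausalLM.from_pretrained(MODEL_PATH, trust_remote_code=True)

# Size of every (sub)module in bytes, computed from parameter shapes only.
sizes = compute_module_sizes(model, dtype=torch.bfloat16)

# Print the ten largest entries; if a module that cannot be split is close to
# or above 16 GiB, it will not fit on a 16 GB card however max_memory is set.
for name, size in sorted(sizes.items(), key=lambda kv: -kv[1])[:10]:
    print(f"{name or '<whole model>'}: {size / 1024**3:.1f} GiB")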

zRzRzRzRzRzRzR avatar May 24 '24 13:05 zRzRzRzRzRzRzR

It may be that a single module alone already exceeds 16G; we have only tested on 24G cards.

Hello, I tried it and got a different error. Here is my code:

import torch
from PIL import Image
from transformers import AutoModelForCausalLM, AutoTokenizer
from accelerate import init_empty_weights, load_checkpoint_and_dispatch, infer_auto_device_map

MODEL_PATH = "/mnt/data/spdi-code/paddlechat/cogvlm2-llama3-chat-19B"
DEVICE = 'cuda' if torch.cuda.is_available() else 'cpu'
TORCH_TYPE = torch.bfloat16 if torch.cuda.is_available() and torch.cuda.get_device_capability()[0] >= 8 else torch.float16

tokenizer = AutoTokenizer.from_pretrained(
    MODEL_PATH,
    trust_remote_code=True
)

# Build the model skeleton without allocating weights.
with init_empty_weights():
    model = AutoModelForCausalLM.from_pretrained(
        MODEL_PATH,
        trust_remote_code=True,
    )

num_gpus = torch.cuda.device_count()
max_memory_per_gpu = "16GiB"
if num_gpus > 2:
    max_memory_per_gpu = f"{round(42 / num_gpus)}"

# Plan the split across GPUs, keeping each decoder layer on a single card.
device_map = infer_auto_device_map(
    model=model,
    max_memory={i: max_memory_per_gpu for i in range(num_gpus)},
    no_split_module_classes=["CogVLMDecoderLayer"]
)
model = load_checkpoint_and_dispatch(model, MODEL_PATH, device_map=device_map, dtype=TORCH_TYPE)
model = model.eval()

text_only_template = "A chat between a curious user and an artificial intelligence assistant. The assistant gives helpful, detailed, and polite answers to the user's questions. USER: {} ASSISTANT:"

query = text_only_template.format('您好')
history = []
input_by_model = model.build_conversation_input_ids(
    tokenizer,
    query=query,
    history=history,
    template_version='chat'
)

inputs = {
    'input_ids': input_by_model['input_ids'].unsqueeze(0).to(DEVICE),
    'token_type_ids': input_by_model['token_type_ids'].unsqueeze(0).to(DEVICE),
    'attention_mask': input_by_model['attention_mask'].unsqueeze(0).to(DEVICE),
    'image': None
}
gen_kwargs = {
    "max_new_tokens": 2048,
    "pad_token_id": 128002,
}
with torch.no_grad():
    outputs = model.generate(**inputs, **gen_kwargs)
    outputs = outputs[:, inputs['input_ids'].shape[1]:]
    response = tokenizer.decode(outputs[0])
    response = response.split("<|end_of_text|>")[0]  # strip everything after the end-of-text token
    print("\nCogVLM2:", response)
    history.append((query, response))

(Attached: WeCom screenshot of the error message, 2024-05-27.)

whysirier avatar May 27 '24 01:05 whysirier

Do you still hit this problem with the latest code? Use our splitting approach directly.
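For comparison, a sketch of the per-GPU budget calculation along the lines of the repository's cli_demo_multi_gpus.py; the exact demo code may differ. One assumption worth noting: accelerate parses string budgets with a unit suffix such as "16GiB", and a value without a unit is read as a byte count.

import torch

# Split the ~42 GB of BF16/FP16 weights evenly across the visible GPUs,
# keeping the "GiB" unit suffix on the string budget.
num_gpus = torch.cuda.device_count()
max_memory_per_gpu = "16GiB"
if num_gpus > 2:
    max_memory_per_gpu = f"{round(42 / num_gpus)}GiB"

max_memory = {i: max_memory_per_gpu for i in range(num_gpus)}
print(max_memory)  # e.g. {0: '10GiB', 1: '10GiB', 2: '10GiB', 3: '10GiB'} on four cards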

zRzRzRzRzRzRzR avatar Jun 27 '24 17:06 zRzRzRzRzRzRzR