
[BUG] MiniCPM_V_2_6_INT4 raises a tensor error when run

Open · Xls1994 opened this issue 1 year ago

Is there an existing issue / discussion for this?

  • [X] I have searched the existing issues / discussions

Is there an existing answer for this in FAQ?

  • [X] I have searched FAQ

Current Behavior

Running the official example code for the INT4 version with transformers fails; the sampling method appears to be at fault. The full error output follows.

Unused kwargs: ['_load_in_4bit', '_load_in_8bit', 'quant_method']. These kwargs are not used in <class 'transformers.utils.quantization_config.BitsAndBytesConfig'>.
low_cpu_mem_usage was None, now set to True since model is quantized.
Loading checkpoint shards: 100%|███████████████████████████████████████████████████████████| 2/2 [00:05<00:00, 2.76s/it]
Special tokens have been added in the vocabulary, make sure the associated word embeddings are fine-tuned or trained.
Special tokens have been added in the vocabulary, make sure the associated word embeddings are fine-tuned or trained.
Traceback (most recent call last):
  File "minicpm_2_6.py", line 19, in <module>
    res = model.chat(
  File "/home/deploy/.cache/huggingface/modules/transformers_modules/MiniCPM-V-2_6-int4/modeling_minicpmv.py", line 378, in chat
    res = self.generate(
  File "/home/deploy/.cache/huggingface/modules/transformers_modules/MiniCPM-V-2_6-int4/modeling_minicpmv.py", line 262, in generate
    result = self._decode(model_inputs["inputs_embeds"], tokenizer, attention_mask, decode_text=decode_text, **kwargs)
  File "/home/deploy/.cache/huggingface/modules/transformers_modules/MiniCPM-V-2_6-int4/modeling_minicpmv.py", line 186, in _decode
    output = self.llm.generate(
  File "/home/deploy/anaconda3/envs/yyl_env_py388/lib/python3.8/site-packages/torch/utils/_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
  File "/home/deploy/anaconda3/envs/yyl_env_py388/lib/python3.8/site-packages/transformers/generation/utils.py", line 1622, in generate
    result = self._sample(
  File "/home/deploy/anaconda3/envs/yyl_env_py388/lib/python3.8/site-packages/transformers/generation/utils.py", line 2829, in _sample
    next_tokens = torch.multinomial(probs, num_samples=1).squeeze(1)
RuntimeError: probability tensor contains either inf, nan or element < 0
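
The failure is raised inside torch.multinomial, which means the probability tensor handed to the sampler already contains inf/nan values. One quick way to narrow this down (a sketch reusing model, msgs, and tokenizer from the reproduction script posted below; sampling is a keyword argument of MiniCPM-V's chat API) is to switch to greedy decoding, which bypasses multinomial sampling entirely:

# Diagnostic sketch: with sampling disabled, generation is greedy and never
# calls torch.multinomial. If this succeeds, the forward pass is healthy and
# the inf/nan values appear only on the sampling path.
res = model.chat(
    image=None,
    msgs=msgs,
    tokenizer=tokenizer,
    sampling=False
)
print(res)

If greedy decoding fails as well, the logits themselves are corrupted, which would point at the quantized weights or the environment rather than the sampler.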

Expected Behavior

The code should run and produce output. The earlier V2.5 model works fine in exactly the same environment, so this error should not be occurring.

Steps To Reproduce

No response

Environment

- OS: CentOS
- Python: 3.8.9
- Transformers: 4.40.0
- PyTorch: 2.1.0
- CUDA (`python -c 'import torch; print(torch.version.cuda)'`): 12.1
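
Since the thread below traces the failure to a package-version mismatch, a one-paste version report helps when comparing environments (a minimal sketch; bitsandbytes is included because the int4 checkpoint depends on it):

import torch, transformers, bitsandbytes

print("torch:", torch.__version__)
print("transformers:", transformers.__version__)
print("bitsandbytes:", bitsandbytes.__version__)
print("CUDA:", torch.version.cuda)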

Anything else?

No response

Xls1994 · Aug 07 '24 03:08

Test code below; this problem is indeed rather strange.

import torch
from PIL import Image
from transformers import AutoModel, AutoTokenizer

path = "/home/deploy/user/yangyunlong/model"
model_path = path + "/MiniCPM-V-2_6-int4"

# The int4 checkpoint ships its own bitsandbytes quantization config,
# so no explicit BitsAndBytesConfig is passed here.
model = AutoModel.from_pretrained(model_path, trust_remote_code=True)
tokenizer = AutoTokenizer.from_pretrained(model_path, trust_remote_code=True)
model.eval()

img_path = path + "/rag.jpg"
image = Image.open(img_path).convert('RGB')

question = '这张图片上有什么内容?'  # "What is in this picture?"
msgs = [{'role': 'user', 'content': [image, question]}]

# model.chat is the custom chat entry point defined in modeling_minicpmv.py
res = model.chat(
    image=None,
    msgs=msgs,
    tokenizer=tokenizer
)
print(res)

Xls1994 · Aug 07 '24 04:08

@tc-mb please take a look.

Cuiunbo · Aug 07 '24 14:08

Hi, if I remember correctly this issue has already been discussed and resolved, right? I believe it was a package version problem.

LDLINGLINGLING · Aug 08 '24 06:08

> Hi, if I remember correctly this issue has already been discussed and resolved, right? I believe it was a package version problem.

Yes, it works now, so I will close this issue. The cause was an abnormal version of one of the dependency packages; after reinstalling the environment it runs normally.

Xls1994 · Aug 08 '24 10:08

@Xls1994 Hello, can the int4 version of MiniCPM-V-2_6 be accelerated with vllm? [image]

LJY6356 · Aug 12 '24 03:08

I am working on an AWQ int4 version, which already runs end to end; I am currently working on fused-operator acceleration, and once that is done it can be used with vllm. The current int4 release is bnb-quantized and cannot be accelerated with vllm.

LDLINGLINGLING · Aug 12 '24 03:08
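
For reference, loading a future AWQ checkpoint in vLLM would look roughly like the sketch below. The model path is hypothetical (no official AWQ release existed at the time), and vLLM feeds images through its separate multi-modal API, which is omitted here:

from vllm import LLM, SamplingParams

# Hypothetical checkpoint path -- substitute the real AWQ release once published.
llm = LLM(
    model="path/to/MiniCPM-V-2_6-awq",
    quantization="awq",      # weights are AWQ-quantized
    trust_remote_code=True,  # the model relies on custom remote code
)
params = SamplingParams(temperature=0.7, max_tokens=256)
outputs = llm.generate(["Describe the picture."], params)
print(outputs[0].outputs[0].text)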

This problem is a bit strange; it must have been the dependency packages. After I re-ran pip install -r requirements.txt it worked. The environment was probably changed earlier when I set up vllm.

colorfulandcjy0806 · Aug 13 '24 07:08

@LDLINGLINGLING May I ask: will the AWQ int4 version support vllm inference? How is the inference speed, roughly how many seconds before text starts coming out?

seasoncool · Aug 13 '24 13:08