[BUG] MiniCPM-V-2_6 int4: tensor error during inference
Is there an existing issue / discussion for this?
- [X] I have searched the existing issues / discussions
Is there an existing answer for this in FAQ?
- [X] I have searched FAQ
Current Behavior

Running the official example code for the int4 version with transformers fails; the sampling step appears to be at fault. Full error output below:
Unused kwargs: ['_load_in_4bit', '_load_in_8bit', 'quant_method']. These kwargs are not used in <class 'transformers.utils.quantization_config.BitsAndBytesConfig'>.
low_cpu_mem_usage was None, now set to True since model is quantized.
Loading checkpoint shards: 100%|███████████████████████████████████████████████████████████| 2/2 [00:05<00:00, 2.76s/it]
Special tokens have been added in the vocabulary, make sure the associated word embeddings are fine-tuned or trained.
Special tokens have been added in the vocabulary, make sure the associated word embeddings are fine-tuned or trained.
Traceback (most recent call last):
  File "minicpm_2_6.py", line 19, in <module>
    ...
RuntimeError: probability tensor contains either `inf`, `nan` or element < 0
Expected Behavior

The script should run and produce a result; the previous V2.5 version ran fine in the exact same environment, so in principle this error should not occur.
Steps To Reproduce
No response
Environment

- OS: CentOS
- Python: 3.8.9
- Transformers: 4.40.0
- PyTorch: 2.1.0
- CUDA (`python -c 'import torch; print(torch.version.cuda)'`): 12.1
Anything else?
No response
The test code is below; this problem is genuinely puzzling.
import torch
from PIL import Image
from transformers import AutoModel, AutoTokenizer

path = "/home/deploy/user/yangyunlong/model"
model_path = path + "/MiniCPM-V-2_6-int4"

model = AutoModel.from_pretrained(model_path, trust_remote_code=True)
tokenizer = AutoTokenizer.from_pretrained(model_path, trust_remote_code=True)
model.eval()

img_path = path + "/rag.jpg"
image = Image.open(img_path).convert('RGB')
question = '这张图片上有什么内容?'
msgs = [{'role': 'user', 'content': [image, question]}]

res = model.chat(
    image=None,
    msgs=msgs,
    tokenizer=tokenizer
)
print(res)
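One diagnostic worth trying: disable sampling so generation uses greedy decoding and never calls `torch.multinomial`. This assumes `chat` accepts the `sampling` flag shown in the official MiniCPM-V examples; if greedy decoding succeeds, the NaN is being introduced on the sampling path rather than in the forward pass.

```python
# Diagnostic sketch, assuming chat() forwards a `sampling` flag as in the
# official examples: greedy decoding skips the multinomial draw entirely.
res = model.chat(
    image=None,
    msgs=msgs,
    tokenizer=tokenizer,
    sampling=False,  # greedy decoding; no probability tensor is sampled
)
print(res)
```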
@tc-mb please take a look.
Hi, I recall this issue has already been resolved through discussion, right? It seemed to be a package version problem.
Yes, it works now, so I'm closing this issue. The version of one dependency was abnormal; after reinstalling the environment it runs correctly.
@Xls1994 Hello, can the int4 version of MiniCPM-V-2_6 be accelerated with vLLM?
I'm working on an AWQ int4 version and it already runs; I'm currently working on operator-fusion acceleration, and once that's done it will work with vLLM. The current int4 release is bnb-quantized, which vLLM cannot accelerate.
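For anyone landing here later: once an AWQ checkpoint exists, it would load through vLLM's standard AWQ path. A minimal text-only sketch, assuming a hypothetical local path for the not-yet-released AWQ weights (image inputs go through vLLM's separate multimodal API and are not shown here):

```python
# Sketch only: the AWQ int4 weights mentioned above are not released yet,
# so the model path below is a placeholder.
from vllm import LLM, SamplingParams

llm = LLM(
    model="path/to/MiniCPM-V-2_6-awq-int4",  # hypothetical path
    quantization="awq",
    trust_remote_code=True,
)
params = SamplingParams(temperature=0.7, max_tokens=128)
outputs = llm.generate(["Describe the picture."], params)
print(outputs[0].outputs[0].text)
```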
This problem is a bit odd; it must have been the dependencies. After re-running `pip install -r requirements.txt` it works. Most likely the environment was altered earlier when I was setting up vllm.
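Since the thread never pinpoints which dependency was broken, a quick version check after reinstalling can at least confirm the environment matches the one reported above (expected versions in the comments come from the Environment section; `bitsandbytes` is assumed to be required by the bnb int4 checkpoint):

```python
# Environment sanity check; expected versions are the ones reported above.
import torch
import transformers
import bitsandbytes

print("torch:", torch.__version__)                # reporter used 2.1.0
print("transformers:", transformers.__version__)  # reporter used 4.40.0
print("bitsandbytes:", bitsandbytes.__version__)  # assumed needed for bnb int4
print("cuda:", torch.version.cuda)                # reporter used 12.1
```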
@LDLINGLINGLING May I ask: can the AWQ int4 be served with vLLM for inference? How is the inference speed, roughly how many seconds until the first token appears?