
[BUG] AutoGPTQForCausalLM.from_quantized("Qwen/Qwen-VL-Chat-Int4", ...) raises an error

xiayq1 opened this issue 10 months ago · 2 comments

Is there an existing issue / discussion for this?

  • [X] I have searched the existing issues / discussions

Is there an existing answer for this in FAQ?

  • [X] I have searched the FAQ

Current Behavior

From the script found in the repo:

```python
elif quant_type == "int4":
    # please install AutoGPTQ following the readme to use quantization
    from auto_gptq import AutoGPTQForCausalLM
    model = AutoGPTQForCausalLM.from_quantized(
        "Qwen/Qwen-VL-Chat-Int4",
        device="cuda:0",
        trust_remote_code=True,
        use_safetensors=True,
        use_flash_attn=use_flash_attn,
    ).eval()
```

Running it fails with:

```
FileNotFoundError: Could not find a model in Qwen-VL-Chat-Int4 with a name in model.safetensors. Please specify the argument model_basename to use a custom file name.
```

I found someone else hitting the same problem: https://github.com/AutoGPTQ/AutoGPTQ/issues/319

I looked at the source code, and it loads a single model file. But aren't there five?
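
Since the loader complains about one missing model.safetensors, a quick way to confirm how the checkpoint's weights are actually named is to list the repo files. A minimal sketch using huggingface_hub (assumes the package is installed and the Hub is reachable):

```python
# List the files in the quantized checkpoint to see how the weights are split.
from huggingface_hub import list_repo_files

files = list_repo_files("Qwen/Qwen-VL-Chat-Int4")

# Keep only the safetensors weights; a sharded checkpoint shows several
# files like model-0000X-of-0000Y.safetensors instead of one model.safetensors.
shards = [f for f in files if f.endswith(".safetensors")]
print(shards)
```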

So why does the author's test script work? Loading the model with transformers and running inference works fine for me.
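
For reference, loading the same checkpoint through transformers looks roughly like this (a sketch following the Int4 model card; the exact device_map may differ per setup):

```python
# Load Qwen-VL-Chat-Int4 via transformers instead of AutoGPTQ.
# trust_remote_code is required for Qwen's custom modeling code.
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained(
    "Qwen/Qwen-VL-Chat-Int4", trust_remote_code=True
)
model = AutoModelForCausalLM.from_pretrained(
    "Qwen/Qwen-VL-Chat-Int4", device_map="cuda:0", trust_remote_code=True
).eval()
```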

Expected Behavior

None

Steps To Reproduce

None

Environment

- OS:
- Python:
- Transformers:
- PyTorch:
- CUDA (`python -c 'import torch; print(torch.version.cuda)'`):

Anything else?

None

xiayq1 · Apr 25 '24 14:04

I'm running into the same problem. Have you found a solution?

li563042811 · May 08 '24 02:05

Updating auto_gptq to the latest version on GitHub fixes it; they have resolved this issue.
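
A minimal sketch of that fix (the install command is an assumption; adjust for your environment):

```python
# Upgrade auto_gptq to the latest source from GitHub, then re-run the
# original loading code unchanged. Install step (shell, assumed):
#   pip install -U git+https://github.com/AutoGPTQ/AutoGPTQ.git
from auto_gptq import AutoGPTQForCausalLM

model = AutoGPTQForCausalLM.from_quantized(
    "Qwen/Qwen-VL-Chat-Int4",
    device="cuda:0",
    trust_remote_code=True,
    use_safetensors=True,
    # use_flash_attn omitted here for brevity; pass it as in the repo script.
).eval()
```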

li563042811 · May 08 '24 07:05