
[BUG] Cannot import name 'convert_file_size_to_int' from 'transformers.utils.hub'

Open wqw547243068 opened this issue 1 year ago • 2 comments

Is there an existing issue for this?

  • [X] I have searched the existing issues

Current Behavior

Running the official example code in the default Kaggle environment.

Error message:

ImportError: cannot import name 'convert_file_size_to_int' from 'transformers.utils.hub' (/opt/conda/lib/python3.7/site-packages/transformers/utils/hub.py)
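For context on what the missing symbol does: `convert_file_size_to_int` parses a human-readable size string (e.g. `"10GB"`) into a byte count, and its module location has moved between transformers releases, which is why a pinned remote-code import can break. The following is a hypothetical re-implementation sketched purely for illustration; it is not the transformers source.

```python
import re

def convert_file_size_to_int(size):
    """Illustrative sketch: convert a size string like '10GB' or '5MiB'
    into an integer number of bytes. The real helper ships inside the
    transformers library; this stand-in only mirrors its intent."""
    if isinstance(size, int):
        return size
    units = {
        "GIB": 2**30, "MIB": 2**20, "KIB": 2**10,  # binary units
        "GB": 10**9, "MB": 10**6, "KB": 10**3,     # decimal units
    }
    match = re.fullmatch(r"(\d+)\s*([A-Za-z]+)", size.strip())
    if not match:
        raise ValueError(f"Unrecognized size string: {size!r}")
    number, unit = int(match.group(1)), match.group(2).upper()
    if unit not in units:
        raise ValueError(f"Unrecognized unit: {unit!r}")
    return number * units[unit]

print(convert_file_size_to_int("10GB"))   # 10000000000
print(convert_file_size_to_int("5MiB"))   # 5242880
```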

Expected Behavior

No response

Steps To Reproduce

env:

  • pip install protobuf==3.20.0 transformers==4.26.1 icetk cpm_kernels
from transformers import AutoTokenizer, AutoModel
tokenizer = AutoTokenizer.from_pretrained("THUDM/chatglm-6b", trust_remote_code=True)
model = AutoModel.from_pretrained("THUDM/chatglm-6b", trust_remote_code=True).half().cuda()
response, history = model.chat(tokenizer, "你好", history=[])
print(response)
response, history = model.chat(tokenizer, "晚上睡不着应该怎么办", history=history)
print(response)

Environment

- OS:
- Python: 3.7.12
- Transformers: 4.26.1
- PyTorch: 1.11.0
- CUDA Support (`python -c "import torch; print(torch.cuda.is_available())"`): True

Anything else?

No response

wqw547243068 avatar Mar 17 '23 04:03 wqw547243068

What is the default Kaggle environment?

duzx16 avatar Mar 17 '23 08:03 duzx16

What is the default Kaggle environment?

A newly created Kaggle notebook with the system default environment, nothing changed.

wqw547243068 avatar Mar 17 '23 11:03 wqw547243068

Too little information provided. That method is imported internally by the transformers library; it has nothing to do with this repository.
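Since the symbol's home module has differed across transformers releases, one generic coping pattern is to import it from the first module that actually exposes it. The helper below and the candidate module list are illustrative assumptions, not a documented transformers API:

```python
import importlib

def import_from_first(name, module_candidates):
    """Hypothetical helper: return attribute `name` from the first module
    in `module_candidates` that can be imported and exposes it. Useful
    when a symbol moves between library versions."""
    for module_name in module_candidates:
        try:
            module = importlib.import_module(module_name)
        except ImportError:
            continue
        if hasattr(module, name):
            return getattr(module, name)
    raise ImportError(f"{name!r} not found in any of {module_candidates}")

# Example usage (assumed candidate locations, not verified for every release):
# convert_file_size_to_int = import_from_first(
#     "convert_file_size_to_int",
#     ["transformers.utils.hub", "transformers.utils"],
# )
```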

duzx16 avatar Apr 12 '23 15:04 duzx16

Running on Kaggle, I get an error at inference time. What could be the cause?

File ~/.cache/huggingface/modules/transformers_modules/model/quantization.py:274, in extract_weight_to_half(weight, scale_list, source_bit_width)
    272     func = kernels.int8WeightExtractionHalf
    273 elif source_bit_width == 4:
--> 274     func = kernels.int4WeightExtractionHalf
    275 else:
    276     assert False, "Unsupported bit-width"

AttributeError: 'NoneType' object has no attribute 'int4WeightExtractionHalf'
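The AttributeError indicates that `kernels` is `None` at dispatch time, i.e. the cpm_kernels CUDA library failed to load (commonly because no compatible GPU or CUDA runtime is visible in the Kaggle session). A simplified sketch of the dispatch with an explicit guard, using hypothetical simplified signatures rather than the actual quantization.py code:

```python
def extract_weight_to_half(weight, scale_list, source_bit_width, kernels=None):
    """Simplified sketch of the bit-width dispatch in quantization.py.
    The real function launches a CUDA kernel; here we only show how a
    guard would turn the opaque AttributeError into a clear message."""
    if kernels is None:
        # In the real code, `kernels` is None when cpm_kernels could not
        # load its CUDA library, which yields the AttributeError above.
        raise RuntimeError(
            "cpm_kernels CUDA library failed to load; verify that a GPU is "
            "visible and the CUDA runtime matches the installed cpm_kernels."
        )
    if source_bit_width == 8:
        func = kernels.int8WeightExtractionHalf
    elif source_bit_width == 4:
        func = kernels.int4WeightExtractionHalf
    else:
        raise ValueError(f"Unsupported bit-width: {source_bit_width}")
    return func(weight, scale_list)
```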

qiunlp avatar Jun 01 '23 08:06 qiunlp