ChatGLM-6B
[BUG] Cannot import name 'convert_file_size_to_int' from 'transformers.utils.hub'
Is there an existing issue for this?
- [X] I have searched the existing issues
Current Behavior
Running the official example code in the default Kaggle environment.
Error message:
ImportError: cannot import name 'convert_file_size_to_int' from 'transformers.utils.hub' (/opt/conda/lib/python3.7/site-packages/transformers/utils/hub.py)
Expected Behavior
No response
Steps To Reproduce
env:
- pip install protobuf==3.20.0 transformers==4.26.1 icetk cpm_kernels
from transformers import AutoTokenizer, AutoModel
tokenizer = AutoTokenizer.from_pretrained("THUDM/chatglm-6b", trust_remote_code=True)
model = AutoModel.from_pretrained("THUDM/chatglm-6b", trust_remote_code=True).half().cuda()
response, history = model.chat(tokenizer, "你好", history=[])
print(response)
response, history = model.chat(tokenizer, "晚上睡不着应该怎么办", history=history)
print(response)
Environment
- OS:
- Python: 3.7.12
- Transformers: 4.26.1
- PyTorch: 1.11.0
- CUDA Support (`python -c "import torch; print(torch.cuda.is_available())"`): True
Anything else?
No response
What is the "default Kaggle environment"?
A newly created Kaggle notebook with the system default environment, nothing changed.
Too little information provided. This function is imported internally by the transformers library; it has nothing to do with this repository.
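Since the failing import is inside transformers itself, the mismatch is between the transformers copy actually on `sys.path` and the one the example expects. Below is a minimal sketch, assuming `convert_file_size_to_int` behaves like the size-string parser its name suggests ("5GB" → bytes); this is a simplified re-implementation for illustration, not the library's actual code:

```python
def convert_file_size_to_int_sketch(size):
    """Simplified sketch: parse a human-readable size ("5GB", "3GiB")
    into an integer byte count, as the transformers helper does."""
    if isinstance(size, int):
        return size
    # Binary suffixes must be checked before decimal ones
    # so that "GiB" is not matched by the shorter "GB".
    units = {
        "GIB": 2**30, "MIB": 2**20, "KIB": 2**10,
        "GB": 10**9, "MB": 10**6, "KB": 10**3,
    }
    s = size.upper()
    for suffix, factor in units.items():
        if s.endswith(suffix):
            return int(float(s[: -len(suffix)]) * factor)
    raise ValueError(f"size {size!r} is not in a valid format")

print(convert_file_size_to_int_sketch("5GB"))   # 5000000000
print(convert_file_size_to_int_sketch("3GiB"))  # 3221225472
```

If `from transformers.utils.hub import convert_file_size_to_int` fails even with transformers 4.26.1 pinned, it usually means an older pre-installed transformers (or a cached copy) shadows the pinned one; printing `transformers.__version__` and `transformers.__file__` shows which copy is actually being imported.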
Running on Kaggle, inference fails with an error. What could be the cause?
File ~/.cache/huggingface/modules/transformers_modules/model/quantization.py:274, in extract_weight_to_half(weight, scale_list, source_bit_width)
    272     func = kernels.int8WeightExtractionHalf
    273 elif source_bit_width == 4:
--> 274     func = kernels.int4WeightExtractionHalf
    275 else:
    276     assert False, "Unsupported bit-width"
AttributeError: 'NoneType' object has no attribute 'int4WeightExtractionHalf'
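The AttributeError means the `kernels` object in quantization.py is `None`: the compiled CUDA quantization kernels (provided by cpm_kernels) failed to load, typically because the CUDA runtime library is not visible to the process. A minimal sketch of the failure mode and a clearer guard (hypothetical names mirroring the traceback, not the repository's actual code):

```python
class Kernels:
    """Stand-in for the loaded cpm_kernels handle (hypothetical)."""
    int8WeightExtractionHalf = "int8-kernel"
    int4WeightExtractionHalf = "int4-kernel"

def extract_weight_to_half(kernels, source_bit_width):
    # When kernel loading fails, `kernels` is None and the attribute
    # access below would raise the opaque AttributeError seen above.
    # Failing early with an explicit message is clearer:
    if kernels is None:
        raise RuntimeError(
            "CUDA quantization kernels failed to load "
            "(is cpm_kernels installed and is libcudart visible?)"
        )
    if source_bit_width == 8:
        return kernels.int8WeightExtractionHalf
    elif source_bit_width == 4:
        return kernels.int4WeightExtractionHalf
    else:
        raise ValueError("Unsupported bit-width")

print(extract_weight_to_half(Kernels(), 4))  # int4-kernel
```

On Kaggle, checking that `torch.cuda.is_available()` is True and that cpm_kernels imports without warnings is a reasonable first diagnostic before running the quantized model.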