ChatGLM-6B
[BUG/Help] KeyError: 1 when loading the tokenizer from a Linux server terminal
Is there an existing issue for this?
- [X] I have searched the existing issues
Current Behavior
When I try to load the tokenizer with the following code:
from transformers import AutoTokenizer, AutoModel
tokenizer = AutoTokenizer.from_pretrained("THUDM/chatglm-6b", trust_remote_code=True)
the following error occurs:
Explicitly passing a revision is encouraged when loading a model with custom code to ensure no malicious code has been contributed in a newer revision.
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
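For reference, the warning above refers to the revision argument of from_pretrained. A minimal sketch of pinning a revision (the value "main" is only the default branch name used as a placeholder, not a specific known-good tag):

from transformers import AutoTokenizer

# Pin the remote code to a specific revision (branch, tag, or commit hash).
# "main" is a placeholder; substitute the revision you actually want to trust.
tokenizer = AutoTokenizer.from_pretrained(
    "THUDM/chatglm-6b",
    trust_remote_code=True,
    revision="main",
)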
Expected Behavior
No response
Steps To Reproduce
Enter the following:
from transformers import AutoTokenizer, AutoModel
tokenizer = AutoTokenizer.from_pretrained("THUDM/chatglm-6b", trust_remote_code=True)
Environment
- OS: Ubuntu 14.04
- Python: 3.8
- Transformers: 4.16.2
- PyTorch: 1.8.1
- CUDA Support (`python -c "import torch; print(torch.cuda.is_available())"`): True
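For completeness, the other versions listed above can be confirmed with the same one-liner style as the CUDA check (a sketch; the __version__ attributes are standard, the values reported should match the list above):

python -c "import transformers, torch; print(transformers.__version__, torch.__version__)"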
Anything else?
No response