ChatGLM-6B

[BUG/Help] Since the evening of 2023-05-07: huggingface.co is unreachable and the tokenizer config file cannot be fetched!

Open mikestut opened this issue 1 year ago • 17 comments

Is there an existing issue for this?

  • [X] I have searched the existing issues

Current Behavior

Everything worked fine on 2023-05-06; on the evening of 05-07 it stopped working.

python web_demo.py

'HTTPSConnectionPool(host='huggingface.co', port=443): Max retries exceeded with url: /THUDM/chatglm-6b/resolve/main/tokenizer_config.json (Caused by ConnectTimeoutError(<urllib3.connection.HTTPSConnection object at 0x7f4df05f51f0>, 'Connection to huggingface.co timed out. (connect timeout=10)'))' thrown while requesting HEAD https://huggingface.co/THUDM/chatglm-6b/resolve/main/tokenizer_config.json

Traceback (most recent call last):
  File "/home/tyn/ChatGLM-6B/web_demo.py", line 6, in <module>
    tokenizer = AutoTokenizer.from_pretrained("THUDM/chatglm-6b", trust_remote_code=False)
  File "/root/miniconda3/envs/thudm/lib/python3.9/site-packages/transformers/models/auto/tokenization_auto.py", line 646, in from_pretrained
    raise ValueError(
ValueError: Loading THUDM/chatglm-6b requires you to execute the tokenizer file in that repo on your local machine. Make sure you have read the code there to avoid malicious use, then set the option trust_remote_code=True to remove this error.

After changing the line to tokenizer = AutoTokenizer.from_pretrained("THUDM/chatglm-6b", trust_remote_code=True), the following error is raised instead:

Traceback (most recent call last):
  File "/home/tyn/ChatGLM-6B/web_demo.py", line 6, in <module>
    tokenizer = AutoTokenizer.from_pretrained("THUDM/chatglm-6b", trust_remote_code=True)
  File "/root/miniconda3/envs/thudm/lib/python3.9/site-packages/transformers/models/auto/tokenization_auto.py", line 619, in from_pretrained
    tokenizer_config = get_tokenizer_config(pretrained_model_name_or_path, **kwargs)
  File "/root/miniconda3/envs/thudm/lib/python3.9/site-packages/transformers/models/auto/tokenization_auto.py", line 463, in get_tokenizer_config
    resolved_config_file = cached_file(
  File "/root/miniconda3/envs/thudm/lib/python3.9/site-packages/transformers/utils/hub.py", line 409, in cached_file
    resolved_file = hf_hub_download(
  File "/root/miniconda3/envs/thudm/lib/python3.9/site-packages/huggingface_hub/utils/_validators.py", line 120, in _inner_fn
    return fn(*args, **kwargs)
  File "/root/miniconda3/envs/thudm/lib/python3.9/site-packages/huggingface_hub/file_download.py", line 1134, in hf_hub_download
    metadata = get_hf_file_metadata(
  File "/root/miniconda3/envs/thudm/lib/python3.9/site-packages/huggingface_hub/utils/_validators.py", line 120, in _inner_fn
    return fn(*args, **kwargs)
  File "/root/miniconda3/envs/thudm/lib/python3.9/site-packages/huggingface_hub/file_download.py", line 1466, in get_hf_file_metadata
    r = _request_wrapper(
  File "/root/miniconda3/envs/thudm/lib/python3.9/site-packages/huggingface_hub/file_download.py", line 407, in _request_wrapper
    response = _request_wrapper(
  File "/root/miniconda3/envs/thudm/lib/python3.9/site-packages/huggingface_hub/file_download.py", line 442, in _request_wrapper
    return http_backoff(
  File "/root/miniconda3/envs/thudm/lib/python3.9/site-packages/huggingface_hub/utils/_http.py", line 129, in http_backoff
    response = requests.request(method=method, url=url, **kwargs)
  File "/root/miniconda3/envs/thudm/lib/python3.9/site-packages/requests/api.py", line 59, in request
    return session.request(method=method, url=url, **kwargs)
  File "/root/miniconda3/envs/thudm/lib/python3.9/site-packages/requests/sessions.py", line 587, in request
    resp = self.send(prep, **send_kwargs)
  File "/root/miniconda3/envs/thudm/lib/python3.9/site-packages/requests/sessions.py", line 701, in send
    r = adapter.send(request, **kwargs)
  File "/root/miniconda3/envs/thudm/lib/python3.9/site-packages/requests/adapters.py", line 563, in send
    raise SSLError(e, request=request)
requests.exceptions.SSLError: HTTPSConnectionPool(host='huggingface.co', port=443): Max retries exceeded with url: /THUDM/chatglm-6b/resolve/main/tokenizer_config.json (Caused by SSLError(CertificateError("hostname 'huggingface.co' doesn't match 'm3u8.se9913.com'")))

Expected Behavior

Looking for a solution; other users are probably hitting the same problem.

Steps To Reproduce

cd ChatGLM-6B

conda env

python web_demo.py

Then the error above is raised.

Environment

- OS:
- Python:
- Transformers:
- PyTorch:
- CUDA Support (`python -c "import torch; print(torch.cuda.is_available())"`) :

Anything else?

It's not an environment problem; the failure should be at https://huggingface.co/THUDM/chatglm-6b/resolve/main/tokenizer_config.json

mikestut avatar May 07 '23 14:05 mikestut

I'm hitting this too, no idea why; it was fine the day before yesterday.

lyric135 avatar May 07 '23 15:05 lyric135

I'm hitting this too, no idea why; it was fine the day before yesterday.

It's caused by huggingface.co being unreachable; you need to load the model locally.

config.json -> /root/.cache/huggingface/hub/models--THUDM--chatglm-6b/blobs/a907a4d270fc3456b076018f28b57b9925e56750
configuration_chatglm.py -> /root/.cache/huggingface/hub/models--THUDM--chatglm-6b/blobs/b4b196a51c922f44936b6f6bd2bbd80980f1755d
ice_text.model -> /root/.cache/huggingface/hub/models--THUDM--chatglm-6b/blobs/99871e0c85db81ad7af1028854fd091cd5778c8414ae9d94bbbc10d02c831c21
modeling_chatglm.py -> /root/.cache/huggingface/hub/models--THUDM--chatglm-6b/blobs/10a1df2c45e5b3271446da824823bb52e30b50fb
pytorch_model-00001-of-00008.bin -> /root/.cache/huggingface/hub/models--THUDM--chatglm-6b/blobs/fe5bac6bfa5b5404ddfe3fabe04862b785e013afd7b308b7beca08239f9489fa
pytorch_model-00002-of-00008.bin -> /root/.cache/huggingface/hub/models--THUDM--chatglm-6b/blobs/a80198fb714f7363d7e541125bb70b9cb6b1d1ef5988d32a7a25a852a374cbc3
pytorch_model-00003-of-00008.bin -> /root/.cache/huggingface/hub/models--THUDM--chatglm-6b/blobs/aaba0ae53b3ea30559575c8528dab52ca291a26ac847c5601fcf874db401198f
pytorch_model-00004-of-00008.bin -> /root/.cache/huggingface/hub/models--THUDM--chatglm-6b/blobs/968d134dd9b11e393d160144f097d6bff8c559413e3f75e9e0b6d35618eba669
pytorch_model-00005-of-00008.bin -> /root/.cache/huggingface/hub/models--THUDM--chatglm-6b/blobs/fc628ce0dcd5c38783e63fc81dd1b609fe01670ec3b855b358aa0d1d7ea48bf3
pytorch_model-00006-of-00008.bin -> /root/.cache/huggingface/hub/models--THUDM--chatglm-6b/blobs/511ec23b7907b7a26461671775a2ac08c08fb3695285bbe7d91fc534d7cbfd7e
pytorch_model-00007-of-00008.bin -> /root/.cache/huggingface/hub/models--THUDM--chatglm-6b/blobs/245d64e05cebeb214d696bccc87c1dbdf16c67c366e7f54af452ec5748c2186e
pytorch_model-00008-of-00008.bin -> /root/.cache/huggingface/hub/models--THUDM--chatglm-6b/blobs/e764ebdece24219efeda3c18aa32fe6414da3d1f533df8845815609e9ef7f4a7
pytorch_model.bin.index.json -> /root/.cache/huggingface/hub/models--THUDM--chatglm-6b/blobs/b8ada2bdf39c8297dc2b3159270227c587bd13e9
quantization.py -> /root/.cache/huggingface/hub/models--THUDM--chatglm-6b/blobs/bf30790cd8cd4adbead4f8cb64afd6db7ff29cf2
tokenization_chatglm.py -> /root/.cache/huggingface/hub/models--THUDM--chatglm-6b/blobs/5f594e62e671d682fab23e42922102cee1078f7e
tokenizer_config.json -> /root/.cache/huggingface/hub/models--THUDM--chatglm-6b/blobs/21c1748d14ccbb115d065fcf87007c0c0e6c098e

Rename the files according to this mapping, copy them into path/to/tokenizer, then change the load path:

tokenizer = AutoTokenizer.from_pretrained("path/to/tokenizer")
model = AutoModel.from_pretrained("path/to/tokenizer")

Give it a try!

mikestut avatar May 07 '23 16:05 mikestut

cp /root/.cache/huggingface/hub/models--THUDM--chatglm-6b/blobs/a907a4d270fc3456b076018f28b57b9925e56750 tokenizer/config.json
cp /root/.cache/huggingface/hub/models--THUDM--chatglm-6b/blobs/b4b196a51c922f44936b6f6bd2bbd80980f1755d tokenizer/configuration_chatglm.py
cp /root/.cache/huggingface/hub/models--THUDM--chatglm-6b/blobs/99871e0c85db81ad7af1028854fd091cd5778c8414ae9d94bbbc10d02c831c21 tokenizer/ice_text.model
cp /root/.cache/huggingface/hub/models--THUDM--chatglm-6b/blobs/10a1df2c45e5b3271446da824823bb52e30b50fb tokenizer/modeling_chatglm.py
cp /root/.cache/huggingface/hub/models--THUDM--chatglm-6b/blobs/fe5bac6bfa5b5404ddfe3fabe04862b785e013afd7b308b7beca08239f9489fa tokenizer/pytorch_model-00001-of-00008.bin
cp /root/.cache/huggingface/hub/models--THUDM--chatglm-6b/blobs/a80198fb714f7363d7e541125bb70b9cb6b1d1ef5988d32a7a25a852a374cbc3 tokenizer/pytorch_model-00002-of-00008.bin
cp /root/.cache/huggingface/hub/models--THUDM--chatglm-6b/blobs/aaba0ae53b3ea30559575c8528dab52ca291a26ac847c5601fcf874db401198f tokenizer/pytorch_model-00003-of-00008.bin
cp /root/.cache/huggingface/hub/models--THUDM--chatglm-6b/blobs/968d134dd9b11e393d160144f097d6bff8c559413e3f75e9e0b6d35618eba669 tokenizer/pytorch_model-00004-of-00008.bin
cp /root/.cache/huggingface/hub/models--THUDM--chatglm-6b/blobs/fc628ce0dcd5c38783e63fc81dd1b609fe01670ec3b855b358aa0d1d7ea48bf3 tokenizer/pytorch_model-00005-of-00008.bin
cp /root/.cache/huggingface/hub/models--THUDM--chatglm-6b/blobs/511ec23b7907b7a26461671775a2ac08c08fb3695285bbe7d91fc534d7cbfd7e tokenizer/pytorch_model-00006-of-00008.bin
cp /root/.cache/huggingface/hub/models--THUDM--chatglm-6b/blobs/245d64e05cebeb214d696bccc87c1dbdf16c67c366e7f54af452ec5748c2186e tokenizer/pytorch_model-00007-of-00008.bin
cp /root/.cache/huggingface/hub/models--THUDM--chatglm-6b/blobs/e764ebdece24219efeda3c18aa32fe6414da3d1f533df8845815609e9ef7f4a7 tokenizer/pytorch_model-00008-of-00008.bin
cp /root/.cache/huggingface/hub/models--THUDM--chatglm-6b/blobs/b8ada2bdf39c8297dc2b3159270227c587bd13e9 tokenizer/pytorch_model.bin.index.json
cp /root/.cache/huggingface/hub/models--THUDM--chatglm-6b/blobs/bf30790cd8cd4adbead4f8cb64afd6db7ff29cf2 tokenizer/quantization.py
cp /root/.cache/huggingface/hub/models--THUDM--chatglm-6b/blobs/5f594e62e671d682fab23e42922102cee1078f7e tokenizer/tokenization_chatglm.py
cp /root/.cache/huggingface/hub/models--THUDM--chatglm-6b/blobs/21c1748d14ccbb115d065fcf87007c0c0e6c098e tokenizer/tokenizer_config.json

Then a missing-module error came up at runtime. Uninstall torch 1.10 first, install torch 1.8, then pip install icetk; once that's installed, reinstall torch 1.10.* and run python web_demo.py.

mikestut avatar May 07 '23 17:05 mikestut

If you downloaded the model before, you can load it straight from the local cache.

Find the local model path:

ls /root/.cache/huggingface/hub/models--THUDM--chatglm-6b/snapshots/
# 658202d88ac4bb782b99e99ac3adff58b4d0b813

Edit web_demo.py:

model_path = "/root/.cache/huggingface/hub/models--THUDM--chatglm-6b/snapshots/658202d88ac4bb782b99e99ac3adff58b4d0b813/"
tokenizer = AutoTokenizer.from_pretrained(model_path)
model = AutoModel.from_pretrained(model_path)
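If the revision hash isn't known up front, it can also be picked up programmatically rather than hard-coded. A minimal, self-contained sketch: the temp directory below stands in for /root/.cache/huggingface/hub/models--THUDM--chatglm-6b/snapshots, and the first directory name is made up.

```python
import os
import tempfile
import time

# Stand-in for the snapshots/ directory of the local HF cache; "0ldrev00"
# is a fabricated older revision, the second name matches the real one.
cache = tempfile.mkdtemp()
for rev in ("0ldrev00", "658202d88ac4bb782b99e99ac3adff58b4d0b813"):
    os.makedirs(os.path.join(cache, rev))
    time.sleep(0.1)  # make the second directory measurably newer

# Pick the most recently written snapshot so the revision hash does not
# have to be pasted into web_demo.py by hand.
model_path = max(
    (os.path.join(cache, d) for d in os.listdir(cache)),
    key=os.path.getmtime,
)
print(os.path.basename(model_path))
```

On a real machine, point `cache` at the actual snapshots directory instead of the temp stand-in and pass `model_path` to from_pretrained.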

When fine-tuning, change --model_name_or_path in train.sh:

--model_name_or_path /root/.cache/huggingface/hub/models--THUDM--chatglm-6b/snapshots/658202d88ac4bb782b99e99ac3adff58b4d0b813

If you haven't downloaded it yet, try fetching the model files manually; users in mainland China can try this link:

https://cloud.tsinghua.edu.cn/d/674208019e314311ab5c/?p=%2F&mode=list

The model is too large to be bundled into one archive; the command below downloads everything in one go:

curl "https://cloud.tsinghua.edu.cn/api/v2.1/share-links/674208019e314311ab5c/dirents/?thumbnail_size=48&path=%2Fchatglm-6b%2F" | jq  ".dirent_list[].file_name" | xargs -I {}  wget https://cloud.tsinghua.edu.cn/seafhttp/files/5ea6bba5-d11e-4eea-b64d-2aa1bf2c8ef7/{}

After downloading, replace model_path with the directory the model was saved to.

pansinm avatar May 08 '23 02:05 pansinm

@pansinm Thanks pansinm, solved it with your method.

1. To avoid the error ValueError: Loading ./model/ requires you to execute the tokenizer file in that repo on your local machine. Make sure you have read the code there to avoid malicious use, then set the option trust_remote_code=True to remove this error. — use the following code:

model_path = "./model/"
tokenizer = AutoTokenizer.from_pretrained(model_path, trust_remote_code=True)
model = AutoModel.from_pretrained(model_path, trust_remote_code=True).half().cuda()

2. To guard against the cache being cleaned out, copy the files under the cache snapshot (658202d88ac4bb782b99e99ac3adff58b4d0b813) into the project directory, e.g. into ./model
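One detail when copying the snapshot out of the cache: the cache keeps the real files in blobs/ and exposes them through symlinks under snapshots/<hash>/, so a link-preserving copy would leave ./model full of dangling links once the cache is cleaned. A self-contained sketch of the dereferencing copy; the temp layout and file content below are stand-ins for the real cache, not its actual contents.

```python
import os
import shutil
import tempfile

# Miniature stand-in for the HF cache layout: blobs/ holds the real file,
# snapshots/<hash>/ holds a symlink pointing at it.
root = tempfile.mkdtemp()
blobs = os.path.join(root, "blobs")
snap = os.path.join(root, "snapshots", "658202d8")
os.makedirs(blobs)
os.makedirs(snap)
with open(os.path.join(blobs, "21c1748d"), "w") as f:
    f.write('{"tokenizer_class": "ChatGLMTokenizer"}')
os.symlink(os.path.join(blobs, "21c1748d"),
           os.path.join(snap, "tokenizer_config.json"))

# shutil.copytree with symlinks=False (the default) follows the links, so
# the destination receives real file contents, safe from cache cleanup.
model_dir = os.path.join(root, "model")
shutil.copytree(snap, model_dir)

print(open(os.path.join(model_dir, "tokenizer_config.json")).read())
```

The shell equivalent would be cp with -L (dereference links) rather than a plain recursive copy.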

little51 avatar May 08 '23 03:05 little51

I guess it's because the energy of the huggingface exceeds 1.8 joules per square centimeter

zuojianghua avatar May 08 '23 03:05 zuojianghua

If you haven't downloaded the model yet, you'll need a proxy to get around the block: download the model locally, upload it to the server, and load it from there. For example, with the model directory at /root/autodl-tmp/chatglm/chatglm_model:

  • In web_demo.py, change the path the model is loaded from
# The default path:
tokenizer = AutoTokenizer.from_pretrained("THUDM/chatglm-6b", trust_remote_code=True)
model = AutoModel.from_pretrained("THUDM/chatglm-6b", trust_remote_code=True).half().cuda()

# Change it to the local path:
tokenizer = AutoTokenizer.from_pretrained("/root/autodl-tmp/chatglm/chatglm_model", trust_remote_code=True)
model = AutoModel.from_pretrained("/root/autodl-tmp/chatglm/chatglm_model", trust_remote_code=True).half().cuda()

GOOD LUCK!

xiaobaimao-wan avatar May 08 '23 03:05 xiaobaimao-wan

Does anyone know what caused this? It was working fine before and then suddenly stopped.

liseri avatar May 08 '23 05:05 liseri

huggingface.co seems to have been blocked by the firewall

AceyKubbo avatar May 08 '23 07:05 AceyKubbo

AutoTokenizer.from_pretrained(model_name) does load the model from the cache, but transformers first fetches the commit hash from huggingface.co to check whether the local cache is current. It's that step that fails; loading from the local cache is never even reached.
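If the cache is already complete, that online version check can be skipped entirely. A minimal sketch, assuming a transformers/huggingface_hub version that honors HF_HUB_OFFLINE and local_files_only; the from_pretrained call itself is left commented out since it needs the cached model present.

```python
import os

# Setting this before transformers / huggingface_hub are imported makes the
# hub client skip the online commit-hash check and resolve everything from
# the local cache instead of raising a connection error.
os.environ["HF_HUB_OFFLINE"] = "1"

# The per-call equivalent (sketch; requires the model to be fully cached):
#   from transformers import AutoTokenizer
#   tokenizer = AutoTokenizer.from_pretrained(
#       "THUDM/chatglm-6b", trust_remote_code=True, local_files_only=True)
print(os.environ["HF_HUB_OFFLINE"])
```

Either form keeps web_demo.py working while huggingface.co is unreachable, at the cost of never picking up upstream updates.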

little51 avatar May 08 '23 07:05 little51

Haha, it works again now...

xiaobaimao-wan avatar May 08 '23 09:05 xiaobaimao-wan

Haha, it works again now...

Yes, confirmed working here too. No telling how long it will last, though; if this is just a window, doing some localization is still worth it.

mikestut avatar May 08 '23 11:05 mikestut

Access from mainland China is flaky: it cuts out for half an hour at a time, then comes back.

anuxs avatar Jun 28 '23 01:06 anuxs

Traceback (most recent call last):
  File "/Users/pengwenzhi/Desktop/demo.py", line 1, in <module>
    from transformers import AutoTokenizer, AutoModel
  File "/Users/pengwenzhi/opt/anaconda3/lib/python3.9/site-packages/transformers/__init__.py", line 26, in <module>
    from . import dependency_versions_check
  File "/Users/pengwenzhi/opt/anaconda3/lib/python3.9/site-packages/transformers/dependency_versions_check.py", line 16, in <module>
    from .utils.versions import require_version, require_version_core
  File "/Users/pengwenzhi/opt/anaconda3/lib/python3.9/site-packages/transformers/utils/__init__.py", line 18, in <module>
    from huggingface_hub import get_full_repo_name  # for backward compatibility
ImportError: cannot import name 'get_full_repo_name' from 'huggingface_hub' (/Users/pengwenzhi/opt/anaconda3/lib/python3.9/site-packages/huggingface_hub/__init__.py)

Has anyone run into this? Looking for help.

pengwenzhi avatar Nov 08 '23 14:11 pengwenzhi

ImportError: cannot import name 'get_full_repo_name' from 'huggingface_hub'

Has anyone run into this? Looking for help.

Did you ever solve it?

kimiyeah001 avatar Jan 11 '24 05:01 kimiyeah001

ImportError: cannot import name 'get_full_repo_name' from 'huggingface_hub'

Has anyone run into this? Looking for help.

Any solution yet?

gaohan-cmd avatar Jun 03 '24 07:06 gaohan-cmd

This solves it:

import os
os.environ["HF_ENDPOINT"] = "https://hf-mirror.com"

315386775 avatar Jun 12 '24 07:06 315386775