hf-mirror-site
A Hugging Face mirror site.
In the evaluate issue tracker I found many problems related to hf-mirror, for example:

```
File "/modules/evaluate_modules/metrics/evaluate-metric--exact_match/009c8b5313309ea5b135d526433d5ee76508ba1554cbe88310a30f85bb57ec88/exact_match.py", line 16
    }
```

Is this error caused by an incomplete download?
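A truncated metric script in the modules cache would produce exactly this kind of syntax error. A minimal sketch that removes the cached script and lets `evaluate` fetch it again; the cache path is an assumption based on the default location under `~/.cache/huggingface/modules`, so adjust it to your environment:

```python
# Sketch: drop the possibly truncated metric script and force evaluate to re-download it.
# The cache directory below is an assumption (default HF modules cache); adjust as needed.
import os
import shutil

import evaluate

cache_dir = os.path.expanduser(
    "~/.cache/huggingface/modules/evaluate_modules/metrics/evaluate-metric--exact_match"
)
if os.path.isdir(cache_dir):
    shutil.rmtree(cache_dir)  # remove the cached (possibly incomplete) script

exact_match = evaluate.load("exact_match")  # re-downloads the metric script
print(exact_match.compute(predictions=["hello"], references=["hello"]))
```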
Loading a model with transformers.js sends requests to Hugging Face from the browser; if hf-mirror is used instead, the mirror's responses need to include the `Access-Control-Allow-Origin: *` header, since these are cross-origin browser requests.
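A quick way to check whether the mirror already sends that header is to inspect a file response directly. This is only a sketch; the repo and file are placeholders, not taken from this report:

```python
# Sketch: inspect the CORS header on a typical file request.
# The repo and file are placeholders; any public repo on the mirror would do.
import requests

resp = requests.head(
    "https://hf-mirror.com/Xenova/all-MiniLM-L6-v2/resolve/main/config.json",
    allow_redirects=True,
)
print(resp.status_code, resp.headers.get("Access-Control-Allow-Origin"))
```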
In a clean environment (Python 3.11, with only datasets==2.18.0 installed),

```python
from datasets import load_dataset

dataset = load_dataset("codeparrot/apps")
```

raises

```
Traceback (most recent call last):
  File "", line 1, in
  File "/home/xxx/miniforge3/envs/test/lib/python3.11/site-packages/datasets/load.py", line 2556, in load_dataset
    builder_instance...
```
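When the dataset is supposed to come through hf-mirror, the endpoint has to be set before `datasets` (and `huggingface_hub`) are imported, because it is read at import time. A minimal sketch, assuming hf-mirror.com is the intended endpoint:

```python
# Sketch: point the Hub client at the mirror *before* importing datasets,
# since HF_ENDPOINT is read when the library is first imported.
import os
os.environ["HF_ENDPOINT"] = "https://hf-mirror.com"

from datasets import load_dataset

dataset = load_dataset("codeparrot/apps")
print(dataset)
```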
I'm using `huggingface-cli download --resume-download --local-dir-use-symlinks False bigscience/bloom-560m --local-dir bloom-560m`, and HF_ENDPOINT has already been exported to point at the mirror. The first download was very fast, but the second attempt gets no response at all. Any ideas? Switching to a different server gives the same result.
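For comparison, the same download can be driven from Python with `huggingface_hub`; this is only a sketch mirroring the CLI flags above, and it honors the same exported HF_ENDPOINT as huggingface-cli:

```python
# Sketch: Python equivalent of the CLI command above.
from huggingface_hub import snapshot_download

snapshot_download(
    repo_id="bigscience/bloom-560m",
    local_dir="bloom-560m",
    local_dir_use_symlinks=False,  # same as --local-dir-use-symlinks False
    resume_download=True,          # same as --resume-download
)
```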
The downloaded file is missing data, and retrying does not fix it. `OSError: Consistency check failed: file should be of size 4983508568 but has size 3507967946 (model-00001-of-00006.safetensors). We are sorry for the...`
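One way out of a stuck consistency check is to discard the truncated cached copy and re-fetch just the file that failed. A sketch using standard `huggingface_hub` parameters; the repo id is a placeholder because the error message does not name it:

```python
# Sketch: force a clean re-download of the file that failed the size check.
# The repo id is a placeholder (the error above does not include it).
from huggingface_hub import hf_hub_download

path = hf_hub_download(
    repo_id="your-org/your-model",                # replace with the actual repo
    filename="model-00001-of-00006.safetensors",
    force_download=True,                          # ignore the truncated cached copy
)
print(path)
```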

This started happening frequently today: load_dataset has become much slower than before, and after a long wait it throws

```
requests.exceptions.ConnectionError: (ProtocolError('Connection aborted.', RemoteDisconnected('Remote end closed connection without response')), '(Request ID: 2d8e5fc4-b6df-4b44-bebd-ba2fc95332f4)')
```

but huggingface-cli and `ping hf-mirror.com` both still work fine.
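To narrow down where the stall happens, it can help to time the two kinds of HTTP requests `load_dataset` makes: metadata via the `/api/datasets/...` route and file fetches via `/resolve/...`. A rough sketch; the dataset name is only an example:

```python
# Sketch: time the Hub API route and the file-resolve route on the mirror
# to see which one stalls or gets its connection reset.
import time

import requests

base = "https://hf-mirror.com"
urls = [
    f"{base}/api/datasets/codeparrot/apps",                     # metadata route
    f"{base}/datasets/codeparrot/apps/resolve/main/README.md",  # file route
]
for url in urls:
    start = time.time()
    try:
        resp = requests.get(url, timeout=30, allow_redirects=True)
        print(url, resp.status_code, f"{time.time() - start:.1f}s")
    except requests.exceptions.RequestException as exc:
        print(url, "failed:", exc)
```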
> ConnectionResetError: [Errno 104] Connection reset by peer

Downloading without a proxy fails with this error, but with a proxy enabled the download works from the US CDN.
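The standard proxy environment variables are honored by the underlying `requests` sessions, so routing the download through the proxy looks roughly like this; the proxy address is a placeholder and the repo is just an example:

```python
# Sketch: send Hub traffic through a local proxy; the address is a placeholder.
import os

os.environ["HTTPS_PROXY"] = "http://127.0.0.1:7890"

from huggingface_hub import hf_hub_download

hf_hub_download(repo_id="bigscience/bloom-560m", filename="config.json")
```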
models: https://huggingface.co/internlm/internlm-xcomposer2-4khd-7b

### Reproduce code:

```python
import torch
from transformers import AutoModel, AutoTokenizer

torch.set_grad_enabled(False)

# init model and tokenizer
model = AutoModel.from_pretrained(
    'internlm/internlm-xcomposer2-4khd-7b',
    torch_dtype=torch.bfloat16,
    trust_remote_code=True,
).cuda().eval()
```

### Errors

OSError: We couldn't...
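When the OSError is a connectivity problem, one workaround is to download the whole repo through the mirror first and then load from the local path, so `from_pretrained` never needs to reach huggingface.co. A sketch, assuming hf-mirror.com as the endpoint:

```python
# Sketch: pre-fetch the repo via the mirror, then load from the local snapshot path.
import os
os.environ["HF_ENDPOINT"] = "https://hf-mirror.com"  # set before importing huggingface_hub

import torch
from huggingface_hub import snapshot_download
from transformers import AutoModel, AutoTokenizer

local_path = snapshot_download("internlm/internlm-xcomposer2-4khd-7b")

torch.set_grad_enabled(False)
model = AutoModel.from_pretrained(
    local_path, torch_dtype=torch.bfloat16, trust_remote_code=True
).cuda().eval()
tokenizer = AutoTokenizer.from_pretrained(local_path, trust_remote_code=True)
```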