Every time I try to download the meta-llama/Llama-2-7b-chat-hf model I get an error
Describe the bug
Every time I try to download the meta-llama/Llama-2-7b-chat-hf model I get this error:
Is there an existing issue for this?
- [X] I have searched the existing issues
Reproduction
Try to download meta-llama/Llama-2-7b-chat-hf.
Screenshot
Logs
Traceback (most recent call last):
File "C:\text-generation-webui-main\modules\ui_model_menu.py", line 244, in download_model_wrapper
links, sha256, is_lora, is_llamacpp = downloader.get_download_links_from_huggingface(model, branch, text_only=False, specific_file=specific_file)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\text-generation-webui-main\download-model.py", line 74, in get_download_links_from_huggingface
r.raise_for_status()
File "C:\text-generation-webui-main\installer_files\env\Lib\site-packages\requests\models.py", line 1021, in raise_for_status
raise HTTPError(http_error_msg, response=self)
requests.exceptions.HTTPError: 401 Client Error: Unauthorized for url: https://huggingface.co/api/models/meta-llama/Llama-2-7b-chat-hf/tree/main
System Info
Windows 11, NVIDIA RTX 2060
@Chovanec
Use a different model, like https://huggingface.co/NousResearch/Nous-Hermes-llama-2-7b, which is not gated and is optimised, OR:
- You first need to register with Meta to get a download link: https://ai.meta.com/resources/models-and-libraries/llama-downloads/
- Make sure you have a Hugging Face account with the same email address.
- Then go to the model page on Hugging Face: https://huggingface.co/meta-llama/Llama-2-7b-hf
- Make sure you see that you have been granted access to the model.
Or you can get this model: https://huggingface.co/TheBloke/Llama-2-7B-Chat-GPTQ
Use this in the download field: TheBloke/Llama-2-7b-Chat-GPTQ:gptq-4bit-32g-actorder_True
To download that model, we need to pass a Hugging Face token to Text Generation WebUI, but there is no option for that in the UI or on the command line. The workaround I found is to set the HF_TOKEN environment variable before running the server. It would be nice if some of these features could be added to Text Generation WebUI someday:
- the ability to specify the token in the web UI;
- automatically using the token from the $HOME/.cache/huggingface/token file;
- compatibility with $HOME/.cache/huggingface/, so that all models already downloaded there are added to the list automatically, and any newly downloaded model is saved/cached there in a format compatible with the HF Transformers model downloader/loaders. Thanks!!!
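For reference, a minimal sketch of the HF_TOKEN workaround (the helper name `build_hf_headers` is hypothetical, not part of the webui code): the `requests` call that fails in the traceback above just needs an `Authorization` header built from the environment variable.

```python
import os

def build_hf_headers():
    """Return auth headers for Hugging Face API calls,
    using the HF_TOKEN environment variable if it is set."""
    token = os.environ.get("HF_TOKEN")
    if token:
        return {"Authorization": f"Bearer {token}"}
    return {}

# How the downloader's requests call could use it (see traceback above):
# r = requests.get(
#     "https://huggingface.co/api/models/meta-llama/Llama-2-7b-chat-hf/tree/main",
#     headers=build_hf_headers(),
# )
# r.raise_for_status()  # 401 Unauthorized on gated repos without a valid token
```

With HF_TOKEN set in the environment before launching `server.py`, the same headers would let the API request through once your account has been granted access to the gated repo.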
This issue has been closed due to inactivity for 2 months. If you believe it is still relevant, please leave a comment below. You can tag a developer in your comment.