private-gpt
Cannot access gated repo for url https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.2
Getting the following error when running PGPT_PROFILES=ollama make run after a fresh install (no cache):
OSError: You are trying to access a gated repo. Make sure to have access to it at https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.2. 401 Client Error. (Request ID: Root=1-6621fd8f-45a5ffb02852831b1f476fbc;d622d97e-b698-4e30-89c4-e31278dd17ca)
Cannot access gated repo for url https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.2/resolve/main/config.json. Repo model mistralai/Mistral-7B-Instruct-v0.2 is gated. You must be authenticated to access it.
make: *** [run] Error 1
I am also seeing this issue
This is the full stack trace:
Traceback (most recent call last):
File "/home/worker/app/.venv/lib/python3.11/site-packages/injector/__init__.py", line 798, in get
return self._context[key]
~~~~~~~~~~~~~^^^^^
KeyError: <class 'private_gpt.ui.ui.PrivateGptUi'>
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/home/worker/app/.venv/lib/python3.11/site-packages/injector/__init__.py", line 798, in get
return self._context[key]
~~~~~~~~~~~~~^^^^^
KeyError: <class 'private_gpt.server.ingest.ingest_service.IngestService'>
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/home/worker/app/.venv/lib/python3.11/site-packages/injector/__init__.py", line 798, in get
return self._context[key]
~~~~~~~~~~~~~^^^^^
KeyError: <class 'private_gpt.components.llm.llm_component.LLMComponent'>
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/home/worker/app/.venv/lib/python3.11/site-packages/huggingface_hub/utils/_errors.py", line 270, in hf_raise_for_status
response.raise_for_status()
File "/home/worker/app/.venv/lib/python3.11/site-packages/requests/models.py", line 1021, in raise_for_status
raise HTTPError(http_error_msg, response=self)
requests.exceptions.HTTPError: 401 Client Error: Unauthorized for url: https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.2/resolve/main/config.json
You need to follow the steps given by HuggingFace first (here):
- pip install huggingface_hub
- huggingface-cli login, where you enter the token that you created on the Hugging Face website. After that you should be able to work as usual.
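To confirm the login actually took effect before re-running private-gpt, here is a minimal sketch using huggingface_hub (the token string is a placeholder; you can also omit it and rely on the token saved by huggingface-cli login):

from huggingface_hub import login, whoami

# Pass the token explicitly, or omit it to use the one saved by
# `huggingface-cli login`.
login(token="hf_xxx")  # placeholder token
print(whoami()["name"])  # prints your HF username if authentication works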
This is still a bug, however. In my final settings (merged from two profiles, let's say), I don't use anything from Huggingface, but I still see this error.
The current workaround, if you are using privategpt without anything from Huggingface, is to comment out the llm and embedding sections in the default settings.yaml file, but fill them in in your settings-<profile_name>.yaml override file.
# llm:
# mode: llamacpp
# # Should be matching the selected model
# max_new_tokens: 512
# context_window: 3900
# tokenizer: mistralai/Mistral-7B-Instruct-v0.2
# temperature: 0.1 # The temperature of the model. Increasing the temperature will make the model answer more creatively. A value of 0.1 would be more factual. (Default: 0.1)
# embedding:
# # Should be matching the value above in most cases
# mode: huggingface
# ingest_mode: simple
# embed_dim: 384 # 384 is for BAAI/bge-small-en-v1.5
> You need to follow the steps given by HuggingFace first: pip install huggingface_hub, then huggingface-cli login with your token. [...]
Thanks, this sorted me out.
> This is still a bug, however. [...] The current workaround is to comment out the llm and embedding sections in the default settings.yaml file, but fill them in in your settings-<profile_name>.yaml override file. [...]
This solution worked.
> This is still a bug, however. [...] The current workaround is to comment out the llm and embedding sections in the default settings.yaml file, but fill them in in your settings-<profile_name>.yaml override file. [...]
I'm still getting:
PermissionError: [Errno 13] Permission denied: 'tiktoken_cache'
How were you able to sort it out (using the settings-ollama.yaml file without any changes)?
You probably need to recursive chown the /home/worker/app directory to the worker user. I assume you ran into this running in docker?
Correct.
Indeed, after adding:
RUN chown -R worker:worker /home/worker/app
to Dockerfile.local (and applying your solution, of course), it's now working. Thanks a lot!
- Create an account on Huggingface.co and then create a Token in its settings -> Access Tokens.
- Add the token to your ENV as HUGGINGFACE_TOKEN, or just hardcode it in setup.py if testing.
- Add the token arg in the following places in setup.py:
hf_hub_download(
    repo_id=settings().llamacpp.llm_hf_repo_id,
    filename=settings().llamacpp.llm_hf_model_file,
    cache_dir=models_cache_path,
    local_dir=models_path,
    resume_download=resume_download,
    token=settings().huggingface.access_token,  # add this
)
AutoTokenizer.from_pretrained(
    pretrained_model_name_or_path=settings().llm.tokenizer,
    cache_dir=models_cache_path,
    token=settings().huggingface.access_token,  # add this
)
- Navigate to Mistral-7B-Instruct-v0.2 on Huggingface.co and accept its privacy policy.
- Run setup
This solved the same issue for me.
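As a minimal sketch of the env-var variant, so the token never has to be hardcoded (HUGGINGFACE_TOKEN is the variable name suggested above; falling back to None is an assumption):

import os

# Read the token from the environment; None if the variable is unset.
hf_token = os.environ.get("HUGGINGFACE_TOKEN")
# ...then pass token=hf_token in the hf_hub_download and
# AutoTokenizer.from_pretrained calls shown above.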
> You need to follow the steps given by HuggingFace first: pip install huggingface_hub, then huggingface-cli login with your token. [...]
I get a valid token, but then this appears:
FileNotFoundError: [WinError 2] The system can't find the specified file
There's another bug in settings-ollama.yaml that can cause PGPT_PROFILES=ollama make run to fail.
You should use embedding_api_base instead of api_base for the embedding section; the source code of embedding_component.py requires the embedding_api_base property.
ollama:
  llm_model: mistral
  embedding_model: nomic-embed-text
  #api_base: http://localhost:11434
  embedding_api_base: http://localhost:11434
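If it still fails, it may be worth first confirming that Ollama is actually reachable at that address; a small sketch, assuming a default local install:

import requests

# GET /api/tags lists locally pulled models; a 200 response confirms
# the api_base / embedding_api_base address is reachable.
resp = requests.get("http://localhost:11434/api/tags")
print(resp.status_code, resp.json())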
> Create an account on Huggingface.co and create a Token, add it to your ENV as HUGGINGFACE_TOKEN, add the token arg in setup.py, accept the model's policy, and run setup. [...] This solved the same issue for me.
This solved it for me too! Thanks a lot.
After:
> Create an account on Huggingface.co and create a Token, add it to your ENV as HUGGINGFACE_TOKEN, add the token arg in setup.py, accept the model's policy, and run setup. [...]
That fixed it for me. Also, if you are getting a "We couldn't connect to 'https://huggingface.co'" error after that, change your token to Write; that should fix it.
Advanced Troubleshooting Steps
- Check Network Issues:
  - Ensure no network restrictions block Hugging Face.
  - Use a VPN to bypass regional restrictions.
- Environment Isolation:
  - Create a fresh virtual environment to rule out dependency conflicts:
    python -m venv newenv
    source newenv/bin/activate  # or newenv\Scripts\activate on Windows
    pip install huggingface_hub
    huggingface-cli login
- Token Scope Verification:
  - Ensure the token scope includes read and write permissions. Update token permissions on Hugging Face if needed.
- Inspect Detailed Logs:
  - Enable detailed logging to capture more information about the error:
    import logging
    logging.basicConfig(level=logging.DEBUG)
- Use Direct API Calls:
  - Verify access by making direct API calls using requests:
    import requests
    headers = {"Authorization": f"Bearer {your_token}"}
    response = requests.get("https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.2/resolve/main/config.json", headers=headers)
    print(response.status_code, response.text)
If you need further tailored assistance, please provide specific details about the error logs or configurations you are using. This will help diagnose the issue more accurately.
Why is the tokenizer needed and why does it access an external site?
Why the tokenizer is needed is a great question to ask Google or any large language model. As for the internet calls, they are due to model licensing requiring permission to access certain paid or "premium" models; it makes it so you can't go and make money or train your own AI without giving them credit. If you are asking about the API method I just posted, that is a workaround to test whether the issue is specific to the installed Python libraries caching something, or whether it is due to an issue on Hugging Face's side.
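For context, a small sketch of what the tokenizer is typically used for here: counting tokens so prompts fit the configured context_window. Note that loading this particular tokenizer still requires the gated-access steps above:

from transformers import AutoTokenizer

# Requires prior `huggingface-cli login` because the repo is gated.
tok = AutoTokenizer.from_pretrained("mistralai/Mistral-7B-Instruct-v0.2")
print(len(tok.encode("How many tokens is this prompt?")))  # token count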
I think in the case of ollama, such a Hugging Face requirement should not exist. Btw, this gated repo issue is happening to me on Ubuntu 22.04 and not on Windows.
> I think in the case of ollama, such a Hugging Face requirement should not exist. [...]
Got the same on Debian 12, but solved it.
> You need to follow the steps given by HuggingFace first: pip install huggingface_hub, then huggingface-cli login with your token. [...]
After doing the above, I still had to browse to https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.2 and click the button to grant myself access to the model. Only then did it work for me.
This worked for me:
- accepted its privacy policy
- created an Access Token with the 'finegrained' permission under the account settings (ticked all options under Repositories and Inference)
- ran pip install --upgrade huggingface_hub and added these two lines to the code:
from huggingface_hub import login
login(token="the access token I just created")
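(Presumably these lines belong near the top of whatever script triggers the download, e.g. setup.py, before any Hugging Face calls run.)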